In my latest post for DevCycle, I dive into how we tackled the tricky job of rolling out new AI models at Helix without risking a big public flop. Instead of launching updates blindly, we used feature flags to test new models with a small group of users first, targeting them by email or device ID. This let us spot issues early, tweak things quickly, and ensure Helix's image recognition stayed sharp and accurate. Honestly, feature flags made the whole process way smoother!
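The core idea of that kind of staged rollout can be sketched in a few lines: hash a stable user key (an email or device ID) together with the flag name, and bucket users into a percentage. This is an illustrative sketch of the general technique, not the DevCycle SDK; the flag and user names are made up.

```python
import hashlib

def in_rollout(user_key: str, flag_name: str, rollout_percent: float) -> bool:
    """Deterministically bucket a user (email or device ID) into a rollout.

    Hashing the flag name together with the user key keeps buckets stable
    per flag, so the same user always gets the same answer for a given flag.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_key}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF * 100  # map hash to 0..100
    return bucket < rollout_percent

# Hypothetical flag: gate the new model behind a 5% rollout first
if in_rollout("user@example.com", "helix-model-v2", 5.0):
    model = "helix-v2"
else:
    model = "helix-v1"
```

Because the bucketing is deterministic, a user who gets the new model keeps getting it as the percentage ramps up, which is what makes early feedback from the small cohort trustworthy.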
I recently encountered a production bug that I couldn't reproduce locally. The issue was complicated by internal microservices running memory-intensive machine learning models on GPUs, none of which were exposed through the public API.
To solve this, I used Codezero, which let me reach the microservice running in the Kubernetes cluster from my local machine using just its DNS name.
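What makes this work is Kubernetes' standard in-cluster DNS convention, which Codezero makes resolvable from a local machine. A small sketch of building those names (the service and namespace here are hypothetical, not from my actual cluster):

```python
def cluster_dns(service: str, namespace: str = "default") -> str:
    """Return the in-cluster DNS name Kubernetes assigns to a Service.

    Codezero's agent makes these names resolvable from a local machine,
    so local code can call the microservice directly.
    """
    return f"{service}.{namespace}.svc.cluster.local"

# Hypothetical GPU inference service behind the bug
url = f"http://{cluster_dns('vision-inference', 'ml')}/predict"
# From the local debugger session you could now call, e.g.,
# requests.post(url, json=payload) and hit the real in-cluster service.
```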
Installed the Codezero agent using Homebrew:
brew install c6o/tap/codezero-app
Logged into my Codezero account, created a new teamspace, and installed it into my Kubernetes cluster.
Adding AI tools to my chatbot has been great; I can give it functions to perform specific tasks, such as querying my database. This week, I'm going to integrate MCP (Model Context Protocol) servers into the mix and see how they work in conjunction with the existing tools.
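The pattern behind giving a chatbot functions is simple: the model emits a tool name plus JSON arguments, and the app dispatches to a registered function. A minimal sketch, assuming made-up names (`query_database`, `TOOLS`) rather than any specific SDK:

```python
import json

TOOLS = {}

def tool(fn):
    """Register a function so the chatbot can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def query_database(table: str, limit: int = 10) -> str:
    # Stand-in for a real database query
    return f"first {limit} rows of {table}"

def dispatch(tool_call: str) -> str:
    """Execute a model-produced call like {"name": ..., "arguments": {...}}."""
    call = json.loads(tool_call)
    return TOOLS[call["name"]](**call.get("arguments", {}))

print(dispatch('{"name": "query_database", "arguments": {"table": "users", "limit": 3}}'))
# -> first 3 rows of users
```

An MCP server exposes tools behind the same name-plus-arguments contract, just over a standardized protocol instead of an in-process registry, which is why the two should compose cleanly.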
See you next week!
Mark C Allen
1:1 Chat
Need a Different Perspective?
Do you have a problem with your release process? Has Kubernetes got you down? Do you need an outsider's perspective on what's holding up your deployments? If you have 25 minutes, book a time with me.