The Transitory Nature of “Build-Your-Own” AI Agents
Credit for this newsletter goes to Dimitri Masin, to whom it is dedicated. AI agents are all the rage right now, but they’re really just an implementation detail. Chasing them is like skating to where the puck is instead of where it’s going.
🏗️ The transitory nature of “Build-Your-Own” AI Agents
We’re in the thick of the AI agent hype cycle. Companies are trying to build and deploy their own, and startups are promising platforms to do just that. But does anyone really want to manage an army of custom AI agents? What companies truly want is an “API to a brain”—a digital employee they can direct. This “steerable, safe intelligence on tap” is the real prize.
From my experience with ML and MLOps, I understand why some think building agents in-house matters. There is a key difference, though: with classical ML, most models had to be company-specific and trained on unique data. Not so with today’s generative AI, which is general purpose. As with human employees, the same digital employee could serve company A and company B. Expecting every company to build and manage its own AI agents is like running a private university just to hire its graduates later. No one would do that.
🔮 Where will the puck be next December?
Instead of DIY AI agents, we’ll see “API-to-a-brain” services—intelligent, contextual, secure. These will be domain-specific “brains” with superhuman skills in areas like engineering or operations—endpoints that you steer rather than armies you must recruit, train, and manage.
The Future of AI: Steerable, Safe Intelligence
Many have already discovered that an LLM and a data/action layer are just two components. True “steerable intelligence” requires much more. Here are a few examples from the ops domain that we needed to build on our journey:
1. Continuous Learning Algorithms: Systems must learn like humans, making sense of new information and unlearning outdated or incorrect facts. This goes beyond traditional fine-tuning and requires robust, dynamic knowledge updating.
2. Deeper Understanding: Standard RAG won’t cut it for genuine comprehension. We need proprietary knowledge graphs and richer context models that let these brains grasp nuanced user requests.
3. Reasoning for Specialist Skills: Like humans, intelligent systems must be taught how to handle deception, manage objections gracefully, deal with vulnerable users, and perform domain-specific assessments. This is not trivial, given that systems with stronger reasoning are progressively harder to steer.
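The “learn and unlearn” behavior described in point 1 can be sketched as a toy fact store. This is purely illustrative: the class, method names, and the refund-policy example are hypothetical, not any real product’s API, and real systems would update model-internal knowledge, not just a lookup table.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeStore:
    """Toy sketch of dynamic knowledge updating: newer facts supersede
    older ones, and incorrect facts can be retracted entirely."""
    facts: dict = field(default_factory=dict)  # subject -> (value, version)
    version: int = 0

    def learn(self, subject: str, value: str) -> None:
        # New information supersedes older facts about the same subject.
        self.version += 1
        self.facts[subject] = (value, self.version)

    def unlearn(self, subject: str) -> None:
        # Retract a fact that turned out to be outdated or wrong.
        self.facts.pop(subject, None)

    def lookup(self, subject: str):
        entry = self.facts.get(subject)
        return entry[0] if entry else None

store = KnowledgeStore()
store.learn("refund_window", "30 days")
store.learn("refund_window", "14 days")  # policy changed: supersede
print(store.lookup("refund_window"))     # -> 14 days
store.unlearn("refund_window")
print(store.lookup("refund_window"))     # -> None
```

The hard part, which this sketch deliberately hides, is doing the same thing inside a model’s behavior rather than a dictionary, which is why this goes beyond traditional fine-tuning.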
And there are many more examples across other domains like autonomous research systems or fully autonomous engineers where action taking and raw LLM output are just the tip of the iceberg.
Conclusion
The future of AI is not about managing an army of custom AI agents. It’s about having access to an intelligent, steerable, and safe digital employee that can adapt and learn continuously. This shift will allow companies to focus on their core competencies while leveraging the power of advanced AI to drive innovation and efficiency.