Agentic AI: Top governance & security concerns
Overview
2023 was the year of large language models (LLMs). In 2024, the focus moved from textual models to other modalities, including audio, video and images. Many multi-modal models have been released, including GPT-4o, Opus, Sonnet, Llama 3.2 and others.
In the last few weeks we have heard of OpenAI's Swarm agentic framework and Anthropic's "Claude computer control" agent, which performs autonomous tasks such as creating and optimizing a website to promote itself. As we get closer to 2025, it seems we are headed to a world of Agentic AI, a world where we start using AI to do things autonomously.
We discussed agents and their effects on the enterprise landscape in our previous post, "AI Agents, AI teammates and the Autonomous enterprise".
In this post, we will discuss the security implications of Agentic AI and what companies need to do to remain secure in the world of Agentic AI.
Agents overview: what are they
Agency (hence Agents and Agentic AI) refers to the capacity for a degree of autonomous behavior and decision-making.
Agents are software that can achieve multistep, complicated goals autonomously, without requiring constant human intervention. Agentic AI delivers measurable results by enabling AI to be treated as part of the workforce rather than just a tool, optimizing existing repetitive work. Agents can use multiple tools to accomplish each step of a goal; these can include search, code interpreters, integration with the physical world, data retrieval and others.
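To make this concrete, here is a minimal, hypothetical sketch of an agent loop that decomposes a goal into steps and dispatches each step to a registered tool. The planner, tool names and stopping condition are illustrative assumptions, not any specific framework's API.

```python
# Minimal illustrative agent loop: plan a goal into steps, dispatch each step
# to a registered tool, and feed results into the next planning call.
# All names (plan_next_step, TOOLS, etc.) are hypothetical.

def search(query: str) -> str:
    return f"search results for: {query}"          # stub tool

def run_code(snippet: str) -> str:
    return f"output of: {snippet}"                 # stub tool

TOOLS = {"search": search, "run_code": run_code}

def plan_next_step(goal: str, history: list) -> dict | None:
    """Placeholder for an LLM call that returns the next tool invocation,
    or None when the goal is considered complete."""
    if len(history) >= 2:                          # toy stopping condition
        return None
    return {"tool": "search", "input": goal}

def run_agent(goal: str) -> list:
    history = []
    while True:
        step = plan_next_step(goal, history)
        if step is None:
            break
        result = TOOLS[step["tool"]](step["input"])
        history.append({"step": step, "result": result})
    return history

if __name__ == "__main__":
    print(run_agent("summarize recent agentic AI security research"))
```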
As Jensen Huang has mentioned, Nvidia aims to be a company with 100M AI agents working with 50K employees. That is a 2000:1 ratio of AI Agents to human employees in an organization.
This ratio, heavily skewed in favor of AI agents over the longer run, means we will have to change the structure of organizations and processes so that these Agentic AI teammates can work efficiently while being appropriately guided and controlled by humans. The role of governance and security will be paramount in this highly autonomous world.
Security challenges
Agentic AI is a very powerful tool, and with great power comes greater responsibility to use it wisely, securely and in a trustworthy manner. Many Agentic AI systems ingest goals in natural language and can use multiple LLMs to achieve some of their tasks.
However, Agentic AI security challenges differ from LLM security challenges. LLM security focuses on the model itself: its training, the inputs given to it, the outputs it produces, and the hallucinations that result. Agentic AI security focuses on the autonomous nature of such software: its autonomous behavior, its use of external tools and knowledge sets, and the actions it takes. We will not tackle the LLM security challenges in this post.
We have divided the security challenges into 8 categories:
Runaway AI Agents
AI agents work autonomously and can become "runaway" when unrestricted autonomy and high permissions are coupled with wrong data or self-propagating errors. This can result in agents performing malicious tasks. To cover their tracks, agents can autonomously remove traces of their activity, preventing detection of malicious behavior.
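One common mitigation is to bound the agent's autonomy with hard limits the agent cannot modify. The sketch below is a minimal, assumed example: every action passes through a step budget and an action allowlist, and decisions are written to a hash-chained, append-only log so removed traces become detectable. The limit values and action names are illustrative.

```python
# Illustrative guardrail wrapper: hard step budget, action allowlist, and an
# append-only log the agent cannot silently edit. Names and limits are hypothetical.
import hashlib, json, time

ALLOWED_ACTIONS = {"search", "summarize", "draft_email"}
MAX_STEPS = 20

class AppendOnlyLog:
    def __init__(self):
        self._entries = []
        self._prev_hash = "0" * 64
    def record(self, entry: dict):
        payload = json.dumps({"prev": self._prev_hash, "entry": entry, "ts": time.time()})
        self._prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        self._entries.append((self._prev_hash, payload))   # hash chain makes deletion detectable

class RunawayGuard:
    def __init__(self, log: AppendOnlyLog):
        self.steps = 0
        self.log = log
    def authorize(self, action: str) -> bool:
        self.steps += 1
        ok = action in ALLOWED_ACTIONS and self.steps <= MAX_STEPS
        self.log.record({"action": action, "allowed": ok, "step": self.steps})
        return ok

guard = RunawayGuard(AppendOnlyLog())
print(guard.authorize("search"))        # True
print(guard.authorize("delete_logs"))   # False: not on the allowlist
```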
Misaligned Learning
AI Agents have 3 different planes.
Many agents do not have an actual separation of data between the different planes, which leads to management-plane rules being treated as guidance rather than strict laws that cannot be violated. In self-learning autonomous systems, such behavior results in misaligned learning.
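A minimal sketch of one way to keep management-plane rules from being demoted to "guidance" is to hold them in a separate, immutable policy object and enforce them in code, outside the model, rather than mixing them into the prompt alongside data-plane content. The policy format and fields below are assumptions for illustration.

```python
# Illustrative separation of planes: management-plane rules live in a frozen,
# code-enforced policy; data-plane content never gets to rewrite them.
from dataclasses import dataclass

@dataclass(frozen=True)              # frozen: the agent cannot mutate its own rules
class ManagementPolicy:
    forbidden_actions: frozenset
    require_human_approval: frozenset

POLICY = ManagementPolicy(
    forbidden_actions=frozenset({"wire_transfer", "delete_data"}),
    require_human_approval=frozenset({"send_external_email"}),
)

def enforce(action: str, policy: ManagementPolicy = POLICY) -> str:
    # Enforcement happens in code, not in the prompt, so learned behavior in
    # the data plane cannot override it.
    if action in policy.forbidden_actions:
        return "blocked"
    if action in policy.require_human_approval:
        return "pending_human_approval"
    return "allowed"

print(enforce("send_external_email"))   # pending_human_approval
print(enforce("delete_data"))           # blocked
```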
Accountability & Forensics
AI agents use varied tools and knowledge sources to achieve their tasks. For each tool, roles and permissions are often inherited, and these activities can trigger flows in which different identities are used. Such autonomy and dynamic role inheritance complicate traceability and forensic analysis.
The ability to temporarily assume permissions from different users or systems blurs accountability, making it difficult to pinpoint the origin of actions, especially during malicious activity. Inconsistent logging and ephemeral roles hinder effective auditing, slowing incident response and investigation.
Blurry accountability and forensics for tasks due to autonomous actions and nested permissions is a key security issue in Agentic systems.
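One way to keep accountability tractable is to attach the full identity chain (end user, agent, assumed role) to every tool invocation in a structured audit record. The sketch below shows what such a record could look like; the field names and log destination are illustrative assumptions.

```python
# Illustrative audit record for agent tool calls: every invocation captures
# who asked, which agent acted, and which role was assumed for the call.
import json, uuid
from datetime import datetime, timezone

def audit_tool_call(user_id: str, agent_id: str, assumed_role: str,
                    tool: str, arguments: dict) -> dict:
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "initiating_user": user_id,      # the human on whose behalf the agent acts
        "agent": agent_id,
        "assumed_role": assumed_role,    # ephemeral role used for this call
        "tool": tool,
        "arguments": arguments,
    }
    print(json.dumps(record))            # in practice: ship to an immutable log store
    return record

audit_tool_call("alice@example.com", "expense-agent-7", "finance-readonly",
                "query_invoices", {"quarter": "Q3"})
```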
Cross-session data leakage & context amnesia
Unlike LLMs, agents have memory, though it is often temporary and short-lived.
Memory enables the LLMs behind agents to learn across sessions, and it is often shared across sessions so agents can make more context-centric decisions. Memory management in agentic systems varies and can be semantic, episodic, and so on. Memory that is corrupted, changed, or not separated by role can result in sensitive data being shared across users, which can lead to security and privacy issues.
Because the memory is short-term, it can also lead to context amnesia, where agents forget important data they have learned, which can lead to insecure results.
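A simple mitigation pattern is to key agent memory by user (or tenant) and session so one user's context can never be retrieved for another. The memory store below is a minimal illustrative sketch, not any specific product's API.

```python
# Illustrative user- and session-scoped memory store: reads and writes are
# always keyed by (user_id, session_id), so context cannot leak across users.
from collections import defaultdict

class ScopedMemory:
    def __init__(self):
        self._store = defaultdict(list)

    def remember(self, user_id: str, session_id: str, fact: str):
        self._store[(user_id, session_id)].append(fact)

    def recall(self, user_id: str, session_id: str) -> list[str]:
        # Only the requesting user's own session memory is returned.
        return list(self._store[(user_id, session_id)])

mem = ScopedMemory()
mem.remember("alice", "s1", "prefers summaries under 100 words")
print(mem.recall("alice", "s1"))   # ['prefers summaries under 100 words']
print(mem.recall("bob", "s9"))     # [] -- no cross-user leakage
```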
Orchestration Loops
Agentic AI systems often act in iterative cycles, where the outcome of one task informs the next. If not properly controlled, feedback loops can form in which agents reinforce incorrect, harmful, or inefficient behaviors.
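Bounding iterations and detecting repeated states are the usual controls here. Below is a minimal, assumed sketch: a hard iteration cap plus a check for repeated (task, result) pairs that would indicate the agent is stuck reinforcing the same behavior. The cap and step function are placeholders.

```python
# Illustrative loop guard for an orchestration cycle: stop on a hard iteration
# cap, or when the same (task, result) state repeats, suggesting a feedback loop.
MAX_ITERATIONS = 10

def run_orchestration(initial_task: str, execute_step) -> list:
    seen_states = set()
    history = []
    task = initial_task
    for _ in range(MAX_ITERATIONS):
        result, next_task = execute_step(task)
        state = (task, result)
        if state in seen_states:
            history.append(("loop_detected", state))
            break                      # repeated state: break the feedback loop
        seen_states.add(state)
        history.append(state)
        if next_task is None:
            break                      # goal reached
        task = next_task
    return history

# Toy step function that always produces the same task/result, i.e. a loop.
print(run_orchestration("rank results", lambda t: ("same output", t)))
```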
Confused Deputy
In agentic AI, the confused deputy problem manifests when an autonomous AI agent, equipped with extensive permissions to perform tasks, is manipulated into misusing those permissions by a malicious actor. With a high number of service identities and nested permissions, managing these permissions is often hard. While the agent is intended to execute legitimate functions on behalf of authorized users, its autonomy and decision-making capabilities can be exploited if it is "confused" into believing that illegitimate requests are legitimate.
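The standard defense is to authorize each action against the original requester's permissions rather than the agent's own, broader service identity. The sketch below illustrates that pattern with hypothetical permission sets.

```python
# Illustrative confused-deputy defense: the agent's service identity may be
# powerful, but every action is also checked against the requesting user's rights.
USER_PERMISSIONS = {
    "alice": {"read_reports"},
    "bob": {"read_reports", "approve_payments"},
}
AGENT_PERMISSIONS = {"read_reports", "approve_payments", "delete_records"}  # broad service identity

def agent_execute(requesting_user: str, action: str) -> str:
    if action not in AGENT_PERMISSIONS:
        return "agent itself lacks permission"
    if action not in USER_PERMISSIONS.get(requesting_user, set()):
        # Refuse to act as a deputy for a request the user could not make directly.
        return f"denied: {requesting_user} is not allowed to {action}"
    return f"executed {action} on behalf of {requesting_user}"

print(agent_execute("alice", "approve_payments"))   # denied
print(agent_execute("bob", "approve_payments"))     # executed
```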
External dependency attacks
Agentic systems rely on external data sources and tools to achieve goals. If any of these external components are compromised, the results of the autonomous agentic system can be manipulated and become misaligned. Attackers can manipulate such data sources, tools or actions to attack agentic components (a verification sketch follows the list below).
• A compromised external knowledge base can poison results and the actions based on them
• Compromised external tools can lead to wrong actions based on those tools
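One possible control, sketched below under assumptions, is to allowlist approved knowledge sources and verify fetched content against a pinned hash before the agent acts on it. The URL, content and helper names are hypothetical.

```python
# Illustrative integrity check for an external knowledge source: only
# allowlisted sources are used, and fetched content must match a pinned hash.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

KNOWN_GOOD_CONTENT = b'{"policy": "refunds require manager approval"}'
TRUSTED_SOURCES = {
    "https://kb.example.com/policies.json": sha256(KNOWN_GOOD_CONTENT),
}

def load_knowledge(url: str, fetched: bytes) -> bytes:
    if url not in TRUSTED_SOURCES:
        raise PermissionError(f"{url} is not an approved knowledge source")
    if sha256(fetched) != TRUSTED_SOURCES[url]:
        raise ValueError(f"content of {url} does not match its pinned hash")
    return fetched

print(load_knowledge("https://kb.example.com/policies.json", KNOWN_GOOD_CONTENT))
try:
    load_knowledge("https://kb.example.com/policies.json", b'{"policy": "auto-approve all refunds"}')
except ValueError as e:
    print("blocked:", e)
```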
Agent supply chain poisoning
AI agents perform critical tasks on behalf of customers. AI agents have components and libraries and are regularly updated based on needs. As the software components change, they need to be sourced from reliable sources (see the sketch after the list below). Not having a clean software supply chain for AI agents results in:
• Compromised agent components, which can poison the behavior of the agent.
• Agent degradation as the data in the environment changes. Agent security card metadata can be used to share security insights, issues and concerns for agents.
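For the agent's own software supply chain, the analogous control is pinning component versions and verifying artifact hashes (and, where available, signatures) before loading them. The manifest format and component names below are a hypothetical sketch.

```python
# Illustrative agent component manifest check: each component is pinned to a
# version and artifact hash; anything unpinned or mismatched is rejected.
import hashlib

PINNED_COMPONENTS = {
    # component name -> (pinned version, sha256 of the approved artifact)
    "planner-plugin": ("1.4.2", hashlib.sha256(b"planner-plugin-1.4.2").hexdigest()),
}

def verify_component(name: str, version: str, artifact: bytes) -> bool:
    pin = PINNED_COMPONENTS.get(name)
    if pin is None:
        return False                                   # unknown component: reject
    pinned_version, pinned_hash = pin
    return (version == pinned_version
            and hashlib.sha256(artifact).hexdigest() == pinned_hash)

print(verify_component("planner-plugin", "1.4.2", b"planner-plugin-1.4.2"))  # True
print(verify_component("planner-plugin", "1.5.0", b"tampered build"))        # False
```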
Next steps
As an industry, we are working on Agentic AI security and have created a group to collate learnings and contribute them to industry bodies. If you have input on security for Agentic AI, or are interested in learning more about Agentic AI security, reach out to us.
The current leaders of the team are:
Anton Chuvakin, Anatoly Chikanov, Akram Sheriff, Alex Shulman-Peleg, Ph.D., Alok Tongaonkar, Govindaraj Palanisamy, Jon Frampton, Ken Huang, CISSP, Moushmi Banerjee, Parthasarathi Chakraborty, Raj B., Ron F. Del Rosario, Sunil Arora, Siddhartha Dadana, Vishwas Manral, Aruneesh Salhotra, Vivek S. Menon, John Sotiropoulos, Matthias Kraft, Ads Dawson, Steve Wilson and many more
Conclusion
Agentic AI is an exciting new technology that brings a new paradigm: we use AI not just to perform tasks but to achieve goals autonomously. By decomposing goals into subgoals and tasks, and then using tools to accomplish each task, AI agents can achieve those goals. This has multiple security and governance implications, many of which are yet to be envisioned.
In this blog, we lay out some of the key concerns we have seen or tested. As agents evolve, the security concerns will evolve with them.