Investor POV

Securing the autonomous future: Trust, safety, and reliability of agentic AI

George Mathew, Hunter Korn, Ash Tutika, William Blackwell | October 29, 2025 | 9 min. read
AI Agents Security Market Map

In our last post, “The race to secure enterprise AI,” we covered how enterprises are securing AI deployment. As generative AI becomes indispensable to business operations, security teams are struggling to keep pace with threats emerging during both model development and runtime, creating a significant bottleneck to successful adoption.

In this second installment of our three-part series, we dive deeper into the deployment of AI Agents. While AI Agents share some features and security considerations with traditional models, they go a step further — actively making decisions, taking actions, and interacting with systems or environments to achieve goals autonomously. This evolution raises the stakes for security, requiring enterprises to not only secure the models themselves but also manage agent identity and access, monitor behavior in production, and secure the tooling, code, and data Agents use to function.

1. AI Agent security as a multi-trillion dollar problem

In 2024, our post on “Intelligence-First Design” predicted how the rise of AI co-workers and agentic applications would revolutionize software design. As software evolves from merely digitizing industries to augmenting knowledge-based work and transforming labor and services, the market is expanding beyond its current $650B size, creating a multi-trillion-dollar opportunity for founders and enterprises.

As deployments scale, so do potential cybersecurity incidents and the opportunities for threat actors to monetize novel attack techniques. The growing threat of AI-driven cybercrime, projected to exceed $15T by 2030, underscores the importance of securing AI Agents in production: many of those attacks will seek to exploit business processes that are increasingly driven by AI co-workers or agentic applications.

2. Recap of AI Agent architectures and the anatomy of an Agent

Our previous post, “State of the AI Agent Ecosystem,” examined the emerging application architecture for AI Agents, key enterprise use cases, and challenges. Over the past year, agentic systems have matured significantly, from basic task agents to process agents managing end-to-end workflows with partial autonomy. Enterprises are now deploying role-specific agents with memory and multi-step reasoning, and early adopters are piloting AI employees that own business outcomes through orchestrated multi-agent teams.

The emergence of AI agent orchestration, reflection, and evaluation as key design patterns has driven the development of new solutions to address these gaps. Moreover, advancements in AI models like GPT-5 and Sonnet 4.1 enable agents to complete tasks lasting nearly an hour, compared with just a few seconds for earlier models like GPT-3.5. As AI Agents increasingly operate autonomously, accessing numerous tools and sensitive internal data, robust identity verification and secure authentication processes have become essential to ensure safe and effective interactions within complex ecosystems.

3. Where are the opportunities in AI Agent security?

Individually, the components that create an AI Agent — software, LLMs, databases — have established security solutions. But it’s the complexity introduced when all these components are combined into a single system that opens the door for novel technology. Additionally, understanding the intent of data flowing between components requires solutions that can determine whether a particular prompt or piece of natural language is part of a normal interaction, or harmful or malicious in intent.

When we look at where the cybersecurity opportunities lie in relation to AI agents, we see five areas ripe for innovation:

  1. Managing AI Agent identity and access: Governing Agent identities and the credentials they use to authenticate, whether acting on behalf of a user or autonomously.
  2. Agent governance, observability, and monitoring: The telemetry needed across the components of the AI agent system to manage and monitor agents and then detect and respond to malicious or improper actions.
  3. Agent integration security: Observability and security across the boundary of an AI Agent system, understanding what actions those agents are taking or being asked to take, and ensuring those fit within the intent of the AI Agent system.
  4. Novel threats to AI Agents: Protecting Agents and their supporting infrastructure from Agent-specific attacks such as goal manipulation and rogue Agents.
  5. Data privacy and security: Protecting data throughout its lifecycle and across AI Agent systems.

Access governance for autonomous AI Agent identities

There are two models for agent identity and access (a short sketch contrasting them follows the list):

  • Delegated access: Where an Agent acts ‘on behalf of’ a user, using a user-scoped access token. Examples include copilots and AI coding assistants.
  • Autonomous agents: Where Agents have their own unique identities and themselves authenticate to carry out tasks autonomously. Examples include infrastructure Agents and RPA bots.
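To make the distinction concrete, here is a minimal Python sketch, assuming an OAuth 2.0 identity provider: the delegated model exchanges a user’s token for an Agent token bound to that user (RFC 8693 token exchange), while the autonomous model has the Agent authenticate with its own client credentials. The endpoint, client IDs, scopes, and environment variable are hypothetical.

```python
# Minimal sketch contrasting the two identity models above.
# The IdP endpoint, client IDs, scopes, and env var are illustrative placeholders.
import os
import requests

TOKEN_URL = "https://idp.corp.example/oauth2/token"  # hypothetical identity provider

def delegated_access_token(user_token: str) -> str:
    """Delegated access: exchange a user's token for an Agent token scoped to act
    on behalf of that user (OAuth 2.0 token exchange, RFC 8693)."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": user_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "client_id": "coding-assistant",        # the copilot-style application
        "scope": "repo.read calendar.read",     # permissions bounded by the user's own
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def autonomous_agent_token() -> str:
    """Autonomous Agent: the Agent holds its own identity and authenticates directly
    (OAuth 2.0 client credentials grant)."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": "infra-remediation-agent",              # the Agent's own identity
        "client_secret": os.environ["AGENT_CLIENT_SECRET"],  # ideally fetched from a vault
        "scope": "infra.restart tickets.write",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```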

The diagram below shows a high-level example of the instantiation of these models.

autonomous AI models
For illustrative purposes only


Today, most enterprises are leaning on the delegated access model, seeing benefits from employee productivity use cases. However, over time, we expect the balance to shift further toward the autonomous Agents model, as enterprises become increasingly AI Agent-native and replace human-centric workflows with AI Agent-led processes.

In both models, there is a need to manage and govern identities (human or non-human), and to secure workload credentials used for authentication and authorization workflows by Agents. This lends itself to solutions that can provide identity governance and secure credentials management at speed.

This is not a new problem and can likely be well-managed using incumbent capabilities: existing identity governance solutions, vaulting capabilities, Privileged Access Management (PAM) vendors, and certificate management services. The question is whether those incumbents can adapt to service the dynamic, sprawling — and in some instances, ephemeral — nature of an AI Agent-heavy environment.

If enterprises can set up effective onboarding processes for AI Agent identities, then the incumbents are well placed to capture the opportunity here. Realistically, we’ll likely see a rush to deploy AI Agent capabilities at a pace that leaves identities and credentials poorly managed, meaning that — as with data security and DevOps — there will be a huge mess to clean up retrospectively. This is where the non-human identity (NHI) vendors are seeing traction, in much the same way Data Security Posture Management (DSPM) has helped enterprises clean up sprawling and out-of-control data estates. But ultimately, improper management of credentials related to NHIs is a people and process problem more than a technology problem.

We also see an emerging need around Agent permissions and authorization governance — an extension of what has been gaining traction as Identity Security Posture Management (ISPM): providing visibility and governance around scoped permission sets (including OAuth token and API scopes). However, we see this as a necessary feature of future platforms rather than a standalone capability, and recent acquisitions have shown the willingness of incumbent players to buy these features rather than build their own.

A further challenge is the likely need to trace from action to identity. For example, was the Agent acting under its own agency, or responding to a human instruction? This may become important from a liability standpoint and isn’t well catered for by existing protocols (e.g., OAuth 2.0).
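For illustration, one hypothetical way to capture this trace is an audit record that binds each action to the Agent that performed it and, where relevant, to the human user or instruction that triggered it. The field names below are illustrative rather than part of any existing standard.

```python
# Hypothetical audit record linking an Agent action back to the identity
# (and, where relevant, the human instruction) that triggered it.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid

@dataclass
class AgentActionRecord:
    agent_id: str                      # the Agent's own identity (e.g., SPIFFE ID or OAuth client ID)
    tool: str                          # tool or API the Agent invoked
    on_behalf_of: Optional[str] = None # human user, if the Agent acted under delegated access
    triggering_prompt_id: Optional[str] = None  # link to the instruction that caused the action, if any
    action_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: an autonomous remediation Agent restarting a service on its own initiative
record = AgentActionRecord(agent_id="spiffe://corp.example/agent/remediator", tool="infra.restart")
```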

The AI Agent identity problem

Finally, it’s worth clarifying that there are two distinct angles to the Agent identity problem.

The first concerns internal machine-to-machine communication — enabling an AI Agent to interact securely with services inside the enterprise ecosystem and with the components of its own system (orchestration agent, vector store, execution agents, LLMs). This is where workload identity standards like SPIFFE/SPIRE (typically considered the gold standard) become critical, driving demand for solutions that provide agile certificate management. It’s important to note that workload identity is only the foundation — an authorization layer is still required to define and enforce what each agent is permitted to do once authenticated.
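As a simplified sketch of that layering, assume the calling workload’s SPIFFE ID has already been verified during the mTLS handshake; the policy table and function below are illustrative, showing how an authorization decision sits on top of workload identity.

```python
# Sketch: workload identity (SPIFFE ID) establishes *who* is calling,
# but an explicit authorization policy still decides *what* it may do.
# The SPIFFE IDs, actions, and policy contents are illustrative.

AGENT_PERMISSIONS = {
    "spiffe://corp.example/agent/orchestrator": {"vectorstore.query", "llm.invoke"},
    "spiffe://corp.example/agent/executor":     {"llm.invoke", "ticketing.create"},
}

def authorize(spiffe_id: str, action: str) -> bool:
    """Allow the action only if it is explicitly granted to this workload identity."""
    return action in AGENT_PERMISSIONS.get(spiffe_id, set())

assert authorize("spiffe://corp.example/agent/orchestrator", "vectorstore.query")   # allowed
assert not authorize("spiffe://corp.example/agent/executor", "vectorstore.query")   # denied
```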

The second concerns external service access — especially in cases where certificate-based authentication isn’t supported or where Agents need scoped, highly controlled permissions (including operating on behalf of a human user). These scenarios typically require token-based authentication (API keys, JWTs, etc.), which is where secrets vaults, modern PAM vendors, and more recently non-human identity (NHI) providers are best suited. There’s still a requirement to authenticate to a vault or IdP to retrieve the token or credentials, and that’s once again where SPIFFE (and certificates) can play a role – solving the ‘Credential Zero’ problem.
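A rough sketch of that flow, assuming a HashiCorp Vault deployment with its TLS certificate auth method enabled and the hvac Python client (the exact calls and secret path are illustrative and should be checked against your own configuration): the Agent presents its workload certificate to authenticate, then retrieves a scoped API token rather than holding a long-lived static secret.

```python
# Sketch: an Agent bootstraps from its workload certificate (SPIFFE SVID) to a
# short-lived API token for an external service, instead of holding a static secret.
# Assumes Vault's TLS certificate auth method is enabled; paths and names are illustrative.
import hvac

client = hvac.Client(
    url="https://vault.corp.example:8200",
    cert=("/run/spiffe/svid.pem", "/run/spiffe/svid_key.pem"),  # the Agent's SVID addresses 'Credential Zero'
)
client.auth.cert.login()  # authenticate with the client certificate presented above

# Retrieve a scoped token for an external SaaS API (secret path is hypothetical)
secret = client.secrets.kv.v2.read_secret_version(path="agents/research-agent/saas-api")
api_token = secret["data"]["data"]["token"]
```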

Providing full-stack observability and monitoring for AI Agents

This is where we see the greatest opportunity for innovation. The interactions within an AI Agent system – spanning identity, data, application, infrastructure, and AI models – will become incredibly difficult to track across so many interconnected components.

AI agent orchestration
For illustrative purposes only


While there are good individual solutions for monitoring each aspect, the ability to tie all of this together and interpret the context of internal traffic will be crucial: first, to ensure Agents behave within the enterprise’s governance framework; then, to identify anomalous activity attempting to fly under the radar — hidden within all the noise and lost between partial integrations.

We see this as the insider threat problem for AI Agents, and there is an opportunity for specific solutions that provide Agent monitoring, detection, and response capabilities for malicious or ‘malfunctioning’ Agents.
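As a toy example of what such detection could look like, the sketch below flags actions that fall outside an Agent’s declared toolset or exceed a simple rate threshold. The event fields, tool names, and thresholds are illustrative; real systems would baseline behavior far more richly.

```python
# Minimal sketch of Agent behaviour monitoring: flag actions outside an Agent's
# declared toolset, or bursts of activity well above its normal rate.
from collections import defaultdict

DECLARED_TOOLS = {"research-agent": {"web.search", "docs.read"}}  # illustrative registry
MAX_ACTIONS_PER_MINUTE = 30

action_counts = defaultdict(int)

def inspect(event: dict) -> list[str]:
    """Return the alerts raised by a single Agent action event."""
    alerts = []
    agent, tool = event["agent_id"], event["tool"]
    if tool not in DECLARED_TOOLS.get(agent, set()):
        alerts.append(f"{agent} used undeclared tool {tool}")
    action_counts[(agent, event["minute"])] += 1
    if action_counts[(agent, event["minute"])] > MAX_ACTIONS_PER_MINUTE:
        alerts.append(f"{agent} exceeded {MAX_ACTIONS_PER_MINUTE} actions/minute")
    return alerts

print(inspect({"agent_id": "research-agent", "tool": "payments.transfer",
               "minute": "2025-10-29T10:15"}))
```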

Agent governance and observability will not only be imperative for security, but also mandatory in highly regulated environments to provide an audit or forensics trail.

AI Agent context-aware network and API monitoring capabilities

AI Agents need to communicate outside their own system boundary to provide value. This might be as simple as a user prompt or a direct Application Programming Interface (API) call, or it might involve the Model Context Protocol (MCP) to access complex datasets or trigger workflows in external tooling.

ai agent security
For illustrative purposes only


Monitoring traffic into and out of the Agent system will become critical for detecting when an AI Agent is being targeted by malicious actors, or when an AI Agent itself is going rogue — and potentially even sending out malicious data.

Numerous companies provide traffic and API monitoring capabilities. However, with new protocols like MCP and Agent2Agent (A2A) — along with the nuances described for understanding agent interactions — there is a potential need to overlay Agent-specific intelligence into those capabilities. And this is where solutions like MCP Proxies that can understand AI-generated and AI-bound traffic will become important. More broadly, there will be a need for new capabilities to manage MCP usage at scale — including proxying, monitoring, allow-listing, and MCP security policies.
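A minimal sketch of the allow-listing piece, assuming the proxy can inspect MCP’s JSON-RPC messages: only explicitly permitted servers and tools are forwarded, and denials are logged for detection and response. The server and tool names are illustrative.

```python
# Sketch of an MCP proxy policy check: only explicitly allow-listed servers and
# tools may be called; denied requests are logged for review.
ALLOWED = {
    "github-mcp": {"search_issues", "get_file_contents"},
    "jira-mcp": {"create_ticket"},
}

def filter_mcp_request(server: str, request: dict) -> bool:
    """Return True if this JSON-RPC request may be forwarded to the MCP server."""
    if request.get("method") != "tools/call":
        return True  # non-tool traffic (e.g., tools/list) passes through in this sketch
    tool = request.get("params", {}).get("name")
    allowed = tool in ALLOWED.get(server, set())
    if not allowed:
        print(f"blocked: {server} / {tool}")  # in practice, emit telemetry for detection & response
    return allowed

filter_mcp_request("github-mcp", {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                                  "params": {"name": "delete_repo", "arguments": {}}})
```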

There is also a need for solutions that can detect when traffic is potentially coming from malicious or ‘known bad’ AI Agents. As internet traffic becomes dominated by Agents and bots, threats like agent account takeover (AATO) will become increasingly important to detect — and may require solutions that provide enhanced enrichment and threat intelligence to detect and filter traffic deemed malicious or coming from rogue MCP servers or AI Agents, including those leveraging the latest evasion methods at scale.

Ultimately, integration monitoring capabilities may become a feature in existing network and API security monitoring platforms. But as CISOs race to secure agents and with MCP servers appearing all over their estate, the need to have something in place today will favor more agile startups in the near term. And incumbents will need to adapt to incorporate features for AI Agent-specific threats or risk becoming legacy.

Protecting AI Agents against novel threats

The threats to agents and models themselves are now well documented. Some of these are extensions to the same threats we’ve seen for AI model security, such as prompt injection, model poisoning, and model theft (albeit amplified by scale) – but the rapidly developing architecture for AI Agents introduces even more nuanced and novel threats. These include AI Agent-specific attacks such as goal manipulation, command injection, or even rogue Agents in multi-Agent systems, as well as protecting Agent infrastructure — such as MCP servers — from manipulation.

The handling of model-specific threats can be seen as an extension of existing AI model security capabilities – it makes sense, for example, that AI firewalls will provide a proxy for communications with AI Agents.

Novel attacks targeting AI Agents will require comprehensive observability and monitoring solutions. These must understand not just an Agent’s intentions but also detect when an Agent is reaching beyond its intended scope. They must also protect the system from Agent-specific threats and allow for corrective action to be taken when a successful subversion is discovered. This is where we see innovation happening.

This is also where sandboxing for AI Agents becomes important, to ultimately prevent them from executing malicious software or OS-level commands when attackers break through the other control barriers.
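As a simple illustration of the guarded-execution idea, the sketch below refuses any Agent-initiated command that isn’t on an explicit allow-list and runs permitted commands without a shell and with a hard timeout. The allow-list is illustrative, and a production sandbox would add container- or VM-level isolation on top.

```python
# Sketch of a guarded executor for Agent-initiated commands: anything not on an
# explicit allow-list is refused; allowed commands run without a shell and with a timeout.
import subprocess

ALLOWED_COMMANDS = {"git", "ls", "cat"}   # illustrative allow-list

def run_agent_command(argv: list[str]) -> str:
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not permitted: {argv!r}")
    result = subprocess.run(argv, capture_output=True, text=True, timeout=10, shell=False)
    return result.stdout

print(run_agent_command(["ls", "-l"]))
```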

Data-centric security and privacy controls for AI Agent systems

AI models have already caused significant challenges in managing both data privacy and security, and AI Agents exacerbate the problem. Enterprises are already tasked with controlling data for AI models during both development and runtime; AI Agents go further, accessing and sharing data across a plethora of systems and models — pulling from multiple data sources via integrations and passing data from one Agent to the next, potentially across multiple geographies and jurisdictions.

Protecting personally identifiable information (PII) and tracing data residency and access as data flows through multi-Agent systems will be paramount as an enabler of continued compliance, particularly in highly regulated environments. The challenge extends further to data lineage: there will be a need to track how data is created — whether it’s user-generated or Agent-generated — and how its sensitivity or applicability changes as it moves through agentic systems.
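A toy sketch of what this could look like at an Agent-to-Agent handoff: obvious PII is redacted and lineage metadata is attached before the payload moves on. The patterns and metadata fields are illustrative; production systems would rely on dedicated data classification and privacy tooling.

```python
# Sketch: redact obvious PII and attach lineage metadata before a payload is
# handed from one Agent to the next. Regexes and metadata fields are illustrative.
import re
from datetime import datetime, timezone

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_and_tag(text: str, source_agent: str, destination_agent: str) -> dict:
    redacted = SSN.sub("[REDACTED-SSN]", EMAIL.sub("[REDACTED-EMAIL]", text))
    return {
        "payload": redacted,
        "lineage": {
            "origin": source_agent,             # who produced the data (user- or Agent-generated)
            "destination": destination_agent,
            "transforms": ["pii-redaction"],    # how sensitivity changed in transit
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }

print(redact_and_tag("Contact jane@corp.example re: claim 123-45-6789",
                     source_agent="intake-agent", destination_agent="claims-agent"))
```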

This is a monumental data governance problem, and one that can likely only be solved with data-centric security and data privacy solutions due to the limited control over data once it enters an agentic system.

AI Agent security market map

AI Agents Security market map
For illustrative purposes only


There is substantial overlap between this map and the one from our previous post on AI security, as AI Agents are fundamentally built on LLMs. However, new categories have emerged to address threats and considerations specific to Agents — such as the use of MCP, Agent identities, and distinct guardrails and observability requirements.

The AI Agent security landscape includes:

  1. Established security vendors adapting or expanding their offerings to support agentic systems
  2. First-wave AI security companies evolving to cover agent-specific risks
  3. A new generation of companies focused solely on securing agent-based architectures

Navigating this space requires precision. Overgeneralized claims about providing “comprehensive Agent security” can obscure the real value a company offers. At its core, Agent security spans the multiple categories reflected in this map, and those categories will be filled by a combination of incumbents, model security companies, and new ScaleUps.

Considerations for enterprises and ScaleUps

As enterprises develop their security strategies for AI Agents, they should be engaging with their existing vendors to understand their product roadmaps in relation to securing AI Agents — and determine where gaps will exist.

Enterprises

Some enterprises are already looking to solve identity governance and access for Agents using their existing identity and access management (IAM) and identity governance and administration (IGA) stack. This requires a disciplined approach to ensure all Agent identities are well managed, and is likely only sustainable for more digitally mature organizations with healthy cyber budgets. Other enterprises may struggle, which opens the door for discovery and governance capabilities around machine identities, like those offered by the earlier-stage NHI vendors.

Where we’ve heard more consistent concern across enterprise conversations is in relation to the visibility and monitoring of AI Agents themselves. We’ve heard this referred to as the “UEBA for Agents.” AI Agents are inherently probabilistic, meaning that it is impossible to test every possible combination of their actions; monitoring Agent activity and applying an approach similar to insider risk is likely to be more effective than trying to apply wholly deterministic rulesets.

Startups and ScaleUps

For founders, determining which innovation opportunity area they fit into and ensuring their messaging aligns with it can help build credibility with CISOs. Enterprises know this is a complex, multifaceted problem to solve, and going into a conversation claiming “We solve all of AI Agent security” is likely to draw skepticism. Focus on the risk area you solve best and how you integrate into the broader technology stack.

Second, identify the right budget owner and line item to help simplify procurement decisions. As with AI more broadly, many decisions around Agent security are multi-stakeholder; while there are benefits to unlocking AI innovation budgets, startups targeting more traditional security line items might fit into an improvement program or more easily prove value through a PoC.

This space is rapidly evolving, and we’ll likely see significant change not just in security solutions but in enterprise architectures in the near term. We’re always keen to hear from those driving secure adoption of AI. Reach out if you want to continue the conversation with our team.


*Note: Insight Partners has invested in Sourcegraph, Atlan, Bolt (Stackblitz), Databricks, Tavily, Reco, Spur, HoneyHive, Fiddler, Trust3 AI, Weights & Biases, Onetrust, Delinea, Teleport, CrewAI, Promptfoo, Keeper, Keyfactor, E2B, Frontegg, Ory, PlainID, Anjuna, Aviatrix, Skyflow, Kiteworks, and Docker.