Companies need better controls to manage key threats rising from the growth of agentic AI. These new features provide a starting point.

Organizations' adoption of artificial intelligence (AI) agents has dramatically expanded their attack surface and opened them up to new classes of attacks, but software and cybersecurity firms are only beginning to develop ways for organizations to rein in agentic, autonomous activity.
This week, Microsoft rolled out several measures to secure agentic AI and deploy agents to improve security. At RSAC Conference, it launched a preview feature that allows companies to create guardrails in Microsoft Foundry, its AI platform-as-a-service, and announced agentic capabilities for its Security Copilot. The company also added identities for agents to its Entra ID service so that companies can more closely track agents, control them with permissions, and log their behavior.
As AI agents took off in 2025, the enterprise security landscape shifted dramatically. Not only do security teams need to worry about users and applications accessing data and resources, but they also need to be concerned about AI agents, which effectively blend the two.
Already, agentic browsers, agentic swarms, and multi-agent systems, such as OpenClaw, are causing problems for enterprises. More than half of companies surveyed by analyst firm Omdia say they lack confidence that they can secure the resources regularly accessed by nonhuman identities. (Omdia is part of Dark Reading's parent company, Informa TechTarget.)
"Every time you have this sort of reinvention of the app stack, that creates new attack surfaces, which means new threat vectors and new types of threats," says Herain Oberoi, vice president of data and AI security at Microsoft. "As a security platform vendor, we have to keep thinking about where we extend our platform to support that."
Giving Agents Identities
Security has to catch up and anticipate where the agentic AI attack surface will pose the greatest risk, Oberoi says. In fact, the proliferation of AI agents and the lack of ways for most companies to manage them is the most pressing of the four major changes to the threat landscape — more than AI sprawl, data leakage, or new regulations, he says.
"Within Entra, we're taking the entire richness of the identity stack ... and applying it for agents," he says.
One key function is the agent registry, he explains.
"We believe in all situations, an agent should have its own identity, and when it's working on behalf of the user, the registry knows through the metadata that it's working on behalf of the user," Oberoi says.
As part of its RSAC announcements, Microsoft said it has launched agent identities in Azure AI Foundry, giving each AI agent its own identity managed by Microsoft Entra ID. Some experts have argued that identity will become the foundation of cybersecurity in the age of agentic AI, and Microsoft's announcement underscores those arguments.
Microsoft also announced support for controls to limit AI agents. Expanding the definition of guardrails to include these types of controls, Microsoft will let users create collections of controls that are assigned to models or agents. In addition, the company has extended its identity protections to shield critical data and resources from AI systems, and it will use AI agents to continuously guard against other AI systems.
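The announcement doesn't detail the control schema, but "collections of controls assigned to models or agents" suggests something like the following deny-by-default policy set. All names and fields here are invented for illustration; they are not Foundry's actual configuration.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GuardrailSet:
    """Hypothetical bundle of controls assignable to agents or models."""
    name: str
    allowed_tools: set[str] = field(default_factory=set)    # deny by default
    blocked_domains: set[str] = field(default_factory=set)
    max_actions_per_run: int = 50

    def permits(self, tool: str, domain: Optional[str] = None) -> bool:
        if tool not in self.allowed_tools:
            return False                    # anything unlisted is refused
        if domain is not None and domain in self.blocked_domains:
            return False
        return True

# One collection of controls can be assigned to several agents at once.
soc_guardrails = GuardrailSet(
    name="soc-agents",
    allowed_tools={"search_alerts", "summarize_alerts"},
    blocked_domains={"pastebin.com"},
)
assignments = {"triage-agent": soc_guardrails, "posture-agent": soc_guardrails}

print(assignments["triage-agent"].permits("summarize_alerts"))  # True
print(assignments["triage-agent"].permits("delete_records"))    # False
```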
Microsoft has also added an AI pillar to its Zero Trust Workshop, along with practical defense-in-depth strategies for securing autonomous agents.
AI vs. AI
Microsoft is putting agents to work improving AI security. It updated its Security Copilot to help security teams more efficiently triage events and uncover potential risks. It expanded its Security Triage Agent to use the new identity registry for AI agents and created a new Security Analyst agent to conduct "deep, multi-step investigations" across infrastructure using telemetry and data from Microsoft Defender and Sentinel, the company said.
Triaging security incidents and alerts is one of the first applications of Security Copilot, and the company continues to invest in the platform's capabilities, Oberoi says.
The triage agent can run in the background and produce a summarized list of alerts, while a posture agent can assess the organization's data security posture and make recommendations where it believes there's risk, he explains.
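In spirit, that background triage job reduces a noisy alert stream to a ranked digest. A rough, non-Microsoft sketch of the reduction follows; the alert fields are made up, not Defender's schema.

```python
from collections import Counter

SEVERITY_RANK = {"high": 0, "medium": 1, "low": 2}

def summarize(alerts: list[dict]) -> list[str]:
    """Collapse duplicate alerts and sort by severity into a short digest."""
    counts = Counter((a["title"], a["severity"]) for a in alerts)
    ranked = sorted(counts.items(), key=lambda kv: SEVERITY_RANK[kv[0][1]])
    return [f"[{sev.upper()}] {title} (x{n})" for (title, sev), n in ranked]

alerts = [
    {"title": "Impossible travel sign-in", "severity": "high"},
    {"title": "Impossible travel sign-in", "severity": "high"},
    {"title": "Mass file download", "severity": "medium"},
    {"title": "Legacy auth attempt", "severity": "low"},
]
for line in summarize(alerts):
    print(line)
```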
"The space continues to evolve," Oberoi says.
Controls that rein in agents, identity-based visibility into agents' configurations, and ways to measure the impact agents have on security posture are all critical requirements for managing risk going forward, Oberoi says. The company has created evaluation tools that can flag when an agent is being given risky capabilities, he says.
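The kind of evaluation Oberoi describes, flagging an agent that has been granted risky capabilities, could look something like this in miniature. The capability names, risk scores, and threshold are all invented for illustration.

```python
# Hypothetical capability-risk table; the weights are illustrative only.
CAPABILITY_RISK = {
    "read:alerts": 1,
    "send:email": 3,
    "write:config": 4,
    "delete:records": 5,
}
RISK_THRESHOLD = 4  # flag any capability scoring at or above this

def flag_risky(agent_name: str, capabilities: list[str]) -> list[str]:
    """Return a warning for each granted capability that meets the threshold."""
    return [
        f"{agent_name}: '{cap}' is high risk (score {CAPABILITY_RISK[cap]})"
        for cap in capabilities
        if CAPABILITY_RISK.get(cap, 0) >= RISK_THRESHOLD
    ]

print(flag_risky("posture-agent", ["read:alerts", "delete:records"]))
```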
"We were able to move quickly here [because] we already had a lot of the underlying tech applied to users, apps, and devices, and so extending it to agents wasn't a complete start from scratch," he says. "I would say this is a problem space that we're going to have to continue to look at."