AI-Native Security Is a Must to Counter AI-Based Attacks

Attacks by artificial intelligence agents are a reality. Experts at Nvidia's GTC conference say defenders need to use the same tools to fight them off.


Slow human-controlled defenses won't be enough for autonomous agents spun off by technologies like OpenClaw, experts say. Artificial intelligence-native security will be needed to fend off threats.

"You're going to see an AI-led attack, full agentic attacks that we're starting to see already today. The only way to deal with those is a full agentic defense," said Francis deSouza, Google Cloud's chief operating officer and president of security products, during a panel discussion at Nvidia's GTC conference in San Jose, Calif., earlier this month.

During the discussion, panelists said AI-native security models can prevent break-ins by rogue agents. Such models include agents that spot security weaknesses, scan subagents before deployment, control dynamic system access for agents, and generate audit trails that track agent identity and activity.
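Those controls can be sketched in code. The following is a minimal, hypothetical illustration (the class names and deny-list policy are invented for this article, not any vendor's implementation): a control plane that scans a subagent's declared capabilities before deployment and writes every decision to an audit trail tied to the agent's identity.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Assumed deny-list policy; a real policy would be far richer.
DISALLOWED = {"shell_exec", "raw_network"}

@dataclass
class SubAgent:
    agent_id: str
    capabilities: set

@dataclass
class ControlPlane:
    audit_log: list = field(default_factory=list)

    def scan_and_deploy(self, agent: SubAgent) -> bool:
        """Scan a subagent before deployment; log identity and outcome."""
        violations = agent.capabilities & DISALLOWED
        allowed = not violations
        self.audit_log.append({
            "agent": agent.agent_id,
            "time": datetime.now(timezone.utc).isoformat(),
            "action": "deploy",
            "allowed": allowed,
            "violations": sorted(violations),
        })
        return allowed
```

Every deployment attempt, allowed or not, leaves an audit record keyed to the agent's identity, which is the property the panelists emphasized.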

At GTC, Nvidia CEO Jensen Huang highlighted OpenClaw's ability to create agents that can scan file systems, access personal information, and communicate with large language models. Those autonomous functions have been a source of security concerns.

Panelists said that OpenClaw could create a new attack surface: AI agents that run for weeks or months and activate after a long dormancy. For example, agents could scout for weaknesses in SharePoint systems, stay idle, and then launch attacks at specific times.

Tackling agentic threats against abandoned or insecure assets isn't humanly possible; only AI-driven security models operating at what panelists called "machine speed" can battle rogue agents, Google's deSouza said.

Nvidia introduced a fork of OpenClaw called NemoClaw, which is designed to address such concerns. It enforces privacy and security guardrails over how agents handle data.

Agentic Security Cuts Both Ways

Free-roaming agents can be both a boon and a liability: they can find and close security gaps, but they can also exploit vulnerabilities.

"It was fine because you had security by obscurity. Nobody could find them, and it didn't really matter," deSouza said. "But now, as you have agents roaming your environment, they will find them, and they will expose them."

DeSouza recommended creating an AI-native dynamic access control system to govern access for autonomous agents. Agents must not inherit the identities of human users, because permissions may change in real time as an agent traverses a workflow, he said.
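One way to read that recommendation, sketched here with made-up identifiers and a toy policy store: each agent carries its own identity rather than a human user's, and every workflow step re-checks the live policy at request time, so a revocation takes effect on the very next call.

```python
def authorize(policy: dict, agent_id: str, resource: str) -> bool:
    """Re-evaluate the live policy on every request, not once per session."""
    return resource in policy.get(agent_id, set())

# The agent has its own identity ("invoice-agent"), not a human user's.
policy = {"invoice-agent": {"erp:read"}}
assert authorize(policy, "invoice-agent", "erp:read")

# Permissions change in real time mid-workflow; the next check reflects it.
policy["invoice-agent"].discard("erp:read")
assert not authorize(policy, "invoice-agent", "erp:read")
```

The design choice is that authorization is a per-request lookup against mutable state, not a token minted once at the start of a session.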

"We really need to think about what it means natively to create this infrastructure for agents itself," deSouza said.

The technology stack needs to evolve to include data typically unavailable to agents, such as a knowledge graph or a context graph that records why a decision was made, said Amit Zavery, chief product and operating officer at ServiceNow.

ServiceNow has built an AI security system called AI Control Tower, which uses an access graph to analyze tasks and identities to determine system access for agents. It works alongside Knowledge Graph — a layer that maps agents to data inside and outside ServiceNow — to build full context around a task, the data involved, and the identity requesting access.

AI Control Tower also provides real-time agent visibility and maintains audit logs of autonomous agents. A trust layer determines when human intervention is required before an agent can access data.
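A deliberately simplified sketch of how such a trust layer might decide (the trust scores and sensitivity tiers below are invented for illustration, not ServiceNow's actual logic): escalate to a human whenever the agent's trust score falls below the threshold for the data's sensitivity tier.

```python
# Assumed sensitivity tiers: 1 = public, 2 = internal, 3 = restricted.
SENSITIVITY_THRESHOLDS = {1: 0.2, 2: 0.5, 3: 0.8}

def requires_human_review(trust_score: float, sensitivity: int) -> bool:
    """Gate data access; an unknown tier always escalates to a human."""
    return trust_score < SENSITIVITY_THRESHOLDS.get(sensitivity, float("inf"))
```

Defaulting unknown tiers to an infinite threshold means unclassified data always triggers human intervention, which fails safe.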

OpenClaw is a good reason to rethink security, but most considerations, such as defense in depth, standing privileges, and monitoring of execution and activity, should remain the same, said Elia Zaitsev, chief technology officer at CrowdStrike.

"The basic hygiene of security shouldn't change just because you have a different sort of intelligence driving the joystick," Zaitsev said.

The considerations for AI agents should also include identity — on whose behalf the agent is acting — and the scope of what agents are allowed to do, said Anirvan Mukherjee, head of AI and machine learning at Palantir.

But OpenClaw is unique in that it can spin up subagents that write their own code. The development layer will ultimately be the first line of defense, panelists said.

"That code will have to go through a software development life cycle to make sure that it's secure before it's ever deployed," Google's deSouza said.
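As a toy illustration of that kind of pre-deployment gate (the deny-list and function below are assumptions, not a description of any real pipeline): statically parse subagent-generated Python and reject it if it calls anything on a deny-list, or if it does not parse at all.

```python
import ast

BANNED_CALLS = {"eval", "exec", "os.system"}  # assumed deny-list

def passes_gate(source: str) -> bool:
    """Reject code that fails to parse or calls a banned function."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        if isinstance(func, ast.Name):
            name = func.id
        elif isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
            name = f"{func.value.id}.{func.attr}"
        else:
            continue
        if name in BANNED_CALLS:
            return False
    return True
```

A real software development life cycle would layer reviews, sandboxed execution, and dependency scanning on top of a static check like this one.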