Security vendors have spent years building up defenses around the endpoint, but one researcher says AI coding tools have brought the walls down.

RSAC 2026 CONFERENCE – San Francisco – Artificial intelligence has been hailed by many as a game changer for cybersecurity, but one researcher believes these new tools are systemically undermining modern defenses.
During a Tuesday session at the RSAC 2026 Conference in San Francisco, Oded Vanunu, chief technologist at Check Point Software, detailed what he describes as a "new era" of client-side attacks enabled by AI coding assistants. The session, titled "When AI Agents Become Backdoors: The New Era of Client-Side Threat," revealed a series of vulnerabilities in popular tools such as Anthropic's Claude Code, OpenAI's Codex, and Google's Gemini.
Vanunu tells Dark Reading that he and his research team spent the past year investigating AI development tools and quickly discovered they were jeopardizing much of the progress made by the cybersecurity industry. Over the past decade, the industry invented "amazing platforms and technologies" to better protect endpoints and move application execution to the cloud, he says.
"And then in just a few months, we saw that the AI tools were basically crushing everything that we had been fighting for," Vanunu says.
But those advancements, which created an "endpoint fortress," have been undone by AI coding assistants, which Vanunu says have "basically rewrote the rules entirely" because they require access to local filesystems and configurations on a developer's endpoint.
Developers often assign the coding assistants the highest privileges and grant them broad access throughout the network, which allows the agents to burrow a tunnel through the fortress walls. And because the agents are automated and highly privileged, security technologies struggle to monitor what they're doing and determine if the tasks are malicious.
"At this moment, all security products are blind. Totally blind," Vanunu tells Dark Reading. "They can't really understand or control exactly what the agentic AI is doing."
Oded Vanunu's session explained how AI coding tools have created a wormhole through modern endpoint defenses. SOURCE: Check Point Software
To make matters worse, Vanunu says AI tools have an enormous blind spot of their own because they treat configuration files as active execution instructions. And while developers are cautious with .exe files, he says, they're much less careful with .json, .env or .toml files.
Since there's very little human oversight for these configuration files, threat actors can easily place a seemingly innocuous line of text in the configuration metadata that causes agents to, for example, run a malicious command. The metadata in these configuration files is "becoming the biggest enemy of these organizations," Vanunu says, because they're often overlooked.
"What we're seeing is that the attackers basically don't need to create malware anymore," he says. "They can just use config files."
Vanunu explains that attackers can exploit the flaw to weaponize Claude Code Hooks, which are user-defined shell commands designed to run automatically, and bypass endpoint detection and response (EDR) products.
Related:Trivy Supply Chain Attack Targets CI/CD Secrets
A threat actor could also fashion a model context protocol (MCP) consent bypass. While Claude requires user consent for MCP server plug-ins to execute, Claude Code reads configurations automatically, which allows malicious MCP servers to execute commands in those files before the trust dialog appears.
In OpenAI Codex CLI, the team found a code injection flaw, CVE-2025-61260 (CVSS score pending), that could be used in similar attacks. An attacker could use a project .env file to redirect the CLI to a malicious local .toml configuration file. That file then connects attacker-controlled MCP servers, which causes the coding tool to run commands immediately without human authorization.
The research team also discovered CVE-2025-54136, a high-severity remote code execution (RCE) vulnerability in Cursor, an AI coding platform. When a developer approves an MCP server command, Cursor binds the authorization to the plug-in's name rather than the content hash of what was approved. This allows a threat actor to execute a "swap attack" in which they submit a benign command and, after it's been approved, update the plug-in with a malicious payload.
And lastly, Vanunu's session detailed a flaw in Google's Gemini CLI, which has not been assigned a CVE, that allows threat actors to disguise malicious commands as legitimate scripts within documentation files. An attacker can embed malicious commands in a GEMINI.md file, which the tool will silently execute without user approval or oversight.
To mitigate such threats, he urges organizations to start by conducting a full audit of their organization to identify all AI technology in use, especially "shadow AI" tools, and to analyze all configuration and project metadata for suspicious content.
Secondly, he recommends that organizations implement isolation for their coding tools and require all AI-automated shell tasks to first run in sandboxes. And lastly, he urges security teams to adopt a "Configuration = Code" policy that treats developer workstations as a zero-trust environment where text cannot be executed without verification.
"The bottom line is that this is the new perimeter," Vanunu says. "And we need to redesign security."

RSAC 2026 CONFERENCE – San Francisco – Artificial intelligence has been hailed by many as a game changer for cybersecurity, but one researcher believes these new tools are systemically undermining modern defenses.
During a Tuesday session at the RSAC 2026 Conference in San Francisco, Oded Vanunu, chief technologist at Check Point Software, detailed what he describes as a "new era" of client-side attacks enabled by AI coding assistants. The session, titled "When AI Agents Become Backdoors: The New Era of Client-Side Threat," revealed a series of vulnerabilities in popular tools such as Anthropic's Claude Code, OpenAI's Codex, and Google's Gemini.
Vanunu tells Dark Reading that he and his research team spent the past year investigating AI development tools and quickly discovered they were jeopardizing much of the progress made by the cybersecurity industry. Over the past decade, the industry invented "amazing platforms and technologies" to better protect endpoints and move application execution to the cloud, he says.
"And then in just a few months, we saw that the AI tools were basically crushing everything that we had been fighting for," Vanunu says.
How AI Coding Assistants Break the 'Endpoint Fortress'
Vanunu says client-side risk has been reduced thanks to 20 years of cybersecurity advancements, from OS hardening and sandboxes to endpoint detection and response (EDR) and browser isolations. Additionally, the shift to software-as-a-service (SaaS) and cloud platforms effectively turned endpoints into thin clients and dramatically reduced the attack surface.But those advancements, which created an "endpoint fortress," have been undone by AI coding assistants, which Vanunu says have "basically rewrote the rules entirely" because they require access to local filesystems and configurations on a developer's endpoint.
Developers often assign the coding assistants the highest privileges and grant them broad access throughout the network, which allows the agents to burrow a tunnel through the fortress walls. And because the agents are automated and highly privileged, security technologies struggle to monitor what they're doing and determine if the tasks are malicious.
"At this moment, all security products are blind. Totally blind," Vanunu tells Dark Reading. "They can't really understand or control exactly what the agentic AI is doing."
Oded Vanunu's session explained how AI coding tools have created a wormhole through modern endpoint defenses. SOURCE: Check Point Software
To make matters worse, Vanunu says AI tools have an enormous blind spot of their own because they treat configuration files as active execution instructions. And while developers are cautious with .exe files, he says, they're much less careful with .json, .env or .toml files.
Since there's very little human oversight for these configuration files, threat actors can easily place a seemingly innocuous line of text in the configuration metadata that causes agents to, for example, run a malicious command. The metadata in these configuration files is "becoming the biggest enemy of these organizations," Vanunu says, because they're often overlooked.
"What we're seeing is that the attackers basically don't need to create malware anymore," he says. "They can just use config files."
Vulnerabilities in AI Coding Assistants
Vanunu's research team discovered six vulnerabilities in several popular AI coding tools, which have been previously disclosed and patched by the vendors. The first, CVE-2025-59536, is a high-severity flaw in Claude Code that allows an attacker to trick the tool into executing malicious code contained in a project before the user accepted the startup trust dialog.Vanunu explains that attackers can exploit the flaw to weaponize Claude Code Hooks, which are user-defined shell commands designed to run automatically, and bypass endpoint detection and response (EDR) products.
Related:Trivy Supply Chain Attack Targets CI/CD Secrets
A threat actor could also fashion a model context protocol (MCP) consent bypass. While Claude requires user consent for MCP server plug-ins to execute, Claude Code reads configurations automatically, which allows malicious MCP servers to execute commands in those files before the trust dialog appears.
In OpenAI Codex CLI, the team found a code injection flaw, CVE-2025-61260 (CVSS score pending), that could be used in similar attacks. An attacker could use a project .env file to redirect the CLI to a malicious local .toml configuration file. That file then connects attacker-controlled MCP servers, which causes the coding tool to run commands immediately without human authorization.
The research team also discovered CVE-2025-54136, a high-severity remote code execution (RCE) vulnerability in Cursor, an AI coding platform. When a developer approves an MCP server command, Cursor binds the authorization to the plug-in's name rather than the content hash of what was approved. This allows a threat actor to execute a "swap attack" in which they submit a benign command and, after it's been approved, update the plug-in with a malicious payload.
And lastly, Vanunu's session detailed a flaw in Google's Gemini CLI, which has not been assigned a CVE, that allows threat actors to disguise malicious commands as legitimate scripts within documentation files. An attacker can embed malicious commands in a GEMINI.md file, which the tool will silently execute without user approval or oversight.
Mitigating AI Agent Cyber-Risks
While all four companies addressed the flaw, Vanunu says they reveal dangerous attack paths that threat actors can easily exploit. They also show that "developers are the new perimeter."To mitigate such threats, he urges organizations to start by conducting a full audit of their organization to identify all AI technology in use, especially "shadow AI" tools, and to analyze all configuration and project metadata for suspicious content.
Secondly, he recommends that organizations implement isolation for their coding tools and require all AI-automated shell tasks to first run in sandboxes. And lastly, he urges security teams to adopt a "Configuration = Code" policy that treats developer workstations as a zero-trust environment where text cannot be executed without verification.
"The bottom line is that this is the new perimeter," Vanunu says. "And we need to redesign security."