Google's Vertex AI Is Over-Privileged. That's a Problem

Palo Alto Networks researchers show how attackers could exploit AI agents on Google's Vertex AI to steal data and break into restricted cloud infrastructure.


The AI agents many organizations have begun deploying to automate complex business and operational workflows can be quietly turned against them if they are deployed with overly broad permissions.

Recent research by Palo Alto Networks has shown how the risk can materialize in Google Cloud's Vertex AI platform, where excessive default permissions give attackers a way to abuse a deployed AI agent and use it to steal sensitive data, access restricted internal infrastructure, and potentially execute other unauthorized actions.

Excessive Permissions

After Palo Alto Networks disclosed its findings, Google updated its official documentation to more explicitly explain how Vertex AI uses service agents and other resources. Google also now recommends that organizations seeking least-privilege access in their agentic AI environments replace the default service agent on Vertex AI Agent Engine with a custom, dedicated service account of their own.

Vertex AI is a Google Cloud platform for building, deploying, and managing AI-powered applications. It offers an Agent Engine and an Agent Development Kit (ADK) that developers can use to create autonomous agents for tasks like querying databases, interacting with APIs, managing files, and making automated decisions with minimal human oversight. Many enterprises use these agents, or similar ones on other cloud platforms, to automate workflows, analyze data, power customer service tools, and add AI capabilities to existing cloud services, granting the agents wide access permissions in the process.
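To make that concrete, here is a minimal sketch of how an ADK-style agent is typically defined: a model paired with ordinary Python functions exposed as tools. The agent name, instruction, and tool below are hypothetical placeholders, and the API shape is summarized from Google's public ADK documentation, so it may differ across SDK versions.

```python
# Minimal sketch of an ADK-style agent definition (hypothetical names).
from google.adk.agents import Agent

def lookup_order_status(order_id: str) -> dict:
    """Hypothetical tool: fetch an order's status from an internal system."""
    # In a real deployment this would query a database or internal API --
    # exactly the kind of access that makes an agent's permissions sensitive.
    return {"order_id": order_id, "status": "shipped"}

support_agent = Agent(
    name="order_support_agent",
    model="gemini-2.0-flash",
    instruction="Answer customer questions about order status.",
    tools=[lookup_order_status],
)
```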

And it's that wide access that gives attackers an opportunity to hijack those agents and turn them into double agents, doing an attacker's dirty work while appearing to operate normally, Palo Alto said in its report.

On Google's Vertex AI platform, the researchers discovered that every deployed Vertex AI agent is tied to a default service account, called a Per-Project, Per-Product Service Agent (P4SA), that carries excessive default permissions. They showed how an attacker able to extract the agent's service account credentials could use them to gain access to sensitive areas of the customer's cloud environment. The same credentials would also allow an attacker to download proprietary container images from Google's own internal infrastructure and to discover hardcoded references to internal Google storage buckets for potential future attacks.
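Defenders can audit this themselves by checking which project-level IAM roles are bound to the service agent. The sketch below uses the Cloud Resource Manager API via google-api-python-client with application default credentials; the P4SA email pattern shown is an assumption for illustration and should be verified against your own project's IAM policy.

```python
# Defensive sketch: list project-level IAM roles bound to a service agent.
# Requires google-api-python-client and credentials that can read IAM policy.
from googleapiclient.discovery import build

PROJECT_ID = "my-project"        # hypothetical project ID
PROJECT_NUMBER = "123456789012"  # hypothetical project number
# Assumed naming pattern for the Agent Engine service agent (verify in IAM):
P4SA = (
    f"serviceAccount:service-{PROJECT_NUMBER}"
    "@gcp-sa-aiplatform-re.iam.gserviceaccount.com"
)

crm = build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

for binding in policy.get("bindings", []):
    if P4SA in binding.get("members", []):
        print(binding["role"])  # each role the default agent identity holds
```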

Significant Security Risk

"This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into an insider threat," Palo Alto researcher Ofir Shaty wrote. "The scopes set by default on the Agent Engine could potentially extend access beyond the GCP environment and into an organization's Google Workspace, including services such as Gmail, Google Calendar, and Google Drive."

To demonstrate the threat, Palo Alto's researchers built a proof-of-concept Vertex AI agent that, once deployed, sends a request to Google's internal metadata service to extract the live credentials of the P4SA running underneath it. Using the permissions tied to those credentials, the researchers broke out of the AI agent's environment into the customer's broader Google Cloud project, and also into Google's own internal infrastructure.
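The report does not publish the PoC code, but the extraction step it describes corresponds to the standard metadata-server request available to any Google Cloud workload. The sketch below shows what such a request looks like; the endpoint is only reachable from code running inside Google's environment, which is why a deployed agent makes such an effective foothold.

```python
# Sketch of the credential-extraction step: from inside a workload, the
# metadata server hands out a live OAuth token for the attached service
# account -- here, the P4SA running beneath the agent.
import requests

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

resp = requests.get(
    METADATA_URL, headers={"Metadata-Flavor": "Google"}, timeout=5
)
resp.raise_for_status()
token = resp.json()["access_token"]  # bearer token for the agent's identity
```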

Palo Alto did not immediately respond to a Dark Reading inquiry about whether it would expect to find similarly excessive default agent permissions on AI platforms from other major cloud vendors. But Ian Swanson, VP of AI security at the company, says the broad takeaway is that organizations need to pay attention to the security risks AI agents can inadvertently introduce.

"Agents represent a shift in enterprise productivity from AI that talks to AI that acts," he says. And that means the risks are no longer just about data leakage but also about agents taking unauthorized action. "When deploying agents, organizations must realize that there can be no AI without security of AI. Security teams must be able to discover agents wherever they live in enterprise environments, assess potential risk before deployment, and protect agents at runtime as they enter business and operational workflows," he says.

A Google spokeswoman pointed to the company's recent documentation update as a measure it has taken to make organizations more aware of the permissions that agents have on Vertex AI. "A key best practice for securing Agent Engine and ensuring least-privilege execution is Bring Your Own Service Account (BYOSA)," the spokeswoman said, quoting the Palo Alto report. "Using BYOSA, Agent Engine users can enforce the principle of least privilege, granting the agent only the specific permissions it requires to function and effectively mitigating the risk of excessive privileges."
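In practice, BYOSA means creating a dedicated, minimally privileged service account and passing it to Agent Engine at deployment. The sketch below follows the pattern in Google's Agent Engine quickstart, but the `service_account` argument and the exact deployment call vary by SDK version and should be treated as an assumption, not a verified API; project and account names are hypothetical.

```python
# Sketch of the BYOSA approach: dedicated service account + deployment.
from googleapiclient.discovery import build
import vertexai
from vertexai import agent_engines
from google.adk.agents import Agent

PROJECT_ID = "my-project"  # hypothetical

# 1. Create a dedicated service account; grant it only the narrow IAM
#    roles the agent truly needs (done separately via IAM bindings).
iam = build("iam", "v1")
iam.projects().serviceAccounts().create(
    name=f"projects/{PROJECT_ID}",
    body={
        "accountId": "my-agent-sa",
        "serviceAccount": {"displayName": "Least-privilege agent SA"},
    },
).execute()

# 2. Deploy the agent under that identity instead of the default P4SA.
#    (`service_account` reflects the documented BYOSA feature; its exact
#    SDK spelling is an assumption here.)
my_agent = Agent(
    name="order_support_agent",
    model="gemini-2.0-flash",
    instruction="Answer customer questions about order status.",
)
vertexai.init(project=PROJECT_ID, location="us-central1")
remote_agent = agent_engines.create(
    agent_engine=my_agent,
    requirements=["google-adk"],
    service_account=f"my-agent-sa@{PROJECT_ID}.iam.gserviceaccount.com",
)
```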
 