CISOs Debate Human Role in AI-Powered Security

The idea of a "human in the loop" in AI deployment was challenged during a security executive panel at the RSAC 2026 Conference this week.


RSAC 2026 CONFERENCE – San Francisco – Do AI deployments need a "human in the loop" or will people merely slow things down?

That was a key question during an RSAC 2026 Conference panel in which security executives from Google Cloud, Vodafone, and PayPal discussed evolving AI use cases and how to deploy the technology safely in one's environment.

In the panel titled "From Threat to Strategy: The CISO's Playbook for the AI Revolution," The Wall Street Journal's James Rundle asked Google Cloud chief operating officer (COO) and president of security products Francis deSouza, Vodafone global chief information security officer (CISO) Emma Smith, and PayPal senior VP and CISO Shaun Khalfan how security leaders can best adapt to the new AI landscape. The trio also discussed the role of humans in AI-powered security.

For as many problems as the "AI revolution" hopes to solve, LLM-powered security products have introduced or exacerbated other issues across the security landscape.


Because AI tools themselves must be secured to a high standard (lest a prompt injection leak sensitive corporate documents and the like), the shared data-security model between AI vendor and customer remains something of a mess. AI advances outside the security organization, such as vibe coding, have also created challenges; an organization may lean too heavily on AI-generated code without the right humans in the loop, making the CISO's job more complex. And according to multiple studies, many organizations have yet to find success in their AI security deployments.

Google's AI presence speaks for itself: 50% of its code is now AI-generated with developer assistance. Vodafone security analysts use AI to automate various workflows and handle other tasks, like turning technical subject matter into board-level executive summaries — and Khalfan said PayPal uses AI to help detect fraud across the billion transactions it processes per month.

Smith said Vodafone began implementing AI when the company realized it was moving slower than AI would allow, and concluded that integrating it correctly would take a top-down approach from leadership. In other words, everyone needs to be on the same page about how to implement AI technology in a safe, ethical, and responsible way.

Vodafone's solution has been AI Booster, a centralized machine learning platform leveraging Google's technology that's designed to help deploy AI and ML models at scale. It includes a central, reusable codebase that allows the company to deploy established use cases quickly via pre-trained models and custom tools, and it tracks the business impact of those deployments.


Smith said Vodafone did that for business reasons, in part to track the value of different initiatives, but it also gives her privacy engineering team a framework to do interventions on each use case and ensure the proper guardrails are in place.

Humans on vs. in the Loop

One surprising note came in discussing the idea of placing a "human in the loop" — the concept that AI tools should include humans at some steps or even every step in order to ensure accuracy of an LLM's output. Although humans are part of the process, deSouza said that human-led defenses are often too slow to stop things like agent-led cyberattacks, and, as such, Google is moving toward agent-led defense.

Smith agreed. "I totally agree that a human in the loop is not scalable. If we think about our traditional security controls, the ones that rely on human behaviors are the ones that we don't rely on the most," she said. "Let's face it, we rely on the ones that are technical and automated and that we can prove over time. A human in the loop is not the solution for the long term, certainly on scaled operations, and I also worry that it will give a boring job to the human in the loop."


Instead, organizations should think about ways to get a human "on the loop" to get insights from AI, rather than controlling or overseeing the tools, because "it's just not going to scale," Smith said.

She added that Vodafone has built a heat map that weighs confidence in an AI's output against the potential risk of its outcome. For use cases with very high risk impact, Vodafone likely wouldn't pursue automation unless there was a big business benefit, "and then it would absolutely have a human in the loop."
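The panelists didn't share implementation details, but a confidence-versus-risk gate of the kind Smith described might be sketched as follows. The thresholds, tier labels, and `review_use_case` function are illustrative assumptions, not Vodafone's actual system.

```python
# Hypothetical sketch of a confidence-vs-risk "heat map" gate for AI use
# cases. Thresholds and labels are assumptions, not Vodafone's real values.

def review_use_case(model_confidence: float, risk_impact: str) -> str:
    """Decide the oversight level for an AI use case.

    model_confidence: 0.0-1.0 confidence in the AI's output.
    risk_impact: "low", "medium", or "high" potential impact.
    """
    if risk_impact == "high":
        # Very high-risk use cases proceed only with a human in the loop.
        return "human_in_loop"
    if model_confidence >= 0.9 and risk_impact == "low":
        # Automate fully; a human stays "on the loop," reviewing insights.
        return "human_on_loop"
    # Everything in between gets periodic human review of sampled outputs.
    return "sampled_review"
```

The point of the two-axis gate is that neither confidence nor risk alone decides the question: a highly confident model on a high-impact decision still gets a human in the loop.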

The Importance of Data Security and Collaboration

Khalfan followed Smith by emphasizing the importance of putting everything one does in a data security wrapper. While PayPal is a proponent of the engineering and technological benefits of AI tooling, he added that "it's just as important to have a risk and compliance wrapper around it."

"When we think about our key AI principles, it's data and security. It's privacy, it's transparency, it's explainability," he said. "As we wrap everything we're doing in these principles, it helps us keep this anchor of all of the efforts that we're making."

For example, PayPal's teams rank AI models in tiers based on data sensitivity, establish use cases, and then determine what controls must be in place to protect any sensitive data stored within. These controls are intended to guard the models against tampering and prompt injections, and they mean accounting for the many identities that AI agents will need.
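A tiering scheme like the one Khalfan described could be sketched as a simple mapping from data sensitivity to required controls. The tier names, control labels, and classification rules below are illustrative assumptions, not PayPal's actual policy.

```python
# Hypothetical sketch of tiering AI models by data sensitivity and mapping
# each tier to required controls. All names here are assumptions, not
# PayPal's real tiers or control catalog.

CONTROLS_BY_TIER = {
    "tier1_regulated": ["encryption_at_rest", "prompt_injection_filter",
                        "tamper_detection", "per_agent_identity"],
    "tier2_internal":  ["prompt_injection_filter", "per_agent_identity"],
    "tier3_public":    ["basic_logging"],
}

def classify_model(handles_pii: bool, handles_payment_data: bool) -> str:
    """Assign a model to a sensitivity tier based on the data it touches."""
    if handles_payment_data:
        return "tier1_regulated"
    if handles_pii:
        return "tier2_internal"
    return "tier3_public"

def required_controls(handles_pii: bool, handles_payment_data: bool) -> list:
    """Look up the controls a model must have before deployment."""
    tier = classify_model(handles_pii, handles_payment_data)
    return CONTROLS_BY_TIER[tier]
```

Note that in this sketch, the per-agent identity control applies to any model touching sensitive data, reflecting Khalfan's point about accounting for the many identities AI agents will need.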

Part of this too, Khalfan said, involves collaborating with the larger ecosystem, such as the Coalition for Secure AI (CoSAI), an industry-wide initiative that aims to facilitate collaboration between stakeholders and ensure more secure AI deployments. It offers a wide range of white papers and documentation based on multiple different workstreams.

Alexandra Rose, director of government partnerships and the Counter Threat Unit at Sophos, tells Dark Reading that safe AI deployment is about encouraging curiosity and innovation while ensuring security.

"I think it's important that security is not the world of no," she says. "It's how do we get to yes, and how do we get to a yes in a way that we're protected?"