The concept of a human in the loop for artificial intelligence deployments is being reexamined as organizations scale AI-driven security operations. At the RSAC 2026 Conference in San Francisco, a panel of senior security executives debated whether human oversight is essential or merely a bottleneck in AI-powered defense systems.
Moderated by James Rundle of The Wall Street Journal, the session titled 'From Threat to Strategy: The CISO's Playbook for the AI Revolution' featured Francis deSouza, chief operating officer and president of security products at Google Cloud; Emma Smith, global chief information security officer at Vodafone; and Shaun Khalfan, senior vice president and chief information security officer at PayPal. The discussion revolved around how security leaders can adapt to an environment increasingly shaped by large language models and generative AI.
Rethinking Human Oversight in AI Security
One of the most provocative moments came when panelists challenged the widely accepted idea of keeping a 'human in the loop' for every AI decision. DeSouza argued that human-led defenses are simply too slow to counter the speed of agent-led cyberattacks, and as a result, Google is moving toward agent-led defense mechanisms. He emphasized that while humans remain critical for strategy and exception handling, routine security operations should be fully automated.
Smith agreed, stating that relying on human intervention for traditional security controls is not sustainable. 'A human in the loop is not scalable if we think about our traditional security controls. The ones that rely on human behaviors are the ones that we don't rely on the most,' she said. 'Let's face it, we rely on the ones that are technical and automated and that we can prove over time. A human in the loop is not the solution for the long term, certainly on scaled operations, and I also worry that it will give a boring job to the human in the loop.' Instead, she proposed a 'human on the loop' model, in which humans draw insights from AI and supervise outcomes rather than manually controlling or approving every step.
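The distinction is easy to see in pipeline terms. Below is a minimal sketch, in Python, of an alert handler operating 'on the loop': routine alerts are remediated automatically and queued for after-the-fact human review, while exceptions still escalate to an analyst. The severity threshold, alert fields, and function names are illustrative assumptions, not any vendor's implementation.

    from queue import Queue

    review_queue: Queue = Queue()  # humans audit these asynchronously

    def remediate(alert: dict) -> None:
        # Placeholder for an automated containment action.
        print(f"auto-contained: {alert['id']}")

    def escalate_to_analyst(alert: dict) -> None:
        # Exceptions still require human judgment.
        print(f"needs human review: {alert['id']}")

    def handle_alert(alert: dict) -> None:
        """Human on the loop: act first, let humans audit afterward."""
        if alert["severity"] < 7 and alert["playbook_known"]:
            remediate(alert)            # agent acts at machine speed
            review_queue.put(alert)     # human sees a digest, not a gate
        else:
            escalate_to_analyst(alert)  # high-risk exceptions go to a person

    handle_alert({"id": "a-101", "severity": 3, "playbook_known": True})
    handle_alert({"id": "a-102", "severity": 9, "playbook_known": False})

In a human-in-the-loop design, by contrast, the remediation call would block on explicit analyst approval before executing.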
Khalfan added a layer of nuance, noting that PayPal uses AI to detect fraud across the billion transactions it processes each month. He stressed the importance of wrapping every AI initiative in a data security and compliance framework. 'When we think about our key AI principles, it's data and security. It's privacy, it's transparency, it's explainability,' he explained. 'As we wrap everything we're doing in these principles, it helps us keep this anchor of all of the efforts that we're making.'
Challenges of AI Adoption in Security
The panel also addressed the inherent challenges of integrating AI into security operations. DeSouza noted that 50% of Google's own code is now generated by AI with developer assistance, a statistic that underscores how quickly these tools are being adopted. That same adoption introduces new risks, such as prompt injection attacks that can leak sensitive corporate data, and the shared data security model between AI vendors and customers remains a complex issue, with accountability lines that are not yet clearly drawn.
Smith described Vodafone's approach through its AI Booster platform, a centralized machine learning system built on Google technology. The platform uses pre-trained models and custom tools to deploy use cases quickly, and it tracks the business value of each initiative. This gives her privacy engineering team the ability to intervene when necessary and ensure guardrails are in place. For high-risk use cases with significant business benefit, Vodafone still insists on human involvement, but only after careful risk assessment.
Khalfan highlighted the importance of tiering AI models based on data sensitivity, establishing clear use cases, and implementing controls to protect against tampering and prompt injections. He also stressed the need to account for the many identities that AI agents will require, and to collaborate with industry initiatives like the Coalition for Secure AI (CoSAI), which provides white papers and documentation across multiple workstreams.
Another key theme was the challenge posed by 'vibe coding,' a trend in which organizations rely heavily on AI-generated code without adequate human review. This can make the CISO's job more complex, as insecure code may be deployed at scale before proper security checks are performed. The panelists agreed that while AI accelerates development, it must be accompanied by robust security processes.
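One way to operationalize that agreement is a policy gate in the delivery pipeline. The sketch below refuses to merge changes labeled as AI-generated unless a human has reviewed them and a security scan has passed; the label and field names are hypothetical, not the schema of any real CI system.

    def merge_allowed(change: dict) -> bool:
        """Gate merges: AI-labeled changes need a human reviewer plus a clean scan."""
        scanned = change.get("security_scan_passed", False)
        if "ai-generated" in change.get("labels", []):
            return scanned and change.get("human_reviews", 0) >= 1
        return scanned

    # An AI-generated change with no human review is blocked even if the scan passed.
    print(merge_allowed({"labels": ["ai-generated"], "human_reviews": 0,
                         "security_scan_passed": True}))   # False
    print(merge_allowed({"labels": ["ai-generated"], "human_reviews": 1,
                         "security_scan_passed": True}))   # True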
Scaling AI Security with a Data-Centric Approach
Across the discussion, a consensus emerged that effective AI security requires a data-centric approach. For Khalfan, the sensitivity tiering, tamper and injection protections, and agent identity controls described above are its concrete expression, reinforced by collaboration with the larger ecosystem through groups such as CoSAI.
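As a concrete illustration, sensitivity tiering can be expressed as a mapping from data classification to required controls. The tiers, control names, and mapping below are assumptions for the sketch, not PayPal's actual framework.

    from dataclasses import dataclass
    from enum import Enum

    class Sensitivity(Enum):
        PUBLIC = 1
        INTERNAL = 2
        CONFIDENTIAL = 3
        RESTRICTED = 4

    # Hypothetical mapping of data-sensitivity tiers to mandatory controls.
    REQUIRED_CONTROLS = {
        Sensitivity.PUBLIC: {"output_logging"},
        Sensitivity.INTERNAL: {"output_logging", "prompt_injection_screening"},
        Sensitivity.CONFIDENTIAL: {"output_logging", "prompt_injection_screening",
                                   "tamper_detection", "agent_identity_binding"},
        Sensitivity.RESTRICTED: {"output_logging", "prompt_injection_screening",
                                 "tamper_detection", "agent_identity_binding",
                                 "human_review"},
    }

    @dataclass
    class ModelDeployment:
        name: str
        sensitivity: Sensitivity

    def missing_controls(dep: ModelDeployment, enabled: set) -> set:
        """Return the controls a deployment still lacks for its tier."""
        return REQUIRED_CONTROLS[dep.sensitivity] - enabled

    dep = ModelDeployment("fraud-scoring-llm", Sensitivity.CONFIDENTIAL)
    print(missing_controls(dep, {"output_logging", "tamper_detection"}))
    # -> {'prompt_injection_screening', 'agent_identity_binding'}

Note that agent identity appears as a control in its own right, echoing Khalfan's point that AI agents will multiply the identities a security team must manage.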
Smith echoed this sentiment, describing Vodafone's heat map that evaluates both confidence in AI outcomes and potential risk. For very high-risk scenarios with limited business benefit, the company may choose not to pursue AI at all, or to mandate human oversight. This pragmatic approach helps balance innovation with safety.
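Smith's heat map amounts to a decision function over two axes. A minimal sketch, with thresholds chosen purely for illustration rather than taken from Vodafone's actual criteria:

    def deployment_decision(confidence: float, risk: float, benefit: float) -> str:
        """Map an AI use case to a coarse decision; all thresholds are illustrative."""
        if risk >= 0.8 and benefit < 0.5:
            return "do not pursue"                 # very high risk, limited benefit
        if risk >= 0.6 or confidence < 0.5:
            return "deploy with human oversight"   # mandate a human for risky cases
        return "deploy automated"                  # high confidence, tolerable risk

    print(deployment_decision(confidence=0.9, risk=0.3, benefit=0.8))   # deploy automated
    print(deployment_decision(confidence=0.7, risk=0.85, benefit=0.3))  # do not pursue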
Alexandra Rose, director of government partnerships and the Counter Threat Unit at Sophos, added that safe AI deployment is about encouraging curiosity and innovation while ensuring security. 'I think it's important that security is not the world of no,' she said. 'It's how do we get to yes, and how do we get to a yes in a way that we're protected?'
The panel made clear that the debate over the human role in AI security is far from settled. While some argue for full automation to match the speed of adversarial AI, others insist that human judgment remains indispensable for high-risk decisions. What is certain is that organizations must adopt a structured, risk-based approach to AI deployment, one that includes data governance, cross-industry collaboration, and a willingness to challenge conventional wisdom.
Source: Dark Reading