Over the course of my career in CIO and CTO roles at major financial institutions, I’ve seen how quickly transformative technologies can run ahead of governance. We watched it unfold with early cloud adoption. We’re still living it with crypto, where regulatory frameworks continue to chase innovation. And now we are seeing it again – at unprecedented speed – with generative AI.
The pressure to adopt GenAI is unlike anything I’ve seen. Boards want a strategy yesterday. Competitors are announcing pilots and partnerships. Employees, understandably, are eager for tools that strip away manual work and let them focus on higher-value decisions.
But for those of us who have been responsible for the crown jewels – the personally identifiable information of millions of customers, patients, or clients – the current AI SaaS landscape, along with early-stage interoperability protocols like MCP, is not just immature. In many cases, it is alarming.
Innovation is racing ahead. Enterprise-grade security and governance are trailing behind.
The Comforting Myth of Zero Data Retention
One of the clearest symptoms of this imbalance is the industry’s fixation on Zero Data Retention (ZDR).
Anyone who has sat through an AI vendor pitch recently will recognize the pattern. Raise a hand and ask about data privacy, and the response is immediate and confident:
“Don’t worry. We’re stateless. We don’t retain prompts. Nothing is used for training. Your data is deleted immediately.”
On the surface, this sounds reassuring. It checks a box on a standard vendor risk questionnaire. For many organizations, that may be enough.
For highly regulated institutions, it is not.
ZDR is not meaningless – but it is wildly insufficient. Worse, it has become a substitute for deeper architectural thinking about how sensitive data is actually handled, moved, and exposed.
Here is the uncomfortable truth few vendors will say out loud: deleting the evidence of a breach after the fact does not prevent the breach itself.
Transmission Is Disclosure
The core flaw in over-reliance on ZDR is that it ignores the most fundamental reality of data governance: movement matters.
In financial services and healthcare, we live by principles like GDPR’s “data minimization” and HIPAA’s “minimum necessary” standard. These are not abstract ideals; they are operational rules. You may only share the absolute minimum amount of data required to perform a specific function.
When an organization sends raw, unredacted customer financial histories or clinical notes to an external LLM – even one with a Zero Data Retention policy – it has already crossed a line.
In effect, that data has left a governed environment and been transmitted to third-party infrastructure where governance can no longer be fully enforced. At the moment it lands on an external server, an external disclosure has occurred.
Whether the vendor deletes it in 30 days, 30 minutes, or 30 milliseconds is largely beside the point. The data was disclosed to a party that did not have a strict “need to know” in order to perform statistical inference. ZDR does not undo that fact; it simply shortens the window in which evidence of the disclosure exists.
The “stateless” framing also glosses over technical reality. Even if data never touches persistent disk, it absolutely exists – in RAM, on GPUs, and within shared compute infrastructure – while it is being processed. History has shown us, repeatedly, that data “in memory” is not immune to attack. Side-channel exploits, cache leakage, and compromised hypervisors do not care about retention policies.
A sophisticated adversary is not scraping hard drives. They are watching live memory.
Zero Data Retention does nothing to address this risk.
What Regulated Industries Already Know
For decades, regulated institutions have built security programs around defense in depth. We assume controls will fail, and we design layers accordingly. Identity, access, encryption, monitoring, auditability – none of these stand alone.
Yet in the rush to deploy AI, many organizations are quietly abandoning these principles in favor of vendor assurances.
A credible “gold standard” for regulated AI does not rest on a single policy statement. It requires an architecture in which security and governance are intrinsic – not bolted on after the product ships, and not delegated to generalized ZDR policies that are, at best, a thin veneer.
At a minimum, that architecture must include several elements that are currently missing – or inconsistently implemented – across much of the AI ecosystem.
First, pre-transmission obfuscation. The safest sensitive data is data that never leaves your perimeter. Before a prompt is transmitted externally, PII should be identified, redacted, or tokenized according to policy. Models do not need raw identities to reason effectively; they need context. Sending unfiltered data is a failure of design, not a technical necessity.
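To make the pattern concrete, here is a deliberately minimal sketch of pre-transmission tokenization. The patterns, token format, and in-memory "vault" are illustrative assumptions, not any vendor's implementation; a production system would rely on trained PII classifiers and policy-driven detection rather than two regular expressions.

```python
import re
import uuid

# Illustrative patterns only; real deployments use far broader PII detection.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def tokenize_prompt(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with opaque tokens before the prompt leaves
    the perimeter. The vault mapping tokens back to real values stays
    inside the organization's boundary."""
    vault: dict[str, str] = {}

    def make_repl(kind: str):
        def repl(match: re.Match) -> str:
            token = f"<{kind}_{uuid.uuid4().hex[:8]}>"
            vault[token] = match.group(0)
            return token
        return repl

    safe = prompt
    for kind, pattern in PII_PATTERNS.items():
        safe = pattern.sub(make_repl(kind), safe)
    return safe, vault

def detokenize(response: str, vault: dict[str, str]) -> str:
    """Re-insert the original values into the model's response, locally."""
    for token, value in vault.items():
        response = response.replace(token, value)
    return response
```

The model sees only placeholders with enough context to reason; the mapping back to real identities never crosses the boundary.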
Second, granular internal controls. Zero Data Retention reduces some external risks, but it is limited in scope and does nothing for internal misuse. AI systems need role-aware governance: not just who can access the tool, but what data they are permitted to include in prompts, and which documents the system itself may retrieve through RAG pipelines.
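A role-aware gate can be sketched in a few lines. The role names and classification labels below are hypothetical placeholders; the point is that the check happens per prompt, against the data classifications the prompt contains, not merely at login.

```python
from dataclasses import dataclass

# Hypothetical role-to-classification policy; real systems would load this
# from a centrally governed policy store, not hard-code it.
ROLE_ALLOWED = {
    "analyst": {"public", "internal"},
    "underwriter": {"public", "internal", "confidential"},
}

@dataclass
class PromptRequest:
    user_role: str
    classifications: set[str]  # labels attached by an upstream classifier

def is_permitted(req: PromptRequest) -> bool:
    """Allow the prompt only if every classification it carries is
    within the user's role entitlement."""
    allowed = ROLE_ALLOWED.get(req.user_role, set())
    return req.classifications <= allowed
```

The same gate naturally extends to RAG: the retriever filters candidate documents through the identical entitlement check before they ever reach the model's context window.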
Third, hardened transport and key ownership. Basic HTTPS is table stakes, not sufficient. Mutual TLS, strict endpoint authentication, and customer-managed encryption keys must be part of any serious deployment, especially when data touches third-party environments.
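As one illustration of what "beyond basic HTTPS" means in practice, a client-side mutual TLS context can be assembled with Python's standard `ssl` module. The file paths are placeholders; in a real deployment the client key would live in an HSM or secrets manager, not on disk.

```python
import ssl

def build_mtls_context(ca_path: str, cert_path: str, key_path: str) -> ssl.SSLContext:
    """Build a client context that both verifies the server against a
    pinned CA and presents a client certificate (mutual TLS)."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_path)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True  # strict endpoint authentication
    ctx.load_cert_chain(certfile=cert_path, keyfile=key_path)  # client identity
    return ctx
```

Customer-managed keys sit one layer up from this: even with hardened transport, payloads should be encrypted with keys the enterprise, not the vendor, controls.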
Finally, procurement must evolve. Standard boilerplate agreements were written for deterministic software, not probabilistic systems whose behavior cannot be fully bounded. In practice, few model providers are prepared to accept traditional forms of liability for hallucinated outputs, offer complete transparency into model lineage, or make unqualified guarantees around training data provenance. That gap does not eliminate the risk: it simply shifts it. Contracts alone will not resolve this tension, which is why governance cannot stop at the firewall.
None of this is radical. It is simply the application of long-standing regulatory discipline to a new class of technology.
Bridging the Gap Between Promise and Reality
I have spent enough time inside large institutions to understand the frustration many leaders feel today. There is genuine excitement about what AI can enable – and equal frustration when promising tools reveal security assumptions better suited to consumer apps than to systemically important enterprises.
That gap between AI capability and regulatory reality is not theoretical. It is structural.
It is also the reason platforms like CAMP exist.
CAMP is not an AI model, nor does it attempt to replace the innovation occurring across the LLM ecosystem. Instead, it serves as an agentic orchestration and control layer – built on the assumption that strategic AI capabilities depend on deep integration with in-house data, and that external AI services should be treated as untrusted by default.
Rather than asking regulated organizations to contort their risk posture to fit vendor architectures, CAMP applies established security principles directly to AI workflows.
Sensitive data is detected and transformed before it ever leaves the organization’s boundary. Access controls are enforced consistently, regardless of which model or provider sits downstream. Every interaction is logged and auditable within the enterprise’s own environment.
In this model, Zero Data Retention becomes what it should have been all along: a secondary safeguard, not the foundation of trust.
Moving Forward Without Looking Away
To be clear, we cannot – and should not – slow innovation. Generative AI will reshape how work is done across every regulated industry.
But we also cannot pretend that thin assurances and marketing slogans constitute governance.
Relying on Zero Data Retention as a primary security strategy is not prudence; it is abdication. It shifts responsibility outward while ignoring the realities of data movement, exposure, and accountability.
If AI is to earn a durable place inside regulated enterprises, it must be built – and adopted – with the same seriousness we apply to every other system that touches sensitive data.
That standard already exists. We simply need to insist that AI meets it.