Dispatch
What the deepfake decade should have taught us and what it didn't
A composite of incidents, none of them fictional in their essentials, and the institutional discipline still missing from the response.
A regional bank chief executive arrives at his office on a Monday morning to learn that a video of him appearing to authorise a fraudulent transfer has been circulating among senior staff over the weekend. The video is convincing. The voice is his. The setting, a Zoom meeting from his home office, looks correct. Three of his direct reports, having received the instruction by what appeared to be entirely normal channels, had begun to act on it before the chief financial officer raised a query.
A sovereign-fund deputy chair receives, by encrypted message, an audio recording of what sounds unmistakably like her principal authorising a position on a strategic asset. The instruction is significant. The voice is correct in every register, including the specific cadence of fatigue she recognises in her principal at the end of long days. She acts. Two days later, her principal, returning from where he had in fact been all along, denies ever having made the call.
A head of a foreign ministry takes a call from what appears to be a counterpart in a neighbouring country. The conversation is conducted in the appropriate language, the appropriate register, with the appropriate references to a meeting the two had attended together six weeks earlier. A diplomatic position is communicated. It is not the position the counterpart actually holds. By the time the discrepancy surfaces, it has been written into briefing notes for three different cabinet members.
These are not predictions. The ingredients of each one are already in place. The technology to produce convincing synthetic video, voice, and conversational AI exists, is widely accessible, and is improving on a quarterly basis. The institutional vulnerabilities are nearly universal: staff trained to act on instructions from senior leaders, communication channels that authenticate by familiarity rather than cryptography, and decision-making processes that compress time for the sake of agility. The question is not whether the composite incidents above will occur in some form. They have already occurred, in forms close enough that we know the institutions affected. The question is what institutions are doing to prepare for the version that has not yet occurred to them.
The dispiriting answer, in most institutions we have visibility into, is very little.
The institutional commentary on artificial intelligence over the last three years has been overwhelmingly focused on what AI can create. The productivity case. The new content. The faster work, the better analysis, the cheaper output. This is the conversation that has dominated cabinet meetings, board agendas, and chief-executive offsites. It is not wrong. It is not exhausted. It is, however, only half the conversation.
The other half, what AI can imitate, has been treated as a security technicality, the province of chief information security officers and a small number of academic researchers. This is a category error. The capability to synthesise a specific principal's likeness, voice, and conversational style is not a niche security problem. It is a structural threat to the way consequential institutions actually communicate, decide, and operate. And the threat is increasing in capability faster than the defensive infrastructure is being built.
We have written elsewhere about why principal-grade AI fidelity is the question that matters as analytical AI becomes commoditised. The deepfake question is the inverse of the same point. The same fidelity that, when authorised, makes a Digital Human Twin a valuable institutional asset is, when produced by adversaries, a structural threat to institutional integrity. The two are not separate concerns. They are the same problem, viewed from opposite directions.
What does it mean to defend a principal in the deepfake decade?
It means, at minimum, that any output produced in the principal's name, whether voice, video, text, or a decision, must be cryptographically attributable to a system the principal authorised. Not by the receiver's recognition. Not by the sender's claim. By cryptographic signature, verifiable against a public key that the principal has formally published, with a chain of provenance that demonstrates the output's origin and the conditions of its authorisation.
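What this looks like mechanically can be sketched in a few lines of Python. The sketch below is illustrative only: Ed25519 signatures from the cryptography library are one reasonable choice among several, and the field names, identifiers, and delegation reference are assumptions made for the sake of the example, not a prescribed institutional schema.

```python
# Illustrative sketch: sign an output issued in the principal's name and
# verify it against the principal's published public key. Field names and
# helpers are assumptions for illustration, not a prescribed schema.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def canonical_bytes(record: dict) -> bytes:
    # Deterministic serialisation so signer and verifier hash identical bytes.
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode("utf-8")

# The authorised system holds the private key; the principal formally
# publishes the corresponding public key.
signing_key = Ed25519PrivateKey.generate()
published_key: Ed25519PublicKey = signing_key.public_key()

media = b"raw bytes of the video, audio, or text artefact"
record = {
    "principal": "chief-executive",             # assumed identifier
    "channel": "video-briefing",                 # assumed channel label
    "issued_at": "2025-06-02T09:00:00Z",         # illustrative timestamp
    "content_sha256": hashlib.sha256(media).hexdigest(),
    "authorisation": "delegation-2025-014",      # hypothetical reference to the authorising mandate
}
signature = signing_key.sign(canonical_bytes(record))

def is_from_principal(record: dict, signature: bytes, key: Ed25519PublicKey) -> bool:
    # An instruction whose signature does not verify is, by definition,
    # not an instruction from the principal.
    try:
        key.verify(signature, canonical_bytes(record))
        return True
    except InvalidSignature:
        return False

assert is_from_principal(record, signature, published_key)
```

The point of the sketch is not the particular scheme but the shape of the discipline: the signature travels with the output, and verification depends on a key the principal published in advance, not on anyone's recognition of a voice or a face.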
This is not a future technology. The C2PA standard, increasingly adopted across the major model providers, allows for exactly this. The infrastructure for principal-grade signature exists. What is missing, almost universally in serious institutions, is the discipline of using it. Most chief executives have not yet published a public key against which their voice or video communications can be verified. Most boards have not yet specified that authentic communications from the principal must carry a verifiable signature. Most senior staff have not yet been trained to challenge an apparently routine instruction that does not.
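Publishing the key is the least demanding part of that discipline. A minimal sketch, again in Python and again with illustrative names, of exporting a principal's verification key in a form that can be posted wherever the institution chooses to publish it:

```python
# Minimal sketch: export the principal's verification key in a publishable
# form (PEM). Where and how it is published is an institutional choice; the
# point is that a verifiable reference exists outside any single channel.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held by the authorised system
pem = signing_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(pem.decode("ascii"))   # a short text block suitable for an institutional registry page
```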
This is the gap. It is not a technology gap. It is an institutional discipline gap, and it is closing only at the speed at which institutions take the threat seriously enough to invest in the discipline.
The threshold for taking it seriously, in most institutions, has been the first incident. By that point, depending on the incident, the cost is anywhere from substantial to catastrophic.
A second observation, uncomfortable and worth stating plainly.
The institutions most exposed to deepfake attacks are the institutions whose principals have the highest public profiles and the most visible decision authority. Heads of state. Sovereign-fund principals. Tier-1 chief executives. Central bank governors. The very leaders whose decisions would most reward an adversary willing to fabricate them are the leaders whose voice, image, and conversational style are most extensively documented in public archives, making them the easiest, technically, to imitate at high fidelity.
This is the inverse of how institutional security has historically worked. In every previous era, the most exposed individuals could be defended by physical and procedural means: protected communications, vetted staff, controlled environments. None of these defences applies against a synthetic instruction that arrives through ordinary channels and bears every external mark of authenticity. The traditional security perimeter does not contain the threat.
What does contain it is provenance. Specifically, principal-grade provenance: the discipline by which every legitimate output from the principal is signed, every signature is verifiable, and every staff member is trained that an instruction without a verifiable signature is not, by definition, an instruction from the principal, regardless of how convincingly it is delivered.
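Operationally, that discipline reduces to a gate that staff-facing systems can apply before any instruction enters a workflow. A sketch, under the same illustrative assumptions as the earlier example; the registry of published keys is a hypothetical stand-in for whatever key-distribution mechanism an institution actually adopts:

```python
# Illustrative gate: an incoming instruction is acted on only if it carries a
# signature that verifies against a currently published key for the named
# principal. Names and structures here are hypothetical stand-ins.
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

@dataclass
class Instruction:
    principal: str
    payload: bytes            # canonical bytes of the signed record
    signature: bytes | None   # a missing signature is rejected outright

def accept(instruction: Instruction,
           registry: dict[str, list[Ed25519PublicKey]]) -> bool:
    # No signature: not an instruction from the principal, however convincing.
    if instruction.signature is None:
        return False
    for key in registry.get(instruction.principal, []):
        try:
            key.verify(instruction.signature, instruction.payload)
            return True
        except InvalidSignature:
            continue
    return False
```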
This is an unfamiliar discipline. It will, over the coming decade, become standard. The institutions that adopt it earliest will, when the incidents occur, be defended. The institutions that adopt it late will discover the cost of the gap, and adopt it then.
A third point, which sits at the centre of why we are publishing this dispatch.
The architecture for defending principals against synthetic imitation is, in its essentials, the same architecture as the one Columbus has built for capturing principals at fidelity. The same calibration discipline, the same provenance infrastructure, the same governance and revocation framework. What enables a principal to authorise a Digital Human Twin to act in their name also enables the principal to defend against unauthorised imitation. The two functions share infrastructure. An institution that has done the work of one has, largely, done the work of the other.
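Revocation is the piece of that shared framework most often overlooked. A brief sketch, continuing the illustrative registry above, of how the same published key list that authenticates a Digital Human Twin's authorised output also withdraws trust from a compromised or retired key; the identifiers and structure are hypothetical:

```python
# Illustrative revocation sketch: a registry of published keys, keyed by an
# identifier, serves both purposes at once. Revoking an entry withdraws trust
# from authorised twin output signed with that key and from any adversarial
# reuse of it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

published: dict[str, Ed25519PublicKey] = {}   # key identifier -> verification key
revoked: set[str] = set()

def publish(key_id: str, key: Ed25519PublicKey) -> None:
    published[key_id] = key

def revoke(key_id: str) -> None:
    # A governance action: recorded rather than silently deleted, so the
    # decision remains auditable.
    revoked.add(key_id)

def trusted_keys() -> list[Ed25519PublicKey]:
    # Only non-revoked keys are offered to the verification gate.
    return [key for key_id, key in published.items() if key_id not in revoked]

publish("ce-2025-01", Ed25519PrivateKey.generate().public_key())
publish("ce-2025-02", Ed25519PrivateKey.generate().public_key())
revoke("ce-2025-01")   # e.g. suspected compromise or the end of a delegation
assert len(trusted_keys()) == 1
```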
This is not coincidence. It is structural. The capability to produce principal-grade synthetic output and the capability to defend against principal-grade synthetic output are the same capability, exercised in different directions. An institution that takes one seriously is, in the same investment, taking the other seriously.
It is, in our view, no longer responsible for institutions whose principals carry public weight to treat principal-grade fidelity as optional. It is now part of the basic infrastructure of consequential institutional life. We expect that, within five years, boards will ask their chief executives, as a matter of standard governance review, whether this infrastructure is in place. Most boards do not yet ask. The ones that begin asking now will, in retrospect, have been ahead.
This is the half of the AI conversation that has been largely missing from the institutional discussion of the last three years. It is the half that, in our view, will define the next three. The capability that has been the focus of celebration is, in its mirror form, also the capability that will define institutional risk for the foreseeable future. Both are real. Both are now. Neither is going to be undone by regulation, however well-intentioned.
The institutions that understand this are the ones that will navigate the deepfake decade with their reputations and their decision-making integrity intact. The institutions that do not will discover, sometimes very quickly, what the cost of misjudging the threat actually is.
We write this not to alarm. We write this because the discipline that protects against the threat is, broadly, the discipline that institutions will eventually have to build anyway as the principal-grade AI category matures. The question is whether they build it in advance of the first serious incident, or after.
We strongly recommend the former.
The Columbus Editorial Board