
Six Geese-a-Laying a Trail of Trust and Disinformation

  • Writer: William Beresford
  • 2 days ago
  • 3 min read

 12 Days of Christmas Predictions for 2026 — Beyond’s View of What’s Next 


Welcome to Day 6 of our Christmas Predictions for 2026: a series revealing the real, near-term forces that will reshape organisations. 


The first five predictions explored AI’s operational impact, its sustainability footprint, and the rise of personal agents. Day 6 turns to a growing threat that will soon reshape how organisations communicate, protect their executives and defend their reputations: disinformation and deepfake-enabled fraud becoming everyday business risks. 


In 2026, trust will become a cross-functional discipline. 

[Image: six geese nesting in a snowy scene with golden glowing eggs]
On the sixth day of Christmas, my true love gave to me six geese a-laying (trust and disinformation)

Prediction: Disinformation Becomes a Business Risk, Not Just a Security Issue 

For years, deepfakes and synthetic media felt like fringe risks: impressive demonstrations, politically sensitive incidents or sporadic scams. But within 12 months, the volume, accessibility and sophistication of generative impersonation tools will reach a tipping point. 


  • Synthetic emails impersonate executives to approve payments. 

  • Deepfake voice calls trick staff into divulging access codes. 

  • Fabricated videos circulate online to damage brand credibility. 

  • AI-powered phishing becomes hyper-targeted and near-undetectable. 

 

What was once a cybersecurity problem will become a whole-organisation trust problem. 

  

Why Disinformation Risk Accelerates in 2026 

1. Deepfake tools become frictionless and frighteningly convincing 

Europol has estimated that as much as 90% of online content could be synthetically generated by 2026. What once required specialist skills can now be produced quickly and easily using mobile apps or open-source models. As MIT Technology Review notes, the barrier to entry for creating convincing synthetic media is collapsing. 


2. Fraudsters follow ROI and AI increases their return 

McKinsey estimates that global fraud losses could exceed $40 billion annually by 2026 as AI-enhanced scams grow in both volume and sophistication. AI allows attackers to personalise messages, mimic executive communication styles and automate social engineering. 


3. Reputational attacks become easier, cheaper and more damaging 

Harvard Business Review warns that disinformation has shifted from a political problem to a brand-level threat, especially in the consumer and financial sectors. Brands must therefore detect, verify and respond in real time, rather than relying on reactive communications. 


4. Regulators begin to intervene 

The EU’s AI Act, the UK Online Safety regime and emerging US frameworks all include provisions related to synthetic media transparency. This increases both expectations and consequences for organisations. 

  

What Cross-Functional “Trust Operations” Look Like 

1. Brand, risk, legal and IT work together  

Disinformation is not just a comms problem. Not just a cyber problem. Not just a risk problem. 

In 2026, leading organisations will create Trust Operations: cross-functional teams responsible for monitoring narratives, verifying content, responding to synthetic threats and protecting leaders. 


2. AI-driven detection tools become essential 

These systems: 

  • identify manipulated media 

  • flag unusual communication patterns 

  • verify executive identity 

  • detect synthetic voices 

  • scan social platforms for suspicious virality 

 

Companies like Microsoft, Meta, OpenAI and specialised start-ups are already building these capabilities into their platforms. 
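

To make "flag unusual communication patterns" concrete, here is a minimal sketch of the kind of rule-based scoring a trust-ops team might layer under these tools. Everything in it (field names, phrases, thresholds) is illustrative rather than any vendor's API; real deployments would combine signals like these with ML-based media detectors.

```python
# Hypothetical sketch: rule-based scoring of an inbound payment request.
# All fields and thresholds are illustrative, not a real product's API.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    sender: str         # claimed identity, e.g. "cfo@example.com"
    channel: str        # "email", "voice" or "video"
    amount: float
    payee_is_new: bool  # payee not seen in prior transactions
    off_hours: bool     # received outside normal business hours
    text: str           # message body or call transcript

URGENCY_PHRASES = ("urgent", "immediately", "confidential", "do not tell")

def risk_score(req: PaymentRequest) -> int:
    """Heuristic risk score; higher means more suspicious."""
    score = 0
    if req.payee_is_new:
        score += 3  # payment to an unknown payee is the classic red flag
    if req.off_hours:
        score += 1
    if req.amount > 10_000:
        score += 2
    if any(p in req.text.lower() for p in URGENCY_PHRASES):
        score += 2  # social-engineering pressure language
    if req.channel in ("voice", "video"):
        score += 1  # the channels where deepfakes are hardest to spot
    return score

request = PaymentRequest(
    sender="cfo@example.com", channel="voice", amount=250_000,
    payee_is_new=True, off_hours=True,
    text="This is urgent and confidential, wire the funds immediately.",
)
if risk_score(request) >= 5:
    print("Hold payment: route to out-of-band verification")
```

The design point: no single signal blocks a payment; the combination routes suspicious requests to human, out-of-band verification, which is exactly the protocol the next point describes.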


3. Executive protection becomes more digital than physical 

What is the highest-risk target in any organisation? Not infrastructure. Not networks. But trust in people with authority. 

Deepfake CEO scams have already resulted in multimillion-dollar losses. In 2026, identity verification protocols will have to become standard for sensitive approvals. 
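
As a sketch of what such a protocol can look like, here is a minimal out-of-band check, assuming a hypothetical send_via_second_channel delivery step (in practice a pre-registered phone number or authenticator app, never the channel the request arrived on). A one-time code is generated, delivered over the second channel, and compared in constant time.

```python
# Hypothetical sketch: out-of-band verification before a sensitive approval.
import hmac
import secrets

def send_via_second_channel(code: str) -> None:
    # Stub: in practice, deliver via a pre-registered channel the
    # attacker does not control (never the channel the request came on).
    print(f"(out-of-band) verification code: {code}")

def verify_approval() -> bool:
    expected = f"{secrets.randbelow(1_000_000):06d}"  # one-time 6-digit code
    send_via_second_channel(expected)
    supplied = input("Code read back by the approver: ").strip()
    # Constant-time comparison avoids leaking partial matches.
    return hmac.compare_digest(expected.encode(), supplied.encode())

if verify_approval():
    print("Approval confirmed over a second channel; proceed.")
else:
    print("Verification failed; escalate and do not release funds.")
```
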

4. Internal training gets a major upgrade 

Employees learn to recognise: 

  • synthetic voice attacks 

  • hyper-personalised phishing 

  • fabricated invoices 

  • manipulated imagery 

  • scam calls mimicking colleagues 

 

We predict this will become as fundamental as anti-bribery or GDPR training. 

  

Signals Already Emerging 

  • A Hong Kong company lost $25 million after staff were tricked by a deepfake video call impersonating its CFO. 

  • Google and OpenAI launched synthetic media watermarking to help identify manipulated content. 

  • Meta expanded its content authenticity and transparency frameworks across platforms. 

  • Financial institutions are reporting rapid growth in AI-enhanced impersonation scams. 

  

Beyond: Putting Data to Work 

Disinformation resilience is about defence, data quality, identity assurance and operational clarity. 

At Beyond, we help organisations: 

  • build monitoring systems that detect disinformation early 

  • map trust vulnerabilities across operations 

  • implement identity and verification processes for executives 

  • design cross-functional “trust ops” frameworks 

  • use AI to distinguish authentic from synthetic communications 

  • strengthen data pipelines that support content verification 

 

Trust is now an operational asset, and a strategic risk if neglected. Our Data Governance Accelerator can help: in just 4–8 weeks, we help you establish clear ownership, practical policies, smart automation and simple tools, all anchored in a rapid maturity review and a fit-for-purpose roles and stewardship model. 


If you want to protect your organisation in an era of synthetic threats, give us a shout! 

 

 
 