Arup Deepfake: How An AI-Generated Video Stole $25 Million

Summary Of The Attack

  • In January 2024, Arup, the London-based multinational design and engineering firm, discovered that it had fallen victim to a deepfake attack.
  • Hong Kong police reported the attack in February 2024, though Arup did not publicly identify itself as the victim until May 2024.
  • 15 fraudulent wire transfers totaling $25.6 million (200 million Hong Kong dollars) were executed in a single day.
  • Initial access came via a spear-phishing email impersonating the CFO; the attackers then used deepfake video and audio to impersonate the company’s Chief Financial Officer and colleagues during a video conference call, bypassing traditional executive verification procedures.
  • As of early 2025, the investigation remains ongoing with Hong Kong police. No arrests have been announced, no perpetrator has been publicly identified, and the stolen funds remain unrecovered.

What Happened?

In January 2024, the Arup deepfake attack became one of the largest AI-powered financial frauds ever documented. The attack used AI-generated video and audio to impersonate the company’s Chief Financial Officer and colleagues during a video conference call.

This attack leveraged AI to manipulate human psychology and trust. Attackers created convincing deepfakes of known executives using only publicly available video and audio from company meetings and conferences.

Hong Kong police reported the attack in February 2024, though Arup did not publicly identify itself as the victim until May 2024.

Discovering The Fraud

The fraud was discovered through standard post-transaction follow-up procedures. The finance employee who authorized the wire transfers contacted Arup’s actual corporate headquarters to discuss the “secret transaction.”

The company’s executives immediately stated that they had authorized no such transaction, held no such meeting, and had no knowledge of any deepfake video conference.

This discrepancy immediately revealed the fraud. The employee alerted internal security and compliance teams, who contacted Hong Kong police.

What Was The Impact Of The Arup Deepfake Attack?

The Arup deepfake attack resulted in 15 fraudulent wire transfers totaling $25.6 million (200 million Hong Kong dollars) executed in a single day. As of early 2025, none of the stolen funds have been recovered. Hong Kong authorities are continuing to investigate the crime.

The Financial Impact

  • Direct cash loss to the organization.
  • Lost operational capability of those funds.
  • Potential impact to financial reporting and shareholder confidence.
  • Costs associated with investigation and remediation.
  • Potential litigation and regulatory fines.

The Operational Impact

Beyond the immediate financial loss, the Arup deepfake attack created significant operational disruption:

  • Finance teams required extensive time to investigate and verify transaction legitimacy.
  • Employees lost confidence in digital communication from leadership.
  • The organization needed to implement new verification procedures for large transactions.
  • Internal audit and compliance teams initiated comprehensive reviews of financial controls.
  • Management attention was diverted from core operations to incident response and remediation.

The Reputational & Trust Impact

The attack exposed vulnerabilities that raised concerns among stakeholders:

  • Shareholders questioned the strength of financial controls.
  • Business partners became concerned about the organization’s cybersecurity posture.
  • Employees worried about the authenticity of digital communications.
  • The media highlighted the incident as a cautionary tale about AI-enabled fraud.


How Did The Arup Deepfake Attack Happen?

The Arup deepfake attack stemmed from a failure in human verification procedures combined with AI technology convincing enough to deceive an employee. The attackers did not exploit software vulnerabilities; instead, they used AI-generated video and audio to create convincing impersonations of trusted individuals.


Initial Compromise: Spear-Phishing Email

The attack began with reconnaissance and careful crafting of the email. The attackers collected information about Arup’s organizational structure, the identity of key financial decision-makers, and the general communication style of executive leadership.

The initial phishing email incorporated:

  • A spoofed sender address matching the CFO’s identity.
  • Language consistent with executive communication.
  • Time-sensitive and urgent tone.
  • Request for discretionary handling.

The email succeeded in prompting the employee to consider the request seriously, though the employee remained somewhat skeptical. However, this initial skepticism would not be enough to counter the next stage of the attack.
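Spoofed sender addresses like the one described above can often be caught before a human ever weighs the request. The sketch below is a minimal, illustrative check: it flags messages whose display name claims to be the CFO while the actual address is not in a verified directory. The `VERIFIED_EXECUTIVES` set and the keyword matching are assumptions for the example; a real deployment would pair this with DMARC/SPF/DKIM enforcement at the mail gateway.

```python
from email.utils import parseaddr

# Hypothetical directory of verified executive addresses (example values);
# production systems would back this with DMARC/SPF/DKIM at the gateway.
VERIFIED_EXECUTIVES = {
    "cfo@example-firm.com",
}

def is_spoofed_sender(from_header: str) -> bool:
    """Flag messages whose display name claims an executive role but whose
    actual address is not in the verified directory."""
    display_name, address = parseaddr(from_header)
    name = display_name.lower()
    claims_executive = "cfo" in name or "chief financial" in name
    return claims_executive and address.lower() not in VERIFIED_EXECUTIVES
```

A lookalike domain such as `cfo@example-firm-payments.co` would be flagged even though the display name reads identically to a genuine message.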

Reconnaissance And Gathering Source Material

To create convincing deepfakes, attackers must gather source material. In the Arup Deepfake Heist, the attackers exploited the public nature of Arup’s communications:

Video Source Material

  • LinkedIn profile videos of company executives.
  • Recordings from company conferences and presentations.
  • Virtual meeting footage from Zoom or Teams calls.
  • Internal company presentations.

Audio Source Material

  • Public conference presentations.
  • Interview recordings.
  • Video conference meetings.
  • Company media appearances.

With this source material, the attackers used AI deepfake technology to generate convincing video and audio of the CFO and colleagues.

The AI models used were likely Generative Adversarial Networks (GANs) or diffusion models, which can synthesize realistic facial movements and expressions.

Voice Cloning Technology

Modern neural voice synthesis requires only 20-30 seconds of source audio, sometimes less. The attackers likely extracted audio samples from publicly available videos and used voice synthesis software to create deepfake audio that matched the visual deepfakes.

Generating Deepfakes

Realistic deepfakes can be created in approximately 45 minutes using freely available tools like DeepFaceLab or similar open-source software. This accessibility makes deepfake creation increasingly practical for sophisticated attackers.

Video Conference Social Engineering

Once the deepfakes were generated, the attackers scheduled a video conference call with the target employee. The call featured multiple deepfakes of company executives, creating a false sense of legitimacy and urgency.

Key elements of the deepfake video conference:

  • Multiple Participants: Rather than a single deepfake, the attackers created multiple deepfakes of different executives. This made the meeting appear more legitimate and created social pressure.
  • Authority Establishment: The deepfakes referenced internal information and processes, establishing credibility with the target.
  • Consensus Building: With multiple “executives” present, the deepfake CFO’s instructions appeared vetted and approved by colleagues.
  • Urgency Amplification: The deepfakes emphasized time sensitivity and confidentiality, pressuring the employee to comply immediately without additional verification.
  • Detailed Instructions: The deepfakes provided specific account numbers, transfer amounts, and step-by-step instructions for executing the wire transfers.

Why Traditional Defenses Failed

Deepfakes remain difficult to detect, even with the use of advanced technology. State-of-the-art automated detection systems see their accuracy plummet by 45–50% when moving from controlled laboratory settings to real-world applications.

Humans fare little better, with identification rates hovering between 55–60%, a margin only slightly higher than random chance.

These limitations are further compounded by the extreme technical difficulty of detecting deepfakes during live video conferences in real time, making them a persistent threat to digital security.

Indicators Of Compromise

Security teams should monitor for the following indicators when investigating potential deepfake fraud or social engineering attacks targeting financial operations.

Social Engineering IOCs

  • Unsolicited video conference invitations, particularly from senior leadership.
  • Requests for urgent action or large financial transfers.
  • Emphasis on confidentiality or secrecy.
  • Pressure to bypass normal approval workflows and verification procedures.
  • Communication via unexpected channels (new video platforms, personal numbers).
  • Requests originating from executives who normally do not handle transactions.
  • Unusual time of day for executive communication.
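The social engineering IOCs above lend themselves to simple triage heuristics. The following sketch scores an inbound request against three of the listed categories (urgency, secrecy, bypassing approvals); the keyword lists and the escalation threshold are illustrative assumptions, and a real detection pipeline would combine such signals with sender reputation and behavioral baselines rather than keywords alone.

```python
# Example keyword heuristics distilled from the IOCs above (assumed terms,
# not an exhaustive or production-ready ruleset).
URGENCY = ("urgent", "immediately", "right away", "time-sensitive")
SECRECY = ("confidential", "secret", "do not discuss", "discreet")
BYPASS = ("skip approval", "no need to verify", "outside the normal process")

def ioc_score(message: str) -> int:
    """Count how many IOC categories the request message triggers."""
    text = message.lower()
    return sum(
        any(term in text for term in terms)
        for terms in (URGENCY, SECRECY, BYPASS)
    )

def should_escalate(message: str, threshold: int = 2) -> bool:
    """Escalate for manual verification when multiple categories fire."""
    return ioc_score(message) >= threshold
```

A message combining urgency, secrecy, and a request to bypass approvals, much like the one in this attack, would trip all three categories.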

Deepfake-Specific IOCs

Video Artifacts

  • Slight delays between audio and visual (lip-sync issues).
  • Unnatural eye movement or blinking patterns.
  • Inconsistent lighting across the face.
  • Slight distortions at face boundaries or edges.
  • Background rendering inconsistencies.
  • Unusual skin texture or color tone variations.
  • Unnatural head movements or angles.

Audio Artifacts

  • Audio compression or digital artifacts. 
  • Inconsistent voice quality.
  • Breathing patterns that don’t match the visual.
  • Background noise that doesn’t match the environment.
  • Unusual pauses or speech patterns.


How Can Deepfake Fraud Be Prevented?

Defending against deepfake fraud, like the case with Arup, requires a multi-layered approach combining process controls, technology solutions, and employee awareness.

Immediate Mitigation Strategies

  • Secondary Verification Channel: Require that all large fund transfer requests be verified through an independent communication channel. Call the requesting executive at a known, verified phone number to confirm the request. This simple step would have immediately exposed the Arup fraud.
  • Mandatory Delay Period: Implement a 24-48 hour review period for all transfers exceeding a certain threshold (e.g., $100,000). This allows time for verification before execution.
  • Video Conference Restrictions: Restrict the use of video conferences for financial authorization discussions. Require these conversations to occur through verified channels like in-person meetings or known phone numbers.
  • Duty Segregation: Ensure the person requesting a large transfer is not the same person authorized to approve it. Require multiple sign-offs from different team members.
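The mandatory delay and duty segregation controls above can be enforced in code rather than left to policy documents. The sketch below is a minimal model, with the 24-hour hold, the $100,000 threshold, and the two-approver rule taken as example values from the bullets; it is not a production payments workflow.

```python
from dataclasses import dataclass, field
import datetime as dt

HOLD = dt.timedelta(hours=24)   # mandatory review window (example value)
THRESHOLD = 100_000             # transfers at or above this need the full workflow

@dataclass
class TransferRequest:
    requester: str
    amount: float
    created: dt.datetime
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # Duty segregation: the requester can never approve their own transfer.
        if approver == self.requester:
            raise ValueError("requester cannot approve their own transfer")
        self.approvals.add(approver)

    def releasable(self, now: dt.datetime) -> bool:
        """Large transfers require the hold period AND two independent approvals."""
        if self.amount < THRESHOLD:
            return True
        delay_elapsed = now - self.created >= HOLD
        return delay_elapsed and len(self.approvals) >= 2
```

Under this model, the Arup transfers would have been blocked twice over: the single employee could not have approved their own request, and nothing could have moved inside the hold window.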

Specific Countermeasures For Deepfake Fraud

  • Deepfake Detection Tools: Deploy AI-powered deepfake detection software on video conferencing platforms.
  • Watermarking Technology: Implement cryptographic watermarking of executive communications and video recordings. Watermarks provide proof of authenticity for critical communications.
  • Behavioral Analytics: Monitor communication patterns for anomalies, such as unusual requests from known executives or communication at unusual hours.
  • Voice Biometrics: Implement voice biometric authentication for sensitive financial decisions. Voice biometrics analyze unique characteristics of an individual’s voice to confirm identity.
  • Hardware Security Keys: Deploy hardware security keys (FIDO2 tokens) for communication platforms like Microsoft Teams and Slack. These physical keys provide stronger authentication than passwords or standard MFA.
  • Zero Trust Architecture: Implement zero trust principles for all executive communications involving financial transactions. Never assume caller identity; always independently verify through a known contact method.
  • Session Duration Limits: Enforce short session timeouts (5-15 minutes) for financial software and transaction systems. Require re-authentication for all new transactions.
  • Blockchain Verification: Use blockchain technology to verify the authenticity of critical communications. Digital signatures and tamper-evident logs provide proof of original communications versus replicas.
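The watermarking and tamper-evident verification ideas above reduce to a common primitive: a cryptographic tag that proves a message originated from an authorized system and was not altered. The sketch below illustrates the idea with a symmetric HMAC from the Python standard library; the key value is a placeholder, and a production deployment would instead use asymmetric signatures (e.g. Ed25519) with keys held in an HSM so verifiers never hold signing material.

```python
import hmac
import hashlib

# Placeholder signing key for illustration only; real systems would use
# asymmetric keys managed in an HSM and rotated on a schedule.
SIGNING_KEY = b"example-key-rotate-in-production"

def sign_message(message: bytes) -> str:
    """Produce an authenticity tag for an executive communication."""
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

def verify_message(message: bytes, tag: str) -> bool:
    """Constant-time check that the message matches its tag."""
    expected = sign_message(message)
    return hmac.compare_digest(expected, tag)
```

Any edit to a signed instruction, such as changing an account number or amount, invalidates the tag, which is exactly the tamper evidence the countermeasures above call for.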

How PromptShield™ Stops AI Deepfake Threats

PromptShield™ provides intent-based defense against AI-enhanced fraud. Rather than just detecting deepfakes, it identifies the malicious intent behind attacks by flagging social engineering patterns like artificial urgency, emotional manipulation, and leveraged authority.

By detecting deviations from standard business processes, PromptShield™ uncovers an attacker’s ultimate goal (such as unauthorized financial transfers) neutralizing threats at the behavioral level.

To prevent attackers from scaling, the system disrupts automation by blocking reconnaissance, preventing prompt injections, and securing internal AI workflows.

PromptShield™ further protects organizations by preventing unauthorized model fine-tuning using stolen executive assets and enforcing strict authentication. These proactive measures are supported by post-breach controls that monitor for identity misuse and maintain detailed audit trails for forensics and regulatory compliance.

Article by

Jason Firch, MBA
Jason is a proven marketing leader, veteran IT operations manager, and cybersecurity expert with over a decade of experience. He is the founder and President of PurpleSec.
