The footage looks real. The voice sounds familiar. The instructions appear to come from someone with the authority to give them. By the time anyone thinks to question what they have seen or heard, the wire transfer has cleared, the announcement has circulated, or the damage has quietly taken root.
Deepfake fraud has undergone a fundamental shift in character. What began as a technically demanding discipline has become, through advances in generative AI, something far more accessible. According to Deloitte, the volume of deepfake content on social media platforms grew by 550 percent between 2019 and 2023.
“The scale of deepfakes as a threat to enterprises is rising every day as attacks become cheaper, easier and more accessible,” says Andrea Sorri, Segment Development Manager for Smart Cities – EMEA, Axis Communications. “AI has been a key component in making deepfakes easier to generate and disseminate, and AI-generated videos are being used maliciously as part of disinformation campaigns, cyber-attacks and attacks against high-value business targets.”
The technologies enabling these attacks are the same ones organisations are investing in for competitive advantage. “AI and ML have become key enterprise technologies in unlocking the next stages of productivity and digital transformation,” says Sorri. “However, they are also enabling criminals to refine their attacks and produce more convincing malicious content. As enterprises continue to invest in and scale their AI-powered systems and applications, they need to enhance their ability to protect themselves against actors who seek to use those same applications against them.”
There is a clear logic to why senior leadership has become the preferred impersonation target. Executives carry institutional authority; their instructions move capital, shift strategy, and trigger operational responses across entire organisations. A fabricated directive from a CFO or CEO does not need to be technically flawless; it only needs to be believable long enough for someone to act on it.

The expansion of hybrid work and enterprise teleconferencing has compounded this. “The abstraction layer that comes with leaders communicating over a video link becomes a point of attack for criminals looking to manipulate content,” says Sorri. “What this means is that leaders are regarded like any other business asset and are thus subject to the same monitoring and proactive security measures.”
A 2024 Medius survey found that 53 percent of finance professionals had been targeted by deepfake scams, and 43 percent admitted they had fallen victim. The resulting losses are well documented, including several widely reported cases in which employees transferred multi-million-dollar sums to fraudsters posing as company leadership.
“Once an attack is carried out, organisations have to move very quickly to avoid escalation or reputational damage,” says Sorri. “They need to conduct internal investigations, assess when, where and how the attack was carried out, identify the vulnerability or point of failure that was exploited by the attackers, and adhere to previously established protocols and procedures.”
Threat beyond the balance sheet
The financial services exposure is well documented, but the use of synthetic media as a disinformation instrument aimed at critical infrastructure draws considerably less scrutiny. “Infrastructure in sectors such as mining, logistics, transportation and urban management can be impacted by disinformation campaigns and content that seeks to cause market unrest,” says Sorri. “At a time when geopolitical tensions are influencing business activity across the Middle East, manipulated video content can cause people, countries and markets to panic.”
The scenarios he describes are not speculative. In sectors tied closely to economic stability, public confidence and national operations, manipulated content has the potential to trigger consequences long before its authenticity is challenged.
“Oil and gas operations can be the victim of AI-generated content that purports to show infrastructure being attacked or destroyed. Another example is video content showing public spaces or urban environments with high volumes of foot traffic, where disruptions or incidents appear to threaten personal safety,” he says. Content designed not to steal but to destabilise can move markets and erode institutional confidence without a single technical system being compromised in any conventional sense.
The technical response has to start with treating video integrity the way financial systems treat transaction integrity: verified, not assumed.
“Signed video adds cryptographic signatures to a captured video, collecting information from previous frames and signing the information using a private encryption key. Users can then verify the information using that signature and the corresponding public key, thus ensuring the end-to-end integrity of video data,” Sorri says. “Organisations can also improve their overall resilience through best practices, including protecting video data and using encrypted data transport. Safe data transmission, storage and encryption are how organisations build trust in their video systems.”
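To make the mechanism concrete, here is a minimal sketch of frame-chained video signing in Python, assuming Ed25519 keys and the cryptography library. It illustrates the general technique Sorri describes, not Axis's actual implementation: each signature covers a running digest of every frame seen so far, so deleting, reordering or altering any frame breaks verification from that point onward.

```python
# Hypothetical sketch of frame-chained video signing; not Axis's implementation.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
import hashlib

# The private key never leaves the camera; the public key is distributed
# so anyone can verify footage without trusting the system that stored it.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_stream(frames):
    """Sign each frame together with a digest chaining in all prior frames."""
    signatures = []
    chain = hashlib.sha256()
    for frame in frames:
        chain.update(frame)  # fold this frame into the running chain
        signatures.append(private_key.sign(chain.digest()))
    return signatures

def verify_stream(frames, signatures):
    """Recompute the chain and check every signature with the public key."""
    if len(frames) != len(signatures):
        return False
    chain = hashlib.sha256()
    for frame, signature in zip(frames, signatures):
        chain.update(frame)
        try:
            public_key.verify(signature, chain.digest())
        except InvalidSignature:
            return False
    return True

frames = [b"frame-0", b"frame-1", b"frame-2"]
signatures = sign_stream(frames)
print(verify_stream(frames, signatures))   # True: footage is intact
frames[1] = b"tampered"
print(verify_stream(frames, signatures))   # False: any edit breaks the chain
```

Because every signature depends on everything recorded before it, a verifier holding only the public key can detect cut, spliced or substituted frames, which is what allows footage to be validated independently of the system that recorded or stored it.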
In 2021, Axis launched an open-source project for video authentication, prioritising shared standards over proprietary advantage. “By taking an open approach and advocating for shared standards, the industry is able to enshrine complete trust in video surveillance and organisations’ ability to verify content. If one system’s video data cannot be trusted, that distrust can extend to other systems as well,” Sorri says.
The company’s Edge Vault platform embeds cybersecurity at the hardware level, while its browser-based Signed Media Verifier allows organisations to validate footage independently of the camera vendor or system owner.
When a deepfake incident does occur, the first 24 hours determine how much of the damage remains containable. “Organisations need to immediately identify the manipulated content, the elements that feature, including location, personnel and information disseminated, as well as where and how that content originated. Once those details are confirmed, they can take action, issue orders and inform stakeholders, helping to minimise potential fallout,” Sorri says.
“Authenticating video data and upholding the integrity of video systems is not going to be solved by one standalone product. It requires vendors like Axis to rethink their solutions from top to bottom, and by doing so, we address the shared challenge of manipulated content head-on,” he adds.
As the tools to fabricate convincing video and audio become cheaper and more accessible, the most consequential security divide in the enterprise is emerging between organisations that have built verification into their infrastructure and those that have not.