At its Ignite conference on November 15, 2023, Microsoft introduced Azure AI Speech, a tool for creating voice clones in over 40 languages. This technology is certainly exciting, but it also highlights how easy it is becoming to generate deepfakes. Deepfakes—a mashup of “deep learning” and “fake”—refer to deceptive videos or audio recordings created by AI. They can be so convincing that they appear authentic, raising considerable ethical and security concerns.
The World Economic Forum reports that online deepfakes increased by 900% between 2022 and 2023, highlighting the growing interest and impact of this technology. They can now be generated with minimal technical expertise, making the practice widespread. Deepfakes have a variety of uses. In entertainment, they were used to make Harrison Ford look younger in the latest Indiana Jones movie. In cybercrime, they are used for malicious activities such as blackmail or identity theft. For example, voice mimicry software was used to steal $240,000—a clear illustration of how deepfakes can be used for criminal purposes.
Deepfakes are also plaguing the cryptocurrency world, most obviously by compromising certain Know Your Customer (KYC) processes. For example, an AI-generated image created with Stable Diffusion recently passed a bank's verification process. However, centralized exchanges, in line with regulatory requirements, are opting for certified solutions to maintain their compliance.
Role of blockchain in the fight against deepfakes
In response to these challenges, public blockchains, with their intrinsic security and transparency features, offer promising solutions for authenticating multimedia content. Their immutability means they could be used to guarantee the authenticity of digital content by using hash functions to detect any tampering with the original files. Key functions such as timestamping and traceability could be used to determine the origin and time of creation of content.
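The hash-and-timestamp idea described above can be sketched in a few lines. This is a minimal illustration, not any specific product's implementation: a publisher records the SHA-256 fingerprint of a media file (for instance, in a blockchain transaction) along with a timestamp, and anyone can later recompute the hash of a copy to check whether it matches the original.

```python
import hashlib
import time

def fingerprint(content: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's raw bytes."""
    return hashlib.sha256(content).hexdigest()

# At publication time: compute the hash and record it, together with a
# timestamp, in a tamper-evident ledger (e.g. a blockchain transaction).
original = b"raw bytes of the original video file"
record = {"sha256": fingerprint(original), "timestamp": int(time.time())}

# Later: anyone holding a copy can recompute the hash and compare.
suspect = b"raw bytes of the original video file"   # unmodified copy
tampered = b"raw bytes of a doctored video file"    # altered copy

assert fingerprint(suspect) == record["sha256"]     # matches: authentic
assert fingerprint(tampered) != record["sha256"]    # differs: tampered
```

Note the limitation mentioned below: a hash proves that a file is byte-for-byte identical to what was registered, but it says nothing about content that was altered *before* registration.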
However, this would require the adoption of universal standards for data capture and processing, along with the default use of cryptography and blockchains to secure and verify this information. Even then, blockchains in their current state would not fully address subsequent modifications of content, and would raise major data confidentiality issues, limiting their effectiveness as the sole defense against deepfakes.
Some companies are opting for technologies that validate information rather than proving it has been tampered with. Oscar Mairey, Web3 Communications Manager at iExec, stresses the importance of having decentralized infrastructure to certify the authenticity of videos and photos, rather than having to trust a corruptible third party. iExec's DataProtector solution encrypts data and applies governance rules to it using smart contracts.
This ensures that only authorized applications can use the data, which is processed in a secure environment known as a Trusted Execution Environment (TEE). A TEE relies on dedicated hardware and software to guarantee the integrity and confidentiality of code and data during execution.
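The combination described above, encrypted data plus governance rules that gate who may process it, can be modeled in miniature. This is a toy sketch, not iExec's actual DataProtector API: the `ProtectedData` class and the XOR-based cipher are illustrative stand-ins for a smart-contract allowlist and a real encryption scheme, and in a production system decryption would happen only inside the TEE enclave.

```python
import hashlib
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    Illustration only -- use a vetted AEAD cipher in real systems."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

class ProtectedData:
    """Minimal stand-in for on-chain data governance: the owner encrypts
    the data and whitelists the applications allowed to process it."""

    def __init__(self, plaintext: bytes, authorized_apps: set):
        self._key = secrets.token_bytes(32)
        self.ciphertext = xor_stream(self._key, plaintext)
        self.authorized_apps = authorized_apps

    def process(self, app_id: str) -> bytes:
        # In a real deployment this check lives in a smart contract,
        # and the decrypted bytes never leave the enclave.
        if app_id not in self.authorized_apps:
            raise PermissionError(f"{app_id} is not authorized")
        return xor_stream(self._key, self.ciphertext)

record = ProtectedData(b"passport scan bytes", {"kyc-verifier"})
assert record.process("kyc-verifier") == b"passport scan bytes"
```

An unauthorized application calling `record.process("random-app")` is refused before any decryption takes place, which is the property the governance rules are meant to enforce.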
For example, an encrypted KYC process shared on-chain could certify a person's identity without publicly revealing the official documents used. Gilles Fedak, CEO of iExec, also highlights the effectiveness of decentralized solutions: "By using a decentralized KYC process, we provide the ability not only to verify the authenticity of a video, but also to create a transparent and incorruptible reputation system."
While there are concerns about using this tool to evade responsibility for unlawful acts, the public and transparent nature of transactions on a decentralized blockchain means that the record of past actions is readily available. This paves the way for a new era of digital trust. “By using blockchain, we are establishing indisputable veracity, which is essential in the fight against deepfakes,” says Gilles Fedak.
In practice, a politician who has validated their identity using a protocol developed on iExec solutions could themselves confirm or deny the authenticity of a video featuring them. Information certifying the veracity of the video would be made public, allowing journalists to disseminate the content without the need for additional veracity checks, while keeping personal data confidential.
It would then be easy to determine whether a video was a deepfake or not, and to disseminate this information transparently. If a politician misuses this power to deny something they actually did, the immutability of the recording on the blockchain would at least provide proof they lied.
Zero-knowledge proofs (ZKPs) are also emerging as another potential blockchain-based solution, bringing confidentiality and scalability to public blockchains. ZK-rollups, layer-2 protocols built on these proofs, could play a key role in the fight against deepfakes in the future.
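The core idea behind zero-knowledge proofs, convincing a verifier that you hold a secret without revealing it, can be illustrated with the classic Schnorr identification protocol. The parameters below are toy-sized and far too small for real security, and this sketch says nothing about how ZK-rollups are built internally; it only demonstrates the prove-without-revealing principle.

```python
import secrets

# Toy Schnorr identification protocol (parameters are illustrative only).
p = 2039   # safe prime: p = 2*q + 1
q = 1019   # prime order of the subgroup generated by g
g = 4      # generator of the order-q subgroup

x = secrets.randbelow(q)   # prover's secret (e.g. an identity key)
y = pow(g, x, p)           # public key, which could be published on-chain

# Step 1: the prover commits to a random nonce r and sends t.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Step 2: the verifier replies with a random challenge c.
c = secrets.randbelow(q)

# Step 3: the prover answers with s; the pair (t, s) reveals nothing about x.
s = (r + c * x) % q

# Verification: g^s == t * y^c (mod p) holds only if the prover knows x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verifier learns that the prover knows the secret `x` behind the public key `y`, but never sees `x` itself, which is exactly the property a privacy-preserving identity check needs.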
Technological tools and journalistic methodology
Other technological solutions have been developed to take on the challenge of deepfakes and misinformation. Deepware, for example, is an advanced tool that uses artificial intelligence to detect subtle manipulations in videos. Hoaxy stands out for its ability to track the spread of fake news on social media, offering a unique perspective on misinformation.
Sensity AI specifically targets deepfakes by analyzing videos to detect changes often imperceptible to the human eye. ClaimBuster, meanwhile, focuses on verifying the textual content of videos, comparing their claims against reputable databases.
However, the methods available to counter deepfakes are not solely technological. Assessing the credibility of sources remains one of the best ways to establish the truth about multimedia content. This approach weighs the reliability and accuracy of information against standards of impartiality, transparency, and journalistic rigor.
Together, these tools provide an increasingly robust defense against misinformation and deceptive videos, helping to protect the integrity of information in the digital age.