Not too long ago, a photo or video could confirm the existence of a person, a situation, or a location. But technology has changed that, and can now place a fake person in a manipulated situation at a fabricated location, making statements that were never made. This is deepfake content.
“Deepfake” refers to synthetic media in which artificial intelligence (AI) is used to create hyper-realistic but fabricated images, videos, or audio recordings. What began as a form of entertainment has evolved into a tool for deception, manipulation, and fraud, posing significant challenges to individuals, organizations, and governments worldwide.
These fake images exacerbate the problem of fake news, which has permeated people’s lives through their mobile phones and other gadgets.
The announcement by the Department of Information and Communications Technology (DICT) that it has acquired a software tool capable of detecting deepfake videos within seconds is welcome news. According to cybersecurity analyst Marco Reyes of the DICT Cybercrime Investigation and Coordinating Center (CICC), the application can be installed on computers and, when a video is run through it, determines its authenticity in about 30 seconds. If the video is identified as a deepfake, a red “X” appears on the screen. The tool is envisioned to assist law enforcement agencies in identifying and combating deepfake-related scams.
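The article does not describe the tool's internals, but the reported workflow (analyze a video, produce a verdict in seconds, flag fakes with a red "X") can be sketched in simplified form. Everything below is hypothetical: the function names, the per-frame scores, and the decision threshold are illustrative assumptions, not the DICT tool's actual design, which would rely on a trained AI model rather than a stub.

```python
# Hypothetical sketch of a deepfake-screening workflow: a model assigns each
# sampled video frame a "fake" score in [0, 1]; the video is flagged when the
# average score crosses a threshold. The scores here stand in for a real model.

def detect_deepfake(frame_scores: list[float], threshold: float = 0.5) -> bool:
    """Return True (likely deepfake) if the mean per-frame fake score
    exceeds the threshold. frame_scores come from a hypothetical model."""
    if not frame_scores:
        raise ValueError("no frames were analyzed")
    return sum(frame_scores) / len(frame_scores) > threshold

def verdict(frame_scores: list[float]) -> str:
    # Mimics the reported user interface: an "X" for flagged videos.
    return "X DEEPFAKE DETECTED" if detect_deepfake(frame_scores) else "Authentic"

# Illustrative runs with made-up scores for two videos
print(verdict([0.91, 0.87, 0.95]))  # consistently high fake scores
print(verdict([0.08, 0.12, 0.05]))  # consistently low fake scores
```

A real detector would also have a gray zone of borderline scores, which is one reason (as noted below) that detection tips and tools can fall behind as the generation technology improves.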
The DICT initiative is timely, as deepfakes are increasingly being used in fraudulent schemes. For instance, in the US, the Financial Industry Regulatory Authority (FINRA) has reported that scammers are leveraging generative AI tools to create synthetic identification documents and deepfake images to open fraudulent brokerage accounts and take over existing ones. According to a Wall Street Journal report, these sophisticated scams are projected to push industry fraud losses to $40 billion by 2027.
Several notable deepfake scams have caused significant financial losses. In one case, scammers created deepfake videos featuring public figures to promote fraudulent schemes, leading unsuspecting individuals around the world to invest substantial sums. Last year, deepfakes were also blamed for disrupting elections in several countries, with fabricated videos showing candidates making malicious statements they never uttered.
To combat the proliferation of deepfakes, various tools and applications have been developed to serve as AI-threat detection platforms, providing real-time assessment of digital media, including videos, images, audio, and identities.
Despite these technological advancements, the fight against deepfakes continues. Indeed, authors of how-to articles on spotting deepfakes warn that this malicious technology is developing so fast that their tips may no longer work a few months later.
Governments and the private sector need to collaborate on a multifaceted approach. A public awareness campaign is needed to educate citizens about the existence and potential dangers of deepfakes. Governments need to establish and enforce laws that penalize the creation and distribution of malicious deepfake content. Social media platforms and tech companies should implement stricter verification processes and employ AI-driven detection tools to identify and remove deepfake content promptly.
Continuous investment in research to develop more advanced detection methods is also essential, as deepfake technology continues to evolve.
The emergence of deepfakes represents a significant challenge in the digital age. While tools are essential, a comprehensive strategy involving technology, legislation, and public education is vital to effectively combat the threats posed by deepfake technology.