AI is transforming many aspects of our lives, from planning vacations to acting as an intelligent personal assistant for complex tasks, and it has become an integral part of how we interact with technology. But its growing presence in everyday life also has downsides. One of them is the rise of AI-based deepfake scams, in which fraudsters exploit the technology to deceive and steal from unsuspecting people.
A recent case reported in Kerala, India, shows how dangerous these scams can be. Radhakrishnan, a resident of Kozhikode, received a video call from an unknown number. The person on screen looked like one of his former colleagues from Andhra Pradesh, and the caller even mentioned the names of common friends to win his trust. Believing the call to be genuine, Radhakrishnan continued the conversation.
A few minutes into the call, however, the scammer asked for an immediate payment of Rs 40,000, claiming it was needed for a relative’s medical emergency. Wanting to help his friend, Radhakrishnan transferred the amount online. When the scammer then asked for a further Rs 35,000, he grew suspicious and contacted his former colleague directly to check. Only then did he realize he had been defrauded: the call was nothing more than an AI-generated deepfake. He promptly reported the incident to the police.
Law enforcement agencies opened an investigation, traced the transaction to a private bank in Maharashtra, and swiftly froze the account involved. This was the first reported case in Kerala of scammers using AI-generated deepfake videos to commit fraud. Such criminals typically source photographs from social media platforms to fabricate realistic videos, and they mine publicly available information, such as the names of common friends, to make their approach more convincing.
The police have issued an alert, cautioning the public about the risks of online scams involving AI-based deepfakes. Individuals are urged to double-check the authenticity of such requests and report any suspicious incidents promptly by dialing the helpline number 1930. By doing so, victims can receive assistance from law enforcement to freeze transactions and prevent further financial losses.
To understand how scammers manage to deceive victims during video calls, it helps to look at how AI-based deepfake calls work. Deepfakes use AI algorithms, in particular facial reenactment techniques, to map one person’s facial movements onto another person’s face in a video; related techniques can clone a person’s voice in audio. The result is realistic enough to be hard to distinguish from a genuine recording. Scammers exploit this to impersonate trusted individuals, tricking victims into revealing sensitive information or making monetary transactions.
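For readers curious about the mechanics, the sketch below illustrates only the first, harmless step of facial reenactment: capturing a person’s facial motion as frame-to-frame landmark displacements. It assumes the opencv-python and mediapipe packages are installed and uses a placeholder video file name; a real reenactment tool would feed this motion signal into a generative model that re-renders a different person’s face, which is not shown here.

```python
# Illustrative sketch only: measure facial motion between video frames.
# Assumes opencv-python and mediapipe are installed; "talking_head.mp4" is a
# placeholder path for any short clip of a person speaking.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)

def landmarks(frame):
    """Return the face landmarks detected in a BGR frame, or None."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = face_mesh.process(rgb)
    if not result.multi_face_landmarks:
        return None
    return result.multi_face_landmarks[0].landmark

cap = cv2.VideoCapture("talking_head.mp4")
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    current = landmarks(frame)
    if prev is not None and current is not None:
        # Per-landmark displacement is the "motion signal" a reenactment model
        # would transfer onto another person's photograph.
        motion = [(c.x - p.x, c.y - p.y) for c, p in zip(current, prev)]
        mean_dx = sum(dx for dx, _ in motion) / len(motion)
        print(f"mean horizontal facial motion this frame: {mean_dx:+.4f}")
    prev = current
cap.release()
```

Even this harmless snippet hints at how little raw material is needed: a single photograph scraped from social media plus a short clip of anyone talking is enough input for the kinds of tools scammers use.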
While AI-based deepfake calls are a relatively recent phenomenon, they are becoming increasingly common worldwide, helped by the accessibility and affordability of deepfake creation tools. What makes them especially dangerous is that awareness of the technology is not yet widespread, so innocent individuals often struggle to tell whether such a call is genuine.
To protect oneself from AI-based deepfake calls, it is crucial to exercise caution and follow these safety measures:
- Be skeptical of calls from unknown or unexpected numbers. If a caller claims to be a friend or family member but something about them seems off, ask a personal question that only the real person would know. If they cannot answer it, the call is likely fraudulent.
- Avoid sharing personal information with, or transferring money to, unfamiliar or untrusted individuals. If a caller asks for sensitive details or financial help, hold back until you have verified who they are.
- If you suspect a call might be a deepfake or otherwise fraudulent, end the conversation immediately and do not answer further calls from that number. Then reach out to the person through a phone number you already know to be theirs to confirm their identity and whether the call was genuine.
- Report suspicious calls to the police. If you receive a call you suspect is a deepfake, report it promptly to the relevant authorities so they can investigate.
In addition to these safety measures, individuals should familiarize themselves with the signs of a deepfake call:
- The caller’s voice may sound noticeably different from what you remember.
- The caller may request personal information that they should not have access to.
- The caller may ask you to perform actions that are outside the realm of normal behavior, such as sending money or providing credit card details.
By remaining vigilant and following these precautions, individuals can mitigate the risks associated with AI-based deepfake scams. It is essential to stay informed about the latest developments in technology-driven scams and maintain a healthy skepticism to safeguard our personal and financial well-being in an increasingly AI-driven world.