Generative AI tools lead to rising deepfake fraud
Fraudsters can use AI tools to steal a person's identity or create a fictitious persona to further their criminal endeavors
The rise of generative artificial intelligence (AI) has created new opportunities for businesses and individuals looking to leverage those tools, but it has also opened the door to misuse by fraudsters deploying deepfakes and other methods to carry out crimes.
Some data indicate that the use of AI-generated deepfakes to commit crimes is on the rise. Sumsub – a company that offers know-your-customer (KYC), know-your-business (KYB), transaction monitoring and anti-money laundering solutions – released data Monday showing an uptick in deepfake fraud over the last year. Deepfakes draw on a person's image or voice data to produce counterfeit images and audio that appear realistic to an unwitting viewer or listener.
Pavel Goldman-Kalaydin, head of artificial intelligence and machine learning at Sumsub, explained to FOX Business the link between the growing sophistication and availability of generative AI tools and the use of deepfakes:
"It’s not a coincidence but rather a logical consequence. With the growing trend of generative AI tools and image deepfake usage, it is evident that deepfakes have become more prevalent in recent months. In fact, we have seen more deepfakes in the past three months than we have seen in the past few years."
Sumsub’s report on fraud statistics released Tuesday found that the proportion of fraud cases involving the use of deepfakes jumped from 0.2% in Q1 2022 to 2.6% in Q1 2023 for the U.S., and from 0.1% to 4.6% in Canada over the same period. Goldman-Kalaydin said that the evolution of generative AI platforms has made it easier for fraudsters to create deepfakes capable of thwarting antifraud safeguards.
"Recent advances in image generation have also influenced the generation of deepfakes. A typical scenario a year ago was to use a real stolen document as a base, and then use a face swap of the face from the document to a prerecorded video of a fraudster. This allowed the fraudster to move their head as required by many verification platforms," he explained.
"Now, it is not always necessary to have such a source face image. Instead, you can generate from scratch and use it in both documents and face photos. This makes it harder for verification platforms to search for stolen identities because there is nothing to search for."
In terms of which industries are at the greatest risk of deepfake fraud, Goldman-Kalaydin said that fintech, cryptocurrency and gambling platforms are "especially at risk" but that "all businesses operating digitally and performing remote verification are vulnerable to deepfake fraud."
Goldman-Kalaydin noted that "as generative AI technology advances, more tools become available to fraudsters." He went on to explain that the fraudsters he and the Sumsub team have encountered generally fall into two categories of sophistication:
"There are two types of fraudsters: very tech-savvy teams that use the most advanced tech and hardware, such as high-end GPUs (graphic processing units) and state-of-the-art generative AI models, and lower-level fraudsters who use commonly available tools on computers. For example, there are people who have been banned from gambling sites for rules violations who want to return to the site by any means."
As for what consumers and members of the general public can do to keep their data from being used to create a deepfake, Goldman-Kalaydin said that verification platforms that weigh multiple factors may be the best defense against deepfake fraud.
"Answering this question is not easy. While posting high-quality face images on social networks is not recommended, even low-quality images can be used to create deepfakes that can then be enhanced with superresolution techniques to make them look realistic," he said. "Ultimately, one can only hope that verification platforms employ multiple checks, not just on the image itself, but also on the user’s behavior to detect and prevent fraud."