
Deepfake scams have looted millions. Experts warn it could get worse


3D generated face representing artificial intelligence technology. Themotioncloud | iStock | Getty Images

A growing wave of deepfake scams has looted millions of dollars from companies worldwide, and cybersecurity experts warn it could get worse as criminals exploit generative AI for fraud.

A deepfake is a video, sound, or image of a real person that has been digitally altered and manipulated, often through artificial intelligence, to convincingly misrepresent them.

In one of the largest known cases this year, a Hong Kong finance worker was duped into transferring more than $25 million to fraudsters who used deepfake technology to disguise themselves as colleagues on a video call, authorities told local media in February.

Last week, UK engineering firm Arup confirmed to CNBC that it was the company involved in that case, but said it could not go into details due to the ongoing investigation.

Such threats have been growing as a result of the popularization of OpenAI's ChatGPT, launched in 2022, which quickly shot generative AI technology into the mainstream, said David Fairman, chief information and security officer at cybersecurity company Netskope.

“The public accessibility of these services has lowered the barrier of entry for cyber criminals — they no longer need to have special technological skill sets,” Fairman said.

The volume and sophistication of the scams have expanded as AI technology continues to evolve, he added.

Rising trend 

Video: Sen. Marsha Blackburn talks bill targeting AI deepfakes

Broader implications 

Video: AI and deepfakes represent 'a new type of information security problem', says Drexel's Matthew Stamm

Netskope's Fairman said such risks had led some executives to begin erasing or limiting their online presence out of fear that it could be used as ammunition by cybercriminals.

Deepfake technology has already become widespread outside the corporate world.

From fake pornographic images to manipulated videos promoting cookware, celebrities like Taylor Swift have fallen victim to deepfake technology. Deepfakes of politicians have also been rampant.

Meanwhile, some scammers have made deepfakes of individuals' family members and friends in attempts to con victims out of money.

According to Hogg, the broader issues will accelerate and worsen for a period of time, as preventing cybercrime requires thoughtful analysis to develop the systems, practices, and controls needed to defend against new technologies.

However, the cybersecurity experts told CNBC that firms can bolster their defenses against AI-powered threats through improved staff education, cybersecurity testing, and by requiring code words and multiple layers of approval for all transactions, measures that could have prevented cases such as Arup's.


