AI fueling rise in cyberattacks
Hacking expert says the bad guys are using ChatGPT, too
The rise of powerful new generative artificial intelligence technologies is already making lots of jobs more efficient, and fresh data indicates hacking is one of them.
Check Point Research's 2023 Mid-Year Cyber Security Report, released in recent days, found an 8% spike in global cyberattacks in the second quarter of the year – the most significant increase in two years – and the analysts blame the surge on a combination of factors, including the misuse of generative AI tools.
The study, conducted by the threat intelligence arm of Check Point Software Technologies, said ransomware groups have elevated their game, the use of USB devices to infect organizations has seen a resurgence, "hacktivism" by politically motivated groups is up, and AI misuse has grown as criminals use the new tools to craft phishing emails, keystroke-monitoring malware and ransomware code.
Dane Sherrets, senior solutions architect at HackerOne, a security platform that connects businesses with ethical hackers who help organizations find vulnerabilities in their systems before the bad guys do, says he is not at all surprised that AI is driving up cybercrime because it makes attackers much more productive. He knows that, he says, because he uses ChatGPT every day to speed up his own work.
Beyond advising clients on best practices at HackerOne, Sherrets spends his free time as a bug bounty hunter, which essentially involves gaining authorization from companies to try to hack them and find vulnerabilities so they can be fixed. HackerOne even works with the Department of Defense, running authorized hacks against the Pentagon and the Air Force, efforts Sherrets' team assists with.
He explained to FOX Business how hackers are using generative AI tools, and how the situation could evolve in the future.
"What I've noticed with AI is it just makes you ten times more productive at whatever you're doing – for an attacker, that could be writing code," he said. "So instead of me needing to write like fifty lines of code and trying to figure out exactly how to work [it out], which could take me hours, I can just ask an API [application programming interface] to generate that code for me. That takes me five seconds, and the code will sometimes work right out of the box."
Sherrets says that before generative AI tools became available to the public, scammers had to spend a great deal of time creating customized spear-phishing emails posing as company executives to obtain privileged information. Now they can write convincingly customized emails at scale with tools like ChatGPT, thanks to the data available to the apps and their ability to draft sophisticated copy.
Sherrets says major AI companies like Google and Microsoft are getting better at cracking down on hackers who try to manipulate their models for misuse, but he himself has found ways to fool the systems into giving him information a hacker should not have access to. He believes the future danger could lie in open-source models from other developers who are not concerned with how their generative AI tools are used.
"People going after hackers are going to become more efficient with this so it is going to be sort of this battle," he said. "The bad guys eventually are going to be able to have these systems of their own without those restrictions."