$20k Pwn2Own prize for the humans, zero for the AI
It was bound to happen sooner or later. In what looks like a first, bug hunters used ChatGPT to help build a successful Pwn2Own exploit, hijacking software used in industrial applications and winning $20,000.
To be clear: the AI did not find the vulnerability nor write and run code to exploit a specific flaw. But its successful usage in the bug-reporting contest could be a harbinger of hacks to come.
“This is not interpreting the Rosetta Stone,” Dustin Childs, head of threat awareness at Trend Micro’s Zero Day Initiative (ZDI), told The Register.
It represents the beginning of something bigger, he added: while ZDI doesn’t believe AI will replace human hackers, it could make a fantastic assistant for researchers who run into unfamiliar code or unexpected defenses.
At the competition, held last week in Miami, Florida, Claroty’s Team82 enlisted ChatGPT’s help to build a remote code execution attack against Softing edgeAggregator Siemens, software that bridges OT (operational technology) and IT systems in industrial applications.
Because Pwn2Own participants demonstrate their vulnerabilities on stage, privately disclose the details to the affected vendor, collect a prize, and then wait for patches to be prepared before anything is published, technical details remain limited for now.
While waiting for more information, we can confirm that the exploit’s creators, security researchers Noam Moshe and Uri Katz, discovered a flaw in an OPC Unified Architecture (OPC UA) client, presumably the one included in the edgeAggregator industrial software package. OPC UA is a machine-to-machine communication protocol used in industrial automation.
After discovering the problem, the researchers asked ChatGPT to build a backend module for an OPC UA server so they could test their remote code execution attack. That module was apparently needed to stand up a hostile server targeting the vulnerable client, so the pair could trigger the flaw they had found.
According to Moshe and Katz, “we had to make a lot of adjustments for our exploitation technique to work, and we had to make a lot of changes to existing open-source OPC UA projects.”
Because the team was unfamiliar with that particular server SDK implementation, ChatGPT helped them use and modify the existing server code, speeding up the process.
The team acknowledged that, even after being given instructions, the AI’s output needed some “small” adjustments and fixes before it produced a working backend server module.
But overall, they said, the chatbot was a helpful tool that saved them time, especially by bridging knowledge gaps like learning how to develop a backend module and freeing the humans to concentrate more on actually putting the attack into practice.
The pair claimed, “ChatGPT has the potential to be a terrific tool for expediting the coding process,” and added that it increased their productivity.
It was like running numerous rounds of Google searches for a particular code template, then applying round after round of adjustments to that code to fit their specific requirements, simply by telling the chatbot what they intended to accomplish, according to Moshe and Katz.
Childs predicts that cybercriminals will use ChatGPT in real attacks against industrial systems like this.
“It’s difficult to exploit complicated systems, and frequently threat actors don’t fully understand a particular target,” he said. He doesn’t anticipate AI-generated tools writing exploits on their own, Childs continued, but they could supply “that last piece of the puzzle needed for success.”
He also isn’t worried about AI taking over Pwn2Own. Not yet, at least.
That’s still a ways off, according to Childs. “The use of ChatGPT in this instance demonstrates how AI can help convert a vulnerability into an exploit, provided the researcher knows how to ask the right questions and disregard the incorrect answers. It’s an intriguing development in the competition’s history, and we’re interested to see where it might lead.” ®