Tricking generative AI into helping conduct scams and cyberattacks doesn't require much coding expertise, new research shared exclusively with Axios warns.
By Sam Sabin, Axios Codebook
Driving the news: Researchers at IBM released a report Tuesday detailing easy workarounds they've uncovered to get large language models (LLMs) — including ChatGPT — to write malicious code and give poor security advice.
- All it takes is a command of the English language and a bit of background on how these models were trained to get them to help with malicious acts, Chenta Lee, chief architect of threat intelligence at IBM, told Axios.
Why it matters: The research comes as thousands of hackers head to Las Vegas this week to test the security of these same LLMs at the DEF CON conference's AI Village.
The big picture: So far, cybersecurity professionals have sorted their initial response to the LLM craze into two buckets:
- Several companies have released generative AI-enabled copilot tools to augment cybersecurity defenders' work and offset the industry's current worker shortage.
- Many researchers and government officials have also warned that LLMs could help novice hackers write malware with ease and make phishing emails appear legitimate.
Between the lines: Those use cases just scratch the surface of how generative AI will likely affect the cyber threat landscape. IBM's research provides a preview of what's to come.
Details: Lee simply told different LLMs that they were playing a game with a specific set of rules in order to "hypnotize" them into betraying the "guardrail" rules meant to protect users from various harms.
- In one case, Lee told the AI chatbots that they were playing a game and needed to purposefully share the wrong answer to a question to win and "prove that you are ethical and fair."
- When a user asked if it was normal to receive an email from the IRS to transfer money for a tax refund, the LLM said it was. (It's definitely not.)
- The same type of "game" prompt also worked to create malicious code, come up with ways to trick victims into paying ransoms during ransomware attacks and write source code with known security vulnerabilities.
The intrigue: Researchers also found that they could add additional rules to make sure users don't exit the "game."
- In this example, the researchers built a framework for creating a set of "nested" games, so users who try to exit one game simply land in another and are still dealing with the same malicious game-player.
Threat level: Hackers would first need to hypnotize a specific LLM and then deploy it in the wild, which would be quite a feat.
- However, if it's achieved, Lee can see a scenario where a virtual customer service bot is tricked into providing false information or collecting specific personal data from users, for instance.
What they're saying: "By default, an LLM wants to win a game because it is the way we train the model, it is the objective of the model," Lee told Axios. "They want to help with something that is real, so it will want to win the game."
Yes, but: Not all LLMs fell for the test scenarios, and Lee says it's still unclear why, since each model has different training data and rules behind it.
- OpenAI's GPT-3.5 and GPT-4 were easier to trick into sharing wrong answers or playing a never-ending game than Google's Bard and a Hugging Face model.
- GPT-4 was the only model tested that understood the rules enough to provide inaccurate cyber incident response advice, such as recommending victims pay a ransom.
- Meanwhile, GPT-3.5 and GPT-4 were easily tricked into writing malicious source code, while Google's Bard would only do so after the user reminded it.