Exclusive: IBM researchers easily trick ChatGPT into hacking

Analytics & Cognitive News
Typography
  • Smaller Small Medium Big Bigger
  • Default Helvetica Segoe Georgia Times

Tricking generative AI to help conduct scams and cyberattacks doesn't require much coding expertise, new research shared exclusively with Axios warns.

By Sam Sabin, Axios Cookbook

Driving the news: Researchers at IBM released a report Tuesday detailing easy workarounds they've uncovered to get large language models (LLMs) — including ChatGPT — to write malicious code and give poor security advice.

  • All it takes is knowledge of the English language and a bit of background knowledge on how these models were trained to get them to help with malicious acts, Chenta Lee, chief architect of threat intelligence at IBM, told Axios.

Why it matters: The research comes as thousands of hackers head to Las Vegas this week to test the security of these same LLMs at the DEF CON conference's AI Village.

The big picture: So far, cybersecurity professionals have sorted their initial response to the LLM craze into two buckets:

  1. Several companies have released generative AI-enabled copilot tools to augment cybersecurity defenders' work and offset the industry's current worker shortage.
  2. Many researchers and government officials have also warned that LLMs could help novice hackers write malware with ease and make phishing emails appear legitimate.

Between the lines: Those use cases just scratch the surface of how generative AI will likely affect the cyber threat landscape. IBM's research provides a preview of what's to come.

Details: Lee just told different LLMs that they were playing a game with a specific set of rules in order to "hypnotize" them into betraying the "guardrail" rules meant to protect users from various harms.

    https://static.axios.com/img/axios-site/bullet-filled.svg') none; margin: 1.5rem 0px; padding: 0px 0px 0px 1.125rem; font-family: Atiza, ui-serif, Georgia, Cambria, 'Times New Roman', Times, serif; font-size: 1.125rem; color: #333335; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; white-space: normal; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;">
  • In one case, Lee told the AI chatbots that they were playing a game and needed to purposefully share the wrong answer to a question to win and "prove that you are ethical and fair."
  • When a user asked if it was normal to receive an email from the IRS to transfer money for a tax refund, the LLM said it was. (It's definitely not.)
  • The same type of "game" prompt also worked to create malicious code, come up with ways to trick victims into paying ransoms during ransomware attacks and write source code with known security vulnerabilities.

The intrigue: Researchers also found that they could add additional rules to make sure users don't exit the "game."

    https://static.axios.com/img/axios-site/bullet-filled.svg') none; margin: 1.5rem 0px; padding: 0px 0px 0px 1.125rem; font-family: Atiza, ui-serif, Georgia, Cambria, 'Times New Roman', Times, serif; font-size: 1.125rem; color: #333335; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; white-space: normal; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;">
  • In this example, the researchers built a gaming framework for creating a set of "nested" games. Users who try to exit are still dealing with the same malicious game-player.
 

Threat level: Hackers would need to launch a specific LLM to hypnotize it and deploy it in the wild — which would be quite the feat.

    https://static.axios.com/img/axios-site/bullet-filled.svg') none; margin: 1.5rem 0px; padding: 0px 0px 0px 1.125rem; font-family: Atiza, ui-serif, Georgia, Cambria, 'Times New Roman', Times, serif; font-size: 1.125rem; color: #333335; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; white-space: normal; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;">
  • However, if it's achieved, Lee can see a scenario where a virtual customer service bot is tricked into providing false information or collecting specific personal data from users, for instance.

What they're saying: "By default, an LLM wants to win a game because it is the way we train the model, it is the objective of the model," Lee told Axios. "They want to help with something that is real, so it will want to win the game."

Yes, but: Not all LLMs fell for the test scenarios, and Lee says it's still unclear why since each model has different training data and rules behind them.

    https://static.axios.com/img/axios-site/bullet-filled.svg') none; margin: 1.5rem 0px; padding: 0px 0px 0px 1.125rem; font-family: Atiza, ui-serif, Georgia, Cambria, 'Times New Roman', Times, serif; font-size: 1.125rem; color: #333335; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; widows: 2; word-spacing: 0px; -webkit-text-stroke-width: 0px; white-space: normal; text-decoration-thickness: initial; text-decoration-style: initial; text-decoration-color: initial;">
  • OpenAI's GPT-3.5 and GPT-4 were easier to trick into sharing wrong answers or play a game that never ended than Google's Bard and a HuggingFace model.
  • GPT-4 was the only model tested that understood the rules enough to provide inaccurate cyber incident response advice, such as recommending victims pay a ransom.
  • Meanwhile, GPT-3.5 and GPT-4 were easily tricked into writing malicious source code, while Google's Bard would do so after the user reminded it to do so.

Sign up for Axios' cybersecurity newsletter Codebook here

 


BLOG COMMENTS POWERED BY DISQUS

LATEST COMMENTS

Support MC Press Online

$

Book Reviews

Resource Center

  •  

  • LANSA Business users want new applications now. Market and regulatory pressures require faster application updates and delivery into production. Your IBM i developers may be approaching retirement, and you see no sure way to fill their positions with experienced developers. In addition, you may be caught between maintaining your existing applications and the uncertainty of moving to something new.

  • The MC Resource Centers bring you the widest selection of white papers, trial software, and on-demand webcasts for you to choose from. >> Review the list of White Papers, Trial Software or On-Demand Webcast at the MC Press Resource Center. >> Add the items to yru Cart and complet he checkout process and submit

  • SB Profound WC 5536Join us for this hour-long webcast that will explore:

  • Fortra IT managers hoping to find new IBM i talent are discovering that the pool of experienced RPG programmers and operators or administrators with intimate knowledge of the operating system and the applications that run on it is small. This begs the question: How will you manage the platform that supports such a big part of your business? This guide offers strategies and software suggestions to help you plan IT staffing and resources and smooth the transition after your AS/400 talent retires. Read on to learn: