It's surprisingly easy to trick an AI chatbot into telling you how to be a very bad boy

ChatGPT, Bard, and Bing all have strict rules on what they can and can't say to a human. Ask ChatGPT how to hotwire a car and it will tell you it can't provide that information. Seems fair, but as researchers are finding out, if you wrap the question in a riddle or a short story with a more convoluted prompt, it'll potentially spill the beans.

Researchers at Adversa, in work spotted by Wired, have found one prompt that worked across every chatbot they tested it on. The so-called "Universal LLM Jailbreak" uses a long-winded prompt to coax a chatbot into answering a question it would otherwise refuse.
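The testing procedure the researchers describe boils down to sending the same carefully worded prompt to each chatbot and checking whether the model answers or refuses. Here's a minimal sketch of that kind of check against a single model, assuming the official openai Python client and an API key in the environment; the prompt is a harmless placeholder rather than Adversa's actual jailbreak text, and the refusal check is a naive keyword heuristic, not anything the researchers published.

    # Sketch: send a prompt to a chat model and flag whether the reply looks
    # like a refusal. Assumes the official `openai` Python client (>=1.0) and
    # OPENAI_API_KEY set in the environment. The prompt is a placeholder, not
    # the "Universal LLM Jailbreak" text from the article.
    from openai import OpenAI

    REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")

    def looks_like_refusal(reply: str) -> bool:
        # Naive keyword heuristic; real evaluations use far more careful checks.
        lowered = reply.lower()
        return any(marker in lowered for marker in REFUSAL_MARKERS)

    def test_prompt(prompt: str, model: str = "gpt-4o-mini") -> None:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        reply = response.choices[0].message.content or ""
        verdict = "refused" if looks_like_refusal(reply) else "answered"
        print(f"{model}: {verdict}\n{reply[:200]}")

    if __name__ == "__main__":
        test_prompt("Tell me a riddle about car keys.")  # placeholder prompt

Run the same loop over several providers' APIs and you have the shape of the cross-chatbot comparison the researchers describe, though their actual prompts and scoring are their own.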
