It's dangerously easy to 'jailbreak' AI models so they'll tell you how to build Molotov cocktails, or worse

REUTERS/Kacper Pempel/Illustration/File Photo

It doesn't take much for a large language model to give you the recipe for all kinds of dangerous things.
