shish_mish@lemmy.world to Technology@lemmy.world (English) · 8 months ago
Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries (www.tomshardware.com)
cross-posted to: technology@lemm.ee
vamputer@infosec.pub · 8 months ago
And then, in the case of it explaining how to counterfeit money, the AI gets so excited about solving the puzzle that it immediately disregards everything else and shouts the word in all caps, just like a real idiot would. It’s so lifelike…