- cross-posted to:
- programming@kbin.social
- ai_infosec@infosec.pub
From https://twitter.com/llm_sec/status/1667573374426701824
- People ask LLMs to write code
- LLMs recommend imports that don’t actually exist
- Attackers work out what these imports’ names are, and create & upload them with malicious payloads
- People using LLM-written code then auto-add malware themselves
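One cheap defense against the attack above is to check that every import in LLM-generated code actually resolves before you `pip install` or run anything. Here's a minimal sketch (the function name and the fake package name are made up for illustration):

```python
import importlib.util

def unresolved_imports(module_names):
    """Return top-level module names that don't resolve in the current
    environment — candidates for hallucinated (or typosquatted) packages
    that deserve a manual look before installing anything."""
    return [name for name in module_names if importlib.util.find_spec(name) is None]

# "totally_real_llm_helper" is a hypothetical hallucinated package name
suspect = unresolved_imports(["json", "os", "totally_real_llm_helper"])
```

Anything this flags should be looked up on PyPI by hand — checking the author, release history, and download counts — rather than installed blindly.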
Asking LLMs for code is fine, but the output needs proofreading — imports included — to be worth anything. You could even ask the model to proofread its own work.
Also, never GPT-3.5.