Let’s deploy LLMs everywhere! What could possibly go wrong?

  • kiku123@feddit.de · 1 year ago

    This is a very interesting read, given the hard push for AI that my company is making.

    I guess I’ll try to make sure that we don’t implement some of these really bad ideas.

    A lot of these seem to go away if you don’t connect to the Internet or allow user input, at least.

  • 0xCBE@infosec.pub · 1 year ago

    This stuff is fascinating to think about.

    What if prompt injection is not really solvable? I still see jailbreaks for ChatGPT-4 from time to time.

    Let’s say we can’t validate and sanitize user input to the LLM, which means the LLM’s output also has to be treated as untrusted.

    In that case, security could only sit in front of the connected APIs the LLM is allowed to orchestrate. Would that even scale? How? It feels like we would have to reduce the nondeterministic LLM output to a deterministic set of allowed inputs to the APIs (rough sketch below)… which is a castration of the whole AI vision?

    I am also curious about the state of the art in protecting against prompt injection. Do you have any pointers?
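
    To make the "deterministic set of allowed inputs" idea concrete, here’s a rough sketch in Python. The tool names, argument rules, and JSON shape are made up for illustration, not taken from the article: the LLM’s output is parsed and checked against a strict allowlist, and anything that doesn’t match exactly is refused.

    ```python
    import json

    # Hypothetical allowlist: the only tools the LLM may invoke, with a
    # validator for every argument we are willing to accept.
    ALLOWED_TOOLS = {
        "get_order_status": {"order_id": lambda v: isinstance(v, str) and v.isdigit()},
        "list_invoices": {"year": lambda v: isinstance(v, int) and 2000 <= v <= 2100},
    }

    def dispatch(llm_output: str) -> dict:
        """Treat the LLM's output as untrusted: parse it, check it against the
        allowlist, and refuse anything that does not match exactly."""
        try:
            # Expected shape: {"tool": "get_order_status", "args": {"order_id": "42"}}
            call = json.loads(llm_output)
        except json.JSONDecodeError:
            raise ValueError("LLM output is not valid JSON, refusing to act on it")

        tool = call.get("tool")
        args = call.get("args", {})

        if tool not in ALLOWED_TOOLS:
            raise ValueError(f"tool {tool!r} is not on the allowlist")

        spec = ALLOWED_TOOLS[tool]
        if set(args) != set(spec):
            raise ValueError(f"unexpected or missing arguments for {tool!r}")
        for name, is_valid in spec.items():
            if not is_valid(args[name]):
                raise ValueError(f"argument {name!r} failed validation")

        # Only a call that survived every check reaches the real API client.
        return call
    ```

    The downside is exactly the one above: every allowed action has to be enumerated and validated up front, which throws away much of the flexibility people want from the LLM in the first place.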

    • Capt. AIn@infosec.pub (OP, mod) · 1 year ago

      My take so far is that there aren’t really any great options for protecting against prompt injection. Simon Willison presents an idea here on his blog which is quite interesting. NVIDIA has open-sourced a framework for this as well, but it’s not without problems. Otherwise I’ve mostly seen prompt injection firewall products, but I wouldn’t trust them too much yet.
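
      For what it’s worth, those firewall products mostly boil down to pattern or classifier checks on the model’s input and output. A toy sketch of that idea, just to illustrate it (the patterns and function names below are made up, and trivially bypassable, which is part of why I wouldn’t lean on them):

      ```python
      import re

      # Toy heuristics of the kind injection-screening products rely on;
      # purely illustrative and easy to evade.
      SUSPICIOUS_PATTERNS = [
          r"ignore (all )?previous instructions",
          r"disregard (the|your) (system )?prompt",
          r"you are now",
      ]

      def looks_like_injection(text: str) -> bool:
          lowered = text.lower()
          return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

      def guarded_call(llm, user_input: str) -> str:
          """Screen the untrusted input before it reaches the model, and screen
          the model's answer afterwards, since instructions can be smuggled in
          through either one."""
          if looks_like_injection(user_input):
              return "Blocked: the request looks like a prompt injection attempt."
          answer = llm(user_input)
          if looks_like_injection(answer):
              return "Withheld: the model's output failed the injection check."
          return answer
      ```

      An attacker only has to phrase the injection in a way the patterns (or the classifier) don’t cover, so at best this raises the bar a little.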