• 28 Posts
  • 12 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • 0xCBE@infosec.pub OP to Blue Team@infosec.pub · NVD damage continued
    2 points · 1 year ago

    I found it interesting because, starting from NVD, CVSS, etc., we have a whole industry (Snyk, etc.) that takes vuln data, mostly refuses to contextualize it, and just wraps it in a nice interface for customers to act on.

    The lack of deep context really shows when you have vulnerability data for OS packages, which can have a very different impact depending on whether your workloads are containerized. Nobody seems to care that much: they sell a wet blanket and we are happy to buy it for the convenience. A rough sketch of the kind of contextualization that gets skipped is below.
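
    To make that concrete, here is a minimal sketch (all names, weights, and the CVE ID are hypothetical) of severity scoring that accounts for deployment context, e.g. a vulnerable package baked into a container image but never exercised by the workload:

    ```python
    # Minimal sketch (all names, weights, and the CVE ID are hypothetical)
    # of context-aware severity scoring: the same OS-package CVE matters
    # less if the vulnerable package is never exercised at runtime.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        cve_id: str
        package: str
        base_cvss: float  # score as reported by NVD

    @dataclass
    class WorkloadContext:
        containerized: bool
        package_in_runtime_path: bool  # does the workload actually use it?
        internet_exposed: bool

    def effective_severity(finding: Finding, ctx: WorkloadContext) -> float:
        """Adjust the raw CVSS score using deployment context."""
        score = finding.base_cvss
        if ctx.containerized and not ctx.package_in_runtime_path:
            # e.g. a vulnerable shell utility present in the base image
            # but never invoked by the containerized service
            score *= 0.3
        if not ctx.internet_exposed:
            score *= 0.7
        return round(score, 1)

    if __name__ == "__main__":
        f = Finding("CVE-2023-XXXX", "bash", base_cvss=9.8)
        ctx = WorkloadContext(containerized=True,
                              package_in_runtime_path=False,
                              internet_exposed=False)
        print(effective_severity(f, ctx))  # 2.1 instead of a headline 9.8
    ```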




  • This stuff is fascinating to think about.

    What if prompt injection is not really solvable? I still see jailbreaks for GPT-4 from time to time.

    Let’s say we can’t validate and sanitize user input to the LLM, so the LLM output must also be considered untrusted.

    In that case security could only sit in front of the connected APIs the LLM is allowed to orchestrate. Would that even scale? How? It feels like we will have to reduce the nondeterministic nature of LLM outputs to a deterministic set of allowed inputs to the APIs… which is a castration of the whole AI vision? A rough sketch of what I mean is below.
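
    One way to picture that "security in front of the APIs" idea is a deterministic allowlist gate that validates whatever the LLM emits before any real call is dispatched. A minimal sketch, with made-up tool names and argument shapes:

    ```python
    # Minimal sketch (tool names and argument shapes are made up) of a
    # deterministic gate between untrusted LLM output and the APIs it may
    # call: the output is parsed as JSON and rejected unless it matches an
    # explicit allowlist of actions and per-argument validators.

    import json

    # The only actions the LLM may trigger, with per-argument checks.
    ALLOWED_ACTIONS = {
        "get_weather": {
            "city": lambda v: isinstance(v, str) and len(v) < 64,
        },
        "create_ticket": {
            "title": lambda v: isinstance(v, str) and len(v) < 200,
            "priority": lambda v: v in ("low", "medium", "high"),
        },
    }

    def gate(llm_output: str) -> dict:
        """Parse untrusted LLM output; raise unless it is an allowed call."""
        call = json.loads(llm_output)  # raises on non-JSON output
        if not isinstance(call, dict):
            raise ValueError("output is not a JSON object")
        action = call.get("action")
        if action not in ALLOWED_ACTIONS:
            raise ValueError(f"action not allowed: {action!r}")
        validators = ALLOWED_ACTIONS[action]
        args = call.get("args", {})
        if set(args) != set(validators):
            raise ValueError("unexpected or missing arguments")
        for name, check in validators.items():
            if not check(args[name]):
                raise ValueError(f"invalid value for {name!r}")
        return call  # safe to dispatch to the real API

    if __name__ == "__main__":
        print(gate('{"action": "get_weather", "args": {"city": "Berlin"}}'))
        try:
            gate('{"action": "delete_all", "args": {}}')
        except ValueError as e:
            print("blocked:", e)
    ```

    The trade-off is exactly the one you describe: the gate only scales as far as you can enumerate the allowed actions up front, which collapses the model's open-ended output into a fixed API surface.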

    I am also curious about the state of the art in protecting against prompt injection. Do you have any pointers?