• Tyfud@lemmy.world · 1 day ago

    It’s fake. LLMs don’t execute commands on the host machine. They generate text in response, but they never have access to, or the ability to execute, arbitrary code in their environment.

    • kryptonidas@lemmings.world · 23 hours ago (edited)

      Some offerings, like ChatGPT, actually do have the ability to run code, which executes in a sandboxed “virtual machine” (roughly like the sketch at the end of this comment).

      That sandbox can sometimes be exploited. For example: https://portswigger.net/web-security/llm-attacks/lab-exploiting-vulnerabilities-in-llm-apis

      But escaping the VM will most likely be guarded against, so you’d have to find exploits for that as well (e.g. can you get further into the network from that point?).
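
      To make that concrete, here’s a minimal sketch of how a hosted “run code” tool can work. This is not OpenAI’s actual implementation; the container image and flags are placeholders. The point is that the model’s code runs in an isolated box and only the captured output is fed back:

          import subprocess

          def run_in_sandbox(code: str, timeout_s: int = 10) -> str:
              """Ship model-generated Python to a throwaway container and
              return only its captured output. Flags are illustrative; a
              real deployment would use a locked-down VM, not the host's
              own Docker daemon."""
              result = subprocess.run(
                  [
                      "docker", "run", "--rm",
                      "--network", "none",   # no network: limits pivoting further
                      "--memory", "256m",    # basic resource cap
                      "python:3.12-slim",
                      "python", "-c", code,
                  ],
                  capture_output=True,
                  text=True,
                  timeout=timeout_s,
              )
              return result.stdout + result.stderr

          # The model never gets a shell on the serving host; it only
          # ever sees this returned string.
          print(run_in_sandbox("print(2 ** 10)"))   # -> 1024

      So even a successful injection only buys you code execution inside that box; anything beyond it is a sandbox/VM escape on top.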

    • Ziglin (they/them)@lemmy.world · 24 hours ago

      Some are allowed to, by (I assume) generating some prefix that tells the environment to run the statement that follows (a sketch of that idea is below). ChatGPT seems to have something similar, but I haven’t tested it, and I doubt it runs terminal commands or has root access. I assume it’s either a funny coincidence that the error popped up right then, or it was indeed faked for some reason.
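
      For illustration, the “prefix” idea could look like this minimal sketch. The RUN_PYTHON: marker and the exec()-based “sandbox” are both invented here; real systems use structured tool-call tokens defined by the provider, plus proper isolation:

          import contextlib
          import io

          PREFIX = "RUN_PYTHON:"   # hypothetical marker, invented for this sketch

          def run_sandboxed(code: str) -> str:
              # Stand-in for a real sandbox (see the container sketch above);
              # exec() with stripped builtins is NOT real isolation, just a demo.
              buf = io.StringIO()
              with contextlib.redirect_stdout(buf):
                  exec(code, {"__builtins__": {"print": print}})
              return buf.getvalue()

          def dispatch(model_output: str) -> str:
              # Only replies carrying the prefix get executed; everything
              # else is ordinary text shown to the user verbatim.
              if model_output.startswith(PREFIX):
                  return run_sandboxed(model_output[len(PREFIX):].strip())
              return model_output

          print(dispatch("RUN_PYTHON: print(6 * 7)"))   # -> 42
          print(dispatch("rm -rf /"))                   # echoed as text, never run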