• ggppjj@lemmy.world · 24 hours ago (edited)

    Not copilot, but I run into a fourth problem:
    4. The LLM gets hung up on insisting that a newer feature of the language I’m using is wrong and keeps focusing on “fixing” it, even though it has access to the newest correct specifications where the feature is explicitly defined and explained.

    • obbeel@lemmy.eco.br · 2 hours ago

      I’ve also run into this when trying to program in Rust. It just says that the newest features don’t exist and keeps rolling back to an unsupported library.
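      Models with older training data often behave exactly this way with recently stabilized Rust syntax. As an illustration (not the poster's actual code), `let ... else`, stable since Rust 1.65, is valid but frequently gets "fixed" away:

```rust
// Illustrative only: `let ... else` (stable since Rust 1.65) is the kind
// of newer syntax an LLM with older training data may insist is invalid.
fn first_word_upper(s: &str) -> String {
    let Some(word) = s.split_whitespace().next() else {
        return String::new();
    };
    word.to_uppercase()
}

fn main() {
    // Valid modern Rust; a model may still try to rewrite it as a match.
    println!("{}", first_word_upper("hello world")); // prints "HELLO"
}
```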

    • rumba@lemmy.zip · 21 hours ago

      Oh god yes, ran into this asking for a shell.nix file with a handful of tricky dependencies. It kept trying to do this insanely complicated temporary pull and build from git instead of just a 6 line file asking for the right packages.
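      For comparison, the kind of short shell.nix being asked for looks roughly like this; the package names are illustrative placeholders, not the poster's actual dependencies:

```nix
# Hypothetical minimal shell.nix; swap in the real package names.
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  buildInputs = with pkgs; [ python3 openssl pkg-config ];
}
```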

      • ggppjj@lemmy.world · 21 hours ago

        “This code is giving me a return value of X instead of Y”

        “Ah the reason you’re having trouble is because you initialized this list with brackets instead of new().”

        “How would a syntax error give me an incorrect return”

        “You’re right, thanks for correcting me!”

        “Ok so like… The problem though.”
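        The exchange reads like C#, where the bracket form is the newer collection-expression syntax (C# 12) and is not a syntax error at all. A sketch, assuming that is the language in question:

```csharp
// Sketch assuming C#: since C# 12, collection expressions make the
// bracket form valid, so "fixing" it to new() is a no-op here.
using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        List<int> viaBrackets = [1, 2, 3];    // C# 12 collection expression
        List<int> viaNew = new() { 1, 2, 3 }; // older target-typed new
        Console.WriteLine(viaBrackets.Count == viaNew.Count); // True
    }
}
```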

        • rumba@lemmy.zip · 20 hours ago

          Yeah, once you have to question its answer, it’s all over. It got stuck and gave you the next-best answer in its weights, which was absolutely wrong.

          You can always restart the convo, re-insert the code and say what’s wrong in a slightly different way and hope the random noise generator leads it down a better path :)

          I’m doing some stuff with translation now, and I’m finding you can restart the session, run the same prompt, and get better or worse versions of a translation. After a few runs, you can take all the output and ask it to rank each translation on correctness and critique them. I’m still not completely happy with the output, but it does seem that sometimes, if you MUST get AI to answer the question, there can be value in making it answer across more than one session.
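          That workflow can be sketched in a few lines; `llm` here is a hypothetical callable standing in for whatever chat API is used, with each call assumed to be a fresh session:

```python
def translate_and_rank(text: str, llm, runs: int = 3) -> str:
    """Run the same translation prompt in several fresh sessions, then ask
    one more session to rank and critique the candidates.

    `llm` is a hypothetical callable (prompt -> reply); each call stands
    in for a brand-new session with no shared history.
    """
    candidates = [llm(f"Translate to English:\n{text}") for _ in range(runs)]
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    # One final session sees all candidates at once and ranks them.
    return llm(
        "Rank these translations on correctness and critique each:\n" + numbered
    )
```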