• db0@lemmy.dbzer0.com · 21 hours ago

    As always, never rely on LLMs for anything factual. They’re only good for things with a high tolerance for error, such as entertainment (e.g. RPGs).

    • kboy101222@sh.itjust.works · 19 hours ago

      I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn’t need that thing included.

      Sorry for being vague, I just didn’t want to post my home town on here

    • 1rre@discuss.tchncs.de · 20 hours ago

      The issue for RPGs is that LLMs have such “small” context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later.

      Although, similar to how DeepSeek uses two stages (“how would you solve this problem”, then “solve this problem following this train of thought”), you could feed the model the recent conversation plus a private, unseen “notebook” that gets modified or appended to based on recent events. Doing that properly would need a whole new model, which likely wouldn’t be profitable short term. That said, I imagine the same infrastructure could be used for any LLM task where fine details over a long period matter more than specific wording, including factual things.
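      The two-stage “private notebook” idea above could be sketched roughly like this. Everything here is hypothetical: `call_llm` stands in for whatever model API you’d actually use, and `NotebookDM` is just an illustration of the control flow, not any real system.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns a dummy
    # string so the control flow below can run without a model.
    return f"[model reply to {len(prompt)} chars of prompt]"

class NotebookDM:
    """Keeps a short window of recent turns plus a private, append-only
    notebook of details the players might come back to later."""

    def __init__(self, window: int = 6):
        self.window = window
        self.recent: list[str] = []    # visible short-term context
        self.notebook: list[str] = []  # unseen long-term memory

    def take_turn(self, player_input: str) -> str:
        self.recent.append(f"Player: {player_input}")
        # Stage 1: ask the model which details are worth remembering,
        # and append them to the private notebook.
        note = call_llm(
            "Extract any detail that could matter later:\n" + player_input
        )
        self.notebook.append(note)
        # Stage 2: answer using the recent turns plus the whole notebook,
        # so old details survive after they scroll out of `recent`.
        prompt = (
            "Notebook:\n" + "\n".join(self.notebook)
            + "\nRecent:\n" + "\n".join(self.recent[-self.window:])
        )
        reply = call_llm(prompt)
        self.recent.append(f"DM: {reply}")
        return reply
```

      The point of the split is that the prompt only ever carries the trimmed `recent` window plus the notebook, so the context stays bounded while long-lived facts persist.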

      • db0@lemmy.dbzer0.com · 20 hours ago

        The problem is that the “train of thought” is also hallucinations. It might make the model better with more compute, but it’s diminishing returns.

        RPGs can use LLMs because they’re not critical. If the LLM spews out nonsense you don’t like, you just ask it to redo it, because it’s all subjective.

    • kat@orbi.camp · 19 hours ago

      Or at least as an assistant in a field you’re an expert in. Love using it for boilerplate at work (tech).

    • Eheran@lemmy.world · 20 hours ago

      Nonsense, I use it a ton for science and engineering; it saves me SO much time!

      • Atherel@lemmy.dbzer0.com · 17 hours ago

        Do you blindly trust the output or is it just a convenience and you can spot when there’s something wrong? Because I really hope you don’t rely on it.

          • otp@sh.itjust.works · 1 hour ago

            Y’know, a lot of the hate against AI seems to mirror the hate against Wikipedia, search engines, the internet, and even computers in the past.

            “Do you just blindly believe whatever it tells you?”

            “It’s not absolutely perfect, so it’s useless.”

            “It’s all just garbage information!”

            “This is terrible for jobs, society, and the environment!”

          • Nalivai@lemmy.world · 20 minutes ago

            In which case you probably aren’t saving time. Checking bullshit usually takes longer and is harder than just researching it yourself. Or it should be, if you do due diligence.

            • Womble@lemmy.world · 8 minutes ago

              It’s nice that you inform people that they can’t tell whether something is saving them time, without knowing what their job is or how they’re using the tool.