What are your thoughts on #privacy and #itsecurity regarding the #LocalLLMs you use? They seem to be an alternative to ChatGPT, MS Copilot etc., which are basically creepy privacy black boxes. How can you be sure that local LLMs do not A) “phone home” or B) create a profile on you, and C) that their analysis is restricted to the scope of your terminal? As far as I can see, #ollama and #lmstudio do not provide privacy statements.

  • utopiah@lemmy.ml · 2 days ago

    Since you ask, here are my thoughts, with numerous examples: https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence. To clarify your points:

    • rely on open-source repositories where the code is auditable, hopefully audited, and try them offline
    • see the previous point
    • LLMs don’t “analyze” anything; they just spit out human-looking text

    To clarify the first point, since the other two follow from it: such projects would instantly lose credibility if they were to sneak in telemetry. Some FLOSS projects have tried that in the past, and it always led to uproar, reverts, and often forks of the exact same codebase but without the telemetry.
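    To make “try offline” concrete, here is a minimal sketch of one verification step you could apply yourself. Assume a Linux-like host and that the model server (e.g. an `ollama serve` process) is the one under scrutiny; you would gather its open connections with a tool like `ss -tnp` or `lsof`, then classify each remote peer. The IP addresses below are made-up examples, not real Ollama endpoints. Anything that is neither loopback nor a private LAN address is worth investigating.

    ```python
    # Sketch: flag connections from a local LLM process that leave the machine.
    # The peer list here is illustrative; in practice it would come from
    # inspecting the server process with `ss -tnp`, `lsof -i`, or similar.
    import ipaddress

    def is_external(ip: str) -> bool:
        """True if the address is neither loopback nor a private-range address."""
        addr = ipaddress.ip_address(ip)
        return not (addr.is_loopback or addr.is_private)

    # Hypothetical remote peers held open by the model server process:
    remote_peers = ["127.0.0.1", "192.168.1.20", "104.21.5.9"]

    # Any peer flagged here is a candidate "phone home" connection.
    leaks = [ip for ip in remote_peers if is_external(ip)]
    print(leaks)
    ```

    A purely local setup should print an empty list; a stricter variant of the same idea is to run the server in a network namespace or firewall zone that only permits loopback traffic, so unexpected connections fail outright instead of merely being logged.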

      • DWin · 12 hours ago

        But it’s accurate? That doesn’t mean human-looking text can’t be helpful to some, but it also helps keep us grounded in the reality of the tech.

        • surph_ninja@lemmy.world · 6 hours ago

          It’s not a room of monkeys typing on keyboards. Calculators don’t just spit out random numbers. They use the input to estimate the best solution they can.

          I know being anti-AI is popular right now, but y’all are being dishonest about its capabilities to back up your point.

          • DWin · 4 hours ago

            They’re Large Language Models. They’re defined as generative pre-trained text transformers; that’s their entire purpose.

            Saying a calculator spits out random numbers would be wrong, but saying a calculator spits out numbers would be correct. “Reductionist” would probably be a better word than “regressive” or “asinine”.