Programmer, Writer, and Thought Criminal

  • 13 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 7, 2023

  • Utsob Roy@lemmy.world to Technology@lemmy.world · *Permanently Deleted*
    11 months ago

    In the case of current LLMs, we can tell. These LLMs are not black boxes to us. It is hard to follow the threads of their decisions not because those decisions are intricate thoughts, but because they are a hodgepodge of statistics and randomness.
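    To make the "statistics and randomness" point concrete, here is a minimal sketch of how a language model picks its next token: scores over a vocabulary become a probability distribution, and the output is a weighted random draw. The vocabulary and logits below are made up purely for illustration, not taken from any real model.

    ```python
    import math
    import random

    # Toy vocabulary and scores, made up purely for illustration.
    vocab = ["the", "cat", "sat", "mat"]
    logits = [2.0, 1.0, 0.5, -1.0]

    # Softmax turns raw scores into a probability distribution (the "statistics").
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    # The next token is a weighted random draw from that distribution (the "randomness").
    next_token = random.choices(vocab, weights=probs, k=1)[0]
    print(next_token)
    ```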

    We probably can't compare the outputs, but we can compare the learning. Imagine a human who had consumed all the literature, ethics, history, and every other kind of text these LLMs have; no amount of trick questions would fool them into believing in racial cleansing or any similarly disconcerting idea. LLMs read so much, and learn so little.