Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.

  • Free_Opinions · 1 day ago

    It’s physically impossible for an LLM to hold prejudice.

    • Possibly linux@lemmy.zip · 1 day ago

      You are so entirely mistaken. AI is just as biased as the data it is trained on. That applies to machine learning as well as LLMs.
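
      The point above can be illustrated with a minimal sketch (purely illustrative, not from the article): a toy model that only counts co-occurrences in a hypothetical skewed corpus will reproduce that skew in its "predictions", with no prejudice anywhere in the code itself.

      ```python
      from collections import Counter

      # Hypothetical training corpus with an occupational-gender imbalance
      # (invented for illustration; any real corpus skew works the same way).
      sentences = [
          "the nurse said she was tired",
          "the nurse said she would help",
          "the engineer said he was busy",
          "the engineer said he would check",
      ]

      def pronoun_counts(occupation):
          """Count pronouns appearing in sentences that mention the occupation."""
          counts = Counter()
          for s in sentences:
              if occupation in s:
                  counts.update(w for w in s.split() if w in ("he", "she"))
          return counts

      # The model has no opinions; it simply mirrors the data it was given.
      print(pronoun_counts("nurse"))     # Counter({'she': 2})
      print(pronoun_counts("engineer"))  # Counter({'he': 2})
      ```

      Statistical LLM training does the same thing at a vastly larger scale: associations present in the corpus become associations in the model's outputs.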