ChatGPT generates cancer treatment plans that are full of errors

Study finds that ChatGPT provided false information when asked to design cancer treatment plans: researchers at Brigham and Women’s Hospital found that treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • iforgotmyinstance@lemmy.world · 10 months ago

    I know university professors who struggle with this concept. They are convinced that any use of an LLM is plagiarism.

    It can lead to plagiarism if you use it poorly, which is why you control the information you feed it and then proofread and edit the output.

    • zeppo@lemmy.world · 10 months ago

      Another related confusion in academia recently is the ‘AI detector’. These detectors could easily be defeated with minor rewrites, if they were even accurate in the first place. My favorite misconception was the story of a professor who told students, “I asked ChatGPT if it wrote this, and it said yes”, which is just not how it works.
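
      To make that last point concrete: a chat model has no memory of what it generated for other users. Each API request contains only the messages you send it, so its answer to “did you write this?” is just more generated text, not a lookup of authorship. A minimal sketch, assuming the official openai Python client (the model name and essay text are illustrative):

      ```python
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      essay = "Text of the student's essay goes here."  # illustrative placeholder

      # This request is self-contained: the model sees only the messages below.
      # It has no database of its past outputs to consult, so whatever it
      # answers is plausible-sounding text, not evidence of authorship.
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model name
          messages=[{"role": "user", "content": f"Did you write this?\n\n{essay}"}],
      )
      print(response.choices[0].message.content)  # an opinion, not a verdict
      ```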

    • ZodiacSF1969@sh.itjust.works · 10 months ago

      I can understand the plagiarism argument, though you have to stretch the definition a bit. If I am expected to write an essay but use ChatGPT instead, then I am fraudulently presenting the work as my own. Plagiarism might not be the right word, or maybe it’s a case where the language will evolve so that plagiarism includes passing off AI-generated work as your own. Either way, it’s cheating unless I was specifically allowed to use AI.

      • iforgotmyinstance@lemmy.world · 10 months ago

        If the argument and the sources are incongruous, that isn’t the fault of the LLM/AI. That’s the author’s fault for not proofreading and editing.

        You assume LLMs have an inherent morality, but they are amoral constructs. They are tools, and you limit yourself by not learning to use them.

        • ZodiacSF1969@sh.itjust.works · 10 months ago

          I didn’t say anything about the sources being incongruous? That’s a completely separate issue. We were talking about plagiarism.

          I don’t understand the morality comment either; I didn’t ascribe any morality to AI. I was talking about whether using it fits the definition of plagiarism or not.

          If you are expected to write something yourself and you use an LLM to generate it, then that’s cheating in my opinion. Yes, of course we should learn to use AI, but if you are told to do something and you get a person or an LLM to do it for you, then you didn’t complete the task as you were told. And at university that can have consequences.