Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial involved assessing several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations and references to more regulation, and to include page references and context.

Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and for every submission, scoring 81% on an internal rubric compared with the machine’s 47%.
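The headline percentages are simply shares of available rubric points. A minimal sketch of how such blind-review scores aggregate (the criteria names, point scale, and all scores below are invented for illustration, not ASIC’s actual rubric or data):

```python
# Hypothetical blind-review scoring: each reviewer rates every summary
# against five criteria; the final figure is the share of available
# rubric points earned. All names and numbers are illustrative.

CRITERIA = ["coherency", "length", "asic_refs", "regulation_refs", "recommendations"]
MAX_POINTS = 5  # assumed points per criterion

def rubric_percentage(reviews):
    """reviews: list of dicts mapping each criterion to points awarded."""
    earned = sum(r[c] for r in reviews for c in CRITERIA)
    available = len(reviews) * len(CRITERIA) * MAX_POINTS
    return round(100 * earned / available)

human = [{"coherency": 5, "length": 4, "asic_refs": 4, "regulation_refs": 4, "recommendations": 3}]
ai    = [{"coherency": 3, "length": 2, "asic_refs": 2, "regulation_refs": 3, "recommendations": 2}]
print(rubric_percentage(human), rubric_percentage(ai))  # 80 48
```

With real data there would be many reviewers and five submissions per condition; the aggregation is the same ratio of earned to available points.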

      • Pennomi@lemmy.world · 2 months ago

        It might be all I care about. Humans might always be better, but AI only has to be good enough at something to be valuable.

        For example, summarizing an article might be incredibly low stakes (I’m feeling a bit curious today), or incredibly high stakes (I’m preparing a legal defense), depending on the context. An AI is sufficient for one use but not the other.

        • greenskye@lemm.ee · 2 months ago

          And you can absolutely trust that tons of executives will definitely not understand this distinction and will use AI even in areas where it’s actively harmful.

          • Mrkawfee@lemmy.world · 2 months ago

            They’ll use it until it blows up in their faces and then they will all backtrack. Executives are like startled cattle.

            • scarabic@lemmy.world · 2 months ago

              Let’s not act like executives are the only morons in this world. Plenty of rank and file are leaning on AI as well.

        • Grandwolf319@sh.itjust.works · 2 months ago (edited)

          I mean, what you’re essentially implying is: what if we could do a lot of the things we do today, but faster and at lower quality?

          Imo we have too many things today and very few are worth their salt, so this is the opposite of the right direction.

          • Pennomi@lemmy.world · 2 months ago

            That’s not what I’m implying. What I’m saying is that wasting time and effort on quality is pointless when the threshold for success is low.

            For example, I could use aerospace quality parts (perfectly machined to micron-level tolerances) to build a toaster. However, while this would not increase the performance meaningfully, the cost would be orders of magnitude greater. Instead I can use shitty off-the-shelf parts because it doesn’t really make a difference.

            Maybe in other words, engineering tolerances apply to LLMs too. They’re crude devices, but it’s totally fine if you have a crude problem.

            • Grandwolf319@sh.itjust.works · 2 months ago

              > That’s not what I’m implying. What I’m saying is that wasting time and effort on quality is pointless when the threshold for success is low.

              Yes, and my response to that is: for some people, maybe. Others don’t want a low threshold; they want a few good articles instead of a flood of low-quality ones.

              > Maybe in other words, engineering tolerances apply to LLMs too. They’re crude devices, but it’s totally fine if you have a crude problem.

              Exactly, and I’m saying there is no objectively crude problem. You might be okay with simple summaries, but I want every single piece of information I consume to clear a very high bar.

              • AA5B@lemmy.world · 2 months ago

                What if you’re reading Lemmy and you don’t really feel like reading the article? Is the headline likely to tell you all you need to know, or is the AI summary likely to surface more information, without the clickbait?

                • Grandwolf319@sh.itjust.works · 2 months ago (edited)

                  Imo it’s on me to either read the article or be okay with not being informed. Don’t get me wrong, a summary is good, but not when it’s unreliable and the article is a click away. Some might have a different comfort level.

              • Pennomi@lemmy.world · 2 months ago

                Sure, go for it. But good luck paying an army of copywriters to summarize every article you read.

        • scarabic@lemmy.world · 2 months ago

          Sometimes I am preparing a high stakes communication for work and struggling for brevity. I will ask AI for help reducing my word count and I find it is helpful as an impartial editor. I take its 25% reduction, sigh, accept most of what it sacrificed, fix a word or two, and am done. It’s helpful.

    • fine_sandy_bottom@discuss.tchncs.de · 2 months ago

      This is a really valid point, especially because it’s not only faster but dramatically cheaper.

      The thing is, summaries which are pretty terrible might be costly. If decision makers are relying on these summaries and they’re inaccurate, then the consequences might be immeasurable.

      Suppose you’re considering 2 cars, one is very cheap but on one random day per month it just won’t start, the other is 5x the price but will work every day. If you really need the car to get to work, then the one that randomly doesn’t start might be worse than no car at all.
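The car analogy can be put in expected-cost terms (every number below is invented for illustration): a cheap tool that fails randomly can cost more per month than an expensive reliable one, once each failure carries a price.

```python
# Toy expected-cost model of the unreliable-car analogy (all figures
# hypothetical): monthly price plus the expected cost of failures.

MISSED_DAY_COST = 500.0  # assumed cost of one day of not getting to work, USD

def expected_monthly_cost(monthly_price, failures_per_month):
    """Total expected monthly cost: subscription/ownership price plus failure losses."""
    return monthly_price + failures_per_month * MISSED_DAY_COST

cheap  = expected_monthly_cost(monthly_price=100.0, failures_per_month=1)  # unreliable car
pricey = expected_monthly_cost(monthly_price=500.0, failures_per_month=0)  # reliable car
print(cheap, pricey)  # 600.0 500.0
```

With these assumptions the “cheap” option is dearer in expectation; the crossover point depends entirely on how costly a single failure is, which is the point being made about inaccurate summaries feeding decision-makers.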

      • PumpkinSkink@lemmy.world · 2 months ago

        Are we sure it’s cheaper, though? It legitimately might not be. I have some friends who work in tech, and they use an AI model for, amongst other things, summarizing information in their internal documentation. They’ve told me what their company is paying for the license to use this thing, and it’s eyewatering. Also, last time I checked, the company they got that license from does not turn a profit, so even that price is probably below cost at the moment.

        It might really be the case that it isn’t cheaper than just paying someone a normal salary to do that work, and it probably isn’t cheaper than pushing the work the AI now does back onto preexisting employees (which is what they did until about two years ago anyway).

        The other thing that makes me suspect it isn’t actually cheaper: everyone on the team likes the tool except their manager, who has floated cutting it twice now (that I know of).

        • sevan@lemmy.ca · 2 months ago

          I’ve been curious about this too, but haven’t been able to find anything that puts a real price (including future profit margin) on GenAI. For example, having a chat conversation with a customer service agent in India might cost about $2-3. Is a GenAI bot truly cheaper than that once you factor in the energy & water costs, hardware, training, profits, etc.? It might be, but I’m skeptical.
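A back-of-envelope version of that comparison (every figure below is an invented assumption, not measured data) shows how sensitive the answer is to the hidden costs:

```python
# Toy cost comparison, human agent vs. GenAI bot, per customer-service chat.
# All numbers are hypothetical assumptions for illustration.

HUMAN_COST_PER_CHAT = 2.50      # midpoint of the $2-3 figure above

TOKENS_PER_CHAT     = 4000      # assumed prompt + completion tokens per conversation
PRICE_PER_1K_TOKENS = 0.01      # assumed vendor list price, USD
SUBSIDY_FACTOR      = 3.0       # assumed multiplier if pricing actually covered
                                # energy, water, hardware, training, and profit

list_price = TOKENS_PER_CHAT / 1000 * PRICE_PER_1K_TOKENS
true_cost  = list_price * SUBSIDY_FACTOR

print(f"list price per chat:  ${list_price:.2f}")   # $0.04
print(f"'true' cost per chat: ${true_cost:.2f}")    # $0.12
print("cheaper than human?", true_cost < HUMAN_COST_PER_CHAT)
```

Under these made-up numbers the bot wins easily, but the conclusion flips if the real token volume, subsidy, or infrastructure cost is an order of magnitude higher, which is exactly the unknown being questioned here.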