I’ve found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?

I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight I’ve seen is the guy who built a tool to spam job applications to combat worthless AI job recruiters and HR screening tools.

    • Catoblepas@lemmy.blahaj.zone · 5 days ago

      Might want to rethink the summarization part.

      AI also hasn’t made any huge improvements in machine translation AFAIK. Translators still get hired because AI can’t do the job as well.

      • xep@fedia.io · 5 days ago

        Thank you for pointing that out. I don’t use it for anything critical, and it’s been very useful: Kagi’s summarizer works on things like YouTube videos friends link that I don’t care enough to watch. I speak the language pair I use DeepL on, but DeepL often writes more natively than I can. In my anecdotal experience, LLMs have greatly improved the quality of machine translation.

      • chicken@lemmy.dbzer0.com · edited 4 days ago

        The AI summaries were judged significantly weaker across all five metrics used by the evaluators, including coherency/consistency, length, and focus on ASIC references. Across the five documents, the AI summaries scored an average total of seven points (on ASIC’s five-category, 15-point scale), compared to 12.2 points for the human summaries.

        The focus on the (now-outdated) Llama2-70B also means that “the results do not necessarily reflect how other models may perform” the authors warn.

        to assess the capability of Generative AI (Gen AI) to summarise a sample of public submissions made to an external Parliamentary Joint Committee inquiry, looking into audit and consultancy firms

        In the final assessment ASIC assessors generally agreed that AI outputs could potentially create more work if used (in current state), due to the need to fact check outputs, or because the original source material actually presented information better. The assessments showed that one of the most significant issues with the model was its limited ability to pick-up the nuance or context required to analyse submissions.

        The duration of the PoC was relatively short and allowed limited time for optimisation of the LLM.

        So basically this study concludes that Llama2-70B with basic prompting is not as good as humans at summarizing documents submitted to the Australian government by businesses, and that its summaries are not good enough to be useful for that purpose. But there are some pretty significant caveats here: most notably the relative weakness of the model they used (I like Llama2-70B because I can run it locally on my computer, but it’s definitely a lot dumber than ChatGPT), and the fact that summarizing government/business documents is likely a harder and less forgiving task than some other things you might want a generated summary of.

      • theunknownmuncher@lemmy.world · edited 5 days ago

        Downvoters need to read some peer-reviewed studies and not lap up whatever BS comes from OpenAI, who are selling you a bogus product lmao. I too was excited for the summarization use case of AI when LLMs were the new shiny toy, until people actually started testing it and got a big reality check.

      • xep@fedia.io · 5 days ago

        The services I use, Kagi’s autosummarizer and DeepL, haven’t done that when I’ve checked. The downside of the summarizer is that it sometimes removes subtle things I’d have liked it to keep. I imagine that would happen with a human summarizer too, though. DeepL has been very accurate.

        • theunknownmuncher@lemmy.world · 5 days ago

          LLMs are especially bad at summarization for the use case of presenting search results. For search, the source of a piece of information is just as critical as the information itself, and LLMs obfuscate that critical source information and blend results from multiple sources together…