Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”

He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.

Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.

    • Redredme@lemmy.world · 1 year ago

      That’s such a strange question. It’s almost like you’re implying that Google results don’t need fact-checking.

      They do. Everything found online does.

      • Otter@lemmy.ca · 1 year ago

        With Google, it depends on what webpage you end up on. Some pages require more checking; others are more trustworthy.

        Generative AI can hallucinate about anything

        • Dojan@lemmy.world · 1 year ago

          There are no countries in Africa starting with K.

          LLMs aren’t trained to give correct answers, they’re trained to generate human-like text. That’s a significant difference.
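          To make that concrete, here’s a minimal sketch of the generation loop, assuming the Hugging Face transformers library and the small GPT-2 checkpoint (both illustrative). The loop only ever scores which token is most plausible next; factual correctness appears nowhere in the objective.

          ```python
          # Minimal next-token generation loop: sample by plausibility, not truth.
          # Assumes `torch` and `transformers` are installed; GPT-2 is illustrative.
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          ids = tokenizer("Countries in Africa:", return_tensors="pt").input_ids
          for _ in range(20):
              logits = model(ids).logits[0, -1]      # scores for every candidate token
              probs = torch.softmax(logits, dim=-1)  # likelihood under the training data
              next_id = torch.multinomial(probs, 1)  # sample the plausible next token
              ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

          print(tokenizer.decode(ids[0]))  # fluent text; correctness is incidental
          ```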

          • Takumidesh@lemmy.world · 1 year ago

            They also aren’t valuable for asking direct questions like this.

            Their value comes in call-and-response discussions: being able to pair program and work through a problem, for example. It isn’t about the model spitting out a working solution, but about it assessing a piece of information in a different way than you can, which produces a new analysis of the information.

            It’s extraordinarily good at finding things you miss in text.
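            As an illustration of that call-and-response workflow, here’s a minimal sketch using OpenAI’s Python client; the model name, system prompt, and example questions are placeholders, not a recommendation.

            ```python
            # Minimal call-and-response loop: each turn carries the prior context,
            # so the model can build on the running discussion. Assumes the `openai`
            # client; model name and prompts are illustrative placeholders.
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment
            history = [{"role": "system",
                        "content": "You are a pair programmer. Flag anything I missed."}]

            def ask(prompt: str) -> str:
                """Send one turn, keeping the whole conversation as context."""
                history.append({"role": "user", "content": prompt})
                reply = client.chat.completions.create(model="gpt-4", messages=history)
                answer = reply.choices[0].message.content
                history.append({"role": "assistant", "content": answer})
                return answer

            print(ask("Review this for edge cases:\n\ndef mean(xs): return sum(xs) / len(xs)"))
            print(ask("Good catch. How would you guard the empty-list case?"))
            ```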

            • Dojan@lemmy.world · 1 year ago

              Yeah. There are definitely tasks suited to LLMs. I’ve used them to condense text, write emails, and even for project planning, because they do give decently good ideas if you prompt them right.

              Not sure I’d use them for finding information though, even with the ability to search for it. I’d much rather just search for it myself so I can select the sources, then have the LLM process it.
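              A minimal sketch of that “select the sources yourself, then have the LLM process them” flow, assuming requests, beautifulsoup4, and OpenAI’s Python client; the URL is a placeholder for pages you picked by hand.

              ```python
              # Fetch hand-picked pages, then let the model condense them.
              # Assumes `requests`, `beautifulsoup4`, and the `openai` client.
              import requests
              from bs4 import BeautifulSoup
              from openai import OpenAI

              client = OpenAI()
              urls = ["https://example.org/article-you-trust"]  # chosen by a human

              corpus = ""
              for url in urls:
                  html = requests.get(url, timeout=10).text
                  corpus += BeautifulSoup(html, "html.parser").get_text(" ", strip=True) + "\n"

              reply = client.chat.completions.create(
                  model="gpt-4",
                  messages=[{"role": "user",
                             "content": "Summarise the key claims in this text:\n\n"
                                        + corpus[:8000]}])  # crude length cap
              print(reply.choices[0].message.content)
              ```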

            • Phanatik@kbin.social · 1 year ago

              If you knew nothing about Africa, congratulations: as far as you’re concerned, Kenya doesn’t exist. It’s a microcosm of what would be a major problem if you don’t fact-check the bot.

              • AngrilyEatingMuffins@kbin.social · 1 year ago

                That’s not asking for fact-checks. That’s telling the machine it’s wrong. It assumes you’re acting in good faith, so it reacts accordingly.

                This, like most of these criticisms, rests on a fundamental misunderstanding of the technology.

                • Phanatik@kbin.social · 1 year ago

                  You didn’t understand what I was saying. You have to independently fact-check the bot, which means performing Google searches anyway. At that point it’s redundant to even ask the bot: if the endpoint is the same, you’ll end up on whatever search engine you use, trawling through results.

                • HarkMahlberg@kbin.social · 1 year ago

                  “The only reason people criticize something is because they don’t understand it.”

                  This is the same person who defended NFTs on the basis that “people just don’t get it, maaaan, it’s gonna revolutionize, like, the wooorld,” without thinking about the social or ethical consequences of embracing a technology that has no social or ethical safeguards.

                  • AngrilyEatingMuffins@kbin.social · 1 year ago

                    I’d say you’re closer to that person, since you’re the one spouting bullshit about a technology you fundamentally don’t understand.

                    This article isn’t even about OpenAI’s ChatGPT; it’s about an AI that filters the live internet. Try perplexity.ai a few times and let me know if you still think the tech is useless. It’s a baby even in comparison to the baby AIs we have now, and I haven’t touched Google since I started using it.

      • madnificent@lemmy.world · 1 year ago

        Agree.

        I found it more tempting to accept the initial answers I got from GPT-4 (and derivatives) because they are so well written. I know there are more like me.

        With the advent of working LLMs, reference manuals should gain importance too. I check them more often than before because LLMs have forced me to. Could be very positive.