• FunkyStuff [he/him]@hexbear.net
    19 hours ago

    This is simply revolutionary. I think once OpenAI adopts this in their own codebase and all queries to ChatGPT cause millions of recursive queries to ChatGPT, we will finally reach the singularity.

    • hexaflexagonbear [he/him]@hexbear.net
      19 hours ago

      There was a paper about improving LLM arithmetic a while back (spoiler: its accuracy outside the training set is… less than 100%) and I was giggling at the thought of AI getting worse for the unexpected reason that it uses an LLM for matrix multiplication.

      • FunkyStuff [he/him]@hexbear.net
        19 hours ago

        Yeah lol, this is a weakness of LLMs that’s been very apparent since their inception. I have to wonder how different they’d be if they had the capacity to stop using the LLM as the output for a second, switch to a deterministic algorithm to handle anything logical or arithmetical, then feed that back to the LLM.
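
        The "switch to a deterministic algorithm" idea can be sketched in a few lines: instead of letting the model generate digits token by token, an expression it flags gets evaluated exactly. This is only an illustrative sketch (the function name `eval_arithmetic` is made up here, not any product's API):

```python
import ast
import operator

# Map AST operator nodes to exact arithmetic functions.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def eval_arithmetic(expr: str) -> float:
    """Deterministically evaluate a basic arithmetic expression by
    walking its AST, rather than trusting token-by-token LLM output."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expr, mode="eval").body)
```

        The result would then be spliced back into the model's context so it can keep generating around an answer that is actually correct.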

        • nightshade [they/them]@hexbear.net
          17 hours ago

          I’m pretty sure some of the newer ChatGPT-like products (the consumer-facing interface, not the raw LLM) do in fact do this. They try to detect certain types of inputs (e.g. math problems or requests for the current weather) and convert them into an API request to some other service, returning that result instead of an LLM output. Frankly, it comes across to me as an attempt to make the “AI” seem smarter than it really is by covering up its weaknesses.
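
          The detection-and-routing described above amounts to a dispatcher in front of the model. A minimal sketch, with made-up names (no vendor actually exposes a `route` function like this):

```python
import re

# Crude detector: the query consists only of digits, whitespace,
# and arithmetic symbols, so treat it as a math problem.
MATH_RE = re.compile(r"^[\d\s+\-*/().]+$")

def route(query, llm, calculator):
    """Send recognized math queries to a deterministic tool,
    and everything else to the LLM."""
    if MATH_RE.match(query.strip()):
        return calculator(query)   # deterministic service
    return llm(query)              # fall back to the model
```

          Real products presumably use far more elaborate intent classification than a regex, but the shape is the same: recognize, delegate, and present the tool's answer as if the model produced it.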

          • FunkyStuff [he/him]@hexbear.net
            17 hours ago

            Yeah, Siri has been capable of doing that for a long time, but my actual hope would be that, rather than just handing the user the API response, the LLM could keep operating on that response and do more with it, composing several API calls. But that’s probably prohibitively expensive to train, since you’d have to do it billions of times to get the plagiarism machine to learn how to delegate work to an API properly.
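
            The "keep operating on the response" loop imagined here can be sketched as a simple agent loop: the model either requests a tool call or gives a final answer, and each tool result is appended to the transcript and fed back. All names below are hypothetical, not any real product's API:

```python
def agent_loop(llm_step, tools, question, max_steps=5):
    """llm_step(transcript) returns either ("call", tool_name, arg)
    or ("answer", text); tool results are appended as observations
    and fed back so the model can compose several calls."""
    transcript = [("question", question)]
    for _ in range(max_steps):
        action = llm_step(transcript)
        if action[0] == "answer":
            return action[1]
        _, name, arg = action
        # Run the requested tool and feed the result back in.
        transcript.append(("observation", tools[name](arg)))
    return None  # gave up after max_steps
```

            The cap on `max_steps` matters: without it, the kind of runaway recursive querying joked about at the top of the thread is exactly what you'd get.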