• fiasco@possumpat.io · 1 year ago

    The thing that amazes me most about AI Discourse is that we all learned in Theory of Computation that general AI is impossible. My best guess is that people with a CS degree who believe in AI slept through all their classes.

    • leonardo_arachoo@lemm.ee · 1 year ago

      “we all learned in Theory of Computation that general AI is impossible.”

      I strongly suspect it is you who has misunderstood your CS courses. Can you provide some concrete evidence for why general AI is impossible?

      • fiasco@possumpat.io · 1 year ago

        Evidence, not really, but that’s kind of meaningless here, since we’re talking about theory of computation. It’s a direct consequence of the undecidability of the halting problem. Mathematical analysis of loops can’t be done because loops, in general, don’t take on any particular value; if they did, the halting problem would be decidable. And since writing a computer program requires an exact specification, which can’t be provided for the general analysis of computer programs, general AI trips and falls at the very first hurdle: being able to write other computer programs. That should be a simple task compared to the other things people expect of it.
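
        To make that concrete, here’s a minimal Rust sketch (a toy example of my own, not anything standard) of a loop whose termination nobody can currently decide: it has halted for every input ever tested, but proving that it halts for all inputs is the open Collatz conjecture.

        ```rust
        /// Collatz iteration starting from `n`. Whether this loop terminates
        /// for every positive n is an open problem, so no analyzer can decide
        /// its behavior in general. (Overflow for huge n is ignored here;
        /// this is purely illustrative.)
        fn collatz_steps(mut n: u64) -> u64 {
            let mut steps = 0;
            while n != 1 {
                n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
                steps += 1;
            }
            steps
        }

        fn main() {
            // Halts quickly for this input; nobody can prove it halts for all.
            println!("27 reaches 1 in {} steps", collatz_steps(27));
        }
        ```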

        Yes, there’s more complexity here (what about compiler optimization, or Rust’s borrow checker?), which I don’t care to get into at the moment; suffice it to say, those only operate under certain special conditions. To posit general AI, you need to think bigger than basic-block instruction reordering.

        This stuff should all be obvious, but here we are.

        • leonardo_arachoo@lemm.ee · 1 year ago

          Given that humans can write computer programs, how can you argue that the undecidability of the halting problem stops intelligent agents from being able to write computer programs?

          I don’t understand what you mean about the borrow checker in Rust or block instruction reordering. These are certainly not attempts at AI or AGI.

          What exactly does AGI mean to you?

          “This stuff should all be obvious, but here we are.”

          This is not necessary. Please don’t reply if you can’t resist the temptation to call people who disagree with you stupid.

          • fiasco@possumpat.io · 1 year ago

            This is proof of one thing: that our brains are nothing like digital computers as laid out by Turing and Church.

            What I mean about compilers is that compiler optimizations are only valid if a particular bit of code rewriting does exactly the same thing, under all conditions, as what the human wrote. This is chiefly only possible if the code in question doesn’t include any branches (ifs, loops, function calls); a section of code with no branches is called a basic block.

            Rust is special because it harshly constrains the kinds of programs you can write: another consequence of the halting problem is that, in general, you can’t track pointer aliasing outside a basic block, but Rust’s constraints do make this possible. It just foists the intellectual load onto the programmer. This is also why Rust is far and away my favorite language; I respect the boldness of the play, and the benefits far outweigh the drawbacks.
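
            For example, here’s a minimal sketch of the aliasing point (toy code of my own, nothing specific to the compiler’s internals): two simultaneously live mutable references would be aliases, so the borrow checker simply refuses to let them exist.

            ```rust
            fn main() {
                let mut v = vec![1, 2, 3];

                // Two simultaneous mutable borrows would alias `v`, so this is
                // rejected at compile time:
                //   let a = &mut v;
                //   let b = &mut v; // error[E0499]: cannot borrow `v` as
                //                   // mutable more than once at a time
                //   a.push(4);

                // Only one mutable borrow may be live at a time; that restriction
                // is what makes aliasing tractable to track.
                let a = &mut v;
                a.push(4);
                println!("{:?}", v); // [1, 2, 3, 4]
            }
            ```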

            To me, general AI means a computer program with at least the same capabilities as a human. You can go further down this rabbit hole and read about the question that spawned the halting problem, called the Entscheidungsproblem (decision problem), to see that AI is actually more impossible than I let on.

            • leonardo_arachoo@lemm.ee · 1 year ago

              Here are two groups of claims that I disagree with, but that I think you must agree with:

              1 - Brains do things that a computer program can never do. It is impossible for a computer to ever simulate the computation* done by a brain. Humans solve the halting problem by doing something a computer could never do.

              2 - It is necessary to solve the halting problem to write computer programs. Humans can only write computer programs because they solve the halting problem first.

              *perhaps you will prefer a different word here

              I would say that:

              • It doesn’t require solving any halting problems to write computer programs.
              • There is no general solution to the halting problem that works on human brains but not on computers.
              • Computers can in principle simulate brains with enough accuracy to reproduce any computation happening on a brain. However, there would be far cheaper ways to do any computation.

              Which of my statements do you disagree with?

              • fiasco@possumpat.io · 1 year ago

                I suppose I disagree with the formulation of the argument. The Entscheidungsproblem and the halting problem are limitations on formal analysis. It isn’t relevant to talk about either of them in terms of “solving” them; that’s why we use the term undecidable. The halting problem asks, in modern terms:

                Given a computer program and a set of inputs to it, can you write a second computer program that decides whether the input program halts (i.e., finishes running)?

                The answer to that question is no. In limited terms, this tells you something fundamental about the capabilities of Turing machines and lambda calculus; in general terms, it tells you something deeply important about formal analysis. This all started with the question:

                Can you create a formal process for deciding whether a proposition, given an axiomatic system in first-order logic, is always true?

                The answer to this question is also no. Digital computers were devised as a means of specifying a formal process for solving logic problems, so the undecidability of the Entscheidungsproblem was proven by way of the undecidability of the halting problem. This is why there are still open logic problems despite the invention of digital computers, and no matter how many flops a modern supercomputer can pull off.
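
                For reference, the halting-problem argument itself is short enough to sketch in code. Assume, hypothetically, that someone hands you a halts oracle (no such total function can exist, which is the point); you can then build a program that contradicts the oracle on its own source:

                ```rust
                // Hypothetical oracle: returns true iff `program` halts on `input`.
                // The sketch below shows why no such total function can exist.
                fn halts(_program: &str, _input: &str) -> bool {
                    unimplemented!("cannot exist in general")
                }

                // Feed a program its own source and do the opposite of the
                // oracle's prediction: loop forever if it says "halts",
                // halt immediately if it says "loops".
                fn paradox(source: &str) {
                    if halts(source, source) {
                        loop {} // run forever
                    }
                    // otherwise, halt immediately
                }

                fn main() {
                    // Purely illustrative: running `paradox` on its own source
                    // contradicts whatever `halts` answers about it.
                    let _ = paradox;
                }
                ```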

                We don’t use formal process for most of the things we do. And when we do try to use formal process for ourselves, it turns into a nightmare called civil and criminal law. The inadequacies of those formal processes are why we have a massive judicial system, and why the whole thing has devolved into a circus. Importantly, the inherent informality of law in practice is why we have so many lawyers, and why they can get away with charging so much.

                As for whether writing a computer program that can effectively write computer programs requires writing a computer program that can effectively analyze computer programs, consider: even the loosey-goosey horseshit called “deep learning” is based on error functions. If you can’t compute how far away you are from your target, then you’ve got nothing.
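
                A toy sketch of what I mean by that (a one-parameter least-squares fit, my own example, not any particular library): the entire method is driven by computing how far the output is from the target.

                ```rust
                // Fit a single weight w so that w * x ≈ y by descending the
                // squared-error gradient. The method presupposes that the
                // distance to the target is computable.
                fn main() {
                    let (x, y) = (2.0_f64, 10.0_f64); // want w * 2 ≈ 10, i.e. w ≈ 5
                    let mut w = 0.0_f64;
                    let lr = 0.05; // learning rate
                    for _ in 0..100 {
                        let err = w * x - y;      // the error function: distance from target
                        let grad = 2.0 * err * x; // d/dw of (w*x - y)^2
                        w -= lr * grad;
                    }
                    println!("learned w = {w:.4}"); // converges to ≈ 5.0
                }
                ```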

                • leonardo_arachoo@lemm.ee · 1 year ago

                  Well, I’ll make the halting problem for this conversation decidable by concluding :). It was interesting to talk, but I was not convinced.

                  I think some amazing things are coming out of deep learning and some day our abilities will generally be surpassed. Hopefully you are right, because I think we will all die shortly afterwards.

                  Feel free to have the final word.

        • irmoz@reddthat.com · 1 year ago

          From what I’ve heard, the biggest hurdle for AI right now is the fact that computers only work with binary. They are incapable of actually “reading” the things they write; all they’re actually aware of is the binary digits they manipulate, which represent the words they’re reading and writing. An AI could analyze War and Peace over and over, and even if you asked it who wrote it, it wouldn’t actually know.

    • qfe0@lemmy.dbzer0.com · 1 year ago

      The existence of natural intelligence is proof that artificial intelligence is possible.

    • argv_minus_one@beehaw.org · 1 year ago

      We can simulate all manner of physics using a computer, but we can’t simulate a brain using a computer? I’m having a real hard time believing that. Brains aren’t magic.

      • fiasco@possumpat.io · 1 year ago

        Computer numerical simulation is a different kind of shell game from AI. The only reason it’s done is that most differential equations aren’t solvable in the ordinary sense, so they’re discretized and approximated instead. Zeno’s paradox for the modern world. Since the discretization doesn’t quite work out, the results are then hacked to look right. This is also why they always want more flops: they believe that if you just discretize finely enough, you’ll eventually reach infinity (or the infinitesimal).
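
        As a concrete illustration (a minimal forward-Euler sketch, my own toy example): dy/dt = -y has the exact solution e^(-t), but the discretized version only ever approximates it, no matter how small the step.

        ```rust
        // Approximate dy/dt = -y with y(0) = 1 by chopping time into
        // finite steps (forward Euler). Shrinking `dt` reduces the error
        // but never eliminates it.
        fn main() {
            let mut y = 1.0_f64;
            let dt = 0.001;
            let steps = (1.0 / dt) as usize; // integrate out to t = 1
            for _ in 0..steps {
                y += dt * (-y); // one Euler step
            }
            println!("Euler: {y:.6}  exact: {:.6}", (-1.0_f64).exp());
        }
        ```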

        This also should not fill you with hope for general AI.

    • IllNess@infosec.pub · 1 year ago

      It’s all buzzword exaggeration. It’s marketing.

      Remember when hoverboards were for things that actually hover instead of some motorized bullshit on two wheels? Yeah, same bullshit.