- cross-posted to:
- morewrite@awful.systems
guy recently linked this essay, it's old, but i don't think it's significantly wrong (despite gpt evangelists). also read weizenbaum, libs, for the other side of the coin
here are some more relevant articles for consideration from a similar perspective, just so we know it's not literally just one guy from the 80s saying this. some cite this article as well but include other sources.
https://medium.com/@nateshganesh/no-the-brain-is-not-a-computer-1c566d99318c
https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness
https://www.infoq.com/articles/brain-not-computer/
https://intellerts.com/sorry-your-brain-is-not-like-a-computer/
You can build a computer out of anything that can flip a logic gate, up to and including red crabs. It doesn’t matter if you’re using electricity or chemistry or crabs. That’s why it’s a metaphor. This really all reads as someone arguing with a straw man who literally believes that neurons are logic gates or something. “Actually brains have chemistry” sounds like it’s supposed to be a gotcha when people are out there working on building chemical computers, chemical data storage, chemical automata right now. There’s no dichotomy there, nor does it argue against using computer terminology to discuss brain function. It just suggests a lack of creativity, flexibility, and awareness of the current state of the art in chemistry.
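The substrate-independence point can be made concrete in a few lines. This is a toy illustration, nothing more: if a medium can implement a single universal gate such as NAND (whether in silicon, chemistry, or crabs), every other Boolean function can be composed from it.

```python
# Substrate-independence in miniature: NAND is a universal gate, so any
# medium that can implement it (silicon, chemistry, crabs...) can
# implement every other Boolean function by composition.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# All the usual gates, built from NAND alone.
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor_(a: bool, b: bool) -> bool:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Exhaustively check the composed gates against Python's own operators.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
```

Nothing here cares what physically flips the gate; that is the whole point of calling it a metaphor.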
It’s also apparently arguing with people who think chat-gpt and neural nets and llms are intelligent and sentient? In which case you should loudly specify that in the first line so people know you’re arguing with ignorant fools and they can skip your article.
And what the hell is this? Jumping up and down and screaming “i have a soul! Consciousness is privileged and special! I’m not a meat automata i’m a real boy!” Is not mature or productive. This isn’t an argument, it’s a tantrum.
The deeper we get into this, it sounds like dumb guys arguing with dumb guys about reductive models of the mind that dumb guys think other dumb guys rigidly adhere to. Ranting about ai research without specifying whether you're talking about long-standing research trends or the religious fanatics in California proselytizing about their fictive machine gods isn't helpful.
you literally ignore the actual part of the text that addresses your problems.
you can use the word ‘tantrum’ while you ignore the literal words used and their meanings if you want but it only makes you seem illiterate and immature.
‘intuition worldviews thoughts beliefs our conscience’ are specific words with specific meanings. no computer (information processing machine) has ‘consciousness’, no computer has ‘intuition’, no computer has internal subjective experience - not even an idealized one with ‘infinite processing power’ like a turing machine. humans do. therefore humans are not computers. we cannot replicate ‘intuition’ with information processing, we cannot replicate ‘internal subjective experience’ with information processing. we cannot bridge the gap between subjective internal experience and objective external physical processes, not even hypothetically; there is not even a theoretical experiment you could design for it, there is not even theoretical language to describe it without metaphor.

We could learn and simulate literally every single specific feature of the brain and it would not tell us about internal subjective experiences, because it is simply not the kind of phenomenon that is understood by the field of information processing. If you have a specific take on the ‘hard problem of consciousness’ that's fine, but to say that ‘anyone who disagrees with me about this is just stupid’ is immature and ignorant, especially in light of your complete failure to understand things like Turing machines.
I usually like your posts and comments but this thread has revealed a complete ignorance of the philosophical and theoretical concepts under discussion here and an overzealous hysteria regarding anything that is remotely critical of a mechanistic physicalist reductionist worldview. you literally ignored or glazed over any relevant parts of the text i quoted, misunderstood the basic nature of what a turing machine is, misunderstood the nature of the discourse around the brain-as-computer discourse, all with the smuggest redditor energy humanly possible. I will not be further engaging after this post and will block you on my profile, have a nice life.
Almost all of this is people assuming other people are taking the metaphor too far.
No one who is worth talking to about this disagrees with this. Everyone is running on systems theory now, including the computer programmers trying to build artificial intelligence. All the plagiarism machines run on systems theory and emergence. The people they’re yelling at about reductive computer metaphors are doing the thing the author is saying they don’t do, and the plagiarism machines were only possible because people were using systems theory and emergent behaviors arising from software to build the worthless things!
This author just said that economics isn’t maths, that it’s spooky and mysterious and can’t be understood.
This is so frustrating. “You see, the brain isn’t like this extremely reductive model of computation, it’s actually” and then the author just lists every advance, invention, and field of inquiry in computation for the last several decades.
“The brain isn’t a computer, it’s actually a different kind of computer! The brain compensates for injury the same way the internet that was in some ways designed after the brain compensates for injury! If you provide the discrete nodes of a distributed network with the inputs they need to function efficiently the performance of the entire network improves!”
This is just boggling, what argument do they think they’re making? Software does all these things specifically because scientists are investigating the functions of the brain and applying what they find to the construction of new computer systems. Our increasing understanding of the brain feeds back to novel computational models which generate new tools, data, and insight for understanding the brain!
Not even that. They literally did not provide any argument that brains are not structured like Turing machines. Hell, the author seems to not be aware of backup tools in hardware and software, including RAID.
https://medium.com/the-spike/yes-the-brain-is-a-computer-11f630cad736
people are absolutely arguing that the human brain is a turing machine. please actually read the articles before commenting, you clearly didn’t read any of them in any detail or understand what they are talking about. a turing machine isn’t a specific type of computer, it is a model of how all computing in all digital computers works, regardless of the specific software or hardware.
https://en.wikipedia.org/wiki/Turing_machine
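For the record, the model itself fits in a few lines. This toy simulator (all names here are illustrative, not from either article) is enough to show that a Turing machine is an abstract description of computation, not a particular device:

```python
# Minimal Turing machine simulator. A machine is just a transition
# table: (state, symbol) -> (symbol_to_write, head_move, new_state).
# The point: this abstract model captures what any digital computer
# does, regardless of the hardware or software involved.

def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape on a dict
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = table[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip every bit, halt at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flipper, "1011"))  # prints 0100
```

The transition table is the entire "program"; swap the table and the same machine computes something else, which is why the model is used to talk about computation in general.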
This is a rather silly argument. People hear about certain logical fallacies and build cargo cults around them. They are basically arguing ‘but how can conscious beings process their perception of material stuff if their consciousness is tied to material things???’, or ‘how can we learn about our bodies if we need our bodies to learn about them in the first place? Notice the circularity!!!’.
The last sentence there is a blatant non sequitur. They provide literally no reasoning for why a thing wouldn’t be able to learn stuff about itself using algorithms.
This whole discussion is becoming more and more frustrating bc it’s clear that most of the people arguing against the brain as computer don’t grasp what metaphor is, have a rigid understanding of what computers are and cannot flex that understanding to use it as a helpful basis of comparison, and apparently have just never heard of or encountered systems theory?
Like a lot of these articles are going “nyah nyah nyah the mind can’t be software running on brain hardware that’s dualism you’re actually doing magic just like us!” And it’s like my god how are you writing about science and you’ve never encountered the idea of complex systems arising from the execution of simple rules? Like put your pen down and go play Conway’s Game of Life for a minute and shut up about algorithms and logic gates bc you clearly can’t even see the gaping holes in your own understanding of what is being discussed.
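The Game of Life point is easy to make concrete: the entire rule set fits in one short function. A minimal sketch, representing the board as a set of live-cell coordinates:

```python
# Conway's Game of Life: two simple rules, endlessly complex behavior.
# Live cells are (x, y) tuples in a set on an unbounded grid.
from itertools import product

def step(live):
    # Count live neighbours of every cell adjacent to a live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    # A cell is alive next generation if it has exactly 3 live
    # neighbours, or has 2 and is already alive. That's the whole game.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after 4 steps the same shape reappears, shifted
# one cell diagonally. Motion "emerges" from rules that say
# nothing about motion.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```

Two lines of rules, and gliders, oscillators, and even universal computation fall out; that is what "complex systems arising from the execution of simple rules" means.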
literally read anything about a Turing Machine because you are comically misunderstanding these articles.
please read the entire article, you are literally not understanding the text. the following directly addresses your argument.
Unless the author redefines the words used in the bit that you quoted from them, I addressed their argument just fine.
In the case the author does redefine those words, then the bit that you quoted is literally meaningless unless you also quote the parts where the author defines the relevant words.
The author is just arbitrarily imposing on algorithms the requirement that they ‘can be followed mechanically, with no insight required’. This is silly for a few reasons.
Firstly, that’s not how algorithms are defined in mathematics, nor is that how they are understood in the context of relevant analogies. Going to just ignore the ‘mechanically’ part, as the author seems to not be explaining what they meant, and my interpretations are all broad enough to conclude that the author is obviously incorrect.
Secondly, brains perform various actions without any sort of insight required. This part should be obvious.
Thirdly, the author’s problem is that computers usually work without some sort of introspection into how they perform their tasks, and that nobody builds computers that inefficiently access some random parts of memory vaguely related to their tasks. The introspection part is just incorrect, and the point about the fact that we don’t make hardware and software that does inefficient ‘insight’ has no bearing on the fact that computers that do those things can be built and that they are still computers.
The author is deeply unserious.
If their problem is that the analogy is not insightful, then fine. However, their thesis seems to be that the analogy is not applicable well enough, which is different from that.
Okay, so their thesis is not that the computer analogy is inapplicable, but that we do not work exactly the way PCs work? Sure.
I don’t know why they had to make bad arguments regarding algorithms, though.
There is no such thing as ‘Boolean logic’. There is ‘Boolean algebra’, which is an algebraisation of logic.
The author also seems to assume that computers can only work with classical logic, and not any other sort of logic, for which we can implement suitable algebraisations.
This is silly. The author is basically saying ‘but all computers are intelligently made by us’. Needless to say, they are deliberately misunderstanding what computers are and are placing arbitrary requirements for something to be considered a computer.
Who is this ‘we’?
Again, the author is deeply unserious.
so you aren’t going to read the article then.
No Investigation, No Right to Speak.
Here follows some selections from the article that deal with exactly the issues you focus on.
I strongly advise reading the entire article, and the two it is in response to, and furthermore reading about what a Turing Machine actually is and what it can be used to analyze.
the bolded part above is ‘why the author has a problem with the computer metaphor’ since you seem so confused by that.
these are the definitions the author is using, not ones he made up but ones he got from one of the articles he is arguing against. note the similarities with the definitions on https://en.wikipedia.org/wiki/Algorithm :
"One informal definition is “a set of rules that precisely defines a sequence of operations”,[11][need quotation to verify] which would include all computer programs (including programs that do not perform numeric calculations), and (for example) any prescribed bureaucratic procedure[12] or cook-book recipe.[13] In general, a program is an algorithm only if it stops eventually[14]—even though infinite loops may sometimes prove desirable. Boolos, Jeffrey & 1974, 1999 define an algorithm to be a set of instructions for determining an output, given explicitly, in a form that can be followed by either a computing machine, or a human who could only carry out specific elementary operations on symbols.[15]"
"The concept of algorithm is also used to define the notion of decidability—a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. From such uncertainties, that characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term."

"Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device."
now back to the article
this explains the author’s reasoning for their definitions further, he is not making these up, these are the common definitions in use in the discourse.
I have investigated the parts that you have quoted, and that is what I am weighing in on… They are self-contained enough for me to weigh in, unless the author just redefines the words elsewhere, in which case not quoting those parts as well just means that you are deliberately posting misleading quotes.
From the parts already quoted, it seems that the author is clueless and is willing to make blatantly faulty arguments. The fact that you opted to quote those parts of the article and not the others indicates to me that the rest of the article is not better in this regard.
Firstly, the term ‘Turing machine’ did not come up in this particular chain of comments up to this point. The author literally never referred to it. Why is it suddenly relevant?
Secondly, what exactly do you think I, as a person with a background in mathematics, am missing in this regard that a person who says ‘Boolean logic’ is not?
This contradicts the previous two definitions the author gave.
Whether we know of such an algorithm is irrelevant, actually. For a function to be computable, such an algorithm merely has to exist, even if it is undiscovered by anybody. A computable function also has to be N->N.
That’s a deliberately narrow definition of what a computer is, meaning that the author is not actually addressing the topic of the computer analogy in general, but just a subtopic with these assumptions in mind.
This directly contradicts the author’s point (1), where they give a different, non-equivalent definition of what an algorithm is.
So, which is it?
This is obvious nonsense. Not only are those definitions not equivalent, the author is also not actually defining what it means for instructions to be followed ‘mechanically’.
Does the author also consider the word ‘time’ to have a meaning without ‘importance’/‘significance’?
I have already addressed this.
At this point, I am not willing to waste my time on the parts that you have not highlighted. The author is a boy who cried ‘wolf!’.
EDIT: you seem to have added a bunch to your previous comment, without clearly pointing out your edits.
I will address one thing.
The author seems to be clueless about what a Turing machine is, and the Chinese Room argument is also silly, and can be summarised as either ‘but I can’t imagine somebody making a computer that, in some inefficient manner, does introspection, even though introspection is a very common thing in software’ or ‘but what I think we should call “computers” are things that I think do not have qualia, therefore we can’t call things with qualia “computers”’. Literally nothing is preventing something that does introspection in some capacity from being a computer.
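For what it's worth, introspection really is mundane in software. As one example among many, Python ships an `inspect` module that lets a running program examine its own functions (the `greet` function below is a made-up placeholder):

```python
# Software inspecting itself is routine, not exotic: a Python program
# can examine its own functions' signatures and documentation at runtime.
import inspect

def greet(name: str) -> str:
    """Return a greeting."""
    return f"hello, {name}"

# The program reads its own structure...
params = list(inspect.signature(greet).parameters)
doc = inspect.getdoc(greet)

print(f"greet takes parameters: {params}")
print(f"greet documents itself as: {doc!r}")
```

This is a shallow kind of self-examination, of course, but it shows that "a computer cannot look at its own workings" is false even for everyday software.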
I’ve heard people saying that the Chinese Room is nonsense because it’s not actually possible, at least for thought experiment purposes, to create a complete set of rules for verbal communication. There’s always a lot of ambiguity that needs to be weighed and addressed. The guy in the room would have to be making decisions about interpretation and intent. He’d have to have theory of mind.
The Chinese Room argument for any sort of thing that people would commonly call a ‘computer’ to not be able to have an understanding is either rooted on them just engaging in endless goalpost movement for what it means to ‘understand’ something (in which case this is obviously silly), or in the fact that they assume that only things with nervous systems can have qualia, and that understanding belongs to qualia (in which case this is something that can be concluded without the Chinese Room argument in the first place).
In any case, Chinese Room is not really relevant to the topic of if considering brains to be computers is somehow erroneous.
My understanding was that the point of the chinese room was that a deterministic system with a perfect set of rules could produce the illusion of consciousness without ever understanding what it was doing? Is that not analogous to our discussion?
and yet you ignore the definitions the author provided
Turing machines are integral to discussions about computing, algorithms and human consciousness. The author uses the phrase ‘turing complete’ several times in the article (even in parts i have quoted) and makes numerous subtle references to the ideas, as i would expect from someone familiar with academic discourse on the subject. focusing on a semantic/jargon faux pas does not hide your apparent ignorance of the subject.
there was no previous definition, this is the first definition given in the article. i am not quote-mining in sequence, i am finding the relevant parts so that you may understand what i am saying better. Furthermore, since you seem to have missed this fact many times, the author is using the definitions put forward in another article by someone claiming that the brain is a computer and that it is not a metaphor. By refusing to read the entire article you only demonstrate your lack of understanding. Was your response written by an LLM?
‘we have’ in this case is equivalent to ‘exists’, you are over-focusing on semantics without addressing the point.
i have no idea what you mean by this, according to wikipedia: “A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation).” which is identical in content to the author’s definition.
The point that the author is making here is that the definitions are functionally equivalent, one is the result of the implications of the other.
‘mechanically’ just means ‘following a set of pre-determined rules’, as in a turing machine or chinese room. you would know this if you were familiar with either. There is absolutely no way you have a background in mathematics without knowing this.
the author referred to here is not the author of the article i am quoting, but the author of the article it is in response to.
you have not. this is the author of the pro-brain-as-computer article restricting his definitions, that the article i am quoting is arguing against using the same definitions. I am not sure you understood anything in the article, you seem like you do not understand that the author of the article i quote was writing against another article, and using his opponent’s own definitions (which i have shown to be valid anyway)
in short you are an illiterate pompous ass, who lies about their credentials and expertise, who is incapable of interpreting any nuance or meaning from text, chasing surface level ghost interpretations and presenting it as a Gotcha. I am done with this conversation.
Which definitions am I ignoring? I have quite literally addressed the parts where the author gives definitions.
The author is really bad at actually providing definitions. They give three different ones for what an ‘algorithm’ is, but can’t give a single one to what the expression ‘mechanically following instructions’ means.
They are irrelevant to the parts that you quoted prior to bringing up Turing machines.
Not in any part that you quoted up to that point.
I looked for those with ctrl+f. There is no mention of Turing machines or of Turing completeness up to the relevant point.
Expecting the reader of the article to be a mind reader is kind of wild.
In any case, the author is not making any references to Turing machines and Turing completeness in the parts you quoted up to the relevant point.
Also, the author seems to not actually use the term ‘Turing machine’ to prove any sort of point in the parts that you quoted and highlighted.
I bring up a bunch of issues with what the author says. Pretending that my only issue is the author fumbling their use of terminology once just indicates that, contrary to your claims, my criticism is not addressed.
This is a lie. Here’s a definition that is given in the parts that you quoted previously:
I’m going to note that this is not the first time I’m catching you being dishonest here.
Okay, I went and found the articles that they are talking about (hyperlinked text is not easily distinguishable by me on that site). Turns out, the author of the article that you are defending is deliberately misunderstanding that other article. Specifically, this part is bad:
Here’s a relevant quote from the original article:
Also, I’d argue that the relevant definitions in the original article might be/are bad.
Onto the rest of your reply.
So far, I don’t see any good arguments against that put forth by the author you are defending.
I came here initially to address a particular argument. Unless the author redefines the relevant words elsewhere, the rest of the article is irrelevant to my criticism of that argument.
Cute.
I do not trust the author to not blunder that part, especially considering that they are forgetting that computable functions have to be N->N.
‘The English Wikipedia gives this “definition”, so it must be the only definition and/or understanding in this relevant context’ is not a good argument.
I’m going to admit that I did make a blunder regarding my criticism of their point (3), at least internally. We can consider myself wrong on that point. In any case, sure, let’s go with the definition that the author uses. Have they provided any sort of argument against it? Because so far, I haven’t seen any sort of good basis for their position.
They are not equivalent. If something is an algorithm by one of those ‘definitions’ (both of them are not good), then it might not be an algorithm by the other definition.
The author is just plain wrong there.
Care to cite where the author says that? Or is this your own conjecture?
In any case, please, tell me how your brain can operate in contradiction to the laws of physics. I’ll wait to see how a brain can work without following ‘a set of pre-determined rules’.
Or in any kind of other system, judging by the ‘definition’.
Cute.
You mean this part?
Or the part where, again, the same author literally calls stones and snowflakes ‘computers’ (which I am going to back as a reasonable use of the word)?
I was addressing particular arguments. Again, unless the author redefines the words elsewhere in the article, the rest of the article has no bearing on my criticism.
Cool. Now, please, tell me how my initial claim, ‘this is a rather silly argument’ is bad, and how the rest of the article is relevant. Enlighten me, in what way is me saying that the particular argument that you quoted, and for which you have failed to provide any sort of context that is significant to my criticism making me ‘illiterate’?
In case you still don’t understand, ‘read the entire rest of the article’ is not a good refutation of the claim ‘this particular argument is bad’ when the rest of the article does not actually redefine any of the relevant words (in a way that is not self-contradictory).
In return, I can conclude that you are very defensive of the notion that brains somehow don’t operate by the laws of physics, and it’s all just magic, and can’t actually deal with criticism of the arguments for your position.