Meat Robot 3: More Robot, More Meat

We really don’t know what intelligence is. We don’t know how it works. We struggle to define which kinds of activities are and are not intelligent. It is unclear whether intelligence is simply a quality, or whether it is something quantifiable that exists on some kind of sliding scale.

With both free-will and intelligence, discussion can easily founder because of a lack of clarity about what the terms mean. We have (or believe we have) an intuitive grasp of both, but it is easy to argue at cross-purposes about either, which can make discussions frustrating and vexatious. Additionally, like free-will, the problem exists whether you take a materialist view or a less materialist view.

However, unlike free-will, intelligence is something that humanity has made progress in understanding. Whether you regard computers/software as intelligent or not, their existence has helped us map out tasks that in earlier times we would have regarded as being purely the domain of human thought. You can take the view that a computer playing chess is not an example of intelligence (a respectable position) but in doing so you still help clarify what intelligence might be.

Cognitive psychology, the psychology of individual differences (e.g. IQ), psychometrics and educational psychology have also helped us gain a better understanding of how our minds work. Brain imaging is still a shaky area of science, but even less technology-driven medical evidence (such as work with stroke victims) has provided interesting insights into how parts of our brain relate to mental faculties.

Research on animals has revealed a range of mental abilities. Obvious examples include work with our close ape relatives, but the capacity of birds to engage in numerical reasoning is also worth noting (crows are scary clever). Nor are such insights confined to vertebrates – research with cephalopods has shown a capacity for problem-solving and what could reasonably be described as emotional reactions.

So intelligence remains a viable scientific research project. It is one that continues to make progress in multiple fields. For those who prefer a non-materialist view, this has not yet amounted to an absolute challenge – there is still plenty of room to assume that, in the end, intelligence must be something non-mechanical and perhaps non-material. However, the overall space for the non-materialist has grown smaller – just not to the point where a non-materialist has to be in active denial about established facts.

However, the non-materialist can still correctly point to the materialist and say that the meat robot can’t be built. If I am, as I claim, a meat robot, then I cannot explain how it is possible for my meaty existence to do the things that I can do. I cannot point to an equivalently clever but undoubtedly mechanical thing and say ‘I work like that thing, but using meat’. This is a problem for the philosophically inclined meat robot.

Of course, the non-meat robots of this world can’t explain intelligence either in any practical manner. However, they don’t have to. Their claim is that intelligence is inherently mysterious and not something that can be constructed out of lego, computer parts or meat.

Where am I going with this?

My point is that while it is a problem for meat robots that we don’t have a full explanation of what intelligence is or how it works, it is also true that many arguments against meat robots are really just variations on that single point.

Hence this post from John C Wright:

My reply was:

“Put another way: we don’t know what intelligence is or how it works. This I agree with. However, what does not follow from that is “we cannot know what intelligence is or how it works” (that is not what you said above but I think it is the next step).”

Wright’s post is essentially a riddle suggesting that, if mental processes were reducible to the movement of atoms and so on, we should be able to decipher those movements as thoughts without some sort of codebook. As the codebook would essentially be a description of meaning, and hence work at a non-material level, this demonstrates that the initial assumption was incorrect: the mental processes have not been reduced to the movement of atoms, because to make sense of them we need to return to a different domain of meaning (exemplified by the codebook).

Wright concludes:

“The codebook acts as the way to climb from the meaningless material world into the world of the mind, where the meaning is kept. If the two worlds are in perfect lockstep, that is, if they share the same form (as they do in limited cases like mathematics, or mathematical games like chess) then the representations in the codebook will keep perfect track one with the other.
In those cases, the material and the mental can be confused with each other by the unwary, because the material symbols representing the mental reality will always be in a one-to-one correspondence with each other. The adding machine, if the gears and wheels and keys are labeled correctly, will always come up with a correct sum, just as a mind would do, if it went through the sums one by one, and made no mental mistakes.
But whether the worlds are in lockstep or not makes no difference to the fact that they are two different worlds. One world is meaningless, has no intentions, and can be expressed solely in terms of numbers and unit measures. The other world is meaningful and intentional, and cannot be expressed solely in terms of numbers and unit measures.
“How does the cue ball strike the eight ball?” is a question that can be answered with mechanics. You can give the angle and velocity and mass of the balls, and predict their final positions. “Why should I knock the eight ball into the side pocket?” cannot be expressed in those terms.
If something cannot be expressed in material terms, it is senseless to say it is material.”

There are multiple issues there, but the core one is an implied but unstated appeal to ignorance. We don’t actually know how such a reduction of mental processes to the movement of atoms could be done, or how we could deduce from such movements what was being thought. Consequently, the riposte that Wright’s argument is calling for can’t be made – but if it could, that riposte would be “but that is not how such a reduction would work!”

It is unfair to say that Wright’s argument is a straw man, because he is making a reasonable attempt to imagine what it might mean to reduce mental processes to physical ones. It has some of the features of a straw man argument, but the core challenge is a solid one.

How is it that things can have meaning? This is the core philosophical question that Wright is raising. The answer is that I don’t know.

Wright, or a Thomist or a Platonist of one kind or another, can point upwards to higher levels of abstraction, ending in some kind of ultimate level of abstraction which can then be identified as god, the ultimate truth, etc. But that literally gets us nowhere.

6 comments

  1. Mark

From his example, JCW’s materialists are science-y (and at least a bit straw-y), as they are doing all that probing to establish exact measurements. The straw-ish bit is that no-one currently claims we can know all the measurements all the time.

    When Wright says ““How does the cue ball strike the eight ball?” is a question that can be answered with mechanics. You can give the angle and velocity and mass of the balls, and predict their final positions. “Why should I knock the eight ball into the side pocket?” cannot be expressed in those terms” he’s essentially restricting the language his materialists can talk in to scientific measurements alone, and then declaring victory because he’s speaking English and they are speaking numbers, which ignores the argument that those numbers could eventually be translated into English by feeding them into a meat robot. It’s rather like declaring your opponent can only reply in binary, and then affecting not to understand what they say.

  2. thephantom182

    I like to restate the question thus, because it seems evident to me that a lot of the -mind- is certainly algorithmic.

If we think of the mind as the machine that runs inside the brain, the machine which thinks, perceives, remembers and orders things, the machine that is the little voice that talks all the damn time and never shuts up… the question arises: who’s listening?

    That’s where it gets interesting, and that would be the part that is most likely irreducible to “atomic movement.”

    Mr. Wright’s argument is based on Catholic doctrine, and so he’s sort of fenced in with what he can do. I remain a deplorable heretic, so I get to ask uncomfortable questions. Churchmen and Marxists both hate my guts.

    I also think that -because- we are not meat robots, as in Turing machines running algorithms, we may be able to understand how the human mind works, eventually. This would be the ultimate in standing outside the box. Robots can’t do that.

    • camestrosfelapton

Good point. It seems paradoxical to imagine that a human brain would be capable of understanding a human brain, since one would imagine the understanding would need to be at least as complex as a human brain.

      OK, head hurts now.

      • David Brain

        I think that’s why I am still doubtful that many AI researchers are doing a good job of explaining what they are trying to do. Because “replicating human intelligence” strikes me as being the primary school level of description – working on the common premise that each time you move up a level of education, basically the first thing you are told is that what you were told before is, essentially, entirely wrong but comprehensible at your previous level of knowledge. Exploring the concept of intelligence itself is a worthy aim, but confusing it with understanding (for want of a better word) humanity strikes me as being unhelpful.

        Then again, I’m a trinitarian at heart, in that I think that “mind”, “body” and “spirit/soul” are distinct, even if I am unwilling to yet state a position as to whether or not they are necessarily independent entities*. And that’s stepping even further away from the concept of the meat-robot! But I’m not sure that necessarily has anything to do with whether or not free will exists or even whether or not it can exist.

        *I’m not venturing into the minefield of “life-after-death” here. I am unsure if it is a meaningful concept anyway, at least not without an extensive definitional argument first! Not that meat-robots would care about such things. 🙂

      • thephantom182

        “It seems paradoxical to imagine that a human brain would be capable of understanding a human brain as one would imagine the understanding would need to be at least as complex as a human brain.”

        That’s the thing though, isn’t it? Human beings -do- comprehend themselves, and notice themselves. Sometimes they even notice and connect to other people. Turing machines do not. They run programs.

        Thus is exploded the notion of the meat robot.

  3. Mark

    Random thought: when people speculate on whether or not the human brain can be modeled as a machine running algorithms, I think there’s a natural tendency to think of it as running algorithms of the sort that we understand on a machine of the sort that we can build, with some handwaving about how it’s probably a bit more complicated. I suspect this is a false assumption though (and one that JCW makes when he insists on what materialists must believe).
    If you look at the simple task of deciding what to wear in the morning, and reduce it to a task of systematically comparing the available combinations in some way that a computer can run, if you’ve got a reasonable number of items then the number of possible combos is far beyond what a “brain computer” running like a silicon chip computer can actually work through in the time available. Yet we know people successfully get dressed and leave the house all the time, so clearly the brain works in a way that is beyond this simplistic model, but not necessarily beyond the realms of the material. (OK, the getting dressed example may not be the best one but I hope it makes the point).
    We know that e.g. quantum computing involves different programming, with results different from what binary and silicon can achieve, so I see no barrier to thinking that brains are achieving something beyond silicon without invoking the immaterial.
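    [Ed: Mark’s combinatorial point can be made concrete with a quick back-of-the-envelope sketch. The wardrobe size and comparison rate below are illustrative assumptions, not figures from the comment.]

    ```python
    from math import comb

    # Hypothetical wardrobe: 30 items, where an "outfit" is any subset of items.
    n_items = 30
    n_outfits = 2 ** n_items  # 1,073,741,824 possible subsets

    # Exhaustively ranking outfits by pairwise comparison takes C(n_outfits, 2) checks.
    n_comparisons = comb(n_outfits, 2)

    print(f"{n_outfits:,} possible outfits")
    print(f"{n_comparisons:,} pairwise comparisons")

    # Even at an assumed billion comparisons per second, brute force is hopeless:
    years = n_comparisons / 1e9 / (3600 * 24 * 365)
    print(f"about {years:.0f} years at 10^9 comparisons/sec")  # ≈ 18 years
    ```

    Whatever the brain is doing when we get dressed, it clearly is not an exhaustive search of this kind – which, as Mark says, argues against the simplistic silicon-style model rather than against materialism.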