We really don’t know what intelligence is. We don’t know how it works. We struggle to define what kinds of activities are and are not intelligent. It is unclear whether it is simply a quality or whether it is something quantifiable that exists on some kind of sliding scale.
With both free-will and intelligence, discussion can easily founder because of a lack of clarity about what the terms mean. We have (or we believe we have) an intuitive grasp of both but it is easy to argue at cross-purposes about both. This can make discussions frustrating and vexatious. Additionally, like free-will, the problem exists whether you take a materialist view or a less materialist view.
However, unlike free-will, intelligence is something that humanity has made progress in understanding. Whether you regard computers/software as intelligent or not, their existence has helped us map out tasks that in earlier times we would have regarded as being purely the domain of human thought. You can take the view that a computer playing chess is not an example of intelligence (a respectable position) but in doing so you still help clarify what intelligence might be.
Cognitive psychology, the psychology of individual differences (e.g. IQ), psychometrics and educational psychology have also helped us gain a better understanding of how our minds work. Brain imaging is still a shaky area of science, but even less technology-driven medical information (such as work with stroke victims) has provided interesting insights into how parts of our brain relate to mental faculties.
Research on animals has revealed a range of mental abilities. Obvious examples include work with our close ape relatives but the capacity of birds to engage in numerical reasoning is also worth noting (crows are scary clever). Nor are such insights confined to vertebrates – research with cephalopods has shown a capacity for problem-solving and what could be reasonably described as emotional reactions.
So intelligence remains a viable scientific research project. It is one that continues to make progress in multiple fields. For those who prefer a non-materialist view, this has not yet amounted to an absolute challenge – there is still plenty of room to assume that, in the end, intelligence must be something non-mechanical and perhaps non-material. However, the overall space for the non-materialist has grown smaller – just not to the point where a non-materialist has to be in active denial about established facts.
However, the non-materialist can still correctly point to the materialist and say that the meat robot can’t be built. If I am, as I claim, a meat robot then I cannot explain how it is possible for my meaty existence to do the things that I can do. I cannot point to an equivalently clever but undoubtedly mechanical thing and say ‘I work like that thing, but using meat’. This is a problem for the philosophically inclined meat robot.
Of course, the non-meat robots of this world can’t explain intelligence either in any practical manner. However, they don’t have to. Their claim is that intelligence is inherently mysterious and not something that can be constructed out of lego, computer parts or meat.
Where am I going with this?
My point is that while it is a problem for meat robots that we don’t have a full explanation of what intelligence is or how it works, it is also true that many arguments against meat robots are really just variations on that single point.
Hence this post from John C Wright:
My reply was:
“Put another way: we don’t know what intelligence is or how it works. This I agree with. However, what does not follow from that is “we cannot know what intelligence is or how it works” (that is not what you said above but I think it is the next step).”
Wright’s post is essentially a riddle that suggests that if mental processes were reducible to the movement of atoms etc, we should be able to decipher those movements as thoughts without some sort of codebook. Since the codebook would essentially be a description of meaning, and hence work at a non-material level, this demonstrates that the initial assumption was incorrect. The mental processes have not been reduced to the movement of atoms because, to make sense of them, we need to return to a different domain of meaning (exemplified by the codebook).
“The codebook acts as the way to climb from the meaningless material world into the world of the mind, where the meaning is kept. If the two worlds are in perfect lockstep, that is, if they share the same form (as they do in limited cases like mathematics, or mathematical games like chess) then the representations in the codebook will keep perfect track one with the other.
In those cases, the material and the mental can be confused with each other by the unwary, because the material symbols representing the mental reality will always be in a one-to-one correspondence with each other. The adding machine, if the gears and wheels and keys are labeled correctly, will always come up with a correct sum, just as a mind would do, if it went through the sums one by one, and made no mental mistakes.
But whether the worlds are in lockstep or not makes no difference to the fact that they are two different worlds. One world is meaningless, has no intentions, and can be expressed solely in terms of numbers and unit measures. The other world is meaningful and intentional, and cannot be expressed solely in terms of numbers and unit measures.
“How does the cue ball strike the eight ball?” is a question that can be answered with mechanics. You can give the angle and velocity and mass of the balls, and predict their final positions. “Why should I knock the eight ball into the side pocket?” cannot be expressed in those terms.
If something cannot be expressed in material terms, it is senseless to say it is material.”
There are multiple issues there but the core one is an implied yet unstated appeal to ignorance. We don’t actually know how such a reduction of mental processes to the movement of atoms could be done, or how we could deduce from such movements what was being thought. Consequently, the riposte that Wright’s argument is calling for can’t be made – but if it could, that riposte would be “but that is not how such a reduction would work!”

It is unfair to say that Wright’s argument is a straw man, because he is making a reasonable attempt to imagine what it might mean to reduce mental processes to physical ones. Still, it has some of the features of a straw man argument, even though the core challenge is a solid one.
How is it that things can have meaning? This is the core philosophical question that Wright is raising. The answer is that I don’t know.
Wright, or a Thomist or a Platonist of one kind or another, can point upwards through higher levels of abstraction, ending in some kind of ultimate level of abstraction which can then be identified as god, the ultimate truth etc. But that literally gets us nowhere.