Meat Robot 1: Free Will

John C Wright started an interesting discussion at his blog http://www.scifiwright.com/2016/08/a-general-query-to-all-panphysicalists-and-radical-materialists/

Let us cut to the chase.

Think back to the day when you first discovered that you were a meat robot without free will, without freedom, and without dignity. Did the discovery fill you with awe, rapture, wonder and gratitude?

For, if not, the discovery is false. Truth is majestic and majesty provokes awe; truth is sublimely beautiful and beauty provokes rapture; truth is startling, because it shatters the lies we tell ourselves, and the bright surprise leaves us blinking in wonder; truth is a gift to be prized above all price, and gifts provoke gratitude.

If the discovery of materialism did none of these things, either your reactions are miscalibrated and do not reflect reality, or your discovery was not a discovery at all, merely a falsehood you have yet to test with due rigor.

So? What was your reaction?

There were two more posts and the discussion above was a follow on from previous discussions.

I hadn’t been commenting at JCW’s blog because it tended to cause more upset than discussion but this post looked like an invitation back. As it happens, things started to get weird and tense there and I opted out again. However, there were some intelligent questions asked and I said I’d try and address them.

And fair warning: this goes on a bit and involves some thought experiments about predicting other people’s behaviour, which is necessarily a bit of a creepy-when-you-think-about-it sort of philosophical scenario.

Free Will

I’m not going to cite anybody in this chunk but the ideas below aren’t original.

Free will is conceptually a mess but it is also something people grasp as a thing they experience. When I say it is a mess, what I mean is:

  • If you are a Judeo-Christian-Islamic theist then you have to reconcile free-will with a god that can do anything and knows everything in advance.
  • If you believe in any kind of determinism (physical or theological) then you have to somehow reconcile that with the supposed choices of free-will.
  • If you believe the world is fundamentally unpredictable then you escape determinism but swap forced choices for random ones.

Free will is cognitive: it is about a person making free choices and deciding to do something. While that can encompass spontaneity or seemingly random acts, it isn’t confined to such acts. We would regard our rational and/or sensible choices to also be encompassed by free-will.

Also, we tend to see our choices as defining us – they are the kinds of things we would do. Our choices reflect our personality and our history. We can also reflect on our decision making and consider how the information we had, our emotional state, our personal goals and our personality influenced our decision.

Wright sees free-will as being particularly challenged by a physical view of reality. If I am a robot made of meat (I am – but one running a Camestros Felapton module) then I am like a clockwork machine and my thought processes are reducible to atoms moving about and hence no free-will.

I’ll put aside, for the time being, the question of things being usefully reducible to physics. At a broader level, my mind is the operations of my brain and my brain operates at levels some of which I’m not conscious of. This can be alarming because it begins to sound like my brain is in charge of me rather than my mind.

Neurologically there is apparent evidence for this, with some indication of things occurring in the brain pertaining to decisions before we are conscious of having made our decision.

I think this is actually unremarkable. Whether we imagine souls, gods, quantum effects creating intelligence or computer-like brains, an unconscious process will precede conscious ones. To imagine otherwise is to assert our thought processes transcend time and that is a point where I resort to saying that is just silly.

The problem is we really don’t have a good handle on what free-will is. As a consequence, we tie ourselves into paradoxes. So I’m going to assert what free-will is – again this is not original with me but I don’t have a pointer to a specific thinker who said this.

What Free Will Is

Imagine a person, call her Sue.

We have the power to predict what Sue will do. How have we got that power? You can pick the way you feel most comfortable with:

  • Access to a parallel universe which is temporally further ahead than ours but otherwise the same (currently).
  • An extraordinary computer simulation of Sue that tracks all inputs and physical states of Sue to produce a deterministic model of Sue that predicts exactly what she will do.
  • An angel tells us what decisions Sue will make.
  • All of these choices are actually a bit disturbing when you think about it and you’d rather not pick any of them, thanks very much.

It doesn’t matter which we pick. Somehow, we can know what Sue will do next.

I believe Sue can still have free-will in this scenario.

That is important. I’m not saying she must have free-will, it could still be a contingent fact about our universe that free-will is an illusion but even with the kind of determinism I just described, I think free-will is still possible. Having said that, we need to describe it carefully.

We meet Sue for a cup of coffee. Our predestination powers tell us that Sue will order a cappuccino. Note how unremarkable a prediction this is. It is the kind we make fairly reliably about people we know, despite not having any remarkable powers. However:

  • Sue does not know about our power to see what she will do next.
  • We have not told Sue that we know she will order a cappuccino.

Sue orders a cappuccino.

Over coffee, we explain to Sue that we have this incredible power to predict her decisions. Sue is naturally aghast at this gross invasion of her privacy, demands that we smash up the computer/stop accessing a parallel universe/stop talking to messed-up angels. Still, the idea gets stuck in Sue’s head and she decides to teach us a lesson.

She uses what we told her to build her own simulation/access a parallel universe/contact an angel and now has the power to see what WE will do next.

She asks to meet us for a cup of coffee. Before we order, Sue explains what she has done and also that she knows we will order a cup of tea. Because we are somewhat infantile and cross that Sue has neatly demonstrated how messed-up it would be to gain pre-knowledge of another person’s actions, we rather petulantly decide to order a cappuccino.

  • We order a cappuccino.

That is free will.

Sue asks us to check our Sue-predicting-powers and we discover that:

  • Our parallel universe Sue is now an alt-history Sue and the universe is slightly different.
  • Our computer model has diverged from Sue’s behaviour.
  • The angel is lecturing us on how god works in mysterious ways.

Of those options, the only one I can actually guarantee is the computer one. Sue’s model of us must contain everything we know and believe about the world, and our model of Sue must contain everything she knows and believes. Whether that is at the fundamental level of how the data is encoded in neurons or pulses of electrons or arrangements of atoms, or whether it is at a more comprehensible level, it doesn’t matter. All of us make decisions based on what we know and believe and have been told.

Sue predicted that we would order tea but that prediction did not contain the fact that she would tell us about the prediction. Our mental state is changed by Sue telling us about the prediction (which we believe because of our past experience with such predictions).

Sue could have anticipated this and fed back into her model of us the fact that she would tell us about ordering tea. In that case her computer model has to take into account its own predictions. Maybe it collapses into a self-reference paradox at that point, or maybe it copes and says that we’ll order a cappuccino…but then Sue has to tell us we’ll order a cappuccino…so we order tea…so Sue would need to feed in double, triple, quadruple layers of prediction. Even-numbered predictions would be cappuccino and odd-numbered would be tea.
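For the curious, this feedback loop is easy to sketch in code. The sketch below is purely my own illustration (all the names are invented for the example): a contrarian coffee-orderer who defies any prediction they are told about, and a predictor that feeds its own output back in as the “telling”.

```python
# A toy model of the prediction feedback loop: an agent who always
# defies any prediction they are told about. Names are illustrative.

def contrarian_choice(told_prediction):
    """The agent's rule: with no prediction revealed, order out of habit;
    once told a prediction, order the other drink out of spite."""
    if told_prediction is None:
        return "tea"
    return "cappuccino" if told_prediction == "tea" else "tea"

def predict_with_telling(layers):
    """Model the agent, feeding the model's own prediction back in as
    'what the agent was told', for the given number of layers."""
    prediction = contrarian_choice(None)  # base case: agent told nothing
    for _ in range(layers):
        prediction = contrarian_choice(prediction)
    return prediction

# Each extra layer of "...and then I tell them the prediction" flips the
# answer, so no prediction is stable once it is announced.
orders = [predict_with_telling(n) for n in range(4)]
# → ["tea", "cappuccino", "tea", "cappuccino"]
```

The point of the sketch is that the alternation never settles: there is no fixed point the predictor can announce and still be right.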

This isn’t just an exercise in the logic of determinism. Socially and evolutionarily, other people need our behaviour to be predictable (it is how we all get along) and we need our behaviour to have the capacity to be surprising (it’s what stops people taking each other for granted).

Even at a very fallible level of predicting another person’s behaviour, we know that telling somebody what choice they will make in advance is at best cheeky and more generally rude. The scenario above of some supposedly infallible predictor of a person’s behaviour is worse than rude: it would be downright creepy, weird and unethical.

Meat robot is quite happy thanks

As a meat robot I’m confident I have free-will. My decisions are determined in the sense that the component parts of me together make up ME and it is the interaction of those parts on what I know and believe about the world that make up my decisions. That isn’t a loss of free-will, it is just being honest about who and what I am. I’m not an abstract geometrical point or a monad. I’ve got parts.

Within that framework, I can make sense of what it means for me to have free-will. Not only that, free-will makes sense LOGICALLY and also EVOLUTIONARILY.

  • Logically, because a deterministic prediction of what I will do is still something I can defy: for such a prediction to be foolproof, it would need the capacity to model its own predictions in the event of me learning about them. Heck, even describing the issue gets you into a self-referential nest.
  • Evolutionarily, because meat-robots are social animals and that requires us to be both predictable and surprising.

I’m not saying divine or metaphysical explanations of free-will are necessarily wrong (although they have issues and I’ll touch on some of them in other posts) but I am saying free-will makes a lot of sense for a meat robot.

31 responses to “Meat Robot 1: Free Will”

  1. If Wright means to tell us that he has always contemplated the real or impending death of a loved one with awe, rapture, wonder and gratitude, I say that he is lying, perhaps to himself as well as us. Likewise if awe, rapture, wonder and gratitude dominate how he feels about the innocent victims of war, the lifestyle of the Ampulex Dementor wasp, the fate of all living things near gamma-ray bursters, and the popularity of reality TV shows. For that matter, his own reaction to the state of the Hugos scarcely demonstrates a state of mind giving priority to awe, rapture, wonder and gratitude.

  2. I look forward to seeing where you take this argument, given that you’ve stated your position quite crisply here; arguing another position would surely require you to replace your Camestros Felapton module with another or else you risk making statements that presume a superior understanding of someone else’s position (a version of the Dawkins fallacy, perhaps?) Then again, that’s probably the core of the argument about free will anyway – that one individual can tell another individual what they think.

    For me, your conclusions don’t seem especially contentious but also don’t make me think “Oh, so I’m just a meat puppet!” as they don’t seem to contradict other philosophical positions (one of which I hold.) Indeed, I am not entirely convinced that your second one (predictability + surprise) doesn’t raise more questions than it answers.

    • Thanks David,
      One point first of all: “meat robot” rather than “meat puppet”. A meat puppet is controlled by something else, whereas a meat robot can be autonomous 🙂

      I’m sort of glad I haven’t convinced you that you are a meat robot. My main aim here is to show that those of us who are meat robots aren’t dismayed to discover we lack free will. I’ll concede, for the moment, that there can be beings with free will who aren’t meat robots 🙂

      • Doh. I’m in the middle of another project atm in which the term “meat puppets” is being used a lot, so I imagine that’s why I got momentarily distracted.
        On the other hand, I do think your latter point is significant. I have long wondered if one of the simplest explanations is that it is entirely possible that the universe is the problem rather than anything else; that, for instance, in the universe I inhabit, God exists*, whereas in the universe that Richard Dawkins inhabits (to pick a name at random), God does not exist. But because those universes co-exist, we each consider the other to be holding absurd positions (and, indeed, cannot properly understand the others’ pov.)
        *note that I don’t wish to discuss God at this point, as that’s a wholly separate definition problem that is not, perhaps, entirely relevant yet.

  3. I think Penrose handled this issue rather nicely in The Emperor’s New Mind. I recommend it.

    Fudging what he said a little because it’s been many years since I read it, Penrose’s argument boils down to the claim that one cannot -in principle- construct an algorithmic computer program that perfectly predicts the actions of a human being, because human beings solve mathematical problems which are not solvable by algorithms running on Turing machines. He uses the five-fold symmetrical tiling of a two-dimensional plane as an example. Apparently there are others.

    So, if there are things that humans do which Turing machines cannot – and there are – then the meat-robot supposition is incorrect.

    This does not by itself prove the existence of a higher being. But it does disprove the “strong AI” theory that human intelligence is merely software running on a meat Turing machine.

    I think it’s possible that some or all of the human -mind- might be software, because the mind is quite mechanical in many ways. But the self-awareness, the consciousness of humans and most likely higher mammals, that is something else entirely. How much of our behavior comes from that self-awareness I don’t know, but clearly some does.

    • I think Penrose is wrong – indeed I think he is very wrong, as in he made a basic error. However, Penrose is also really scary smart and so I’m always a bit nervous asserting that he basically messed up when it comes to Godel’s theorems.

      OK – short version as to why I think Penrose is wrong. He is using an argument based on incompleteness theorem. In short, a sufficiently complex formal system that is CONSISTENT has statements which are true but not provable. Consistency is a prerequisite for both of Godel’s incompleteness theorems and related results, such as Turing’s.

      Human thought does not enforce consistency at a higher level (i.e. beliefs and stuff that we are conscious of). People can believe inconsistent things and can come to inconsistent conclusions. Now, I guess that by itself is a reason to reject the notion of minds/brains as being computational machines but that isn’t Penrose’s argument. He is arguing Godel’s theorem is relevant, when it doesn’t even meet the first criterion for things-to-which-Godel’s-theorem-applies.

      • As I said, it’s been a long time, I read it back in the early 1990s. I also lack the expertise to follow the mathematical argument completely. However I did find the notion that Turing machines cannot manage a lot of things which humans do manage to be compelling.

        Functionally a digital computer is a fancy Leibniz wheel that only has two digits on it. You can make it go as fast as you want, string as many together as you want, but it is not going to develop self awareness. That seems to be a different order of thing. As different as ice is from water. Same stuff, but also not the same at all.

        Allowing oneself to be in the space of not-knowing is uncomfortable, but if we are honest, we really don’t know how it works at all.

        • I think I read it at a similar time. I’ve been trying to hunt down my copy but alas! it’s wandered off or was left behind on some other continent. So I might be doing his argument a disservice.
          Mind you, even when Penrose is wrong he is fascinating.

      • Mind you, even when Penrose is wrong he is fascinating.

        Alas, you’re overlooking the major flaw in the argument – it’s a red herring argument.

        “Human consciousness is not a Turing machine therefore there is a soul” is as valid an argument as “Human consciousness is not a Turing machine therefore my mother is named Martha.” Penrose himself tried to make a case for humans including quantum computation – and we can already make very crude quantum computers. If we plug a qubit into a circuit, does it have a soul or free will?

        Further, ask yourself how you know that humans don’t themselves show Godel incompleteness, that there are things that they cannot think – which, by definition, you can’t think of…

      • “Human consciousness is not a Turing machine therefore there is a soul”

        This does not follow, and it is not what I said. Human consciousness is not a Turing machine, therefore we are not robots. There could be a soul, or perhaps there might not be one. We don’t know at this point with any certainty, so the only reasonable thing to do is live in the question.

        As to Godel incompleteness, human beings think of things that were never thought of before. They do it all the time. That was one of Penrose’s points. Algorithms can’t “step out of the box” and invent Newtonian calculus. I can’t either, but Newton did. Every time somebody invents something that they’ve never seen or heard of before, they are outside the algorithm box.

        That invention and being in uncharted territory is the thing that makes human intelligence fundamentally different than that of other social animals.

        Insects for example -are- meat robots, all their behaviors (as far as we know to date anyway) are algorithmically generated. Their learning is machine learning. The progress of machine learning is in fact based on ants and bees. Humans include that machine-learning technique in their repertoire, as it were, but they can also short circuit the whole algorithmic search thing by going straight to the answer. Bugs can’t do that. They have to take the long way around.

        So human intelligence is a way of ‘cheating’ and getting the answer more cheaply, if you will. Penrose appealed to quantum mechanics for his unknown mechanism, because that’s the edge of what we know about. As quantum computing becomes better understood, I will not be surprised to see qubits fail to explain human consciousness either. They are an even fancier Leibniz Wheel, at least so far.

        The truth is that we have no idea at all and can barely even frame the questions regarding the nature of consciousness. We are no farther ahead now than Leibniz was, if we are honest.

      • Not sure whether Newton “invented” anything. Presumably the laws or dynamics existed independently of his discovery and publication, in the same way that America existed before it appeared on European maps — though it was not yet named, framed and tamed.

      • People can comment on other people’s comments, RDF.

        They can indeed – and others can choose to ignore them.

      • KR said: “Not sure whether Newton “invented” anything. Presumably the laws or dynamics existed independently of his discovery and publication…”

        Does mathematics exist independently, waiting to be discovered? This is a possibility, but does not affect the argument. Invented or discovered, an algorithmic engine cannot get out of its own box to find or make it. Algorithms always only live -inside- the box. Humans usually live inside their little boxes too, never really getting outside. But some do, and that is the thing that confounds the “meat robot” argument.

        It happens more often in modern times, too. For 250,000 years, Cro-Magnons and early humans roamed the Earth and really didn’t advance technologically at all. Personally I think that was the case because they were just that bad ass and didn’t have to. Top of the food chain is a good place to be, means food nearly falls into your mouth. Agriculture 10K years ago brought huge, lasting changes, and an ever-accelerating rate of technological change. Late Victorians grew up before electricity was invented. They lived through the electric light, the internal combustion engine, cars, airplanes, antibiotics, telephones, computers, space flight and some of them the moon landings. One lifetime. Makes me wonder if something changed.

        That’s not a bunch of ants. Ants are pretty much the same now as they were a million years ago. Their fossilized tunnels are identical.

  4. If I am a robot made of meat (I am – but one running a Camestros Felapton module) then I am like a clockwork machine and my thought processes are reducible to atoms moving about and hence no free-will.

    And?…

    Wright is arguing argumentum ad consequentiam – “I don’t want to be a meat robot, therefore we cannot be meat robots”. Yeah, and I don’t want to die, therefore we are immortal, right? As far as I can tell, everything he says is bafflegab thrown up to try and obscure this fundamental idiocy in his argument. As always with Wright, whenever he waxes florid, he’s trying to hide his weaknesses.

    However, allow me to suggest something that’s not really a thought experiment at all – imagine you’re a meat robot who (i) cannot fully predict their own actions, (ii) lives in a world where no-one can fully predict your actions and (iii) has the *illusion* of free will.

    What’s the difference between that and having free will? Under the circumstances we actually live in, without precognitive coffee partners, YHWH waggling his non-existent finger at you, or Ceiling Cat knowing when you’re going to masturbate – how *exactly* do you distinguish the illusion of free will from the reality?

    You can’t. There is no difference. “Free will” is a name we put on an *internal* perception based on our limited perspectives, and assume we’re describing an objective reality.

    • Can one have free will within a range? Some things are certainly deterministic.

      Consider Sue. Just giving the name Sue can be deterministic to some degree as discussed here:

      • Can one have free will within a range?

        The problem is in even asking the question without defining the terms. To state that you have free will within a range is to state that you can choose your actions freely within that range, when the opposite is put up by Wright’s frothing argument as a “meat puppet”, not able to choose your actions freely at all. But if you can’t PREDICT your actions before you “choose” them, no-one is able to PREDICT those actions within that range before you make them and you THINK you choose your actions freely, then what exactly does it mean to say there’s a distinction between that and this undefined concept of “free will”?

        I, you, Wright and Camestros might all be “meat puppets” (which is what the evidence points to, such as MRIs being able to predict your actions shortly before you “make” them) – but who cares? IT MAKES NO DIFFERENCE. Wright is repelled by the idea which is why he vomits syllables in lieu of an argument, but he hasn’t really given any way to distinguish one situation from the other.

        Perhaps the fundamental problem is that current research seems to be drilling down into what makes our identities, and coming to the uncomfortable conclusion that our “selves” are constructed illusions mediating between several brain processes. In essence, there is thinking occurring, therefore “we” become. The best hypothesis seems to be that Descartes was wrong – our existence doesn’t precede thinking, but arises as a way of organizing it.

        Of course, your entire experience says that that is wrong – because your entire “self” is from the perspective that you actually exist as a continuous event. Yet, I can go further and suggest that for large periods of time, “you” don’t actually exist – when time seems to pass without you noting it. The brain may not be booting up the “self-image” process when it is not needed – but you *can’t* notice this, because the very act of asking “do I exist?” or “Was I actually there during that time?” necessarily requires the notion of self. Think about it – when you get into your car at work and drive quietly home, can you be sure that “you’re” actually there for the entire journey?

        The idea that the self arises from and is dependent on physical processes is scary – which is why people come up with ideas of “souls” WITH NO DECENT PROOF. They want to be separate from the meat, so they construct a semantic map with whole chains of concepts designed to support this conceit. But there’s no proof for it. Souls, religion, God, the afterlife – they’re all threatened by the observation that all you are, your entire identity is just processes in meat. Change the meat, you change the self. When the meat dies, you die. Your soul doesn’t waft up to heaven for judgement, you simply go where a candle flame goes when the candle is destroyed.

        I think Wright’s smart enough to grasp that idea. However, his major characteristic is arrogance and a blinding inability for self-reflection, which is why he resorts to walls of gobbledegook, anger and misrepresentation to hide that grasp from himself.

  5. I don’t think you have to resort to either the incompleteness theorem or the halting problem to demonstrate that a machine that can predict the future cannot exist in the same universe that it predicts. As you illustrated above, if the machine predicts I’ll order cappuccino, I can order tea just to prove it wrong. This contradicts the assumption that the machine was able to predict the future.

    I’m not sure it enlightens us on free will, though. It will break just the same if I replace the human with a program that generates integers. The program calls the future-predictor’s web API and asks “what number will I generate next?” If the predictor returns N, the program outputs N+1.
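That integer-generator version can be sketched directly. This is my own illustrative reduction (the web API is collapsed to a plain function argument; the names are invented):

```python
# Sketch of the integer-generator paradox: whatever number N the
# "future predictor" hands over, the program being predicted outputs N + 1.

def generator(predicted_n):
    """The program under prediction: ask the predictor for N, output N + 1."""
    return predicted_n + 1

# No prediction can ever be correct: the actual output always differs
# from the prediction the program was given.
assert all(generator(n) != n for n in range(1000))
```

The contradiction needs no Godel or halting-problem machinery: for every candidate prediction N the actual output is N + 1, so no correct prediction exists in the same universe as the program.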

  6. So, I’ve been thinking a bit about this post for a bit. I confess my limitations in not being able to understand and follow all the references to theorists. I guess we can’t know everything, can we? But are we comfortable saying we aren’t sure? That particular rhetorical question might cross-reference this discussion of free will with your “conservative crisis” tag.

    For some reason, the word “meat” to apply to people causes a visceral negative reaction in me. I’ve heard it used before (“meatspace” etc) and really really really hate it. Such a strong emotional reaction seems interesting in this context. Is it because the word combination itself has an ugly sound? Is it because humans have elevated themselves above animals and the reductive reminder is uncomfortable? I can think of a dozen other potential explanations.

    Discipline and punish, archaeology of knowledge, panopticon etc. The will (desire?) to explain and systematize is the will (desire?) to control. I took an internet quiz to determine which philosopher I might be and it said I am Foucault. And we all know internet quizzes don’t lie. 🙂

    On the subject of machines and attempts to predict a person’s behaviour being worse than rude and verging on downright creepy, weird and unethical: https://www.washingtonpost.com/news/on-leadership/wp/2016/09/06/this-software-startup-can-tell-your-boss-if-youre-looking-for-a-job-2/

    I am terrible at discussions of abstract ideas, but I enjoy them. I have a glancing interest in the concept of free will, but am fascinated by problems of temporality.

    • I agree “meat” is provocative. Wright was using it dismissively; I’m using it rather than “biological” because “biological robot” has an air of plausibility to it that “meat” doesn’t have.

  7. I agree “meat” is provocative. Wright was using it dismissively,

    In much the same way a believer in the luminiferous aether would sneer at waves in a vacuum, or a geocentricist would sneer at the idea of the Earth spinning without us feeling it.
