Friday, May 18, 2012

Stupid Enough to be Responsible?

A certain Stephen Lawrence, addressing comments I made in the context of the Harris Wars, wrote as follows:
What we are interested in [when considering morality] is the real meaning of could have done otherwise [...] you’ve said why, it’s to do with what we mean by ability, have the power to, capable of, could. And it’s to do with [...] evaluating options and act on the bases of the evaluation. No need for unnecessary complication at all. Real randomness [...] can’t possibly make us deserving of blame, reward, shame, punishment and so on. And we can’t be deserving without it. Responsibility must be compatible with determinism or else it is a lie.
[...] what would you rather? Your decisions to depend upon the reasons that you have the desire set you do as well as the desire set? Or just the desire set, there due to indeterminism? [...] when you bring indeterminism to your computers or rather pseudo randomness, you place it very carefully somewhere, or else the thing would be utterly useless. [...] we have another perfectly good answer to why it’s a struggle to get computers to behave like us. Because we are much more complex.
Stephen Lawrence, 18/05/2012 (Jerry Coyne on free will, Talking Philosophy)
To begin with, I find it strange to be asked what I would rather causality be like. Should I not prefer neither of the presented options, but simply that causality be such that effects always benefit me? What I would prefer is completely irrelevant. I also find it bizarre to appeal to simplicity as if there were a rule that we need only consider whatever is convenient for maintaining moral simplicity. Occam's razor does not state that the simpler theory is always true. The principle is a guide to where we should look for answers first. But if anecdotal evidence points to a more complex picture, that's where we must head.

Stephen's comment that human complexity is what distinguishes us from computers is unhelpful. We know that we are more complex than the motherboards and CPUs that make up an electronic device. But what is it that renders us more complex? Is it just that there is more logic and more opportunity for "mechanical" bugs in humans? Or do we differ in some more fundamental way? I'm suggesting it's the latter.

It is indeed true, as Stephen suggests, that I use pseudo-randomness carefully in my code. Code that relies on a pseudo-random number generator (PRNG) comes with a known and morally interesting conundrum. If I deliberately introduce pseudo-randomness to make an airplane work well in most circumstances, then is it acceptable that in some rare circumstance it causes a crash and kills people? If on investigation it turns out the crash could have been avoided had the PRNG output anything lower than 0.7854 on a scale of 0 to 1, how will we react? Will it do to say that the PRNG had statistically saved thousands of lives before that?
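To make the worry concrete, here is a minimal sketch of the kind of PRNG-gated decision I have in mind. Everything here is hypothetical – the names, the seed and the threshold are illustrative, not taken from any real avionics system:

```python
import random

CRITICAL_THRESHOLD = 0.7854  # the hypothetical cut-off from the scenario above

def choose_adjustment(rng: random.Random) -> str:
    """Pick between two control strategies using a pseudo-random draw."""
    draw = rng.random()  # uniform value in [0, 1)
    if draw < CRITICAL_THRESHOLD:
        return "strategy_a"  # the branch that, in the scenario above, avoids the crash
    return "strategy_b"      # the branch that, in that rare circumstance, proves fatal

# If the seed was logged, investigators can replay the draw after the fact:
rng = random.Random(42)  # illustrative seed
print(choose_adjustment(rng))
```

The point is that the randomized choice is there because it improves behavior on average, yet any single draw can, after the fact, be singled out as the difference between life and death.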

We do not seem to want computers to be like us, free and capable of error. But is it possible that our intelligence is fundamentally related to our capacity to err? I'm going to assume that moral competence is associated with intelligence. After all, we don't put pigs on trial! And not even crows, orcas or chimpanzees, even though they are among the smarter non-humans. [1] If we assume this integral relationship between intelligence and moral competence, then it would seem obvious that ethicists should ask what intelligence is. Many assume and keep insisting that intelligence is equatable with rationality. What I'm suggesting is that it's not. At least not entirely. That said, if I understood the exact "mechanics" of intelligence, I'd be less a researching developer and more a birth nanny of non-biological babies. Why do we really quibble about free will in the context of morality? What we're really quibbling about is how it's possible for humans to make good decisions.

This has been a cornerstone of all modern law so far: are you competent enough to stand trial? So competence is what ethicists must study, not really the "freedom" part of free will. However, it's not unreasonable to claim that our "freedom" is what allows us to be intelligent, sensible beings and hence morally competent. Assume for a moment that some amount of pseudo-randomness makes for better software. What a PRNG does is allow a computer to be more free. The variables that rely on the PRNG are not fixed. It could be that the permissible ranges of the values fluctuate depending on how well the software that uses them performs. When the software starts off, the ranges are wide. As the software matures, the values get more and more constrained [2].
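A minimal sketch of what I mean, under my own simplifying assumptions (the fitness function and all names here are purely illustrative):

```python
import random

def performance(value: float) -> float:
    # Illustrative stand-in for "how well the software performs" with this
    # value; in a real system this would be a measured outcome.
    return -abs(value - 0.6)

low, high = 0.0, 1.0                     # start with a wide permissible range
best, best_score = None, float("-inf")

for generation in range(100):
    candidate = random.uniform(low, high)   # the PRNG's "free" pick within the range
    score = performance(candidate)
    if score > best_score:
        best, best_score = candidate, score
        # The software matures: narrow the range around what worked well.
        width = (high - low) * 0.8
        low, high = max(0.0, best - width / 2), min(1.0, best + width / 2)

print(f"settled on {best:.3f}, range now [{low:.3f}, {high:.3f}]")
```

This is the sense in which the footnote below calls such code a type of evolutionary algorithm: early freedom to wander, later constraint by accumulated experience.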

The baby is growing up; the baby is becoming a (wo)man. Isn't it peculiar how long humans stay helpless and cuddly? And how knuckle-headed teenagers can be in their experimentation? Maybe the insanity of freedom and the capacity for error have something to do with our great intelligence, sensibility and moral competence. Essentially, we couldn't be intelligent and morally competent if we couldn't, on occasion, be profoundly stupid.


1.^ At least not any more: Wikipedia: Animals on trial.
2.^ We call such code a type of evolutionary algorithm.

Saturday, May 5, 2012

The Harris Wars

The Harris Wars continue at Talking Philosophy. The virulent civil conflict focuses on to what extent it can be said that we have free will, and on how much responsibility we have for our own actions. The Harris camp goes all the way and categorically declares free will – yes, you got it – an illusion. Are they in any way justified in making this bold move? Is, as Sam Harris puts it, "the only philosophically respectable way of endorsing free will to be a compatibilist – because we know that determinism, in every sense relevant to human behavior, is true" [1] and "compatibilism amounts to nothing more than an assertion of the following creed: A puppet is free as long as he loves his strings" [2]?


If determinism means that states get determined, then I'm a determinist all the way. How could I not be? Not being a determinist would be the same as disbelieving that there is a present and a past. If determinism means that everything is predetermined, then goodbye, and I'm off the bandwagon to organize the opposition. What does this PRE-determined mean? From whose perspective is anything PRE-determined? The demiurge Laplace? You hard determinists out there, do you know him? How is it in any way useful to say that Laplace predetermined what was going to happen? Such (pre)determinism is like saying there is no present, past or future, only an eternally static construct.

Such an eternal construct may exist in our thoughts in the same way people wield the word God as if they knew or could know what God intends. Yes, conceptually existing like some finger pointing into the deep darkness. But beyond a few yellow bricks fading very quickly into the distance, there is very little to these concepts. What is this eternally static thingymajig? The only language I can imagine being useful here is talk of all possible worlds. Note the use of the word possible. What does possible mean? At the very most it means epistemically uncertain. That is to say, it is as yet unknown whether a Person A and a Person B will try to save a drowning child. [3]
Unknown to whom? Obviously it has to be someone who considers what these two people will do, someone whose thoughts attend to the situation.

It seems best to assume that the someone is ourselves, the pensive and passive philosophizing jerk standing by the shore, callously watching the scene unfold. Why should we assume it's Laplace? Who is he anyway? Does he have red hair and a wily beard, and does he wave a star-studded wand? Does he like his tea with milk in the morning? What really matters is how well we can guess what Person A and Person B will do. We weigh the possibilities. At our disposal is everything we have learnt from our limited piece of spacetime. On reflection we could perhaps say that – though we may not be able to guess what will happen next – what will happen has already been predetermined by something unknown (though even "already" is a strange notion here). But then please explain to me, Harris camp, how is this any different from saying "Only God knows"? To me this is about as useful as saying, "So, it's raining, isn't it". Yep, it's raining alright.

If we agree that it's pretty useless to talk about what Laplace can know, then we are left with what we can know. But to answer that, we have to determine (pun intended) what we – or the more directly self-referential I – means. This is where I suspect we get into serious disagreement. Sam Harris obviously thinks that I means only the conscious process (at least with respect to free will):

[Findings by Benjamin Libet and others] are difficult to reconcile with the sense that we are the conscious authors of our actions. [...] The distinction between "higher" and "lower" terms in the brain offers no relief: I, as the conscious witness of my experience, no more initiate events in my prefrontal cortex than I cause my heart to beat. 
[...]

And it is perfectly obvious that I, as the conscious witness of my experience, am not the deep cause of it.

    Sam Harris, 2012 (Free Will, Free Press, pp.9, 44)

Here we have that strange notion of being the author of what authors. So some X has to initiate the activity in the prefrontal cortex – activity which is the only sensible definition of what X is with any certitude? It makes NO sense! And I emphatically repeat: Sam Harris has made NO meaningful observation here whatsoever. If this is the point of his book Free Will, then it is indeed sad that we should be forced to spend so much time discussing it because of his popularity. I'm with Tim Dean here. More intelligent people definitely need to write engaging books on the topic that don't require 10-plus years of learning philosophical jargon. This cannot be what making philosophy more digestible for a wider audience is about! [4]

My understanding of I is the bodily mass that our neuro-endocrine system has proximal sensory control over. When a rubber mallet hits below my knee I react and I sense it. When a car comes zooming down the wrong lane, I swerve. Have you ever lost proper sensory control of some part of yourself over an extended period? It's freaky, freaky. If someone is paralyzed from tip to toe, is it they who can't move or is it their body that can't move? Do you Harris campers deny that determinations are made by what constitutes Person A and Person B at the proximal point when the child is about to drown? And that Person A and Person B have the power to attempt to effectuate their decisions? If not, then when was the determination made, and by whom?

Now let's remove ourselves one step by switching to the pensive and passive jerk observing the scene from a distance. What can this Observer know about what's happening down by the lake? Can the Observer determine what Person A and Person B will do? Nope. They can only guess what will happen. The accuracy of each guess will vary. The best a priori knowledge (knowledge before the fact) that the Observer can hope for is a "very likely". Again, what is the point of saying that even if the Observer does not know what has been determined, it has in fact been determined? It's nonsensical! How can the determination occur before the determination?

For there to be any sensible difference between determinism and indeterminism, we must be making some statement about how much a priori knowledge we can have of the future and how much a posteriori understanding (understanding after the fact) we can have of the past. And in the context of the Harris Wars we are focusing on knowledge about human behavior. Why did Person A not save the child? Was Person A justified in not saving the child, given what we know? The more knowledge we have, the more precise we can be in how we prevent it from happening again. Sam Harris is right that if we firmly believe, beyond any reasonable doubt, that a tumor caused someone to commit a heinous act, then the just thing is to remove the tumor and shorten the time we keep the person constrained from acting as freely as society allows under usual conditions (consider this the observation time to determine the likelihood of remission). But if we don't know what caused the heinous behavior, then we are left only with the option of containment (i.e. incarceration).

So how much can our Observer usually know about why Person A didn't save the child?

  • Nearly everything [1]
  • Many things [0.75]
  • Some things [0.5]
  • Few things [0.25]
  • Nearly nothing [0]

And how much better can our Observer understand, after the fact, why Person A didn't save the child?

  • Much better [1]
  • Better [0.75]
  • Equally well [0.5]
  • Less [0.25]
  • Much less [0]

I think that by answering these two questions we will have a clearer picture of how far apart we stand on the Knowability of Human Motivation scale. My answers are Some things [0.5] and Less [0.25]. Averaging the two gives me a score of [0.375]. Will my answers vary with time? I'm not sure, but I conjecture that if they do, it will be very, very slowly, because of evolutionary principles – too slowly for me to still be alive when a workable extreme national union becomes possible. There is an evolutionary incentive for Person A to remain mysterious to our Observer (who can jump into predatory action on a moment's notice). Unless we become some truly symbiotic creature whose neuro-endocrine systems are completely interconnected and whose individuals cannot survive detached from the hive for long, I can't see it ever happening. My answer affects how I believe society should be structured. If my answer were [0.75], my society would be structured differently. This relates to my argument about extreme national union in my previous post.
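For the playful reader, here is a tiny sketch of the scoring, assuming (as I do above) that the two answers are simply averaged; the option labels and weights are taken from the two lists above:

```python
# Weights taken from the two questions above.
KNOWLEDGE = {"Nearly everything": 1.0, "Many things": 0.75, "Some things": 0.5,
             "Few things": 0.25, "Nearly nothing": 0.0}
UNDERSTANDING = {"Much better": 1.0, "Better": 0.75, "Equally well": 0.5,
                 "Less": 0.25, "Much less": 0.0}

def knowability_score(knowledge: str, understanding: str) -> float:
    """Average the two answers on the Knowability of Human Motivation scale."""
    return (KNOWLEDGE[knowledge] + UNDERSTANDING[understanding]) / 2

print(knowability_score("Some things", "Less"))  # 0.375 – my own score above
```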

So where would you score on the scale? Freedom Fighter or Borg?


1.^ Sam Harris, 2012 (Free Will, Free Press, pp.16)
2.^ Sam Harris, 2012 (Free Will, Free Press, pp.20)

3.^ Russell Blackford introduced this example into the discussions: "Say a child drowns in a pond in my close vicinity, and I stand by allowing this to happen. The child is now dead, and the child’s parents blame me for the horrible outcome. Will it cut any ice if I reply, 'I couldn’t have acted (or couldn’t have chosen) otherwise?' No. They are likely to be unimpressed." Russell Blackford, 24.03.2012 (Talking Philosophy, Jerry Coyne on free will)

4.^ In all fairness to Sam Harris, he's not the only one to make an author-of-authoring-like argument. Galen Strawson makes a similar claim when arguing that you cannot be ultimately morally responsible: "Interested in free action, we’re particularly interested in actions performed for reasons (as opposed to reflex actions or mindlessly habitual actions) [...] But one can’t really be said to choose, in a conscious, reasoned, fashion, to be the way one is in any respect at all, unless one already exists, mentally speaking, already equipped with some principles of choice, "P1" — preferences, values, ideals — in the light of which one chooses how to be [...] But for this to be so one must already have had some principles of choice P2, in the light of which one chose P1." Galen Strawson, 1994 ("The Impossibility of Moral Responsibility", Philosophical Studies 75, Kluwer Academic Publishers, pp.6)