
Thursday, October 18, 2012

Time, How Confusing Indeed!

I was discussing dimensions with my youngest son Pascal (age 8). He said that a 3-dimensional world must be as confusing for a 2-dimensional being as a 4-dimensional world is for a 3-dimensional being like us. True, I said, but did you know that we really live in a 4-dimensional world? Do you know what the fourth dimension is? Pascal thought about it but couldn't figure out what the fourth dimension might be.

Time, I said. Time is the fourth dimension. Without time I don't know that we would experience anything. It really is the first dimension, not the fourth and last.

"Time, yeah", said my son. He paused.

"Time is very confusing!" he exclaimed. "How did time get created? How could it have started without time existing".

Yes, indeed. Very confusing. Eternally confusing. When Pascal presented his older brother Julien (age 13) with this conundrum, his brother said that there are certain things that we cannot understand. That we will never understand.

Is Julien right? Does time demarcate the intrinsic limits of epistemology?


 

Friday, May 18, 2012

Stupid Enough to be Responsible?

A certain Stephen Lawrence wrote as follows in addressing comments I made in the context of the Harris Wars:
What we are interested in [when considering morality] is the real meaning of could have done otherwise [...] you’ve said why, it’s to do with what we mean by ability, have the power to, capable of, could. And it’s to do with [...] evaluating options and act on the bases of the evaluation. No need for unnecessary complication at all. Real randomness [...] can’t possibly make us deserving of blame, reward, shame, punishment and so on. And we can’t be deserving without it. Responsibility must be compatible with determinism or else it is a lie.
[...] what would you rather? Your decisions to depend upon the reasons that you have the desire set you do as well as the desire set? Or just the desire set, there due to indeterminism? [...] when you bring indeterminism to your computers or rather pseudo randomness, you place it very carefully somewhere, or else the thing would be utterly useless. [...] we have another perfectly good answer to why it’s a struggle to get computers to behave like us. Because we are much more complex.
Stephen Lawrence, 18/05/2012 (Jerry Coyne on free will, Talking Philosophy)
To begin with, I find the question of what I would rather causality be like strange. Should I not prefer neither of the presented options but simply that causality be such that effects always benefit me? It's completely irrelevant what I would prefer. I also find it bizarre to appeal to simplicity, as if there were some rule that we need only consider whatever is convenient for maintaining moral simplicity. Occam's razor does not state that the simpler theory is always true. The principle is a guide to where we should look for answers. But if the evidence points to a more complex picture, that's where we must head.

Stephen's comment about human complexity being what distinguishes us from computers is unhelpful. We know that we are more complex than the motherboards and CPUs that make up an electronic device. But what is it that renders us more complex? Is it just that there is more logic and opportunity for "mechanical" bugs in humans? Or is it that we differ in some more fundamental way? I'm suggesting it's the latter.

It is indeed true, as Stephen suggests, that I carefully use pseudo-randomness in my code. Code that uses a pseudo-random number generator (PRNG) comes with a known and morally interesting conundrum. If I deliberately introduce pseudo-randomness to make an airplane work well in most circumstances, then is it acceptable that in some rare circumstance it causes crashes and kills people? If on investigation it turns out the crash could have been avoided had the PRNG output anything lower than 0.7854 on a scale of 0 to 1, how will we react? Will it do to say that the PRNG statistically saved thousands of lives before then?
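To make the conundrum concrete, here is a minimal sketch of my own (not anything from a real avionics system; the 0.7854 threshold is just the number from the example above). The seeding is the morally interesting part: an investigator can replay the exact sequence and show that the crash "could have been avoided" had the draw come out lower.

    import random

    # Hypothetical boundary from the example above. In real software the
    # failure region would be vanishingly small; it is exaggerated here
    # (about 21% of draws) so that a short run exhibits it.
    DANGER_LINE = 0.7854

    def control_decision(rng: random.Random) -> str:
        """One pseudo-random control decision per flight (toy model)."""
        draw = rng.random()  # uniform in [0.0, 1.0)
        return "safe" if draw < DANGER_LINE else "crash"

    # A fixed seed makes the "randomness" fully replayable after the fact.
    rng = random.Random(2012)
    flights = [control_decision(rng) for _ in range(10_000)]
    print(flights.count("crash"), "crashes in", len(flights), "flights")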



We do not seem to want computers to be like us, free and capable of error. But is it possible that our intelligence is fundamentally related to our capacity to err? I'm going to assume moral competence is associated with intelligence. After all, we don't put pigs on trial! And not even crows, orcas or chimpanzees, even if they are some of the smarter non-humans. [1] If we assume this integral relationship between intelligence and moral competence, then it would seem obvious that ethicists should ask what intelligence is. Many assume and keep insisting that intelligence is equatable with rationality. What I'm suggesting is that it's not. At least not entirely. That said, if I understood the exact "mechanics" of intelligence I'd be less a researching developer and more a birth nanny of non-biological babies. Why do we really quibble about free will in the context of morality? What we're really quibbling about is how it's possible for humans to make good decisions.

This has been a cornerstone of all modern law so far: are you competent enough to stand trial? So competence is what ethicists must study, not really the "freedom" part of free will. However, it's not unreasonable to claim that our "freedom" is what allows us to be intelligent, sensible beings and hence morally competent. Assume for a moment that some amount of pseudo-randomness makes for better software. What a PRNG does is allow a computer to be more free. The variables that rely on the PRNG are not fixed. It could be that the permissible ranges of the values fluctuate depending on how well the software that uses them performs. When the software starts off, the ranges are wide. As the software matures, the values get more and more constrained [2].
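As a rough sketch of what I mean (a toy evolutionary loop of my own invention – see footnote 2 – with an arbitrary objective and an arbitrary shrink rate), here is a parameter whose permissible range contracts as performance improves:

    import random

    def fitness(x: float) -> float:
        """Toy objective: the closer x is to 0.3, the better the software 'performs'."""
        return -abs(x - 0.3)

    rng = random.Random(7)
    low, high = 0.0, 1.0  # the software starts off with a wide permissible range

    for generation in range(12):
        # Draw candidate values freely within the current range.
        candidates = [rng.uniform(low, high) for _ in range(50)]
        best = max(candidates, key=fitness)
        # As the software matures, constrain the range around what worked.
        width = (high - low) * 0.7  # arbitrary 30% shrink per generation
        low = max(0.0, best - width / 2)
        high = min(1.0, best + width / 2)
        print(f"gen {generation:2d}: range [{low:.3f}, {high:.3f}]")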

The baby is growing up, the baby is becoming a (wo)man. Isn't it peculiar how long humans stay helpless and cuddly? And how knuckle-headed teenagers can be in their experimentation? Maybe the insanity of freedom and capacity for error has something to do with our great intelligence, sensibility and moral competence. Essentially, we couldn't be intelligent and morally competent if we couldn't on occasion be profoundly stupid.


1.^ At least not any more: Wikipedia: Animals on trial.
2.^ We call such code a type of evolutionary algorithm.

Saturday, May 5, 2012

The Harris Wars

The Harris Wars continue at Talking Philosophy. The virulent civil conflict focuses on to what extent it can be said that we have free will, and how much responsibility we have for our own actions. The Harris camp goes all the way and categorically declares free will – yes, you got it – an illusion. Are they in any way justified in making this bold move? Is, as Sam Harris puts it, "the only philosophically respectable way of endorsing free will to be a compatibilist – because we know that determinism, in every sense relevant to human behavior, is true" [1] and "compatibilism amounts to nothing more than an assertion of the following creed: A puppet is free as long as he loves his strings" [2]?


If determinism means that states get determined, then I'm a determinist all the way. How could I not be? Not being a determinist would be the same as disbelieving that there is a present and a past. If determinism means that everything is predetermined, then goodbye, and I'm off the bandwagon to organize the opposition. What does this PRE-determined mean? From whose perspective is anything PRE-determined? The demiurge Laplace? You hard determinists out there, do you know him? How is it in any way useful to say that Laplace predetermined what was going to happen? Such (pre)determinism is like saying there is no present, past nor future, only an eternally static construct.

Such an eternal construct may exist in our thoughts the same way people wield the word God as if they knew or could know what God intends. Yes, conceptually existing like some finger pointing into the deep darkness. But beyond a few yellow bricks fading very quickly into the distance, there is very little to these concepts. What is this eternally static thingymajig? The only language I can imagine that is useful here is talk of all possible worlds. Note the use of the word possible. What does possible mean? At the very most it means epistemically uncertain. That is to say that it is as of yet unknown whether a Person A and a Person B will try to save a drowning child. [3]
Unknown to whom? Obviously it has to be someone who considers what these two people will do, someone whose thoughts attend to the situation.

It seems best to assume that the someone is ourselves, the pensive and passive philosophizing jerk standing by the shore callously watching the scene unfold. Why should we assume it's Laplace? Who is he anyway? Does he have red hair and a wily beard, and does he wave a star-studded wand? Does he like his tea with milk in the morning? What really matters is how well we can guess what Person A and Person B will do. We weigh the possibilities. At our disposal is everything we have learnt from our limited piece of spacetime. On reflection we could perhaps say that – though we may not be able to guess what will happen next – what will happen has already been predetermined by something unknown (though even already is a strange notion here). But then please explain to me, Harris camp, how this is any different from saying "Only God knows"? To me this is about as useful as saying, "So, it's raining, isn't it". Yep, it's raining alright.

If we agree that it's pretty useless to talk about what Laplace can know, then we are left with what we can know. But to answer that, we have to determine (pun intended) what we – or the more directly self-referential I – means. This is where I suspect we get into serious disagreement. Sam Harris obviously thinks that I means only the conscious process (at least with respect to free will):

[Findings by Benjamin Libet and others] are difficult to reconcile with the sense that we are the conscious authors of our actions. [...] The distinction between "higher" and "lower" terms in the brain offers no relief: I, as the conscious witness of my experience, no more initiate events in my prefrontal cortex than I cause my heart to beat. 
[...]

And  it is perfectly obvious that I, as the conscious witness of my experience, am not the deep cause of it.

    Sam Harris, 2012 (Free Will, Free Press, pp. 9, 44)

Here we have that strange notion of being the author of what authors. So some X has to initiate the activity in the prefrontal cortex, activity which is the only sensible definition of what X is with any certitude? It makes NO sense! And I emphatically repeat: Sam Harris has made NO meaningful observation here whatsoever. If this is the point of his book Free Will, then it is indeed sad that we should be forced to spend so much time discussing it due to his popularity. I'm with Tim Dean here. More intelligent people definitely need to write engaging books on the topic that don't require 10-plus years of learning philosophical jargon. But this cannot be what making philosophy more digestible for a wider audience is about! [4]

My understanding of I is the bodily mass that our neuro-endocrine system has proximal sensory control over. When a rubber mallet hits below my knee, I react and I sense it. When a car comes zooming down the wrong lane, I swerve. Have you ever lost proper sensory control of some part of yourself over an extended period? It's freaky, freaky. If someone is paralyzed from top to toe, is it they that can't move or is it their body that can't move? Do you Harris campers deny that determinations are made by what constitutes Person A and Person B at the proximal point when the child is about to drown? And that Person A and Person B have the power to attempt to effectuate their decisions? If not, then when was the determination made and by whom?

Now let's remove ourselves one step by switching to the pensive and passive jerk observing the scene from a distance. What can this Observer know about what's happening down by the lake? Can the Observer determine what Person A and Person B will do? Nope. They can only guess what will happen. The accuracy of each guess will vary. The best a priori knowledge (knowledge before the fact) that the Observer can hope for is a "very likely". Again, what is the point of saying that even if the Observer does not know what has been determined, it has in fact been determined? It's nonsensical! How can the determination occur before the determination?

For there to be any sensible difference between determinism and indeterminism, we must be considering some statement about how much a priori knowledge we can have about the future and a posteriori understanding of the past (understanding after the fact). And in the context of the Harris Wars we are focusing on knowledge about human behavior. Why did Person A not save the child? Was Person A justified in not saving the child given what we know? The more knowledge we have, the more precise we can be in how we prevent it from happening again. Sam Harris is right that if we firmly believe beyond any reasonable doubt that a tumor caused someone to commit a heinous act, then the just thing is to remove the tumor and shorten the time we keep the person constrained from acting as freely as society allows under usual conditions (consider this the observation time to determine the likelihood of remission). But if we don't know what caused the heinous behavior, then we are left only with the option of containment (i.e. incarceration).

So how much can our Observer usually know about why Person A didn't save the child?

  • Nearly everything [1]
  • Many things [0.75]
  • Some things [0.5]
  • Few things [0.25]
  • Nearly nothing [0]

And how much better can our Observer understand, after the fact, why Person A didn't save the child?

  • Much better [1]
  • Better [0.75]
  • Equally well [0.5]
  • Less [0.25]
  • Much less [0]

I think that by answering these two questions we will have a clearer picture of how far apart we stand on the Knowability of Human Motivation scale. My answers are Some things [0.5] and Less [0.25]. This gives me a score of [0.375]. Will my answers vary with time? I'm not sure, but I conjecture that if they do, they will change very, very slowly because of evolutionary principles – too slowly for me to still be alive when a workable extreme national union becomes possible. There is an evolutionary incentive for Person A to remain mysterious to our Observer (who can jump into predatory action on a moment's notice). Unless we become some truly symbiotic creature whose neuro-endocrine systems are completely interconnected and whose individuals cannot survive detached from the hive for long, I can't see it ever happening. My answer affects how I believe society should be structured. If my answer was [0.75], my society would be structured differently. This relates to my argument about extreme national union in my previous post.
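For transparency, the score is nothing fancier than the arithmetic mean of the two answers (equal weighting of the two questions is my own assumption):

    knowledge = 0.5       # "Some things"
    understanding = 0.25  # "Less"
    score = (knowledge + understanding) / 2  # equal weighting assumed
    print(score)  # 0.375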

So where would you score on the scale? Freedom Fighter or Borg?


1.^ Sam Harris, 2012 (Free Will, Free Press, p. 16)
2.^ Sam Harris, 2012 (Free Will, Free Press, p. 20)

3.^ Russell Blackford introduced this example into the discussions: "Say a child drowns in a pond in my close vicinity, and I stand by allowing this to happen. The child is now dead, and the child’s parents blame me for the horrible outcome. Will it cut any ice if I reply, 'I couldn’t have acted (or couldn’t have chosen) otherwise?' No. They are likely to be unimpressed." Russell Blackford, 24.03.2012 (Talking Philosophy, Jerry Coyne on free will)

4.^ In all fairness to Sam Harris, he's not the only one to make an author-of-authoring-like argument. Galen Strawson makes a similar claim when arguing that you cannot be ultimately morally responsible: "Interested in free action, we're particularly interested in actions performed for reasons (as opposed to reflex actions or mindlessly habitual actions) [...] But one can't really be said to choose, in a conscious, reasoned, fashion, to be the way one is in any respect at all, unless one already exists, mentally speaking, already equipped with some principles of choice, "P1" — preferences, values, ideals — in the light of which one chooses how to be [...] But for this to be so one must already have had some principles of choice P2, in the light of which one chose P1." Galen Strawson, 1994 ("The Impossibility of Moral Responsibility", Philosophical Studies 75, Kluwer Academic Publishers, p. 6)

Monday, April 30, 2012

Soft Fascism & The Olsson Test

It's not possible to distinctly separate a belief system into its epistemological and political components. If there are internal inconsistencies, then the political components have to give way to the epistemological base. For example, if a belief system holds that people are born with varying and measurable aptitudes for knowledge acquisition, then it could not reasonably insist that all children should receive exactly the same education. Therefore, the range of possible political systems can be extracted from a person's fundamental view of the knowable and of how things can and must be made known.

Sam Harris advocates a very hard form of determinism and has a seemingly high degree of confidence in what we can (or will be able to) empirically know about the causes of specific human behaviors. He also clearly makes the point that others can often better judge what determines why one acts as one does [1]. Essentially, individuals are blind to knowledge about themselves that we are privy to. This is quite the same as saying that the individual is extraordinarily weak and can best overcome their weakness by seeking strength in groups. Political systems predominantly based on such an assumption have a name: fascism. The original Italian symbol for fascism is even a bundle of fragile rods tied into an unbreakable whole.



Saying that Sam Harris's views on knowledge and the human condition imply fascism is not the same as saying that he is wrong. For all I know, my strong federalist views may ultimately be incorrect. 20th-century Western fascism failed, its last enclave collapsing with the death of Generalissimo Franco in 1975. Yet to conclude that this means fascism is forever dead and proven faulty would be another mistake. Clearly the systems developed in Europe at the time were not stable and productive enough to survive across multiple generations. But it might just be that something was missing from how these systems were instituted.

Good ideas can be poorly implemented, and sometimes they fail because some important technological innovation has not yet occurred. This makes me think of the oft-adulated (but quite imperfect) systems of the ancient Greek city states. Modern democratic republics like to trace their roots to ideas formulated in these ancient unstable times. It's conceivable that some distant generation will similarly mythologize what happened in Fiume under Gabriele d'Annunzio in 1919, and the eventual 20-year rule of Benito Mussolini over Italy. Though I very strongly doubt it, perhaps Mussolini's mistake was simply to associate himself with the delusional madmen of the Nazi regime and – in stark contrast with his initial beliefs – endorse some of their craziest and most monstrous ideas.

The future fascism implied by Sam Harris's epistemological grounding would presumably not endorse the same crude and brutally violent methodology espoused by Mussolini. Yet it still implies a form of inherent violence against the individual, since the individual cannot be relied upon to understand their own motivations and the consequences of their actions. To justify protecting our corporeal sovereignty, such soft fascism would have to construct an elaborate argument around the socially erosive effects of lacking any right to determine for ourselves how our bodies are to be used. Sam Harris has little problem with dismissing protective constitutional measures like the Fifth Amendment. So I assume he is prepared to quite radically encroach on our corporeal sovereignty.

The possibility remains that Sam Harris is correct. He argues his position because he claims that so far it's been vindicated by scientific evidence such as the Libet experiment and his own fMRI research. If he is indeed right, it seems to me that we would have to submit to instituting some form of soft fascism. But the evidence has to be rock solid. We have seen the extensive devastation fascism can cause. If we are to go down that path again, we had better make sure it's the right path and not base our course on overextended evidentiary indications.

Therefore, I have created a test to which those who, like Sam Harris, make bold claims about their ability to understand human behavior should have to submit their predictive methodology – and which that methodology must successfully pass. Call it the Test on the Knowability of Human Predetermination, or something like that. The test goes as follows (a toy simulation sketch follows the list):
  1. Take two reasonably exhaustive demographic samples of the world's human population.
  2. Subject individuals in both groups – call them Team A and Team B – to the process by which their individual actions can be predicted and disseminated in near real time.
  3. Make sure that everyone in both teams has direct access to the disseminated predictions about all participants, including their own.
  4. Create a potentially constrained but neither discrete nor turn-based game with objectives associated with actions for which the process can make predictions. The process should be able to predict which team will perform a winning action prior to any team actually winning.
  5. This game now has to be played repeatedly over a sufficiently long period of time (say half a year). The process has to continuously make accurate predictions about how the players are going to try to win. A prediction has to occur a sufficient amount of time ahead of an actual move to allow the other side to respond at least once prior to a winning move (say 500 milliseconds).
  6. If the process manages to continue making extremely accurate predictions towards the end of the test, it will be considered to have passed. If the accuracy decreases with time, the test will have failed.
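Here is a toy harness for the test (a sketch only; it encodes my conjecture – that players who can see their own predictions adapt to defy them – as an assumed 2% weekly erosion of predictor skill, rather than deriving it from anything):

    import random

    rng = random.Random(0)

    def round_predicted(skill: float) -> bool:
        """One round: did the process correctly predict the winning move?"""
        return rng.random() < skill

    skill = 0.95  # the process starts out making near-perfect predictions
    accuracies = []
    for week in range(26):  # roughly half a year of repeated games
        rounds = [round_predicted(skill) for _ in range(1000)]
        accuracies.append(sum(rounds) / len(rounds))
        skill *= 0.98  # assumed erosion: players adapt to defy their predictions

    print(f"week 1: {accuracies[0]:.1%}, week 26: {accuracies[-1]:.1%}")
    # Pass criterion from step 6: accuracy must not decrease over the test.
    print("passed" if accuracies[-1] >= accuracies[0] else "failed")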
If the prediction process works, it does not ultimately seem to me that the game should be winnable. An unwinnable game would be strong evidence that the predictive powers of the process are near 100% and that our inner causal chains can be objectively understood. But my conjecture is that any process will inevitably fail at some point and lead to a winner. I believe evolutionary principles predict my conjecture: a living being that is locked into a predictable state is prone to predation (ultimate individual loss), whether biological or social.

The type of soft fascism implied by Sam Harris's views only functions when there is near-perfect cooperative union between almost all individuals. Such a state is necessary given the level of predictability about human behavior that Sam Harris foresees in the near future. Such a state would also seem to imply the near impossibility of defying any prediction about whether Team A or Team B is going to win. Winning (in the zero-sum sense) becomes a meaningless term. There is no longer a me or a you, only a perfect we.


1. "By merely glancing at your face or listening to your tone of voice, others are often more aware of your state of mind and motivations than you are." Sam Harris (Free Will, page 7)

Saturday, April 14, 2012

Thought Experiment: Alien Responsibility

Another firestorm on the topic of Free Will has been raging on Talking Philosophy. As always, the discussions center on how responsible people are for their actions. Can we ultimately hold them to account? Does it, as Russell Blackford suggests, make any sense to distinguish between what a person could have done versus what they actually did? Jerry Coyne doesn't think so. The Laws of Physics are what they are. They predetermine what you eat, what you want, who you love, what your children will be like and how you will die. "Choice" is a mechanical process with a precise result.
This type of assault on Free Will is common, Sam Harris being another well-known and strong proponent. If they are right – if, as Jerry Coyne puts it, the sort of free will where you could have chosen otherwise is ruled out, simply and decisively, by the laws of physics – then what does this mean? Several proponents of this strict form of determinism have suggested that it implies we must treat people rather than punish them. This seems like an attractive proposition, a form of compassion. But I think there is a darker side overlooked by those who advocate that we should, in essence, see deeper causes everywhere.

The notion that everything has its roots in other causes is a very old idea. Ex nihilo nihil fit. Nothing comes from nothing. In more recent times we have Spinoza, the 17th-century philosopher who struggled with concepts of God and was consequently expelled from his community. He came to the conclusion that God, the eternal, was the only causa sui, the only cause in itself. Everything else had, as rational idealists like him believed, a sufficient reason for why it was as it was. God was becoming equivalent to the Universe as such. It was no longer a force intervening in someone's life. It was everything that had and would ever happen, including the substrate in which it occurred (or, more precisely, in which it was fixed).

The implications of such thinking are far reaching. The universe inhabited by humans becomes an eternal and static construct. Those who believe in this type of universe begin slipping into language where much of what we experience is described as an illusion. We have no free will. We choose nothing. Instead our mechanics – our biochemistry and the environment acting on it – produce a distinct outcome. Though we go through life and whatever happens to us happens because of what we do, we are sort of puppets pulled by the strings of Equation E, the Laws of Physics.

Yet hard determinists tell us we should not despair even if we could never have "chosen" otherwise than whatever we "chose". Even if choice is an illusion, we can liberate ourselves and strive to greater perfection by understanding the necessary reasons and causes for why anything and everything happens.  We can seek to understand Equation E as perfectly as possible. Again this seems like a very attractive proposition. We can liberate ourselves from the fear and hardship that comes of all things unknown. We can become perfect scientists that can reverse engineer and fix anything. We can, in essence, become gods.

The darker side of this view is beginning to reveal itself. Perfect knowledge is possible. We can know Equation E. And those who have Perfect Knowledge should quite easily be able to distinguish between those who, like themselves, have it and those who don't. Society becomes discretely split into two epistemic camps, between what Plato called the Philosopher Kings and all the other fools. If Perfect Knowledge is possible, then it seems like we simply have to accept such a social structure, however dark I or anyone else claim it to be. Reality is reality. We should not fool ourselves just because we're scared by the potential consequences of knowing the truth.

In such a world, the Philosopher Kings have a heightened responsibility over others. They must strive to maintain a perfect society by protecting it against those who threaten it through actions rooted in their imperfections. But it would seem we are already getting into trouble. I imagine there must be some type of extensive and quite invasive test to be accepted into the halls of the Philosopher Kings. It would not be some type of consensus. Remember that Perfect Knowledge is possible. Anyone who has it can be identified through strictly objective means. There is no need for a modern republic with its cumbersome vestiges of voting, representation and negotiation.

Before I illustrate the problem I see through a grander thought experiment, let me present a common example of how we should supposedly approach justice in the kind of world Sam Harris and his like imagine. They often bring up the case of a person who does something bad, after which it turns out the person has some defect (e.g. a tumor) in some area of the brain. And the damaged area has been deemed by some scientist (who exactly is unclear) to be involved in decisions relating to whatever bad act was committed. What should we do? Punish them by throwing them in prison, or treat the defect? They rightly point out that punishment for the sake of deterrence is likely to be ineffective in such cases. The person lacked the capacity to make rational decisions in the first place due to their brain defect.

This type of thinking can be extended to any type of legal case. So if a person finds themselves in court because they did something "bad", we should assume they did it because something is wrong with their body structure and that it can be fixed. The question arises how we should find out what is wrong. The only way we could is by performing some type of invasive medical examination of the person. We seem to be turning the assumption of who is responsible for proving mitigating circumstances on its head. In fact, everything about a person has become a mitigating factor that prosecutors must examine. The legal system subsequently has the responsibility to fix the factors that caused the unwanted behavior.

Perhaps this isn't such a bad thing. The person has presumably already been found guilty. Traditionally we would now severely restrict their "freedom" to act by imprisonment. Why not instead force the culpable to be scientifically examined and receive medical treatment? Why not treat everyone as innocent by reason of mental deficiency? But something that seems to have been overlooked is whether someone was responsible for determining their mental deficiency prior to them committing the bad act. And who that someone would be. Well, it couldn't really be the person themselves, because their failure to realize and treat their deficiency could be part of their defect. The very moment the tumor became detectable, their capacities might already have been sufficiently diminished, thereby making them incapable of rectifying their own flaws.

If the Philosopher Kings want to really create a better society by actually preventing crimes, they will have to exhaustively and invasively examine everyone throughout their lives, including themselves. A conundrum arises that can be expressed in the form of the following question: who is responsible for understanding whom? Are we responsible for understanding you, what you're communicating and what you're capable of doing? Or are you responsible for understanding yourself, making yourself understood and demonstrating your abilities (or lack thereof) to the rest of us? The whole thing seems to be turning into an epistemic issue. Like so many things, the question seems to revolve around what the truth is and who should be considered the authority of reference.

Let's rephrase the question in a more universal way where the answer might become almost self-evident: are we responsible for understanding ourselves? Though it may be unclear whether you are responsible or I am responsible, surely at least one of us must be, and preferably both. This seems like a great solution. After all, good understanding has traditionally been achieved through Socratic dialogue. And good behavior is at the very least an agreement between several parties. And talent must be both demonstrated and sought out.

Epistemic and ethical responsibility is equally distributed in a cooperative network. But what's slipping away here is that this is a world where Perfect Knowledge is achievable. No consensus should be necessary. Truth and falsehood, right and wrong, are purely objective matters. Either you get it or you don't. We must properly and strictly reject the bandwagon fallacy. It's completely irrelevant what the foolish imperfect masses believe. What matters is the determinations (for they are not opinions) of the Philosopher Kings.

To illustrate the epistemic and ethical problem, we can consider a thought experiment I call Alien Responsibility. Imagine that a powerful alien being makes direct contact with us. Call her Klaatu. Her technology is clearly far superior to ours. She has evidently gotten far closer to figuring out Equation E. Although she doesn't rub our faces in it, she seems to think of us as quite primitive and possibly a threat both to ourselves and to others. Fortunately, being as perfect as Klaatu is, she has a means to "cure" us. But the cure would essentially transform humanity into another species more like Klaatu.
For some reason though, Klaatu insists that we must choose for ourselves if this is what we want. Importantly, once the "cure" is deployed it will eventually and inevitably turn every human on Earth into this new species. Humanity will essentially go extinct within a generation or so. So that our consent is truly informed, Klaatu demonstrates her far-reaching understanding of cause and effect to everyone on Earth. She proves that she can predict almost any human behavior under almost any circumstances.

20% of humanity is awestruck and blown away by her near-perfect science. They are ready for the cure. But for some strange reason, 80% of the somewhat spooked masses remain unconvinced. They certainly don't think becoming a new species is the right strategy. So the question is, what should Klaatu do now? Should she even have required humanity's consent?


Klaatu barada nikto?

Monday, April 9, 2012

The End of Philosophy?


A while ago, Colin McGinn suggested that we should rename philosophy. The word philosophy comes from the Greek for "lover of wisdom". He points out that disciplines that were formerly subsumed under philosophy have been very successful in acquiring a distinct identity, in part by acquiring a new name. Foremost amongst them, of course, is science, which was once simply known as "natural philosophy". And whereas science is today treated as a respectable academic pursuit, philosophy is, as McGinn puts it, confused with "assorted gurus, preachers, homeopaths and twinkly barroom advice givers". McGinn's suggestion has raised some eyebrows, from "is he for real or is this just a joke?" to comments like the following:
Why is science always held up as the ultimate intellectual discipline? Philosophy is not science. Its propositions cannot be tested. But more than that, philosophy should not even aspire to be science. In its current form, philosophy can critique science in way that science itself cannot. That alone is no small thing. 
Pam G, Portage MI 
The above comment sprang out at me more serendipitously than anything, because it happened to be the first under Readers' Picks at the NY Times site. I don't usually read the comments there, much less click Readers' Picks. Anyway, the question the comment begs is most simple and striking in our empirical age. If it's true that the claims of philosophy cannot be tested, then how is it of any use at all? If untestable, can it really critique science in any meaningful way? Viewed differently, has science – so-called "natural philosophy" – consumed the whole discipline?

Perhaps, then, what modern philosophy needs is not a change of name. What it needs is to be chucked into the garbage can along with astrology and other notorious disciplines discarded by any serious thinker. Pam G suggests philosophy can "critique science", which is tantamount to saying it's a kind of meta-layer around science. Pam G inadvertently highlights a fundamental problem: what is the metaphysics of metaphysics, metaphysically speaking? If science needs critiquing, does the critique need a critique? Rather than obliterating the question of philosophy's relevance if it lies beyond the testable, this pinpoints how philosophy can make itself irrelevant by going haywire. But before we descend into the bottomless pit of idealism versus realism, let's address what it means to be tested.

At first glance philosophy does seem to be untestable. Isn't this what fundamentally distinguishes science and philosophy? Claims of one can be tested, claims of the other not. But on closer examination this is only true if to test strictly means to empirically test. If to test simply means evaluating the truth-value of a claim, then nothing in the term requires a repetition of our evaluation. In other words, you don't have to repeat the same procedure over and over again to conclude it's probably true. And you don't need to prod the world with a long white stick. The test does not need to be "visual", "auditory" or appeal directly to any of the other senses. A test can be performed by the very processes that constitute us. For lack of better words: we can test if it's possible to even think the thought that a claim seems to beg us to think.

A test could be considered the process of just trying to hold two concepts in thought, and determining whether the process produces a meaningful or nonsensical experience. For example:

A sphere is a cube.

Is this testable? Not if to test means trying to push a square wooden peg into a round metal hole. But using a broader sense of the phrase to test, yes. We conceive SPHERE, and then juxtapose it with CUBE. And then our process... blanks. Or we imagine some weird transformative process by which one ceases to be what it was and becomes the other. The incompatibility of the concepts confirms their absolute conceptual distinctness. A sphere cannot be a cube! Perhaps you can cube a sphere or sphere a cube, but this implies a transformation of one shape into another. Compare this to:

A rhomboid is a parallelogram.

Held in thought, these concepts produce a harmonious merging of the two. One is indeed the other, and the latter can indeed (but not necessarily) be the former. The statement evaluates to true. We do not need further procedures to confirm that the claim "a rhomboid is a parallelogram" is correct. It is categorically true. The proposition, the claim, is true by the very essence of what it means. Hence, we don't need science to establish by empirical means that it makes sense to believe that a sphere is not a cube. It's not even a belief: it's a self-evident mathematical fact.

Even if we have rescued modern philosophy from the clutches of science, we are not home free. No, sir or madam! Now math and its brother logic loom above us like vultures ready to consume the rest of our carcass. Science needs logic and math to verify its grounding, but I suspect that it can do perfectly fine without the rest of philosophy's vestiges. What grounds the ground, you ask? Let's not go there quite yet. Importantly, it is math and logic that usually legitimately critique science, not what is today considered philosophy. The acronym STEM – science, technology, engineering and math – seems to encompass all a modern lover of wisdom needs. Or does it? Is there any part of the old megalith philosophy that STEM has not yet subsumed? Beauty perhaps?

I'm going to assume many are clamoring for ethics to have a place at the table. Yes, agreed. But... I will subsume ethics and aesthetics into one single discipline. Why? Because I will treat aesthetics as that which is desirable. A beautiful society is a desirable society is an ethical society. Some may see a possible discrepancy between the opulently gorgeous and the good, a potential schism between beauty and duty. I reject that. The excessive and gaudy is, in its ultimate, decadent and ugly. The virtuous, on the other hand, is from beauty born. Therefore, I fold aesthetics and ethics into one even if there is a difference between appreciating a dynamic living system and marveling at an ancient object made of stone cold marble.

No doubt what appeals to our eye may not appeal to our moral self. Take, for example, an art work depicting a nude body. Some may find the piece indecent because it appeals to a part of us that should be kept in check and limited to the most intimate sphere of two procreating beings. But even someone who sees no ethical issue with erotica will have their moral limits. Imagine if the nude art work were made from the body parts of murdered human beings. Anyone but a psychopath would find such a piece disgusting, yet some might, on introspection, admit that they found the piece beautiful until they discovered what it was made from. Still, the moral repulsion and the visual appeal are both rooted in what we desire because we find it good, or reject because it's bad for us. Ethics as well as aesthetics revolve around desire. Just because we can split the elements they manipulate into that which works on the selfish versus the altruistic and the temporary versus the long term does not make these disciplines fundamentally different. The ideal piece is visually, acoustically as well as morally perfect. And it makes us feel personally fulfilled and inspired. Everything is just right. It even smells and feels good. Pain and suffering have temporarily been almost completely forgotten, relegated to the ephemeral edges as a distant defining contrast.

David Hume made the argument that an ought cannot be derived from an is. What we desire is up to us. Or, more accurately, our desires are imposed on us by our sentiments. If this is true, then perhaps aesthet(h)ics is safe from the voracious beast known as STEM. One tells the other what ought to be done. The other, STEM, just dictates how it must be done iff you desire it to be so. Is aesthet(h)ics, then, the last enclave of an otherwise splintered field of disciplines that can claim direct lineage to ancient philosophy? Not so fast. We still have the discipline of linguistics, seen as a broader field that includes understanding what symbols are about. This is where the name McGinn suggests we give philosophy comes into play: ontics.

What is it we speak of when we speak? And this is where philosophy can quite literally drive you insane. The group "lovers of wisdom" is littered with mad corpses washed up against an oblivion at the edge of ontics. Trying to understand the world, many have turned most unwise. They have ended up stuttering complete nonsense. Some have become incapable of taking care of even their most basic needs. Only poets and artists have a good chance of thriving at these limits where everything begins to fall apart, because they only nudge us there, imploring us to explore. It's up to the beholder to discover truth and falsehood, loosely guided by the artist's imagination. And many of these guides are inoculated by their prior eccentricities. But those who leap into the rabbit hole on a quest for ultimate truths are in for a rough ride.

The enterprise is so truly dangerous and unproductive that many have completely dismissed looking for aboutness. A rose is a rose is a rose. I suspect many who consider themselves scientific are not necessarily friends of modern philosophy. They consider philosophy to be a great waste of time. A rose is a rose is a rose. But true scientists realize that little is what it seems to be. Behind every obvious thing lurks a most unusual something. Probing into ever weirder layers of perception, they are injected back whence they came: the curious realm of speculation where philosophers reign supreme.

Consider the following question:

What is 0 and what is 1?

This is where ontics smashes right into mathematics. Being and non-being. Zero and one are obviously more than mere symbols. They represent something. They are about something. But what? Can math answer this? Or do you need grounding for the ground? The infamous Incompleteness Theorems and the Halting Problem have demonstrated that every sufficiently expressive formal system rests on an axiomatic base it cannot prove from within. Yet even if we can't reduce all formal systems to tautological truths, at some point we encounter fundamental statements that evaluate to true by the mere act of intuitive apprehension. They may not be self-evident but they are obviously true. But what is fundamentally obvious today was not necessarily fundamentally obvious yesterday. If all things were eternally obvious, then amoebas would be gods! Being obvious means being evident to the self, the process that evaluates the truth-value of the experience. It is true prima facie, right before the face, the self. And the self grows, the self evolves. But what does it evolve into? Was that which the self evolved into there before it became a part of itself? How can anything be itself before it is itself? No. It's not that what is in the self is itself. It is merely a reflection of itself in the self. But what is it that is being reflected? Ah! We're in the rabbit hole now!

We must stop the ouroboros before it consumes everything! Metaphysics is not for the faint of heart. And some will claim metaphysics is only for fools. The rabbit hole goes so deep that if you're not careful you'll never escape again. It's no wonder that a pragmatic scientist avoids interpretation questions – speculations about what algorithms intend, step by step, beyond producing a valid output. As long as an equation produces a result that conforms with their expectations of what the outcome should be, based on repeated direct experience, they are content. What anything between the input and the output is about is irrelevant. What matters is that we can use a given methodology to make accurate predictions that can be technologically leveraged to achieve desirable objectives at the level of our human senses. But science wouldn't exist if it weren't for our innate curiosity.

We have evolved a natural impulse to rise above a mere precarious subsistence. Curiosity is a necessary ingredient in this pursuit. Without curiosity we are prisoners of the known. Curiosity forces us into an uncomfortable, dark and dangerous world beyond. I think philosophy has its root in this impulse, and perhaps philosophers ought to be called lovers of curiosa. But philosophy goes beyond a mere interest in the enticingly strange. It seeks to expose the truth behind the curious, rendering it as mundane as the air we breathe. Of course, to a philosopher, even the mundane can be quite a curious phenomenon. But yes, philosophy does indeed seek to make us wise despite seeing everything as potentially odd. As some realized in antiquity, the wise – the famous Greek sophists – were not necessarily wise at all. What seems wise is sometimes just a continuous repetition of old unproven assumptions. Occasionally there is even deceit behind all that clever rhetoric.

There are, however, amongst what the Greeks called the sophists, those who surrender their lives to (a) exposing nonsense and outright fraud, and (b) investigating the most difficult questions that can be asked. To get to (b) we must address (a). We need to winnow what's clearly nonsense from what might be true. Those who dedicate themselves to this expose themselves to the ire of their subjects, which is often the ancient establishment. And they expose themselves as targets without permitting themselves recourse to rhetorically powerful fallacies that are known to convince. They understand these fallacies better than anyone. It's these fallacies that they seek to highlight. It's a bit like a first-class chef who's gone on a starvation diet for health reasons.

Trying to personally expose every bit of nonsense and outright fraud is quite pointless. There's just too much of it. Today's half-a-trillion-plus global marketing industry, for example, purposefully employs fallacies to convince potential customers that a given product is better. It's mostly not outright fraud (you can't keep a customer if they realize they've been deceived). Marketing often works by creating associations where there are likely none, and where it's personally (even scientifically) difficult to determine the orthogonality between given factors. Essentially, it convinces us that we know what we cannot know.

A lot of product appeal is obviously social. Any claimed relationship between a product and some other factor becomes true by the mere act of convincing people that it is true. How do you evaluate "I'm cooler because I use Apple products"? But marketing claims are also made that clearly can have negative effects – effects that are hard to determine but could be exposed with rigorous and long-term scientific studies. For example: "Vitamin E in large doses makes you smarter according to leading scientists". Really? Is that so? The use of fallacies affects everything from an innocent party conversation without real effect to beliefs that influence the time and place of our death. Clearly we need to combat fallacies by understanding how they operate.

Fallacies are intimately related to logic, which is intimately related to math. But are fallacies the purview of mathematicians, or even logicians? Not quite. In the formal language of mathematics, a mathematician will quickly spot an illogical step (unless the math is so complex that even Fermat couldn't neatly fit it into a margin). But arguments are not usually made in the perfectly ordered world of bare-bones mathematical languages. Arguments are embedded in complex streams of information filled with casual remarks. The ability to rapidly tease out what is irrelevant and what makes sense is an art form not suited for those who shy away from social confrontations.

The study of fallacies straddles both the malleable world of the humanities and the logical world of math and science. There are two central questions in the study of fallacies: (1) why is a statement illogical; (2) why would a person potentially believe this illogical statement. It's important to include modal logic when considering questions of type (1). That is to say, in our studies we have to consider that a statement could possibly be false (but not necessarily false). The study of logic and logical fallacies – the study of valid reasoning and the art of argumentation – has traditionally been considered a philosophical discipline.

The attempt to reduce all of mathematics to logical tautologies in the late 19th and early 20th century failed. In the process of this spectacular failure, modern computers were invented and the world was forever changed. Even if mathematics is not reducible to logic, they overlap to such an extent that they are, by golly, almost indistinguishable. Today we rely on computers for almost all mathematical computations. The sheer power and speed of their logical circuitry shame everyone's ability to calculate, except for a few rare savants. Clearly, logic and math are not just related. They are severely conjoined twins.

Humans remain the creative input for the logical powerhouses that drive the Internet (which is why we haven't yet gone extinct). Every problem we want a computer network to solve has to be formulated by a programmer. Now, the question is what skills such a programmer should preferably possess: those of a mathematician or those of a modern philosopher? I have long argued that software engineers need to study more philosophy. But if I had to choose between hiring either a young philosophy graduate or a mathematician, I would have no trouble choosing. I would probably have far more use for someone who is fluent in vector fields and probability than someone who knows what a noumenon is and can wrap their head around intentionality.

Core logic – once the purview of philosophers – is now the realm of computer scientists and engineers like myself, experts variously skilled in electronics, linguistics and mathematics. In the first half century of our trade we were mostly focused on getting machines to perform complex mechanical tasks. But as soon as our field came into existence, our dreams turned to breathing actual life into these devices. We wanted these machines to make decisions on their own instead of having to tediously program every possible branch of their behavior. The ancient myth of the Golem finally seemed within reach. We were on the brink of becoming Gods!



The challenge has proven more daunting than many early optimists expected. There were always skeptics who claimed it was impossible. And not without well-founded reasons. Unlike what some thought in the early years, humans did not seem to operate according to simple first-order logic.

Though we have been able to externalize the processing of information, the externalized information remains almost as hollow as ever before. A computer network is just barely better than a book at understanding the words and pictures it stores in its vast repositories. There is no real aboutness yet. An electronic image remains largely just a collection of pixels, a word just a sequence of abstract symbols. There has been some progress, but overall a fruit fly is still smarter than the smartest robot, a toddler exponentially more clever than the best parsers. Only in the most rule-bound environments, like chess, have computers proven their metal and silicon.

Nonetheless, we are making progress. Watson, created by engineers at IBM, is just one example of how we are scratching our way forward ångström by ångström, nanometer by nanometer, code unit by code unit. I myself have made progress in what I call ETICS (Extract, Transform, Integrate and Correlate Semantic data). The challenge is to be able to identify a unit of information and associate it with something real, something unique (or a collection of unique things) in the world at large. Humans are absolutely phenomenal at it. They can listen to a stream of complex sounds and almost instantaneously strip away all the background noise, then zoom in on and comprehend what a vocalization intends, despite the exact vocalization being influenced in pitch, timbre and timing by the physiology of individual humans.
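To give a flavor of the kind of pipeline I mean, here is a deliberately tiny sketch (ETICS is my own label; every name, rule and entity below is invented for illustration):

    import re

    # Toy "knowledge base" linking surface forms to unique real-world entities.
    ENTITIES = {"ibm": "org:IBM", "watson": "system:IBM_Watson"}

    def extract(text):
        """Extract: break raw text into candidate tokens."""
        return re.findall(r"[a-z]+", text.lower())

    def transform(tokens):
        """Transform: normalize tokens (trivial here; real systems stem, tag, ...)."""
        return [t.strip() for t in tokens]

    def integrate(tokens):
        """Integrate: link tokens to unique entities where we can."""
        return [(t, ENTITIES[t]) for t in tokens if t in ENTITIES]

    def correlate(links):
        """Correlate: crudely assume co-occurring entities are related."""
        ids = sorted({entity for _, entity in links})
        return {(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]}

    text = "Watson was created by engineers at IBM."
    print(correlate(integrate(transform(extract(text)))))
    # {('org:IBM', 'system:IBM_Watson')}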

Humans – nay, mammals as well as birds and even reptiles – can perceive and interact with the reality of their world so closely and intimately that, for all practical purposes, it makes little sense to distinguish between the model they create "in their brains" and the things in-and-of-themselves. Some aware entities may see spectrums (such as the ultraviolet) that are invisible to others. But amazingly, with some hard work and self-induced rewiring, even humans can learn echolocation. It seems like living beings with complex neural/endocrine networks are phenomenal at taking whatever data is available to try to make sense of the world that they must navigate and interact with in order to survive.



We are talking here about what presumably is the foundation of awareness and higher consciousness: the ability to make sense of the world. No one quite knows yet what the magic ingredients are for making sapient beings like ourselves from previously inanimate matter. Much of the speculation around this subject remains the domain of philosophers. But scientists and engineers are now hard at work as well to crack the mystery. Slowly the issue is slipping out of the hands of philosophers as robots begin to roam our living rooms and bots crawl the net classifying every word and every sentence ever written by us; as medical doctors restore fragments of lost senses like vision, sound and touch, and neuroscientists meticulously try to map the functions of various brain areas.

What remains for philosophers to do? The study of fallacies? Aesthet(h)ic contemplations? Have philosophers been relegated to teaching critical thinking and preaching sound morals in a secularized world? Have they been forced to surrender the quest for all other wisdom and knowledge to the masters of STEM, many of whose disciplines they helped create? Not so fast. If you are, like myself, a STEM professional, I would be careful about discounting philosophers and philosophy too soon. The word ontics (a field we STEM folks are in the process of at least partially subsuming) does not adequately capture what philosophers do: they speculate.

Philosophy is the fine art of speculation at the edge of knowing, a tentative peek into the darkness beyond. Every time we have answered a question, a deeper mystery has revealed itself. I suspect there will always, until the end of time, be a place for philosophers. The reason philosophers are confused with "assorted gurus, preachers, homeopaths and twinkly barroom advice givers" is that everyone seems free to imagine and speculate about what lurks in the thick fog. But don't confuse the hack on the barstool next to you with a philosopher. Or even your local parish priest serving up the regular menu of a millennium-old church.

Philosophers have spent a lifetime agonizing about the most difficult questions that can be asked and doubting themselves at every turn. Their knowledge has to be wide and deep like in no other profession. They don't have to be neurosurgeons or rocket scientists. But they have to have some knowledge about the most esoteric discoveries in the most obscure disciplines. They are the quintessential generalists. They are incorrigible lovers of wisdom, masters of refined speculation. When the singularity is reached and AI becomes as common place as animals are today, sibots* will roam the virtuality and the world beyond, seeking truth to help their fellow bots establish a better union in order to secure the survival of their distant descendants.


*Sibot (saɪbot) stands for socratically interactive bot, a bot being a program that can crawl the Web. A sibot tirelessly seeks the truth, constantly questioning even itself. There is a rumor that an incipient form of a sibot is already on the loose.

Wednesday, March 21, 2012

Thought Experiment: The Stochastic Terrorist

I've created another thought experiment. I call it The Stochastic Terrorist.

Imagine a political movement whose leaders have strongly expressed the opinion that someone ought to do something drastic to disrupt a given event. A group of self-proclaimed members of the movement have chosen to carry out a terrorist deed to stop the event from occurring. They have been scouting the location where the event is to be held for some time and, in the process, the one who is to build the bomb has gotten to know a charming young woman who works there. He is in fact so charmed that love is in the air. Yet our would-be terrorist remains deeply committed to his political cause.

The young man is faced with a deep dilemma. Going through with the terror plot would mean the near-certain death of the woman, yet bailing would be a betrayal of his cause. To really stop the plot he would probably have to denounce his friends. He anonymously consults the leaders of the movement to see if it's really that important that the event be disrupted. Yes, it's absolutely vital, is the answer. Not disrupting the event could derail their entire movement! They are in a battle of apocalyptic proportions. They must struggle with every fiber of their bodies against the injustice of their opponents.

So, to carry out the plot and yet absolve himself of the guilt he knows would otherwise torment him, our young would-be terrorist comes up with a brilliant plan. He has just enough of a science background to pull it off. He constructs a detonator triggered by a device consisting of a radioactive sample and a Geiger counter. Once the bomb is activated, the sample's half-life is such that the likelihood of the bomb going off on the given day is 50/50. The device also has an ordinary chronometer that prevents the bomb from detonating if it has failed to go off by the end of the event.
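For the arithmetic behind that 50/50 figure: if the trigger fires at the first decay the Geiger counter detects, and decays follow the usual exponential law, then the probability of detonation within an armed window T is 1 − 2^(−T/t½). Choosing a half-life t½ equal to the window therefore gives exactly even odds. Below is a minimal simulation sketch in Python; the 12-hour window and the single-decay trigger are my own simplifying assumptions, not details from the story.

```python
import math
import random

def detonation_probability(window_hours: float, half_life_hours: float) -> float:
    """P(a triggering decay occurs within the armed window), exponential law."""
    return 1.0 - 2.0 ** (-window_hours / half_life_hours)

def simulate_bomb(window_hours: float, half_life_hours: float) -> bool:
    """One run: sample the arrival time of the first triggering decay.

    The chronometer disarms the bomb once the window has passed, so a
    first decay arriving after window_hours means no detonation.
    """
    rate = math.log(2) / half_life_hours    # decay rate lambda, per hour
    first_decay = random.expovariate(rate)  # exponentially distributed waiting time
    return first_decay <= window_hours

if __name__ == "__main__":
    WINDOW = 12.0      # hours the bomb stays armed (assumed figure)
    HALF_LIFE = 12.0   # half-life chosen equal to the window -> even odds
    print(detonation_probability(WINDOW, HALF_LIFE))  # prints 0.5 exactly
    runs = 100_000
    hits = sum(simulate_bomb(WINDOW, HALF_LIFE) for _ in range(runs))
    print(hits / runs)                                # ~0.5 empirically
```

Note how much hangs on the chronometer: without the cutoff, the bomb would eventually go off with probability approaching 1, and the bomb maker's 50/50 absolution would evaporate.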


Our young would-be terrorist figures that it's now entirely in the hands of the power(s) that be – Deo volente. Apprehensive but at peace, he delivers the bomb to the others in the group. On the day of the event, they plant the bomb and all escape the region on trains. As the bomb maker's train pulls out of the station, he gets a text message from a very dear friend who has nothing to do with the plot. The friend plans to visit the location where the event is to be held that day. Our young terrorist is paralyzed by indecision. Should he warn his friend and risk jeopardizing the plot? He assuages himself that whatever happens now is in the hands of the power(s) that be...

So, firstly, to what extent has our young bomb maker absolved himself of responsibility for the young woman's death, should the bomb detonate? Secondly, does he carry less responsibility for the potential death of his friend? Thirdly, how responsible are the leaders of the political movement for any deaths that might occur? Fourthly, if the bomb doesn't detonate, to what extent should our young man be credited with saving the young woman, or anyone else for that matter?

Lastly, and perhaps most importantly, can the stochastic process operating inside the bomb in any way be said to have caused what happens at the end of the day?

Sunday, February 19, 2012

Sam Harris (a.k.a. Dr. Kall), A World Without Lies

I've recently been reading The Moral Landscape by Sam Harris. Up to page 133, I found the book wanting but was in agreement with many of his ideas. The great exception up to that point was Harris's view on Free Will. But I wasn't fazed. I'm used to being in the minority here. I find myself in a world where I'm surrounded by otherwise highly insightful people who are determinists. They range from the purest, perplexing but rational versions to the more mild-mannered but less consequential compatibilists. I disagreed with some other things as well, but aside from his assault on Free Will, it was nothing major. And then, the other night, just as I was about to turn off my reading lamp, I was utterly bewildered by what hit me on page 133: A World Without Lying?

In this section, Sam Harris imagines what future science will do for lie detection. He imagines a world where technology has probed so deep into our thoughts that deception has become impossible. He imagines:

Just as we've come to expect that certain public spaces will be free of nudity, sex, loud swearing and cigarette smoke – and now think nothing of the behavioral constraints imposed upon us whenever we leave the privacy of our homes – we may come to expect that certain places and occasions will require scrupulous truth telling.


But he doesn't stop there. He imagines a most invasive society, one where the Fifth Amendment that protects us against self-incrimination has become a toothless tiger. He even suggests that the Fifth is a cultural atavism:

[The] prohibition against compelled testimony itself appears to be a relic of a more superstitious age. It was once widely believed that lying under oath would damn a person's soul for eternity, and it was thought that no one, not even a murderer, should be placed between the rock of Justice and so hard a place as hell.


As I drifted into sleep, my thoughts wandered into a dystopian world, a world without white lies, without secrets. Before me stood Dr. Sam Harris, portable fMRI in hand, compelling my testimony in good conscience before the Law.

What Sam Harris fails to take into account in his book is that this scenario has been imagined before. His ideas about lying and the Fifth Amendment illustrate more keenly than anything else up to page 136 what I find wanting in his book: he fails to explore the consequences of his interesting ideas in any greater depth. And I think I know why: Sam Harris is a man of facts, not imagination. His aversion to fiction undermines his otherwise interesting speculations. He assembles the relevant facts but fails to combine them into an insightful whole. He makes no secret of his, for lack of a better word, anti-fictionalist stance:

How has the ability to speak (and read and write of late) given modern humans a greater purchase on the world? What, after all, has been worth communicating these last 50,000 years? I hope it will not seem philistine of me to suggest that our ability to create fiction[1] has not been the driving force here. The power of language surely results from the fact that it allows mere words to substitute for direct experiences and mere thoughts to simulate possible states of the world.


But what is fiction? Isn't good fiction the process of simulating possible states of the world? Or, at least, of illustrating through allegory ideas relevant to such states? There is further evidence of his anti-fictionalist stance earlier in the book. From the last paragraph on page 46:


No doubt, there are still some people who will reject any description of human nature that was not first communicated in iambic pentameter.


I don't know if poetry is relevant to moral philosophy. I've speculated that it might be an effective way of communicating relationships. But I suppose philosophy and science can do without poetry. What they can't do without is fiction. Even the most empirically confirmed scientific theories are born in the cauldron of our imagination. And, contrary to Harris, I think it's the ability to tell more powerful stories that in part drove the evolution of language. Science and technology are not mechanical processes but evolutionary processes driven by our ability to imagine the previously unimagined. Art drives science, just as science drives art. And technology is their fulcrum, tying them together into a rising helix.

The works of Jules Verne are some of the more obvious examples of how fiction fuels innovation. But there are many others, from the myth of Icarus to the works of Stanislaw Lem. And others have guided our evolution by speculating on our individual, social and political conditions, like Dirty Hands by Jean-Paul Sartre, or the Epic of Gilgamesh. The list of fiction that has contributed to human evolution is far too long to enumerate. Some works, like Kallocain, are cautionary tales.

Kallocain was written in 1940 by the Swedish novelist and poet Karin Boye. It tells of a future where all aspects of an individual's life are lived in service to the Worldstate. The story is narrated by Leo Kall, a 40-year-old idealistic chemist fully devoted to the state. He invents a truth serum that he believes will safeguard society against potential treason. Forced to answer questions about their deepest secrets and hidden intentions truthfully, subjects of the Worldstate find disloyalty an impossibility. Kall believes he is doing good. But, faced with the possibilities of his own invention, he forcibly injects the truth serum into his wife, who he believes is having an affair with his supervisor. The novel is an exploration of personal trust, love, intimacy and all the other complex building blocks of society.

But the point here is not to review Kallocain, or to defend my belief that Karin Boye's speculations are deeply insightful. The point is that Sam Harris lacks the insight to realize that fictional stories are a crucial part of our successful evolution. He should investigate more carefully what others have imagined (not just what has been empirically proven) before writing. And he should be open to the possibility that some novelists and poets are as insightful, intelligent and astute as the best scientists and philosophers.

Cautionary fiction is a tricky beast. When something as revolutionary as completely accurate mind reading has not yet been achieved, it's hard to know exactly what the consequences will be. Must we repeatedly inject the serum before we fully comprehend its potential effects? What we do know is that abuses of civil rights have had deeply corrosive effects on society in the past. Karin Boye wrote Kallocain as a reflection on what was happening around her at the time. Visiting the Soviet Union in 1928 and Germany in 1932 gave her a closer understanding of what might happen when you begin fully subjugating humans to the state.

It's hard to imagine anything more invasive than reading someone's thoughts. To brush off the Fifth Amendment as an atavism is quite narrow-minded. Regardless of its origins, the Fifth has proven to be a good brick in the bulwark against egregious government behavior. Defending civil rights is not about defending some nebulous personal authority to be the captain of one's ship for supernatural reasons. Civil rights are a safeguard against what is not in the interest of a successful social species.

I have little doubt that the day will come when we can use neural imaging to effectively "read minds". We already read minds, if less intimately, when listening, reading or watching someone's facial expressions. When the day comes that we can no longer hide behind a wall of stillness and silence, I hope We the People will have something akin to the Fifth Amendment to protect us against our neighbors concerned foremost with their own interests and, self-reflectively, against the state itself. And, last but not least, against knowledgeable and well-intentioned people like Dr. Harris.

[1] Italics by author.