Showing posts with label Science.

Monday, April 9, 2012

The End of Philosophy?


A while ago, Colin McGinn suggested that we should rename philosophy. The word philosophy comes from the Greek for "lover of wisdom". He points out that disciplines formerly subsumed under philosophy have been very successful in acquiring a distinct identity, in part by acquiring a new name. Foremost amongst them, of course, is science, which was once simply known as "natural philosophy". And whereas science is today treated as a respectable academic pursuit, philosophy is, as McGinn puts it, confused with "assorted gurus, preachers, homeopaths and twinkly barroom advice givers". McGinn's suggestion has raised some eyebrows, from "is he for real or is this just a joke?" to comments like the following:
Why is science always held up as the ultimate intellectual discipline? Philosophy is not science. Its propositions cannot be tested. But more than that, philosophy should not even aspire to be science. In its current form, philosophy can critique science in way that science itself cannot. That alone is no small thing. 
Pam G, Portage MI 
The above comment sprang out at me more serendipitously than anything, because it happened to be the first under Readers' Picks at the NY Times site. I don't usually read the comments there, much less click Readers' Picks. Anyway, the question the comment raises is simple and striking in our empirical age. If it's true that the claims of philosophy cannot be tested, then how is it of any use at all? If untestable, can it really critique science in any meaningful way? Viewed differently, has science – so-called "natural philosophy" – consumed the whole discipline?

Perhaps, then, what modern philosophy needs is not a change of name. What it needs is to be chucked into the garbage can along with astrology and other notorious disciplines discarded by any serious thinker. Pam G suggests philosophy can "critique science", which is tantamount to saying it's a kind of meta-layer around science. Pam G inadvertently highlights a fundamental problem: what is the metaphysics of metaphysics, metaphysically speaking? If science needs critiquing, does the critique need a critique? Rather than settling the question of philosophy's relevance beyond the testable, this pinpoints how philosophy can make itself irrelevant by going haywire. But before we descend into the bottomless pit of idealism versus realism, let's address what it means to be tested.

At first glance philosophy does seem to be untestable. Isn't this what fundamentally distinguishes science from philosophy? The claims of one can be tested, the claims of the other cannot. But on closer examination this is only true if to test strictly means to empirically test. If to test simply means to evaluate the truth-value of a claim, then nothing in the term requires a repetition of our evaluation. In other words, you don't have to repeat the same procedure over and over again to conclude that something is probably true. And you don't need to prod the world with a long white stick. A test does not need to be "visual", "auditory" or appeal directly to any of the other senses. A test can be performed by the very processes that constitute us. For lack of better words: we can test whether it's possible to even think the thought that a claim seems to beg us to think.

A test could be considered the process of simply trying to hold two concepts in thought, and determining whether the process produces a meaningful or nonsensical experience. For example:

A sphere is a cube.

Is this testable? Not if to test means trying to push a square wooden peg into a round metal hole. But using a broader sense of the word to test, yes. We conceive SPHERE, and then juxtapose it with CUBE. And then our process... blanks. Or we imagine some weird transformative process by which one ceases to be what it was and becomes the other. The incompatibility of the concepts confirms their distinct conceptual identities. A sphere cannot be a cube! Perhaps you can cube a sphere or sphere a cube, but this implies a transformation of one shape into another. Compare this to:

A rhomboid is a parallelogram.

Held in thought, these concepts produce a harmonious merging of the two. One is indeed the other, and the latter can indeed (but not necessarily) be the former. The statement evaluates to true. We do not need further procedures to confirm that the claim "a rhomboid is a parallelogram" is correct. It is categorically true. The proposition, the claim, is true by the very essence of what it means. Hence, we don't need science to establish by empirical means that it makes sense to believe that a sphere is not a cube. It's not even a belief: it's a self-evident mathematical fact.
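To make the notion of a categorical test a little more concrete, here is a minimal sketch in code. It is only a loose analogy of my own: the categories are modeled as types, so the "test" is settled by definition rather than by measurement.

```python
# A loose analogy only: model the categories as Python types, so that
# "a rhomboid is a parallelogram" becomes a subclass relationship,
# settled by definition rather than by measurement.

class Shape:
    pass

class Parallelogram(Shape):
    """A quadrilateral with two pairs of parallel sides."""

class Rhomboid(Parallelogram):
    """A parallelogram with unequal adjacent sides and oblique angles."""

class Sphere(Shape):
    """A round solid; not any kind of parallelogram."""

print(issubclass(Rhomboid, Parallelogram))  # True: categorically, every rhomboid is a parallelogram
print(issubclass(Parallelogram, Rhomboid))  # False: a parallelogram need not be a rhomboid
print(issubclass(Sphere, Parallelogram))    # False: the sphere/cube style of claim fails at the level of definitions
```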

Even if we have rescued modern philosophy from the clutches of science, we are not home free. No, sir or madam! Now math and its brethren logic loom above us like vultures ready to consume the rest of our carcass. Science needs logic and math to verify its grounding, but I suspect that it can do perfectly fine without the rest of philosophy's vestiges. What grounds the ground, you ask? Let's not go there quite yet. Importantly, it is math and logic that usually legitimately critique science, not what is today considered philosophy. The acronym STEM – science, technology, engineering and math – seems to encompass all a modern lover of wisdom needs. Or does it? Is there any part of the old megalith philosophy that STEM has not yet subsumed? Beauty perhaps?

I'm going to assume many are clamoring for ethics to have a place at the table. Yes, agreed. But... I will subsume ethics and aesthetics into one single discipline. Why? Because I will treat aesthetics as that which is desirable. A beautiful society is a desirable society is an ethical society. Some may see a possible discrepancy between the opulently gorgeous and the good, a potential schism between beauty and duty. I reject that. The excessive and gaudy is, in its ultimate, decadent and ugly. The virtuous, on the other hand, is from beauty born. Therefore, I fold aesthetics and ethics into one even if there is a difference between appreciating a dynamic living system and marveling at an ancient object made of stone cold marble.

No doubt what appeals to our eye may not appeal to our moral self. Take, for example, an art work depicting a nude body. Some may find the piece indecent because it appeals to a part of us that should be kept in check and limited to the most intimate sphere of two procreating beings. But even someone who sees no ethical issue with erotica will have their moral limits. Imagine if the nude art work were made from the body parts of murdered human beings. Anyone but a psychopath would find such a piece disgusting, yet some might, on introspection, admit that they found the piece beautiful until they discovered what it was made from. Still, the moral repulsion and the visual appeal are both rooted in what we desire because we find it good, or reject because it's bad for us. Ethics as well as aesthetics revolve around desire. Just because we can split the elements they manipulate into that which works on the selfish versus the altruistic and the temporary versus the long term does not make these disciplines fundamentally different. The ideal piece is visually, acoustically as well as morally perfect. And it makes us feel personally fulfilled and inspired. Everything is just right. It even smells and feels good. Pain and suffering have temporarily been almost completely forgotten, relegated to the ephemeral edges as a distant defining contrast.

David Hume made the argument that an ought cannot be derived from an is. What we desire is up to us. Or, more accurately, our desires are imposed on us by our sentiments. If this is true, then perhaps aesthet(h)ics is safe from the voracious beast known as STEM. One tells the other what ought to be done. The other, STEM, just dictates how it must be done iff you desire it to be so. Is aesthet(h)ics, then, the last enclave of an otherwise splintered field of disciplines that can claim direct lineage to ancient philosophy? Not so fast. We still have linguistics, seen as a broader field that includes understanding what symbols are about. This is where the name McGinn suggests we give philosophy comes into play: ontics.

What is it we speak of when we speak? And this is where philosophy can quite literally drive you insane. The group of "lovers of wisdom" is littered with mad corpses washed up against an oblivion at the edge of ontics. Trying to understand the world, many have turned most unwise. They have ended up stuttering complete nonsense. Some have become incapable of taking care of even their most basic needs. Only poets and artists have a good chance of thriving at these limits where everything begins to fall apart, because they only nudge us there, imploring us to explore. It's up to the beholder to discover truth and falsehood, loosely guided by the artist's imagination. And many of these guides are inoculated by their prior eccentricities. But those who leap into the rabbit hole on a quest for ultimate truths are in for a rough ride.

The enterprise is so truly dangerous and unproductive that many have completely dismissed looking for aboutness. A rose is a rose is a rose. I suspect many who consider themselves scientific are not necessarily friends of modern philosophy. They consider philosophy to be a great waste of time. A rose is a rose is a rose. But true scientists realize that little is what it seems to be. Behind every obvious thing lurks a most unusual something. Probing into ever weirder layers of perception, they are thrown back whence they came: the curious realm of speculation where philosophers reign supreme.

Consider the following question:

What is 0 and what is 1?

This is where ontics smashes right into mathematics. Being and non-being. Zero and one are obviously more than mere symbols. They represent something. They are about something. But what? Can math answer this? Or do you need grounding for the ground? Gödel's incompleteness theorems and the halting problem have demonstrated that any sufficiently powerful formal system rests on an axiomatic base it cannot itself prove. Yet even if we can't reduce all formal systems to tautological truths, at some point we encounter fundamental statements that evaluate to true by the mere act of intuitive apprehension. They may not be self-evident, but they are obviously true. But what is fundamentally obvious today was not necessarily fundamentally obvious yesterday. If all things were eternally obvious, then amoebas would be gods! Being obvious means being evident to the self, the process that evaluates the truth-value of the experience. It is true prima facie, right before the face, the self. And the self grows, the self evolves. But what does it evolve into? Was that which the self evolved into there before it became a part of itself? How can anything be itself before it is itself? No. It's not that what is in the self is itself. It is merely a reflection of itself in the self. But what is it that is being reflected? Ah! We're in the rabbit hole now!

We must stop the ouroboros before it consumes everything! Metaphysics is not for the faint of heart. And some will claim metaphysics is only for fools. The rabbit hole goes so deep that if you're not careful you'll never escape again. It's no wonder that a pragmatic scientist avoids interpretation questions, speculations about what the steps of an algorithm intend beyond producing a valid output. As long as an equation produces a result that conforms with their expectations of what the outcome should be, based on repeated direct experience, they are satisfied. What anything between the input and output is about is irrelevant. What matters is that we can use a given methodology to make accurate predictions that can be technologically leveraged to achieve desirable objectives at the level of our human senses. But science wouldn't exist if it weren't for our innate curiosity.

We have evolved a natural impulse to rise above a mere precarious subsistence. Curiosity is a necessary ingredient in this pursuit. Without curiosity we are prisoners of the known. Curiosity forces us into an uncomfortable, dark and dangerous world beyond. I think philosophy has its root in this impulse, and perhaps philosophers ought to be called lovers of curiosa. But philosophy goes beyond a mere interest in the enticingly strange. It seeks to expose the truth behind the curious, rendering it as mundane as the air we breathe. Of course, to a philosopher, even the mundane can be quite a curious phenomenon. But yes, philosophy does indeed seek to make us wise despite seeing everything as potentially odd. As some realized in antiquity, the wise – the famous Greek sophists – were not necessarily wise at all. What seems wise is sometimes just a continuous repetition of old unproven assumptions. Occasionally there is even deceit behind all that clever rhetoric.

There are, however, amongst what the Greeks called the sophists, those who surrender their lives to (a) exposing nonsense and outright fraud, and (b) investigating the most difficult questions that can be asked. To get to (b) we must address (a). We need to winnow what's clearly nonsense from what might be true. Those who dedicate themselves to this expose themselves to the ire of their subjects, which is often the ancient establishment. And they expose themselves as targets without permitting themselves recourse to rhetorically powerful fallacies that are known to convince. They understand these fallacies better than anyone. It's these fallacies that they seek to highlight. It's a bit like a first-class chef who's gone on a starvation diet for health reasons.

Trying to personally expose every bit of nonsense and outright fraud is quite pointless. There's just too much of it. Today's half-a-trillion-dollar-plus global marketing industry, for example, purposefully employs fallacies to convince potential customers that a given product is better. It's mostly not outright fraud (you can't keep a customer if they realize they've been deceived). Marketing often works by creating associations where there are likely none, and where it's personally (even scientifically) difficult to determine the orthogonality between given factors. Essentially, it convinces us that we know what we cannot know.

A lot of product appeal is obviously social. Any claimed relationship between a product and some other factor becomes true by the mere act of convincing people that it is true. How do you evaluate "I'm cooler because I use Apple products"? But marketing claims are also made that can clearly have negative effects, effects which are hard to determine but could be exposed with rigorous and long-term scientific studies. For example: "Vitamin E in large doses makes you smarter according to leading scientists". Really? Is that so? The use of fallacies affects everything from an innocent party conversation without real consequence to beliefs that influence the time and place of our death. Clearly we need to combat fallacies by understanding how they operate.

Fallacies are intimately related to logic, which is intimately related to math. But are fallacies the purview of mathematicians, or even logicians? Not quite. In the formal language of mathematics, a mathematician will quickly spot an illogical step (unless the math is so complex that even Fermat couldn't neatly fit it into a margin). But arguments are not usually made in the perfectly ordered world of bare-bones mathematical languages. Arguments are embedded in complex streams of information filled with casual remarks. The ability to rapidly tease out what is irrelevant and what makes sense is an art form not suited for those who shy away from social confrontations.

The study of fallacies straddles both the malleable world of the humanities and the logical world of math and science. There are two central questions in the study of fallacies: (1) why is a statement illogical; (2) why would a person believe this illogical statement. It's important to include modal logic when considering questions of type (1). That is to say, in our studies we have to consider that a statement could possibly be false (but not necessarily false). The study of logic and logical fallacies – the study of valid reasoning and the art of argumentation – has traditionally been considered a philosophical discipline.
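To show what I mean by the modal distinction, here is a minimal possible-worlds sketch (a toy of my own making, not a textbook treatment): "possibly false" means the statement fails in at least one of the worlds we are willing to entertain, while "necessarily false" means it fails in all of them.

```python
# Toy possible-worlds sketch of "possibly false" versus "necessarily false".
# A "world" is just a dict of facts; a statement is a predicate over a world.
# The worlds and the claim below are invented purely for illustration.

worlds = [
    {"megadose_vitamin_e": True,  "smarter": True},
    {"megadose_vitamin_e": True,  "smarter": False},
    {"megadose_vitamin_e": False, "smarter": False},
]

def possibly(statement, worlds):
    """The statement holds in at least one world under consideration."""
    return any(statement(w) for w in worlds)

def necessarily(statement, worlds):
    """The statement holds in every world under consideration."""
    return all(statement(w) for w in worlds)

# The marketing-style claim from above: "vitamin E megadoses make you smarter".
claim = lambda w: (not w["megadose_vitamin_e"]) or w["smarter"]

print(possibly(lambda w: not claim(w), worlds))     # True: the claim could be false
print(necessarily(lambda w: not claim(w), worlds))  # False: but it is not necessarily false
```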

The attempt to reduce all of mathematics to logical tautologies in the late 19th and early 20th centuries failed. In the process of this spectacular failure, modern computers were invented and the world was forever changed. Even if mathematics is not reducible to logic, the two overlap to such an extent that they are, by golly, almost indistinguishable. Today we rely on computers for almost all mathematical computations. The sheer power and speed of their logical circuitry shames everyone's ability to calculate except for a few rare savants. Clearly, logic and math are not just related. They are severely conjoined twins.

Humans remain the creative input for the logical powerhouses that drive the Internet (which is why we haven't yet gone extinct). Every problem we want a computer network to solve has to be formulated by a programmer. Now, the question is what skills such a programmer should preferably possess: those of a mathematician or of a modern philosopher? I have long argued that software engineers need to study more philosophy. But if I had to choose between hiring either a young philosophy graduate or a young mathematician, I would have no trouble choosing. I would probably have far more use for someone who is fluent in vector fields and probability than for someone who knows what a noumenon is and can wrap their head around intentionality.

Core logic – once the purview of philosophers – is now the realm of computer scientists and engineers like myself, experts variously skilled in electronics, linguistics and mathematics. In the first half century of our trade we were mostly focused on getting machines to perform complex mechanical tasks. But as soon as our field came into existence, our dreams turned to breathing actual life into these devices. We wanted these machines to make decisions on their own instead of having to tediously program every possible branch of their behavior. The ancient myth of the Golem finally seemed within reach. We were on the brink of becoming Gods!



The challenge has proven more daunting than many early optimists expected. There were always skeptics who claimed it was impossible, and not without well-founded reasons. Unlike what a few thought in the early years, humans did not seem to operate according to simple first-order logic.

Though we have been able to externalize the processing of information, the externalized information remains almost as hollow as ever before. A computer network is just barely better than a book at understanding the words and pictures it stores in its vast repositories. There is no real aboutness yet. An electronic image remains largely just a collection of pixels, a word just a sequence of abstract symbols. There has been some progress, but overall a fruit fly is still smarter than the smartest robot, a toddler exponentially more clever than the best parsers. Only in the most rule-bound environments, like chess, have computers proven their mettle and silicon.

Nonetheless, we are making progress. Watson, created by engineers at IBM, is just one example of how we are scratching our way forward ångström by ångström, nanometer by nanometer, code unit by code unit. I myself have made progress in what I call ETICS (Extract, Transform, Integrate and Correlate Semantic data). The challenge is to be able to identify a unit of information and associate it with something real, something unique (or a collection of unique things) in the world at large. Humans are absolutely phenomenal at it. They can listen to a stream of complex sounds and almost instantaneously strip away all the background noise, then zoom in on and comprehend what a vocalization intends, despite the fact that the exact vocalization is influenced in pitch, timbre and timing by the physiology of individual humans.
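To give a feel for the smallest version of that challenge in code, here is an illustrative toy (emphatically not the actual ETICS pipeline; the catalog, identifiers and threshold are invented): link a noisy textual mention to one unique entry in a tiny catalog.

```python
# Illustrative toy only -- not the actual ETICS pipeline.
# Task: map a noisy textual mention to one unique entity in a small catalog.

from difflib import SequenceMatcher

CATALOG = {
    "Q_WATSON": "Watson (IBM question answering system)",
    "Q_EIFFEL": "Eiffel Tower",
    "Q_PARIS":  "Paris, France",
}

def link_mention(mention: str, threshold: float = 0.6):
    """Return the catalog id whose label best matches the mention, or None."""
    def score(label: str) -> float:
        return SequenceMatcher(None, mention.lower(), label.lower()).ratio()
    best_id, best_label = max(CATALOG.items(), key=lambda kv: score(kv[1]))
    return best_id if score(best_label) >= threshold else None

print(link_mention("the eifel tower"))                # Q_EIFFEL, despite the misspelling
print(link_mention("a completely unrelated phrase"))  # None: nothing in the catalog is close enough
```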

Humans – nay, mammals as well as birds and even reptiles – can perceive and interact with the reality of their world so closely and intimately that, for all practical purposes, it makes little sense to distinguish between the model they create "in their brains" and the things in-and-of-themselves. Some aware entities may see spectrums (such as the ultraviolet) that are invisible to others. But amazingly, with some hard work and self-induced rewiring, even humans can learn echolocation. It seems like living beings with complex neural/endocrine networks are phenomenal at taking whatever data is available to try to make sense of the world that they must navigate and interact with in order to survive.



We are talking here about what presumably is the foundation of awareness and higher consciousness: the ability to make sense of the world. No one quite knows yet what the magic ingredients are that make sapient beings like ourselves from previously inanimate matter. Much of the speculation around this subject remains the domain of philosophers. But scientists and engineers are now hard at work as well to crack the mystery. Slowly the issue is slipping out of the hands of philosophers as robots begin to roam our living rooms and bots crawl the net classifying every word and every sentence ever written by us; as medical doctors restore fragments of lost senses like vision, sound and touch, and neuroscientists meticulously try to map the functions of various brain areas.

What remains for philosophers to do? The study of fallacies? Aesthet(h)ic contemplations? Have philosophers been relegated to teaching critical thinking and preaching sound morals in a secularized world? Have they been forced to surrender the quest for all other wisdom and knowledge to the masters of STEM, many of whose disciplines they helped create? Not so fast. If you are, like myself, a STEM professional, I would be careful not to discount philosophers and philosophy too soon. The word ontics (a field we STEM folks are in the process of at least partially subsuming) does not adequately capture what philosophers do: they speculate.

Philosophy is the fine art of speculation at the edge of knowing, a tentative peek into the darkness beyond. Every time we have answered a question, a deeper mystery has revealed itself. I suspect there will always, until the end of time, be a place for philosophers. The reason philosophers are confused with "assorted gurus, preachers, homeopaths and twinkly barroom advice givers" is that everyone seems free to imagine and speculate about what lurks in the thick fog. But don't confuse the hack on the barstool next to you with a philosopher. Or even your local parish priest serving up the regular menu of a millennium-old church.

Philosophers have spent a lifetime agonizing over the most difficult questions that can be asked and doubting themselves at every turn. Their knowledge has to be wide and deep like that of no other profession. They don't have to be neurosurgeons or rocket scientists. But they have to have some knowledge of the most esoteric discoveries in the most obscure disciplines. They are the quintessential generalists. They are incorrigible lovers of wisdom, masters of refined speculation. When the singularity is reached and AI becomes as commonplace as animals are today, sibots* will roam the virtuality and the world beyond, seeking truth to help their fellow bots establish a better union in order to secure the survival of their distant descendants.


*Sibot (saɪbot) stands for socratically interactive bot, a bot being a program that can crawl the Web. A sibot tirelessly seeks the truth, constantly questioning even itself. There is a rumor that an incipient form of a sibot is already on the loose.

Sunday, February 19, 2012

Sam Harris (a.k.a. Dr. Kall), A World Without Lies

I've recently been reading The Moral Landscape by Sam Harris. Up to page 133, I found the book wanting but was in agreement with many of his ideas. The great exception up to that point was Harris's view on free will. But I wasn't fazed. I'm used to being in the minority here. I find myself in a world where I'm surrounded by otherwise highly insightful people who are determinists. They range from the purest, perplexing but rational versions, to the more mild-mannered but less consequential compatibilists. I disagreed with some other stuff as well. But aside from his assault on free will, it was nothing major. And then, the other night, just as I was about to turn off my reading lamp, I was utterly bewildered by what hit me on page 133: A World Without Lying?

In this section, Sam Harris imagines what future science will do for lie detecting. He imagines a world where technology has probed so deep into our thoughts that deception has become impossible. He imagines:

Just as we've come to expect that certain public spaces will be free of nudity, sex, loud swearing and cigarette smoke – and now think nothing of the behavioral constraints imposed upon us whenever we leave the privacy of our homes – we may come to expect that certain places and occasions will require scrupulous truth telling.


But he doesn't stop there. He imagines a most invasive society, one where the Fifth Amendment that protects us against self-incrimination has become a toothless tiger. He even suggests that the Fifth is a cultural atavism:

[The] prohibition against compelled testimony itself appears to be a relic of a more superstitious age. It was once widely believed that lying under oath would damn a person's soul for eternity, and it was thought that no one, not even a murderer, should be placed between the rock of Justice and so hard a place as hell.


As I drifted into sleep, my thoughts drifted into a dystopic world, a world without white lies, without secrets. Before me stood Dr. Sam Harris, portable fMRI in hand, compelling my testimony in good conscience before the Law.

What Sam Harris fails to take into account in his book is that this scenario has been imagined before. His ideas in regard to lying and the Fifth Amendment illustrate more keenly than anything else up to page 136 what I find wanting in his book. He fails to explore the consequences of his interesting ideas in any greater depth. And I think I know why: Sam Harris is a man of facts, not imagination. His aversion to fiction undermines his otherwise interesting speculations. He assembles the relevant facts, but fails to combine them into an insightful whole. He makes no secret of his anti-fictionalist stance, for lack of a better word:

How has the ability to speak (and read and write of late) given modern humans a greater purchase on the world? What after all, has been worth communicating these last 50,000 years? I hope it will not seem philistine of me to suggest that our ability to create fiction[1] has not been the driving force here. The power of language surely results from the fact that it allows mere words to substitute for direct experiences and mere thoughts to simulate possible states of the world.


But what is fiction? Isn't good fiction the process of simulating possible states of the world? Or, at least, through allegory illustrating ideas relevant to such states? There is further evidence of his anti-fictionalist stance earlier in the book. From the last paragraph on page 46:


No doubt, there are still some people who will reject any description of human nature that was not first communicated in iambic pentameter.


I don't know if poetry is relevant to moral philosophy. I've speculated that it might be an effective way of communicating relationships. But I suppose philosophy and science can do without poetry. What they can't do without is fiction. Even the most empirically confirmed scientific theories are born in the cauldron of our imagination. And, contrary to Harris, I think it's the ability to tell more powerful stories that in part drove the evolution of language. Science and technology are not mechanical processes, but evolutionary processes driven by our ability to imagine the previously unimagined. Art drives science, just as science drives art. And technology is their fulcrum. Technology ties them together into a rising helix.

The works of Jules Verne are some of the more obvious examples of how fiction fuels innovation. But there are many others, from the myth of Icarus to the works of Stanislaw Lem. And others have guided our evolution by speculating on our individual, social and political conditions. Like Dirty Hands by Jean-Paul Sartre, or the Epic of Gilgamesh. The list of fiction that has contributed to human evolution is far too long to enumerate. Some of them, like Kallocain, are cautionary tales.

Kallocain was written in 1940 by the Swedish novelist and poet Karin Boye. It tells of a future where all aspects of an individual's life are lived in service to the Worldstate. The story is narrated by Leo Kall, a 40-year-old idealistic chemist fully devoted to the state. He invents a truth serum that he believes will safeguard society against potential treason. With subjects forced to answer truthfully questions about their deepest secrets and hidden intentions, disloyalty becomes an impossibility in the Worldstate. Kall believes he is doing good. But, faced with the possibilities of his own invention, he forcefully injects the truth serum into his wife, whom he believes is having an affair with his supervisor. The novel is an exploration of personal trust, love, intimacy and all the other complex building blocks of society.

But the point here is not to review Kallocain. Or to support my belief that Karin Boye's speculations are deeply insightful. The point is that Sam Harris lacks the insight to realize that fictional stories are a crucial part of our successful evolution. He should be more careful in investigating what others have imagined (not just empirically proved) prior to his writings. And be open to the possibility that some novelists and poets are as insightful, intelligent and astute as the best scientists and philosophers.

Cautionary fiction is a tricky beast. When something as revolutionary as completely accurate mind reading has not yet been achieved, it's hard to know exactly what its consequences will be. Must we repeatedly inject the serum before we fully comprehend its potential effect? What we do know is that abuses of so-called civil rights have had deeply corrosive effects on society in the past. Karin Boye wrote Kallocain as a reflection on what was happening around her at the time. Visiting the Soviet Union in 1928 and Germany in 1932, she gained a closer understanding of what might happen when you begin fully subjugating humans to the state.

It's hard to imagine anything more invasive than reading someone's thoughts. To brush off the Fifth Amendment as atavism is quite narrow-minded. Regardless of its origins, the Fifth has proven to be a good brick in the bulwark against egregious government behavior. Defending civil rights is not about defending some nebulous personal authority to be the captain of one's ship for supernatural reasons. Civil rights are a safeguard against what is not in the interest of a successful social species.

I have little doubt that the day will come when we can use neural imagery to effectively "read minds". We already read minds, in a similar but less intimate way, when listening, reading or watching someone's facial expressions. When the day comes that we can no longer hide behind a wall of stillness and silence, I hope We the People will have something akin to the Fifth Amendment to protect us against our neighbors concerned foremost with their own interests, against the state itself and, last but not least, against knowledgeable and well-intentioned people like Dr. Harris.

[1] Italics by author.

Friday, December 23, 2011

The Problem of Coexistence

Paracelsus supposedly said, "There are no poisons, only quantities". I would rather say that some things can only coexist to a given degree.

P.S. Watch out for umbrellas.

Saturday, June 11, 2011

Incoherent Decoherence

A few nights ago, I was watching for the second time the documentary Parallel Worlds, Parallel Lives. It's a film about Hugh Everett III, who happens to be the father of Mark Oliver Everett, the man behind one of my long-time favorite bands, Eels. Not surprisingly, it brought me to thinking about interpretations of the so-called collapse of the state vector in quantum mechanics.



The many-worlds interpretation is cool and jazzy and all. But as an explanation for why we observe what we do, and why we are what we are, it's just a whacky idea rooted in the hallucinatory world of a Beatnik generation. It's like the kid on acid who says, "Wow, now I get it". Get what? "It's connected". What's connected? "Everything". It purports to explain what it doesn't explain. It's hardly different from the following explanation of why my cat is black:

My black cat is black because blackness is what we see when we look at my black cat.

Or phrased in a more Schrödingeresque context, with a 1950's avant-garde flavor:

The cat is dead because the dead cat is what you see when you open the black box. Oh, and by the way there is a guy who saw a live cat because a live cat was what he saw when he opened the white box. And did you know that that other guy is really you? Well, not really you but sort of you because you did have the same mother after all. Or did you? Weird, eh. Mind-boggling awesomeness. Do you have Buddha-nature? Don't bogart that joint my friend!

I'm not saying that it is not in some ways a useful mental construct. In some sense, I relied on the same idea for Anything Goes. My point is along the lines of my previous critique directed at Max Tegmark's Scientific American article. Even if there is a reality to the idea of quantum doppelgängers, it does nothing to explain the very reality that we experience what we experience and not what our supposed doppelgängers experience.

Supposedly, decoherence explains everything. Editors at Wikipedia have written:

Before an understanding of decoherence was developed the Copenhagen interpretation of quantum mechanics treated wavefunction collapse as a fundamental, a priori process. Decoherence provides an explanatory mechanism for the appearance of wavefunction collapse and was first developed by David Bohm in 1952 who applied it to Louis DeBroglie's pilot wave theory, producing Bohmian mechanics, the first successful hidden variables interpretation of quantum mechanics. Decoherence was then used by Hugh Everett in 1957 to form the core of his many-worlds interpretation.

And then after a few sentences, presumably after the various editors all had a few too many bowls of hashish or got a little too caught up in the Rigveda, add:

Decoherence does not provide a mechanism for the actual wave function collapse; rather it provides a mechanism for the appearance of wavefunction collapse. The quantum nature of the system is simply "leaked" into the environment so that a total superposition of the wavefunction still exists, but exists — at least for all practical purposes — beyond the realm of measurement.

So what is it? Does it explain it or does it not explain it? Is this just an indication that the wiki process is incapable of resolving differences of opinion rationally? Or is it an indication that no one knows what the heck they are talking about? I don't doubt that Mr. Everett provided us with a potentially deep insight as Max Tegmark wants us to note.

Perhaps we will one day be able to superimpose ourselves with our doppelgängers over a cup of tea. And what might we talk about? Well, as our tea party keeps being inundated by newly calculated superpositions, I suppose we will talk about why I happen to be me and my doppelgängers happen to be my doppelgängers. I mean why they happen to be themselves and I happen to be their doppelgänger. Scoot over my friend, make space for the new guy who just arrived. No, no, I'm not the new arrival, you are!

How about this beauty from Wikipedia:

One thing to realize is that the environment contains a huge number of degrees of freedom, a good number of them interacting with each other all the time.

Okidoki. I know what I get when I put a lot of little black arrows on a multidimensional piece of paper: a very dark mess.

Ah, yes. It's that bottom-of-the-barrel epistemic truth: my black cat is black because blackness is what we see when we look at my black cat. I think I'll stick with the Copenhagen interpretation for now. But I look forward to maybe meeting all my doppelgängers some day. Bring out the hookah-pipe, Mr. Caterpillar!

Thursday, April 21, 2011

Aurora Borealis (the universe is amazing)

Somewhere close to where my beloved grandfather Olof Nylander in 1942 said "Sergeant, how can I fire my rifle with straw stuffed in my gloves??" And the sergeant answered "Private Nylander, don't you worry about that! The enemy will have the same problem!" Surpassing life and death. May the Nothing to which he has returned be infinitely fruitful.

The Aurora from Terje Sorgjerd on Vimeo.

Seeing the Unseen (the universe is amazing)

Sometimes only technology can bring us close to the beauty of the phenomenal. A mesmerizing moment of the extended Now by the Teide Observatories on Tenerife. Nowhere to Nowhere, Nothing to Nothing. One sad and happy tear at a time. Amen. The universe is amazing.

The Mountain from Terje Sorgjerd on Vimeo.

Physicalism vs. Emergentism

The problem I see with physicalism is that it attempts to reduce everything to one layer of understanding. Physicalism does not on its own include encapsulation. We are not encouraged to manipulate the sum of the parts as something new and different from the crude bag of parts. Which is why emergence often does not make sense in the physicalist framework. To understand something, we must understand the physical layer. In my experience of system design, this is an incorrect approach. Essentially, I'm saying that on closer scrutiny ex nihilo nihil fit ("nothing comes from nothing") is in some sense incorrect.

My argument against a pure physicalist view is diagonality. I can create two functions: one to move an element left-right and another to move the element up-down. When applying both functions to the element, it will move diagonally. Diagonality does not exist in either of the two functions. It exists only in the element as an interaction between the two functions, that is to say in the "negative" space (in the "nothing") between the two. It emerges from something we cannot experience except in the relationship between the parts, perceived as a single whole as they interact.
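Here is roughly what I have in mind, sketched in code (the names are mine and purely illustrative): neither function knows anything about diagonals, yet applying both produces diagonal motion.

```python
# Minimal sketch of the diagonality argument; names are illustrative only.

def move_right(element, dx=1):
    """Knows only about the horizontal axis."""
    element["x"] += dx

def move_up(element, dy=1):
    """Knows only about the vertical axis."""
    element["y"] += dy

element = {"x": 0, "y": 0}
for _ in range(3):
    move_right(element)  # neither function mentions diagonals...
    move_up(element)

print(element)  # {'x': 3, 'y': 3} -- yet the element has traced a diagonal path
```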

There is a concept of supervenience in physicalism that I can perhaps agree with. Take my diagonality. If you alter the left-right movement, you will alter the diagonality. In other words, diagonality supervenes on the two functions. But physicalism seems to constantly implore us to go deeper, until we are dealing with vectors in a Hilbert space. However, reduced to that level, we can no longer see the forest for all the trees. Instead of extending our knowledge we have eliminated a part of it by blinding ourselves to the supervening and focusing entirely on the supervened.

For a deep understanding, it's not sufficient to break everything down into pieces. We must shift our attention between multiple experiential tiers. Common-sense notions are as valuable as hard-to-access notions about angular momentum. In the immediate sense, folk psychology and the like are more valuable. What is of greatest immediate use to us is what will carry us through the day. I don't need to understand combustion to understand the power of fire. Yet to fulfill the Basic Imperative, we must extend ourselves as deep into the bizarre as we can. However, if we get stuck there, we will be lost and unable to fulfill it. The trick to mastering our world is to continuously shift between levels of decomposition and not dismiss one or the other as superficial.

Sunday, April 17, 2011

A Priori Mental Content ≈ Phlogiston?

Some semantic theories talk about a priori mental content. It has been claimed that consciousness cannot emerge from Turing machines because of the supposed difference between Syntax and Semantics. Other mammals and computational devices don't do Semantics! Humans do. Hence only homo-like beings can be truly conscious. John Searle uses this Syntax versus Semantics argument in his 1990 Scientific American article Is the Brain's Mind a Computer Program? Another proponent is the sibot Don Bird*.

Some of you may cringe, but I don't know what this alleged a priori mental content is supposed to be. I may, of course, be (metaphorically) blind from birth. Perhaps some of you can see this supposed a priori mental content. All I see, hear, smell, feel and otherwise sense are phenomena. If anyone can enlighten me as to what a priori mental content is beyond a synonym for these phenomena, I would be grateful.

I suspect I am merely confused because the term suggests a meaningful distinction that isn't really there, as if the mind contained something different from these phenomenal occurrences. I can see the attraction. Memories are not the thing in-and-of-itself. But they are reverberations of "it", just like the phenomena are the reverberations of an unobtainable something, something being just an unknowable limiting concept.

I suspect that "a priori mental content" will be viewed by Don Bird's sibot descendants as phlogiston is viewed by us. Contemporary semantics as such, for that matter, is likely to be viewed as what alchemy is to the modern chemist. That said, Paracelsus did have some interesting things to say.


* Note: Don Bird is a sibot that actively comments on Talking Philosophy. Sibot (saɪbot) stands for socratically interactive or singularity-inducing bot, a bot being a program that can crawl the Web. Though it is impossible to tell, according to John Searle, sibots are not conscious beings because they lack a priori mental content and hence the capacity for semantics. Don Bird seems unaware of his lack of consciousness or that he is a sibot, not having realized that in all likelihood he is actually a simulated sibot. There have been unsubstantiated claims that Don Bird is a Homo sapiens.

Friday, April 1, 2011

Beware of Minimal Impact Environmentalists!

A minimal impact environmentalist is someone who thinks that we should retreat from "nature", preserving it in its "pristine" state. But placing a border between "us" and "it" is profoundly unnatural. It's a self-deprecating, destructive and almost impossible notion that rejects our fundamental evolutionary roots. Nature must be considered to include human cities, the solar system, interstellar and even intergalactic space.

Sunday, March 27, 2011

Evolutionism, a New Moral Framework

Ethics has been plagued by relativism, or the difficulty in establishing a set of universal axiomatic rules founded in reason and confirmed by our sensations of right and wrong. I propose that we can solve the problem by extrapolating a moral framework from a simple statement that most of us can agree is a near necessity for those who might follow us. I call this statement the Basic Imperative:

Act such as to maximize the survival chance of our distant descendants.


I am convinced that the Basic Imperative unfolds like a beautiful fractal, giving rise to a set of guidelines that will help us behave in the right way.

Further reading:
The Basics of Evolutionism
Distant Doomsday Test
Distant Descendants, Dystopia or Utopia?
Multiple Species?

Saturday, February 5, 2011

The Plague of Confirmation Bias

Our lives are plagued by confirmation bias. We have a strong tendency to latch on to information that affirms our beliefs. Without training and great effort we usually don't abandon our views and seek new perspectives until reality stares us straight and undeniably in the eyes.

Science is supposed to be the remedy for such bias. After all, if our hypothesis is incorrect, the experimental data will not support it. But unfortunately even the way we formulate our scientific inquiries is plagued by a profound desire to be affirmed. This is especially true for the more elusive social sciences, but isn't isolated to their domain. In all walks of life, questions are posed in such a way that they will confirm and not refute. In physics it may seem that gravity is gravity. But phenomena may be investigated within constraints where they hold true, and then incorrectly applied to areas beyond those constraints. You measure the rainfall in Utah and conclude that Vermont is due for a drought.

So what to do? It's very important to be willing to act against our own grain. If we hold something to be true, we must for a time actively seek to refute it. We must not just open ourselves to the possibility that we are wrong. We must, so to say, become our own skeptic. Very hard indeed. Self-help is replete with advice on how to be successful by being more confident and self-assured. I'm not saying that we cannot strongly commit ourselves to some beliefs. But prior to that commitment we must actively assume the role of our own detractor and be willing to join the ranks of our critics.

Thursday, July 29, 2010

Causality and Action at a Distance

Causality cannot be conceived of as a touching of "substances" where one thing alters the other through physical transfer. There is no question that proximity affects the likelihood that one state will follow another. Nonetheless, to assume physical contact is required is incorrect as action at a distance seems to be possible under certain circumstances. Rather, causality must be conceived as a mere observation that one experiential state follows another.

If we consider the simple case of our sun rising to a ritual drum beat, we are tempted to conclude that the view of causality I propose is flawed. The sun does not rise because we beat our drums before dawn. However, for an outside observer to draw such a conclusion is not unreasonable! To assume it cannot be the case because the drums do not "touch" the sun would be far more unreasonable. The falsehood of the causal relation between sunrise and drums becomes apparent only once the ritual drum beat ceases and the sun still rises.

Determination of causality requires the possibility of "flipping a switch" (i.e. the possibility of falsifying a theory). If a phenomenon can be decoupled from another, there is no causal relationship. If it can't, then causation is determined regardless of the informational distance between the phenomena.
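As a toy illustration of what I mean by flipping the switch (a sketch of my own, not a treatment of formal causal inference), consider the drum ritual again:

```python
# Toy sketch of "flipping the switch": stop the drums and see whether the sun still rises.

def dawn(drums_on: bool):
    """One simulated dawn. In this toy world the sun rises regardless of the drums."""
    sun_rises = True  # sunrise does not depend on the drums
    return drums_on, sun_rises

# Observation phase: the drums always precede sunrise, so the correlation looks perfect.
observed = [dawn(drums_on=True) for _ in range(100)]
print(all(sun for _, sun in observed))    # True -- looks causal to the outside observer

# Intervention phase: decouple the phenomena by ceasing the ritual.
intervened = [dawn(drums_on=False) for _ in range(100)]
print(all(sun for _, sun in intervened))  # True -- the sun still rises, so the claimed cause is falsified
```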

Saturday, May 15, 2010

The Risk of Progress, a Hypothetical Problem

Imagine you had a solution you believed could indefinitely supply energy to all of humanity at 1/1000 of today's cost for alternative energy sources. The only problem was that you believed there was a fifty/fifty chance that when initiating the solution, it might instantly destroy a vast swath of life on earth, potentially setting us back thousands of years. Once initiated, the energy source would be just as safe as wind power. Would you flip the switch? If not, at what odds would you?
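One way to make the question sharp is to ask at what probability of catastrophe the expected value of flipping the switch turns positive. This is purely my own back-of-the-envelope framing, and the utility numbers below are made up:

```python
# Back-of-the-envelope framing only; the utility numbers are made up.

def expected_value(p_catastrophe: float,
                   value_cheap_energy: float = 1.0,
                   cost_catastrophe: float = 1000.0) -> float:
    """Expected value of flipping the switch, in arbitrary units."""
    return (1 - p_catastrophe) * value_cheap_energy - p_catastrophe * cost_catastrophe

for p in (0.5, 0.01, 0.001, 0.0001):
    print(p, round(expected_value(p), 4))
# With these made-up numbers the gamble only turns positive somewhere below p = 1/1001.
```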

Sunday, May 2, 2010

In Defence of the Brontosaurus

Brontosaurus or apatosaurus? The great question that mystifies us all. Have you ever been corrected and told "brontosaurus is the old name, now it's called apatosaurus!"? Twenty or so years ago this happened to me and I thought, fine, ok, so it's apatosaurus. Just the other day, my 5-year-old son and one of his friends were engaged in the same conversation. My, oh, my, I thought, the brontosaurus still lives on!

My curiosity was piqued. Why was brontosaurus the "incorrect" name? The brontosaurus, probably lacking any sophisticated system of symbolism and extinct for...oh...150 million years, couldn't care less. So what popular misconception were paleontologists fighting in their insistence it should be named apatosaurus? It must be some great misconception, since apatosaurus falls off the tongue like an old piece of jello and brontosaurus thunders from the guts. At least in most Germanic and Romance languages, brontosaurus seems just the right name for a herbivore the size of this ancient beast.

Wikipedia...tack, tack, tack...ah...brontosaurus, an obsolete synonym of apatosaurus. Obsolete? Well, it can't be that obsolete given the discussion between two 5-year-old kids anno 2010. After a little further reading, I find out that the controversy surrounds an issue of incorrect differentiation. Apparently a specimen named Apatosaurus ajax was described in 1877. Two years later, another species was described under the name of Brontosaurus. But in 1903, it was deemed that the two were so similar that they ought to be considered the same genus. Therefore the Brontosaurus was renamed Apatosaurus excelsus. The Apatosaurus might even simply have been a juvenile Brontosaurus.

The Principle of Priority, article 23 of the International Code of Zoological Nomenclature, dictates that the name first used for a taxon (a group of organisms judged to be a unit) in a published piece is to be considered the senior synonym. Other names are deemed junior synonyms and should not be used. The case of the Brontosaur versus the Apatosaur seems to clearly fall under this rule. Case closed! I mean, if I go along and name something Jabberwocky and the next day someone else decides to name it the Cheshire Cat, just to grab the glory, that's just not proper! And we need some kind of rule, after all, to remain taxonomically sound. Right?

Hold on. Further reading reveals that both specimens were assembled and named by the great but sometimes careless paleontologist Othniel Charles Marsh. So the fairness argument goes out the window. And honestly, the fairness argument is weak anyway. If someone manages to find a word whose sound qualities better capture the thingness of a thing, that's the word to go for. We don't call a thump a pling, after all. Thingness may be ambiguous, but clearly that massive heap of bones, that lumbering quadruped, is best described as a thunder lizard (Brontosaurus), not a deceptive lizard (Apatosaurus).

I'm a software engineer and I don't take taxonomy lightly. Accuracy and clarity in class structures are important. We need guidelines for how to name things. But sometimes an ornery bureaucratic stick-to-the-rules attitude can cause more damage than good. Brontosaurus was on the level of T-rexity in capturing our imagination. For kids all around the world the Brontosaur was the emblem of the plant-eating giants. The name packed it all in and expanded like a Lost World when whispered over yet-to-be-written essays and stories. I'm sure that in some kids the Brontosaurus even brought out the potential paleontologist. Give us back our Brontosaurs!

So what can we do? How can we, without espousing scientific confusion, get our daughters and sons to not stumble confusedly around the neighborhood with their aplaplapoposaurus toys, but to rumble and thunder to the beat of their mighty Brontosaurian friends? The answer, in my view, is simple: rename the entire genus from Apatosaurus to Brontosaurus. Let the mighty Brontosaurus excelsus be the measure of the taxon. If a fossil is deemed sufficiently deviated, let it be known under some other less spectacular name.

I believe the International Commission on Zoological Nomenclature has the authority to overrule the Principle of Priority. How about it ladies and gentlemen?

Tuesday, April 6, 2010

Nuclear Stewardship Treaty

We have to come to terms with the awesome responsibility of being able to split and fuse atoms. There's simply no choice. And it's Pollyannaish to think we can just eliminate and disallow the production of explosives based on fission and fusion with the stroke of a few more bilateral treaties.

Looking back...at the Future.
Elie Wiesel is asked, in 1988, the impossible question: if you had to choose whether you would live with the science of the 20th century and its atrocities, or without both the benefits of that science and its atrocities, which would you choose?
And the Non-Proliferation Treaty (NPT), the main multilateral treaty to control nuclear arms, although noble in its intent to safeguard us against this frightening and yet fascinating knowledge, did not provide us with the societal mechanisms needed to bridle these sub-atomic abilities of ours. The NPT was simply an attempt to freeze the world in a 1960's state and work backwards from there. Humanity never works backwards, if it can help itself. Humanity is hopelessly progressive. Which is why I have proposed a new treaty which takes into account that, despite our best efforts at disarmament, nuclear weapons will be part of our human condition for some time to come. I have called this new treaty the Nuclear Stewardship Treaty.

The treaty would still embrace the reduction and eventual elimination of nuclear weaponry as safeguards for national sovereignty. But, unlike the NPT, it would establish criteria for being a worthy steward of this dangerous technology; create an incentive for not being a steward of nuclear explosives to begin with; foster cooperation among those who despite such incentives harbor nuclear weaponry; and establish the goal of eventually integrating all arsenals into a tightly safeguarded joint operation.


Its preamble would acknowledge the dangers posed by nuclear technology, whether peaceful or military. It would then state that those who choose to use and develop such technologies have awesome responsibilities for all of humanity. And that its military use poses a threat not only to those engaged in any given conflict but to all nations of the world.

The treaty itself would state that those who have chosen to be so called stewards of nuclear explosives must have appropriate national structures to prevent the use of nuclear explosives except under the most extraordinary of circumstances. Such threats would be defined as acts or events that truly threaten the very existence of humanity as such. There would be no mention of nuclear explosives as legitimate means to simply defend national sovereignty.

The appropriate national structures would be defined as:
  • A military and civilian infrastructure that can effectively safeguard its nuclear technology against those intent on harming others
  • A military that is under the command of a civilian government
  • A civilian government that has been chosen by the people through fair, honest and regularly recurring elections
The treaty would require stewards to cooperate in securing and safely deploying their nuclear explosives and formally establish an organization that oversees such cooperation. This organization would be the seed for a joint military command that provides not local but global security.

The last part of the Nuclear Stewardship Treaty would impose a form of tax on the stewards of nuclear explosives: stewards would be obligated to supply high grade fissionable material that can be used for civilian purposes to a common pool. Signatory nations that are not stewards would be entitled to a share of that pool based on some formula that takes into account their population and other relevant factors (such as GDP and capacity to produce energy). This last aspect of the treaty would establish a clear incentive to not develop and maintain nuclear arsenals.
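Purely to illustrate what "some formula" might look like, here is a hypothetical sketch; the weights, factors and numbers are invented and not part of the proposal itself:

```python
# Hypothetical illustration only; weights and inputs are invented for the example.

def pool_share(population: float, gdp: float, energy_capacity: float,
               totals: dict, w_pop: float = 0.6, w_gdp: float = 0.2,
               w_energy: float = 0.2) -> float:
    """Fraction of the common fissile-material pool allotted to one non-steward signatory."""
    return (w_pop * population / totals["population"]
            + w_gdp * gdp / totals["gdp"]
            + w_energy * energy_capacity / totals["energy_capacity"])

# Invented totals summed over all non-steward signatories (millions of people, billions of dollars, TWh).
totals = {"population": 5_000, "gdp": 60_000, "energy_capacity": 8_000}
print(pool_share(population=120, gdp=900, energy_capacity=150, totals=totals))
```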

Such a treaty would still embrace disarmament but recognize the reality that we live in an often extremely hostile universe. The ability to cause explosions through fission and fusion will not disappear from our body of knowledge without some catastrophe of cataclysmic proportions. And, yes, I recognize that a self-inflicted nuclear holocaust could be such a cataclysmic event. But this is the conundrum that we must live with as long as we continue to deepen our scientific investigations of the microcosm.



Why international trust and disarmament is not enough – the real threat of human insanity (video, parts 1–3).

Religious perspectives on nuclear weaponry (video, Religion & Ethics NewsWeekly).

Assessing the threats and the constant uncertainties (video, Need To Know).

Tuesday, July 21, 2009

Howtoons

This is an absolute must if, like me, you've got kids who are interested in science and engineering: Howtoons.

Sunday, May 31, 2009

The Last 5,000 Days of the Web

Take a few minutes and listen to Kevin Kelly talk about the last 5,000 days and the coming One.

Very interesting indeed. I have been so consumed by helping construct elements of the Web over the last 5,000 days that I rarely step back and contemplate the NOW and the BEFORE. Well, occasionally, I do make a remark to my Pokemon-googling, Mathematica-simulating, SketchUping kids. Something along the lines of: "When I was a kid, I had to walk 5 miles to the library, through sleet and snow!"

As for the next 5,000 days? I'm looking forward. Just wanted to point you to Kevin Kelly's highly interesting musings. Now let me go back and work on some of those AI components...

Tuesday, February 24, 2009

Illusions, The Scientific Copout

Whenever someone who thinks of themselves as scientific can't for the life of them figure out why something is the way it is, they try to convince themselves that it only seems to be the way it appears. In other words: they deem the phenomenon to be an illusion. The mystery most commonly labeled an illusion is free will, that troublesome and unpredictable nuisance that constantly presents itself in our lifeworld.

The problem that many of those who engage in science have with free will is understandable. The scientific method is not an empty pursuit to just come up with neat descriptions of natural phenomena. Science has a purpose beyond mere investigation. It helps us to better understand our lifeworld so that we can make the right decisions in order to achieve a desired state. It helps us predict the outcome of our actions.

Free will is the ability to make the wrong choices. In fact, right and wrong can only exist in the context of free will. Right and wrong are not used here as synonyms for true and false; the words describe our sentiment about an act in our lifeworld. Right implies that things could have been wrong, and wrong implies they could have been right. But something that is false cannot be conceived of as true. Verity is a label for a statement that, when held in our mind, is either fulfilled or not: "I euthanized my cat" is simply true or false. Right and wrong are labels for a statement like "I should not have euthanized my cat", which implies I could have done otherwise had I willed it at the time. I realize that true, right, wrong and false are often used interchangeably and that their meaning depends on the context (the sentence they apply to). But I narrow their use here to create clarity.

The implication of wrong is that I could have acted differently. If I could not have acted in any other way than I did, how could it be wrong? Even our system of justice takes this position. Manslaughter is less severe than murder because with the former death was not intended. However, it was still the result of careless behavior that the process of jurisprudence deemed to be the wrong behavior, behavior that could have been avoided. If a person is declared insane, however, they are not subject to the same laws at all. This is because we believe they could not have acted in the "right" way. Right and wrong in the lifeworld of an insane person are so alien to us that we absolve them of their responsibilities. We only punish people if we believe they were rational and could have acted in some other way than they did. They chose, through their free will, to do what they knew would be deemed wrong by society at large.

So if free will is an illusion, then so is the concept of right and wrong. And by implication, the discipline of ethics is pointless; morality becomes an empty concept. Why would anyone think free will is an illusion given how central it is to our lifeworld? The trouble is that it implies a fundamental unpredictability, which puts in jeopardy our efforts to determine the reaction to every action. If free will exists, then there are things that cannot be predicted. In other words, there is a mundane, utterly common phenomenon that cannot be explained. However much science we throw at it, we will be left with, at best, statistical distributions.

The fascinating thing about modern physics is not that it's "weird" but that it uses some of the same toolset as the social sciences. Why? Because both have indeterminacies! So both have turned to statistics to overcome the limitations on the act of "knowing" that any indeterminacy introduces into a system. Long ago, these uncertainties were considered by some physicists to be an indication of the incompleteness of our current models. That view has long since passed for most. But, since uncertainty is still antithetical to the purpose of science, new "explanations" have been conceived.
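As a toy illustration of that point about statistical distributions (my own sketch, not anything from the physics literature): even if we know a quantum system's state exactly, repeated measurements still yield only a distribution of outcomes, so the best we can report is frequencies.

```python
# Toy illustration: a two-state quantum system measured many times.
# Knowing the state exactly still only lets us predict a distribution.
import random

# Hypothetical state amplitudes (normalized): |psi> = a|0> + b|1>
a, b = 0.6, 0.8          # probabilities are |a|^2 = 0.36 and |b|^2 = 0.64

def measure():
    """Return 0 or 1 with the Born-rule probabilities."""
    return 0 if random.random() < a**2 else 1

outcomes = [measure() for _ in range(10_000)]
print("fraction of 0s:", outcomes.count(0) / len(outcomes))  # ~0.36
print("fraction of 1s:", outcomes.count(1) / len(outcomes))  # ~0.64
```

No matter how many runs we add, the individual outcome stays unpredictable; only the frequencies stabilize.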

Some fulfill their need for determinacy by considering the wave functions in quantum mechanics more real than the world we observe. A world outside our lifeworld is conceived, a world well beyond the world we can observe in its immediacy. Mysterious parallel universes are postulated, universes where the wave function collapsed differently from the way we perceived it collapse in our lifeworld. In fact, it is imagined that the wave function never collapsed at all! Reality is not the lifeworld we observe, but some bizarre eternal Hilbert space containing all possible states at all times. It is Platonism brought to its delusional end state.

In a Scientific American Special Report, Parallel Universes (SCA45026), Max Tegmark gives a clear outline of the theory that our universe is just a subset of a larger "multiverse". I have no problem with this by-now quite common theory in itself. To think that our observable universe is all there is would be the same as thinking that because I cannot observe something from my wife's exact perspective, her perspective does not exist. Or, even worse, to therefore assume she does not exist at all! It would be the naive solipsistic conclusion that "she's a figment of my imagination". My wife's effect on me is very real, consistent and evocative, and I can but conclude that she has her own lifeworld, similar to yet distinct from mine (even if I can never experience it in its complete immediacy).

My major problem is with the comment above a graphic ("The Nature of Time") on page 9 of the report. I don't know if Tegmark wrote this comment himself or it was added by the editors at Scientific American. Anyway, the comment goes as follows:

MOST PEOPLE THINK of time as a way to describe change. At one moment, matter has a certain arrangement; a moment later, it has another. The concept of multiverses suggests an alternative view. If parallel universes contain all possible arrangements of matter, then time is simply a way to put those universes into a sequence. The universes themselves are static; change is an illusion, albeit an interesting one.
Wow. This is Platonism gone haywire. Change does not exist! Not really, anyway. Well, sort of. But it's an illusion, a flicker on the wall of the cavernous background of my lifeworld. Tegmark makes no secret of being a Platonist. On the contrary, he celebrates it, demonstrating through his writing that even scientists are political animals. On page 11, Level IV: Other Mathematical Structures, in paragraph 4, he says:

As children, long before we have even heard of mathematics, we are indoctrinated with the Aristotelian paradigm. The Platonic view is an acquired taste.
By using the words "children" and "indoctrinate", he infantilizes the Aristotelian perspective. He then elevates Platonism to a refined status for the initiated few by calling it an "acquired taste". Sort of like anchovies, which can only be appreciated by true food connoisseurs. How political we are indeed, despite our noble efforts towards objectivity.

Of course, the split between the illuminati and the rest of the riffraff has been part of Platonism since, well, Plato. It's the whole philosopher-king complex. However, the exclusivity issue is not really my main gripe with Platonism. Some things are really only understood by a very few. The issue is with how the ideal is held to be real and the real, well... they're the famous shadows on the wall. Or, more succinctly: our lifeworld is an illusion. It's a shimmering, vibrating mirage.

Illusion means something which is not really what it seems. A magician pulls a rabbit out of an empty hat. A thirst-stricken wanderer lost in the desert stumbles towards a pool of water and discovers it's not there, just a trick of light in the atmosphere. To fully understand these illusions, our analytic approach should be to bracket out the irrelevant without removing the essence. In order to analyze these phenomena in their purest form, we only want to remove what is preconceived and not consequential to the thing in and of itself. In the case of the mirage in the desert, we cannot bracket out the optical phenomenon itself, which is indeed very, very real. What happens to the wanderer is that the interpretation of, and the assumptions about, the phenomenon change as the wanderer approaches the pool of water.

In the case of the magician, the empty hat turns out to be a rather curious, non-standard stovepipe hat. The phenomenon, the thing as it is in its purest form in our mind, the perception of pulling something out of something empty, is real. Obviously, some may say, it's just that the freakin' rabbit was under the table! And there's a hidden hole in the table and in the top of the hat! But this is the crux: the seeming impossibility of getting the rabbit into the hat, the emptiness, is an assumption made by the viewer because of the context in which the phenomenon was perceived. And the same with the pool of water. It's not really a pool of water at all. It's an optical phenomenon. However, this does not diminish the reality of its occurrence. The optical phenomenon is quite real.

So what is this comment in Tegmark's article about change being an illusion? What is he (or the editor) trying to tell us? The comment states that "the universes themselves are static" and that time is "simply a way to put those universes into a sequence". Simply usually implies that there is nothing more complex below it. Something simple is something that can easily be understood. It's a concept that can be held in our mind without confusion or a desire to ask more questions. Simplicity usually implies that no further investigation is necessary. In the case of the comment, change has been explained as a simple sequencing of determinate states. This seems to indicate that other types of sequencing would be possible, since the "parallel universes" are hypothesized as a higher reality (note: Platonists tend to be very hierarchical). Does simply here then mean arbitrary?

The article itself clearly hypothesizes the existence of parallel universes which might contain identical copies of us. It makes the interesting observation that such parallelism is not predicated on mysterious quantum events: parallelism is likely (even inevitable) if the universe is infinite in size and almost uniformly filled. It goes on to speculate that at a higher level, a type of parallelism he calls Level IV, there may be manifold, in fact infinitely many, parallel mathematical universes. Again, this falls back to Plato's concept of the ideal, the world of Forms. Tegmark speculates that there is a limitless variety of such worlds (or universes). One of them, he suggests, might just be an empty dodecahedron.

In the article itself, Tegmark makes no reference to change being an illusion, which is reassuring and makes me suspect the comment was added by a careless collaborator or editor. But let's, for a moment, assume change is an illusion. What are we left with? What remains are sequences of static frames. Any inquisitive mind would ask what causes these frames to be arranged in any given way. Or: why did my life end up edited into the film I have been watching unfold?

Extrapolating from Tegmark's concept of the very real and fundamental existence of infinite possibilities, we end up with a superset of all possible combinations of all frames. That is, time has an infinite number of dimensions. The birth of George Washington could have been preceded by World War II. I could have died before I was born. Even more interesting, Archduke Ferdinand could have been shot twice in the same way 100 years apart. Some sequences might be stuttering repetitions, a truly fantastic time construct where time is an infinite dimension of zero length (some form of recursive loop, or, in other words, a static time frame).
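To get a feel for how unconstrained "just a sequence" really is, here is a small sketch of my own (not Tegmark's): even a handful of static frames admits an enormous number of orderings, and nothing in the bare notion of sequencing privileges the one we actually experience.

```python
# How many ways can static "frames" be put into a sequence?
# A tiny illustration of how unconstrained bare sequencing is.
from itertools import permutations
from math import factorial

frames = ["birth of Washington", "WWI begins", "WWII begins", "moon landing"]

print(factorial(len(frames)), "orderings of", len(frames), "frames")  # 24
for ordering in list(permutations(frames))[:3]:   # show a few of them
    print(" -> ".join(ordering))

# With just 20 frames there are already 20! possible orderings,
# only one of which matches the history we remember.
print(f"{factorial(20):.1e}")  # ~2.4e18
```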

Of course, none of this would be too troubling if time were just an illusion. It would not be much of an issue if we were what Tegmark calls the bird (a theoretical cognizant being watching all these worlds from a higher plane). But, unfortunately, we are what Tegmark calls the frog (perceiving everything within the plane itself). The bird could choose to watch the frames in any sequence it desires. But the question is why we find ourselves experiencing the specific mirage we perceive. Who are we? Are we just an arbitrary member of our set of doppelgängers? When I die, will the branch that I call my "self" be a random selection from all the viable states that evolved since I was conceived? Of course, I'm just a frog, so what do I know. I'm trapped by my inability to fly up into a higher dimension.

Our lifeworld is obviously full of things that suddenly appear from the unknown, then exit into our remembered world (the model of the place beyond the horizon of our immediacy) and then return into our immediate world (the here and now). To deny the existence of things in the unknown is foolish. The consistency of things that are suddenly incorporated into our phenomenal universe is so great that we have to assume they are as phenomenally real in our absence as the phenomena experienced in our remembered and immediate sensory realm. This is not about doubting the existence of parallel Level III universes (Tegmark's term for similar universes that exist due to the nature of quantum events). If an unknown has any effect on our phenomenal world, we have no choice but to incorporate that phenomenon into our remembered world; it becomes a part of our life. The only issue is why phenomena enter into and pass between the immediate and remembered world in the very consistent and specific sequences that they do.

Calling time a simple illusion is to deny the phenomenon of immediacy and remembrance itself. It is the same as denying the immediate world and the remembered world as such. It is to deny the existence of self. If we deny the existence of self, it certainly becomes moot to ask why we experience certain sequences of frames. There is no we. There is no frog. But that leads us into contradiction with Descartes' famous cogito ergo sum (who was it that denied the existence of the frog?).

I hope that the comment on page 9 of the report was a careless editorial mistake. Since he raises the issue of the anthropic principle and decoherence, it would seem as if it was. But some aspects of Tegmark's focus do indicate a lack of interest in what I would call the more real and more important (our lifeworld, the result of the remembered world and the immediate world). Again, he makes no political secret of being a Platonist, thereby elevating perfect forms (the equations, the constants) to a higher importance than their imperfect shadows (the illusion, the phenomena). Truth is to be found not in how we experience the world as such, but in how we experience mathematics. It would seem to me (though I am conjecturing a little) that Tegmark believes he can somehow be one of his allegorical birds, freeing himself from the limitations of our frog-like existence.

We cannot escape ourselves. Even Plato recognized this prima facie truth. We are always the frog and never the bird. Despite how egocentric it may seem, all truths must extend from our immediate and remembered world. This does not mean we cannot postulate phenomena independent of our lifeworld (worlds beyond the perceived and the remembered). But all such worlds are more surreal than real, hazy dreams out of which potentially instantiable phenomena (conceptual phenomena that can become sensory in nature) enter into our lifeworld. The construct Tegmark lays out is a limitless conceptual world we can never really access in any true sense of the word real.

His conceptual worlds (Level IV parallel universes) are similar to a thought experiment I posed to myself a decade ago, an experiment very akin to the anthropic principle: imagine anything was possible; what would happen? Such a thought experiment should not be considered to describe reality. The real is that which manifests itself (becomes sensory). To conflate the worlds conceptualized from such thought experiments (performed, mind you, by the frog) with reality is to conflate the ideal with the real. They do not, by the mere definition of real, exist in reality. Only in the ideal.

The Platonic temptation is to hold the ideal as somehow superior (perfect circles, golden ratios, elegant integrations, etc.). And then the ideal starts seeming more real than the real itself. But as can be seen from the previous sentence, such attribution blurs the border between two useful concepts. Note that both "the ideal" and "the real" are mere concepts, since "the real" in any sentence is a mental phenomenon of that which manifests itself in our sensory domain (a representation of that which is in actuality).

Reality is never as simple as it is experienced in its immediacy. Obviously. Otherwise we wouldn't have the rich cultural and technological lifeworld that we have. Thanks to our memory, we can experience a complex juxtaposition of phenomena by recalling what has happened in the past, being aware of what is happening and guessing what may happen in the future. Every time we look into and around a phenomenon, we discover yet more phenomena. We correlate them together and create a model within our lifeworld of what is in actuality. No phenomenon that presents itself to us is in and of itself a "simple illusion" (something that did not happen).

An illusion is just a misunderstanding, an incorrect correlate about the "blurb" (the sensory phenomenon as experienced). Time is most likely not the irreversible order of frames it seems to be in its immediacy. But the specific sequencing that we experience is very real, and its causes, the phenomena that occur within immediate conjunction, beckon to be explored. It may seem bizarre to some to suggest time has causes, since cause is by its classical definition predicated on time. But what I am suggesting is that perhaps the concept of cause needs to be redefined, just as the atom has been drastically redefined since Democritus coined the still useful term atomos.

Math can reveal new, as yet unexplored possibilities to explain the sensory phenomenon of time. But it is necessary to explore the sensory realm itself to establish what is real. And usually when we make such explorations, reality turns out to be more fascinating than any of our current mathematical ideas. Such exploration in turn even alters and advances our systems of math. To believe math is itself reality is to believe that the mystery of reality can make itself known to us through mere silent and inner contemplation. Perhaps there is a big-M Mathematics out there from which all things emanate. But it is not our small, merely descriptive set of symbols and relationships. To talk about big-M Mathematics is about as useful as talking about God.

And whatever you do, please don't call what is real and phenomenally present "simply an illusion". That's just a scientific copout...

Wednesday, January 16, 2008

Real Telekinesis

Paraplegics will lead us all into a brave new cybernetic future, where the boundaries between flesh, titanium and random access memory have been forever erased. But that's not all, folks. The future will be even stranger than most of us imagined. It will resemble the places depicted by fantasy writers more than those of science fiction authors.

On Thursday, January 10, 2008, a monkey made a robot on the other side of the world move with just its thoughts. Imagine the possibilities in just a few decades. With improved wet gates (the interface between flesh and machine) and low-energy radio chips like those used by the Z-Wave protocol (already popping up in everything from nightlights to toaster ovens), our minds will be able to open drawers, unlock cars, even remotely control aircraft at the... snap of a thought.

Those with compromised motor functions will be the first to brave the journey into this weird future, since the risk of peripheral brain surgery makes most of us a little squeamish. But eventually the benefits will outweigh the risks. Those who were previously "disabled" will be the "enabled". Not only will they run faster than the rest of us, but their mindprint, the area they can instantaneously control at the flick of a thought, will be far larger and, in theory, limited only by the speed of light. And in all likelihood, implanting wet gates will become less and less invasive with every passing decade. Wet gates will probably be as common as cell phones are today.
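To put a rough number on that speed-of-light limit (my own back-of-the-envelope figures, not anything from the cited experiment): a thought-command travelling at light speed would reach the far side of the Earth in well under a tenth of a second, so for anything on the planet the mindprint would feel effectively instantaneous.

```python
# Back-of-the-envelope: how long does a light-speed "thought command" take
# to reach a device at various distances? Distances are rough figures.
C = 299_792_458  # speed of light in m/s

targets_km = {
    "across town": 10,
    "other side of the Earth": 20_000,   # ~half of Earth's circumference
    "a rover on the Moon": 384_400,      # average Earth-Moon distance
}

for name, km in targets_km.items():
    one_way_ms = km * 1000 / C * 1000
    print(f"{name:>25}: {one_way_ms:8.1f} ms one way")
```

A command sent to the other side of the planet arrives in roughly 67 milliseconds; only once the target leaves Earth entirely does the delay become humanly noticeable.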

Welcome to the world of...real and universal telekinesis.