Friday, November 14, 2008

To President-Elect Obama

Congratulations President-Elect Obama! 

Over the course of your campaign, the vapid "change" speeches of the early primaries were replaced by real political proposals. What emerged was a pragmatic world view that somewhat tempered the troubling messianic image that some of your more ardent supporters were trying to bamboozle us with. In the end, I gave you my humble vote without too much trepidation.

But I hope the realpolitik, the pragmatism you seem to espouse, does not dim the notion that real, concrete change is in fact needed. It's not Washington that needs to change. What needs to be altered is the way we all govern our world as a whole.

I can only assume that a peaceful and civil world where people are free to innovate and pursue happiness is your goal. As leader of what, despite recent circumstances, remains the most powerful economic and military nation on our planet, you have a responsibility not just to further a fragile Pax Americana or to maintain a Westphalian status quo. Your responsibility is to help establish a new global framework based on the same principles that joined our bickering American states into a civil union and that in the last 50 years have guided Europe towards lasting peace.

Hypothesis about the 4 Color Theorem: Correction

There is a problem with a previous statement I made. I stated that the outer circle forms another "layer", implying that the new layer borders only the layer above and below it. This would mean every element has a single left and a single right neighbor and multiple linearly arranged neighbors above and below. This is only true for some maps. With more complex maps, however you divide the map into "layers", you will end up with elements that extend into several layers (not just the ones above and below), or elements that border more than one element in the same layer (what I call a "wrap around").

It's terribly annoying, because it's like I can sense the reason why only 4 colors are required but I can't quite see it! I guess this is pretty much what separates a conjecture from a proof... My hope was that arranging everything into isolated and linear numeric sequences (some form of "layering") would provide me with a clear vision of the why. But these so-called "wrap arounds" keep throwing me off.

If I could just stop bouncing through Kaliningrad!

Friday, October 3, 2008

Final Cause

It's been years since I thought about Final Cause. A good friend of mine, in fact one of the closest friends I ever had, with whom I discussed the issue of Final Cause extensively and with whom I formed, in part, a business around the concept of Final Cause, has passed away. He has slipped into the eternal unknown. I can still feel his cold body pumped rigid with toxic preservatives, even though it was almost 2 years ago. Still, in my mind it was only yesterday that he took his final, choked and disturbing breath. Like an industrial machine forcing out filaments of coarse air. I wasn't there when his life came to an abrupt end. But his wife tells me how it went down, and I can feel it vividly, like a belch exhausted at close proximity.

Final cause, so easily perturbed by the unexpected. But of course, no one is privy to the true end but the future events themselves. However, as Aristotle so aptly observed, there is indeed Final Cause: a desired end state towards which the living strive. A nut that struggles to expand into a majestic tree. It's really quite unfortunate that we have rejected the validity of final cause because of strict interpretations of deterministic models. Determinism places cause squarely in only one place: Original Cause, that one primordial point where the single universal machine was set into motion. Since Newton's eclipse, we have worked our way into the shady world of probabilities, of collapsing wave functions. And yet we have resisted embracing the Fourth Cause, Final Cause, as a legitimate member of our understanding of causation.

It's odd, to say the least. Final Cause is there at our daily level all the time. Why did we apply for a specific job? Because we desired a certain glorified end state, a state of being some specific entity in our human struggle for meaning? Or because we were determined to desire the unattainable by an almighty actor, an actor mysterious in its meandering ways? No, the latter makes no intuitive sense. Final Cause is as real as immediate cause. Final Cause evolves, then affects the past in very real ways. Well, the future, to be precise.

Friday, July 11, 2008

The 4 Color Theorem

My 8 year old son Julien, who has the ability to quickly "see" math, came to me and out of the blue started talking about some math issue. Unlike Julien, I'm a bit more of an incubator. When it comes to math, I need more time and a bit of "Zen staring" as my late and dear friend Alexey Pilipenko used to call it. Julien started rambling about something regarding even and odd numbers meeting. Even meant you needed 2 colors, odd meant you needed 3. 4 was always sufficient. And what you never, ever needed was 5. I had no idea what he was talking about. Then he said something like, "you know that map thing...that map problem", and I vaguely remembered something about a theorem on how many colors it takes to clearly distinguish regions on a map.


I sat down with Julien so he could show me what he meant. Per his instructions I drew a simple map. Julien pointed and said all you needed were 2 colors if "even numbers met at the point". If odd numbers met you needed 3. If there was a combination of points with odd and even numbers on the map, you needed a fourth color. But you never needed 5, he emphasized again. I quickly tried out a map that seemed like it would need 5, placing numbers in the regions instead of colors. But before I could clear the Zen veil, Julien rapidly rearranged the numbers to prove 4 colors were sufficient.


Since I was vaguely familiar with the 4 color theorem, I didn't question the stated premise that 4 colors always sufficed for a map on a simple plane. But it suddenly intrigued me immensely. And what was it Julien was really pointing out about odd and even numbers? By this time Julien was already off to something else. Unperturbed by the problem, Julien was reading a book on the kitchen floor. His poor father, on the other hand, sat mesmerized at the kitchen table, drawing maps, Zen gazing at number coded regions, leaning his forehead against his knuckles.


This much was clear. Julien was absolutely right. If you chose a point where an even number of lines met on the map, call it an even vertex, you needed only 2 colors to distinguish the regions surrounding the vertex. But for an odd vertex, that is, where an odd number of lines met, you needed at least 3 colors. Why? Because if you circled around the regions surrounding an odd vertex and alternated colors, by the time you reached the region neighbouring the region you started at, you would end up with the same color as you started with! It can easily be shown using a numeric sequence:


1, 2, 1, 2, 1
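This parity argument is easy to check mechanically: coloring the regions around a vertex is just coloring a circular sequence, and an alternation of two colors only closes cleanly when the sequence has even length. A minimal sketch in Python (the function name is mine, purely illustrative):

```python
def colors_needed_around_vertex(n_regions):
    """Minimum colors for n_regions (>= 2) arranged in a circle,
    where each region borders its two circular neighbors."""
    # Try alternating two colors: 1, 2, 1, 2, ...
    coloring = [1 + (i % 2) for i in range(n_regions)]
    # The circle closes: the first and last regions are neighbors.
    if coloring[0] == coloring[-1]:
        return 3  # odd count: the alternation wraps onto itself
    return 2

print(colors_needed_around_vertex(4))  # even vertex -> 2
print(colors_needed_around_vertex(5))  # odd vertex -> 3
```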


Any sequence with an odd number of alternating elements would end with the same element it started with. Since the regions around a vertex form a circle, blue ends up against blue, making it hard for us on a 2 colored map to tell a blue Sweden from blue Norway if Finland is red. It occurred to me that we can talk about binary pairs. Why? Because any line on a simple plane divides the plane into a binary pair. There is a region left of the line and a region right of the line, i.e. a binary pair of surfaces. Expressed numerically:


0,1



With an odd vertex, there will always be an incomplete pair:


0:1 , 0:1 , 0


We need to remember that the sequence is circular. And, importantly, each element should be able to form a binary pair with both of its neighbours. If we shift over the elements in the above sequence, we get:


0 , 1:0, 1:0


It doesn't matter how we shift it, left or right. There is simply one zero too many for all elements to form pairs. If we bond the start element with the end element, we get the flawed pairing:


0:0 , 1:0 , 1


So in a sequence with an odd number of elements, we need a third type of element:


0, 1, 0, 1, 2


Now we can form a perfect pair (a pair where the two elements are of different types) even if we pair the start element with the end element:


2:0 , 1:0 , 1...2


If we shift again in the same direction:


1:2 , 0:1 , 0...1
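The pairing checks above are mechanical enough to automate: walk the circular sequence and flag any pair of identical neighbors, including the pair formed by bonding the start element with the end element. A sketch (the function name is mine):

```python
def all_pairs_perfect(seq):
    """True if every circular neighbor pair in seq consists of two
    different types (a "perfect pair" in the terms used above)."""
    n = len(seq)
    return all(seq[i] != seq[(i + 1) % n] for i in range(n))

print(all_pairs_perfect([0, 1, 0, 1, 0]))  # odd alternation wraps 0 onto 0 -> False
print(all_pairs_perfect([0, 1, 0, 1, 2]))  # third type fixes the wrap around -> True
```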


Now, what about that 4th color? So far we have just had one vertex and rays (or lines) emanating out to the very edge of the plane (our map). If we cut the rays somewhere before they reach the edge of the plane, and then attach each cut end to the ends of the two closest rays, we essentially get a triangulated polygon. If we want to clearly distinguish all the triangulated surfaces of a polygon that were formed around an odd vertex from the rest of the plane, the plane will have to have another color than any of the colors in the inner sequence (since the plane borders on all of the surfaces). In numeric terms, we can think of it as adding another sequence layer:


3

0, 1, 0, 1, 2



The element in the upper sequence should be able to bond and form perfect pairs with any neighbour in the lower sequence. In the above example, 3 borders on all elements in the lower (or innermost, in spatial terms) sequence. It would be easier to picture this if we used colored squares instead of numbers. The above sequence works because the 3 can bond well with any of the 0's, 1's or 2's below:


3:0 , 3:1 , 3:0 , 3:1 , 3:2
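Choosing the outer layer's type can be sketched the same way: a single outer region borders every inner region, so it needs any type not used in the inner sequence. A hypothetical helper, assuming the 4-type palette used throughout:

```python
def outer_color(inner_seq, palette=(0, 1, 2, 3)):
    """Pick a type for a single outer region that borders every
    region in the circular inner layer inner_seq."""
    used = set(inner_seq)
    for c in palette:
        if c not in used:
            return c
    return None  # all 4 types used up: a 5th would be needed

print(outer_color([0, 1, 0, 1, 2]))  # -> 3
```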


What if we divide the plane surrounding the triangulated polygon into 2 surfaces:


3, 2

0, 1, 0, 1, 2


Now we need to establish which elements of the upper sequence have to be able to form pairs with which elements in the lower sequence (or who is a neighbour of whom). Remember that the upper sequence forms a "circle" around the "circle" of the inner sequence. If two elements are neighbours "diagonally", it doesn't matter that they are of the same type. If the 3 in the above sequence is only above the first zero in the lower sequence, then we get the following required pairs:


3:0

2:1 , 2:0 , 2:1 , 2:2


And, we have a problem, Houston. There is a 2:2, which is an imperfect pair! How can we solve this? It's easy. All we need to do is replace the last 2 in the lower sequence with a 3, which will not be a problem since there are no 3's yet in the lower sequence. All elements in the circular lower sequence will therefore still be able to form perfect pairs with their two neighbors. The lower sequence then becomes:


0, 1, 0, 1, 3


The 3 in the upper sequence is still only above the first zero, forming the perfect pair 3:0. So let's check the 2 in the upper sequence against all its neighbors in the lower sequence:


2:1 , 2:0 , 2:1 , 2:3


Which are all what I have repeatedly referred to as perfect pairs. It doesn't matter where we put the borders between the lower and upper sequence. Let's say we place it halfway between the second and third element in the lower sequence (the first 1 and the second zero). The upper 3 then forms the pairs:


3:0 , 3:1 , 3:0


The 2 of the uppermost sequence now has to form fewer pairs:


2:0 , 2:1 , 2:3
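The repair move used above (swap the offending last element for a type that neither the lower sequence nor its outer neighbors use yet) can be sketched as a small function. The helper name and the 4-type palette assumption are mine:

```python
def repair_wrap_around(inner, outer_neighbor_types):
    """If the last inner element clashes with its outer neighbors,
    swap it for a type used neither in the inner layer nor by those
    neighbors. Assumes a 4-type palette, as in the text."""
    last = inner[-1]
    if last in outer_neighbor_types:
        unused = next(t for t in range(4)
                      if t not in inner and t not in outer_neighbor_types)
        inner = inner[:-1] + [unused]
    return inner

# The lower sequence ends in a 2 and the outer neighbor is also a 2,
# so the last element is replaced with the unused type 3.
print(repair_wrap_around([0, 1, 0, 1, 2], {2}))  # -> [0, 1, 0, 1, 3]
```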


But what if we move the border of the upper 3 all the way to the 3 of the lower sequence (so both the 2 and the 3 in the uppermost sequence border the last element of the lower sequence)?


2:3

3:0 , 3:1, 3:0 , 3:3


Suddenly we have an imperfect pair again (a 3:3). It's self-evident that each element of the uppermost sequence will border a maximum of three types in the lower sequence (since we need only three types to distinguish the surfaces around an odd central vertex). It's also evident that, in the beginning, only the last element in the lower sequence needs to be of a unique third type (since even sequences can just alternate between two types).


At most two elements in the higher sequence will ever have to border three types in the lower sequence (since only the last element in an odd lower sequence has to be different). As soon as you place three elements into the higher sequence, at least one will have to border on only two elements in the lower sequence.


In the case where only one element in the higher sequence borders three in the lower, only that higher element must, by necessity, be of a fourth type. The types of the other elements in the higher sequence are thereafter dictated by the type of the element that borders three types in the lower. If the higher sequence has an even number of elements, all pairings will be perfect. If, on the other hand, there is an odd number of elements, the last element must be different from the two primary alternating elements in the lower sequence and from the fourth type, which is where we seem to run into a problem.


However, in any sequence you can replace an element that sits between two types with a third type different from both, without destroying the elements' ability to form perfect pairs with their neighbours:


0, 1, 0, 1, 2 => 0, 3, 0, 1, 2 => 0, 3, 2, 1, 2


Interestingly, assuming you use at most 4 types, any time this is done, at least one alternation remains. In the above example we switched from a 0 / 0 alternation in the first two sequences to a 2 / 2 alternation in the last sequence. But that's not so important for us.
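The substitution rule can be verified mechanically: replace an element and recheck every circular neighbor pair. A sketch with illustrative names, reproducing the two replacements from the example above:

```python
def replace_keeps_perfect(seq, i, new_type):
    """Replace seq[i] with new_type and report whether every circular
    neighbor pair still consists of two different types."""
    out = seq[:i] + [new_type] + seq[i + 1:]
    n = len(out)
    ok = all(out[j] != out[(j + 1) % n] for j in range(n))
    return out, ok

s, ok = replace_keeps_perfect([0, 1, 0, 1, 2], 1, 3)
print(s, ok)  # [0, 3, 0, 1, 2] True
s, ok = replace_keeps_perfect(s, 2, 2)
print(s, ok)  # [0, 3, 2, 1, 2] True
```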


The important thing is that if there is an odd number of elements in the higher sequence (assuming only one element in the higher sequence borders three types in the lower), we can just switch the offending last element of a fourth type with the second type of the primary alternating elements in the lower (i.e. the 1 in our case). The offending element becomes a 1, and all the 1's bordering that element in the lower sequence become 3's.


Now, interestingly, the only way to get two elements in the higher sequence to border three types of elements in the lower is to make a small element bordering only the last element in the lower (which is of a third type).

Wow, fascinating. I think this will require another post....

Monday, March 17, 2008

Obama versus Clinton: Determining Popularity

After winning 11 contests in a row, the Obama team worked very hard at convincing us that their candidate was winning the popular vote. It would seem that the argument has weakened a bit after Clinton’s wins in the Texas, Ohio and Rhode Island primaries. But the argument about popularity was actually flawed from the outset. Voting and extracting meaningful information from the process is, or should be, a science, not the political mosh pit into which the primaries have evolved over the last decades. And science is predicated on the soundness of the process used to determine the ever-elusive, so-called truth. In fact, science itself is simply a methodology, a set of intuitive rules about good procedure.

For several years, I’ve been working on improving voting systems at a fundamental level. My passion for how to interpret and deploy voting began when I was working as a software engineer. I’ve devoted the last 10 years to developing more intelligent systems. And one of the ways to make a software system smarter is to harness the power of collective decision making, more commonly known as voting. When developing software that uses voting, it’s important to implement procedures that allow you to interpret the results in a meaningful way. No knowledge can be derived from votes without a clear concept of who participated and what their motives were. Everything from Google’s search engine to NASDAQ in today’s society uses votes to make important decisions. Google counts the number of links a web site gets from other sites. NASDAQ measures the votes of traders looking for a profit. Both are confronted with how to interpret their results, which becomes impossible without good universal rules. If Google’s search engine were unaware of, and did not counter-calibrate for, mob web sites that are only there to increase the rank of some potentially unimportant web page, our daily searches would be inundated with junk. Within weeks Google would lose us as customers. NASDAQ and the S.E.C., for their part, have to deal with problems of insider trading to preserve our ability to make intelligent market-based decisions. What about the two major U.S. parties, which have been using collective decision making since their very inception?

Sadly enough, the national Democratic and Republican parties use procedures for selecting their presidential candidate that resist any meaningful interpretation. We are left only with the guesswork of talking heads who analyze exit polls predicated on people’s participation in these very flawed procedures. The situation is especially dire within the Democratic party. After Super Tuesday, I became interested in determining who was truly more popular: Clinton or Obama. What popularity weight would each pledged delegate take to the national convention? Would a pledged New York delegate represent more approval than a pledged Alaskan delegate? It seemed simple enough. All I had to do was divide the size of a state’s constituency by the number of pledged delegates from that state. This is where I confronted my major hurdle. What counted as the whole constituency? Pledged delegates are assigned to states in part based on total vote tallies for the Democratic candidate in the previous three presidential elections. But looking all the way back to 1996 did not seem a fair way of measuring the current size of a constituency, especially given the fluidity of independent voters.
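The per-delegate weight I was after is just a division. A sketch with purely hypothetical constituency figures (not actual 2008 data; the numbers below are placeholders for illustration only):

```python
# Hypothetical constituency figures, for illustration only --
# not actual 2008 numbers.
states = {
    "New York": {"constituency": 1_750_000, "pledged_delegates": 232},
    "Alaska":   {"constituency": 9_000,     "pledged_delegates": 13},
}

for name, s in states.items():
    weight = s["constituency"] / s["pledged_delegates"]
    print(f"{name}: one pledged delegate per ~{weight:,.0f} voters")
```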

So what would be a proper definition of these constituencies? All currently registered Democrats within a state? All eligible voters? Perhaps, simply, all actual voters. Or, maybe, the entire population of the state. The answer depended on whether I should consider the process an internal party affair or everyone’s concern. When I looked at the actual process to find my answer, I was blown away by how completely inconsistent the process is across our nation. Not only do some states use caucuses rather than primaries, but some are open (all registered voters can participate) and some are closed (only registered Democrats can take part). And some, like Texas, to my bafflement, use both primaries and caucuses! In the case of Washington State, only the caucus really matters, and yet a symbolic primary is held anyway. Just to confuse us even more. In some states voters do not even register party affiliation.

Clearly, not even the Democratic Party itself has made up its mind on whether or not the process for selecting their candidate is an internal matter. This decision has been left largely to each state. It’s bad enough that we mix caucuses, which are well-suited for party activists, and primaries, which are well-suited for the general public. When you add the fact that a state can choose to have an open primary or caucus, you are left with nothing but a big mess. And this schism percolates all the way to the national convention. Superdelegates are thought of as the Establishment vote, whereas pledged delegates are thought of as an expression of the popular will (which, given the aforementioned circumstances, is quite a stretch). Now a vicious argument is being waged inside the Democratic Party about whether the superdelegates should, as originally specified, be free agents. If they are, in fact, morally bound by what is incorrectly deemed to be the “popular will”, one has to ask why they were introduced at all. If they were just created to overwhelmingly confirm the slightest majority of pledged delegates, then the superdelegates are nothing but a deceitful psychological sleight of hand. I think the simple truth is that superdelegates exist because the Democratic Party could not reconcile its impulse to make the presidential primaries a public service with the Party’s need to retain control of its own identity. Adding superdelegates to the mix may have been a fruitless attempt to close the Pandora’s Box opened by the McGovern-Fraser Commission’s 1969 effort to open the nomination process to a wider group of interests.

Changes since the McGovern-Fraser Commission have pushed the process of selecting candidates for the presidency out of the hands of the parties and into the public arena. But not entirely. The result is an inconsistent and flawed system which cannot decide whether it should abide by federalist or centralist principles, and whether it’s a private or public enterprise. Anyone who claims they can extract meaning from this system has not considered its fractured rules. Everyone, whether Republican, Democrat or independent should take heed. Given that we live in a de facto two-party system, these are the rules that determine our limited choices for president. The point of having a democratic, small “r” republic is so We the People can make a meaningful and intelligent decision on who should be our head of state and commander-in-chief. This does not necessarily mean that the straight-forward national vote tally, the simple majority so to say, should decide. We are, after all, a federation in which votes of constituencies from small states are boosted and the powers of large states are properly tempered. But being a republican democracy means that our system for choosing the President of our union has to be consistent, universal and easy to interpret. Unfortunately, how we currently choose the next President is more governed by the arbitrariness of political information cascades than it is by the power of making meaningful decisions through voting.