
Tuesday, March 18, 2014

Legality vs. Morality



Even amongst my fellow world federalists, who fully agree on the necessity for the global Rule of Law,  I've been noticing a lot of equivocation about what is morally versus legally right in international affairs. Laws (and their related social norms) should not be equated with what is ethically right. A law can be immoral. However, always following our moral convictions can be socially unsound. So what role does each play in creating a good global society, one that is conducive to the wellbeing of our species as a whole?

In simple terms, a law is a rule recorded and enforced by a government. In more general terms, it is an agreed-upon norm. When viewed in this general sense, the distinction between laws and moral precepts begins to fade. But importantly, to avoid continuous bickering about what we agree on, we record our norms in contracts that ethically bind us to abide by them in our future actions. A clearer distinction reemerges between norms that are based on instinct or personal contemplation — morals — and those which are clearly laws: objectively verifiable agreements.

If you and I sign a contract, and neither of us is under duress, the distinction between morality and law is less clear. It can be objectively determined that I personally consented in writing to the contract and that, however egregious the conditions were, it was by my choice. If you consent to my euthanizing you, it becomes much harder to claim that I committed an unethical act. It's still possible within a well-formed ethical framework, though such arguments have far less emotional force than claims about a moral obligation to act in a given way when no clear consent was given.

When laws are nationalized, the norms are made binding on the whole nation regardless of any single person's direct consent. Because they lack the element of our direct consent, our obligation to abide by them becomes a much more difficult question.

International treaties are (inter)nationalized contracts. Just like a contract between you and me, they are legally binding agreements. But because they are (inter)nationalized, and because they were made without the direct consent of all affected, we can easily begin to equivocate about the morality of a state's actions in the international arena. We muddy the distinction between what is morally and legally right by equating the two. Russia's actions vis-à-vis Ukraine are suddenly in the right/wrong simply because there is (or isn't) a treaty or legal precedent. Legal (in)correctness is sloppily transferred into the moral domain.

So what role does morality legitimately play in (inter)national law? In modern times, this question harks back to Jeremy Bentham and the philosophical school that he founded: utilitarianism. Bentham did not concern himself specifically with international affairs. His attention was directed at the very idea of human laws and actions regardless of context.

The more fundamental question is: how do we determine if any given law or action is right or wrong? If we just use our "hearts", given that desires diverge, we will end up with bellum omnium contra omnes, a war of each against all. Perhaps we should outlaw marriages between persons of type X and Y because, well, it's just preposterous and against the natural order. Or their love demonstrates that it is in fact part of the natural order. It's one against the other. And the issue can ultimately only be resolved with sticks and stones. Or nuclear devices. Or, like between many of us in the blogosphere, bad or outright mischievous rhetoric.

To resolve this conundrum, Bentham famously invented modern utilitarianism by postulating that laws should abide by the following moral precept:

Act such as to maximize happiness and reduce suffering. 

Bentham alternatively called this the utility or greatest happiness principle. It was a clever idea but has been shown to be fraught with problems. A variety of thought experiments have indicated that it leaves something to be desired: their outcomes frequently contradict our basic moral intuitions.

A possible resolution to the legality versus morality dilemma is the relativist position, which denies that any legitimate distinction should be made: what is legal is, by definition, morally acceptable. But now, suddenly, female circumcision becomes an acceptable procedure simply because it is a culturally conditioned norm. Slavery is also acceptable, and only illegal in the U.S. because the Union had more guns, more people and a higher frequency of clever engineers to invent effective ways of killing lots of people. This flies in the face of most modern mindsets. And yet, frequently, we use this form of relativism in an attempt to be culturally and internationally sophisticated.

Yes, we should be nuanced. But we should not let relativism cloud our moral being, just as we should not mistake our immediate emotions for morality and paint the world in terms of a battle between good and evil. But if we abandon ethical legalism — the relativist idea that laws determine morality and that morals are just culturally conditioned norms — and if, on the other hand, morality is not just a matter of the "heart", then what are we left with? This is the dilemma that Bentham, and later John Stuart Mill, tried to grapple with.

To be happy is certainly in most cases a good thing. But happiness is easily confused with our desires. And there are sociopaths who find happiness in the most monstrous conditions. The utilitarian hope is that by trying to maximize everyone's wellbeing, the voice of the sociopath will be drowned out. The problem is that we easily end up with an overly leisurely, unproductive society. And at worst a decadent one. So what is wrong with a leisurely, happy-go-lucky society where we philosophize all day about the meaning of cheesecake and Kim Kardashian's latest sartorial endeavors? The problem is that such a society is unsustainable.

And it is here, in sustainability, that I think we can find the basis of a good moral precept. The triumph of science and engineering led us to believe (for a while) that humanity stood at the center of creation. Then, through our great ingenuity, we discovered how to initiate nuclear reactions. Our ingenuity was not only capable of saving us from extinction by the forces of nature; it was capable of initiating our very own "unnatural" destruction. Simultaneously, we peered deeper and deeper into the Universe and discovered that we were truly insignificant in the great scheme of things. Our scientific and technological triumph was short-lived. No Better Living Through Chemistry necessarily. We realized we could unintentionally and possibly irreversibly poison the well of life that sustained us.

Survival remains our most pressing objective, not our own wellbeing by attaining some higher rung in Maslow's infamous pyramid. Our life is not about personal self-actualization. Yes, indeed, I firmly claim that it all boils down to survival. Not of ourselves, since the death of the individual is an integral part of the cycle of life as such. Our goal is and must be, by natural inclination, the survival of our evolving species. Based on this recognition, we can formulate what I have called the Basic Imperative:

Act such as to maximize the survival chance of our distant descendants.

This imperative does not disregard the necessity for our general wellbeing. Without consideration for individual wellbeing, life can become such a drudgery that it no longer seems worth living. We will feel no compulsion to meet the Basic Imperative. And eventually, as a species, we will become extinct. But, importantly, the Basic Imperative subordinates our immediate wellbeing to a greater common goal: the potential existence of distant descendants with a higher consciousness who can truly call themselves masters of their destiny. What the purpose is of making sure such beings will one day exist, I don't pretend to know. All I know is that the desire to have descendants, descendants who are as (or even more) capable than myself, is deeply ingrained in the fabric of my being. And, by consequence of the very nature of life, it's easy to conjecture that it has been part of almost every organism since the dawn of life itself.

The Basic Imperative does not and cannot dictate how we act. At best it can guide us well. The future gets inherently murkier the further out we try to see. We are not privy to some causal certainty about the future of our planet. The best we can do is make educated guesses based on instinct, reason and past evidence. What the imperative clearly indicates is that we must maintain some form of stasis, or (as it is called in international politics) stability. War is a clear sign of instability and should be avoided to the utmost extent we can, especially given that we now possess nuclear weaponry capable of sending us back to the far harsher environment of the Iron Age, if not further. We would devolve rather than evolve, undoing centuries if not millennia of human history. Assuming that we could even survive such a self-inflicted apocalyptic event.

So let us consider what conditions might establish sufficient stability for our survival. (Inter)nationalized laws are one such stabilizing measure. Just like a contract between individuals, they hold a nation accountable to its past promises. They make it possible for us all to expect certain behaviors from the other parties, a necessary condition for maintaining the cohesion of any social organism. And legal precedent — customary international law — serves the same stabilizing objective. Based on how other nations have behaved in the past, expectations (i.e. norms) have been created about their future behavior. But that should not be taken to imply that stability, seen in the narrow perspective of some given sliver of time, is paramount. Ultimately, all our actions must be measured against the most basic moral imperative of the species as a whole.

Stasis and the unwillingness to take risks — that is, the unwillingness to act even when there is insufficient information to make determinative statements about potential outcomes — can spell the death of an organism. Ultimately, the Basic Imperative should take precedence over all other considerations. And sometimes we have to risk our very existence in order to maximize the probability that there will ever be a being that can trace its ancestors back to us through the immense tracts of time that separate here from eternity. Synthetic chemicals may not always be good for the environment or lead to better living, but they are nonetheless one of the foundations for the Miracles of Science.


But how do we know when to live or not live by the prescribed laws of society? How do we know when or when not to obey the directives of our commanding officer? Is it when we are told to shoot? When our opponent is unarmed? If only there were such clear-cut prescriptions. Despite the immense difficulty in balancing stability with risk, I think there are some methodologies we can adopt to help us make well-informed guesses.

First, we need to distinctly recognize the difference between when something is morally versus legally right/wrong. Determining the legality is the easier step. Despite the complexities of law, we can by research and historical study determine with sufficient certainty if there's an explicitly stated law or if an act is legally justified by overwhelming precedent. Having determined with sufficient confidence that an act is legal, we can assume that there are deeper reasons for having turned what was originally a subjective norm into an explicit law or, by repeated action, an implicit (customary) law. The probability that it is immoral thereby decreases significantly. We cannot, however, equate this with the law being morally justifiable. And we must, when looking at precedent (and even explicit treaties), be careful not to compare apples to oranges. An incursion into sovereign territory is not, full stop, an invasion for distinctly selfish reasons. And just as in judging homicide, intent matters.

I'm not claiming that it's easy to determine intent. This is why we have the concept of a jury of peers or, alternatively, a panel of legal experts, whom we then subject to extensive legal procedures so that they can determine (1) whether the act was committed by said entity and (2) what their intent was in so doing. In international affairs, most of the time it's pretty easy to determine by whom an act was committed. Large troop movements can't be hidden from the world just by stripping one's military of their official insignia. There may be official denial, but none but the most willingly blind are usually in any doubt about the culprits. Intent, however, is often no easier to determine than when only a few individuals are involved.

Yet it is surprising how often the intent is stated and clearly contradicts the Basic Imperative. For example, claiming that an act is justified because you are defending, say, ethnic (or more euphemistically "linguistic") Russians is clearly immoral because it presumes an inaccurate biological or strict cultural division in the world. It presumes, if we consider the Basic Imperative as valid,  that the only legitimate distant descendants would be those who originated from Mother Russia. And, thereby, it denies the clearly and scientifically provable statement that genetic (and even cultural) diversity is beneficial to the survival of our species.

This is not to deny that intent is sometimes cloaked in dubious statements about defending human rights when, in truth, there are clear motives of national self-interest. But by basing one's justifications on human rights, it becomes far more difficult to coherently persist on a purely self-interested course. We are formed by our own words because, as we are held to account, we seek to prove the truth of our claims until we either admit to lying or succumb to our own lies. Just the claim of being humane renders us more likely to be humane in the future (even if we were initially inhumane).

Morality does not trump the immediate necessity of considering the legality of an act. Legality is based on the complex moral interactions of many. Personal moral determinations, given the uncertainties about the consequences of any action on the future, are at best tentative and at worst dubious. But the law should not be confused with ethics. Properly formed ethics provide a universal and basic framework for making intelligent judgments about what is good or bad for the survival of our species and, in a wider context, our biosphere as a whole.

Good law is a reification of ethics, a process of making something theoretical into a practical solution. We should not willfully disregard laws based on our momentary moral hunches. But when our laws clearly fall short of the higher ideals as set by our moral framework, we should vigorously question and seek to change them. And never should we equate the two.

Sunday, March 17, 2013

Reflections on the State of Nature


Bellum omnium contra omnes. War of all against all. Is this the original state of nature?

In some abstract sense, it might be. But to assume it's the state of nature of our human species (even theoretically) prior to the institution of a strong government is completely absurd. The organizing principle of government exists in us before the word "government" can be consciously understood, let alone spoken. We are, as Aristotle originally presumed, social beings by nature.

It might be true that there are monsters amongst us who seek unfettered power, ready to impose a Leviathan. But to attain such power requires the willing submission of one to another. It presumes an inherent willingness even for the most power-hungry to surrender some of their liberties and enter into an initial and untested relationship of seemingly irrational trust. They must become something else through the union of only potentially mutual benefits.

We can only gain power over our environment by organizing ourselves into primal tribes, groups of willing participants in a common mission set not arbitrarily by a sovereign but by the promises we trust will be mutually fulfilled. If it's true, as Hobbes claims, that, all things being equal, (wo)men are roughly of equal strength both physically and intellectually, then the only means by which to move forward is by joining into an arbitrarily trusting band of brothers and sisters.

Therefore, a social disposition, a willingness to "foolishly" trust despite the risks, must be assumed even without any fancy 21st-century psychological and medical examinations. (Wo)man is by nature the fertile egg for a society of willing individuals submitting to a common good – their continued existence – despite the inherent risks of submitting to the arbitrariness of someone else.

To say we are all driven by the fear of death is merely a negation of our positive and common striving towards the preservation of ourselves, our children and our extended family. Therefore, as the evolved conscious beings we are, we must be guided by the Basic Imperative.

The complexities of government evolve from this Basic Imperative. We are driven by a deep love of life and not the fear of its absence. It's not violent death we fear most, but the inevitable natural decay that comes from within. Only by continuous action and by fusing our nature with others can we counteract our internal tendency towards a natural death. We surrender to a higher good because left alone we die not violently but prematurely.

Bellum omnium contra omnes exists, if at all, only in the original nuclear soup. But even there, it's the very possibility of proton fusion that is at the origin of all forward motion. In that stellar union of proton with proton lies the possibility of our own earthly evolution.

Friday, May 18, 2012

Stupid Enough to be Responsible?

A certain Stephen Lawrence wrote as follows in addressing comments I made in the context of the Harris Wars:
What we are interested in [when considering morality] is the real meaning of could have done otherwise [...] you’ve said why, it’s to do with what we mean by ability, have the power to, capable of, could. And it’s to do with [...] evaluating options and act on the bases of the evaluation. No need for unnecessary complication at all. Real randomness [...] can’t possibly make us deserving of blame, reward, shame, punishment and so on. And we can’t be deserving without it. Responsibility must be compatible with determinism or else it is a lie.
[...] what would you rather? Your decisions to depend upon the reasons that you have the desire set you do as well as the desire set? Or just the desire set, there due to indeterminism? [...] when you bring indeterminism to your computers or rather pseudo randomness, you place it very carefully somewhere, or else the thing would be utterly useless. [...] we have another perfectly good answer to why it’s a struggle to get computers to behave like us. Because we are much more complex.
Stephen Lawrence, 18/05/2012 (Jerry Coyne on free will, Talking Philosophy)
To begin with, I find the question of what I would rather causality be like strange. Should I not prefer neither of the presented options but simply that causality be such that effects always benefit me? It's completely irrelevant what I would prefer. I also find it bizarre to appeal to simplicity, as if we need only consider whatever is convenient for maintaining moral simplicity. Occam's razor does not state that the simpler theory is always true. The principle is a guide to where we should look for answers. But if anecdotal evidence points to a more complex picture, that's where we must head.

Stephen's comment about human complexity being what distinguishes us from computers is unhelpful. We know that we are more complex than the motherboards and CPUs that make up an electronic device. But what is it that renders us more complex? Is it just that there is more logic and opportunity for "mechanical" bugs in humans? Or do we differ in some more fundamental way? I'm suggesting it's the latter.

It is indeed true, as Stephen suggests, that I carefully use pseudo-randomness in my code. Using a pseudo-random number generator (PRNG) in software is associated with a known and morally interesting conundrum. If I deliberately introduce pseudo-randomness to make an airplane work well in most circumstances, is it then acceptable that in some rare circumstance it causes crashes and kills people? If, on investigation, it turns out the crash could have been avoided had the PRNG output anything lower than 0.7854 on a scale of 0 to 1, how will we react? Will it do to say that the PRNG statistically saved thousands of lives before that?
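To make the conundrum concrete, here is a minimal sketch of that kind of deliberately randomized decision. The function names and the 0.7854 threshold are taken only from the hypothetical above, not from any real avionics code:

```python
import random

# A minimal sketch of the thought experiment above. The names and the 0.7854
# threshold are purely illustrative; no real control system works this way.
# A controller deliberately randomizes between two plausible strategies;
# in one rare scenario only one of them turns out, in hindsight, to be safe.

def choose_maneuver(rng: random.Random) -> str:
    """Pick a control strategy using deliberately introduced randomness."""
    draw = rng.random()          # uniform value in [0.0, 1.0)
    if draw < 0.7854:            # hypothetical threshold from the example
        return "strategy_a"      # the choice that would have avoided the crash
    return "strategy_b"          # the choice that, in this rare case, fails

if __name__ == "__main__":
    rng = random.Random()        # seeded by the OS: outcomes vary run to run
    print(choose_maneuver(rng))
```

The point of the sketch is only that the outcome genuinely depends on the draw: nothing in the code "decides" to crash, yet in hindsight a different number would have produced a different result.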



We do not seem to want computers to be like us, free and capable of error. But is it possible that our intelligence is fundamentally related to our capacity to err? I'm going to assume moral competence is associated with intelligence. After all, we don't put pigs on trial! And not even crows, orcas or chimpanzees, even though they are some of the smarter non-humans. [1] If we assume this integral relationship between intelligence and moral competence, then it would seem obvious that ethicists should ask what intelligence is. Many assume, and keep insisting, that intelligence is equatable with rationality. What I'm suggesting is that it's not. At least not entirely. That said, if I understood the exact "mechanics" of intelligence, I'd be less a researching developer and more a birth nanny of non-biological babies. Why do we really quibble about free will in the context of morality? What we're really quibbling about is how it's possible for humans to make good decisions.

This has been a cornerstone of all modern law so far: are you competent enough to stand trial? So competence is what ethicists must study, not really the "freedom" part of free will. However, it's not unreasonable to claim that our "freedom" is what allows us to be intelligent, sensible beings and hence morally competent. Assume for a moment that some amount of pseudo-randomness makes for better software. What a PRNG does is allow a computer to be more free. The variables that rely on the PRNG are not fixed. It could be that the permissible ranges of the values fluctuate depending on how well the software that uses them performs. When the software starts off, the ranges are wide. As the software matures, the values get more and more constrained [2].
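A toy sketch of that narrowing-range idea, in the spirit of the evolutionary algorithms mentioned in note [2]. The fitness function and the narrowing factor are invented purely for illustration, not taken from any particular system:

```python
import random

# Toy sketch: a variable is drawn from a range that starts wide ("young"
# software, lots of freedom) and is gradually narrowed around whatever has
# worked best so far ("mature" software, more constrained values).

def fitness(x: float) -> float:
    """Hypothetical measure of how well the software performs with value x."""
    return -(x - 0.3) ** 2       # best performance near x = 0.3

def evolve(generations: int = 50) -> float:
    low, high = 0.0, 1.0                      # initially a wide range
    best_x, best_score = 0.5, float("-inf")
    for _ in range(generations):
        x = random.uniform(low, high)         # freedom to try something new
        score = fitness(x)
        if score > best_score:                # it worked better: remember it
            best_x, best_score = x, score
        # Maturing: shrink the permissible range around the best value so far.
        width = (high - low) * 0.9
        low = max(0.0, best_x - width / 2)
        high = min(1.0, best_x + width / 2)
    return best_x

if __name__ == "__main__":
    print(f"settled on roughly {evolve():.3f}")
```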

The baby is growing up, the baby is becoming a (wo)man. Isn't it peculiar how long humans stay helpless and cuddly? And how knuckle-headed teenagers can be in their experimentation? Maybe the insanity of freedom and capacity for error has something to do with our great intelligence, sensibility and moral competence. Essentially, we couldn't be intelligent and morally competent if we couldn't on occasion be profoundly stupid.


1.^ At least not any more: Wikipedia: Animals on trial.
2.^ We call such code a type of evolutionary algorithm.