Sunday, March 27, 2011

Evolutionism, a New Moral Framework

Ethics has been plagued by relativism and by the difficulty of establishing a set of universal axiomatic rules founded in reason and confirmed by our sensations of right and wrong. I propose that we can solve the problem by extrapolating a moral framework from a simple statement that most of us can agree is a near necessity for those who might follow us. I call this statement the Basic Imperative:

Act so as to maximize the survival chance of our distant descendants.


I am convinced that the Basic Imperative unfolds like a beautiful fractal, giving rise to a set of guidelines that will help us behave in the right way.

Further reading:
The Basics of Evolutionism
Distant Doomsday Test
Distant Descendants, Dystopia or Utopia?
Multiple Species?

2 comments:

Ira Straus said...

Not bad, Dreas. A lot better than the people who simply want to maximize the total human population in history. Your formula recognizes the value of evolution, potential future learning, generation of higher values...

Still, it could be criticized for being species-centric. What if other species are better than us? Or if a God or the Universe imposes concrete imminent moral imperatives on us, not just an imperative of maximizing our survival-time so as to learn more? Experience teaches us to be leery of a single goal from which all other morals are made to flow deductively.

What if we're a nightmare species, one destined to destroy our world and all its species with it, better off with ourselves destroyed? What if (really we should say, "when") our species divides into multiple part-artificial species, which one should have priority? What if...

Despite these reservations on my part, I see a lot of value in your formulation.

Ira

Andreas B. Olsson said...

Ira,
Mike LaBossiere, a philosophy professor in Florida who blogs on Talking Philosophy, said something very similar about the Basic Imperative:


Interesting. This got me thinking and, having been catching up on my Dr. Who, I found it easy to imagine my distant descendants being such a scourge on the universe that if I were to pop ahead to the future, I would regard their extermination as a moral good. That does seem to be a consistent position, at least on the face of it. After all, if I can regard some of my fellows as wicked enough to exterminate, then I surely can imagine that the entire race has achieved just such a status.


This was my response:


That is an extremely interesting thought experiment. But of course it is plagued by the same conundrums as all other such thought experiments. If we evolve time travel, then determine that we don’t like what we see in the future, and decide that we had better annihilate ourselves in some self-imposed cataclysm, then what? Will life evolve again, including something like us and the time machine, repeating the same cataclysmic annihilation? Would we be stuck in a loop? Or would one “future” eventually jump past the hiccup and thereby finally attain what all the previous us-like entities had deemed morally unworthy to be their distant descendants?


Note that though the Basic Imperative is directed at thinking about our distant descendants, it includes what we might evolve into: life forms that might be so different from Homo sapiens sapiens as to warrant a new designation. It even includes so-called artificially intelligent entities that might spawn from our engineering efforts.