Sunday, August 12, 2012

Detritus

This is a really shitty essay on ethics and evolution, and I felt compelled to respond. Then I read it again and found the prospect of responding too dull. Then my partner got a phone call and I needed something to do for a few minutes.

At its heart, it seems--along with the usual anti-materialist concerns about "how dare you use our idiotic prejudices about bodies and physicality against us"--lies a complete failure to distinguish between descriptive and normative ethics. That some scientists study how moral decisions are made seems, to the author, to lead inexorably to the conclusion that he must be an insect or a computer or something. Because after all, if ethics really did involve conscious decision-making at any level, surely it would be impossible to study how various animal species behave!

So, I'm going to skip largely over what the author is saying, because what the author is saying is stupid, stupid bullshit. But it's worth spending some time on what the author is implying: that the very idea of descriptive ethics is not only pointless, but actually offensive to the legitimate field of normative ethics. I'm not sure what the antipathy is, exactly, although it probably doesn't help that advances in sociobiology and evolutionary psychology have led to white-coated scientists empirically verifying, to wide cultural acclaim, things that Enlightenment philosophers pointed out three hundred years ago. We like scientists. Scientists make things. We are not, culturally, as enamored of fancypants professors. Nobody is more pissed off about this than I am; fancypants professor would be a good career path for me, whereas my science education is woefully inadequate, and my technical skill has so far gotten me as far as writing some papers and coding a text-based game about Kant's murderer-at-the-door scenario in C++.
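(If you're wondering what a text-based game about the murderer at the door even looks like: picture something roughly along these lines. This is not the actual game, just a hypothetical sketch of the genre, with the prompts invented for the occasion.)

    // Hypothetical sketch of a murderer-at-the-door text game; illustrative only.
    #include <iostream>
    #include <string>

    int main() {
        std::cout << "A murderer knocks and asks whether your friend is hiding inside.\n"
                     "(She is.) Do you (l)ie or (t)ell the truth? ";
        std::string choice;
        std::getline(std::cin, choice);

        if (!choice.empty() && (choice[0] == 'l' || choice[0] == 'L')) {
            std::cout << "You lie. Kant frowns: a maxim of lying when convenient\n"
                         "cannot be universalized, and the consequences of the\n"
                         "deception are now on your ledger.\n";
        } else {
            std::cout << "You tell the truth. Whatever happens next is the\n"
                         "murderer's doing, not yours. Or so the argument goes.\n";
        }
        return 0;
    }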

The decisions made by ants are, in some ways, similar to decisions made by humans. They are also, in other ways, very different. Ours are primarily interpreted and executed via written/spoken language, an evolutionary technology so bizarre that only primates could come up with it. Ours are also orders of magnitude more complex, as our species has nested our fundamental concerns behind so many layers of interpersonal bureaucracy that we often lose sight of them entirely. But if ethics is to be merely a study of what we ought to do, it's worth pointing out that nearly every ethical philosophy already agrees on what any given person ought to do on a day-to-day basis, and argument tends to arise over issues that are either extraordinarily complex or hilariously rare. Still a worthwhile use of one's time, but there's beauty, and useful data, in looking at it from the other end once in a while.

We can shake our fists at the blind, pitiless universe and bellow "I am human!" if we like, until Sheldon Cooper asks us why we're yelling tautologies at the sky. Of course we're human. This is not something in dispute. But we are also primates, and every part of us has some similarity to chimpanzees and bonobos, a little less to gorillas and orangutans, and so on down the line. We didn't pop into the universe from nothing. What we did was develop a technology that radically accelerated our differentiation from the non-hominids. We walked into this movie in the middle, to paraphrase Stephen King. So we have a lot of work to do to get up to speed.

And it turns out there's a lot to learn from ants, and primates, and computers, because every metaphor we can develop for how humans function gives us new data to work with. And while "cooperative animal behavior" might not precisely equal "human virtue," it is worth noting that humans are animals, and all of our virtues (as well as many of our vices) involve cooperating with someone. More to the point, the cooperative animal behavior of ants isn't human virtue in much the same way that a cell isn't a person. They're different things. Still, get a few tens of trillions of cells together and weird things happen. Things you wouldn't have predicted. One of the things that can happen is a person, with awareness of moral law: an awareness just as certain as the fear of pain.

Big things are made of small things, to quote Gaius Secondus, and if you want to understand the big things, it helps to look at the small things. Free will is only a useful concept if we assume there are a) decisions to be made, and b) criteria for choosing one thing over another. While the gene theory of evolution, or theories of kin selection or group selection in general, might not be descriptive (human) ethics per se, they do suggest some fine candidates for where b) comes from, and why it matters.
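Kin selection, for instance, comes with a criterion tidy enough to fit on one line: Hamilton's rule, which says that helping pays off, in gene-propagation terms, when relatedness times the benefit to the recipient exceeds the cost to the helper. Here's a toy sketch in C++ (since that's apparently the language I write games in), with all of the numbers invented purely for illustration:

    // Toy illustration of Hamilton's rule: altruism is favored when r * B > C,
    // where r is relatedness, B is the benefit to the recipient, and C is the
    // cost to the helper. The example values below are made up.
    #include <iostream>

    bool helping_is_favored(double relatedness, double benefit, double cost) {
        return relatedness * benefit > cost;
    }

    int main() {
        std::cout << std::boolalpha
                  << helping_is_favored(0.5, 3.0, 1.0) << "\n"     // full sibling: true
                  << helping_is_favored(0.125, 3.0, 1.0) << "\n";  // first cousin: false
        return 0;
    }

Not normative ethics, obviously, and not a model of anything so grand as a human decision. But it is one concrete candidate for where a criterion for choosing one thing over another might have come from.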

In Alien, the malevolent AI--who may or may not have any sense of "ought" in his synthetic brain--expresses admiration precisely for the titular xenomorph's lack of said "ought": "I admire its purity. A survivor, unclouded by conscience, remorse, or delusions of morality." And perhaps he's right. Primarily what the alien does to the crew of the Nostromo is kill and eat. Eating, being part of that whole "urge to not die" thing, might be considered to be somewhere on our moral radar, but it might not. And besides, if we consider Alien to be a closed universe, unencumbered by the stories developed in sequels, the alien might not need to eat. It might be outside our rules of thermodynamics, or it might feed on starlight. Who the hell knows.

I bring it up because, if we do include the sequels, we see aliens working in groups to ensure the survival of their group. In particular, we see them making extraordinary sacrifices to ensure the protection of the queen and the survival of her eggs. What we see, in Aliens, and again in Alien Resurrection, is family. They likely don't "know" that's what they are, and they have no way to justify their actions as morally significant. I would question whether this is an entirely black-and-white distinction between cooperation and ethics. Animals don't have to "know" that fucking will prolong their species, but this ignorance doesn't make it any less effective. Perhaps a better question would be: can actions that reliably produce what we would determine to be moral outcomes be definitively said not to be moral actions?

2 comments:

M. Rupright said...

This article really requires a good fisking, but like you, the more I read it, the less it seems worth the bother.

I'll just pick at one annoying quote: "This is where ethical discourse comes in — not in explaining how we’re 'built,' but in deliberating on our own future acts. Should I cheat on this test? Should I give this stranger a ride? Knowing how my selfish and altruistic feelings evolved doesn’t help me decide at all."

No ethical system more sophisticated than "god said it's bad" is immediately helpful in decision making. Understanding altruism, selfishness, effects on society, humanism, etc. does not provide quick answers. It allows us to construct a set of quick answers.

Beeznuts said...

That's almost sentence-for-sentence what John Stuart Mill wrote in response to critics who complained that utilitarianism can't provide quick, on-the-spot answers. You have to do that shit in advance.