5/21/2008

"Arguments" Against Enhancement

Last week I had the intellectual opportunity of a lifetime: a special invitation to an intensive 4-day faculty seminar on my main topic of interest, human enhancement technologies (HET). Without exception (excluding perhaps myself), everyone in the seminar had a fascinating and sophisticated take on the issues at stake. With their acute intellects and their intimidatingly large knowledge bases, the participants in the seminar taught me a ton about the issues surrounding enhancement.

Nevertheless (and not surprising to any student of human nature), my strong opinions have not really changed. I'm a little more skeptical both of what will be possible in the near future and of how desirable the transformations that enhancement promises will be in a larger social context. In truth, I think the only compelling arguments against it would be ones that emphasize negative social consequences.

As an illustration of the uncompelling fare usually offered, I will consider four claims, helpfully crystallized by our discussions, which are often invoked as reasons to oppose HETs. I consider these to be non-arguments (hence my scare quotes above), which play upon strong emotions and vague moral intuitions for rhetorical efficacy. They have to do with God, nature, hubris, and dignity. As it turns out, these are often cited not just with respect to HETs, but for just about anything, technological or otherwise, which could lead to massive social change. In short, these are four conservative "arguments" seeking to maintain the status quo (or, in more reactionary forms, hoping to reinstate a nostalgic golden age that never was).

Playing God

The first is the easiest to debunk but, annoyingly, perhaps the one most commonly cited by lay people. In general, the claim is that {genetic screening, therapeutic cloning, creating human/non-human hybrids, stem cell research, etc.} is contrary to God's will, or something which only he is allowed to tamper with.

As with any argument purporting to know the will of God, the burden of proof lies with the one making the claim. Since there is no way to prove either that (their) God exists or that such-and-such is what that God wants, this kind of claim has to rely on faith. In a secular liberal democracy, that is by itself insufficient reason for opposing a policy.

(To the extent that "playing God" is meant to be a metaphor for human beings taking on a power of which they are not worthy, it turns out to be another version of the hubris argument which I consider below.)

It's Unnatural!

This is similar to the God argument in that it tries to bring in some super-human authority, in this case "Nature", to justify an individual's personal prejudices. Whenever something being "unnatural" is cited as a reason for opposition, rest assured that some mistake is being committed. I've contemplated teaching a class just on this subject ("What is nature/natural/unnatural?")--and I think I could find enough material to do so, given the huge amount of confusion that exists--but I will keep my response here as brief as possible.

The "naturalistic fallacy" is a name often given to the kind of mistake being made here, although admittedly the specific designation of that term (and whether it is even fallacious) is disputed by philosophers. (Yeah, I know, big surprise!) As the term is usually used, it refers to the unwarranted leap from "is" statements (descriptive claims) to "ought" statements (normative claims). In general, regardless of what you call it, the mistake typically involves confusing the descriptive with the normative in some way or another.

This confusion is bad enough, but an even greater one appears when we consider the wide range of meanings given to the term "natural". If people used the term "natural" the way that, say, physicists use it, applying it to everything that actually exists, then everything we find in the world would be natural. This is the sense Spinoza has in mind when he talks about nature, importantly noting that good and bad, i.e., normative concepts, are not given by nature. If this were the only thing people meant, it would be simple to see how to debunk the naturalistic fallacy.

However, this is not the way in which the term is usually used. The "natural law" tradition (often associated with Catholicism) offers a very different kind of moral interpretation. To the extent that this is a religious theory, it amounts to another case of the "playing God" non-argument. In non-theological applications, the question that arises is this: can we furnish definitions of "natural" and "unnatural" that carry moral connotations and can be applied consistently, without begging the question (i.e., without simply equating "natural" and "moral" by stipulation)?

A number of articles have been written on this issue, although the ones I've seen have taken up the term as it applies to homosexuality (homosexual behavior is wrong, the "argument" goes, because it is unnatural). The authors, defending the permissibility of homosexuality, go through as many different senses of "natural" as they can, showing that no definition can be consistently applied.

For example: if by "unnatural" one means "artificial", then anything made by humans (i.e., most of civilization) would also be immoral; if "unnatural" means "something that non-human animals do not do", the fact of homosexual behavior among penguins, bonobos, and numerous other animals would make homosexuality moral but, again, much of human society immoral; if one defines "unnatural" as that which provokes disgust, then this would not only leave it open to individual differences but would tend to make things like cleaning toilets immoral; if it means "unusual" or "uncommon" (i.e., if "natural" is taken in the sense of "normal"), then any human idiosyncrasy becomes unethical, while many common vices become perfectly acceptable; and so on. While not entirely parallel, a similar analysis can be applied to HETs.

In short, it is insufficient to oppose research into and use of HETs simply because they are "unnatural". Other reasons must be given, and they must be able to be applied consistently to other areas (otherwise, we're merely creating masks for our prejudices). Remember: just because something is natural does not mean it is good.

Hubris

This term comes to us from Greek tragedy, where it usually refers to the tragic flaw that a dramatic protagonist suffers from, which leads to his or her downfall. "Overweening pride" is a common definition of hubris. HETs are not only called "hubristic" but are sometimes said to express an obsessive human drive for "control" or "domination" or "mastery over nature". Generally speaking, under this heading I include any claim that amounts to: we are messing with powerful forces beyond our understanding that we foolishly think we can control.

(As an aside, one can find an interesting conflict in the Christian tradition on this point. Human beings are given by God dominion over the natural world, over all animals, vegetables, minerals, etc. However, pride is a great sin, Lucifer's sin and arguably a component of Adam and Eve's original sin. So, does our mastery over nature entitle us to manipulate it, say, at a genetic level? The Bible cannot answer this question conclusively, not only because it contains conflicting, even contradictory, passages, but also because it was written long before people even knew what "genes" were.)

Following Spinoza (and Aristotle and Nietzsche, among others), I do not see pride as a vice, nor humility as a virtue. It is possible to be overly proud, but what is ideal is having pride commensurate to one's capacities. It is good to know both our capabilities and our limitations. On the opposite end of the spectrum from hubris is excessive humility, which can prevent us from taking actions we should otherwise take. Believing oneself incapable of something is often sufficient to render oneself actually incapable. The point is: trying to control nature is not by itself a bad thing--in fact, we do it all the time--and there are times when we know well enough to do so, particularly if some greater good prompts us to act under uncertainty (and, in truth, we always act with some degree of uncertainty; none of us predict the future all that well). Don't forget: Hamlet also had his tragic flaw.

That said, the hubris "argument", unlike the previous two, actually offers an important lesson: we should not overhype our abilities, nor presume that our technologies can satisfy all of our desires. Granted, and that is why I would argue that human enhancement should only be developed with an eye to safety concerns and its potential social impact. We should use science and other cognitive tools to understand as well as we can, and the fact that there will always remain unknowns does not mean we should not act on our best information. Calling something "hubris" is effectively throwing in the towel, preempting inquiry before it even gets started.

I'm inclined to think that most people who accuse scientists of hubris simply are not aware of the vast extent of what we now know and what we are capable of doing with this knowledge. It's easy to throw up one's hands, especially if one occupies a privileged socioeconomic position, and argue that we should just "play it safe". However, refusal to change might just mean our downfall. Assuming everything will turn out OK if we leave well enough alone is simply not justified, especially if we recognize that evolutionary solutions to problems of survival are the products of blind trial and error. Just because something has worked or is working does not mean it will continue to work in the future. (In Enhancing Evolution, John Harris makes an excellent argument to this effect against what's often called the "precautionary principle".)

An Affront to Dignity

Dignity. There are few words in the English language that are so vague. Steven Pinker, in an excellent piece for The New Republic, makes a case for "The Stupidity of Dignity". His primary argument is that this nebulous quality is ethically unnecessary and, at the very least, should take a backseat to more rigorous notions such as autonomy and respect for persons. If we don't even know what dignity is (is it something that everyone possesses equally, or can it be increased or decreased by what one does or by what one is supposed to endure?), we should not use it as a reason to oppose a policy which has the potential to do tremendous good.

Frankly, even if I ignore issues of vagueness, I simply don't see how HETs threaten dignity (just as I can't see how gay marriage is detrimental to the traditional institution of marriage). Adversity will never be eliminated from the human condition, nor death, nor suffering. We will always have limits. The quest for enhancement is not a quest for perfection, but for making life better. It may lead to new and unusual forms of life, but that does not affect the ethical worth of traditional forms.

As Pinker points out, dignity is not the sole human good, and ascriptions of dignity are extremely subjective. If we believe in a liberal democratic system in which individuals are as free as possible to determine their own conceptions of the good--to take a page from Rawls--then forcing everyone to conform to a moral standard of dignity is highly questionable.

***

Now, all this is not to say that there are no good reasons for opposing human enhancement. There may well be, perhaps even ones that could convince me. Nevertheless, using loaded terms like God, nature, hubris, and dignity adds little to the debate. (Conversely, merely invoking "freedom of choice" or a "right to bodily self-determination" is insufficient for justifying the use of HETs. There may be conflicts with other rights or freedoms that could take precedence.)

Bonus: Equality

A fifth claim against HETs is that they will heighten inequality (and hence, lead to conflicts and undermine democratic institutions). I think there is something to this argument, but that it too fails. I'll briefly say why.

Equality is an ideal. By nature, human beings differ in many qualities, and there's no reason to think that everything balances out in the end. Social programs and environmental interventions can redress some differences, but they cannot change biological potentials. Whether we wish to admit it or not, there is a genetic lottery. With respect to our initial genetic makeup, some people get all the breaks, others several, most of us probably a couple, and some virtually none.

To use a personal example: I'm very lucky to have been born with naturally high intellectual abilities. I'm smart, through no doing of my own. (I've seen enough cases of people who work harder than I do but who still cannot best me in many intellectual endeavors.) I'm grateful for that, but when I look around me I see mostly other smart people (because of where I live and work), and some of those people also have other talents which I lack. They are more creative, or more socially adept, or in better shape, or taller, or upbeat by disposition.

I cannot stress enough the importance of this last one, disposition or temperament. Whether you have a sunny or a cloudy disposition colors the entirety of your experience. Some people lacking other talents can persevere simply because they are naturally resilient. This is grossly unfair.

Why do we ignore these natural inequalities when people complain about the "unfairness" of using steroids in professional sports or popping Ritalin before a test? People are no more responsible for their "natural gifts" than they are for whatever artificial means they use to boost themselves. But even if we consider something like how hard somebody works--because we think people who work hard deserve good things--we should ask, how did they acquire this predilection? Even if a work ethic is largely socially determined, people do not choose the environments in which they are raised. Should I be blamed for not being raised to appreciate the value of hard work?

In truth, if equality is an ideal that we maintain, then the most egalitarian course of action is wide-scale funding for HETs, and subsidies to ensure that every individual can have the enhancements he or she wants. Trying to ban HETs will create a black market, raising costs and decreasing safety while simultaneously assuring that only a small elite will have access to them. (Even worse, this elite will be by definition law-breakers, meaning that they may find it easier to break other laws in the future.)

We should admit to ourselves that the main reason many of us oppose HETs is because we find them weird and unsettling; they are something radically new, something we aren't used to, and therefore something that frightens many of us. Let's stop dressing up these fears by invoking values which we apply selectively to rule out the things we just happen not to like.

If you don't want to enhance yourself, no one is forcing you to. But I do. So lay off!

5/02/2008

On the Evolution of Intelligent Life

Nick Bostrom, philosopher and transhumanist from Oxford, has penned an excellent analysis of why it might in fact be better if life were not found on Mars. His reasoning is not entirely without problems--he's (still) only human after all--but I think he covers most of the important bases.

Thinking about the distant future is extremely difficult, but worthwhile if we have any concern for the future of our species or our civilization (not necessarily identical). Even if technology is not advancing at an exponential rate as some techno-optimists proclaim, the pace of discovery and invention seems to be accelerating and will likely create for us problems we've never before faced. It would be nice to have some foresight, perhaps to take preventative measures against major threats.

A lot of future forecasting is a probability game, and one in which we have no idea what the real figures might be. Even if we could determine them, would it do us any good? A 1-in-a-million chance may seem highly unlikely, but given the vastness of the universe, such happenings are rather frequent. We can scarcely fathom the difference between 1-in-a-million and 1-in-a-trillion, yet the probabilities we're dealing with are probably smaller still by many orders of magnitude. However--and this is key--if the universe is truly infinite, anything that is possible will exist somewhere or other. We have no way of determining whether our case is unexceptional or nearly impossible.
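To make those orders of magnitude concrete, here is a minimal back-of-the-envelope sketch in Python. The figure of roughly 1e22 stars in the observable universe is an outside assumption used purely for illustration (it comes from neither Bostrom's article nor this post), and each star is treated, crudely, as one independent chance for the event in question.

# Expected number of occurrences of a rare event across many independent chances.
# Assumption (not from the post): ~1e22 stars in the observable universe,
# each treated as one independent "trial".
N_TRIALS = 1e22

for p in (1e-6, 1e-12, 1e-22, 1e-30):
    expected = p * N_TRIALS  # expected occurrences = probability x number of trials
    print(f"p = {p:.0e}: expected occurrences ~ {expected:.0e}")

# Output:
# p = 1e-06: expected occurrences ~ 1e+16  (a "rare" event, happening constantly)
# p = 1e-12: expected occurrences ~ 1e+10
# p = 1e-22: expected occurrences ~ 1e+00  (roughly once, somewhere)
# p = 1e-30: expected occurrences ~ 1e-08  (almost certainly nowhere)

The point of the toy numbers: whether a 1-in-a-million or 1-in-a-trillion event counts as "frequent" depends entirely on how many chances the universe gets, and we don't know that figure either.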

With those caveats in mind, Bostrom still offers some helpful insights on the basis of his notion of a Great Filter. Think of it as a kind of "natural selection" for advanced civilizations. Since the universe has existed for much longer than we or our evolutionary predecessors have, it is quite likely that if the genesis of intelligent life were not so improbable, we would find instances of it. But SETI has yielded nothing. (See the article for further elaboration; I'm skipping some of the finer points.) This means that the emergence of intelligent organisms and cultures is unlikely--but for what reasons? It may be that the hard part is getting self-replication going. Once that happens, intelligent and eventually space-faring life might be virtually inevitable (so long as the planet on which it arises is not destroyed or otherwise rendered uninhabitable in the meanwhile).

Still, even if the jump from non-life to life is tremendous, the jump from intelligent life to space colonization could itself be as tremendous, or more so. (To a certain extent, this seems to me unlikely: we probably now have the means to start colonies on the moon and Mars, but lack sufficient motivation for doing so. At the very least, it's something that we could accomplish within a couple of decades. At this point, no technology we have created makes expansion into space impossible.)

The upshot of the discussion is this: is it possible for any sufficiently advanced civilization to escape destruction at its own hands? Or is the relationship between intelligent life and technology like that of the necromancer devoured by summoned demons he could not control? Can a civilization colonize other parts of space, allowing it to persist even if its home planet is destroyed? And how likely is this kind of existential catastrophe anyway? Since most of us will not be colonists, it would be nice to know about these potential global cataclysms. (The greatest threats now being developed make nuclear weapons look like children's toys. Self-replicating technologies in biotechnology, nanotechnology, and AI are much more potent threats, because self-replicating things multiply at an exponential rate and could quickly overwhelm us. See Bill Joy's article from the April 2000 issue of Wired.)
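To see why "exponential" is doing so much work in that sentence, here is a toy illustration in Python. The one-hour doubling time is an arbitrary assumption chosen for readability, not a figure from Bostrom or Joy; the only point is how quickly unchecked doubling outruns intuition.

# Toy model of unchecked self-replication: one replicator copies itself every hour,
# and every copy does the same. The doubling time is an arbitrary assumption.
population = 1

for hour in range(1, 81):
    population *= 2  # each existing replicator produces one copy per hour
    if hour in (10, 24, 48, 80):
        print(f"after {hour:2d} hours: {population:.2e} replicators")

# Output:
# after 10 hours: 1.02e+03 replicators
# after 24 hours: 1.68e+07 replicators
# after 48 hours: 2.81e+14 replicators
# after 80 hours: 1.21e+24 replicators

Real replicators would hit resource limits long before reaching such numbers, but the sketch shows why a self-replicating error does not stay small for long.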

It may well be that we face such existential threats in our lifetime. One error in the laboratory could eradicate us before we even knew what was happening. Even if this is extremely unlikely, it only has to happen once to kill us all. We would be foolish not to try to predict and prevent such possibilities as far as is possible. Thus, if we find no life anywhere else in the observed universe, it may just mean that we've already overcome the hard, astronomically improbable parts, and it may be smooth sailing from here on out. Then again...