Depression and Human Enhancement (HE)

Here's something really neat but possibly scary to the uninitiated. The article, "Happiness is a Warm Electrode" in Popular Science details a new treatment for depression that involves, as you might expect, implanting electrodes in certain parts of the brain.

Called "Deep Brain Stimulation" (DBS), it does exactly that, and has been effective for about two-thirds of the couple dozen patients who have undergone it; apparently a larger trial involving 100 patients is in the works. Most notably, the patients undergoing the surgery are ones who have not responded well to drugs or even ECT (electro-convulsive therapy). (Incidentally, DBS was first used on patients with Parkinson's disease, and has proven to be an effective treatment there.)

Of course, the usual suspects are comparing this to the lobotomy and other such nonsense. I think such an assessment evinces a crass misunderstanding of what depression is and what treatment of it entails. An effective therapeutic intervention for depression should produce what Spinoza calls "joy" (laetitia), that is, not merely happiness but an increase in an individual's power to act.

And in the case study examined in the article, DBS does precisely that. A woman who could not even muster the energy to get off of her sofa and clean her house has her life turned around by the procedure. She finds herself able to communicate more effectively with friends, family, and even strangers and is able to get out and exercise more. The transformation is rather extraordinary.

When I think of human enhancement, this is precisely the sort of thing I have in mind. This woman has changed who she is, but what she was before was essentially a defective human being. People are afraid to make value judgments like that, and they try to skirt around the issue, but it's the truth. This is a good that technology can allow: HE as an enhancement of our humanness.

Of course, it can move us above the human level as well (and this is what is typically distinguished as "enhancement" in contrast to "treatment", although I obviously question such a distinction). But isn't becoming something greater than what one is "naturally" a distinctive part of what it means to be human? To me, that's the part that matters, not the biological substrate.

I'm beginning to get a clearer sense of what I want to write my dissertation on. I want to challenge the simplistic, sometimes positivistic value judgments that proponents--and in a different way, opponents--of HE implicitly or explicitly make. However, instead of merely criticizing the notion of HE, I want to put forth an alternative formulation that draws on ethical thinkers like Aristotle, Spinoza, and Nietzsche (my philosophical triumvirate, as it so happens).

In other words, I want to develop a richer conception of what it would mean to improve humanity through the use of technology, one that is responsive to the criticisms coming from, for instance, Adorno and Horkheimer, certain strains of feminism and environmentalism, and so forth.

(The criticisms coming from the religious are of little philosophical interest to me; they are based on a fundamental disagreement not simply in premises but, as Wittgenstein reminds us, in ways of life. In other words, there's no point in arguing against them. Religion will have to be defeated by other means. [I'm thinking robots. :-) ])

I have many details to work out, but I think this project is quite workable and certainly worthwhile from my standpoint. It's unpopular and perhaps even offensive to many academics in the humanities, but as far as I'm concerned that's a plus. (Haven't all great ideas been initially regarded as distasteful, foolish, crazy, and downright dangerous?)


Assorted Musings, Political, Technological, and Otherwise

After many months of actually working hard, I've finished all of the papers that I was working on, and now have additional free time on my hands to do, among other things, some blogging. Having written something like 150 pages just over the course of the summer, I'm more confident about finishing this program on time so that I can move on to better things, such as locations outside of the South.

I wrote on a variety of subjects, many of which went to the heart of my interests, and sustained reflection and research on these matters has led me to question old opinions and values. But there have also been some unanticipated side effects.

For instance, having abstained from reading political blogs for the past several months in an effort to use my time more productively, I think I've discovered that I enjoy not being engaged in politics. If nothing else, I'm less angry than I used to be. I've been reading snippets in the news here and there, but blogs have been excluded almost entirely.

Instead, I find myself reading more about technology. I recently purchased a subscription to Wired, a magazine run by techno-libertarians, but nonetheless one which addresses issues I believe important. The nice thing about technology news is that--with the notable exception of apocalyptic forecasts and so forth--it's on the whole positive.

The kinds of discoveries and inventions emerging in science and engineering are often astounding, and it's one area in which something like progress can be identified. (Speaking on the descriptive level here; the repercussions of new advances reside in far murkier waters.)

(For instance, I read in Wired just today that researchers at Wake Forest have successfully grown human bladders as replacement organs by extracting muscle and other tissue cells from, I assume, individuals in need of a transplant. This is a huge breakthrough that could totally revolutionize medicine if it could be extended to regenerate other organs and body parts.

The very same technology should, in principle, allow us to "grow meat" by cultivating muscle tissue from various animals. Thus, not only could we replace the horrific practice of factory farming with something probably more cost effective [at least, in the long run] and cruelty-free, but we could also have more control over things like fat content, I imagine, to make for healthier meat. Perhaps people will be uneasy about eating meat grown "in a vat" or whatever, but eventually we'll get used to it.)

But, really, my politics have changed, and I find myself now concerned with emerging issues that transcend our current simplistic political divide. I still detest the Republicans, if for nothing else than their anti-science ideological tendencies, but I identify less with the American left now. It's not as though I've ever strongly identified as a Democrat (I've registered to vote either as independent or as Green), but I am now more wary of the Ludditism all too often found in the left (especially among academics in the humanities; just because they can't do math they have to take it out on the whole scientific enterprise!).

While I still have a lot of interest in human enhancement, I've been reading up more on artificial intelligence. I feel more confident about a few predictions for the future (leaving aside the timescale issue). The 21st century will, more likely than not, see the end of human civilization as we know it. Either we will enter a new dark age, make ourselves extinct (and take quite a few other species with us), or in the worst case eradicate virtually all life on earth (I suspect some strains of bacteria will survive). There is a slim chance, though, that we will be able to control our technology well enough that it makes life for us a paradise.

It all hinges on the development of hyperhuman intelligence (to use the terminology of researcher J. Storrs Hall, whose excellent Beyond AI has been my pleasure reading of late). Autogenous, that is, self-developing, AI could quickly grow many times more intelligent--or at least more powerful--than any human government. Such AIs might seize control of world affairs, but this is not necessarily a bad thing, because they could try to organize the world to make life better for us, and they might be able to stop the existential threats posed by rogue bio- and nanotechnology.

Our best bet is to develop strong safeguards, a kind of conscience for our machines, that would prevent them from viewing us as a threat. If done right (a big "if", mind you), I think it could result in almost godlike beings that would possess wisdom orders of magnitude greater than could be found in any human being. (I've been working on an argument re-casting AI as the Platonic philosopher-kings of a Cyber-Republic.)

My thinking is that to be a capital-D Democrat, it is probably necessary to at least be a small-d democrat, and it is questionable whether I fall into the latter camp anymore. In a world growing in complexity at an exponential rate, why should a mob of bigoted, mouth-breathing yokels have any say at all in how affairs are conducted, and why should I entrust non-expert politicians who neither stand for nor understand anything of significance to make national decisions?

I should be clear. I am still interested in many of the values I espoused when I more readily affiliated with the left. I would like to see effective freedoms (within reasonable limits), basic provisions (guaranteed life necessities, medical coverage, access to advanced technology [free wireless internet for all!]), and justice (that is, everyone getting what is appropriate for them) for everyone. I just don't think popular sovereignty is an effective way of bringing this about. The prejudices of the vulgar are just too strong--and when it comes down to it, we are all vulgar.

I think I agree with Winston Churchill's apocryphal description of democracy as the worst form of government except for all of the other ones. Indeed, it is a lesser evil (although a hybrid form of democracy and aristocracy like, say, Madison preferred, might be an even lesser one). But new technology brings new possibilities and we should ask ourselves whether there are not better ways of achieving the goals we have when we settle for democratic governance.

For now, I think I might just call myself a non-traditional democrat, if for no other reason than the ostracism that someone who said they were opposed to democracy would likely face in today's political climate. But really, this is not a stretch, because I have strong Enlightenment values that lead me to favor democratic outcomes, just not procedures based on popular sovereignty. (Perhaps I will write more about this later.)