3/22/2007

Upon Further Reflection...

The musings of yesterday evening did not quite sit well with me, even immediately after writing them. While I do think the problem of technology is of vital importance, my feeling is that some of my political conclusions were over-hasty. I was caught up in the rush of radical new ideas and just got carried away. This post is a more critical take on some of the issues raised previously.

When one reaches conclusions that clash dramatically with the sensibilities of most others, it's a good idea to try to understand as well as possible what really underlies these conclusions, and whether they rest on secure foundations. Thus, I want to consider this from at least two perspectives. One is the rational and evidentiary basis of the claims, while the other is more psychological, looking at the factors that motivate the creation of the arguments.

First, it seems yesterday's conclusions depend on a number of presuppositions which I either failed to mention or inadequately argued for. The prediction of rapid progress is premised on the continuation of a trend that so far shows no signs of slackening, but it is far from certain that the conditions which support it will be stable.

While I mentioned the possibility of an apocalyptic disaster which either wiped out our species or at least destroyed a good chunk of it and ended civilization as we know it, I didn't consider other, more minor catastrophes which might serve to delay or reverse scientific progress. For example, the looming specters of global warming and the energy crisis posed by the depletion of fossil fuels are significant problems which I've been content to lump in with other problems having merely technological solutions. It may be the case that we find new, cleaner sources of energy, or technological fixes to reverse the effects of excessive carbon emissions, but this is far from certain. One could raise other problems as well: instability caused by economic collapses and shifts in the balance of political power, wars over limited resources like oil, and so forth.

On top of that, there may be certain things which are just not physically possible to achieve, or that come with such adverse side effects as not to be worth pursuing. One could imagine certain genetic augmentations that disrupt a finely balanced natural system, or the problem of creating software to use all our powerful computing hardware to its fullest capacity, to name just two.

Next, in terms of the basis of the normative claims, it is by no means obvious that greater intelligence necessitates better judgment or wisdom. My inclination is to believe that it would, but this is something of an empirical question, although one that is simultaneously normative, since what counts as better judgment is itself a matter of ethical and political judgment. Furthermore, I downplayed other reasons why letting greater-than-human artificial intelligences decide for us might be undesirable. One such reason is the importance for well-being of the sense of being free to decide for oneself, to determine the course of one's own life, and so on.

Moving on to the second perspective, I find that my motivation in the previous post is, psychologically speaking, highly misanthropic, in addition to being highly anti-natural. Now, granted, I think this hatred of natural processes and of human beings is justified, but there are serious repercussions to being so motivated. At the very least, I need to offer a better justification for this attitude.

For the first time in my quarter-century or so of life, I have broken a bone. It happened, I think, about a month ago, but I didn't notice any effects until I started feeling pain in my right foot about two weeks ago, and it was not confirmed until just today, after I had X-rays taken. Suffice it to say that it has reminded me of the frailty of human bodies.

Similarly, although it places me in a camp with some of the most violently ascetic individuals in history, there are many things that annoy me about the kinds of bodies that we have: all the effort that is required to maintain physical hygiene, the inefficiency of many systems, the unpleasant wastes that our bodies produce, the way that we get so easily tired by sustained activity, etc.

My body in particular is not in the greatest of shape, even though I devote considerable time to habits of maintenance. I have never been particularly strong or fast or resilient. Frankly, if not for safety concerns, I think I would be one of the first in line for a prosthetic body, should such a thing be developed. As it stands, I can easily imagine myself voluntarily opting for cybernetic limbs to replace my perfectly healthy ones, once these are safe and indistinguishable from natural limbs.

All of these shortcomings are to be expected from the unintelligent design of blind evolutionary forces. Unfortunately, imperfect as our bodies are, they are still exceedingly complex, having reached solutions to problems of organization that we are not even aware of at present. I think we will eventually design better bodies, but it may be quite some time. In the interim, the best option may just be to try to optimize the basic design that we do have, and this is a problem for biotech more than anything else.

Furthermore, I was explicit previously in my suggestion that suffering is pointless and worthless, but this is a bit of an overgeneralization. Pain certainly has an evolutionary function, which involves, among other things, learning how to respond to complex environments. What I see as another unfortunate consequence of nature's blind designs is that pain seems to accomplish the task of learning far more readily than does pleasure.

I have seen evolutionary justifications for this empirical fact, such as the far greater cost imposed by death (which must in turn be avoided to the greatest degree possible) versus the relatively small benefit accrued by successfully obtaining food for the day, or by one act of mating, and so forth. The consequence of this is that, on the whole, there is likely a far greater degree of pain than pleasure in the world, and to me this is simply unacceptable. On simple utilitarian grounds alone, it would be incumbent upon us to undertake to redesign natural processes as far as possible, so that we might reverse this pernicious trend with all its adverse consequences (e.g., the possibility of torture).

Lastly, on this note concerning my distaste for "nature" and "human nature", there are the myriad social and political problems which I see as ultimately unresolvable by conventional social means. My contention is that most of these problems have at least two kinds of solutions, one of which is primarily social and the other of which is medical or otherwise technological.

For example, take the difficulties imposed on the handicapped in virtue of their injuries and defects. The problem here is fundamentally a mismatch between certain individuals and their social environments. To an extent, the environment can be altered: e.g., handicapped bathrooms and ramps are fairly common accommodations that society has made for those bound to wheelchairs.

But, rather than taking all that effort to redesign the environment--including aspects that are extremely difficult to change, viz., opinions and attitudes--we can attack the problem at the individual level. By screening for genetic defects, and by developing highly effective prosthetics, cures for paralysis, and so forth, we can simply eliminate the handicaps themselves. (And unlike the deplorable eugenics movements of the past, this totally avoids killing people; it simply remedies certain existing conditions and prevents certain genetic combinations from attaining life.)

Not all social problems may be resolvable on this model, but I believe that many of the causes of unhappiness in people can be so resolved, and this would be a major step forward. One can think of it as something like "applied stoicism": I change myself rather than my environment, because I have so little power over the latter.

In short, I like to see this as a problem of figuring out how reason can best overpower negative affects. I've been meaning to write a post to show why Spinoza would agree with me about all this technology stuff (and it's not even that big of a stretch, as I hope to show), but certainly we have a case here of people coming together (thus having more power than individuals alone) and crafting artifices which allow for more direct control of those things which disempower us. To my mind, that's what human enhancement truly is: the augmentation of our freedom.

Returning to the previous perspective (the beliefs that underlie my conclusions), it is vital to note that certain conclusions I have reached concerning human freedom and divinity are essential presuppositions here. I believe, but will not argue for it here, that what is taken for free will is simply ignorance of the causes of our desires (for one excellent argument, see the invaluable Appendix to Part I of Spinoza's Ethics). I maintain that, when it comes to technological control, the choice we are presented with is between numerous causes (of which we are ignorant) interacting in highly complex ways to produce highly contingent effects, on the one hand, and more direct control based on scientific knowledge, on the other.

(As an aside, I use the word "choice" deliberately; I do not deny that we have the freedom to make choices. Rather, I maintain that the basis on which those choices are made consists of desires that are primarily the product of external forces--and to the extent that we can change our desires, this requires the operation of second-order desires, whose origin is ultimately derived from external factors as well.)

In short, the choice is between chance and ignorance, on the one hand, and control and knowledge, on the other. To me, this is really no choice at all; only a fool would choose ignorance. The problem that most people face, and this is a point that B.F. Skinner, of all people, has made remarkably well, is that the external determination is more evident in cases of deliberate control (because, of course, we're ignorant of the causes in the more complex cases), so it's easier to see such control as simple manipulation. But, as I construe it, we're being determined to action either way; the path of knowledge, though, allows us to be determined more by our own nature, which is what I think true freedom is.

This should be sufficient for now. Let me, then, rescind to some extent my previous rejection of certain democratic values, and leave them in a sort of questionable area, a matter of doubt requiring further reflection.

3/21/2007

A 3rd Dimension in 21st Century Politics

I've begun reading Ray Kurzweil's most recent book, The Singularity Is Near. I'm tempted to call the man a visionary; his view of the world is profound and startling. It's got me thinking about some issues I've seen raised elsewhere, and I've come around to some political conclusions that are, to say the least, not mainstream.

In contemporary politics, a 2-dimensional model is sometimes proposed for plotting political views. Wikipedia has a nice discussion of the diversity of views on what constitutes the political spectrum. Many have seen the inadequacy of the left-right model, and have proposed alternatives, one of which is known as the "political compass" model (discussed in detail here), which is fairly representative of 2-axis accounts, so I will briefly discuss it.

The two axes are the social and the economic. On the economic side of things, you find a range of views from hardcore libertarianism and support of laissez-faire capitalism, on the right end, to various forms of communism and socialism, on the left. The range of views on economics is of course more complex than this, but it offers a rough guideline by focusing on the level of interference in an economy by the state or some other collective.

On the social axis, you have views that range from authoritarianism and the worship of tradition, on the right (actually, the top of the compass), to what they call libertarianism, with its emphasis on personal freedoms, on the left (the bottom). (It's probably appropriate to distinguish between economic and social libertarianism, as well as from the political group that calls itself libertarian, whose members are often both.) The relevant characteristic here is a measure of permissiveness: how much is left up to individuals to choose for themselves.

I trust these two axes are fairly straightforward. However, a third dimension is emerging in 21st-century politics, and it will become extremely prominent very soon. My suggestion is that it warrants its own axis, since it does not map well onto either of the other two. I think the easiest way to distinguish the ends is by calling them technoconservative and technoprogressive, though you might also describe the range as running from Neo-Luddite to Transhumanist. On either end, you find unlikely allies.

On the technoconservative side, you find environmental activists, religious fundamentalists, leftist academics (much to my chagrin; it makes me wish Brave New World had never been written), and groups like the Amish. On the technoprogressive side, you have avowed transhumanists, technophiles of various sorts, avid consumers of gadgets and doodads, and even the occasional social conservative who sees technology as a means of "curing" the gay or as a great way to enforce law and order.
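Since the compass talk above is essentially a description of a coordinate model, here is a minimal, purely illustrative sketch (in Python) of what a three-axis version might look like. Everything in it--the class name, the -1 to +1 scale, and the example coordinates--is my own invention for illustration, not part of the political compass model or any published scoring scheme.

    from dataclasses import dataclass

    @dataclass
    class PoliticalPosition:
        """A point in a hypothetical three-axis, compass-style model.

        Each axis runs from -1.0 to +1.0 (an arbitrary scale chosen for illustration):
          economic:      -1.0 = collectivist/socialist, +1.0 = laissez-faire
          social:        -1.0 = libertarian,            +1.0 = authoritarian
          technological: -1.0 = technoconservative (Neo-Luddite),
                         +1.0 = technoprogressive (Transhumanist)
        """
        economic: float
        social: float
        technological: float

    # Hypothetical example placements; the numbers are illustrative guesses,
    # not measurements of anyone's actual views.
    examples = {
        "avowed transhumanist": PoliticalPosition(economic=0.2, social=-0.5, technological=0.9),
        "Neo-Amish traditionalist": PoliticalPosition(economic=0.0, social=0.6, technological=-0.9),
    }

    for label, pos in examples.items():
        print(f"{label}: econ={pos.economic:+.1f}, social={pos.social:+.1f}, tech={pos.technological:+.1f}")

The only point of the sketch is that the third coordinate is independent of the first two: knowing where someone falls economically and socially tells you little about where they fall technologically.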

I view our culture as being far more technoconservative than technoprogressive, in spite of all the money we put into scientific and technological research. Yes, there is a sort of contradiction here, given our worship of progress and our insatiable consumer culture, but by and large Americans are wary of new technologies, particularly biotechnologies like genetic engineering. Contrast the US with nations like Japan, India, or Thailand, and you find far less support here for, say, germline genetic augmentation, which many westerners are inclined to dismiss as scary "eugenics".

Carl Elliott has documented our ambivalent relationship with human enhancement in Better Than Well, a text which I use in my intro philosophy class and one I highly recommend. What you find, Elliott suggests, is a tension in thinking about human identity between notions of authenticity and being true to yourself, on the one hand, and those of self-improvement and being all that you can be, on the other. And, in truth, this issue of technology is largely one of identity, although in a different sense than the trendy topic found in academia today.

The fact of the matter is that we will very soon be dealing with beings of our own creation who match and then exceed human capabilities. The extent to which, and how soon, this happens depends in part on how long the Luddites are able to hold off what I see as inevitable (or, inevitable so long as we do not destroy ourselves, which is a real possibility).

But if the technoprogressives win out, we will see robots, genetically enhanced humans, humans who voluntarily integrate themselves with machines (often called cyborgs), computerized intelligences without bodies properly speaking, intelligences realized in new substrates like molecular computing that call the living/non-living distinction into question, and then all the hybrids of these various types. Imagine the world of the "X-Men" combined with that of "Ghost in the Shell". Add in nanotechnology, and things get even wackier--and all of this happening at once.

Very soon, people will have no clear idea of what is human or who counts as a person deserving of legal and moral rights. In this era, the technoconservatives (or at least the social conservatives among their number) will probably limit humanity to the unenhanced and non-cybernetic. This group, which I like to think of as the Neo-Amish, may constitute a substantial portion of the population in the beginning. But ultimately, they will be unable to compete with their enhanced brethren (and good riddance!).

This picture of the world scares many people. It runs counter to our understanding of the equality of persons; when you talk about improving humans, you are talking about making some better than others. It essentially destroys the traditional human world. I think that it is more than likely--I'd say even 90% certain--that human beings, as we are constituted now, will be extinct by 2100. Either we will have destroyed our species, or we will have moved beyond it.

This is, as I see it, the last century of humanity. Because I far prefer augmenting humanity to eliminating it, I strongly support unimpeded technological development in the GNR (genetics, nanotechnology, robotics) fields. Now, some people oppose such development out of the same concern, since they see it as increasing the likelihood of human extinction, whether by some super-virus, a robot rebellion, or the destruction of life by omnivorous nanobots (the "gray goo" scenario). And to an extent, this is true.

However, I see a far greater threat in the form of the idiots who control our nuclear weapons. One of the problems with unenhanced humanity is our aggressive and tribal impulses, which incline us to hate those different from us and to go to great lengths to destroy them. The sooner the enhanced and machine intelligences take over the world, as far as I am concerned, the better. In short, for those of you who know the film, I see "I, Robot" as having an unhappy ending.

Now, this is a radical view politically. It makes me part ways with many democratic ideals. But the more I read about the exponential growth of technologies, the more inevitable it seems. Yes, we may ban a lot of biotech, but the change is coming faster than we can respond to it. Ultimately, I think it's the machines that will triumph, because people will view advances in computing and robotics as relatively innocuous, until the robots and other machine intelligences become commonplace.

Before long, recognizing their superiority, the machines and their cybernetic allies will sweep the unenhanced out of power--hopefully nonviolently--and usher in a new era of peace and cooperation. My guess is that this would happen around mid-century. It behooves those of us with any sense to join them.

All this sentimental piffle about the flaws and sufferings that make human beings special should be recognized for the garbage that it is. Nietzsche articulates it perfectly: the one thing we cannot accept is the notion that our suffering is meaningless. Well, folks, like it or not, it mostly is. It's the product of blind chance, of complex natural processes that lack foresight, of the various idiocies that human beings foist upon themselves. A world with as little suffering as possible would be a far better world, and this will (I hope) be the world of the future, in which intelligence wins out over other natural forces.

Most of you who read this, although I know few do, will likely regard me as insane, or at least deluded. I say time will tell. Predicting the future is a difficult business, and I could likely be wrong. However, one thing I am quite confident in is my normative stance: human beings are ultimately unfit to govern themselves. You can arrange societies in only so many ways, some more preferable than others, but like a mosaic made out of dog shit, the substance with which you work will put serious limitations on how beautiful your finished product will be.

In the past, this view was simply used as a means for one group to gain power over another. We are "the well born", "the best men", the ones who speak directly to the gods, etc., etc. But, in the future, this pretense will become a reality. Let there be democracy amongst the enhanced, but most human beings have no idea about what is good for themselves. Certainly, the people are not equipped to recognize who among them would serve as the best leaders--George W. Bush is but one obvious example.

We should be honest with ourselves. Frankly, I see myself as lacking in these respects--certainly I lack the intelligence and wisdom to see how the world ought to be governed--but, really, so do all human beings. For too long, we have been at the whims of fortune, but now that we have the possibility of becoming masters of fate, it is incumbent upon us that we do so.

I find myself with a new purpose, as an academic and a philosopher, which is to sell this vision of the world, to encourage people to put effort into making our radically different future as utopian as possible. More and more are waking up to the importance of the issue, but far too many react to it with revulsion. I hope to change that. In any case, this is really the defining issue of our time. Recognize it.

3/10/2007

What's Wrong With American Democracy

Among many other things, this.

For all the policy blueprints churned out by presidential campaigns, there is this indisputable fact: People care less about issues than they do about a candidate's character.

A new Associated Press-Ipsos poll says 55 percent of those surveyed consider honesty, integrity and other values of character the most important qualities they look for in a presidential candidate.

Just one-third look first to candidates' stances on issues; even fewer focus foremost on leadership traits, experience or intelligence.


Now this might not be such a big deal if it were actually possible to know the character of candidates. In theory, it seems that our leaders ought to be men and women of character. Perhaps this is not the most important quality, but it is worth something.

The problem is that a presidential candidate's "character" is the creation of PR and marketing specialists. If people actually knew George Bush's character (and, according to a poll cited in the above article, a whopping 44% think Bush is honest as of January; I didn't realize such a large percentage of the population was mentally challenged), he never would have come close enough to steal the 2000 election in the first place.

But speaking more generally, look at the types of people who are attracted to politics. Do any of them have the least bit of character? I think we've forgotten one of the fundamental premises of our form of government, as Glenn Greenwald has repeatedly pointed out: power corrupts, and politicians are not to be trusted. This is why we have checks and balances, separation of powers, and all that.

Now, granted, some politicians are bigger crooks than others, but we should never simply trust them, regardless of their party affiliation. Since the media has essentially forfeited its role as watchdog, and is selective about which facets of individual character it reveals, we should always be at least initially suspicious of the claims to good character that we hear from candidates, or of bad character from their opponents (recently, it came out that Obama was late in paying some parking tickets at Harvard--is this "character"?).

By what criteria should we judge our candidates? I'm inclined to say that what really matters is not character, but whether I could have a beer with the guy. But seriously, anyone who can honestly say "I voted for/against candidate X because s/he seems like a [insert trait here] person" has essentially abdicated their responsibility as a democratic citizen.

Numerous other qualities--like, say, oh, I don't know, what they plan on doing once elected--are more responsible criteria for decision. In truth, though, the entire system of presidential elections almost ensures that no one can make a good decision about whom to vote for, and that whatever decision is made amounts to choosing the lesser of two evils.

In summary: character is a dumb way to decide who should be president, not because it doesn't matter, but because it's nigh impossible to discern who has the least awful character.