3/22/2007

Upon Further Reflection...

The musings of yesterday evening did not quite sit well with me, even immediately after writing them. While I do think the problem of technology is of vital importance, my feeling is that some of my political conclusions were over-hasty. I was caught up in the rush of radical new ideas and just got carried away. This post is a more critical take on some of the issues raised previously.

When one reaches conclusions that clash dramatically with the sensibilities of most others, it's a good idea to try to understand as well as possible what really underlies these conclusions, and whether they rest on secure foundations. Thus, I want to consider this from at least two perspectives. One is the rational and evidentiary basis of the claims, while the other is more psychological, looking at the factors that motivate the creation of the arguments.

First, it seems yesterday's conclusions depend on a number of presuppositions which I either failed to mention or inadequately argued for. The prediction of rapid progress is premised on the continuation of a trend that so far shows no signs of slackening, but it is far from certain that the conditions which support it will be stable.

While I mentioned the possibility of an apocalyptic disaster which either wiped out our species or at least destroyed a good chunk of it and ended civilization as we know it, I didn't consider other, more minor catastrophes which might serve to delay or reverse scientific progress. For example, the looming specters of global warming and the energy crisis constituted by the depletion of fossil fuels are significant problems which I've been content to lump in with other problems having merely technological solutions. It may be the case that we find new, cleaner sources of energy, or technological fixes to reverse the effects of excessive carbon emissions, but this is far from certain. One could raise other problems as well: instability caused by economic collapses and shifts in the balance of political power, wars over limited resources like oil, and so forth.

On top of that, there may be certain things which are just not physically possible to achieve, or that come with such adverse side effects as not to be worth pursuing. One could imagine certain genetic augmentations that disrupt a finely balanced natural system, or the problem of creating software to use all our powerful computing hardware to its fullest capacity, to name just two.

Next, in terms of the basis of the normative claims, it is by no means obvious that greater intelligence necessitates better judgment or wisdom. My inclination is to believe that it would, but this is something of an empirical question, though one that is simultaneously normative, since what counts as better judgment is itself a matter of ethical and political judgment. Furthermore, I downplayed other possible reasons why letting greater-than-human artificial intelligences decide for us might be undesirable. One such reason is the importance for well-being of the sense of being free to decide for oneself, to determine the course of one's own life, and so on.

Moving on to the second perspective, I find that, psychologically, my motivation in the previous post is highly misanthropic, in addition to being highly anti-natural. Now, granted, I think this hatred of natural processes and of human beings is justified, but there are serious repercussions to being so motivated. At the very least, I need to offer a better justification for this attitude.

For the first time in my quarter-century or so of life, I have broken a bone. It happened, I think, about a month ago, but I didn't notice any effects until I started feeling pain in my right foot about two weeks ago, and it was not confirmed until just today, after I had X-rays taken. Suffice it to say that it has reminded me of the frailty of human bodies.

Similarly, although it places me in a camp with some of the most violently ascetic individuals in history, there are many things that annoy me about the kinds of bodies that we have: all the effort that is required to maintain physical hygiene, the inefficiency of many systems, the unpleasant wastes that our bodies produce, the way that we get so easily tired by sustained activity, etc.

My body in particular is not in the greatest of shape, even after I devote considerable time to habits of maintenance. I have never been particularly strong or fast or resilient. Frankly, if not for safety concerns, I think I would be one of the first in line for a prosthetic body if such a thing should be developed. As it stands, I can easily imagine myself voluntarily opting for cybernetic limbs to replace my perfectly healthy ones, once these are safe and indistinguishable from natural limbs.

All of these shortcomings are to be expected from the unintelligent design of blind evolutionary forces. Unfortunately, as imperfect as our bodies are, they are still exceedingly complex, having reached solutions to problems of organization that we are not even aware of at present. I think we will eventually design better bodies, but it may be quite some time. In the interim, the best option may just be to try to optimize the basic design that we do have, and this is a problem for biotech more than anything else.

Furthermore, I was explicit previously in my suggestion that suffering is pointless and worthless, but this is a bit of an overgeneralization. Pain certainly has an evolutionary function, which involves, among other things, learning how to respond to complex environments. What I see as another unfortunate consequence of nature's blind designs is that pain seems to accomplish the task of learning far more readily than does pleasure.

I have seen evolutionary justifications for this empirical fact, such as the far greater cost imposed by death (which must in turn be avoided to the greatest degree possible) versus the relatively small benefit accrued by successfully obtaining food for the day, or by one act of mating, and so forth. The consequence of this is that, on the whole, there is likely a far greater degree of pain than pleasure in the world, and to me this is simply unacceptable. On simple utilitarian grounds alone, it would be incumbent upon us to undertake to redesign natural processes as far as possible, so that we might reverse this pernicious trend with all its adverse consequences (e.g., the possibility of torture).
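To make the asymmetry concrete, here is a toy expected-value sketch; the quantities and numbers are purely illustrative assumptions of mine, not actual biological measurements. Suppose death costs $C_{\text{death}} = 1000$ units of fitness, one meal yields $B_{\text{meal}} = 1$ unit, and ignoring an injury carries a $p = 0.01$ chance of death. Then

$$ p \cdot C_{\text{death}} = 0.01 \times 1000 = 10 \;\gg\; B_{\text{meal}} = 1, $$

so even a small risk of death outweighs the entire benefit of a meal tenfold, and a signaling system tuned to expected fitness impact would accordingly make the aversive signal far more intense than the appetitive one.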

Lastly, on this note concerning my distaste for "nature" and "human nature", there are the myriad social and political problems which I see as ultimately unresolvable by social and political means alone. My contention is that most of these problems have at least two kinds of solutions, one of which is primarily social and the other of which is medical or otherwise technological.

For example, take the difficulties imposed on the handicapped in virtue of their injuries and defects. The problem here is fundamentally a mismatch between certain individuals and their social environments. To an extent, the environment can be altered: e.g., handicapped bathrooms and ramps are fairly common accommodations that society has made for those bound to wheelchairs.

But rather than taking all that effort to redesign the environment (including aspects that are extremely difficult to change, viz., opinions and attitudes), we can attack the problem at the individual level. By screening for genetic defects, and by developing highly effective prosthetics, cures for paralysis, and so forth, we can simply eliminate the handicaps themselves. (And unlike the deplorable eugenics movements of the past, this approach totally avoids killing people; it simply remedies certain existing conditions and prevents certain genetic combinations from attaining life.)

Not all social problems may be resolvable on this model, but I believe that many of the causes of unhappiness in people can be so resolved, and this would be a major step forward. One can think of it as something like "applied stoicism": I change myself rather than my environment, because I have so little power over the latter.

In short, I like to see this as a problem of figuring out how reason can best overpower negative affects. I've been meaning to write a post to show why Spinoza would agree with me about all this technology stuff (and it's not even that big of a stretch, as I hope to show), but certainly we have a case here of people coming together (thus having more power than individuals alone) and crafting artifices which allow for more direct control of those things which disempower us. To my mind, that's what human enhancement truly is: the augmentation of our freedom.

Returning to the first perspective (the beliefs that underlie my conclusions), it is vital to note that certain conclusions I have reached concerning human freedom and divinity are essential presuppositions for me. I believe, but will not argue for here, that what is taken for free will is simply ignorance of the causes of our desires (for one excellent argument, see the invaluable Appendix to Part I of Spinoza's Ethics). I maintain that the choices we are presented with when it comes to technological control are between numerous causes (of which we are ignorant) interacting in highly complex ways to produce highly contingent effects, on the one hand, and more direct control based on scientific knowledge, on the other.

(As an aside, I use the word "choice" deliberately; I do not deny that we have the freedom to make choices. Rather, I maintain that the basis on which those choices are made consists of desires that are primarily the product of external forces, and to the extent that we can change our desires, this requires the operation of second-order desires, whose origin is ultimately derived from external factors.)

In short, the choice is between chance and ignorance, on the one hand, and control and knowledge, on the other. To me, this is really no choice at all; only a fool would choose ignorance. The problem that most people face (and this is a point that B.F. Skinner, of all people, has made remarkably well) is that external determination is more evident in cases of deliberate control (because, of course, we're ignorant of the causes in the more complex cases), so it's easier to see such control as simply being manipulated. But, as I construe it, we're being determined to action either way; the path of knowledge, though, allows us to be determined more by our own nature, which is what I think true freedom is.

This should be sufficient for now. Let me, to some extent, rescind my previous rejection of certain democratic values and leave them in a sort of questionable area, a matter of doubt requiring further reflection.
