Against "Against Happiness"

But should alienation always be eliminated? Some lives are better than other kinds of lives, regardless of the psychological well-being of the person who is living them. And some kinds of lives are so soul-deadening that we might worry more about a person who was not alienated. Is the happy slave really better off than the alienated slave? Is a medicated Sisyphus obviously better off than an unmedicated Sisyphus? Is there not something disturbing about trying to medicate that alienation away?

Kramer seems to miss this point. He argues that in The Myth of Sisyphus, Camus had imagined Sisyphus happy, despite the fact that the gods had intended Sisyphus to suffer. But the happiness or unhappiness of Sisyphus is not the issue. What is the issue is the wisdom of making psychological well-being the sole measure of a successful life. It is not hard to see why a psychiatrist would put Sisyphus on Prozac. Prozac might well help Sisyphus push the boulder up the mountain more enthusiastically. Sisyphus might even appreciate the prescription. Yet this would not mean that Sisyphus had a mental health problem. Sisyphus is in a predicament, and to understand his predicament you cannot simply look at his internal psychological state. You must also understand his circumstances. Given the fact that he will be pushing the boulder up the mountain for eternity, alienation seems like an appropriate response. [emphasis added]

This moving passage is from Carl Elliott's poignant review, entitled "Against Happiness", reviewing Peter Kramer's book Against Depression. Kramer is well known for his Listening to Prozac, an early account of the impact that antidepressants could have on people's lives, while Elliott is the author of the excellent Better Than Well, a "diagnosis" of the American cultural anxiety surrounding the tension between ideals of authenticity and self-improvement. I've read all three of these works and I would recommend all of them to individuals interested in this issue.

Elliott finds that Kramer's polemical new book, which tries to argue against people who romanticize depression, is aimed at an audience that scarcely exists. And insofar as it is targeted at an American audience, this is to a large extent right. When I read Against Depression, this did not occur to me, because as an academic in the humanities, I encounter people who try to justify depression all of the time, along with those who decry the overuse of antidepressants and other psychopharmaceuticals.

Regardless of whether the book has an audience, it does address a serious issue. Elliott gets at the heart of the matter in the portion I emphasized above: Should psychological wellbeing--happiness as most Americans understand it--be "the sole measure of a successful life"? This is a question I have struggled with for a few years now--perhaps since I started taking antidepressants myself--and the only response that makes sense to me is "yes".

Even though Elliott does a better job than most in his attack on the "medicalization" of what were once considered character traits (drunkenness as alcoholism, awkwardness as social anxiety, sadness as depression, etc.), the force of his appeal must ultimately be an emotional one. In fact, it's the very same appeal that you find in all kinds of arguments against the use of enhancement technologies on humans. Elliott asks, don't we find this whole business a little disturbing?

A lot of people do. I for one don't. Yes, sadness does have a function in human life. Yes, widespread alienation in the developed world is likely a consequence of the ways in which these societies are organized. Yes, it's a luxury that we even have the opportunity to think about questions of happiness--I was recently reminded of how, for so many, practical matters associated with survival and making a living rule out consideration of such things--let alone to choose many of the conditions of our lives according to whether we think we'll find them fulfilling.

But what does that matter to me as a depressed individual? I cannot change the fact--nor, frankly, would I want to--that I live in an affluent society that allows me to pay little attention to basic issues of survival. This way of life is unfairly distributed at present, but I think it is a better way to live, and I would love to see it become more prevalent throughout the world, because it allows us to ask questions of how we ought to live. Moreover, we now live in an age in which we can inquire not only about the requirements of living and of living well, but also about the possibility of living "better than well".

Of course this makes a lot of people uncomfortable. One reason I think this is so, although Elliott doesn't explicitly raise it, is that the existence of this possibility depends on a lot of social conditions which are unjust and produce a lot of unnecessary suffering. Besides America's various underclasses, there are many developing nations in which persons are ruthlessly exploited to produce cheap consumer goods. Similarly, many of us lack awareness of the sordid and bloody history of attaining and sustaining this affluence.

I think that this is an excellent point. In focusing on one's own individual contentment, it's easy to lose track of the massive amount of unjust suffering in the world. However--and this is my central argument against a position like Elliott's--while in a state of extreme sadness, one simply lacks the motivation and the energy to do something about that.

As Spinoza understands it, and rightly so I think, sadness is a recognition of our impotence, of the ways in which we are limited. Nothing is gained by feeling sad about something that cannot be changed--such as the past--and something is lost or at least endangered, viz., our capacity to do something positive, if we feel sad about those things which we might be able to affect.

While sadness might have some evolutionarily adaptive value, it is simply a counterproductive feeling and is by no means a necessary prerequisite of bringing about positive social change. Sadness is the real luxury, not happiness. Alienation may be "reasonable" or "appropriate" in our society, but it sure as hell won't change anything.

Let us return to the example of Sisyphus. Keeping in mind that his situation is fantastical, I think that especially for him what matters is happiness. He is stuck in a situation he has no power to change. What does it matter if he, as Elliott interprets Camus, is happy, not in the sense of emotional wellbeing, but only in the sense of being conscious of the absurdity of his predicament? In other words, what does it matter if he is alienated? Perhaps it makes us feel better that his is an "appropriate response" to his "predicament", but it does little for Sisyphus except perhaps give him some feeling of moral superiority, a slim consolation indeed compared to the tragedy of his infinite torment.

As Wittgenstein so aptly puts it, somewhere in his notebooks, "The world of the happy man is not the same as the world of the unhappy man." And the difference between the happy world and the unhappy world may be the only one that matters in our nihilistic age. (I recently wrote a paper about this last point, so I will not develop it further right now. By nihilism, I mean something like the recognition of the contingency of all structures of meaning. In other words, every belief or value is simply regarded as a choice among numerous others, with no criteria upon which to choose. Picture life in the existential shopping mall, to use philosopher James Edwards' analogy.)

I am not convinced that medicalizing life's woes is incompatible with social critique. The suffering that depression brings about is as real as the suffering of malnutrition, although different in kind. It is certainly tragic that so many ignore the latter kind of suffering, but the cure for this is not to be found in the former. In short, this is why I am against "Against Happiness" and for Against Depression.


Problems with Technology

In preparation for a paper on Critical Theory and the critique of "Instrumental Rationality", I've been reading a number of articles in the philosophy of technology. As might be suspected from a discipline that has virtually eschewed any technological innovation--aside from the word processor, I can scarcely think of any concrete examples--most of them are critical, with some even celebrating their "Luddism".

Just today, I came upon a review of yet another book against genetic engineering, this one by philosopher Michael Sandel, whom I think I used to like. Called "The Case Against Perfection", it's typical of a certain class of problems that I'll explicate below. At least his critique seems somewhat novel, according to the reviewer, who calls it "half right".

Reading these various pieces has helped to clear up, in my own mind, what my general advocacy of technology more specifically entails, and where I am in agreement with its critics. To that end, I've been experimenting with a simple typology of problems, which I present in a draft form now.

I see two basic categories of problems, which I call "Problems of Destruction" and "Problems of Transformation". With respect to the former, I am on the same page with the most rabid of Luddites; it is the latter that I am less inclined to think of as problematic. Of course, these categories are not likely to be exhaustive--I can think of some examples which seem to fit into neither--nor mutually exclusive; they are simply schematic.

I) Problems of Destruction, as might be inferred, are those that deal with issues of survival. These include various threats to individuals, species, the environment, civilization, even life itself.

Certain fields like the recently emergent "synthetic biology"--a name I only came across days ago, but which already seems ubiquitous on the net--along with other sectors of biotechnology and nanotechnology have the potential to unleash massive devastation on par with nuclear holocaust (but without all that messy radiation). The engineering of a super-virus not found in nature or something like the infamous "gray goo" scenario (in which self-replicating nanobots are let loose and convert the entire biosphere into copies of themselves) would be examples of this.

(Artificial intelligence and robotics pose different kinds of threats, such as replacing human beings, which are considered by some as instances of the other class of problems. Insofar as an artificially intelligent civilization might mitigate the risk of other kinds of destruction, I find myself potentially sympathetic. This is an area I'll have to return to.)

Less destructive examples include pollution and other industrial processes which contribute to global warming, as well as more locally situated contaminations. The development of biofuels--undoubtedly the stupidest way of trying to resolve the energy crisis, and so, unsurprisingly, one championed by our president--poses significant dangers which have largely gone unrecognized. According to a recent report, human beings already consume a quarter of nature's productive capacity, a figure which would grow even worse if fossil fuels were simply exchanged for biofuels.

Threats to the survival of human beings and other life would be regarded seriously by all but the most misanthropic. Efforts like the green energy movement and oversight of the most dangerous areas of research would be in our best interest. Those who resist these measures are simply not examining long-term consequences. Unfortunately, some major corporate players fall into this camp as a consequence of our brand of capitalism which is incapable of looking ahead more than a couple of years, usually being focused on this quarter's earnings and whatnot.

II) Problems of Transformation are more problematic in their problematicity. (As ugly as that last sentence is, it conveys what I want to say tersely.) Sandel's diatribe against genetic engineering is but one of a broad range of examples. In essence, what I'm calling "transformations" involve significant alterations to established ways of life, some more profound than others.

Since at least the industrial revolution, we have undergone a number of significant transformations. We live very differently than did our ancestors. The more conservative elements of society are likely to lament this as a loss, but most people are happy to call this "progress". I use the more neutral term "transformation" to avoid overt bias, even though I tend more often than not to fall into the latter camp. (Also, it's foolish to view a change as progressive simply in virtue of the fact that it is novel.)

The largest areas of concern today seem to be the potentially radical transformations of human beings promised by "enhancement technologies" such as genetics and cybernetics. Many critics contend that the blurring of boundaries occasioned by such interventions threatens our "humanity", "dignity", "meaning", or whatever other romantic buzzword one cares to use. At the very least, I grant them that not all "enhancements" will necessarily be improvements.

The more sophisticated critics realize that humans will (by and large) adapt to changes and take them for a new "normal"--and see this as part of the problem. However, it is difficult for such critiques to find purchase; either they rest on some dubious metaphysical ground, or they rely on the equally dubious strategy of taking certain characteristics of human beings--like the way that certain things disgust or frighten us--as essential. Quite frankly, I think "postmodern" intellectuals have no basis for criticizing transhumanism except their individual prejudices, which are only valid to those who share them.

Other examples of this would include the widening of the gap between technological elites and those without access. I'm inclined to think that such partitions are more a function of capitalism than a necessary consequence of technology. In fact, I see no way of finding positive alternatives to capitalism without significant technological change. Proponents of technology like to see this gap as more of a "lag"; the poor eventually do get access, as can be seen, e.g., in the spread of cell phones in the developing world.

I think that this Transformation category probably requires greater specification, since it covers such a broad range of issues. What's important to note is the way in which these transformations tend to affect not merely our material circumstances, but also our beliefs, attitudes, and structures of meaning. The latter is what is scariest to people, but as an anti-essentialist, I am unconcerned.

Social critics will always find ways to complain of deficits of meaning in a society; conservatives will always long for the good old days that never were; but most people will adapt. I see clinging to the old ways of life as a consequence of a couple of factors. Often, it's just greater fear of unknown evils than of known ones. When it's opposition to ostensible improvements, it's a way to make people feel better about the unnecessary suffering that they had to endure ("suffering is just a part of life!", "pain is what gives human existence meaning", "our imperfections constitute our humanness", and other such drivel).

Most likely, the intelligent inhabitants of earth a century from now (if there are any) would not appear "human" to most people today. As for me, I see no reason to cling to an evolutionary accident. What matters are things like rich experience, intelligence, reason, happiness, meaningfulness, benevolence, and so forth. Whether or not such beings think of themselves as "human", perhaps as some nostalgic sentiment, is to me entirely inconsequential. (I think an extension of the category of speciesism would be appropriate here.)