Why I want to move to Japan...


At this point, I think it's pretty much inevitable that, some time after I finish my Ph.D., I will have to move there. Unlike the West, Japan gets it. Here's what I mean:

Besides financial and technological power, the robot wave is favored by the Japanese mind-set as well.

Robots have long been portrayed as friendly helpers in Japanese popular culture, a far cry from the rebellious and violent machines that often inhabit Western science fiction.

Robots are our friends and will only become our enemies if we make them that way (are you listening, proponents of robotic warfare?). I recently came across an interesting paper by Nick Bostrom on the ethics of what he calls "superintelligence" (i.e., smarter-than-human AI) that argues this point persuasively:

It seems that the best way to ensure that a superintelligence will have a beneficial impact on the world is to endow it with philanthropic values. Its top goal should be friendliness. How exactly friendliness should be understood and how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures is a matter that merits further consideration. I would argue that at least all humans, and probably many other sentient creatures on earth should get a significant share in the superintelligence’s beneficence. [...] One risk that must be guarded against is that those who develop the superintelligence would not make it generically philanthropic but would instead give it the more limited goal of serving only some small group, such as its own creators or those who commissioned it.

If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness. This point is elementary. A “friend” who seeks to transform himself into somebody who wants to hurt you, is not your friend. A true friend, one who really cares about you, also seeks the continuation of his caring for you. Or to put it in a different way, if your top goal is X, and if you think that by changing yourself into someone who instead wants Y you would make it less likely that X will be achieved, then you will not rationally transform yourself into someone who wants Y. The set of options at each point in time is evaluated on the basis of their consequences for realization of the goals held at that time, and generally it will be irrational to deliberately change one’s own top goal, since that would make it less likely that the current goals will be attained.

If friendliness can be assured, then the benefits of superintelligence are profound. It is not an exaggeration to suggest that it could help us solve every human problem (or at least those which admit of a technical solution, which are probably most of them). Read the remainder of Bostrom's essay (it's short) if you're not convinced.

Since the Japanese will be the first to harvest the bounty of this robotic golden age, perhaps you can see why I'm so eager to join them.
