The futures that don't need us, that didn't happen, and that we should avoid

Posted by Jesse Reynolds August 11, 2010
Biopolitical Times
The messenger can be as important as the message. A statement that seems incongruous with the speaker's broader ideology often piques the interest of those who might otherwise be unreceptive. For example, the technology critique "Why the Future Doesn't Need Us" by the computing pioneer Bill Joy is the only cautious voice on the reading list for this weekend's Singularity Summit. Two similar recent publications may carry extra weight for the same reason: because of who wrote them.

First, on the lighter end of the spectrum, the cover story of this month's Wired (the same outlet that published Joy's essay) explores "The Future that Didn't Happen," a couple dozen highly touted inventions that have failed to materialize thus far. Although comedian Will Ferrell is featured as a comic-relief guide, other authors briefly but accurately touch on expectations not (yet) met, including "designer babies," the Singularity, and personalized medicine.

Second, Monday's New York Times features an important op-ed by Jaron Lanier, a polymath guru of virtual reality and the Internet who is regularly compared to Joy. His insider's criticism of information technology, in particular of the prospects for artificial intelligence and cyborgs, is not new: his "One Half a Manifesto" is almost a decade old, and a new book expands on it. However, in his Times essay, Lanier takes his argument against confusing or blurring humans with machines to the country's most prominent newspaper. He concludes:
When we think of computers as inert, passive tools instead of people, we are rewarded with a clearer, less ideological view of what is going on — with the machines and with ourselves. So, why, aside from the theatrical appeal to consumers and reporters, must engineering results so often be presented in Frankensteinian light?

The answer is simply that computer scientists are human, and are as terrified by the human condition as anyone else. We, the technical elite, seek some way of thinking that gives us an answer to death, for instance. This helps explain the allure of a place like the Singularity University. ...

If technologists are creating their own ultramodern religion, and it is one in which people are told to wait politely as their very souls are made obsolete, we might expect further and worsening tensions. But if technology were presented without metaphysical baggage, is it possible that modernity would not make people as uncomfortable?

Technology is essentially a form of service. We work to make the world better. Our inventions can ease burdens, reduce poverty and suffering, and sometimes even bring new forms of beauty into the world. We can give people more options to act morally, because people with medicine, housing and agriculture can more easily afford to be kind than those who are sick, cold and starving.

But civility, human improvement, these are still choices. That’s why scientists and engineers should present technology in ways that don’t confound those choices.