AI OK? WTF! LOL

Biopolitical Times
[Image: robotic hands typing on a computer keyboard]

Artificial Intelligence (AI) is a hot topic at the moment. The World Economic Forum held a session on AI this year. The Future of Life Institute, whose backers include big names such as Stephen Hawking and Elon Musk, worries about AI. And huge companies like Intel and Oracle are working to incorporate AI technologies.

So naturally, magazines online and in print have features on AI. Fortunately or unfortunately, much of the talk they promote is hot air.

Not all of it: some forms of automation are described as “weak AI,” a category that can include industrial robots and even voice-recognition apps. There are economic and social issues connected with these technologies, as reporter Jeff Guo pointed out in a Washington Post article on March 30 under the title “We’re so unprepared for the robot apocalypse.” But these problems can in principle be fixed, if we have the social and political will to do so.

Other recent commentaries, however, present the possibility of “strong AI” as a potentially catastrophic technology that only the masters of the universe can imagine, and that only they can teach us either to avoid or embrace. Take, for instance, this startling interview at Vox on March 27:

Yuval Harari on why humans won’t dominate Earth in 300 years
… If you asked me in 50 years, it would be a difficult question, but 300 years, it’s a very easy question. In 300 years, Homo sapiens will not be the dominant life form on Earth, if we exist at all. … in 200 or 300 years, the beings that will dominate the Earth will be far more different from us than we are different from Neanderthals or from chimpanzees.

How would he know? He may be a great historian, but he’s only human. Contrariwise, in the March issue of Vanity Fair there was this:

Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse
by Maureen Dowd

Dowd is famous, or notorious, for her snark, and she gets this one right:

Guys who got rich writing code to solve banal problems like how to pay a stranger for stuff online now contemplate a vertiginous world where they are the creators of a new reality and perhaps a new species.

The Vanity Fair article is a useful extended overview of the discussions being held among our Silicon Valley overlords, who disagree strongly among themselves about the possible and/or desirable future of humanity. It features, among others, Larry Page of Google (an optimist), Jaron Lanier of Microsoft (a skeptic), Demis Hassabis of DeepMind (bought by Google in 2014), Peter Thiel, Steve Wozniak, and, of course, Ray Kurzweil.

Many of them don’t seem to think much about individual people (other than themselves and perhaps each other): Musk himself is quoted musing about how many hours a week of his attention a woman deserves. “Maybe ten hours? That’s kind of the minimum?”

PZ Myers was enraged by the article, as well as by the larger discussion about whether artificial intelligences will enslave or liberate humanity:

These intelligences don’t exist, and may not exist, and will definitely not exist in the form these smart guys are imagining. It is the grown-up, over-paid version of two children arguing over who would win in a fight, Darth Vader or Magneto? The Millennium Falcon or the Starship Enterprise? Jesus or Buddha?

He has a point. As a final example of AI commentary, almost three years ago The Onion hit the hammer on the thumb once again:

World’s Supercomputers Release Study Confirming They Are Not Powerful Enough

There are genuine issues to discuss, but sometimes the best way to handle outrageous fear-mongering is to laugh at it.


Image via Flickr