Perils of Artificial Intelligence

Posted by Pete Shanks January 22, 2015
Biopolitical Times

The Future of Life Institute launched an open letter last week, calling for "research on how to make AI [Artificial Intelligence] systems robust and beneficial." This follows warnings from a bevy of experts, including physicist Stephen Hawking and colleagues (last May and December) and technology entrepreneur Elon Musk, who warned in October:

I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. … I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish.

Coming from a high-tech entrepreneur like Musk, dire language like this deserved — and received — attention (not all of it supportive). Musk not only signed the open letter but immediately donated $10 million to the Future of Life Institute. The Institute was founded in March 2014 as a volunteer-run organization with some very high-profile advisors: two Nobel prizewinners (Saul Perlmutter and Frank Wilczek), some rich entrepreneurs (including Musk), a couple of celebrities (Morgan Freeman and Alan Alda) and a bunch of top-notch academics (including Hawking, George Church, Stuart Russell, Nick Bostrom and Francesca Rossi).

The letter has attracted thousands of signatories. Over 5,000 are listed on the website, including many notable AI researchers and other academics. There are over 50 from Google, 20 connected with Oxford University, 15 with Harvard, 15 with Berkeley, 13 with Stanford — you get the picture — and several associated with Singularity University (but not Ray Kurzweil, popularizer of the notion that "the singularity" — the moment when AI surpasses human intelligence — is near, and now a director of engineering at Google). You can still join them.

The Institute also issued a 12-page document on research priorities [pdf], which does a fair job of listing the issues but makes no pretense of offering solutions. It notes, for example, that:

Our ability to take full advantage of the synergy between AI and big data will depend in part on our ability to manage and preserve privacy.

At least privacy gets a mention, as do labor-force disruptions, legal wrinkles, autonomous weapons and a host of other potential problems. But while theorists are discussing issues in the abstract, companies are aggressively working to "monetize" information they can gather by analyzing our actions and reactions — not just our purchasing decisions, genomes, health records, and everyday biometrics, but even our emotional responses, as described in an article in the current New Yorker titled "We Know How You Feel."

Big Data is central to all this. And of course Big Money is involved. This is, after all, a system in which a smartphone app can be valued at a billion dollars. An article in last week's Wired noted that:

Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

Here's a small-scale example. Amazon is offering an always-connected device called Echo ($199; only $99 for Prime members):

Echo uses on-device keyword spotting to detect the wake word. When Echo detects the wake word, it lights up and streams audio to the cloud, where we leverage the power of Amazon Web Services to recognize and respond to your request. … Echo's brain is in the cloud, running on Amazon Web Services so it continually learns and adds more functionality over time. The more you use Echo, the more it adapts to your speech patterns, vocabulary, and personal preferences.
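
The privacy-relevant detail in that description is the architecture: keyword spotting runs locally, and audio only leaves the device after the wake word is heard. Here is a minimal Python sketch of that general pattern; every name in it (WAKE_WORD, send_to_cloud, the None end-of-utterance marker) is hypothetical and merely stands in for whatever Echo actually does.

    # Sketch of local wake-word gating: nothing is uploaded until the
    # wake word fires. All names here are hypothetical, not Amazon's API.
    from collections import deque

    WAKE_WORD = "alexa"      # hypothetical wake word
    PREROLL_FRAMES = 50      # recent audio kept on the device

    def send_to_cloud(frames):
        """Placeholder for the cloud round-trip (hypothetical)."""
        print(f"uploading {len(frames)} audio frames for recognition")

    def listen(frame_stream, wake_word_detected):
        """Consume audio frames; upload nothing until the wake word fires."""
        recent = deque(maxlen=PREROLL_FRAMES)   # local-only ring buffer
        request = None                          # frames captured after the wake word
        for frame in frame_stream:
            if request is None:
                recent.append(frame)            # stays on the device
                if wake_word_detected(frame):   # local keyword spotting
                    request = list(recent)      # start capturing, with pre-roll
            elif frame is None:                 # hypothetical end-of-utterance marker
                send_to_cloud(request)
                request = None
                recent.clear()
            else:
                request.append(frame)

    # Example with stand-in string "frames" and a trivial detector:
    listen(["hum", "alexa", "weather", "today", None],
           lambda f: f == WAKE_WORD)

Note that even in this toy version the pre-roll buffer means a snippet of audio from just before the wake word is uploaded too, exactly the sort of design detail on which such privacy questions turn.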

So, Echo can play your music, tell you the weather forecast, help you write shopping lists … and of course will keep updating itself over Wi-Fi, for your benefit. What could go wrong? Plenty, suggests MIT Technology Review, in an article titled:

An Internet of Treacherous Things

It is not at all clear that the general public is on board with an optimistic view of the technological future, even if some of the elite are. The indicators are mixed:

  • Google Glass has essentially gone on hiatus, largely because most people find it creepy. The technology still has its defenders, and it's not dead yet.
  • The Washington Post published a lament from a young mother that her kids are buried in their phones rather than enjoying the sunset. That drew pushback arguing that such commentary "isn't just unsettling, it's fear-provoking."
  • The global leaders assembling at Davos are set to discuss the risks to humanity of, inter alia, "synthetic biology, nanotechnology and artificial intelligence." They might be well advised to think about concentrations of power and wealth, in this context as well as in the more general economy — and not just about how to get more of them.

We are being sold technology as if it were an unvarnished good. (Bill Joy and Jaron Lanier, to name but two distinguished technologists, have disagreed.) But the net result may be, to adapt C.S. Lewis, that what we look on as our powerful high-tech wonders turn out to be the instruments of power exercised by a few people over the rest of us.
