On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.
That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.
Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence, particularly in a branch of algorithms called deep neural networks, have put AI-driven products front and center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at...