
[Image: the EU flags. Photo by Guillaume Périgois on Unsplash]

On 11 June, Blake Lemoine, a software engineer at Google, published transcripts of interviews he had conducted with an AI “chatbot”. The documents, the engineer claimed, showed the software had developed sentience. Google disagreed, however, and quickly suspended Lemoine for violating the company’s confidentiality policy. 

His claim that the company’s Lamda chatbot had gained human-like intelligence is widely disputed by experts in the field. Nevertheless, Google’s decision to punish one of its employees for publicly raising concerns about an AI product has seen fears about the Californian tech firm and its competitors resurface. The Silicon Valley giants, critics say, are failing to be sufficiently transparent about how they are developing a technology that promises to transform not only their own sector but many others over the coming decades.

Across the Atlantic, policymakers in Brussels and other European capitals are increasingly concerned by the rise of US-controlled AI. They are developing new legislation, the AI Act, to protect consumers and societies from harmful artificial intelligence, boost innovation, and create a set of common standards for the...