Timnit Gebru didn't set out to work in AI. At Stanford, she studied electrical engineering, earning both a bachelor's and a master's in the field. She then became interested in image analysis and completed her Ph.D. in computer vision. When she moved over to AI, though, it was immediately clear to her that something was very wrong.

“There were no Black people — literally no Black people,” says Gebru, who was born and raised in Ethiopia. “I would go to academic conferences in AI, and I would see four or five Black people out of five, six, seven thousand people internationally.… I saw who was building the AI systems and their attitudes and their points of view. I saw what they were being used for, and I was like, ‘Oh, my God, we have a problem.’”

When Gebru got to Google, she co-led the Ethical AI group, a part of the company’s Responsible AI initiative, which looked at the social implications of artificial intelligence — including “generative” AI systems, which appear to learn on their own and create new content based...