
Google AI ethicist says she was fired after challenging company on diversity

The Google logo on the entrance to company offices in Zurich, Switzerland. Timnit Gebru, a co-leader of Google’s Ethical Artificial Intelligence team, says she was fired after a conflict over an academic paper she co-authored. (Walter Bieri / Associated Press)

Timnit Gebru, a co-leader of the Ethical Artificial Intelligence team at Google, said she was fired for sending an email that management deemed “inconsistent with the expectations of a Google manager.”

The email and the firing were the culmination of about a week of wrangling over the company’s request that Gebru retract an AI ethics paper she co-wrote with six others, including four Google employees, that was submitted for consideration for an industry conference next year, Gebru said in an interview Thursday. If she wouldn’t retract the paper, Google at least wanted the names of the Google employees removed.

Gebru asked Megan Kacholia, Google Research vice president, for an explanation and told her that without more discussion on the paper and the way it was handled she would plan to resign after a transition period.


Meanwhile, Gebru had chimed in on an email group for company researchers called Google Brain Women and Allies, commenting on a report others were working on about how few women had been hired by Google during the pandemic. Gebru said that in her experience, such documentation was unlikely to be effective because no one at Google would be held accountable. She referred to her experience with the AI paper submitted for the conference, and linked to the report.

The next day, Gebru said she was fired by email, with a message from Jeff Dean, who heads Google’s AI division, saying that Google couldn’t meet her demands and respected her decision to leave Google as a result. The email went on to say that “certain aspects of the email you sent last night to non-management employees in the brain group reflect behavior that is inconsistent with the expectations of a Google manager,” according to tweets Gebru posted.

“It’s the most fundamental silencing,” Gebru said in the interview, referring to Google’s handling of her paper. “You can’t even have your scientific voice.”


Representatives for Mountain View, Calif.-based Google didn’t reply to multiple requests for comment.


The research paper in question deals with possible ethical issues of large language models, a field of research being pursued by OpenAI, Google and others. Gebru said she didn’t know why Google had concerns about the paper, which she said was approved by her manager and submitted to others at Google for comment.

“We are a team called Ethical AI,” she said, “of course we are going to be writing about problems in AI.”


Google requires its researchers to obtain prior approval for all publications, Gebru said, and the company told her this paper had not followed the proper procedure. The paper was intended for submission to the ACM Conference on Fairness, Accountability, and Transparency in March, a conference Gebru co-founded.

Gebru, an alumna of the Stanford Artificial Intelligence Laboratory, is one of the leading voices on the ethical use of AI. She is well-known for her work on a landmark 2018 study that showed facial recognition software misidentified dark-skinned women as often as 35% of the time, whereas the technology worked with near-perfect accuracy on white men.

She also has been an outspoken critic of the lack of diversity and unequal treatment of Black workers at tech companies, particularly at Alphabet Inc.’s Google, and said she believed her dismissal was meant to send a message to the rest of Google’s employees not to speak up.

Gebru disclosed the firing Wednesday night in a series of tweets, which were met with support from some of her Google co-workers and others in the field.

Gebru is “the reason many next generation engineers, data scientists and more are inspired to work in tech,” wrote Rumman Chowdhury, who formerly served as the head of Responsible AI at Accenture Applied Intelligence and now runs a company she founded called Parity.

Google, along with other U.S. tech giants including Amazon.com Inc. and Facebook Inc., has been under fire from the government for claims of bias and discrimination and has been questioned about its practices at several committee hearings in Washington.


A year ago, Google fired four employees for what it said were violations of data-security policies. The dismissals highlighted already escalating tensions between management and activist workers at a company once revered for its open corporate culture. Gebru took to Twitter at the time to support those who lost their jobs.

For the web search giant, Gebru’s departure comes as the company faces a complaint from the National Labor Relations Board alleging unlawful surveillance, interrogation or suspension of workers.

Earlier this week, Gebru inquired on Twitter whether anyone was working on regulations to protect ethical AI researchers, similar to whistleblower protections. “With the amount of censorship and intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?” she wrote.

Gebru was a rare voice of public criticism from inside the company. In August, Gebru told Bloomberg News that Black Google employees who spoke out were criticized even as the company held them up as examples of its commitment to diversity. She recounted how co-workers and managers tried to police her tone, make excuses for harassing or racist behavior, or ignore her concerns.

When Google was considering having Gebru manage another employee, she said in an August interview, her outspokenness on diversity issues was held against her, and concerns were raised about whether she could manage others if she was so unhappy. “People don’t know the extent of the issues that exist because you can’t talk about them, and the moment you do, you’re the problem,” she said at the time.

Bloomberg writers Helene Fouquet and Gerrit De Vynck contributed to this report.
