Technology · October 19, 2022

Alex Hanna left Google to try to save AI’s future

“I am quitting because I’m tired,” Alex Hanna wrote on February 2, her last day on Google’s Ethical AI team. She felt that the company, and the tech industry as a whole, did little to promote diversity or mitigate the harms its products had caused to marginalized people. “In a word, tech has a whiteness problem,” she wrote in her post on Medium. “Google is not just a tech organization. Google is a white tech organization.”

Hanna did not take much of a break—she joined the Distributed AI Research Institute (DAIR) as the group’s second employee on February 3.

It was a move that capped a dramatic period in Hanna’s professional life. In late 2020, her manager, Timnit Gebru, had been fired from her position as the co-lead of the Ethical AI team after she wrote a paper questioning the ethics of large language models (including Google’s). A few months later, Hanna’s next manager, Meg Mitchell, was also shown the door. 

DAIR, which was founded by Gebru in late 2021 and is funded by various philanthropies, aims to challenge the existing understanding of AI through a community-focused, bottom-up approach to research. The group works remotely and includes teams in Berlin and South Africa.

“We wanted to find a different way of doing AI, one that doesn’t have the same institutional constraints as corporate and much of academic research,” says Hanna, who is the group’s director of research. While these sorts of investigations are slower, she says, “it allows for research from community members—different kinds of knowledge that is respected and compensated, and used toward community work.”

Less than a year in, DAIR is still sorting out its approach, Hanna says. But research is well underway. The institute has three full-time employees and five fellows—a mix of academics, activists, and practitioners who come in with their own research agendas but also help develop the institute’s programs. DAIR fellow Raesetje Sefala, for example, is using satellite imagery and computer vision to study neighborhood change in post-apartheid South Africa; her project analyzes the impact of desegregation and maps out low-income areas. Another DAIR fellow, Milagros Miceli, is working on a project examining the power asymmetries in outsourced data work. Many data laborers, who analyze and manage the vast amounts of data coming into tech companies, reside in the Global South and are typically paid a pittance.

For Hanna, DAIR feels like a natural fit. Her self-described “nontraditional pathway to tech” began with a PhD in sociology and work on labor justice. In graduate school, she used machine-learning tools to study how activists connected with one another during the 2011 revolution in Egypt, where her family is from. “People were saying [the revolution] happened on Facebook and Twitter, but you can’t just pull a movement out of thin air,” Hanna says. “I began interviewing activists and understanding what they were doing on the ground aside from online activity.”

DAIR is aiming for big, structural change by using research to shed light on issues that might not otherwise be explored and to disseminate knowledge that might not otherwise be valued. “In my Google resignation letter, I pointed out how tech organizations embody a lot of white supremacist values and practices,” Hanna says. “Unsettling that means interrogating what those perspectives are and navigating how to undo those organizational practices.” Those are values, she says, that DAIR champions.

About The Author

Anmol Irfan is a freelance journalist and founder of Perspective Magazine, based in Lahore, Pakistan.