
Artificial Intelligence is everywhere—curating what we watch, influencing hiring decisions, scanning faces at airports. But while these systems promise objectivity, their inner workings often reflect deep human flaws. One of the most troubling? Gender bias. AI is learning from data shaped by a history that hasn’t been kind to women—and it’s showing.
When Joy Buolamwini, a researcher at the MIT Media Lab, began testing facial recognition software, she made a shocking discovery: the AI could barely detect her face—until she wore a white mask. The system had been trained mostly on images of lighter-skinned men, meaning it struggled with women of color. Her research sparked a global conversation and forced tech giants like IBM, Microsoft, and Amazon to confront algorithmic bias.
But Buolamwini’s case isn’t isolated. Across the board, AI systems—especially those used in hiring, medical diagnostics, and predictive policing—have shown a tendency to disadvantage women. In 2018, Amazon scrapped an AI recruitment tool when it was found to downgrade résumés that included the word “women’s,” such as “women’s chess club.” The system had learned from historical hiring data, which reflected past (and present) gender bias.
These aren’t just bugs—they’re systemic issues baked into the data. If the training material reflects societal inequality, the AI will learn to reproduce it. And when algorithms are opaque, it's nearly impossible to hold anyone accountable. This makes the need for ethical oversight not just urgent but non-negotiable.
So, who’s stepping in? Fortunately, a growing number of women in tech and academia are leading the charge. From Timnit Gebru, co-founder of Black in AI, to Dr. Safiya Umoja Noble, author of Algorithms of Oppression, these women are confronting bias head-on. They're not just pointing out problems—they're rewriting the frameworks and advocating for transparent, fair systems.
And yet, they often face pushback. In late 2020, Timnit Gebru was forced out of Google after co-authoring a paper that raised concerns about bias and other risks in large language models. Her departure sparked global outrage and raised uncomfortable questions about how even companies that preach ethics handle internal dissent, especially when it comes from women of color.
Despite these setbacks, the movement is growing. Organizations such as the Algorithmic Justice League and the AI Now Institute are pushing for standards that would require AI systems to be regularly audited, openly tested, and held accountable. They advocate for including diverse voices, not just in user testing but in the rooms where these technologies are built.
As users, we can demand more too. Support companies that are transparent about their algorithms. Push for regulations that enforce ethical standards. Elevate the voices of the women doing this vital work.
Because AI shouldn’t just reflect who we’ve been. It should help build who we want to become. And that future must be free of the biases that have held women back for far too long.