Can Feminism Survive AI Bias?
- Ryan Yin
- Apr 30
- 4 min read
Gabrielle Yoo
New York City, USA

Artificial intelligence, once a sci-fi abstraction, is now deeply embedded in the machinery of modern life. From hiring platforms and predictive policing algorithms to generative text models and automated healthcare systems, AI is no longer simply shaping the future — it is actively constructing the present. But as we increasingly surrender decisions to systems we barely understand, an urgent question arises: can feminism survive the rise of algorithmic power? Or more precisely, can it thrive in a new world coded by historical inequality and entrenched structural bias?
At first glance, AI offers the illusion of objectivity. Code, after all, is not supposed to discriminate. Yet, this assumption collapses under scrutiny. As scholars such as Safiya Umoja Noble (Algorithms of Oppression) and Ruha Benjamin (Race After Technology) have compellingly argued, artificial intelligence is not neutral. It is deeply political — not because it chooses to be, but because it reflects the worldview of those who design it. And those designers, overwhelmingly, are male, white, and socioeconomically privileged, operating within profit-driven tech ecosystems that consistently undervalue ethical concerns.
Feminism, in contrast, is built on the recognition that systems — political, social, economic — are never neutral. They are historical, shaped by power and inequality, often at the expense of women and gender-diverse people, particularly those at the intersections of race, class, and sexuality. When these systems are digitized without scrutiny, they do not disappear. They scale.
Consider the experimental hiring algorithm Amazon scrapped in 2018 after it was found to penalize résumés that included the word “women” — as in “women’s chess club” or “women’s college.” Or consider the well-documented fact that facial recognition technologies misidentify women, especially women of color, at far higher rates than they do white men, with real-world consequences in surveillance, policing, and immigration systems.
In the medical field, AI models trained predominantly on male-centric data have produced tools that underdiagnose heart disease in women — a chilling echo of longstanding gender biases in clinical research. Meanwhile, generative language models, including those powering popular chatbots, have been caught reinforcing misogynistic, transphobic, and racially charged stereotypes, because the internet data they are trained on is polluted with the same content. These are not merely technical glitches; they are expressions of structural inequality, embedded in code.
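The mechanism behind cases like the scrapped hiring tool is easy to see in miniature. The sketch below uses invented, deliberately skewed "historical" data (the résumé tokens and outcomes are hypothetical, not drawn from any real system): a naive scorer that learns per-token hire rates from past decisions ends up penalizing a token like "womens" — not because the token says anything about ability, but because the past decisions it learned from did.

```python
# A minimal sketch of how historical bias becomes model bias.
# The data below is hypothetical and deliberately skewed: qualified
# candidates whose resumes mention "womens" were rejected in the past.
from collections import defaultdict

history = [
    (["chess", "club", "python"], True),
    (["debate", "team", "python"], True),
    (["womens", "chess", "club", "python"], False),
    (["womens", "college", "python"], False),
    (["robotics", "club"], True),
    (["womens", "robotics", "club"], False),
]

def train(history):
    """Learn each token's historical hire rate -- a crude 'model'
    that simply memorizes past outcomes, bias included."""
    hired = defaultdict(int)
    seen = defaultdict(int)
    for tokens, outcome in history:
        for t in set(tokens):
            seen[t] += 1
            hired[t] += int(outcome)
    return {t: hired[t] / seen[t] for t in seen}

def score(weights, tokens):
    """Average the learned token rates over a new resume,
    so the historical skew scales to candidates never seen before."""
    known = [weights[t] for t in tokens if t in weights]
    return sum(known) / len(known) if known else 0.5

weights = train(history)
resume_plain = ["chess", "club", "python"]
resume_womens = ["womens", "chess", "club", "python"]
print(score(weights, resume_plain))   # 0.5
print(score(weights, resume_womens))  # 0.375 -- docked for one token
```

Nothing in the code "chooses" to discriminate; the penalty falls out of the training data. Real systems are vastly more complex, but the core dynamic — digitizing past decisions and scaling them forward — is the same one the scholarship above describes.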
Feminist scholars have long warned that technologies are not created in a vacuum. Donna Haraway’s “A Cyborg Manifesto” argued that the boundaries between human and machine are increasingly blurred — and that we must use this hybridity to challenge dominant power structures, not replicate them. More recently, activists like Joy Buolamwini, founder of the Algorithmic Justice League, have revealed the racial and gendered contours of algorithmic bias through empirical research and public advocacy.
Feminism, when understood not as a monolithic ideology but as an evolving, intersectional critique of power, is uniquely equipped to interrogate AI. It can ask the questions that developers rarely do: Who is this technology for? Whose values are encoded within it? Who is left out, misrepresented, or actively harmed?
Yet, despite its critical potential, feminism often finds itself sidelined in tech policy conversations. “Ethics” in AI is frequently reduced to corporate window-dressing — advisory boards with no real authority, guidelines with no enforcement, diversity initiatives without structural change. In these spaces, feminist insight is treated as an accessory, not a necessity.
If feminism is to survive — and shape — the AI revolution, it must be more than reactive. It must be radically constructive. This means embedding feminist values into the earliest stages of technological development: demanding transparency in data sets, centering marginalized voices in design, and building regulatory frameworks that hold tech companies accountable for algorithmic harm.
Moreover, it means moving beyond a white, Western feminist lens. In many parts of the Global South, AI technologies are deployed with even less oversight, often through partnerships with authoritarian regimes or underfunded institutions. Feminist movements worldwide must connect across borders to ensure that the digital future does not become another mechanism of neocolonial control.
It also requires cultural work — the reclaiming of narratives. As AI systems increasingly shape public discourse through content moderation, recommendation algorithms, and synthetic media, feminist storytelling must be amplified rather than suppressed. The voices of those who critique, reimagine, and resist must not be drowned in the algorithmic feed.
Feminism has always adapted — from the ballot box to the boardroom, from consciousness-raising to courtrooms. But AI presents a uniquely complex challenge. It is fast-moving, opaque, and often intangible. It reshapes society not through law or policy, but through design choices, interface patterns, and training data. To survive — and more importantly, to lead — feminism must learn to “speak machine,” to decode the systems that increasingly govern our lives, and to embed its ethics at the root, not the periphery.
The question is not simply whether feminism can survive AI bias. It is whether our technological future can survive without feminism. In a world where machines make decisions once reserved for humans, justice must be more than a bug fix. It must be the blueprint.