With the continuous advancement of
science and technology, people have become increasingly fascinated by its
impressive features and its transformative role in everyday life. From
smartphones and AI tools to advanced medical systems and automated public
services, technology has become integral to modern living. Across corporate
sectors and government agencies alike, it plays a vital role in daily
operations. Its efficiency, effectiveness, and time-saving capabilities have
made technology not only indispensable but also ever more in demand in
today’s fast-paced
world. However,
over the
past few
decades, scholars
from cultural
studies, humanities and social sciences, and science and technology
studies have increasingly begun to interrogate the assumed neutrality of
technology. These scholars, through their writings and public discussion forums, have raised concerns that technology is not what it appears to be from the outside, arguing that it conceals a politics beneath the guise of efficiency.
Over time, a group of scholars from
various fields has helped evolve the critical debate about the supposed
neutrality of technology. Their works have shown that technology is not isolated
from the world; rather, it is deeply embedded in existing social, political, and
cultural structures. Early thinkers and their works, like Langdon Winner’s
The Whale and the Reactor (1986) and Donna Haraway’s “A Cyborg
Manifesto” (1985), advanced the discourse by revealing how technologies are
shaped by political agendas and dominant ideologies. Later, scholars such as
Sandra Harding, Lucy Suchman, and Wendy Hui Kyong Chun contributed to this
critique by analyzing how gender, race, and power influence scientific knowledge
and design processes.
Recently, a
new generation of
scholars, including Safiya
Umoja Noble,
Ruha Benjamin, Catherine D’Ignazio, Lauren F. Klein, Virginia Eubanks,
Sarah Myers West, Meredith Broussard, and Simone Browne, has shifted its focus
to the digital realm. Their work examines how data-driven technologies,
algorithms, and AI systems often reinforce structural inequalities while
presenting themselves as objective and neutral. These works collectively reveal
how technologies often reinforce existing systems of domination under the guise
of neutrality, urging
a reconsideration
of how
technological systems are
designed, who
they benefit, and whose voices
they silence.
To further discuss the ongoing discourse
about the myth of technological neutrality, this essay focuses on three
contemporary works: Race After Technology
(2019) by Ruha Benjamin, Data Feminism
(2020) by Catherine D’Ignazio and Lauren F. Klein, and
Algorithms of Oppression (2018) by
Safiya Umoja Noble. These works have been selected for their contributions
to revealing
the structural
inequalities embedded in
digital systems
and for
offering interdisciplinary
frameworks that
bridge science
and technology
studies with
critical race
theory, feminism, and media studies. Through detailed analysis of these
texts, the essay examines how each author interrogates the ideological, racial, and gendered dimensions of algorithmic design and data infrastructure. Rather than treating technology as an impartial or universal tool, these scholars foreground how power relations shape its development and
deployment. Their
collective work
not only
challenges the
dominant narratives of innovation
and objectivity but also insists on the need to rethink technological
systems through the lenses of justice, equity, and accountability.
Race After Technology: Abolitionist Tools for the New Jim Code
by Ruha Benjamin is a powerful and accessible critique of how
emerging technologies, often assumed to be neutral, objective,
and progressive,
can reproduce
and reinforce
existing racial
inequalities in society. The book consists
of five chapters, each focusing on a different way technology can perpetuate
systemic racism,
even while
claiming to
be fair or impartial.
In the introduction, Benjamin coins the term
“New Jim
Code,” which
refers to
“the employment of new
technologies that reflect and reproduce existing inequities but are promoted
and perceived as more objective or progressive than the discriminatory systems
of a previous era” (5). Chapter 1, “Engineered Inequity,” examines how
automated systems such as predictive policing and algorithmic sentencing
can worsen
racial bias
in law
enforcement. Chapter 2,
“Default Discrimination,” shows
how discriminatory assumptions are
built into
design defaults,
using examples
from facial recognition
technologies that fail to detect darker skin tones accurately. In Chapter 3,
“Technological Benevolence,” Benjamin critiques “feel-good” technologies like
apps designed to address
social problems,
arguing that
they often
ignore structural causes and
reinforce existing
hierarchies. Chapter 4, “Coded Exposure,” focuses on surveillance systems and
their disproportionate targeting of Black and Brown communities, highlighting
examples such as biometric tracking and databases. Chapter 5, “The New Jim
Code,” ties together all the arguments and calls for abolitionist approaches to
technology, tools, and thinking that challenge the root causes of injustice
rather than simply reforming biased systems. Throughout the book, Benjamin uses
a range of methods, including case studies, critical theory, media analysis, and
historical parallels. Her conclusion emphasizes that rather than accepting
technological systems as inevitable or neutral,
we must
ask who
designs them, for what
purpose, and
with what
impact. By emphasizing
often invisible
forces shaping
technology, Benjamin
calls on
readers, designers, and
policymakers to resist what she calls the seduction of coded fairness and
instead imagine abolitionist alternatives that prioritize justice and equity
over convenience and profit. In short, Benjamin presents a thorough, accessible, and urgent critique of how racism operates through technological systems, offering both a framework for understanding these issues and a call to action for building better futures.
Data Feminism by Catherine D’Ignazio and Lauren F. Klein is a collaborative and deeply thoughtful work that challenges the common understanding of data science as neutral and objective. The authors argue that data science is deeply shaped by unequal power structures and that these imbalances must be addressed through a feminist perspective. Structured in seven major chapters, along with a separate introduction and conclusion, the book
draws on intersectional feminism as a critical framework to reveal how data
science reinforces existing forms of oppression, including racism, patriarchy,
and colonialism. Central to their argument is the idea that feminist thinking
can help reimagine data science by shifting who participates in data work, how
power operates within it, and whose voices are heard or silenced. Across these chapters, the authors discuss key principles such as challenging power, embracing pluralism, and rethinking binaries and hierarchies. They bring diverse case studies and theoretical grounding to
demonstrate these
issues. For
example, they
analyze the
Counted project
by The Guardian, which
documented police violence against Black people in the U.S., showing how
grassroots data activism can fill institutional gaps. They highlight Data for
Black Lives and the Feminicide Database in Mexico as examples of how
community-led data work empowers marginalized groups. The authors also critique
mainstream data practices, such as those used in predictive policing and facial
recognition technologies, which often reinforce systemic bias. They emphasize
the importance of restoring attention to method and understanding the social histories behind datasets. The book employs feminist standpoint
theory, showing that knowledge is situated and partial, and that those most
affected by injustice have valuable insight into systems of
oppression. Methods used include storytelling, participatory action
research, and collaborative data projects that value care, emotion, and the
invisible labor often excluded from traditional science. The authors also stress
the importance of making labor visible by recognizing the contributions of
people who clean data, maintain systems, or do unpaid emotional work in data
projects. They advocate for rethinking the binary logics of traditional data
science by valuing plurality and ambiguity instead of false neutrality. They
call for a shift from the myth of objectivity to a model of data justice rooted
in equity and accountability. Their conclusion urges readers
to transform data science
into a
tool for
liberation rather than
oppression. They insist
that feminist values must
guide every
stage of
data work,
from collection
to communication,
and that the future of data
must be shaped by care, inclusivity, and shared power. The book ultimately works
as both
a critique
of dominant
data practices
and a
hopeful manifesto
for change,
showing that data science, when reoriented around justice, can support
more equitable futures.
Algorithms of Oppression: How Search Engines Reinforce Racism
by Safiya Umoja Noble offers a convincing critique of the
widely assumed belief in the objectivity and neutrality of algorithmic systems.
Primarily focusing on Google’s search engine, the author reveals how these technologies systematically reproduce and reinforce racial and gender biases even while appearing neutral and unbiased.
Drawing on
Black feminist
thought, critical
race theory,
and science and technology studies, Noble argues that algorithmic mechanisms are not simply technical constructs but are shaped by the ideologies and motives of their designers and can serve hidden agendas or politics. Her central claim is that search engines reflect and reinforce dominant cultural narratives in ways that harm marginalized communities. Structured in six chapters,
along with a separate introduction and conclusion, Noble
presents cases showing how Google searches for terms such as “Black girls” or “Latina girls” return hypersexualized and derogatory content. Through in-depth analysis,
she exposes
how such results reflect the commodification of racialized bodies
and contribute to the systemic erasure of positive and accurate representations
of women of color. She also critiques the corporate logic that governs digital
platforms, arguing that prioritization of profit over ethical considerations
results in
algorithmic discrimination. Using a
mixed methodology
consisting of qualitative content analysis, case studies, interviews, and
critical discourse analysis, Noble examines various episodes where Google’s
algorithmic recommendations had real-world consequences, including examples of
misinformation and harm during events like the 2015 Charleston church shooting.
The book brings striking examples, such as Google search results for “Black
girls” and autocomplete suggestions linked to racial slurs. She presents these
not as isolated incidents, but as systemic failures rooted in data capitalism
and the profit-driven logic that underlies algorithmic design. Her critical
perspectives build on scholars like Patricia Hill Collins, Kimberlé Crenshaw,
and Lisa Nakamura, integrating sociotechnical systems analysis with a deep
commitment to social justice. She explores how traditional institutions like
libraries once served as curators of knowledge, contrasting them with today’s
algorithm-driven platforms that lack accountability, editorial responsibility,
or public oversight. Noble also engages with the role of advertising
revenue and keyword auctions in shaping search outputs, showing how companies
like Google monetize stereotypes through their algorithms. By revealing how
commercial imperatives override democratic values, Noble makes a case for
stronger public policy, regulatory frameworks, and alternative technological
infrastructures that prioritize
equity and human rights. She concludes that in a world increasingly governed by
opaque technologies, critical
information literacy
and algorithmic
accountability are essential
to resist the
reproduction of systemic inequalities. Her work serves as both a
diagnostic tool and a call to action, urging scholars, technologists, and
policymakers to interrogate the ethical implications of algorithmic
decision-making and to demand transparency and justice in digital systems. In short, Noble's book repositions algorithmic design as a political and
cultural act, underscoring the urgent
need to
understand how technical
systems can
entrench existing
social hierarchies
and contribute to further marginalization of already oppressed groups.
Each of these three books,
Algorithms of Oppression by Safiya
Umoja Noble, Data Feminism by
Catherine D’Ignazio and Lauren F. Klein, and
Race After Technology by Ruha
Benjamin, provides a strong and compelling critique of the systems of knowledge,
politics, and society that shape contemporary digital technologies. Although
these books differ in method, scope,
and ideological emphasis,
they all
challenge the common
belief that
technology is
neutral, progressive, and helpful to everyone. Taken together, these three books not only share the conviction that algorithms can cause harm but also collectively advocate for a radical reframing of technology as a site of struggle over meaning, power, and justice. To fully grasp what the authors accomplish in these books, it is important to examine their methodological foundations and the degree to which they imagine viable alternatives to the status quo.
A
strong aspect of all three
books is their
core critique
of the myth of algorithmic
objectivity or
neutrality. Noble’s Algorithms of Oppression stands out in this regard. Drawing from
Black feminist thought and information science, Noble argues that “algorithmic oppression is not just a glitch in the system but, rather, is fundamental to the operating system of the web” (9). According to her, the impact on users is severe.
Further, she underscores the convergence of racial capitalism and information
infrastructures by centering corporate
logics and
capitalist motivations. Her critique
is most
effective when
she historicizes digital media
within a longer lineage of oppressive knowledge systems, such as library
classification schemes and academic canons that marginalized Black and Brown
voices.
Similarly,
D’Ignazio and
Klein’s
Data Feminism also
makes a
parallel intervention, though it does so
through a more explicitly epistemological approach, focusing on how knowledge is
shaped and whose voices are centered. The core of their argument is that data science is never neutral, a claim supported by
a thoughtful use of
feminist standpoint
theory, especially the ideas of Sandra Harding and Donna Haraway.
Haraway’s idea of 'situated knowledges', which challenges the illusion of
neutral, all-knowing objectivity, is a key influence in
this work.
The authors bring this idea to life by calling for a kind of data science that pays attention to context and values people’s lived experiences, emotions, and community knowledge.
In so doing, they extend Haraway’s challenge to masculinist epistemologies into
the realm of algorithmic design. Ruha Benjamin’s Race After Technology likewise challenges
the liberal
idea of neutrality by
introducing the
term 'New
Jim Code'—a
concept that combines Michelle Alexander’s 'New Jim Crow' with an analysis of how algorithms extend social control. Benjamin argues that coded inequality is easier to present as progress
because it is hidden behind complex technology and the appearance of good
intentions. Her analysis of facial recognition systems, predictive policing, and
healthcare algorithms demonstrates that
racial harm is often masked by the rhetoric of efficiency and fairness.
In this way, all three authors show that digital systems are not
neutral. They work to reveal how these systems reflect certain beliefs and power
structures, and how they shape what we think of as truth or knowledge.
While
the books
share similar
arguments about the
politics behind
the technology, they use different methods, and these differences help bring out
new and valuable ideas. Noble employs
a qualitative, critical case
study approach
grounded in
media and
information studies. Her
method involves close readings of search engine outputs contextualized within
larger sociopolitical structures. This allows
her to
illuminate how platforms
like Google reproduce dominant ideologies under the guise of algorithmic
curation. However, one limitation of Noble’s approach is its
relatively narrow empirical
focus; by
concentrating heavily
on search
engines, she sometimes
underplays how other algorithmic systems (e.g., social media algorithms,
biometric data processing) function in different modalities of harm. In
contrast, Data Feminism adopts a
broader methodological toolkit that is both interdisciplinary and
praxis-oriented. D’Ignazio and Klein blend feminist theory, participatory
design, and data visualization with a commitment to community-based knowledge
production. Their use of "design justice" frameworks, as explained by Sasha
Costanza-Chock, makes their critique more practical, especially in areas like
civic tech and public data projects. Costanza-Chock argues that design justice
“explicitly rethinks design processes to center marginalized communities” and
challenges the assumption that technology is inherently
neutral (6).
This perspective makes
Data
Feminism more
useful for
real-world change, but the book is at times overly optimistic. It suggests that small reforms within current data science systems might be enough to fix deeper problems. This view can understate how difficult it is to change the system at its core, a challenge the authors acknowledge but do not fully address. Benjamin’s approach, by contrast, is more synthetic. She combines historical
analysis, critical race theory, and
ethnographic observations to show how "technological benevolence" often hides
harmful effects that are deeply rooted in race. Benjamin’s greatest strength is her clear and creative way of explaining complex ideas.
Terms
like "the New Jim
Code," "techno-benevolence,"
and "discriminatory
design" help her explore the
influence of algorithms without relying too much on technical language. Her
abolitionist perspective sets her apart from the others. While
Noble and
D’Ignazio
and
Klein
push for
reform and
accountability, Benjamin
calls for dismantling carceral systems, both digital and non-digital, and building new, liberatory alternatives.
The
books collectively challenge the
notion that
technological inequities can
be resolved simply by refining
algorithms or diversifying design teams. Ruha Benjamin is particularly incisive
in critiquing this belief, arguing that placing marginalized individuals within
fundamentally racist systems does little to address their underlying structures.
As she notes, “many diversity initiatives offer little more than cosmetic
change,” concealing systemic injustices rather than dismantling them (67).
Drawing on Simone Browne’s Dark Matters, Benjamin stresses how technologies of surveillance
were not just deployed in racist ways but were conceived with anti-Blackness at
their core, from slave patrols to modern predictive policing. This historical
continuity reveals that racism is not an accidental byproduct of technological
systems but a foundational element. Accordingly, Benjamin’s abolitionist
framework rejects
reformist strategies
such as
diversity quotas
or minor
algorithmic tweaks. Instead,
she calls for a radical reimagining of
technology, one that confronts and uproots its embedded racial
hierarchies rather than merely diversifying its operation.
D’Ignazio and Klein take a more ambivalent position. While they push to challenge traditional ways of doing data science, they also engage deeply with institutional actors and academic communities. This reveals a tension between criticizing the system and remaining part of it: can unfair tools be changed without also changing the unfair system that produced them? D’Ignazio and Klein see
“data feminism”
as both
a way
to question
power and
a method for
change. They
believe in slow, step-by-step
progress and working together with others. While this approach is practical and
realistic, it may not go as far as the bold changes that scholars like Ruha
Benjamin and Simone Browne call for.
On
the other
hand, Safiya
Umoja Noble
focuses her
critique on
the corporate
logics driving what she calls the “algorithmic oppression” of
marginalized groups. Drawing on Shoshana
Zuboff’s concept of “surveillance
capitalism,” Noble
argues that search engines,
especially Google,
do not operate as
neutral tools
but as
profit-driven systems
that reinforce existing
racial and
gender hierarchies.
She reveals
how searches
for terms
like “Black
girls” produce dehumanizing and
hypersexualized results, stating, “Algorithms are not objective, and they are
not just technical—they are loaded with power” (Noble 5). She exposes how
digital platforms commodify identity, turning women of color into clickable
content while masking systemic bias behind claims of algorithmic neutrality.
Although she proposes public policy interventions
and greater
oversight as
possible responses, her focus
on institutional
solutions may not go far
enough. As she notes, “Corporate-controlled information platforms are shaping
knowledge in ways that are neither democratic nor accountable” (27). While such
reforms are necessary, they risk
overlooking the deeper
political and
social structures that enable
algorithmic harm. Compared to the more radical calls for abolition and
systemic redesign advanced by thinkers
like Ruha
Benjamin or
even the
critical interventions by D’Ignazio
and Klein,
Noble’s solutions may appear cautious or limited in scope.
Although they take different approaches,
Noble, D’Ignazio and Klein, and Benjamin all seek to envision
what fair
and just
technological systems could
look like.
Noble advocates
for the creation of
public-interest platforms that serve democratic values instead of corporate
profit, arguing that technology should be governed by principles of equity and
accountability.
D’Ignazio and Klein propose a framework
of data feminism grounded in participation, transparency, and community control,
aiming to shift power within data science toward those most
affected by
its outcomes. Meanwhile,
Benjamin pushes
beyond reformist
solutions, arguing that
true justice
requires an
abolitionist approach—one that dismantles
oppressive technological systems entirely and nurtures new forms of social
life based on care, collective responsibility, and mutual aid. Together, their
visions offer overlapping yet distinct pathways toward reimagining
technology as a
tool for
justice rather than
oppression. However,
while each
author offers powerful ethical and conceptual frameworks, they stop short
of fully theorizing the material conditions required for transformation. What
types of labor, institutions, and global political movements are needed to
sustain such justice-driven tech practices? How do these frameworks respond to
global asymmetries, especially those affecting the Global South? These
unresolved questions suggest the need for broader interdisciplinary engagement
across political economy, development studies, and global STS.
Each of these books makes a significant contribution to a range of interdisciplinary academic fields, including Science and Technology Studies (STS), Feminist Theory, Critical Race Studies, and Critical Data Studies. Despite their distinct approaches, they together stress a shared belief: digital technologies are not neutral; they are deeply embedded sociotechnical systems shaped by human intentions, institutional settings, and social and historical factors. Their arguments challenge the myth of technological neutrality and objectivity, drawing attention to how systems of power and oppression are built into the very codes and structures of digital tools. To fully understand the significance of their contributions, one must position these three books
within the larger academic conversations they interact with, conversations they
not only engage but also challenge and broaden.
These discussions increasingly
emphasize that
technology must
be viewed
not just
as a technical tool but as a
political and ethical project shaped by social values, power dynamics,
and historical contexts. In one way or another, the three books build on the STS tradition to argue that technology is socially constructed and carries existing social biases, reinforcing discrimination through digital tools. As Langdon Winner, a political theorist concerned with the social and political dimensions of modern technology, argues, technologies are not neutral tools but political artifacts that embed and reinforce power structures: “technical things have political qualities… they can embody specific forms of power and authority” (121). These critical frameworks align strongly
with Noble’s Algorithms of Oppression, which critiques search engines as racialized tools that reproduce discrimination. Noble seems
to expand
on Winner’s
argument, showing how digital infrastructures “encode and reinforce
dominant ideologies” (Noble 85), particularly through the political economy of
algorithms. By showing how technologies perpetuate existing power structures,
these works emphasize STS’s main concern that technological development is never
neutral, but is always shaped by, and entangled with, preexisting systems of
power and oppression.
Building
on the STS tradition
that views
technology as socially
constructed, Ruha Benjamin extends this perspective by focusing on its
intersection with racial justice. She introduces the concept of the “New Jim
Code” to illustrate how modern technologies do not merely
reflect existing
social biases
but actively
reproduce and
deepen racial
inequalities through digital systems. To critically analyze this phenomenon,
Benjamin combines insights from STS and Critical Race Theory in what she calls
“race critical code studies.” This interdisciplinary framework exposes how
racism influences both access to technology and the underlying logics
of its design and implementation. As Benjamin explains, this approach
enables us to “open the Black box of coded inequity,” adapting the STS metaphor of the “black box” to reveal the often-invisible mechanisms
through which race and power are encoded into technological systems (36).
This aligns with Simone Browne’s account of surveillance technologies, which have long been used to define, regulate, and police Black life. Browne
writes that “surveillance
is nothing new to Black folks. It is the fact of antiblackness,” linking
today’s algorithmic monitoring to historical practices like slave patrols,
biometric tracking, and stop-and-frisk policing (Browne 10). This shows how Benjamin actively contributes to and expands the ongoing dialogue between STS, Critical Race
Theory, and other social justice frameworks concerned with the politics of
technological design and use.
Data Feminism, in particular, draws extensively
from feminist theory to question and reframe
dominant epistemologies in data
science. Catherine
D’Ignazio and
Lauren Klein
anchor their work in feminist standpoint theory, drawing on scholars like Sandra Harding and Donna Haraway. Harding’s concept of “strong objectivity” is central to their argument.
She describes it as a methodological
approach that
“draws on
feminist standpoint epistemology to
provide a
kind of logic of discovery for maximizing our ability to block 'might
makes right' in the sciences” (Harding 331). This concept emphasizes that all
knowledge is partial and that marginalized perspectives
are crucial
for more
rigorous and
equitable knowledge
production. Building
on this, D’Ignazio and Klein
argue that data science must actively engage with questions of power, privilege,
and context. Similarly, Haraway’s notion of “situated knowledges” plays a
key role in their framework.
By asserting that “the only way to find a larger vision is to be somewhere in
particular,” Haraway challenges the idea of universal objectivity and emphasizes
the importance of acknowledging
the positionality
of the
knower (590).
D’Ignazio and Klein adopt this
stance to advocate for the
inclusion of emotional, embodied, and lived experiences in data work, thus
promoting a
more inclusive
and socially
responsible approach to
knowledge-making. In addition, their work continues a legacy of feminist activism that connects theory with practice,
drawing on Kimberlé Crenshaw’s theory of intersectionality to show that data
injustice must be understood through
overlapping systems
of oppression,
including race,
gender, and
class. They
also build
on Patricia Hill Collins’s idea of the “matrix of domination” to reveal
how power functions within systems
that appear
neutral. Rather
than simply
criticizing current data
practices, D’Ignazio and Klein
offer new, justice-centered ways of thinking about and using data.
The
books make
important contributions to the
growing field
of critical
data studies, which
examines the
social, cultural, and political dimensions of datafication. Scholars in this field reject the common belief that more data automatically brings better outcomes.
Shoshana Zuboff’s analysis of surveillance capitalism sharpens this critique, describing the “unilateral claiming of private human experience as free raw material,” through which corporate entities not only monitor but also shape behavior (94). Noble’s analysis of
Google’s algorithms reinforces this
perspective: “When a company can determine what knowledge is legitimate
and what is not, it exerts enormous influence
over culture,
politics, and
economics” (Noble 32).
Benjamin and D’Ignazio and Klein voice similar concerns about the dangers of algorithmic governance. While Noble critiques search engines as sites of
racial and gender bias, Data
Feminism expands this analysis to examine how data is collected,
interpreted, and used across various systems. In parallel, Benjamin investigates
the role of technologies like biometric surveillance, predictive policing, and
risk assessment tools, revealing how these systems reinforce and amplify
structural inequalities.
These three texts reveal how algorithmic systems have increasingly entered public life with little transparency or democratic control.
They challenge
mainstream tech ethics that
often reduce bias to
a technical
flaw. Instead,
they argue
that bias
is built
into systems
shaped by
racist, sexist, and capitalist logics. Wendy Hui Kyong Chun’s
Discriminating Data adds depth to this
critique by
questioning dominant notions
of fairness.
She notes,
“What counts
as discrimination and what
doesn’t often depend on who gets to define fairness” (Chun 145), a claim that
aligns with Benjamin’s warning about “techno-benevolence”: superficial efforts to fix
AI bias
that fail to challenge
structural inequalities. These works advocate for a more justice-centered
approach to understanding data systems and the significant power they hold.
One of the strongest aspects of the books
is their interdisciplinary nature. They bridge computer science, social
science, media
studies, and
political theory,
contributing to
a growing conversation on how
digital technologies are studied and taught. Their accessible writing and
activist orientation have also made them influential beyond academia, impacting
policy, journalism, and grassroots
organizing. For example, D’Ignazio and Klein’s work has
informed data justice initiatives at the local and municipal levels,
while Noble’s analysis has shaped debates
surrounding search
engine regulation and content
moderation. Similarly, Benjamin
has influenced abolitionist tech movements and education on race and
digital literacy. The books also engage with global discussions about data
colonialism and digital inequality. While their case studies primarily focus on
the United States, the issues they address—algorithmic bias, surveillance,
and systemic
injustice—have far-reaching
global implications.
Scholars like
Lilly Irani have highlighted the exploitative labor practices
underpinning global AI supply chains, particularly
in the
Global South, pointing
out how
tech companies “outsource
the dirty work of labeling data
to precarious
workers” while
masking their
labor behind
narratives of
automation (Irani 15). D’Ignazio and Klein’s call for feminist data
practices acknowledges these global concerns,
though more
work is
needed to
fully incorporate
transnational perspectives into
these frameworks.
In conclusion, these three books
challenge our common understanding that technology is a neutral force that
simply makes life better. Through a close examination of emerging technologies and their functions, the authors powerfully contribute to the growing discourse on the hidden politics embedded within digital systems. Through convincing case studies and the everyday experiences of marginalized
communities, they reveal how technologies subtly perpetuate historical and
sociopolitical inequalities. Despite their differing methods, each work urges a
fundamental rethinking of how we design, implement, and govern technological
infrastructures. Central to their collective argument is a call to shift our
critical gaze from focusing solely on algorithmic outcomes to interrogating the
deeper ideological, institutional, and
epistemological structures that shape
these systems.
For instance,
Safiya Noble
exposes how search engines like
Google perpetuate
hypersexualized and
dehumanizing narratives about Black girls,
illustrating the racial and profit-driven foundations of information
infrastructures.
Similarly, Catherine D’Ignazio and Lauren
Klein critique mainstream data science for erasing emotion, labor, and context,
and advocate for inclusive, participatory methods grounded in lived experience.
Ruha Benjamin builds on historical analysis and critical race theory to
demonstrate how digital
tools often
reinforce racial
hierarchies while
appearing neutral
or benevolent;
her call for
abolition, not
mere reform,
pushes the
boundaries of current
debates, demanding
entirely new systems rooted in
justice and
care. Moreover, all three texts
draw upon
foundational scholars like Donna
Haraway, Sandra Harding, and Simone Browne to broaden our understanding of
critique in the digital age. Rather than simply documenting harms, they offer
transformative frameworks, such as
data justice, abolitionist
design, and
participatory epistemologies, that envision
liberatory alternatives. They challenge readers, designers, and
policymakers to move beyond superficial notions
of fairness
and engage
in sustained,
intersectional critique of
how power,
knowledge, and infrastructure intersect. Lastly, they remind us that we
cannot create fair technologies on top of unfair systems. Building a more just
future requires more than technical fixes—it also needs strong
ethics, shared
vision, and
deep changes
in our institutions. Their
message is
not just about pointing
out what’s wrong, but also about inspiring us to take action and create better
systems from the ground up.
1. Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press, 2019.
2. Browne, Simone. Dark Matters: On the Surveillance of Blackness. Duke University Press, 2015.
3. Chun, Wendy Hui Kyong. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. MIT Press, 2021.
4. Costanza-Chock, Sasha. Design Justice: Community-Led Practices to Build the Worlds We Need. MIT Press, 2020.
5. D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. MIT Press, 2020.
6. Haraway, Donna. "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective." Feminist Studies, vol. 14, no. 3, 1988, pp. 575–599.
7. Harding, Sandra. "'Strong Objectivity': A Response to the New Objectivity Question." Synthese, vol. 104, no. 3, Sept. 1995, pp. 331–349. JSTOR, www.jstor.org/stable/20117437.
8. Irani, Lilly. Chasing Innovation: Making Entrepreneurial Citizens in Modern India. Princeton University Press, 2019.
9. Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.
10. Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High Technology. University of Chicago Press, 1986.
11. Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.