The revolution shall not be automated: On the political possibilities of activism through data & AI

Every other day now, there are headlines about some kind of artificial intelligence (AI) revolution that is taking place. If you read the news or check social media regularly, you have probably come across these too: flashy pieces either trumpeting or warning against AI’s transformative potential. Some headlines promise that AI will fundamentally change how we work and learn or help us tackle critical challenges such as biodiversity conservation and climate change. Others question its intelligence, point to its embedded biases, and draw attention to its extractive labour record and high environmental costs.

Scrolling through these headlines, it is easy to feel like the ‘AI revolution’ is happening to us — or perhaps blowing past us at speed — while we are enticed to take a back seat and let AI-powered chatbots like ChatGPT do the work. But the reality is that we need to take the driver’s seat.

We need to stop simply wondering what the AI revolution will do to us and start thinking collectively about how we can produce data and AI models differently.

If we want to leverage this technology to advance social justice and confront the intersecting socio-ecological challenges before us, we need to stop simply wondering what the AI revolution will do to us and start thinking collectively about how we can produce data and AI models differently. As Mimi Ọnụọha and Mother Cyborg put it in A People’s Guide to AI, “the path to a fair future starts with the humans behind the machines, not the machines themselves.”

Sure, this might seem easier said than done. Most AI research and development is being driven by big tech corporations and start-ups. As Lauren Klein and Catherine D’Ignazio discuss in “Data Feminism for AI” (see “Further reading” at the end for all works cited), the results are models, tools, and platforms that are opaque to users, and that cater to the tech ambitions and profit motives of private actors, with broader societal needs and concerns becoming afterthoughts. There is excellent critical work that explores the extractive practices and unequal power relations that underpin AI production, including its relationship to processes of datafication, colonial data epistemologies, and surveillance capitalism (to name but a few). Interrogating, illuminating, and challenging these dynamics is paramount if we are to take the driver’s seat and find alternative paths.

But here I want to focus on what alternative paths might look like. For the last four years, I have been involved in a collaborative research project called Data Against Feminicide, which explores how we can use data and technology to support existing struggles against gender-related violence. The project was originally developed by Catherine D’Ignazio, Helena Suárez Val, and Silvana Fumega, and I now co-lead it with them. This work is supported by a number of partners and students, is inspired by lineages of feminist activism against gender-related violence, and speaks to various ongoing efforts to explore possibilities for data activism, ‘techno resistances’, and ‘participatory AI’.

When I started working on this project, I knew very little about data systems and AI models; I was interested in feminist activism and participatory forms of research and planning. Our work has since made clear to me that the politics of data and AI is, at heart, a politics of knowledge production. We can start to dispute and transform these spaces by asking seemingly simple questions: who and what is this for, who is involved and how, and towards what ends?

Photo by Isadora Cruxên, London, 2022.

Who and what is this for?

Thinking about AI and data production differently requires asking what we want to achieve and whose work, knowledge, concerns, needs, and aspirations we want to support and uplift. Data Against Feminicide is not an “AI project”; it is an action-oriented and collaborative project that seeks to both understand and support the work of activists and civil society groups who produce data about gender-related violence and feminicide across various contexts. It was this broader aim, as I explain below, that eventually led us to work with AI.

Feminicide — in some contexts, femicide — is the gender-related killing of cisgender and transgender women and girls, a form of violence that reflects structural and intersectional forms of inequality. We know this is a global challenge: around 89,000 women and girls were intentionally killed in 2022, according to United Nations’ estimates.

But we also know that existing statistics underestimate the problem due to underreporting and incomplete or inaccurate data. In Brazil, where I am from, the Laboratório de Estudos de Feminicídios (LESFEM), one of our project’s collaborators, counted at least 1,706 feminicides in the country in 2023, 16% more than official statistics. Addressing such data gaps, making the structural nature of this violence visible, and holding institutions to account are some of the reasons that many activists begin to produce data of their own.

But this labour of feminicide data production, as our research shows, is emotionally draining, time intensive, and often volunteer-based and unremunerated. Through qualitative interviews, for example, we learned that most activists use news articles to identify feminicide cases, meaning they need to read through a lot of violent — and often not directly relevant — content to document cases in their contexts.

To address this issue, we collaborated with several activists across the Americas to develop a tailored email alerts system, the Data Against Feminicide Email Alerts. The system first runs search queries defined by the activists against a news media catalogue sourced from Media Cloud, which is a partner in the project. The initial results are then filtered through a machine learning model — a form of AI — to identify news articles that are highly likely to concern a feminicide and send them to the activists via email alerts. The system is available in English, Portuguese, and Spanish.
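The pipeline described above — activist-defined search queries, followed by a machine learning filter that passes on only likely-relevant articles — can be sketched in a few lines of Python. This is a minimal illustration only: the training headlines, thresholds, and function names below are assumptions for the sake of example, not the project’s actual implementation, and a real system would be trained on the collaboratively annotated, multilingual datasets the project built.

```python
# Minimal sketch of a relevance-filtering step for news alerts.
# The training data and names here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: headlines labelled 1 if likely
# relevant to feminicide monitoring, 0 otherwise.
train_texts = [
    "Woman killed by partner in domestic violence case",
    "Man charged with murder of his wife",
    "Local football team wins championship",
    "City council approves new budget",
]
train_labels = [1, 1, 0, 0]

# TF-IDF features plus a logistic regression classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

def filter_articles(articles, threshold=0.5):
    """Keep only articles the model rates as likely relevant."""
    probs = classifier.predict_proba([a["title"] for a in articles])[:, 1]
    return [a for a, p in zip(articles, probs) if p >= threshold]

# Stand-ins for results returned by an activist-defined search query
# against a news catalogue such as Media Cloud.
candidates = [
    {"title": "Woman killed by partner in domestic violence case", "url": "ex1"},
    {"title": "Local football team wins championship", "url": "ex2"},
]
alerts = filter_articles(candidates)
```

The articles that pass the filter would then be compiled into an email alert; crucially, the classifier only narrows the reading load, while the judgement about whether a case belongs in a dataset stays with the activists.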

Our goal with this tool is to both draw attention to the labour involved in feminicide data production and facilitate it — rather than automate and replace it. Activists still do the work of identifying and recording cases according to their own monitoring frameworks, but the system helps with spotting relevant news articles. This approach draws on data feminism, a set of principles developed by Catherine D’Ignazio and Lauren Klein for taking seriously and tackling power asymmetries in data production, analysis, and circulation. One of the principles is to make labour visible. This perspective contrasts with prevailing approaches to labour in mainstream, corporate-driven data and AI production, which both mask the extractive nature of data labelling work and raise concerns about labour replacement and the future of workers across industries.

A pluralistic process helps enable conceptual pluralism.
— Suresh et al. 2022

Who is involved and how?

While reflecting on where we want to go with data production and AI is a good starting point, the questions of how we get anywhere, whom we bring on the ride, and how much say they have in the journey are equally important. Also grounded in data feminism principles, we believe in centring collaboration and participation as ways of including different lived experiences, perspectives, and contexts, and of moving towards data practices and technological design that work for diverse communities.

To develop the Data Against Feminicide Email Alerts, we sought to engage activists throughout the process such that the system would be useful to them and reflect, as much as possible, their own understandings of feminicide. This entailed co-design sessions that helped to clarify data-gathering challenges and to collectively envision potential solutions; collaborative data annotation to create the news datasets that were used to train the machine learning model in various languages; and participatory testing and workshops. As our team has written elsewhere, our view is that “a pluralistic process helps enable conceptual pluralism.”

Towards what ends?

With this last question, the concern is not with the objectives or applications of specific projects or technologies that relate to data and AI production, but with the perhaps more philosophical question of what the end goal of all this is. Is it to produce more and more data? To automate everything while automating people out?

We know that data is now central to all sorts of productive, commercial, financial, and socio-political activities. We also know that generative artificial intelligence has a data addiction: loads and loads of data are required to support these models. In this context, it seems crucial and radical to ask: how much data (or AI) do we actually need and for what?

This was precisely the kind of question we posed in our annual community building event last year, called “Strategic Datafication.” The name comes from a concept elaborated by Helena Suárez Val, one of the project’s co-leads, to denote the possibility of mobilizing data carefully and strategically for social change while remaining aware of the ways in which data can be and has been used to oppress, exclude, and discriminate.

During the event, collaborators and other participants questioned what data might conceal, when data production is unhelpful, and how much data we need to take collective action. But they also made clear the political possibilities contained in data production: as a basis for generating dialogue and mobilization, for challenging partial or inaccurate state and media narratives, for developing a socio-political consciousness, and for healing and repairing.

Revolutions as continuous, collective projects

Creating and taking part in spaces for continuous dialogue and reflection about data and AI production is part of approaching or (re)claiming these sites as spaces of resistance. Audre Lorde reminds us that revolution is not “a one-time event” and that “change is the immediate responsibility of each of us.”

In proposing that we take the driver’s seat, I am not suggesting that we should all reorient our work, research, or other activities towards directly engaging with data and AI. Rather, I am inviting you to take an active interest in being part of the conversation. I am inviting you to join us in asking who and what is this for, who is involved and how, and towards what ends. These questions help us to challenge a sense of AI “inevitability.” The more of us that engage in these conversations and that bring along colleagues and students, the more likely it is that we can make potential “AI revolutions” into collective political projects that take structural inequalities seriously and plural experiences and ways of knowing the world into account.

 

Isadora Cruxên is Lecturer in Business and Society at Queen Mary University of London and a member of the Centre on Labour, Sustainability and Global Production (CLaSP). She is also a research affiliate with the Data+Feminism Lab at the Massachusetts Institute of Technology (MIT).

 

Further reading

Cruxên, I., Jungs de Almeida, A., D’Ignazio, C. 2023. Data activism against feminicide: co-designing digital tools to monitor gender-related violence across the Americas, Research Insights #2, School of Business and Management, Queen Mary University of London. Available at www.qmul.ac.uk/busman/research/research-insights.

D'Ignazio, C. and Klein, L.F., 2023. Data Feminism. MIT Press.

Klein, L. and D'Ignazio, C., 2024. “Data Feminism for AI.” In 2024 ACM Conference on Fairness, Accountability, and Transparency.

Lorde, A., 2018. The Master’s Tools Will Never Dismantle the Master’s House. Penguin Books.

Mejias, U.A. and Couldry, N., 2024. Data grab: The new colonialism of Big Tech and how to fight back. University of Chicago Press.

Suárez Val, H., Forthcoming, “Strategic datification.”

Suresh, H., Movva, R., Dogan, A.L., Bhargava, R., Cruxên, I., Cuba, Á.M., Taurino, G., So, W. and D'Ignazio, C., 2022. Towards intersectional feminist and participatory ML: A case study in supporting Feminicide Counterdata Collection. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 667-678).
