In 2024 we commissioned artists Blast Theory to collaborate with us on the University of Sheffield’s Framing Responsible AI Implementation and Management project (FRAIM). ODI / Data as Culture were part of a cross-sector group of partners including Sheffield City Council, The British Library and Eviden. A key aim of FRAIM is to reshape views of ‘Responsible AI’ from simplistic solutions to dynamic processes and conversations, inviting everyone to consider their role in using and working with AI. Curator Hannah Redler-Hawes interviewed Blast Theory lead artist Matt Adams about their resulting artwork Constant Washing Machine.
Hannah Redler-Hawes: Let’s start with the obvious: why Constant Washing Machine?
Matt Adams: When we started thinking about AI and ‘machine’ intelligence, we considered the way computers themselves have become one of the central metaphors of our time, and how the use of the computer as a metaphor for the human brain is hotly debated.
HRH: So the machine here is the system of AI processes, people, computers, things and so on, not a laundry washing machine.
MA: Right. AI is replete with metaphors; even the phrase ‘artificial intelligence’ is two metaphors, neither of which is an appropriate term, so investigating what AI metaphors might be felt very important. There has been controversy around the use of ‘intelligence’ in AI since the 1950s. We also started to think about AI as a term that’s too big and baggy and not necessarily appropriate.
HRH: So there’s the metaphor for AI, but also questions around what the machinery of AI systems is. And the people, the physical bodies of people and our ‘meat brains’, play an important part in this?
MA: In contrast to the mind/body separation so embedded in computer/mind metaphors, we have been thinking about the role of the body in human intelligence. How do we represent our subjective and visceral response to the abstract, opaque and, at times, threatening world of Artificial Intelligence? And how do we place our fragile, subjective, human selves in the loop?
HRH: You’ve also been interested in the language of Responsible AI (a central focus of the broader FRAIM research).
MA: We have been intrigued by the instability of the language used in Responsible AI policies. When language is this slippery, how do we grasp the profound changes that AI is bringing? In Responsible AI policies, terms such as ‘transparency’ and ‘alignment’ are used frequently, but they are rarely defined (which is why FRAIM is so important). Given that the current boom in AI is driven by Large Language Models (and their outstanding fluency with language), there is an added irony around questions of meaning and intent. Are Large Language Models ‘intelligent’ in any sense or just stochastic parrots capable only of aping what they have heard? Are the metaphors of parrots (or apes) helping us to grasp important concepts, or just simplifying complex ideas into a palatable form?
HRH: Soap felt like a rich metaphor …
MA: A variety of open-ended questions led us to think about soap. A key concern in Responsible AI is the role of human control and input. ‘Human in the loop’ is cited as an important requirement both for the development of AI models and for their operations and outcomes. Where do we assert our humanness in a world of code, high-level abstraction and crude extrapolations of ‘processing power’? Soap is intimate; it goes in our armpits and our genitals. We touch it to our bodies and spread it over the surface of our skin. Soap is social. We share bars of soap with others and we clean ourselves for other people. It is an everyday product – produced industrially in thousands of tiny variations – that is rich with its own set of metaphors. We wash our hands of something, we greenwash things, we clean house, we appear squeaky clean, we get ourselves in a lather. These metaphors entwine the body and ethics. And they speak to slippery behaviour and slippery meanings.
HRH: It’s a brilliant metaphor, one we all had to stop and think about at the ODI and the University, and as it took hold we realised it was perfect as something that describes process, habit, behaviour, messiness, materiality and social consequence. Your focus on embodied knowledge also speaks to the way in which, particularly in our world of screens, we experience dislocation from our bodies and a disconnect from, or distrust of, embodied knowledge.
MA: We were interested in the context of where Responsible AI happens. Is it in big public policy boardroom contexts? We asked ourselves, where does it hit the road, where is it enacted? At the heart of the work is our sense that this is about habits and behaviours, the relationship between the things we do in public and the things we do in private, and about what leaves a human trace.
HRH: The eight bespoke bars of soap you eventually produced have specific words etched into them. Why?
MA: We wanted to focus on the centrality of human action and decision-making in AI. We invited participants in the FRAIM project – researchers, curators and industry partners – to select a word or short phrase that they consider vital for Responsible AI, and then have their portrait taken. We then created a limited edition of bars of plain white soap, each bar bearing one of those words or phrases. More people contributed words than we were able to turn into soap at this stage. The words and phrases we selected are: Practice, Iteration, Inclusion, Transparency, Context, Care, Shared Understanding, Data is the Feedstock of AI. Photographer Melanie Pollard took a portrait of each person with their bar of soap. This reflects the subjective and human presences that work in the field of Responsible AI and the embodied knowledge that we bring.
HRH: Blast Theory are recognised for pioneering work in emerging, interactive and digital media, but Constant Washing Machine is a mixed-media and multimodal rather than digital work, in which handmade objects are key.
MA: It’s very unusual for Blast Theory to make a tangible object that you might want to touch and pick up. When we started to think about soap we liked the idea of it being something precious that will get destroyed. We also considered whether we wanted to use AI itself, but discarded that idea as the least interesting option; it became a bit like a snake eating its own tail.
HRH: So the materiality is central?
MA: We started to think about the way in which the people we met in relation to the work might be very honoured or very respected, but the question remains: are they trustworthy? We really considered the human process of generating a technology like AI – we are meaning makers. For example, the process of interviewing staff at all levels of each partner organisation is a process of meaning making. Soap is untidy; it gets wet and messy and it sits in its own puddle. Central to the concept is this intimacy and messiness. The soap slowly ceases to exist through its use. All systems are entropic and inherently decaying, like the soap.
HRH: What was it like to create this work as part of an interdisciplinary, cross-sector research project?
MA: We learned a great deal from conversations with the researchers, from reading the AI policies and from the workshop, and we felt able to contribute in each setting. The FRAIM researchers all have such supple intellects: they can be abstract, philosophical and complex while also acknowledging that this is a real thing happening now that affects real people.