I’ve been giving some more thought to the research paper, and the main problem I keep running into is scale. Every topic seems workable at first, but as soon as I start to flesh it out it becomes obvious that it’s far too big.
That has actually brought me back to one of my earlier ideas: comparing collage with AI-generated images through the category of what might be called appropriation art. It feels sensible to focus on a single ethical issue within AI, because although I want to shout into the wind about all of them, I only have around 3,000 words to work with.
One thing I’ve had to think carefully about here is labour. There are so many things hidden or obscured in AI discourse, and labour may be the largest of them. In my experience, quite a lot of people can talk knowledgeably about environmental cost, extraction and appropriation, or corporate power, but there seems to be much less awareness of the severe labour issues tied up with AI. By that I don’t mean the usual scare stories about jobs being replaced. I mean hidden data labour, the psychological harm associated with content moderation, task degradation, workforce monitoring, algorithmic management, and dispersed supply chains. In many ways I’d like to write about this, but I also want to keep my contribution as closely connected as possible to what I do in the studio.
For that reason, questions around authorship and appropriation seem like the most sensible place to settle for now, at least until the supervisory meeting. At the moment, this is the draft question and abstract I’ve arrived at:
Question:
How do the ethics of appropriation change when acts of borrowing become less visible to the viewer? This paper examines that question by contrasting Martha Rosler’s explicit use of appropriated imagery with Refik Anadol’s AI-based transformation of archival datasets.
Abstract:
This paper examines how the ethics of appropriation change when borrowed images move from visible collage to AI-based transformation. By contrasting Martha Rosler’s explicit use of appropriated imagery with Refik Anadol’s transformation of large image datasets through artificial intelligence, it asks what happens when acts of borrowing become less visible to the viewer. Rosler’s photomontages make appropriation legible through cut, juxtaposition, and visible construction, foregrounding the fact that the work is built from pre-existing images. Anadol’s practice, by contrast, draws on archival datasets that are processed and transformed into immersive generative outputs in which source material is far less immediately identifiable. Drawing selectively on Derrida, the paper argues that both practices unsettle stable ideas of origin, originality, and authorship, but under different ethical conditions. If Derrida helps us understand the image as always marked by trace and repetition rather than pure self-presence, this comparison suggests that the ethical stakes of appropriation shift when those traces become difficult to locate or assess. Where Rosler foregrounds borrowing as part of the work’s critical operation, Anadol’s seamless visual environments make that dependence less perceptible. The paper therefore asks whether legibility is a significant ethical condition of appropriation, and to what extent this comparison can illuminate broader ethical questions surrounding generative AI, whose dependence on prior images often remains difficult for viewers to identify.
I’m sure this isn’t exactly where things will end up, but it does at least feel manageable!