As I reworked my Study Statement, I gave some thought to whether I might use AI images generated by other people rather than making new ones myself. On the surface, that seemed like it might be a way to reduce my own direct entanglement with image-generation systems. It felt like a possible way of keeping a little critical distance: to work with what was already circulating rather than adding more images to the pile. But the more I thought about it, the less clean that position became.
Part of the difficulty is that the question immediately shifts from making to ownership. If I use an AI image produced by someone else, what exactly am I using: a picture they authored, a platform output, or a derivative artefact assembled from a much larger field of prior images and data? Midjourney’s current Terms of Service try to stabilise this uncertainty by stating that users “own all Assets” they create “to the fullest extent possible under applicable law.” But that ownership is qualified from the start. The same terms note exceptions, including third-party rights, the requirement for certain higher-revenue companies to be on particular paid plans to own their assets, and the rule that if you upscale images made by others, those upscaled images remain owned by the original creator. Midjourney also says users grant it a perpetual, worldwide, non-exclusive, sublicensable, royalty-free, irrevocable licence over both inputs and generated assets. And content is public by default: viewable and remixable by others unless a user has access to Stealth mode, which is only available on the highest-priced plans.
That language is striking because it offers a strong claim to ownership while never quite resolving the deeper instability underneath it. Midjourney explicitly advises users to consult their own lawyer about the state of intellectual-property law in their jurisdiction. In other words, even where ownership is asserted contractually, the legal and ethical ground remains uncertain. The image is presented as an ownable asset at the end point, but the conditions of its production are much harder to delimit.
This is where collage becomes useful as a comparison. Collage has long worked through cutting, borrowing, recontextualising, and recombining existing material. Artists like Hannah Höch helped establish photomontage precisely through appropriation and recombination of images from mass media. In that sense, collage has always lived with a certain legal and ethical instability around citation, reuse, permission, and transformation. More recent copyright disputes around what might be called ‘appropriation art’, including the Warhol Foundation v. Goldsmith case, have also shown that claims of transformation do not automatically settle questions of infringement or fair use. So, there is already a long history of image-making practices operating in a legal grey area.
But AI images intensify that problem. With collage, the acts of selection, cutting, juxtaposition, and transformation are usually legible in the work itself. The borrowed material often remains visible as borrowed material. With generative AI, by contrast, appropriation is displaced into the training process and hidden behind the interface. What appears at the end is a singular, coherent output, even though it emerges from diffuse and contested processes of scraping, ingestion, modelling, and recombination. That is part of the irony I keep circling: images produced through systems built on large-scale extraction and appropriation can still be presented, at the point of output, as clearly ownable assets. Midjourney’s terms do exactly that, even as they preserve broad platform rights over the same content and acknowledge the limits of that ownership under applicable law.
For my project, that makes the idea of using other people’s AI images feel less like a clean solution and more like another version of the same problem. It might have reduced my direct role in generating new outputs, but it would not have removed me from the contradictions attached to them. If anything, it would have sharpened them. I would still be working with images whose status depends on a strange double movement: extraction and appropriation at the level of production, followed by ownership claims at the level of the finished artefact.
That tension is useful because it reveals something broader about AI images. They do not arrive as neutral pictures. They arrive already carrying unresolved questions about authorship, labour, permission, and value. Collage has always tested the boundaries of ownership, but AI systems make those boundaries both larger in scale and harder to see. The problem is not only who owns the image at the end, but what it means to assign ownership to an object whose very conditions of possibility are collective, opaque, and contested.