Ben Lingard


Assessment feedback

I have reworked my study statement in response to three developments.

First, I revisited it in light of the Unit 1 assessment feedback, which I found thoughtful and genuinely helpful. Later in this post, I outline the ways in which I have responded to that feedback.

Second, the Interim Show has influenced how I now see the project developing. I will make a separate post documenting and reflecting on the work itself, but it is worth noting here that the process reminded me how much I enjoy working with paper, and how effective it can be as a quick, flexible means of testing ideas and generating multiple outcomes. As a result, I have reconfigured the study statement to include collage as an investigative tool.

Third, my exhibition at Leith comes down this weekend, and this has prompted me to revisit the work. I have now finally documented the exhibition and plan to make a separate post about it in the next few days. As ever, a little distance makes it easier to see the work clearly. The process of making the paintings became somewhat all-consuming, and I reached a point where I simply wanted the work finished. Looking at them now, I think they hold up reasonably well. There are, of course, things I would approach differently now, and I will record some of those thoughts in the forthcoming post.

What feels most relevant here is that, although the methodology I used to generate source material for the paintings was not especially well thought-through, it does seem connected to the direction I want this project to take. What matters now is that I develop a more structured approach to using Midjourney as a kind of collaging tool: one that makes my activity as visible, trackable, and clearly bounded as possible.

So this is the new Aims & Objectives section of my Study Statement:

This piece of research considers some of the ways that contemporary painting might respond to the ontological challenges posed by generative AIs. The current proliferation of AI-generated images has unsettled ideas about authorship, originality, ownership, and the status of the image. Moreover, this slew of digital content raises questions about what sort of being an AI-generated image possesses and how this might differ from pre-existing methods of image production (e.g. painting). Generative systems do not simply produce images; they reshape the conditions under which images come into existence.

Central to this enquiry is the concept of ontology. In philosophy, ontology concerns questions of being and the conditions through which entities emerge and acquire meaning. In AI engineering, an ontology refers to a formal system of categories and relations designed to structure knowledge for machine processing. This shared terminology conceals significant differences. Where philosophical ontology can embrace ambiguity, contingency, and debate, technical AI ontologies prioritise classification and operational clarity. This project considers how painting might test and complicate these tensions through material and embodied processes, and how collage – understood as an ontological probe – can make visible the constructedness, seams, and occlusions that AI images often smooth over.

A key claim of the project is that AI-generated images cannot be understood as ontologically neutral outputs. They come into being through constitutive conditions: large-scale data extraction, hidden human labour, energy-intensive hardware and data centres, and the concentration of power in a small number of platforms. These conditions are not simply ‘context’ but part of what an AI image is. The research therefore asks how image-being, origin, agency, and stability behave when images are produced within an apparatus that both reveals and conceals its operations, and what happens when those images are translated into material processes that reintroduce time, touch, resistance, and accountable decision-making.

To keep the enquiry manageable within the MA timeframe, the theoretical frame is narrowed to a practical toolbox drawn from two thinkers: Martin Heidegger (technology as revealing/concealing; enframing) and Jacques Derrida (trace, iteration, unstable origin). These concepts function as prompts for making, and as vocabulary for reflection rather than as the basis for an extensive philosophical survey.

I have chosen to situate this research in painting for two interrelated reasons. First, I have examined my practice rigorously in the first months of the course and have recognised that painting is the central axis of my activities. Regardless of how my practice expands or shifts, I always return to painting, and this is not just because I enjoy it (I often do not) but because I find it the most useful tool for answering my questions. This does not mean that this project will be limited to paint applied to a surface, but rather that the other activities that I undertake (especially collage and iterative image-handling) will be enacted within the context of painting as the primary methodological framework.

Second, painting is uniquely positioned to engage with generative AI because it carries a long history of negotiating seemingly existential challenges. Painting has “died” many times in the past 200 years and yet it persists and thrives. From the birth of photography through to postmodern declarations of its funeral rites, painting has repeatedly adapted and transformed. In the context of generative AI, the slowness, materiality, and embodied labour of painting offer a grounded counterpoint from which to consider, test, and respond to ontological uncertainty.

Aims

  1. To consider how painting might operate as a site for testing, negotiating, and responding to the ontological uncertainty introduced by generative AI images.
  2. To explore the tensions and problems that emerge when philosophical notions of ontology intersect with technical AI ontologies, and to consider how painting (and collage as a research tool within painting’s methodological frame) might be employed to highlight, complicate, and materialise these conflicts.
  3. To develop a practice-led methodology that treats the political, ethical, and economic conditions of AI image production (extraction, labour, infrastructure, power) as constitutive of the AI image’s being, and to make those conditions legible through iterative studio processes.

Objectives

  1. To define with precision the ‘ontological challenges’ posed by generative AI, focusing on image-being, origin/source, distributed agency, versioning/instability, and constitutive production conditions.
  2. To build a focused conceptual toolbox from Heidegger and Derrida that can generate specific studio prompts and support critical reflection.
  3. To construct a bounded, accountable pilot dataset from my own archive (starting with 30 paintings and 30 collages) and use it to test how AI systems reshape images through prompting and versioning.
  4. To use collage as a primary research tool – printing, cutting, recomposing, occluding, and re-photographing outputs – in order to interrupt AI coherence, expose trace and iteration, and make construction visible.
  5. To translate selected stages of the loop (source → AI output → collage → optional re-generation) into painting, preserving seams and drift rather than resolving them, as a means of comparing AI image-production with embodied image-making.
  6. To maintain systematic documentation (tags, version trees, process notes, reflective writing) that enables a critical assessment of both the images produced and my own participation in the systems under examination.

I will now detail the points in my Unit 1 feedback that I have addressed in the re-working of the Study Statement:

● You write “My interest in AI is driven by the ontological challenges that I perceive in the current headlong rush to embrace deeply flawed and over-hyped technologies.” and again in your study statement you write “This piece of research considers some of the ways that contemporary painting might respond to the ontological challenges posed by generative AIs.” What specifically are these ontological challenges?

The phrase “ontological challenges posed by generative AI” is operationalised through four tightly defined challenges:

  1. The status of the AI image as an apparatus-produced output rather than a direct trace of world or intention. What is the impact of this on ideas such as trust, agency, meaning, and accountability?
  2. The instability of origin and “source” under training, prompting, and versioning. How do we understand an image when there is no single, privileged ‘original’ but rather an iterative chain of hidden or opaque origin?
  3. Distributed agency across dataset/platform/model/prompt/user/hand. How do we understand an image when authorship, intention, and responsibility are difficult to locate in a single origin or maker?
  4. The constitutive conditions of production (extraction, labour, energy use, corporate concentration), without which the AI image cannot be understood as the kind of object it is.

● In your study statement you write “the research contributes to both contemporary painting discourse and critical technology studies. It expands discussions of AI beyond questions of authorship and automation toward a deeper consideration of how images come into being under different ontological conditions.” 
— Does the research require a political and ethical dimension? Consider how training data is extracted, the labour conditions of mechanical turks, the reinforced racial and gender biases, the environmental costs and concentration of power in a small number of companies – these conditions are constitutive of what an AI-generated image is. How can AI-generated images be understood, ontologically, without them?
— Can you achieve your objective of critically assessing your own engagement with AI and other digital tools without engaging with the political, ethical and economic discourse around generative AI? 
— What data have you contributed to the training of these systems? How might making that entanglement visible become part of the work itself?

This is super-important and I am disappointed that I did not articulate this well in the first iteration of the Study Statement. I have re-worked the project so that the political, ethical, and economic dimensions are not treated as a separate discourse that sits alongside the studio work; they are treated as ontological conditions. Methodologically, this is addressed by making the image’s conditions of coming into being legible in two ways: first, through my choice of an accountable, bounded image set; second, through procedures that foreground infrastructure and concealment. The project’s infrastructural emphasis (data centres, GPUs, energy) is not illustrative ‘subject matter’ but a way of insisting that the AI image’s apparent immediacy is underwritten by material systems and power relations.

To enable critical self-assessment of my engagement with AI tools, I will begin with a discrete pilot dataset: 30 paintings and 30 collages, some made in the first couple of weeks, some from my own archive, each renamed, tagged, and accompanied by short notes. This ‘image-world’ functions as a visible contribution and a declared boundary: what I include, how it is documented, and how it is described becomes part of the research evidence. The tags act as a minimal technical ontology – an explicit schema of categories that can be compared to the system’s own tendencies to classify, smooth, and generalise.
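As a purely illustrative sketch (the field names below are placeholders I have yet to settle on, not an existing schema), a single pilot-dataset record might look something like this:

```python
# Hypothetical sketch of one pilot-dataset record. Every field name here
# is a provisional placeholder, not a finalised documentation schema.
record = {
    "id": "P-001",                      # renamed file, decoupled from the original title
    "origin": "archive",                # e.g. "archive" or "made this unit"
    "medium": "painting",               # "painting" or "collage"
    "tags": ["landscape", "occlusion", "impasto"],
    "note": "Oil on board; single light source, heavy surface.",
}

# Collecting every tag used across records yields the explicit category
# schema that can later be set against the system's own classifications.
vocabulary = sorted(set(record["tags"]))
print(vocabulary)
```

Keeping the schema this small is deliberate: it leaves my categories legible on one side, so that the system’s tendency to classify, smooth, and generalise can be compared against them on the other.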

Midjourney is used in this project as a widely adopted, artist-facing generative image platform that makes the mechanisms of prompt-based image production and iterative versioning accessible within an MA studio context. The project does not treat Midjourney as a neutral ‘tool,’ but as part of the apparatus under examination: its outputs are understood as infrastructural images, produced through systems that depend on large-scale training data extraction, hidden human labour, and energy-intensive computation. By working with Midjourney through a bounded pilot dataset drawn from my own archive, and by interrupting its outputs through physical collage and subsequent translation into paint, the research is able to test (materially and comparatively) how origin, agency, and image stability behave under contemporary conditions of AI image production, without turning the project into a purely technical study of machine learning.

● Now somewhat contradictorily, while the above asks whether certain parts of the theoretical research needs to be broadened, we also wonder if other parts need to be narrowed down further within the MA timeframe. You have “narrowed” your philosophical ontological research to focus on the continental tradition, could it be beneficial to narrow your framing further by only looking at one or two thinkers more deeply within this period?

I completely agree. I have already got quite bogged down with too much reading and have learned that ‘ontology’ is quite the rabbit-hole to fall down! I am going to focus on Heidegger’s theories about the ways in which technology simultaneously reveals and conceals whilst enframing the world as stockpiles of resources (images become content/training data). I will also look at Derrida’s ideas about trace, unstable origin, and iteration, all of which are relevant when we consider the versioning that occurs in generative AI.

● At what rhythm will you be moving between theory, making, and critical reflection? How quickly should experiments follow conceptual prompts? 

The project starts with a quick, repeatable cycle. In each period, I extract one concept from the reading and put it into practice within 48 hours through a small set of studio tests. Every session ends with a short note recording what went in (images and prompts), the constraints I set, the decisions I made, what failed, and where the system appeared to ‘smooth’ or normalise the material. At the outset, I will also run a paint-first baseline period to establish what becomes visible when image-making is slow, embodied, and situated. This baseline acts as a point of comparison for the later AI/collage experiments. Overall, the project is structured but not over-determined: the workflow and documentation are planned, while the outcomes of making are allowed to remain open and surprising. There will definitely come a point where experimentation gives way to more resolved work.

● Perhaps we wonder, should you ignore the conceptual and theoretical for now and just get back to painting and see what happens? What happens when you just paint and then reflect critically, what changes, what do you notice, what is revealed?
● do you enjoy painting – whether yes or no – ask yourself why? Are there other materials that might be more freeing, like paper?

I think that it is better for me to have some sort of framework to work within. I can drift very easily, and having a ‘project’ tends to anchor me. What I have done with the re-framing of this project is to make it less limiting (the first iteration was, on reflection, a bit ‘dry’!) and to add in some fun with the collage. It is important, though, that the collage serves a purpose in the project and helps to perform the research rather than just decorating it. By printing AI outputs and physically recomposing them, collage makes trace and iterability tangible (Derrida) and stages revealing/concealing (Heidegger) through occlusion, layering, and the visibility of seams. The core experimental loop is:

  1. generate Midjourney outputs using an image prompt from the pilot dataset
  2. print and collage outputs to interrupt coherence and expose construction
  3. photograph the collage and optionally reintroduce it to Midjourney to produce a second generation
  4. translate selected stages into painting, preserving seams and drift rather than resolving them.
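Purely as a sketch of how one pass through this loop might be documented (the function and field names are provisional placeholders of mine, not a finalised system), each image could carry a simple version chain:

```python
# Hypothetical logging of one pass through the experimental loop.
# All names here are placeholders, not a settled documentation scheme.
def log_stage(chain, stage, ref, note=""):
    """Append one stage of the loop to an image's version chain."""
    chain.append({"stage": stage, "ref": ref, "note": note})
    return chain

chain = []
log_stage(chain, "source", "P-001", "pilot-dataset painting")
log_stage(chain, "ai_output", "MJ-P-001-v1", "image prompt to Midjourney")
log_stage(chain, "collage", "C-P-001-a", "printed, cut, recomposed; seams left visible")
log_stage(chain, "regeneration", "MJ-P-001-v2", "collage photo fed back in")

# The chain records the full source-to-regeneration path for later reflection.
print(" -> ".join(step["stage"] for step in chain))
```

Recording each pass in this way is what would make the version trees mentioned in my objectives possible: any translated painting could be traced back through its collage and AI stages to a declared source.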

● Overall, can you treat this as a project to be managed? Does this approach help or would it get in the way? We ask as your professional background might be something you can draw on?
— maybe consider giving yourself a long focused period of time just painting — that will give time for your brain cells – your neural pathways to actually form around painting and the process? (this could work with other mediums as well, like cut paper — the principle is the same).

I definitely feel that it will help. I think that there may well come a point in the process where I move away from the experimental cycle towards some longer-form making, but I currently feel that I need to build a platform for this. Also, I have some paintings on the go that were started when I was making work for the exhibition. I will continue to work on these in parallel with the collage experiments.
