This week I was asked to do a short interview with the organiser of my exhibition, to be used for publicity. We had a chat and, not entirely seriously, thought it might be interesting to see what would happen if we asked ChatGPT to give the interview as me. We uploaded this blurb that I had written for the show:
We also told the AI to scrape any other information about me that it could find.
The ‘interview’ went as follows:
The answers are wildly inaccurate, but they are also interesting. I feel I have quite a good understanding of how the things we currently call AIs work, yet I was still surprised by the giant leaps that ChatGPT makes in these answers. More than this, I very quickly became worried about how much these ‘interpretations’ of the work (which the AI had never actually ‘seen’) might, on some level, influence the way I think about the paintings.
There is also an ethical aspect to this. I want to be very conscious of the impact of my activities as I conduct this research. ChatGPT, like most AIs, treats every interaction as a datapoint, so every time I run a prompt I am potentially adding to the data pollution that I worry about. There isn’t an easy answer to this, and I have to be mindful of the decisions I make when using AI. We did not do this experiment for any great reason, but the outcome was thought-provoking, which is probably lucky. It is also something to bear in mind as I develop my methodology.
Interestingly, I found myself quite wary of answering the same questions for the real interview and asked Sarah to vary them. This is the interview that was sent out:
