The Next Biennial Should be Curated by a Machine is an inquiry into the relationship between curating and Artificial Intelligence (AI). The project unfolds as a series of experiments exploring the application of machine learning techniques (a subset of AI) to curating large-scale contemporary art exhibitions, reimagining curating as a self-learning human-machine system. Making reference to the 2003 e-flux project ‘The Next Documenta Should Be Curated by an Artist’ – which questioned the structures of the art world and the privileged position of curators within it – the project extends this questioning to AI. It asks how AI might offer new, alien perspectives on conventional curatorial practices and curatorial knowledge. What would the next Biennial, or any large-scale exhibition, look like if AI machines were asked to take over the curatorial process and make sense of a vast amount of art world data that far exceeds the capacity of the human curator alone?
Experiment AI-TNB, the second in the series, takes the Liverpool Biennial 2021 edition as a case study to explore machine curation and visitor interaction with artworks already selected for the Biennial by its curator. It uses data from the biennial exhibition as its source – the photographic documentation of artworks, their titles, and descriptions – and applies machine learning to generate new interpretations and connections. At its heart is OpenAI’s ‘deep learning’ model CLIP, released in 2021, which is able to measure the similarity between an image and a short text.
On the project’s landing page, visitors encounter fifty eerie images – some of which look like photographs, others like drawings or collages. These are images generated by AI in response to the titles of the source artworks, using a technique in which CLIP guides a GAN (Generative Adversarial Network) into creating an image that ‘looks like’ a particular text. Navigating through the experiment, visitors are presented with a triptych of images and texts, with the source artwork placed in the centre, the AI-generated image on the left, and a heatmap overlaid on the source image on the right. ‘Deep learning’ models are used to create new links between the visual and textual material, as well as entirely new images and texts. Every page is also a trifurcation: visitors can explore the links between the original source and generated material, word and image, art and data. As visitors navigate the project, they create their own paths through the material, each journey becoming a co-curated human-machine iteration of the Biennial saved to the project’s public repository (Co-curated Biennials).
Experiment AI-TNB is funded by the UKRI Arts and Humanities Research Council programme ‘Towards a National Collection’ under grant AH/V015478/1.
Find out about Experiment 1: B³(NSCAM), Ubermorgen, Leonardo Impett and Joasia Krysa, 2021
For a broader discussion of AI and curating, see Liverpool Biennial’s journal Stages, vol. 9 (2021).
Experiment AI-TNB explores machine curation and visitor interaction in large-scale exhibitions, with a focus on human-machine co-authorship. It takes Liverpool Biennial 2021 as a case study to create a parallel AI-visitor co-curated online version.
The project takes data from the Liverpool Biennial 2021 – including the photographic documentation of artworks, their titles, and descriptions – and applies contemporary machine learning techniques to generate new interpretations and connections. At its heart is OpenAI’s ‘deep learning’ model CLIP, released in 2021, which is able to measure the similarity between an image and a short text.
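As a concrete illustration of this core operation, the following minimal sketch scores how well a set of short texts matches a photograph, using OpenAI’s released `clip` package; the image file and the candidate titles here are placeholders rather than project data.

```python
# Minimal sketch: scoring image-text similarity with CLIP.
# Assumes `pip install git+https://github.com/openai/CLIP.git`;
# "artwork_photo.jpg" and the titles are illustrative placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

titles = ["Masterless Voices", "Fraught for those who bear bare witness"]
image = preprocess(Image.open("artwork_photo.jpg")).unsqueeze(0).to(device)
texts = clip.tokenize(titles).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    # Cosine similarity between the image and each candidate title
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    similarity = (image_features @ text_features.T).squeeze(0)

for title, score in zip(titles, similarity.tolist()):
    print(f"{title}: {score:.3f}")
```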
On the project’s landing page, visitors encounter fifty eerie images – some of which look like photographs, others like drawings or collages. These are images generated by AI in response to the fifty titles of the source artworks, based on a technique developed by Ryan Murdock (@advadnoun) which uses CLIP (Contrastive Language–Image Pre-training) to guide a GAN (Generative Adversarial Network) into creating an image that “looks like” a particular text. For example, “Fraught for those who bear bare witness” by Ebony G. Patterson results in an image of a bear’s face in the woods, whilst “Masterless Voices” by Ines Doujak and John Barker produces a dark image with half a dozen disembodied open mouths. These AI-generated images provide a new interpretation of the artworks, but they don’t create connections between them.
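The compressed sketch below shows the shape of that technique, under some loudly stated assumptions: it uses the pip-installable `clip` and `pytorch_pretrained_biggan` packages, an illustrative prompt, and arbitrary hyperparameters; Murdock’s actual Big Sleep implementation differs in its details. The idea is to optimise the GAN’s latent inputs by gradient descent so that CLIP rates the generated image as increasingly similar to the text.

```python
# Sketch of CLIP-guided GAN image generation (cf. Big Sleep): optimise
# BigGAN's latent inputs so CLIP scores the output as similar to a text.
# Prompt, learning rate, and step count are illustrative assumptions.
import torch
import torch.nn.functional as F
import clip
from pytorch_pretrained_biggan import BigGAN

device = "cuda" if torch.cuda.is_available() else "cpu"
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()
clip_model, _ = clip.load("ViT-B/32", device=device, jit=False)
clip_model = clip_model.float()  # keep everything in fp32 for simplicity
for p in list(gan.parameters()) + list(clip_model.parameters()):
    p.requires_grad_(False)  # only the latent inputs are optimised

with torch.no_grad():
    text = clip.tokenize(["Masterless Voices"]).to(device)
    text_features = F.normalize(clip_model.encode_text(text), dim=-1)

# CLIP's input normalisation statistics
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

noise = torch.randn(1, 128, device=device, requires_grad=True)  # BigGAN latent
class_logits = torch.zeros(1, 1000, device=device, requires_grad=True)
optimizer = torch.optim.Adam([noise, class_logits], lr=0.05)

for step in range(300):
    image = gan(noise, torch.softmax(class_logits, dim=-1), truncation=1.0)
    image = (image + 1) / 2  # rescale from [-1, 1] to [0, 1]
    image = F.interpolate(image, size=224, mode="bilinear", align_corners=False)
    image_features = F.normalize(clip_model.encode_image((image - mean) / std), dim=-1)
    loss = -(image_features * text_features).sum()  # maximise CLIP similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```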
Navigating through the experiment, visitors are presented with a triptych of images and texts, and with three possible directions to explore. Placed in the centre is the source artwork, with the AI-generated image on the left and a heatmap overlaid on the source image on the right.
In the centre, deep text networks are used to extract the most salient keywords from the source artwork’s description (which can be found on the Liverpool Biennial website). Navigating in this direction combines visual and textual links: the visual similarity of the artwork source photographs and the textual similarity of the keywords extracted from the artwork descriptions. Of the three directions, this combined measure is the closest to the search and recommendation engines we see on the internet today.
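The sketch below illustrates one plausible way to implement such a combined measure. It assumes each artwork already has an image embedding (for instance from CLIP) and a keyword embedding (for instance from a sentence encoder), and it weights the two cosine similarities equally by default; the helper name, equal weighting, and embedding sources are assumptions for illustration, not the project’s documented implementation.

```python
# Illustrative sketch: combining visual and textual similarity to pick
# the "next" artwork. `image_emb` and `keyword_emb` are (n_artworks, d)
# arrays of precomputed embeddings; the 50/50 weighting is an assumption.
import numpy as np

def cosine_matrix(X):
    """Pairwise cosine similarity between the rows of X."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return X @ X.T

def next_artwork(i, image_emb, keyword_emb, w_visual=0.5):
    """Index of the artwork most similar to artwork i under the combined measure."""
    combined = (w_visual * cosine_matrix(image_emb)
                + (1 - w_visual) * cosine_matrix(keyword_emb))
    np.fill_diagonal(combined, -np.inf)  # never recommend the artwork itself
    return int(np.argmax(combined[i]))
```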
On the left (in pink) is the AI-generated image, first encountered on the landing page of the project. Navigating left, visitors reach the artwork with the most similar generated image in terms of colour, form, or texture. These generated images are created only from the titles of the original artworks – nothing else. The two works are therefore connected through the visual similarity of their (textual) titles.
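Read in terms of the sketch above, this left-hand step is essentially the same nearest-neighbour search with the weighting pushed entirely to the visual side (w_visual = 1.0), run over embeddings of the generated images rather than of the source photographs – an illustrative reading, since the exact feature space used for the generated images is not specified here.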
On the right is an AI-generated description of the photograph of the source artwork. Each description is the deep network’s best guess at what is going on in the image, using the image alone, without any textual information. For instance, Sonia Gomes’ fabric sculpture Timbre leads the AI to generate the description: “a person wearing colorful clothing is sitting on a stool”. Above the AI-generated description, you will see a heatmap overlaid on the original image: this indicates the points of the image that the AI considers important for generating that description. Navigating in this direction leads you to the artwork with the most similar description, using textual similarity alone – the two works are connected through the textual similarity of their (visual) appearance.
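As a hedged sketch of this direction, the snippet below captions a photograph from pixels alone, using a publicly available captioning model on the Hugging Face hub (`nlpconnect/vit-gpt2-image-captioning`) as a stand-in; this is not necessarily the model the project used.

```python
# Sketch: generating a description from an image alone, using a public
# ViT+GPT-2 captioning model as an illustrative stand-in for the
# project's captioner. "artwork_photo.jpg" is a placeholder.
from PIL import Image
from transformers import (AutoTokenizer, ViTImageProcessor,
                          VisionEncoderDecoderModel)

name = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(name)
processor = ViTImageProcessor.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

image = Image.open("artwork_photo.jpg").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Linking two works by ‘most similar description’ then reduces to a nearest-neighbour search over embeddings of these captions, analogous to the keyword search sketched earlier; a heatmap like the one shown on the right could plausibly be derived from the decoder’s cross-attention over image patches (for example by generating with `output_attentions=True`), though the project’s exact saliency method is not specified here.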
As visitors navigate the project, they create their own paths through the material, each such journey becoming a co-curated human-machine iteration of the Biennial saved to the project’s public repository.
Our GitHub repository provides a summary of the technical steps and modules used in the process. The repository was developed in collaboration with Mark Turner, Durham University.
The Next Biennial Should be Curated by a Machine, Experiment: AI-TNB is commissioned by Liverpool Biennial, 2021
Series curator: Joasia Krysa
Series technical concept: Leonardo Impett
Experiment machine learning concept and implementation: Eva Cetinić
Web development and design: MetaObjects (Ashley Lee Wong and Andrew Crowe) and Sui
AI-TNB is funded by the Arts and Humanities Research Council’s ‘Towards a National Collection’ programme under grant AH/V015478/1
Project title: Machine Curation and Visitor Interaction in Virtual Liverpool Biennial
PI: Leonardo Impett, Durham University
Co-I: Joasia Krysa, Liverpool John Moores University
PDRA: Eva Cetinić, Durham University
With thanks to Sunny Cheung
This is a repository of individual visitor journeys through the AI-generated interpretations of source artwork images and text from the Liverpool Biennial 2021. Different paths through the biennial are generated based on the similarities of the text and images, drawing connections through a machine's 'thought' process.
This website does not collect any Personal Data.