
Image credit: Wix, 12th Sept 2025. Prompt by JK.
AI Afterlives – examples from museum and heritage contexts
The following examples are designed to accompany the AI Afterlives workbook, produced as part of the Synthetic Pasts project. They illustrate the range of ways in which AI and automation are being used to 'revive' persons and objects from the past. To find out more about each example, follow the link provided.

An installation featuring a deepfake recreation of Salvador Dalí, known as Dalí Lives and developed in collaboration with advertising agency Goodby Silverstein & Partners, opened at the Dalí Museum in St. Petersburg, Florida, in 2019. Built from over 6,000 archival video frames and comprising 125 videos (totalling 45 minutes of footage), the system can generate 190,512 possible combinations with which to respond to visitors (Sen 2021). In these videos Dalí greets visitors, quoting both his original writings and newly scripted lines, and even offers to take a selfie with them, which he displays on screen and offers to send to their phones, encouraging social media sharing. The installation also roots Dalí in the present, featuring scenes where he carries that day’s New York Times or comments on the local weather (Lee 2019). In 2024 the Museum extended this synthetic resurrection with Ask Dalí, an LLM-powered chatbot that allows visitors to converse with the deepfaked Dalí about his life, art, and even contemporary events through a replica of his Lobster Telephone. The system was trained on transcripts of Dalí’s interviews and translated writings, including Diary of a Genius and The Secret Life of Salvador Dalí (Solomon 2024).
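
The 190,512 figure is straightforwardly combinatorial: if the clips are grouped into interchangeable slots (greeting, body, farewell, and so on), the number of distinct playback sequences is the product of the clip counts per slot. The museum has not published how its figure decomposes, so the slot sizes in the sketch below (49 × 54 × 72, one factorisation that reproduces the published total) are purely illustrative.

import random

# Hypothetical slot structure: the Dalí Museum has not published how the
# 190,512 combinations decompose, so these clip counts are illustrative only.
slots = {
    "greeting": [f"greeting_{i:03d}.mp4" for i in range(49)],
    "body":     [f"body_{i:03d}.mp4" for i in range(54)],
    "farewell": [f"farewell_{i:03d}.mp4" for i in range(72)],
}

def total_combinations(slots):
    """Distinct playback sequences: the product of clip counts per slot."""
    total = 1
    for clips in slots.values():
        total *= len(clips)
    return total

def sample_sequence(slots):
    """Pick one clip per slot to assemble a single visitor-facing response."""
    return [random.choice(clips) for clips in slots.values()]

print(total_combinations(slots))  # 49 * 54 * 72 = 190512
print(sample_sequence(slots))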

The Hello Vincent installation at the Musée d'Orsay is built around generative AI. Launched in 2023, Hello Vincent was trained on a corpus of 900 letters written by Vincent van Gogh, allowing visitors to ‘chat’ with the artist via a microphone and an interactive terminal as he paints Wheatfield with Crows. The project formed part of broader programming around the Van Gogh in Auvers-sur-Oise: The Final Months exhibition, which also included VR experiences such as Van Gogh's Palette. Its development involved a wide range of collaborators, including the tech startup Jumbo Mana, and was funded by Bpifrance and the Grand Est region, with technical support from the University of Paris-Saclay and the Jean Zay supercomputer at IDRIS.
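
Jumbo Mana has not published the architecture behind Hello Vincent, but a common pattern for grounding a chatbot in a fixed corpus such as the 900 letters is retrieval-augmented generation: retrieve the letters most relevant to a visitor's question and pass them to the language model as context. The sketch below shows that retrieval step using TF-IDF similarity; the letter snippets and prompt wording are illustrative stand-ins, not the project's actual data or prompts.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative stand-ins for the 900 letters; neither this corpus nor the
# prompt below reflects Jumbo Mana's actual system.
letters = [
    "I have painted a wheatfield under a troubled sky.",
    "Theo, I am at work on the portrait of Doctor Gachet.",
    "The colours here in Auvers have a grave beauty.",
]

vectorizer = TfidfVectorizer()
letter_vectors = vectorizer.fit_transform(letters)

def build_prompt(question, k=2):
    """Retrieve the k letters most similar to the visitor's question and
    fold them into a grounded prompt for whatever LLM drives the avatar."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, letter_vectors)[0]
    top = scores.argsort()[::-1][:k]
    context = "\n---\n".join(letters[i] for i in top)
    return ("You are Vincent van Gogh. Answer in his voice, drawing only on "
            "the letters below.\n\n" + context + "\n\nVisitor: " + question)

print(build_prompt("What are you painting at the moment?"))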

USC Shoah Foundation’s Dimensions in Testimony project enables people to ask questions that prompt real-time responses from pre-recorded video interviews with Holocaust survivors and other witnesses to genocide. The project used advanced filming techniques to capture testimonies in 3D – although for now they are mostly displayed in 2D – and harnesses a range of specialised technologies so that it can be accessed online or through on-site installations in museums. Natural language processing is used to match each question to the most relevant recorded answer, creating an interactive biography. Despite these sophisticated technical possibilities, however, the project is grounded in its commitment to testimony and oral history, as the name suggests. The idea is to facilitate conversations in which users can ask the questions that matter to them. A host of additional resources is available online to support these interactions – for example, for teachers who want to use Dimensions in Testimony in class.
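
Unlike a generative chatbot, Dimensions in Testimony never composes new speech: the system's job is to select, from the fixed set of recorded answers, the clip that best matches the visitor's question, with a polite fallback when nothing matches well. A minimal sketch of that selection logic, using illustrative questions and clip names rather than the project's real index or matching model:

import difflib

# Illustrative index mapping paraphrased questions to pre-recorded clips;
# the real system's index and matching model are more sophisticated.
clip_index = {
    "where were you born": "clip_0012.mp4",
    "what happened to your family during the war": "clip_0047.mp4",
    "what message do you have for young people": "clip_0101.mp4",
}
FALLBACK = "clip_please_rephrase.mp4"  # played when nothing matches well

def select_clip(question, threshold=0.5):
    """Return the pre-recorded clip whose indexed question best matches
    the visitor's question; nothing is ever generated, only selected."""
    q = question.lower().strip("?!. ")
    best, best_score = FALLBACK, threshold
    for indexed_q, clip in clip_index.items():
        score = difflib.SequenceMatcher(None, q, indexed_q).ratio()
        if score > best_score:
            best, best_score = clip, score
    return best

print(select_clip("Where were you born?"))   # clip_0012.mp4
print(select_clip("Do you like football?"))  # falls back to re-prompt clip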

The Living Museum is an AI-powered digital interface created by Jonathan Talmi, an independent AI engineer. It allows users to create personalised exhibits and ‘chat’ with museum objects by animating them through large language models (LLMs). Built on open data from the British Museum’s online collections (though unaffiliated with the museum), it generates interactive dialogues from each object's metadata to ‘bring museum subjects to life’. The experience aims to make cultural heritage more engaging by enabling visitors to feel the ‘presence’ of artefacts and explore history through conversational AI.
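
Talmi has not documented the prompt design, but a plausible pattern for this kind of interface is to fold an object's open metadata into a first-person system prompt for the LLM. The record and field names below are illustrative, not the British Museum's actual schema:

# Hypothetical metadata record shaped like open collection data; the field
# names are illustrative, not the British Museum's actual schema.
object_record = {
    "title": "Granodiorite stela",
    "culture": "Ptolemaic Egyptian",
    "date": "196 BC",
    "materials": "granodiorite",
    "description": "A priestly decree inscribed in hieroglyphic, Demotic and Greek.",
}

def persona_prompt(obj):
    """Fold an object's metadata into a first-person system prompt of the
    kind an interface like The Living Museum might hand to an LLM."""
    facts = "\n".join(f"- {key}: {value}" for key, value in obj.items())
    return (f"You are the museum object '{obj['title']}'. Speak in the first "
            "person, stay consistent with the facts below, and admit "
            "uncertainty rather than invent details.\n\nFacts:\n" + facts)

print(persona_prompt(object_record))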

In October 2024, Nature Perspectives collaborated with the University of Cambridge’s Museum of Zoology on a project that brought museum specimens to life using generative AI. The initiative enabled visitors to hold two-way conversations with 13 different exhibits – including a dodo skeleton, a fin whale, a red panda, and a cockroach – via QR codes that opened a chat interface on their mobile devices. These AI-powered interactions claimed to provide personalised, multilingual responses, simulating each specimen's unique voice and perspective. The project aimed to deepen public engagement with biodiversity and foster empathy toward nonhuman life by giving the animals a voice. It also served as an experiment in how AI can enhance museum experiences and influence visitors' perceptions of the natural world.
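
The QR-code layer of such a project is simple to reproduce: each specimen label encodes a URL that opens that exhibit's chat interface in the visitor's browser. The endpoint below is a placeholder, not Nature Perspectives' real service, and the sketch uses the third-party qrcode library:

import qrcode  # third-party: pip install qrcode[pil]

# Hypothetical exhibit slugs and chat URL scheme; Nature Perspectives has not
# published its real endpoints, so these are illustrative stand-ins.
BASE_URL = "https://chat.example.org/specimen/"
exhibits = ["dodo-skeleton", "fin-whale", "red-panda", "cockroach"]

for slug in exhibits:
    # Each gallery label gets a QR code that opens the specimen's chat
    # interface in the visitor's mobile browser.
    qrcode.make(BASE_URL + slug).save(f"qr_{slug}.png")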