AI-generated film [duration: 19 minutes 19 seconds]
The Wizard of AI is a twenty-minute, 99% AI-generated visual essay in which a hoodie-wearing, faceless ‘AI Collaborator’, voiced by the artist, is our critically incisive guide. Defining this particular epoch as one of “wonder-panic”, Alan Warburton takes us on a speeding visual rollercoaster of flawlessly executed visual styles, encompassing histories of comic books, animation, VFX and film.
As we journey from the AI candy ‘ooohs’ and ‘ahhhs’ of new aesthetic or artistic reach, to the lows of recognising the exploitation, erasure and potential redundancy of creative practice, Warburton offers non-judgemental, probing and provocative perspectives from both the ‘wonder’ and the ‘panic’ camps. We are introduced to the doom loops of inbuilt bias these systems have been designed to perpetuate, and the complexities of hallucinating a future based on AI models’ creatively suspect backward-compatibility with the past.
Alan Warburton says:
Generative AI is something that will have a deep and permanent effect on the ‘culture industries’ – by which I mean curators, art institutions, art schools, design firms and so on. It’s not another trend, it’s a tectonic shift in the currency and culture of images that we can’t reduce to ‘deepfakes’ or ‘post-truth’: it’s a change in the relationship between humans and images. It’s an epistemological break!
The tools are inherently problematic, and directly contribute to real conditions of professional and socioeconomic adversity that affect me directly. Yet instead of boycotting, I’m playing in the sandbox and seeing what the tools tell me. I do this to demystify and educate, but also because no matter how succulent and seductive an AI image is, the real juice is in analysis, criticism and reflection.
The Wizard of AI is approximately 99% AI-generated. Warburton produced a few thousand images and video clips, using multiple generative AI tools, including Midjourney, DALL-E 3, Stable Diffusion, Runway, Pika and HeyGen. Of those few thousand, hundreds made it into his final film.
Warburton continues:
The animation was done over an intense three-week period during which the updates to the tools I was using were significant – historic, even. Videos that I generated for inclusion on the 20th of October were generated again in early November (just days before the Summit), with improvements in quality analogous to the kinds of improvements we saw in digital cameras between 1995 and 2000. This meant that as I developed the film I was using the most advanced generative AI tools within hours or minutes of them becoming available.
Thanks to everyone who helped with concept development, even if your work was lost in the edit: John Butler, Samine Joudat, Ben Dosage, @dzennifer, Ben Dawson, Alejandro González Romo, @symbios.wiki, Ugur Engin Deniz.
Note: This work is intended to be a non-commercial work of critical/educational/satirical commentary. Under UK law, this is referred to as ‘fair dealing’ and protects the work from claims of copyright infringement.
AI tools used: Runway Gen 2 to generate 16:9 ‘AI Collaborator’ video clips; Midjourney, Stable Diffusion and DALL-E 3 to generate still images; Pika to generate 3-second fish loops; TikTok for detective speech synthesis; HeyGen to generate the AI talking detective head; Adobe Photoshop AI to expand images; Topaz Gigapixel AI to upscale images; and Adobe After Effects to put everything together.
Alan Warburton is a London-based researcher, artist, animator, filmmaker, writer, curator and critic. He is currently researching digital images and labour for a PhD at Birkbeck’s Vasari Research Centre for Art and Technology. Alan’s work —…