
AI Aesthetics and Fine Tuning AI Models Workshop for Beginners

21 Sep-28 Sep 2024

arebyte Gallery
London E14 0LG

Overview

Please note that this is a two-part workshop series with sessions taking place on Saturday 21 and 28 September from 10am - 12pm. One ticket covers both sessions.

With the rise of image-generating AI systems like DALL-E, Stable Diffusion, and Midjourney, creating images with computers has evolved into a text-driven approach known as 'prompting.' This process involves guiding a pre-trained AI model, via text input, to generate pastiches of its training data, with the wording of the prompt shaping the image's style, composition, and genre.
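As an illustration, a hypothetical prompt (not one from the workshop) might layer those kinds of cue like this:

```text
a lone lighthouse on a basalt cliff at dusk   [subject]
oil painting with heavy impasto brushwork     [style]
wide-angle view, low horizon                  [composition]
romantic landscape, dramatic storm light      [genre]
```

Changing any one layer while holding the others fixed is the basic lever prompting offers; fine-tuning, covered later in the series, adds control beyond what the prompt alone can reach.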

In this workshop series, participants will learn how to harness new levels of control over text-to-image (T2I) AI systems and build a small dataset of stylised images and accompanying alt-text descriptions for fine-tuning a Stable Diffusion LoRA (Low-Rank Adaptation) model. Throughout this process, and by using a fine-tuned model to generate images, participants will explore the renewed notions of agency and authorship that exist at the edges of these systems. The workshops will provide an opportunity to interrogate what happens when we insert a slice of local human decision making back into the process of creating imagery with AI.

Workshop 1: An AI Monster Hunt

Date: 21 September 2024, 10am - 12pm

Participants will work with the Stable Diffusion AUTOMATIC1111 web UI, running on RunDiffusion, to build a dataset of images for fine-tuning an AI image-generating model. Sets of images will be generated that attempt to map out a language of AI, celebrate its hallucinations and look for value within its glitched oddities.

These images, together with accompanying text descriptions (written by participants within the workshop), will be used to train a Stable Diffusion LoRA model in the interim period between Workshop 1 and Workshop 2.
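A common convention for organising such a dataset, used by kohya-style LoRA trainers (an assumption about tooling, not a detail stated by the workshop), is to pair each image with a same-named .txt file holding its caption. A minimal sketch with hypothetical filenames and captions:

```python
from pathlib import Path

# Hypothetical participant-written captions; image names are illustrative.
captions = {
    "monster_001.png": "a glitched many-limbed creature, oversaturated colours",
    "monster_002.png": "a melting architectural beast, duplicated eyes, AI artefacts",
}

dataset_dir = Path("lora_dataset")
dataset_dir.mkdir(exist_ok=True)

for image_name, caption in captions.items():
    # Each image gets a sibling caption file,
    # e.g. monster_001.png -> monster_001.txt
    caption_path = dataset_dir / Path(image_name).with_suffix(".txt").name
    caption_path.write_text(caption, encoding="utf-8")

print(sorted(p.name for p in dataset_dir.glob("*.txt")))
```

The trainer then reads each image/caption pair together, so the text descriptions written in Workshop 1 directly determine which words will later trigger the fine-tuned style.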

Workshop 2: Fine Tuning Stable Diffusion

Date: 28 September 2024, 10am - 12pm

This workshop covers the process of fine-tuning a Stable Diffusion model and explores how to work with fine-tuned AI models to generate images.
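The 'low-rank' in LoRA refers to how the fine-tune is stored: rather than rewriting a pretrained weight matrix W, training learns two small matrices A and B whose product B·A is added to W. A numerical sketch with toy dimensions (not the actual Stable Diffusion layers):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                        # layer width and LoRA rank (toy values)
W = rng.standard_normal((d, d))    # frozen pretrained weight, never modified

# Trainable low-rank factors. B starts at zero so the adapter
# initially leaves the pretrained model's behaviour unchanged.
A = rng.standard_normal((r, d))
B = np.zeros((d, r))

def adapted_forward(x, scale=1.0):
    # The rank-r update B @ A is added on top of the frozen W.
    return (W + scale * (B @ A)) @ x

x = rng.standard_normal(d)
print(np.allclose(adapted_forward(x), W @ x))  # True: B is still zero

# The adapter file only needs 2*d*r numbers instead of d*d.
print(2 * d * r, "vs", d * d)
```

Because only A and B are trained, the resulting LoRA file is small and can be mixed with, or removed from, the base model at will, which is what makes it practical to train one from a short workshop's dataset.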

Requirements:

A laptop
Prior to the session, please create an account at rundiffusion.com and add $5 of credit. This covers 10 hours of RunDiffusion time, which is enough for both workshops and for personal projects.

Workshop leader bio

James Irwin is an artist, a PhD researcher at Kingston School of Art, a Lecturer at UAL and Digital Media Tutor at the Royal Academy Schools. He works with web technologies, AI systems and digital sound and image to investigate the notion of a vital life force inherent within digital media.

By creating cognitive assemblages - made from a combination of networked digital hardware, software and human wetware - his work builds on new materialist ideas around decentring the human, undoing our role as autonomous individuals and pointing to the ways in which the production of subjectivity is offloaded onto forces outside our bodies; the posthuman is biological, but also networked and dispersed through machines.
