Syllabus
This seminar-style course will cover topics related to generative art and will provide tutorials on a variety of generative art tools for image, text, and audio generation. A focus of this course will be on the similarities and differences between human and machine perception. This will be tied to human-machine interfaces, as well as to how noise affects perception. In addition, this course will provide an overview of the basic operation of widely used generative models, including GANs and diffusion models. Prerequisites: None; experience with Python will be beneficial.
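As a taste of the "basic operation of diffusion models" mentioned above, here is a minimal sketch (not part of the course materials) of the forward noising process: starting from a clean image, Gaussian noise is progressively mixed in according to a noise schedule until only noise remains. The function name, toy image, and linear schedule are illustrative choices, not anything from a specific library.

```python
import numpy as np

def forward_diffusion(x0, t, betas):
    """Jump directly to noising step t using the closed form
    q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).

    This is a toy sketch of the standard DDPM forward process;
    names and the schedule below are illustrative.
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]  # cumulative signal retention
    noise = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

betas = np.linspace(1e-4, 0.02, 1000)  # linear noise schedule, 1000 steps
x0 = np.ones((8, 8))                   # a toy 8x8 "image"
xt = forward_diffusion(x0, 999, betas) # by the last step, mostly pure noise
```

A trained diffusion model learns to run this process in reverse, turning noise back into an image; that reverse (denoising) step is what tools like Stable Diffusion implement at scale.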
Course Objectives
After this course, you should be able to:
- Generate AI art (using both GUIs and APIs)
- Identify similarities between natural and artificial neural networks
- Understand adversarial examples as they pertain to neural networks
- Generate random seeds using varying forms of noise
- Understand the basic operation of text-to-image, text-to-video, autoregression, and language modeling algorithms (such as GPT-3)
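Touching on the "random seeds using varying forms of noise" objective above, a brief illustrative sketch (not course material): a fixed integer seed makes a noise latent reproducible, while the choice of noise distribution changes the starting point a generative model samples from. The function name and noise kinds here are hypothetical examples.

```python
import numpy as np

def make_latent(seed, kind="gaussian", shape=(4, 4)):
    """Produce a reproducible noise latent from an integer seed.

    Different noise distributions give different starting points
    for a generative model (e.g. the latent fed to a GAN or a
    diffusion sampler). Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)  # same seed -> same latent
    if kind == "gaussian":
        return rng.normal(0.0, 1.0, shape)
    if kind == "uniform":
        return rng.uniform(-1.0, 1.0, shape)
    if kind == "binary":
        return rng.choice([-1.0, 1.0], size=shape)
    raise ValueError(f"unknown noise kind: {kind}")

a = make_latent(42, "gaussian")
b = make_latent(42, "gaussian")  # identical to a: the seed fixes the noise
```

This is why image-generation tools expose a "seed" parameter: re-running the same prompt with the same seed reproduces the same output.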
Software & Readings
Software
You will need access to a computer capable of accessing AI generation tools including DALL-E 2, Stable Diffusion, Wombo.Art, Nightcafe, and DeepDream. Additionally, you will need access to a computer that can run Google Colab notebooks. All of these tools are web/cloud accessible.
Readings
TBD
Lectures
- Lecture 1 -- Intro to Generative Art
- Lecture 2 -- Human and Machine Perception
- Colab Notebook -- DeepDream
- Lecture 3 -- Adversarial Examples
- Lecture 4 -- Stable Diffusion
- Lecture 5 -- Human-Machine Interfaces
- A cool related paper on making art
- Colab Notebook -- Stable Diffusion
- Lecture Video