Header Image

Noise, Perception, and Learning: Applications in AI Art

MIT IAP 2023

Sign up here

Posters

Syllabus

This seminar-style course will cover topics related to generative art and will provide tutorials on a variety of generative art tools for image, text, and audio generation. A focus of this course will be the similarities and differences between human and machine perception, which ties into human-machine interfaces and into how noise affects perception. In addition, this course will provide an overview of the basic operation of widely used generative models, including GANs and diffusion models. Prerequisites: None; experience with Python will be beneficial.

Course Objectives

After this course, you should be able to:

Software & Readings

Software

You will need access to a computer capable of running AI generation tools, including DALL-E 2, Stable Diffusion, Wombo.Art, Nightcafe, and DeepDream, as well as Google Colab notebooks. All of these tools are web/cloud accessible.

Readings

TBD

Lectures

Authors and Contributors

Sarah Muschinske, PhD student in EECS, RLE (muschins@mit.edu)
Aspen Hopkins, PhD student in EECS, CSAIL
Logan Engstrom, PhD student in EECS, CSAIL
John Simonaitis, PhD student in EECS, RLE
Mikey Fernandez, PhD student in MechE, Media Lab
Chandler Squires, PhD student in EECS, LIDS