Generative AI for Filmmakers: From Text Prompt to the Edit Bay

By York U Motion Media Studio
Online event

Overview

A hands-on course that turns complex AI tools into intuitive filmmaking techniques for creators of all levels.

Generative AI for Filmmakers is a 10-week, project-based course for directors, editors, writers, and producers ready to harness AI to streamline and elevate their creative workflows.

Start with an intuitive, visual tour of today’s AI capabilities—from storyboarding to full-scene composition—while demystifying concepts like diffusion models and latent space in filmmaker-friendly terms. Using tools like ComfyUI and ControlNet, you’ll learn how to craft cinematic visuals from simple prompts and direct AI as a trusted creative collaborator.

You’ll also work with synthetic audio, automated foley, lip-syncing, and visual consistency through custom-trained LoRA models. Practical workflows for prepping assets—color grading, upscaling, and final delivery—are covered in detail. You’ll explore 3D blocking tools for rapid previsualization and learn to guide AI as you would actors, editors, or DPs.

By the end, you’ll complete a personal project with your own production-ready video clips—proof of your new creative capabilities.

Category: Science & Tech, High Tech

Good to know

Highlights

  • 63 days 2 hours
  • Online

Refund Policy

Refunds up to 14 days before event

Location

Online event

Agenda

February 5, 2026: The State of the Art & Big Picture Concepts

A current snapshot of generative AI, covering what's possible, emerging trends, and what may be overhyped. Introduces core concepts like diffusion models and latent space, explaining how they apply to visual storytelling.

February 12, 2026: From Prompt to Image to Video to DI-Ready Asset

A walkthrough of the current state-of-the-art pipeline for professional video production. Topics include prompt engineering, CLIP text encoders, VAEs, latent space, and basic image editing using multimodal large language models.
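As a filmmaker-friendly illustration of the latent-space idea in this session: in Stable Diffusion-style pipelines, the VAE downsamples each spatial axis of the image by a factor of 8 and stores 4 channels per latent position, so the diffusion model denoises a much smaller tensor than the full-resolution frame. A minimal sketch of that arithmetic (values are the common SD 1.x/SDXL defaults, not universal to every model):

```python
# Why diffusing in "latent space" is cheap: the VAE compresses the
# image before denoising. Defaults below assume Stable Diffusion-style
# models (8x spatial downsampling, 4 latent channels).

def latent_shape(width, height, downsample=8, channels=4):
    """Return the (channels, height, width) shape of the latent tensor."""
    return (channels, height // downsample, width // downsample)

def compression_ratio(width, height, downsample=8, channels=4, rgb=3):
    """How many fewer values the denoiser handles vs. raw RGB pixels."""
    pixels = width * height * rgb
    c, h, w = latent_shape(width, height, downsample, channels)
    return pixels / (c * h * w)

print(latent_shape(512, 512))       # (4, 64, 64)
print(compression_ratio(512, 512))  # 48.0 — 48x fewer values to denoise
```

This is why prompt-to-image tools can generate at film-friendly resolutions: the heavy iterative denoising happens on the compressed latent, and the VAE decoder expands the result back to pixels only once at the end.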

February 19, 2026: Deep dive into ComfyUI

Covers installation, adding models and custom nodes, using template workflows, and alternatives for low-VRAM GPUs.
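For those who want to arrive with ComfyUI already running, a minimal sketch of the standard install from the project's GitHub repository (steps and flags may change between releases; check the ComfyUI README for your platform and GPU before the session):

```shell
# Clone ComfyUI and install its Python dependencies.
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt

# Model checkpoints go in models/checkpoints/ and community nodes in
# custom_nodes/ (paths assumed from the default repo layout).

# Launch the local web UI (default port shown).
python main.py --listen 127.0.0.1 --port 8188

# On low-VRAM GPUs, flags such as --lowvram trade speed for memory.
```

The session also covers template workflows, which load directly into the node graph once the UI is running in your browser.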

Organized by

York U Motion Media Studio


CA$689.30
Feb 5 · 3:00 PM PST