Creator Flow: Master Guide

This guide is intended to be the practical source of truth for Creator Flow as it exists in the product today. It is written from the current workflow surfaces, state logic, and editing behavior, not from an abstract marketing description.

Use it to decide what to set up first, what actually blocks scene generation, when to use bulk systems, how Scene Forge fits into the pipeline, and where to spend time if you want higher quality output with less rework.

Table of Contents

  1. Workflow at a Glance
  2. Tab Model
  3. Project Settings and Universe Context
  4. Story Review and Scene Generation Gate
  5. Character System
  6. Locations, Zones, and Angles
  7. Generate All Assets
  8. Scenes Launch and Scene Generation
  9. Scene Manager and Scene Forge
  10. Shot Workflow
  11. Scene Video Editor and Stitching
  12. Scene and Chapter Audio
  13. Automation, Autopilot, and Bulk Tools
  14. Studio Assistant
  15. Optimization Playbook
  16. Troubleshooting and Operational Notes

1. Workflow at a Glance

Creator Flow is an all-in-one AI content creation workflow for building cinematic universes, generating scene-based output, and finishing that work inside the same project structure. It is designed for creators who care about continuity, reuse, and revision speed, not just one-off prompt results.

What the workflow is trying to optimize

  • Consistent characters, locations, and style across scenes and projects
  • Faster revisions through reusable project context instead of manual re-prompting
  • A cleaner path from story setup into shots, video, stitch, and audio
  • Fewer tool handoffs and less continuity loss during production

The practical production stack

  1. AI-native entry that turns a simple idea into a cinematic universe
  2. Project settings and reusable universe context
  3. Story review at chapter and story beat level
  4. Character and location grounding assets
  5. Scene generation and shot-level execution
  6. Scene video editing and stitching
  7. Scene audio, chapter audio, and movie-level finalization tools

The core mental model is simple: treat each project as a production system, not a prompt box. The more stable your canon is at the top of the workflow, the less cleanup you need later.

2. Tab Model

Creator Flow tab order is Project → Story → Characters → Locations → Scenes. This is the primary setup and production path users should follow.

Primary tabs

  • Project: high-level setup, style, aspect ratio, advanced settings, asset tools
  • Story: chapter review and approval
  • Characters: base characters and variants
  • Locations: locations, zones, and angles
  • Scenes: generation launch and Scene Forge

Secondary surfaces

  • Universe Generator: accessed from Project for world-building and poster/thumbnail context
  • Advanced Settings: modal from Project for story structure, camera style, and lore
  • Scene Video Editor: available after image, video, and audio are generated for a Shot
  • Scene Audio Editor: available after video stitching and finalization

Guided setup exists to help new users move through the early project stages, but advanced users are not forced into a rigid one-by-one lock. The operational scene-generation gate is Story Review.

3. Project Settings and Universe Context

Project is the setup hub. This is where creators set title, summary, aspect ratio, content types, and visual style.

What belongs on Project now

  • Project title and summary
  • Aspect ratio, which sets the format of your final video output
  • Visual style, which anchors the look of every asset generated across the Cinematic Universe
  • Access to the Universe Generator for one-off images and videos with Universe context

What Advanced Settings includes

  • Story structure, which provides secondary guidance for overall story direction and rewriting
  • Camera style, which determines which cinematic camera techniques appear in your video generations
  • Important lore points that ground the Cinematic Universe

Universe Lore context remains central to multi-project continuity. In practice, visual style and project intent are set directly on Project, while deeper Universe Lore controls live in Project-accessed Advanced Settings.

Scratch vs existing universe

  • Scratch: use when you are defining a new world from zero
  • Existing universe: use when reuse of assets and story continuity matter more than reinvention

If you are extending an existing universe, avoid overwriting too much. Add only what is new. The purpose of the universe layer is to stop you from rebuilding the same visual canon every time.

4. Story Review and Scene Generation Gate

Reviewing the generated Story is the only hard gate for scene generation in the current workflow. Characters and locations help downstream quality, but they do not block scene generation by themselves.

What Story is doing operationally

  • Stores chapters and story beats
  • Acts as the review step before scene generation
  • Prevents the workflow from skipping straight into scenes with an unreviewed draft

Current behavior is important here: when you enter the Story tab, the app marks the current unapproved chapters as reviewed. That means Story is not just documentation, it is an operational checkpoint in the flow.

How Story drives scenes and shot lists

  • Story beats are the narrative source used to create scene coverage.
  • Scene generation expands those beats into filmable scene units with dramatic intent.
  • For short-form projects, those generated scenes are automatically expanded into shot lists.
  • For longer-form projects, the Story still anchors scene direction before shot-level production in Scene Forge.

In practical terms, Story is the narrative blueprint. If Story is weak, scenes and shot lists will drift. If Story is clear and intentional, downstream scene and shot generation stays aligned.

Short-form note

For some short-form content types like Shorts and Advertisements, the Scenes launch surface will act as a shot-list-first pipeline. In those cases the scene list appears after the short-form generation stage is ready, rather than behaving like a long-form chapter-first workflow.

5. Character System

Characters are grounding assets, not just labels. The base character image is the anchor that supports continuity across shot prompts, variants, and later scene execution.

Current rules that matter

  • Main character generation is the primary setup milestone for character progression
  • Variants depend on the base character image existing first
  • Variants should represent purposeful recurring states, not random experiments
  • Character Manager and Shot Pre-Production now share more of the same selection patterns

Object ownership and dependency behavior

  • Objects that truly belong to a character can be marked as owned by that character.
  • Owned objects are dependency-linked so owner identity is established first for continuity.
  • Bulk generation tools queue owner-first automatically when object ownership is defined.
  • Manual object generation can prompt you to generate the owner first to avoid visual drift.

This reduces a common continuity failure where an owned prop looks disconnected from the character who uses it across scenes.
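The owner-first queuing described above amounts to ordering generation so that every owner is established before the objects it owns. A minimal sketch of that ordering follows; the data shapes and names (`owner`, `generated`, the asset dicts) are illustrative assumptions, not Creator Flow's actual data model:

```python
# Hypothetical sketch of owner-first queuing: an owned object waits
# until its owning character's image exists. Field names are
# illustrative, not Creator Flow's real schema.

def build_generation_queue(assets):
    """Order assets so every owner precedes the objects it owns."""
    by_id = {a["id"]: a for a in assets}
    queue, seen = [], set()

    def visit(asset):
        if asset["id"] in seen:
            return
        owner_id = asset.get("owner")
        if owner_id and not by_id[owner_id].get("generated"):
            visit(by_id[owner_id])   # establish owner identity first
        seen.add(asset["id"])
        queue.append(asset["id"])

    for a in assets:
        visit(a)
    return queue

assets = [
    {"id": "sword", "owner": "hero", "generated": False},
    {"id": "hero", "owner": None, "generated": False},
]
print(build_generation_queue(assets))  # hero is queued before sword
```

The same ordering rule explains the manual-path behavior: if you ask for the sword first, the system prompts for the hero first instead.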

How character references are used downstream

  • The base character image is the primary identity reference for downstream image generation.
  • When a variant is selected, that variant image/state becomes the active reference for the shot.
  • Shot image generation uses selected character references to keep identity, wardrobe, and silhouette consistent.
  • Shot video generation can carry those character references forward so motion stays tied to the same on-screen identity.

In practice, better character references upstream usually means fewer prompt fixes and fewer retries in Shot Pre-Production and Shot Production.

What to optimize for

  • One strong base character before a large variant tree
  • Clear immutable identity markers first: face, silhouette, wardrobe anchors
  • Variant prompts only for recurring state changes that will be reused

Good reason to create a variant

Costume change, battle damage, time-of-day state, disguise state, or any look that will reappear across multiple shots or scenes.

Bad reason to create a variant

You are still trying to figure out the base look. That usually means the base character is not stable enough yet and you should fix the source identity first.

If your output quality is drifting, the first place to audit is usually not the shot prompt. It is often the character canon itself: too many loose variants, unclear identity anchors, or a weak base image.

6. Locations, Zones, and Angles

Locations are the environmental continuity layer. Zones (sublocations) and angles are subordinate structures that let you preserve geography while changing framing intentionally.

Current hierarchy

  • Location: the master environment identity
  • Zone / sublocation: a reusable sub-area inside the location
  • Angle: a reusable viewpoint for either a location or a zone

The current product direction is to make prompting do more of the camera work over time, and use angles as support rather than forcing everything through angle assets. In other words: prompting should increasingly drive camera positioning; angles are there when repeatable framing is useful.

Best practice

  • Create the base location first
  • Add zones only for places you will revisit
  • Add angles when you need repeatable framing, not because every location must have many angles
  • Name zones and angles by function, not with vague left/right labels

How location references are used downstream

  • The base location image is the primary environment anchor for shot image generation.
  • When a zone is selected, that sublocation context narrows environment geography for the shot.
  • When an angle is selected, framing intent is carried forward as a reusable camera viewpoint.
  • Shot image generation uses selected location/zone/angle references to keep environment continuity stable.
  • Shot video generation can reuse that environment context so motion stays grounded in the same world space.

Strong location references upstream reduce downstream cleanup, especially for scene-to-scene geography, camera consistency, and continuity between opening frames and motion outputs.

If you are overbuilding locations before the story has started to harden, you are usually moving too early into asset exhaust. The goal is not to build a huge library; it is to build the minimum stable world that supports the scenes you actually need.

7. Generate All Assets

Generate All Assets is the project-level background asset run for Creator and Pro plans. It is designed to prepare missing universe assets without requiring the user to manually click every character, location, zone, variant, or angle one by one.

What it currently covers

  • Base characters
  • Character variants
  • Base locations
  • Zones
  • Location angles
  • Zone angles

Current dependency behavior

The runner uses priority waves, but stages are treated as priorities rather than rigid barriers. The practical rule is: if a dependency is satisfied, the target can start even if other work from the earlier wave is still running elsewhere.

  1. First wave: main character image and all base locations
  2. Second wave: main character variants, all other characters, all location angles, and all zones
  3. Third wave: other character variants and all zone angles

Because the runner is dependency-aware, reality can be more dynamic than the waves suggest. Example: if the main character already exists but locations do not, other base characters and main-character variants can start immediately while locations are still being generated.
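The "priorities, not barriers" rule can be sketched as a scheduler that starts any target whose dependencies are satisfied, using the wave only as an ordering hint. This is a hypothetical model under assumed names (`wave`, `deps`), not the runner's actual implementation:

```python
# Hypothetical sketch of a dependency-aware runner: waves express
# priority, but any target whose dependencies are satisfied may
# start, even while unrelated earlier-wave work is still running.

def schedule(targets, completed):
    """Return the targets that can start now, ordered by wave priority."""
    ready = [
        t for t in targets
        if t["id"] not in completed
        and all(dep in completed for dep in t["deps"])
    ]
    return sorted(ready, key=lambda t: t["wave"])

targets = [
    {"id": "main_character", "wave": 1, "deps": []},
    {"id": "base_location",  "wave": 1, "deps": []},
    {"id": "mc_variant",     "wave": 2, "deps": ["main_character"]},
    {"id": "location_angle", "wave": 2, "deps": ["base_location"]},
]

# Main character already exists but the location does not: the
# variant can start immediately while the location still generates.
ready = schedule(targets, completed={"main_character"})
print([t["id"] for t in ready])  # ['base_location', 'mc_variant']
```

This mirrors the example in the text: `mc_variant` does not wait for `base_location`, because its own dependency is already satisfied.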

User-facing behavior that matters

  • Selections persist on the universe document, so reopening the UI preserves your choices
  • Upstream dependencies auto-select when you choose a dependent category
  • Deselecting an upstream category removes dependent categories too
  • The Project surface shows when an asset run is active in the background
  • The run can recover because generating/completed state is reconciled back into the source docs
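The selection rules in that list (upstream auto-select, downstream removal on deselect) can be sketched as a walk over a category dependency graph. The category names and the `UPSTREAM` mapping below are illustrative assumptions, not the product's real schema:

```python
# Hypothetical sketch of the selection rules: choosing a dependent
# category auto-selects its upstream chain, and deselecting an
# upstream category removes everything that depends on it.

UPSTREAM = {  # child category -> the category it depends on
    "character_variants": "base_characters",
    "zones": "base_locations",
    "location_angles": "base_locations",
    "zone_angles": "zones",
}

def select(selected, category):
    selected = set(selected) | {category}
    while category in UPSTREAM:          # walk up the chain
        category = UPSTREAM[category]
        selected.add(category)
    return selected

def deselect(selected, category):
    removed = {category}
    changed = True
    while changed:                       # cascade down to dependents
        changed = False
        for child, parent in UPSTREAM.items():
            if parent in removed and child not in removed:
                removed.add(child)
                changed = True
    return set(selected) - removed

sel = select(set(), "zone_angles")
print(sorted(sel))   # zone angles pull in zones and base locations
sel = deselect(sel, "base_locations")
print(sorted(sel))   # removing the upstream clears its dependents
```

The invariant both functions preserve is the one the UI relies on: a selected category never lacks its upstream dependency.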

The right use case for this tool is not “generate everything always.” It is “prepare the world enough that scene and shot production stop getting blocked by missing canon.”

8. Scenes Launch and Scene Generation

When scenes do not exist yet, the Scenes tab acts as a launch surface rather than a manager. This is intentional. Scene generation is launched from the Scenes tab.

What the Scenes launch panel is telling you

  • Project summary and current story context
  • Whether Story is reviewed and scenes are ready to generate
  • Scene generation progress and failure state if a run is active or failed
  • Special short-form status when a shot-list-first path is active

A key product behavior here is that character and location generation are optional preparation, not hard blockers. If your Story is reviewed, you can generate scenes and continue building assets in parallel.

When Projects auto-open to Scenes

Once setup is complete enough and scenes already exist, the app can default back into Scenes. The logic will not skip you there while Story is still unreviewed; until Story has been reviewed, Project and Story remain the expected path.

9. Scene Manager and Scene Forge

Once scenes exist, the Scenes tab becomes Scene Manager. This is where you navigate chapters, scenes, and then individual shots. It is also where Scene Forge takes over for shot-level production.

How the scene phase actually breaks down

  • Review generated scenes and select the one you want to work on
  • Open Scene Forge for shot-by-shot execution
  • Generate or refine shot images
  • Generate or refine shot videos
  • Stitch and edit the scene sequence
  • Move into scene audio and, later, chapter audio

Scene Forge and advanced scene editing are still optimized for desktop. Mobile supports more of the setup flow now, but if you are doing deep scene work, desktop remains the expected environment.

10. Shot Workflow

Shot Pre-Production

Shot Pre-Production is where you define the opening frame. This is still one of the highest-leverage parts of the entire workflow because a weak start frame often creates downstream motion problems that are expensive to fix later.

  • Select the correct character, variant, location, zone, and angle context
  • Write the image prompt as an opening-frame directive, not as a full action paragraph
  • Generate missing asset dependencies inline when needed
  • Use the validation language to understand exactly what is missing

Shot Production

Shot Production converts the approved opening frame into motion. This is where duration, motion prompt, continuity references, and narration context start interacting more aggressively.

  • One dominant action beat per shot is usually stronger than several competing actions
  • Camera verbs should be explicit and physically plausible
  • Use reference support when identity, partial visibility, or spatial continuity is high-risk
  • Shorter durations are usually better unless the action arc genuinely needs time

Shot Post-Production

Shot Post-Production is where you evaluate generated outputs, choose the keeper for each shot, and lock that shot into the scene timeline. This is the quality-control pass before scene-level stitching.

  • Review shot results against story intent, continuity, and pacing
  • Select the best take and re-run weak takes instead of carrying them into stitch
  • Finalize the shot version you want represented in Scene Video Editor
  • Use shot-level fixes early so scene-level editing is focused on timing, not rescue work

Current validation systems are intentionally more precise now. If the blocker is a missing variant, missing zone image, or missing angle image, the UI should say that directly rather than collapsing everything into a generic “missing character” or “missing location” message.

11. Scene Video Editor and Stitching

After shot videos are finalized, the scene moves into Stitch Preview. This stage includes a scene video editor for controlling sequence timing before audio polish.

What the Scene Video Editor currently supports

  • Sequence preview of the stitched clips
  • Timeline-based trimming and clip ordering
  • Split, duplicate, delete, restore, undo, and redo
  • Clip-level settings and timeline cursor behavior
  • Uploaded video clips that can be woven into the timeline

The important product rule is that this editor is for video timing and sequence decisions. Audio editing comes after. That separation is deliberate because locking picture first produces cleaner audio work and safer finalization cascades.

Editing rule of thumb

Use the video editor to solve pacing, order, trim, and clip-selection questions. Do not move into final scene audio until you are satisfied that the visual cut is close to done.

12. Scene and Chapter Audio

Audio editing is layered on top of finalized picture. The scene audio editor is always the first audio surface that matters. In the current workflow, even single-scene chapters should go through scene audio first rather than jumping straight to a chapter-only mix.

Track model: shot/scene tracks vs additional tracks

  • Shot / scene tracks: timeline-aligned tracks derived from finalized visual structure.
  • Additional tracks: user-added layers such as narration, music, ambience, and custom SFX.
  • Shot/scene tracks should be treated as structural timing anchors for sync.
  • Additional tracks should be used for creative mix decisions on top of that structure.

Practical rule: lock picture and structural timing first, then shape the emotional mix with additional tracks. This keeps sync stable while still giving full creative control over final sound design.

Current audio behavior that matters

  • Scene audio editor comes before chapter audio
  • When a chapter has only one scene, scene audio approval can cascade chapter finalization
  • When that chapter is also the only chapter, the approval can cascade up to movie finalization
  • Chapter audio uses scene-linked tracks rather than an empty generic rack

This is important because it keeps the workflow in the natural order of operations. You finalize the scene’s own timing and mix first, then let the system promote that state upward when the structure is simple enough to justify it.
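The cascade logic can be sketched as a simple upward promotion after scene audio approval: a single-scene chapter finalizes with its scene, and a single-chapter movie finalizes with that chapter. This is an illustrative model under assumed field names, not the product's actual state machine:

```python
# Hypothetical sketch of the approval cascade: approving scene audio
# promotes finalization upward only when the structure is simple
# enough (one scene in the chapter, then one chapter in the movie).

def approve_scene_audio(movie, chapter_idx, scene_idx):
    chapter = movie["chapters"][chapter_idx]
    chapter["scenes"][scene_idx]["audio_final"] = True

    if len(chapter["scenes"]) == 1:
        chapter["final"] = True        # single-scene chapter cascades
        if len(movie["chapters"]) == 1:
            movie["final"] = True      # single-chapter movie cascades
    return movie

movie = {"final": False,
         "chapters": [{"final": False,
                       "scenes": [{"audio_final": False}]}]}
result = approve_scene_audio(movie, 0, 0)
print(result["chapters"][0]["final"], result["final"])  # True True
```

The point of the guard conditions is the "justify it" clause above: the promotion only fires when there is nothing else at that level left to review.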

Mixing priority

  1. Picture timing and scene cut
  2. Narration timing and intelligibility
  3. SFX and music balance
  4. Chapter or movie-level polish only after the lower level is stable

13. Automation, Autopilot, and Bulk Tools

Creator Flow includes several automation layers, but they solve different problems. The most common mistake is using the wrong automation system at the wrong stage.

Generate All Assets

Project-level universe preparation. Best when the world is underbuilt and you want the system to prepare characters, variants, locations, zones, and angles in the background.

Generate All Characters / bulk location tools

Category-specific bulk preparation. Best when one part of the world is lagging behind the rest.

Autopilot

Scene-level shot image generation. Best when the scene structure is already good and you want missing shot images generated in a more hands-free, context-aware way.

Generate All Missing

Shot / scene-level unblocking tool. Best when you have specific missing dependencies and want the system to prepare them from the current production context.

Use automation to remove repetitive setup work, not to avoid reviewing the story or the world canon. The highest-quality projects usually combine manual control for high-leverage decisions with automation for coverage and cleanup.

14. Studio Assistant

On the Creator Flow make page, Studio Assistant has project and workflow context and can support hands-on work such as rewriting prompts, helping with setup fields, and assisting with creator-flow operations.

Best use cases

  • Rewriting image or video prompts with existing field context in mind
  • Improving narration beats or tightening scene summaries
  • Helping you decide what to fix first when a workflow is blocked
  • Explaining why a scene or shot is not ready to progress

The best way to use Studio Assistant is as a focused collaborator. Give it context, ask for one concrete action, and keep it tied to the current step of the workflow rather than asking for a vague “make this better” pass across everything at once.

15. Optimization Playbook

If your goal is to get better output with less waste, the highest-leverage improvements are usually operational, not model-related.

Recommended operating order

  1. Write a project summary that actually constrains the work
  2. Review Story before touching scene generation
  3. Generate one strong main character and one strong base location early
  4. Use zones and variants only where recurrence justifies them
  5. Use Generate All Assets when the world is underbuilt, not by default on every project
  6. Lock picture timing before spending time on audio polish

Quality heuristics that usually work

  • Short, precise prompts outperform flowery prompts in production contexts
  • One dominant action beat per shot is better than several competing beats
  • More variants is usually not better; better canon is better
  • Angles are useful, but prompting should still drive composition intent
  • If revision speed is collapsing, your project canon is probably underdefined or overscattered

16. Troubleshooting and Operational Notes

If scene generation fails

The current product surfaces failures on the Scenes launch UI instead of silently doing nothing. The correct first move is to retry from Scenes after reviewing the failure, not to hunt around the Project tab for a hidden loader.

  • Retry from the Scenes launch surface so the run state and UI stay aligned.
  • Read the failure message first and fix the specific blocker before retrying.
  • If Story was recently rewritten, re-check Story intent before launching again.

If Scenes looks stuck on loading

For short-form projects, scene generation and shot generation can run as a pipeline. During this period, the loading surface may remain visible until shots are ready, even if scenes already exist.

  • Wait for short-form pipeline status to move past scene and shot generation.
  • Avoid double-triggering generation while a pipeline run is active.
  • If the status does not change for an unusually long window, check chapter and scene readiness.

If Generate All Assets seems incomplete

Reopen the asset UI and inspect the current missing state. The run is designed to skip completed work and recover from persisted generating state, but the right mental model is still dependency-aware preparation rather than “every selected thing fires at once immediately.”

  • Remember dependencies: some targets will wait for upstream assets to complete first.
  • Re-check selections if you changed categories between runs.
  • Use the current missing-state view as the source of truth, not old assumptions.

If a character/location says generating but an image is already present

State can occasionally lag while background polling catches up. If you see both “generating” and a finished image, wait for the next poll cycle and then re-open the relevant manager view.

  • Treat the latest image state in the manager as your primary continuity source.
  • Avoid launching duplicate generation jobs for the same asset while state is reconciling.

If shot quality is inconsistent

  • Audit the base character or location canon first
  • Reduce prompt ambiguity
  • Check whether you are using the wrong zone or variant
  • Use references where continuity risk is genuinely high

If shot generation fails or outputs are weak

  • Confirm selected character/variant/location/zone/angle assets are actually generated.
  • Simplify the shot intent to one dominant action beat and one camera intention.
  • Regenerate only the weak shot first before making scene-wide changes.
  • Use Shot Post-Production to lock keepers before moving to stitch.

If you are on mobile

Mobile supports setup, story review, character/location prep, and scene launch well. Advanced scene editing remains primarily a desktop workflow. Use desktop for the deepest Scene Forge, stitch-editor, and audio-editor work.

When to switch to desktop

  • Heavy scene/shot iteration across many assets
  • Timeline-intensive stitch editing
  • Detailed scene/chapter audio mix work
  • Complex troubleshooting across multiple production stages

Need direct help?

If the flow is blocked or behavior looks wrong, contact support with your project URL, chapter/scene context, and what you expected to happen: cannonstudio.app/contact.

Final rule

Creator Flow performs best when you use it as a production system: stabilize canon early, review Story before generating scenes, treat Scene Forge as a structured execution environment, and postpone audio until picture is genuinely stable.

Continue Learning

Use these adjacent guides to improve prompts, compare workflows, and decide where automation actually helps.