Media Generation Canvas: Node Workflows

Flowmo's Media Gen Canvas is a visual, node-based workspace where you build multi-step media pipelines by connecting nodes together. Think of it as a creative flow chart: you wire up text prompts, image uploads, AI generators, editing processors, layer compositors, and render outputs into a single graph -- then watch data flow through the pipeline automatically. Every node does one job, and connections between nodes carry text, images, or video from one step to the next.
The Media Gen Canvas is built on React Flow and lives inside the AI sidebar. It supports undo/redo, auto-saving to Firestore, full-screen mode, and a mini-map for navigating complex workflows.
Prerequisites
Before you start building node workflows, make sure you have:
- An active Flowmo project open in the designer. The canvas saves generated assets into your project's media library automatically.
- A signed-in account. AI generation nodes require authentication and consume AI credits.
- A basic idea of what you want to create. Node workflows shine when you chain multiple operations together -- for example, generating an image, removing its background, then compositing it with styled text.
How the Canvas Works

The canvas is an infinite, pannable workspace. You add nodes to the canvas, then draw connections (called "noodles") between them. Data flows downstream through these connections in one direction: from a node's output handles (on its right side) to the input handles (on the left side) of the next node.
Each connection carries a specific data type, shown by color:
| Data Type | Color | Carries |
|---|---|---|
| Text | Sky blue | Prompt text and descriptions |
| Image | Violet | Base64 images or image URLs |
| Video | Amber | Video files or URLs |
| Layer | Rose | Composition layer data |
| Compose | Emerald | Full composition output |
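The five connection types above behave like a small type system: a noodle is only valid when the output and input types match. The sketch below models that rule in TypeScript; the type names and compatibility function are illustrative, not Flowmo's actual internals.

```typescript
// Illustrative model of the canvas's five connection data types.
type PortType = "text" | "image" | "video" | "layer" | "compose";

// Display color per type (mirrors the table above).
const PORT_COLORS: Record<PortType, string> = {
  text: "sky",
  image: "violet",
  video: "amber",
  layer: "rose",
  compose: "emerald",
};

// A connection is valid only when the output and input types match.
function canConnect(output: PortType, input: PortType): boolean {
  return output === input;
}
```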
The canvas uses a fingerprinting system to keep things efficient. Each node generates a short hash of its media data, and downstream nodes only re-process when that fingerprint actually changes. This prevents unnecessary re-renders when you move nodes around or make cosmetic changes.
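The fingerprinting idea can be sketched in a few lines: hash the node's media payload, and only re-process downstream when the hash changes. The FNV-1a hash below is an assumption for illustration; Flowmo's actual hash function may differ, but the change-detection logic is the same.

```typescript
// Illustrative fingerprint: a short FNV-1a hash of a node's media payload.
// (Assumed hash choice -- the real implementation may use something else.)
function fingerprint(data: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < data.length; i++) {
    h ^= data.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

// A downstream node re-processes only when the fingerprint changes.
function shouldReprocess(prevFingerprint: string | null, data: string): boolean {
  return prevFingerprint !== fingerprint(data);
}
```

Because the hash covers only the media data, moving a node around the canvas or renaming the flow never changes the fingerprint, so no downstream work is triggered.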
Node Types Reference
The canvas offers 18 node types organized into five categories: Inputs, Generate, Edit, Compose, and Render.
Input Nodes
These are your starting points. Every workflow begins with one or more input nodes.
| Node | Menu Label | What It Does |
|---|---|---|
| Text Prompt | Inputs > Text Prompt | A text field where you type prompts and descriptions. Has built-in "Generate Idea" and "Add Details" buttons that use AI to help you write better prompts. |
| Image Upload | Inputs > Image Upload | Upload an image from your computer or paste a URL. The image becomes available as a base64 source for downstream nodes. |
| Video Upload | Inputs > Video Upload | Upload a video file. Works like Image Upload but carries video data downstream. |
| Import from Selection | Inputs > Import from Selection | Pulls the currently selected element from your Flowmo designer canvas directly into the media workflow. Only appears when you have an element selected on the main canvas. |
Generate Nodes
These nodes use AI to create new media from your inputs.
| Node | Menu Label | What It Does | Credits |
|---|---|---|---|
| Image Generator | Generate > Gen Image | Generates images from text prompts and/or reference images using Nano Banana. Supports aspect ratio selection (1:1, 3:4, 4:3, 16:9, 9:16), negative prompts, and batch generation of up to 4 images at once. | 1 image credit per image |
| Video Generator | Generate > Gen Video | Generates video clips from text prompts, with optional start image for guided generation. Choose duration (4, 6, or 8 seconds), aspect ratio, and whether to generate audio. Can also extend existing videos. | 1 video-second credit per second |
| SVG Generator | Generate > Gen SVG | Creates vector SVGs from text descriptions. Choose from icon, illustration, or flat styles. Outputs both SVG markup and a rasterized PNG preview. | 1 text credit |
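The options and credit costs in the table above can be summarized as request shapes. The interface and function names below are hypothetical; the option values and the pricing rules (1 image credit per image, 1 video-second credit per second) come from the table.

```typescript
// Hypothetical request shapes mirroring the options in the table above.
type AspectRatio = "1:1" | "3:4" | "4:3" | "16:9" | "9:16";

interface ImageGenRequest {
  prompt: string;
  negativePrompt?: string;
  aspectRatio: AspectRatio;
  count: 1 | 2 | 3 | 4; // batch generation of up to 4 images
}

interface VideoGenRequest {
  prompt: string;
  startImage?: string;   // optional base64 image for guided generation
  duration: 4 | 6 | 8;   // seconds
  aspectRatio: AspectRatio;
  generateAudio: boolean;
}

// Credit cost per the table: 1 image credit per image,
// 1 video-second credit per second of video.
function imageCredits(req: ImageGenRequest): number {
  return req.count;
}
function videoCredits(req: VideoGenRequest): number {
  return req.duration;
}
```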
Edit Nodes
These nodes process and transform existing media.
| Node | Menu Label | What It Does | Credits |
|---|---|---|---|
| Brush Edit | Edit > Brush Edit | Paint a mask on an image to define an area, then describe what you want to change. The AI inpaints the masked region based on your text prompt. | 1 image credit |
| Remove BG | Edit > Remove BG | Strips the background from an image using AI-powered background removal. Runs entirely in the browser. | Free |
| Vectorize | Edit > Vectorize | Converts a raster image into an SVG vector. Choose from Detailed, Simplified, or Flat vectorization methods. Optionally remove the background or crop before vectorizing. | Free |
| Styled Text | Edit > Styled Text | Renders text with custom typography into a rasterized image. Configure font family, size, weight, line height, letter spacing, fill colors (including gradients), text alignment, direction (LTR/RTL), and transforms. Auto-rasterizes on changes. | Free |
| AI Editor | (via noodle drop) | A masked editing node that supports both inpainting and background removal modes. Connect a text prompt and an image to describe edits. | 1 image credit (inpaint) / Free (remove-bg) |
Compose Nodes
These nodes combine multiple media sources into a single composition.
| Node | Menu Label | What It Does |
|---|---|---|
| Layer Comp | Compose > Layer Comp | A multi-layer image compositor powered by Konva. Arrange, transform, rotate, and blend image/text/shape layers on a shared canvas. Supports blend modes (multiply, screen, overlay, etc.), clipping masks, alpha masks, paint overlays, and layer grouping. Opens a full-screen editor modal for precise control. |
| Video Comp | Compose > Video Comp | A timeline-based video compositor. Each layer has a start time, end time, trim points, and per-property keyframes (position, rotation, scale, opacity) with easing curves. Supports the same layer types as Layer Comp but adds time-based animation. |
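Per-property keyframes with easing, as used by Video Comp, boil down to interpolating between the two keyframes that surround the playhead. This is a simplified sketch of that idea, not Flowmo's actual timeline code; the `Keyframe` shape and smoothstep easing are assumptions.

```typescript
// Hypothetical keyframe model: sampled values at times, eased in between.
interface Keyframe {
  time: number;   // seconds on the layer's timeline
  value: number;  // e.g. opacity, rotation, or one axis of position
  easing?: (t: number) => number; // easing curve into the NEXT keyframe
}

// Smoothstep: a common ease-in-out curve.
const easeInOut = (t: number): number => t * t * (3 - 2 * t);

// Sample a property at time t, assuming frames are sorted by time.
function sample(frames: Keyframe[], t: number): number {
  if (t <= frames[0].time) return frames[0].value;
  const last = frames[frames.length - 1];
  if (t >= last.time) return last.value;
  for (let i = 0; i < frames.length - 1; i++) {
    const a = frames[i];
    const b = frames[i + 1];
    if (t >= a.time && t <= b.time) {
      const u = (t - a.time) / (b.time - a.time); // 0..1 between keyframes
      const eased = (a.easing ?? ((x) => x))(u);  // default: linear
      return a.value + (b.value - a.value) * eased;
    }
  }
  return last.value;
}
```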
Render Nodes
These nodes produce final output from your compositions.
| Node | Menu Label | What It Does |
|---|---|---|
| Video Render | Render > Video Render | Renders a Video Comp composition into a final video file. Configure quality (low/medium/high) and output FPS. Includes a render progress indicator. |
| Image Sequence | Render > Image Sequence | Renders a Video Comp composition as a series of individual PNG frames. Configure FPS and quality. The output is compatible with Flowmo's image-sequence scroll addon. |
| SVG Render | Render > SVG Render | Converts a Video Comp composition into an animated SVG with SMIL animations. Configure repeat count (indefinite, none, or a specific number) and yoyo (forward-then-backward) playback. |
| Image Render | Render > Image Render | Renders a Layer Comp composition into a high-resolution PNG. Configure scale multiplier (1x, 2x, etc.) and quality. |
Step-by-Step: Build Your First Workflow

Here is a walkthrough for a common workflow: generating an image from a text prompt, removing its background, and adding it to your designer canvas.
Step 1: Open the Media Gen Canvas

- Open your project in the Flowmo designer.
- Open the AI sidebar on the right side of the screen.
- Switch the assistant type to Media Gen. The canvas workspace appears in place of the chat interface.
Step 2: Add a Text Prompt Node

- Click the + button in the top-left toolbar of the canvas. The Add Node dropdown menu appears.
- Hover over Inputs and click Text Prompt.
- A new Text Prompt node appears on the canvas. Type your image description into its text field.
Tip: Click the sparkle button inside the Text Prompt node to use Generate Idea -- the AI writes a creative prompt for you. Or type a rough idea and click Add Details to have the AI expand it.
Step 3: Add an Image Generator Node

- Click the + button again, hover over Generate, and click Gen Image.
- An Image Generator node appears on the canvas.
Step 4: Connect the Nodes

- Hover over the text output handle (small circle) on the right side of your Text Prompt node. The handle highlights in sky blue.
- Click and drag from the output handle toward the Image Generator node.
- Drop the connection on the text input handle on the left side of the Image Generator node.
A sky-blue "noodle" now connects the two nodes. Your text prompt automatically flows into the Image Generator.
Alternative -- Noodle Drop Menu: If you drag a connection from a node and drop it on empty canvas space, a context-sensitive menu appears showing only compatible downstream nodes. Select one to create it and connect it in one step.
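The Noodle Drop Menu's filtering is essentially a lookup: given the data type of the dragged output, show only node types that accept it as an input. The catalog below is a hypothetical subset for illustration; the real menu covers all node types.

```typescript
type PortType = "text" | "image" | "video" | "layer" | "compose";

// Hypothetical (partial) catalog of which input types each node accepts.
const NODE_INPUTS: Record<string, PortType[]> = {
  "Gen Image": ["text", "image"],
  "Gen Video": ["text", "image"],
  "Remove BG": ["image"],
  "Vectorize": ["image"],
  "Video Render": ["compose"],
};

// When a noodle is dropped on empty space, list only the node types
// that can accept the dragged output's data type.
function compatibleNodes(output: PortType): string[] {
  return Object.keys(NODE_INPUTS).filter((name) =>
    NODE_INPUTS[name].includes(output)
  );
}
```

Dropping an image noodle, for example, would offer the generators and image processors but hide render nodes, which only accept composition data.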
Step 5: Generate Your Image

- Click the Generate button on the Image Generator node.
- The node shows a loading spinner while the AI processes your request. Credits are checked and charged automatically.
- When generation completes, thumbnails of the generated images appear in the node's output area.
Step 6: Add a Remove BG Node

- Drag from the image output handle on the Image Generator node and drop it on empty canvas space.
- In the Noodle Drop Menu that appears, click Remove BG under the "Process" category.
- The background removal runs automatically once the image data propagates.
Step 7: Send the Result to Your Canvas

Every node with image or video output has an Add to Canvas action in its output area. Click it to insert the result as a new element on your Flowmo designer canvas. If you have an element selected on the designer canvas, you will also see a Paste to Replace option that swaps the selected element's source image.
Working with the Canvas

Navigation
- Pan: Click and drag on empty canvas space, or use the scroll wheel.
- Zoom: Pinch on trackpad, or use Ctrl/Cmd + scroll wheel. Zoom controls are also available in the bottom-left corner.
- Mini-map: A small overview map in the bottom-left shows all your nodes at a glance. Click and drag on it to navigate quickly.
Context Menu

Right-click on empty canvas space to access the context menu with the full Add Node submenu (same options as the + button).
Undo/Redo
The canvas tracks a history of node and edge changes. Use standard keyboard shortcuts (Ctrl/Cmd + Z for undo, Ctrl/Cmd + Shift + Z for redo) to step through your changes.
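History tracking like this is commonly built as a two-stack undo/redo structure: undo moves the present state onto a redo stack, and any new change clears the redo stack. The class below is a generic sketch of that standard pattern, not Flowmo's actual implementation.

```typescript
// Generic two-stack undo/redo history, a common pattern in canvas editors.
class History<T> {
  private past: T[] = [];
  private future: T[] = [];
  constructor(private present: T) {}

  // Record a new state; any redo history is discarded.
  push(state: T): void {
    this.past.push(this.present);
    this.present = state;
    this.future = [];
  }
  undo(): T {
    const prev = this.past.pop();
    if (prev !== undefined) {
      this.future.push(this.present);
      this.present = prev;
    }
    return this.present;
  }
  redo(): T {
    const next = this.future.pop();
    if (next !== undefined) {
      this.past.push(this.present);
      this.present = next;
    }
    return this.present;
  }
  current(): T {
    return this.present;
  }
}
```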
Saving and Persistence
Your workflow auto-saves to Firestore as you work. The persistence layer sanitizes non-serializable data (functions, Blobs) before storing, so your node graph is preserved between sessions. The flow name defaults to "Untitled Flow" -- click on it to rename.
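Firestore cannot store functions or Blobs, so a sanitization pass like the one sketched below has to walk the node graph and drop those values before saving. This is a simplified illustration of the idea, not Flowmo's actual persistence code.

```typescript
// Recursively strip values Firestore cannot serialize (functions, Blobs)
// before persisting the node graph. Simplified sketch.
function sanitize(value: unknown): unknown {
  if (typeof value === "function") return undefined;
  if (typeof Blob !== "undefined" && value instanceof Blob) return undefined;
  if (Array.isArray(value)) {
    return value.map(sanitize).filter((v) => v !== undefined);
  }
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      const clean = sanitize(v);
      if (clean !== undefined) out[k] = clean; // drop unserializable keys
    }
    return out;
  }
  return value; // primitives pass through unchanged
}
```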
Result Versioning
Generation nodes (Image Generator, Video Generator, SVG Generator, Vectorize) maintain a result history. Each time you generate, the new result is added to the history. Use the version selector on the node to browse previous outputs and switch between them.
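The result history described above amounts to an append-only list with a selection pointer: each generation appends an entry and becomes the current result, and the version selector moves the pointer. The class below is a hypothetical model of that behavior; the field names are assumptions.

```typescript
// Hypothetical result-history model for a generation node.
interface GenResult {
  url: string;       // generated asset URL (or base64 data)
  createdAt: number; // epoch milliseconds
}

class ResultHistory {
  private results: GenResult[] = [];
  private selected = -1;

  // Each generation appends a result; the newest is shown by default.
  add(result: GenResult): void {
    this.results.push(result);
    this.selected = this.results.length - 1;
  }
  // The version selector moves the pointer to a previous output.
  select(index: number): GenResult | undefined {
    if (index < 0 || index >= this.results.length) return undefined;
    this.selected = index;
    return this.results[index];
  }
  current(): GenResult | undefined {
    return this.results[this.selected];
  }
}
```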
Tips
- Chain freely. You can connect the image output of one generator to another generator as a reference image. This is great for iterative refinement -- generate a rough concept, then feed it back with a more detailed prompt.
- Branch your workflow. A single output handle can connect to multiple downstream nodes. Generate one image and simultaneously remove its background, vectorize it, and use it as a composition layer.
- Use Styled Text for overlays. The Styled Text node lets you create perfectly typeset text as an image, which you can then layer into a composition alongside generated images.
- Batch generate. The Image Generator supports generating up to 4 images at once. Set the count in the node's settings, then pick your favorite from the result history.
- Start from the canvas. If you already have an element selected on your designer canvas, the "Import from Selection" input node pulls it directly into your media workflow.
Common Issues
| Problem | Cause | Solution |
|---|---|---|
| "Engine is not available" error | The Flowmo engine has not loaded yet | Wait for the designer canvas to fully load before opening Media Gen |
| Generation button is disabled | No text prompt connected, or the node has no input data | Make sure you have connected a Text Prompt or Image Upload node and that it contains content |
| Credit check fails | Insufficient AI credits for the operation | Upgrade your plan or wait for credits to refresh. Free operations (Remove BG, Vectorize, Styled Text) do not consume credits |
| Noodle won't connect | Incompatible data types between the two nodes | You can only connect outputs to compatible inputs -- for example, a text output cannot connect directly to a video-only input. Use the Noodle Drop Menu to see what is compatible |
| Changes not propagating downstream | The data fingerprint has not changed | Modify the input data (update your prompt, upload a different image) to trigger propagation |
Related Articles
- AI Image Generation -- Chat-based image generation workflow
- AI Video Generation -- Chat-based video generation workflow
- Vibe Mode Overview -- Introduction to all AI assistants in Flowmo