Flowmo AI Assistant: Smart Router & Agents Guide
When you type a prompt into Flowmo's AI assistant, you are not talking to a single chatbot. Behind the scenes, Flowmo runs a Smart Router — a system that reads your prompt, understands your intent, and hands the work off to the right specialist agent. There are ten of these agents, each one purpose-built for a specific kind of task. You do not need to think about which one to use. Describe what you want, and the router figures out the rest.
This article walks you through every agent, how the routing works, and how to get the best results from the AI assistant.
Prerequisites
Before you start working with the AI assistant, make sure you have:
- An active Flowmo account with AI features enabled
- A project open in the Flowmo designer
- At least one page or section on your canvas (some agents work best when there is existing content to reference)
The agents
Below is every agent available in the assistant type dropdown, listed in the same order they appear in the UI. Each agent's heading is followed by the exact label shown in the dropdown.
Main Agent
Dropdown label: Main Agent
This is the default. When you open the AI panel, Main Agent is already selected. It acts as the smart router — it reads your prompt, checks what you have selected on the canvas, and hands the work off to whichever specialist agent fits best. You never have to leave Main Agent unless you want to force a specific agent.
Good for: Any prompt. The router handles the delegation automatically.
Research & Discovery
Dropdown label: Research & Discovery
A multi-step workflow agent. It runs competitor analysis, asks you brief questions, generates a full design system (colors, typography, spacing), and builds a strategic structure plan — all before producing any design. This is your starting point when you are beginning a project from scratch and want strategic direction, not just visuals.
Use it for prompts like:
- "Research SaaS landing pages and create a design based on best practices"
- "Analyze competitor sites in the fitness space and build me a homepage"
- "I'm starting a new project for a pet food brand — help me plan it out"
Design Agent
Dropdown label: Design Agent
Generates high-fidelity layouts, sections, and full page designs with CSS. This agent works with your project's design system when one exists, so the output matches your fonts, colors, and spacing tokens.
Use it for prompts like:
- "Create a hero section with a headline and CTA"
- "Design a pricing page with three tiers"
- "Build a testimonial grid with alternating layout"
Interactions Builder
Dropdown label: Interactions Builder
Creates animations and micro-interactions — scroll-triggered effects, hover states, entrance animations, carousels, and more. Select the element you want to animate before sending your prompt for the best results.
Use it for prompts like:
- "Add a fade-in on scroll to this section"
- "Make this button bounce on hover"
- "Create a parallax effect on the background image"
Element Editor
Dropdown label: Element Editor
Makes targeted modifications to the currently selected element — sizing, colors, typography, spacing, backgrounds, and more. This is the agent for quick, precise edits rather than full section generation.
Use it for prompts like:
- "Make this text larger"
- "Change the background color to dark blue"
- "Add more padding to this container"
Image Generator
Dropdown label: Image Generator
Creates images directly inside your project using AI image generation models. You can describe what you want, and the generated image is ready to use in your design immediately.
Use it for prompts like:
- "Generate a hero background image of mountains at sunset"
- "Create an abstract gradient illustration for the about section"
- "Make a product mockup on a clean white background"
Video Generator
Dropdown label: Video Generator
Generates short video clips. Useful for background videos, product demos, or decorative motion content. You can configure the aspect ratio (16:9 or 9:16), duration (4s, 6s, or 8s), and resolution (720p or 1080p) before generating.
Use it for prompts like:
- "Generate a 6-second product demo clip"
- "Create a looping background video of ocean waves"
Settings Agent
Dropdown label: Settings Agent
Manages project settings, publishes to staging, and handles WordPress server configuration. Use this agent when you need to adjust project-level settings rather than design elements.
Use it for prompts like:
- "Publish this project to staging"
- "Update the project name"
Code Component
Dropdown label: Code Component
Creates interactive components with custom JavaScript — canvas animations, WebGL scenes, API integrations, form logic, and more. This agent goes beyond visual design and into functional behavior.
Use it for prompts like:
- "Create an interactive 3D globe with WebGL"
- "Build a live countdown timer component"
- "Add a form that submits to an API endpoint"
Media Gen (Beta)
Dropdown label: Media Gen (Beta)
A visual flow canvas for generating and editing images and videos by connecting nodes. This is a separate workspace from the chat-based agents — it opens a node-based editor where you can chain together generation, editing, and rendering steps. This agent is currently in beta.
How the Smart Router works
Every time you send a prompt with Main Agent selected, two things happen:
- Your text is analyzed. The router looks for intent signals and keywords. If you mention "animate" or "hover", it knows you are asking about interactions. If you say "generate an image", it routes to the Image Generator. If you describe a layout, it goes to the Design Agent.
- Your current context is read. The router checks what element you have selected, what page you are on, and what content exists on the canvas. This helps it make smarter routing decisions even when your prompt is ambiguous.
The result: your prompt lands with the agent best equipped to handle it, without you needing to pick from a menu.
The router also maintains a unified conversation memory. Even if your prompts get routed to different specialist agents across a session, the conversation context carries over. The Design Agent knows what the Research & Discovery agent just produced, and vice versa.
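To make the two routing signals concrete, here is a minimal conceptual sketch. This is not Flowmo's actual implementation (the router is not public), and the keyword lists and the `route` function are illustrative assumptions; it only demonstrates the idea described above: keyword-based intent detection first, with the canvas selection as a tie-breaker for ambiguous prompts.

```python
# Conceptual sketch only; Flowmo's real Smart Router is not public.
# Keywords and fallback logic below are illustrative assumptions.

INTENT_KEYWORDS = {
    "Interactions Builder": ["animate", "hover", "scroll", "parallax", "bounce"],
    "Image Generator": ["generate an image", "illustration", "background image"],
    "Video Generator": ["video", "clip"],
    "Element Editor": ["padding", "font size", "background color", "larger"],
    "Design Agent": ["section", "page", "layout", "hero", "pricing"],
}

def route(prompt: str, has_selection: bool = False) -> str:
    """Pick a specialist agent from prompt keywords and canvas context."""
    text = prompt.lower()
    # Signal 1: intent keywords in the prompt text.
    for agent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return agent
    # Signal 2: canvas context. A selected element suggests a targeted
    # edit; with nothing selected, fall back to full design generation.
    return "Element Editor" if has_selection else "Design Agent"

print(route("Add a fade-in on scroll to this section"))  # → Interactions Builder
```

In this toy version, "improve this" with an element selected would land on the Element Editor, while the same words with nothing selected would go to the Design Agent, which mirrors the context-dependent behavior the troubleshooting section describes.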
Manually selecting an agent
The Smart Router picks the right agent most of the time. But if it does not, you can override it:
- Open the AI chat panel.
- Click the assistant type dropdown at the top of the panel — it shows the currently active agent name (for example, "Main Agent" or "Design Agent").
- Select the specific agent you want from the list.
- Type your prompt as usual.
This is useful when your prompt is ambiguous. For example, "make this section better" could go to the Design Agent or the Element Editor depending on what you mean. Selecting manually removes the guesswork.
To switch back to automatic routing, select Main Agent from the dropdown.
Context and attachments
The AI assistant is context-aware. It automatically sees:
- Your current selection — the selected element's HTML and CSS are included in the prompt context
- Page structure — the full layout of the page you are working on
- Design system — your fonts, colors, and spacing tokens
You can also give it more context by clicking the + button next to the input field. The attachment menu offers four options:
- Attach Image — upload a screenshot or mockup you want the AI to match or use as reference
- Attach PDF — upload brand guidelines, design specs, or other documents
- From Assets — pick files from your project's asset library
- Add Selection — attach the currently selected element's HTML and CSS as context for the AI
The more context you provide, the more accurate the output.
Model selection
Some agents offer a model selection dropdown where you can choose between different AI model tiers. The available models depend on which agent you are using:
- Design Agent offers Gemini 2.5 Flash (faster, good for iterative work) and Gemini 3 Pro (higher quality, better for complex layouts)
- Image Generator offers Nano Banana (standard image generation) and Nano Banana Pro (higher fidelity output)
- Main Agent uses Gemini 2.5 Flash Lite for fast intent routing
Agents that do not show a model selector use their default model automatically. You can switch models at any time from the AI chat panel when the option is available.
Insertion options
When the AI generates content (a section, a component, a block of HTML), you control where it goes on the canvas. After the AI produces a result, you will see insertion options on the generated output:
- Insert Inside Selection — nests the content inside the currently selected element
- Insert Before Selection — places it directly above the selected element
- Insert After Selection — places it directly below the selected element
- Append to Page End — adds the content after everything else on the page
For images and videos, you will also see a Replace option that swaps the generated media into the selected element's background or source.
Make sure the correct element is selected on the canvas and pick the right insertion target before clicking, so you do not have to rearrange content afterward.
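The four insertion options behave like standard tree operations on the page's element hierarchy. The sketch below is an assumption-laden illustration (the `Node` class and element names are hypothetical, not Flowmo's data model); it only shows what each option does relative to the selected element.

```python
# Illustrative model of the four insertion options as tree operations.
# The Node class and element names are hypothetical, not Flowmo's API.

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children if children is not None else []
        self.parent = None
        for child in self.children:
            child.parent = self

def insert_inside(selection, new_node):
    """Insert Inside Selection: nest content as the selection's last child."""
    selection.children.append(new_node)
    new_node.parent = selection

def insert_before(selection, new_node):
    """Insert Before Selection: place content directly above the selection."""
    siblings = selection.parent.children
    siblings.insert(siblings.index(selection), new_node)
    new_node.parent = selection.parent

def insert_after(selection, new_node):
    """Insert After Selection: place content directly below the selection."""
    siblings = selection.parent.children
    siblings.insert(siblings.index(selection) + 1, new_node)
    new_node.parent = selection.parent

def append_to_page_end(page, new_node):
    """Append to Page End: add content after everything else on the page."""
    page.children.append(new_node)
    new_node.parent = page

# Example: a page with a hero and a footer, inserting a pricing section
hero, footer = Node("hero"), Node("footer")
page = Node("page", [hero, footer])
insert_after(hero, Node("pricing"))
print([c.name for c in page.children])  # → ['hero', 'pricing', 'footer']
```

The key point the sketch makes is that Insert Before and Insert After operate on the selection's siblings, while Insert Inside changes the selection's own children, which is why having the right element selected matters so much.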
Tips and best practices
- Let the Main Agent choose first. It is right most of the time. Only switch to a specific agent manually if the result clearly came from the wrong one.
- Attach a reference image when you want the AI to match a specific visual style. Words alone can be ambiguous — a screenshot removes doubt.
- Use "Add Details" to expand a vague prompt before generating. When you have typed something in the input field, the button below it changes to "Add Details" and rewrites your prompt with more specificity (color theory, UX logic, layout details). If the input is empty, the same button becomes "Generate Idea" and creates a full prompt for you.
- Try the idea category buttons. When the Design Agent or Main Agent is selected and no messages exist yet, you will see category buttons — SaaS, Lead Gen, Product, and Magazine — that generate tailored prompt ideas for each business type.
- Iterate naturally. The AI remembers your conversation within a session. You can say "make it darker" or "try a different layout" without re-explaining everything.
- Be specific about colors and fonts if your brand has strict guidelines. The AI uses your design system when available, but explicit instructions always win.
- Use Research & Discovery for new projects. It front-loads the strategic thinking (competitor analysis, audience psychology, design system creation) so every design that follows is grounded in research rather than guesswork.
Common issues
"The AI generated code instead of a design."
The router may have sent your prompt to the wrong agent. Select Design Agent manually from the assistant type dropdown and resend your prompt.
"The AI is not using my brand colors."
The AI pulls from your design system when one exists, but it will not guess. Attach your brand guidelines via Attach PDF or mention specific colors in the prompt (for example, "use #1A1A2E for the background"). Running Research & Discovery first will also establish a design system the other agents can reference.
"Generated content went to the wrong place."
Check which insertion option you clicked. If you chose "Append to Page End" but wanted it inside a specific container, click Insert Inside Selection next time and make sure the right element is selected on the canvas before inserting.
"The AI keeps picking the wrong agent."
Use more explicit language in your prompt. Instead of "improve this section", try "redesign this section with a new layout" (routes to Design Agent) or "adjust the font size and spacing on this section" (routes to Element Editor). You can also select the agent manually from the dropdown.
"I do not see model options."
Not all agents expose a model selector. Only agents with multiple model configurations (like the Design Agent and Image Generator) show the model dropdown. Other agents use their default model automatically.