Kasava

Your Prototype Is Only as Good as Your Prompt

Ben Gregory - Apr 3, 2026

Today we're launching Kasava's Prompt Generator, a free tool that turns your design system and codebase into ready-to-use prompts for AI prototyping tools like v0, Lovable, Cursor, and Claude Code.

Here's an example of what it produces. This is a prompt we generated for an Airbnb vacation roulette wheel prototype:

Build a full-page vacation roulette wheel prototype for Airbnb using Next.js 14, TypeScript, and Tailwind CSS. Design direction: playful, interactive, and engaging. Large circular SVG-based roulette wheel (400px diameter on desktop, responsive down to 280px mobile) with 12-16 segments in a gradient using Airbnb's accent colors (#FF385C, #E07912, #428BFF, #008A05, #3B07BB, #440589). Use the Airbnb Cereal VF font family throughout. Segment borders in #FFFFFF, outer ring with box-shadow elevation-primary. Result cards: 16px corner-radius, 1px border #DDDDDD, 12px padding, elevation-secondary shadow. Font: 18px bold for titles (#222222), 14px for secondary text (#6A6A6A). Mobile-first: 100vh viewport, centered column layout. Desktop (1024px+): wheel left-center, results cards right...

That goes on for another 150 lines. Exact hex values, font stacks, border radii, responsive breakpoints at three widths, interaction states, accessibility focus rings, component choices. The output from v0 actually looks like an Airbnb product, not a generic template with a red accent color.

Nobody wrote that by hand. Kasava extracted it from the design system and codebase automatically.


What It Does

You connect your repo and (optionally) your Figma files. The Prompt Generator extracts four layers of context:

Design Tokens. Your color palette, typography scale, spacing tokens, and font families, pulled directly from Figma variables. Not approximations. Your actual hex values, your actual font weights, your actual spacing scale. These get transformed into the format the AI tool needs: CSS variables, Tailwind config values, or raw tokens.

Component Patterns. We analyze your codebase and identify the components, services, and patterns you're already using. If you have a card component with a specific border radius, shadow, and padding, the generated prompt references those exact values. The AI builds with your components, not its defaults.

Product Context. Your routes, your architecture, your tech stack, your domain terminology. If you're building a new feature, the prompt includes context about how your existing features are structured so the output is architecturally consistent, not just visually consistent.

Brand Guidelines. If you've defined design principles or a constitution (we have a whole system for this, which I wrote about previously), those principles get embedded in the prompt. The AI doesn't just follow your colors. It follows your philosophy: information hierarchy, progressive disclosure, interaction patterns.
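To make the design-token layer concrete, here's a minimal sketch of what "transformed into CSS variables" can look like. The token names, JSON shape, and function are illustrative assumptions for this post, not Kasava's actual extraction format:

```typescript
// Hypothetical sketch: flattening Figma-style design tokens into CSS custom
// properties. The token structure below is illustrative, not Kasava's format.
type TokenGroup = { [name: string]: string | TokenGroup };

function tokensToCssVars(tokens: TokenGroup, prefix = "-"): string[] {
  const vars: string[] = [];
  for (const [name, value] of Object.entries(tokens)) {
    const key = `${prefix}-${name}`;
    if (typeof value === "string") {
      vars.push(`${key}: ${value};`); // leaf token -> one CSS variable
    } else {
      vars.push(...tokensToCssVars(value, key)); // nested group -> recurse
    }
  }
  return vars;
}

// Example tokens echoing the Airbnb prompt above
const tokens: TokenGroup = {
  color: { accent: "#FF385C", border: "#DDDDDD" },
  radius: { card: "16px" },
};

// e.g. "--color-accent: #FF385C;"
console.log(tokensToCssVars(tokens).join("\n"));
```

The same flattened pairs map straightforwardly onto a Tailwind `theme.extend` block when the target tool expects a Tailwind config instead of raw CSS variables.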

You describe what you want to build in plain language. The Prompt Generator wraps your idea in 100-300 lines of specific, actionable context. One click to copy, paste into v0 or Lovable or Cursor, and the output actually looks like it belongs in your product.

Free to use. No credit card. No trial expiration.


Why We Built This

The gap between a vague prompt and a good one is enormous. Most people prompting v0 or Lovable write something like "build me a dashboard with a sidebar and charts." They get back something that technically works but looks like every other AI prototype. Default spacing. Generic blue accent. System fonts. No personality.

Then they iterate. "Make it more modern." "Change the colors." "Add more padding." Each iteration burns tokens and time. Bolt users on Reddit report spending millions of tokens in build-break-fix cycles, where adding a new feature breaks four things that were working. One user documented spending 100,000 tokens just to add a testimonials section to a landing page.

This isn't a tool problem. It's a context problem. The Stack Overflow 2025 Developer Survey found that despite 80% of developers now using AI tools, trust in output accuracy fell to 29%. The number-one frustration, cited by 45%: dealing with "AI solutions that are almost right, but not quite." And 66% say they spend more time fixing AI-generated code than if they'd written it themselves.

The industry is starting to converge on a fix. Addy Osmani wrote a guide on writing specs for AI agents based on GitHub's analysis of 2,500+ agent config files. The finding: "Most agent files fail because they are too vague." GitHub's engineering team released an open-source toolkit for spec-driven development. Simon Willison put it simply: "Write good documentation first and the model may be able to build the matching implementation from that input alone."

Better prompts beat better models. We just automated the "better prompts" part.

Where It Fits

METR ran a randomized controlled trial with experienced open-source developers. The core finding: developers using AI tools took 19% longer to complete tasks, yet believed AI had sped them up by 20%. The perception-reality gap comes down to where time actually goes. Generation is instant, but the prompting, reviewing, iterating, debugging, re-prompting cycle adds up.

The Prompt Generator attacks this directly. If the first prompt includes your design tokens, component patterns, responsive breakpoints, and interaction states, you skip most of the "make it more like our brand" back-and-forth.

From our perspective, it's also the fastest way for everyone to get a taste of what Kasava can do. The Prompt Generator is a straightforward tool that produces a meaningful result from a very small amount of input. Now imagine what you could do with a product graph that connects your codebase to your product decisions, a planning system that generates code-aware specs, and an AI layer that understands your architecture deeply enough to produce implementation-ready work breakdowns. This tool is the tip of that iceberg.


Try it at prompt.kasava.dev. If you want to compare notes on AI prototyping workflows or argue about whether vibe coding is real engineering, find me on LinkedIn.

Sources

  1. Lovable ARR and Growth (Fortune, Nov 2025) - $200M ARR in Nov 2025, doubling from earlier that year
  2. Lovable $400M ARR Timeline (Startup Riders) - $400M ARR reached in ~14 months
  3. Vibe Coding Market Statistics (dev.to, 2026) - v0 2M users, Bolt $2.1B valuation, $1B+ VC in 2025, 38-42% CAGR projections
  4. Stack Overflow 2025 Developer Survey - 80% AI adoption, trust fell to 29%, 45% cite "almost right" frustration, 66% spend more time debugging AI code
  5. Bolt.new User Feedback (Reddit) - Token waste in build-break-fix cycles
  6. Bolt.new Token Costs (YouTube) - 100K tokens for a simple feature addition
  7. Addy Osmani, "How to Write a Good Spec for AI Agents" - GitHub's analysis of 2,500+ agent configs, structured specs as the key lever
  8. GitHub Spec-Driven Development Toolkit - Open-source spec-driven workflow for AI
  9. Simon Willison, "Vibe Engineering" (Oct 2025) - Documentation-first approach to AI coding
  10. METR Randomized Controlled Trial (July 2025) - 16 developers, 246 tasks, 19% slower with AI, perception-reality gap