

The Kasava Agent is the unified chat assistant that powers every conversational surface in the product. It’s a single agent — not a collection of bots — that decides which tools to use, gathers data from your connected sources, and returns answers as interactive cards you can click into. What makes it different isn’t the language model. It’s what the model can see.

Five things a generic LLM can’t do

  1. Grounded in your product graph. Every answer references your actual feature areas, architectural layers, and symbol graph. The Agent doesn’t hallucinate your architecture because it’s reading it live from the product graph.
  2. Code intelligence built in. Semantic search across your indexed repositories, symbol-level navigation, call-graph traversal, and pattern detection — all available as tools the Agent can chain together mid-reasoning.
  3. Cross-platform by default. A single question can pull from GitHub, Linear, Jira, Asana, Gong, Intercom, and your indexed docs in one turn. You don’t have to tab between them.
  4. Agentic tool-use with delegation. For complex queries, the Agent routes to specialist sub-agents (code intelligence, product insights, sprint analysis, architecture investigation) that each run their own tool loops before reporting back.
  5. Persistent memory. The Agent remembers your terminology, preferences, and repository context across sessions so you don’t re-explain your stack every conversation.
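The delegation model in point 4 can be sketched as a simple router: the main Agent classifies the question, hands it to a specialist sub-agent, and folds the result back into the conversation. This is a minimal illustration only; the function and specialist names below are assumptions for the sketch, not Kasava's actual API.

```python
# Illustrative sketch of agent delegation. All names here are hypothetical,
# not Kasava's real interfaces.

SPECIALISTS = {
    "code": "code intelligence",
    "sprint": "sprint analysis",
    "architecture": "architecture investigation",
    "product": "product insights",
}

def classify(query: str) -> str:
    """Toy intent classifier: keyword matching stands in for the model's routing."""
    q = query.lower()
    if "blast radius" in q or "impact" in q:
        return "architecture"
    if "sprint" in q or "velocity" in q:
        return "sprint"
    if "symbol" in q or "call chain" in q:
        return "code"
    return "product"

def delegate(query: str) -> str:
    """The main agent routes to a specialist, which runs its own tool loop."""
    specialist = SPECIALISTS[classify(query)]
    # A real sub-agent would iterate: pick a tool, call it, read the result,
    # and repeat until it can answer. Here we just report which loop would run.
    return f"[{specialist}] handled: {query}"

print(delegate("what's the blast radius of changing the auth middleware"))
```

The point of the pattern is that the caller never addresses a sub-agent directly; the routing decision happens inside the single Agent.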

One agent, many capabilities

The docs list “Catch Me Up,” “Sprint Insights,” “Code Impact,” “Feature Planning,” and more. Those aren’t separate agents. They’re prompts the same Agent recognizes, each mapped to the right tools. Whether you type “catch me up on the last week,” “what’s the blast radius of changing the auth middleware,” or “what should I work on next,” it’s the same Agent routing to different tools. See the Agent Overview for the full capability map.

What it can see

| Category | What the Agent accesses |
| --- | --- |
| Code analysis | Symbol search, call chains, import graphs, impact radius, deep code exploration |
| Product intelligence | Feature areas, architectural layers, heuristics, ownership, product health |
| Cross-platform PM | GitHub, Linear, Jira, Asana issues and PRs in one unified query |
| Planning | Create plans, generate documents, export work breakdowns, produce agent-ready specs |
| Research & synthesis | Web search, Firecrawl, deep research, competitor tracking, customer signals |
| Health & metrics | Product health dashboards, sprint analysis, velocity forecasts, diagnostics |
Every tool is available on every conversation. The Agent decides which to use based on what you ask.
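The cross-platform behavior described above amounts to a fan-out-and-merge: one question goes to each relevant connector, and the results come back as a single set. A minimal sketch, assuming hypothetical connector functions (the source names mirror the integrations listed, but nothing here is Kasava's real API):

```python
# Hypothetical fan-out across connectors. Function names are illustrative.

def search_connector(source: str, query: str) -> list[dict]:
    """Stand-in for a real connector call (GitHub, Linear, Jira, ...)."""
    return [{"source": source, "match": f"{source} result for {query!r}"}]

def unified_query(query: str, sources: list[str]) -> list[dict]:
    """One question, many platforms: fan out, then merge into one result set."""
    results = []
    for source in sources:
        results.extend(search_connector(source, query))
    return results

hits = unified_query("auth middleware", ["GitHub", "Linear", "Jira"])
print([h["source"] for h in hits])  # one merged list, no tab-switching
```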

Interactive cards, not walls of text

When you ask for a sprint recap, you get a sprint recap card — clickable rows, expandable panels, direct links back to GitHub and Linear. When you ask for impact analysis, you get a blast-radius diagram. When you ask for a feature plan, you get a plan card with a link to open the full workspace. This matters because product work is spatial, not linear. Cards let you explore laterally without losing the conversation.
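Structurally, a card is just typed data rather than prose: a kind, a title, and rows that deep-link back to the source systems. The shape below is an assumption made up for illustration, not Kasava's actual card schema.

```python
from dataclasses import dataclass, field

# Illustrative card shape. Field names and the example link are hypothetical.

@dataclass
class CardRow:
    label: str
    link: str              # deep link back to GitHub, Linear, etc.
    expandable: bool = False

@dataclass
class Card:
    kind: str              # e.g. "sprint_recap", "impact_analysis", "plan"
    title: str
    rows: list[CardRow] = field(default_factory=list)

recap = Card(
    kind="sprint_recap",
    title="Sprint recap",
    rows=[CardRow("Merged: fix auth middleware",
                  "https://github.com/example/repo/pull/1",  # placeholder link
                  expandable=True)],
)
print(recap.kind, len(recap.rows))
```

Because the payload is structured, the UI can render clickable rows and expandable panels instead of a wall of text.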

Workspace-scoped conversations

When you open the Agent inside a workspace or plan, every message also carries the workspace’s context, sources, hypotheses, and decisions. You ask “summarize progress” and the Agent already knows what “progress” means for this initiative.
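One way to picture this is context injection: each message is wrapped with the workspace's sources, hypotheses, and decisions before the Agent sees it. A minimal sketch, with hypothetical names throughout:

```python
# Sketch of workspace-scoped context injection. Not Kasava's real API.

def build_prompt(message: str, workspace: dict) -> dict:
    """Wrap a user message with the workspace's context before the Agent sees it."""
    return {
        "message": message,
        "context": {
            "sources": workspace["sources"],
            "hypotheses": workspace["hypotheses"],
            "decisions": workspace["decisions"],
        },
    }

# Example workspace state (invented for illustration).
workspace = {
    "sources": ["repo:payments", "linear:PAY"],
    "hypotheses": ["Checkout latency is driven by the auth round-trip"],
    "decisions": ["Ship behind a feature flag"],
}
prompt = build_prompt("summarize progress", workspace)
print(sorted(prompt["context"]))  # ['decisions', 'hypotheses', 'sources']
```

With the context attached, "summarize progress" is no longer ambiguous; it is scoped to this initiative's sources and decisions.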

Related pages

  - The Product Graph: the code- and commit-grounded foundation the Agent queries on every turn
  - Workspaces: bounded context the Agent uses for scoped conversations
  - Agent Overview: full catalog of capabilities with how-to guides for each
  - Code Intelligence: the indexing pipeline that feeds the Agent's code tools