The Complete Edge Architecture Guide (Part 4): Our AI Orchestration Journey
Jane Cooper
•
Sep 5, 2025
This is Part 4 of our four-part series on building AI-powered applications on the edge. In Part 1, we covered our Cloudflare Workers architecture. Part 2 explored our Hono framework and dynamic loading strategy. In Part 3, we dove into our sub-10ms AI pipeline. Now, let's discuss the orchestration layer that ties it all together.
——
The LangGraph Era
We started Kasava with a simple plan: help teams build products faster by automatically synchronizing GitHub activities with task management systems, providing intelligent insights, connecting bugs to the relevant code, and enabling natural language interactions across the entire development stack. LangGraph seemed like the obvious choice – it was the most mature orchestration framework, had decent traction, and promised all the features we needed.
The honeymoon phase lasted about two weeks.
Don't get me wrong – LangGraph is powerful. It's built by smart people solving real problems. But for a TypeScript-first team building production software on Cloudflare Workers, it felt like wearing someone else's shoes. Sure, they're shoes, and they technically work, but every step reminds you they weren't made for you.
Documentation that felt like archaeology.
We'd find examples from three different versions, none matching our current setup. The TypeScript docs were clearly an afterthought – Python examples hastily translated with a "this should probably work" vibe. I once spent an entire afternoon discovering that a critical feature mentioned in the docs had been deprecated two versions ago. The GitHub issues were filled with other TypeScript developers asking the same questions we had, often without answers.
TypeScript as a second-class citizen.
Type definitions were incomplete, generics were wonky, and we constantly fought the type system instead of leveraging it. We'd write perfectly valid TypeScript that would fail at runtime because the underlying assumptions were Python-centric. Our codebase became littered with `// @ts-ignore` comments and `as any` casts – the programming equivalent of duct tape.
An abstraction that felt like abstraction for abstraction's sake.
LangGraph's graph-based approach is conceptually elegant, but in practice? We were building sequential workflows 90% of the time. The mental overhead of thinking in nodes, edges, and channels for what should have been simple step-by-step processes was exhausting. It reminded me of using a chainsaw to cut butter – technically possible, but why?
We shipped our initial version with LangGraph, and it worked. But every feature addition felt like a battle. Every debugging session was an archaeology expedition through incomplete documentation and GitHub issues. We were spending more time fighting our tools than building our product.
Enter Mastra (Or: Finally, Tools That Speak Our Language)
The switch to Mastra wasn't immediate. We'd already invested months into LangGraph, and rewriting core infrastructure isn't something you do lightly. But when we saw Mastra's approach, something clicked.
First, the team behind it: Sam Bhagwat, Abhi Aiyer, and Shane Thomas – all Gatsby alumni. Now, I'll admit I'm biased here. We've admired what the Gatsby team built for years. They took the complex world of static site generation and made it accessible, powerful, and dare I say, enjoyable. They understood that developer experience isn't a nice-to-have; it's the foundation everything else builds on.
When these folks decided to tackle AI orchestration, they brought that same philosophy: make the complex simple, make the simple delightful.
What We Love About Mastra
Sequential Composition That Makes Sense
Most workflows are sequential with occasional branches. Mastra gets this. Instead of forcing us to think in graphs, we write workflows like we think about them:
```typescript
workflow
  .then(analyzeCode)
  .then(searchForIssues)
  .then(generateSummary)
  .then(updateTasks)
  .commit();
```
When we need branching, it's there. When we need parallel execution, it's there. But we're not forced to use a sledgehammer for every nail.
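To make the composition style above concrete, here is a minimal sketch of a fluent sequential builder. This is an illustrative pattern only, not Mastra's actual API: the `Workflow`, `start`, `then`, and `commit` names here are hypothetical stand-ins showing how each step's output becomes the next step's input with full type inference.

```typescript
// Hypothetical sketch of a fluent sequential-composition builder.
// Not Mastra's API; illustrates the pattern only.
type Step<I, O> = (input: I) => Promise<O> | O;

class Workflow<I, O> {
  private constructor(private readonly run: Step<I, O>) {}

  // Begin a chain whose first step receives the raw input unchanged.
  static start<T>(): Workflow<T, T> {
    return new Workflow(async (input: T) => input);
  }

  // Each .then() composes the next step after the previous output,
  // so TypeScript infers the input type of every step from the prior one.
  then<Next>(step: Step<O, Next>): Workflow<I, Next> {
    const prev = this.run;
    return new Workflow(async (input: I) => step(await prev(input)));
  }

  // commit() freezes the chain into a single executable function.
  commit(): (input: I) => Promise<O> {
    const run = this.run;
    return (input: I) => Promise.resolve(run(input));
  }
}

// Usage: steps run strictly in order, each receiving the prior result.
const pipeline = Workflow.start<string>()
  .then((code) => ({ code, issues: [`lint:${code.length}`] }))
  .then(({ code, issues }) => `summary of ${code} (${issues.length} issue)`)
  .commit();
```

The payoff of this shape is that a mistyped step fails at compile time: hand the second step a field the first never produced and the chain stops type-checking.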
TypeScript-First, Not TypeScript-Eventually
Every API, every method, every configuration option was designed for TypeScript. The types aren't an afterthought; they're the foundation. Our error rate dropped by 30% in the first month just from catching issues at compile time that we used to discover in production.
Production-Ready From Day One
Mastra ships with OpenTelemetry tracing, error handling, retries, and monitoring built-in. With LangGraph, we had to build all this ourselves. With Mastra, it just works. Our observability improved overnight without writing a single line of monitoring code.
This was especially crucial for our edge deployment on Cloudflare Workers (covered in [Part 1](./part-1-architecture.md)). Mastra's lightweight runtime and efficient execution model meant we could run complex workflows within Workers' constraints without sacrificing functionality.
Our Workflow Architecture Today
We're running four major Mastra workflows in production:
GitHubWorkflow: Coordinates between GitHub and 10+ task management platforms
ChatWorkflow: Powers our conversational AI with streaming responses and conversation memory
DocumentWorkflow: Handles document processing with OCR, chunking, and embedding generation
ReverseQueryWorkflow: Enables natural language queries across all connected platforms
Each workflow consists of specialized agents – we have 40+ in total – all inheriting from Mastra's BaseAgent class. The consistency is beautiful. Every agent follows the same patterns, uses the same error handling, emits the same telemetry.
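The shared-base-class pattern described above can be sketched roughly as follows. To be clear, this is an assumption-laden illustration, not Mastra's actual `BaseAgent` implementation: the class shape, the `run`/`execute` split, and the `emit` hook are all hypothetical names chosen to show how one base class can give every agent the same error handling and telemetry.

```typescript
// Hypothetical sketch of a shared agent base class; the names here
// (BaseAgent, run, execute, emit) are illustrative, not Mastra's API.
abstract class BaseAgent<I, O> {
  constructor(protected readonly name: string) {}

  // Subclasses implement only their core logic.
  protected abstract execute(input: I): Promise<O>;

  // Every agent inherits identical error handling and timing telemetry.
  async run(input: I): Promise<O> {
    const start = Date.now();
    try {
      const result = await this.execute(input);
      this.emit("success", Date.now() - start);
      return result;
    } catch (err) {
      this.emit("error", Date.now() - start);
      throw err;
    }
  }

  protected emit(status: string, ms: number): void {
    // Stand-in for real telemetry (e.g. an OpenTelemetry span).
    console.log(`[${this.name}] ${status} in ${ms}ms`);
  }
}

// A concrete agent only supplies execute(); everything else is inherited.
class SummarizeAgent extends BaseAgent<string, string> {
  protected async execute(input: string): Promise<string> {
    return `summary: ${input.slice(0, 20)}`;
  }
}
```

With 40+ agents on one base, the benefit is uniformity: a dashboard or retry policy written against `run()` works for every agent without per-agent wiring.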
What We're Looking Forward To
The Mastra team isn't resting. Their upcoming TypeScript AI Conference in November is bringing together the community that's been fragmented across various tools. The scorers system they recently introduced makes evaluation actually practical instead of theoretical. And their commitment to the TypeScript ecosystem gives us confidence we're building on a foundation that will only get stronger.
We're particularly excited about:
Golden answers for regression testing our AI outputs
Enhanced observability for tracking agent performance over time
The growing ecosystem of integrations and patterns
The Gatsby Connection
I mentioned our admiration for the Gatsby team earlier, and it's worth elaborating. Gatsby (the framework) succeeded because it understood that developer experience multiplies everything else. A tool that's pleasant to use gets used. A tool that fights you at every turn gets replaced.
The Mastra team brings that same philosophy. They've been in the trenches, building tools used by thousands of developers.
They know the difference between a feature that demos well and one that actually works in production. They understand that documentation isn't an afterthought, that TypeScript types aren't just nice-to-have, that debugging tools aren't optional.
Looking Ahead
The AI orchestration space is evolving rapidly, and tools will come and go. But Mastra feels different. It's not trying to be everything to everyone. It's trying to be the best tool for TypeScript developers building production AI applications. For teams like ours, that focus makes all the difference.
The irony isn't lost on me: we switched from a graph-based system to a simpler sequential one, and our workflows became more powerful, not less. Sometimes the best abstraction is the one that matches how you actually think about the problem.
Sometimes the best tool is the one built by people who've felt your pain.
Sometimes, second-class citizenship in someone else's ecosystem isn't worth the supposed benefits.
We chose Mastra, and every day validates that choice. Our AI orchestration journey taught us that the right tools don't just enable what you're building – they elevate it. And for us, Mastra does exactly that.
——
Kasava is an AI-powered development workflow platform that revolutionizes how development teams work. We're built on Cloudflare Workers, powered by Mastra, and obsessed with developer productivity.
Want to know more? Check out [kasava.ai](https://kasava.ai) or dive into our [GitHub repo](https://github.com/kasava/kasava).
Special thanks to the Mastra team for building tools that don't suck. In a world of overcomplicated AI frameworks, you're a breath of fresh air.