Passband: 100-Hour Architecture Breakdown
How we built a production SaaS — AI content intelligence with multi-platform distribution, Stripe billing, and conversational AI — from concept to deployment.
Executive Summary
Passband is an AI content intelligence platform that helps creators write, refine, and distribute content across five platforms from a single conversational interface. It shipped with Stripe billing, OAuth integrations, an E2E test suite, and full mobile responsiveness.
The entire system was built in 100 hours by one architect using an AI-first development pipeline. No team of ten. No sprint planning ceremonies. One architect designing systems, one AI pipeline implementing them, and engineering discipline ensuring production quality.
This case study breaks down the architecture decisions, the development timeline, the technical challenges, and the lessons learned — a transparent look at what AI-orchestrated development actually looks like in practice.
By the Numbers
Quantified outcomes from concept to production deployment.
The Stack
Every technology choice serves a specific purpose: type safety for AI agents, developer velocity, and production reliability.
Next.js 15
App Router, RSC, API routes
tRPC
End-to-end type-safe APIs
Prisma + Neon
Type-safe ORM on serverless Postgres
Clerk
Auth, session management, RBAC
Vercel AI SDK
Streaming chat, tool calls, multi-model
Vercel
Edge deployment, preview environments
Architecture Decisions
Every choice was deliberate. The stack isn’t a preference — it’s an agent supervision mechanism.
tRPC over REST
End-to-end type safety means AI agents writing API calls get compile-time feedback. No runtime schema mismatches, no integration bugs that surface only in production.
Prisma over raw SQL
Generated TypeScript types from the schema create a single source of truth. When an agent writes a query, the types enforce correctness at the compiler level.
Clerk over custom auth
Auth is solved infrastructure. Building custom auth would consume 20+ hours with no differentiation. Clerk gives production-grade session management, MFA, and RBAC out of the box.
Neon over traditional Postgres
Serverless Postgres with branching enables preview environments with isolated databases. Every PR gets its own database branch — zero risk of test data leaking to production.
Vercel AI SDK over direct API calls
Unified interface across models (GPT-4, Claude, etc.) with built-in streaming, tool calling, and structured output. Switching models is a config change, not a rewrite.
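The "config change, not a rewrite" claim can be sketched as a model registry. The task names and model identifiers below are hypothetical — the article doesn't show the real mapping:

```typescript
// Hypothetical registry: each logical task maps to a provider/model pair.
// Swapping providers means editing this object, not the call sites.
const MODEL_CONFIG = {
  drafting: { provider: "openai", model: "gpt-4o" },
  summarizing: { provider: "anthropic", model: "claude-sonnet" },
} as const;

type Task = keyof typeof MODEL_CONFIG;

export function modelFor(task: Task): string {
  const { provider, model } = MODEL_CONFIG[task];
  return `${provider}/${model}`;
}
```

Call sites ask for `modelFor("drafting")` and never mention a vendor, which is what makes the swap a one-line change.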
Monorepo with App Router
Collocated API routes, server components, and client components reduce context-switching. The AI agent works in a single codebase with clear file conventions.
Database Design
The Prisma schema defines 12 models that map cleanly to business entities. The key design principle: every model represents a single concept, and relationships are explicit via foreign keys — no implicit coupling, no JSON blobs for structured data.
Core entities include User, Source, Draft, OAuthConnection, and Engagement. The content pipeline flows Source → Draft → Post, with each stage tracked independently for observability.
model User {
  id            String            @id @default(cuid())
  clerkId       String            @unique
  email         String            @unique
  plan          Plan              @default(FREE)
  sources       Source[]
  drafts        Draft[]
  connections   OAuthConnection[]
  conversations Conversation[]
  createdAt     DateTime          @default(now())
}

model Source {
  id        String   @id @default(cuid())
  url       String
  content   String   @db.Text
  summary   String?  @db.Text
  userId    String
  user      User     @relation(fields: [userId], references: [id])
  drafts    Draft[]
  createdAt DateTime @default(now())
}

model Draft {
  id        String      @id @default(cuid())
  title     String
  content   String      @db.Text
  status    DraftStatus @default(DRAFT)
  sourceId  String?
  source    Source?     @relation(fields: [sourceId], references: [id])
  userId    String
  user      User        @relation(fields: [userId], references: [id])
  posts     Post[]
  createdAt DateTime    @default(now())
}

model OAuthConnection {
  id        String   @id @default(cuid())
  platform  Platform
  token     String   @db.Text // encrypted
  userId    String
  user      User     @relation(fields: [userId], references: [id])
  verified  Boolean  @default(false)
  createdAt DateTime @default(now())

  @@unique([userId, platform])
}

AI Pipeline Architecture
The conversational AI interface isn’t a chatbot — it’s an orchestration layer. The user talks to a single interface that can ingest content, draft posts, manage connections, schedule distribution, and analyze engagement. All through natural language.
Under the hood, it’s a tool-calling architecture using the Vercel AI SDK. The LLM decides which tools to invoke based on the conversation context. Each tool is a thin wrapper around a tRPC mutation or query, so every action the AI takes goes through the same type-safe, validated path as a direct API call.
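Stripped of SDK specifics, the dispatch idea reduces to a map from tool name to handler, where each handler wraps what would be a tRPC call in the real system. Everything below is an illustrative sketch — the tool names and return values are invented:

```typescript
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

// Hypothetical registry: in Passband each entry would wrap a tRPC
// mutation or query, so AI actions share the validated API path.
const tools: Record<string, ToolHandler> = {
  createDraft: async (args) => {
    const title = String(args.title ?? "Untitled");
    return `draft created: ${title}`;
  },
  listSources: async () => "2 sources",
};

// The LLM emits a tool name plus arguments; the orchestrator dispatches.
export async function dispatch(
  name: string,
  args: Record<string, unknown>,
): Promise<string> {
  const handler = tools[name];
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args);
}
```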
12 AI Tool Integrations
User message
→ LLM selects tool(s) based on intent
→ Tool executes via tRPC mutation
→ Prisma validates + persists state
→ SSE streams result back to UI
→ Tool output rendered inline
→ LLM continues or awaits next input

Development Timeline
100 hours from empty repository to production deployment. Every phase built on the patterns established in the phase before it.
Infrastructure & Auth
- Prisma schema design with 12 models
- Clerk auth with production credentials
- Stripe billing: products, webhooks, plan gating
- Neon Postgres with connection pooling
Core Product
- Conversational AI chat with 12 tool integrations
- SSE streaming via Vercel AI SDK
- Content drafting with voice fidelity
- Persistent conversation history
Platform Integrations
- 5 distribution platforms: Bluesky, Mastodon, X, Dev.to, Hashnode
- Automated scheduling via cron jobs
- Thread-format parsing for social vs. long-form
- Chat-driven posting and manual triggers
Quality & Polish
- Playwright E2E suite (12 tests, Docker Compose, CI)
- Mobile responsive pass across 19 files
- Landing page with scroll animations and OG images
- SEO: sitemap, robots.txt, JSON-LD, canonical URLs
Production Hardening
- Error boundaries and graceful degradation
- Rate limiting and abuse prevention
- Monitoring dashboards and alerting
- Documentation, runbooks, and knowledge transfer
Challenges & Solutions
The hard problems we solved and the engineering decisions behind each solution.
Multi-Platform Content Formatting
Each distribution platform has different content limits, formatting rules, and thread semantics. Bluesky facets, Mastodon character limits, and Dev.to markdown all need different output.
Built a content transformation pipeline that takes a single canonical draft and produces platform-specific variants. Thread parsing extracts hooks for social, joins for long-form. Each integration has its own formatter, but they share a common interface.
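The shared interface can be sketched as follows. The chunking logic and per-platform limits here are simplified stand-ins (real Bluesky formatting also involves facets, which this ignores):

```typescript
interface PlatformFormatter {
  maxChars: number;
  format(draft: string): string[]; // one entry per post in a thread
}

// Naive word-boundary chunker, for illustration only.
export function chunk(text: string, max: number): string[] {
  const out: string[] = [];
  let cur = "";
  for (const w of text.split(/\s+/)) {
    if (cur && (cur + " " + w).length > max) {
      out.push(cur);
      cur = w;
    } else {
      cur = cur ? cur + " " + w : w;
    }
  }
  if (cur) out.push(cur);
  return out;
}

export const bluesky: PlatformFormatter = {
  maxChars: 300,
  format: (draft) => chunk(draft, 300), // social: thread of short posts
};

export const devto: PlatformFormatter = {
  maxChars: Infinity,
  format: (draft) => [draft], // long-form: a single markdown post
};
```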
AI Output Quality Gates
LLM outputs are probabilistic. Without guardrails, the AI could post content that doesn't match the user's voice or contains hallucinated information.
Implemented a multi-stage quality pipeline: voice fidelity scoring against user profiles, confidence thresholds that gate automated posting, and human-in-the-loop approval for high-stakes content. Every AI output passes structured review before reaching a platform.
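A gate like that boils down to a small decision function. The thresholds below are invented for illustration — the article doesn't disclose the real cutoffs:

```typescript
type Decision = "auto_post" | "needs_review" | "reject";

// Hypothetical thresholds: voiceScore comes from fidelity scoring
// against the user profile, confidence from the model itself.
export function gate(voiceScore: number, confidence: number): Decision {
  if (voiceScore >= 0.9 && confidence >= 0.8) return "auto_post";
  if (voiceScore >= 0.6) return "needs_review"; // human-in-the-loop
  return "reject";
}
```

The key property is that "auto_post" is the narrowest path: anything uncertain falls through to review rather than to a platform.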
Streaming + Tool Calls
The chat interface needs to stream AI responses in real-time while also executing tool calls that modify server state — scheduling posts, connecting platforms, editing drafts.
Used the Vercel AI SDK's server-sent event streaming with interleaved tool execution. Tool results render inline as the response streams. State mutations happen server-side via tRPC, and the UI optimistically updates while the stream continues.
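The interleaving pattern is easiest to see as an async generator: text deltas stream, a tool runs mid-stream, and its result is yielded inline before text resumes. This is a shape sketch, not the SDK's actual API:

```typescript
// Events the UI renders as they arrive over SSE.
type StreamEvent =
  | { type: "text"; delta: string }
  | { type: "tool"; name: string; result: string };

// Hypothetical response: stream text, pause to execute a tool
// server-side, yield its result inline, then keep streaming.
export async function* respond(): AsyncGenerator<StreamEvent> {
  yield { type: "text", delta: "Scheduling your post… " };
  const result = await Promise.resolve("post queued for 9:00");
  yield { type: "tool", name: "schedulePost", result };
  yield { type: "text", delta: "Done — it goes out at 9:00." };
}
```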
OAuth Credential Management
Five platforms means five different OAuth flows, token refresh strategies, and failure modes. Credentials need to be stored securely and tested regularly.
Unified OAuth connection model in Prisma with encrypted token storage. Each platform connection includes a test endpoint that validates credentials on connect and on a rolling schedule. Disconnection is clean — revokes tokens and removes stored credentials.
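The article says tokens are encrypted but not how. As one reasonable sketch, AES-256-GCM with the IV and auth tag stored alongside the ciphertext in the `token` column would look like this (an assumption, not Passband's documented scheme):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const ALG = "aes-256-gcm";

export function encryptToken(token: string, key: Buffer): string {
  const iv = randomBytes(12); // fresh IV per token
  const cipher = createCipheriv(ALG, key, iv);
  const ct = Buffer.concat([cipher.update(token, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Pack iv + tag + ciphertext into one base64 column value.
  return Buffer.concat([iv, tag, ct]).toString("base64");
}

export function decryptToken(stored: string, key: Buffer): string {
  const buf = Buffer.from(stored, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = createDecipheriv(ALG, key, iv);
  decipher.setAuthTag(tag); // tampering makes final() throw
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```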
Authentication & Authorization
Clerk handles authentication with production-grade session management, social login, and MFA support. But auth is just the entry point. Authorization is where the real decisions happen.
Every tRPC procedure is wrapped in an auth middleware that extracts the Clerk session, resolves the internal user, and attaches it to the context. Downstream resolvers never touch auth primitives directly — they receive a typed user object with plan information, connection status, and feature flags.
Plan-gated features (number of sources, drafts, connected platforms) are enforced at the tRPC layer, not the UI. The UI reflects limits, but the server enforces them. This means an AI agent writing a new feature doesn’t need to think about billing — the middleware handles it automatically.
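Server-side enforcement of that kind reduces to a check that runs before any resolver logic. The limit numbers here are invented — the article doesn't publish the real plan tiers:

```typescript
// Hypothetical plan limits, enforced at the tRPC layer.
const LIMITS = {
  FREE: { sources: 3, drafts: 10, platforms: 1 },
  PRO: { sources: 50, drafts: 500, platforms: 5 },
} as const;

type Plan = keyof typeof LIMITS;
type Resource = keyof (typeof LIMITS)["FREE"];

// Throws before the resolver runs: the UI can be wrong, stale, or
// bypassed entirely, and the server still enforces the limit.
export function assertWithinLimit(
  plan: Plan,
  resource: Resource,
  currentCount: number,
): void {
  const max = LIMITS[plan][resource];
  if (currentCount >= max) {
    throw new Error(`Plan ${plan} allows at most ${max} ${resource}`);
  }
}
```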
Deployment & CI/CD
Every push triggers a pipeline: TypeScript compilation, ESLint, Prettier, Prisma validation, and the Playwright E2E suite. Vercel deploys preview environments for every PR with isolated Neon database branches, so reviewers test against real data without risk.
Production deploys happen on merge to main. Vercel’s edge network handles global distribution. Neon’s serverless Postgres scales connection pooling automatically. The entire deployment pipeline requires zero manual intervention.
Push to branch
→ TypeScript: tsc --noEmit
→ Lint: eslint + prettier
→ Schema: prisma validate
→ Test: playwright (Docker Compose)
→ Preview: Vercel deploy + Neon branch
→ Review: architect reviews diff
→ Merge to main
→ Production: Vercel edge deploy
→ Monitor: uptime + error tracking

Performance & Scaling
Next.js App Router with React Server Components means most pages render server-side with zero client JavaScript by default. Interactive components opt into client rendering explicitly. The result: fast initial loads and minimal hydration overhead.
Neon’s serverless architecture handles connection pooling and auto-scaling. During peak usage, compute scales up automatically with no configuration changes; during idle periods, it scales to zero — no paying for idle compute.
The cron-based scheduling system processes queued posts every 5 minutes. Each run is idempotent: it checks for pending posts, validates credentials, publishes, and records results. Failures are logged and retried with exponential backoff. The system handles partial failures gracefully — a Bluesky outage doesn’t block Mastodon posting.
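Two pieces of that design can be sketched directly: the backoff schedule and the idempotent pick-up query. The base delay, cap, and status names are illustrative, not Passband's actual values:

```typescript
// Exponential backoff with a cap (values are illustrative).
export function backoffDelays(attempts: number, baseMs = 1000, capMs = 60_000): number[] {
  return Array.from({ length: attempts }, (_, i) => Math.min(baseMs * 2 ** i, capMs));
}

type QueuedPost = { id: string; status: "PENDING" | "SENT" | "FAILED" };

// Idempotent run: only PENDING posts are picked up, so rerunning the
// cron after a crash cannot double-publish anything already SENT.
export function duePosts(queue: QueuedPost[]): QueuedPost[] {
  return queue.filter((p) => p.status === "PENDING");
}
```

Per-platform isolation falls out of recording each publish result independently: a FAILED Bluesky row is retried on the next run while the SENT Mastodon row is skipped.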
Lessons Learned
What we’d do again, what we’d change, and what surprised us.
What Worked
- Type safety as agent guardrails — the compiler caught 90% of structural errors before human review
- Entity patterns that compound — each new entity shipped faster than the last
- Clean separation of concerns — zero cross-feature regressions across 30+ PRs
- tRPC end-to-end types — API integration bugs essentially eliminated
- Fixed-scope tickets with acceptance criteria — the AI agent performed best with clear boundaries
What We’d Improve
- Earlier E2E test setup — writing tests after features made some rework necessary
- More granular error boundaries — a few edge cases surfaced only in production
- Platform-specific rate limiting from day one — each API has different throttling rules
- Better draft versioning — content edits were destructive instead of append-only
- Dedicated staging environment — preview deploys are good but not identical to production
Want Results Like This?
Our AI Orchestration Sprint delivers production-grade systems in 4 weeks. Same architecture discipline, same engineering rigor, applied to your problem.