Credits flow through four product areas. Each is shipped and working. The question is where to focus to drive 27.8M credits/month, the consumption implied by the $10M ARR target at $0.03/credit.
| Product Area | How It Generates Credits | Current Evidence | The Opportunity |
|---|---|---|---|
| SQL Execution | Query seconds → query credits. Pipelines run daily. | Marinade: 64K credits in 35 days (~55K/month run rate). CMC/Stockholm: high volume during trials. | Proven credit driver. Orgs building data pipelines consume 1,800–6,000 credits/day. |
| Chat | AI tokens + triggered queries. Sessions are one-off. | 6 orgs active. Laminated, Avalanche, Monad, Aleo exploring. 79% AI, 21% query. | Entry point for new orgs. Low consumption today ($50–$200/mo) because sessions don’t recur. |
| Agentic CLI | Agent reasoning + iterative queries. 3–60x more credits than UI. | Internal power users: 300–500 credits/day. Zero external adoption. | Highest credits per user. An agent session triggers 3–10 queries that iterate and self-correct. |
| Automations (alerts, briefings, scheduled queries) | Recurring credits without human action. Runs on schedule. | A few orgs have automations, all created by the Flipside team. | Best NRR lever. Converts one-time exploration into baseline recurring consumption. |
Two months from now, here’s what we should be able to point to.
| Metric | Current | April ’26 Target | Why This Matters for $10M |
|---|---|---|---|
| Orgs at $500+/mo | Active orgs across tiers | 5 | Need 440 at target. First 5 prove the $500/mo floor works as a business. |
| External Agentic CLI users | 0 external | 3+ | Highest credits/user surface. Must prove it works outside internal team. |
| Orgs using agents to build queries | 0 external | 3+ | Core thesis: agents building pipelines drives AI + query credits together. |
| Self-serve Automations | Built, team-created only | Self-serve | Recurring credits = NRR. Orgs must create their own automations. |
| Orgs using 2+ product areas | Orgs siloed in one area | 3+ | Cross-surface usage (SQL + Chat, SQL + CLI) = sticky, higher ARPU. |
Thesis: programmatic usage → credit consumption → revenue. Credits are consumed in two categories: AI tokens (chat completions, agent reasoning) and query execution (SQL seconds). Across 21 external orgs, 86% of all credits come from programmatic access (API, CLI, automated pipelines) — not from users clicking around in the UI. If we want to grow revenue, we need to invest in product that drives programmatic usage.
| Signal | Classification | What It Means |
|---|---|---|
| session_id = NULL | Programmatic — Direct API | Query or AI call made outside any chat session. Pure API/SDK usage. |
| meta.source = "cli" or "api" | Programmatic — CLI/MCP | Chat session initiated from CLI, Cursor, or MCP tool. Has a session, but not human-in-the-UI. |
| meta.source = "web" or "agents-page" | Manual — Web UI | User in the browser, chatting with agents or running queries manually. |
| Burst queries (<2s gaps) | Programmatic — Pipeline | Rapid-fire queries at machine speed. Marinade: 90% burst rate. CMC: 98%. |
The first pass classified events by session_id presence alone. After incorporating meta.source, the classification moved to 97.7% programmatic (233 of 298 sessions are CLI-originated, plus all sessionless query events). This confirmed the heuristic works.
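The heuristic can be sketched in a few lines. This is an illustrative reconstruction, not the production classifier: the field names (session_id, meta.source) follow the appendix, the 2-second burst threshold comes from the signal table, and the label strings are assumptions.

```python
from datetime import timedelta

# Session sources, per the signal table (assumed exhaustive for this sketch)
PROGRAMMATIC_SOURCES = {"cli", "api"}     # CLI, Cursor, MCP-originated
MANUAL_SOURCES = {"web", "agents-page"}   # human-in-the-UI

def classify_event(session_id, source):
    """Coarse programmatic/manual label for a single usage event."""
    if session_id is None:
        return "programmatic:direct-api"   # no chat session at all
    if source in PROGRAMMATIC_SOURCES:
        return "programmatic:cli-mcp"
    if source in MANUAL_SOURCES:
        return "manual:web-ui"
    return "unknown"

def burst_rate(timestamps, gap=timedelta(seconds=2)):
    """Share of consecutive queries fired less than `gap` apart (machine speed)."""
    ts = sorted(timestamps)
    if len(ts) < 2:
        return 0.0
    bursts = sum(1 for a, b in zip(ts, ts[1:]) if b - a < gap)
    return bursts / (len(ts) - 1)
```

A pipeline org like Marinade would surface here as sessionless query events with a burst rate near 0.9; a Chat org surfaces as web sessions at human pace.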
| Pattern | Orgs | % of Credits | Programmatic % | What They Do |
|---|---|---|---|---|
| API Pipeline | 3 | 80% | ~100% | Automated SQL pipelines via API. Machine-speed burst queries. Zero AI. High volume, low stickiness. |
| AI Chat | 6 | 13% | ~15% | Exploring data through agents in the UI. Human-paced. High AI %, low query volume. Undermonetized. |
| Mixed | 3 | 11% | ~60% | Both AI exploration and query execution. Closest to the target pattern. Highest stickiness potential. |
Credit consumption follows a clear hierarchy. Each step up the programmatic ladder multiplies revenue per user:
| Access Pattern | Avg Credits/Day | Rev Multiplier | Why |
|---|---|---|---|
| Manual UI only | 50–100 | 1x | Browsing, occasional chat. Human-speed ceiling caps consumption. |
| Manual chat + API queries | 200–350 | 3–5x | Chat for exploration, then direct API for extraction. Eric Stone’s pattern. |
| Agent-driven (CLI/Cursor/MCP) | 300–500 | 5–10x | One agent session triggers 3–10 downstream queries that iterate and self-correct. |
| Automated pipeline (API) | 1,800–6,000 | 20–60x | Machines run 24/7 at burst speed. No human bottleneck. |
| Organization | Credits | AI % | Qry % | Cred/Day | Est Rev/Mo | Pattern |
|---|---|---|---|---|---|---|
| Marinade Finance | 64,025 | 3% | 97% | 1,829 | $1,646 | Pipeline |
| CMC Capital SRL | 20,817 | 0% | 100% | 5,204 | $4,684 | Pipeline |
| Stockholm SSE | 18,066 | 0% | 100% | 6,022 | $5,420 | Pipeline |
| CoW Swap | 6,922 | 36% | 64% | 865 | $779 | Mixed |
| Laminated Labs | 6,242 | 91% | 9% | 223 | $201 | AI Chat |
| Near Foundation | 5,879 | 31% | 69% | 210 | $189 | Mixed |
| Avalanche | 3,591 | 81% | 19% | 257 | $231 | AI Chat |
| Monad | 2,684 | 71% | 29% | 112 | $101 | AI Chat |
| Aleo | 2,631 | 86% | 14% | 329 | $296 | AI Chat |
| Somnia | 1,302 | 74% | 26% | 93 | $84 | AI Chat |
| Karatage | 1,228 | 34% | 66% | 123 | $111 | Mixed |
| Flow | 824 | 16% | 84% | 59 | $53 | Light |
| Attestant | 628 | 88% | 12% | 45 | $41 | AI Chat |
| + 8 orgs under 500 credits (Circle, Solana, Ink, Polygon, Sui, Morpho, Kraken, Movement) | — | — | — | — | — | — |
The ideal customer uses AI agents to explore data and runs queries to extract it. Query-only users are high-volume but commoditized (any SQL warehouse will do). AI-only users are low-volume and substitutable (ChatGPT + a SQL tool). When both layers are engaged, AI drives queries and queries feed AI — a flywheel that’s hard to leave.
Two internal power users with completely different workflows independently converged on similar consumption patterns. This gives us the per-user building block — the question is how many users per org we can drive.
| | Jim Myers — “Agent-First” | Eric Stone — “Explorer” |
|---|---|---|
| AI : Query ratio | 52 : 48 | 44 : 56 |
| Credits / day | ~500 | 326 |
| Monthly projection | ~15,000 ($450) | ~9,800 ($294) |
| Workflow | CLI agents trigger cascading queries | Chat exploration, then direct API queries |
| Channel | 78% CLI/Cursor, 49% no-session queries | 11 chat sessions, 105 direct API queries |
| Input | Value | Basis |
|---|---|---|
| Credits per user per day | 300–500 | Eric = 326, Jim = 500. Conservative: 350. |
| AI : Query ratio | 40 : 60 | Eric = 44:56, Jim = 52:48. Midpoint. |
| Active days per month | 22 | Weekdays |
| Revenue per user per month | $198–$330 | 6,600–11,000 credits × $0.03 |
| Org Stage | Users | Cred/U/Day | Credits/Mo | Rev/Mo | Rev/Year | Notes |
|---|---|---|---|---|---|---|
| Entry — small team | 2–3 | 350 | 16,700+ | $500 | $6,000 | Minimum. 2–3 users, team workflows. |
| Healthy — data team | 5 | 350 | 38,500 | $1,155 | $13,860 | Target. Pipelines + agents + analysts. |
| Strong — power team | 10 | 450 | 99,000 | $2,970 | $35,640 | Ideal. Multiple workflows, org-wide. |
| Enterprise — scaled | 20 | 500 | 220,000 | $6,600 | $79,200 | Enterprise. Deep integration across teams. |
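The arithmetic behind both tables is easy to check. A minimal sketch, using the model inputs stated above ($0.03/credit, 22 active weekdays/month); the function name and shape are illustrative:

```python
CREDIT_PRICE = 0.03   # $ per credit, per the model inputs
ACTIVE_DAYS = 22      # weekdays per month

def monthly_economics(users, credits_per_user_day):
    """Monthly credits and revenue for an org at a given stage."""
    credits = users * credits_per_user_day * ACTIVE_DAYS
    return credits, credits * CREDIT_PRICE

# "Healthy" stage: 5 users at 350 credits/day
credits, rev = monthly_economics(5, 350)   # 38,500 credits, $1,155/mo
```

The same function reproduces the "Strong" row (10 users at 450 credits/day gives 99,000 credits and $2,970/mo), so the stage table is internally consistent with the per-user inputs.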
Margins are identical (80%) on AI and query credits. The 40:60 target is about stickiness and moat, not profitability per unit. More AI usage compounds revenue — one agent session at ~$0.75 of AI credits triggers ~$1.20 of query credits.
With a $500/month floor, every org is a real business customer. Product drives adoption and expansion. Sales closes the largest accounts.
| Org Cohort | Count | Avg ARPU | Rev/Mo | % of Rev | Who These Are |
|---|---|---|---|---|---|
| Entry ($500–$1K) | 200 | $700 | $140K | 17% | 2–3 users, early workflows. Just cleared the floor. |
| Growing ($1K–$3K) | 150 | $1,800 | $270K | 32% | 5–8 users, daily workflows. Expanding programmatic access. |
| Strong ($3K–$6K) | 60 | $4,000 | $240K | 29% | 10+ users, pipelines + agents. The target org. |
| Enterprise ($6K+) | 30 | $6,000 | $180K | 22% | 20+ users, deep integration. Sales-assisted. |
| Total | 440 | $1,886 | $830K | 100% | |
That mix works out to $830K/month: $10.0M ARR and $8.0M gross profit at 80% margin.
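The cohort mix can be sanity-checked the same way. Counts and ARPUs below are the planning assumptions from the table, not actuals:

```python
# (org count, avg ARPU in $/mo) per cohort, from the table above
cohorts = {
    "entry":      (200, 700),
    "growing":    (150, 1_800),
    "strong":     (60, 4_000),
    "enterprise": (30, 6_000),
}

orgs = sum(n for n, _ in cohorts.values())
rev_month = sum(n * arpu for n, arpu in cohorts.values())
blended_arpu = rev_month / orgs
arr = rev_month * 12
# 440 orgs, $830K/mo, ~$1,886 blended ARPU, $9.96M ARR (rounds to $10.0M)
```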
Every org starts at $500/month or doesn’t start at all. The question is how fast they expand. If orgs grow from $500 → $1,200 → $3,000/month over 6–12 months as they add users and workflows, net revenue retention exceeds 150%. That expansion compounds.
| Stage | Users | Credits/Mo | Rev/Mo | What Changes |
|---|---|---|---|---|
| Month 1 — Onboard | 2–3 | 16,700 | $500 | Floor. Small team sets up initial queries + AI sessions. |
| Month 3 — Expand | 4–5 | 33,000 | $990 | Additional analysts join. First pipelines forming. |
| Month 6 — Data team | 5–7 | 49,000 | $1,470 | Team pipelines + agents running daily. AI at 30%+. |
| Month 12 — Power team | 10 | 99,000 | $2,970 | Org-wide: pipelines + agents + analysts. |
| Month 18 — Scaled | 20 | 220,000 | $6,600 | Full adoption, 20+ active users. Enterprise territory. |
Today’s evidence: Marinade is at $1,646/month (54,870 credits) with sustained daily usage over 35 days. CMC and Stockholm showed high daily burn rates during trials but haven’t sustained. The question is whether we can get orgs like Marinade using agents to build their workflows — moving from 0% AI to 30%+ to make the revenue sticky and expandable.
Every org on this roadmap is at $500+/month. “Orgs” means paying business customers above the floor.
| Milestone | Paying Orgs | Avg ARPU | Rev/Mo | ARR | AI % | What Has to Be True |
|---|---|---|---|---|---|---|
| Today Feb ’26 | early | — | — | — | ~0% | 21 orgs active across product areas. Credit consumption validated. Pipeline orgs driving highest volume. Zero agent adoption among paying orgs. |
| Month 2 Apr ’26 | 5 | $1,800 | $9,000 | $108K | 10%+ | Gate. 5 paying orgs. CLI packaged for external. 1+ pipeline org using agents to build queries. |
| Month 4 Jun ’26 | 10 | $1,400 | $14,000 | $168K | 25% | 5 net new qualified orgs. Early orgs expanding — first crosses $1K/mo. Pipeline orgs building with agents. |
| Month 6 Aug ’26 | 18 | $1,250 | $22.5K | $270K | 35% | Validation checkpoint. 18 paying orgs at $500+. At least 5 above $1K/mo. AI adoption visible. Expansion signal confirmed. |
| Month 9 Nov ’26 | 40 | $1,375 | $55K | $660K | 38% | Adding ~7 qualified orgs/month. 10+ orgs above $1K/mo. Early cohorts hitting $2K+. ARPU rising with expansion. |
| Month 12 Feb ’27 | 75 | $1,600 | $120K | $1.44M | 40% | Scale checkpoint. Adding ~12 orgs/month. 20+ orgs above $1K/mo. First enterprise accounts ($6K+). NRR > 130%. |
| Month 15 May ’27 | 140 | $1,714 | $240K | $2.9M | 42% | Adding ~20 orgs/month. Expansion engine running. 10+ orgs at $3K+. Dedicated sales for enterprise tier. |
| Month 18 Aug ’27 | 230 | $1,783 | $410K | $4.9M | 43% | Adding ~30 orgs/month. 25+ orgs above $3K/mo. Enterprise pipeline growing. NRR > 150%. |
| Month 21 Nov ’27 | 340 | $1,838 | $625K | $7.5M | 44% | Adding ~35 orgs/month. Older cohorts at $3K–6K pulling avg ARPU up. Enterprise closings accelerating. |
| Month 24 Feb ’28 | 440 | $1,886 | $830K | $10.0M | 45% | Target. 200 entry ($700), 150 growing ($1.8K), 60 strong ($4K), 30 enterprise ($6K). Adding ~35 qualified orgs/month. |
Everything is built. The work is packaging, onboarding, and directing users toward the actions that drive credits. The single question for every initiative: does this convert data usage into AI usage?
Agentic CLI
The problem: The CLI works — Jim generates 500 credits/day through Cursor.
But zero external users have adopted it because it’s not clear what it is, how to set it up, or what you’d use it for.
There’s no onboarding, no setup guide, no “here’s what you get.”
Do: Setup guide + quick-start tutorial. Onboarding flow that gets a user from zero to first agent invocation in 10 minutes.
Clear packaging: “Access Flipside agents from Cursor, Claude Code, or any MCP-compatible tool.”
Measure: 3+ external Agentic CLI users by April.
SQL Execution + Chat + Agentic CLI
The insight: People come here for the data. Marinade is writing SQL by hand and spending $1,646/month.
They’re getting value — but doing the hard work themselves. The pitch: let agents build your pipelines.
An agent writes the SQL, iterates on it, deploys it as an automation. Faster for the user, and it drives both AI and query credits.
Do: Hands-on sessions with pipeline orgs. Demo the agent building a pipeline they already have —
“you wrote this in SQL, the agent does it in 30 seconds.”
Make “build with an agent” the suggested path inside the query interface.
Measure: At least 1 pipeline org with AI% >10% by April.
Automations
The problem: Automations are built and working — briefings, alerts, scheduled queries — but users don’t know what they’d create one for or how.
We provide examples but there’s no clear path from “I just ran a query” to “this runs every day and sends me a briefing.”
Every automation deployed was created by our team.
Do: Clear CTAs in the product after a query or chat session: “Want this as a daily briefing?” “Set up an alert when this crosses X.”
Templates for common use cases. Self-serve creation flow that doesn’t require our team.
Measure: 3+ orgs with at least one self-created Automation by April.
Why now: 5 orgs at $500+/month is the first proof point that the floor works as a business.
These need to be teams (2–3+ users) using multiple product areas, not individuals siloed in one.
Do: Identify from existing pipeline (10 sub-$500 orgs) or net new.
Onboard them across SQL + Chat + CLI. Direct them — don’t give them five options, give them one path.
Measure: 5 orgs above $500/month by April.
Why: Most orgs have 1–2 users. The floor is $500/mo (2–3 users). Healthy is $1,155+ (5 users).
Every additional user is linear ARPU growth. New users should land in the product area that fits their role:
analysts into SQL + Chat, engineers into the Agentic CLI.
Do: Team invite flow, role-based onboarding, shared dashboards/queries, org-level billing visibility.
The first experience for user #2 on an org should be as directed as user #1.
Measure: Avg users per paying org > 4 by August.
Automations
Why: Credits consumed without human action are the purest revenue —
and the product already supports it. Once self-serve Automations work (item 3), the goal is adoption:
every paying org should have at least one recurring automation. This is what makes NRR > 120%.
Do: Automation templates, suggested automations based on usage, “set it and forget it” flows.
Measure: 30%+ of credits from Automations by August.
| Risk | Why It Matters | How to Validate |
|---|---|---|
| Pipeline orgs churn | Highest-volume orgs run raw SQL pipelines with zero switching cost — any SQL warehouse works. Without AI adoption, there’s no moat. | Get pipeline orgs using agents to build their workflows. AI usage = switching cost. Track AI % per pipeline org monthly. |
| Per-user economics don’t replicate | 300–500 credits/day is validated on 2 internal power users. External users may consume less, use AI differently, or not adopt agents at all. | Instrument per-user daily credit consumption for external orgs. Need 10+ external data points in 3 months. |
| Orgs don’t expand | The model assumes orgs grow from 1 to 5–10 users. If most orgs stay at 1–2 users, avg ARPU stays at $200–400 and $10M requires 2,000–4,000 orgs. | Track users-per-org and ARPU monthly. If no expansion signal by month 6, reassess the target. |
| Acquisition engine doesn’t exist yet | 440 orgs at $500+/month in 24 months requires adding ~18 qualified orgs/month avg. Today there’s no scalable acquisition channel. | Identify top-of-funnel channel by month 4. Test: content, partnerships, PLG viral loops, outbound. Need 5+ qualified orgs/month by month 6. |
| Metric | Today | Apr ’26 (2 mo) | Aug ’26 (6 mo) |
|---|---|---|---|
| Orgs at $500+/mo | early | 5 | 18 |
| Orgs building with agents | 0 external | 3+ | 50%+ of paying orgs |
| External Agentic CLI users | 0 | 3+ | 15+ |
| AI % across paying orgs | ~0% | >10% | 35% |
| Orgs with self-created Automations | 0 (team-created) | 3+ | 10+ |
| Orgs using 2+ product areas | siloed | 3+ | majority |
| Orgs > $1K/month | early | 3+ | 10+ |
Data source: OrgUsageEvent table, pulled via internal-cli -e prod admin usage-events. Scope: 21 external organizations; the Flipside internal org (538,644 credits) is excluded. The category field is either ai_usage (chat completions) or query_execution (SQL execution); credits are calculated by the usage event service from token counts and query seconds respectively.

Classification rules:
- session_id = NULL → Direct API (no chat session involved)
- ChatSession.meta.source = "cli" | "api" → CLI/MCP-driven
- ChatSession.meta.source = "web" | "agents-page" → Web UI
- Earlier passes classified on session_id presence/absence alone.