B2B SaaS · OpenClaw (Agent Framework)

OpenClaw powers a custom research agent inside a compliance SaaS product

A compliance SaaS company needed a customer-facing AI research agent built directly into their product. We deployed OpenClaw in their codebase: the client owns the code, with custom data integrations and production-grade guardrails from day one.

Published February 2026
$420K
ARR added in first 6 months

The challenge

A compliance SaaS company (customers: Fortune 500 legal and compliance teams) needed an AI agent that could research regulatory changes across 50+ jurisdictions and summarize the implications for each customer's specific risk profile. The product team had built a v1 using off-the-shelf SDKs; it hallucinated case citations, couldn't handle customer-specific data with the required security, and cost more than expected per user. Competitors were shipping similar features and starting to win deals in demos.

Our approach

  • 01

    Architectural review of the v1 build, followed by the decision to rebuild on OpenClaw, with code in the client's repo and their engineering team involved throughout

  • 02

    Implemented retrieval-augmented generation over primary regulatory sources with citation verification (no hallucinated references)

  • 03

    Built per-customer data access controls so that each customer's agent sees only its own risk profile and document library

  • 04

    Added observability dashboards showing every agent decision, retrieved source, and token usage

  • 05

    Implemented model routing — Haiku for classification tasks, Sonnet for synthesis, Opus for high-stakes analytical briefings

  • 06

    Pair-programmed with the client's senior engineers so they could maintain and extend the agent after the engagement
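The model routing in step 05 can be sketched as a simple task-type lookup. This is a minimal illustration only: the task categories, model identifiers, and `route_model` function below are assumptions for the sketch, not the client's production configuration.

```python
# Illustrative tiered model routing: cheap models for simple tasks,
# stronger (and costlier) models only where the stakes justify it.
# Model names and categories are hypothetical placeholders.

TASK_ROUTES = {
    "classification": "haiku",   # fast, cheap: tagging and triage
    "synthesis": "sonnet",       # mid-tier: summarizing across sources
    "analysis": "opus",          # high-stakes analytical briefings
}

def route_model(task_type: str) -> str:
    """Return the model tier for a task, defaulting to the mid-tier."""
    return TASK_ROUTES.get(task_type, "sonnet")
```

Because the routing is a pure function of task type, it is invisible to end users while cutting spend on the high-volume, low-complexity calls that dominate usage.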

Results

  • $420K ARR added in first 6 months post-launch (the agent closed deals previously lost to competitors)

  • Citation accuracy: 99.4% (verified against primary sources) — up from ~70% on v1

  • Per-user cost: 58% lower than v1 due to model routing

  • Feature-specific NPS: 71 (highest in the product)

  • Client engineering team now runs the agent fully autonomously post-engagement

Timeline

Engagement: 12 weeks from scoping to production launch. Ongoing retainer for model upgrades and new use cases.

What we learned

  • Pair programming with the client's engineers was the highest-leverage part of the engagement — they own it now
  • Model routing was invisible to users but saved >50% on costs without quality compromise
  • Citation verification (every reference re-checked programmatically) was what unlocked enterprise trust
  • The v1 failed because it treated AI as a feature, not a product surface area with its own architecture
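The programmatic citation re-check mentioned above can be sketched as a membership test against an index of primary sources: any citation the agent emits that cannot be matched to the index is flagged rather than shown. The `verify_citations` function and sample IDs are hypothetical names for illustration, not the actual verification pipeline.

```python
def verify_citations(cited_ids: list[str], primary_index: set[str]) -> tuple[list[str], list[str]]:
    """Split citations into verified and flagged lists.

    `primary_index` stands in for a lookup against the primary-source
    corpus; anything not found there is treated as unverifiable.
    """
    verified = [c for c in cited_ids if c in primary_index]
    flagged = [c for c in cited_ids if c not in primary_index]
    return verified, flagged
```

The key design choice is that verification happens after generation and before display, so a hallucinated reference can never reach the user even when the model produces one.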

"Our v1 was a liability. OpenClaw is a strategic asset. The difference isn't the model — it's the discipline around the model. We wish we'd started with this approach."

VP of Engineering, B2B SaaS

Want a case study like this?

Tell us about your goals. We'll send a scoped proposal within one business day — no pressure, no obligation.