ContractBuddy
Built an AI-powered contract reconciliation platform that compares deal summaries with legal contracts, identifies risks, and highlights mismatches automatically. Reduced manual review time by over 70%.
Problem
Contract managers manually reviewed agreements line-by-line to verify payment terms, usage rights, territories, and duration. The process was slow, error-prone, and did not scale.
The platform is used internally to analyse influencer and brand agreements. These contracts are high-volume, varied in structure, and often inconsistent with the original deal summary.
Average review time per contract: 45 to 90 minutes.
Approach
Built a hybrid AI and deterministic parsing pipeline. The system extracts structured data, classifies clauses, compares the contract against the source agreement, and flags discrepancies.
The key design decision: keep a human reviewer in control. The AI highlights what matters. The human makes the call.
How it works
- Document ingestion. DOCX and PDF support with chunking, embedding, and semantic retrieval
- Structured extraction. Claude and GPT extraction adapters pull key terms from unstructured legal text
- Clause classification. Deterministic matching layer categorises clauses by type (payment, territory, usage rights, duration)
- Reconciliation. Compares extracted terms against the original deal summary and flags mismatches
- Risk highlighting. Surfaces discrepancies with severity ratings for human review
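The reconciliation step above can be sketched in TypeScript. The `ExtractedTerm`, `DealSummary`, and `Mismatch` shapes here are hypothetical simplifications for illustration, not the production schema:

```typescript
// Hypothetical, simplified term shapes; the production schema is richer.
type ClauseType = "payment" | "territory" | "usage_rights" | "duration";

interface ExtractedTerm {
  clause: ClauseType;
  value: string; // normalised text, e.g. "net 30", "worldwide"
}

type DealSummary = Partial<Record<ClauseType, string>>;

interface Mismatch {
  clause: ClauseType;
  expected: string | undefined;
  found: string | undefined;
  severity: "high" | "medium";
}

// Compare terms extracted from the contract against the deal summary
// and flag any discrepancies for human review.
function reconcile(terms: ExtractedTerm[], summary: DealSummary): Mismatch[] {
  const mismatches: Mismatch[] = [];
  const byClause = new Map(terms.map((t) => [t.clause, t.value]));
  const clauses: ClauseType[] = ["payment", "territory", "usage_rights", "duration"];

  for (const clause of clauses) {
    const expected = summary[clause];
    const found = byClause.get(clause);
    if (expected === undefined && found === undefined) continue;
    if (expected?.toLowerCase() !== found?.toLowerCase()) {
      mismatches.push({
        clause,
        expected,
        found,
        // In this sketch, payment discrepancies are treated as higher risk.
        severity: clause === "payment" ? "high" : "medium",
      });
    }
  }
  return mismatches;
}
```

The output is a list of flagged discrepancies with severity ratings; the human reviewer still makes the final call on each one.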
Architecture
Frontend: Next.js 15, React, Tailwind CSS, Supabase Auth
Backend: Supabase PostgreSQL, pgvector embeddings, Node workers
AI pipeline: Claude / GPT extraction adapters, hybrid parsing system, deterministic clause matching layer
Document processing: DOCX + PDF support, chunking, embedding, semantic retrieval
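The document-processing stage depends on splitting contracts into overlapping chunks before embedding. A minimal sketch of word-window chunking (the chunk size and overlap here are illustrative defaults, not the production settings):

```typescript
// Split text into overlapping word-window chunks for embedding.
// chunkSize and overlap are illustrative, not the production values.
function chunkText(text: string, chunkSize = 200, overlap = 40): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  const step = chunkSize - overlap;
  for (let start = 0; start < words.length; start += step) {
    chunks.push(words.slice(start, start + chunkSize).join(" "));
    if (start + chunkSize >= words.length) break; // final window reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded and stored in a pgvector column, so semantic retrieval can pull the relevant passages of a long contract rather than feeding the whole document to the model.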
Results
- Review time reduced from an average of roughly 60 minutes to around 15 minutes per contract
- 75% time saving across the contract review workflow
- Improved accuracy. Automated extraction catches terms human reviewers miss under time pressure
- Compliance improvements. Standardised review process reduces risk of missed clauses
Now forming the foundation of a commercial SaaS product.
Lessons
The hybrid approach (AI extraction combined with deterministic matching) proved more reliable than pure LLM analysis. Legal text needs precision. Generative models are good at extraction but unreliable for logical comparison without guardrails.
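The guardrail idea in practice: let the model extract, then do the comparison in plain code. A hypothetical normaliser for payment terms, so that "Net 30 days" and "net-30" compare equal deterministically instead of via another LLM call:

```typescript
// Normalise free-text payment terms before deterministic comparison.
// A hypothetical illustration; real normalisation rules would be broader.
function normalisePaymentTerm(raw: string): string {
  const lowered = raw.toLowerCase().trim();
  const net = lowered.match(/net[\s-]*(\d+)/); // "Net 30 days" -> "net 30"
  if (net) return `net ${net[1]}`;
  return lowered.replace(/\s+/g, " ");
}

function paymentTermsMatch(a: string, b: string): boolean {
  return normalisePaymentTerm(a) === normalisePaymentTerm(b);
}
```

Pushing comparison logic into deterministic code like this is what makes mismatch flags reproducible and auditable, which matters for legal review.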
Starting with a narrow domain (influencer contracts) made the system practical. A generic “contract AI” would have been too broad to build well.