AI & RAG

Meridian
AI Knowledge Base.

Client
Meridian Consulting
Year
2024
Timeline
6 weeks
Services
RAG Architecture
LLM Integration
UI/UX Design
API Development
Meridian AI knowledge base interface
91%
reduction in time spent searching docs
<300ms
average retrieval response time
94%
answer accuracy on internal benchmarks
6wk
from brief to production
Overview

A production RAG system that lets 200 consultants query a decade of internal documents, methodologies, and case studies in plain English — and get accurate answers in under a second.

Making institutional knowledge instantly accessible.

Meridian's consultants were spending hours every week hunting through shared drives, Confluence pages, and email threads for answers that existed somewhere in the company's institutional knowledge. The information was there — it just wasn't findable.

We designed and built a RAG-powered knowledge base on top of their existing document library. Consultants now ask questions in plain English and get cited, accurate answers drawn directly from internal sources. The system handles 300+ queries a day with a 94% accuracy rate on their internal benchmark suite.

The project covered everything from chunk strategy and embedding model selection through to the chat UI and role-based access controls. We deployed to their existing AWS infrastructure and handed over full documentation on day one.


Screens
Chat interface
Source citations panel
Document ingestion pipeline
Admin dashboard
Mobile interface
The challenge

Building a retrieval system accurate enough for professional consulting use — where a hallucinated answer has real business consequences.

Accuracy isn't optional when the stakes are real.

Most RAG demos look impressive until they hallucinate. For a consulting firm where wrong answers erode client trust, we needed a system that knew what it didn't know — and said so clearly rather than confabulating.

We tackled this through a combination of hybrid search (dense + sparse retrieval), a reranking layer, and a strict citation requirement baked into the system prompt. Every answer the system returns includes the specific document and page it drew from. If the retrieval confidence falls below threshold, the system declines to answer rather than guessing.
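The decline-below-threshold behavior described above can be sketched in a few lines. This is an illustrative outline, not Meridian's actual code: the `RetrievedChunk` type, the threshold value, and the response shape are all assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class RetrievedChunk:
    text: str
    source_doc: str
    page: int
    score: float  # reranker confidence in [0, 1]

# Illustrative threshold; a real system would tune this against benchmarks.
CONFIDENCE_THRESHOLD = 0.55

def answer_or_decline(chunks: list[RetrievedChunk]) -> dict:
    """Return cited answer context, or decline when retrieval confidence is low."""
    confident = [c for c in chunks if c.score >= CONFIDENCE_THRESHOLD]
    if not confident:
        # Below threshold: say so clearly rather than guess.
        return {"answer": None,
                "message": "I don't have a confident source for that."}
    # Every answer carries its citations: document and page.
    return {
        "context": "\n\n".join(c.text for c in confident),
        "citations": [{"doc": c.source_doc, "page": c.page} for c in confident],
    }
```

The same gate works whether the score comes from the reranker or from a calibrated retrieval similarity; the key design choice is that the decline path is explicit, not an LLM failure mode.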


How we worked

A six-week process built for speed and quality.

Week 01
Discovery & data audit
Document inventory, access pattern analysis, accuracy benchmarking strategy, and infrastructure scoping.
Week 01–02
RAG architecture
Chunking strategy, embedding model selection, vector store setup (pgvector), hybrid retrieval design, and reranker evaluation.
Week 02–05
Build & evaluate
Pipeline development, UI design and build, RBAC implementation, accuracy benchmarking, and iterative retrieval tuning.
Week 05–06
Deploy & handoff
AWS deployment, monitoring setup, user onboarding sessions, and full technical documentation handoff.
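The hybrid retrieval designed in weeks 01–02 combines dense (vector) and sparse (keyword) rankings. One common way to merge them is reciprocal rank fusion (RRF); the sketch below shows the idea with made-up document IDs, and is an assumption about the approach rather than the shipped implementation.

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: merge ranked lists of document IDs.

    Each document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so documents ranked well by both retrievers rise.
    """
    scores: dict[str, float] = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative rankings from a dense and a sparse retriever.
dense = ["doc_a", "doc_b", "doc_c"]
sparse = ["doc_b", "doc_d", "doc_a"]
fused = rrf_fuse([dense, sparse])  # doc_b wins: strong in both lists
```

The fused list would then feed the reranker before any context reaches the LLM; `k = 60` is the conventional RRF default, damping the influence of any single list's top rank.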
Dr. Patricia Okafor
Chief Knowledge Officer
Meridian Consulting
★★★★★
"We evaluated three vendors before choosing CenterPoint. They were the only team that talked seriously about accuracy from day one — not just demo performance. The system they shipped is used by our entire team every day, and the ROI was visible within the first week."
Next project
Project preview
Web Design & Development
Flowbase Rebrand
Full website redesign and brand refresh for a SaaS startup — Next.js, new CMS, 68% faster load times.
View project

Want results like these?

Book a free 30-minute call and let's talk about what you're building.

Book a free call →