When Chris and I started building Servantium, we had a whiteboard session that lasted about six hours. We’d both spent years working with services firms. Chris on the operations side, me on the technology side. And we kept circling the same problem.
Services firms don’t lack software. They have too much of it: CRM, PSA, time tracking, project management, invoicing, resource planning. What they have is a memory problem. None of those systems remembers anything useful about how the firm actually operates.
By the end of that session, we’d written two words on the whiteboard and circled them: institutional memory. That’s what we set out to build. Not another tool. A memory.
What “Remembers” Means, Concretely
I want to strip the marketing language away and explain exactly what happens in our system. “The platform that remembers” is not poetry. It’s architecture.
Here’s what Servantium remembers:
Every estimate, in structured form
Not as a PDF attachment. As a data object with phases, roles, hours, rates, assumptions, and risk factors — all individually queryable. When you build a new estimate, the system can show you every similar estimate you’ve ever created, how they were structured, and how they turned out.
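To make that concrete, here's a rough sketch in Python of what an estimate looks like as a structured, queryable object rather than a PDF. All field names here are hypothetical, chosen for illustration; this is not Servantium's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    # One estimated phase of an engagement: role-level hours at a rate.
    name: str
    role: str
    hours: float
    rate: float

@dataclass
class Estimate:
    # An estimate as structured data: every field is individually queryable.
    service_type: str
    client_segment: str
    phases: list[Phase] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    risk_factors: list[str] = field(default_factory=list)

    def total_hours(self) -> float:
        return sum(p.hours for p in self.phases)

    def total_fees(self) -> float:
        return sum(p.hours * p.rate for p in self.phases)

est = Estimate(
    service_type="data_migration",
    client_segment="enterprise",
    phases=[
        Phase("discovery", "senior_architect", 40, 250),
        Phase("extraction", "engineer", 120, 180),
    ],
    assumptions=["source schemas documented"],
    risk_factors=["legacy system access"],
)
print(est.total_hours())  # 160
print(est.total_fees())   # 31600
```

Because the phases, rates, and risk factors are data rather than pixels in a PDF, "show me every similar estimate" becomes an ordinary query.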
Every engagement outcome
When an engagement closes, the system automatically compares the estimate to the actuals. Phase by phase. Role by role. It records where variance occurred and tags the contributing factors. Was it a scope change? A client delay? An underestimate of complexity? This happens without anyone filling out a form. The data is already in the system.
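The comparison itself is simple arithmetic once estimates and actuals live in the same structured form. Here's a minimal sketch of the phase-by-phase variance step, with invented numbers; the factor-tagging layer (scope change vs. client delay vs. complexity) isn't shown:

```python
def phase_variance(estimated: dict[str, float], actual: dict[str, float]) -> dict[str, float]:
    """Compare estimate to actuals phase by phase; positive = overrun, as a fraction of estimate."""
    return {
        phase: (actual.get(phase, 0.0) - est_hours) / est_hours
        for phase, est_hours in estimated.items()
    }

estimated = {"discovery": 40, "extraction": 100, "validation": 60}
actual    = {"discovery": 42, "extraction": 128, "validation": 55}

for phase, var in phase_variance(estimated, actual).items():
    print(f"{phase}: {var:+.0%}")
# discovery: +5%
# extraction: +28%
# validation: -8%
```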
The relationship between estimates and outcomes
This is the part nobody else does. The system doesn’t just store estimates and outcomes separately. It connects them. It knows that your estimates for data migration engagements with enterprise clients tend to overrun by 28% in the extraction phase. It knows that when you staff a senior architect on discovery, the overall engagement variance drops by half. It knows these things because it’s been watching.
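Mechanically, that kind of pattern is a query over linked estimate-outcome records. A toy sketch, with invented data and field names, of how a "typical extraction overrun for enterprise data migrations" figure could be derived:

```python
from statistics import mean

# Each record links an estimate to its outcome (illustrative data, not real figures).
engagements = [
    {"service": "data_migration", "segment": "enterprise",
     "phase_variance": {"extraction": 0.31, "validation": 0.02}},
    {"service": "data_migration", "segment": "enterprise",
     "phase_variance": {"extraction": 0.25, "validation": -0.04}},
    {"service": "erp_assessment", "segment": "midmarket",
     "phase_variance": {"interviews": 0.15}},
]

def typical_overrun(service: str, segment: str, phase: str) -> float:
    """Mean variance for one phase across matching estimate-outcome pairs."""
    matches = [
        e["phase_variance"][phase]
        for e in engagements
        if e["service"] == service and e["segment"] == segment
        and phase in e["phase_variance"]
    ]
    return mean(matches)

print(f"{typical_overrun('data_migration', 'enterprise', 'extraction'):.0%}")  # 28%
```

The point is that neither record alone knows this; the pattern only exists because estimates and outcomes are stored as connected objects.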
Service definitions as living objects
Your service catalog isn’t a static document. It’s a living data structure that evolves as you deliver. When you create a new service offering, it starts with your initial definition. As engagements are delivered, the system enriches that definition with actual delivery data. The effort ranges get tighter. The complexity drivers get more specific. The risk factors get validated or invalidated by real outcomes.
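One way to picture a "living" service definition: a catalog entry that starts with an expert's effort range and narrows it as actuals accumulate. This is a deliberately simplified sketch; a real system would weigh recency, outliers, and sample size rather than taking a raw min/max:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDefinition:
    # A catalog entry that updates its effort range from delivered actuals.
    name: str
    hours_low: float   # initial expert guess
    hours_high: float
    delivered_hours: list[float] = field(default_factory=list)

    def record_delivery(self, actual_hours: float) -> None:
        self.delivered_hours.append(actual_hours)
        # Once enough actuals exist, replace the guess with the observed range.
        if len(self.delivered_hours) >= 3:
            self.hours_low = min(self.delivered_hours)
            self.hours_high = max(self.delivered_hours)

svc = ServiceDefinition("ERP Assessment", hours_low=250, hours_high=500)
for actual in (340, 390, 360):
    svc.record_delivery(actual)
print(svc.hours_low, svc.hours_high)  # 340 390
```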
“A platform that remembers isn’t one that stores data. It’s one where the data from every engagement makes every future engagement more informed.”
Why We Built It This Way
The honest answer is that we both watched firms repeat the same mistakes for years and it drove us slightly crazy.
I spent a decade building enterprise software, including a stint building out a PSA platform. I saw the limitations firsthand. The PSA was great at tracking time. Terrible at connecting that time data to anything meaningful about the business. Utilization dashboards everywhere. Zero insight into whether the work being done was priced correctly, staffed well, or delivering value.
Chris spent years on the operations side — running engagements, building proposals, managing delivery. He’d seen what happened when a firm’s best estimator left. The new person would build proposals from scratch, making all the same mistakes the previous person had already learned to avoid. Years of accumulated wisdom, gone in a two-week notice period.
The fundamental design decision we made was this: the engagement is the atomic unit, not the project.
Projects have tasks and timelines. Engagements have relationships, scope negotiations, pricing decisions, staffing choices, delivery outcomes, and margin results. An engagement is a far richer object than a project. And when you model the engagement as your core entity, you can connect everything: the estimate that started it, the people who delivered it, the outcomes it produced, and the lessons it taught.
The Three Types of Memory
We think about institutional memory in three distinct layers, and the system architecture reflects this.
Transactional memory
What happened. The raw facts. This engagement was estimated at X hours, delivered in Y hours, billed at Z rate. The team was these people. The timeline was this long. Every platform stores this (or should). This is table stakes.
Analytical memory
What it means. The variance was concentrated here. The cause was this. The pattern matches these other engagements. This is where observation records live — structured artifacts that connect what happened to why it happened. Most platforms don’t do this at all.
Predictive memory
What we expect next time. Based on everything we’ve seen, here’s what a similar engagement is likely to look like. Here’s where the risk is. Here’s what we’re confident about and what we’re not. This is the confidence layer — the part of the system that uses accumulated memory to inform future decisions.
Each layer builds on the one below it. Without transactional memory, you can’t do analysis. Without analytical memory, you can’t predict. Most firms are stuck at layer one, with partial data that’s spread across disconnected tools.
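The layering can be shown in a few lines: raw facts feed a derived observation, which feeds an expectation. A toy sketch with invented numbers:

```python
from statistics import mean, stdev

# Layer 1, transactional memory: the raw facts (illustrative numbers).
facts = [
    {"estimated": 340, "actual": 390},
    {"estimated": 300, "actual": 345},
    {"estimated": 360, "actual": 410},
]

# Layer 2, analytical memory: what the facts mean, derived from layer 1.
variances = [(f["actual"] - f["estimated"]) / f["estimated"] for f in facts]
observation = {"mean_overrun": mean(variances), "spread": stdev(variances)}

# Layer 3, predictive memory: what to expect next time, built on layer 2.
def expected_actual(new_estimate: float) -> float:
    return new_estimate * (1 + observation["mean_overrun"])

print(round(expected_actual(320)))  # 367
```

Delete layer 1 and layer 2 has nothing to analyze; delete layer 2 and layer 3 has nothing to predict from, which is exactly the dependency the text describes.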
Unpopular Opinion
The “single source of truth” is the wrong goal. You need a single source of context.
Every enterprise software pitch includes the phrase “single source of truth.” We used to say it too, until I realized it misses the point. Truth is just facts. Your PSA is a source of truth about how many hours were logged. Your CRM is a source of truth about your pipeline. Your invoicing system is a source of truth about what was billed.
What none of them provides is context. The connection between the facts. Why those hours were logged. Whether the pipeline deals are similar to past deals that succeeded or failed. Whether the billed amount matched the estimate and, if not, why.
Context requires memory. It requires a system that can hold facts from different domains and connect them. That’s a fundamentally different design goal than “single source of truth,” and it leads to a fundamentally different architecture.
What This Looks Like Day to Day
Let me walk through a real scenario. A partner at a firm using Servantium gets a request for an ERP implementation assessment. Here’s what happens:
She opens a new estimate and selects “ERP Assessment” from the service catalog. The system immediately surfaces relevant context: the firm has done 8 similar assessments. Average duration was 6 weeks. Estimated hours averaged 340, actuals averaged 390 — a 15% overrun, mostly in the stakeholder interview phase. Two of the eight had significant scope creep related to data quality discovery.
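The context surfacing in this scenario boils down to aggregation over past engagements of the same service type. A minimal sketch with three invented records (the firm in the story had eight) and hypothetical field names:

```python
from statistics import mean

# Illustrative prior assessments of the same service type.
past = [
    {"estimated": 320, "actual": 380, "weeks": 6},
    {"estimated": 340, "actual": 390, "weeks": 6},
    {"estimated": 360, "actual": 400, "weeks": 6},
]

def surface_context(records: list[dict]) -> dict:
    """Summarize prior engagements of one service type for the estimator."""
    avg_est = mean(r["estimated"] for r in records)
    avg_act = mean(r["actual"] for r in records)
    return {
        "count": len(records),
        "avg_estimated": avg_est,
        "avg_actual": avg_act,
        "avg_weeks": mean(r["weeks"] for r in records),
        "overrun": avg_act / avg_est - 1,
    }

ctx = surface_context(past)
print(f"{ctx['count']} similar assessments, {ctx['overrun']:.0%} average overrun")
# 3 similar assessments, 15% average overrun
```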
She sees this context as she builds the estimate. She decides to add buffer to the interview phase and include an explicit data quality discovery line item. She builds the estimate in 40 minutes instead of the 3 hours it used to take.
When the engagement closes 7 weeks later, the system automatically captures the outcome. This one came in at 360 hours — under the adjusted estimate. The observation layer records: the interview buffer was partially used, the data quality discovery took longer than expected but was caught in scope (not as scope creep). These observations feed into the confidence model for the next ERP assessment.
The firm just got a little smarter. Automatically. No post-mortem meeting. No knowledge management initiative. Just the system doing what it’s designed to do: remember.
Where We’re Heading
I want to be transparent about the roadmap because I think it matters.
Right now, the memory system works within a single firm’s data. Your engagements, your outcomes, your patterns. That’s already powerful, but it’s limited by the size of your dataset. A 30-person firm might do 40 engagements a year. That’s useful data, but it takes time to build statistical confidence.
Where we’re heading — and this is a longer-term vision — is anonymized, aggregate learning across firms. Not sharing your data with competitors. But pooling anonymized patterns so that a 30-person firm can benefit from the aggregate experience of hundreds of firms. Industry-level confidence on service types, complexity drivers, and variance patterns.
Think of it like Waze for professional services. Your individual route matters, but the system gets smarter because it sees all the routes. Nobody sees your specific data, but everyone benefits from the aggregate patterns.
We’re not there yet. We’re being careful about this because data privacy and competitive sensitivity matter enormously. But the architecture is designed to support it when the time is right.
What We Got Wrong (So Far)
I’d be dishonest if I didn’t mention the things we’ve had to iterate on.
We initially over-engineered the observation layer. We tried to capture too many variables too early. The result was a system that generated observations nobody understood. We stripped it back. Now it focuses on the variables that actually predict outcomes: service type, client segment, team composition, and scope characteristics. Simpler. More useful.
We also underestimated how hard the cold start problem would be. I wrote about this recently. The system needs to be useful before it has memory, and that’s a different design challenge than making it useful after 100 engagements. We’ve gotten better at this, but it’s an ongoing tension.
And we spent too long building features nobody asked for before we realized the core value was just the memory itself. The CPQ is important. The resource planning is important. But the thing that makes firms stop and pay attention is when they see their own data reflected back at them in a way they’ve never seen before. That moment of recognition — “we do that a lot, and we always overrun there” — that’s the product.
Bottom Line
“The platform that remembers” isn’t a slogan. It’s a design philosophy. Every feature we build passes through one filter: does this make the organization’s memory richer? If it does, we build it. If it’s just another feature that generates data nobody connects to anything, we don’t.
Tomorrow, ask yourself this: if your best estimator left today, how much of what they know would survive in a system versus walk out the door? If the answer makes you uncomfortable, that’s the memory gap. And it grows every day you don’t address it.
Ready to encode your services?
See how Servantium brings CPQ discipline to professional services.