Years ago, as a financial controller, I made the mistake every finance leader makes once: I trusted the evaluation process.
Months of feature comparisons. Vendor demos. Reference calls. Consultant credentials checked and double-checked. Then on day one of implementation, my carefully selected expert sat across from me and asked how I wanted the system configured.
He could configure anything. He just believed it was my job to know exactly how the system should look.
Silly me — I'd expected an experienced team of professionals who'd seen it all to walk me through the options, explain the pros and cons, lay out the decision drivers so I could make a confident, well-informed call. Apparently, they didn't see that as their job.
The replacement came armed with "best practice." Better, but only marginally — he seemed to push the same solution across all his clients with minimal customisation. My job became explaining why each piece of "best practice" wouldn't work for my process, my team, or my system landscape. Half the contracted features were incompatible with how we actually operated. Some functions didn't work as documented. And the report we'd built the whole project around — a straightforward, standard finance output we'd assumed any system could generate — couldn't be produced at all.
Can you imagine a matching system that cannot report which transactions were unmatched as of a given date?
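To show just how modest a request that is, here's a minimal sketch of the logic in Python. The table, the column names (`txn_date`, `matched_date`) and the `unmatched_as_of` helper are all hypothetical, not any vendor's actual schema; the point is that the entire report reduces to two date comparisons.

```python
# Hypothetical transactions table: matched_date is NaT (missing)
# for items that were never matched. Illustrative data only.
import pandas as pd

txns = pd.DataFrame({
    "txn_id":       [1, 2, 3, 4],
    "txn_date":     pd.to_datetime(["2024-01-05", "2024-01-10",
                                    "2024-01-12", "2024-02-01"]),
    "matched_date": pd.to_datetime(["2024-01-20", None,
                                    "2024-02-15", None]),
})

def unmatched_as_of(df: pd.DataFrame, as_of: str) -> pd.DataFrame:
    """Transactions that existed on `as_of` and were not yet matched then."""
    cutoff = pd.Timestamp(as_of)
    existed = df["txn_date"] <= cutoff
    # Unmatched then = never matched, or matched only after the cutoff.
    not_yet_matched = df["matched_date"].isna() | (df["matched_date"] > cutoff)
    return df[existed & not_yet_matched]

print(unmatched_as_of(txns, "2024-01-31"))
# Returns txn_ids 2 and 3: item 2 was never matched,
# item 3 was matched only in February.
```

If a platform's reporting layer can't express those two comparisons, no amount of sorting, filtering, or exporting to Excel will rescue it.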
We built a workaround, and that job, naturally, landed in my lap too.
I taught myself the system out of necessity. The consultants coded — to the specs I developed. I delivered the project, but I never forgot what that experience cost me, or what it revealed about how broken the entire selection process is.
Every "BlackLine vs TrinTech vs OneStream" comparison ranks the same things: features, integrations, ratings, pricing. None of them ask the question that actually determines whether your implementation will succeed:
"Will the output your finance team needs actually come out the other end?"
A ridiculous question? Not if the consultant you're considering can't produce it from the demo system.
I dare you to ask for the transactions unmatched as of a date, and you'll hear every excuse in the book:
"Our demo data isn't report-ready." "We only demo the features." "We'll get back to you on that one." "But look! We have a reporting suite! It can sort, it can filter, it can export to Excel. Surely it'll produce whatever report you need!"
Excuses and promises, but never the report you asked for. At best: "Sure, of course it's doable, we'll add it to the SOW!" And now they've got you.
After implementing all three platforms in high-transaction environments, I can tell you: all three can be configured in ways that fail to produce the required output. The matching engines will match, even match well, if you've found a decent consultant. The reporting capabilities differ in subtle ways that only become visible when you start with the report and work backwards. Almost nobody does this.
The other thing nobody tells you: the consultant's track record means less than you think. Every consulting firm has happy customers. Every consultant has years of successful implementations on their LinkedIn profile. None of that tells you whether this specific person can produce your specific output for your specific process.
The only thing that tells you is making them prove it.
That's why I now insist on a Proof of Concept stage on every engagement. Not a vendor demo. A working POC: real data, real process, real output. Imperfect, unpolished, still a work in progress, but functional, with a clear path from there to the final solution.
If they can't produce it at POC, they can't produce it at go-live either. Find out before the six-figure contract is signed.
That's the framework I want to share with you in my webinars, and you won't leave empty-handed.
You'll walk away with a ready-to-use POC brief template: customise it with your own data and hand it to any consultant. How they respond will tell you more in two weeks than two months of reference calls.