When Specs Lie
Or: The maths your PRD is failing
Your PRD passed review. Three people signed off. Development started.
Two weeks later, someone asks: “What does ‘associated with’ actually mean?”
Nobody knows.
The Tool
I built something recently. It takes a written specification — a PRD, a ticket, a requirements doc — and runs it through formal logical analysis.
Not AI-generated summaries. Not vibes. Actual logic: extracting premises, checking for contradictions, identifying hidden assumptions, flagging undefined terms.
The kind of analysis a philosophy student would do to an argument. Except applied to software specs.
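To give a flavour of the simplest check, here’s a toy sketch that flags quoted terms a spec uses without ever defining. The convention it assumes (definitions as “Term: …” lines, usages in double quotes) is invented for illustration; the real checks go well beyond this.

```python
# Toy sketch of the simplest check: terms used as if they were defined.
# Assumes an invented convention: definitions are "Term: ..." lines,
# and usages appear in double quotes.
import re

def undefined_terms(spec: str) -> set[str]:
    # Terms the spec defines: lines that open with "Term:"
    defined = {m.group(1).lower()
               for m in re.finditer(r"^(\w[\w' ]*):", spec, re.MULTILINE)}
    # Terms the spec uses in quotes, as if the reader already knows them
    used = {m.group(1).lower()
            for m in re.finditer(r'"([^"]+)"', spec)}
    return used - defined

spec = '''
Filter: a saved view over a user's cases.
Show the "CC'd Cases" filter to users "associated with" a vendor.
'''
print(undefined_terms(spec))  # {'associated with', "cc'd cases"} (order may vary)
```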
I expected it to be moderately useful.
The results were humbling.
What It Finds
I needed real specs to test it on. I used my own — a few tickets I’d written recently.
Missed inaccuracies. One ticket described a new filter called “CC’d Cases.” Except in the description I’d written “My Cases”, the name of a completely different filter. I’d talked it through with the team and nobody flagged it, probably because it made sense in the context of our discussion, so nobody re-read every line of a ticket that already made sense to them. And it did cause delays: our UAT tester wasn’t sure exactly what he was testing.
Incomplete specifications. Step 5 of a workflow says “Tooltip.” That’s it. I think I talked about it on a call? But I never documented it. Oops.
Ambiguous visibility rules. A matrix shows that certain users see “None” for certain data types — but that’s the hyperlink table. There’s a separate table for whether something can be linked at all. So does “None” mean no hyperlink but the data still shows? Or completely hidden? I thought I’d described it well. But reading it back... maybe not.
Missing definitions. Terms I assumed the devs would know — and they probably did. But if the ticket got shared more widely, or picked up by someone new, it could easily trigger confusion.
Unclear relationships. I used “associated with” a vendor to describe key logic. A developer could fairly interpret that in several different ways. Is it a database join? A business rule? A contractual status? I knew what I meant. I didn’t write what I meant. (Two good-faith readings are sketched below.)
That was from just 3 tickets.
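To make that last finding concrete, here’s how much room “associated with” a vendor leaves. Every name in this sketch is invented for illustration; both readings are consistent with the sentence in the ticket, and they disagree.

```python
# Two defensible readings of "a case associated with a vendor".
# All names here (Case, Contract, vendor_ids) are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Contract:
    vendor_id: int
    status: str

@dataclass
class Case:
    vendor_ids: set = field(default_factory=set)    # raw database links
    contracts: list = field(default_factory=list)   # contractual records

def associated_v1(case: Case, vendor_id: int) -> bool:
    # Reading 1: any recorded link between case and vendor counts.
    return vendor_id in case.vendor_ids

def associated_v2(case: Case, vendor_id: int) -> bool:
    # Reading 2: only an active contract counts.
    return any(c.vendor_id == vendor_id and c.status == "active"
               for c in case.contracts)

# Same spec sentence, different answers:
case = Case(vendor_ids={7}, contracts=[Contract(vendor_id=7, status="expired")])
print(associated_v1(case, 7))  # True
print(associated_v2(case, 7))  # False
```

Neither developer misread the spec. The spec underdetermined them both.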
Why This Happens
Specs are written in natural language. Natural language is ambiguous by design. It’s flexible, contextual, forgiving.
Code is not forgiving.
The spec is a translation layer — from intent to implementation. But we never check the translation. We check whether the spec sounds right. Whether it seems complete. Whether the stakeholders nod.
We don’t check whether it’s logically coherent.
The Maths
This isn’t fancy maths. It’s not machine learning. It’s not even statistics.
It’s the maths underneath meaning: formal logic. Predicate calculus, to be specific — a branch of both philosophy and mathematics that’s been around for over a century. The stuff that asks:
If P, then Q. Is P true? Then Q must follow.
You said X in paragraph 2 and not-X in paragraph 5. Which is it?
You’re using this term as if it’s defined. It isn’t.
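The first two checks are mechanical enough to automate today. Here’s a minimal sketch using sympy’s propositional logic, with the genuinely hard part, translating prose into propositions, assumed done by hand:

```python
# Minimal sketch of the first two checks, using sympy.
# The hard part, turning prose into propositions, is done by hand here.
from sympy import symbols, Implies, And, Not
from sympy.logic.inference import satisfiable

P, Q, X = symbols('P Q X')

# "If P, then Q. Is P true? Then Q must follow."
# Q follows iff (premises AND not-Q) is unsatisfiable.
premises = And(Implies(P, Q), P)
print(not satisfiable(And(premises, Not(Q))))  # True: Q must follow

# "You said X in paragraph 2 and not-X in paragraph 5."
print(satisfiable(And(X, Not(X))))  # False: the spec contradicts itself
```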
This is the foundation. The boring stuff. The stuff we skip.
And skipping it costs us weeks.
What This Means
I’ve been writing about working at the interface level. About specs as the layer that matters. About treating code as voltage underneath.
But here’s the thing about interfaces: they have to be good.
A spec that contradicts itself isn’t an interface. It’s a trap. It’s a set of instructions that can’t be followed because they point in two directions at once.
Formal analysis doesn’t make specs harder to write. It makes bad specs visible. It’s a linter for logic.
And right now, we’re shipping specs with errors we’d never tolerate in code.
The Ask
I’m not saying everyone needs to learn predicate calculus.
I’m saying: your specs have bugs. You just can’t see them yet.
Maybe we should start looking.
— Mcauldronism

