Deterministic Clinical State Compiler

Raw medical notes are
not automation-ready.

Verify Med Codes is a deterministic clinical state compiler. It converts finished clinical notes into structured, machine-consumable clinical intelligence — with relationships, evidence linking, and specificity resolution — before your coding, claims, or AI systems execute.

94.7% extraction accuracy
Zero PHI crosses the wire
46-family specificity resolution

Normal note in. Deterministic clinical state out.

We don't replace your workflow. We compile clinical truth from the note so your AI, coders, and claims systems receive structured state instead of raw narrative.

Why VMC

The clinical state compiler between finished notes and everything downstream.

Note in. Deterministic clinical state out. Relationships, evidence, gaps. PHI-free.

Evidence-linked relationships

Not just codes — a causal graph. "NSTEMI explains dyspnea." "Troponin supports NSTEMI." Your AI consumes structured relationships, not text.

94.7% deterministic accuracy

46 family specs. 417 code mappings. Three-layer assertion evidence. Tested on 30 blind notes across 10 specialties with adversarial evaluation.

Parallel deployment

Runs alongside existing workflows. Paste a note, get clinical state. No EHR integration required. No workflow disruption.

Architecture

What no AI-first coder can promise you.

We can.

REPRODUCIBLE

Same note in, same codes out. Every time.

No sampling variance. No temperature. No drift.

Tested and verified.

AUDITABLE

Every code traces back to a sentence in the note.

No black box. Full decision trace.

ZERO AI DECISIONS

AI proposes. The engine decides. The human confirms.

0% AI-decided.

90%+ deterministic engine · ~10% AI-assisted

“Run the same note 100 times. Get the same codes 100 times. No other AI coder can do that.”

Verified: 3 runs, identical output · Hash: 223b188479559da0
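The claim above hinges on one property: the engine is a pure function of the note, so its output can be fingerprinted and compared across runs. A minimal sketch of that check (toy code extraction and a SHA-256 fingerprint are our assumptions here, not VMC's actual engine or hash scheme):

```python
import hashlib
import json

def compile_note(note: str) -> dict:
    """Toy stand-in for a deterministic rules engine: the output is a
    pure function of the input -- no sampling, no temperature, no drift."""
    codes = []
    if "appendicitis" in note.lower():
        codes.append("K35.30")
    if "nstemi" in note.lower():
        codes.append("I21.4")
    return {"schema": "clinical-state/v1", "codes": sorted(codes)}

def state_hash(state: dict) -> str:
    """Canonical JSON (sorted keys, fixed separators) so identical
    clinical state always yields an identical fingerprint."""
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

note = "Assessment: NSTEMI. CT-confirmed acute appendicitis."
hashes = {state_hash(compile_note(note)) for _ in range(100)}
# 100 runs collapse to a single hash: reproducibility by construction
```

A sampling-based coder cannot pass this test without pinning seeds and model versions; a rules engine passes it by design.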
PHI-Safe Clinical Transformation

From raw note to deterministic clinical state.

This is the simplest way to understand the product: a normal finished note goes in, our extraction layer organizes it, and what comes out is evidence-linked clinical state that the rest of the workflow can safely use.

RCM systems assume the clinical input is already correct. It isn't.

We stabilize the clinical input before it reaches your claim engine.

Before

Raw note

What downstream systems receive today

HPI:
72-year-old male with CHF, diabetes, CKD...
- worsening shortness of breath
- orthopnea
- leg swelling
Labs:
A1C 8.9
BNP 640
ROS:
fatigue
dyspnea
edema
Assessment:
CHF exacerbation
diabetes
CKD stage 3
Plan:
increase furosemide
continue meds

What breaks today

Wrong specificity: K35.80 → K35.30 (resolved by family spec)
False assertion: PE present → denied (negation detection)
Chronic as primary: CKD over MI (causal priority fix)
MDM wrong: 99203 → 99285 (ED code set + admission)
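The K35.80 → K35.30 correction above is what an axis-driven family spec does: boolean axes extracted from evidence elsewhere in the note select the most specific code, and an unresolved axis falls back to the unspecified code instead of guessing. A hypothetical sketch for one family (the branch-to-code details are illustrative, not VMC's actual 46 family specs):

```python
from typing import Optional

def resolve_k35(perforation: Optional[bool], gangrene: Optional[bool]) -> str:
    """Illustrative axis-driven specificity resolution for the K35
    (acute appendicitis) family. None means the note never resolved
    the axis: surface it as a documentation gap rather than guessing."""
    if perforation is None or gangrene is None:
        return "K35.80"  # unspecified acute appendicitis
    if not perforation and not gangrene:
        return "K35.30"  # without perforation or gangrene
    if gangrene and not perforation:
        return "K35.31"  # illustrative: gangrene documented, no perforation
    return "K35.32"      # illustrative: perforation documented

# CT confirmed, both axes resolved negative -> specific code
resolve_k35(perforation=False, gangrene=False)  # "K35.30"
# Axes never documented -> stay unspecified, flag the gap
resolve_k35(perforation=None, gangrene=None)    # "K35.80"
```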

Problems

Diagnoses scattered across HPI, PMH, Assessment
Assertion ambiguity: is it present, historical, or denied?
Symptoms mixed with definitive diagnoses
No causal relationships between findings
Specificity buried in labs and imaging results
Requires clinical reasoning, not just text search

Middle layer

VMC extraction layer

What happens to the note before the claim

Step 1

PHI stays local

The note is structurally transformed before anything leaves your environment.

Step 2

We reconstruct the clinical truth across the entire note

Diagnoses, labs, symptoms, sections, and plan actions are reconciled into one evidence-linked clinical state.

Step 3

Only claim-safe, evidence-backed data reaches downstream systems

Coding, claims, and automation consume deterministic clinical state instead of fragmented narrative.

Why this matters

This is the layer that turns a note from something humans must interpret into something systems can reliably consume.

After

Deterministic clinical state

Claim-ready output for downstream use

Claim-ready diagnoses

K35.30 — Acute appendicitis without perforation or gangrene

CT confirmed · Assessment #1 · Specificity: perforation=false, gangrene=false

I21.4 — Non-ST elevation myocardial infarction (NSTEMI)

Troponin 0.42 · ST depressions · Assessment: NSTEMI documented

I50.23 — Acute on chronic systolic heart failure

BNP 640 · EF 35% · Diuretic escalation · Acuity: acute_on_chronic

Safety layer

Suppressed

R11.0 Nausea → explained by K35.30 (appendicitis)
R06.00 Dyspnea → explained by I21.4 (NSTEMI, not diabetes)
R19.8 Rovsing sign → physical exam finding supports diagnosis, not billable

Review, not billed

PE considered but unlikely → assertion: denied (negation detected)
Specificity gap: gangrene not confirmed → documentation opportunity
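The "assertion: denied" verdict above comes from sentence-scoped cue detection: negation cues are evaluated within the sentence where the diagnosis is mentioned, so a cue cannot bleed into another section. A toy single-layer version (cue lists are illustrative; per the product description, the real system layers speech acts, linguistic analysis, and LLM reasoning):

```python
NEGATION_CUES = ("unlikely", "ruled out", "denies", "no evidence of", "without")
HISTORY_CUES = ("history of", "h/o", "prior ", "resolved")

def assertion(sentence: str) -> str:
    """Classify a diagnosis mention as present / historical / denied,
    scoped to its own sentence so cues can't bleed across sections."""
    s = sentence.lower()
    if any(cue in s for cue in NEGATION_CUES):
        return "denied"
    if any(cue in s for cue in HISTORY_CUES):
        return "historical"
    return "present"

assertion("PE considered but unlikely")  # "denied" -> review lane, not billed
assertion("History of MI in 2019")       # "historical"
assertion("CHF exacerbation")            # "present"
```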

Evidence linking

Causal links: nstemi → explains → dyspnea
Evidence links: troponin 0.42 → supports → nstemi
Documentation links: assessment section → k35.30
Treatment links: continue lasix → hf is present

Treatment validation

HF → furosemide increased → plan_intensity: escalate
Diabetes → metformin continued → plan_intensity: maintain
Appendicitis → surgical consult → plan_intensity: new
NSTEMI → heparin drip → plan_intensity: escalate
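Taken together, the evidence links and plan-intensity checks above form a small graph that downstream systems can traverse. A sketch of how consuming code might apply the suppression rule (field names and shapes are our assumptions, not the actual clinical-state/v1 schema):

```python
# Illustrative clinical state: entities, relationship triples, treatment signals
clinical_state = {
    "schema": "clinical-state/v1",
    "entities": [
        {"id": "nstemi",  "code": "I21.4",  "assertion": "present"},
        {"id": "dyspnea", "code": "R06.00", "assertion": "present"},
        {"id": "troponin", "kind": "lab", "value": 0.42},
    ],
    "relationships": [
        ("nstemi", "explains", "dyspnea"),   # causal link
        ("troponin", "supports", "nstemi"),  # evidence link
    ],
    "treatment": {"nstemi": {"action": "heparin drip",
                             "plan_intensity": "escalate"}},
}

# Suppression: a symptom explained by a definitive diagnosis is not billed.
explained = {target for _, rel, target in clinical_state["relationships"]
             if rel == "explains"}
billable = [e["code"] for e in clinical_state["entities"]
            if e.get("code") and e["id"] not in explained]
# billable == ["I21.4"]: dyspnea (R06.00) is suppressed, not submitted
```

The point of the structure is that this loop is trivial to write; the same decision against raw narrative requires clinical reasoning.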

Before

Unstable clinical input → guesswork → denials

After

Deterministic clinical state → validated claims → higher FPAR

This is the same note — we just made it usable.

That is the product: PHI-safe extraction that turns normal finished notes into structured clinical truth before coding, claims, or automation run.

Interactive Discovery

What Upstream Clinical Intelligence Is Worth

We don't know what our system would do for your organization — but you do. Play with the sliders and see.

Your Organization
1.0M annual notes
$180 average revenue per claim
12.0% denial rate
$30 rework cost per denial
78% baseline FPAR
What VMC Does — Play With These
+1.00pp

Percentage point lift from cleaner first-pass claims

Range: +0.50pp to +3.00pp
1.0%

Specificity upgrades, HCC capture, OCR recovery

Range: 0.5% to 5.0%
8%

Fewer denials from cleaner first-pass claims

Range: 5% to 40%
8%

More claims per coder when search time drops 60%+

Range: 5% to 30%
0.5%

Better coded data → more downstream referrals, services, RAF

Range: 0.0% to 5.0%
$0.75

Volume pricing: higher volume → lower rate

Range: $0.50 to $1.50
First Pass Revenue
FPAR 78% → 79% — 10K claims paid on first pass
$1.8M
Revenue Uplift
1.0% increase from specificity, HCC capture, OCR recovery
$1.8M
Recovered Denial Revenue
10K denials prevented → revenue recovered
$1.7M
Rework Elimination
10K denials × $30 rework cost eliminated
$288K
Throughput Capacity
8% more claims per coder → additional revenue capacity
$14.4M
Downstream Services
0.5% uplift from better coded data driving referrals & RAF
$900K
VMC Annual Cost
1.0M notes × $0.75/note
$750K
Net Annual Impact
$20.2M

27× return on VMC investment

Enterprise Value Delta
$72.6M

30% EBITDA capture × 12× multiple

Conservative modeling. Not a guarantee — a framework for diligence.
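The calculator's headline numbers can be reproduced in a few lines from the default inputs shown above. This is a sketch of the displayed model for diligence purposes, not VMC's actual pricing engine:

```python
# Organization inputs (slider defaults shown above)
notes = 1_000_000            # annual note/claim volume
revenue_per_claim = 180.00
denial_rate = 0.12
rework_cost = 30.00

# VMC effect sliders (defaults shown above)
fpar_lift_pp = 0.01          # +1.00pp first-pass lift
uplift_pct = 0.01            # specificity / HCC / OCR recovery
denial_reduction = 0.08
throughput_gain = 0.08
downstream_pct = 0.005
price_per_note = 0.75

total_revenue = notes * revenue_per_claim                  # $180M base
first_pass  = notes * fpar_lift_pp * revenue_per_claim     # $1.8M
uplift      = total_revenue * uplift_pct                   # $1.8M
prevented   = notes * denial_rate * denial_reduction       # ~10K denials
recovered   = prevented * revenue_per_claim                # ~$1.7M
rework      = prevented * rework_cost                      # $288K
throughput  = total_revenue * throughput_gain              # $14.4M
downstream  = total_revenue * downstream_pct               # $900K
vmc_cost    = notes * price_per_note                       # $750K

net = (first_pass + uplift + recovered + rework
       + throughput + downstream - vmc_cost)               # ~$20.2M
roi = net / vmc_cost                                       # ~27x
ev_delta = net * 0.30 * 12   # 30% EBITDA capture x 12x multiple, ~$72.6M
```

Note that the throughput term dominates the total; stress-test that assumption first in any diligence exercise.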
Architecture

The missing layer in revenue architecture.

Automation assumes the clinical input is stable. It isn't.

We sit between finished notes and downstream claim logic.

Notes go in. Deterministic clinical state comes out. Existing RCM systems keep running — they just receive better clinical input.

Finished Clinical Notes
Narrative documentation, labs, symptoms, assessment, plan
What goes in
VMC Extraction Layer
PHI-safe reconstruction into deterministic clinical state
What happens
You are here
Existing RCM / Claim Systems
Coding, edits, denial management, claim automation
Where it plugs in
Payer Submission
Defensible claims built on stabilized clinical input
What improves downstream

Input

What goes in

Finished clinical notes and note sections — narrative, symptoms, labs, assessment, and plan.

Transformation

What happens

VMC reconstructs the clinical truth across the entire note and emits deterministic clinical state.

Output

What comes out

Claim-ready diagnoses, evidence links, suppressions, review-lane items, and treatment validation.

Integration

Where it plugs in

Directly before coding, claims, edits, denial management, and downstream RCM automation.

No replacement

What does not change

Your existing RCM platform, submission workflow, and payer connectivity stay in place.

We do not replace your claim engine. We stabilize the clinical input before coding, edits, claims, and payer workflows execute.

Proof Lab

Where unstable clinical states are contained.

Each artifact demonstrates a failure point that would have propagated downstream.

Denial Prevented — COPD Exacerbation

Assertion Conflict + Tier Guard
Without extractor
Assertion bleed across sections → J44.1 auto-submitted → denial issued → rework triggered.
With extractor
Sentence-scoped negation → tiered conflict → coder review → clean submission.

The note contains conflicting documentation: 'acute exacerbation' in HPI vs 'stable COPD' in assessment. The system flagged this conflict before submission, preventing a certain denial.

Conflict Inspector
Billed Code
J44.1 — COPD with acute exacerbation
Conflict Source
Assessment line 3: 'COPD, stable on current regimen'
Supporting Source
HPI line 7: 'presenting with acute exacerbation'
System Action
Flagged for coder review — not auto-submitted
Outcome
Coder resolved conflict → clean submission → denial prevented
Aggregate View

Parallel Batch Simulation — 500 Notes

Unstable clinical states prevented before submission. Deterministic rule execution at production scale.

37
Clean Claim Risk Flags Prevented
0
Critical False Positives
18
Conflicts Surfaced
not auto-submitted
142
HCC Categories Preserved
21
Needs-Decision Tiered
not auto-submitted
13 capabilities
Denial Reduction
0 critical false positives across 20 blind notes. Every code defensible.
~3 min per complex note
Time to Claim
From ~10 min to under 3. Same staff, 3× throughput. Search eliminated.
2.8 HCC categories / note
Revenue Recovery
Risk-adjustment revenue already documented — previously uncaptured.
95%
Primary DX Accuracy
19/20 blind notes
91%
Assertion Accuracy
negated / historical / hedged
50+
Code Families Resolved
automatic specificity
90%+
Deterministic Logic
same note, same result
Never
Auto-Upcoding
ambiguity surfaced, not resolved

These are not optimizations. This is upstream containment — not downstream rework.

Build vs. Buy

46 family specs. 417 code mappings. Three-layer assertion evidence. Tested on 30 blind notes across 10 specialties.

  • 574,000+ SNOMED concept index + 530+ clinical aliases
  • 46 ICD-10 family specs with 417 axis-driven code mappings
  • 22M CCI + 769K MUE + 297M NCCI edit lookups
  • Three-layer assertion evidence: speech acts → linguistic → LLM reasoning
  • Causal priority framework: acute > chronic for suppression
  • 94.7% accuracy on 30-note blind suite across 10 specialties
  • Browser-side PHI filter → tag stripping → zero PHI on the wire
  • 100% PHI filter match — zero extraction impact verified
  • Zero-trust egress architecture with fail-closed enforcement
  • Output: clinical-state/v1 with entities, relationships, gaps, severity
  • Adversarial testing: GPT-4o note generation + o3 evaluation loop
  • Continuous regression: 30-note automated suite on every deploy

This engine executes before claims logic, not after.

Versioned. Auditable. Production-tested. Not assembled via prompt orchestration.

PHI Containment

Because extraction is deterministic, PHI is contained at the architecture level — not by policy, but by structure.

Because PHI is contained, AI reasoning can safely operate on structured clinical state — without leaking protected data.

Because it is versioned and auditable, AI becomes an assistant, not a liability.

Choose Your Priority

Four lenses. One platform.

Integrates upstream of enterprise RCM platforms. No workflow disruption. No process change.