MVP vs proof of concept: don’t make this mistake

A POC proves technical feasibility; an MVP tests market value. Confuse them and you either ship code that teaches you nothing about the business, or chase the market on technology you have not yet proven.

The problem

Teams often conflate MVP and proof of concept (POC), or use one name for the other. A POC answers a feasibility or technical question; an MVP aims to learn about perceived value, real usage behavior, and willingness to pay in a realistic setting. When the two blur, you get impressive demos that validate nothing about the market, or "minimum" products too thin to survive honest use.

The problem shows up in roadmaps: teams "prove" a complex integration without showing anyone wants the end state, or they ship a bare UI without isolating the main risk hypothesis. Internal stakeholders applaud velocity, yet no durable purchase or adoption signal appears.

This confusion is expensive because it pushes risk downstream, to the point where expectations have hardened and pivoting becomes politically painful. In distributed teams the fog thickens: backlogs mix "POC" and "MVP" tickets without distinct definitions of done, which makes sprint reviews opaque to the business. Success metrics become interchangeable ("works in demo" versus "a customer renewed"), and nobody knows which decision should follow which ship.

Why it fails

Clarifying MVP versus POC is not pedantry; it is a product governance tool. A well-scoped POC reduces targeted technical uncertainty; a well-scoped MVP reduces targeted market uncertainty. If you label a lab experiment "MVP," you permit yourself to ignore distribution, onboarding, and sales objections. If you label a client launch "POC," you skip defining reproducible technical success criteria.

Founders need shared language to decide what to build, for whom, and how far to go before the next decision. In a Lumor-style review, that means naming the dominant risk, picking the artifact that confronts it fastest, and refusing vanity features that do not cut any critical uncertainty.

MVP/POC confusion also feeds silos: engineering celebrates a demo, sales waits for something sellable, and the pilot customer cannot tell whether they are testing tech or a product. Misalignment is paid for in lost cycles and eroded trust. Splitting the two also clarifies acceptable debt: a POC can be disposable by design, while an MVP must stay maintainable enough to learn without lying about perceived quality.

A concrete method

Operational frame

Name the dominant hypothesis — Is doubt about "can we build it?", "do they want the outcome?", or "will they pay under real constraints?" If doubt is technical, you are in POC territory. If it is economic or behavioral, lean MVP.

POC: narrow scope, binary criteria — Examples: acceptable latency, model accuracy, stable connector to the target ERP. Expected output: a go/no-go technical decision with reproducible metrics, not an NPS score.

MVP: user promise, realistic friction — Include the minimum activation path, even if back-office is manual. Goal is to observe drop-off, perceived value, and payment or serious commitment signals.

Do not stack both without sequencing — Run POC when technical risk blocks everything, then MVP for value. If you combine blindly, you lose clarity on what failed.

Explicit exit criteria — For POC: measurable thresholds and a named decision owner. For MVP: concrete user events (short-term retention, pilot conversion, letter of intent).

Honest external labeling — Call a pilot a "technical POC" when it is one; avoid selling a "finished product" that is still lab work.

Budget and time — Separate envelopes so a POC cannot silently become a product roadmap without market validation.

Cold review — After each deliverable ask: "Which uncertainty dropped, and by how much?" If the answer is vague, the ship was neither a useful POC nor MVP.

Common traps — "MVP" with ten features because the committee piled them on; "POC" without real data because the test environment is too clean; recorded demos that hide real latency. Name these risks explicitly in the brief.

Ownership — Assign a market owner for the MVP and a technical owner for the POC when both exist; otherwise responsibility dilutes into hybrid half-ships.

This frame helps reject comfortable work that does not cut the priority risk.
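As an illustration only, the frame above can be captured in a few lines of Python. The class and field names here are invented for this sketch, not part of any tool; treat it as a checklist encoded in code, under the assumption that each deliverable attacks exactly one hypothesis.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Risk(Enum):
    TECHNICAL = auto()   # "can we build it?"
    MARKET = auto()      # "do they want it?" / "will they pay?"

def artifact_for(risk: Risk) -> str:
    """Technical doubt points to a POC; economic or behavioral doubt to an MVP."""
    return "POC" if risk is Risk.TECHNICAL else "MVP"

@dataclass
class Deliverable:
    name: str
    kind: str                       # "POC" or "MVP"
    uncertainty: str                # the single hypothesis this ship attacks
    exit_criteria: list[str] = field(default_factory=list)
    owner: str = ""                 # named decision owner

    def cold_review(self) -> list[str]:
        """Flag ships that are neither a useful POC nor a useful MVP."""
        issues = []
        if not self.uncertainty.strip():
            issues.append("no targeted uncertainty named")
        if not self.exit_criteria:
            issues.append("no measurable exit criteria")
        if not self.owner:
            issues.append("no decision owner assigned")
        if self.kind == "POC" and any("NPS" in c for c in self.exit_criteria):
            issues.append("market metric (NPS) attached to a technical artifact")
        return issues
```

For example, `Deliverable("pdf-pipeline", artifact_for(Risk.TECHNICAL), "accuracy on messy PDFs", ["f1 >= 0.9 on pilot corpus"], "CTO").cold_review()` returns an empty list, while a ship with a blank `uncertainty` field gets flagged before the sprint review, not after.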

Example

A growth-stage company wants an AI layer on a legal workflow. The team announces an "MVP" in three months: semi-automatic clause extraction, a clean UI, partial integration. In truth, the risk is twofold: can the model hold up on messy PDFs (POC), and will partners commit to a hybrid flow with human review (market MVP)? By blending both, the release shows uneven accuracy on some formats; pilots say "interesting" but delay pricing commitments.

Had they sequenced, the POC would have locked a representative corpus and quality metrics before any commercial UI. The MVP would then test the upload-review-export path with indicative pricing and time commitments on the client side. The confusion bought them a narrative of delay: they shipped a product shell while technical trust was still unstable.

After reframing, a two-week POC isolates the document pipeline; the next, deliberately narrow MVP adds only what the time-pressed partner sees. Learning becomes attributable: tech, offer, or go-to-market, not the opaque mashup from before.

Another case: a fintech builds an "MVP" that is really an internal scoring POC; partner banks ask for audit logs and SLAs the MVP cannot honor. Renaming it a POC, plus a compliance-aware MVP roadmap, separates feasibility from regulatory packaging and avoids a premature promise.

What to do now

For your current initiative, write two lines: "Our #1 risk is …" and "The artifact testing it is a POC or an MVP because …." If the answer wavers, split the scope into two correctly named deliverables. Set measurable exit criteria and a decision date for each. Share the frame with design, engineering, and sales so vocabulary aligns.

Run a stress-test style review (Lumor or a blunt peer), especially on the MVP: is the promise defensible without magic demo assumptions? If you only have a POC, ban "launch" language until a value hypothesis is tested with real friction. This discipline prevents premature celebrations and steers budget toward what actually cuts critical uncertainty.

Add two columns to your internal board: "Type (POC/MVP)" and "Uncertainty targeted" for every epic. If a column is empty mid-cycle, stop the feature until it is clarified. Reassess after each sprint whether the next ship should switch from POC to MVP, or vice versa, based on what you learned.
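The two-column rule lends itself to a trivial backlog audit. This sketch assumes a plain list of dicts exported from your tracker; the field names `type` and `uncertainty` are invented for the example, so map them to whatever your board actually calls them:

```python
def audit_board(epics):
    """Return epic names that should pause until Type and Uncertainty are filled in."""
    blocked = []
    for epic in epics:
        if not epic.get("type") or not epic.get("uncertainty"):
            blocked.append(epic["name"])
    return blocked

# Hypothetical backlog export; names and fields are illustrative.
epics = [
    {"name": "Clause extraction pipeline", "type": "POC",
     "uncertainty": "accuracy on messy PDFs"},
    {"name": "Pricing page", "type": "", "uncertainty": ""},
]
print(audit_board(epics))  # -> ['Pricing page']
```

Running it at mid-cycle turns "stop the feature until clarified" from a slogan into a five-minute ritual.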

Related reading

Lumor puts your idea in front of 13 AI roles to stress-test assumptions, surface blind spots, and deliver a verdict, scores, and an execution plan.

Frequently asked questions

Can an internal prototype be an MVP?
Only if real external users run it on their own data and process; otherwise it is a POC or a demo.
Should there always be a POC before an MVP?
If the #1 risk is technical and blocking, yes—timebox it, then shift to market value.
What about a Minimum Lovable Product?
Same rule: name the behavior hypothesis you test, not the polish level.
How does this tie to stress-testing?
See the [stress-test guide](/en/blog/stress-test-guide-early-stage-founders) for hypothesis and kill criteria.