
Summary: MVPs are learning tools that test whether an idea is valuable to users.
Low-code platforms and AI-assisted design tools have made it faster than ever to build new products. But speed of creation can obscure the critical question in product development: Are we building the right solution for our users? Minimum viable products (MVPs) exist to answer that question before you commit to a full build.
A minimum viable product (MVP) is the simplest version of a product or feature that enables a team to assess whether users will derive meaningful value from it.
An MVP is essentially an experiment designed to gather feedback and determine whether an idea has the potential to succeed in the market before investing in a full-scale solution.
The term “MVP” was popularized by Eric Ries in The Lean Startup, building on lean-manufacturing principles that originated at Toyota. Early definitions of “viable” focused narrowly on whether something functioned: could it work at all?
Today, the usability of an MVP is just as important as its functionality. If an MVP is difficult to use, people may abandon it not because the idea lacks inherent value, but because the design obscures it.
Without adequate usability, an MVP test becomes a test of the interface, not of the idea. Designer involvement ensures that the MVP is clear and usable enough that the feedback actually reflects the value proposition.
MVPs enable product teams to answer two fundamental questions sequentially:
1. Do users see value in the offering?
2. Will this specific implementation deliver that value successfully?
These questions map directly onto two hypotheses that are evaluated through testing: the value-proposition hypothesis and the solution hypothesis. These hypotheses articulate your assumptions and include clear criteria for evaluating whether data supports, partially supports, or fails to support them.
### Do Users See Value in the Offering?
The answer to this question helps determine if something is worth building and testing in the market.
First, you need to articulate your value-proposition hypothesis: what you think is valuable and how you will determine value.
A good template for the value-proposition hypothesis is:
We believe that [value proposition] is valuable to [audience]. We will know this is true when we observe [behavioral signal] during testing.
Once you’ve established your hypothesis, you will need to test it with minimal effort and resources.
#### Example
To illustrate the use of an MVP, consider the following value-proposition hypothesis:
We believe that actionable, personalized weekly guidance on growing and saving their money is valuable for young professionals. We will know this hypothesis is true when we observe participants express willingness to act on a personalized recommendation during test sessions.
You could evaluate this idea by building a prototype that analyzes a sample bank statement and generates a personalized weekly recommendation, such as:
You spent $85 on subscriptions this month. Cancel one streaming subscription and invest the $25/month, which could grow to over $4,500 in 10 years.
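As a quick sanity check on that figure (the math here is ours, and the assumed 8% annual return is illustrative, not part of the original recommendation), the future value of a fixed monthly contribution can be computed in a few lines of Python:

```python
# Future value of a fixed monthly contribution with monthly compounding.
# The 8% annual return is an illustrative assumption, not financial advice.
def future_value(monthly: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12   # monthly rate
    n = years * 12         # number of monthly contributions
    return monthly * ((1 + r) ** n - 1) / r

print(round(future_value(25, 0.08, 10)))  # -> 4574
```

At roughly 8% compounded monthly, $25/month grows to about $4,574 over 10 years, consistent with the “over $4,500” claim.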
Then you run a user study to see how people react to the recommendation.
• If participants describe the advice as useful and say they would act on it, the value-proposition hypothesis is supported.
• If participants find the information interesting but express doubts, the value proposition is partially supported. Their reactions suggest it’s valuable, but there are barriers to investigate further.
• If participants show little interest in this kind of guidance, the value-proposition hypothesis is not supported.
### Will This Specific Implementation Deliver That Value Successfully?
This question tells us whether our particular solution is satisfactory. Again, to determine the best way to answer this question, it’s good to start with a clear solution hypothesis. A good template for your MVP product-solution hypothesis is:
For [audience] who [need], we believe that [product/feature] will deliver [value]. We will know this is true when [metric] reaches [target] within [timeframe].
#### Example
Consider the following solution hypothesis:
For young professionals who want to grow and save their money, we believe that a lightweight AI-based investing assistant that connects to users’ bank accounts and delivers personalized investment recommendations will convert them into confident first-time investors. We will know this is true when, within the first month of the pilot: at least 30% of users complete the setup process in their first session, at least 30% act on a recommendation in their first week, and at least 30% return to the product within 7 days of their first action.
This solution hypothesis statement effectively defines measurable goals for attracting, retaining, and expanding a profitable user base.
• The hypothesis would be supported if the test met all the success criteria in the solution hypothesis. For example, within the first month of the pilot, 35% of users completed setup in their first session, acted on a recommendation within the first week, and returned within 7 days.
• The hypothesis would be partially supported if some, but not all, of the success criteria were met. For example, users completed setup and acted on recommendations at target rates, but return rates fell below 30%, indicating that the product may not deliver ongoing value.
• The hypothesis would be unsupported if none of the success criteria were met.
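To keep this evaluation mechanical rather than debatable, the success criteria can be encoded directly. Here is a minimal sketch, assuming hypothetical metric names and the 30% thresholds from the hypothesis above:

```python
# Evaluate the pilot against the solution hypothesis's success criteria.
# Metric names are hypothetical; thresholds come from the hypothesis above.
THRESHOLDS = {
    "setup_completion_rate": 0.30,   # completed setup in first session
    "first_week_action_rate": 0.30,  # acted on a recommendation in week 1
    "seven_day_return_rate": 0.30,   # returned within 7 days of first action
}

def evaluate(metrics: dict) -> str:
    met = sum(metrics[name] >= target for name, target in THRESHOLDS.items())
    if met == len(THRESHOLDS):
        return "supported"
    return "partially supported" if met > 0 else "unsupported"

print(evaluate({
    "setup_completion_rate": 0.35,
    "first_week_action_rate": 0.35,
    "seven_day_return_rate": 0.22,
}))  # -> partially supported
```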
If your solution hypothesis is unsupported, don’t abandon the idea. The issue could be with your approach, not with the opportunity. Investigate why you missed the target: examine your design and technical approach, as well as external factors you may have missed, such as a misaligned marketing channel that put the pilot in front of the wrong audience, or unfavorable market conditions.
Creating an MVP is not always a necessary or valuable use of your team’s time. To decide whether to create an MVP, your team should weigh the risks and rewards.
### Risk
How costly would it be if your hypothesis turns out to be wrong? Being wrong could waste time on the wrong idea, delay the market launch of a better solution, or damage your credibility with leadership, among other things. The higher these potential costs, the more valuable an MVP becomes.
It’s tempting to assume that AI tools lessen risk by speeding up development. However, product risk is about the cost of being wrong, not the cost of development. Even when a code MVP takes days rather than weeks to build, pursuing the wrong idea costs time you could be devoting to better solutions.
Risk is often inversely related to existing evidence, so ask yourself: How much evidence do I already have to support my product hypothesis? Strong evidence reduces risk and may make an MVP less necessary.
### Reward
How much value could you gain if your product hypothesis is correct? For example, you might boost revenue, improve retention, and build trust with leadership. High-potential benefits make testing through an MVP more worthwhile.
### MVP Risk-Reward Matrix
To decide whether to test a hypothesis with an MVP and which format to choose, teams can plot their hypotheses on a risk-reward matrix (pictured below).
The MVP risk-reward matrix plots risk on the x-axis and reward on the y-axis, creating four hypothesis categories.
High-risk, high-reward ideas should initially be tested with prototype MVPs to reduce uncertainty with minimal time investment. Low-risk, high-reward ideas can progress to live-code MVPs, allowing real-world behavior to confirm their value. Low-risk, low-reward ideas can be postponed or examined using lightweight methods. High-risk, low-reward hypotheses should be deprioritized or reshaped.
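One lightweight way to operationalize the matrix is to score each hypothesis on both axes and map the scores to the quadrant guidance above. The sketch below assumes an illustrative 1–10 scale and a cutoff of 5; both are our choices, not part of the framework:

```python
# Map a hypothesis's risk and reward scores to the quadrant guidance above.
# The 1-10 scale and the cutoff of 5 are illustrative choices.
def recommend(risk: int, reward: int, cutoff: int = 5) -> str:
    if reward > cutoff:
        return "prototype MVP first" if risk > cutoff else "live-code MVP"
    return ("deprioritize or reshape" if risk > cutoff
            else "postpone or test with lightweight methods")

print(recommend(risk=8, reward=9))  # -> prototype MVP first
```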
After defining your product hypothesis, you’ll need to decide how to test it. MVPs are learning tools that can take many forms, from paper prototypes to functional features in production. The appropriate format depends on the type of feedback you need to test your product hypothesis.
### When to Use a Prototype MVP
Prototype MVPs are a great starting point for testing a value-proposition hypothesis, after a team has completed some initial discovery work to identify a problem and who it affects.
Once a value proposition has been created, a prototype MVP helps test the key assumptions behind it in a quick, low-risk manner. Research questions addressable with a prototype MVP include:
• Comprehension: Do people understand what the product can do for them?
• Usefulness: Do people find the product useful for addressing their needs?
• Expected outcome: Do people take the action(s) we expect (e.g., sign up, explore further)?
• Usability of the core flow: Do people complete the main task without confusion, even in a simplified format?
Here are three common formats for prototype MVPs:
#### Paper Prototypes
Simple sketches or mockups allow teams to explore high-level concepts quickly and cheaply before further developing visuals and interactions. They’re ideal when you’re considering multiple concepts or implementations and need to narrow down to one by testing comprehension and usefulness.
#### Clickable Digital Prototypes
Interactive mockups that simulate navigation and content are best used when you have a clear concept and need to test user understanding, value, and usability. Teams can leverage AI-powered design tools to turn static designs or detailed product descriptions into prototypes. However, designers must then refine the prototypes to ensure that they address user needs, contain real content, and fit technical requirements. Otherwise, MVP results may be misinterpreted.
#### Wizard of Oz
The product appears automated, but in reality a “wizard” performs tasks manually behind the scenes. This format lets teams observe reactions to technically complex or intelligent systems that haven’t been fully built.
### When to Use a Live-Code MVP
Once a team has validated the product’s value proposition, possibly through a prototype MVP, the next step is to test the solution hypothesis, which determines whether the product can attract, retain, and grow a profitable user base in the market.
Answering this question requires evaluating how people discover and interact with the offering. This process is best accomplished by deploying a live-code MVP: a minimal, fully coded experience released to a real audience. Unlike clickable prototypes, a live-code MVP lets you collect analytics on user behavior and system performance.
Key areas to measure include:
| Measurement area | What it measures | Example metrics |
| --- | --- | --- |
| Engagement | How actively users interact with the product | Percentage of users who complete onboarding; average number of features users interact with per session |
| Retention | Whether users return to the product | Percentage of users who return within 7 days of their first session; point in the user journey where most users drop off |
| Macro-conversions | Whether users take high-value actions that translate directly to revenue | Percentage of free-trial users who upgrade to a paid plan; percentage of users who complete the signup flow |
| Micro-conversions | Whether users take smaller steps toward macro-conversions | Percentage of users who click "Learn more" on the pricing page; percentage of users who add an item to a wishlist |
| System performance | Whether the product delivers the experience reliably at scale | Average page-load time under peak traffic; percentage of transactions that fail or time out |
| AI performance | Whether the AI produces reliable, accurate, and timely outputs | Percentage of AI-generated recommendations that are relevant or accurate |
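To make the retention row concrete, here is a minimal sketch of how a 7-day return rate might be computed from raw analytics events. The event schema and data are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical event log: (user_id, event_name, timestamp) tuples.
events = [
    ("u1", "session_start", datetime(2025, 3, 1)),
    ("u1", "session_start", datetime(2025, 3, 5)),
    ("u2", "session_start", datetime(2025, 3, 2)),
]

def seven_day_return_rate(events) -> float:
    """Share of users who start a second session within 7 days of their first."""
    sessions = {}
    for user, name, ts in events:
        if name == "session_start":
            sessions.setdefault(user, []).append(ts)
    returned = 0
    for times in sessions.values():
        times.sort()
        if len(times) > 1 and times[1] - times[0] <= timedelta(days=7):
            returned += 1
    return returned / len(sessions)

print(seven_day_return_rate(events))  # -> 0.5
```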
#### Dangers of Live-Code MVPs
Live-code MVPs can damage brand perception and customer trust if poorly executed. If an idea has the potential to harm your brand, consider implementing the following safeguards to minimize negative impacts:
• Limit exposure: Launch only to a specific segment of users, or use invite-only access to minimize risk.
• Show previews: Allow people to see what your product will do for them before requiring them to take actions, especially in scenarios involving money, personal data, or automation.
• Label it as a beta or pilot: Present the MVP as a beta, pilot, or test experience so people are more tolerant of initial issues.
• Monitor in real time: Use analytics alerts to quickly identify technical issues, bugs, or unexpected user behaviors.
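As an illustration of the first safeguard, limited exposure is often implemented as an invite list combined with a deterministic percentage rollout. The sketch below is generic, not tied to any particular feature-flag service, and the 5% rollout figure is an assumption:

```python
import hashlib

# Gate a live-code MVP to a small, stable slice of users.
# The 5% rollout and the invite list are illustrative.
def in_pilot(user_id: str, rollout_pct: float = 0.05, invited=frozenset()) -> bool:
    if user_id in invited:  # invite-only access always passes
        return True
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    return bucket < rollout_pct

# Hashing the user ID means the same user always gets the same answer.
print(in_pilot("user-123"))
```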
AI coding assistants and low-code platforms have reduced the time and costs associated with building code MVPs. However, even a lightweight code MVP requires ongoing support, bug fixes, and infrastructure that a prototype does not. Validate your value proposition before committing to code.
MVPs are most effective when driven by a close-knit, cross-functional team aligned around a clear learning goal. However, this is often not the reality. Regardless of team size, it’s possible to execute successful MVPs as long as the whole team agrees on what they’re testing and what they need to learn from it. To run a practical experiment, we recommend identifying the following roles:
• Learning lead(s) (often a product manager or a lead designer): Defines and prioritizes product hypotheses, sets learning goals, and considers how learnings will influence the roadmap.
• Designer(s): Ensures that the MVP clearly communicates its value proposition and is usable enough to produce meaningful feedback, even if not fully polished or functional; plans and conducts research sessions with prototype MVPs; and synthesizes research findings into actionable insights that the team can use to evaluate the hypothesis.
• Engineer(s): Ensures that the technology is accurately represented in prototype MVPs; develops functionality for live-code MVPs; and enables effective analytics.
High-performing teams commit only to ideas that demonstrate clear value, rather than relying on internal enthusiasm. They prioritize risk reduction by agreeing when to pivot according to findings and deliberately choosing where to allocate their time.
“MVP” often means different things to different people. Avoid confusion by clearly defining what you’re testing, its format, and what you expect to learn.
Stakeholders are more likely to support MVPs when they are framed as learning tools, not rushed launches. Here are a few ways to align your team and stakeholders around an MVP.
### 1. Specify the MVP Goal
Clearly state the product hypothesis you are testing and justify why it matters to the business. Are you investigating a conversion hurdle or a market-viability concern? Connect the MVP to a specific business decision to help stakeholders see its value.
For example, you could say, “Testing this hypothesis will determine whether we should invest in developing and scaling the product for this market segment. If supported, it could create a new revenue stream by converting an untapped market into customers.”
### 2. Frame the MVP as an Experiment
Avoid calling the MVP a “launch” or a “release” unless it truly is, like in the case of a live-code MVP. Instead, opt for terms like “pilot,” “experiment,” or “learning test” to emphasize the experimental nature of an MVP.
### 3. Summarize Your Plan
Stakeholders are more inclined to buy in when your plan is clearly articulated. A one-page summary that states the hypotheses, the MVP format, what you’ll measure, and the decision criteria for each outcome (supported, partially supported, unsupported) gives stakeholders a clear picture of the plan and builds confidence that the experiment is well structured.
An MVP is a structured experiment, not a stripped-down product launch. The distinction matters because it changes what you optimize for. Instead of polishing features, you’re sharpening questions. Teams that internalize this shift avoid the most common MVP pitfall: building something “minimal” and treating the results as proof that the idea works (or doesn’t), when all they’ve really tested is whether people could use a rough interface.
Remember to clearly define what you’re testing. Ensure your team is aligned on what “supported” and “unsupported” look like before you collect data. An MVP won’t eliminate uncertainty, but it will replace opinion-driven debates with evidence your whole team can evaluate.
### References
Lean Enterprise Institute. 2023. A Brief History of Lean. Retrieved October 20, 2025, from https://www.lean.org/explore-lean/a-brief-history-of-lean/