Assumption Testing Archives - ProdSens.live (News for Project Managers - PMI)

Product in Practice: Testing Assumptions Was Tricky But the Convo Team Didn’t Give Up
Wed, 12 Jun 2024


Identifying and testing assumptions is a critical part of continuous discovery. But what happens when your assumption tests don’t go as planned? Whether you encounter technical difficulties, have a hard time finding customers to connect with, or run up against any other number of problems, it can be tempting to give up.

Whether you encounter technical difficulties, have a hard time finding customers to connect with, or run up against any other problems, it can be tempting to give up when assumption tests don’t go as planned. – Tweet This

Today’s Product in Practice is a lesson in perseverance. Amanda Wernicke, Craig Fogel, and Jason Ruzicka knew they needed to test assumptions with users, but found that a virtual assumption test over Zoom wasn’t working for them. It presented too many technical problems and never even let them get to the assumptions they were trying to test.

Instead of giving up at this stage, they started looking into other ways of connecting with their potential users.

Read on to learn about Amanda, Craig, and Jason’s experiments with in-person user testing as well as their key learnings and observations from this process.

Have your own Product in Practice story you’d like to share? You can submit yours here.

Meet the Continuous Discovery Champions: Craig, Amanda, and Jason


Meet the continuous discovery champions Craig, Amanda, and Jason.

Amanda Wernicke and Craig Fogel are product managers within the Product R&D department at Convo, a Deaf-owned sign language interpretation services and technology company. Jason Ruzicka is a product designer. “Convo centers the Deaf user’s needs and experience,” explains Amanda. “In a nutshell, Deaf people and hearing people use our tech to connect to Convo sign language interpreters so they can have conversations with each other.”

Amanda, Craig, and Jason partner with engineers, cross-functional stakeholders, and users to understand their users’ problems and solve them in a way that will grow revenue.

“In one product suite, we are working to grow revenue globally and create a new model for interpreting that centers the Deaf user,” says Craig. “For example, our QR codes allow businesses to meet their Deaf customers’ need for interpretation while empowering the Deaf user to use their own device. The Deaf person decides whether and when they want to pull up an interpreter into the interaction.”

Liane Taylor, previously a Product Ops leader and Director of Product at Convo, introduced Craig and Amanda to Teresa’s work. They enrolled in Continuous Interviewing in 2022, and since then, between the two of them, they’ve taken all the deep dive courses. They also participate actively in the CDH Slack community.

“In the last six months we’ve begun to hit our stride with a more steady pipeline of interviews and applying assumption identification and testing in more situations,” says Amanda. “We continue to benefit from Liane being our coach for continuous discovery and product management more broadly.”

The Challenge: Testing a New Feature That Allowed Deaf Users to Bring an Interpreter into a Zoom Meeting

“Our new solution involved an interaction between our app and Zoom. That means we had to test how the user would behave both with our app and Zoom,” says Craig. They had tested assumptions from later in the user journey (where the user was the interpreter), but now they needed to come back to an initial step and test assumptions where the Deaf person was the user.

“We had held off on testing the initial step because we didn’t have finalized designs and we knew recruiting Deaf users was more challenging than testing with interpreter users, who we can easily pull aside during their work shifts,” says Craig. “When we were able to recruit Deaf customers, we were asking them to demonstrate behavior that required them to interact with the prototype of our app and their real Zoom app or meeting invitation.”

While doing this virtually, Craig (the PM) and Jason (the UX designer) met with customers via Zoom and asked them to share their screen. Screen sharing alone proved challenging for some customers. Others seemed confused by the meta nature of the task: being asked to interact with the prototype and Zoom while already in a Zoom meeting.

“Initially we thought that maybe the icons on our buttons or the labels under them on our prototype were the problem,” says Craig. “Maybe the assumption that those were clear enough was false. After changing those and still seeing users struggle, we suspected that the test design (asking users to interact with Zoom while in a Zoom meeting) was causing confusion for testers. The way we were testing seemed to be getting in the way of testing the actual assumptions we were trying to test.”

We suspected that the test design was causing confusion for testers. The way we were testing seemed to be getting in the way of testing the actual assumptions we were trying to test. – Tweet This

But despite the difficulties, they didn’t want to give up. It was time to find another way to test their assumptions.

A Small Experiment with In-Person Tests

Jason, the team’s product designer, had already experimented with in-person assumption testing on a different but related assumption.

The challenge in this situation was that they needed to quickly recruit many Deaf users who had no experience with the app as it was currently designed, and the team didn’t have a ready pool of people to contact.

Jason was already planning to be at a Deaf event near his home, and he volunteered to test there. Fortunately, the event organizers agreed to let him do this.

The test assumed the user would be on their mobile phone, so Jason had Deaf users scan a QR code displayed on his own phone screen to pull up a prototype in the research platform Maze on their own devices.

The event had a lot of built-in downtime, and participants were already familiar with Jason, which made people more willing to donate their time without an incentive.

In person, Jason was able to coach participants through the less convenient aspects of doing prototypes in Maze (dismissing pop-ups with instructions he had already provided). He quickly moved through testing with 18 users and gathered useful data that supported their assumption.

“Conducting this test in person was great, not only because of the volume of users we were able to access quickly, but also because the task users had to perform was something they’d typically do on their mobile device out in the world, not sitting in front of a computer in a Zoom meeting,” says Craig.

Conducting a test in person was great, both because of the volume of users we were able to access quickly, and because the task users had to perform was something they’d typically do on their mobile device. – Tweet This

Amanda and Craig Decide to Experiment with Recruiting Users In Person

Because Jason’s in-person test had been successful, and testing virtually over Zoom was complicating the assumption test they were trying to run, Amanda and Craig decided to look into other ways to interact with potential users in person.

They considered upcoming events the company was attending and opportunities in their personal lives where they interact with Deaf people who are representative of their current and potential users.

Amanda started by recruiting a couple of fellow parents to participate in tests during the breaks at a school workshop. That was a successful trial, but didn’t give them the volume they needed, and they realized they needed more discipline in how they presented the test to the users.

As a Deaf-owned business, Convo has a long-standing relationship with Gallaudet University, an institution for higher learning that centers the advancement of Deaf and hard of hearing students in Washington, DC. Amanda lives close to the campus and suggested setting up a table so they could recruit Deaf students and staff as they hung out in or passed through a central area.

Amanda volunteered to spend an afternoon at Gallaudet University in hopes of recruiting Deaf participants for an in-person version of this study and Craig designed new steps for administering the test.

A Successful In-Person User Recruiting Session at Gallaudet University

Once they’d gotten approval from their contact at Gallaudet and scheduled a day for the session, Amanda went to the university on the designated day and set up a temporary booth near the area where students eat and purchase food.


Amanda set up a booth at Gallaudet University to recruit Deaf users in person.

Amanda would approach students and staff as they walked by and ask if they’d be willing to participate in a short research experiment.

“As the day went on, I realized I had to be more proactive in engaging potential testers,” says Amanda. “I couldn’t just wait for them to come to me or I wouldn’t reach my target of 20 testers. I started by making eye contact and greeting people. I got better at reading people’s body language about how much of a rush they were in and their willingness to engage. By the afternoon I was directly approaching students who were hanging out with their friends. Most people said yes when I directly asked.”

As the day went on, I realized I had to be more proactive in engaging potential testers. I couldn’t just wait for them to come to me or I wouldn’t reach my target number. – Tweet This

Amanda asked users how they joined the most recent Zoom meeting they attended. Based on their answer, she showed them a laptop screen with either an email or calendar invitation with Zoom meeting details and the app prototype. Amanda would then assign users the task of using the Convo app to get an interpreter for the Zoom meeting they’d been invited to.


A closer look at Amanda’s in-person recruiting setup at Gallaudet University.

“Testing in person removed the element of meta confusion influencing our test results. We were also able to A/B test two different sets of button icons and labels by alternating which version of the test we showed to the participants,” says Craig.

Plus, since Amanda was able to recruit people on the spot, this eliminated the problem of no-shows that often occurs with pre-scheduled Zoom calls.

Overall, this was a successful experiment. Amanda says, “I was able to recruit and conduct tests with 20 Deaf users over the course of 5 hours.”

On the logistical front, if she’d known the foot traffic patterns of the location in advance, she would have adjusted her recruiting window (she ran the test between 10am–3:15pm). Offering snacks in exchange for five minutes or less of testing was enough to entice students and staff to participate.

All in all, it took about two weeks of lead time from reaching out to their contact at the university to scheduling the event. Amanda said they spent about $375 on snacks, poster board, event insurance, and booth rental.

Craig adds that having mock email and calendar events ready, as well as written-out steps for test administration, helped the tests go smoothly. Because they used the research platform Maze to host the prototype test, the team could easily see a summary of users’ interactions instead of needing to depend on the notes Amanda took while observing. Amanda’s notes captured a lot of context that happened outside of Maze, so putting the two sources together gave them the full picture.

However, Amanda says she found it challenging to frame the task consistently when interactions with participants started in various ways. She also had to make a concerted effort not to provide too much support. “It was challenging not to provide too much help when the test participants asked questions like ‘What should I do?’”

Unfortunately, once they removed the meta nature of the testing over Zoom, it became clear that there was a problem with their solution design. Their major assumption, that 90% of Deaf users would select the correct button and provide their web conference link before connecting to an interpreter, did not pass their test criteria. “We had to go back to the drawing board for our solution design,” says Amanda.
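A pass/fail criterion like this can be checked with a simple binomial calculation. The sketch below is illustrative, not the team’s actual method: it assumes the 20 testers Amanda recruited and a roughly 70% observed success rate (implied by the 30% failure figure Craig mentions in the takeaways), then asks how surprising that result would be if the true rate really were 90%.

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): the chance of at most k successes."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

n_testers = 20      # testers Amanda recruited at Gallaudet
successes = 14      # hypothetical: ~70% completed the task correctly
target_rate = 0.90  # the assumption under test: 90% pick the correct button

passed = successes / n_testers >= target_rate
# If the true success rate really were 90%, how surprising is <= 14 successes?
p_value = binom_cdf(successes, n_testers, target_rate)

print(f"Observed rate: {successes / n_testers:.0%}, meets 90% bar: {passed}")
print(f"P(<= {successes} successes at a true 90% rate): {p_value:.4f}")
```

With these numbers the observed rate falls well short of the bar, and such a result would be very unlikely under a true 90% rate, which is why a team can confidently send the design back to the drawing board rather than blame noise.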

The good news was that recruiting in person proved to be successful enough that the Convo product team felt convinced it would be worth repeating if they had similar assumptions to test in the future.

Key Learnings and Takeaways

While it’s still early days for Convo’s product team, they’ve already made some impressive changes to their approach to assumption testing.

Reflecting on what they’ve learned so far, they shared a few key learnings and takeaways.

  • Test the critical assumptions that apply to steps earlier in the flow first. This avoids wasting effort testing assumptions that turn out to be irrelevant when your assumptions earlier in the flow are disproven.
  • Have a way to more efficiently recruit external end users so you can test assumptions with them more quickly and iterate faster. “Although in the cases we shared we turned to in-person testing, we are also looking into Orbital as a tool to help us next time we have this need,” says Amanda.
  • Identifying and testing your assumptions is critical. “If we had built the original solution and gone to production with it, 30% of our Deaf users would have been frustrated because they would get face to face with an interpreter and get told they need to go back to take additional steps before they could proceed with their conversation. We were able to scrap the first plan, come up with a different solution, test its underlying assumptions, and verify it before building, so we were able to avoid the wasted engineering time and a painful experience for our users,” Craig explains.
  • For in-person testing, consider organizations that you already have a relationship with or you could build a relationship with that have a large concentration of the demographic of users you want to test with.

We were able to scrap the first plan, come up with a different solution, test its underlying assumptions, and verify it before building, avoiding wasted engineering time and a painful experience for our users. – Tweet This

Looking for more help in identifying and testing your own team’s assumptions? Come join us in the next cohort of Identifying Hidden Assumptions and/or Assumption Testing.

Originally published on Product Talk (producttalk.org), June 12, 2024.
Evaluating Solutions: The 5 Types of Assumptions that Underlie Our Ideas
Fri, 24 Nov 2023

Assumption testing is at the heart of what good continuous discovery teams do week over week. It’s how we evaluate which ideas will work and which won’t.

But before we can test our assumptions, we have to identify them. That’s not always as easy as it sounds.

Assumptions are beliefs that need to be true in order for our ideas to succeed. The challenge is that we need to be able to see our assumptions before we can test them.

It can be hard to see our own assumptions. Oftentimes they are core beliefs that we rarely think to question. It’s a bit like asking a fish about water.

It can be hard to see our own assumptions. Oftentimes they are core beliefs that we rarely think to question. It’s a bit like asking a fish about water. – Tweet This


There are five types of assumptions that product teams tend to make: desirability, feasibility, usability, viability, and ethical.

To see our own assumptions more clearly, it helps to think about the types of assumptions that product teams tend to make. These fall into five categories:

  1. Desirability assumptions
  2. Viability assumptions
  3. Feasibility assumptions
  4. Usability assumptions
  5. Ethical assumptions

To illustrate these five types, let’s set up a context for our examples.


To further explore the idea of assumptions, let’s imagine a product team that works for a newspaper and has the desired outcome of increasing new readers.

Suppose I work for a local paper and my product team is responsible for bringing in new readers (our outcome). When interviewing readers, we uncovered a common desire: “I know someone who should read this article.”

We’ve chosen this as our target opportunity and have generated three potential solutions:

  1. Add social media share buttons that allow people to quickly share the title and URL of the article.
  2. Add the option to email the full text of the article to someone.
  3. Add the option to text someone the title and URL of the article.

Each of these solutions depends on a number of assumptions that need to be true in order for the solution to succeed.

We can start with a broad assumption: Our readers want to share articles with other people.

All three of our ideas depend upon this assumption. It presumes that our target opportunity (“I know someone who should read this article”) expresses a real desire.

But we can also enumerate many more specific assumptions:

  • Our readers want to share this article with people on Facebook.
  • Our readers will notice the option to share an article.
  • We can format the full article text appropriately to be shared via email.
  • The person receiving the shared article will be interested in the article.
  • If the person receiving the article doesn’t have a subscription, they’ll buy a subscription to get access to the article.
  • If the person receiving the article doesn’t have a subscription, they won’t be annoyed with the sharer since they can’t access the article.
  • Our readers won’t share articles with people who might be offended by the content.
  • Our readers will share articles via SMS with enough non-subscribers (who end up subscribing) to offset the cost of SMS.
  • Our emails will get through spam filters and into readers’ inboxes.
  • Our readers will know the email address of the person they want to share the article with.

We could enumerate dozens more. But this should give you an idea of what assumptions look like.

What Are Desirability Assumptions?


Desirability assumptions include why we think someone will want our solution as well as why we think our customers are willing to do what they need to do to get value from the solution.

Desirability assumptions are the assumptions we make about why we think someone will want our solutions.

It’s easy to fall into the trap of thinking there is only one desirability assumption that comes in the form of: Our customers want our solution.

Many teams try to test this assumption by simply asking customers, “Do you want this?” But long-time Product Talk readers know that this isn’t a reliable way to collect feedback. Humans (all of us) don’t know what we’ll buy/use/want in the future, even when we think we do. See this article for more on why.

Instead of asking questions that lead to unreliable answers, we can run a demand test to learn more about this assumption. For example, we might launch a landing page that explains the benefits of our new solution and ask people to sign up to be notified when it’s available. If people are willing to give us their email address, that’s a signal (albeit a weak one) that they might want our solution.
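One way to keep a demand test honest is to commit to a success threshold before the landing page goes live, then compare the observed signup conversion against it. A minimal sketch, with entirely invented numbers (the article gives none):

```python
# Hypothetical demand-test readout -- all numbers are invented for illustration.
visitors = 1200   # landing-page visitors (assumed)
signups = 90      # email addresses collected (assumed)
threshold = 0.05  # success bar committed to *before* looking at the data (assumed)

conversion = signups / visitors
print(f"Signup conversion: {conversion:.1%} (bar: {threshold:.0%})")
print("Weak signal of demand" if conversion >= threshold else "Demand not demonstrated")
```

The pre-committed bar is the important part: deciding after the fact what counts as "enough signups" lets us rationalize almost any result.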

The challenge with just testing demand is that it’s not the only type of desirability assumption that we want to test. It’s not enough for our customers to want our solution; they also have to be willing to do the things we need them to do to get value from the solution.

Desirability assumptions also include the assumptions we make about what our customers are willing to do to get value from our solutions. If they aren’t willing to do the things we need them to do, then our idea is dead in the water.

For example, I might want to share an article with a friend, but if I’m not willing to look up that friend’s email address (or, in a case with even more friction, text my friend to ask for their email address), then no matter how much I want to share the article with them, I’m not going to do it.

So desirability assumptions include assumptions about why we think our customers want our solution and our assumptions about what we think they will be willing to do.

Desirability assumptions include assumptions about why we think our customers want our solution and our assumptions about what we think they will be willing to do. – Tweet This

From our example, the following assumptions are desirability assumptions:

  • Our readers want to share this article with people on Facebook.
  • The person receiving the shared article will be interested in the article.

What Are Viability Assumptions?


Viability assumptions address why we think a particular solution will be good for our business.

Viability assumptions are the assumptions we make about why a particular solution will be good for our business.

Viability assumptions are the assumptions we make about why a particular solution will be good for our business. – Tweet This

If we are an outcome-driven team, this might be as simple as enumerating our assumptions around why we think our solutions will address our target opportunity in a way that drives our outcome.

But with viability assumptions, we also need to enumerate our assumptions around the economics of our solution. For example, a customer might want a solution (desirability) but not be willing to pay for it. That solution won’t be viable.

We also need to evaluate the costs of delivering our solution. For example, if we are planning to share articles via SMS, we’ll need to pay carrier fees to deliver those messages. We’ll need to enumerate our assumptions around why we think the benefit of acquiring new readers will offset the costs of sending the messages.
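The SMS offset condition above reduces to a one-line break-even check. Both inputs below are invented, since the article gives no figures; the point is the shape of the calculation:

```python
# Hypothetical inputs -- neither figure appears in the article.
sms_cost = 0.0075        # assumed carrier fee per text message, in USD
subscriber_value = 60.0  # assumed revenue from one converted subscriber, in USD

# Viable only if: messages * conversion_rate * subscriber_value > messages * sms_cost
# The message count cancels out, leaving a minimum conversion rate per shared text:
breakeven_conversion = sms_cost / subscriber_value

print(f"At least {breakeven_conversion:.4%} of shared texts must yield a subscriber")
```

Under these assumed numbers, roughly 1 shared text in 8,000 would need to produce a subscriber; the viability assumption is that the real conversion rate clears that bar.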

From our example, the following assumptions are viability assumptions:

  • If the person receiving the article doesn’t have a subscription, they’ll buy a subscription to get access to the article.
  • Our readers will share articles via SMS with enough non-subscribers (who end up subscribing) to offset the cost of SMS.

What Are Feasibility Assumptions?


Feasibility assumptions address why we think we can build our solutions, and can include several elements like legal, compliance, security, organizational, and technical concerns.

Feasibility assumptions are the assumptions we make about why we think we can build our solutions.

Feasibility assumptions are the assumptions we make about why we think we can build our solutions. – Tweet This

The most common feasibility assumptions are engineering assumptions. Do we have the necessary skills, knowledge, and ability to build these solutions? Does the necessary technology exist?

I don’t limit feasibility to what’s technically possible, though. I also like to examine compliance, legal, and security assumptions as part of feasibility. We can and should enumerate our assumptions around why legal or compliance will sign off on our solution, or why our solution won’t expose any security risks.

I might even push feasibility to include assumptions around why we think this solution is feasible in our organization. Every organization has solutions they would never consider. We’ve all heard, “That will never work here.” If an organization won’t sign off on it, then in my view, it’s not feasible.

Some people argue that feasibility should be limited to technical feasibility and that these other assumptions are really viability assumptions. It doesn’t really matter how we categorize the assumptions. It matters that we generate them and then evaluate the risk that they carry.

It doesn’t really matter how we categorize our assumptions. It matters that we generate them and then evaluate the risk that they carry. – Tweet This

So regardless of how you categorize them, be sure to generate technical, legal, compliance, security, and organizational feasibility assumptions.

The following assumptions from our example are feasibility assumptions:

  • We can format the full article text appropriately to be shared via email.
  • Our emails will get through spam filters and into readers’ inboxes.

What Are Usability Assumptions?


Usability assumptions allow us to consider how customers will find what they need, understand what we need them to do, and be able to do those things.

Usability assumptions are the assumptions we make about what our customer is able to do. As we design our solutions, we assume that our customers will be able to find what they need, that they’ll understand what we need them to do, and that they’ll be able to do those things.

We assume that our customers will be able to find what they need, understand what we need them to do, and be able to do those things. – Tweet This

Usability encompasses accessibility, but it’s not limited to accessibility.

In our example, our solutions won’t work if readers don’t notice the option to share an article. They also won’t work if readers don’t understand how the feature works. Every bit of friction in the process reduces the chance our customer will get value from the solution.

The following assumptions from our example are usability assumptions:

  • Our readers will notice the option to share an article.
  • Our readers will know the email address of the person they want to share the article with.

What Are Ethical Assumptions?

Ethical assumptions prompt us to consider whether there’s any potential harm in offering our proposed solutions: Is there any harm in building this solution? Who are we choosing to serve (and not serve)? What data are we collecting and how are we using it? Are we building trust and keeping everyone safe?

Ethical assumptions are the assumptions we make about why we think there’s no potential harm in offering our proposed solutions.

Ethical assumptions are the assumptions we make about why we think there’s no potential harm in offering our proposed solutions. – Tweet This

This is a big category and can include the assumptions we make about our ideal customer profile and our data collection and usage needs. Depending on the type of service that we offer, it might include assumptions we make about trust and safety related to user-generated content.

For example, when defining my ideal customer profile, I want to generate the assumptions I’m making about my target customer. What attributes am I ascribing to them? Who am I including and who am I leaving out (intentionally or unintentionally)?

If my solution requires that I collect personally identifiable data about customers, then I need to examine my assumptions around why I need that data, how I’ll use it, how I’ll store it, and who I’ll share it with. I’ll also need to examine my assumptions around how transparent I’ll be about those practices.

I also need to consider the social dynamics that might come into play. Sharing an article might seem innocuous, but what happens if a reader shares an article that the recipient is offended by? Are we responsible for the harm to that relationship? I can’t tell you the right answer, but we should consider these risks.

The following assumptions from our example are ethical assumptions:

  • If the person receiving the article doesn’t have a subscription, they won’t be annoyed with the sharer since they can’t access the article.
  • Our readers won’t share articles with people who might be offended by the content.

How Is This Different from Marty Cagan’s Four Risks?


People often ask me about the difference between the assumption types and Marty Cagan’s four risks.

Astute readers will notice that these assumption types overlap quite a bit with Marty Cagan’s four areas of risk: value, viability, feasibility, and usability.

Conceptually, we are talking about the exact same ideas. In practice, I have found that positioning these as assumption types makes it easier to know what to test.

For example, if as a product team, we talk about viability risk without getting specific, I’m not sure what I need to test or how to mitigate the viability risk.

If instead, we enumerate our viability assumptions, I can now assess the risk of each assumption and know exactly what I need to test.

Marty also talks about value risk, whereas I talk about desirability assumptions.

Marty reviewed an early copy of Continuous Discovery Habits and was critical of me using desirability instead of value. He argued that desirability conflates usability and customer value. He also suggested that B2B products don’t need to be desirable, but simply usable and necessary.

After considering Marty’s feedback, I decided to keep the language as is.

To me, desirability and usability are distinct. I can desire something that isn’t usable. Apple Home is a great example of this. I want to connect all of my smart devices in one app, but I run into so many usability issues with the app that it simply doesn’t work for me. This is a usability issue, not a desirability issue.

I agree that B2C apps have more desirability risk than B2B apps, but desirability is still important in the B2B space. Every B2B company that has made the sale but lost the renewal due to a lack of adoption understands that desirability is important.

Finally, I find that the term “desirability assumptions” is more helpful in the context of using story maps to generate assumptions. Explaining how that works is a whole other blog post that I’ll save for another day.

At the end of the day, I don’t think this language difference matters. Marty Cagan and I are mostly aligned on the concepts. I recommend that teams adopt the language that works for them.

If you want to learn more about how to generate assumptions across all five of these categories, check out our Identifying Hidden Assumptions course.

The post Evaluating Solutions: The 5 Types of Assumptions that Underlie Our Ideas appeared first on Product Talk.


Evaluating Solutions: The 5 Types of Assumptions that Underlie Our Ideas was first posted on October 11, 2023 at 6:00 am.
©2022 “Product Talk“. Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement. Please let us know at support@producttalk.org.

The post Evaluating Solutions: The 5 Types of Assumptions that Underlie Our Ideas appeared first on ProdSens.live.

Assumption Testing: Everything You Need to Know to Get Started https://prodsens.live/2023/11/24/assumption-testing-everything-you-need-to-know-to-get-started/?utm_source=rss&utm_medium=rss&utm_campaign=assumption-testing-everything-you-need-to-know-to-get-started https://prodsens.live/2023/11/24/assumption-testing-everything-you-need-to-know-to-get-started/#respond Fri, 24 Nov 2023 19:23:47 +0000 https://prodsens.live/2023/11/24/assumption-testing-everything-you-need-to-know-to-get-started/ assumption-testing:-everything-you-need-to-know-to-get-started


A regular cadence of assumption testing helps product teams quickly determine which ideas will work and which ones won’t. It’s one of the highest value activities we can do.

Sadly, most product teams don’t do any assumption testing at all. And those that do don’t do nearly enough. I want to change that.

In this article, I’ll cover assumption testing from beginning to end, including:

Why should product teams test their assumptions?

When we consider more than one option, we make better decisions.

We all intuitively know this. It’s why we look at more than one house or apartment when choosing our next place to live. It’s why we talk to more than one company when looking for a job. In our everyday lives, this is intuitive.

At work, it’s harder. When we hear about an unmet customer need, pain point, or desire, we often jump to our first solution. If we do any discovery at all, we tend to ask, “Could this idea work?”

An image of a dimly lit, dirty, cramped apartment.

We intuitively know not to only look at one apartment or house or interview at only one company, but in the workplace we’re often tempted to jump to our first solution. This image was generated by DALL-E.

Imagine if you shopped for a new apartment this way. As you walk into a dimly lit fourth-floor walk-up, you gloss over the cracks in the walls and the exposed wiring in the bathroom, and start to mentally rearrange the furniture—your bed will fit in the small master bedroom if you turn it diagonally, you can ditch your desk and use your small card table for both a dining room table and a desk since that’s all that will fit in the tiny kitchen/living room. You imagine how fit you’ll be from carrying your aging dog up the four flights of stairs multiple times a day.

This is absurd. Shouldn’t we look at more apartments? How do we know this is the best we can do? We haven’t compared and contrasted our options.

You would never shop for an apartment this way.

But this is exactly what we do when we try to make our first idea match the customer need we are trying to address, within the technical constraints we encounter, while still managing to deliver the right business results.

Why do we do this? The answer is simple. We don’t think we have time to consider more options. We worry it will slow us down. The engineers need something to work on next week.

But this is not true. We can consider multiple options and still move fast. The key to considering multiple options while still moving quickly is to stop testing whole ideas, and to start testing the underlying assumptions that your ideas depend upon.

The key to considering multiple options while still moving quickly is to stop testing whole ideas, and to start testing the underlying assumptions that your ideas depend upon. – Tweet This

Assumption testing is what allows us to quickly evaluate which ideas might work and throw out the ideas that won’t.

You can read more about the value of compare and contrast decisions in these articles:

What is an assumption?

An assumption is a belief that may or may not be true. For product teams, we are talking about the assumptions that need to be true for your idea to succeed. As a general rule, the more specific your assumption, the easier it will be to test.

To get a feel for what assumptions look like, let’s walk through an example. To set the context, I’ll start with an outcome and a target opportunity, using a mini opportunity solution tree. If you want to learn more about these concepts, start here.

A graphic of an opportunity solution tree. The outcome at the top is "Increase new readers." The opportunity "I know someone who should reach this article" and several solution ideas are below it.

The opportunity solution tree can help you identify assumptions that are connected to each solution you’re considering.

Suppose I work for a local paper and my product team is responsible for bringing in new readers (our outcome). When interviewing readers, we uncovered a common desire: “I know someone who should read this article.”

We’ve chosen this as our target opportunity and have generated three potential solutions:

  1. Add social media share buttons that allow people to quickly share the title and URL of the article.
  2. Add the option to email the full text of the article to someone.
  3. Add the option to text someone the title and URL of the article.

Why three solutions? Revisit the answer to “Why should product teams test their assumptions?”

Each of these solutions depends on a number of assumptions that need to be true for it to succeed.

We can start with a broad assumption: Our readers want to share articles with other people.

All three of our ideas depend upon this assumption. It assumes that our target opportunity, “I know someone who should read this article,” expresses a real desire.

But we can also enumerate many more specific assumptions:

  • Our readers want to share this article with people on Facebook.
  • Our readers will notice the option to share an article.
  • We can format the full article text appropriately to be shared via email.
  • The person receiving the shared article will be interested in the article.
  • If the person receiving the article doesn’t have a subscription, they’ll buy a subscription to get access to the article.
  • If the person receiving the article doesn’t have a subscription, they won’t be annoyed with the sharer since they can’t access the article.
  • Our readers won’t share articles with people who might be offended by the content.
  • Our readers will share articles via SMS with enough non-subscribers (who end up subscribing) to offset the cost of SMS.
  • Our emails will get through spam filters and into readers’ inboxes.
  • Our readers will know the email address of the person they want to share the article with.

We could enumerate dozens more. But this should give you an idea of what assumptions look like.

What are the types of assumptions that product teams tend to make?

An image labeled "Five Types of Assumptions." The assumptions listed are: desirability, feasibility, usability, viability, and ethical.

Assumptions fall into five categories: desirability, feasibility, usability, viability, and ethical.

Product teams tend to make assumptions in five different categories:

  1. Desirability: Why do we think our customers want this solution and why do we think they’ll be willing to do what we need them to do to get value from it?
  2. Viability: Why do we think this solution will be good for our business?
  3. Feasibility: Why do we think we can build this solution?
  4. Usability: Why do we think the customer will be able to use this solution?
  5. Ethical: Is there any potential harm in building this solution?

You can learn more about the five assumption types (and see examples of each) in this article:

Does it matter what category a particular assumption falls into?

The short answer is no. If you read the more detailed article about assumption types, you’ll see that what I might call a feasibility assumption, someone else might call a viability assumption. That’s okay.

The value of the categories is that they help us generate assumptions that represent risk in our ideas. So it’s not fruitful to spend time debating whether a specific assumption is a desirability assumption or a usability assumption. The point is to generate assumptions across the categories, increasing the likelihood that you uncover the riskiest ones.

How do you identify the assumptions that your ideas depend upon?

An image labeled "Story Map Your Ideas." There is a series of boxes with different reader steps on them next to the text "Reader" and a series of boxes with different assumptions next to the text "Assumptions."

Story mapping—walking through each step of a customer or user’s journey as shown here—is an effective way of generating different types of assumptions.

There are many ways to identify the assumptions that your ideas depend upon. Here are some of my favorites:

  • Use story mapping to help generate the assumptions that appear in each step of your story map. This is a great way to generate desirability and usability assumptions. But it can also surface feasibility, viability, and ethical assumptions as well.
  • Walk the lines of your opportunity solution tree. In other words, why do you think your solution will address the target opportunity in a way that will drive your outcome? Enumerate your thinking. Each inference is an assumption that you can test.
  • Define your ideal customer profile and generate the assumptions that it depends upon.
  • Do a data audit and examine the ethical assumptions related to your data policies.
  • Conduct a pre-mortem to catch any final blindspots.

We cover all of these methods in-depth in our Identifying Hidden Assumptions course.

Does it matter how we phrase the assumptions that we generate?

Yes. How we phrase our assumptions impacts how easy they are to test. Keep these two rules of thumb in mind when phrasing your assumptions:

  • Phrase your assumptions such that they need to be true for your idea to succeed. For example, if your app requires a login, then you might generate the following assumption: Users will remember their username and password. Don’t phrase it as: Users will forget their username and password. It’s easier to test if customers will do something than it is to test if they won’t do something.
  • Be specific. The more specific your assumption is, the smaller and faster the test will be. Tie each assumption to a specific step in your story map.

We cover how to phrase your assumptions in our Identifying Hidden Assumptions course.

What is an assumption test?

An image labeled "Four assumption test types." It lists prototype tests, 1-question surveys, data mining, and research spike with a brief description of each one.

While there are hundreds of assumption tests we could run, most of them tend to fall into four categories.

An assumption test is a structured activity that we do to evaluate the risk in an assumption.

For desirability and usability assumptions, we are typically designing activities that allow us to evaluate customer behavior.

For feasibility assumptions, we are typically conducting engineering activities that allow us to understand how difficult something might be to build.

For viability and ethical assumptions, the activity can vary based on the specifics of the assumption.

While we have hundreds of activities we could do to evaluate our assumptions, most assumption tests tend to fall into the following four categories:

  1. Prototype tests: activities designed to simulate a moment so that we can evaluate customer behavior.
  2. One-question surveys: a quick way to evaluate past customer behavior and/or observe current customer behavior.
  3. Data mining: the use of existing data to evaluate the inherent risk in an assumption.
  4. Research spike: an engineering activity that allows us to evaluate feasibility risk (often an engineering prototype).

We cover all four of these assumption test types in our Assumption Testing course.

Are assumption tests the same as experiments?

The short answer is no. When I wrote Continuous Discovery Habits, I decided not to use the term “experiments,” and instead, I replaced it with the term “assumption tests.” Here’s why.

First, we rarely have time to run real experiments in discovery. We can run large-scale randomized controlled experiments, also known as A/B tests, after we build something, but that’s only if we have enough traffic.

And this is not the best discovery activity. We don’t want to do all the work to build something before we learn we built the wrong thing.

Second, most teams fall into the trap of testing a whole idea. Whole idea testing takes a lot of work and a lot of time—two things we want to avoid in discovery.

Most teams fall into the trap of testing a whole idea. Whole idea testing takes a lot of work and a lot of time—two things we want to avoid in discovery. – Tweet This

Assumption testing makes it clear that we’re testing a single assumption and not the whole idea. These tests are faster and take less work.

Assumption tests are used in discovery when trying to decide between ideas. Experiments are used to measure the impact of what we built.

Do you need to test all of your assumptions?

No. As long as you are engaging with your customers on a regular basis, most of your assumptions will not carry much risk. They will be mostly safe.

We only need to test the riskiest assumptions that could torpedo our ideas or cause harm to our customers or company.

How do you identify the riskiest assumptions?

A two-by-two grid labeled "Assumption Mapping," with one axis for how important the assumption is to the idea’s success and one for how much evidence you already have.

Assumption mapping is an activity David Bland introduces in his book Testing Business Ideas.

In his book Testing Business Ideas, David Bland shares an exercise called assumption mapping. It’s a way to evaluate assumptions based on two factors:

  1. How important the assumption is to the success of the idea.
  2. How much evidence you already have for the assumption.

Our riskiest assumptions are the assumptions that are critical to the success of our idea where we have little evidence that suggests that they are safe.

Our goal with assumption testing is to collect more evidence.
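These two factors lend themselves to a simple prioritization sketch. The Python snippet below is illustrative only—the 1–5 scoring scale and the scores themselves are hypothetical, loosely attached to assumptions from the article-sharing example—but it captures the rule: high-importance, low-evidence assumptions surface first.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    importance: int  # 1 (nice to have) to 5 (critical to the idea's success)
    evidence: int    # 1 (little existing evidence) to 5 (strong existing evidence)

def riskiest_first(assumptions):
    """Sort assumptions so high-importance, low-evidence ones come first."""
    return sorted(assumptions, key=lambda a: (-a.importance, a.evidence))

# Hypothetical scores for a few assumptions from the article-sharing example.
backlog = [
    Assumption("Our readers want to share articles with other people", 5, 2),
    Assumption("Our readers will notice the option to share an article", 3, 4),
    Assumption("Our emails will get through spam filters", 4, 5),
]

for a in riskiest_first(backlog):
    print(f"importance={a.importance} evidence={a.evidence} {a.text}")
```

In practice, most teams do this visually on the two-by-two grid rather than in code, but the ordering logic is the same: importance up, evidence down.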

We cover assumption mapping in our Identifying Hidden Assumptions course.

When should you run assumption tests?

Assumption testing helps us compare and contrast multiple solutions against each other. So I like to run assumption tests after I’ve chosen a target opportunity and selected three potential ideas to explore.

I also like to use assumption testing after I’ve chosen a final solution candidate if I still have open questions about how the solution should work or if there’s more risk to mitigate.

Assumption testing is a discovery activity that should be used when trying to decide what to build. A team that does continuous discovery is continuously running assumption tests.

Assumption testing is a discovery activity that should be used when trying to decide what to build. A team that does continuous discovery is continuously running assumption tests. – Tweet This

How do you convince your organization to let you run assumption tests?

Most teams can start assumption testing without waiting for permission. For example, if you have access to data sources like user behavioral analytics, support tickets, sales notes, customer interview transcripts, or any other data sources that hint at customer behavior, you might be able to start doing some data mining tests right away.

If you work in an organization where you don’t have any access to customers, running prototype tests might be hard. I’d befriend someone on the account management team, the support team, or the sales team and see if you can run some quick prototype tests with the folks they are already talking to.

Your engineers can usually create engineering prototypes or run research spikes as part of your regular delivery cycle. This could be as simple as creating a user story and adding it to your next sprint. If your sprints are already jam-packed, start with something teeny tiny to start building this muscle.

Don’t overthink it. Assumption testing looks like a lot of the work that we already do. To take a step in the right direction, all you have to do is add a little bit of structure to your existing activities.

What tools should you use to test your assumptions?

An image labeled "Assumption Testing Tools" with a picture of a toolbox branching out into different types of tools like in-product survey tool and data synthesis tools.

There are a number of tools that make assumption testing easier and faster. Here are a few that I like to have in my toolbox.

There are a number of tools that make assumption testing easier and faster. I like to have the following types of tools in my toolbox:

  • An unmoderated testing platform: These tools allow us to upload a prototype, go home for the day, and come back to a set of videos of what customers did with the prototype. UserTesting innovated in this space. Maze is another popular platform.
  • In-product survey tool: Any tool that allows you to embed a short survey inside your product or service works. Qualaroo innovated in this space and Ethnio was a fast follower. But there are now dozens of tools that do this. I use Typeform for my one-question surveys.
  • User behavioral analytics tools: Anything that lets you see what your customers did while using your product or service works. There are dozens of tools in this space. Amplitude, Mixpanel, and Heap are a few of the popular ones.
  • Prototyping tools: It’s getting easier and easier to create quick mockups. Balsamiq really innovated on making prototype creation easy. Sketch, Figma, and a wide variety of no-code tools are also good options here.
  • Data synthesis tools: Product teams are inundated with data inputs from sales notes, call-center transcripts, customer support tickets, NPS surveys, marketing insights, and so much more. Any tool that helps you make sense of all of these sources is a great one to have in your toolbox. This is the area where I’m hoping to see big gains quickly due to the rise of generative AI.

We now have hundreds of tools that can help us run assumption tests easier and faster. Pick the tools that work for your team. If you need some inspiration, check out our Tools of the Trade category.

We now have hundreds of tools that can help us run assumption tests easier and faster. Pick the tools that work for your team. – Tweet This

How do you evaluate the results of your assumption tests?

Product teams often ask me, “How do I know if my results are good enough or not?” The reality is: You don’t.

There are two things you can do to make evaluating your assumption test results easier:

  1. Use your assumption tests to compare and contrast different solutions against each other. Instead of asking, “Is this result good enough?”, you can now ask, “Which solution looks best?”
  2. As a team, work to define success upfront, before you run your assumption test. Think through what types of results you would need to see for your solution to succeed and then draw a line in the sand.
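Both practices can be combined in a tiny sketch: agree on the line in the sand before the test runs, then compare solutions against it rather than judging any one result in isolation. The solution names echo the article-sharing example, but the 30% threshold and the observed rates are made-up numbers for illustration.

```python
# Step 1: define success upfront, before the test runs (the "line in the sand").
# The 30% threshold is a hypothetical number, chosen for illustration.
SUCCESS_THRESHOLD = 0.30  # share of test participants who actually share an article

# Step 2: run the same assumption test against each candidate solution
# and record the observed results (hypothetical rates shown here).
results = {
    "social media share buttons": 0.45,
    "email the full article text": 0.25,
    "text the title and URL": 0.35,
}

# Step 3: instead of asking "is this result good enough?",
# ask "which solution looks best?"
passing = {name: rate for name, rate in results.items() if rate >= SUCCESS_THRESHOLD}
best = max(passing, key=passing.get) if passing else None
print(f"passing: {sorted(passing)}  best so far: {best}")
```

The point of the sketch is the ordering of the steps: the threshold exists before any results do, so the team can’t move the goalposts after seeing the data.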

We spend an entire week of our Assumption Testing course on how to define success upfront and why it’s important to do so.

What do you do if an assumption test fails?

Get used to it. This is going to happen a lot. And it’s perfectly normal. Many (if not most) of our tests will fall short of our expectations.

The advantage of running assumption tests instead of whole idea tests is that when an assumption test fails, we now know exactly what to do.

When we test a whole idea and it fails, all we know is that it didn’t work, but we don’t always know why. When an assumption test fails, we know exactly what needs to change.

If the assumption is critical to the success of our idea, we might need to scrap the idea. But before you do that, be sure to read the answer to “How should you make decisions based on the results of your tests?”

If you can design around the assumption, do that. In other words, you learned something new in your assumption test. How can you use that information to improve your idea? Oftentimes, we can still make the idea work; we just need to iterate on it.

Most ideas will need iteration. It’s rare that a great idea just emerges from our head. Instead, we need to test and iterate to get to a good idea.

Most ideas will need iteration. It’s rare that a great idea just emerges from our head. Instead, we need to test and iterate to get to a good idea. – Tweet This

What do you do if an assumption test passes?

It depends. As we run assumption tests, we are collecting more evidence about the assumptions that our ideas depend upon.

If an assumption test passes and we think we’ve collected enough evidence for that particular assumption, we might move on to another assumption.

If we think we’ve collected enough evidence to choose a solution, we might do that.

It’s rare that we’ll make decisions on a single assumption test. We need to take into consideration what we are learning from the rest of our assumption tests.

How should you make decisions based on the results of your assumption tests?

I don’t recommend that you make decisions based on the results of a single assumption test. Remember, our goal with assumption testing is to test assumptions across different solutions, so that we can compare and contrast them against each other.

So we want to run multiple assumption tests and then make decisions based on what we are learning across our tests.

Sometimes all of our tests will fail. This tells us either that there’s an issue with our target opportunity or that we need a better set of solutions.

While it’s theoretically possible that all of our tests can pass, I’ll share that I’ve never seen this in practice. Usually some of our tests pass and some of our tests fail.

It’s easy to hyper-focus on choosing one of your solutions. But I recommend you take a step back and ask a different question, “What did we just learn about our customers from this round of assumption testing?”

When our assumption tests fail, it means that we learn something unexpected. We want to make sure that we take the time to absorb this new learning.

When our assumption tests fail, it means that we learn something unexpected. We want to make sure that we take the time to absorb this new learning. – Tweet This

You might be ready to choose one of your three solutions. Or you might have learned something that can help you generate even better solutions. Keep both in mind.

It usually takes successive rounds of assumption testing to develop a really good idea. Don’t rush the process.

How should you keep track of your assumption tests?

An image labeled "What to Track for Each Assumption Test." On a piece of notebook paper, a checklist contains different items like "the assumption" and "who you tested it with."

For every assumption test you run, I recommend keeping track of things like the assumption, who you tested it with, and a few other key points.

I like to keep things simple. I use a Trello card to track the tasks I have to do to get the assumption test live and I use Miro to keep track of all of the tests I’m currently running and their results. If I’m working on something evergreen, then I might create a dedicated Miro board just for tests on that challenge, so that I have everything in one place.

Every once in a while, I review all of the related tests on a topic to see if I can generate some broader insights.

I’m not suggesting this is the right solution for you. Some companies create research repositories and keep track of all of their assumption tests for all teams for all time. I can see how this could be helpful.

When tracking your assumption tests, I recommend making note of the following attributes:

  • The assumption: Make sure it’s specific and that you understand the context in which it was being tested.
  • The audience: Who did you test with?
  • The details of the assumption test: the prototype, the one-question survey, the engineering task, the data you looked at, etc.
  • The success criteria you set.
  • The results.
  • How you acted on the results.
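As one way to add that little bit of structure, the checklist above maps naturally onto a simple record. This Python sketch uses hypothetical field names and example values drawn loosely from the article-sharing scenario; the same shape works just as well as a Trello card template, a Miro frame, or a spreadsheet row.

```python
from dataclasses import dataclass

@dataclass
class AssumptionTestRecord:
    assumption: str        # specific, with the context it was tested in
    audience: str          # who you tested with
    details: str           # the prototype, one-question survey, engineering task, etc.
    success_criteria: str  # the line in the sand, defined upfront
    results: str = ""      # filled in once the test has run
    action_taken: str = "" # how you acted on the results

# Hypothetical example, loosely based on the article-sharing scenario.
record = AssumptionTestRecord(
    assumption="Our readers will notice the option to share an article",
    audience="12 current subscribers recruited via an in-product survey",
    details="Unmoderated prototype test of the article page with share buttons",
    success_criteria="At least 8 of 12 participants find the share option unprompted",
)
record.results = "9 of 12 found it unprompted"
record.action_taken = "Enough evidence collected; moved on to the next risky assumption"
```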

For an alternative point of view, check out David Bland’s experiment card in Testing Business Ideas.

What if you don’t have time to run any assumption tests?

An image with a quote that reads, “We often think we don’t have time for assumption tests when actually we do.”

We often think we don’t have time for assumption tests when actually we do.

This is a tough question because we often think we don’t have time when we actually do. We think we don’t have time because while we think assumption testing is important, it’s not urgent.

The challenge here is that if we knew how often our assumption tests failed (in other words, how often what we expect to happen doesn’t), assumption testing would start to feel urgent.

It’s a little bit of a catch-22.

So my recommendation is to start small. Do the smallest assumption test that you can complete in the next hour. I’m being serious.

Take whatever solution you are working on right now. Identify one thing that you think might have risk and figure out something you can do in the next hour to evaluate that risk. That might be: look at some user behavioral analytics, talk to a sales rep, review user search queries, or read through recent support tickets.

Don’t worry about whether it’s the best assumption test. Just look for something you can do right now.

Do it again tomorrow. And again the next day.

You’ll get better and faster at it. And suddenly, you’ll have plenty of time to run assumption tests.

Oftentimes, this question is more about our resistance to something new than it is about not having time. We all have 24 hours in a day. To overcome this obstacle, find the smallest, easiest thing you can do and get started right now.

How does assumption testing change based on the maturity of the product?

It doesn’t. Whether we are pre-product or working on a decades-old mature product, we still have new ideas that can be broken down into their underlying assumptions.

In fact, as products age, we can even take existing solutions and break them down into their underlying assumptions and see if any of those have changed.

Assumption testing is a way to test our beliefs. It’s a core element of critical thinking and as a result is broadly applicable.

One thing to keep in mind is if you are pre-product and you have no customers, you might have to get creative about who you test with. This article on how continuous discovery works in early-stage startups can help with that.

How can you learn more about assumption testing?

Images labeled "Identifying Hidden Assumptions" and "Assumption Testing" with short course descriptions below each.

We currently have two Product Talk Academy courses—Identifying Hidden Assumptions and Assumption Testing—that are designed to give you hands-on practice with many of the ideas discussed here.

Our Identifying Hidden Assumptions and Assumption Testing courses are designed to get you hands-on practice with many of the ideas in this article.

In Identifying Hidden Assumptions we cover how to:

  • Create story maps for each of your potential solutions.
  • Use those story maps to generate desirability, usability, and feasibility assumptions.
  • Generate viability assumptions by walking the lines of your opportunity solution tree and how to generate ethical assumptions by evaluating your ideal customer profile and by doing a data audit.
  • Ruthlessly prioritize what assumptions to test first.

In our Assumption Testing course, we cover how to:

  • Design assumption tests using the four test types: prototype tests, one-question surveys, data mining, and research spikes.
  • Define success upfront and why it’s critical to do so.
  • Improve the reliability and validity of your assumption tests so that you can trust the evidence you collect.
  • Make decisions based on sets of assumption tests so that you always know what to do next.

You should join us! We offer both courses several times a year.

The post Assumption Testing: Everything You Need to Know to Get Started appeared first on Product Talk.


Assumption Testing: Everything You Need to Know to Get Started was first posted on October 18, 2023 at 6:00 am.

The post Assumption Testing: Everything You Need to Know to Get Started appeared first on ProdSens.live.
