Product in Practice: Testing Assumptions Was Tricky But the Convo Team Didn’t Give Up

Identifying and testing assumptions is a critical part of continuous discovery. But what happens when your assumption tests don’t go as planned? Whether you encounter technical difficulties, have a hard time finding customers to connect with, or run up against any number of other problems, it can be tempting to give up.

Whether you encounter technical difficulties, have a hard time finding customers to connect with, or run up against any other problems, it can be tempting to give up when assumption tests don’t go as planned. – Tweet This

Today’s Product in Practice is a lesson in perseverance. Amanda Wernicke, Craig Fogel, and Jason Ruzicka knew they needed to test assumptions with users, but found that a virtual assumption test over Zoom wasn’t working for them. It presented too many technical problems and didn’t even let them get to the assumption they were attempting to test.

Instead of giving up at this stage, they started looking into other ways of connecting with their potential users.

Read on to learn about Amanda, Craig, and Jason’s experiments with in-person user testing as well as their key learnings and observations from this process.

Have your own Product in Practice story you’d like to share? You can submit yours here.

Meet the Continuous Discovery Champions: Craig, Amanda, and Jason

Meet the continuous discovery champions Craig, Amanda, and Jason.

Amanda Wernicke and Craig Fogel are product managers within the Product R&D department at Convo, a Deaf-owned sign language interpretation services and technology company. Jason Ruzicka is a product designer. “Convo centers the Deaf user’s needs and experience,” explains Amanda. “In a nutshell, Deaf people and hearing people use our tech to connect to Convo sign language interpreters so they can have conversations with each other.”

Amanda, Craig, and Jason partner with engineers, cross-functional stakeholders, and users to understand their users’ problems and solve them in a way that will grow revenue.

“In one product suite, we are working to grow revenue globally and create a new model for interpreting that centers the Deaf user,” says Craig. “For example, our QR codes allow businesses to meet their Deaf customers’ need for interpretation while empowering the Deaf user to use their own device. The Deaf person decides whether and when they want to pull an interpreter into the interaction.”

Craig and Amanda’s experience with continuous discovery began when Liane Taylor, previously a Product Ops leader and Director of Product at Convo, introduced them to Teresa’s work. They enrolled in Continuous Interviewing in 2022, and since then, between the two of them, they’ve taken all the deep dive courses. They also participate actively in the CDH Slack community.

“In the last six months we’ve begun to hit our stride with a more steady pipeline of interviews and applying assumption identification and testing in more situations,” says Amanda. “We continue to benefit from Liane being our coach for continuous discovery and product management more broadly.”

The Challenge: Testing a New Feature That Allowed Deaf Users to Bring an Interpreter into a Zoom Meeting

“Our new solution involved an interaction between our app and Zoom. That means we had to test how the user would behave both with our app and Zoom,” says Craig. They had tested assumptions from later in the user journey (where the user was the interpreter), but now they needed to come back to an initial step and test assumptions where the Deaf person was the user.

“We had held off on testing the initial step because we didn’t have finalized designs and we knew recruiting Deaf users was more challenging than testing with interpreter users, who we can easily pull aside during their work shifts,” says Craig. “When we were able to recruit Deaf customers, we were asking them to demonstrate behavior that required them to interact with the prototype of our app and their real Zoom app or meeting invitation.”

While doing this virtually, Craig (the PM) and Jason (the UX Designer) met with customers via Zoom and asked them to share their screen. Screen sharing alone proved challenging for some customers. Others seemed confused by the meta nature of the task: being asked to interact with the prototype and Zoom while already in a Zoom meeting.

“Initially we thought that maybe the icons on our buttons or the labels under them on our prototype were the problem,” says Craig. “Maybe the assumption that those were clear enough was false. After changing those and still seeing users struggle, we suspected that the test design (asking users to interact with Zoom while in a Zoom meeting) was causing confusion for testers. The way we were testing seemed to be getting in the way of testing the actual assumptions we were trying to test.”

We suspected that the test design was causing confusion for testers. The way we were testing seemed to be getting in the way of testing the actual assumptions we were trying to test. – Tweet This

But despite the difficulties, they didn’t want to give up. It was time to find another way to test their assumptions.

A Small Experiment with In-Person Tests

Jason, the team’s product designer, had already experimented with in-person testing of a different but related assumption.

The challenge in this situation was that they needed to quickly recruit many Deaf users who didn’t have experience with the app as it was currently designed, and the team didn’t have a ready pool of people to contact.

Jason was already planning to be at a Deaf event near his home, and he volunteered to test there. Fortunately, the event organizers agreed to let him do this.

The test assumed the user would be on their mobile phone, so Jason had Deaf users scan a QR code displayed on his own phone screen to pull up a prototype, hosted in the research platform Maze, on their own devices.
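For teams that want to replicate this setup, here’s a minimal sketch of how a QR code linking to a hosted prototype could be generated. It uses the Python qrcode package; the prototype URL is a hypothetical placeholder, not Convo’s actual Maze link.

```python
# Minimal sketch: generate a QR code that opens a hosted prototype on a
# participant's own phone. Requires the "qrcode" package
# (pip install "qrcode[pil]"). The URL is a hypothetical placeholder.
import qrcode

prototype_url = "https://example.com/your-maze-prototype"  # hypothetical
img = qrcode.make(prototype_url)
img.save("prototype_qr.png")  # show this image on your phone for participants to scan
```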

The event had a lot of built-in downtime, and participants were already familiar with Jason, which made people more willing to donate their time without an incentive.

In person, Jason was able to coach participants through the less convenient aspects of doing prototypes in Maze (dismissing pop-ups with instructions he had already provided). Jason was able to quickly move through testing with 18 users and gathered useful data that supported their assumption.

“Conducting this test in person was great, not only because of the volume of users we were able to access quickly, but also because the task users had to perform was something they’d typically do on their mobile device out in the world, not sitting in front of a computer in a Zoom meeting,” says Craig.

Conducting a test in person was great, both because of the volume of users we were able to access quickly, and because the task users had to perform was something they’d typically do on their mobile device. – Tweet This

Amanda and Craig Decide to Experiment with Recruiting Users In Person

Because Jason’s in-person test had been successful, and because testing virtually over Zoom was complicating the assumption test they were trying to run, Amanda and Craig decided to look into other ways to interact with potential users in person.

They considered upcoming events the company was attending and opportunities in their personal lives where they interact with Deaf people who are representative of their current and potential users.

Amanda started by recruiting a couple of fellow parents to participate in tests during the breaks at a school workshop. That trial was successful, but it didn’t give them the volume they needed, and they realized they needed more discipline in how they presented the test to users.

As a Deaf-owned business, Convo has a long-standing relationship with Gallaudet University, an institution of higher learning in Washington, DC, that centers the advancement of Deaf and hard of hearing students. Amanda lives close to the campus and suggested setting up a table so they could recruit Deaf students and staff as they hung out in or passed through a central area.

Amanda volunteered to spend an afternoon at Gallaudet in hopes of recruiting Deaf participants for an in-person version of the study, and Craig designed new steps for administering the test.

A Successful In-Person User Recruiting Session at Gallaudet University

Once they’d gotten approval from their contact at Gallaudet and scheduled the session, Amanda went to the university on the designated day and set up a temporary booth near the area where students eat and purchase food.

Amanda set up a booth at Gallaudet University to recruit Deaf users in person.

Amanda would approach students and staff as they walked by and ask if they’d be willing to participate in a short research experiment.

“As the day went on, I realized I had to be more proactive in engaging potential testers,” says Amanda. “I couldn’t just wait for them to come to me or I wouldn’t reach my target of 20 testers. I started by making eye contact and greeting people. I got better at reading people’s body language about how much of a rush they were in and their willingness to engage. By the afternoon I was directly approaching students who were hanging out with their friends. Most people said yes when I directly asked.”

As the day went on, I realized I had to be more proactive in engaging potential testers. I couldn’t just wait for them to come to me or I wouldn’t reach my target number. – Tweet This

Amanda asked users how they had joined the most recent Zoom meeting they attended. Based on their answer, she showed them a laptop displaying the app prototype alongside either an email or a calendar invitation containing the Zoom meeting details. She would then assign users the task of using the Convo app to get an interpreter for the Zoom meeting they’d been invited to.

A closer look at Amanda’s in-person recruiting setup at Gallaudet University.

“Testing in person removed the element of meta confusion influencing our test results. We were also able to A/B test two different sets of button icons and labels by alternating which version of the test we showed to the participants,” says Craig.

Plus, since Amanda was able to recruit people on the spot, this eliminated the problem of no-shows that often occurs with pre-scheduled Zoom calls.

Overall, this was a successful experiment. Amanda says, “I was able to recruit and conduct tests with 20 Deaf users over the course of 5 hours.”

On the logistical front, had she known the location’s foot traffic patterns in advance, she would have adjusted her recruiting window (10am–3:15pm). Offering snacks in exchange for five minutes or less of testing seemed to be enough to entice students and staff to participate.

All in all, it took about two weeks of lead time from reaching out to their contact at the university to scheduling the event. Amanda says they spent about $375 on snacks, poster board, event insurance, and booth rental.

Craig adds that having mock email and calendar events ready, as well as written-out steps for test administration, helped the tests go smoothly. Because they used Maze to host the prototype test, the team could easily see a summary of users’ interactions instead of depending solely on the notes Amanda took while observing. Amanda’s notes captured a lot of contextual information from outside of Maze, so putting the two sources together gave them the full picture.

However, Amanda says she found it challenging to frame the task consistently when interactions with participants started in various ways. She also had to make a concerted effort not to provide too much support. “It was challenging not to provide too much help when the test participants asked questions like ‘What should I do?’”

Unfortunately, once they removed the meta nature of testing over Zoom, it became clear that there was a problem with their solution design. Their major assumption, that 90% of Deaf users would select the correct button to initiate the connection and provide their web conference link before connecting to an interpreter, did not pass their test criteria. “We had to go back to the drawing board for our solution design,” says Amanda.
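As a rough illustration of how such a criterion plays out with a small sample, here’s a minimal sketch in Python. The 20 participants and 90% criterion come from the story above; the observed success count of 14 (70%) is a hypothetical figure consistent with the roughly 30% of users the team says would have been frustrated.

```python
# Minimal sketch: checking an assumption test result against its success criterion.
# The sample size (20) and 90% criterion come from the article; the success
# count (14) is hypothetical, consistent with ~30% of users struggling.
from math import sqrt

def evaluate(successes: int, n: int, criterion: float) -> str:
    rate = successes / n
    # Rough 95% confidence interval (normal approximation) to show how much
    # uncertainty a 20-person sample carries.
    margin = 1.96 * sqrt(rate * (1 - rate) / n)
    verdict = "passes" if rate >= criterion else "fails"
    return (f"{successes}/{n} succeeded ({rate:.0%} ± {margin:.0%}); "
            f"{verdict} the {criterion:.0%} criterion")

print(evaluate(successes=14, n=20, criterion=0.90))
# -> 14/20 succeeded (70% ± 20%); fails the 90% criterion
```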

The good news was that recruiting in person proved to be successful enough that the Convo product team felt convinced it would be worth repeating if they had similar assumptions to test in the future.

Key Learnings and Takeaways

While it’s still early days for Convo’s product team, they’ve already made some impressive changes to their approach to assumption testing.

Reflecting on what they’ve learned so far, they shared a few key learnings and takeaways.

  • Test the critical assumptions that apply to earlier steps in the flow first. This avoids wasting effort on assumptions that turn out to be irrelevant once an earlier assumption is disproven.
  • Have a way to more efficiently recruit external end users so you can test assumptions with them more quickly and iterate faster. “Although in the cases we shared we turned to in-person testing, we are also looking into Orbital as a tool to help us next time we have this need,” says Amanda.
  • Identifying and testing your assumptions is critical. “If we had built the original solution and gone to production with it, 30% of our Deaf users would have been frustrated because they would get face to face with an interpreter and get told they need to go back to take additional steps before they could proceed with their conversation. We were able to scrap the first plan, come up with a different solution, test its underlying assumptions, and verify it before building, so we were able to avoid the wasted engineering time and a painful experience for our users,” Craig explains.
  • For in-person testing, consider organizations that you already have a relationship with (or could build one with) and that have a large concentration of the user demographic you want to test with.

We were able to scrap the first plan, come up with a different solution, test its underlying assumptions, and verify it before building, avoiding wasted engineering time and a painful experience for our users. – Tweet This

Looking for more help in identifying and testing your own team’s assumptions? Come join us in the next cohort of Identifying Hidden Assumptions and/or Assumption Testing.
