My scrappy method for proving user tests are not expensive

Product design is a process of experimentation and decision-making. But how often have you heard the following exchange?

Designer: What if we ask our customers?

Stakeholder: We don't have enough time.

Even in 2022, this is alarmingly common among the mid-market companies I've worked with. But it's not as expensive as it appears. And today I'm going to give you the workaround I've used to demonstrate to my teams that we can gather targeted feedback quickly.

"The Self-Guided Prototype"

This is not a new or novel concept. But when I first came up with it in 2016, tools like Maze hadn't reached the market yet. And even now, teams are still hesitant to add new tooling to their stacks.

The Self-Guided Prototype is a testing method that:

  1. Can be conducted with your existing prototyping stack
  2. Validates a small number of targeted questions
  3. Can be performed by a user on their own time without supervision
  4. Lives "on rails" (only moves from one step to the next) in a clickable prototype via tools like Figma or InvisionApp

It's best used for things like:

  1. Collecting 10-50 user responses in 7-14 days
  2. Uncovering if views are missing critical information
  3. Testing navigation logic
  4. Validating holistic workflow performance

7 Steps to a Quick Experiment

  1. Put together a working group
  2. Identify the most pressing questions
  3. Create the prototype and onboarding slides
  4. Create the 5-7 question companion survey
  5. Allow for 7-14 days to collect responses
  6. Synthesize major takeaways for the team
  7. BONUS: Connect survey results to a dedicated Slack channel

1. Put together a working group

A "working group" is a pre-vetted collection of users. Typically your product, sales, or support teams can be tasked with identifying 10-50 candidates whose feedback would be valuable. The invitation can generally be messaged along these lines:

"Hello! We're excited to invite you to take a first look at some of the new features we're cooking up. As part of our tester's program, you'll be able to provide feedback that will directly impact major feature releases. The commitment is [x] and the incentive is [y] (sometimes we'll budget Amazon gift cards here)."

2. Identify the most pressing questions

This is not an exercise to cram every validating question into one test for "efficiency". This test requires that the team get together and pick a theme or area of focus. Usually, the biggest debate in the room is a good starting point, like information architecture or macro workflow challenges. These questions will be formalized in the companion survey.

3. Create the prototype and onboarding slides

Unlike prototypes you create for your own team, who need little context by this stage, this prototype needs a few extra elements to be successful:

  1. Introductory screens. 1-2 screens that tell the user what the purpose of this test is, how long it might take, and what feedback you're looking for.
  2. Onboarding screens. 1-2 screens that explain how to interact with the prototyping tool, such as how to reveal the clickable hotspots if they get lost, and where to find the companion survey.
  3. "On rails" prototype. Do not allow the user to get lost throughout the prototype. Each view should only have two clickable hotspots: the previous step and the next step. Secondary UI elements should not be interactive.
  4. Tooltips. Hoverable tooltips for context should be present to guide the user through what they are looking at and to help them answer the questions in the companion survey.

4. Create the 5-7 question companion survey

This survey can be created with any tool of your choice; Google Forms is a free option. It should adhere to survey best practices and remain short and focused. Each question is really a pair of asks:

  1. A quantifiable question (e.g. "How difficult was it to identify [x]?")
  2. An open text question for any (optional) clarifying feedback
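To make the pairing concrete, here's a minimal sketch of how those question pairs could be generated programmatically before loading them into your survey tool. The themes and field names are hypothetical, not part of the original method:

```python
def build_survey(themes):
    """Return question pairs: a 1-5 scale ask plus an optional open-text follow-up."""
    questions = []
    for theme in themes:
        # Quantifiable question (required)
        questions.append({
            "type": "scale_1_to_5",
            "text": f"How difficult was it to {theme}?",
            "required": True,
        })
        # Open-text clarification (optional)
        questions.append({
            "type": "open_text",
            "text": f"Optional: what made it easy or hard to {theme}?",
            "required": False,
        })
    return questions

# Hypothetical focus areas picked by the working session in step 2
survey = build_survey([
    "identify the export option",
    "navigate back to the dashboard",
    "understand the billing summary",
])
print(len(survey))  # 3 themes -> 6 questions (3 pairs), within the 5-7 question budget
```

Keeping the pairs generated from a single list of themes is one way to guarantee every quantifiable score has a matching open-text answer to explain it.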

5. Allow 7-14 days to collect responses

One of the big benefits of this method is that it doesn't cost a lot of team resources. User interviews often run 45-60 minutes per session and require coordination. Here we allow users to complete the test on their own time, without needing to block our own calendars. For that reason, I give them 7-14 days to complete it. If you've properly vetted your working group, you shouldn't need to send gentle reminders, but the option exists.

6. Synthesize major takeaways for the team

As with all syntheses, use one author / one voice to capture the major takeaways. This begins as a document or a brief set of slides that captures both the quantitative survey results and highlights from the qualitative feedback collected in the open-ended clarification questions. This is where your team should be able to close the loop on outstanding decisions, and where the real aha! moments happen as everyone sees how effective (and fast!) this method can be.
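Because each question pairs a score with an optional comment, the synthesis largely writes itself: average the scores, then pull the non-empty comments as quotes. A small sketch, using made-up responses and field names:

```python
from statistics import mean

# Hypothetical raw submissions: a 1-5 difficulty score plus an optional comment
responses = [
    {"nav_difficulty": 2, "nav_comment": "Back button was easy to find."},
    {"nav_difficulty": 4, "nav_comment": "I expected the menu on the left."},
    {"nav_difficulty": 3, "nav_comment": ""},
]

# Quantitative takeaway: the average difficulty score
avg = mean(r["nav_difficulty"] for r in responses)

# Qualitative highlights: every non-empty clarifying comment
highlights = [r["nav_comment"] for r in responses if r["nav_comment"]]

print(f"Average navigation difficulty: {avg:.1f} / 5")
for quote in highlights:
    print(f'- "{quote}"')
```

The single author still decides which quotes make the takeaways document; the script only does the mechanical tallying.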

7. BONUS: Connect survey results to a dedicated Slack channel

No matter how well your synthesis is constructed, stakeholders will likely still want to dig into the raw data themselves. One way to facilitate this without it being disruptive is to pipe your survey responses directly into a newly created Slack channel (let's call it #experiment-survey). Using tools like Zapier, you can make sure each user's survey results are automatically posted to this Slack channel once they're submitted. This is not meant to be a substitute for synthesis - that step is mandatory. But in my experience, it keeps teams excited and involved and sparks stronger discussions during synthesis.
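If you'd rather not add Zapier to the stack, a few lines of glue code can do the same job: format each submission as a Slack message and send it to an incoming webhook pointed at #experiment-survey. This is a hedged sketch, not the method's prescribed tooling; the webhook URL, tester ID, and field names are all placeholders:

```python
import json
from urllib import request

# Placeholder: replace with the incoming-webhook URL Slack generates for your channel
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def format_response(response):
    """Turn one survey submission into a Slack message payload."""
    lines = [f"*New response from {response['tester']}*"]
    for question, answer in response["answers"].items():
        lines.append(f"> {question}: {answer}")
    return {"text": "\n".join(lines)}

def post_to_slack(payload):
    """POST the payload to the webhook; Slack relays it into #experiment-survey."""
    req = request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

# Demo of the formatting step only (no network call)
payload = format_response({
    "tester": "user-017",
    "answers": {"How difficult was navigation? (1-5)": "2"},
})
print(payload["text"])
```

Hook `format_response` + `post_to_slack` up to your form tool's submission trigger (e.g. a Google Forms Apps Script trigger) and every response lands in the channel as it arrives.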

There are plenty of ways to conduct research, and this method isn't right for every use case. But the next time your team runs into a snag and you're seeing the red flags of endless deliberation, consider my self-guided prototype as a way forward.
