The following is how I typically think about the product design process. Please keep in mind that this represents only one way of doing things as I’ve experienced them throughout my career.
Problem Definition
This is where we define what we’re actually going to work on, typically constrained by the goals we’ve set as a company at the beginning of the year. This is the time to ask questions like, “What’s the problem we’re trying to solve?” Or, “What opportunities are there for us to improve our product/service?”
The discovery of these problems and/or opportunities may come from several different sources: internal data observations, customer and/or market research, business-specific goals/needs, etc. Wherever we uncover these problems or opportunities, it’s important that we gather the information necessary to validate that the problem or opportunity is indeed real and to estimate how big it is¹.
Hypothesis
Once we’ve identified a problem, it’s time to formulate a hypothesis. Yes, the same type of hypothesis that you learned about in high school biology class. A good hypothesis has three parts:
- The thing that you’re going to change
- The behavior that will result from this change
- The business metric(s) that will be affected
If we put that together into the form of a statement, it will look something like this:
If we [do this], then [this behavior will change] and [these metrics will be affected].
Let’s say that through our customer research, we identified that people aren’t very familiar with our brand and are therefore hesitant to sign up. One of our hypotheses might look like this:
If we add customer testimonials to our home page, then potential customers will trust us more and signups will increase.
When crafting your hypothesis, you’ll need to take both your customers’ needs and your business’s needs into consideration. This is where a keen understanding of both comes into play, and where — in my opinion — a product designer’s role is defined. I see product designers as sitting between the business and the customer — they need to be able to provide solutions that balance the goals of the business with the needs of the customer.
Design
This is what most people typically think of when they think of “design”. This is where we put pencil to paper — or pixels to screen — and start to visualize the solutions we’ve been discussing.
Prototyping (optional)
At this stage, we’ve most likely narrowed our concepts down to a set that we feel would benefit from customer feedback. The fidelity of these prototypes can vary widely, from simple paper prototypes (yes, really) to high-fidelity, interactive prototypes that, for all intents and purposes, are nearly indistinguishable from what the final experience would actually be like.
Prototyping is optional because not every concept will necessarily require a prototype — it’s not worth the additional time and effort if we’re unable to justify what we’re going to learn from it and how we’ll take action on the feedback we receive. Some things are better tested at a much larger scale (such as an A/B test).
Evaluative Research (optional)
This step is very much tied to the previous step. At some point we’ll want to evaluate or get feedback on our ideas. The form that this takes will be informed by the stage of the process we’re in and what we’re trying to learn. Sometimes we might want to get feedback on the comprehension of the concept we’re working on. Other times we might want to uncover usability issues. At any rate, it’s important to be very clear about what we’re trying to learn, how we’re going to phrase our questions to get at those learnings, and how we’ll ultimately take action on the feedback we receive.
Typically this type of evaluative research is done in-market and in-person with one-on-one, moderated sessions where we bring in several participants (usually between 12 and 24 people) over the course of a day or two in order to better understand how our customers perceive our ideas.
A word of caution: be careful about the type of feedback you’re looking for and the types of questions you ask. Evaluative research is not very effective for determining which of your ideas is “good” or “bad” (the scale is just too small, and people often say things that they would not actually do). And if you ask leading questions², you may end up getting only the answers you seek, not the answers that you need.
At this point, design, prototyping, and research may be repeated in an effort to hone and temper your ideas. Additionally, many of these steps may overlap or be worked on in parallel — this process won’t always be as linear as it’s presented here.
A/B Testing
Once we’re feeling good about the direction of the designs we’ve been working on, it’s time to test them at scale within our actual product or service.
It’s not my intention to go into the details of A/B testing here (although I’d like to address that in a future post), but the basics are as follows: we identify which experiences we’d like to test relative to our current experience — these become our test “cells”. We then split the customers who visit our product or service into different cohorts so that each of them experiences only one of the concepts we’re testing. Finally, we measure the success or failure of each of those concepts — relative to our current experience, or our “control” — using some metric that we established ahead of the test (e.g. an increase in sign-ups or revenue, or a decrease in churn).
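To make the cohort-splitting idea concrete, here’s a minimal sketch of how a deterministic split is often implemented: hash the visitor’s id together with the experiment name so each visitor always lands in the same cell. The function and experiment names (`assign_cohort`, `homepage-trust`) are my own illustrations, not anything from a particular testing tool.

```python
import hashlib

def assign_cohort(user_id: str, experiment: str, cells: list[str]) -> str:
    """Deterministically assign a user to one test cell.

    Hashing the user id together with the experiment name gives each
    user a stable cell across visits, and independent splits across
    different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return cells[int(digest, 16) % len(cells)]

cells = ["control", "testimonials"]
# The same visitor always lands in the same cell on every visit.
assert assign_cohort("user-42", "homepage-trust", cells) == \
       assign_cohort("user-42", "homepage-trust", cells)
```

Real testing tools add more on top of this (traffic allocation, exclusion rules, logging), but the core guarantee — one stable experience per customer — comes from exactly this kind of deterministic bucketing.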
Test Results Analysis
Careful analysis of the results is important and can often be complex, as there’s a lot of statistical science that goes into setting up and interpreting an A/B test. It really behooves you to work with someone who has significant experience analyzing A/B test results; however, if that’s not an option, many of the A/B testing tools available today have built-in analysis features.
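As one small taste of the statistics involved, here’s a sketch of a standard two-proportion z-test — a common way to check whether a difference in conversion rates between a test cell and control is likely to be real rather than noise. This is a textbook formula, not a method from any particular tool, and the numbers below are made up for illustration.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates.

    conv_* = number of conversions (e.g. sign-ups), n_* = visitors in
    each cell. Returns the z statistic and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up example: control converts 480/10,000; test cell 560/10,000.
z, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p falls below 0.05 for these numbers
```

Even this simple test hides assumptions (fixed sample size decided up front, no peeking at interim results), which is exactly why experienced help or a tool’s built-in analysis is worth leaning on.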
Retire or Productize
At this point — depending on the results — we may decide to run a follow-up test to clarify our learnings, or we may have a clear “winning” or “losing” experience on our hands: a winner can become the new default for our customers, while a loser is something we can learn from and move on.
1. Ultimately your idea — or your hypothesis — will be validated at scale by running an A/B test within your product or service, but it’s important to build a case for why you believe this problem or opportunity exists and to have access to data that backs up your claim. ↩
2. A leading question is one that influences the participant to see something or to answer the question being asked in a particular way. For example, “Would you click this button to continue?” versus, “What would you do next to continue on from here?” ↩