Aug 26, 2024

You really like this, don’t you?

Why should user researchers avoid questions like “Do you like it?” Why does this question or one of its close siblings keep on making an appearance? What are the biases to be aware of during user research? Why are open-ended questions better? Let’s talk about it.

Photo by UX Indonesia on Unsplash

Many, many years ago, I had the opportunity to attend a Google-sponsored masterclass about Human-Centred Design (HCD) in Cape Town, the beautiful city I call home. It was an intense two-day experience in which we were shown the power of the HCD process: discovering needs, defining problems, ideating, prototyping, and improving. We were given a light optional exercise to do on our own to drive the concepts home. I tend to do optional exercises. They amuse me, and I enjoy discovering challenges that aren’t mentioned during classes and lectures. At the end of day one, all of us had sketchy hand-drawn mobile prototypes attempting to solve well-defined problems.

The facilitators recommended that we put our prototypes in front of one or two people (obviously, not necessarily our target audience but at least other, non-product human beings) that evening or early the next morning, and to listen and observe. We could then bring these observations back and see how they might help us improve or pivot our products.

Jokingly, one facilitator called out to the room: “And remember: don’t put words into your users’ mouths. Keep your questions open-ended!”

She picked up a sketchy prototype that was lying around, held it up to us, and said with a smile: “So you can’t ask people things like ‘You really like this, don’t you?’”

There was laughter all around. No self-respecting user researcher would ask something silly like that! Right?

Right…?

It’s almost 20 years later. Today, in my experience, user researchers continue to ask users that question during in-person, qualitative research, reporting on the answers gathered and making design decisions based on this dubious data. Why?

Every Product Designer knows that bias must be accounted for and leading questions minimised during user research. We know that we must ask open-ended questions. We know that we are not selling our products during user research, but observing people with interest and curiosity to learn what they might do with our designs (or ideally, the designs of other designers!) when we’re not around.

Despite this, I still regularly come across user researchers who ask research participants the “Do you like it?” question during qualitative, one-on-one user research. They might change the wording slightly, but all of the following variants are problematic when a researcher asks them of a participant:

- How would you rate your experience of this?*
- How easy is this to use?*
- Do you understand the content?*
- How likely are you to recommend this to your friends or family?*
- How empowered does this product make you feel?*
- Did you like it?*

*Researchers often add short lists of answers or Likert scales to these questions. These may be good (and necessary!) for quantitative data collection methods. During in-person interviews, however, these answer options might just exacerbate the problem. The thinking goes: “If I add a list of answers and the answer selected is negative, I can ask the user ‘why?’” Unfortunately, the moment your participants discover that a negative answer means they need to provide more explanation, they are likely to veer towards positive responses with even greater fervour.
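To illustrate the quantitative case the footnote allows for: Likert-scale answers collected anonymously can be summarised as a distribution instead of being debriefed face to face. Here is a minimal Python sketch; the five-point scale and all response data are made-up assumptions for illustration, not from any real study:

```python
from collections import Counter
from statistics import median

# Hypothetical anonymous survey responses on a 5-point Likert scale,
# where 1 = "strongly disagree" and 5 = "strongly agree".
responses = [4, 5, 2, 3, 5, 4, 1, 4, 3, 5]

counts = Counter(responses)
total = len(responses)

# Report the full distribution: ordinal data is best read as a
# spread of answers, not collapsed into a single average.
for point in range(1, 6):
    n = counts.get(point, 0)
    print(f"{point}: {'#' * n} ({n / total:.0%})")

# The median is generally safer than the mean for ordinal scales.
print("median response:", median(responses))
```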

Research participants are highly unlikely to answer any of these questions honestly. They might even be unable to. Suppose a research participant is sitting with a researcher or the designer of the product being evaluated. In this case, the participant is highly likely to give a biased response whenever a question allows it, because participants want to please the researcher. This bias is powered by three factors:

1. Social Desirability Bias: Participants are likely to provide higher ratings because they feel a sense of obligation or want to be seen favourably by you, the researcher, or the organisation you represent.

2. Fear of Negative Consequences: If participants believe their feedback could negatively impact their relationship with you or your organisation, they are likely to avoid giving low ratings.

3. Lack of Anonymity: When feedback isn’t anonymous, participants may be less honest, leading to skewed data that doesn’t accurately reflect user sentiment. Even if you reassure your research participants that their feedback will remain anonymous, they are not anonymous to you.

If people think that you or your friends designed a product and you ask them about it, they are likely to tell you what they think you want to hear. The same goes for participants who are incentivised to take part in the research. They might want to be invited to take part in research again.

In my experience, this effect is magnified when there is a power imbalance between researchers and participants. If you are part of the organisation for which you are doing research, and your organisation is big and renowned, or the driving force behind a product or service that your research participants truly need, the three factors — social desirability bias, fear of negative consequences, and lack of anonymity — will skew your data like a wrecking ball. I have done research for organisations as an external consultant and have done my best to introduce myself as such: it’s hard to tell whether people clearly differentiate between a “person researching her own organisation’s product” and a “person researching another organisation’s product”. I suspect the two often mush together in people’s minds.

Am I saying that we can’t ask people how they feel about a design or what they think about it? Not at all. If you are sitting right next to your participant and observing them getting increasingly frustrated with a task or getting hopelessly derailed, you should absolutely be asking open-ended questions. The responses will still be vulnerable to bias, though, and your observations are likely to be more reliable.

A classic scenario that every user researcher has experienced is this: a participant (let’s call her Alice) is given a task to complete. Alice struggles her dear heart out to get the task done. She clicks on buttons that shouldn’t exist and goes down rabbit holes that surprise even you, and after 10 minutes or more, she completes a task that should have taken her 30 seconds. You ask: “How was that?” Alice replies with a fleeting expression of worry: “Easy! Yes. It’s really user-friendly.”

To reduce this bias, it’s often better to gather quantitative data. Look at task success rates and durations analytically, and gather user satisfaction feedback through anonymous surveys or third-party tools that can collect user responses objectively, without the direct involvement of the product team. Satisfaction ratings and Net Promoter Scores (if NPS is the metric your organisation prefers) gathered in this way are more likely to reflect genuine user sentiment and provide more reliable data for decision-making.
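As a rough sketch of what that analytical view could look like: assuming task logs with a completion flag and a duration, plus anonymously collected 0–10 recommendation scores (the data shapes, names, and numbers below are hypothetical), the core metrics reduce to a few lines of Python:

```python
from statistics import median

# Hypothetical task logs and anonymous 0-10 recommendation scores.
task_logs = [
    {"completed": True, "seconds": 34},
    {"completed": True, "seconds": 612},
    {"completed": False, "seconds": 540},
]
nps_scores = [9, 10, 7, 6, 10, 8, 3, 9]

def net_promoter_score(scores):
    """NPS = % promoters (scores 9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

success_rate = sum(log["completed"] for log in task_logs) / len(task_logs)
median_seconds = median(log["seconds"] for log in task_logs)

print(f"Task success rate: {success_rate:.0%}")
print(f"Median time on task: {median_seconds}s")
print(f"NPS: {net_promoter_score(nps_scores):.0f}")
```

Numbers gathered this way describe what people actually did, rather than what they were willing to say with a researcher watching.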

The bottom line? If you’re doing in-person research, don’t ask people “Do you like this?” Don’t ask people questions where the answer is likely to be a wrecking ball in terms of its reliability for future design decisions.

In-person qualitative user research is incredibly valuable if it is done well and if the findings are combined with insights gained from quantitative methods. In-person qualitative user research is also very expensive. Use your time with people wisely. Empathise. Observe. You are not trying to prove that a product or an idea is great. You are discovering how people might interact with it and uncovering surprising behaviours and motivations.

I wanted to leave this post right here, but there is more to be said. There is a good reason why researchers, despite knowing that this is wrong, keep on asking “Do you like it?” questions in qualitative interviews. It’s often because stakeholders demand it. They want to know if people will like a product or a feature before investing in it too much.

Side note: Interestingly, we can devise a trustworthy and reliable research method to find out how many people like a product or feature. But liking something does not mean that people will use it or buy it. You would be better off determining how many people need it. And this is a blog post for another day.

As the user researcher on the team, a big part of your job is to help your stakeholders articulate their needs better. When your stakeholders ask you to find out whether people will like the new design, it’s your job to ask, with polite curiosity, “Why do you want to know this?” and to figure out whether they are, in fact, hoping for an indication of the number of people who are likely to use or buy the new design.

Asking a team to do user research to lower the risk of over-investing in a new design that too few people will use or buy is a good thing. From a business perspective, user research exists for this very reason. Let’s say your stakeholders are happy with this as a research question: “How many people in our target audience are likely to use the new design?”

Now this is the kind of question that user research exists to answer. It is a question that can be answered with a well-planned combination of quantitative and qualitative research activities, depending on the product type. None of these activities will involve user researchers approaching people and asking: “Will you use this?” People are notoriously bad at predicting their future behaviours. Research questions are not interview questions, and we must not confuse the two. Once again, this phenomenon deserves a blog post of its own!

If you have read all the way to this closing paragraph, then thank you. I hope this is of some value to you. Please share your stories as a user researcher! I would love to hear them.

You really like this, don’t you? was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
