The following is a short extract from our new book, Researching UX: User Research, written by James Lang and Emma Howell. It’s the ultimate guide to user research, a key part of effective UX design. SitePoint Premium members get access with their membership, or you can buy a copy in stores worldwide.
This next section is going to get a bit theoretical. Don’t worry: we’ll show you how to apply it later in the chapter. For now, though, you need the basic building blocks of research design.
In this section, we’re going to run through nine concepts. Some may already be familiar to you, others less so. They are:
- What is data?
- Qualitative vs. quantitative
- Discovery vs. validation
- Insight vs. evidence vs. ideas
- Validity and representativeness
- Scaling your investment
- Multi-method approaches
- In-the-moment research
- Research as a team sport
What is Data?
The research process involves collecting, organising and making sense of data, so it’s a good idea to be clear about what we mean by the word ‘data’. Actually, data is just another word for observations, and observations come in many forms, such as:
- Seeing someone behave in a certain way, or do something we’re interested in (such as clicking a particular button)
- Hearing someone make a particular comment about your product
- Noting that 3,186 people have visited your Contact Us page today
But how do you know what’s useful data, and what’s just irrelevant detail? That’s what we’ll be covering in the first few chapters, where we’ll talk about how to engage the right people, and how to ask the right questions in the right way.
And how do you know what to do with data when you’ve got it? We’ll be covering that in the final two chapters about analysis and sharing your findings. In particular, we’ll be showing you how to transform raw data into usable insight, evidence and ideas.
Qualitative vs. Quantitative
When it comes to data analysis, the approaches we use can be classified as qualitative or quantitative.
Qualitative questions are concerned with impressions, explanations and feelings, and they tend to begin with why, how or what. For example:
- “Why don’t teenagers use the new skate park?”
- “How do novice cooks bake a cake?”
- “What’s the first thing visitors do when they arrive on the homepage?”
Quantitative questions are concerned with numbers. For example:
- “How many people visited the skate park today?”
- “How long has the cake been in the oven?”
- “How often do you visit the website?”
Because they answer different questions, and use data in different ways, we also think of research methods as being qualitative or quantitative. Surveys and analytics are in the quantitative camp, while interviews of all sorts are qualitative. In general, you’ll be leaning on qualitative research methods more, so that will be the focus of this book.
Discovery vs. Validation
The kind of research you do will depend on where you are in your product or project lifecycle.
If you’re right at the beginning (in the ‘discovery’ phase), you’ll be needing to answer fundamental questions, such as:
- Who are our potential users?
- Do they have a problem we could be addressing?
- How are they currently solving that problem?
- How can we improve the way they do things?
If you’re at the validation stage, you have a solution in mind and you need to test it. This might involve:
- Choosing between several competing options
- Checking that the implementation of your solution matches the design
- Checking with users that your solution actually solves the problem it’s supposed to.
What this all means is that your research methods will differ, depending on whether you’re at the discovery stage or the validation stage. If it’s the former, you’ll be wanting to conduct more in-depth, multi-method research with a larger sample, using a mix of both qualitative and quantitative methodologies. If it’s the latter, you’ll be using multiple quick rounds of research with a small sample each time.
At the risk of confusing matters, it’s worth mentioning that discovery continues to happen during validation – you’re always learning about your users and how they solve their problems, so it’s important to remain open to this, and adapt earlier learnings to accommodate new knowledge.
Insight, Evidence and Ideas
Research is pointless unless it’s actually used. In some cases, the purpose of research is purely to provide direction to your team; the output of this kind of project is insight. Perhaps you want to understand users’ needs in the discovery phase of your project. If so, you need insight into their current behaviour and preferences, which you’ll refer to as you design a solution.
Often, though, you need research to persuade other people, not just enlighten your immediate team. This can be where you need to make a business case, where your approach faces opposition from skeptical stakeholders, or where you need to provide justification for the choices you’ve made. When you need to persuade other people, what you need is evidence.
And sometimes, your main objective is to generate new ideas. Where that’s the case, rigorous research is still the best foundation, but you’ll want to adjust things slightly to maximise the creativity of your outputs.
Research is great at producing insight, evidence and ideas. But methodologies that prioritise one are often weaker on the others. It’s much easier if you plan in advance what you’ll need to collect, and how, rather than leaving it till the end of the project. The takeout: you should think about the balance of insight, evidence and ideas you’ll need from your project, and plan accordingly.
When it comes to planning your approach, bear in mind your analysis process later on. If you give it thought at this stage, you’ll ensure you’re collecting the right data in the right way. We talk about this more in Chapter 8.
Validity and Representativeness
Validity is another way of saying, “Could I make trustworthy decisions based on these results?” If your research isn’t valid, you might as well not bother. And at the same time, validity is relative. What this means is that every research project is a tradeoff between being as valid as possible, and being realistic about what’s achievable within your timeframe and budget. Designing a research project often comes down to a judgement call between these two considerations.
Let’s look at an example. You want to understand how Wall Street traders use technology to inform their decision-making. If you were prioritising validity, you might aspire to recruit a sample of several hundred, and use a mix of interviewing and observation to follow their behaviour week by week over several months. That would be extremely valid, but it would also be totally unrealistic:
- Wall Street traders will be rich and busy. They’re unlikely to want to take part in your research.
- A sample of several hundred is huge. You’re unlikely to be able to manage it and process the mountain of data it would generate.
- A duration of several months is ambitious. You would struggle to keep your participants engaged over such a long period.
- Even if the above weren’t issues, the effort and cost involved would be huge.
Undaunted, you might choose to balance validity and achievability in a different way, by using a smaller number of interviews, over a shorter duration, and appealing to traders’ sense of curiosity rather than offering money as an incentive for taking part. It’s more achievable, but you’ve sacrificed some validity in the process.
Validity can take several forms. When you design a research project, ask yourself whether your approach is:
- Representative: Is your sample a cross-section of the group you’re interested in? Watch out for the way you recruit and incentivize participants as a source of bias.
- Realistic: If you’re asking people to complete a task, is it a fair reflection of what they’d do normally? For example, if you’re getting them to assess a smartphone prototype, don’t ask them to try it on a laptop.
- Knowable: Sometimes people don’t know why they do things. If that’s the case, it’s not valid to ask them! For example, users may not know why they tend to prefer puzzle games to racing games, but if you ask, they’ll probably still hazard a guess – and that guess isn’t reliable data.
- Memorable: Small details are hard to remember. If you’re asking your participants to recall something, like how many times they’ve looked at their email in the past month, they’ll be unlikely to remember, and therefore your question isn’t valid: you need a different approach, such as one based on analytics. If you were to ask them how many times they’ve been to a funeral in the past month, you can put more trust in their answer.
- In the moment: If your question isn’t knowable or memorable, it’s still possible to tackle it ‘in the moment’. We’ll say more about this below.
Takeout: You want your research approach to be as valid as possible (i.e. representative and realistic, as well as focused on questions that are knowable and memorable) within the constraints of achievability. Normally, achievability is a matter of time and budget, which leads us to…
Scaling Your Investment
Imagine you were considering changing a paragraph of text on your website. In theory, you could conduct a six-month contextual research project at vast expense, but it probably wouldn’t be worth it. The scale of investment wouldn’t be justified by the value of the change.