User research methods: a practical guide to qualitative and quantitative work

Essential user research methods for product work, split into qualitative and quantitative approaches: interviews, surveys, observation, usability tests, A/B tests, and analytics—with checklists you can use immediately.

Key takeaway

In one line: UX research follows the double diamond—diverge and converge twice. Write down hypotheses, samples, and metrics so the team can align and you can reproduce the study.

Double diamond (summary)

Discover → Define → Develop → Deliver: diverge to explore the problem, converge to frame it; then diverge on solutions and converge to ship.

Introduction: why user research matters

“We should build what customers want” is easy to say and hard to prove. User research is the disciplined way to learn needs, pain points, and behavior so product decisions rest on evidence. When we planned reward and event features, we combined interviews, surveys, and observation—this article explains how we structure that work.


1. Foundational principles

A. Clarify the research goal

Pre-study checklist

  • What decision depends on this work? What question must we answer?
  • What will we do differently based on the findings?
  • Is the moment better served by qualitative depth or quantitative scale?

Example mappings

  • Exploratory: Discover needs → qualitative (interviews, contextual inquiry).
  • Validation: Test a hypothesis → quantitative (survey, A/B test) or structured usability.
  • Behavioral understanding: Go deep on “why” and “how” → qualitative (usability, observation).

B. Study design with 5W1H

  • Why: Purpose and success criteria.
  • What: Research questions and unknowns.
  • Who: Target participants and screening.
  • When: Timing in the product lifecycle.
  • Where: Context (lab, remote, in-field, in-product).
  • How: Methods and instruments.

2. Qualitative research

A. User interviews

Goals

  • Understand attitudes, emotions, and stories.
  • Surface unstated needs and mental models.
  • Map how the product fits real life.

How to run them

1. Preparation

  • Discussion guide with core questions.
  • Recruit 5–10 people who match the target (adjust for complexity).
  • Choose venue and tooling (remote or in person).

2. During the session

  • Open prompts: “How do you think about …?” beats “Do you like it?”
  • Concrete episodes: “Can you walk me through a recent example?”
  • Comfortable silence: Pause after answers; people often add depth.
  • Observe: Tone, pace, and nonverbal cues—not only words.

3. Analysis

  • Transcribe or take structured notes.
  • Tag themes; write findings with supporting quotes.

Checklist

  • Guide ready and piloted?
  • Participants representative of the target?
  • Consent and note-taking plan clear?

Practical tips

  • 60-minute shape: 5 min warm-up → 10 min context → 35 min core topic → 10 min close.
  • Remote: Zoom or Meet reduces travel friction; test audio and screen sharing.
  • Pair interviewing: One facilitator, one dedicated note-taker improves signal.

B. Usability testing

Goals

  • Find friction in real tasks.
  • Watch what people do, not only what they say.
  • Prioritize fixes by severity and frequency.

Process

1. Design

  • Task scenarios tied to critical journeys.
  • Recruit 5–8 users; small samples surface most recurring usability issues.
  • Prepare prototype or build; plan screen capture.

2. Session

  • Encourage think-aloud where appropriate.
  • Minimize hints unless the study design requires rescue protocols.
  • Capture errors, hesitation, and successful paths.

3. Analysis

  • Bucket issues by impact and frequency.
  • Recommend fixes and sequencing.
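One way to make the bucketing step concrete is a small scoring rule. This is an illustrative sketch, not a standard: the thresholds, the 1–4 severity scale, and the bucket names are assumptions you would tune with your team.

```python
def bucket_issue(severity: int, frequency: float) -> str:
    """Map a usability issue to a fix-priority bucket.

    severity: 1 (cosmetic) to 4 (blocker) -- assumed scale.
    frequency: share of observed sessions affected (0.0-1.0).
    Thresholds below are illustrative, not an industry standard.
    """
    if severity >= 3 and frequency >= 0.5:
        return "fix now"        # severe and widespread
    if severity >= 3 or frequency >= 0.5:
        return "next sprint"    # severe or widespread, not both
    return "backlog"            # minor and rare

# A blocker hit by most sessions lands in the top bucket
print(bucket_issue(severity=4, frequency=0.8))
```

Putting even a rough rule like this in the readout keeps the "impact × frequency" conversation from becoming a debate about adjectives.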

Checklist

  • Tasks map to real priorities?
  • Environment matches how you will ship?
  • Recording and consent handled ethically?

Practical tips

  • Remote moderated tests work well with screen share.
  • Pair with A/B tests after you have a concrete design hypothesis.

C. Observational research

Goals

  • See real environments and constraints.
  • Catch behaviors people do not articulate.

Methods

  • Contextual inquiry: In homes, offices, stores, or workflows.
  • Digital observation: Session replay, heatmaps (with privacy review).

Practical tips

  • Context first: Understand setting, interruptions, and tools in use.
  • Patterns: Recurring behaviors often matter more than one-off anecdotes.

3. Quantitative research

A. Surveys

Goals

  • Sample many users; estimate prevalence.
  • Test hypotheses at scale.

Design guide

1. Purpose

  • Tie every question to a decision.

2. Writing items

  • Clear wording: Avoid jargon and double-barreled questions.
  • Neutral tone: Do not steer toward “right” answers.
  • One idea per item.
  • Balanced scales where you use Likert or satisfaction items.

3. Fielding

  • Reach the right audience (email, in-app, panels).
  • Improve response rates with relevance, brevity, and transparent incentives.

4. Analysis

  • Check response quality and sample bias.
  • Run descriptive stats; add inference only when design supports it.
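For the descriptive-stats pass, a minimal sketch in Python: mean, standard deviation, and a rough 95% confidence interval for a single Likert item. The responses and the question are hypothetical, and the normal-approximation interval is only a sanity check for ordinal data, not formal inference.

```python
import math

def describe_likert(responses: list[int], z: float = 1.96) -> dict:
    """Descriptive stats for one Likert item: n, mean, sd, ~95% CI.

    The normal-approximation CI is a rough guide for ordinal data;
    check sample size and distribution before reporting inference.
    """
    n = len(responses)
    mean = sum(responses) / n
    var = sum((x - mean) ** 2 for x in responses) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    margin = z * sd / math.sqrt(n)
    return {"n": n, "mean": mean, "sd": sd,
            "ci95": (mean - margin, mean + margin)}

# Hypothetical 1-5 responses to a satisfaction item
stats = describe_likert([4, 5, 3, 4, 4, 2, 5, 4, 3, 4])
print(f"n={stats['n']} mean={stats['mean']:.2f} sd={stats['sd']:.2f}")
```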

Checklist

  • Each question serves a decision?
  • Pretested on a small group?
  • Length appropriate (often 5–10 minutes)?

Practical tips

  • Tools: Google Forms, Typeform, SurveyMonkey, or in-house instruments.
  • Pilot: Catch confusing scales before full send.
  • Incentives: Small thank-yous often lift completion without skewing results if framed ethically.

B. A/B testing

Goals

  • Compare variants on one primary metric.
  • Replace opinions with controlled evidence.

Process

1. Design

  • Hypothesis: “Variant A will outperform B on conversion because …”
  • One main change per test when learning; multivariate only with power and tooling.
  • Random assignment to variants.

2. Run

  • Adequate sample and duration for statistical reliability (and business cycles).
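"Adequate sample" can be estimated up front. A sketch using the common normal-approximation formula for a two-proportion test; the 10% baseline and 2-point minimum detectable effect are made-up inputs, and real platforms may adjust further for sequential peeking or unequal splits.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p_base: float, mde: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-arm sample size for a two-sided two-proportion A/B test.

    p_base: baseline conversion rate; mde: absolute minimum detectable effect.
    Uses the standard normal-approximation formula.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    p_alt = p_base + mde
    variance = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Example: detect an absolute 2-point lift on a 10% baseline conversion
n = sample_size_per_arm(0.10, 0.02)
print(n)
```

Run the number before launch: if the required sample exceeds your traffic over a reasonable window, widen the MDE or pick a different test.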

3. Read out

  • Check significance and practical effect size.
  • Document learning for the next iteration.
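The readout itself can be sketched with a pooled two-proportion z-test. The counts below are hypothetical; a real readout would come from your experimentation platform, which may use different corrections.

```python
from math import erfc, sqrt

def two_proportion_ztest(conv_a: int, n_a: int,
                         conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test comparing two conversion rates.

    Returns (absolute lift of A over B, p-value). Uses the pooled
    standard error; assumes independent samples and large-ish n.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability
    return p_a - p_b, p_value

# Hypothetical readout: A converted 520/4000, B converted 450/4000
lift, p = two_proportion_ztest(520, 4000, 450, 4000)
print(f"lift={lift:.3%} p={p:.3f}")
```

Report the lift (practical effect size) alongside the p-value; a tiny but "significant" lift may not be worth shipping.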

Checklist

  • Hypothesis and primary metric explicit?
  • Single-variable discipline (unless MVT is justified)?
  • Power / runtime adequate?

Practical tips

  • Platforms: Google Optimize successors, VWO, Optimizely, or in-house feature flags.
  • MVT: Use when interactions matter and you can analyze responsibly.

C. Behavioral analytics

Goals

  • Find patterns in usage data.
  • Monitor health of funnels and features.

Sources

  • Web: GA4 and similar.
  • Product: Amplitude, Mixpanel, Heap, PostHog, etc.
  • Qualitative overlays: Heatmaps and session tools where policy allows.

Practical tips

  • Rhythm: Weekly or monthly reviews beat one-off dashboards.
  • Ask why: Every metric shift deserves a narrative and, often, a follow-up qualitative slice.
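A recurring funnel review can start from something as small as this sketch: step-to-step and overall conversion for an ordered funnel. The step names and counts are hypothetical stand-ins for an analytics export.

```python
def funnel_report(steps: list[tuple[str, int]]) -> list[dict]:
    """Step-to-step and overall conversion for an ordered funnel.

    steps: (name, user_count) pairs, already ordered top to bottom.
    """
    rows = []
    top = steps[0][1]
    for i, (name, count) in enumerate(steps):
        prev = steps[i - 1][1] if i else count
        rows.append({"step": name,
                     "users": count,
                     "step_conv": count / prev,    # vs previous step
                     "overall_conv": count / top}) # vs funnel entry
    return rows

# Hypothetical reward-feature funnel counts
report = funnel_report([("visit", 10_000), ("view_reward", 4_200),
                        ("start_claim", 1_500), ("claim_done", 1_200)])
for r in report:
    print(f"{r['step']:<12}{r['users']:>7}{r['step_conv']:8.1%}{r['overall_conv']:8.1%}")
```

The biggest step-to-step drop is usually where the next qualitative slice (a usability test or a few interviews) should go.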

4. From insights to product decisions

A. Synthesis framework

  1. Observe: What happened?
  2. Interpret: Why might that be?
  3. Insight: So what for our strategy?
  4. Action: What do we change, by when, and how will we verify?

B. Feed the roadmap

Checklist

  • Findings linked to roadmap items and owners?
  • Prioritization explicit (impact × confidence × effort)?
  • Follow-up research planned where uncertainty remains?
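The impact × confidence × effort scoring can be written down so rankings are reproducible. A minimal sketch; the 1–10 scales, effort-in-person-weeks unit, and example findings are all assumptions to adapt.

```python
def ice_score(impact: float, confidence: float, effort: float) -> float:
    """Impact x confidence / effort: a simple priority score.

    impact and confidence on an assumed 1-10 scale; effort in
    person-weeks. Higher score = do sooner.
    """
    return impact * confidence / effort

# Hypothetical findings from a research readout
findings = [
    ("Simplify reward claim flow", ice_score(8, 7, 2)),
    ("Redesign onboarding survey", ice_score(6, 5, 4)),
    ("Add funnel alerting",        ice_score(4, 9, 1)),
]
ranked = sorted(findings, key=lambda f: f[1], reverse=True)
for name, score in ranked:
    print(f"{score:5.1f}  {name}")
```

The point is not the exact formula but that the inputs are written down, so the team can argue about a number instead of a hunch.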

Practical tips

  • Share widely: Short readouts beat dense decks nobody opens.
  • Archive: Future you will thank you for dated research packets.
  • Iterate: Research is a system, not a single project.

Conclusion: make research routine

User research is central to good product work. Qualitative methods build empathy and explanatory models; quantitative methods validate scale and trade-offs. Run research regularly, document decisions, and close the loop in the product—that is how evidence compounds.
