Before you launch into new research, spend time excavating what your organization already has. Most companies have scattered data about their customer journey buried across different teams and systems. Customer support call logs. Past focus group notes. Analytics data. NPS scores. Feedback from sales conversations. This existing data—both qualitative and quantitative—can reveal patterns and help you shape your research efforts more efficiently.
This step serves two purposes. First, it saves time and budget by preventing duplicate research. Second, it helps you identify gaps. Where does your existing data tell you nothing? Where do stakeholder assumptions conflict with what the data shows? These gaps become your research priorities.
You might be tempted to build your journey map primarily from quantitative data. Analytics show you what happened. NPS tells you satisfaction levels at specific points. But quantitative data answers what and how much—not why. It won't tell you the emotions, mindsets, and motivations that drive customer behavior at each stage of the journey.
For that, you need qualitative research. Qualitative methods let you directly observe or converse with customers, capturing the context and reasoning behind their actions. This is the only way to understand the full texture of the customer journey.
Interviews are the most straightforward qualitative method. You talk to customers about their experiences, their challenges, and their decision-making process. Done well, interviews reveal patterns you wouldn't see in data alone.
The key is asking specific questions, not broad ones. "How do you feel about our product?" is less useful than "Walk me through what happened when you tried to set up your account." Specific questions trigger memories and help customers articulate details they might otherwise forget.
In-person interviews have a particular advantage: you can use a simple technique where participants map their journey steps on sticky notes as they talk. This helps them recall the sequence accurately and rearrange steps if they remember them out of order. If you conduct phone interviews afterward, you can send participants a rough template based on what you learned in person, asking them to review and revise it to reflect their own experience. This approach captures variation across different customer segments.
The limitation of interviews is that what people say they do isn't always what they actually do. Someone might describe their process as straightforward and efficient, but their actual behavior might involve workarounds, delays, or steps they forgot to mention. This is why interviews work best when paired with observation.
Field studies involve observing customers in their natural environment: their home, their office, a retail store. You watch them interact with your product or service in the context where they actually use it.
The power of field studies is that they reveal blind spots. During one journey-mapping research initiative, customer service representatives described a specific protocol for finding answers to customer questions. But when researchers observed them actually handling calls, they saw something different: the representatives were using workarounds and shortcuts that weren't part of the official process. This gap between what people say and what they do is exactly what field studies uncover.
Field studies can take different forms. Contextual inquiry involves observing someone while you're present, asking clarifying questions as you watch. For retail experiences, "shop-alongs" let you follow customers through their purchasing process. For digital products, you might observe someone using your software in their actual work environment rather than in a lab.
The trade-off with field studies is time and cost. They're more resource-intensive than interviews. But they're invaluable for understanding the actual flow of interactions and the environment that shapes customer behavior.
If you're designing a future-state journey map for a product or service that doesn't exist yet, you have no existing user base to research. This is where competitive analysis becomes essential. By examining how competitors handle similar journeys, you can identify their strengths and weaknesses, and spot opportunities to differentiate.
You can approach this virtually using remote usability-testing platforms. Record customers using competitor sites and ask them to comment on their thoughts, feelings, and motivations at specific points. This gives you research input even when you don't have your own customers to study yet.
When budget and timeline allow, the strongest approach combines multiple qualitative methods. Each method reveals different aspects of the journey. Together, they create a more complete picture.
Here's a sample research plan that organizations can adapt based on their constraints:
| Phase | Method | What to do |
|-------|--------|------------|
| 1a | Customer interviews (in-person) | Conduct 5-8 in-person interviews. Have participants map their journey steps on sticky notes as they talk. Focus on all relevant phases of the journey. |
| 1b | Customer interviews (phone) | Create a rough journey template from phase 1a. Conduct 5-8 phone interviews with new participants, asking them to review and revise the template using a digital whiteboard tool. |
| 2 | Field study | Perform contextual inquiry with 3-5 participants. Observe them using your product in their actual environment. Ask clarifying questions to verify what you heard in interviews. |
| 3 | Competitive analysis | Conduct usability testing with 3-5 participants using competitor products. Ask them to comment on their thoughts and feelings at key points in the competitor journey. |
The timeline for this research typically spans 4-6 weeks, depending on how quickly you can recruit participants and schedule sessions. The cost varies based on whether you use internal resources or external research firms, but qualitative research is generally less expensive than many organizations assume.
After you've completed qualitative research, quantitative data becomes a powerful reinforcement tool. It adds credibility to your findings and helps you understand the scale and frequency of the behaviors and emotions you discovered.
Several quantitative approaches work well with journey mapping:
Surveys can quantify the frequency and magnitude of behaviors you discovered in interviews. For example, if interviews revealed that customers struggle with account setup, a survey can tell you what percentage of customers experience this struggle and how much it impacts their decision to continue.
Digital analytics can validate pain points. If interviews suggest that customers get frustrated at a specific step, analytics showing high exit rates at that exact point strengthens your case.
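To make the analytics check concrete, here is a minimal sketch in Python. The funnel steps and session counts are entirely hypothetical; in practice you would pull these numbers from your analytics tool.

```python
# Toy funnel data: number of sessions that reached each journey step.
# Step names and counts are hypothetical, for illustration only.
funnel = [
    ("landing page", 10_000),
    ("plan selection", 6_200),
    ("account setup", 5_900),
    ("payment", 2_400),
    ("confirmation", 2_100),
]

# Exit rate at a step = share of sessions that reached it but not the next step.
for (step, reached), (_, next_reached) in zip(funnel, funnel[1:]):
    exit_rate = (reached - next_reached) / reached
    print(f"{step}: {exit_rate:.0%} exit")
```

In this invented data, the steepest drop-off sits at account setup; that is the kind of number that corroborates an interview finding that setup is where customers get stuck.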
Customer satisfaction or loyalty scores can be aligned to specific journey phases. Rather than having a single NPS score, you can measure satisfaction at each stage, creating a visual representation of where the journey is working and where it's breaking down.
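The per-phase arithmetic is the same as standard NPS: percentage of promoters (scores 9-10) minus percentage of detractors (scores 0-6), applied to responses collected at each phase. A small sketch, with invented phase names and ratings:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical 0-10 ratings collected at each journey phase.
by_phase = {
    "research": [9, 10, 8, 9, 7, 10],
    "purchase": [8, 9, 9, 10, 9, 8],
    "setup":    [3, 5, 6, 8, 4, 7],
    "support":  [7, 9, 8, 6, 9, 10],
}

for phase, scores in by_phase.items():
    print(f"{phase}: NPS {nps(scores)}")
```

Plotted across phases, a sharply negative score (here, setup) shows exactly where the journey is breaking down, which a single overall NPS number would hide.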
Social sentiment analysis can reveal how customers feel about different stages. Analyzing customer mentions on social media or in reviews can show which parts of the journey generate positive or negative emotions.
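A production sentiment pipeline would use a trained model, but the stage-level idea can be sketched with a toy keyword tally. Everything here is invented for illustration: the mention texts, the stage tags, and the word lists.

```python
from collections import Counter

POSITIVE = {"love", "easy", "great", "fast"}
NEGATIVE = {"confusing", "slow", "broken", "frustrating"}

# Hypothetical social-media mentions, pre-tagged with a journey stage.
mentions = [
    ("setup", "account setup was confusing and slow"),
    ("setup", "setup flow is broken on mobile"),
    ("purchase", "checkout was fast and easy"),
    ("purchase", "love the one-click purchase"),
]

# Net sentiment per stage: positive-word hits minus negative-word hits.
sentiment = Counter()
for stage, text in mentions:
    words = set(text.lower().split())
    sentiment[stage] += len(words & POSITIVE) - len(words & NEGATIVE)

print(dict(sentiment))
```

Even a crude tally like this makes the contrast visible: in this made-up data, setup accumulates negative language while purchase accumulates positive language.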
The key is using quantitative data to supplement and validate qualitative findings, not replace them. Quantitative data tells you the magnitude of a problem. Qualitative data tells you what the problem actually is.
The goal of this research isn't to create a perfect map. It's to create a map that your organization will actually use to make better decisions. This means involving your core team of stakeholders throughout the research process, not just at the end.
Share findings as you discover them. Discuss emerging patterns. When you find a gap between what stakeholders assumed and what research shows, address it directly. This ongoing involvement builds buy-in and reduces the attachment to assumptions that can undermine even the best research.
When you're ready to synthesize your findings into a final journey map, consider bringing actual customers into the workshop. Having them review and validate your map creates accountability and often surfaces nuances that even thorough research can miss.