Kick-start your Continuous Research in 5 steps
Identify and map your key user feedback sources to streamline your research process and maximize impact!
It’s been a while since I last wrote an article, and I must admit I’ve missed sharing what I learn daily with the design community ❤.
For this one, I want to walk you through how my team and I set up a Continuous Research process at OpenClassrooms, and how it helped us deliver better insights to feed the roadmaps and ultimately improve our users’ experience.
By the end of this article, you should be able to do the same for your product!
Let’s dive in!
But wait! Before starting: why should we do this?
You’re right to ask that question! 🙂
One of my core beliefs is that User Research should have a strong impact within a company while remaining efficient and time-effective. Unfortunately, this is not always the case: in some companies, user research can slow down decision-making or lead to poor strategic choices.
The main objectives of Continuous Research are:
- Regularly and easily gathering user feedback on strategic parts of the product
- Consolidating knowledge about users
- Enabling user-centric decision-making
Collecting user feedback continuously helps prioritize the roadmap, improve user experience, and speed up iteration cycles.
Step 1: Identify Existing Feedback Sources
Have you ever noticed that User Research is very similar to data analysis? Just like with data, structuring and cleaning your sources before extracting valuable and actionable insights is crucial.
You can collect feedback from various sources (Trustpilot, App Store, Zendesk, surveys…)
To start your Continuous Research process, here are the questions you need to ask yourself:
- Who in the company already has regular contact with users? (e.g., Sales, Customer Success)
- Do they already collect regular feedback, and how? (e.g., surveys, CSAT, calls)
- How do these sources perform? (e.g., number of monthly responses)
- Is this feedback regularly analyzed, and by whom?
- What is the purpose of these feedback sources?
- What kind of insights do they generate?
From these sources, identify the ones relevant for Continuous Research, eliminate duplicates, and remove those that don’t provide real value. Favor sources that will bring relevant insights over the long term.
Clean up redundant or low-value feedback sources that lack clear ownership or objectives.
Step 2: Map the Feedback Sources
Now that you have identified the most relevant feedback sources, you can start mapping them onto every stage of the User Journey to get a good overall vision of what’s available. You might also need to create new feedback sources as strategic touchpoints for business objectives that are not yet covered. Either way, I really recommend starting small and completing your mapping one step at a time.
At OpenClassrooms, we categorized different types of sources feeding Continuous Research:
Different types of user feedback sources
Step Experience Surveys — Feedback on a User Journey step
These are the primary feedback sources for squads’ Continuous Research. We have one survey at the end of each step, composed of three main parts:
1. CSAT — Customer Satisfaction
It measures satisfaction on specific parts of the user experience.
The question is “How satisfied are you with the [Step Name] process?”.

2. CES — Customer Effort
It measures the effort required for specific actions.
The question is “Completing this [Action or Step] seemed: very easy to very difficult”.

3. Qualitative feedback
This open question provides context for the ratings, helping us understand the “why” behind user scores and gather improvement ideas.
We add a question at the end to collect emails from users interested in improving the product, creating a user pool for testing and interviews!
Recently, we tested adding a question about step clarity to assess whether the available information is clear and useful. This will help us measure the impact of Content Design more precisely.
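To make the two ratings above concrete, here is a minimal sketch of how a Step Experience survey could be scored. The scales (1–5) and the “satisfied = 4 or 5” threshold are common conventions, not necessarily the exact setup used at OpenClassrooms:

```python
# Hypothetical scoring of one Step Experience survey.
# CSAT = share of "satisfied" answers (4-5 on a 1-5 satisfaction scale).
# CES  = average effort rating (1 = very easy, 5 = very difficult).

def csat(ratings: list[int]) -> float:
    """Percentage of respondents rating 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return round(100 * satisfied / len(ratings), 1)

def ces(ratings: list[int]) -> float:
    """Average effort score on a 1-5 scale (lower = easier)."""
    return round(sum(ratings) / len(ratings), 2)

# Invented example answers for one step of the journey
step_satisfaction = [5, 4, 3, 5, 2, 4]
step_effort = [2, 1, 3, 2, 4, 1]

print(csat(step_satisfaction))  # 66.7
print(ces(step_effort))         # 2.17
```

Tracking these two numbers per step, month after month, is what later lets you plot satisfaction and effort over time (see Step 5).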
NPS surveys — Feedback on the global Experience
Net Promoter Score (NPS) measures customer loyalty toward the Brand. The qualitative feedback informs the company about users’ main concerns and expectations regarding their global experience with the product or service.
The question is “Would you recommend [Name of the company]?”.
NPS is widely debated, as it provides broad feedback that is not always directly actionable. However, categorizing responses into themes allows us to monitor changes in user satisfaction and dissatisfaction over time and better understand why.
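For reference, the standard NPS calculation looks like this (a generic sketch, not our internal tooling): respondents rate 0–10, promoters score 9–10, detractors 0–6, and NPS is the percentage of promoters minus the percentage of detractors.

```python
# Standard Net Promoter Score calculation on a 0-10 scale.
# Promoters: 9-10, Passives: 7-8, Detractors: 0-6.
def nps(scores: list[int]) -> int:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Invented example responses
print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # 25
```

Note that NPS ranges from −100 (all detractors) to +100 (all promoters), which is why comparing it over time matters more than any single absolute value.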
Temporary sources — Feedback pre- or post-release
These are temporary sources, launched for a specific purpose and with a lifetime of 1 to 6 months. They include surveys, user tests, and interviews that help gather feedback on a particular topic.
Example of specific survey for the Matching step — Illustration by Fabien Gouby
External sources
These are third-party sources like App Store reviews, Google ratings, or Trustpilot. While you don’t control them, analyzing them helps validate insights and spot trends that align — or not — with your own insights.
Example of external feedback source — Trustpilot
Get more information on Continuous Research in this article.
Step 3: Document Feedback Sources
Now that you’ve started mapping your sources, you can take it a step further by documenting each one more precisely:
- Objective: What do we want to measure? How does this feed into our strategy and inform decisions?
- Source: Where is it stored? In which tool?
- Format: How is feedback collected? (e.g., in-product, email, call)
- Trigger: At what stage is feedback requested? Is there a specific trigger?
- Filter: Are specific user segments targeted?
- Number: How many responses are received monthly? Is the source performing well?

Card template to map your feedback sources

This card template makes it easy for everyone on the team to understand what sources exist and what they are used for.
Clarifying this information helps teams understand existing sources and their purposes while ensuring easy access to raw data if needed. It also prevents redundant surveys that ask similar questions — And your users will thank you for that! 🙂
I strongly encourage you to do this in collaboration with other teams (e.g., marketing, support, sales, …). This mapping should also belong to them, and they should be able to help you update and improve it.
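If your team prefers keeping this documentation in code or a shared script rather than a card deck, the same fields can be captured in a tiny data structure. This is only an illustration of the card above; the example values are invented:

```python
# A minimal "card" for documenting one feedback source.
# Field names mirror the questions above; example values are hypothetical.
from dataclasses import dataclass

@dataclass
class FeedbackSource:
    objective: str          # What do we want to measure?
    source: str             # Where is it stored? In which tool?
    format: str             # How is feedback collected?
    trigger: str            # At what stage is feedback requested?
    filter: str             # Which user segments are targeted?
    monthly_responses: int  # Is the source performing well?

app_store = FeedbackSource(
    objective="Monitor perceived quality of the mobile app",
    source="App Store Connect",
    format="In-store review",
    trigger="User-initiated",
    filter="Mobile users",
    monthly_responses=120,
)
print(app_store.objective)
```

Whatever the medium, the point is the same: every source has a stated objective, an owner, and a known response volume.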
Step 4: Analyze Feedback Regularly
You now have feedback sources that are relevant, mapped, and documented: well done!
But unanalyzed feedback has no value, right? You need to ensure this material is analyzed regularly without overwhelming teams. At OpenClassrooms, we review feedback monthly, typically at the end of each month.
All feedback is automatically stored in a dedicated folder in our Research Repository:
List of continuous research projects
We can import multiple sources into the same folder, allowing us to cross-analyze user interviews, Step Experience surveys, and external sources. This enhances the reliability of insights by merging different perspectives.
Product Designers analyze feedback within their scope, tagging and categorizing it appropriately:
Feedback analysis in our Research Repository
This ensures Designers and Product Managers gain a deeper understanding of users’ concerns while keeping the process efficient.
But this process of tagging content can be time-consuming. To optimize it further, we’re experimenting with AI to assist in analyzing and automatically categorizing feedback. If you’re interested in our first tests, check out this article:
AI prompt – Analyse user feedback | Notion
Step 5: Present Insights and Drive Decisions
These regular analyses help us generate valuable insights into user pain points and closely monitor satisfaction and effort over time.
Each month, Product Designers present their findings, combining them with Product Managers’ insights and data. This forms what we call the “Squad’s Monthly Reports”, which help prioritize roadmap topics based on criticality and business objectives.
Findings are widely shared with the squads, stakeholders and leadership members.
Depending on the objectives, we can present results in different ways:
Categorized feedback by user
Split of categories in our Alumni’s feedback
This allows teams to quickly identify patterns and trends. By comparing the proportion of different feedback themes, we can prioritize the most critical user concerns and ensure that decision-making is data-driven. It can impact the roadmap of all teams across the company, not just the Product team (e.g., Learning team, Mentorship team, Student Success team).
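Once feedback is tagged, surfacing the dominant themes is a simple frequency count. A minimal sketch, with invented tag names:

```python
# Counting tagged feedback themes so the most frequent concerns surface first.
# Tag names are invented examples, not our actual taxonomy.
from collections import Counter

tags = ["pricing", "navigation", "pricing", "content clarity",
        "navigation", "pricing", "support response time"]

theme_counts = Counter(tags)
for theme, count in theme_counts.most_common(3):
    print(f"{theme}: {count}")
```

Comparing these proportions month over month is what turns raw categorization into a trend you can act on.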
Top three pain points for a given step
3 main user pain points for the Application step
This can directly help prioritize the squad roadmap and identify key pain points that require deeper analysis or further exploration in the next product discovery phase.
Evolution of Satisfaction and Effort over time
This allows us to correlate user satisfaction with business objectives. For example, if application rates drop, we can examine the Satisfaction and Effort scores for this step and analyze qualitative feedback to identify potential causes.
It also helps us assess the impact of new releases. For example, whenever we make a change to the funnel, we analyze our “Application Experience KPI” to determine whether it reduces effort and enhances satisfaction.
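Assessing a release this way boils down to comparing the average score before and after the change. A hypothetical sketch with invented ratings (a real comparison would also account for sample size and seasonality):

```python
# Comparing average CSAT before and after a release to gauge its impact.
# Scores are invented 1-5 satisfaction ratings for the Application step.
pre_release = [4, 3, 4, 2, 3, 4]
post_release = [5, 4, 4, 5, 3, 4]

def avg(scores: list[int]) -> float:
    return round(sum(scores) / len(scores), 2)

delta = avg(post_release) - avg(pre_release)
print(f"CSAT moved by {delta:+.2f} points after the release")
```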
We can now track this evolution across all squads and have begun setting team goals based on these scores:
Example of Satisfaction evolution over time for 3 squads
What’s next?
When launching your first surveys, you’ll likely find that some are ineffective. Some may not receive enough responses, while others may provide non-actionable feedback. This is normal — you won’t get everything right on the first try! 🙂
You will need to regularly iterate on your feedback sources: add new ones, remove ineffective ones, and adjust placement, channels, or questions as needed.
At OpenClassrooms, we continuously monitor the performance of our feedback sources and refine our approach accordingly.
Impact of Continuous Research on Product Strategy
- Time-saving: Collecting and analyzing feedback becomes easier. Teams no longer have to start from scratch.
- Deeper user insights: Ongoing analysis enhances user understanding and empathy.
- Cross-team impact: Insights influence not only product roadmaps but also Customer Success, Sales, and Marketing.
- Proactive issue detection: Monitoring scores helps identify trends and take corrective action before problems escalate.
Key numbers
- 10+ Continuous Research folders created
- 900+ feedback pieces analyzed monthly
- 20+ actionable insights generated each month, directly influencing product strategy
Want to go further? Check out my article on Research Repositories:
How to start a UX Research Repository
Kick-start your continuous user research in 5 steps was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.