User Research & Testing Articles - The Good
https://thegood.com/insight-category/user-research-testing/

MaxDiff Analysis: A Case Study On How to Identify Which Benefits Actually Build Customer Trust
https://thegood.com/insights/maxdiff-analysis/
When a SaaS company approached us after noticing friction in their trial-to-paid conversion funnel, they had a specific challenge: their website was generating demo requests, but prospects weren’t converting to customers. User research revealed a trust problem. Potential buyers were saying things like, “I need more proof this will actually work for a company like ours,” and “How do I know this won’t be another failed implementation?”

The company had assembled a list of proof points they could showcase on their homepage: years in business, number of integrations, customer counts, implementation guarantees, security certifications, industry awards, analyst recognition, and more. But they only had space to highlight four of these benefits prominently below their hero section. They faced the classic messaging dilemma: which trust signals would actually move the needle with prospects evaluating B2B software?

This is where MaxDiff analysis becomes valuable. Instead of relying on stakeholder opinions or generic best practices, we could let their target buyers vote with data on what mattered most.

What makes MaxDiff analysis different from other survey methods

MaxDiff analysis (short for Maximum Difference Scaling) is a research methodology that forces trade-offs. Rather than asking people to rate items individually on a scale, MaxDiff presents sets of options and asks participants to identify the most and least important items in each set. This forced-choice format reveals true preferences because people can’t rate everything as “very important.”

Here’s why this matters: traditional rating scales often produce compressed results where everything scores high. When you ask customers, “How important is X on a scale of 1-10?” most people will hover around 7 or 8 for anything remotely relevant. You end up with a spreadsheet full of similar numbers and no clear direction.

MaxDiff cuts through that noise. By repeatedly asking “which of these five options matters most to you, and which matters least?” across different combinations, you build a statistical picture of relative importance. The math behind MaxDiff generates a best-worst score for each item, showing not just which options are preferred, but by how much.

For digital experience optimization, this methodology is particularly useful when you need to prioritize limited real estate on a website, determine which features to build first, or figure out which messaging will actually differentiate your brand.

How we structured the MaxDiff study for maximum insight

For this client, we started by defining the target audience precisely. The company was a B2B SaaS platform serving mid-market operations teams, so we recruited 60 participants who matched their customer profile: director-level or above at companies with 50-500 employees, working in operations or supply chain roles, currently using at least two SaaS tools in their workflow, and actively evaluating solutions within the past six months.

From the initial audit and stakeholder interviews, we identified 11 potential trust signals the company could emphasize on its homepage. These included things like:

  • Concrete numbers (customer counts, uptime percentages, integrations available)
  • Credentials (security certifications, enterprise clients)
  • Promises (implementation timelines, support response times, money-back guarantees)
  • And more

Each represented something the company could truthfully claim, but we needed to know which ones would build the most trust with prospects evaluating the platform.

The survey design was straightforward. Each participant saw these 11 benefits randomized into multiple sets of five items. For each set, they selected the most important factor and the least important factor when considering whether to adopt this type of software. Participants completed several rounds of these comparisons, seeing different combinations each time.
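To make the mechanics concrete, here is a minimal sketch of how choice sets like these could be assembled. The benefit labels, the eight-round count, and the greedy exposure balancing are all illustrative assumptions; production MaxDiff tools typically use formally balanced experimental designs.

```python
import random

# Illustrative labels; the client's actual list of 11 trust signals may differ.
BENEFITS = [
    "G2/Capterra ratings", "Active customer count", "Implementation guarantee",
    "SOC 2 Type II certification", "Successful implementations", "24/7 support",
    "Years in business", "Integrations available", "AI-powered features",
    "Employee headcount", "Gartner Cool Vendor recognition",
]

def build_choice_sets(items, set_size=5, n_sets=8, seed=None):
    """Assemble randomized MaxDiff sets, nudging exposure counts toward balance."""
    rng = random.Random(seed)
    exposures = {item: 0 for item in items}
    sets = []
    for _ in range(n_sets):
        # Prefer the least-shown items so every benefit appears a similar number of times.
        ranked = sorted(items, key=lambda i: (exposures[i], rng.random()))
        chosen = ranked[:set_size]
        rng.shuffle(chosen)  # randomize on-screen order within the set
        for item in chosen:
            exposures[item] += 1
        sets.append(chosen)
    return sets

for i, s in enumerate(build_choice_sets(BENEFITS, seed=7), start=1):
    print(f"Set {i}: {s}")
```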

This approach gave us enough data points to calculate a robust best-worst score for each benefit: the number of times it was selected as “most important” minus the number of times it was selected as “least important.” Positive scores indicate a strong preference, negative scores indicate low importance, and the magnitude of the score shows the strength of feeling.
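As a sketch of the arithmetic (with toy data, not the study’s actual responses), the best-worst score is just a tally of “most important” picks minus “least important” picks:

```python
from collections import Counter

def best_worst_scores(responses):
    """responses: iterable of (best_pick, worst_pick) pairs, one per completed set."""
    best = Counter(b for b, _ in responses)
    worst = Counter(w for _, w in responses)
    items = set(best) | set(worst)
    return {item: best[item] - worst[item] for item in items}

# Toy responses from three hypothetical sets:
picks = [
    ("G2/Capterra ratings", "Gartner Cool Vendor recognition"),
    ("Active customer count", "Employee headcount"),
    ("G2/Capterra ratings", "Gartner Cool Vendor recognition"),
]
for item, s in sorted(best_worst_scores(picks).items(), key=lambda kv: -kv[1]):
    print(f"{item:35s} {s:+d}")
```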

The results revealed a clear hierarchy of trust signals

When we analyzed the MaxDiff results, the pattern was striking. The top-scoring benefits shared a common theme: they provided concrete evidence of proven reliability and satisfied users. The bottom-scoring benefits? They emphasized company scale and marketing visibility.

[Chart: ranking of the SaaS trust signals by MaxDiff score]

The four highest-scoring trust signals were clear winners. G2 or Capterra ratings scored 38 points (the highest possible), indicating this was nearly universal in its importance. The number of active customers scored 30 points. An implementation guarantee (“live in 30 days or your money back”) scored 25 points. And SOC 2 Type II certification scored 16 points.

These weren’t arbitrary marketing metrics. They were the specific signals that would make someone think, “this platform delivers real value and other companies trust them.”

The middle tier included operational details that registered as minor positives but weren’t decisive: the number of successful implementations (7 points) and availability of 24/7 support (6 points). These signals suggested competence but didn’t particularly move the needle on trust.

Then came the surprises. Years in business scored -5 points, indicating it was slightly more often selected as “least important” than “most important.” The number of integrations available scored -11 points. Claimed AI-powered features scored -15 points. Employee headcount scored -36 points. And recognition as a Gartner Cool Vendor scored -55 points, the lowest possible score.

Think about what prospects were telling us: “I don’t care that you have 200 employees or that Gartner mentioned you. Show me that real companies like mine trust you and that you’ll actually deliver on your promises.”

Why customers rejected company-focused metrics

The findings revealed an insight into trust-building that extends beyond this single company. B2B buyers weigh social proof and reliability guarantees far more heavily than they weigh indicators of company scale or industry recognition.

When a business talks about its employee headcount or analyst mentions, prospects interpret this as the company talking about itself. These metrics answer the question “How big is your business?” but not “Will this solve my problem?” From the buyer’s perspective, a larger team or Gartner mention doesn’t necessarily correlate with better software or smoother implementation.

By contrast, user reviews and customer counts answer the implicit question every prospect has: “Did this work for companies like mine?” A guarantee directly addresses risk: “What happens if implementation fails?” Security certifications address legitimacy: “Is this platform secure enough for our data?”

The AI-powered features claim scored poorly, likely because it felt trendy rather than practical. Prospects for this specific business weren’t primarily concerned about cutting-edge technology; they wanted a platform that would reliably solve their workflow problems. Leading with an AI angle, while possibly true, didn’t address the core decision-making criteria.

Years in business scored negatively for similar reasons. While longevity can signal stability, in this context, it didn’t address the prospect’s immediate concerns about implementation speed and user adoption. A company could be around for years while providing clunky software with poor support.

From insight to implementation: turning research into revenue

The MaxDiff analysis gave the company a clear action plan. We recommended implementing a four-part trust signal section directly below their homepage hero, featuring the top four scoring benefits in order of importance.

This meant reworking their existing homepage structure. Previously, they had emphasized their implementation guarantee in the hero area while burying customer counts and ratings further down the page. The research showed this approach had it backward. Prospects needed to see evidence of customer satisfaction first, then the implementation guarantee as additional reassurance.

We also recommended removing or de-emphasizing several elements they had been proud of. The employee headcount mention, the Gartner recognition, and several other low-scoring items were either removed entirely or moved to less prominent positions on the site. The goal was to prevent low-value signals from crowding out high-value ones.

The broader lesson here extends beyond this single homepage optimization. The MaxDiff results provided a messaging hierarchy that the company could apply across its entire go-to-market strategy. Email campaigns, landing pages, sales conversations, demo decks, and even their LinkedIn company page could now emphasize the trust signals that actually mattered to prospects.

When MaxDiff analysis makes sense for your business

MaxDiff is particularly valuable when you’re facing a prioritization problem with limited data. It works best in these scenarios:

  • You have more options than you can implement. Whether that’s features to build, benefits to highlight, or messages to test, MaxDiff helps you choose wisely when you can’t do everything at once.
  • Stakeholder opinions are conflicting. When internal debates about priorities can’t be resolved through argument, customer data settles the question. MaxDiff provides quantitative evidence for decision-making.
  • You need to differentiate in a crowded market. If competitors are all saying similar things, MaxDiff reveals which specific claims will break through. Often, the winning messages are ones companies overlook because they seem “obvious” or “not unique enough.”
  • You’re optimizing for a specific audience segment. Generic research about “customers in general” often produces generic insights. MaxDiff works best when you recruit participants who precisely match your target customer profile.

The methodology has limitations worth noting. It requires careful setup, and you need to know which options to test before you start: if you don’t include the right benefits in your initial list, you won’t discover them through MaxDiff. It also works best with a reasonably sized set of options (typically 5-15 items). And the results tell you about relative importance, not absolute importance; everything could theoretically matter, but MaxDiff reveals the hierarchy.

How to use MaxDiff findings in your optimization strategy

Once you have MaxDiff results, the application extends beyond simply reordering homepage elements. The insights should inform your entire digital experience.

Your messaging architecture should reflect the importance hierarchy. High-scoring benefits deserve prominent placement, repetition across pages, and detailed explanation. Low-scoring benefits can either be removed or repositioned as supporting rather than leading messages.

Your testing roadmap should prioritize changes based on MaxDiff findings. If customer reviews scored highest in your study, test different ways of showcasing reviews before you test other elements. Let the data guide your experimentation priorities.

Your content strategy should emphasize what customers care about. If service guarantees scored highly, create content that explains the guarantee in detail, shares stories of when it was honored, and addresses common concerns. Build your editorial calendar around the topics MaxDiff revealed as important.

Your sales enablement should align with customer priorities. If the research showed that prospects value licensing credentials, make sure your sales team knows to emphasize this early in conversations. Create collateral that highlights the trust signals that matter most.

The most effective companies use MaxDiff as one tool in a broader research program. They combine it with qualitative research to understand why certain benefits matter, behavioral analytics to see how users interact with different messages, and continuous testing to validate that the predicted preferences translate into actual conversion improvements.

Turning guesswork into growth

The SaaS company we worked with started with a dozen possible messages and no clear sense of which would build trust most effectively with B2B buyers. After the MaxDiff analysis, they had a data-backed hierarchy that let them confidently restructure their homepage and broader messaging strategy.

This is the power of asking prospects the right questions in the right way. Not “do you like this?” which produces inflated scores for everything. Not “rank these 11 items,” which overwhelms participants and produces unreliable data. But rather, through repeated forced choices, revealing the true importance of each element.

If you’re struggling with similar prioritization challenges (too many options, limited space, stakeholder disagreement about what matters), MaxDiff analysis might be the tool that breaks through the noise. It transforms subjective opinion into statistical evidence, letting your prospects vote on what will actually convince them to choose your platform.

Ready to discover which messages actually resonate with your customers? The Good’s Digital Experience Optimization Program™ includes research methodologies like MaxDiff analysis to help you prioritize changes based on real customer preferences, not guesswork.


How to Validate Website Design Changes: A Decision Framework
https://thegood.com/insights/website-design-changes/

How do you know if that new homepage design, updated pricing page, or streamlined onboarding flow will actually improve conversions before you build it?

The default answer has been A/B testing. But while A/B testing remains the gold standard for high-stakes decisions, it’s not always the right tool for every design change. Many teams have fallen into the trap of either testing everything (creating bottlenecks and slowing innovation) or testing nothing (making changes based purely on intuition).

There’s a better way. By understanding when different validation* methods are most appropriate, SaaS teams can make faster, more confident design decisions while maintaining the rigor needed for their most critical changes.

*Note: We know validation is a bad word in the research community because it implies “proving you’re right,” but we feel it’s easier to read and more quickly comprehensible for those not in research disciplines. We’re using “validation” in this article, but “evaluation” or “confirm or disconfirm” would be more precise in other settings.

The real cost of a bad experimentation strategy

When teams lack a clear strategy for validating decisions, they create what researcher Jared Spool calls “Experience Rot” – the gradual deterioration of user experience quality from moving too slowly or focusing solely on economic outcomes rather than user needs.

The costs manifest in several ways:

  • Opportunity cost: Market opportunities disappear while waiting for test results that may not even be necessary
  • Resource waste: Development time gets tied up in prolonged testing initiatives for low-risk changes
  • Analysis paralysis: Teams debate endlessly about what to test next instead of making decisions
  • Competitive disadvantage: Competitors gain ground while you’re stuck in lengthy validation cycles

The key is matching your experimentation method to the decision you’re making, rather than forcing every design change through the same validation process.


A framework for design validation decisions

The path to better validation starts with two fundamental questions about any proposed design change:

  1. Is this strategically important? Does this change significantly impact key business metrics or user experience?
  2. What’s the potential risk? What happens if this change performs worse than expected?

Using these dimensions, you can map any design change into one of four validation approaches:

[Diagram: decision-making framework for validating website design changes]

High Strategic Importance + Low Risk = Just ship it

If you can’t explain meaningful downsides to a design change but know it’s strategically important, you probably don’t need to validate it at all. These are your quick wins.

Examples for SaaS teams:

  • Adding customer testimonials to your pricing page
  • Improving mobile responsiveness
  • Fixing broken links or outdated screenshots
  • Adding clearer error messages in your product

Why this works: The upside is clear, the downside is minimal, and the time spent testing could be better invested elsewhere.

Low Strategic Importance = Deprioritize

Not every design change needs validation because not every change is worth making. Some modifications simply aren’t worth the time and resources, regardless of the validation method you might use.

Examples of low-impact changes:

  • Minor color adjustments to non-critical elements
  • Changing footer layouts
  • Tweaking secondary page designs that get little traffic
  • Adjusting spacing that doesn’t affect usability

When to reconsider: If data later shows these areas are creating friction, they can move up in priority.

High Strategic Importance + High Risk = Validation territory

This is where both A/B testing and rapid testing methods become valuable. The critical next decision becomes: can you reach statistical significance within an acceptable timeframe, and are you technically capable of running the experiment?
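If it helps to see the 2×2 as logic rather than a diagram, here is one way to encode it as a minimal sketch; the enum labels simply mirror the quadrant names above:

```python
from enum import Enum

class Approach(Enum):
    JUST_SHIP_IT = "Just ship it"
    DEPRIORITIZE = "Deprioritize"
    VALIDATE = "Validation territory (A/B or rapid testing)"

def validation_approach(strategically_important: bool, high_risk: bool) -> Approach:
    """Map the two framework questions onto the four quadrants described above."""
    if not strategically_important:
        return Approach.DEPRIORITIZE   # low importance: not worth the effort yet
    if high_risk:
        return Approach.VALIDATE       # important and risky: validate before shipping
    return Approach.JUST_SHIP_IT       # important, minimal downside: quick win

print(validation_approach(strategically_important=True, high_risk=False).value)
```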

When to use A/B testing vs rapid testing

[Decision tree: should this website design change be A/B tested, or is another approach better suited?]

When to use A/B testing for design changes

A/B testing remains your best option for design changes when:

  • You have sufficient traffic on the experience: Generally, you need 1,000+ visitors per week to the page being tested (see the duration sketch after this list)
  • The change is reversible: You can easily switch back if the results are negative
  • You need statistical confidence: Stakes are high enough to justify the time investment
  • Technical capability exists: Your team can implement and track the test properly
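The 1,000-visitors-per-week rule of thumb depends heavily on your baseline conversion rate and the lift you want to detect. A rough duration estimate, using the standard two-proportion sample-size formula (all input numbers here are illustrative), might look like this:

```python
from statistics import NormalDist

def weeks_to_significance(baseline_rate, relative_mde, weekly_visitors,
                          alpha=0.05, power=0.80):
    """Estimate A/B test duration for a 50/50 split, two-sided test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 at alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 at 80% power
    pooled = (p1 + p2) / 2
    n_per_variant = (
        (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
         + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        / (p2 - p1) ** 2
    )
    return 2 * n_per_variant / weekly_visitors

# 3% baseline, detecting a +10% relative lift, 1,000 visitors/week:
print(f"{weeks_to_significance(0.03, 0.10, 1_000):.0f} weeks")  # ~106 weeks
```

At those inputs the answer is roughly two years, which is exactly the situation where the rapid testing methods below earn their keep.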

Examples of SaaS use cases for A/B testing:

  • Complete homepage redesigns
  • Pricing page layouts and messaging
  • Sign-up flow modifications
  • Core product onboarding changes
  • High-traffic landing page variations

When to use rapid testing for design changes

When A/B testing isn’t right due to traffic constraints, technical limitations, or time pressures, rapid testing provides a faster path to validation.

Rapid testing methods work particularly well for SaaS design validation because they can:

  • Validate concepts before development: Test wireframes and mockups before building
  • Narrow down options: Compare multiple design variations quickly
  • Identify usability issues: Spot problems before they reach real users
  • Provide qualitative insights: Understand the “why” behind user preferences

Examples of SaaS use cases for rapid testing:

  • New feature naming and messaging
  • Dashboard navigation restructuring
  • Enterprise sales page designs (low traffic)
  • Value proposition clarity testing
  • Multi-option comparisons (6-8 variations)

The natural next question might be “which rapid testing method should I use?” Here is another decision tree framework to help answer that.

[Decision tree: choosing the rapid testing method best suited to your website design changes]

Incorporate your experimentation strategy into your design process

With a decision-making strategy for how and what to test, you’ll need to incorporate the strategy into your design process. The most successful SaaS teams don’t treat validation as an afterthought. They build it into their process from the beginning:

  • During ideation: Use rapid testing to validate concepts and narrow options before detailed design work
  • During design: Test wireframes and mockups to identify issues before development
  • Before launch: Use A/B testing for high-stakes changes, rapid testing for others
  • After launch: Continue testing iterations based on user feedback and performance data

The compounding benefits of a sound experimentation strategy

The goal isn’t to replace A/B testing with rapid methods or vice versa. Both have their place in a mature experimentation strategy. The key is understanding when each approach provides the most value for your specific situation and constraints.

Teams that master this balanced approach to validation see remarkable improvement, including:

  • 50% better A/B test win rates (because rapid testing helps identify winning concepts)
  • Faster time-to-market for design improvements
  • More confident decision-making across the organization
  • Better team morale from seeing results from their work more quickly

Perhaps most importantly, they avoid the extremes of either testing nothing (high risk) or testing everything (slow progress).

For SaaS teams serious about optimization, the question isn’t whether to validate design changes; it’s whether you’re using the right validation method for each decision.

Start by auditing your current design change process. Are you testing changes that should be implemented immediately? Are you implementing changes that should be tested? By aligning your validation approach with the strategic importance and risk level of each change, you can move faster without sacrificing confidence in your decisions.

And if you aren’t sure how to get started, our team can help.


What Is Discovery Research in UX?
https://thegood.com/insights/discovery-research/

It’s difficult to find a product team that lacks data or feature requests. Most don’t even need additional user feedback. Yet, they’re still building the wrong things. The culprit isn’t a lack of information; it’s starting with solutions instead of problems.

While 89% of product teams are conducting user interviews according to recent industry data, there’s a critical gap between gathering user input and uncovering the insights that actually drive business results.

We see this all the time in our client work: teams building features their competitors have without any competitive data, or developing features for the loudest customers without checking how significant those friction points actually are.

So what’s the solution?

The companies consistently shipping features that move the needle know the difference between asking users what they want and understanding what they actually need. It starts with discovery research.

What is discovery research in UX?

Discovery research in UX is the foundational phase of user research that focuses on understanding user problems, needs, and contexts before any solutions are designed.

Unlike evaluative research methods that test existing designs or prototypes, discovery research explores the unknown territory of user behavior to uncover opportunities and define problems worth solving.

Discovery research helps you understand use cases and user needs. It can ground you in what problems to solve and what is going on in the market.

This grounding is essential for product teams who want to build features that users actually need and will drive growth.

Discovery research typically involves methods like user interviews, field studies, diary studies, and market analysis. These approaches help teams understand the broader context of user goals and challenges before jumping into design solutions. The insights gathered during this phase become the strategic foundation for all subsequent product decisions.

Discovery research versus UX discovery

While these terms are often used interchangeably, there’s an important distinction that affects how product teams approach their research strategy.

Discovery research specifically refers to the research methods and activities used to understand user needs and identify problems. It’s the “how” of gathering insights through interviews, observations, and analysis. This includes techniques like ethnographic studies, user interviews, and competitive analysis.

UX discovery, on the other hand, is the broader strategic phase that encompasses discovery research, but also includes other activities such as technical feasibility assessments, business viability analysis, and stakeholder alignment. UX discovery is the “what and why” that frames the entire early-stage product exploration.

Think of discovery research as the tactical execution within the strategic framework of UX discovery. A comprehensive UX discovery process will include multiple types of discovery research methods. It also considers business constraints, technical limitations, and market opportunities.

For SaaS product teams, this distinction matters because it clarifies roles and expectations. UX researchers lead discovery research activities, while product managers typically orchestrate the broader UX discovery process that incorporates research findings into strategic decisions.

Understanding this difference helps teams avoid the common mistake of treating research as a checkbox activity rather than a strategic input that informs product direction.

Benefits of discovery research

Discovery research delivers tangible benefits that extend far beyond the research team, directly impacting product success and business outcomes.

Reduces development risk and waste

The most immediate benefit of discovery research is risk reduction. By understanding user needs and the specific problems before development begins, teams avoid building features that miss the mark. This is particularly critical for SaaS teams where failed features mean ongoing maintenance costs and technical debt that compound over time.

Enables data-driven product decisions

Discovery research transforms product decisions from opinion-based to evidence-based. Instead of stakeholder preferences driving priorities, user insights guide development resources toward the highest-potential impact opportunities.

Uncover hidden opportunities

Discovery research often reveals unmet user needs that aren’t obvious from analytics or existing feedback channels. These insights can become the foundation for innovative features that differentiate your product in the market.

Improves cross-team alignment

When discovery research findings are shared across product, design, and development teams, everyone gains a shared understanding of user priorities. This alignment reduces conflicting opinions and streamlines the development process.

Accelerates time-to-market for successful features

While discovery research requires upfront time investment, it actually accelerates the development of successful features by ensuring teams build the right things from the start.

Enhances user satisfaction and retention

Products built on solid discovery research foundations better meet user expectations, leading to higher satisfaction scores and improved retention rates. Users feel heard and understood when products solve their actual problems rather than perceived problems.

This is essential for SaaS businesses where discovery research can identify the difference between features that drive daily engagement versus one-time usage, directly impacting churn rates.


When to use discovery research

Discovery research is best leveraged as part of a continuous research strategy.

Teresa Torres, expert and author of Continuous Discovery Habits, recommends weekly conversations with customers. “Continuous discovery means weekly touchpoints with customers by the team building the product, where they conduct small research activities in pursuit of a desired outcome.”

The goal is to turn research from something you pause to do into something you always do.

Many leaders will have experimentation rituals that allow quick and consistent feedback on ideas/products, but it’s rarer to see teams prioritize discovery on a frequent cadence.

When you manage discovery in batches or isolated sprints, it can mean you miss out on opportunities or delay solving urgent problems for customers.

Common discovery activities in UX

Effective discovery research employs multiple methods to build an understanding of the problem landscape and market conditions. Not every method is required, but a combination gives you a fuller picture to work from.

Diary studies

For understanding user behavior over time, diary studies ask participants to record their experiences, thoughts, and interactions over days or weeks. This method is particularly valuable for SaaS products where user needs evolve or vary based on different use cases and timeframes.

User interviews

One-on-one conversations with users can be a great pillar of discovery research. The key to successful interviews in discovery is asking open-ended questions that help explore user motivations, frustrations, and workflows. A good foundation is to conduct 6-8 interviews per user segment to get a picture of current challenges and behaviors.

Field studies and contextual inquiry

Observing users in their natural environment provides insights that interviews alone can’t capture. Field studies reveal the environmental, social, and technical factors that influence user behavior, uncovering needs that users might not articulate in interviews.

Competitive analysis and market research

Understanding the competitive landscape helps identify opportunities for differentiation. It also uncovers whether user problems are being adequately solved by existing solutions. This desk research complements user-facing research methods.

Jobs-to-be-done (JTBD) research framework

JTBD research helps frame what job users are “hiring” your product to do. It can help you think beyond features to understand the fundamental progress users are trying to make in their lives or work.

Card sorting

This method helps teams understand how users categorize information and conceptualize problem spaces. Card sorting is particularly useful for discovering how users naturally group features or content areas.

Survey research

While qualitative methods provide depth, surveys can help uncover findings across larger user populations. Use surveys to quantify the prevalence of problems discovered through qualitative research.

Leveraging discovery research for better outcomes

In an era where 83% of designers, product managers, and researchers agree that research should be conducted at every stage of product development, it’s critical to understand discovery research in UX.

Discovery research is a tool that helps you dig into current user needs and prioritize the problems worth solving. It provides the user insights needed to build theme-based roadmaps, prioritize high-impact features, and avoid costly development mistakes. Most importantly, it ensures that every dollar spent on product development addresses real user needs rather than perceived problems.

Ready to make discovery research work for your product team? The Good specializes in helping SaaS companies uncover the user insights that drive product success. Our team combines deep research expertise with practical product strategy to ensure your research translates into features that drive growth.

Get in touch with The Good to discuss how discovery research can accelerate your product development and improve user satisfaction. Let’s turn your user insights into your competitive advantage.


From Data Collector to Data Connector: Embracing Research Democratization
https://thegood.com/insights/research-democratization/

As AI capabilities expand and research teams stay lean, many researchers find themselves supporting hundreds, if not thousands, of colleagues in their organizations. For them, the model of centralized research is creating bottlenecks that slow decision-making and limit the reach of customer insights.

“The fundamental shift that people have to make is that you’re no longer a data collector. You’re a data connector,” says Ari Zelmanov, former police detective and current research leader. In Ari’s view, as teams get leaner and tools get better at executing research tasks, the job of the researcher becomes standing up repositories, socializing learning mechanisms, and creating the systems that empower organizations to act on good information.

We spoke with research leaders who've successfully made this transition, transforming their teams from siloed specialists into customer-centric learning cultures. Their approaches varied, but one theme was clear: when you empower others to answer their own questions, you don't diminish your value, you multiply it.

The D-word holding us back

Before diving into solutions, there's an elephant we need to address: democratization. Many researchers worry that democratizing research will lead to poor methodologies, incorrect conclusions, or devalued expertise. But Ari feels the argument is moot.

"The only people arguing about democratization are researchers," says Ari. "Nobody else is arguing about it. We're infighting about something that we have zero control over. It's happening."

I tend to feel like anyone arguing about democratization is missing one critical point: customer centricity isn't just one person's job.

Anton Krotov, Researcher in an organization of over 10,000 people, was in the fortunate position of being very trusted by his colleagues. So much so that they believed research could answer all of their questions.

“I had already established a reputation. I was fortunate that I didn't need to sell the value of research. Quite the opposite. People came to me with too many requests. They believed research could do everything for them. I needed to set up boundaries.”

Overwhelmed with requests from colleagues, Anton realized that the solution wasn't saying no—it was saying yes in a different way. Rather than becoming a bottleneck, Anton chose to become a bridge.


Connect teams through shared intelligence

Good intelligence is the responsibility of many disciplines, not just research. To get answers quickly, Ari's teams use what he calls the "Moneyball" approach to research, a framework that prioritizes speed and accessibility over methodological purity:

"Product teams are incentivized to move fast. So, how do you make research fit into that in a way that makes sense? We built something called Moneyball Research. It's super simple: start with what you know. It could be in your repository, it could be what you know. Then you go to what data is accessible within 24 to 48 hours. That's usually internal analytics, CSAT tickets, NPS, sales conversations, and tribal knowledge. Then—and only then—do you go to primary research."

This approach shifts conversations away from methods and focuses instead on what teams need to know and how confident they need to be. "Then it's up to the researcher to be the doctor. Diagnose that, determine how they're going to collect that evidence given the time, money, and level of rigor."

René Bastijans, lead researcher at a growth-stage startup, has found creative ways to loop colleagues into data collection. His sales team is trained to lightly survey prospects during sales calls and report back to the wider team.

"We've trained our sales team to ask for specific data and enter it into Salesforce. Researchers and the product team have access to these data, and therefore, sales has allowed us to keep a pretty good pulse on the market."

This creates a healthy feedback loop that keeps everyone abreast of evolving user needs while extending the research team's reach without expanding headcount.

Invite colleagues into the research process

While it might seem counterintuitive to share methodologies and research responsibilities, successful research leaders see democratization as an opportunity rather than a threat.

To remove research bottlenecks, Anton ran internal workshops to upskill his colleagues on doing their own research. This proactive approach to education focused on tailoring training to his colleagues' specific needs: "I try to cover the cases that will be really applicable, so I don't offer any cookie-cutter material and don't go much into theory. It's really tailored to their day-to-day work."

The key is meeting people where they are and giving them tools that fit their contexts. Not everyone needs to become a master researcher, but many can learn to conduct basic customer interviews or query data effectively.

Brittany Lang, UX Research Manager and M.S. in Information Science, uses project reviews as a time to cultivate a shared point of view and continually refine her thinking.

“Before we socialize research plans, I usually take a look at it, or I have someone else on my team take a look at it. It doesn't have to be your manager that's reviewing something, but can someone give you feedback?

It's nice when coworkers leave comments and I can see what other people on the team have said and we can agree or challenge, and then have a discussion about it. I also learn in those moments too. When I'm looking at how members of my team have reviewed other work, where they're coming from and their perspective, I learn a lot from them in those moments.”

Facilitate low-risk learning

It takes more than a few ambitious researchers to imbue a company’s culture with a learning mindset, which is why rituals and learning programs are so important.

Anton’s employer formalized this approach to building safe learning environments through a program called "Gigs for Growth," a repository of side projects from different departments where employees can apply to work on learning opportunities outside their typical scope.

"It's like a company green light that you can work on learning during your full-time gig and outside of your typical work scope. Something that you would never otherwise be able to touch in the company."

Under this program, researchers can support QA engineers, sales can support marketing, and everyone gets exposure to new perspectives that inform their primary roles. "You get some really new experiences that otherwise you wouldn't be able to."

At The Good, we like to build regular, low-stakes opportunities for knowledge sharing and skill development. One of our approaches is a ritual called "Random Question of the Week": during a bi-weekly meeting, team members share client questions that stumped them or that they felt they could have answered better.

These conversations help build shared perspectives that then get turned into artifacts:

  • FAQ entries for brief, punchy answers
  • Articles for long-form perspectives
  • Policies or SOPs that outline ways of working

The result is that teams become more aligned, can answer tough questions on the spot, and save time by referring to their collective knowledge instead of rehashing the same discussions.

Another effective ritual is "Critique & Share" sessions, where team members bring questions, websites they admire, or work they're developing to get fresh perspectives from colleagues who haven't been deep in the weeds of a particular project.

Maggie Paveza, Senior Strategist at The Good, shares that it has helped her break the ice when building a shared P.O.V.

"It's pretty informal and often we're not showing our own work, so it feels less intimidating to ask your team members, 'why do you think this competitor is using this strategy,' than if it were your own work," explains Maggie.

The power of being a data connector

"The fundamental problem that research as an industry has is we've been myopically focused on the front end of the equation," says Ari. "Data collection, statistical significance, theoretical saturation—insert whatever fancy academic word you want in here. But the real power comes on the back end of the equation."

That back end is about connection, synthesis, and empowerment. When researchers shift from being data collectors to data connectors, they don't lose their expertise; they amplify it.

As Anton puts it, "Where soil is right, then you can do things. Praise people for when they do things great. You can learn from mistakes, you can learn from success."

The goal isn't to turn everyone into a researcher. It's to create an environment where customer insights flow freely, where good questions get asked by many disciplines, and where learning happens continuously rather than in bursts.

Making the shift

Building a customer-centric learning culture doesn't happen overnight, but it starts with understanding where your organization is open to change and being constructive about how you facilitate it.

Look for teams and individuals who are already curious about customers. Find the places where people are asking good questions but lack the tools or confidence to find answers. Then meet them there with the right combination of education, tools, and support.

"At the end of the day, it's about empowering decision-making," says Ari. And in a world where customer expectations evolve quickly and research teams are lean, that empowerment might be the most valuable thing researchers can provide.


How To Make User-Centered Decisions When A/B Testing Doesn’t Make Sense
https://thegood.com/insights/why-rapid-test/

The right tool for the right job. It’s a principle that applies everywhere, from construction sites to surgical suites, yet for digital product development, many teams are singularly focused on A/B testing.

Don’t get me wrong, A/B testing is incredibly powerful. It’s the gold standard for high-stakes, high-traffic decisions where statistical significance matters most. But when it becomes your only tool, you create unnecessary constraints that can paralyze decision-making and slow innovation.

The reality is that different decisions require different levels of rigor, confidence, and investment. Luckily, there is a complementary approach that fills critical gaps in your experimentation toolkit. By understanding when each method is most appropriate, teams can make faster, more informed optimizations while maintaining the rigor needed for their most high-stakes decisions.

Creating “experience rot”

A/B testing borrowed its methodology from medical intervention studies, where 95% confidence intervals and statistical significance aren’t just nice-to-haves; they’re life-or-death requirements.

But we’re not rocket scientists, and we don’t always need the same level of assurance in product decisions to move towards the right outcome.

[Infographic: the evidence hierarchy inherited from medical disciplines]

A/B testing can be overkill for the decisions product teams need to make daily. Yet teams have become so committed to this single methodology that they’ve created what researcher Jared Spool calls “Experience Rot,” the gradual deterioration of user experience quality from teams moving too slowly or focusing solely on economic outcomes.

The costs of slow testing cycles are tangible and measurable:

  • Market opportunities disappear while waiting for test results
  • Competitors gain ground during lengthy testing phases
  • Development resources get tied up in prolonged testing initiatives
  • Customer frustration builds as issues remain unfixed
  • Decision fatigue sets in as teams debate what to test next

But the problem runs deeper than just speed. Many teams face contexts where A/B testing simply isn’t feasible. Regulatory challenges in healthcare and finance, low-traffic scenarios for B2B products, technical constraints, and organizational politics all create barriers to traditional experimentation.

By the time a test idea has passed through all the bureaucratic hoops and oversight at an organization, it’s often no longer lean enough to justify testing. Without an alternative testing method, teams are left without any data at all.

So, how do we:

  1. Circumvent the challenges of A/B testing, and
  2. Prevent experience rot?

Enter rapid testing

Rapid testing isn’t about cutting corners or accepting lower-quality insights. It’s about matching your research method to the decision you’re trying to make, rather than forcing every question through the same rigorous, but often slow, process.

Like A/B testing, rapid testing helps you understand if your solutions are working. Unlike A/B testing, rapid tests are conducted with smaller sample sizes, completed in days rather than weeks or months, and often provide qualitative insights that A/B tests can miss.

“The speed at which we obtain actionable findings has been impressive,” says Gabrielle Nouhra, Software Director of Product Marketing, who leverages rapid testing with The Good for research and experimentation. “We are receiving rapid results within weeks and taking immediate action based on the findings.”

The key is understanding when each approach makes sense. Not every decision requires the same level of rigor, and smart product teams create systems that allow critical insights to move faster.


A framework for decision making

So, how do you decide when to use rapid testing versus A/B testing? The decision starts with two critical questions: Is this strategically important? And what’s the potential risk? With those two questions in mind, you can map your ideas on a simple 2×2.

[Diagram: the 2×2 framework for deciding when to rapid test]

High Strategic Importance + Low Risk = Just Ship It. If you can’t explain meaningful downsides to a change but know it’s strategically important, you probably don’t need to test it at all. These are your quick wins.

Low Strategic Importance = Deprioritize. Not everything needs to be tested. Some changes simply aren’t worth the time and resources, regardless of the method you use.

High Strategic Importance + High Risk = Test Territory. This is where both A/B testing and rapid testing live. The next decision point becomes: Can you reach statistical significance within an acceptable timeframe? Are you technically capable of running the experiment?

If the test isn’t technically feasible or traffic constraints make the time-to-significance longer than is acceptable, rapid testing becomes your best option for de-risking the decision.

[Decision tree: whether to test something, and whether to rapid test or use another approach]

Rapid testing in practice

Rapid testing encompasses various methodologies, each suited to different types of questions. Here are just a few examples:

First-Click Testing helps confirm where users would naturally click to complete a task. Perfect for interface design decisions and navigation optimization.

Preference Testing goes beyond simple A/B comparisons to evaluate multiple options, often six to eight variations, helping teams understand which labels, designs, or approaches resonate most with their target audience.

Tree Testing reveals where users might stray from their intended path, using nested structures to understand navigation behavior without the distraction of full visual design.

[Diagram: choosing the rapid testing method best suited to your needs]

The beauty of these methods lies in their speed and specificity. Rather than testing entire page redesigns, rapid testing allows you to validate specific hypotheses quickly. Which onboarding segments will users self-identify with? Where should we place a new feature to maximize engagement? Which design elements increase trust among new visitors?

Rapid tests can also guide our A/B testing strategy. If we’re entertaining multiple options for new nomenclature within an app experience and we’re just trying to understand which label users think would be most accurate or most likely to represent those outcomes, running a rapid test can narrow down those options and help us decide what to A/B test.

Building a rapid testing practice

Implementing rapid testing effectively requires more than just choosing the right method. Teams that see the best results follow several key principles:

  1. Impact pre-mortems: Before testing, clearly define what success looks like and what impact you expect if implemented. This helps connect testing activities to business outcomes and prevents post-hoc justification of results.
  2. Acuity of purpose: Keep tests focused on specific questions rather than trying to evaluate everything at once. A/B testing often encourages comprehensive evaluations, but rapid testing works best with precise hypotheses.
  3. Pre-defined success criteria: Establish clear benchmarks before you start testing. If 80% of users can complete a task, is that a win? What about 60%? Define these thresholds upfront to avoid moving goalposts when results come in (see the sketch after this list).
  4. Mute context: When testing specific elements, remove unnecessary context that might distract from the core question. Full-page designs can overwhelm participants and dilute feedback on the element being tested.
  5. Sunlight: Even experienced researchers benefit from collaborative review of test plans. Transparency builds confidence in the process, and a peer review of test designs helps identify potential issues before execution.
  6. Share: Circulate your impact, what you’ve learned about your audience, and get people excited about the work. The goal is to build visibility, create a case for why this work is valuable, and encourage people to make decisions with data.
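On pre-defined success criteria (point 3 above): with rapid-test sample sizes, the uncertainty around an observed success rate is wide, which is another reason to fix thresholds before the data arrives. A quick sketch using the Wilson score interval (the 16-of-20 numbers are hypothetical):

```python
import math
from statistics import NormalDist

def wilson_interval(successes, n, confidence=0.95):
    """Wilson score interval for a task-success proportion."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    p_hat = successes / n
    denom = 1 + z ** 2 / n
    center = (p_hat + z ** 2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return center - margin, center + margin

# 16 of 20 participants completed the task; the pre-set benchmark was 80%.
lo, hi = wilson_interval(16, 20)
print(f"observed 80%, 95% CI: {lo:.0%} to {hi:.0%}")  # roughly 58% to 92%
```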

The compound effects of speed

Teams that successfully implement rapid testing alongside their existing A/B testing programs see remarkable results. Our clients report 50% improved A/B test win rates, better customer satisfaction scores, and significantly faster time-to-insights.

But perhaps most importantly, they report better team morale. There’s something energizing about seeing results from your work quickly, about being able to iterate and improve based on real user feedback rather than lengthy committee discussions.

It’s never too late to pivot. The idea is to move away from long-cycle decision making, where something goes through the entire design and development cycle only to land with a lackluster outcome, and toward a process that surfaces quick, early signals.

Making the transition

The goal isn’t to replace A/B testing. It remains the gold standard for high-stakes, high-traffic decisions. But by adding rapid testing to your toolkit, you can accelerate the decisions that don’t require months of statistical validation while still maintaining confidence in your choices.

As decision scientist Annie Duke writes in Thinking in Bets, “What makes a great decision is not that it has a great outcome. It’s the result of a good process.” Rapid testing gives teams a process for rational de-risking that emphasizes both speed and quality.

The question isn’t whether you should test your ideas; it’s whether you’re using the right testing method for each decision. In a world where speed increasingly determines competitive advantage, teams that master this balance will consistently outpace those stuck with only one tool in their kit.

Ready to accelerate your decision-making process? Our team specializes in helping product teams implement rapid testing alongside existing experimentation programs. Get in touch to learn how we can help you cut testing time without sacrificing insight quality.


How to Identify Your Most Valuable User Segments and Prioritize Accordingly
https://thegood.com/insights/user-segments/

Have you ever heard of the Pareto Principle? Even if the name doesn’t ring a bell, you’re likely familiar with the premise that 80% of revenue comes from 20% of customers.

Despite this being a proven economic model, companies rarely focus their effort on that 20%.

It’s not because they don’t want to; it’s because it’s easy to get so wrapped up in not losing a single sale that you spread yourself too thin.

If you focus your energy and product improvements on the highest-value user segment, you will see greater returns for less work.

In this article, we’re sharing the study we recently ran for a client that helped us identify their most valuable user segments and prioritize improvements to meet their needs.

What are user segments?

User segments are groups within a customer base who share similar characteristics, behaviors, or values.

They are created with user segmentation, which researches those commonalities and divides your audience into distinct groups. You can then tailor experiences, personalize messaging, and focus optimization efforts on their specific needs.

Common types of user segments

User segments can be divided based on different traits. The type of segmentation you use will vary based on your use case and goals. Here is a quick overview of common user segments.

  • Demographic: Segments users by age, gender, income, education, etc. Example use case: targeting campaigns for specific roles.
  • Firmographic: Segments by company size, industry, revenue, or location. Example use case: tailoring features for SMBs vs. enterprise.
  • Behavioral: Based on how users interact with your product, such as product usage, feature adoption, or login frequency. Example use case: identifying power users or at-risk users.
  • Technographic: Segments by technology stack, device, browser, or OS. Example use case: prioritizing integrations or support.
  • Needs-based: Segments by specific problems or needs. Example use case: customizing messaging for value drivers.
  • Value-based: Groups by economic value (annual recurring revenue, lifetime value, subscription tier). Example use case: prioritizing high-revenue customers.
  • Lifecycle stage: Segments by user journey stage (trial, active, churn risk, etc.). Example use case: triggering onboarding or win-back flows.
  • RFM (Recency, Frequency, Monetary): Groups based on most recent activity, engagement frequency, and spend. Example use case: identifying loyal or dormant users.
  • Acquisition: Based on the marketing channel or campaign source. Example use case: tailoring messaging or personalizing the experience.
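Of the types above, RFM is the most mechanical to compute, which makes it a good first segmentation exercise. Here is a minimal sketch in Python with pandas; the order table, column names, and three-bin scoring are hypothetical (real datasets typically score on quintiles).

```python
import pandas as pd

# Hypothetical order-level data; the column names are illustrative.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "order_date": pd.to_datetime([
        "2025-01-05", "2025-03-20", "2024-11-02",
        "2025-02-14", "2025-03-01", "2025-03-28",
    ]),
    "order_value": [120.0, 80.0, 40.0, 300.0, 150.0, 95.0],
})

snapshot = orders["order_date"].max() + pd.Timedelta(days=1)

rfm = orders.groupby("customer_id").agg(
    recency=("order_date", lambda d: (snapshot - d.max()).days),
    frequency=("order_date", "count"),
    monetary=("order_value", "sum"),
)

# Score each dimension 1-3; lower recency is better, so its labels reverse.
rfm["R"] = pd.qcut(rfm["recency"], 3, labels=[3, 2, 1])
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3])
rfm["M"] = pd.qcut(rfm["monetary"], 3, labels=[1, 2, 3])
print(rfm)  # e.g., R=3/F=3/M=3 flags your most recent, loyal, high-spend users
```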

Why companies optimize for the wrong segments

When we run prioritization exercises, one of the most common mistakes we see is companies prioritizing segments based on volume alone. If a segment has more users, they assume it automatically deserves more attention.

This is the first of three common prioritization mistakes:

  1. Volume bias: Prioritizing segments with the most users rather than the most value
  2. Squeaky wheel focus: Optimizing for the users who complain the loudest
  3. Recency fallacy: Focusing on the latest acquisition channel or user cohort without evaluating their actual value

The uncomfortable truth is that your most valuable segments may not be your largest, your loudest, or your newest.

Conducting a segmentation study step by step

At The Good, we’ve developed a systematic approach to identify and prioritize your most valuable user segments. Here’s how it works.

Step 1: Set your goals

Before you start analyzing data, segmenting users, and prioritizing, you need a clear understanding of your project goals. In most cases, they will look something like this:

  • Identify and quantify subsets of user segments based on use cases
  • Understand the potential value of known segments
  • Identify features and benefits that are most important on a per-segment basis
  • Find opportunities to improve the engagement of high-value users

These goals can be turned into the key research questions of your study.

Step 2: Identify valuable behaviors beyond revenue

Your most valuable user segments, of course, need to drive revenue, but there are other indicators to consider when prioritizing who you are building and optimizing for.

Current value metrics, future value indicators, influence value, and cost-to-serve factors will all influence the overall value of a user segment.

  • Current value metrics: Revenue generated, subscription tier, feature usage, team size
  • Future value indicators: Growth trajectory, expansion potential
  • Influence value: Referral behavior, advocacy impact
  • Cost-to-serve factors: Support requirements, implementation complexity, churn risk, acquisition cost

Tracking these metrics and scoring segments against them will paint a more holistic picture of value. Some segments might not be your biggest revenue drivers today, but they represent significant future opportunities, so you may choose to optimize for them instead of your current biggest spenders.

Step 3: Collect qualitative and quantitative data

Once you’re clear on goals and value metrics, you’re ready to start collecting data for your segmentation analysis. Gathering a multidimensional data set will help you better understand users as the complex humans they are. Types of data that will help your analysis will include:

  • Usage patterns: Frequency, features used, time spent in the product
  • Transactional data: Revenue contribution, plan type, upgrade/downgrade history
  • Behavioral signals: Engagement with key activation points, referral behavior
  • Acquisition source: Channel origin, customer acquisition cost, time to convert
  • Demographic/firmographic data: Company size, industry, role

Most of this data will be sourced from your main quantitative collection tool, such as Google Analytics or your product analytics. But for a truly effective study, you need to supplement all this information with qualitative context. Surveys, session recordings, or user tests can help you better understand why your users are doing what they do.

Step 4: Conduct factor analysis to identify value drivers

Use factor analysis to reduce your data to a smaller number of independent factors that represent the underlying themes within the dataset. This will help identify the value drivers that differentiate your user segments.

For example, in a recent segmentation project, we discovered distinct value factors that formed natural segment groupings:

  • Efficiency seekers: Primarily valued time savings and streamlined workflows
  • Integration power users: Heavily utilized connections to other tools in their stack
  • Data-driven optimizers: Focused on analytics and performance insights
  • Scale-focused operators: Needed enterprise features and team collaboration

Understanding these value drivers helps you move beyond simple demographic segmentation to truly understand what motivates different user groups.
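If your survey responses are numeric (say, 1-5 agreement ratings across many statements), this kind of factor analysis can be run with scikit-learn. What follows is a minimal sketch rather than our exact procedure; the feature names, sample size, and choice of four factors are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical survey data: rows are respondents, columns are 1-5 ratings.
rng = np.random.default_rng(0)
features = ["time_savings", "workflow_speed", "integrations", "api_usage",
            "analytics", "reporting", "team_seats", "sso"]
X = pd.DataFrame(rng.integers(1, 6, size=(200, len(features))),
                 columns=features).astype(float)

# Standardize, then extract four latent factors (one per suspected theme).
fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(StandardScaler().fit_transform(X))

# Loadings show which survey items move together; items that load highly
# on the same factor suggest a shared value driver.
loadings = pd.DataFrame(fa.components_.T, index=features,
                        columns=[f"factor_{i}" for i in range(4)])
print(loadings.round(2))
```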

Step 5: Apply cluster analysis to form actionable segments

Once you’ve identified the key value drivers, use cluster analysis to group users with similar characteristics. Usually, 3-7 distinct segments emerge from the exercise.

These segments often cross traditional demographic lines, revealing unexpected patterns. For example, power users might not be enterprise customers as you assumed, but mid-market companies with specific workflow needs.
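One common way to perform this step is k-means clustering over the per-user factor scores from step 4, testing the 3-7 segment range and keeping the cluster count with the best silhouette score. Here is a minimal sketch under those assumptions; the data is randomly generated for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical per-user factor scores (rows: users, columns: factors).
rng = np.random.default_rng(1)
scores = rng.normal(size=(200, 4))

# Try 3-7 segments and keep the k with the best silhouette score.
best_k, best_sil = None, -1.0
for k in range(3, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    sil = silhouette_score(scores, labels)
    if sil > best_sil:
        best_k, best_sil = k, sil

print(f"best k: {best_k} (silhouette = {best_sil:.2f})")
segments = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(scores)
```

Silhouette is only one heuristic; in practice, also check that the resulting segments are interpretable and large enough to act on.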

This is also the time to start looking for natural clusters of behavior that indicate high-value segments. As you analyze user clusters, look for key differentiators like:

  1. Usage frequency: Daily users vs. weekly vs. monthly
  2. Feature utilization: Which user flows are most common for each segment
  3. Value perception: What features each segment values most highly
  4. Growth potential: Which segments show increasing usage over time

Step 6: Quantify segment value and opportunity size

Your data collection, factor analysis, and cluster analysis combine to surface your high-value segments.

Here’s an example of that workflow so far: survey data on habits, values, and use cases served as inputs to the factor and cluster analyses, which produced segments based on frequency of product use, customer values, and reason for use.

An example of the workflow to quantify segment value and opportunity size.

For each potential high-value segment, revisit the value metrics you established in step 2 of the process. Calculate the relevant metrics to ensure you’re not just following hunches but making data-backed decisions about where to focus.

The most valuable segments often show strength across multiple metrics, not just in current revenue. For example, a segment with moderate current revenue but excellent retention and high referral rates may be more valuable than a high-revenue segment with poor retention.

You’ll also start to see how your most valuable segments differ from your hypotheses. Maybe it’s not defined by company size but by a specific usage pattern. As a specific example, imagine users who perform at least 3 exports per week AND invite 2+ team members within the first 30 days are 4.5x more likely to upgrade to the enterprise tier within 6 months.

This kind of insight could transform your priorities, focusing on making these specific actions easier and more intuitive, rather than spending time/money on creating new features for other segments.
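To pressure-test a hypothesis like the export/invite example above, you can compare upgrade rates across behavioral cohorts directly. Here is a pandas sketch; the columns, thresholds, and data are entirely hypothetical.

```python
import pandas as pd

# Hypothetical per-user rollup; in practice this comes from product analytics.
users = pd.DataFrame({
    "weekly_exports": [0, 5, 4, 1, 6, 0, 3, 7],
    "invites_first_30d": [0, 3, 2, 0, 4, 1, 2, 5],
    "upgraded_6mo": [0, 1, 1, 0, 1, 1, 0, 1],
})

# The candidate segment: 3+ exports/week AND 2+ invites in the first 30 days.
in_segment = (users["weekly_exports"] >= 3) & (users["invites_first_30d"] >= 2)

rate_in = users.loc[in_segment, "upgraded_6mo"].mean()
rate_out = users.loc[~in_segment, "upgraded_6mo"].mean()
print(f"upgrade rate in segment:  {rate_in:.0%}")
print(f"upgrade rate elsewhere:   {rate_out:.0%}")
print(f"lift: {rate_in / rate_out:.1f}x")
```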

Step 7: Map segments to specific opportunities

The final step is to leverage your knowledge about high-value users to focus optimization efforts. Now, you can connect your segment analysis to concrete optimization opportunities. A few thought starters for this process:

  1. What actions correlate with long-term success for this segment?
  2. Where do users in this segment typically struggle?
  3. What capabilities does this segment need but doesn’t have?
  4. What value propositions connect most strongly to this segment?

You’ll end up with a list of optimization opportunities. To prioritize those efforts and start building a roadmap, we recommend scoring them across these dimensions on a 1-10 scale, then calculating a weighted score that reflects your company’s specific situation and constraints.

  1. Potential revenue impact: How much additional revenue could optimizing for this segment generate?
  2. Implementation effort: How difficult would it be to implement changes for this segment?
  3. Time to results: How quickly can you expect to see meaningful outcomes?
  4. Strategic alignment: How well does focusing on this segment align with your long-term business strategy?

For example, if you’re under pressure to show quick wins, you might weigh “time to results” more heavily. If you’re planning for long-term growth, strategic alignment might carry more weight.
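The arithmetic is simple enough to live in a spreadsheet, but here is the same weighted scoring expressed in Python; the opportunity names, scores, and weights are made up for illustration (note “time to results” is weighted up, as in the quick-wins scenario above).

```python
# 1-10 scores per dimension; "effort" is scored so higher = easier,
# keeping every dimension pointed in the same direction before weighting.
weights = {"revenue_impact": 0.30, "effort": 0.20,
           "time_to_results": 0.35, "strategic_alignment": 0.15}

opportunities = {
    "streamline exports": {"revenue_impact": 8, "effort": 7,
                           "time_to_results": 9, "strategic_alignment": 6},
    "enterprise onboarding": {"revenue_impact": 9, "effort": 3,
                              "time_to_results": 4, "strategic_alignment": 9},
}

for name, scores in opportunities.items():
    total = sum(weights[dim] * score for dim, score in scores.items())
    print(f"{name}: {total:.2f}")  # streamline exports wins under these weights
```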

This will be the start of your roadmap for optimization efforts, ensuring that you focus resources on the right opportunities for your most valuable segments.

Focus on your highest-value segments first, then gradually expand your optimization efforts to secondary segments once you’ve captured the initial value. Always consider potential cross-segment impacts when making changes.

Drive growth with user segmentation and prioritization

As your product and market evolve, so will your user segments. What constitutes a high-value segment today may shift as you introduce new features or enter new markets.

We recommend evaluating your user segments quarterly, with a more comprehensive review annually or whenever you experience significant business changes.

Remember, the path to scaling your SaaS business isn’t through trying to please everyone with generic optimizations. It’s through deeply understanding which user segments create the most value and deliberately focusing your limited resources on enhancing their experience.

Ready to identify and prioritize your most valuable user segments? The Good’s Digital Experience Optimization Program™ can help you discover untapped growth opportunities through expert research, strategic insight, and data-driven experimentation. Contact us to learn more about how our team can help your SaaS business scale faster.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

Continuous Research: The Secret Weapon For Effective Product Teams
https://thegood.com/insights/continuous-research/ (Fri, 25 Apr 2025)

Traditional product building happens in sequential phases. Following a waterfall methodology, long phases of upfront research are followed by long periods of building or implementing, before the research begins again.

But this episodic style is falling out of favor with forward-thinking teams. The best product organizations are embracing continuous research, an always-on approach to gathering insights that allows for more user-centered, effective products.

In one study, 83% of designers, product managers, and researchers agreed that research should be conducted at every stage of the product development life cycle. But only 36% of them are conducting research studies after launch.

How can they bridge the gap? With continuous research.

What is continuous research?

Continuous research is an “always-on” style of research, where product teams put practices and systems in place to habitually learn from users. Rather than conducting isolated research studies or sprints, it focuses on integrating regular research activities into the product development cycle.

Why continuous research?

The benefits of continuous research are plentiful. Gathering insights regularly means responding quickly to user needs and wants, making more data-driven decisions, and reducing spend on changes that don’t work.

According to research by McKinsey, there is a direct correlation between financial success and de-risking development by continually listening, testing, and iterating with end-users. Continuous research methods are proven to positively impact the bottom line, and you can feel good knowing that they also make your customers’ lives better.

However, the under-touted benefit of continuous research is that it makes everyone’s job at your company easier. Product teams get their questions answered faster. Developers don’t waste time building features that frustrate users. Sales can more easily connect with customers.

No one has to wait to get on a roadmap, because there is a constant cycle of feedback and user connection that is otherwise unattainable.

Continuous research methods

So, what specifically counts as continuous research? Plenty of methods would fall under this umbrella. Here are a few of our favorites to paint a picture of what continuous research looks like in action.

Regular user interviews

Hold a recurring slot on your calendar and consistently fill it with customer interviews. This lightweight user research can gather immediate feedback on new features or designs.

Regular usability testing

Find time to observe users interacting with your product. Do this often, and you will uncover patterns to improve your UX.

Ongoing collection of CSAT or NPS scores

Keep a record of customer satisfaction scores (CSAT) or Net Promoter Scores (NPS) to understand over time whether users are happy with your product. This consistent record will help you determine if product optimizations have helped or hurt your experience.
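For reference, NPS is calculated as the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). A minimal sketch:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses from one month's survey.
print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # 25.0
```

Tracked monthly, a series of these scores is what lets you tell whether an optimization helped or hurt.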

Cohort comparison through onboarding surveys

Conduct onboarding surveys and then compare cohorts over time to identify trends that may not be apparent in individual feedback sessions.

Lightweight prototype testing

Get feedback on designs from initial prototype to mid-fidelity to fully mocked up. Use the consistent feedback to iterate quickly and make immediate changes as you go.

Continuous research implementation strategies

With the benefits and methods clear, you might be ready to shift your culture towards continuous research. If so, here are a few implementation strategies to set you off on the right foot.

Start small and build up consistency

Begin with a single recurring research touchpoint, such as weekly user interviews or bi-weekly prototype testing sessions. You don’t need a comprehensive, always-on strategy when you’re just starting out. Starting small will get you into the habit, and then you can find ways to expand your efforts.

Put research on the calendar

Blocking time on the calendar for research will get you into the continuous research habit. Consider something like a Friday afternoon standing meeting that you can fill with customer interviews.

Teresa Torres, a well-respected expert and author of Continuous Discovery Habits, suggests you talk to customers every week. “Continuous discovery means weekly touchpoints with customers by the team building the product, where they conduct small research activities in pursuit of a desired outcome.”

The emphasis is on taking research from something you pause to do, into something you always do. Putting it on the calendar in a consistent cadence will help you stick to it.

Get the whole team involved

One of the best parts of continuous research is that it benefits the whole team… and the whole team can be involved! While continuous research may be led by a researcher, it can also be effectively led by product managers who incorporate it into their regular schedule.

Even if other teams don’t lead the process, get them involved by:

  • Asking salespeople to ask one specific research question in each call
  • Having designers build prototype testing into their workflows
  • Sharing research findings across the organization

We have lots of expert insights on how to make B2B research work harder and get your team involved in the process.

Complement ongoing feedback with strategic research

Another great recommendation from Teresa Torres is to complement ongoing feedback with occasional deeper discovery work. When you have a higher-risk change or question, take the necessary time to do a deep dive into the data, testing, and analysis.

An always-on research strategy should ensure you’re solving the right problems and that you’re doing it effectively. A combination of lighter, continuous, and deep-dive research will make sure that happens.

Build your toolkit

Tools and technologies that enable continuous feedback will be a lifesaver during busy weeks when it would be easy to let research fall by the wayside. Set up automations and find the tools that make it easier to keep scheduling participants, collecting data, and surfacing insights.

In the end, the value of continuous research comes from rapidly applying insights. These implementation strategies will create explicit pathways for research findings to influence product decisions within days, not months.

Who should leverage continuous research?

A shift to continuous research represents a cultural shift in product development. It’s not just a changing methodology; it’s a truly user-centered approach where customer feedback continuously shapes product direction. Most product teams would benefit from implementing an always-on research strategy, but it’s particularly valuable in a few circumstances.

Teams with limited resources

It might seem counterintuitive, but continuous research is particularly valuable for teams that don’t have a big research budget. Even without the dollars to fund big studies, teams can use it to uncover customer insights that guide development.

Growth-stage startups

It’s ideal for startups that are moving quickly to build and make decisions. They’re mostly throwing things at the wall to see what sticks, but continuous research can act as the safety net or gut check for those ideas. Run it by a customer and get some quick feedback instead of waiting to make mistakes in-market.

Products in rapidly evolving markets

If you’re in a market that is changing quickly, like AI, it’s a good idea to implement continuous research. It helps you adapt to evolving consumer needs more efficiently and to keep up with rapidly developing technological advancements. When you have an always-on research schedule, you can get your questions answered more quickly and implement changes shortly after.

Why “always-on” should be your new normal

Studies show that the compartmentalization of design, development, and research stages of product development “increases the risk of losing the voice of the consumer or of relying too heavily on one iteration of that voice.” Don’t let your organization fall into this trap.

User insights help teams innovate faster and build better products. The best teams today are those that learn from their customers as they build, putting the user experience at the center of product development and optimization. Consistent feedback loops allow them to deliver constant value and effectively respond to market changes.

As competition intensifies in SaaS, continuous research could be the difference between products that thrive and those that die.

If your team sees the value of continuous research but doesn’t have the resources to manage it in-house, The Good can help.

Our team of experts will be an on-demand research (and design and strategy) team that helps you get things done faster. No more waiting months to get your ideas on the roadmap.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

Three Green Flags to Look For in a Research Vendor
https://thegood.com/insights/research-vendor/ (Wed, 09 Apr 2025)

It seems like everyone is talking about the flattening of the talent stack these days. Tech leads are doubling as product managers, product managers are playing designer, and researchers are lending their talents to the insights team. Anyone with a laptop is doing more with less.

Perhaps no corner of the product industry is witnessing democratization more than UX research.

Despite “research” working its way into the job descriptions of more and more disciplines, experienced, high-caliber researchers will always have their place in the industry. Whether it’s to supplement your team’s capacity, tap into deep expertise, or get an objective outside perspective, research vendors are valuable for a host of reasons. But between traditional agencies and the recent rise of independent and fractional labor, how do you know you’re talking to someone with the chops to execute at a high level?

We asked product research experts Hannah Shamji and Jon MacDonald how to spot a great research vendor. Read on to hear their perspective on what “green flags” to look for when vetting your next research partner.

They Ask a Healthy Number of Questions

Most experienced researchers have chosen the wrong method at least once in their careers. And the outcome is always disappointing. “Picking the wrong research method leaves you with results you effectively can’t use, and is a huge waste of resources,” says Jon MacDonald, Founder and CEO at The Good.

Fortunately, that painful lesson has its upside in learning value. Careful to avoid diving headfirst into a low-utility approach, experienced researchers ask plenty of questions before jumping into execution. This ensures they understand both the problem space and how the research findings will be acted on.

“If they are asking questions, it tells you they want to understand the business context,” says Hannah Shamji, former psychotherapist turned customer researcher. “If they’re just jumping in and not really scoping things out, it’s probably a sign they’re not the right fit.”

That heavy lifting up front helps shape a clear scope, but the conversation is more than just a learning exercise. A strong vendor will then massage the methodology to fit the business challenge. “I think it’s important to not lead with a method unless you have a very clear diagnosis,” says Hannah.

Jon agrees. “A good researcher will avoid a cookie-cutter approach,” says Jon. It’s why his team kicks off every project with a conversation designed to uncover nuance, align on business goals, and extract the institutional knowledge embedded within the team. It’s a process Jon calls “diagnosing before prescribing.” And it’s why The Good doesn’t respond to RFPs.

“If a scope is completely mapped out before involving a vendor, we often find that it’s poorly suited to yield the outcomes they’re after,” says Jon.

By forming scope through a collaborative process that starts with a conversation, research vendors are well-equipped to help craft an approach that’s appropriate and effective.

They Can Walk You Through the Tradeoffs

If you’re at the stage where you’re vetting research vendors, you probably have some idea of how to get the job done, i.e., through a survey or customer interviews. But Hannah warns that because research is so “accessible-sounding,” it’s common to chat with clients who start out asking for one form of research but really need another.

From Hannah’s point of view, a true expert will help you navigate the tradeoffs of one method vs another. They’ll help you understand how an approach impacts your time, budget, and expected outcomes. “There are a lot of easy, accessible go-tos like running a survey and talking to customers, but there are so many other forms of research that can close the gap,” Hannah says.

“The difference between an executor and a consultant is that if you want someone to just do, that’s a slightly different hire. If you want someone who will help you navigate the tradeoffs, it’s a different conversation.”

Jon agrees.

“Our clients love chatting through their needs with us because we’re really good at helping them outline the constraints and requirements of the task at hand and figuring out where to get the most leverage. We’re a thought partner. So by being brought in early enough, we can help them think through what they need to learn with new research versus where we can rely on historical or secondary research.”

In an ideal world, we would execute at the perfect balance of depth, speed, and cost. But at the speed of business today, most contexts leave us wanting for either time, budget, or rigor. A good research vendor will help you navigate the tradeoffs and make an informed decision.

“Sometimes you need to be scrappy, sometimes you need to go deep,” Hannah explains. “Being able to juggle your timeline and adapt the methods to your needs is key. Not everything needs significant rigor.”

As such, Hannah recommends being up front with your vendor and communicating what your priorities are—being honest about your budget, when you can act on the findings, who’s involved, and what’s in your power to change (versus what authority lives on another team). This context will help your vendor deliver “just enough research.”

They Are Flexible in Their Collaboration Style

For Hannah, research services are best done in a way that meets the team where they are at. That tailored collaboration style is what Jon calls a “one size fits one” approach.

As such, our experts believe a strong research vendor tailors their engagement to the company’s needs, understanding that research roles can shift depending on the stage of the business. “Depending on who’s involved, I think about research differently,” Hannah says. “There are certain stages where it’s not helpful to bring in a vendor with a buttoned-up process.”

For instance, Hannah finds that early-stage founders seeking product-market fit may benefit more from hands-on coaching than outsourced research, “so they can stay close to the data and be at the frontline of it.”

For those early-stage founders, Hannah recommends working with a partner who will open up their process or even take a more coaching-based approach. That way, the feedback loops are faster and the learnings are gathered first-hand. “You want to own the process yourself and minimize the gap between learning and doing.”

This manifests in conversational snapshots of the data as it’s rolling in. “Sometimes I will drip out the findings as I get them because I know they need to move,” Hannah notes. “I’ve had sales [people] jump on a call with me in the middle of me doing research because they just want to ask some questions to fill in the gaps with what I’m learning.”

For Jon, it’s about figuring out how involved a partner wants to be. “Some people want email updates almost daily, others just want a report in their inbox when things are wrapped. We try to work in a way that gives them their desired level of input and transparency.” This kind of adaptability ensures that research remains a business enabler rather than a bottleneck.

How To Choose The Right Research Partner

Choosing the right research vendor isn’t just about credentials or experience; it’s about fit.

To set yourself up for success, look for vendors who:

  • Ask the right questions and diagnose problems before prescribing solutions.
  • Can communicate tradeoffs to determine a path forward that fits your needs.
  • Are flexible in their collaboration style, tailoring their approach to the company’s stage and objectives.

By keeping these green flags in mind, businesses can ensure they partner with a research vendor who will deliver value, not just data.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

How to Make the Move From Intuition-led to Data-driven
https://thegood.com/insights/intuition-led-to-data-driven/ (Fri, 28 Mar 2025)

If your bookshelf looks anything like mine, I don’t have to extol the virtues of data-driven practices to you. Case studies from HBR have shown that A/B testing increased revenue at Bing by 10-25% each year, and companies that used data to drive decisions were best-positioned to navigate the COVID-19 crisis. But while 83% of CEOs want a data-driven organization, the reality is that many organizations are still largely intuition-run. It takes more than a compelling argument in those contexts to turn the tide.

If you’re spearheading the shift from an intuition-driven to a data-driven practice, it can be an uphill battle and a lonely one at that. We spoke with Hanna Grevelius, CPO at Golf Gamebook and advisor, and Maggie Paveza, Senior Digital Strategist at The Good, about how they’ve navigated data-imperfect conditions throughout their careers and successfully advocated for data-first principles.

Whether you’re working with limited data or as your company’s first A/B testing specialist, their stories make one thing clear: doing it alone doesn’t have to be so daunting.

Keep reading to hear about:

  • How they learned to work with data
  • How to leverage data to build prioritization intuition
  • When guessing is appropriate
  • How to be an advocate for data-first practices

1. It’s OK to learn on the job

For those with only a passable knowledge of statistics, it can seem intimidating to dive headfirst into data-driven decision making. But it doesn’t take a data science degree to be able to act on good data. In fact, few teams employ full-time analysts at early stages of growth. Most teams get by early on with the skills of a few generalists, who, it turns out, often learn on the job.

“Quantitative methods are something that I’ve learned in my career,” says Maggie Paveza, Senior Digital Strategist at The Good. Having previously worked as a UX Researcher at Usertesting.com, Maggie started with a strong foundation in qualitative research before adding quantitative methods to her toolkit, which she says helps her tell a fuller story. “The qualitative research forms the why; the quantitative research forms the what.”

For Hanna Grevelius, CPO at Golf Gamebook, her relationship with data started with close collaboration with product managers.

“My role when I started was in support, answering customer support emails. In trying to understand the scalability of issues, I got to work and talk a lot to product managers who really helped me understand we need to look at the data to know: is it one person who experienced the bug? Is it from a specific version of the app? Is it related to the device or operating system they were on?”

Hanna says learning how to dig for data helped her contextualize customer pain. And through that practice, she built the skills necessary to transition into Product Management. “It was through support that I started to understand that we should look into the data, then eventually I moved over to work on Product Management.”

When she added A/B testing to her toolkit, that took her passion for data to a whole new level.

“It’s so clear when you A/B test that even a small change can have a big impact. When you start seeing the difference, that really sparks an interest.”

2. Use data to define your focus

Once Hanna could confidently dive into the data, she started to use it in her practice, evaluating where traffic hits the app most frequently and focusing on those high-value, high-traffic areas first. This exercise in opportunity sizing taught her that it’s ok to shift focus in light of new data.

Maggie takes a similar approach to prioritization. She uses traffic data to understand what areas of a site or app are highly trafficked, and before proposing a test, she always verifies that an A/B test would see significance within an acceptable amount of time.

“We rely on prioritization methodologies to understand if running a test in an area would have a significant revenue impact, and whether an A/B test would reach significance in a matter of weeks or longer.”
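One common way to run that check is a standard power calculation: given a baseline conversion rate, the smallest lift worth detecting, and weekly traffic, estimate how long a test must run. Here is a sketch using statsmodels; every input is a placeholder assumption.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.04          # current conversion rate (assumed)
target = 0.048           # smallest lift worth detecting (+20% relative)
weekly_visitors = 5000   # traffic to the tested page, split across two arms

effect = proportion_effectsize(baseline, target)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided")

weeks = 2 * n_per_arm / weekly_visitors
print(f"~{n_per_arm:,.0f} visitors per arm, ~{weeks:.1f} weeks at current traffic")
```

If the answer comes back in months rather than weeks, that is your cue to test a higher-traffic area or reach for a faster research method.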

If you’re just starting out with a new property, Maggie and Hanna both suggest building a foundational understanding of traffic patterns and regularly refining your strategy. Priorities often shift as a result.

3. In the absence of data, start with a guess

One valuable skill that came later in their careers was understanding the value of a lead. Boosting form fills can feel invigorating, but without an understanding of what portion of that audience might become a deal later, it’s hard to know if your work is making a difference. Assigning a dollar amount to a lead is a powerful tool to evaluate your performance.

But if you’re joining an organization without mature data practices, leads often have no value assigned. And without institutional knowledge, it can be intimidating to make a guesstimate. But to Hanna, it’s worth starting with a guess to set initial priorities.

Hanna advises using a rough calculation to estimate the value of a metric (with things like average deal value and percent of pipeline that converts), which can help you get an early read.

“Over time, you can start adjusting it higher or lower. But trying to put a value on it and making decisions based on that is the best way to still work in a data-driven way even when you don’t have all the answers.”

Hanna warns that an estimate is just that, and that staying above board about where the data comes from is key to retaining trust.

“What’s really important in that estimation reporting is that you’re always super clear that you’re estimating—that it could be a lot higher and a lot lower, because if you start making critical budget decisions on it, you can end up in a dangerous situation.”
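As a concrete illustration of that rough calculation, here is a minimal version in Python; every input is an assumption you would revisit (and clearly label) as real data accumulates.

```python
# Back-of-the-envelope lead value; all inputs are estimates, not facts.
avg_deal_value = 25_000      # average closed-won contract value
lead_to_opp_rate = 0.20      # share of leads that become opportunities
opp_to_close_rate = 0.15     # share of opportunities that close

value_per_lead = avg_deal_value * lead_to_opp_rate * opp_to_close_rate
print(f"estimated value per lead: ${value_per_lead:,.0f}")  # $750
```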

4. Be the change you want to see

For those who know the clarity that data can bring to the decision-making process, working within a data-poor organization can be challenging. But Hanna says it’s fairly easy to lead others to data advocacy, even if you’re not in a C-suite. “Most people nowadays want to be data-driven,” Hanna says. In her opinion, it doesn’t take a fancy title to turn others into advocates.

“If you are working in an org where you are the only person who is responsible for testing, the best thing you can do is try to spread that knowledge. Get them involved and feel a sense of ownership. Try to make it so that you’re not the only one who cares about A/B testing and being data-driven.”

To build stewardship throughout the organization, Hanna’s advice is to share your thinking: walk colleagues through the potential upside of testing and the risks of not testing. “That can help people who are not so interested in testing to be a bit more curious and to want to understand.”

In Hanna’s experience, your passion can be quite contagious. “Data and testing, it opens up a world that is so fun.”

As for how she does it, Hanna shares her excitement by showing rather than telling. “As soon as you have the test going, share a bit of the data early on,” she says. Rather than being cagey about how inaccurate early test data is, she uses it as a teaching moment.

“All of us who work in the testing space know that data from one day or three days is probably going to be completely wrong, and you can say that also. But show it to that person. Show that ‘this is super early, we have no idea if this is going to be correct or statistically significant, but after one day this is what it looks like.’”

And of course, once you run successful tests down the line, Maggie’s experience tells her that there is nothing more powerful than sharing a win with your team.

Artfully navigating the shift

Advocating for data-driven decision-making in intuition-led companies isn’t always easy, but it’s a challenge worth taking on.

As Maggie and Hanna’s experiences show, starting small, whether by learning on the job, prioritizing based on data, making informed estimates, or sharing early insights, can lead to big shifts in mindset.

By fostering curiosity and collaboration, you can help transform your organization’s approach to decision-making, making data a natural and valued part of the process.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

How To Read A Heatmap Like An Expert Researcher: Patterns To Look Out For
https://thegood.com/insights/how-to-read-a-heatmap/ (Mon, 10 Feb 2025)

Wondering why users leave your site without converting?

You may have a gut-instinct answer to the question. You might even have ideas for how to tweak design, rewrite headlines, or add new features in an attempt to get users to stick around. But guesswork isn’t a strategy.

Expert researchers don’t guess. They use data, and one of their most powerful tools is the heatmap.

When used correctly, heatmaps reveal where users hesitate, what grabs their attention, and where they drop off—all critical insights for optimizing conversions. But the real magic isn’t simply generating a heatmap; it’s knowing how to read it.

In this guide, we’ll break down how to read a heatmap like an expert so you can stop guessing and start making informed, high-impact changes to your website.

Intro to heatmap analysis

Heatmap analysis shows real data points that represent actual human behavior. And when those behaviors form visibly discernible patterns, we use them to form hypotheses about user wants and needs.

Heatmaps can help answer questions about user behavior and uncover sticking points in the customer journey.

Like footprints in the sand, heatmaps show us where users have been. And we use that information to infer and imply intent.

Types of heatmaps

At The Good, we primarily use three types of heatmaps: click maps, movement maps, and scroll maps.

These types of heatmaps provide insights that answer critical UX and conversion questions, such as:

  • Are users seeing my key content?
  • What elements are they engaging with?
  • Where are they paying the most attention?

By analyzing these patterns, we can pinpoint where users get stuck, what’s drawing their attention, and where they drop off—and take action to improve the experience.

Scroll Maps

Scroll maps visually depict typical scroll depth on any web page. Key insights you can glean from scroll maps:

  • Where users drop off (high exit points may signal a false bottom)
  • Whether important content is being seen
  • If users are scanning or engaged

Tools typically use color scales to show the portion of users who scroll to different parts of your page. Red or “hot” areas of your heatmap indicate that all or almost all of your users have seen that part of the page. As you move down the page, the colors get “colder” according to the percentage of users who scroll down to that point.

The lines on the example scroll map indicate where 25%, 50%, and 75% of users dropped off, meaning they left the page or clicked on something rather than scrolling further.

While shallow scrolling is not inherently negative, it may indicate lost user attention or that a page does not look scrollable.

The same goes for deeper scrolling: it is not inherently positive or negative. Depending on the surrounding context, deeper scroll depth may indicate that users are failing to find meaningful content higher on the page and go looking for it by scrolling down.
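If you want to sanity-check a tool’s scroll map against your own analytics, the underlying math is straightforward: for each depth band, count the share of sessions whose maximum scroll reached at least that depth. A sketch, assuming you log one max-scroll-depth percentage per session:

```python
import numpy as np

# Hypothetical max scroll depth (0-100% of page height), one per session.
max_depth = np.array([100, 80, 35, 60, 25, 90, 45, 10, 70, 55])

for band in (25, 50, 75, 100):
    reached = (max_depth >= band).mean()
    print(f"reached {band:>3}% of page: {reached:.0%} of sessions")
```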

Movement Maps

Movement maps show where users have hovered their mouse on a page. They are valuable because they show us where the majority of user attention is focused. Movement maps can show:

  • What content users are reading or skimming
  • Where their attention is most concentrated
  • Whether key information is being overlooked

Movement maps help us infer what content is most valuable to users during the decision-making process.

Reading movement maps is similar to reading eye-tracking heat maps. For many users, their mouse movement follows their gaze, so knowing where mouse movement occurs tells us what content users are reading or paying attention to.

Based on our experience, concentrated left-to-right movement over text generally indicates intentional reading.

In this example, we see side-to-side movement over FAQs, indicating users are reading each question to determine which one may reveal helpful information about the services being offered. We looked at movement clusters in the FAQs, which, when paired with data about the most-clicked FAQ items, helped us determine which questions users needed answered to purchase with confidence.

In contrast, up-and-down movement may indicate areas that users are simply skimming rather than intently reading. In one example, vertical movement patterns indicated that users were scanning the available resources rather than reading them. User testers told us the content did not look worthwhile, so those two bits of data together told us this area needed fresh content and a redesign.

Click Maps

Click maps show us what elements users click on most commonly. Click maps can uncover insights including:

  • What elements drive engagement (or get ignored)
  • If users are clicking on non-clickable elements (indicating confusion)
  • Which navigation links or CTAs attract the most interest

Hot spots, shown in red, have the highest concentration of clicks. Transparent blue spots represent a low density of clicks.

In the click map below, we see a list of “All Products” with one notable hot spot in the middle of the menu. What, you may wonder, is in the middle of the list that is drawing so many clicks?

The answer is in the name: Paints. Here we see an example of a company with a clear specialty and a large portion of their sales going to one category. Yet when we saw this heatmap, we realized they were making users work hard to find these most popular products by burying them in the middle of the list.

18 Heatmap patterns to look out for

To get the most value out of heatmaps, researchers have to analyze how different heatmap elements interact, compare trends across pages, and validate findings with other data sources like session recordings or analytics.

In this section, we’ll walk through the most common heatmap patterns, what they look like, and what they reveal about user behavior so you can start making smarter, data-backed decisions.

The Spot Specific Pattern

Where it appears: Click map

What it looks like: Highly concentrated heat activity on an individual spot in a sea of text.

What it means: Users might have a specific interest related to a need. They could also be clicking on a non-clickable element within a paragraph or looking for information that is slightly buried within other text. It may be an indicator that you need to rearrange a menu or better highlight certain features of a product.

Gapped Patterns

Where it appears: Click map

What it looks like: In a list of items, there is one that gets no heat activity.

What it means: It usually means that users don’t know what to expect if they were to click there, or they are simply uninterested.

Primacy vs Recency Pattern

Where it appears: Click map

What it looks like: Concentrated click activity on the first and last items in a list.

What it means: Typical of menus, users often engage most with the first and last items in any list. The pattern takes its name from the psychological phenomenon whereby people best recall the first and last items in a list.

Filter Hot Spots

Where it appears: Click map

What it looks like: Concentrated clicks on certain areas of a filter, and a lack of clicks on other areas of a filter.

What it means: Users generally rely heavily on certain filters and less on others. Knowing what filters are helpful to users might tell us how we should rearrange filters or give us context for what users care about in their products.

Consistent Browsing Pattern

Where it appears: Click map

What it looks like: Strong click patterns across products on category pages.

What it means: This tells us that users are interested in a variety of products on the category page and are clicking on various product pages.

Spotted Browsing Pattern

Where it appears: Click map

What it looks like: Strong amount of clicks on only a few product images on category pages.

What it means: This tells us that users are most interested in specific products. These might be flagship products.

Strong Pagination Pattern

Where it appears: Click map

What it looks like: Concentrated activity on the pagination with little activity on filters or product tiles.

What it means: Users might not have very intentional browsing behavior, and instead of engaging with product tiles and narrowing down their search, they are simply going from page to page to see all products.

Click Indecision

Where it appears: Movement map

What it looks like: Horizontal heat patterns found in the middle of two clickable elements, usually between 2 or more different elements positioned next to each other. Can be found on a menu navigation or even dual CTAs.

What it means: Users are hovering between clickable elements. They might be experiencing a bit of uncertainty in their browsing experience. They’re not sure where to click because both options are similar in nature or unclear.

F-Shaped Reading

Where it appears: Movement map

What it looks like: Concentrations of heat in the shape of an F on the page. The direction begins with the user tracing the page from top to bottom and then from left to right.

What it means: Users are assessing the content on the page but they are not necessarily reading it.

Commitment Reading

Where it appears: Movement map

What it looks like: Blocks of heat activity usually on content pages or chunks of text.

What it means: Users are high-intent and they’re learners. These patterns show strong interest in the information displayed and intentional reading.

Layer Cake Pattern

Where it appears: Movement map

What it looks like: Users read headlines but overlook the associated subtext.

What it means: They are interested in the content but are reviewing the page at a high level.

The downside of this pattern is that users could miss content related to their needs, diminishing the content’s ability to promote a desired course of action.

Scrolling Pattern

Where it appears: Movement map

What it looks like: A vertical heat pattern that travels down the page. On low-traffic pages, this might appear as dots aligned in a vertical fashion.

What it means: This signifies that users are scrolling down the page without necessarily reading the content. They might be looking for something they are not finding, or the content might be arranged in a fashion that is best for scanning. If this is paired with very little click engagement, we might assume the content is not very valuable.

Truncated Scanning

Where it appears: Movement map

What it looks like: Users skip a consistently repeated word in a text.

What it means: Users are reading content faster, likely because the content is repetitive and it’s easy to recall the skipped word.

Dropdown Residue

Where it appears: Movement map

What it looks like: A spotted heat residue in a rectangular fashion positioned below the top navigation.

What it means: This is residual activity of users strongly considering items in the drop-down menu or some drop-down element on the page. Residue will be concentrated in the areas where users are actually scanning and considering the content.

Image Hover

Where it appears: Movement map

What it looks like: Heat activity around images on a page. Could be on a category page or rows of photos.

What it means: The imagery is dynamic; a secondary image appears when users hover over the primary image, so users hover around the image to see the second photo.

Content Avoidance

Where it appears: Movement map

What it looks like: The inverse of the image-specific pattern, content avoidance happens when people explicitly avoid an area with their mouse, almost creating a frame.

What it means: This might mean that users perceive this as an ad and are intentionally avoiding it, or have “banner blindness” and simply don’t see the content as relevant to their visit.

False Bottom

Where it appears: Scroll map

What it looks like: On scroll maps, there is a high drop-off on the page (drop-off is above the halfway mark on the page).

What it means: Users might perceive that they’ve reached the end of the page. This is extremely common when email signups sit in the middle of a page and when a strong color-contrast, full-bleed section appears early in the page. These elements signal that the footer is coming, so they often make users think they’ve seen everything they need to see.

Halted Pattern

Where it appears: Scroll map

What it looks like: Drop-off is right above the fold, and nearly no users scroll below it.

What it means: Either most users are finding something to click on above the fold, there is a high bounce/abandon rate, or there is a false bottom. It could also be some combination of the three.

What is the best tool for heat mapping?

Not all heatmaps are created equal. The best heat mapping tool is the one that provides clear, actionable insights without adding unnecessary complexity.

For most teams, Hotjar will be a great go-to solution. It’s lightweight, easy to set up, and provides a suite of heatmaps—including click maps, scroll maps, and movement maps—that help you understand user behavior at a glance.

Why Hotjar?

  • Comprehensive Behavior Tracking: Hotjar captures how users interact with your site—where they click, how far they scroll, and what elements they hover over.
  • Fast Insights, No Heavy Lifting: Unlike enterprise tools that require complex setup, Hotjar makes it easy to get started and see results quickly.
  • Paired with Session Recordings: Heatmaps alone tell part of the story; Hotjar lets you connect heatmap insights to real visitor session recordings for deeper analysis.

While it’s our top pick, if Hotjar isn’t the right fit, another good option is Microsoft Clarity.

Turning heatmap data into actionable strategies

Reading a heatmap like an expert researcher isn’t just about spotting red and blue zones—it’s about understanding the “why” behind user behavior and knowing what to do next.

But if you don’t have the time or resources to build a research team, you don’t have to go it alone. At The Good, we specialize in turning heatmap data into clear, actionable strategies that drive real results.

Want to skip the learning curve and get expert insights now? Let’s talk.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.
