MaxDiff Analysis: A Case Study On How to Identify Which Benefits Actually Build Customer Trust

When a SaaS company approached us after noticing friction in their trial-to-paid conversion funnel, they had a specific challenge: their website was generating demo requests, but prospects weren’t converting to customers. User research revealed a trust problem. Potential buyers were saying things like, “I need more proof this will actually work for a company like ours,” and “How do I know this won’t be another failed implementation?”

The company had assembled a list of proof points they could showcase on their homepage: years in business, number of integrations, customer counts, implementation guarantees, security certifications, industry awards, analyst recognition, and more. But they only had space to highlight four of these benefits prominently below their hero section. They faced the classic messaging dilemma: which trust signals would actually move the needle with prospects evaluating B2B software?

This is where MaxDiff analysis becomes valuable. Instead of relying on stakeholder opinions or generic best practices, we could let their target buyers vote with data on what mattered most.

What makes MaxDiff analysis different from other survey methods

MaxDiff analysis (short for Maximum Difference Scaling) is a research methodology that forces trade-offs. Rather than asking people to rate items individually on a scale, MaxDiff presents sets of options and asks participants to identify the most and least important items in each set. This forced-choice format reveals true preferences because people can’t rate everything as “very important.”

Here’s why this matters: traditional rating scales often produce compressed results where everything scores high. When you ask customers, “How important is X on a scale of 1-10?” most people will hover around 7 or 8 for anything remotely relevant. You end up with a spreadsheet full of similar numbers and no clear direction.

MaxDiff cuts through that noise. By repeatedly asking “which of these five options matters most to you, and which matters least?” across different combinations, you build a statistical picture of relative importance. The math behind MaxDiff generates a best-worst score for each item, showing not just which options are preferred, but by how much.

For digital experience optimization, this methodology is particularly useful when you need to prioritize limited real estate on a website, determine which features to build first, or figure out which messaging will actually differentiate your brand.

How we structured the MaxDiff study for maximum insight

In the project for our client, we started by defining the target audience precisely. The company was a B2B SaaS platform serving mid-market operations teams, so we recruited 60 participants who matched their customer profile: director-level or above at companies with 50-500 employees, working in operations or supply chain roles, currently using at least two SaaS tools in their workflow, and actively evaluating solutions within the past six months.

From the initial audit and stakeholder interviews, we identified 11 potential trust signals the company could emphasize on its homepage. These included things like:

  • Concrete numbers (customer counts, uptime percentages, integrations available)
  • Credentials (security certifications, enterprise clients)
  • Promises (implementation timelines, support response times, money-back guarantees)
  • And more

Each represented something the company could truthfully claim, but we needed to know which ones would build the most trust with prospects evaluating the platform.

The survey design was straightforward. Each participant saw these 11 benefits randomized into multiple sets of five items. For each set, they selected the most important factor and the least important factor when considering whether to adopt this type of software. Participants completed several rounds of these comparisons, seeing different combinations each time.

This approach gave us enough data points to calculate a robust best-worst score for each benefit: the number of times it was selected as “most important” minus the number of times it was selected as “least important.” Positive scores indicate strong preference, negative scores indicate low importance, and the magnitude of the score shows the strength of feeling.
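
For readers who want to reproduce the tally, here is a minimal sketch in Python of how a best-worst score can be computed from raw picks. The item names and response structure are illustrative assumptions, not the study’s actual data.

```python
from collections import Counter

# Each response records the five benefits shown in one set plus the participant's
# "most important" and "least important" picks. Item names are illustrative.
responses = [
    {"shown": ["G2 ratings", "Customer count", "Years in business",
               "SOC 2 Type II", "Employee headcount"],
     "best": "G2 ratings", "worst": "Employee headcount"},
    # ...one dict per set, per participant
]

best_counts = Counter(r["best"] for r in responses)
worst_counts = Counter(r["worst"] for r in responses)
items = {item for r in responses for item in r["shown"]}

# Best-worst score: times picked "most important" minus times picked "least important"
scores = {item: best_counts[item] - worst_counts[item] for item in items}

for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:>20}: {score:+d}")
```

In practice you would also normalize for how often each item was shown (or fit a choice model such as multinomial logit), but a simple count difference is usually enough to see the hierarchy.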

The results revealed a clear hierarchy of trust signals

When we analyzed the MaxDiff results, the pattern was striking. The top-scoring benefits shared a common theme: they provided concrete evidence of proven reliability and satisfied users. The bottom-scoring benefits? They emphasized company scale and marketing visibility.

[Chart: ranking of MaxDiff analysis SaaS trust signals]

The four highest-scoring trust signals were clear winners. G2 or Capterra ratings scored 38 points (the highest score in the study), indicating their importance was nearly universal. The number of active customers scored 30 points. An implementation guarantee (“live in 30 days or your money back”) scored 25 points. And SOC 2 Type II certification scored 16 points.

These weren’t arbitrary marketing metrics. They were the specific signals that would make someone think, “this platform delivers real value and other companies trust them.”

The middle tier included operational details that registered as minor positives but weren’t decisive: the number of successful implementations (7 points) and availability of 24/7 support (6 points). These signals suggested competence but didn’t particularly move the needle on trust.

Then came the surprises. Years in business scored -5 points, indicating it was slightly more often selected as “least important” than “most important.” The number of integrations available scored -11 points. Claimed AI-powered features scored -15 points. Employee headcount scored -36 points. And recognition as a Gartner Cool Vendor scored -55 points, the lowest score in the study.

Think about what prospects were telling us: “I don’t care that you have 200 employees or that Gartner mentioned you. Show me that real companies like mine trust you and that you’ll actually deliver on your promises.”

Why customers rejected company-focused metrics

The findings revealed an insight into trust-building that extends beyond this single company. B2B buyers weigh social proof and reliability guarantees far more heavily than they weigh indicators of company scale or industry recognition.

When a business talks about its employee headcount or analyst mentions, prospects interpret this as the company talking about itself. These metrics answer the question “How big is your business?” but not “Will this solve my problem?” From the buyer’s perspective, a larger team or Gartner mention doesn’t necessarily correlate with better software or smoother implementation.

By contrast, user reviews and customer counts answer the implicit question every prospect has: “Did this work for companies like mine?” A guarantee directly addresses risk: “What happens if implementation fails?” Security certifications address legitimacy: “Is this platform secure enough for our data?”

The AI-powered features claim scored poorly, likely because it felt trendy rather than practical. Prospects for this specific business weren’t primarily concerned about cutting-edge technology; they wanted a platform that would reliably solve their workflow problems. Leading with an AI angle, while possibly true, didn’t address the core decision-making criteria.

Years in business scored negatively for similar reasons. While longevity can signal stability, in this context, it didn’t address the prospect’s immediate concerns about implementation speed and user adoption. A company could be around for years while providing clunky software with poor support.

From insight to implementation: turning research into revenue

The MaxDiff analysis gave the company a clear action plan. We recommended implementing a four-part trust signal section directly below their homepage hero, featuring the top four scoring benefits in order of importance.

This meant reworking their existing homepage structure. Previously, they had emphasized their implementation guarantee in the hero area while burying customer counts and ratings further down the page. The research showed this approach had it backward. Prospects needed to see evidence of customer satisfaction first, then the implementation guarantee as additional reassurance.

We also recommended removing or de-emphasizing several elements they had been proud of. The employee headcount mention, the Gartner recognition, and several other low-scoring items were either removed entirely or moved to less prominent positions on the site. The goal was to prevent low-value signals from crowding out high-value ones.

The broader lesson here extends beyond this single homepage optimization. The MaxDiff results provided a messaging hierarchy that the company could apply across its entire go-to-market strategy. Email campaigns, landing pages, sales conversations, demo decks, and even their LinkedIn company page could now emphasize the trust signals that actually mattered to prospects.

When MaxDiff analysis makes sense for your business

MaxDiff is particularly valuable when you’re facing a prioritization problem with limited data. It works best in these scenarios:

  • You have more options than you can implement. Whether that’s features to build, benefits to highlight, or messages to test, MaxDiff helps you choose wisely when you can’t do everything at once.
  • Stakeholder opinions are conflicting. When internal debates about priorities can’t be resolved through argument, customer data settles the question. MaxDiff provides quantitative evidence for decision-making.
  • You need to differentiate in a crowded market. If competitors are all saying similar things, MaxDiff reveals which specific claims will break through. Often, the winning messages are ones companies overlook because they seem “obvious” or “not unique enough.”
  • You’re optimizing for a specific audience segment. Generic research about “customers in general” often produces generic insights. MaxDiff works best when you recruit participants who precisely match your target customer profile.

The methodology has limitations worth noting. It requires careful setup, and you need to know which options to test before you start: if you don’t include the right benefits in your initial list, you won’t discover them through MaxDiff. It also works best with a reasonably sized set of options (typically 5-15 items). And the results tell you about relative importance, not absolute importance; everything could theoretically matter, but MaxDiff reveals the hierarchy.

How to use MaxDiff findings in your optimization strategy

Once you have MaxDiff results, the application extends beyond simply reordering homepage elements. The insights should inform your entire digital experience.

Your messaging architecture should reflect the importance hierarchy. High-scoring benefits deserve prominent placement, repetition across pages, and detailed explanation. Low-scoring benefits can either be removed or repositioned as supporting rather than leading messages.

Your testing roadmap should prioritize changes based on MaxDiff findings. If customer reviews scored highest in your study, test different ways of showcasing reviews before you test other elements. Let the data guide your experimentation priorities.

Your content strategy should emphasize what customers care about. If service guarantees scored highly, create content that explains the guarantee in detail, shares stories of when it was honored, and addresses common concerns. Build your editorial calendar around the topics MaxDiff revealed as important.

Your sales enablement should align with customer priorities. If the research showed that prospects value licensing credentials, make sure your sales team knows to emphasize this early in conversations. Create collateral that highlights the trust signals that matter most.

The most effective companies use MaxDiff as one tool in a broader research program. They combine it with qualitative research to understand why certain benefits matter, behavioral analytics to see how users interact with different messages, and continuous testing to validate that the predicted preferences translate into actual conversion improvements.

Turning guesswork into growth

The SaaS company we worked with started with a dozen possible messages and no clear sense of which would build trust most effectively with B2B buyers. After the MaxDiff analysis, they had a data-backed hierarchy that let them confidently restructure their homepage and broader messaging strategy.

This is the power of asking prospects the right questions in the right way. Not “do you like this?” which produces inflated scores for everything. Not “rank these 11 items,” which overwhelms participants and produces unreliable data. But rather, through repeated forced choices, revealing the true importance of each element.

If you’re struggling with similar prioritization challenges (too many options, limited space, stakeholder disagreement about what matters), MaxDiff analysis might be the tool that breaks through the noise. It transforms subjective opinion into statistical evidence, letting your prospects vote on what will actually convince them to choose your platform.

Ready to discover which messages actually resonate with your customers? The Good’s Digital Experience Optimization Program™ includes research methodologies like MaxDiff analysis to help you prioritize changes based on real customer preferences, not guesswork.


The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company

Most SaaS companies segment users the wrong way. They group people by demographics, company size, or subscription tier. Basically, they look at who users are rather than what they’re trying to do.

The problem is that a freelance consultant and an enterprise project manager might both use your collaboration tool, but they have completely different goals, workflows, and definitions of success.

We recently worked with a leading enterprise SaaS company facing exactly this challenge. Their product served everyone from solo creators to enterprise teams, but the experience treated everyone the same.

Users logging in to track a single client project got bombarded with team collaboration features. Power users managing complex workflows couldn’t find advanced capabilities buried in generic menus.

Users felt overwhelmed by irrelevant features, engagement plateaued, and retention suffered.

The solution wasn’t another traditional segmentation model. It was intent-based segmentation: a framework for grouping users by what they’re actually trying to accomplish, then personalizing the experience to match those specific goals.

Understanding behavioral segmentation and where intent fits

Before diving into how to build intent-based clusters, it helps to understand where this approach fits within the broader landscape of user segmentation.

Most SaaS companies are familiar with demographic segmentation (company size, industry, role) and firmographic segmentation (ARR, team size, geographic location). These approaches tell you who your users are, but they don’t tell you what they’re trying to do, which is why they often fail to predict engagement or retention.

Behavioral segmentation takes a different approach. Instead of looking at static attributes, it focuses on how users actually interact with your product.

Behavioral segmentation divides users based on engagement. This includes actions like feature usage patterns, open frequency, purchase behavior, and time-to-value metrics.

While not the only way to do things, behavioral segmentation is widely regarded as more effective than demographic segmentation alone, with plenty of research backing it up.

Within behavioral segmentation, there are several approaches:

  • Usage-based segmentation looks at frequency and intensity of use.
  • Lifecycle segmentation tracks where users are in their journey.
  • Benefit-sought segmentation groups users by the outcomes they want to achieve.

Intent-based segmentation sits at the intersection of all three.

[Figure: intent-based segmentation at the center of different types of user segmentation]

It identifies clusters of users who share similar goals and workflows, then maps those patterns to create a more personalized experience.

Intent-based clusters answer the question: “What is this user trying to accomplish right now?”

In a recent client engagement that inspired this article, this distinction mattered. They had mountains of usage data showing which features people clicked, but no framework for understanding why certain feature combinations existed or what job users were trying to complete.

They knew “Business Professionals” used their tool, but that category was so broad it offered no actionable insights. A marketing manager building campaign timelines has completely different needs than a legal team tracking contract approvals, even though both might be classified as “business professionals.”

Intent-based clustering gave them that missing layer of insight.

Case study: How to spot the need for intent-based segmentation

Let’s talk more about the client engagement I mentioned. This is a great case study for when to use intent-based segmentation.

We work on a quarterly retainer with this client through our on-demand growth research services. So, when they mentioned struggling to personalize experiences and improve retention, we opened a research project that same day.

The team could see that certain users logged in daily and used five or more features. Great, right? Not really. When we dug deeper, we discovered something critical. Heavy feature usage didn’t predict retention. Some power users churned while casual users stuck around for years.

The issue wasn’t the quantity of features used; it was whether those features aligned with what users were trying to accomplish. A user coming in weekly to update a single client dashboard showed higher retention than someone exploring ten features that didn’t match their core workflow.

The symptoms: What teams told us was broken

During our stakeholder interviews, we heard the same frustrations across departments:

From product: “We know the top five features everyone uses, but that doesn’t help us understand why they’re using them or what to build next. Two users might both use our template feature, but one is building client proposals while the other is standardizing internal processes. They need completely different things from that feature.”

From marketing: “Our segments are too broad to be useful. ‘Business Professional’ could mean anyone from a solo consultant to an enterprise VP. When we send educational content, we can’t make it relevant because we don’t know what problem they’re trying to solve.”

From customer success: “We can see when someone is at risk of churning because their usage drops off, but we can’t predict it before it happens. By the time we notice, they’ve already decided the product isn’t right for them. We need to understand intent earlier so we can intervene proactively.”

From UX research: “Users think in terms of tasks, not features. They don’t say ‘I want to use the dependency mapping tool.’ They say, ‘I need to make sure the design team finishes before development starts.’ But our product talks about features, not outcomes.”

The underlying problem: Missing the ‘why’

What became clear was that the organization had plenty of data about behavior but no framework for understanding intent. They could answer questions like:

  • How many people use feature X?
  • What’s the average session duration?
  • Which users log in most frequently?

But they couldn’t answer the questions that actually mattered:

  • What are users trying to accomplish when they use feature X?
  • Why do some users stick around while others churn?
  • What combination of goals and workflows predicts long-term retention?

This may sound familiar: plenty of data about behavior, but no context about intent. Without understanding users’ different definitions of success, teams fall back on generic personalization that recommends “similar features” and misses the mark entirely.

The four-phase framework for creating intent-based user clusters

Based on our work with the enterprise SaaS client, we developed a systematic framework for building intent-based clusters from scratch.

The process has four distinct phases, each building on the previous one.

Think of this as a directional guide rather than a rigid formula. You can adapt the scope based on your resources and organizational complexity.


Phase 1: Capture institutional knowledge and identify gaps

Most organizations have more customer insight than they realize; it’s just scattered across teams, buried in old reports, and locked in individual team members’ heads. The first phase consolidates that knowledge and identifies what’s missing.

[Figure: Phase 1 of the intent-based user segmentation process]

1. Conduct cross-functional interviews

Start by interviewing stakeholders who interact with users regularly: product managers, customer success, sales, support, and marketing. For our client, we conducted interviews with seven team members from UX research, growth, engagement, marketing, and analytics.

The goal isn’t consensus. You want to uncover patterns and contradictions instead. Focus your conversations on these questions:

  • How do you currently describe different user types?
  • What patterns have you noticed in how different users engage with the product?
  • What data exists about user behavior that isn’t being used to inform decisions?
  • Where do personalization efforts break down today?
  • What questions about users keep you up at night?

These conversations surface institutional knowledge that never makes it into documentation.

Capture everything. The contradictions are especially valuable—they reveal where teams operate with different mental models of the same users.

2. Audit existing research and reports

Next, gather relevant user research, data analysis, and customer insight. For our client, we analyzed eight existing reports, including retention data, cancellation surveys, user studies, engagement patterns, and conversion analysis.

Look for:

  • Marketing segmentation models (usually demographic-heavy)
  • User research studies (often small sample, rich insights)
  • Behavioral analytics reports (feature usage patterns, cohort analysis)
  • Customer journey maps (theoretical vs. actual paths)
  • Support ticket analysis (pain points and use cases)
  • Cancellation surveys (why users leave)

Pay attention to gaps between how different teams think about users. Marketing might segment by company size, while product segments by feature usage. Neither is wrong, but the lack of a unified framework means teams optimize for different definitions of success.

3. Define hypotheses about intent-based variables

Based on interviews and research, develop hypotheses about what variables might define user intent. This is where you move from “who uses our product” to “what are they trying to accomplish.”

For our client, we identified several dimensions that seemed to correlate with intent:

  • Primary workflow type: Are users managing team projects, client deliverables, or personal tasks?
  • Collaboration patterns: Solo work, small team coordination, or cross-functional orchestration?
  • Usage frequency: Daily operational tool or periodic project management?
  • Success metrics: Speed (quick task completion) vs. thoroughness (detailed planning and tracking)?
  • Document complexity: Simple task lists or multi-layered project hierarchies?

The goal is to create testable hypotheses that can be validated in Phase 2.

Phase 2: Validate clusters through user research and behavioral analysis

Once you have initial hypotheses, Phase 2 tests them against real user behavior and feedback. This is where hunches become data-backed insights.

[Figure: Phase 2 of the intent-based user segmentation process]

1. Develop provisional cluster groups

Transform your hypotheses into provisional clusters. For our client, we identified six distinct intent-based clusters. Let me illustrate with a fictional example of how this might work for a project management SaaS tool:

Sprint Executors: Users focused on rapid task completion and daily standup workflows. They need speed, simple task updates, and quick team coordination. Think startup teams moving fast with lightweight processes.

Client Project Coordinators: Users managing multiple client engagements simultaneously with strict deliverable timelines. They need client visibility controls, progress tracking, and professional reporting. Think agencies and consultancies.

Cross-Functional Orchestrators: Users coordinating complex projects across departments with dependencies and approval workflows. They need Gantt views, resource allocation, and stakeholder communication tools. Think enterprise program managers.

Personal Productivity Optimizers: Users who treat the tool as their second brain for personal task management and goal tracking. They need customization, recurring tasks, and minimal collaboration features. Think solopreneurs and executives.

Seasonal Campaign Managers: Users with predictable high-intensity periods followed by dormancy. They need templates, bulk operations, and the ability to archive/reactivate projects easily. Think retail operations teams or event planners.

Mobile-First Coordinators: Users who primarily access the tool from mobile devices for field work or on-the-go updates. They need streamlined mobile experiences and offline sync. Think field service teams or traveling consultants.

Each cluster gets a descriptive name that captures the user’s primary intent, not just their behavior. “Sprint Executor” tells you more about what someone is trying to do than “high-frequency user.”

2. Conduct targeted user research

With provisional clusters defined, recruit users who fit each profile and conduct interviews to understand:

  • Their primary use cases and goals when they first adopted the tool
  • How they discovered and currently use the product
  • Their typical workflows from start to finish
  • What defines success in their role
  • Pain points and unmet needs
  • How they decide which features to explore
  • What would make them cancel vs. what keeps them subscribed

For our client, we conducted three to four interviews per cluster, totaling around 24 user conversations. This gave us enough coverage to validate patterns without drowning in data.

The insights were eye-opening. We discovered that one cluster had the fastest time-to-value but the lowest feature adoption. They found what they needed immediately and never explored further. Another cluster showed the highest retention but needed the longest onboarding. They invested time up front because the tool was critical to their workflow.

3. Analyze behavioral data to confirm patterns

User interviews reveal what people say they do. Behavioral data shows what they actually do. Cross-reference your clusters against:

  • Feature usage sequences (which tools appear together in sessions)
  • Time-to-value metrics by cluster (how quickly do they get their first win)
  • Retention and churn patterns (which clusters stick around)
  • Upgrade and expansion behavior (which clusters grow their usage)
  • Support ticket themes (which clusters need help with what)
  • Feature adoption curves (how exploration patterns differ)

For our client, the data revealed critical differences. The “Sprint Executor” equivalent had fast initial adoption but plateaued quickly. They found their core workflow and stopped exploring.

The “Cross-Functional Orchestrator” cluster showed slow initial adoption but deep engagement over time. They needed to learn the tool thoroughly to unlock value.

These patterns weren’t visible in aggregate data. Only by segmenting users by intent could we see that different clusters had fundamentally different paths to retention.

4. Build detailed cluster profiles

For each validated cluster, create a comprehensive profile that becomes the foundation for personalization. For example:

Cluster name: Sprint Executors

Primary intent: Complete daily tasks quickly with minimal friction and maximum team visibility

Most-used features:

  • Quick-add task creation
  • Board view for visual sprint planning
  • Mobile app for on-the-go updates
  • Real-time team activity feed

Typical workflow patterns:

  • Morning standup with task assignments
  • Throughout the day: quick updates and status changes
  • End of day: marking tasks complete and planning tomorrow

Behavioral flags that identify this cluster:

  • Creates 5+ tasks in first week
  • Returns daily within first 14 days
  • Uses mobile app within first 7 days
  • Rarely uses advanced features like Gantt charts or dependencies

Retention drivers:

  • Speed of task completion
  • Team visibility and accountability
  • Mobile accessibility

Churn risks:

  • Tool feels too complex for simple needs
  • Feature bloat making core actions harder to find
  • Forced upgrades to access speed-focused features

Personalization opportunities:

  • Streamlined onboarding focused on quick task creation
  • Mobile-first feature discovery
  • Templates for common sprint workflows
  • Integrations with communication tools

These profiles become the single source of truth that product, marketing, and customer success can all reference.

Phase 3: Develop indicators and personalization strategies

The final phase connects clusters to action. This is where the framework moves from insight to implementation.

1. Create behavioral flags for cluster identification

Most users won’t self-identify their intent at signup. You need to infer cluster membership from behavioral signals early in their journey. The key is identifying flags that appear within the first 7-14 days, early enough to personalize the experience before users decide whether the tool is right for them.

[Figure: Phase 3 of the intent-based user segmentation process]

For reference, here are the flags for the “Sprint Executor” cluster in our fictional example:

  • Created 5+ tasks in first week
  • Logged in on 4+ separate days in first 14 days
  • Used mobile app within first 7 days
  • Board or list view used more than timeline/Gantt view (80%+ of sessions)
  • Invited at least one team member within first 10 days
  • Never explored advanced dependency features
  • Average session length under 10 minutes

Versus the “Client Project Coordinator” cluster:

  • Created 3+ separate projects within first week (indicating multiple clients)
  • Used folder or workspace organization features within first 5 days
  • Set up client-specific permissions or external sharing settings
  • Created custom views or reports within first 14 days
  • Longer average session times (20+ minutes per session)
  • Uses professional or client-specific terminology in project names
  • High usage of export or presentation features

The goal is to find the minimum viable signal set that reliably predicts cluster membership. Start with more flags and refine over time based on which actually correlate with long-term behavior.
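
To illustrate how flags like these might be operationalized, here is a minimal rule-based sketch in Python. The signal names, thresholds, and the two clusters shown are taken from the fictional example above and are assumptions, not the client’s production logic; a real implementation would sit in your analytics pipeline and be refined as you learn which flags actually predict long-term behavior.

```python
from dataclasses import dataclass

@dataclass
class EarlySignals:
    """Hypothetical behavioral signals captured during a user's first 14 days."""
    tasks_created_week1: int
    active_days_first14: int
    used_mobile_week1: bool
    projects_created_week1: int
    used_client_sharing: bool
    avg_session_minutes: float

def assign_provisional_cluster(s: EarlySignals) -> str:
    """Rule-based assignment mirroring the flag lists above; thresholds are illustrative."""
    if (s.projects_created_week1 >= 3 and s.used_client_sharing
            and s.avg_session_minutes >= 20):
        return "Client Project Coordinator"
    if (s.tasks_created_week1 >= 5 and s.active_days_first14 >= 4
            and s.used_mobile_week1 and s.avg_session_minutes < 10):
        return "Sprint Executor"
    return "Unclassified"  # fall back to the default experience until more signal arrives

# Example: a user who created 3 client projects, shared externally, and runs long sessions
print(assign_provisional_cluster(EarlySignals(2, 3, False, 3, True, 25.0)))
```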

One critical finding from our client work: early behavioral flags predicted retention better than demographic data.

A user who exhibited “Client Project Coordinator” behaviors in week one showed 40% higher 90-day retention than the average user, regardless of their company size or job title.

2. Map personalization opportunities to each cluster

With clusters and flags defined, identify specific ways to personalize the experience across the user journey:

Onboarding sequences: Tailor the first-run experience to highlight features that match user intent. Show Sprint Executors how to set up their first sprint board, not the full feature catalog with Gantt charts and resource allocation tools they don’t need.

In-app messaging: Trigger contextual tips based on usage patterns. When a Client Project Coordinator creates their third project with similar structure, suggest project templates to save time.

Feature discovery: Recommend next-step features that align with cluster workflows. For Sprint Executors who’ve mastered basic task management, introduce the mobile app and integrations with their communication tools—not complex dependency mapping.

Content and education: Send targeted educational content that addresses cluster-specific goals. Client Project Coordinators get tips on professional reporting and client permissions. Sprint Executors get productivity hacks and team coordination strategies.

Upgrade paths: Present pricing tiers and feature upgrades that match cluster needs. Don’t push team collaboration features to Personal Productivity Optimizers who work solo and won’t use them.

Support prioritization: Route support tickets differently based on cluster. Client Project Coordinators might get priority support since they’re often managing paying clients. Seasonal Campaign Managers might get proactive check-ins before predicted busy periods.

For our client, this mapping revealed opportunities they’d completely missed. One cluster had been receiving generic “explore more features” emails when what they actually needed was advanced security capabilities for compliance requirements. Another cluster kept churning at the end of trial because onboarding emphasized features they’d never use instead of the speed-focused tools that matched their workflow.

Phase 4: Develop test concepts to validate impact

Turn personalization opportunities into testable hypotheses. Don’t roll everything out at once. Start with high-impact, low-effort tests that prove the value of intent-based segmentation.

For our client, we proposed several test concepts structured to validate clusters quickly and build organizational confidence in the framework. Here are a few examples.

[Figure: Phase 4 of the intent-based user segmentation process]

Example Test 1: Intent-Based Onboarding Survey

Background: The organization lacked a way to identify user intent at the critical moment when users were most open to guidance: right after signup, but before they’d formed opinions about product fit.

Hypothesis: By asking users to self-identify their primary goal during their first meaningful session, we can segment them into actionable clusters that enable personalized feature discovery and messaging, resulting in a 5-10% improvement in 3-month retention rates.

Test design: During the first session (after initial signup but before deep engagement), present a brief survey asking: “What brings you here today?” with options that map directly to identified clusters:

☐ Coordinate my team’s daily work (Sprint Executors)

☐ Manage multiple client projects (Client Project Coordinators)

☐ Organize complex cross-functional initiatives (Cross-Functional Orchestrators)

☐ Track my personal tasks and goals (Personal Productivity Optimizers)

☐ Plan seasonal campaigns or events (Seasonal Campaign Managers)

☐ Update projects while on the go (Mobile-First Coordinators)

☐ Something else (with optional text field)

Then immediately personalize their first experience based on their response: Sprint Executors see a streamlined task creation tutorial, Client Project Coordinators get guidance on setting up client workspaces, etc.

Success metrics:

  • Primary: 3-month retention rate by selected cluster (looking for 5-10% lift)
  • Secondary: Survey completion rate (target: >80%), feature adoption aligned with cluster (target: 20% lift), time to first value-generating action
  • Guardrails: No negative impact on day 2 or day 7 retention

Acceptance criteria for “winning test”:

  • Survey completion rate >80%
  • >60% of users select a pre-set option (vs. “something else”)
  • Statistically significant retention lift in at least one cluster
  • No degradation in key engagement metrics

Acceptance criteria for “learning test”:

  • >40% of users select “something else” (suggests clusters don’t match user mental models)
  • No statistically significant difference in retention (suggests clusters exist, but personalization approach needs refinement)

Audience: New paid subscribers on first day, trial users converting to paid, reactivated users returning after 30+ days dormant. Start with 25% of eligible users to minimize risk.

Timeline: 90 days to measure primary retention metric, but early signals (completion rate, feature adoption) available within 14 days.
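
The acceptance criteria above depend on detecting a statistically significant retention lift. The article doesn’t specify a test, but a two-proportion z-test is one common way to check a lift like this; the sketch below uses hypothetical counts to show the calculation.

```python
from math import sqrt, erf

def retention_lift(retained_a, n_a, retained_b, n_b):
    """Two-proportion z-test comparing control (a) and variant (b) retention rates."""
    p_a, p_b = retained_a / n_a, retained_b / n_b
    p_pool = (retained_a + retained_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, via normal CDF
    return p_b - p_a, z, p_value

# Hypothetical numbers: control retains 380 of 1,000 users at 90 days; variant retains 425 of 1,000
lift, z, p = retention_lift(380, 1000, 425, 1000)
print(f"Lift: {lift:.1%}  z = {z:.2f}  p = {p:.4f}")
```

In this made-up example, roughly 1,000 users per arm and a lift of about 4-5 percentage points clears the conventional p < 0.05 threshold; smaller lifts require larger samples or longer test windows.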

Example Test 2: Cluster-Specific Feature Recommendations

Background: Generic in-app messaging had low click-through rates (<5%) and wasn’t driving feature adoption. Users felt bombarded by irrelevant suggestions.

Hypothesis: For users who match behavioral flags within the first 14 days, triggering personalized feature recommendations will increase feature adoption by 20% and engagement depth by 15%.

Test design: Identify users by behavioral flags, then trigger targeted in-app messages at contextually relevant moments:

  • Sprint Executors see mobile app download prompt after completing 5 tasks on desktop: “Update tasks on the go: get the mobile app”
  • Client Project Coordinators see reporting features after creating third project: “Impress clients with professional progress reports”
  • Cross-Functional Orchestrators see dependency mapping after creating complex project: “Map dependencies to keep cross-functional teams aligned”

Success metrics:

  • Primary: Feature adoption rate for recommended features (target: 20% lift vs. control)
  • Secondary: Overall engagement depth (features used per session), message click-through rate
  • Guardrails: No increase in feature abandonment (starting but not completing flows)

Audience: Users who match cluster behavioral flags within first 14 days. Test one cluster at a time to isolate impact.

Timeline: 30 days to measure feature adoption impact.

Example Test 3: Retention Email Campaigns by Cluster

Background: Generic “tips and tricks” email campaigns had 8% open rates and weren’t moving retention metrics. Content felt irrelevant to most recipients.

Hypothesis: Segmenting email campaigns by identified cluster will improve email engagement by 50% and show a measurable correlation with retention.

Test design: Replace generic weekly tips emails with cluster-specific content:

  • Sprint Executors: “5 ways to speed up your daily standup” / “Mobile shortcuts that save 2 hours per week”
  • Client Project Coordinators: “How to impress clients with professional project reports” / “3 ways to give clients visibility without overwhelming them”
  • Personal Productivity Optimizers: “Build your second brain: advanced filtering techniques” / “Automate your recurring tasks in 5 minutes”

Send to users identified through either the onboarding survey or behavioral flags. Track engagement and retention by cluster.

Success metrics:

  • Primary: Email open rates (target: 50% lift), click-through rates (target: 100% lift)
  • Secondary: Correlation between email engagement and 90-day retention
  • Guardrails: Unsubscribe rates remain stable or decrease

Audience: Users identified as belonging to specific clusters either through survey responses or behavioral flags, minimum 14 days after signup.

Timeline: 6 weeks for initial engagement metrics, 90 days for retention correlation.

Post-test analysis framework

For each test, we established a clear decision framework:

If “winning test”:

  • Roll out to 100% of eligible users
  • Begin development on next phase of personalization for that cluster
  • Use learnings to inform tests for other clusters
  • Document what worked to build organizational playbook

If “learning test”:

  • Analyze all “something else” responses for missing clusters or unclear framing
  • Review behavioral data to see if clusters exist, but personalization was wrong
  • Iterate on messaging, timing, or format
  • Decide whether to retest with refinements or try different approach

If negative impact:

  • Immediately roll back to the control experience
  • Conduct user interviews to understand what went wrong
  • Reassess cluster definitions or personalization approach
  • Consider whether the cluster exists but needs a different treatment

The key to successful testing is starting small, measuring rigorously, and being willing to learn from failures. Not every cluster will respond to every type of personalization, and that’s valuable information. The goal isn’t perfect personalization immediately; it’s continuous improvement based on what actually moves metrics.

Intent-based segmentation mistakes and how to avoid them

Based on our experience implementing this framework, here are the mistakes that will derail your efforts:

1. Starting with too many clusters

More isn’t better. Six well-defined clusters are more useful than fifteen overlapping ones. You need enough clusters to capture meaningfully different intents, but few enough that teams can actually remember and act on them. Start with 4-6 clusters and refine over time. If you find yourself creating clusters that differ only slightly, you’ve gone too granular.

2. Confusing demographics with intent

Job title, company size, or industry might correlate with intent, but they don’t define it. We’ve seen solo consultants behave like “Cross-Functional Orchestrators” and enterprise teams behave like “Sprint Executors.” Focus on what users are trying to accomplish, not who they are on paper.

3. Creating overlapping clusters

Each cluster should be distinct in its primary intent and workflow patterns. If you’re struggling to articulate how two clusters differ behaviorally, they’re probably the same cluster with different labels. Test this by asking: “If I saw someone’s usage data, could I confidently assign them to one cluster?”

4. Ignoring edge cases entirely

Some users will span multiple clusters or switch between them based on context. That’s fine. The framework should accommodate primary intent while recognizing that users are complex. A user might primarily be a “Client Project Coordinator” but occasionally use “Personal Productivity Optimizer” features for their own task management. Don’t force rigid categorization.

5. Skipping the validation step

Your initial hypotheses will be wrong in places. User research and behavioral data keep you honest and prevent confirmation bias. We’ve seen teams fall in love with theoretically elegant clusters that don’t actually exist in their user base, or miss entire segments because they didn’t fit the initial hypothesis.

6. Treating clusters as static

User intent evolves. Someone might start as a “Personal Productivity Optimizer” and grow into a “Client Project Coordinator” as their business scales. Review and refine your clusters quarterly based on new data, product changes, and market shifts.

7. Personalizing too aggressively too soon

Start with high-confidence, low-risk personalization (like targeted email content) before you completely diverge user experiences. You want to validate that clusters behave differently before you build entirely separate onboarding flows.

8. Forgetting to measure impact

Intent-based segmentation is valuable only if it improves outcomes. Define success metrics upfront (e.g., retention lifts, engagement depth, upgrade rates, support ticket reduction) and track them by cluster. If personalization isn’t moving these metrics, refine your approach.

Making intent-based segmentation work for your organization

The framework we’ve outlined works across product categories and company sizes, but implementation varies based on your resources and organizational maturity.

If you have limited data: Start with Phase 1 and Phase 2, using qualitative research to define clusters before investing in behavioral infrastructure. You can manually tag users based on interview responses and onboarding surveys, then personalize through targeted emails and customer success outreach. As you grow, build the data systems to automate cluster identification.

If you have rich behavioral data but limited research capabilities: Reverse the order. Start with data patterns and validate through targeted interviews. Look for natural groupings in your analytics that suggest different workflow types, then talk to representative users from each group to understand their intent.

If you’re a small team: Don’t let perfect be the enemy of good. Start with 3-4 obvious clusters based on your highest-level workflow differences. The founder of a 10-person startup probably has a better intuitive understanding of user intent than a 500-person company with siloed data. Write down what you know, test it with a few users, and start personalizing.

If you’re a large enterprise: The challenge is getting organizational alignment, not defining clusters. Use Phase 1 to surface where teams already operate with different mental models, then use data to arbitrate. Create executive sponsorship for the new framework so it becomes the shared language across product, marketing, and CS.

The key is starting somewhere. Most companies know their one-size-fits-all approach isn’t working, but they keep personalizing around the wrong variables that don’t actually predict what users are trying to accomplish.

Intent-based segmentation reorients everything around the question that actually matters: What is this user trying to accomplish, and how can we help them succeed at that specific goal?

Turn insights into retention that drives revenue

Understanding user intent is just the first step. The real value comes from translating those insights into personalized experiences that keep users engaged and drive measurable revenue growth.

At The Good, we’ve spent 16 years helping SaaS companies identify their most valuable user segments and optimize experiences around what actually drives retention. Our systematic approach to user segmentation goes beyond frameworks. We help you implement experimentation strategies that prove which personalization efforts move the needle on the metrics your board cares about.

Plenty of companies struggle to implement segmentation that’s actually actionable. They end up with beautiful personas gathering dust or broad categories that don’t inform product decisions.

Intent-based segmentation is different because it connects directly to behavior you can observe and experiences you can personalize.

If you’re struggling with generic experiences that fail to resonate with different user types, or if you know your segmentation could be better but aren’t sure where to start, let’s talk about how intent-based segmentation could transform your retention strategy and drive revenue growth.

Now It’s Your Turn

We harness user insights and unlock digital improvements beyond your conversion rate.

Let’s talk about putting digital experience optimization to work for you.

Building A Monetization Strategy That Doesn’t Leave Untapped Revenue in Your User Base

Product leaders are rightfully obsessed with acquisition. They pour resources into new sign-ups and track monthly active users religiously.

But over many years of working with SaaS teams, there is something counterintuitive we’ve learned about this approach.

The companies that scale fastest aren’t always the ones acquiring the most users. They’re often the ones who build monetization strategies that focus on their existing user base. As users find more value in the tool and increase usage, the tool’s pricing scales accordingly.

Realistically, every SaaS tool will hit a growth plateau. There aren’t infinite users that will find value in your product, even though we all wish there were.

The goal is to build growth into your monetization strategies so you don’t leave any untapped revenue in your existing user base. This ensures you don’t reach a premature growth plateau once net new user acquisition slows.

The fundamental shift in monetization strategy from seats to value

Before we get started, throw your traditional SaaS monetization playbook out the window.

For years, companies have relied on seat-based pricing because it made sense: with each new hire, another account or seat was purchased, and revenue grew linearly with team size.

But now one person can do the work of two or three people. AI tools, automation, and productivity software mean that the relationship between users and value creation has completely shifted. When your customers can accomplish twice as much with half the team, seat-based pricing isn’t sustainable.

Smart companies are pivoting to value-based extraction. Instead of charging for the number of people using their software, they charge for the value it creates. This isn’t just about switching to usage-based pricing; it’s about fundamentally rethinking how you capture the value your product delivers.

Consider HubSpot’s evolution. Instead of sticking to their standard seat-based pricing model as the market has evolved, they’ve created a dynamic pricing system. Users can pay for seats at their specific account tier, but also have a layer of contact-based pricing, aligning cost with the actual value delivered rather than just the number of users.

They’ve also recently added token-based pricing for certain functions in the tool, like marketing email sends, AI features, and API calls. These changes allow them to maintain revenue growth even as customers reduce their seat count.

You’re trying to capture more of the consumer surplus

Most SaaS tools have a consumer surplus. There are features or outcomes that customers would pay more for, but don’t have to because of your pricing model.

You can never eliminate all surplus (you need happy customers), but you can likely capture more of it through strategic segmentation and value extraction.

Think about your demand curve. It’s not a straight line. It’s a complex slope that varies by customer segment, use case, and willingness to pay. Most companies set one or two price points and leave massive value on the table. The companies that scale create multiple packages along that curve.

Netflix understood this when it evolved from a single $7.99 plan to Basic, Standard, and Premium tiers. Each tier captures different segments of willingness to pay while allowing customers to self-select into the option that works for them. However, the real insight wasn’t in the tiers themselves, but rather in the understanding that different customer segments valued different features. Knowing that allowed Netflix to extract more value from customers who were willing to pay more while keeping price-sensitive customers from defecting.

Research changes everything

To get started on a monetization strategy based on value and capture more of the consumer surplus, companies have to build their understanding of what customers are willing to pay for.

Research from monetization and pricing expert Madhavan Ramanujam says that 20% of features drive 80% of willingness to pay. The challenge is to make sure you aren’t over-indexing on features that customers don’t actually value while underdeveloping the ones that drive revenue.

The solution is systematic research that reveals what customers actually want to pay for. Here are three methods to make it happen:

MaxDiff analysis: Present customers with feature lists and ask them to identify the most and least important items. With enough volume, you can rank features by their impact on willingness to pay. Features that over 50% of customers want become your “leader” features or the core value proposition that justifies your price point.

Anchoring questions: Instead of asking customers what they’d pay (which doesn’t work), ask them to compare your value to a known competitor. “If Salesforce brings your team 100 points of value, where do we rank?” This gives you relative value positioning without the discomfort of direct pricing questions.

Van Westendorp pricing: Ask customers four questions about price sensitivity: What’s acceptable? What’s expensive but you’d consider it? What’s so expensive that you wouldn’t consider it? What’s so cheap that you’d question the quality? This reveals the psychological price boundaries for different customer segments, providing a window of tenable prices that capture both the price-sensitive and high-willingness-to-pay corners of the market.
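
As a rough illustration of how Van Westendorp answers are often analyzed, the sketch below (Python, with made-up responses) builds the four cumulative curves and reads off an acceptable price window from where they cross. Exact crossing-point definitions vary among practitioners, so treat this as a starting point rather than a standard implementation.

```python
import numpy as np

# Each row: one respondent's answers, in dollars, to the four questions:
# too cheap, acceptable/bargain, expensive-but-considered, too expensive (illustrative data)
answers = np.array([
    [10, 25, 60,  90],
    [15, 30, 70, 110],
    [ 8, 20, 55,  80],
    [12, 35, 65, 100],
    [20, 40, 80, 120],
])
too_cheap, bargain, expensive, too_expensive = answers.T

prices = np.linspace(answers.min(), answers.max(), 200)

# Cumulative curves: share of respondents giving each judgment at each candidate price
pct_too_cheap     = np.array([(too_cheap >= p).mean() for p in prices])      # falls as price rises
pct_bargain       = np.array([(bargain >= p).mean() for p in prices])
pct_expensive     = np.array([(expensive <= p).mean() for p in prices])      # rises as price rises
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in prices])

def first_crossing(falling, rising):
    """Price at which a falling curve first drops to or below a rising curve."""
    for price, f, r in zip(prices, falling, rising):
        if f <= r:
            return price
    return prices[-1]

lower = first_crossing(pct_too_cheap, pct_expensive)        # point of marginal cheapness
upper = first_crossing(pct_bargain, pct_too_expensive)      # point of marginal expensiveness
print(f"Acceptable price window: roughly ${lower:.0f} to ${upper:.0f}")
```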


The Shopify monetization strategy: how to scale with your customers

One innovative and extremely effective monetization strategy is to grow with your customers. Shopify cracked this code by creating a model where their revenue increases as their customers become more successful. Instead of charging an ever-larger flat monthly fee, they take a percentage of gross merchandise volume (GMV).

This creates a virtuous cycle: Shopify is incentivized to help their customers succeed because customer success directly translates to revenue growth. When a merchant goes from $10,000 to $100,000 in monthly sales, Shopify’s revenue from that customer increases 10x.

Smaller businesses benefit from a proportional cost as they get started, and if businesses leave once they grow, Shopify doesn’t mind.

Shopify actually optimizes for this churn, not against it. As Archie Abrams, VP of Product and Head of Growth at Shopify, explains: “The way we think about churn [goes] back to Shopify’s mission and what we want to do, which is to increase the amount of entrepreneurship on the Internet.”

Instead of trying to prevent customers from leaving, Shopify focuses on lowering barriers to entry so more entrepreneurs can try starting businesses. They know most will fail, but the few who succeed generate massive value. This counterintuitive approach has helped Shopify power over 10% of U.S. e-commerce with $235 billion in GMV in 2023.

The beauty of this model lies in its retention through value creation, rather than friction. Traditional SaaS companies worry about churn because losing a customer means losing all their revenue. But when your revenue scales with customer success, churn becomes less of a concern. Your most successful customers are worth 10x or 100x more than your average customer, creating a natural buffer against churn.

Finding your untapped revenue

The process of discovering untapped revenue in your user base can be synthesized into a few steps:

Step 1: Segment your demand curve

Different customer segments have different willingness to pay. Enterprise customers might value security and compliance features, while SMBs prioritize ease of use and cost. Map these segments and understand what each values most.

Step 2: Identify value gaps

Look for places where customers are getting significant value but paying relatively little. These are your biggest opportunities for revenue expansion. Often, these are found in features that save customers time or help them make money.

Step 3: Create extraction mechanisms

Build pricing tiers, usage limits, or premium features that allow high-value customers to pay more for the value they receive. The key is making this feel like a fair exchange rather than a penalty.

The most effective monetization strategies combine multiple approaches. For example:

  • Base + usage: Provide a predictable subscription base with usage-based charges for additional value. This gives customers cost certainty while allowing you to capture upside from heavy users (see the sketch after this list).
  • Tiered value: Create pricing tiers based on customer segments and use cases, not just feature lists. Each tier should feel designed for a specific type of customer.
  • Expansion revenue: Build mechanisms for customers to naturally increase their spending as they grow. This could be through additional seats, increased usage, or premium features.
  • Value-based upgrades: Tie pricing increases to value delivered, rather than just features added. When customers see clear ROI, they’re willing to pay more.
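
To make the first of these approaches concrete, here is a minimal sketch of a base + usage bill; the base fee, included allowance, and overage rate are illustrative assumptions, not anyone’s actual pricing.

```python
# Illustrative base + usage pricing (all numbers are assumptions).
BASE_FEE = 99.00          # predictable monthly subscription
INCLUDED_EVENTS = 10_000  # usage included in the base fee
OVERAGE_RATE = 0.005      # charge per event beyond the included allowance

def monthly_bill(events_used: int) -> float:
    """Base subscription plus usage-based charges above the allowance."""
    overage = max(0, events_used - INCLUDED_EVENTS)
    return BASE_FEE + overage * OVERAGE_RATE

# A light user pays the predictable base; a heavy user funds expansion revenue.
print(monthly_bill(4_000))   # 99.0
print(monthly_bill(50_000))  # 99.0 + 40_000 * 0.005 = 299.0
```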

Step 4: Test and iterate

Pricing optimization is an ongoing process. Test different approaches, measure customer response, and iterate based on data. The best monetization strategies evolve continuously.

A monetization strategy that works for the long term

The future of SaaS monetization is about aligning pricing with value creation rather than resource consumption. The untapped revenue in your user base is real, measurable, and accessible, and approaching it with a value-based strategy will help you capture it.

At The Good, we specialize in helping SaaS companies optimize their monetization strategies through data-driven research and strategic experimentation.

Our services can help you identify value gaps, design pricing experiments, and implement changes that drive meaningful revenue growth. Get in touch to learn how we can help you extract more value from your existing customers.

What Is Discovery Research in UX? https://thegood.com/insights/discovery-research/ Thu, 17 Jul 2025 15:21:56 +0000 https://thegood.com/?post_type=insights&p=110732 It’s difficult to find a product team that lacks data or feature requests. Most don’t even need additional user feedback. Yet, they’re still building the wrong things. The culprit isn’t a lack of information; it’s starting with solutions instead of problems. While 89% of product teams are conducting user interviews according to recent industry data, […]

It’s difficult to find a product team that lacks data or feature requests. Most don’t even need additional user feedback. Yet, they’re still building the wrong things. The culprit isn’t a lack of information; it’s starting with solutions instead of problems.

While 89% of product teams are conducting user interviews according to recent industry data, there’s a critical gap between gathering user input and uncovering the insights that actually drive business results.

We see this all the time in our client work: teams building features simply because competitors have them, without any competitive data, or building for the loudest customers without checking how widespread those friction points really are.

So what’s the solution?

The companies consistently shipping features that move the needle know the difference between asking users what they want and understanding what they actually need. It starts with discovery research.

What is discovery research in UX?

Discovery research in UX is the foundational phase of user research that focuses on understanding user problems, needs, and contexts before any solutions are designed.

Unlike evaluative research methods that test existing designs or prototypes, discovery research explores the unknown territory of user behavior to uncover opportunities and define problems worth solving.

Discovery research helps you understand use cases and user needs. It can ground you in what problems to solve and what is going on in the market.

This grounding is essential for product teams who want to build features that users actually need and that will drive growth.

Discovery research typically involves methods like user interviews, field studies, diary studies, and market analysis. These approaches help teams understand the broader context of user goals and challenges before jumping into design solutions. The insights gathered during this phase become the strategic foundation for all subsequent product decisions.

Discovery research versus UX discovery

While these terms are often used interchangeably, there’s an important distinction that affects how product teams approach their research strategy.

Discovery research specifically refers to the research methods and activities used to understand user needs and identify problems. It’s the “how” of gathering insights through interviews, observations, and analysis. This includes techniques like ethnographic studies, user interviews, and competitive analysis.

UX discovery, on the other hand, is the broader strategic phase that encompasses discovery research, but also includes other activities such as technical feasibility assessments, business viability analysis, and stakeholder alignment. UX discovery is the “what and why” that frames the entire early-stage product exploration.

Think of discovery research as the tactical execution within the strategic framework of UX discovery. A comprehensive UX discovery process will include multiple types of discovery research methods. It also considers business constraints, technical limitations, and market opportunities.

For SaaS product teams, this distinction matters because it clarifies roles and expectations. UX researchers lead discovery research activities, while product managers typically orchestrate the broader UX discovery process that incorporates research findings into strategic decisions.

Understanding this difference helps teams avoid the common mistake of treating research as a checkbox activity rather than a strategic input that informs product direction.

Benefits of discovery research

Discovery research delivers tangible benefits that extend far beyond the research team, directly impacting product success and business outcomes.

Reduces development risk and waste

The most immediate benefit of discovery research is risk reduction. By understanding user needs and the specific problems before development begins, teams avoid building features that miss the mark. This is particularly critical for SaaS teams where failed features mean ongoing maintenance costs and technical debt that compound over time.

Enables data-driven product decisions

Discovery research transforms product decisions from opinion-based to evidence-based. Instead of stakeholder preferences driving priorities, user insights guide development resources toward the highest-impact opportunities.

Uncovers hidden opportunities

Discovery research often reveals unmet user needs that aren’t obvious from analytics or existing feedback channels. These insights can become the foundation for innovative features that differentiate your product in the market.

Improves cross-team alignment

When discovery research findings are shared across product, design, and development teams, everyone gains a shared understanding of user priorities. This alignment reduces conflicting opinions and streamlines the development process.

Accelerates time-to-market for successful features

While discovery research requires upfront time investment, it actually accelerates the development of successful features by ensuring teams build the right things from the start.

Enhances user satisfaction and retention

Products built on solid discovery research foundations better meet user expectations, leading to higher satisfaction scores and improved retention rates. Users feel heard and understood when products solve their actual problems rather than perceived problems.

This is essential for SaaS businesses, where discovery research can identify the difference between features that drive daily engagement and those that see only one-time usage, a distinction that directly impacts churn rates.

When to use discovery research

Discovery research is best leveraged as part of a continuous research strategy.

Teresa Torres, product discovery expert and author of Continuous Discovery Habits, recommends weekly conversations with customers. “Continuous discovery means weekly touchpoints with customers by the team building the product, where they conduct small research activities in pursuit of a desired outcome.”

The goal is to turn research from something you pause to do into something you always do.

Many leaders will have experimentation rituals that allow quick and consistent feedback on ideas/products, but it’s rarer to see teams prioritize discovery on a frequent cadence.

When you manage discovery in batches or isolated sprints, it can mean you miss out on opportunities or delay solving urgent problems for customers.

Common discovery activities in UX

Effective discovery research employs multiple methods to build an understanding of the problem landscape and market conditions. Not every method is required, but a combination will give you a better picture to work from.

Diary studies

For understanding user behavior over time, diary studies ask participants to record their experiences, thoughts, and interactions over days or weeks. This method is particularly valuable for SaaS products where user needs evolve or vary based on different use cases and timeframes.

User interviews

One-on-one conversations with users can be a great pillar of discovery research. The key to successful interviews in discovery is asking open-ended questions that help explore user motivations, frustrations, and workflows. A good foundation is to conduct 6-8 interviews per user segment to get a picture of current challenges and behaviors.

Field studies and contextual inquiry

Observing users in their natural environment provides insights that interviews alone can’t capture. Field studies reveal the environmental, social, and technical factors that influence user behavior, uncovering needs that users might not articulate in interviews.

Competitive analysis and market research

Understanding the competitive landscape helps identify opportunities for differentiation. It also uncovers whether user problems are being adequately solved by existing solutions. This desk research complements user-facing research methods.

Jobs-to-be-done (JTBD) research framework

JTBD research helps frame what job users are “hiring” your product to do. It can help you think beyond features to understand the fundamental progress users are trying to make in their lives or work.

Card sorting

This method helps teams understand how users categorize information and conceptualize problem spaces. Card sorting is particularly useful for discovering how users naturally group features or content areas.

Survey research

While qualitative methods provide depth, surveys can help uncover findings across larger user populations. Use surveys to quantify the prevalence of problems discovered through qualitative research.

Leveraging discovery research for better outcomes

In an era where 83% of designers, product managers, and researchers agree that research should be conducted at every stage of product development, it’s critical to understand discovery research in UX.

Discovery research is a tool that helps you dig into current user needs and prioritize the problems worth solving. It provides the user insights needed to build theme-based roadmaps, prioritize high-impact features, and avoid costly development mistakes. Most importantly, it ensures that every dollar spent on product development addresses real user needs rather than perceived problems.

Ready to make discovery research work for your product team? The Good specializes in helping SaaS companies uncover the user insights that drive product success. Our team combines deep research expertise with practical product strategy to ensure your research translates into features that drive growth.

Get in touch with The Good to discuss how discovery research can accelerate your product development and improve user satisfaction. Let’s turn your user insights into your competitive advantage.

How To Make User-Centered Decisions When A/B Testing Doesn’t Make Sense https://thegood.com/insights/why-rapid-test/ Fri, 23 May 2025 20:04:02 +0000 https://thegood.com/?post_type=insights&p=110602 The right tool for the right job. It’s a principle that applies everywhere, from construction sites to surgical suites, yet for digital product development, many teams are singularly focused on A/B testing. Don’t get me wrong, A/B testing is incredibly powerful. It’s the gold standard for high-stakes, high-traffic decisions where statistical significance matters most. But […]

The right tool for the right job. It’s a principle that applies everywhere, from construction sites to surgical suites, yet for digital product development, many teams are singularly focused on A/B testing.

Don’t get me wrong, A/B testing is incredibly powerful. It’s the gold standard for high-stakes, high-traffic decisions where statistical significance matters most. But when it becomes your only tool, you create unnecessary constraints that can paralyze decision-making and slow innovation.

The reality is that different decisions require different levels of rigor, confidence, and investment. Luckily, there is a complementary approach that fills critical gaps in your experimentation toolkit. By understanding when each method is most appropriate, teams can make faster, more informed optimizations while maintaining the rigor needed for their most high-stakes decisions.

Creating “experience rot”

A/B testing borrowed its methodology from medical intervention studies, where 95% confidence intervals and statistical significance aren’t just nice-to-haves; they’re life-or-death requirements.

But we’re not running clinical trials, and we don’t always need that level of assurance to move product decisions toward the right outcome.

An infographic of the evidence hierarchy inherited from medical disciplines.

A/B testing can be overkill for the decisions product teams need to make daily. Yet teams have become so committed to this single methodology that they’ve created what researcher Jared Spool calls “Experience Rot,” the gradual deterioration of user experience quality that comes from teams moving too slowly or focusing solely on economic outcomes.

The costs of slow testing cycles are tangible and measurable:

  • Market opportunities disappear while waiting for test results
  • Competitors gain ground during lengthy testing phases
  • Development resources get tied up in prolonged testing initiatives
  • Customer frustration builds as issues remain unfixed
  • Decision fatigue sets in as teams debate what to test next

But the problem runs deeper than just speed. Many teams face contexts where A/B testing simply isn’t feasible. Regulatory challenges in healthcare and finance, low-traffic scenarios for B2B products, technical constraints, and organizational politics all create barriers to traditional experimentation.

By the time a test idea passes through all the bureaucratic hurdles and oversight at an organization, it’s often no longer lean enough to justify testing. Without an alternative testing method, teams are left without any data at all.

So, how do we:

  1. Circumvent the challenges of A/B testing, and
  2. Prevent experience rot?

Enter rapid testing

Rapid testing isn’t about cutting corners or accepting lower-quality insights. It’s about matching your research method to the decision you’re trying to make, rather than forcing every question through the same rigorous, but often slow, process.

Like A/B testing, rapid testing helps you understand if your solutions are working. Unlike A/B testing, rapid tests are conducted with smaller sample sizes, completed in days rather than weeks or months, and often provide qualitative insights that A/B tests can miss.

“The speed at which we obtain actionable findings has been impressive,” says Gabrielle Nouhra, Software Director of Product Marketing, who leverages rapid testing with The Good for research and experimentation. “We are receiving rapid results within weeks and taking immediate action based on the findings.”

The key is understanding when each approach makes sense. Not every decision requires the same level of rigor, and smart product teams create systems that allow critical insights to move faster.

A framework for decision making

So, how do you decide when to use rapid testing versus A/B testing? The decision starts with two critical questions: Is this strategically important? And what’s the potential risk? With those two questions in mind, you can map your ideas on a simple 2×2.

A framework to use for decision making and deciding why to rapid test.

High Strategic Importance + Low Risk = Just Ship It. If you can’t explain meaningful downsides to a change but know it’s strategically important, you probably don’t need to test it at all. These are your quick wins.

Low Strategic Importance = Deprioritize. Not everything needs to be tested. Some changes simply aren’t worth the time and resources, regardless of the method you use.

High Strategic Importance + High Risk = Test Territory. This is where both A/B testing and rapid testing live. The next decision point becomes: Can you reach statistical significance within an acceptable timeframe? Are you technically capable of running the experiment?

If the test isn’t technically feasible or traffic constraints make the time-to-significance longer than is acceptable, rapid testing becomes your best option for de-risking the decision.

A decision tree to determine whether to test something and why to rapid test or use another approach.
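
For teams that like to encode this kind of triage, here is a minimal sketch of the decision flow described above; the recommendation labels paraphrase the framework, and the six-week threshold is an assumption you would replace with your own.

```python
def choose_method(strategically_important: bool,
                  high_risk: bool,
                  ab_test_feasible: bool,
                  weeks_to_significance: float,
                  acceptable_weeks: float = 6) -> str:
    """Map the 2x2 (importance x risk) plus feasibility checks to a recommendation.

    A rough sketch of the decision tree described above; the six-week default
    for an acceptable test duration is an illustrative assumption.
    """
    if not strategically_important:
        return "Deprioritize"
    if not high_risk:
        return "Just ship it"
    # High importance + high risk: test territory.
    if ab_test_feasible and weeks_to_significance <= acceptable_weeks:
        return "A/B test"
    return "Rapid test"

print(choose_method(True, True, ab_test_feasible=True, weeks_to_significance=12))  # Rapid test
```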

Rapid testing in practice

Rapid testing encompasses various methodologies, each suited to different types of questions. Here are just a few examples:

First-Click Testing helps confirm where users would naturally click to complete a task. Perfect for interface design decisions and navigation optimization.

Preference Testing goes beyond simple A/B comparisons to evaluate multiple options, often six to eight variations, helping teams understand which labels, designs, or approaches resonate most with their target audience.

Tree Testing reveals where users might stray from their intended path, using nested structures to understand navigation behavior without the distraction of full visual design.

A framework to use when determining which rapid testing method is best suited for your needs.

The beauty of these methods lies in their speed and specificity. Rather than testing entire page redesigns, rapid testing allows you to validate specific hypotheses quickly. Which onboarding segments will users self-identify with? Where should we place a new feature to maximize engagement? Which design elements increase trust among new visitors?

Rapid tests can also guide our A/B testing strategy. If we’re entertaining multiple options for new nomenclature within an app experience, for example, and just trying to understand which label users find most accurate, a rapid test can narrow down the options and help us decide what to A/B test.

Building a rapid testing practice

Implementing rapid testing effectively requires more than just choosing the right method. Teams that see the best results follow several key principles:

  1. Impact pre-mortems: Before testing, clearly define what success looks like and what impact you expect if implemented. This helps connect testing activities to business outcomes and prevents post-hoc justification of results.
  2. Acuity of purpose: Keep tests focused on specific questions rather than trying to evaluate everything at once. A/B testing often encourages comprehensive evaluations, but rapid testing works best with precise hypotheses.
  3. Pre-defined success criteria: Establish clear benchmarks before you start testing. If 80% of users can complete a task, is that a win? What about 60%? Define these thresholds upfront to avoid moving goalposts when results come in.
  4. Mute context: When testing specific elements, remove unnecessary context that might distract from the core question. Full-page designs can overwhelm participants and dilute feedback on the element being tested.
  5. Sunlight: Even experienced researchers benefit from collaborative review of test plans. Transparency builds confidence in the process, and a peer review of test designs helps identify potential issues before execution.
  6. Share: Circulate your impact, what you’ve learned about your audience, and get people excited about the work. The goal is to build visibility, create a case for why this work is valuable, and encourage people to make decisions with data.

The compound effects of speed

Teams that successfully implement rapid testing alongside their existing A/B testing programs see remarkable results. Our clients report 50% improved A/B test win rates, better customer satisfaction scores, and significantly faster time-to-insights.

But perhaps most importantly, they report better team morale. There’s something energizing about seeing results from your work quickly, about being able to iterate and improve based on real user feedback rather than lengthy committee discussions.

It’s never too late to pivot. The idea is to move away from long-cycle decision making, where we send something through the whole development and design cycle only to come up with a lackluster outcome, toward a process that gives us quick, early signals.

Making the transition

The goal isn’t to replace A/B testing. It remains the gold standard for high-stakes, high-traffic decisions. But by adding rapid testing to your toolkit, you can accelerate the decisions that don’t require months of statistical validation while still maintaining confidence in your choices.

As decision scientist Annie Duke writes in Thinking in Bets, “What makes a great decision is not that it has a great outcome. It’s the result of a good process.” Rapid testing gives teams a process for rational de-risking that emphasizes both speed and quality.

The question isn’t whether you should test your ideas; it’s whether you’re using the right testing method for each decision. In a world where speed increasingly determines competitive advantage, teams that master this balance will consistently outpace those stuck with only one tool in their kit.

Ready to accelerate your decision-making process? Our team specializes in helping product teams implement rapid testing alongside existing experimentation programs. Get in touch to learn how we can help you cut testing time without sacrificing insight quality.

How to Identify Your Most Valuable User Segments and Prioritize Accordingly https://thegood.com/insights/user-segments/ Thu, 01 May 2025 05:24:04 +0000 https://thegood.com/?post_type=insights&p=110491 Have you ever heard of the Pareto Principle? Even if the name doesn’t ring a bell, you’re likely familiar with the premise that 80% of revenue comes from 20% of customers. Despite this being a proven economic model, companies are rarely focusing their effort on that 20%. It’s not because they don’t want to; it’s […]

Have you ever heard of the Pareto Principle? Even if the name doesn’t ring a bell, you’re likely familiar with the premise that 80% of revenue comes from 20% of customers.

Despite this being a proven economic model, companies are rarely focusing their effort on that 20%.

It’s not because they don’t want to; it’s because it is easy to get wrapped up in not losing a single sale, to the point that you are spreading yourself too thin.

If you focus your energy and product improvements on the highest-value user segment, you will see greater returns for less work.

In this article, we’re sharing the study we recently ran for a client that helped us identify their most valuable user segments and prioritize improvements to meet their needs.

What are user segments?

User segments are groups within a customer base who share similar characteristics, behaviors, or values.

They are created with user segmentation, which researches those commonalities and divides your audience into distinct groups. You can then tailor experiences, personalize messaging, and focus optimization efforts on their specific needs.

Common types of user segments

User segments can be divided based on different traits. The type of segmentation you use will vary based on your use case and goals. Here is a quick overview of common user segments.

  • Demographic: Segments users by age, gender, income, education, etc. Example use case: targeting campaigns for specific roles.
  • Firmographic: Segments by company size, industry, revenue, or location. Example use case: tailoring features for SMBs vs. enterprise.
  • Behavioral: Based on how users interact with your product, such as product usage, feature adoption, or login frequency. Example use case: identifying power users or at-risk users.
  • Technographic: Segments by technology stack, device, browser, or OS. Example use case: prioritizing integrations or support.
  • Needs-Based: Segments by specific problems or needs. Example use case: customizing messaging for value drivers.
  • Value-Based: Groups by economic value (annual recurring revenue, lifetime value, subscription tier). Example use case: prioritizing high-revenue customers.
  • Lifecycle Stage: Segments by user journey stage (trial, active, churn risk, etc.). Example use case: triggering onboarding or win-back flows.
  • RFM (Recency, Frequency, Monetary): Groups based on most recent activity, engagement frequency, and spend. Example use case: identifying loyal or dormant users.
  • Acquisition: Based on the marketing channel or campaign source. Example use case: tailoring messaging or personalizing the experience.

Why companies optimize for the wrong segments

When we run prioritization exercises, one of the most common mistakes we see is companies focusing on user segments based purely on volume. If a segment has more users, they automatically assume it deserves more attention.

This reflects one of the three common prioritization mistakes:

  1. Volume bias: Prioritizing segments with the most users rather than the most value
  2. Squeaky wheel focus: Optimizing for the users who complain the loudest
  3. Recency fallacy: Focusing on the latest acquisition channel or user cohort without evaluating their actual value

The uncomfortable truth is that your most valuable segments may not be your largest, your loudest, or your newest.

Conducting a segmentation study step by step

At The Good, we’ve developed a systematic approach to identify and prioritize your most valuable user segments. Here’s how it works.

Step 1: Set your goals

Before you start analyzing data, segmenting users, and prioritizing, you need a clear understanding of your project goals. In most cases, they will look something like this:

  • Identify and quantify subsets of user segments based on use cases
  • Understand the potential value of known segments
  • Identify features and benefits that are most important on a per-segment basis
  • Find opportunities to improve the engagement of high-value users

These goals can be turned into the key research questions of your study.

Step 2: Identify valuable behaviors beyond revenue

Your most valuable user segments, of course, need to drive revenue, but there are other indicators to consider when prioritizing who you are building/optimizing for.

Current value metrics, future value indicators, influence value, and cost-to-serve factors will all influence the overall value of a user segment.

  • Current value metrics: Revenue generated, subscription tier, feature usage, team size
  • Future value indicators: Growth trajectory, expansion potential
  • Influence value: Referral behavior, advocacy impact
  • Cost-to-serve factors: Support requirements, implementation complexity, churn risk, acquisition cost

Identifying and tracking these metrics and scoring segments based on this information will help paint a more holistic picture of value. Some segments might not be your biggest revenue drivers today, but they represent significant future opportunities, so you may choose to optimize for them instead of your current biggest spenders.

Step 3: Collect qualitative and quantitative data

Once you’re clear on goals and value metrics, you’re ready to start collecting data for your segmentation analysis. Gathering a multidimensional data set will help you better understand users as the complex humans they are. Types of data that will help your analysis will include:

  • Usage patterns: Frequency, features used, time spent in the product
  • Transactional data: Revenue contribution, plan type, upgrade/downgrade history
  • Behavioral signals: Engagement with key activation points, referral behavior
  • Acquisition source: Channel origin, customer acquisition cost, time to convert
  • Demographic/firmographic data: Company size, industry, role

Most of this data will be sourced from your main quantitative collection tool, such as Google Analytics or your product analytics. But for a truly effective study, you need to supplement all this information with qualitative context. Surveys, session recordings, or user tests can help you better understand why your users are doing what they do.

Step 4: Conduct factor analysis to identify value drivers

Group your data together into a reduced number of independent factors that represent the underlying themes within the dataset. This will help identify value drivers that differentiate your user segments.

For example, in a recent segmentation project, we discovered distinct value factors that formed natural segment groupings:

  • Efficiency seekers: Primarily valued time savings and streamlined workflows
  • Integration power users: Heavily utilized connections to other tools in their stack
  • Data-driven optimizers: Focused on analytics and performance insights
  • Scale-focused operators: Needed enterprise features and team collaboration

Understanding these value drivers helps you move beyond simple demographic segmentation to truly understand what motivates different user groups.
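
As a rough illustration of what this step can look like in practice, here is a minimal sketch using pandas and scikit-learn; the file name and the survey/usage columns are hypothetical.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

# Hypothetical per-user survey and usage variables (one row per respondent).
df = pd.read_csv("segmentation_survey.csv")  # assumed file name
features = ["time_saved_rating", "integration_count", "reports_viewed_per_week",
            "team_seats", "api_calls_per_week"]

X = StandardScaler().fit_transform(df[features])

# Reduce the observed variables to a handful of underlying value drivers.
fa = FactorAnalysis(n_components=4, random_state=0)
factor_scores = fa.fit_transform(X)

# Loadings show which variables define each factor (e.g. an "efficiency" factor).
loadings = pd.DataFrame(fa.components_.T, index=features,
                        columns=[f"factor_{i + 1}" for i in range(4)])
print(loadings.round(2))
```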

Step 5: Apply cluster analysis to form actionable segments

Once you’ve identified the key value drivers, use cluster analysis to group users with similar characteristics. Usually, 3-7 distinct segments emerge from the exercise.

These segments often cross traditional demographic lines, revealing unexpected patterns. For example, power users might not be enterprise customers as you assumed, but mid-market companies with specific workflow needs.

This is also the time to start looking for natural clusters of behavior that indicate high-value segments. As you analyze the clusters, look for key differentiators like:

  1. Usage frequency: Daily users vs. weekly vs. monthly
  2. Feature utilization: Which user flows are most common for each segment
  3. Value perception: What features each segment values most highly
  4. Growth potential: Which segments show increasing usage over time
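
Continuing the hypothetical sketch from Step 4 (it reuses the df, features, and factor_scores variables from that block), here is one way the cluster analysis might look with scikit-learn; choosing k by silhouette score is just one reasonable option.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Try the 3-7 cluster counts that typically emerge and keep the best-separated fit.
best_k, best_score = None, -1.0
for k in range(3, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(factor_scores)
    score = silhouette_score(factor_scores, labels)
    if score > best_score:
        best_k, best_score = k, score

segments = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(factor_scores)
df["segment"] = segments  # attach segment labels back onto the user table

# Profile each segment by its average behavior so you can name it something meaningful.
print(df.groupby("segment")[features].mean().round(2))
```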

Step 6: Quantify segment value and opportunity size

Your data collection, factor analysis, and cluster analysis work together to surface your high-value segments.

Here’s an example of that workflow so far: survey themes on habits, values, and use cases were the inputs to the factor and cluster analyses, which produced segments defined by frequency of product use, customer values, and reason for use.

An example of the workflow to quantify segment value and opportunity size.

For each potential high-value segment, revisit the value metrics you established in step 2 of the process. Calculate the relevant metrics to ensure you’re not just following hunches but making data-backed decisions about where to focus.

The most valuable segments often show strength across multiple metrics, not just in current revenue. For example, a segment with moderate current revenue but excellent retention and high referral rates may be more valuable than a high-revenue segment with poor retention.

You’ll also start to see how your most valuable segments differ from your hypotheses. Maybe it’s not defined by company size but by a specific usage pattern. As a specific example, imagine users who perform at least 3 exports per week AND invite 2+ team members within the first 30 days are 4.5x more likely to upgrade to the enterprise tier within 6 months.

This kind of insight could transform your priorities, focusing on making these specific actions easier and more intuitive, rather than spending time/money on creating new features for other segments.
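
Here is a minimal sketch of how you might test a hypothesis like the export-and-invite example above with pandas; the file name, column names, and thresholds are hypothetical.

```python
import pandas as pd

# Hypothetical per-user table: weekly exports, invites in the first 30 days,
# and a 0/1 flag for upgrading to enterprise within 6 months.
users = pd.read_csv("user_behaviors.csv")  # assumed file name

in_segment = (users["exports_per_week"] >= 3) & (users["invites_first_30_days"] >= 2)

segment_rate = users.loc[in_segment, "upgraded_within_6_months"].mean()
baseline_rate = users.loc[~in_segment, "upgraded_within_6_months"].mean()

print(f"Segment upgrade rate:  {segment_rate:.1%}")
print(f"Baseline upgrade rate: {baseline_rate:.1%}")
print(f"Lift: {segment_rate / baseline_rate:.1f}x more likely to upgrade")
```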

Step 7: Map segments to specific opportunities

The final step is to leverage your knowledge about high-value users to focus optimization efforts. Now, you can connect your segment analysis to concrete optimization opportunities. A few thought starters for this process:

  1. What actions correlate with long-term success for this segment?
  2. Where do users in this segment typically struggle?
  3. What capabilities does this segment need but doesn’t have?
  4. What value propositions connect most strongly to this segment?

You’ll end up with a list of optimization opportunities. To prioritize those efforts and start building a roadmap, we recommend scoring them across these dimensions on a 1-10 scale, then calculating a weighted score that reflects your company’s specific situation and constraints.

  1. Potential revenue impact: How much additional revenue could optimizing for this segment generate?
  2. Implementation effort: How difficult would it be to implement changes for this segment?
  3. Time to results: How quickly can you expect to see meaningful outcomes?
  4. Strategic alignment: How well does focusing on this segment align with your long-term business strategy?

For example, if you’re under pressure to show quick wins, you might weigh “time to results” more heavily. If you’re planning for long-term growth, strategic alignment might carry more weight.
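
Here is a minimal sketch of that weighted scoring step; the opportunities, 1-10 scores, and weights are hypothetical and should be replaced with your own.

```python
# Hypothetical opportunity scores on a 1-10 scale for each dimension.
# Note: "effort" is scored so that a higher number means easier to implement.
opportunities = {
    "streamline export workflow":   {"revenue": 8, "effort": 6, "speed": 7, "alignment": 9},
    "enterprise onboarding flow":   {"revenue": 9, "effort": 3, "speed": 4, "alignment": 8},
    "integration marketplace page": {"revenue": 6, "effort": 7, "speed": 8, "alignment": 5},
}

# Example weighting for a team under pressure to show quick wins ("speed" weighted most).
weights = {"revenue": 0.3, "effort": 0.2, "speed": 0.35, "alignment": 0.15}

def weighted_score(scores: dict) -> float:
    return sum(scores[dim] * w for dim, w in weights.items())

ranked = sorted(opportunities.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{weighted_score(scores):.2f}  {name}")
```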

This will be the start of your roadmap for optimization efforts, ensuring that you focus resources on the right opportunities for your most valuable segments.

Focus on your highest-value segments first, then gradually expand your optimization efforts to secondary segments once you’ve captured the initial value. Always consider potential cross-segment impacts when making changes.

Drive growth with user segmentation and prioritization

As your product and market evolve, so will your user segments. What constitutes a high-value segment today may shift as you introduce new features or enter new markets.

We recommend evaluating your user segments quarterly, with a more comprehensive review annually or whenever you experience significant business changes.

Remember, the path to scaling your SaaS business isn’t through trying to please everyone with generic optimizations. It’s through deeply understanding which user segments create the most value and deliberately focusing your limited resources on enhancing their experience.

Ready to identify and prioritize your most valuable user segments? The Good’s Digital Experience Optimization Program™ can help you discover untapped growth opportunities through expert research, strategic insight, and data-driven experimentation. Contact us to learn more about how our team can help your SaaS business scale faster.

How to Make the Move From Intuition-led to Data-driven https://thegood.com/insights/intuition-led-to-data-driven/ Fri, 28 Mar 2025 21:10:55 +0000 https://thegood.com/?post_type=insights&p=110423 If your bookshelf looks anything like mine, I don’t have to extoll the virtues of data-driven practices to you. Case studies from HBR have shown that A/B testing increased revenue at Bing by 10-25% each year, and  companies that used data to drive decisions were best-positioned to navigate the COVID-19 crisis. But while 83% of […]

If your bookshelf looks anything like mine, I don’t have to extol the virtues of data-driven practices to you. Case studies from HBR have shown that A/B testing increased revenue at Bing by 10-25% each year, and companies that used data to drive decisions were best-positioned to navigate the COVID-19 crisis. But while 83% of CEOs want a data-driven organization, the reality is that many organizations are still largely intuition-run. It takes more than a compelling argument in those contexts to turn the tide.

If you’re spearheading the shift from an intuition-driven to a data-driven practice, it can be an uphill battle and a lonely one at that. We spoke with Hanna Grevelius, CPO at Golf Gamebook & Advisor, and Maggie Paveza, Digital Strategist at The Good, about how they’ve navigated data-imperfect conditions throughout their careers and successfully advocated for data-first principles.

Whether you’re working with limited data or as your company’s first A/B testing specialist, their stories make one thing clear: doing it alone doesn’t have to be so daunting.

Keep reading to hear about:

  • How they learned to work with data
  • How to leverage data to build prioritization intuition
  • When guessing is appropriate
  • How to be an advocate for data-first practices

1. It’s OK to learn on the job

For those with only a passable knowledge of statistics, it can seem intimidating to dive headfirst into data-driven decision making. But it doesn’t take a data science degree to be able to act on good data. In fact, few teams employ full-time analysts at early stages of growth. Most teams get by early on with the skills of a few generalists, who, it turns out, often learn on the job.

“Quantitative methods are something that I’ve learned in my career,” says Maggie Paveza, Senior Digital Strategist at The Good. Having previously worked as a UX Researcher at Usertesting.com, Maggie started with a strong foundation in qualitative research before adding quantitative methods to her toolkit, which she says helps her tell a fuller story. “The qualitative research forms the why; the quantitative research forms the what.”

For Hanna Grevelius, CPO at Golf Gamebook, her relationship with data began through close collaboration with Product Managers.

“My role when I started was in support, answering customer support emails. In trying to understand the scalability of issues, I got to work and talk a lot to product managers who really helped me understand we need to look at the data to know: is it one person who experienced the bug? Is it from a specific version of the app? Is it related to the device or operating system they were on?”

Hanna says learning how to dig for data helped her contextualize customer pain. And through that practice, she built the skills necessary to transition into Product Management. “It was through support that I started to understand that we should look into the data, then eventually I moved over to work on Product Management.”

When she added A/B testing to her toolkit, that took her passion for data to a whole new level.

“It’s so clear when you A/B test that even a small change can have a big impact. When you start seeing the difference, that really sparks an interest.”

2. Use data to define your focus

Once Hanna could confidently dive into the data, she started to use it in her practice, evaluating where traffic hits the app most frequently and focusing on those high-value, high-traffic areas first. This exercise in opportunity sizing taught her that it’s ok to shift focus in light of new data.

Maggie takes a similar approach to prioritization. She uses traffic data to understand what areas of a site or app are highly trafficked, and before proposing a test, she always verifies that an A/B test would see significance within an acceptable amount of time.

“We rely on prioritization methodologies to understand if running a test in an area would have a significant revenue impact and if an A/B test would help us gauge in a number of weeks or longer.”

If you’re just starting out with a new property, Maggie and Hanna both suggest building a foundational understanding of traffic patterns and regularly refining your strategy. Priorities often shift as a result.

3. In the absence of data, start with a guess

One valuable skill that came later in their careers was understanding the value of a lead. Boosting form fills can feel invigorating, but without an understanding of what portion of that audience might become a deal later, it’s hard to know if your work is making a difference. Assigning a dollar amount to a lead is a powerful tool to evaluate your performance.

But if you’re joining an organization without mature data practices, leads often have no value assigned. And without institutional knowledge, it can be intimidating to make a guesstimate. But to Hanna, it’s worth starting with a guess to set initial priorities.

Hanna advises using a rough calculation to estimate the value of a metric (with things like average deal value and percent of pipeline that converts), which can help you get an early read.

“Over time, you can start adjusting it higher or lower. But trying to put a value on it and making decisions based on that is the best way to still work in a data-driven way even when you don’t have all the answers.”

Hanna warns that an estimate is just that, and that staying above board about where the data comes from is key to retaining trust.

“What’s really important in that estimation reporting is that you’re always super clear that you’re estimating—that it could be a lot higher and a lot lower, because if you start making critical budget decisions on it, you can end up in a dangerous situation.”
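
As a concrete illustration of the kind of rough calculation Hanna describes, here is a minimal sketch; every number in it is a hypothetical placeholder, not a benchmark.

```python
# Hypothetical inputs: replace with your own CRM data or best guesses.
avg_deal_value = 12_000           # average closed-won deal, in dollars
lead_to_opportunity_rate = 0.20   # share of leads that become pipeline
opportunity_to_close_rate = 0.25  # share of pipeline that converts to revenue

estimated_lead_value = avg_deal_value * lead_to_opportunity_rate * opportunity_to_close_rate
print(f"Estimated value per lead: ${estimated_lead_value:,.0f}")  # $600
```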

4. Be the change you want to see

For those who know the clarity that data can bring to the decision-making process, working within a data-poor organization can be challenging. But Hanna says it’s fairly easy to lead others to data advocacy, even if you’re not in the C-suite. “Most people nowadays want to be data-driven,” Hanna says. In her opinion, it doesn’t take a fancy title to turn others into advocates.

“If you are working in an org where you are the only person who is responsible for testing, the best thing you can do is try to spread that knowledge. Get them involved and feel a sense of ownership. Try to make it so that you’re not the only one who cares about A/B testing and being data-driven.”

To build stewardship throughout the organization, Hanna’s advice is to walk colleagues through your thinking, specifically the potential upside of testing and the risks of not testing. “That can help people who are not so interested in testing to be a bit more curious and to want to understand.”

In Hanna’s experience, your passion can be quite contagious. “Data and testing, it opens up a world that is so fun.”

As for how she does it, Hanna shares her excitement by showing rather than telling. “As soon as you have the test going, share a bit of the data early on,” she says. Rather than being cagey about how inaccurate early test data is, she uses it as a teaching moment.

“All of us who work in the testing space know that data from one day or three days is probably going to be completely wrong, and you can say that also. But show it to that person. Show that ‘this is super early, we have no idea if this is going to be correct or not, and stat sig, but after one day this is what it looks like.’”

And of course, once you run successful tests down the line, Maggie’s experience tells her that there is nothing more powerful than sharing a win with your team.

Artfully navigating the shift

Advocating for data-driven decision-making in intuition-led companies isn’t always easy, but it’s a challenge worth taking on.

As Maggie and Hanna’s experiences show, starting small, whether by learning on the job, prioritizing based on data, making informed estimates, or sharing early insights, can lead to big shifts in mindset.

By fostering curiosity and collaboration, you can help transform your organization’s approach to decision-making, making data a natural and valued part of the process.

B2B Research Doesn’t Have To Be So Hard https://thegood.com/insights/b2b-research/ Mon, 03 Feb 2025 19:05:30 +0000 https://thegood.com/?post_type=insights&p=110269 Whether your users are knowledge workers busy with deadlines or car mechanics who rarely leave the garage, connecting with folks who buy and use B2B software products is challenging. “B2B is definitely more complex,” says Hannah Shamji, former psychotherapist and current B2B research specialist. “There are so many stakeholders involved in any type of decision. […]

Whether your users are knowledge workers busy with deadlines or car mechanics who rarely leave the garage, connecting with folks who buy and use B2B software products is challenging.

“B2B is definitely more complex,” says Hannah Shamji, former psychotherapist and current B2B research specialist. “There are so many stakeholders involved in any type of decision. Having to juggle all of those, it’s just much more of a web.”

Tools needed throughout the workday are specialized—whether for accountants, mechanics or marketers—so your participant pool is smaller by design. But it’s not just the size of the total addressable market that makes the research challenging. It can be hard to compel B2B users and decision-makers to participate in a study.

We’ve heard numerous examples of “hard to compel” users:

  • High-earning executives who can’t spare an hour
  • Managers responsible for a large task load
  • Operators who spend near-zero time in their inbox

“There’s no incentive that you can pay that would buy their time,” says Benson Low, a 20-year veteran of UX Research and a board member of ResearchOps.

In Low’s experience, compelling those who make a B2B purchase decision (often c-suite executives) with financial incentives alone doesn’t work. “They’re not going to care if you’re paying them $1,000. They probably wouldn’t give you their time.”

Researching in the enterprise space adds another layer of complexity to the recruitment process.

“You can almost play the same playbook in small to medium-sized businesses—same research methodology, approach, even recruitment,” says Low. But, in large organizations, the approach will look quite different.

“When it starts getting difficult is when your organization has an account management team supporting specific businesses. That’s where you have to work with that account management team first before you even reach out to customers.”

Overcoming the challenge

We talked to four experts with a combined 80+ years of experience in product about how they circumvent the challenges of B2B research. Although their approaches varied, one theme was clear: their path to meaningful B2B research has been through relationships.

“It’s about relationships, ultimately.”

Read on to hear how four research pros overcome challenges in B2B research.

1. Learn the problem space before you talk to customers

Because actual users will likely be hard to connect with, using their time to learn the basic details of their role or industry would be a waste.

As such, Low recommends doing internal research before even talking to customers.

“Know the product and how it's positioned in the market. Understand the business. Then you can design the right capabilities, right research sequencing, etc.”

Our experts mentioned several methods for this discovery:

  • Listening to sales calls
  • Interviewing CX staff
  • Reviewing service blueprints
  • Scouring communities to learn about the users your product serves

Internal research gives you the foundation you need to interpret customer feedback meaningfully and assures that you don’t waste precious customer interviews just coming up to speed with their lingo.

2. Relationships, relationships, relationships

Marketing, sales, and CX have a wealth of knowledge and connections that can ease the research process.

Paul Stevens, a UX leader who’s been in digital design long enough to have A/B tested print mailers, heralds the power of a relationship with your sales team.

“Really good salespeople will get you into a customer. They’ve got relationships; they can get you in. They can make that all-important introduction,” says Stevens. “You need to be best friends with your sales team.”

But salespeople won’t be ready to give up their contacts without some established trust, so our experts emphasized building trust and connection rather than focusing on information extraction.

To earn their trust, show them you’re aligned with their goals. Understand their OKRs so you can frame your initiatives in a way that is mutually beneficial. “You essentially have to research the stakeholder in order to get them to buy in on the research,” says Shamji.

3. Use team members as research proxies

Beyond just making intros, your sales team can actually be a partner in research, working prototypes and early feedback into their sales calls in a lightweight form of “testing.”

René Bastijans, who describes himself as a “recovering Product Manager,” is currently a lead researcher at a growth-stage startup. As a research team of one in a company of over 100 employees, he’s found ways to loop his colleagues into the research process.

His sales team is trained to lightly survey prospects during sales calls and report back to the wider team. This creates a healthy feedback loop that keeps everyone abreast of evolving user needs.

“We’ve trained our sales team to ask for specific data and enter it into Salesforce. Researchers and the product team have access to these data, and therefore, sales has allowed us to keep a pretty good pulse on the market.”

But it’s not just a few questions here and there that sales can support with. Bastijans works with the sales team to get quick feedback on product updates in a lightweight form of testing.

“We give them a couple of slides and they slot them into their conversation when they speak with prospects to get input from real people. That’s been working really well for us.”

Stevens advocates for relying on your team to conduct field research where you can’t.

“If you’re in a global organization, you want to do research in a country that you’re not in, and you can’t fly around the world to do it, you can put an education program together. Find like-minded people within the org. In my experience, there have been marketers in each country and they are usually aligned with design and research.”

Sending them out with a camera, a notepad, and a directive to report back on what they see has allowed Stevens to extend his reach beyond global borders. And he says teams love to participate. “I’ve never had anybody snicker. I’ve had them say ‘that’s fantastic; how can I get involved?’”

4. Stand up a Customer Advisory Board

Because B2B customers are impossible to engage en masse, one way to circumvent the challenge of recruitment is to create a “board” of customers that gives regular feedback in exchange for value. As Low describes, the relationship should benefit both parties.

“There's a benefit for them in that we provide them with training, support, discounts, or access to new features. On the flip side, we expect them to give us feedback—how their business has been going and what their needs are.”

Sometimes called “Sales Advisory Boards” or “Customer Advisory Boards,” these can start as a partner to either sales or product, but their insights will support various disciplines within the organization.

Stevens has had success in previous roles with what he calls “Customer Days,” in which the Customer Advisory Board spends an entire day on-site, rotating between different practice groups to provide insight to various business functions. “It’s not a full day of UX testing, but researchers will have a slot to talk with them.” It’s a great way to regularly solicit the perspectives of your customers.

5. Create an expert-level playbook for B2B interviews

When you do get a chance to interview B2B clients and users, Low emphasizes the need to take it seriously: put senior staff on the task and do adequate preparation.

“Make sure you pay the utmost respect to those hard-to-recruit participants. You want to make the best use of their time and be able to ask questions effectively while being able to protect the organization's brand, reputation, and business.”

Low recommends an extensive preparation process for B2B interviews, including researching the participant’s background, reviewing their account details, and chatting with their account manager, which Low says may reveal any potentially sensitive discussion points. “You just don't want anything to impact the business unintentionally.”

But preparation alone doesn’t guarantee an effective conversation—you have to have experience as well. Low recommends that only very experienced moderators conduct conversations with existing clients. “You don’t want them to be in a power dynamic issue. Then they can’t execute the research effectively.”

Ineffective moderation risks producing research that can’t be used effectively and a feeling of time wasted for the participant.

“Especially considering how small this panel likely is and how small the population is. You likely don't have too many enterprise customers, and you might want to talk to them again next year. So, build a rapport and make sure that you are able to access them. If not yourself, then your peers—other researchers, designers, or product managers on your team.”

6. Use automations to maintain trust (and stay sane)

If you’ve been introduced to customers via your sales team, it can feel like you’ve won a golden ticket. But our experts remind us that trust needs to be maintained, so they’ve built workflows that foster trust and transparency.

Bastijans uses Zapier automations that push updates at critical customer touchpoints:

  • When a prospect books an interview
  • When interviews take place

The automations send a Slack message to the sales team when a conversation is booked or takes place, then automatically import CRM data and update a Notion page. “Zapier has been a really huge help for me just automating mundane tasks that I would have to otherwise do manually.” As a research team of one, Bastijans has been able to ramp up his output without upping the workload.
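
For teams that prefer to script this kind of workflow directly rather than use a no-code tool, the pattern is simple. Below is a minimal TypeScript sketch that posts an update to a Slack channel via an incoming webhook whenever an interview is booked. The webhook URL and the `Interview` shape are hypothetical placeholders, not Bastijans’ actual setup.

```typescript
// Minimal sketch: notify the sales team in Slack when a research interview is booked.
// The webhook URL below is a hypothetical placeholder — create your own in Slack.
const SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL";

interface Interview {
  participant: string;   // e.g. "Enterprise customer — Director of Ops" (hypothetical)
  scheduledFor: string;  // ISO date string from your scheduling tool
  moderator: string;     // researcher running the session
}

async function notifySalesOfBooking(interview: Interview): Promise<void> {
  // Slack incoming webhooks accept a simple JSON payload with a "text" field.
  const payload = {
    text: `New research interview booked: ${interview.participant} on ${interview.scheduledFor} (moderator: ${interview.moderator}).`,
  };

  const res = await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });

  if (!res.ok) {
    throw new Error(`Slack webhook failed with status ${res.status}`);
  }
}

// Usage: call this from whatever fires when a booking is confirmed
// (a calendar webhook, a CRM trigger, or a manual script).
notifySalesOfBooking({
  participant: "Hypothetical customer",
  scheduledFor: "2025-01-15T10:00:00Z",
  moderator: "Research team of one",
}).catch(console.error);
```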

7. Hire an outsider to play a neutral third party

Depending on your research objectives, it can be hard to solicit honest feedback from your recruits. To circumvent this issue, Low recommends occasionally using outside firms to act as a neutral third party.

“If you can’t do the research because of baggage you have representing your company, you might do it in a roundabout way by getting a third party involved. This way, an independent researcher, consultancy, or research firm might do this centrally, saying ‘we’re just doing industry research,’ they can interview all sorts of customers without damaging anything.”

Agencies can be especially useful in projects that involve talking to the customers of your competitors, says Low. While participants might be hesitant to give honest feedback to a direct competitor of a company they’ve been loyal to, agencies can frame their work more neutrally to enable participants to give candid feedback.

“Essentially, you’re trying to find a Switzerland. Someone that is unbiased with no interconnection that could cloud the insights that you want to get out. So you get, from a data perspective, cleaner insights.”

Plus, agencies can often work much faster, says Low. “The difficult B2B customers that you can’t get to, or have constraints or limitations to access, an independent consultancy might do much quicker.”

8. Make sure the insights stick

While it’s one thing to find the workflows and relationships that enable excellent research, the endeavor is fruitless unless you know how to stick the landing, says Shamji. “It's great to have all the data, but are they going to action on it? Is it going to help make decisions?”

With many teams globally distributed and a typical ratio of one researcher per 50 developers, the average researcher is, as Stevens puts it, “a very, very, very small fish in a very big pond.”

As a result, our experts say that visibility is key.

To build visibility and buy-in, Stevens suggests a healthy dose of self-promotion: of yourself, the importance of your role, and the outcomes that your research enables.

“As soon as you've got any results, you have to publicize it as much as you can, but especially the right eyes. Depending on the relationships that you have in the business, how comfortable you are, and what the C-Suite is like, there's nothing wrong with dropping a Slack message to the CEO.”

Bastijans solicits buy-in and builds visibility through what he calls “Learning Lunches,” a 25-minute presentation with Q&A, designed to circulate the latest research and keep the team rowing in the same direction.

And for research teams in their infancy, Low says it’s especially important to advocate for the importance of research within your organization. “When you’re establishing a research team, people don’t know what we do.” Rituals like Slack memos, Learning Lunches, and direct conversations can go a long way toward building user-centered thinking within your organization.

The importance of B2B research

Despite the numerous challenges of B2B research, our experts assured us that it is workable.

The sales journey is complex, the personas are many, and the execution needs to be handled delicately. It’s why Shamji sees B2B contexts as the best application for UX research.

“B2B is just more of an obvious place for research,” says Shamji. “All of the touch points like customer success and sales—they're all seeing different parts of the process, so it just kind of warrants a researcher's 360 view of what's going on.”

It’s also why Low says AI isn’t coming for the enterprise researcher’s job anytime soon. Today’s AI interventions just aren’t prepared for the task. “I actually don’t think AI is going to take over our jobs.”

The post B2B Research Doesn’t Have To Be So Hard appeared first on The Good.

What is The Strategic Value of Ongoing User Research in SaaS? https://thegood.com/insights/saas-user-research/ Thu, 21 Nov 2024 15:17:13 +0000

Whether you’re unearthing new use cases for a core audience, testing value propositions, or mitigating the risk of a feature flop via experimentation, savvy product teams leverage research throughout the product life cycle to improve usability and increase retention.

Judd Antin perhaps put the value of user experience research (UXR) best:

“When research makes a product more usable and accessible, engagement goes up, and churn rates go down. Companies need that for the bottom line. Users get a better product. Win-win.”

Still, the latest reports put typical UX research staffing at a ratio of one researcher to every 50 developers. Perhaps as a result of this imbalance, research roadmaps can be excessively long and often fail to respond to the real-time needs of product specialists.

“Getting on a roadmap can be very tricky,” says Heidi Dean, Principal Product-Led Growth Manager at Adobe. “With a limited amount of internal resources shared across a matrixed environment, you sometimes have to rely on outside help to get the insights you need.”

When DIY Just Won’t Do

In the era of “founder mode,” many product managers (PMs) and product marketing managers (PMMs) are circumventing internal resources and doing ad-hoc research themselves. And while the DIY approach can be a solution to long lead times, it’s not always feasible. A careful combination of training, tooling, and time is required for non-researchers to do their own research.

Take, for example, moderated user interviews and usability studies. “It’s a skill set that I’ve had to try and hone,” says Dean. With permission, desire, and training, Dean has upskilled in the methodologies, but she’s conscious that not everyone in a product role has the time or opportunity to do so. “Sometimes it’s hard to find the time to recruit and talk to customers,” she says.

It’s not just the lack of formal training preventing PMs from DIY-ing it. Access barriers and time also play a significant role.

While formal research teams are generally equipped with tools like Lyssna, Rally, and Pendo, many research tools operate on a per-seat basis. As a result, access to “seats” is often tightly guarded—and PMs are often left off the roster.

These access barriers can make ad-hoc projects hard to streamline. In previous roles, Dean has seen this play out as a permission-seeking exercise that adds lead time to even simple projects. “There’s a lot of overhead that comes with getting access to a system like that. It can be a heavy lift.”

Add to that the challenge of fitting research into an already-packed schedule, and the barriers to DIY research can feel insurmountable. “It’s not an easy thing to slot into existing work and commitments,” says Dean.

Using Outside Experts to Supplement Research

Luckily, with support from firms like The Good, you don’t need to be an expert in user testing to get quick insights. PMs with already-packed research roadmaps and busy schedules hire outside experts like us to cut the line and get results quicker.

“Using part of our budget to gain customer insights has been invaluable for decision-making. The insights from user research have helped us unlock new opportunities and validate hypotheses,” says Dean.

The impact isn’t just doing more with less, but doing it reliably faster, says Software Director of Product Marketing Gabrielle Nouhra, who leverages The Good for UX research, rapid testing, and on-site experimentation, and thinks of The Good as “an extension to the product team.”

“The speed at which we obtain actionable findings has been impressive. We are receiving rapid results within weeks and taking immediate action based on the findings, unlike past survey research that often took much longer to yield insights.”

The Multiplying Force of Long-term Partners

Operating somewhat behind the scenes, outside vendors can be a multiplying force that enables product managers, according to Dean. “Your team’s work is additive to our roadmap and helps us meet the demands of our stakeholders looking for customer insights.”

If a good research partner amplifies their impact, why aren’t more product teams leveraging research vendors?

It all comes down to cost and time.

Whether it’s familiarizing them with your business model, metrics, or past insights, standing up a relationship with a new vendor is work, and the process is imprecise. “You try to do the best download that you can, but things are always going to get missed or misinterpreted,” says Dean.

That investment cost is why Dean says that when comparing a one-off engagement to a long-term partner on retainer, “there’s no comparison.”

“When you work with somebody long term, they learn your products, the organization and your stakeholders. They understand the pain points that you’re dealing with, and then you just develop a shorthand.”

Retainer relationships mean time saved, which is why, in Dean’s view, dollars spent with a long-term partner go a lot farther. “Using a partner to help with our research needs has been an efficient use of our resources,” she says.

That manifests in not just time saved but a less arduous process altogether. “It’s streamlining things. It makes everything easier. I can get a lot more done using you guys than even I can with my team.”

Assuring the Success of the Relationship

Once you’ve found the right partner and begun building a backlog of research, the results are compounding. Partners with historical product knowledge can mitigate the pain of reorgs by retaining institutional knowledge.

They can also act as a scaffolding to support new hires. Nouhra knows this firsthand. When she was onboarded to a new role, her first task was to review existing research. Having both a catalog of existing, high-quality research and a partner at The Good who could walk her through it has been an empowering resource that enabled her to dive in quickly.

“You brought me up to speed on so much when I joined—beyond test results and our catalog of research, you were able to share what product updates had been proposed, which were implemented, and what the tradeoffs were to get them live. This gave me a headstart with my product and cross-functional teams.”

Knowing that all good things take time, we asked Dean and Nouhra for their tips on building a lasting, high-impact partnership. Here’s what they said.

Invest in Up-front Relationship Building

While Nouhra raves about the time-savings of a good partner, she cautions that the dividends are born of an up-front investment. Acting purposefully at the outset can set you up for success. “Take the time to invest in the upfront so that you can reap the benefits of the partnership down the line.”

“Having a partner that's always by your side, you've already done the investment. You can actually get a lot more out of it in the short term because they know the background, and they know your customers, and they know your site experience.”

Include Your Vendor in the Scoping Conversations

When other internal stakeholders are involved, Dean recommends letting the vendor in early, even during the scoping phase. That way, they can ask clarifying questions and quickly speak about the budget implications of various methodologies. It’s an approach that saves time, and it helps identify assumptions and biases that might otherwise arise if the conversation stayed internal, according to Dean.

“When the vendor asks questions, it can draw out the unspoken details. It comes across as ‘I want to make sure we do the best work for you guys.’ So there's a built-in trust that we're all trying to get to, and there's a joint exercise of figuring out what that is.”

Establish a Client-side Conduit

To assure mutual success, Dean recommends assigning a single client-side person to own the vendor-stakeholder relationship and handle any issues that arise.

While it’s possible that no issues will arise, in Dean’s view, just having someone client-side own the relationship makes her stakeholders feel supported. “They know that I'm personally vested in their success. I'm not just throwing them over the fence to a vendor.”

Getting Started

The benefits of user research in SaaS are proven, and they extend far beyond a single team. Conducting frequent, consistent research delivers compounding results. Your whole organization can benefit from the learnings if you pass user insights between development, sales, marketing, product, and more.

But if, like many product leaders, you find your own research getting pushed to the back burner by competing initiatives, ditch the one-off engagement approach.

To get the results you’re looking for, you need to commit to a long-term research partner. Invest time and resources upfront, and you’ll be rewarded with insights that will propel growth. The longer you engage with the right partner, the easier it will be to glean more insights and, in turn, improve your product experience, marketing, and more.

If you want to understand whether The Good might be that long-term partner for your business, get in touch. We start with a thorough audit of your current practices and digital experience to ensure you get everything you need (and nothing you don’t) from working with us. Check out our program to learn more.

The post What is The Strategic Value of Ongoing User Research in SaaS? appeared first on The Good.

]]>
How To Leverage The Priming & Expectation Setting Heuristic To Drive Conversions https://thegood.com/insights/priming/ Fri, 27 Sep 2024 18:35:32 +0000

Have you ever made it to the end of a tediously long shopping process only to get hit at checkout with a shipping fee that doubles your cart cost? Or have you tried to sign up for an online account, only to be forced to download an additional app to access the service?

There is nothing more frustrating than feeling like a company has pulled a bait and switch. In user experience design, we call this poor priming and expectation-setting, and it violates one of the six Heuristics for Digital Experience Optimization™.

Heuristics, by definition, are mental shortcuts used to solve problems quickly and effectively. They allow people to speed up analysis and make informed, efficient decisions. Knowing our brains are wired to take shortcuts and make quick decisions, you can imagine how heuristics play a crucial role in how customers navigate and perceive digital experiences.


Digital experiences that violate user heuristics are bad for users and bad for business. So, let’s take a look at how to address the priming and expectation-setting heuristic in a way that improves the user experience.

What is the priming and expectation-setting heuristic?

Priming and expectation setting is a heuristic that sets users up for success by clarifying how the interface will perform, indicating what actions users should take, and managing user expectations.

Digital experiences that adhere to this heuristic may apply a tactic like explicitly mentioning free shipping early in the journey to reduce cart abandonment rates or sharing estimated delivery dates to manage customer expectations.

Priming and expectation setting is one of the six Heuristics of Digital Experience Optimization™ developed by our team at The Good. The full list includes:

  1. Priming & Expectation Setting
  2. Trust & Authority
  3. Ease
  4. Benefits & Unique Selling Points
  5. Directional Guidance
  6. Incentives

These heuristics theme common optimization issues and opportunities. Analyzing your digital experience with heuristics in mind keeps the user at the center of analyses and guides your strategy toward building journeys that feel familiar, do what they say, and function intuitively.

Identify violations of this heuristic with user research patterns

Before you can start to address any heuristic to improve the digital experience, you have to understand if, where, and when users are getting stuck.

To understand if your digital experience is violating the priming and expectation-setting heuristic, a great place to start is user research. Set goals, pick the right method for your needs, and start talking to your users (or observing their behavior).

As you analyze the research, look for patterns including:

  • Rage clicks: A user clicks on an element multiple times without getting the desired or expected result. Usually, this signifies unclear system status, meaning your interface doesn’t provide enough cues, semantics, or timely feedback to keep users informed (see the detection sketch after this list).
  • Low directness: Users can be seen scrolling through the site looking for specific content, struggling to find items of interest, and possibly hesitating on the site, suggesting uncertainty. This can be a sign of unmet expectations, meaning your system’s interactions, navigation, or language don’t match users’ mental models of real-world or site conventions.
  • Price sensitivity: Users express concern about product or shipping prices, potentially leading them to abandon. This often indicates poor priming because of unclear or missing elements in the interface that typically guide user behavior and inform them of what to expect.
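
As a rough illustration of how the rage-click pattern can be surfaced from behavioral data, here is a minimal TypeScript sketch that flags an element when it receives several clicks in a short window. The thresholds (three clicks within one second) are illustrative assumptions, not an industry standard, and in practice you would send the event to your analytics tool rather than the console.

```typescript
// Minimal sketch: flag possible "rage clicks" — repeated clicks on the same
// element within a short window — so they can be logged for later analysis.
const CLICK_THRESHOLD = 3; // illustrative: 3+ clicks counts as a burst
const WINDOW_MS = 1000;    // illustrative: within one second

const recentClicks = new Map<EventTarget, number[]>();

document.addEventListener("click", (event) => {
  const target = event.target;
  if (!target) return;

  const now = Date.now();
  // Keep only clicks on this element that fall inside the time window.
  const timestamps = (recentClicks.get(target) ?? []).filter(
    (t) => now - t < WINDOW_MS
  );
  timestamps.push(now);
  recentClicks.set(target, timestamps);

  if (timestamps.length >= CLICK_THRESHOLD) {
    // In practice, forward this to your analytics pipeline instead.
    console.warn("Possible rage click detected on", target);
    recentClicks.delete(target); // reset so one burst is only reported once
  }
});
```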

The good news is once you identify the patterns, you can address them with tactics to improve priming and expectation setting. Doing so is an ethical way to improve customer sentiment and increase conversions.

Real-life examples of using priming and expectation-setting to improve the user experience

Most companies have a chance to improve priming and expectation setting across their digital journey. Here are a few real-world examples that can inspire your efforts to adhere more closely to the heuristic. You might see some pretty compelling rewards for your improvements.

Offline download delivery priming

We worked with the largest digital repair manual database, eManualOnline, to find opportunities to improve their on-site experience. Following the process outlined above for identifying heuristic violations, we conducted user testing. It revealed that users were confused about how eManualOnline delivers their manuals, as some are digital downloads and others are physical editions.

Because of the mixed delivery-method messaging throughout the site, customers didn’t fully trust the purchase experience.

We decided to test highlighting delivery methods to clear up the confusion and increase transactions. We ran an A/B test with two experiences: a control and a variant that made delivery methods clearer at various touchpoints.

The variant with clear delivery method language showed a 14% lift over the control. Clarifying access methods for offline downloads resulted in stronger purchase intent. This is a clear example of priming and expectation setting at work.
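
To make the arithmetic behind reading a result like this concrete, here is a small TypeScript sketch that computes conversion rates, relative lift, and a two-proportion z-test from raw counts. The visitor and conversion numbers are hypothetical and are not the eManualOnline data; the sketch only shows how a lift figure and its significance are derived.

```typescript
// Minimal sketch: relative lift and a two-proportion z-test for an A/B test.
// All numbers below are hypothetical, not the eManualOnline results.
function normalCdf(z: number): number {
  // Abramowitz–Stegun style approximation of the standard normal CDF.
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = Math.exp((-z * z) / 2) / Math.sqrt(2 * Math.PI);
  const tail =
    d * t * (0.31938153 + t * (-0.356563782 + t * (1.781477937 +
      t * (-1.821255978 + t * 1.330274429))));
  return z >= 0 ? 1 - tail : tail;
}

function abTestSummary(
  controlVisitors: number, controlConversions: number,
  variantVisitors: number, variantConversions: number
) {
  const pControl = controlConversions / controlVisitors;
  const pVariant = variantConversions / variantVisitors;
  const lift = (pVariant - pControl) / pControl; // relative lift

  // Pooled two-proportion z-test.
  const pPool = (controlConversions + variantConversions) /
                (controlVisitors + variantVisitors);
  const se = Math.sqrt(pPool * (1 - pPool) *
                (1 / controlVisitors + 1 / variantVisitors));
  const z = (pVariant - pControl) / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z))); // two-sided

  return { pControl, pVariant, lift, z, pValue };
}

// Hypothetical example: 5,000 visitors per arm.
console.log(abTestSummary(5000, 400, 5000, 456));
// ≈ 8% vs 9.12% conversion — a 14% relative lift; pValue (≈ 0.045 here)
// tells you whether that lift is statistically distinguishable from noise.
```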

Permission priming in user onboarding

When onboarding a user to a new digital experience (app or desktop service), priming and expectation setting can strongly impact churn metrics.

Here’s a good example from Scan & Translate. It reminds users that in order to use the scan features and gain value from the app, they need to grant camera permissions to the system.

Preparing, or priming, a user before the OS permission prompt appears makes it more likely that they’ll grant your request. This is vitally important because your product might not be able to provide value to the user without that access.

An example of permission priming on the Scan and Translate app.
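
The same idea translates to the web. Below is a minimal TypeScript sketch of a pre-permission step for a browser app: explain why camera access is needed and wait for the user’s explicit go-ahead before triggering the native `getUserMedia` prompt. This is a web analogue of the mobile pattern above, not Scan & Translate’s implementation, and the element IDs are hypothetical.

```typescript
// Minimal sketch: prime the user before the browser's native camera prompt.
// The element IDs below are hypothetical — adapt them to your own markup.
const explainer = document.getElementById("camera-explainer")!;   // "We use your camera to scan documents"
const continueBtn = document.getElementById("camera-continue")!;  // "Enable camera" button

async function requestCameraWithPriming(): Promise<MediaStream | null> {
  // Step 1: show an in-product explanation of why the permission is needed.
  explainer.hidden = false;

  // Step 2: only trigger the native prompt after the user opts in.
  await new Promise<void>((resolve) =>
    continueBtn.addEventListener("click", () => resolve(), { once: true })
  );
  explainer.hidden = true;

  try {
    // The browser/OS permission dialog appears here, after the user is primed.
    return await navigator.mediaDevices.getUserMedia({ video: true });
  } catch {
    // Denied or unavailable: degrade gracefully instead of failing silently.
    return null;
  }
}
```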

Expectation-setting without compromising brand language

Residential furnishings brand Knoll has a range of uniquely crafted and handmade products. The care and detail that goes into each piece means longer lead times on shipping and delivery.

When we took on a project to improve their digital experience, we tested out adjusting their copy to better reflect the craftsmanship of their work.

Changing the wording from “Lead time: 8 weeks” to “Made for you. Ships in 8 weeks” led to our biggest test win of the year in terms of revenue.

It created synergy between the brand’s needs (priming purchasers that shipment won’t happen for a while) and the customer’s needs (understanding why shipment won’t happen for a while). It also had the benefit of turning a challenge (long lead times) into a compelling conversion booster (custom-made).

Image demonstrating how Knoll uses expectation setting priming for their delivery timeline.

Priming in form design

Priming is one of the first principles of form design. It keeps users on the path to form completion by clearly setting expectations and ensuring they don’t drop off due to surprises.

Priming in form design takes many shapes, but it’s most often delivered through progress bars. A progress bar tells users what to expect from the process before and during completion, setting expectations so they come prepared to fill out the form fully.

See this example from Etsy. The company features a progress bar with clear labels to prime users about what to expect during the mobile checkout process.

An example of form design priming from Etsy.
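
For teams building their own checkout or signup flows, here is a minimal TypeScript sketch of a step indicator that primes users on how long the form is and where they are in it. The step labels and element ID are illustrative placeholders, not Etsy’s implementation.

```typescript
// Minimal sketch: a step indicator that primes users on form length and position.
// The labels and element ID below are illustrative placeholders.
const steps = ["Shipping", "Delivery", "Payment", "Review"];

function renderProgress(container: HTMLElement, currentStep: number): void {
  const percent = Math.round(((currentStep + 1) / steps.length) * 100);

  // "Step 3 of 4: Payment" sets the expectation before the user commits.
  container.innerHTML = `
    <p>Step ${currentStep + 1} of ${steps.length}: ${steps[currentStep]}</p>
    <progress max="100" value="${percent}"></progress>
  `;
}

// Usage: update the indicator whenever the user advances a step.
const el = document.getElementById("checkout-progress");
if (el) renderProgress(el, 2); // third step of four
```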

To set expectations with a form, you can also be clear about the end result or value users receive upon completing the form. This can generate excitement for the product, motivating form completion.

The “Try Demo” button from ServiceNow, shown below, primes users to know what they can expect after they fill out the form. Users will get to demo the product and can also expect everything in the bulleted list to the left.

An image from the ServiceNow website showing the use of priming and expectation-setting in form completion.

Using heuristics to theme your roadmap of opportunities

To transform the priming and expectation-setting heuristic into an actionable improvement opportunity for your digital property, consider building a strategic roadmap.

Leverage user research to identify common patterns indicating violation of the six Heuristics for Digital Experience Optimization™. Prioritize those opportunities based on their potential for impacting KPIs. Then, develop a plan to test improvements with a theme-based roadmap.

Taking the time to really understand where users are getting stuck in your digital experience will set you up to make more efficient and impactful decisions.

Our team can support you on your journey through a custom Digital Experience Optimization Program™. You’ll have access to an entire team of researchers, strategists, designers, and developers who will help remove violations of the priming and expectation-setting heuristic (and more).

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post How To Leverage The Priming & Expectation Setting Heuristic To Drive Conversions appeared first on The Good.
