MaxDiff Analysis: A Case Study On How to Identify Which Benefits Actually Build Customer Trust

When a SaaS company approached us after noticing friction in their trial-to-paid conversion funnel, they had a specific challenge: their website was generating demo requests, but prospects weren’t converting to customers. User research revealed a trust problem. Potential buyers were saying things like, “I need more proof this will actually work for a company like ours,” and “How do I know this won’t be another failed implementation?”

The company had assembled a list of proof points they could showcase on their homepage: years in business, number of integrations, customer counts, implementation guarantees, security certifications, industry awards, analyst recognition, and more. But they only had space to highlight four of these benefits prominently below their hero section. They faced the classic messaging dilemma: which trust signals would actually move the needle with prospects evaluating B2B software?

This is where MaxDiff analysis becomes valuable. Instead of relying on stakeholder opinions or generic best practices, we could let their target buyers vote with data on what mattered most.

What makes MaxDiff analysis different from other survey methods

MaxDiff analysis (short for Maximum Difference Scaling) is a research methodology that forces trade-offs. Rather than asking people to rate items individually on a scale, MaxDiff presents sets of options and asks participants to identify the most and least important items in each set. This forced-choice format reveals true preferences because people can’t rate everything as “very important.”

Here’s why this matters: traditional rating scales often produce compressed results where everything scores high. When you ask customers, “How important is X on a scale of 1-10?” most people will hover around 7 or 8 for anything remotely relevant. You end up with a spreadsheet full of similar numbers and no clear direction.

MaxDiff cuts through that noise. By repeatedly asking “which of these five options matters most to you, and which matters least?” across different combinations, you build a statistical picture of relative importance. The math behind MaxDiff generates a best-worst score for each item, showing not just which options are preferred, but by how much.

For digital experience optimization, this methodology is particularly useful when you need to prioritize limited real estate on a website, determine which features to build first, or figure out which messaging will actually differentiate your brand.

How we structured the MaxDiff study for maximum insight

In the project for our client, we started by defining the target audience precisely. The company was a B2B SaaS platform serving mid-market operations teams, so we recruited 60 participants who matched their customer profile: director-level or above at companies with 50-500 employees, working in operations or supply chain roles, currently using at least two SaaS tools in their workflow, and actively evaluating solutions within the past six months.

From the initial audit and stakeholder interviews, we identified 11 potential trust signals the company could emphasize on its homepage. These included things like:

  • Concrete numbers (customer counts, uptime percentages, integrations available)
  • Credentials (security certifications, enterprise clients)
  • Promises (implementation timelines, support response times, money-back guarantees)
  • And more

Each represented something the company could truthfully claim, but we needed to know which ones would build the most trust with prospects evaluating the platform.

The survey design was straightforward. Each participant saw these 11 benefits randomized into multiple sets of five items. For each set, they selected the most important factor and the least important factor when considering whether to adopt this type of software. Participants completed several rounds of these comparisons, seeing different combinations each time.

This approach gave us enough data points to calculate a robust best-worst score for each benefit: the number of times it was selected as “most important” minus the number of times it was selected as “least important.” Positive scores indicate strong preference, negative scores indicate low importance, and the magnitude of the score shows the strength of feeling.
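To make the tallying concrete, here is a minimal Python sketch of the mechanics: it randomizes 11 items into repeated sets of five, records a simulated most/least pick per set, and computes a best-worst score for each item. The item names and the simulated respondent behavior are illustrative, not the client’s actual data.

```python
import random
from collections import Counter

# Illustrative trust signals standing in for the 11 tested in the study.
ITEMS = [
    "G2/Capterra ratings", "Active customer count", "Implementation guarantee",
    "SOC 2 Type II", "Successful implementations", "24/7 support",
    "Years in business", "Integration count", "AI-powered features",
    "Employee headcount", "Analyst recognition",
]

def build_choice_sets(items, set_size=5, rounds=8, seed=7):
    """Randomize items into repeated sets of `set_size` for one respondent."""
    rng = random.Random(seed)
    return [rng.sample(items, set_size) for _ in range(rounds)]

def best_worst_scores(responses):
    """Score = times picked 'most important' minus times picked 'least'."""
    scores = Counter()
    for best, worst in responses:
        scores[best] += 1
        scores[worst] -= 1
    return scores

# Simulate one respondent who favors review ratings and dismisses analyst buzz.
responses = []
for cs in build_choice_sets(ITEMS):
    best = "G2/Capterra ratings" if "G2/Capterra ratings" in cs else cs[0]
    rest = [i for i in cs if i != best]
    worst = "Analyst recognition" if "Analyst recognition" in rest else rest[-1]
    responses.append((best, worst))

for item, score in best_worst_scores(responses).most_common():
    print(f"{item:28s} {score:+d}")
```

In a real study you would aggregate these tallies across all participants (and often fit a choice model on top), but the core intuition is exactly this subtraction.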

The results revealed a clear hierarchy of trust signals

When we analyzed the MaxDiff results, the pattern was striking. The top-scoring benefits shared a common theme: they provided concrete evidence of proven reliability and satisfied users. The bottom-scoring benefits? They emphasized company scale and marketing visibility.

[Chart: ranking of SaaS trust signals by MaxDiff best-worst score]

The four highest-scoring trust signals were clear winners. G2 or Capterra ratings scored 38 points (the highest possible), indicating this was nearly universal in its importance. The number of active customers scored 30 points. An implementation guarantee (“live in 30 days or your money back”) scored 25 points. And SOC 2 Type II certification scored 16 points.

These weren’t arbitrary marketing metrics. They were the specific signals that would make someone think, “this platform delivers real value and other companies trust them.”

The middle tier included operational details that registered as minor positives but weren’t decisive: the number of successful implementations (7 points) and availability of 24/7 support (6 points). These signals suggested competence but didn’t particularly move the needle on trust.

Then came the surprises. Years in business scored -5 points, indicating it was slightly more often selected as “least important” than “most important.” The number of integrations available scored -11 points. Claims of AI-powered features scored -15 points. Employee headcount scored -36 points. And recognition as a Gartner Cool Vendor scored -55 points, the lowest possible score.

Think about what prospects were telling us: “I don’t care that you have 200 employees or that Gartner mentioned you. Show me that real companies like mine trust you and that you’ll actually deliver on your promises.”

Why customers rejected company-focused metrics

The findings revealed an insight into trust-building that extends beyond this single company. B2B buyers weigh social proof and reliability guarantees far more heavily than they weigh indicators of company scale or industry recognition.

When a business talks about its employee headcount or analyst mentions, prospects interpret this as the company talking about itself. These metrics answer the question “How big is your business?” but not “Will this solve my problem?” From the buyer’s perspective, a larger team or Gartner mention doesn’t necessarily correlate with better software or smoother implementation.

By contrast, user reviews and customer counts answer the implicit question every prospect has: “Did this work for companies like mine?” A guarantee directly addresses risk: “What happens if implementation fails?” Security certifications address legitimacy: “Is this platform secure enough for our data?”

The AI-powered features claim scored poorly, likely because it felt trendy rather than practical. Prospects for this specific business weren’t primarily concerned about cutting-edge technology; they wanted a platform that would reliably solve their workflow problems. Leading with an AI angle, while possibly true, didn’t address the core decision-making criteria.

Years in business scored negatively for similar reasons. While longevity can signal stability, in this context, it didn’t address the prospect’s immediate concerns about implementation speed and user adoption. A company could be around for years while providing clunky software with poor support.

From insight to implementation: turning research into revenue

The MaxDiff analysis gave the company a clear action plan. We recommended implementing a four-part trust signal section directly below their homepage hero, featuring the top four scoring benefits in order of importance.

This meant reworking their existing homepage structure. Previously, they had emphasized their implementation guarantee in the hero area while burying customer counts and ratings further down the page. The research showed this approach had it backward. Prospects needed to see evidence of customer satisfaction first, then the implementation guarantee as additional reassurance.

We also recommended removing or de-emphasizing several elements they had been proud of. The employee headcount mention, the Gartner recognition, and several other low-scoring items were either removed entirely or moved to less prominent positions on the site. The goal was to prevent low-value signals from crowding out high-value ones.

The broader lesson here extends beyond this single homepage optimization. The MaxDiff results provided a messaging hierarchy that the company could apply across its entire go-to-market strategy. Email campaigns, landing pages, sales conversations, demo decks, and even their LinkedIn company page could now emphasize the trust signals that actually mattered to prospects.

When MaxDiff analysis makes sense for your business

MaxDiff is particularly valuable when you’re facing a prioritization problem with limited data. It works best in these scenarios:

  • You have more options than you can implement. Whether that’s features to build, benefits to highlight, or messages to test, MaxDiff helps you choose wisely when you can’t do everything at once.
  • Stakeholder opinions are conflicting. When internal debates about priorities can’t be resolved through argument, customer data settles the question. MaxDiff provides quantitative evidence for decision-making.
  • You need to differentiate in a crowded market. If competitors are all saying similar things, MaxDiff reveals which specific claims will break through. Often, the winning messages are ones companies overlook because they seem “obvious” or “not unique enough.”
  • You’re optimizing for a specific audience segment. Generic research about “customers in general” often produces generic insights. MaxDiff works best when you recruit participants who precisely match your target customer profile.

The methodology has limitations worth noting. It requires careful setup, and you need to know which options to test before you start: if you don’t include the right benefits in your initial list, you won’t discover them through MaxDiff. It also works best with a reasonably sized set of options (typically 5-15 items). And the results tell you about relative importance, not absolute importance; everything could theoretically matter, but MaxDiff reveals the hierarchy.

How to use MaxDiff findings in your optimization strategy

Once you have MaxDiff results, the application extends beyond simply reordering homepage elements. The insights should inform your entire digital experience.

Your messaging architecture should reflect the importance hierarchy. High-scoring benefits deserve prominent placement, repetition across pages, and detailed explanation. Low-scoring benefits can either be removed or repositioned as supporting rather than leading messages.

Your testing roadmap should prioritize changes based on MaxDiff findings. If customer reviews scored highest in your study, test different ways of showcasing reviews before you test other elements. Let the data guide your experimentation priorities.

Your content strategy should emphasize what customers care about. If service guarantees scored highly, create content that explains the guarantee in detail, shares stories of when it was honored, and addresses common concerns. Build your editorial calendar around the topics MaxDiff revealed as important.

Your sales enablement should align with customer priorities. If the research showed that prospects value licensing credentials, make sure your sales team knows to emphasize this early in conversations. Create collateral that highlights the trust signals that matter most.

The most effective companies use MaxDiff as one tool in a broader research program. They combine it with qualitative research to understand why certain benefits matter, behavioral analytics to see how users interact with different messages, and continuous testing to validate that the predicted preferences translate into actual conversion improvements.

Turning guesswork into growth

The SaaS company we worked with started with 11 candidate messages and no clear sense of which would build trust most effectively with B2B buyers. After the MaxDiff analysis, they had a data-backed hierarchy that let them confidently restructure their homepage and broader messaging strategy.

This is the power of asking prospects the right questions in the right way. Not “do you like this?” which produces inflated scores for everything. Not “rank these 11 items,” which overwhelms participants and produces unreliable data. But rather, through repeated forced choices, revealing the true importance of each element.

If you’re struggling with similar prioritization challenges (too many options, limited space, stakeholder disagreement about what matters), MaxDiff analysis might be the tool that breaks through the noise. It transforms subjective opinion into statistical evidence, letting your prospects vote on what will actually convince them to choose your platform.

Ready to discover which messages actually resonate with your customers? The Good’s Digital Experience Optimization Program™ includes research methodologies like MaxDiff analysis to help you prioritize changes based on real customer preferences, not guesswork.

The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company

Most SaaS companies segment users the wrong way. They group people by demographics, company size, or subscription tier. Basically, they look at who users are rather than what they’re trying to do.

The problem is that a freelance consultant and an enterprise project manager might both use your collaboration tool, but they have completely different goals, workflows, and definitions of success.

We recently worked with a leading enterprise SaaS company facing exactly this challenge. Their product served everyone from solo creators to enterprise teams, but the experience treated everyone the same.

Users logging in to track a single client project got bombarded with team collaboration features. Power users managing complex workflows couldn’t find advanced capabilities buried in generic menus.

Users felt overwhelmed by irrelevant features, engagement plateaued, and retention suffered.

The solution wasn’t another traditional segmentation model. It was intent-based segmentation: a framework for grouping users by what they’re actually trying to accomplish, then personalizing the experience to match those specific goals.

Understanding behavioral segmentation and where intent fits

Before diving into how to build intent-based clusters, it helps to understand where this approach fits within the broader landscape of user segmentation.

Most SaaS companies are familiar with demographic segmentation (company size, industry, role) and firmographic segmentation (ARR, team size, geographic location). These approaches tell you who your users are, but they don’t tell you what they’re trying to do, which is why they often fail to predict engagement or retention.

Behavioral segmentation takes a different approach. Instead of looking at static attributes, it focuses on how users actually interact with your product.

Behavioral segmentation divides users based on engagement. This includes actions like feature usage patterns, open frequency, purchase behavior, and time-to-value metrics.

While not the only way to do things, behavioral segmentation is widely regarded as more effective than demographic segmentation alone, with plenty of research backing it up.

Within behavioral segmentation, there are several approaches:

  • Usage-based segmentation looks at frequency and intensity of use.
  • Lifecycle segmentation tracks where users are in their journey.
  • Benefit-sought segmentation groups users by the outcomes they want to achieve.

Intent-based segmentation sits at the intersection of all three.

[Visual: intent-based segmentation at the intersection of usage-based, lifecycle, and benefit-sought segmentation]

It identifies clusters of users who share similar goals and workflows, then maps those patterns to create a more personalized experience.

Intent-based clusters answer the question: “What is this user trying to accomplish right now?”

In a recent client engagement that inspired this article, this distinction mattered. They had mountains of usage data showing which features people clicked, but no framework for understanding why certain feature combinations existed or what job users were trying to complete.

They knew “Business Professionals” used their tool, but that category was so broad it offered no actionable insights. A marketing manager building campaign timelines has completely different needs than a legal team tracking contract approvals, even though both might be classified as “business professionals.”

Intent-based clustering gave them that missing layer of insight.

Case study: How to spot the need for intent-based segmentation

Let’s talk more about the client engagement I mentioned. This is a great case study for when to use intent-based segmentation.

We work on a quarterly retainer for this client through our on-demand growth research services. So, when they mentioned struggling with how to personalize experiences and improve retention, we opened up a research project that same day.

The team could see that certain users logged in daily and used five or more features. Great, right? Not really. When we dug deeper, we discovered something critical. Heavy feature usage didn’t predict retention. Some power users churned while casual users stuck around for years.

The issue wasn’t the quantity of features used; it was whether those features aligned with what users were trying to accomplish. A user coming in weekly to update a single client dashboard showed higher retention than someone exploring ten features that didn’t match their core workflow.

The symptoms: What teams told us was broken

During our stakeholder interviews, we heard the same frustrations across departments:

From product: “We know the top five features everyone uses, but that doesn’t help us understand why they’re using them or what to build next. Two users might both use our template feature, but one is building client proposals while the other is standardizing internal processes. They need completely different things from that feature.”

From marketing: “Our segments are too broad to be useful. ‘Business Professional’ could mean anyone from a solo consultant to an enterprise VP. When we send educational content, we can’t make it relevant because we don’t know what problem they’re trying to solve.”

From customer success: “We can see when someone is at risk of churning because their usage drops off, but we can’t predict it before it happens. By the time we notice, they’ve already decided the product isn’t right for them. We need to understand intent earlier so we can intervene proactively.”

From UX research: “Users think in terms of tasks, not features. They don’t say ‘I want to use the dependency mapping tool.’ They say, ‘I need to make sure the design team finishes before development starts.’ But our product talks about features, not outcomes.”

The underlying problem: Missing the ‘why’

What became clear was that the organization had plenty of data about behavior but no framework for understanding intent. They could answer questions like:

  • How many people use feature X?
  • What’s the average session duration?
  • Which users log in most frequently?

But they couldn’t answer the questions that actually mattered:

  • What are users trying to accomplish when they use feature X?
  • Why do some users stick around while others churn?
  • What combination of goals and workflows predicts long-term retention?

This may sound familiar: plenty of data about behavior, but no context about intent. Without understanding users’ different definitions of success, teams fall back on generic personalization that recommends “similar features” and misses the mark entirely.

The four-phase framework for creating intent-based user clusters

Based on our work with the enterprise SaaS client, we developed a systematic framework for building intent-based clusters from scratch.

The process has four distinct phases, each building on the previous one.

Think of this as a directional guide rather than a rigid formula. You can adapt the scope based on your resources and organizational complexity.

Phase 1: Capture institutional knowledge and identify gaps

Most organizations have more customer insight than they realize; it’s just scattered across teams, buried in old reports, and locked in individual team members’ heads. The first phase consolidates that knowledge and identifies what’s missing.

[Graphic: Phase 1 of the intent-based user segmentation process]

1. Conduct cross-functional interviews

Start by interviewing stakeholders who interact with users regularly: product managers, customer success, sales, support, and marketing. For our client, we conducted interviews with seven team members from UX research, growth, engagement, marketing, and analytics.

The goal isn’t consensus. You want to uncover patterns and contradictions instead. Focus your conversations on these questions:

  • How do you currently describe different user types?
  • What patterns have you noticed in how different users engage with the product?
  • What data exists about user behavior that isn’t being used to inform decisions?
  • Where do personalization efforts break down today?
  • What questions about users keep you up at night?

These conversations surface institutional knowledge that never makes it into documentation.

Capture everything. The contradictions are especially valuable—they reveal where teams operate with different mental models of the same users.

2. Audit existing research and reports

Next, gather relevant user research, data analysis, and customer insight. For our client, we analyzed eight existing reports, including retention data, cancellation surveys, user studies, engagement patterns, and conversion analysis.

Look for:

  • Marketing segmentation models (usually demographic-heavy)
  • User research studies (often small sample, rich insights)
  • Behavioral analytics reports (feature usage patterns, cohort analysis)
  • Customer journey maps (theoretical vs. actual paths)
  • Support ticket analysis (pain points and use cases)
  • Cancellation surveys (why users leave)

Pay attention to gaps between how different teams think about users. Marketing might segment by company size, while product segments by feature usage. Neither is wrong, but the lack of a unified framework means teams optimize for different definitions of success.

3. Define hypotheses about intent-based variables

Based on interviews and research, develop hypotheses about what variables might define user intent. This is where you move from “who uses our product” to “what are they trying to accomplish.”

For our client, we identified several dimensions that seemed to correlate with intent:

  • Primary workflow type: Are users managing team projects, client deliverables, or personal tasks?
  • Collaboration patterns: Solo work, small team coordination, or cross-functional orchestration?
  • Usage frequency: Daily operational tool or periodic project management?
  • Success metrics: Speed (quick task completion) vs. thoroughness (detailed planning and tracking)?
  • Document complexity: Simple task lists or multi-layered project hierarchies?

The goal is to create testable hypotheses that can be validated in Phase 2.

Phase 2: Validate clusters through user research and behavioral analysis

Once you have initial hypotheses, Phase 2 tests them against real user behavior and feedback. This is where hunches become data-backed insights.

[Visual: Phase 2 of the intent-based user segmentation process]

1. Develop provisional cluster groups

Transform your hypotheses into provisional clusters. For our client, we identified six distinct intent-based clusters. Let me illustrate with a fictional example of how this might work for a project management SaaS tool:

Sprint Executors: Users focused on rapid task completion and daily standup workflows. They need speed, simple task updates, and quick team coordination. Think startup teams moving fast with lightweight processes.

Client Project Coordinators: Users managing multiple client engagements simultaneously with strict deliverable timelines. They need client visibility controls, progress tracking, and professional reporting. Think agencies and consultancies.

Cross-Functional Orchestrators: Users coordinating complex projects across departments with dependencies and approval workflows. They need Gantt views, resource allocation, and stakeholder communication tools. Think enterprise program managers.

Personal Productivity Optimizers: Users who treat the tool as their second brain for personal task management and goal tracking. They need customization, recurring tasks, and minimal collaboration features. Think solopreneurs and executives.

Seasonal Campaign Managers: Users with predictable high-intensity periods followed by dormancy. They need templates, bulk operations, and the ability to archive/reactivate projects easily. Think retail operations teams or event planners.

Mobile-First Coordinators: Users who primarily access the tool from mobile devices for field work or on-the-go updates. They need streamlined mobile experiences and offline sync. Think field service teams or traveling consultants.

Each cluster gets a descriptive name that captures the user’s primary intent, not just their behavior. “Sprint Executor” tells you more about what someone is trying to do than “high-frequency user.”

2. Conduct targeted user research

With provisional clusters defined, recruit users who fit each profile and conduct interviews to understand:

  • Their primary use cases and goals when they first adopted the tool
  • How they discovered and currently use the product
  • Their typical workflows from start to finish
  • What defines success in their role
  • Pain points and unmet needs
  • How they decide which features to explore
  • What would make them cancel vs. what keeps them subscribed

For our client, we conducted three to four interviews per cluster, totaling around 24 user conversations. This gave us enough coverage to validate patterns without drowning in data.

The insights were eye-opening. We discovered that one cluster had the fastest time-to-value but the lowest feature adoption. They found what they needed immediately and never explored further. Another cluster showed the highest retention but needed the longest onboarding. They invested time up front because the tool was critical to their workflow.

3. Analyze behavioral data to confirm patterns

User interviews reveal what people say they do. Behavioral data shows what they actually do. Cross-reference your clusters against:

  • Feature usage sequences (which tools appear together in sessions)
  • Time-to-value metrics by cluster (how quickly do they get their first win)
  • Retention and churn patterns (which clusters stick around)
  • Upgrade and expansion behavior (which clusters grow their usage)
  • Support ticket themes (which clusters need help with what)
  • Feature adoption curves (how exploration patterns differ)

For our client, the data revealed critical differences. The “Sprint Executor” equivalent had fast initial adoption but plateaued quickly. They found their core workflow and stopped exploring.

The “Cross-Functional Orchestrator” cluster showed slow initial adoption but deep engagement over time. They needed to learn the tool thoroughly to unlock value.

These patterns weren’t visible in aggregate data. Only by segmenting users by intent could we see that different clusters had fundamentally different paths to retention.
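If your product events already land in a data warehouse, this cross-referencing can start as a simple group-by. Here is a minimal pandas sketch using hypothetical column names (`cluster`, `retained_90d`, `features_used_week1`) and made-up rows, not the client’s actual schema:

```python
import pandas as pd

# Hypothetical per-user table: assigned cluster plus early-usage metrics.
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "cluster": ["sprint_executor"] * 4 + ["orchestrator"] * 4,
    "retained_90d": [1, 0, 1, 1, 1, 1, 0, 1],
    "features_used_week1": [4, 3, 5, 4, 9, 12, 7, 10],
})

# Retention and adoption patterns that aggregate data would hide.
summary = users.groupby("cluster").agg(
    n=("user_id", "size"),
    retention_90d=("retained_90d", "mean"),
    avg_features_week1=("features_used_week1", "mean"),
)
print(summary)
```

Differences that look flat in the aggregate often separate cleanly once you slice by cluster like this.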

4. Build detailed cluster profiles

For each validated cluster, create a comprehensive profile that becomes the foundation for personalization. For example:

Cluster name: Sprint Executors

Primary intent: Complete daily tasks quickly with minimal friction and maximum team visibility

Most-used features:

  • Quick-add task creation
  • Board view for visual sprint planning
  • Mobile app for on-the-go updates
  • Real-time team activity feed

Typical workflow patterns:

  • Morning standup with task assignments
  • Throughout the day: quick updates and status changes
  • End of day: marking tasks complete and planning tomorrow

Behavioral flags that identify this cluster:

  • Creates 5+ tasks in first week
  • Returns daily within first 14 days
  • Uses mobile app within first 7 days
  • Rarely uses advanced features like Gantt charts or dependencies

Retention drivers:

  • Speed of task completion
  • Team visibility and accountability
  • Mobile accessibility

Churn risks:

  • Tool feels too complex for simple needs
  • Feature bloat is making core actions harder to find
  • Forced upgrades to access speed-focused features

Personalization opportunities:

  • Streamlined onboarding focused on quick task creation
  • Mobile-first feature discovery
  • Templates for common sprint workflows
  • Integrations with communication tools

These profiles become the single source of truth that product, marketing, and customer success can all reference.

Phase 3: Develop indicators and personalization strategies

This phase connects clusters to action. It’s where the framework moves from insight to implementation.

1. Create behavioral flags for cluster identification

Most users won’t self-identify their intent at signup. You need to infer cluster membership from behavioral signals early in their journey. The key is identifying flags that appear within the first 7-14 days, early enough to personalize the experience before users decide whether the tool is right for them.

[Visual: Phase 3 of the intent-based user segmentation process]

For reference, the “Sprint Executor” cluster in our fictional example:

  • Created 5+ tasks in first week
  • Logged in on 4+ separate days in first 14 days
  • Used mobile app within first 7 days
  • Board or list view used more than timeline/Gantt view (80%+ of sessions)
  • Invited at least one team member within first 10 days
  • Never explored advanced dependency features
  • Average session length under 10 minutes

Versus the “Client Project Coordinator” cluster:

  • Created 3+ separate projects within first week (indicating multiple clients)
  • Used folder or workspace organization features within first 5 days
  • Set up client-specific permissions or external sharing settings
  • Created custom views or reports within first 14 days
  • Longer average session times (20+ minutes per session)
  • Uses professional or client-specific terminology in project names
  • High usage of export or presentation features

The goal is to find the minimum viable signal set that reliably predicts cluster membership. Start with more flags and refine over time based on which actually correlate with long-term behavior.
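As a sketch of how those flags become an assignment rule, here is a minimal rule-based classifier in Python. The signal names and thresholds mirror the hypothetical flag lists above; they are illustrative, and a production version would be tuned against observed retention:

```python
from dataclasses import dataclass

@dataclass
class EarlySignals:
    """Hypothetical usage signals from a user's first 14 days."""
    tasks_created_week1: int
    active_days_14d: int
    used_mobile_week1: bool
    projects_created_week1: int
    used_external_sharing: bool
    avg_session_minutes: float

def assign_cluster(u: EarlySignals) -> str:
    """Rule-based cluster assignment mirroring the example flags above."""
    if (u.projects_created_week1 >= 3 and u.used_external_sharing
            and u.avg_session_minutes >= 20):
        return "client_project_coordinator"
    if (u.tasks_created_week1 >= 5 and u.active_days_14d >= 4
            and u.used_mobile_week1 and u.avg_session_minutes < 10):
        return "sprint_executor"
    # Not enough signal yet: keep the generic experience until flags accrue.
    return "unclassified"

print(assign_cluster(EarlySignals(7, 6, True, 1, False, 8.0)))   # sprint_executor
print(assign_cluster(EarlySignals(2, 3, False, 4, True, 25.0)))  # client_project_coordinator
```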

One critical finding from our client work: early behavioral flags predicted retention better than demographic data.

A user who exhibited “Client Project Coordinator” behaviors in week one showed 40% higher 90-day retention than the average user, regardless of their company size or job title.

2. Map personalization opportunities to each cluster

With clusters and flags defined, identify specific ways to personalize the experience across the user journey:

Onboarding sequences: Tailor the first-run experience to highlight features that match user intent. Show Sprint Executors how to set up their first sprint board, not the full feature catalog with Gantt charts and resource allocation tools they don’t need.

In-app messaging: Trigger contextual tips based on usage patterns. When a Client Project Coordinator creates their third project with similar structure, suggest project templates to save time.

Feature discovery: Recommend next-step features that align with cluster workflows. For Sprint Executors who’ve mastered basic task management, introduce the mobile app and integrations with their communication tools—not complex dependency mapping.

Content and education: Send targeted educational content that addresses cluster-specific goals. Client Project Coordinators get tips on professional reporting and client permissions. Sprint Executors get productivity hacks and team coordination strategies.

Upgrade paths: Present pricing tiers and feature upgrades that match cluster needs. Don’t push team collaboration features to Personal Productivity Optimizers who work solo and won’t use them.

Support prioritization: Route support tickets differently based on cluster. Client Project Coordinators might get priority support since they’re often managing paying clients. Seasonal Campaign Managers might get proactive check-ins before predicted busy periods.

For our client, this mapping revealed opportunities they’d completely missed. One cluster had been receiving generic “explore more features” emails when what they actually needed was advanced security capabilities for compliance requirements. Another cluster kept churning at the end of trial because onboarding emphasized features they’d never use instead of the speed-focused tools that matched their workflow.

Phase 4: Develop test concepts to validate impact

Turn personalization opportunities into testable hypotheses. Don’t roll everything out at once. Start with high-impact, low-effort tests that prove the value of intent-based segmentation.

For our client, we proposed several test concepts structured to validate clusters quickly and build organizational confidence in the framework. Here are a few examples.

[Visual: Phase 4 of the intent-based user segmentation process]

Example Test 1: Intent-Based Onboarding Survey

Background: The organization lacked a way to identify user intent at the critical moment when users were most open to guidance: right after signup, but before they’d formed opinions about product fit.

Hypothesis: By asking users to self-identify their primary goal during their first meaningful session, we can segment them into actionable clusters that enable personalized feature discovery and messaging, improving 3-month retention rates by 5-10%.

Test design: During the first session (after initial signup but before deep engagement), present a brief survey asking: “What brings you here today?” with options that map directly to identified clusters:

☐ Coordinate my team’s daily work (Sprint Executors)

☐ Manage multiple client projects (Client Project Coordinators)

☐ Organize complex cross-functional initiatives (Cross-Functional Orchestrators)

☐ Track my personal tasks and goals (Personal Productivity Optimizers)

☐ Plan seasonal campaigns or events (Seasonal Campaign Managers)

☐ Update projects while on the go (Mobile-First Coordinators)

☐ Something else (with optional text field)

Then immediately personalize their first experience based on their response: Sprint Executors see a streamlined task creation tutorial, Client Project Coordinators get guidance on setting up client workspaces, etc.
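Mechanically, the personalization half of this test can start as a simple lookup from survey answer to first-run flow. A minimal sketch with hypothetical answer keys and flow identifiers (not a real product’s routing):

```python
# Hypothetical mapping from onboarding survey answer to a first-run flow.
ONBOARDING_FLOWS = {
    "coordinate_team_daily_work": "sprint_board_tutorial",
    "manage_client_projects": "client_workspace_setup",
    "organize_cross_functional": "gantt_and_dependencies_intro",
    "track_personal_tasks": "personal_workspace_setup",
    "plan_seasonal_campaigns": "template_gallery_tour",
    "update_on_the_go": "mobile_app_install_prompt",
}

def first_run_flow(survey_answer: str) -> str:
    # "Something else" (or no answer) falls back to the generic default.
    return ONBOARDING_FLOWS.get(survey_answer, "default_onboarding")

print(first_run_flow("manage_client_projects"))  # client_workspace_setup
print(first_run_flow("something_else"))          # default_onboarding
```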

Success metrics:

  • Primary: 3-month retention rate by selected cluster (looking for 5-10% lift)
  • Secondary: Survey completion rate (target: >80%), feature adoption aligned with cluster (target: 20% lift), time to first value-generating action
  • Guardrails: No negative impact on day 2 or day 7 retention

Acceptance criteria for “winning test”:

  • Survey completion rate >80%
  • >60% of users select a pre-set option (vs. “something else”)
  • Statistically significant retention lift in at least one cluster
  • No degradation in key engagement metrics

Acceptance criteria for “learning test”:

  • >40% of users select “something else” (suggests clusters don’t match user mental models)
  • No statistically significant difference in retention (suggests clusters exist, but personalization approach needs refinement)

Audience: New paid subscribers on first day, trial users converting to paid, reactivated users returning after 30+ days dormant. Start with 25% of eligible users to minimize risk.

Timeline: 90 days to measure primary retention metric, but early signals (completion rate, feature adoption) available within 14 days.
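When the measurement window closes, checking the “statistically significant retention lift” criterion comes down to comparing two proportions. Here is one way to do that with statsmodels, using made-up counts rather than real experiment data:

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up counts for one cluster: survey-personalized variant vs. control.
retained = [330, 280]    # users retained at 90 days (variant, control)
exposed = [1000, 1000]   # users assigned to each arm

# One-sided test: is the variant's retention proportion larger than control's?
z, p = proportions_ztest(retained, exposed, alternative="larger")
lift = retained[0] / exposed[0] - retained[1] / exposed[1]
print(f"absolute retention lift: {lift:+.1%}  z={z:.2f}  p={p:.4f}")
# Compare p against your significance threshold and the lift against
# the stated 5-10% target before declaring a "winning test".
```

The guardrail metrics (day 2 and day 7 retention) can be checked the same way, with the alternative hypothesis flipped to detect degradation.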

Example Test 2: Cluster-Specific Feature Recommendations

Background: Generic in-app messaging had low click-through rates (<5%) and wasn’t driving feature adoption. Users felt bombarded by irrelevant suggestions.

Hypothesis: For users who match behavioral flags within the first 14 days, triggering personalized feature recommendations will increase feature adoption by 20% and engagement depth by 15%.

Test design: Identify users by behavioral flags, then trigger targeted in-app messages at contextually relevant moments:

  • Sprint Executors see mobile app download prompt after completing 5 tasks on desktop: “Update tasks on the go: get the mobile app”
  • Client Project Coordinators see reporting features after creating third project: “Impress clients with professional progress reports”
  • Cross-Functional Orchestrators see dependency mapping after creating complex project: “Map dependencies to keep cross-functional teams aligned”

Success metrics:

  • Primary: Feature adoption rate for recommended features (target: 20% lift vs. control)
  • Secondary: Overall engagement depth (features used per session), message click-through rate
  • Guardrails: No increase in feature abandonment (starting but not completing flows)

Audience: Users who match cluster behavioral flags within first 14 days. Test one cluster at a time to isolate impact.

Timeline: 30 days to measure feature adoption impact.

Example Test 3: Retention Email Campaigns by Cluster

Background: Generic “tips and tricks” email campaigns had 8% open rates and weren’t moving retention metrics. Content felt irrelevant to most recipients.

Hypothesis: Segmenting email campaigns by identified cluster will improve email engagement by 50% and show a measurable correlation with retention.

Test design: Replace generic weekly tips emails with cluster-specific content:

  • Sprint Executors: “5 ways to speed up your daily standup” / “Mobile shortcuts that save 2 hours per week”
  • Client Project Coordinators: “How to impress clients with professional project reports” / “3 ways to give clients visibility without overwhelming them”
  • Personal Productivity Optimizers: “Build your second brain: advanced filtering techniques” / “Automate your recurring tasks in 5 minutes”

Send to users identified through either the onboarding survey or behavioral flags. Track engagement and retention by cluster.

Success metrics:

  • Primary: Email open rates (target: 50% lift), click-through rates (target: 100% lift)
  • Secondary: Correlation between email engagement and 90-day retention
  • Guardrails: Unsubscribe rates remain stable or decrease

Audience: Users identified as belonging to specific clusters either through survey responses or behavioral flags, minimum 14 days after signup.

Timeline: 6 weeks for initial engagement metrics, 90 days for retention correlation.

Post-test analysis framework

For each test, we established a clear decision framework:

If “winning test”:

  • Roll out to 100% of eligible users
  • Begin development on next phase of personalization for that cluster
  • Use learnings to inform tests for other clusters
  • Document what worked to build organizational playbook

If “learning test”:

  • Analyze all “something else” responses for missing clusters or unclear framing
  • Review behavioral data to see if clusters exist, but personalization was wrong
  • Iterate on messaging, timing, or format
  • Decide whether to retest with refinements or try different approach

If negative impact:

  • Immediately roll back to the control experience
  • Conduct user interviews to understand what went wrong
  • Reassess cluster definitions or personalization approach
  • Consider whether the cluster exists but needs a different treatment

The key to successful testing is starting small, measuring rigorously, and being willing to learn from failures. Not every cluster will respond to every type of personalization, and that’s valuable information. The goal isn’t perfect personalization immediately; it’s continuous improvement based on what actually moves metrics.

Intent-based segmentation mistakes and how to avoid them

Based on our experience implementing this framework, here are the mistakes that will derail your efforts:

1. Starting with too many clusters

More isn’t better. Six well-defined clusters are more useful than fifteen overlapping ones. You need enough clusters to capture meaningfully different intents, but few enough that teams can actually remember and act on them. Start with 4-6 clusters and refine over time. If you find yourself creating clusters that differ only slightly, you’ve gone too granular.

2. Confusing demographics with intent

Job title, company size, or industry might correlate with intent, but they don’t define it. We’ve seen solo consultants behave like “Cross-Functional Orchestrators” and enterprise teams behave like “Sprint Executors.” Focus on what users are trying to accomplish, not who they are on paper.

3. Creating overlapping clusters

Each cluster should be distinct in its primary intent and workflow patterns. If you’re struggling to articulate how two clusters differ behaviorally, they’re probably the same cluster with different labels. Test this by asking: “If I saw someone’s usage data, could I confidently assign them to one cluster?”

4. Ignoring edge cases entirely

Some users will span multiple clusters or switch between them based on context. That’s fine. The framework should accommodate primary intent while recognizing that users are complex. A user might primarily be a “Client Project Coordinator” but occasionally use “Personal Productivity Optimizer” features for their own task management. Don’t force rigid categorization.

5. Skipping the validation step

Your initial hypotheses will be wrong in places. User research and behavioral data keep you honest and prevent confirmation bias. We’ve seen teams fall in love with theoretically elegant clusters that don’t actually exist in their user base, or miss entire segments because they didn’t fit the initial hypothesis.

6. Treating clusters as static

User intent evolves. Someone might start as a “Personal Productivity Optimizer” and grow into a “Client Project Coordinator” as their business scales. Review and refine your clusters quarterly based on new data, product changes, and market shifts.

7. Personalizing too aggressively too soon

Start with high-confidence, low-risk personalization (like targeted email content) before you completely diverge user experiences. You want to validate that clusters behave differently before you build entirely separate onboarding flows.

8. Forgetting to measure impact

Intent-based segmentation is valuable only if it improves outcomes. Define success metrics upfront (e.g., retention lifts, engagement depth, upgrade rates, support ticket reduction) and track them by cluster. If personalization isn’t moving these metrics, refine your approach.

Making intent-based segmentation work for your organization

The framework we’ve outlined works across product categories and company sizes, but implementation varies based on your resources and organizational maturity.

If you have limited data: Start with Phase 1 and Phase 2, using qualitative research to define clusters before investing in behavioral infrastructure. You can manually tag users based on interview responses and onboarding surveys, then personalize through targeted emails and customer success outreach. As you grow, build the data systems to automate cluster identification.

If you have rich behavioral data but limited research capabilities: Reverse the order. Start with data patterns and validate through targeted interviews. Look for natural groupings in your analytics that suggest different workflow types, then talk to representative users from each group to understand their intent.

If you’re a small team: Don’t let perfect be the enemy of good. Start with 3-4 obvious clusters based on your highest-level workflow differences. The founder of a 10-person startup probably has a better intuitive understanding of user intent than a 500-person company with siloed data. Write down what you know, test it with a few users, and start personalizing.

If you’re a large enterprise: The challenge is getting organizational alignment, not defining clusters. Use Phase 1 to surface where teams already operate with different mental models, then use data to arbitrate. Create executive sponsorship for the new framework so it becomes the shared language across product, marketing, and CS.

The key is starting somewhere. Most companies know their one-size-fits-all approach isn’t working, but they keep personalizing around the wrong variables that don’t actually predict what users are trying to accomplish.

Intent-based segmentation reorients everything around the question that actually matters: What is this user trying to accomplish, and how can we help them succeed at that specific goal?

Turn insights into retention that drives revenue

Understanding user intent is just the first step. The real value comes from translating those insights into personalized experiences that keep users engaged and drive measurable revenue growth.

At The Good, we’ve spent 16 years helping SaaS companies identify their most valuable user segments and optimize experiences around what actually drives retention. Our systematic approach to user segmentation goes beyond frameworks. We help you implement experimentation strategies that prove which personalization efforts move the needle on the metrics your board cares about.

Plenty of companies struggle to implement segmentation that’s actually actionable. They end up with beautiful personas gathering dust or broad categories that don’t inform product decisions.

Intent-based segmentation is different because it connects directly to behavior you can observe and experiences you can personalize.

If you’re struggling with generic experiences that fail to resonate with different user types, or if you know your segmentation could be better but aren’t sure where to start, let’s talk about how intent-based segmentation could transform your retention strategy and drive revenue growth.

Now It’s Your Turn

We harness user insights and unlock digital improvements beyond your conversion rate.

Let’s talk about putting digital experience optimization to work for you.

Fritz O’Connor Stays User-Centered and Leads with Data During Uncertain Times

Building operational excellence in marketing isn’t just about implementing the latest tools or following industry best practices. It requires a deep understanding of customers, systematic thinking, and the ability to lead teams through uncertainty with data as your guide.

Fritz O’Connor, former VP of Marketing at Ironman 4×4 America, exemplifies this approach. With over two decades of experience spanning manufacturing, sales, and marketing leadership, Fritz has developed a methodology for building high-performing organizations that deliver results consistently, even in challenging circumstances.

A marketing leader built for manufacturing

Fritz’s career journey reads like a masterclass in understanding customers across different industries. Starting in the printing and paper industry, he cut his teeth in structured sales training programs that taught him the fundamentals of professional sales and business operations.

“I’ve spent my entire career in sales and marketing roles. Almost exclusively in the manufacturing sector for companies that make stuff,” Fritz explains. This foundation in manufacturing would prove invaluable throughout his career, giving him deep insight into the complexity of bringing physical products to market.

His two-decade tenure at GE further refined his skills across diverse business environments. “We always used to say we can work in any industry, anywhere in the world, and still get paid by the same company,” he recalls. This experience working across plastics, appliances, and GE Corporate gave him a unique perspective on how great companies operate at scale.

But it was during his time at GE Corporate that Fritz discovered what would become his career-defining framework: differential value proposition (DVP). Working in a marketing consulting role with virtually every business in GE’s global portfolio, he helped launch this customer-centric approach to messaging and positioning throughout the organization.

This systematic approach to understanding and serving customers became foundational to Fritz’s ongoing success.

Implementing systems and frameworks that take teams from features to solutions

Originally coined by the founder of Valkre Solutions, Jerry Alderman, the DVP framework transforms how companies think about customer messaging and competitive positioning. Fritz became a master at implementing this methodology across diverse organizations.

“What are you offering? Be it a product or service that is better than the customer’s next best alternative,” Fritz explains. This might seem simple, but the implications are profound. Rather than competing on features or price, DVP focuses on solving customer problems in ways that competitors simply cannot match.

The challenge, as Fritz learned during his GE implementation, is that DVP represents a fundamental shift in thinking. "Every business, product, or service has a value proposition, but not every value proposition is differential. So many companies have the same value proposition. The white space is that differential part."

"It's about switching thinking from a feature to a benefit. For example, a blue appliance is not a differential value proposition. It's a feature."

Fritz teaches teams to make this shift by leading with problems and solutions.

"It's how it makes the consumer or customer's life better, how it solves that problem. You have to identify what the problem is. You have to articulate how you can fix that problem in a different way, better than anybody else."

This shift from features to solutions requires teams to understand their customers' actual problems, not just their stated needs.

For leaders, this translates directly into more effective product messaging, clearer value propositions, and ultimately, higher conversion rates.

Overcoming the "this is how we've always done it" challenge

One of Fritz's biggest career wins (and ongoing challenges) centers around implementing the Differential Value Proposition (DVP) methodology across organizations. The implementation at GE became both a success story and a learning experience in change management.

"As you can imagine, anytime you try and launch a new process in a company the size of GE, you can be met with resistance. Especially when you're coming out of corporate."

This resistance taught Fritz a crucial lesson about implementing change: "I don't view that as a challenge or a stumbling block, but as a fantastic and wonderful opportunity because when you flip those people, they become your biggest proponents."

His approach centers on listening first, then demonstrating value in the stakeholder's own language. "It's a listening journey. You've gotta understand what the challenges are for the people with whom you're working, whether it's an external customer or an internal customer."

"Proactively listen and walk in the shoes of the people I'm working with. When I'm trying to introduce something as significant as DVP or other business tools."

This listening approach helps identify the real challenges and resistance points, making it possible to address them effectively.

The foundation: accountability, responsibility, and challenge

But having the right frameworks isn't enough. Fritz learned that execution depends on creating the right team culture. He is quick to credit his teams as the backbone of his successful projects, and one of the ways he supports them is with clear organizational principles.

"I have a few underlying business principles that I've gained along the way that are the foundational threads for me," Fritz explains. "One is, any team I work with or works for me, my job is to make them as successful as possible."

This people-first approach manifests through three guiding principles:

  • Accountability: Holding yourself and your team responsible for deliverables and outcomes
  • Responsibility: Taking ownership of significant business challenges
  • Challenge: Embracing difficult problems that create meaningful business impact

"The way I do that is through three guiding principles, which are accountability, responsibility, and challenge," Fritz notes. "I want to be entrusted with significant responsibility that is helping to solve a significant business challenge."

These principles translate into a simple but powerful operational mantra: deliver on time, complete with excellence.

"I know those all sound like buzzwords, but they're not meant to be. And we don't treat them as such. We treat them as very simple guiding principles to keep us focused."

Putting it all together at Ironman 4x4

When Fritz joined Ironman 4x4 America, he found the perfect opportunity to apply all of these frameworks.

Ironman 4x4 is a global company that sells off-road parts and accessories for 4x4 vehicles (lift kits, suspension parts, bumpers, etc.). They have been around since the 1950s, but were new to the United States, so Fritz had the opportunity to find new ways to market their complex "fitment" products, or parts that must work with specific vehicle makes and models. This complexity creates both technical and marketing challenges that Fritz's team had to solve systematically.

His sales background gave him an invaluable perspective on marketing effectiveness. "If you spend any time in sales, that means you're around customers, whether those are B2B or B2C customers. And you learn what's important to them."

This customer proximity taught him the critical principle of "show me, don't tell me." Rather than relying on feature lists or industry awards, effective marketing demonstrates value through customer experiences and outcomes.

"We always, in both sales and marketing, it's easy to get into the trap of just talking, talking, talking, describing stuff, talking about features and benefits. Talking about the industry's best. Nobody cares about your industry. They care about how your product or service is going to impact them."

The key to marketing complex products, Fritz knew, is understanding how customers think about their problems. Rather than leading with technical specifications, the focus should be on the customer's end goal and the emotional drivers behind their purchase decisions.

Fritz emphasizes the importance of demonstrating value rather than just describing it: "Really, visual storytelling, video storytelling, placing the customer in the scene so they understand your value. That ability comes from firsthand experience of seeing that happen in the sales arena."

A data-driven website replatforming

His POV shaped everything he was involved in at Ironman 4x4 America, from new product introduction processes to website optimization. Fritz implemented structured new product integration toll gates with clear deliverables and cross-functional accountability, ensuring every product launch was executed with precision across creative, digital, and channel marketing.

His customer-centered thinking and frameworks proved essential when his team tackled a complex website migration from an outdated platform to Shopify. The project was based on their understanding that a website change was necessary to better serve their audience and increase ecommerce sales.

Working with The Good on a DXO Program™, the Ironman 4x4 team executed the redesign and replatforming with data-driven methodology. Rather than relying on opinions about what the site should look like, they embraced rapid prototyping and continuous testing.

"Any decision made without data is just an opinion, right?" Fritz notes, referencing CEO Luke Schnacke's philosophy.

"We try to be very data-driven, which is why it was so important for us to work with The Good, to get that data and share it with the team managing the website replatforming so that they were making data-driven decisions on design and functionality."

They didn’t wait for a “perfect website” to figure out what customers wanted. They tested and got feedback throughout the entire process to make sure they were developing the right ideas.

"I realized we were never going to do it perfectly," Fritz recalls. The team was getting bogged down in opinions about checkout processes, product customizers, and overall site design. "We could end up using half our development budget on building something that doesn't perform."

"Ultimately, we agreed to launch and then test the heck out of it. We didn't want to overburden the development pipeline with projects that don't have a financial impact."

This represents a fundamental shift in thinking. They went from trying to build the perfect site to building a testable foundation for continuous improvement.

The beauty of working with The Good in this situation, Fritz explains, was "the rapid prototyping, the test and learn. We could very quickly get feedback and iterate and then test and learn again."

Multiplying results through partnership

Leveraging an external partnership accelerated progress beyond what internal resources could achieve alone and held the team accountable to the frameworks and goals of staying user-centered and data-driven.

"If you're not an expert, I would recommend doing a website project with a company like The Good. It wasn't a cost, it was an investment," Fritz emphasizes. "And I think that Ironman 4x4 is the beneficiary of the investment that they made with The Good as they migrated over to Shopify and learned about what customers would like."

The partnership enabled intentional, studied testing with proper dependencies and measurable results tracking.

"That whole test and learn methodology is done in a very structured, deliberate way. Making changes in a waterfall, with the proper dependencies articulated, and then tracking the measurable benefits of changes, and then tweaking accordingly from there."

This approach breeds confidence because it's entirely data-driven, removing guesswork from critical business decisions.

Lessons for marketing and sales leaders

For marketing and sales leaders looking to build similar operational excellence, Fritz's approach provides a roadmap: start with principles, understand your customers deeply, make decisions based on data, and never underestimate the power of strategic partnerships to unlock potential.

Start with principles, not tactics

Before implementing any marketing or optimization program, establish clear guiding principles. Fritz's framework of accountability, responsibility, and challenge provided a foundation that influenced every decision and created lasting organizational change.

Understand your customer's next best alternative

Move beyond feature-benefit messaging to understand what your customers would do if your solution didn't exist. This "next best alternative" thinking is the foundation of truly differential value propositions.

Convert resistance through understanding

When facing organizational resistance to change, focus on understanding stakeholder concerns rather than pushing solutions. Meet people where they are and demonstrate value in their language.

Embrace data-driven decision making

Resist the temptation to rely on opinions or best practices. Instead, create structured testing methodologies that let customer behavior guide optimization decisions.

Invest in external partnerships strategically

Recognize when external expertise can accelerate progress. The right partnerships provide capabilities and perspectives that internal teams may not possess, ultimately delivering better results faster.

Starting an optimization journey

Fritz's approach to building and scaling teams, including Ironman 4x4's US marketing operations, demonstrates how principled leadership, customer-centric thinking, and strategic partnerships can create sustainable competitive advantages.

"There's no obstacle too big that can't be overcome with data and optimization, right?" Fritz states emphatically. "The whole point of being data-driven and optimizing is to get time back and to become more efficient."

His advice for other leaders facing similar challenges?

"Get to yes. Figure out how to do it. Don't say, this is why I can't do it. Say this is how I'm going to do it. Here are things I need to do in order to do it. Then hold yourself accountable. Make it happen. Do it."

The secret, according to Fritz, lies in celebrating small wins that compound over time: "Little steps, I always like to say, celebrate the little wins. Go after the little wins because they compound on one another and then all of a sudden you're gonna look back and go, holy mackerel, I can't believe I am where I am."

What ties it all together is consistency: "And it starts with data as your foundation and optimization as the accelerator."

For ecommerce leaders looking to build similar operational excellence, Fritz's framework provides a proven template: establish clear principles, understand customer problems deeply, make data-driven decisions, and never underestimate the power of strategic partnerships to accelerate growth.

Ready to optimize your ecommerce experience with data-driven methodology? Learn more about The Good's Digital Experience Optimization Program™ and discover how strategic partnerships can unlock your growth potential.


The Good helps ecommerce brands like Ironman 4x4 optimize their digital experiences through research-backed testing and strategic partnerships. Our team combines deep technical expertise with proven methodologies to deliver measurable results for growing brands.

The post Fritz O’Connor Stays User-Centered and Leads with Data During Uncertain Times appeared first on The Good.

From Data Collector to Data Connector: Embracing Research Democratization https://thegood.com/insights/research-democratization/ Mon, 16 Jun 2025 15:26:20 +0000 https://thegood.com/?post_type=insights&p=110652

As AI capabilities expand and research teams stay lean, many researchers find themselves supporting hundreds, if not thousands, of colleagues in their organizations. For them, the model of centralized research is creating bottlenecks that slow decision-making and limit the reach of customer insights.

“The fundamental shift that people have to make is that you’re no longer a data collector. You’re a data connector,” says Ari Zelmanov, former police detective and current research leader. In Ari’s view, as teams get leaner and tools get better at executing research tasks, the job of the researcher becomes standing up repositories, socializing learning mechanisms, and creating the systems that empower organizations to act on good information.

We spoke with research leaders who've successfully made this transition, transforming their teams from siloed specialists into customer-centric learning cultures. Their approaches varied, but one theme was clear: when you empower others to answer their own questions, you don't diminish your value, you multiply it.

The d word holding us back

Before diving into solutions, there's an elephant we need to address: Democratization. Many researchers worry that democratizing research will lead to poor methodologies, incorrect conclusions, or devalued expertise. But Ari feels the argument is moot.

"The only people arguing about democratization are researchers," says Ari. "Nobody else is arguing about it. We're infighting about something that we have zero control over. It's happening."

I tend to feel like anyone arguing about democratization is missing one critical point: customer centricity isn't just one person's job.

Anton Krotov, a researcher at an organization of over 10,000 people, was in the fortunate position of being deeply trusted by his colleagues, so much so that they believed research could answer all of their questions.

“I had already established a reputation. I was fortunate that I didn't need to sell the value of research. Quite the opposite. People came to me with too many requests. They believed research could do everything for them. I needed to set up boundaries.”

Overwhelmed with requests from colleagues, Anton realized that the solution wasn't saying no—it was saying yes in a different way. Rather than becoming a bottleneck, Anton chose to become a bridge.

Connect teams through shared intelligence

Good intelligence is the responsibility of many disciplines, not just research. To get answers quickly, Ari's teams use what he calls the "Moneyball" approach to research, a framework that prioritizes speed and accessibility over methodological purity:

"Product teams are incentivized to move fast. So, how do you make research fit into that in a way that makes sense? We built something called Moneyball Research. It's super simple: start with what you know. It could be in your repository, it could be what you know. Then you go to what data is accessible within 24 to 48 hours. That's usually internal analytics, CSAT tickets, NPS, sales conversations, and tribal knowledge. Then—and only then—do you go to primary research."

This approach shifts conversations away from methods and focuses instead on what teams need to know and how confident they need to be. "Then it's up to the researcher to be the doctor. Diagnose that, determine how they're going to collect that evidence given the time, money, and level of rigor."

René Bastijans, lead researcher at a growth-stage startup, has found creative ways to loop colleagues into data collection. His sales team is trained to lightly survey prospects during sales calls and report back to the wider team.

"We've trained our sales team to ask for specific data and enter it into Salesforce. Researchers and the product team have access to these data, and therefore, sales has allowed us to keep a pretty good pulse on the market."

This creates a healthy feedback loop that keeps everyone abreast of evolving user needs while extending the research team's reach without expanding headcount.

Invite colleagues into the research process

While it might seem counterintuitive to share methodologies and research responsibilities, successful research leaders see democratization as an opportunity rather than a threat.

To remove research bottlenecks, Anton ran internal workshops to upskill his colleagues on doing their own research. This proactive approach to education focused on tailoring training to his colleagues' specific needs: "I try to cover the cases that will be really applicable, so I don't offer any cookie-cutter material and don't go much into theory. It's really tailored to their day-to-day work."

The key is meeting people where they are and giving them tools that fit their contexts. Not everyone needs to become a master researcher, but many can learn to conduct basic customer interviews or query data effectively.

Brittany Lang, a UX Research Manager with an M.S. in Information Science, uses project reviews as a time to cultivate a shared point of view and continually refine her thinking.

“Before we socialize research plans, I usually take a look at it, or I have someone else on my team take a look at it. It doesn't have to be your manager that's reviewing something, but can someone give you feedback?

It's nice when coworkers leave comments and I can see what other people on the team have said and we can agree or challenge, and then have a discussion about it. I also learn in those moments too. When I'm looking at how members of my team have reviewed other work, where they're coming from and their perspective, I learn a lot from them in those moments.”

Facilitate low-risk learning

It takes more than a few ambitious researchers to imbue a company’s culture with a learning mindset, which is why rituals and learning programs are so important.

Anton’s employer formalized this approach to building safe learning environments through a program called "Gigs for Growth," a repository of side projects from different departments where employees can apply to work on learning opportunities outside their typical scope.

"It's like a company green light that you can work on learning during your full-time gig and outside of your typical work scope. Something that you would never otherwise be able to touch in the company."

Under this program, researchers can support QA engineers, sales can support marketing, and everyone gets exposure to new perspectives that inform their primary roles. "You get some really new experiences that otherwise you wouldn't be able to."

At The Good, we like to build regular, low-stakes opportunities for knowledge sharing and skill development. One of our approaches is a ritual called "Random Question of the Week." During this bi-weekly meeting, team members share client questions that stumped them or that they felt they could have answered better.

These conversations help build shared perspectives that then get turned into artifacts:

  • FAQ entries for brief, punchy answers
  • Articles for long-form perspectives
  • Policies or SOPs that outline ways of working

The result is that teams become more aligned, can answer tough questions on the spot, and save time by referring to their collective knowledge instead of rehashing the same discussions.

Another effective ritual is "Critique & Share" sessions, where team members bring questions, websites they admire, or work they're developing to get fresh perspectives from colleagues who haven't been deep in the weeds of a particular project.

Maggie Paveza, Senior Strategist at The Good, shares that it has helped her break the ice when building a shared P.O.V.

"It's pretty informal and often we're not showing our own work, so it feels less intimidating to ask your team members, 'why do you think this competitor is using this strategy,' than if it were your own work," explains Maggie.

The power of being a data connector

"The fundamental problem that research as an industry has is we've been myopically focused on the front end of the equation," says Ari. "Data collection, statistical significance, theoretical saturation—insert whatever fancy academic word you want in here. But the real power comes on the back end of the equation."

That back end is about connection, synthesis, and empowerment. When researchers shift from being data collectors to data connectors, they don't lose their expertise; they amplify it.

As Anton puts it, "Where the soil is right, you can do things. Praise people when they do things great. You can learn from mistakes, you can learn from success."

The goal isn't to turn everyone into a researcher. It's to create an environment where customer insights flow freely, where good questions get asked by many disciplines, and where learning happens continuously rather than in bursts.

Making the shift

Building a customer-centric learning culture doesn't happen overnight, but it starts with understanding where your organization is open to change and being constructive about how you facilitate it.

Look for teams and individuals who are already curious about customers. Find the places where people are asking good questions but lack the tools or confidence to find answers. Then meet them there with the right combination of education, tools, and support.

"At the end of the day, it's about empowering decision-making," says Ari. And in a world where customer expectations evolve quickly and research teams are lean, that empowerment might be the most valuable thing researchers can provide.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post From Data Collector to Data Connector: Embracing Research Democratization appeared first on The Good.

The Biggest Roadmap Mistake: Prioritizing Low-Impact Features https://thegood.com/insights/feature-bloat/ Mon, 19 May 2025 19:43:44 +0000 https://thegood.com/?post_type=insights&p=110593

Picture this: Your product team just wrapped up the quarter with a bang. Fifteen new features shipped. The engineering and development teams are exhausted but proud. The roadmap is color-coded and beautiful.

But then the metrics start to roll in. Conversion rates are flat. Churn is up. Customer satisfaction scores haven’t budged.

Sound familiar? You’re not alone.

Most SaaS companies are stuck in a feature factory, churning out functionality users don’t want, don’t use, or actively avoid. While your competitors are optimizing the core experiences that drive growth, you’re polishing the peripheral features.

Here’s the uncomfortable truth: You’re probably building the wrong things.

The hidden cost of feature bloat

Low-impact features aren’t just harmless additions to your product; they’re silent growth killers. Every hour spent building or optimizing a feature that doesn’t move the needle is an hour stolen from something that could grow your business.

But what exactly makes a feature “low-impact”? It’s not about whether the feature works or whether someone, somewhere, might find it useful. Low-impact features are those that:

  • Address edge cases rather than core user needs
  • Generate minimal usage after launch
  • Don’t correlate with key business metrics like retention or expansion revenue
  • Create more complexity than value

According to research by UserPilot, the average core feature adoption rate is 24.5%. In other words, for the typical feature, more than 75% of users never adopt it; from their perspective, it might as well not exist.

When a SaaS company prioritizes those extra features, it is likely suffering from feature bloat.

Feature bloat is costly for your team, your users, and your business. An excess of features creates complexity and detracts from your product’s core value. Sometimes, feature bloat can actually prevent your product from doing its main job.

The cost of feature bloat develops quickly. Some examples include:

Development opportunity cost: While your team builds that quirky reporting dashboard that three power users requested, your core onboarding flow continues to hemorrhage trial users.

User experience degradation: Every new feature is another decision your users have to make, another item in the navigation, another potential source of confusion. Research from the Nielsen Norman Group shows that feature bloat directly correlates with decreased user satisfaction, and other industry experts agree. Jared Spool calls it "experience rot," highlighting the complexity creep and user experience decline that inevitably occur when teams add features without ruthless prioritization.

Technical debt accumulation: Low-impact features still need maintenance, bug fixes, and updates. They create dependencies that slow down future development and increase the risk of breaking changes.

Low-impact features don’t just waste resources; they actively prevent you from building high-impact ones.

Consider a hypothetical case of a B2B SaaS platform that spent six months building an advanced scheduling feature requested by their largest enterprise client. The feature worked beautifully for that one client, but sat unused by 98% of their user base. Meanwhile, their core product suffered a 60% drop-off rate during onboarding. This was a fixable problem that could have doubled their conversion rate.

The real kicker? That scheduling feature became a maintenance burden, requiring updates every time they changed their core platform. What started as a “quick win” became an ongoing resource drain.

Warning signs you’re in the feature bloat trap

It isn’t always easy to identify if and when you’re prioritizing low-impact features. Here are some of the common red flags that might make you think twice about how you’re building your roadmap:

  • Lack of data: Decisions based on gut feeling rather than data-driven insights can easily lead to prioritizing the wrong things.
  • The squeaky wheel syndrome: Your roadmap is driven by whoever complains loudest, not by what data shows you should build.
  • Internal politics: Sometimes, features are prioritized based on the influence of certain stakeholders rather than their actual value to the user or the business.
  • Fear of risk: High-impact features often involve more risk and uncertainty. Teams might opt for safer, less impactful options to avoid potential failures.
  • Shiny object syndrome: New feature ideas consistently trump optimization of existing functionality, as the allure of new and trendy features overshadows the importance of addressing core user needs.
  • Short-term focus: A focus on immediate gains can lead to neglecting long-term strategic goals and prioritizing quick wins over sustainable growth.
  • The metrics disconnect: You can’t clearly articulate how each planned feature connects to business outcomes like revenue, retention, or user satisfaction.
  • Poor prioritization framework: Without a clear and consistent framework for evaluating and prioritizing features, it’s easy to fall into the trap of prioritizing the wrong things.
  • The “just one more thing” mentality: Features keep getting added to releases because they seem small and easy.

The longer your team functions in the trap of any of these situations, the harder it is to change the behavior. So, if this resonates, try to get your team on board to shift behavior and implement some of the strategies we outline below.

A better way: Data-driven prioritization

The solution isn’t to stop building features, it’s to build and optimize the right ones. This means establishing clear criteria for what constitutes “high-impact” before you write a single line of code.

Start with the outcome, not the output

Instead of asking “What features should we build?” ask “What user behaviors drive business growth, and how can we encourage more of them?”

Implement continuous user research

Don’t just collect feature requests, use them as an opportunity to understand the underlying problems. Continuous research that includes things like regular user interviews, behavioral analytics, and feedback loops can help you distinguish between what users say they want and what actually drives value.

Continuous research also allows you to test assumptions before implementation. Including rapid testing in your workflow can help you get fast, early feedback on concepts from real users for better direction.

Let the data guide decisions

Base your prioritization decisions on data from user research, analytics, and market analysis so that you can focus on what users truly need and what will drive the most significant impact.

Use prioritization frameworks consistently

Tools like RICE (Reach, Impact, Confidence, Effort) or the ICE (Impact, Confidence, Ease) scoring model help you compare feature ideas objectively. The specific framework matters less than using one consistently.

At The Good, we use the ADVIS’R Prioritization Framework™ to guide our optimization strategy.
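
To make the comparison concrete, here's a minimal sketch of RICE scoring in Python. The feature names and estimates are hypothetical placeholders, not the output of any real prioritization exercise:

```python
# Minimal RICE scoring sketch: score = (reach * impact * confidence) / effort.
# All feature names and estimates below are hypothetical placeholders.
features = [
    # (name, reach per quarter, impact 0.25-3, confidence 0-1, effort in person-months)
    ("Streamline onboarding", 4000, 2.0, 0.8, 3),
    ("Advanced reporting",     600, 1.0, 0.5, 5),
    ("SSO integration",       1500, 1.5, 0.7, 2),
]

def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Rank features by descending RICE score.
for name, *inputs in sorted(features, key=lambda f: rice(*f[1:]), reverse=True):
    print(f"{name}: RICE = {rice(*inputs):,.0f}")
```

Whatever framework you pick, encoding it in a shared script or spreadsheet keeps scoring consistent across the team.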

Measure everything

For every feature you build, define success metrics upfront. If you can’t measure whether a feature is working, you can’t determine if it’s worth the investment.

Consider the indirect impact

Sometimes, a feature might not directly impact a North Star metric but could have a significant indirect impact. For example, improving the onboarding experience might not immediately increase conversion rates but could lead to higher user retention and lifetime value in the long run.

Focus on your most valuable users

Part of building and optimizing the right features means understanding your users. If you haven’t, conduct a step-by-step user segmentation study to help identify your highest-value users. Then you can tailor feature prioritization and optimization to their use case before moving on to other segments. A feature that’s high-impact for one segment might be low-impact for another.

Embrace the power of “no”

The most successful product teams are ruthless about saying no to good ideas so they can say yes to great ones. Create explicit criteria for what doesn’t make the cut. It’s okay to say “no” to features that don’t align with your strategic goals or offer significant value.

Moving beyond the feature bloat factory

Breaking free from the low-impact feature trap requires discipline, but the payoff is substantial. Companies that master prioritization don’t just build better products; they build products faster, with fewer resources, and with much better business outcomes.

The goal isn’t to build everything your users request. It’s to understand what truly drives value and relentlessly focus on that.

Your roadmap should be a strategic weapon, not a wishlist. Every feature should earn its place through clear evidence that it will move the metrics that matter.

Stop building features. Start building value.

Struggling to identify which features truly drive growth? Our Digital Experience Optimization Program™ helps SaaS companies cut through the noise and focus on changes that move the needle.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post The Biggest Roadmap Mistake: Prioritizing Low-Impact Features appeared first on The Good.

How to Identify Your Most Valuable User Segments and Prioritize Accordingly https://thegood.com/insights/user-segments/ Thu, 01 May 2025 05:24:04 +0000 https://thegood.com/?post_type=insights&p=110491

Have you ever heard of the Pareto Principle? Even if the name doesn’t ring a bell, you’re likely familiar with the premise that 80% of revenue comes from 20% of customers.

Despite this being a proven economic model, companies rarely focus their effort on that 20%.

It's not because they don't want to; it's because it's easy to get so wrapped up in not losing a single sale that you spread yourself too thin.

If you focus your energy and product improvements on the highest-value user segment, you will see greater returns for less work.

In this article, we’re sharing the study we recently ran for a client that helped us identify their most valuable user segments and prioritize improvements to meet their needs.

What are user segments?

User segments are groups within a customer base who share similar characteristics, behaviors, or values.

They are created with user segmentation, which researches those commonalities and divides your audience into distinct groups. You can then tailor experiences, personalize messaging, and focus optimization efforts on their specific needs.

Common types of user segments

User segments can be divided based on different traits. The type of segmentation you use will vary based on your use case and goals. Here is a quick overview of common user segments.

  • Demographic: Segments users by age, gender, income, education, etc. Example use case: targeting campaigns for specific roles.
  • Firmographic: Segments by company size, industry, revenue, or location. Example use case: tailoring features for SMBs vs. enterprise.
  • Behavioral: Based on how users interact with your product, such as product usage, feature adoption, or login frequency. Example use case: identifying power users or at-risk users.
  • Technographic: Segments by technology stack, device, browser, or OS. Example use case: prioritizing integrations or support.
  • Needs-based: Segments by specific problems or needs. Example use case: customizing messaging for value drivers.
  • Value-based: Groups by economic value (annual recurring revenue, lifetime value, subscription tier). Example use case: prioritizing high-revenue customers.
  • Lifecycle stage: Segments by user journey stage (trial, active, churn risk, etc.). Example use case: triggering onboarding or win-back flows.
  • RFM (Recency, Frequency, Monetary): Groups based on most recent activity, engagement frequency, and spend. Example use case: identifying loyal or dormant users.
  • Acquisition: Based on the marketing channel or campaign source. Example use case: tailoring messaging or personalizing the experience.
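
To make one of those types concrete, here's a minimal sketch of RFM scoring with pandas. The column names and three-bucket scoring are assumptions to adapt to your own data model:

```python
import pandas as pd

# Hypothetical per-user activity rollup; swap in your own export.
df = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "days_since_last_login": [2, 45, 10, 90, 5, 30],
    "logins_last_90_days": [60, 4, 25, 1, 80, 12],
    "revenue_last_90_days": [900, 120, 400, 50, 1500, 250],
})

# Score each dimension 1-3 (3 = best); recency is inverted because fewer
# days since last login is better. rank() avoids duplicate-bin-edge errors.
df["R"] = pd.qcut(df["days_since_last_login"].rank(method="first"), 3, labels=[3, 2, 1]).astype(int)
df["F"] = pd.qcut(df["logins_last_90_days"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
df["M"] = pd.qcut(df["revenue_last_90_days"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)

# "333" = recently active, frequent, high-spend; "111" = dormant.
df["rfm"] = df["R"].astype(str) + df["F"].astype(str) + df["M"].astype(str)
print(df[["user_id", "R", "F", "M", "rfm"]])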

Why companies optimize for the wrong segments

When we run prioritization exercises, one of the most common mistakes we see is companies prioritizing user segments based on volume alone. If a segment has more users, they automatically believe it deserves more attention.

This reflects one of the three common prioritization mistakes:

  1. Volume bias: Prioritizing segments with the most users rather than the most value
  2. Squeaky wheel focus: Optimizing for the users who complain the loudest
  3. Recency fallacy: Focusing on the latest acquisition channel or user cohort without evaluating their actual value

The uncomfortable truth is that your most valuable segments may not be your largest, your loudest, or your newest.

Conducting a segmentation study step by step

At The Good, we’ve developed a systematic approach to identify and prioritize your most valuable user segments. Here’s how it works.

Step 1: Set your goals

Before you start analyzing data, segmenting users, and prioritizing, you need a clear understanding of your project goals. In most cases, they will look something like this:

  • Identify and quantify subsets of user segments based on use cases
  • Understand the potential value of known segments
  • Identify features and benefits that are most important on a per-segment basis
  • Find opportunities to improve the engagement of high-value users

These goals can be turned into the key research questions of your study.

Step 2: Identify valuable behaviors beyond revenue

Your most valuable user segments, of course, need to drive revenue, but there are other indicators to consider when prioritizing who you are building/optimizing for.

Current value metrics, future value indicators, influence value, and cost-to-serve factors will all influence the overall value of a user segment.

  • Current value metrics: Revenue generated, subscription tier, feature usage, team size
  • Future value indicators: Growth trajectory, expansion potential
  • Influence value: Referral behavior, advocacy impact
  • Cost-to-serve factors: Support requirements, implementation complexity, churn risk, acquisition cost

Identifying and tracking these metrics and scoring segments based on this information will help paint a more holistic picture of value. Some segments might not be your biggest revenue drivers today, but they represent significant future opportunities, so you may choose to optimize for them instead of your current biggest spenders.
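
One way to combine those dimensions is a simple composite score. This is a hedged sketch only: the segment names, normalized metric values, and weights are all hypothetical and should be tuned to your business:

```python
# Hypothetical composite value score per segment. Inputs are pre-normalized
# to 0-1 (e.g., against the best-performing segment); weights are placeholders.
segments = {
    "Mid-market ops teams": {"current": 0.9, "future": 0.6, "influence": 0.7, "cost_to_serve": 0.3},
    "SMB starters":         {"current": 0.4, "future": 0.9, "influence": 0.5, "cost_to_serve": 0.2},
    "Enterprise pilots":    {"current": 0.7, "future": 0.5, "influence": 0.4, "cost_to_serve": 0.8},
}

W = {"current": 0.4, "future": 0.3, "influence": 0.2, "cost_to_serve": 0.1}

def value_score(m):
    # Cost-to-serve counts against a segment, so it is subtracted.
    return (W["current"] * m["current"] + W["future"] * m["future"]
            + W["influence"] * m["influence"] - W["cost_to_serve"] * m["cost_to_serve"])

for name, metrics in sorted(segments.items(), key=lambda kv: value_score(kv[1]), reverse=True):
    print(f"{name}: {value_score(metrics):.2f}")
```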

Step 3: Collect qualitative and quantitative data

Once you’re clear on goals and value metrics, you’re ready to start collecting data for your segmentation analysis. Gathering a multidimensional data set will help you better understand users as the complex humans they are. Types of data that will help your analysis will include:

  • Usage patterns: Frequency, features used, time spent in the product
  • Transactional data: Revenue contribution, plan type, upgrade/downgrade history
  • Behavioral signals: Engagement with key activation points, referral behavior
  • Acquisition source: Channel origin, customer acquisition cost, time to convert
  • Demographic/firmographic data: Company size, industry, role

Most of this data will be sourced from your main quantitative collection tool, such as Google Analytics or your product analytics. But for a truly effective study, you need to supplement all this information with qualitative context. Surveys, session recordings, or user tests can help you better understand why your users are doing what they do.

Step 4: Conduct factor analysis to identify value drivers

Group your data together into a reduced number of independent factors that represent the underlying themes within the dataset. This will help identify value drivers that differentiate your user segments.

For example, in a recent segmentation project, we discovered distinct value factors that formed natural segment groupings:

  • Efficiency seekers: Primarily valued time savings and streamlined workflows
  • Integration power users: Heavily utilized connections to other tools in their stack
  • Data-driven optimizers: Focused on analytics and performance insights
  • Scale-focused operators: Needed enterprise features and team collaboration

Understanding these value drivers helps you move beyond simple demographic segmentation to truly understand what motivates different user groups.
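
If you work in Python, this step might look something like the sketch below, which uses scikit-learn's FactorAnalysis. The survey items and response data are hypothetical stand-ins for a real study:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical 1-7 survey ratings: rows = respondents, columns = items.
items = ["time_savings", "workflows", "integrations", "analytics", "reporting", "collaboration"]
rng = np.random.default_rng(0)
X = rng.integers(1, 8, size=(200, len(items))).astype(float)

# Standardize, then extract a small number of latent factors.
X_std = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=2, random_state=0).fit(X_std)

# Loadings show which items move together -- candidate "value drivers".
for i, loadings in enumerate(fa.components_):
    top = sorted(zip(items, loadings), key=lambda t: abs(t[1]), reverse=True)[:3]
    print(f"Factor {i + 1}:", [(name, round(w, 2)) for name, w in top])
```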

Step 5: Apply cluster analysis to form actionable segments

Once you’ve identified the key value drivers, use cluster analysis to group users with similar characteristics. Usually, 3-7 distinct segments emerge from the exercise.

These segments often cross traditional demographic lines, revealing unexpected patterns. For example, power users might not be enterprise customers as you assumed, but mid-market companies with specific workflow needs.

This is also the time to start looking for natural clusters of behavior that indicate high-value segments. When you're analyzing user clusters, look for key differentiators like the following (a minimal code sketch follows the list):

  1. Usage frequency: Daily users vs. weekly vs. monthly
  2. Feature utilization: Which user flows are most common for each segment
  3. Value perception: What features each segment values most highly
  4. Growth potential: Which segments show increasing usage over time
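
Continuing the sketch from the previous step, cluster analysis might look like this. The feature matrix is again a hypothetical stand-in, and silhouette score is just one common heuristic for choosing the number of clusters:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Hypothetical behavioral matrix: rows = users, columns = usage metrics.
rng = np.random.default_rng(0)
X_std = StandardScaler().fit_transform(rng.random((200, 6)))

# Try the typical 3-7 segment range and keep the best-separated solution.
best_k, best_score = None, -1.0
for k in range(3, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_std)
    score = silhouette_score(X_std, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"{best_k} segments look most distinct (silhouette = {best_score:.2f})")
```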

Step 6: Quantify segment value and opportunity size

The inputs from your data, factor, and cluster analyses will produce your high-value segments as outputs.

Here's an example of that workflow so far: the data collected (survey themes on habits, values, and use cases) served as inputs to the factor and cluster analyses, which produced segments based on frequency of product use, customer values, and reason for use.

An example of the workflow to quantify segment value and opportunity size.

For each potential high-value segment, revisit the value metrics you established in step 2 of the process. Calculate the relevant metrics to ensure you’re not just following hunches but making data-backed decisions about where to focus.

The most valuable segments often show strength across multiple metrics, not just in current revenue. For example, a segment with moderate current revenue but excellent retention and high referral rates may be more valuable than a high-revenue segment with poor retention.

You’ll also start to see how your most valuable segments differ from your hypotheses. Maybe it’s not defined by company size but by a specific usage pattern. As a specific example, imagine users who perform at least 3 exports per week AND invite 2+ team members within the first 30 days are 4.5x more likely to upgrade to the enterprise tier within 6 months.

This kind of insight could transform your priorities, focusing on making these specific actions easier and more intuitive, rather than spending time/money on creating new features for other segments.
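
Surfacing that kind of pattern from product analytics can be a few lines of pandas. The column names, thresholds, and toy numbers below are all hypothetical:

```python
import pandas as pd

# Hypothetical per-user rollup from product analytics (toy numbers).
users = pd.DataFrame({
    "exports_per_week":    [5, 1, 4, 0, 7, 2, 3, 6],
    "invites_first_30d":   [3, 0, 2, 1, 4, 0, 2, 5],
    "upgraded_within_6mo": [1, 0, 1, 1, 1, 0, 0, 1],
})

# Flag the hypothesized high-signal behavior pattern.
flag = (users["exports_per_week"] >= 3) & (users["invites_first_30d"] >= 2)

rate_flagged = users.loc[flag, "upgraded_within_6mo"].mean()
rate_rest = users.loc[~flag, "upgraded_within_6mo"].mean()
print(f"Upgrade rate, high-signal cohort: {rate_flagged:.0%}")
print(f"Upgrade rate, everyone else:      {rate_rest:.0%}")
print(f"Lift: {rate_flagged / rate_rest:.1f}x")
```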

Step 7: Map segments to specific opportunities

The final step is to leverage your knowledge about high-value users to focus optimization efforts. Now, you can connect your segment analysis to concrete optimization opportunities. A few thought starters for this process:

  1. What actions correlate with long-term success for this segment?
  2. Where do users in this segment typically struggle?
  3. What capabilities does this segment need but doesn’t have?
  4. What value propositions connect most strongly to this segment?

You’ll end up with a list of optimization opportunities. To prioritize those efforts and start building a roadmap, we recommend scoring them across these dimensions on a 1-10 scale, then calculating a weighted score that reflects your company’s specific situation and constraints.

  1. Potential revenue impact: How much additional revenue could optimizing for this segment generate?
  2. Implementation effort: How difficult would it be to implement changes for this segment?
  3. Time to results: How quickly can you expect to see meaningful outcomes?
  4. Strategic alignment: How well does focusing on this segment align with your long-term business strategy?

For example, if you’re under pressure to show quick wins, you might weigh “time to results” more heavily. If you’re planning for long-term growth, strategic alignment might carry more weight.
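
Here's a minimal sketch of that weighted scoring. The dimensions mirror the list above, but the weights and scores are hypothetical; note that effort-style dimensions should be scored so that higher is better:

```python
# Weights reflect a company under pressure for quick wins ("time to results"
# weighted heaviest). Scores are 1-10; "effort" is scored so 10 = least effort.
weights = {"revenue_impact": 0.30, "effort": 0.20, "time_to_results": 0.35, "strategic_fit": 0.15}

opportunities = {
    "Simplify onboarding flow": {"revenue_impact": 9, "effort": 6, "time_to_results": 8, "strategic_fit": 7},
    "Enterprise collaboration": {"revenue_impact": 7, "effort": 3, "time_to_results": 4, "strategic_fit": 9},
}

def weighted_score(scores):
    return sum(weights[dim] * value for dim, value in scores.items())

for name, scores in sorted(opportunities.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.1f} / 10")
```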

This will be the start of your roadmap for optimization efforts, ensuring that you focus resources on the right opportunities for your most valuable segments.

Focus on your highest-value segments first, then gradually expand your optimization efforts to secondary segments once you’ve captured the initial value. Always consider potential cross-segment impacts when making changes.

Drive growth with user segmentation and prioritization

As your product and market evolve, so will your user segments. What constitutes a high-value segment today may shift as you introduce new features or enter new markets.

We recommend evaluating your user segments quarterly, with a more comprehensive review annually or whenever you experience significant business changes.

Remember, the path to scaling your SaaS business isn’t through trying to please everyone with generic optimizations. It’s through deeply understanding which user segments create the most value and deliberately focusing your limited resources on enhancing their experience.

Ready to identify and prioritize your most valuable user segments? The Good’s Digital Experience Optimization Program™ can help you discover untapped growth opportunities through expert research, strategic insight, and data-driven experimentation. Contact us to learn more about how our team can help your SaaS business scale faster.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post How to Identify Your Most Valuable User Segments and Prioritize Accordingly appeared first on The Good.

How To Cut Test Cycle Time In Half Without Losing Out On User Insights https://thegood.com/insights/test-cycle-time/ Sat, 19 Apr 2025 06:57:38 +0000 https://thegood.com/?post_type=insights&p=110468

Experiment-led growth has been quickly absorbed into the SaaS vernacular. The approach complements product-led growth and puts systematic experimentation at the core of decision-making and growth initiatives.

Product leaders have seen firsthand how testing changes with real users before full implementation can minimize risk and maximize ROI. Booking.com, Netflix, and Amazon have made experiment-led growth central to their success by running thousands of experiments annually to optimize UX.

But teams that don't have the time, resources, or traffic of the industry leaders might find it difficult to take on the practice effectively.

In this article, we’re sharing how SaaS companies between product-market fit and scale can cut down on test cycle time to make adoption of experiment-led growth practices more accessible.

What is test cycle time?

In optimization, test cycle time is the full length of time taken to complete an experiment. It includes all phases, from ideation to planning, through execution, and analysis. It’s an important metric for measuring the efficiency of testing programs and identifying bottlenecks in the experimentation workflow.

How slow test cycles are holding you back

Slow testing cycles aren't just annoying; they can create real problems for your business and, as we mentioned, hold you back from the benefits of experiment-led growth.

We’ve seen firsthand with clients how test cycle delays can mean months between identifying an issue and implementing a solution. That’s simply too long in today’s market.

Here are a few of the high costs you might be paying for a slow test cycle:

  • Market opportunities slip away while waiting for test results
  • Competitors gain ground during your lengthy testing phases
  • Development resources get tied up in prolonged testing initiatives
  • Customer frustration builds as issues remain unfixed
  • Decision fatigue sets in as teams debate what to test next

Significantly reduce test cycle time with these tips

So, what should you do about it? We reached into our tool chest and pulled together the strategies that have worked for our clients and trusted partners to reduce test cycle time.

1. Supplement A/B tests with rapid tests and qualitative research

A/B testing has its place in product optimization, but there are plenty of ways it fails.

Regulatory challenges, low traffic, time constraints, and other issues make A/B testing an at times untenable solution for validating changes to their website or app.

In some companies, by the time a test idea passes through all the bureaucratic hoops and oversight, it's no longer lean enough to justify testing. Without an alternative testing method, teams are left without any data at all. In these instances, A/B testing is just too clunky to make sense.

But, there is a great alternative solution that can provide insights when A/B testing isn’t an option. Using targeted qualitative methods like rapid testing helps overcome the challenges mentioned and builds confidence in changes within days instead of weeks.

Gabrielle Nouhra, a software industry Director of Product Marketing who leverages The Good for research and experimentation support, says this about rapid testing:

“The speed at which we obtain actionable findings has been impressive. We are receiving rapid results within weeks and taking immediate action based on the findings.”

How it works:

  • Recruit users from your target segment
  • Choose the right rapid testing method and conduct tests on specific features or flows
  • Analyze patterns immediately rather than waiting for statistical significance
  • Implement clear findings quickly while planning more extensive testing only where truly needed

In addition to its efficiency and its ability to overcome regulatory challenges, rapid testing also brings qualitative feedback and the voice of the consumer into your work.

For more on rapid testing and how it differs from A/B testing, check out this deep dive.

2. Run parallel tests

Rather than testing one hypothesis at a time, well-thought-out testing programs can run multiple tests simultaneously across different parts of the product experience.

This can be done with both A/B tests and rapid tests.

The most important consideration for running concurrent tests is to differentiate. Our director of UX and Strategy, Natalie Thomas, says:

“It’s important to look at behavior goals to assess why your metrics improved after a series of tests. So if you’re running too many similar tests at once, it will be difficult to pinpoint and assess exactly which test led to the positive result.”

To make sure parallel testing is done correctly, there are a few tips that can help:

  • Create a testing roadmap that covers independent areas of your product
  • Build small, cross-functional testing teams assigned to each area
  • Use a centralized dashboard to track all ongoing tests
  • Establish clear handoff processes so development teams can implement findings quickly

When done well, parallel testing can double or triple your insight velocity without requiring additional resources.

3. Prioritize high-impact tests

Not all tests require the same level of rigor. Smart SaaS companies create prioritization systems that allow critical tests to move faster and, at minimum, reduce test cycle time for the most important experiments.

To create a proverbial fast lane for high-impact tests, there are tons of prioritization frameworks you can pull inspiration from (PIE, ICE, etc.). At The Good, we use the ADVIS'R framework, incorporating binary evaluations and stoplight scoring. We like that it encourages strategic thinking and accounts for both business value and user experience needs in the prioritization schema.

ADVIS'R prioritization framework for planning test priority by The Good

While this is our preferred method, there is no “one size fits all” prioritization method. Regardless of the framework you choose, there are a few important steps to making sure you work on the most important hypotheses first.

  • Develop clear criteria for what qualifies as “high impact”
  • Create simplified approval processes for these tests
  • Allocate dedicated resources to fast-track implementation
  • Accept slightly higher uncertainty in exchange for speed, where appropriate

Kalah Arsenault, Autodesk’s Marketing Optimization Lead, implemented a custom prioritization calculator that allowed her team to rank test priority on business impact, level of effort, and urgency. The result was 2x the testing volume.

“We were able to double the amount of tests our team took on within one year. So, compared to last year, we doubled the volume of testing with a new operating and prioritization model.”

4. Adopt elements of modular testing

Another more technical way to reduce test cycle time is to adopt modular testing.

Traditional design or content tests start from scratch each time. Modular approaches reuse components, dramatically reducing setup time and saving on potential design and development costs.

To get through the testing cycle more efficiently with a modular approach, here are a few tips:

  • Create pre-built design templates for common test scenarios (onboarding, checkout, feature adoption)
  • If working on qualitative research, standardize recruitment criteria and screening questions for your ICP, and tweak based on the hypothesis and context of each specific test
  • Once you get results, leverage analysis frameworks that speed up insight generation
  • Implement consistent plug-and-play reporting formats that make decision-making faster

5. Leverage AI as a research assistant

AI can be a great tool to support you in quicker testing analysis and to point you to the research you should be taking a closer look at.

Tools can analyze user session recordings, heatmaps, and customer feedback at scale, identifying patterns that may inspire the A/B or rapid tests you’d like to run to validate changes.

Some practical applications of AI-assisted analysis include:

  • Automatically categorizing user feedback into actionable themes
  • Identifying anomalies in user behavior that warrant immediate attention
  • Generating preliminary insights that researchers can validate quickly
  • Transforming qualitative data into quantifiable metrics

Companies using AI-assisted analysis can move from question to answer much faster by quickly identifying common trends in data or research that require further investigation by an expert researcher and/or feedback from real users.
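
As a grounded illustration of the first application above, here's a minimal keyword-based theme tagger. The themes and keywords are hypothetical, and production pipelines often rely on embeddings or an LLM rather than keyword matching:

```python
# Minimal keyword-based theme tagger. Themes and keywords are hypothetical;
# production pipelines often use embeddings or an LLM instead.
THEMES = {
    "pricing":     ["price", "expensive", "cost", "billing"],
    "performance": ["slow", "lag", "crash", "timeout"],
    "onboarding":  ["confusing", "setup", "getting started", "tutorial"],
}

def tag_feedback(text):
    lowered = text.lower()
    matches = [theme for theme, words in THEMES.items()
               if any(word in lowered for word in words)]
    return matches or ["uncategorized"]

feedback = [
    "The setup was confusing and the tutorial didn't help.",
    "App keeps crashing on export.",
    "Love it, but the billing surprised me.",
]
for note in feedback:
    print(tag_feedback(note), "->", note)
```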

Speeding up your test cycle means better products and faster growth

There is a big caveat to keep in mind with all of this: there are tradeoffs when you prioritize speed.

A faster testing cycle means more room for errors and statistically insignificant results. While precautions can be taken, there are pros and cons to whatever approach you choose.

For some organizations, a slow testing cycle that reaches statistical significance is worth it for a high-risk change. This may mean they’re fine with a custom, one-test-at-a-time approach.

Also, speeding up test cycles takes a certain level of expertise and nuance. You can’t just ask users what they prefer and call it a validated improvement.

To make these changes well, it takes some training, but it’s worth the investment.

SaaS companies that successfully implement these approaches can see benefits like:

  • Accelerated product improvement cycles
  • Higher customer satisfaction and retention
  • More efficient use of development resources
  • Competitive advantage through faster innovation
  • Improved team morale as they see results from their work more quickly

If you aren’t sure how to get started, let’s talk. We can help you accelerate your product improvement cycle without sacrificing the quality of insights.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post How To Cut Test Cycle Time In Half Without Losing Out On User Insights appeared first on The Good.

How to Make the Move From Intuition-led to Data-driven https://thegood.com/insights/intuition-led-to-data-driven/ Fri, 28 Mar 2025 21:10:55 +0000 https://thegood.com/?post_type=insights&p=110423

If your bookshelf looks anything like mine, I don't have to extol the virtues of data-driven practices to you. Case studies from HBR have shown that A/B testing increased revenue at Bing by 10-25% each year, and companies that used data to drive decisions were best-positioned to navigate the COVID-19 crisis. But while 83% of CEOs want a data-driven organization, the reality is that many organizations are still largely intuition-run. In those contexts, it takes more than a compelling argument to turn the tide.

If you're spearheading the shift from an intuition-led to a data-driven practice, it can be an uphill battle, and a lonely one at that. We spoke with Hanna Grevelius, CPO at Golf Gamebook and advisor, and Maggie Paveza, Senior Digital Strategist at The Good, about how they've navigated data-imperfect conditions throughout their careers and successfully advocated for data-first principles.

Whether you’re working with limited data or as your company’s first A/B testing specialist, their stories make one thing clear: doing it alone doesn’t have to be so daunting.

Keep reading to hear about:

  • How they learned to work with data
  • How to leverage data to build prioritization intuition
  • When guessing is appropriate
  • How to be an advocate for data-first practices

1. It’s OK to learn on the job

For those with only a passable knowledge of statistics, it can seem intimidating to dive headfirst into data-driven decision making. But it doesn’t take a data science degree to be able to act on good data. In fact, few teams employ full-time analysts at early stages of growth. Most teams get by early on with the skills of a few generalists, who, it turns out, often learn on the job.

“Quantitative methods are something that I’ve learned in my career,” says Maggie Paveza, Senior Digital Strategist at The Good. Having previously worked as a UX Researcher at Usertesting.com, Maggie started with a strong foundation in qualitative research before adding quantitative methods to her toolkit, which she says helps her tell a fuller story. “The qualitative research forms the why; the quantitative research forms the what.”

For Hanna Grevelius, CPO at Golf Gamebook, her relationship with data started through close collaboration with product managers.

“My role when I started was in support, answering customer support emails. In trying to understand the scalability of issues, I got to work and talk a lot to product managers who really helped me understand we need to look at the data to know: is it one person who experienced the bug? Is it from a specific version of the app? Is it related to the device or operating system they were on?”

Hanna says learning how to dig for data helped her contextualize customer pain. And through that practice, she built the skills necessary to transition into Product Management. “It was through support that I started to understand that we should look into the data, then eventually I moved over to work on Product Management.”

When she added A/B testing to her toolkit, that took her passion for data to a whole new level.

“It’s so clear when you A/B test that even a small change can have a big impact. When you start seeing the difference, that really sparks an interest.”

2. Use data to define your focus

Once Hanna could confidently dive into the data, she started to use it in her practice, evaluating where traffic hits the app most frequently and focusing on those high-value, high-traffic areas first. This exercise in opportunity sizing taught her that it’s OK to shift focus in light of new data.

Maggie takes a similar approach to prioritization. She uses traffic data to understand which areas of a site or app are highly trafficked, and before proposing a test, she always verifies that an A/B test would reach significance within an acceptable amount of time.

“We rely on prioritization methodologies to understand if running a test in an area would have a significant revenue impact and whether an A/B test would reach significance in a matter of weeks or longer.”
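
If you want to run the same sanity check yourself, a back-of-the-napkin version is easy to script. Here’s a minimal sketch in Python (the traffic and conversion numbers are hypothetical) that estimates how long a 50/50 A/B test would need to run to detect a given lift, using the standard two-proportion sample-size approximation:

```python
import math

def ab_test_duration_weeks(baseline_rate, relative_lift, daily_visitors,
                           z_alpha=1.96, z_beta=0.84):
    """Rough weeks for a 50/50 A/B test to detect a lift (alpha=0.05, power=0.80)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    # Standard two-proportion sample-size approximation, per variant
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return (2 * n) / daily_visitors / 7  # both variants share the daily traffic

# Hypothetical inputs: 2% baseline conversion, detecting a 10% relative lift,
# on a page with 1,500 visitors a day. Prints roughly 15 weeks.
print(f"{ab_test_duration_weeks(0.02, 0.10, 1500):.1f} weeks")
```

If the answer comes back in months rather than weeks, that’s a signal to test somewhere with more traffic, or to reach for a rapid testing method instead.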

If you’re just starting out with a new property, Maggie and Hanna both suggest building a foundational understanding of traffic patterns and regularly refining your strategy. Priorities often shift as a result.

3. In the absence of data, start with a guess

One valuable skill that came later in their careers was understanding the value of a lead. Boosting form fills can feel invigorating, but without an understanding of what portion of that audience might become a deal later, it’s hard to know if your work is making a difference. Assigning a dollar amount to a lead is a powerful tool to evaluate your performance.

But if you’re joining an organization without mature data practices, leads often have no value assigned. And without institutional knowledge, it can be intimidating to make a guesstimate. But to Hanna, it’s worth starting with a guess to set initial priorities.

Hanna advises using a rough calculation to estimate the value of a metric (with things like average deal value and percent of pipeline that converts), which can help you get an early read.

“Over time, you can start adjusting it higher or lower. But trying to put a value on it and making decisions based on that is the best way to still work in a data-driven way even when you don’t have all the answers.”
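
To make that concrete, here’s a minimal sketch of the kind of rough calculation Hanna describes. Every input below is a placeholder estimate, not real data; label your own numbers as estimates whenever you report on them:

```python
# Back-of-the-napkin lead valuation. All inputs are assumptions you would
# replace with your own rough figures (and revise over time).
avg_deal_value = 12_000      # assumed average closed-won deal ($)
lead_to_opp_rate = 0.20      # assumed share of leads that become opportunities
opp_to_close_rate = 0.25     # assumed share of opportunities that close

estimated_lead_value = avg_deal_value * lead_to_opp_rate * opp_to_close_rate
print(f"Estimated value per lead: ${estimated_lead_value:,.0f}")  # -> $600
```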

Hanna warns that an estimate is just that, and that staying above board about where the data comes from is key to retaining trust.

“What’s really important in that estimation reporting is that you’re always super clear that you’re estimating—that it could be a lot higher and a lot lower, because if you start making critical budget decisions on it, you can end up in a dangerous situation.”

4. Be the change you want to see

For those who know the clarity that data can bring to the decision-making process, working within a data-poor organization can be challenging. But Hanna says it’s fairly easy to lead others to data advocacy, even if you’re not in the C-suite. “Most people nowadays want to be data-driven,” Hanna says. In her opinion, it doesn’t take a fancy title to turn others into advocates.

“If you are working in an org where you are the only person who is responsible for testing, the best thing you can do is try to spread that knowledge. Get others involved and help them feel a sense of ownership. Try to make it so that you’re not the only one who cares about A/B testing and being data-driven.”

In order to build stewardship throughout the organization, Hanna’s advice is to share your reasoning: walk colleagues through the potential upside of testing, and the risks of not testing. “That can help people who are not so interested in testing to be a bit more curious and to want to understand.”

In Hanna’s experience, your passion can be quite contagious. “Data and testing, it opens up a world that is so fun.”

As for how she does it, Hanna shares her excitement by showing rather than telling. “As soon as you have the test going, share a bit of the data early on,” she says. Rather than being cagey about how inaccurate early test data is, she uses it as a teaching moment.

“All of us who work in the testing space know that data from one day or three days is probably going to be completely wrong, and you can say that also. But show it to that person. Show that ‘this is super early, we have no idea if this is going to be correct or not, or statistically significant, but after one day this is what it looks like.’”

And of course, once you run successful tests down the line, Maggie’s experience tells her that there is nothing more powerful than sharing a win with your team.

Artfully navigating the shift

Advocating for data-driven decision-making in intuition-led companies isn’t always easy, but it’s a challenge worth taking on.

As Maggie and Hanna’s experiences show, starting small, whether by learning on the job, prioritizing based on data, making informed estimates, or sharing early insights, can lead to big shifts in mindset.

By fostering curiosity and collaboration, you can help transform your organization’s approach to decision-making, making data a natural and valued part of the process.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

You Launched A New Website; What’s Next? https://thegood.com/insights/you-launched-a-new-website-whats-next/ Fri, 14 Mar 2025 19:04:26 +0000

Launching a redesigned or re-platformed website feels like crossing the finish line of a marathon. Months, sometimes years, of hard work are finally coming to fruition. You’re dreaming of the rest you’ll be able to enjoy now that the project is complete.

But you aren’t crossing a finish line; you’re summiting a mountain. Reaching the peak feels like completion, but experienced climbers know that’s only halfway. The descent is equally challenging and requires different skills and focus. In the same way, optimization requires a different approach than a website launch.

You’ve successfully achieved a huge accomplishment, but you’re only partway through. You still have to climb down the mountain or, in this case, optimize your website based on user feedback.

In our decades in business, we have discovered that the most successful companies view launch day as “day one” of an optimization journey. If you want to do the same, this is the playbook to follow to maximize ROI on redesign investments.

The benefits of optimization post-website launch

By leveraging optimization post-launch, you can expect benefits like the following:

  • Opportunities for change identified objectively and quickly
  • Priorities set easily according to potential impact
  • Fewer resources wasted on changes that won’t work
  • Higher ROI, because you get better success rates at a lower cost

Leveraging optimization after launching a website is a high-performing, systematic way of getting better results from your hard work.

In an ideal world, you conducted a data-driven redesign: you used careful measurement of what was actually happening on the site, along with customer feedback, to inform the process.

In this case, you have already conducted user testing and received clear customer feedback on your new site. You’re set up to seamlessly start collecting post-launch data and begin identifying improvement opportunities to build on the momentum you’ve generated.

If you haven’t, don’t fret. You can still reap the rewards of optimizing your site post-launch. As long as you don’t “set it and forget it,” you’re in a much better position than most of your competition.

The best optimization tools post-launch

What do you need to get started?

Our recommended toolkit for optimization post-launch includes:

Quantitative data

  • Google Analytics: Analyze traffic sources, user behavior, and conversion paths. GA is essential for comparing pre/post-launch performance and identifying underperforming segments or pages.
  • Heatmaps: Visual representations of user clicking, scrolling, and movement behavior that reveal which elements attract attention and which are ignored. Allows you to optimize content placement and identify what does/doesn’t resonate.

Qualitative data

  • Usability testing: Structured observation of real users completing key tasks on your new site. Reveals pain points invisible in quantitative data alone and provides direct insight into how customers actually experience your redesigned user journeys.
  • Session recordings: Video captures of actual visitor interactions with your site that expose unexpected navigation patterns and friction points. Helps you identify where users get confused, hesitate, or abandon their journey on specific pages.
  • Customer feedback tools: Direct voice-of-customer collection mechanisms, such as surveys and feedback widgets, that capture qualitative insights about the redesign, highlighting immediate improvement opportunities from your most valuable asset: your customers.

Experimentation

  • Rapid testing: Quickly validate design and content changes without dev support. Get efficient feedback on elements like CTAs, headlines, and pricing or product page components.
  • A/B testing or multivariate testing: Get statistically significant proof to validate riskier design and content changes without full deployment. Test in context and make sure any further website updates will drive target user behavior.

What to do post-website launch step-by-step

With those tools in mind, you’re ready to get started.

Step 1: Rapid response protocol and the basics

The best way to celebrate a website launch is to get your organization together for a post-launch bug-squash-a-thon.

During the first 72 hours post-launch, get cross-functional teams together to hunt for bugs on the new site. Document any technical issues in a central location and use a simple prioritization framework so your devs know what to tackle first.
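
Your framework doesn’t need to be elaborate. Here’s a minimal sketch of one possible triage score (severity times reach, discounted by effort); the fields, weights, and example bugs are all illustrative, not a standard:

```python
# A simple bug-triage score: higher severity and reach raise priority,
# higher effort lowers it. Tune the fields to your own process.
bugs = [
    {"id": "BUG-1", "severity": 3, "reach": 0.80, "effort": 1},  # checkout broken on iOS
    {"id": "BUG-2", "severity": 2, "reach": 0.25, "effort": 2},  # misaligned footer
    {"id": "BUG-3", "severity": 3, "reach": 0.10, "effort": 3},  # rare 500 on search
]

def score(bug):
    return bug["severity"] * bug["reach"] / bug["effort"]

for bug in sorted(bugs, key=score, reverse=True):
    print(bug["id"], round(score(bug), 2))
```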

Implement fixes while the bug hunt is going on so people see the improvements in real time and are motivated to keep searching. Keep in mind that some of these bugs will indicate larger UX issues, so don’t delete them after solving them; keep them available for review later in the week.

Your only other priority in the first three days post-launch is to ensure all of your tooling is set up correctly. Make sure your Google Analytics is tracking the right metrics, get your heatmapping software working on the new site, and be sure any customer support functions (chatbots, etc.) are up and running.

Step 2: Compare pre- and post-launch data

Revisit your baseline metrics to start reviewing pre- and post-launch data. Specifically, compare KPIs before and after the redesign, broken out across top channels and device types, to get a foundational understanding of what is working and where to dig in.

Example key performance indicators to track immediately:

  • Conversion rates by traffic source and device type
  • User engagement metrics (time on site, pages per session)
  • Cart abandonment and checkout completion rates
  • Customer acquisition costs vs. lifetime value

To do this effectively, set up before/after comparison dashboards and circulate them to your team.
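
Before reading too much into any single dip or lift, it’s worth checking that the change is bigger than normal noise. Here’s a minimal sketch, with made-up session and conversion counts, that runs a two-proportion z-test on pre- vs. post-launch conversion rates:

```python
import math

def conversion_change(pre_conv, pre_sessions, post_conv, post_sessions):
    """Compare pre- and post-launch conversion rates with a two-proportion z-test."""
    p1, p2 = pre_conv / pre_sessions, post_conv / post_sessions
    pooled = (pre_conv + post_conv) / (pre_sessions + post_sessions)
    se = math.sqrt(pooled * (1 - pooled) * (1 / pre_sessions + 1 / post_sessions))
    return p1, p2, (p2 - p1) / se

# Hypothetical GA exports: 30 days pre-launch vs. 30 days post-launch
p1, p2, z = conversion_change(pre_conv=840, pre_sessions=42_000,
                              post_conv=910, post_sessions=41_500)
print(f"pre {p1:.2%} -> post {p2:.2%}, z = {z:.2f}")  # |z| > 1.96 is roughly significant at 5%
```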

Layer heatmaps on top of this to understand how performance changed on a page-by-page basis with more context.

Where things aren’t improving, you can spot low-hanging-fruit optimization opportunities and flag areas that need more testing to understand what’s going wrong.

Step 3: Refresh user testing and competitive analysis

Once you have a baseline of data and changes, you’ll want to conduct user research and testing. Have real people use your new site to discover its flaws and give their feedback. If you didn’t conduct testing during the refresh process, you might also find it helpful to have them use both versions of your site (the old and the redesigned) to tell you which they prefer.

Even if you have plenty of user testing data on the old site, it’s crucial to gather impressions of the site update and collect ideas for additional optimization. You can send the site to the same user testers for comparative feedback, and also be sure to get your site in front of new users for an unbiased and completely fresh perspective.

Specifically, conduct usability testing with new users to help uncover accessibility issues and where customers are getting stuck on the path to conversion.

It’s also a good time to study your competitors. What features and systems work well for their digital properties? Their success doesn’t guarantee success for you (even if your audiences overlap perfectly), but it can be a good reminder of what’s going on in the competitive landscape after many months of focusing on your own website.

Step 4: Build an optimization roadmap

Based on your GA data, the heatmaps you’ve set up, the user testing you’ve conducted, and the competitive analysis, you’re ready to start building an optimization roadmap.

A clear roadmap will help align optimization efforts with business objectives, allocate resources for continuous optimization, and generate buy-in from leadership at your organization.

Theme the categories of problems and opportunities that you identified in steps 1-3 (more on this in our article on theme-based roadmaps). Then, prioritize the themes to create a clear 90-day plan of action.
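
If you need a starting point for that prioritization, a simple PIE-style score (potential, importance, ease) works well. The sketch below uses hypothetical themes and 1-10 scores; swap in your own:

```python
# Illustrative PIE-style scoring for roadmap themes. The themes and the
# 1-10 scores are placeholders for your own data.
themes = {
    "checkout friction":  {"potential": 9, "importance": 8, "ease": 5},
    "navigation clarity": {"potential": 7, "importance": 6, "ease": 8},
    "mobile page speed":  {"potential": 8, "importance": 7, "ease": 4},
}

for name, s in sorted(themes.items(),
                      key=lambda kv: sum(kv[1].values()), reverse=True):
    print(f"{name}: PIE = {sum(s.values()) / 3:.1f}")
```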

Some outcomes of your roadmap could be:

  • Using initial post-launch data, you identify underperforming customer segments and decide to theme personalization opportunities based on their early behavior patterns.
  • Traffic source performance is lower than your benchmark, so you create a plan for optimizing landing pages and adjusting paid media strategies.
  • Session recordings show customers getting lost while looking for more information on a specific product, so navigation and directional guidance become the main focus for your next phase of optimization.

Step 5: Delegate or outsource

With a roadmap in hand, it’s time to optimize. At this stage, the post-launch excitement might be starting to fade, but this is the step that sets successful teams apart.

If you have an internal optimization team, set up a meeting cadence and reporting framework that drives accountability, and you’re off to the races.

However, you may need external support if your team is struggling with:

  • Data interpretation challenges
  • Testing velocity limitations
  • Expertise gaps in specialized areas
  • General confusion about what to do next

Sometimes, it’s hard to read the label from inside the jar when you’ve been working on the site for so long. A partner like The Good can lend the expertise and fresh perspective to make sure you drive ROI from your redesign investment.

Double down to keep the momentum going

Your website improvement journey doesn’t end with launch; it evolves.

The businesses that treat their website as a living, breathing asset rather than a static project are the ones that consistently outperform competitors.

By implementing the five-step process outlined above, you’re positioning your organization to capture immediate wins while building a sustainable optimization practice that drives continuous improvement. 

Each insight gained, each test run, and each improvement made compounds over time, creating an ever-widening gap between your business and those that “set it and forget it.”

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

Which Rapid Testing Method Should I Use? https://thegood.com/insights/rapid-experimentation/ Wed, 18 Dec 2024 16:00:00 +0000

“Research” often means “identify problems to solve.” But it can also mean “verify that proposed solutions actually solve problems.”

The buzziest way to get that validation is A/B testing. But many teams don’t have the budget, appetite, time, or staffing to even get started.

Enter: Rapid testing.

Like A/B testing, rapid testing helps you understand if your solutions are actually working.

Unlike A/B testing, rapid tests are fast, done with small sample sizes, and offer a level of qualitative insight not afforded via experimentation alone.

Rapid testing is no substitute for A/B testing, but it has a ton of applications:

  • Get a gut check when true A/B testing is not a viable option
  • Understand where new features might be confusing or unclear
  • Evaluate time-to-success and pass/fail rates of task flows
  • Narrow down your options from many to few when deciding what messages to test in the market

Think of it as your canary in the coal mine. A utility to mitigate the risk of feature flop.

In this article, we’ll explore what rapid experimentation is, its benefits, the types of rapid tests you can run, and when to use each. If you’re looking to de-risk your decisions and innovate faster, keep reading for a framework to get you started.

What is rapid experimentation?

Rapid experimentation, or rapid testing, refers to a collection of tactics we use to get quick feedback for operational decisions. This type of testing helps teams make agile decisions around design, copy, and other site elements.

Rapid experimentation is a lean approach to validating ideas, designs, or features in a quick, iterative manner. It focuses on qualitative insights and directional data.

Instead of waiting weeks for results, you can gather actionable insights in days or even hours. This method enables teams to:

  • Understand whether users grasp a new concept
  • Identify potential usability issues
  • Test multiple variations of an idea before committing to development

In short, rapid experimentation helps you answer the question: “Am I moving in the right direction?”

Why do teams use rapid experimentation?

Rapid experimentation delivers value in multiple ways, particularly for SaaS teams that need to move fast and make data-informed decisions.

While rapid testing uses more loosely screened participants and smaller sample sizes than traditional A/B testing, the tradeoff is dramatically faster results. Rapid testing delivers value by:

  • Speeding up results: Unlike A/B testing, which can take weeks to produce reliable results, rapid tests can be designed, executed, and analyzed in days. This speed allows teams to iterate quickly.
  • Limiting politics of A/B testing: Which A/B tests get run is informed by rapid test data instead of executive opinions.
  • Narrowing down many ideas: When you need to identify the best few ideas out of many, rapid testing is an efficient way to do so.
  • Lowering costs: Because rapid tests require smaller sample sizes and fewer resources, they’re accessible to teams with limited budgets.
  • Identifying problems early: Rapid experimentation helps uncover potential usability issues or misunderstandings before they’re baked into a feature or product. This can save significant rework down the line.
  • Increasing qualitative depth: Where A/B testing provides numbers, rapid tests provide context. Understanding the “why” behind user behavior can inform better solutions.
  • De-risking decisions: By testing ideas early and often, teams can reduce the risk of releasing features or products that fail to meet user needs.

What are the types of rapid tests?

Rapid experimentation is not a one-size-fits-all process. Different scenarios call for different types of tests.

Here are some common methods:

Task Completion Analysis

Task completion analysis allows us to quickly test new ideas to understand time-on-task and success rates.

Typically, users are asked to complete a specific task, such as signing up for a trial or finding a key feature. Teams observe where users struggle and measure success rates, time-to-completion, and drop-off points.
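
Scoring this kind of test is straightforward. Here’s a minimal sketch, with hypothetical results, that computes the success rate and the median time-to-success:

```python
import statistics

# Hypothetical task-completion results: (completed?, seconds on task)
results = [(True, 42), (True, 55), (False, 120), (True, 38),
           (False, 95), (True, 61), (True, 47), (False, 110)]

success_rate = sum(ok for ok, _ in results) / len(results)
median_success_time = statistics.median(t for ok, t in results if ok)

print(f"Success rate: {success_rate:.0%}")                 # -> 62%
print(f"Median time-to-success: {median_success_time}s")   # -> 47s
```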

First-Click Tests

First-click tests evaluate whether users can intuitively find the primary action or information on a page. Participants are given a task and asked to click where they think they should start. This is ideal for evaluating navigation or CTA placement.

Tree Test

Tree testing is a usability technique that helps you understand how users navigate through your website or app’s structure. It focuses on how well people can find information within a system.

By stripping away visual elements and focusing solely on the structure (the “tree”), you can identify whether the content organization makes sense or if users are getting lost.

Sentiment Analysis

Sentiment analysis lets us preview how users might respond and react to a treatment. It allows us to evaluate user emotions and opinions about a product or experience. Typically, feedback is collected through surveys, reviews, or user interviews, and responses are analyzed to identify positive, neutral, or negative sentiments. Teams use this data to uncover pain points, gauge satisfaction, and prioritize improvements.
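
Categorizing open-ended responses doesn’t need fancy tooling to start. Here’s a deliberately simple keyword-based tagger, a sketch for quick triage rather than a production NLP approach; the word lists are illustrative:

```python
# Tiny keyword-based sentiment tagger. The word lists are illustrative;
# expand them from the vocabulary your own customers actually use.
POSITIVE = {"love", "easy", "clear", "fast", "great"}
NEGATIVE = {"confusing", "slow", "hard", "broken", "frustrating"}

def tag_sentiment(response: str) -> str:
    words = set(response.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

for r in ["Checkout was fast and easy", "The menu is confusing"]:
    print(tag_sentiment(r), "-", r)
```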

5-Second Tests

5-second tests assess a user’s immediate impression of a design or message. They show participants an interface or design for five seconds and then ask them what they remember or understand. This is great for identifying which value propositions or headlines are most memorable.

Design Surveys

Design surveys collect qualitative feedback on wireframes or mockups. They can help validate designs before investing in development to implement them on your site.

Preference Tests

This test involves showing users two or more design variations and asking which they prefer and why. It’s perfect for narrowing down visual or messaging options before launching a formal test.
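
Especially with small samples, it’s worth checking that a preference split is stronger than a coin flip. Here’s a minimal sketch with made-up numbers that computes an exact two-sided binomial p-value against a 50/50 null:

```python
from math import comb

def two_sided_binom_p(k, n):
    """Exact two-sided binomial p-value against a 50/50 null (p = 0.5)."""
    tail = min(k, n - k)
    return min(1.0, 2 * sum(comb(n, i) for i in range(tail + 1)) * 0.5 ** n)

# Hypothetical preference test: 18 of 24 participants preferred variant B
k, n = 18, 24
print(f"{k}/{n} = {k/n:.0%} preferred B, p = {two_sided_binom_p(k, n):.3f}")  # p = 0.023
```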

Card Sorting

Card sorting is a research technique used to understand how users organize and categorize information. You present participants with a set of cards, each representing a piece of content or functionality, and ask them to group these cards in a way that makes sense to them.

This process reveals how people naturally think about and structure information. It lets you uncover insights into how users might intuitively organize menu items, product categories, or any other structured content on your site. Ultimately, this helps you design a website or app that aligns with their expectations.
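
Analyzing the results usually starts with a co-occurrence count: how often did participants group two cards together? Here’s a minimal sketch with hypothetical card sorts:

```python
from collections import Counter
from itertools import combinations

# Hypothetical card sorts: each participant's groupings of four cards
sorts = [
    [{"pricing", "plans"}, {"docs", "support"}],
    [{"pricing", "plans", "support"}, {"docs"}],
    [{"pricing", "plans"}, {"docs", "support"}],
]

together = Counter()
for groups in sorts:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            together[pair] += 1

for pair, count in together.most_common():
    print(f"{pair}: grouped together by {count}/{len(sorts)} participants")
```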

These are just eight of the many types of rapid experimentation.

How to choose the right method for your scenario

With so many options, it can be challenging to know which rapid testing method to use in a given situation. Each method has strengths and weaknesses, and choosing the wrong one can result in wasted effort or inconclusive results.

If you’re interested in getting started with rapid testing but aren’t sure which method is right for your scenario, we devised a simple way to narrow down the options.

A framework developed by The Good for determining which rapid testing method to use.

The decision tree walks you through a series of questions that point to the rapid testing method best suited to your needs.

A few caveats:

  • There are more methods than are covered here; this is just a sample
  • Test types can be used in combination in some instances, and
  • There are always exceptions to the rule

There’s no substitute for experience, but if you’re just getting started with this kind of research, I hope this gives you a head start.

Using this framework ensures you select the method best suited to your goals, saving time and effort while delivering more meaningful results.

The Telegraph used rapid testing to increase registrations

So, what might rapid testing look like in action?

During a Digital Experience Optimization Program™, we worked with The Telegraph to improve their paywall experience as part of their goal to reach a million subscribers.

As the first part of any DXO Program™, our team conducted a thorough audit of the end-to-end customer experience to uncover the biggest barriers and opportunities for conversion. Armed with a research plan and a strategic roadmap, we moved on to the next phase of the program: taking hypothesized improvements and testing them with The Telegraph’s ideal audience to confirm they would move the needle before investing in implementation.

Thanks to rapid testing, we were able to design, test, and decide on the first phase of implementations in a matter of days.

One rapid test we ran for The Telegraph assessed site banner color and layout. When shown two banner variants, visitors had a clear preference — 78% of participants found content easier to read against a yellow background. Recall tests also showed visitors were more likely to remember key details in this variant as well, further supporting it as the preferable option.

Two banner variants we ran for The Telegraph; the yellow was the winner.

We ran over 20 similar tests to assess cookie notification placement and design, desktop and mobile paywall presentation, brand headlines, offer messaging, and more. Each test used the method best suited to the hypothesis we hoped to validate, chosen with a thought process much like the decision tree framework shared earlier.

And the best part? We did this in just a few weeks, something that would have been impossible to accomplish via A/B testing due to resource constraints. David Humber, Head of Conversion at The Telegraph, also credits the efficiency and effectiveness of the rapid tests to having a team of external experts come in. “You do less spinning of the wheels because you’re having somebody come in that’s got this additional expertise as their bread and butter.”

Overall, identifying small wins in numerous places added up to a significant impact for The Telegraph in both improved metrics and an understanding of the customer.

Upskill your team with external support

While rapid experimentation is a powerful tool, getting started can feel overwhelming. How do you design effective tests? What metrics should you measure? And how do you ensure your insights lead to meaningful improvements?

This is where The Good can help. Our team specializes in UX research and digital experience optimization for SaaS companies. From designing and executing rapid tests to implementing insights, we’re here to guide you every step of the way. With our proven frameworks and expertise, you can:

  • Validate ideas faster and more effectively
  • Reduce the risk of feature flop
  • Build a culture of experimentation within your team

Ready to get started? Contact us to learn how we can help you make better decisions faster.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.
