The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company

By Maggie Paveza, The Good
Most SaaS companies segment users the wrong way. They group people by demographics, company size, or subscription tier. Basically, they look at who users are rather than what they’re trying to do.

The problem is that a freelance consultant and an enterprise project manager might both use your collaboration tool, but they have completely different goals, workflows, and definitions of success.

We recently worked with a leading enterprise SaaS company facing exactly this challenge. Their product served everyone from solo creators to enterprise teams, but the experience treated everyone the same.

Users logging in to track a single client project got bombarded with team collaboration features. Power users managing complex workflows couldn’t find advanced capabilities buried in generic menus.

Users felt overwhelmed by irrelevant features, engagement plateaued, and retention suffered.

The solution wasn’t another traditional segmentation model. It was intent-based segmentation: a framework for grouping users by what they’re actually trying to accomplish, then personalizing the experience to match those specific goals.

Understanding behavioral segmentation and where intent fits

Before diving into how to build intent-based clusters, it helps to understand where this approach fits within the broader landscape of user segmentation.

Most SaaS companies are familiar with demographic segmentation (company size, industry, role) and firmographic segmentation (ARR, team size, geographic location). These approaches tell you who your users are, but they don’t tell you what they’re trying to do, which is why they often fail to predict engagement or retention.

Behavioral segmentation takes a different approach. Instead of looking at static attributes, it focuses on how users actually interact with your product.

Behavioral segmentation divides users based on engagement. This includes signals like feature usage patterns, login frequency, purchase behavior, and time-to-value metrics.

While not the only approach, behavioral segmentation is widely regarded as more effective than demographic segmentation alone at predicting engagement and retention.

Within behavioral segmentation, there are several approaches:

  • Usage-based segmentation looks at frequency and intensity of use.
  • Lifecycle segmentation tracks where users are in their journey.
  • Benefit-sought segmentation groups users by the outcomes they want to achieve.

Intent-based segmentation sits at the intersection of all three.

[Figure: intent-based segmentation at the center of the different types of user segmentation]

It identifies clusters of users who share similar goals and workflows, then maps those patterns to create a more personalized experience.

Intent-based clusters answer the question: “What is this user trying to accomplish right now?”

In a recent client engagement that inspired this article, this distinction mattered. They had mountains of usage data showing which features people clicked, but no framework for understanding why certain feature combinations existed or what job users were trying to complete.

They knew “Business Professionals” used their tool, but that category was so broad it offered no actionable insights. A marketing manager building campaign timelines has completely different needs than a legal team tracking contract approvals, even though both might be classified as “business professionals.”

Intent-based clustering gave them that missing layer of insight.

Case study: How to spot the need for intent-based segmentation

Let’s talk more about the client engagement I mentioned. This is a great case study for when to use intent-based segmentation.

We work with this client on a quarterly retainer through our on-demand growth research services. So, when they mentioned struggling to personalize experiences and improve retention, we opened a research project that same day.

The team could see that certain users logged in daily and used five or more features. Great, right? Not really. When we dug deeper, we discovered something critical. Heavy feature usage didn’t predict retention. Some power users churned while casual users stuck around for years.

The issue wasn’t the quantity of features used; it was whether those features aligned with what users were trying to accomplish. A user coming in weekly to update a single client dashboard showed higher retention than someone exploring ten features that didn’t match their core workflow.

The symptoms: What teams told us was broken

During our stakeholder interviews, we heard the same frustrations across departments:

From product: “We know the top five features everyone uses, but that doesn’t help us understand why they’re using them or what to build next. Two users might both use our template feature, but one is building client proposals while the other is standardizing internal processes. They need completely different things from that feature.”

From marketing: “Our segments are too broad to be useful. ‘Business Professional’ could mean anyone from a solo consultant to an enterprise VP. When we send educational content, we can’t make it relevant because we don’t know what problem they’re trying to solve.”

From customer success: “We can see when someone is at risk of churning because their usage drops off, but we can’t predict it before it happens. By the time we notice, they’ve already decided the product isn’t right for them. We need to understand intent earlier so we can intervene proactively.”

From UX research: “Users think in terms of tasks, not features. They don’t say ‘I want to use the dependency mapping tool.’ They say, ‘I need to make sure the design team finishes before development starts.’ But our product talks about features, not outcomes.”

The underlying problem: Missing the ‘why’

What became clear was that the organization had plenty of data about behavior but no framework for understanding intent. They could answer questions like:

  • How many people use feature X?
  • What’s the average session duration?
  • Which users log in most frequently?

But they couldn’t answer the questions that actually mattered:

  • What are users trying to accomplish when they use feature X?
  • Why do some users stick around while others churn?
  • What combination of goals and workflows predicts long-term retention?

This may sound familiar. They have data about behavior but lack context about intent. Without understanding the users’ different definitions of success, they use generic personalization that recommends “similar features” and misses the mark entirely.

The four-phase framework for creating intent-based user clusters

Based on our work with the enterprise SaaS client, we developed a systematic framework for building intent-based clusters from scratch.

The process has four distinct phases, each building on the previous one.

Think of this as a directional guide rather than a rigid formula. You can adapt the scope based on your resources and organizational complexity.


Phase 1: Capture institutional knowledge and identify gaps

Most organizations have more customer insight than they realize; it’s just scattered across teams, buried in old reports, and locked in individual team members’ heads. The first phase consolidates that knowledge and identifies what’s missing.

[Figure: phase 1 of the intent-based user segmentation process]

1. Conduct cross-functional interviews

Start by interviewing stakeholders who interact with users regularly: product managers, customer success, sales, support, and marketing. For our client, we conducted interviews with seven team members from UX research, growth, engagement, marketing, and analytics.

The goal isn’t consensus. You want to uncover patterns and contradictions instead. Focus your conversations on these questions:

  • How do you currently describe different user types?
  • What patterns have you noticed in how different users engage with the product?
  • What data exists about user behavior that isn’t being used to inform decisions?
  • Where do personalization efforts break down today?
  • What questions about users keep you up at night?

These conversations surface institutional knowledge that never makes it into documentation.

Capture everything. The contradictions are especially valuable—they reveal where teams operate with different mental models of the same users.

2. Audit existing research and reports

Next, gather relevant user research, data analysis, and customer insight. For our client, we analyzed eight existing reports, including retention data, cancellation surveys, user studies, engagement patterns, and conversion analysis.

Look for:

  • Marketing segmentation models (usually demographic-heavy)
  • User research studies (often small sample, rich insights)
  • Behavioral analytics reports (feature usage patterns, cohort analysis)
  • Customer journey maps (theoretical vs. actual paths)
  • Support ticket analysis (pain points and use cases)
  • Cancellation surveys (why users leave)

Pay attention to gaps between how different teams think about users. Marketing might segment by company size, while product segments by feature usage. Neither is wrong, but the lack of a unified framework means teams optimize for different definitions of success.

3. Define hypotheses about intent-based variables

Based on interviews and research, develop hypotheses about what variables might define user intent. This is where you move from “who uses our product” to “what are they trying to accomplish.”

For our client, we identified several dimensions that seemed to correlate with intent:

  • Primary workflow type: Are users managing team projects, client deliverables, or personal tasks?
  • Collaboration patterns: Solo work, small team coordination, or cross-functional orchestration?
  • Usage frequency: Daily operational tool or periodic project management?
  • Success metrics: Speed (quick task completion) vs. thoroughness (detailed planning and tracking)?
  • Document complexity: Simple task lists or multi-layered project hierarchies?

The goal is to create testable hypotheses that can be validated in Phase 2.

Phase 2: Validate clusters through user research and behavioral analysis

Once you have initial hypotheses, Phase 2 tests them against real user behavior and feedback. This is where hunches become data-backed insights.

[Figure: phase 2 of the intent-based user segmentation process]

1. Develop provisional cluster groups

Transform your hypotheses into provisional clusters. For our client, we identified six distinct intent-based clusters. Let me illustrate with a fictional example of how this might work for a project management SaaS tool:

Sprint Executors: Users focused on rapid task completion and daily standup workflows. They need speed, simple task updates, and quick team coordination. Think startup teams moving fast with lightweight processes.

Client Project Coordinators: Users managing multiple client engagements simultaneously with strict deliverable timelines. They need client visibility controls, progress tracking, and professional reporting. Think agencies and consultancies.

Cross-Functional Orchestrators: Users coordinating complex projects across departments with dependencies and approval workflows. They need Gantt views, resource allocation, and stakeholder communication tools. Think enterprise program managers.

Personal Productivity Optimizers: Users treating the tool as a second brain for personal task management and goal tracking. They need customization, recurring tasks, and minimal collaboration features. Think solopreneurs and executives.

Seasonal Campaign Managers: Users with predictable high-intensity periods followed by dormancy. They need templates, bulk operations, and the ability to archive/reactivate projects easily. Think retail operations teams or event planners.

Mobile-First Coordinators: Users who primarily access the tool from mobile devices for field work or on-the-go updates. They need streamlined mobile experiences and offline sync. Think field service teams or traveling consultants.

Each cluster gets a descriptive name that captures the user’s primary intent, not just their behavior. “Sprint Executor” tells you more about what someone is trying to do than “high-frequency user.”

2. Conduct targeted user research

With provisional clusters defined, recruit users who fit each profile and conduct interviews to understand:

  • Their primary use cases and goals when they first adopted the tool
  • How they discovered and currently use the product
  • Their typical workflows from start to finish
  • What defines success in their role
  • Pain points and unmet needs
  • How they decide which features to explore
  • What would make them cancel vs. what keeps them subscribed

For our client, we conducted three to four interviews per cluster, totaling around 24 user conversations. This gave us enough coverage to validate patterns without drowning in data.

The insights were eye-opening. We discovered that one cluster had the fastest time-to-value but the lowest feature adoption. They found what they needed immediately and never explored further. Another cluster showed the highest retention but needed the longest onboarding. They invested time up front because the tool was critical to their workflow.

3. Analyze behavioral data to confirm patterns

User interviews reveal what people say they do. Behavioral data shows what they actually do. Cross-reference your clusters against:

  • Feature usage sequences (which tools appear together in sessions)
  • Time-to-value metrics by cluster (how quickly do they get their first win)
  • Retention and churn patterns (which clusters stick around)
  • Upgrade and expansion behavior (which clusters grow their usage)
  • Support ticket themes (which clusters need help with what)
  • Feature adoption curves (how exploration patterns differ)

For our client, the data revealed critical differences. The “Sprint Executor” equivalent had fast initial adoption but plateaued quickly. They found their core workflow and stopped exploring.

The “Cross-Functional Orchestrator” cluster showed slow initial adoption but deep engagement over time. They needed to learn the tool thoroughly to unlock value.

These patterns weren’t visible in aggregate data. Only by segmenting users by intent could we see that different clusters had fundamentally different paths to retention.
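
To make this concrete, here is a minimal sketch of what that cross-referencing might look like in pandas. The export file and all column names are assumptions; substitute whatever your analytics tool produces.

```python
import pandas as pd

# Hypothetical analytics export: one row per user, with a provisional
# cluster label and outcome metrics. Column names are illustrative.
users = pd.read_csv("users_with_clusters.csv")

summary = (
    users.groupby("cluster")
    .agg(
        n_users=("user_id", "count"),
        retention_90d=("retained_90d", "mean"),
        median_days_to_value=("days_to_first_value", "median"),
        avg_features_adopted=("features_adopted", "mean"),
    )
    .sort_values("retention_90d", ascending=False)
)
print(summary)  # per-cluster paths to retention that aggregate data hides
```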

4. Build detailed cluster profiles

For each validated cluster, create a comprehensive profile that becomes the foundation for personalization. For example:

Cluster name: Sprint Executors

Primary intent: Complete daily tasks quickly with minimal friction and maximum team visibility

Most-used features:

  • Quick-add task creation
  • Board view for visual sprint planning
  • Mobile app for on-the-go updates
  • Real-time team activity feed

Typical workflow patterns:

  • Morning standup with task assignments
  • Throughout the day: quick updates and status changes
  • End of day: marking tasks complete and planning tomorrow

Behavioral flags that identify this cluster:

  • Creates 5+ tasks in first week
  • Returns daily within first 14 days
  • Uses mobile app within first 7 days
  • Rarely uses advanced features like Gantt charts or dependencies

Retention drivers:

  • Speed of task completion
  • Team visibility and accountability
  • Mobile accessibility

Churn risks:

  • Tool feels too complex for simple needs
  • Feature bloat making core actions harder to find
  • Forced upgrades to access speed-focused features

Personalization opportunities:

  • Streamlined onboarding focused on quick task creation
  • Mobile-first feature discovery
  • Templates for common sprint workflows
  • Integrations with communication tools

These profiles become the single source of truth that product, marketing, and customer success can all reference.
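
One way to keep that source of truth actionable is to store each profile in a machine-readable form that product, marketing, and CS tooling can all consume. A minimal sketch; the structure and field names are assumptions, not a required format.

```python
# Hypothetical machine-readable cluster profile; values echo the example above.
SPRINT_EXECUTORS = {
    "name": "Sprint Executors",
    "primary_intent": "Complete daily tasks quickly with minimal friction",
    "behavioral_flags": {
        "min_tasks_week1": 5,          # creates 5+ tasks in first week
        "min_active_days_first_14": 4, # returns regularly in first 14 days
        "mobile_app_by_day": 7,        # uses mobile app within first 7 days
        "avoids_features": ["gantt", "dependencies"],
    },
    "retention_drivers": ["task speed", "team visibility", "mobile access"],
    "churn_risks": ["perceived complexity", "feature bloat", "forced upgrades"],
    "personalization": [
        "quick-task onboarding", "mobile-first discovery",
        "sprint templates", "chat-tool integrations",
    ],
}
```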

Phase 3: Develop indicators and personalization strategies

The third phase connects clusters to action. This is where the framework moves from insight to implementation.

1. Create behavioral flags for cluster identification

Most users won’t self-identify their intent at signup. You need to infer cluster membership from behavioral signals early in their journey. The key is identifying flags that appear within the first 7-14 days: early enough to personalize the experience before users decide whether the tool is right for them.

[Figure: phase 3 of the intent-based user segmentation process]

For reference, here are the early behavioral flags for the “Sprint Executor” cluster in our fictional example:

  • Created 5+ tasks in first week
  • Logged in on 4+ separate days in first 14 days
  • Used mobile app within first 7 days
  • Board or list view used more than timeline/Gantt view (80%+ of sessions)
  • Invited at least one team member within first 10 days
  • Never explored advanced dependency features
  • Average session length under 10 minutes

Versus the “Client Project Coordinator” cluster:

  • Created 3+ separate projects within first week (indicating multiple clients)
  • Used folder or workspace organization features within first 5 days
  • Set up client-specific permissions or external sharing settings
  • Created custom views or reports within first 14 days
  • Longer average session times (20+ minutes per session)
  • Uses professional or client-specific terminology in project names
  • High usage of export or presentation features

The goal is to find the minimum viable signal set that reliably predicts cluster membership. Start with more flags and refine over time based on which actually correlate with long-term behavior.
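
As a sketch of how those flags might resolve to a cluster in practice, here is a hypothetical rule-based classifier. Every field name and threshold is an assumption to be tuned against your own data, not a prescribed rule set.

```python
from dataclasses import dataclass

@dataclass
class FirstTwoWeeks:
    # Hypothetical early-usage signals; real fields depend on your analytics schema.
    tasks_created: int
    active_days: int
    used_mobile_app: bool
    projects_created: int
    used_external_sharing: bool
    avg_session_minutes: float

def provisional_cluster(u: FirstTwoWeeks) -> str:
    """Assign a provisional intent cluster from early behavioral flags."""
    # Longer sessions plus multiple externally shared projects suggest
    # someone coordinating client work.
    if (u.projects_created >= 3 and u.used_external_sharing
            and u.avg_session_minutes >= 20):
        return "client_project_coordinator"
    # Frequent short sessions, early mobile use, and steady task creation
    # suggest rapid day-to-day execution.
    if (u.tasks_created >= 5 and u.active_days >= 4
            and u.used_mobile_app and u.avg_session_minutes < 10):
        return "sprint_executor"
    # Fall back to the generic experience until more signal accrues.
    return "unclassified"
```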

One critical finding from our client work: early behavioral flags predicted retention better than demographic data.

A user who exhibited “Client Project Coordinator” behaviors in week one showed 40% higher 90-day retention than the average user, regardless of their company size or job title.

2. Map personalization opportunities to each cluster

With clusters and flags defined, identify specific ways to personalize the experience across the user journey:

Onboarding sequences: Tailor the first-run experience to highlight features that match user intent. Show Sprint Executors how to set up their first sprint board, not the full feature catalog with Gantt charts and resource allocation tools they don’t need.

In-app messaging: Trigger contextual tips based on usage patterns. When a Client Project Coordinator creates their third project with similar structure, suggest project templates to save time.

Feature discovery: Recommend next-step features that align with cluster workflows. For Sprint Executors who’ve mastered basic task management, introduce the mobile app and integrations with their communication tools—not complex dependency mapping.

Content and education: Send targeted educational content that addresses cluster-specific goals. Client Project Coordinators get tips on professional reporting and client permissions. Sprint Executors get productivity hacks and team coordination strategies.

Upgrade paths: Present pricing tiers and feature upgrades that match cluster needs. Don’t push team collaboration features to Personal Productivity Optimizers who work solo and won’t use them.

Support prioritization: Route support tickets differently based on cluster. Client Project Coordinators might get priority support since they’re often managing paying clients. Seasonal Campaign Managers might get proactive check-ins before predicted busy periods.

For our client, this mapping revealed opportunities they’d completely missed. One cluster had been receiving generic “explore more features” emails when what they actually needed was advanced security capabilities for compliance requirements. Another cluster kept churning at the end of trial because onboarding emphasized features they’d never use instead of the speed-focused tools that matched their workflow.

Phase 4: Develop test concepts to validate impact

Turn personalization opportunities into testable hypotheses. Don’t roll everything out at once. Start with high-impact, low-effort tests that prove the value of intent-based segmentation.

For our client, we proposed several test concepts structured to validate clusters quickly and build organizational confidence in the framework. Here are a few examples.

[Figure: phase 4 of the intent-based user segmentation process]

Example Test 1: Intent-Based Onboarding Survey

Background: The organization lacked a way to identify user intent at the critical moment when users were most open to guidance: right after signup, but before they’d formed opinions about product fit.

Hypothesis: By asking users to self-identify their primary goal during their first meaningful session, we can segment them into actionable clusters that enable personalized feature discovery and messaging, improving 3-month retention rates by 5-10%.

Test design: During the first session (after initial signup but before deep engagement), present a brief survey asking: “What brings you here today?” with options that map directly to identified clusters:

☐ Coordinate my team’s daily work (Sprint Executors)

☐ Manage multiple client projects (Client Project Coordinators)

☐ Organize complex cross-functional initiatives (Cross-Functional Orchestrators)

☐ Track my personal tasks and goals (Personal Productivity Optimizers)

☐ Plan seasonal campaigns or events (Seasonal Campaign Managers)

☐ Update projects while on the go (Mobile-First Coordinators)

☐ Something else (with optional text field)

Then immediately personalize their first experience based on their response: Sprint Executors see a streamlined task creation tutorial, Client Project Coordinators get guidance on setting up client workspaces, etc.
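
Wiring the survey to the experience can start as a simple lookup from answer to cluster and first-run flow. A minimal sketch; the flow identifiers are hypothetical.

```python
# Hypothetical mapping: survey answer -> (cluster, first-run flow to launch).
SURVEY_TO_CLUSTER = {
    "Coordinate my team's daily work": ("sprint_executor", "sprint_board_tutorial"),
    "Manage multiple client projects": ("client_project_coordinator", "client_workspace_setup"),
    "Organize complex cross-functional initiatives": ("cross_functional_orchestrator", "dependency_intro"),
    "Track my personal tasks and goals": ("personal_productivity_optimizer", "personal_setup"),
    "Plan seasonal campaigns or events": ("seasonal_campaign_manager", "template_gallery"),
    "Update projects while on the go": ("mobile_first_coordinator", "mobile_app_prompt"),
}

def route_onboarding(answer: str) -> tuple[str, str]:
    # "Something else" and unrecognized answers fall back to the generic flow.
    return SURVEY_TO_CLUSTER.get(answer, ("unclassified", "generic_tour"))
```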

Success metrics:

  • Primary: 3-month retention rate by selected cluster (looking for 5-10% lift)
  • Secondary: Survey completion rate (target: >80%), feature adoption aligned with cluster (target: 20% lift), time to first value-generating action
  • Guardrails: No negative impact on day 2 or day 7 retention

Acceptance criteria for “winning test”:

  • Survey completion rate >80%
  • >60% of users select a pre-set option (vs. “something else”)
  • Statistically significant retention lift in at least one cluster
  • No degradation in key engagement metrics

Acceptance criteria for “learning test”:

  • >40% of users select “something else” (suggests clusters don’t match user mental models)
  • No statistically significant difference in retention (suggests clusters exist, but personalization approach needs refinement)

Audience: New paid subscribers on first day, trial users converting to paid, reactivated users returning after 30+ days dormant. Start with 25% of eligible users to minimize risk.

Timeline: 90 days to measure primary retention metric, but early signals (completion rate, feature adoption) available within 14 days.
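
The “statistically significant retention lift” criterion above implies a per-cluster comparison of retained proportions between variant and control. A minimal sketch using a two-proportion z-test from statsmodels; the counts are hypothetical.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical 90-day retention counts for one cluster: variant vs. control.
retained = [430, 380]    # users retained at day 90 in each arm
exposed = [1000, 1000]   # users assigned to each arm

stat, p_value = proportions_ztest(count=retained, nobs=exposed)
lift = retained[0] / exposed[0] - retained[1] / exposed[1]
print(f"absolute lift: {lift:.1%}, p-value: {p_value:.4f}")
# Call the cluster a win only if the lift is both meaningful and significant.
```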

Example Test 2: Cluster-Specific Feature Recommendations

Background: Generic in-app messaging had low click-through rates (<5%) and wasn’t driving feature adoption. Users felt bombarded by irrelevant suggestions.

Hypothesis: For users who match behavioral flags within the first 14 days, triggering personalized feature recommendations will increase feature adoption by 20% and engagement depth by 15%.

Test design: Identify users by behavioral flags, then trigger targeted in-app messages at contextually relevant moments:

  • Sprint Executors see mobile app download prompt after completing 5 tasks on desktop: “Update tasks on the go: get the mobile app”
  • Client Project Coordinators see reporting features after creating third project: “Impress clients with professional progress reports”
  • Cross-Functional Orchestrators see dependency mapping after creating complex project: “Map dependencies to keep cross-functional teams aligned”

Success metrics:

  • Primary: Feature adoption rate for recommended features (target: 20% lift vs. control)
  • Secondary: Overall engagement depth (features used per session), message click-through rate
  • Guardrails: No increase in feature abandonment (starting but not completing flows)

Audience: Users who match cluster behavioral flags within first 14 days. Test one cluster at a time to isolate impact.

Timeline: 30 days to measure feature adoption impact.
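
Mechanically, the trigger logic behind this test can start as a simple lookup table keyed on cluster and contextual event. A hypothetical sketch:

```python
# Hypothetical trigger table: (cluster, contextual event) -> in-app message.
TRIGGERS = {
    ("sprint_executor", "completed_5_tasks_desktop"):
        "Update tasks on the go: get the mobile app",
    ("client_project_coordinator", "created_third_project"):
        "Impress clients with professional progress reports",
    ("cross_functional_orchestrator", "created_complex_project"):
        "Map dependencies to keep cross-functional teams aligned",
}

def message_for(cluster: str, event: str) -> str | None:
    """Return the recommendation to show, or None to stay silent."""
    return TRIGGERS.get((cluster, event))
```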

Example Test 3: Retention Email Campaigns by Cluster

Background: Generic “tips and tricks” email campaigns had 8% open rates and weren’t moving retention metrics. Content felt irrelevant to most recipients.

Hypothesis: Segmenting email campaigns by identified cluster will improve email engagement by 50% and show a measurable correlation with retention.

Test design: Replace generic weekly tips emails with cluster-specific content:

  • Sprint Executors: “5 ways to speed up your daily standup” / “Mobile shortcuts that save 2 hours per week”
  • Client Project Coordinators: “How to impress clients with professional project reports” / “3 ways to give clients visibility without overwhelming them”
  • Personal Productivity Optimizers: “Build your second brain: advanced filtering techniques” / “Automate your recurring tasks in 5 minutes”

Send to users identified through either the onboarding survey or behavioral flags. Track engagement and retention by cluster.

Success metrics:

  • Primary: Email open rates (target: 50% lift), click-through rates (target: 100% lift)
  • Secondary: Correlation between email engagement and 90-day retention
  • Guardrails: Unsubscribe rates remain stable or decrease

Audience: Users identified as belonging to specific clusters either through survey responses or behavioral flags, minimum 14 days after signup.

Timeline: 6 weeks for initial engagement metrics, 90 days for retention correlation.

Post-test analysis framework

For each test, we established a clear decision framework:

If “winning test”:

  • Roll out to 100% of eligible users
  • Begin development on next phase of personalization for that cluster
  • Use learnings to inform tests for other clusters
  • Document what worked to build organizational playbook

If “learning test”:

  • Analyze all “something else” responses for missing clusters or unclear framing
  • Review behavioral data to see if clusters exist, but personalization was wrong
  • Iterate on messaging, timing, or format
  • Decide whether to retest with refinements or try different approach

If negative impact:

  • Immediately roll back to the control experience
  • Conduct user interviews to understand what went wrong
  • Reassess cluster definitions or personalization approach
  • Consider whether the cluster exists but needs a different treatment

The key to successful testing is starting small, measuring rigorously, and being willing to learn from failures. Not every cluster will respond to every type of personalization, and that’s valuable information. The goal isn’t perfect personalization immediately; it’s continuous improvement based on what actually moves metrics.

Intent-based segmentation mistakes and how to avoid them

Based on our experience implementing this framework, here are the mistakes that will derail your efforts:

1. Starting with too many clusters

More isn’t better. Six well-defined clusters are more useful than fifteen overlapping ones. You need enough clusters to capture meaningfully different intents, but few enough that teams can actually remember and act on them. Start with 4-6 clusters and refine over time. If you find yourself creating clusters that differ only slightly, you’ve gone too granular.

2. Confusing demographics with intent

Job title, company size, or industry might correlate with intent, but they don’t define it. We’ve seen solo consultants behave like “Cross-Functional Orchestrators” and enterprise teams behave like “Sprint Executors.” Focus on what users are trying to accomplish, not who they are on paper.

3. Creating overlapping clusters

Each cluster should be distinct in its primary intent and workflow patterns. If you’re struggling to articulate how two clusters differ behaviorally, they’re probably the same cluster with different labels. Test this by asking: “If I saw someone’s usage data, could I confidently assign them to one cluster?”

4. Ignoring edge cases entirely

Some users will span multiple clusters or switch between them based on context. That’s fine. The framework should accommodate primary intent while recognizing that users are complex. A user might primarily be a “Client Project Coordinator” but occasionally use “Personal Productivity Optimizer” features for their own task management. Don’t force rigid categorization.

5. Skipping the validation step

Your initial hypotheses will be wrong in places. User research and behavioral data keep you honest and prevent confirmation bias. We’ve seen teams fall in love with theoretically elegant clusters that don’t actually exist in their user base, or miss entire segments because they didn’t fit the initial hypothesis.

6. Treating clusters as static

User intent evolves. Someone might start as a “Personal Productivity Optimizer” and grow into a “Client Project Coordinator” as their business scales. Review and refine your clusters quarterly based on new data, product changes, and market shifts.

7. Personalizing too aggressively too soon

Start with high-confidence, low-risk personalization (like targeted email content) before fully diverging user experiences. You want to validate that clusters behave differently before you build entirely separate onboarding flows.

8. Forgetting to measure impact

Intent-based segmentation is valuable only if it improves outcomes. Define success metrics upfront (e.g., retention lifts, engagement depth, upgrade rates, support ticket reduction) and track them by cluster. If personalization isn’t moving these metrics, refine your approach.

Making intent-based segmentation work for your organization

The framework we’ve outlined works across product categories and company sizes, but implementation varies based on your resources and organizational maturity.

If you have limited data: Start with Phase 1 and Phase 2, using qualitative research to define clusters before investing in behavioral infrastructure. You can manually tag users based on interview responses and onboarding surveys, then personalize through targeted emails and customer success outreach. As you grow, build the data systems to automate cluster identification.

If you have rich behavioral data but limited research capabilities: Reverse the order. Start with data patterns and validate through targeted interviews. Look for natural groupings in your analytics that suggest different workflow types, then talk to representative users from each group to understand their intent.

If you’re a small team: Don’t let perfect be the enemy of good. Start with 3-4 obvious clusters based on your highest-level workflow differences. The founder of a 10-person startup probably has a better intuitive understanding of user intent than a 500-person company with siloed data. Write down what you know, test it with a few users, and start personalizing.

If you’re a large enterprise: The challenge is getting organizational alignment, not defining clusters. Use Phase 1 to surface where teams already operate with different mental models, then use data to arbitrate. Create executive sponsorship for the new framework so it becomes the shared language across product, marketing, and CS.

The key is starting somewhere. Most companies know their one-size-fits-all approach isn’t working, but they keep personalizing around variables that don’t actually predict what users are trying to accomplish.

Intent-based segmentation reorients everything around the question that actually matters: What is this user trying to accomplish, and how can we help them succeed at that specific goal?

Turn insights into retention that drives revenue

Understanding user intent is just the first step. The real value comes from translating those insights into personalized experiences that keep users engaged and drive measurable revenue growth.

At The Good, we’ve spent 16 years helping SaaS companies identify their most valuable user segments and optimize experiences around what actually drives retention. Our systematic approach to user segmentation goes beyond frameworks. We help you implement experimentation strategies that prove which personalization efforts move the needle on the metrics your board cares about.

Plenty of companies struggle to implement segmentation that’s actually actionable. They end up with beautiful personas gathering dust or broad categories that don’t inform product decisions.

Intent-based segmentation is different because it connects directly to behavior you can observe and experiences you can personalize.

If you’re struggling with generic experiences that fail to resonate with different user types, or if you know your segmentation could be better but aren’t sure where to start, let’s talk about how intent-based segmentation could transform your retention strategy and drive revenue growth.


How User-Centered Prioritization Helps Improve Feature Adoption Rates
Imagine launching a feature and knowing it will be a hit. What if you could flip the script on wasted development efforts and build only what your users truly crave?

For most SaaS companies, a high feature adoption rate is linked to increased upgrades, retention, and loyalty. When users fully adopt a product, they integrate it into their daily workflow and continually find value.

Alarmingly, a “reasonable” feature adoption rate in SaaS is considered to be between 20% and 30%, and only about 20% of launched features get used. We can all understand that some features won’t hit the mark, but should we really accept that up to 80% of the features we build will go unused?

I don’t think so.

By refining the prioritization process, we can make sure you’re working on the right features that will drive value for users and improve feature adoption rates. And it starts with understanding your user.

Reasons for low feature adoption

As product teams focus on developing innovative capabilities and addressing technical debt, the gap between feature development and feature adoption widens.

Underperforming features are a drain on your company’s time and resources. But what causes low adoption rates in the first place?

  • Lack of awareness: The new feature isn’t presented/marketed to users in a compelling way
  • Wrong messaging: The marketing message doesn’t resonate with users, and they’re unaware of the benefits
  • Bad feature: The feature doesn’t actually address a user’s need or pain point

While these are the three most commonly cited reasons for low feature adoption, we’ve found that these symptoms often stem from underlying issues with how features are prioritized for development and release. Teams let internal assumptions, stakeholder requests, or competitive pressures (rather than genuine user insights) drive priorities. In turn, the wrong features are released, spurring feature bloat, low adoption rates, and more.

Think of that ‘AI-powered suggestion’ feature that no one uses. Was it truly solving a user need, or just a cool tech demo?

We’ve seen firsthand with clients how prioritization directly impacts performance. When companies prioritize effectively, they stay focused on what is proven to deliver results. And when they don’t, the opposite happens.

What is user-centered prioritization?

There are plenty of ways to address low feature adoption, but user-centered prioritization is the one most teams overlook.

User-centered prioritization is an approach that places the user at the heart of every decision regarding feature development and enhancement.

It’s a systematic way to ensure that the features you build truly solve your users’ problems, meet their needs, and provide the most value. This contrasts with traditional prioritization methods that might heavily weigh internal opinions, market trends, or ease of development.

With user-centered prioritization, you leverage user research, behavior analytics, and feedback loops to make data-driven development decisions. By understanding not just what users say they want, but how they actually behave, product teams can make more strategic choices about which features to build, when to release them, and how to position them for maximum adoption.

User-centered prioritization is the first step to higher feature adoption rates

We’ll get to some specific strategies in a minute, but for now, I want to provide some additional context on why user-centered prioritization is the first step to higher feature adoption.

It’s more than just a method; it’s a mindset. The core idea is to build products and services that truly solve user problems and provide a positive experience.

When faced with a long list of potential features or improvements, user-centered prioritization helps teams decide what matters most. It can:

  • Identify pain points and focus on features that directly address user frustrations or obstacles.
  • Home in on the most frequent and important tasks users want to accomplish, then prioritize content and features that support these tasks.
  • Visualize the user journey and break it down into actionable user stories, prioritizing those with the highest potential impact on user satisfaction and business goals.
  • Classify and prioritize usability problems based on their impact on user task completion, frequency, and ease of fix.

To make any of this happen, you need a deep understanding of users. Prioritization begins with thorough user research. Use various methods such as interviews, surveys, observational studies, usability testing, and analyzing user data to gather insights. Try to build an understanding of how and where users will interact with the feature.

In essence, user-centered prioritization ensures that product development efforts are aligned with what users truly need and value, leading to ethical and successful products.


Strategies for improving feature adoption rates with user-centered prioritization in the customer journey

So, what does it look like in action when a user-centered approach dictates feature development and deployment? Here are eight strategies.

1. Build the right features

The foundation is building the right features, and making sure you don’t do it in a vacuum. As we’ve covered, you need user research to understand the problems you’re trying to solve.

Before moving to development, test concepts and prototypes with real users to ensure the feature addresses a need and has a clear value proposition.

Prioritize features that will deliver the most significant value to users, not just those that are “nice to have” or technically interesting.

2. Be clear about the value proposition of your feature

Users need to understand why a feature is beneficial for them and how it solves a problem, not just what it does. Articulate this clearly in all communications.

Each feature should address a distinct user pain point or enable a new, valuable capability. Ideally, use language that has been tested and proven to clearly convey the value of the feature.

3. Make onboarding frictionless

Segment users (by role, industry, goals, etc.) and tailor onboarding experiences. A marketing professional might need a different introduction than a developer.

Guide users to the core value of the product and its key features as quickly as possible. This is the moment they realize “this product is for me.”

Instead of static tours, interactively prompt users to take the desired action so that you’re teaching them while they accomplish tasks.

4. Create context in the app experience

Use subtle, in-context cues to highlight new features or explain specific UI elements when a user is in a relevant area.

For more significant feature announcements, use in-experience banners or modals that appear at relevant moments. Behavioral triggers can also deliver guidance based on what a user is currently doing or has done.

When a feature’s area is empty, use this space to explain the feature’s purpose and guide the user on how to get started. Make it easy to find the features your user is looking for.

5. Educate and clearly communicate with users

As mentioned above, prioritize in-app methods for immediate context, but be sure to supplement with marketing materials like:

  • Targeted emails that announce new features, explain their benefits, and link directly to the feature in the product. Segment these emails to ensure relevance.
  • Blogs that add in-depth explanations, use cases, and technical details for those who want them.
  • Live or recorded webinars that demo complex or high-impact features and answer questions.
  • Social media, including short, engaging content (videos, graphics) to announce features and drive interest.

6. Personalize the feature

Not all features are for every user. To make sure the right features are shown to the target user, you can hide or highlight features based on a user’s role or permissions (see the sketch below).

Allow users to tailor their experience, making the most relevant features easily accessible, and use machine learning to suggest features or workflows based on a user’s past behavior or similar user segments.
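
As a sketch of the hide-or-highlight-by-role idea, feature gating can start as a simple mapping. The role and feature names here are made up for illustration.

```python
# Hypothetical role-to-feature visibility map.
FEATURES_BY_ROLE = {
    "admin": {"tasks", "reports", "billing", "permissions"},
    "member": {"tasks", "reports"},
    "guest": {"tasks"},
}

def visible_features(role: str) -> set[str]:
    # Unknown roles fall back to the safest minimal set.
    return FEATURES_BY_ROLE.get(role, {"tasks"})
```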

7. Gather data and feedback

Instead of relying on feature adoption rates alone, gather supplemental data and feedback to understand why users are or aren’t adopting the feature. Use micro-surveys (e.g., after a user interacts with a new feature) to get immediate feedback on usability and value. Monitor overall satisfaction with NPS and CSAT surveys, conduct regular user interviews, and look for recurring issues in support tickets.

Make sure to analyze all this information across different user segments to identify differences and tailor strategies.

8. Iterate on the feature

Don’t just launch and leave a feature; continue iterating on the experience and messaging post-launch until you figure out what works. Test different onboarding flows, in-app messages, or feature designs to see what drives higher adoption.

Feature adoption is an ongoing process. Regularly review data, implement changes, and measure their impact. Don’t stop promoting after the announcement.

By adopting these strategies, SaaS companies can move beyond simply launching features to truly integrating them into their users’ workflows, maximizing the value delivered, and ultimately driving sustainable growth.

A good feature adoption rate is always improving

We’ve often touted the uselessness of benchmarks. And while they’re meaningless for setting goals, they can help paint a picture of industry averages and set expectations. In the case of feature adoption rates, if you’re below that 20% mark, you should strongly consider building a more user-centered prioritization process.

Incorporating user feedback early and often can significantly reduce development time and costs. Instead of building features based on incorrect assumptions, you’ll focus resources where they’ll have the most impact, leading to higher ROI.

The direct link between user-centered prioritization and feature adoption is clear.

The days of simply building features and hoping for the best are over. If you’re ready to take a different approach, our team is available to support you.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

How to Identify Your Most Valuable User Segments and Prioritize Accordingly
Have you ever heard of the Pareto Principle? Even if the name doesn’t ring a bell, you’re likely familiar with the premise that 80% of revenue comes from 20% of customers.

Despite how consistently this pattern holds, companies rarely focus their efforts on that 20%.

It’s not because they don’t want to; it’s because it’s easy to get so wrapped up in not losing a single sale that you spread yourself too thin.

If you focus your energy and product improvements on the highest-value user segment, you will see greater returns for less work.

In this article, we’re sharing the study we recently ran for a client that helped us identify their most valuable user segments and prioritize improvements to meet their needs.

What are user segments?

User segments are groups within a customer base who share similar characteristics, behaviors, or values.

They’re created through user segmentation: research that identifies those commonalities and divides your audience into distinct groups. You can then tailor experiences, personalize messaging, and focus optimization efforts on each group’s specific needs.

Common types of user segments

User segments can be divided based on different traits. The type of segmentation you use will vary based on your use case and goals. Here is a quick overview of common user segments.

  • Demographic: Segments users by age, gender, income, education, etc. Example use case: targeting campaigns for specific roles.
  • Firmographic: Segments by company size, industry, revenue, or location. Example use case: tailoring features for SMBs vs. enterprise.
  • Behavioral: Based on how users interact with your product, such as product usage, feature adoption, or login frequency. Example use case: identifying power users or at-risk users.
  • Technographic: Segments by technology stack, device, browser, or OS. Example use case: prioritizing integrations or support.
  • Needs-based: Segments by specific problems or needs. Example use case: customizing messaging for value drivers.
  • Value-based: Groups by economic value (annual recurring revenue, lifetime value, subscription tier). Example use case: prioritizing high-revenue customers.
  • Lifecycle stage: Segments by user journey stage (trial, active, churn risk, etc.). Example use case: triggering onboarding or win-back flows.
  • RFM (recency, frequency, monetary): Groups based on most recent activity, engagement frequency, and spend. Example use case: identifying loyal or dormant users.
  • Acquisition: Based on the marketing channel or campaign source. Example use case: tailoring messaging or personalizing the experience.

Why companies optimize for the wrong segments

When we run prioritization exercises, one of the most common mistakes we see is companies prioritizing user segments based on volume. If a segment has more users, they automatically believe it deserves more attention.

That’s the first of three common prioritization mistakes:

1. Volume bias: Prioritizing segments with the most users rather than the most value
2. Squeaky wheel focus: Optimizing for the users who complain the loudest
3. Recency fallacy: Focusing on the latest acquisition channel or user cohort without evaluating their actual value

The uncomfortable truth is that your most valuable segments may not be your largest, your loudest, or your newest.


Conducting a segmentation study step by step

At The Good, we’ve developed a systematic approach to identify and prioritize your most valuable user segments. Here’s how it works.

Step 1: Set your goals

Before you start analyzing data, segmenting users, and prioritizing, you need a clear understanding of your project goals. In most cases, they will look something like this:

  • Identify and quantify subsets of user segments based on use cases
  • Understand the potential value of known segments
  • Identify features and benefits that are most important on a per-segment basis
  • Find opportunities to improve the engagement of high-value users

These goals can be turned into the key research questions of your study.

Step 2: Identify valuable behaviors beyond revenue

Your most valuable user segments, of course, need to drive revenue, but there are other indicators to consider when prioritizing who you are building and optimizing for.

Current value metrics, future value indicators, influence value, and cost-to-serve factors will all influence the overall value of a user segment.

  • Current value metrics: Revenue generated, subscription tier, feature usage, team size
  • Future value indicators: Growth trajectory, expansion potential
  • Influence value: Referral behavior, advocacy impact
  • Cost-to-serve factors: Support requirements, implementation complexity, churn risk, acquisition cost

Identifying and tracking these metrics, then scoring segments against them, paints a more holistic picture of value. Some segments might not be your biggest revenue drivers today but represent significant future opportunities, so you may choose to optimize for them instead of your current biggest spenders.

Step 3: Collect qualitative and quantitative data

Once you’re clear on goals and value metrics, you’re ready to start collecting data for your segmentation analysis. Gathering a multidimensional dataset will help you better understand users as the complex humans they are. Types of data that will help your analysis include:

  • Usage patterns: Frequency, features used, time spent in the product
  • Transactional data: Revenue contribution, plan type, upgrade/downgrade history
  • Behavioral signals: Engagement with key activation points, referral behavior
  • Acquisition source: Channel origin, customer acquisition cost, time to convert
  • Demographic/firmographic data: Company size, industry, role

Most of this data will come from your main quantitative collection tool, such as Google Analytics or your product analytics. But for a truly effective study, you need to supplement it with qualitative context. Surveys, session recordings, or user tests can help you better understand why your users are doing what they do.

Step 4: Conduct factor analysis to identify value drivers

Group your variables into a reduced number of independent factors that represent the underlying themes within the dataset. This will help identify the value drivers that differentiate your user segments.

For example, in a recent segmentation project, we discovered distinct value factors that formed natural segment groupings:

  • Efficiency seekers: Primarily valued time savings and streamlined workflows
  • Integration power users: Heavily utilized connections to other tools in their stack
  • Data-driven optimizers: Focused on analytics and performance insights
  • Scale-focused operators: Needed enterprise features and team collaboration

Understanding these value drivers helps you move beyond simple demographic segmentation to truly understand what motivates different user groups.
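
If you want to run this quantitatively, factor analysis is available off the shelf. A minimal sketch with scikit-learn, assuming a hypothetical matrix of per-user variables (survey ratings, usage counts) and four factors:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

# Hypothetical input: rows are users, columns are survey/usage variables.
X = pd.read_csv("user_value_variables.csv", index_col="user_id")
X_std = StandardScaler().fit_transform(X)

fa = FactorAnalysis(n_components=4, random_state=0).fit(X_std)
loadings = pd.DataFrame(
    fa.components_.T, index=X.columns,
    columns=[f"factor_{i + 1}" for i in range(4)],
)
print(loadings.round(2))  # name each factor from its highest-loading variables
```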

Step 5: Apply cluster analysis to form actionable segments

Once you’ve identified the key value drivers, use cluster analysis to group users with similar characteristics. Usually, 3-7 distinct segments emerge from the exercise.

These segments often cross traditional demographic lines, revealing unexpected patterns. For example, power users might not be the enterprise customers you assumed, but mid-market companies with specific workflow needs.

This is also the time to start looking for natural clusters of behavior that indicate high-value segments. When you’re analyzing user clusters, look for key differentiators like the following (a clustering sketch follows the list):

1. Usage frequency: Daily users vs. weekly vs. monthly
2. Feature utilization: Which user flows are most common for each segment
3. Value perception: What features each segment values most highly
4. Growth potential: Which segments show increasing usage over time
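
Continuing the factor-analysis sketch above, here is a hypothetical pass at clustering users in factor space, letting the silhouette score pick the number of clusters within the 3-7 range:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

scores = fa.transform(X_std)  # per-user factor scores from the previous sketch

best_k, best_labels, best_sil = None, None, -1.0
for k in range(3, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    sil = silhouette_score(scores, labels)
    if sil > best_sil:
        best_k, best_labels, best_sil = k, labels, sil

print(f"{best_k} clusters (silhouette: {best_sil:.2f})")
```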

                Step 6: Quantify segment value and opportunity size

                The inputs from your data, factor, and cluster analyses will produce outputs of your high-value segments.

                Here’s an example of that workflow so far. The data (survey themes collected) on habits, values, and use cases were the inputs for the factor and cluster analyses. That resulted in segments around the frequency of product use, customer values, and reason for use.

                An example of the workflow to quantify segment value and opportunity size.

                For each potential high-value segment, revisit the value metrics you established in step 2 of the process. Calculate the relevant metrics to ensure you’re not just following hunches but making data-backed decisions about where to focus.

                The most valuable segments often show strength across multiple metrics, not just in current revenue. For example, a segment with moderate current revenue but excellent retention and high referral rates may be more valuable than a high-revenue segment with poor retention.

                You’ll also start to see how your most valuable segments differ from your hypotheses. Maybe it’s not defined by company size but by a specific usage pattern. As a specific example, imagine users who perform at least 3 exports per week AND invite 2+ team members within the first 30 days are 4.5x more likely to upgrade to the enterprise tier within 6 months.
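
A minimal sketch of how you might validate a hypothesis like that against your own data (the file, columns, and thresholds are hypothetical):

```python
# Minimal sketch: measure upgrade-rate lift for users exhibiting a
# specific early behavior. Column names are hypothetical.
import pandas as pd

users = pd.read_csv("user_metrics.csv")

cohort = (users["exports_per_week"] >= 3) & (users["invites_first_30d"] >= 2)

rate_cohort = users.loc[cohort, "upgraded_within_6mo"].mean()
rate_rest = users.loc[~cohort, "upgraded_within_6mo"].mean()

print(f"cohort upgrade rate: {rate_cohort:.1%}")
print(f"everyone else:       {rate_rest:.1%}")
print(f"lift: {rate_cohort / rate_rest:.1f}x")  # e.g. the 4.5x above
```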

                This kind of insight could transform your priorities, focusing on making these specific actions easier and more intuitive, rather than spending time/money on creating new features for other segments.

                Step 7: Map segments to specific opportunities

                The final step is to leverage your knowledge about high-value users to focus optimization efforts. Now, you can connect your segment analysis to concrete optimization opportunities. A few thought starters for this process:

                1. What actions correlate with long-term success for this segment?
                2. Where do users in this segment typically struggle?
                3. What capabilities does this segment need but doesn’t have?
                4. What value propositions connect most strongly to this segment?

                You’ll end up with a list of optimization opportunities. To prioritize those efforts and start building a roadmap, we recommend scoring them across these dimensions on a 1-10 scale, then calculating a weighted score that reflects your company’s specific situation and constraints.

                1. Potential revenue impact: How much additional revenue could optimizing for this segment generate?
                2. Implementation effort: How difficult would it be to implement changes for this segment?
                3. Time to results: How quickly can you expect to see meaningful outcomes?
                4. Strategic alignment: How well does focusing on this segment align with your long-term business strategy?

                For example, if you’re under pressure to show quick wins, you might weigh “time to results” more heavily. If you’re planning for long-term growth, strategic alignment might carry more weight.
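
Here’s a minimal sketch of that weighted scoring, with illustrative weights and opportunities. Note that effort and time are scored so that higher means easier or faster, keeping every dimension pointed in the same direction.

```python
# Minimal sketch: weighted prioritization of optimization opportunities.
# Weights and scores are illustrative; tune them to your situation.
import pandas as pd

weights = {"revenue_impact": 0.35, "implementation_effort": 0.20,
           "time_to_results": 0.20, "strategic_alignment": 0.25}

opportunities = pd.DataFrame([
    {"name": "Streamline export flow", "revenue_impact": 8,
     "implementation_effort": 7, "time_to_results": 8, "strategic_alignment": 6},
    {"name": "Enterprise onboarding", "revenue_impact": 9,
     "implementation_effort": 3, "time_to_results": 4, "strategic_alignment": 9},
])

opportunities["score"] = sum(opportunities[dim] * w for dim, w in weights.items())
print(opportunities.sort_values("score", ascending=False)[["name", "score"]])
```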

                This will be the start of your roadmap for optimization efforts, ensuring that you focus resources on the right opportunities for your most valuable segments.

                Focus on your highest-value segments first, then gradually expand your optimization efforts to secondary segments once you’ve captured the initial value. Always consider potential cross-segment impacts when making changes.

                Drive growth with user segmentation and prioritization

                As your product and market evolve, so will your user segments. What constitutes a high-value segment today may shift as you introduce new features or enter new markets.

                We recommend evaluating your user segments quarterly, with a more comprehensive review annually or whenever you experience significant business changes.

                Remember, the path to scaling your SaaS business isn’t through trying to please everyone with generic optimizations. It’s through deeply understanding which user segments create the most value and deliberately focusing your limited resources on enhancing their experience.

                Ready to identify and prioritize your most valuable user segments? The Good’s Digital Experience Optimization Program™ can help you discover untapped growth opportunities through expert research, strategic insight, and data-driven experimentation. Contact us to learn more about how our team can help your SaaS business scale faster.

                Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

You Launched A New Website; What’s Next? https://thegood.com/insights/you-launched-a-new-website-whats-next/ Fri, 14 Mar 2025 19:04:26 +0000

                Launching a redesigned or re-platformed website feels like crossing the finish line of a marathon. Months, sometimes years, of hard work are finally coming to fruition. You’re dreaming of the rest you’ll be able to enjoy now that the project is complete.

                But you aren’t crossing a finish line; you’re summiting a mountain. Reaching the peak feels like completion, but experienced climbers know that’s only halfway. The descent is equally challenging and requires different skills and focus. In the same way, optimization requires a different approach than a website launch.

                You’ve successfully achieved a huge accomplishment, but you’re only partway through. You still have to climb down the mountain or, in this case, optimize your website based on user feedback.

                In our decades in business, we have discovered that the most successful companies view launch day as “day one” of an optimization journey. If you want to do the same, this is the playbook to follow to maximize ROI on redesign investments.

                The benefits of optimization post-website launch

                By leveraging optimization post-launch, you can expect benefits like the following:

                • Objectively and quickly determined opportunities for change
                • Easily determined priorities according to potential impact
                • Less waste of resources on changes that won’t work
• Higher ROI because you achieve higher success rates at a lower cost

                Leveraging optimization after launching a website is a high-performing, systematic way of getting better results from your hard work.

In an ideal world, you conducted a data-driven redesign, carefully gathering measurements of what was actually happening on the site and feedback from customers to inform the process.

                In this case, you have already conducted user testing and received clear customer feedback on your new site. You’re set up to seamlessly start collecting post-launch data and begin identifying improvement opportunities to build on the momentum you’ve generated.

                If you haven’t, don’t fret. You can still reap the rewards of optimizing your site post-launch. As long as you don’t “set it and forget it,” you’re in a much better position than most of your competition.

                The best optimization tools post-launch

                What do you need to get started?

                Our recommended toolkit for optimization post-launch includes:

                Quantitative data

                • Google Analytics: Analyze traffic sources, user behavior, and conversion paths. GA is essential for comparing pre/post-launch performance and identifying underperforming segments or pages.
                • Heatmaps: Visual representations of user clicking, scrolling, and movement behavior that reveal which elements attract attention and which are ignored. Allows you to optimize content placement and identify what does/doesn’t resonate.

                Qualitative data

                • Usability testing: Structured observation of real users completing key tasks on your new site. Reveals pain points invisible in quantitative data alone and provides direct insight into how customers actually experience your redesigned user journeys.
                • Session recordings: Video captures of actual visitor interactions with your site that expose unexpected navigation patterns and friction points. Helps you identify where users get confused, hesitate, or abandon their journey on specific pages.
                • Customer feedback tools: Direct voice-of-customer collection mechanisms, such as surveys and feedback widgets, that capture qualitative insights about the redesign, highlighting immediate improvement opportunities from your most valuable asset: your customers.

                Experimentation

                • Rapid testing: Quickly validate design and content changes without dev support. Get efficient feedback on elements like CTAs, headlines, and pricing or product page components.
• A/B testing or multivariate testing: Get statistically significant proof to validate riskier design and content changes without full deployment. Test in context and make sure any further website updates will drive target user behavior. (A minimal significance check is sketched below.)
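
As a minimal sketch, assuming hypothetical visitor and conversion counts, a two-proportion z-test is one common way to check whether a variant’s lift is statistically significant:

```python
# Minimal sketch: significance check for an A/B test with a
# two-proportion z-test. Counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 475]   # control, variant
visitors = [10000, 10000]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Unlikely to be random chance at the 95% confidence level.")
else:
    print("Not significant yet; keep the test running or recheck sample size.")
```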


                What to do post-website launch step-by-step

                With those tools in mind, you’re ready to get started.

                Step 1: Rapid response protocol and the basics

                The best way to celebrate a website launch is to get your organization together for a post-launch bug-squash-a-thon.

                During the first 72 hours post-launch, get cross-functional teams together to hunt for bugs on the new site. Document any technical issues in a central location and use a simple prioritization framework so your devs know what to tackle first.
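
The framework doesn’t need to be fancy. As a minimal sketch (with illustrative scores), multiplying severity by reach is enough to give devs a defensible ordering:

```python
# Minimal sketch: prioritize documented bugs by severity x reach.
# IDs and 1-5 scores are illustrative.
bugs = [
    {"id": "BUG-1", "severity": 5, "users_affected": 4},  # checkout broken
    {"id": "BUG-2", "severity": 2, "users_affected": 5},  # homepage typo
    {"id": "BUG-3", "severity": 4, "users_affected": 2},  # Safari layout issue
]

for bug in bugs:
    bug["priority"] = bug["severity"] * bug["users_affected"]

for bug in sorted(bugs, key=lambda b: b["priority"], reverse=True):
    print(f"{bug['id']}: priority {bug['priority']}")
```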

                Implement fixes while the bug hunt is going on so people see the improvements in real time and are motivated to keep searching. Keep in mind that some of these bugs will indicate larger UX issues, so don’t delete them after solving them; keep them available for review later in the week.

                Your only other priority in the first three days post-launch is to ensure all of your tooling is set up correctly. Make sure your Google Analytics is tracking the right metrics, get your heatmapping software working on the new site, and be sure any customer support functions (chatbots, etc.) are up and running.

Step 2: Compare pre- and post-launch data

Revisit your baseline metrics to start reviewing pre- and post-launch data. Specifically, compare KPIs before and after the redesign, broken down by top channels and device types, to build a foundational understanding of what is working and where to dig in.

                Example key performance indicators to track immediately:

                • Conversion rates by traffic source and device type
                • User engagement metrics (time on site, pages per session)
                • Cart abandonment and checkout completion rates
                • Customer acquisition costs vs. lifetime value

To do this effectively, set up before/after comparison dashboards and circulate them among your team.
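
Underneath a dashboard like that is a simple comparison. Here’s a minimal sketch, assuming a daily metrics export tagged by channel and device; the file name, column names, and launch date are hypothetical.

```python
# Minimal sketch: compare average conversion rate before and after launch,
# broken down by channel and device. Names and dates are hypothetical.
import pandas as pd

df = pd.read_csv("daily_metrics.csv", parse_dates=["date"])
launch = pd.Timestamp("2025-03-01")
df["period"] = df["date"].map(lambda d: "post" if d >= launch else "pre")

comparison = (df.groupby(["channel", "device", "period"])["conversion_rate"]
                .mean()
                .unstack("period"))
comparison["change"] = comparison["post"] - comparison["pre"]
print(comparison.sort_values("change"))  # worst-hit segments float to the top
```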

                Layer heatmaps on top of this to understand how performance changed on a page-by-page basis with more context.

Where things aren’t improving, you can spot low-hanging-fruit optimization opportunities and flag areas that need more testing to understand what is going wrong.

                Step 3: Refresh user testing and competitive analysis

                Once you have a baseline of data and changes, you’ll want to conduct user research and testing. Have real people use your new site to discover its flaws and give their feedback. If you didn’t conduct testing during the refresh process, you might also find it helpful to have them use both versions of your site (the old and the redesigned) to tell you which they prefer.

                Even if you have plenty of user testing data on the old site, it’s crucial to gather impressions of the site update and collect ideas for additional optimization. You can send the site to the same user testers for comparative feedback, and also be sure to get your site in front of new users for an unbiased and completely fresh perspective.

                Specifically, conduct usability testing with new users to help uncover accessibility issues and where customers are getting stuck on the path to conversion.

                It’s also a good time to study your competitors. What features and systems work well for their digital properties? Their success doesn’t guarantee success for you (even if your audiences overlap perfectly), but it can be a good reminder of what’s going on in the competitive landscape after many months of focusing on your own website.

                Step 4: Build an optimization roadmap

                Based on your GA data, the heatmaps you’ve set up, the user testing you’ve conducted, and the competitive analysis, you’re ready to start building an optimization roadmap.

                A clear roadmap will help align optimization efforts with business objectives, allocate resources for continuous optimization, and generate buy-in from leadership at your organization.

Theme the categories of problems and opportunities that you identified in steps 1-3 (more on this in our article on theme-based roadmaps). Then, prioritize the themes to create a clear 90-day plan of action.

                Some outcomes of your roadmap could be:

                • Using initial post-launch data, you identify underperforming customer segments and decide to theme personalization opportunities based on their early behavior patterns.
• Traffic source performance is lower than your benchmark, so you create a plan for optimizing landing pages and adjusting paid media strategies.
                • Session recordings show customers getting lost while looking for more information on a specific product, so navigation and directional guidance become the main focus for your next phase of optimization.

                Step 5: Delegate or outsource

                With a roadmap in hand, it’s time to optimize. At this stage, the post-launch excitement might be starting to fade, but this is the step that sets successful teams apart.

                If you have an internal optimization team, set up a meeting cadence and reporting framework that drives accountability, and you’re off to the races.

However, you may need external support if your team is struggling with:

                • Data interpretation challenges
                • Testing velocity limitations
                • Expertise gaps in specialized areas
                • General confusion about what to do next

Sometimes, it’s hard to read the label from inside the jar when you’ve been working on the site for so long. A partner like The Good can lend the expertise and fresh perspective to make sure you drive ROI from your redesign investment.

                Double down to keep the momentum going

                Your website improvement journey doesn’t end with launch; it evolves.

                The businesses that treat their website as a living, breathing asset rather than a static project are the ones that consistently outperform competitors.

                By implementing the five-step process outlined above, you’re positioning your organization to capture immediate wins while building a sustainable optimization practice that drives continuous improvement. 

                Each insight gained, each test run, and each improvement made compounds over time, creating an ever-widening gap between your business and those that “set it and forget it.”

                Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

How To Read A Heatmap Like An Expert Researcher: Patterns To Look Out For https://thegood.com/insights/how-to-read-a-heatmap/ Mon, 10 Feb 2025 17:49:14 +0000

                Wondering why users leave your site without converting?

                You may have a gut-instinct answer to the question. You might even have ideas for how to tweak design, rewrite headlines, or add new features in an attempt to get users to stick around. But guesswork isn’t a strategy.

                Expert researchers don’t guess. They use data, and one of their most powerful tools is the heatmap.

                When used correctly, heatmaps reveal where users hesitate, what grabs their attention, and where they drop off—all critical insights for optimizing conversions. But the real magic isn’t simply generating a heatmap; it’s knowing how to read it.

                In this guide, we’ll break down how to read a heatmap like an expert so you can stop guessing and start making informed, high-impact changes to your website.

                Intro to heatmap analysis

Heatmap analysis shows real data points that represent actual human behavior. And when those behaviors form discernible patterns, we use them to form hypotheses about user wants and needs.

                Heatmaps can help answer questions about user behavior and uncover sticking points in the customer journey.

Like footprints in the sand, heatmaps show us where users have been. And we use that information to infer intent.

                Types of heatmaps

                At The Good, we primarily use three types of heatmaps: Click maps, movement maps, and scroll maps.

                These types of heatmaps provide insights that answer critical UX and conversion questions, such as:

                • Are users seeing my key content?
                • What elements are they engaging with?
                • Where are they paying the most attention?

                By analyzing these patterns, we can pinpoint where users get stuck, what’s drawing their attention, and where they drop off—and take action to improve the experience.

                Scroll Maps

                Scroll maps visually depict typical scroll depth on any web page. Key insights you can glean from scroll maps:

                • Where users drop off (high exit points may signal a false bottom)
                • Whether important content is being seen
                • If users are scanning or engaged

Tools typically use color scales to show the portion of users who scroll to different parts of your page. Red or “hot” areas of your heatmap indicate that all or almost all of your users have seen that part of the page. As you move down the page, the colors get “colder” according to the percentage of users who scroll to that point.

                The lines on the page below indicate where 25%, 50%, and 75% of users dropped off, meaning they left the page or clicked on something, therefore not scrolling further.

                While shallow scrolling is not inherently negative, it may indicate lost user attention or that a page does not look scrollable.

                The same goes for deeper scroll depths. It is not inherently positive or negative to see a deeper scroll depth. Depending on the surrounding context, deeper scroll depth may indicate that users are failing to find meaningful content higher on the page, and therefore go looking by scrolling down.
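
For intuition, those drop-off lines come from simple percentile math over each visitor’s maximum scroll depth. A minimal sketch, with illustrative data:

```python
# Minimal sketch: derive the depth reached by 75%, 50%, and 25% of
# visitors from their maximum scroll depths (% of page height).
import numpy as np

max_depths = np.array([100, 95, 80, 62, 60, 55, 48, 40, 35, 22])  # illustrative

for share in (75, 50, 25):
    # the depth that at least `share`% of visitors reached
    depth = np.percentile(max_depths, 100 - share)
    print(f"{share}% of visitors scrolled past {depth:.0f}% of the page")
```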

                Movement Maps

                Movement maps show where users have hovered their mouse on a page. They are valuable because they show us where the majority of user attention is focused. Movement maps can show:

                • What content users are reading or skimming
                • Where their attention is most concentrated
                • Whether key information is being overlooked

                Movement maps help us infer what content is most valuable to users during the decision-making process.

Reading movement maps is similar to reading eye-tracking heatmaps. For many users, mouse movement follows their gaze, so knowing where the mouse moves tells us what content users are reading or paying attention to.

Based on our experience, concentrated left-to-right movement over text generally indicates intentional reading.

In this example, we see side-to-side movement over FAQs, indicating users are reading each question to determine which one may reveal helpful information about the services being offered. We looked at movement clusters in the FAQs, which, when paired with data about the most highly clicked FAQ items, helped us determine what questions users needed answered to have the confidence to purchase services.

                In contrast, up-and-down movement may indicate areas that users are simply skimming rather than intently reading. Take this example: seeing vertical movement patterns indicated to us that users may be scanning the resources available (rather than reading). User testers told us that the content did not look worthwhile, so those two bits of data together told us this area needed some fresh content and a redesign.

                Click Maps

                Click maps show us what elements users click on most commonly. Click maps can uncover insights including:

                • What elements drive engagement (or get ignored)
                • If users are clicking on non-clickable elements (indicating confusion)
                • Which navigation links or CTAs attract the most interest

                Hot spots, shown in red, have the highest concentration of clicks. Transparent blue spots represent a low density of clicks.

                In the click map below, we see a list of “All Products” with one notable hot spot in the middle of the menu. What, you may wonder, is in the middle of the list that is drawing so many clicks?

                The answer is in the name: Paints. Here we see an example of a company with a clear specialty and a large portion of their sales going to one category. Yet, when we saw this heatmap we realized they were making the user work hard to find these most popular products by burying them in the middle of the list.


                18 Heatmap patterns to look out for

                To get the most value out of heatmaps, researchers have to analyze how different heatmap elements interact, compare trends across pages, and validate findings with other data sources like session recordings or analytics.

                In this section, we’ll walk through the most common heatmap patterns, what they look like, and what they reveal about user behavior so you can start making smarter, data-backed decisions.

                The Spot Specific Pattern

                Where it appears: Click map

                What it looks like: Highly concentrated heat activity on an individual spot in a sea of text.

                What it means: Users might have a specific interest related to a need. They could also be clicking on a non-clickable element within a paragraph or looking for information that is slightly buried within other text. It may be an indicator that you need to rearrange a menu or better highlight certain features of a product.

                Gapped Patterns

                Where it appears: Click map

                What it looks like: In a list of items, there is one that gets no heat activity.

                What it means: It usually means that a user doesn’t know what to expect if they were to click here, or they are simply disinterested.

                Primacy vs Recency Pattern

                Where it appears: Click map

                What it looks like: Concentrated click activity on the first and last items in a list.

What it means: Typical of menus, users often engage most with the first and last items in any list. The pattern is named after the psychological phenomenon in which people best recall the first and last items in a list.

                Filter Hot Spots

                Where it appears: Click map

                What it looks like: Concentrated clicks on certain areas of a filter, and a lack of clicks on other areas of a filter.

                What it means: Users generally rely heavily on certain filters and less on others. Knowing what filters are helpful to users might tell us how we should rearrange filters or give us context for what users care about in their products.

                Consistent Browsing Pattern

                Where it appears: Click map

                What it looks like: Strong click patterns across products on category pages.

                What it means: This tells us that users are interested in a variety of products on the category page and are clicking on various product pages.

                Spotted Browsing Pattern

                Where it appears: Click map

                What it looks like: Strong amount of clicks on only a few product images on category pages.

                What it means: This tells us that users are most interested in specific products. These might be flagship products (as in this example).

                Strong Pagination Pattern

                Where it appears: Click map

                What it looks like: Concentrated activity on the pagination with little activity on filters or product tiles.

                What it means: Users might not have very intentional browsing behavior, and instead of engaging with product tiles and narrowing down their search, they are simply going from page to page to see all products.

                Click Indecision

                Where it appears: Movement map

What it looks like: Horizontal heat patterns in the space between two or more clickable elements positioned next to each other. Can be found in navigation menus or even between dual CTAs.

                What it means: Users are hovering between clickable elements. They might be experiencing a bit of uncertainty in their browsing experience. They’re not sure where to click because both options are similar in nature or unclear.

                F-Shaped Reading

                Where it appears: Movement map

What it looks like: Concentrations of heat in the shape of an F on the page. Users read horizontally across the top, make a shorter horizontal pass further down, and then scan vertically down the left side.

                What it means: Users are assessing the content on the page but they are not necessarily reading it.


                Commitment Reading

                Where it appears: Movement map

                What it looks like: Blocks of heat activity usually on content pages or chunks of text.

                What it means: Users are high-intent and they’re learners. These patterns show strong interest in the information displayed and intentional reading.


                Layer Cake Pattern

                Where it appears: Movement map

What it looks like: Heat concentrated on headlines, with little activity over the associated subtext beneath them.

                What it means: They are interested in the content but are reviewing the page at a high level.

The downside of this pattern is that users could be missing content related to their needs, which diminishes the content’s ability to prompt a desired course of action.

                Scrolling Pattern

                Where it appears: Movement map

                What it looks like: A vertical heat pattern that travels down the page. On low-traffic pages, this might be represented by dots that align in a vertical fashion, as with the example here.

What it means: This signifies that users are scrolling down the page without necessarily reading the content. They might be looking for something they are not finding, or the content might be arranged in a fashion that is best for scanning. If this is paired with very little click engagement, we might assume the content is not very valuable.

                Truncated Scanning

                Where it appears: Movement map

                What it looks like: Users skip a consistently repeated word in a text.

                What it means: Users are reading content faster, likely because the content is repetitive and it’s easy to recall the skipped word.

                Dropdown Residue

                Where it appears: Movement map

What it looks like: Spotted heat residue in a rectangular shape positioned below the top navigation.

                What it means: This is residual activity of users strongly considering items in the drop-down menu or some drop-down element on the page. Residue will be concentrated in the areas where users are actually scanning and considering the content.

                Image Hover

                Where it appears: Movement map

                What it looks like: Heat activity around images on a page. Could be on a category page or rows of photos.

What it means: The imagery is dynamic: a secondary image appears when users hover over the primary image. Users hover around the image to see the second photo.

                Content Avoidance

                Where it appears: Movement map

What it looks like: The inverse of a hot spot pattern: people explicitly avoid an area with their mouse, almost creating a frame around it.

                What it means: This might mean that users perceive this as an ad and are intentionally avoiding it, or have “banner blindness” and simply don’t see the content as relevant to their visit.

                False Bottom

                Where it appears: Scroll map

What it looks like: On scroll maps, a sharp drop-off above the halfway mark of the page.

                What it means: Users might perceive that they’ve reached the end of the page. This is extremely common when email signups are in the middle of a page (see example right) and when there is a strong color contrast, full-bleed section early in the page. These things signal the footer is coming, so they often make users think they’ve seen everything they need to see.

                Halted Pattern

                Where it appears: Scroll map

                What it looks like: Drop-off is right above the fold, and nearly no users scroll below it.

                What it means: Either most users are finding something to click on above the fold, there is a high bounce/abandon rate, or there is a false bottom. It could also be some combination of the three.

                What is the best tool for heat mapping?

                Not all heatmaps are created equal. The best heat mapping tool is the one that provides clear, actionable insights without adding unnecessary complexity.

                For most teams, Hotjar will be a great go-to solution. It’s lightweight, easy to set up, and provides a suite of heatmaps—including click maps, scroll maps, and movement maps—that help you understand user behavior at a glance.

                Why Hotjar?

                • Comprehensive Behavior Tracking: Hotjar captures how users interact with your site—where they click, how far they scroll, and what elements they hover over.
                • Fast Insights, No Heavy Lifting: Unlike enterprise tools that require complex setup, Hotjar makes it easy to get started and see results quickly.
                • Paired with Session Recordings: Heatmaps alone tell part of the story; Hotjar lets you connect heatmap insights to real visitor session recordings for deeper analysis.

                While it’s our top pick, if Hotjar isn’t the right fit, another good option is Microsoft Clarity.

                Turning heatmap data into actionable strategies

                Reading a heatmap like an expert researcher isn’t just about spotting red and blue zones—it’s about understanding the “why” behind user behavior and knowing what to do next.

                But if you don’t have the time or resources to build a research team, you don’t have to go it alone. At The Good, we specialize in turning heatmap data into clear, actionable strategies that drive real results.

                Want to skip the learning curve and get expert insights now? Let’s talk.

                Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

A Guide For Preventing Form Fatigue To Increase Conversions & Improve UX https://thegood.com/insights/form-fatigue/ Mon, 27 Jan 2025 19:21:18 +0000

                While terms like scroll fatigue or decision fatigue are commonplace in UX, a quick search for resources on form fatigue doesn’t surface much. But, with over 15 years of experience optimizing digital experiences, we know how prevalent it can be.

                Drawing from those years of experience improving SaaS platforms, we’ve identified and addressed form fatigue across various products. In this article, we’ll show you how to uncover and fix it effectively.

                Keep reading to learn:

                • Research methods for uncovering form fatigue
                • User behavior patterns that indicate your users suffer from form fatigue
                • Actionable strategies to improve form fatigue and increase conversions

                What is form fatigue?

Form fatigue occurs when a user becomes frustrated and/or exhausted by the complexity or length of a digital form. Poor form design directly contributes to this sense of fatigue and causes users to abandon the form.

                Psychologically, users are conditioned to prefer experiences that require minimal cognitive effort. We want experiences that accomplish our goals simply and quickly. When a user experience does not meet those instincts, conversion rates drop.

                Form fatigue is typically caused by things like:

                • Content fatigue: When excessive textual/visual content on a page overwhelms users, hindering their ability to find relevant content for successful task completion.
                • Heavy cognitive load: When undue mental effort is required to accomplish a task, causing analysis paralysis or frustration, leading to abandonment.
                • High interaction cost: When a task or interaction requires significant time and/or effort to accomplish, possibly creating frustration and resulting in abandonment.

                How to identify form fatigue

                When working on a product day in and day out, you might be too close to the forms to know if fatigue is happening. That is where research can help.

                Getting an external, real user perspective can expose things like content fatigue, heavy cognitive load, or high interaction cost in your forms.

                So, the best way to identify form fatigue is through user research. While there are plenty of methods, the best for this particular scenario include:

                • Session recordings
                • Heatmaps
                • Scroll maps
                • Click maps
                • User tests

                With your raw data in hand, look out for some specific patterns that might indicate form fatigue:

                • Scanning: A user scrolls over content (text or images) at a higher scroll rate on mobile, while on desktop they might hover over some words or phrases, or completely skip over content altogether.
• Halted Scrolling: The user pauses, possibly to engage with content or reorient themselves; the pause may also indicate that the user perceives a false bottom.
• U-turns: When a user back-navigates to the previous page they were just on, using either breadcrumbs or the back button.

These research patterns can point to moments when users are experiencing form fatigue and where the digital experience can be optimized.
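
As an illustration, U-turns in particular are easy to flag in exported session data. A minimal sketch, with hypothetical page paths:

```python
# Minimal sketch: flag sessions where a user returned to the page they
# had just left (a "U-turn"). Session data shown is hypothetical.
sessions = {
    "s1": ["/pricing", "/signup", "/pricing"],           # bounced off the form
    "s2": ["/home", "/features", "/signup", "/thanks"],  # completed
}

def has_u_turn(pages):
    """True if any page view repeats the page seen two steps earlier."""
    return any(pages[i] == pages[i - 2] for i in range(2, len(pages)))

flagged = [sid for sid, pages in sessions.items() if has_u_turn(pages)]
print(f"sessions with U-turns: {flagged}")  # -> ['s1']
```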


                7 ways to prevent form fatigue

                If you suspect form fatigue or uncover evidence of it in your research, don’t fret. There is plenty you can do to fix it. For companies building new forms, these tips can also be used to prevent form fatigue in the first place.

                1. Execute the 10 principles of good form design

                The first, and arguably the most important, way to limit form fatigue is to understand and act on the principles of good form design. Website forms are one of your most important onsite elements. They are the crux of a user’s path to conversion.

                Bad form design can cause users to drop off during critical conversion opportunities, leaving them frustrated or confused, while great form design creates a seamless user experience that can increase conversion rates and leave users feeling excited about a product or company.

                These are the ten established form design principles to help you create better experiences:

                1. Priming: Prepare users by setting clear expectations about the form’s purpose, length, and benefits before they begin.
                2. Error Prevention: Design forms to minimize user mistakes by using constraints, clear labels, and smart defaults.
                3. Error Recovery: Make it easy for users to identify, understand, and fix errors with real-time validation and clear messaging.
                4. Feedback: Provide immediate, actionable responses to user inputs to build confidence and guide progression.
                5. Proximity: Group related fields together logically to make forms easier to navigate and process mentally.
                6. Convention: Follow familiar design patterns to ensure users can complete the form intuitively without unnecessary friction.
                7. Momentum: Encourage users to keep going by visually or textually reinforcing their progress through the form.
                8. Proof: Build trust and reduce hesitation with evidence like security assurances, testimonials, or recognizable logos.
                9. Demonstrated Value: Highlight the benefits of completing the form so users feel their effort is worthwhile.
                10. Perceived Effort Level: Design forms to appear simple and manageable by minimizing visible fields and breaking longer forms into steps.

                To learn more, we explore these principles and include 32 good form design examples in this companion article.

                2. Ask for minimal information upfront

In research and testing for clients, we have found that asking for less information upfront may help prevent form fatigue and, in turn, increase initial registrations. The highest-converting forms ask for only the information necessary to register, saving everything else for post-registration. That could be as little as an email address, or it could include a name and other essential details.

                Once the user is registered, they can be guided through additional steps to help personalize the account to their needs, for example, more personal information, settings, shipping preferences, choosing a plan, adding orders, etc.

                3. Reduce form length perception

                For forms that can’t reduce the information required, research shows users’ perception of form length can be as important as the actual length.

                You can reduce perceived effort with strategies like:

                • Chunking forms into steps: Break longer forms into smaller, manageable sections and use clear step titles (e.g., “Step 1: Account Details”).
                • Collapsible sections: Use collapsible form fields to make the interface less overwhelming while still providing access to all necessary fields.
                • Auto-advance fields: Automatically move users to the next field when input is complete (e.g., credit card information split into boxes).

                4. Make clear suggestions

                Simplify decision-making by limiting options and highlighting recommended choices. You can use autofill and predictive text to reduce manual input and create an intuitive, logical flow that guides users naturally through the form.

                5. Optimize for mobile or desktop

At this point, we shouldn’t even have to say it, but you’d be surprised how often teams forget to tailor the experience to the correct device. Form fatigue is exacerbated when the design doesn’t function well on the device being used. The design should adapt for mobile or desktop users, regardless of whether you are an app-first or desktop-first product.

                One essential way to do this is by adjusting keyboard inputs. For example, when a field is asking for a zip code or phone number, default to the numeric keyboard on mobile to make it as simple as possible to fill out the form.

                6. Use gamification to entertain

                Gamifying the form-filling experience can motivate users to complete it. So, when you have an extensive form that needs filling and can’t be simplified, add elements like milestones, progress rewards, and personal messages to keep users entertained and motivated. Celebrate small wins when users complete sections and consider unlocking discounts, offers, or badges as users complete each step. It’s hard to be fatigued when you’re having fun.

                7. Leverage post-signup emails

You can also prevent form fatigue by collecting information through other channels. Use post-signup emails to gather details that aren’t imperative to registration. For example, a user’s birthday could come in handy for rewards later on, but it is better to collect it post-signup to keep the registration form short.

                Additionally, the email body can link the user to connect new apps to their account, access more discounts, watch tutorials, download resources, or contact their team.

                Many SaaS companies also send emails from a real person to encourage users to respond if they have questions or need help. These personal follow-ups can also help recapture users who abandon the form initially.

                To prevent form fatigue in UX design, focus on strategies that simplify and streamline the user’s form-filling experience. Remember, the goal is to make form completion feel easy and painless for the user.

                Ready to eliminate form fatigue and boost conversions?

                Form fatigue can quietly undermine your UX efforts, leading to missed conversions and frustrated users. However, with thoughtful research, clear design principles, and actionable strategies, you can create forms that not only engage users but also encourage them to complete the journey.

                At The Good, we specialize in helping businesses like yours eliminate friction and create digital experiences that drive results. See this form improvement example from our work with Helium 10.

                If you’re ready to optimize your forms and increase conversions, reach out to our team today. Let’s work together to turn your users into loyal customers.

                Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

What Comes After Product Market Fit? https://thegood.com/insights/product-market-fit/ Fri, 25 Oct 2024 05:24:22 +0000

                Finding product-market fit is a big achievement for any SaaS organization, but it can leave you feeling lost. It’s like the adage of the dog that finally catches the car. Now that you have it, what do you do with it?

                Many leaders think the solution is straightforward: “Just scale up!” they say.

                In truth, it’s not that simple. Product-market fit isn’t a single moment in time. It’s a state that you must maintain, even as the market and your customers change. If you’re not careful, you could lose it.

                Let’s explore product-market fit and what to do once you’ve found it.

                Product-Market Fit Definition

                Before we get too deep into the post-product-market fit part of your lifecycle, let’s get on the same page.

                Product-market fit is the point where your product satisfies a strong market demand. It happens when your target customers recognize your product as the ideal solution to their problem. This recognition leads to organic growth.

                Essentially, it’s the point where you’ve created something people want and are willing to pay for.

                an illustration defining what product market fit is.

There is a misconception that product-market fit happens at the $1M ARR mark, but product-market fit is not a revenue stage. It can happen at any revenue level.

                A trusted way to gauge if you have found product-market fit is the 40% test by Sean Ellis. It is a pretty straightforward assessment using a customer survey question, “How would you feel if you could no longer use [product]?”

If at least 40% of your surveyed customers answer “very disappointed,” you have likely found product-market fit. It’s a leading indicator of what portion of users truly value your product.
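
Scoring the test is simple arithmetic. A minimal sketch, with illustrative survey responses:

```python
# Minimal sketch: score the Sean Ellis 40% test from survey responses.
# Response strings are illustrative; map them to your survey tool's export.
responses = ["very disappointed", "somewhat disappointed", "very disappointed",
             "not disappointed", "very disappointed", "somewhat disappointed",
             "very disappointed", "very disappointed", "not disappointed",
             "very disappointed"]

share = responses.count("very disappointed") / len(responses)
print(f"{share:.0%} would be very disappointed")  # 60% here
print("Signal of product-market fit" if share >= 0.40 else "Below the 40% bar")
```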

                Product-market fit is often considered the milestone that signals your product is ready to scale. It’s not just about acquiring customers, though—it’s about retaining them because your product consistently delivers value.

                What Comes After Product-Market Fit?

                Once you find your place in the market, the next obvious step is to scale up. But that comes with a caveat: You have to scale without losing product-market fit.

                a graph showing the stage between product market fit and scaling.

                “Product-market fit is a key milestone to reach, but it’s often misinterpreted as being a static moment in time,” say Fareed Mosavat and Casey Winters, product leaders from companies like Eventbrite, Pinterest, Grubhub, Instacart, and Slack.

                “The reality is that your customer base is always changing and consumer expectations are always growing. Once you get [an] initial product-market fit, you not only have to keep it, but also expand it.”

                If you focus too hard on acquisition and fail to refine your product, there’s a chance you could lose product-market fit. Obviously, that’s disastrous.

                Smart leaders who find product-market fit are wise to protect it to avoid losing market share. They continue to iterate on their product so customers always see it as the ideal solution.

                Sean Ellis, head of growth at companies like Dropbox, LogMeIn, and Eventbrite, and the guy who coined the term “growth hacker,” says it perfectly:

                “The mistake that many marketers make is that they are optimizing for short-term conversions. They think it’s all about maximizing clicks and sign-ups. But if the product isn’t truly great at delivering on the promise, then you will likely lose these people anyway.”


                Strategies for After Product-Market Fit

All of this raises the question: How do you scale up without losing product-market fit? It requires a shift in thinking and a bit of strategy.

                Step 1: Reassess Your User

                In some cases, product-market fit can be fleeting because the users who loved the early version of your product aren’t the same as your long-term users.

                We call this the “early adopter problem.” Early adopters love to try new products, especially when those products promise to disrupt existing systems or ways of doing things.

                However, those early adopters are also likely to move on to the next big thing. If too many of your customers are these early adopters, your customer base might bleed away until you lose your share of the market.

                “The problem is, the early adopters are only ever a small percentage of the overall market,” says Marc Andreessen. “And so a lot of founders, especially technical ones, will convince themselves that the rest of the market behaves like the early adopters, which is to say that the customers will find them. And that’s just not true.”

                Early adopters aren’t your only challenge. It’s counterintuitive, but if your product-market fit is good, you tend to grow fast, and your customers raise their expectations.

                “Slack had [an] extremely strong product market fit from the early days and ended up growing so fast,” says Fareed Mosavat, VP Programs and Partners at Reforge. “It was extremely difficult to keep up with the rising expectations of our customers over time, and took us a while to launch things like WYSIWYG editing, better ways to launch apps vs. just text commands, and simplified channel discovery that were important for our newer, less-technical users.”

                Once you find your place in the market, turn your attention to your users. Continue to conduct research to understand how to meet the needs of customers who aren’t the early adopters.

                Step 2: Build a Data-Driven Culture

                The need for robust customer research should come as no surprise to any growth-focused company, yet too many leaders take their foot off the metaphorical research gas pedal once they achieve product-market fit.

                Maintaining product-market fit and scaling up both require a culture that makes its decisions based on data and research. Scrappy startups can rely on quick, intuition-based movements, but scaling companies can’t ignore their data.

                The first step to a data-driven culture is to establish your shared growth KPIs early and ensure your team is moving in the same direction.

                Next, dig into your customer research. Conduct interviews, analyze data, and gather feedback to identify pain points, features, and reasons for churn/low engagement. Ideate on improvements to address those challenges.

                Finally, rapid test or A/B test changes to your website, marketing, and product to understand if investing resources makes sense.

                The key to building a data-driven culture is to make it a habit. You can start small by scheduling a few user/customer interviews each week, as expert human-centered product leader Emma Leyden suggests, and then build on that. With more and better data, you can more easily fold it into your process.

The good news is that the value of your data grows on its own. A bigger user base and more sessions mean more data points, which makes trends and patterns more apparent and reliable for decision-making.

                Step 3: Shift Your Hiring Priorities

                With product-market fit, your team and priorities will naturally evolve. You have new scale goals and likely a growing team that needs oversight.

Strong leadership is key here. At this stage, you’ll likely also struggle to balance supporting your growing user base with optimizing your product. Admittedly, that balance isn’t easy to achieve; it requires clear prioritization. Sometimes, newly hired leaders make the mistake of assuming their product is “built” and divert product design and development resources to other initiatives.

                It can help to leverage an external pod of product experts to fill in any gaps and help prioritize changes.

                The Good is a great option to supplement your product team post-product-market-fit. We can help you simplify decision-making by staying laser-focused on research, data, and goals through our Digital Experience Optimization Program™.

                Step 4: Start Building a Feature Moat

                A feature moat is when a product offers such unique and superior product features that the competition can’t quickly replicate them. There’s literally a gap—a moat—that your competitors will be scrambling to cross.

                Think of it like this: If your product is a great solution, it will change the lives and work of your users. Their needs and preferences change. They develop new problems that you’re positioned to solve. Each solved problem represents a widening moat between you and your competitors.

                How do you create this advantage? By continuing to drill deep into user needs and pain points even after you’ve achieved product-market fit.

                Don’t rest, satisfied that you’ve learned enough about your users. Continue to leverage generative and evaluative research to uncover new insights into their behavior and needs. Ultimately, this is key to developing a customer experience that evolves with the user.

                Step 5: Stop Obsessing Over Registration

Registration is just one moment in the user lifecycle. It’s a big moment, for sure, but too much focus on new users can cause you to ignore your existing user base. Once you have product-market fit, it’s an ideal time to level-set with your team about initiatives beyond registration.

                This is the time to work on your product-led growth strategy. Focus on improving the rest of the customer experience after registration, including onboarding, activation, engagement, expansion, and evangelism. These stages not only increase growth (through new users and retention), but they also give you a roadmap to iterate on your product moving forward.

In the post-product-market-fit stage, use research to lower your cost of acquisition, and remember that every user who doesn’t churn is an opportunity to spread positive word of mouth.

                Step 6: Consider Market Expansion

                Your product evolves as you introduce new features, achieve more growth, and scale up your platform. But eventually, your current product will reach a saturation point where growth reaches the limits of the market.

                In this case, the only way to grow is to expand your product-market fit by expanding into adjacent products or markets to find new potential customers.

                an illustration showing product market fit expansion.
                How product-market fit expands.

Expanding product-market fit doesn’t necessarily mean building new features. It’s a turn from solving your users’ current problems to anticipating their next ones. Expanding usually happens in three ways:

                • Same product in a new target market, e.g., Instacart expanding into pharmacy delivery.
                • Same market with an adjacent product, e.g., Lyft expanding to bikes and scooters.
                • New product in a new market, e.g., Amazon launching AWS.

                This kind of expansion does not happen incrementally. It typically happens in bursts when you recognize a new market, vertical, or user to serve. And in nearly all cases, it requires taking bigger bets.

                Support for Post-Product-Market Fit

                When your entire organization is built around finding product-market fit, the switch to a post-product-market fit strategy can be challenging. It requires a new way of thinking for this new stage in your company’s lifecycle.

In these cases, outside perspective is more important than ever. Our Digital Experience Optimization Program™ brings the pieces you need to build a better digital product. Our team can help you scale up without losing product-market fit. We bring the tools, techniques, and expertise that you just can’t find in a single hire.

                Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

                The post What Comes After Product Market Fit? appeared first on The Good.

                What is Card Sorting? https://thegood.com/insights/card-sorting/ Fri, 30 Aug 2024 16:36:13 +0000 https://thegood.com/?post_type=insights&p=109378 A/B testing is often the go-to play of digital optimization programs, but it comes with a serious limitation that many teams ignore: It can’t generate new ideas on its own. You see, A/B testing is evaluative research. It can tell you if one option works better or worse than another option. However, it can require […]

                A/B testing is often the go-to play of digital optimization programs, but it comes with a serious limitation that many teams ignore: It can’t generate new ideas on its own.

                You see, A/B testing is evaluative research. It can tell you if one option works better or worse than another option. However, it can require significant time and traffic before you can make informed decisions based on the data collected.

                When you need a direction to start testing, it’s important to use generative research: Exploratory user research that focuses on discovering and understanding user needs, behaviors, motivations, and expected experiences.

                Card sorting is one of the most powerful generative research methods we have at our disposal and a key part of a full research program. It’s a tool we use with clients because it can drill deep into what users expect from a digital experience.

                In this article, we’re going to dive deep into card sorting. This research method is a powerful tool every optimizer should have in their toolkit.

                What is Card Sorting?

                Card sorting is a research technique used to understand how users organize and categorize information. Essentially, you present participants with a set of cards, each representing a piece of content or functionality, and ask them to group these cards in a way that makes sense to them.

                an example of a card sorting exercise.
                Concepts are grouped by category to create good information architecture.

                This process reveals how people naturally think about and structure information. It lets you uncover insights into how users might intuitively organize menu items, product categories, or any other structured content on your site. Ultimately, this helps you design a website or app that aligns with their expectations.

                3 Types of Card Sorting: Open, Closed, and Hybrid

                There are three main types of card sorting methods. They all help you gain a deeper understanding of how users categorize information, but they have their own use cases. Choosing the right type comes down to the main objective of your study.

                Open Card Sorting

                In open card sorting, participants organize cards into categories and then label the groups themselves. Researchers do not provide any guidance. This is a generative research method in that it helps define the categories rather than evaluating existing ones.

                an illustrated example of open card sorting.
                Source

                Open card sorting is ideal for exploring how users naturally categorize information without any predefined structure. It helps you create new information architectures, find patterns in user expectations, and generate ideas for structuring and labeling your app.

                Closed Card Sorting

                In closed card sorting, participants sort cards into predefined categories provided by the researcher. This makes it an evaluative research method, meaning it’s useful when you want to test the effectiveness of an existing information structure, but it won’t tell you how users naturally categorize concepts.

                an illustrated example of closed card sorting.
                Source

                You can use closed card sorts to learn whether users understand your existing categories, identify misleading categories, and validate whether your information is presented the right way for your audience.

                Hybrid Card Sorting

                Hybrid card sorting combines elements of both open and closed card sorting. Participants can place cards into predefined categories or create new ones if they feel the existing options don’t fit. This method provides flexibility while still allowing you to test specific category structures.

Hybrid card sorts are great for validating an existing structure without closing the door on new ideas, which makes them a practical tool for improving a live website or app.

                Moderated vs. Unmoderated Card Sorting

                Moderated card sorting involves a facilitator who guides the participants through the exercise. The facilitator is present to provide instructions, answer questions, and probe deeper into participants’ thought processes.

                A moderator allows for real-time interaction and clarification. The facilitator can ask participants to explain their reasoning, which leads to richer insights. It’s useful when you need a deep understanding of the “why” behind users’ decisions. The downside is that it’s time-consuming, so lean teams or those with a “move fast” mindset don’t often opt for a moderated approach.

                In unmoderated card sorting, participants complete the exercise on their own, typically using an online tool. Participants receive instructions and then sort the cards at their own pace without supervision.

                An unmoderated card sort is more scalable and cost-effective. It’s convenient for participants, as they can complete the task at their own pace and on their own schedule. It also reduces the potential for facilitator bias.

                The lack of a facilitator means there’s no opportunity to ask follow-up questions or clarify participants’ reasoning. As a result, you might miss out on insights into why participants sorted the cards in a particular way. Furthermore, without guidance, participants might misunderstand the task or make errors that a moderator could have corrected.

                In-Person vs. Remote Card Sorting

                You can conduct card sorting in person or remotely. Both methods are effective, but the choice depends on project needs and participant availability.

In in-person card sorting, participants sort physical cards. This offers direct interaction, immediate feedback, and a tactile experience for the user. In some cases, people are better at categorization when they can physically move objects (note cards) around a table.

                Remote card sorting, however, is the more common card sorting technique. It provides flexibility, scalability, and access to a broader audience. Researchers can conduct more sessions with a digital tool when they don’t need to meet in person. Remote card sorts are also less expensive.

                An example of OptimalSort, a digital card sorting tool.

                When to Use Card Sorting

                Card sorting is useful throughout various stages of the website and app design process. It helps you understand how users naturally group information. By revealing users’ mental models, it informs decisions that enhance the user experience, making it easier for visitors to find what they’re looking for.

                This technique is particularly valuable when designing or redesigning navigation menus, categorizing products, or structuring complex information.

                Additionally, card sorting plays a role in conversion funnel analysis. It can help identify where users might get confused or drop off due to poorly structured information, which is crucial for optimizing the funnel and improving conversions.

For example, if you observe a significant drop-off at a navigation step, card sorting can help redesign that menu to reduce confusion and keep users progressing toward conversion.


                How to Do Card Sorting (Step-by-Step)

                By following these steps, you can effectively use card sorting to organize content to enhance user experience and support your broader goals, such as improving navigation, reducing bounce rates, and ultimately driving conversions.

                Step 1: Define Your Goals

                Your first step is to define what you hope to achieve with the card-sorting exercise. Are you trying to improve the navigation of a website? Categorize products? Understand user behavior? Understanding your goal will guide the rest of the process.

Step 2: Choose Open, Closed, or Hybrid Card Sorting

                Based on your goals, choose whether to conduct open or closed card sorting. Open sorting is great for exploration. Closed sorting is used to test predefined categories, and hybrid sorting combines both approaches.

                Step 3: Select Your Participants

It’s important to choose participants who represent your target users. They should have backgrounds, needs, and behaviors similar to those of your typical users.

                This group may include a mix of people who have used your product in the past and potential users who are seeing it for the first time. The goal is to ensure that the results are relevant to the people who will use the site or app.

                For qualitative insights, around 15-20 participants can provide meaningful patterns. For more robust statistical data, consider larger sample sizes.

                Step 4: Prepare Your Materials

                Write down each item, feature, or piece of content on individual cards. This can be done physically with actual cards or using online card sorting tools like Optimal Workshop or Miro. Label the cards clearly so all users can understand them.

UXtweak, an example of a digital card sorting tool showing that the cards and categories are clearly labeled.
                Give your cards and categories (if any) clear labels.

How many cards should you create? It depends on your needs, but 30 to 50 cards is about the sweet spot; any more risks user fatigue.

                If you’re doing a closed card sort, you’ll need to prepare the categories in advance. These categories should be mutually exclusive and cover all possible groups for the content. Label the cards as clearly as possible. You can use words, links, images, or even descriptions of pages or concepts.

                Avoid using identical words for different cards, as users tend to group them automatically. For example, cards named Honda Civic, Honda CRV, and Honda Pilot are too similar. Users will likely put them into a Honda category, even if there are better ways to categorize them.

                Step 5: Set Up the Sorting Exercise

                Before participants sort cards, you’ll need to issue some instructions.

                • Group the cards into whatever categories make the most sense to you.
                • The number and size of groups can vary.
                • You can rearrange or change your mind at any time.
• Set a card aside if you don’t know what it means.
                • There are no right or wrong answers.
                • Please think aloud and explain your thought process as you go.

                If the exercise is in-person, you’ll need to explain the task carefully, whether they are creating their own groups (open sort) or sorting into predefined categories (closed sort).

If the exercise is remote, you’ll need to write out the instructions as clearly as possible. Try to anticipate any questions participants might have and answer them preemptively.

                Step 6: Conduct the Card Sorting Exercise

                Give each participant a set of unsorted cards. Randomize their order so there isn’t any inherent grouping. Ask participants to sort cards into whatever groups they find appropriate.

                For instance, an ecommerce brand might give participants cards that say “T-shirts,” “hats,” “shoes,” “socks,” and “sweatpants.” A participant might put the “hats” card in a category called “Accessories.” Or they might put it in its own category.
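
If you’re scripting the setup yourself rather than relying on a tool, a per-participant shuffle is all the randomization takes. Here’s a minimal Python sketch using the example cards above; the seeding scheme is just one way to keep each ordering reproducible, not a requirement:

```python
import random

CARDS = ["T-shirts", "Hats", "Shoes", "Socks", "Sweatpants"]

def deck_for_participant(cards, participant_id):
    """Return an independently shuffled copy of the deck.

    Seeding with the participant ID keeps each ordering
    reproducible in your study records.
    """
    rng = random.Random(participant_id)
    deck = list(cards)  # copy so the master list stays intact
    rng.shuffle(deck)
    return deck

for participant_id in (1, 2, 3):
    print(participant_id, deck_for_participant(CARDS, participant_id))
```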

Give the participants as long as they need to sort the cards. If you’re running an open card sorting exercise, ask them to label each group with a unique name, but only once they’ve finished sorting. It’s important that they do this last. Naming a group early can anchor participants to a specific category name and limit how they sort the remaining cards.

                If your exercise is in-person, observe the participants as they sort the cards, but avoid interfering. Note any comments or questions they have, as these can provide valuable insight into their thought process.

                Most importantly, participants should be asked to verbalize their thought process as they sort the cards. This can reveal their reasoning and provide deeper insights into how they understand and organize the information.

                Step 7: Ask Follow-up Questions

                If you’re conducting the card sorting in person, take the opportunity to ask some follow-up questions. This is a great way to get inside their brains. Here are some questions you might ask, but feel free to come up with your own:

                • Did any items seem like they should appear in multiple groups?
                • Why did you choose those category labels? Did you consider other labels?
                • Were any items especially hard to place?
                • Why did you leave these items unsorted?

                Step 8: Analyze the Results

                After collecting the sorted data, look for patterns in how participants grouped the cards. In open sorting, focus on recurring themes and common groupings. In closed sorting, assess how well the predefined categories worked.

                If you’re using card sorting software, comb through its analytics for insights. A particularly useful tool is a cluster analysis, which helps identify groups of cards that participants frequently place together.

                Pay attention to any cards that were consistently placed in different categories by different users. These outliers may indicate unclear content or concepts that need further refinement.
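
If you want to see what that cluster analysis is doing under the hood (or run one yourself on exported data), here’s a minimal sketch in Python. The sorts are toy data; real tools follow the same logic: count how often each pair of cards is grouped together, turn that into a distance, and cluster.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

cards = ["T-shirts", "Hats", "Shoes", "Socks", "Sweatpants"]

# Each participant's sort is a list of groups (toy data).
sorts = [
    [["T-shirts", "Sweatpants"], ["Hats"], ["Shoes", "Socks"]],
    [["T-shirts", "Sweatpants", "Socks"], ["Hats", "Shoes"]],
    [["T-shirts", "Sweatpants"], ["Hats", "Shoes", "Socks"]],
]

idx = {card: i for i, card in enumerate(cards)}
together = np.zeros((len(cards), len(cards)))
for sort in sorts:
    for group in sort:
        for a in group:
            for b in group:
                together[idx[a], idx[b]] += 1

# Distance = share of participants who did NOT group the pair together.
distance = 1 - together / len(sorts)
np.fill_diagonal(distance, 0)

# Merge cards that most participants placed together (distance < 0.5).
labels = fcluster(linkage(squareform(distance), method="average"),
                  t=0.5, criterion="distance")
for card, label in sorted(zip(cards, labels), key=lambda x: x[1]):
    print(label, card)
```

Cards that land in the same cluster are strong candidates for a shared category; cards that never cluster cleanly are the outliers worth probing in follow-up questions.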

                Step 9: Implement and Test the Findings

                Based on the results, design or refine your website or app’s information architecture. Ensure that the structure reflects the natural groupings identified during the card sort.

                After implementing changes, it’s a good idea to conduct user testing to validate that the new structure works well in practice. This step helps you understand whether the restructured content improves the overall experience.

                Step 10: Schedule Future Testing

                Card sorting is not a one-time activity. Consider conducting additional card sorts as your content evolves to refine and adapt your information structure. The sites and apps that are best aligned with user needs go through cycles of continuous feedback and iteration.

                Card Sorting Examples

                Let’s look at three card sorting examples to help you understand their impact on the digital experience.

                Mattel

                The problem: Mattel wanted to review the information architecture of the Doll Showcase section of their Barbie collectors’ website. The Showcase was struggling with declining visits, likely due to the marketing-based organization of the site and layers of sub-navigation.

                The goal: Learn more about the mental models that collectors use to explore the site and then reorganize the section’s architecture to match.

                The card sort: Mattel ran two card sorting methods: 1) An open sort of cards from a fashion section of the Showcase, and 2) A closed sort of cards that represented the existing architecture.

                The results: They learned that collectors don’t have a universal mental model. They browse in their own ways, so a single search method isn’t sufficient. They ultimately created a faceted navigation system that allowed searching by doll name, serial numbers, themes, and release year. As a result, the use of Showcase increased dramatically.

                Screenshots from Mattel's Barbie Collector website after using card sorting to reorganize the site architecture.
                The Barbie Collector website

                The Telegraph

                The Telegraph is one of our clients. We worked with them to help optimize their subscriber experience.

                The problem: The Telegraph was struggling to convert readers into paying subscribers. Their varied audience means one-size-fits-all solutions aren’t effective. Plus, A/B testing on a live site of their size is expensive.

                The goal: Improve the paywall experience (the “subscribe to read” call-to-action). They also wanted to improve the user experience for readers to help them find the content they need.

                The card sort: We used card sorting in conjunction with other rapid testing tools. This allowed us to identify opportunities for live tests that had a high level of success.

                The results: We helped The Telegraph increase paywall conversions, keep those new subscribers, and improve content exploration throughout the site. Most importantly, we shifted their internal mindset by showing them how to reach customers to get feedback without disrupting their usual operations.

                Singapore Polytechnic

                The problem: Singapore Polytechnic’s website content was scattered and chaotic. The site included confusing layers of sub-navigation, and the university’s schools all used different branding. As a result, content was hard to find.

                The goal: Understand how users group information and learn where they expect to find content, then build an information architecture that provides it.

                The card sort: The optimization team used a digital card sorting tool to conduct an open card sorting study. Participants were asked to sort cards of the existing content into whatever categories they saw fit. The data was later narrowed down using tree testing.

                The results: Researchers identified three major content groups: student life, courses (grouped by topic), and admissions and financial matters. This formed the foundation of the architecture for Singapore Polytechnic’s new site.

                Singapore Polytechnic's site architecture after they used the card sorting technique as part of their redesign efforts.
                Singapore Polytechnic’s site architecture after card sorting. Source.

                Frequently Asked Questions About Card Sorting

                Here are some common questions people ask about card sorting that might help you understand the concept better.

                Is Card Sorting Qualitative or Quantitative?

                Card sorting can be both qualitative and quantitative. Qualitative insights come from understanding why participants group items a certain way, while quantitative data is gathered by analyzing patterns and frequencies in groupings across many participants.

                How Long Does a Card Sorting Session Take?

A card sorting session typically takes 15 to 45 minutes, depending on the number of cards and the complexity of the task. For the best responses from your participants, keep the exercise as short as possible.

                Are There Any Disadvantages to Card Sorting?

                Disadvantages include potential participant bias, especially in moderated sessions, and the time-consuming nature of analyzing qualitative data. Additionally, the results may not always translate directly into an effective information architecture.

                What is Card Sorting in UX?

                Card sorting in UX is a research technique where users organize content into groups, revealing their mental models. This helps designers create intuitive navigation structures and improve the overall user experience by aligning the website or app layout with user expectations.

                What is Card Sorting in Project Management?

                In project management, card sorting is used to prioritize tasks, organize project elements, or brainstorm ideas. It’s a visual way for teams to identify patterns, set priorities, and ensure everyone has a shared understanding of the project’s structure.

                What is Card Sorting in Website Design?

                Card sorting in website design involves users grouping content or features to reveal their preferred organization. This process guides the design of navigation menus, category names, and information architecture. The goal is to make a website that meets user expectations and is, therefore, easy to navigate.

                Card Sorting and Your Broader Toolkit

                Card sorting is a valuable tool in your broader UX and optimization toolkit, but it’s most effective when used in conjunction with other methods. It’s one tool of our 5 Factors framework, a set of competencies that set high-performance teams apart.

                By integrating card sorting into this broader toolkit, you can ensure that your site not only meets user expectations but also performs well in areas that matter most, such as navigation, content findability, and overall user experience.

                But card sorting is just the beginning. To truly optimize your digital product, it’s critical that you build an optimization program that includes all of the right qualities. Evaluate how your optimization efforts stack up against the top performers by taking this short 5-Factors assessment. It will show you where to invest your resources to improve your overall digital experience.

                Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

                The post What is Card Sorting? appeared first on The Good.

                How to Convert Free Trial Users to Paying Customers https://thegood.com/insights/how-to-convert-free-trial-users-to-paying-customers/ Tue, 04 Jun 2024 16:01:00 +0000 https://thegood.com/?post_type=insights&p=108709 Here’s a secret about free trials that most SaaS organizations miss: No one signs up for a free trial to learn more about your product. They sign up to learn how your product benefits them. Truthfully, most people couldn’t care less if you offer one feature or 50. They just want the product to solve […]

                Here’s a secret about free trials that most SaaS organizations miss: No one signs up for a free trial to learn more about your product. They sign up to learn how your product benefits them.

                Truthfully, most people couldn’t care less if you offer one feature or 50. They just want the product to solve their problems and make their life easier.

                So, if you want to convert free trial users, your task is to show them how your product meets their needs. If you can’t help them achieve a desired outcome, they simply won’t buy.

                In this article, we’re going to discuss how to convert free trial users to paying customers. We’ll talk about your free-to-paid conversion rate and offer some strategies to boost conversions.

                What is a Free-to-Paid Conversion Rate?

                Free-to-Paid Conversion Rate measures the percentage of users who transition from using a free version of a product or service to a paid version.

                If you want users to upgrade to paid tiers of your product, this is an important metric to track.

                It’s also important for freemium models, where users can access some features for free and are encouraged to pay for premium features.

                Here’s the formula to calculate a Free-to-Paid Conversion Rate:

                Free-to-Paid Conversion Rate = (number of users who convert to paid / total number of free users) X 100%

                For example, if a SaaS company has 10,000 free users and 500 of them upgrade to the paid version, the Free-to-Paid Conversion Rate would be:

                (500/10,000) X 100% = 5%

                As you would imagine, you’ll want to push this number as high as possible, as more conversions to paid accounts mean more revenue.
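
The calculation is simple enough to keep in a shared metrics script so everyone computes it the same way. A minimal Python version, reproducing the example above:

```python
def free_to_paid_rate(paid_conversions, total_free_users):
    """Free-to-Paid Conversion Rate, as a percentage."""
    if total_free_users == 0:
        raise ValueError("Need at least one free user.")
    return paid_conversions / total_free_users * 100

print(free_to_paid_rate(500, 10_000))  # 5.0
```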

                What We Mean by “Free Trial”

                Before we discuss converting free trial users, let’s clarify the different free trial strategies.

                Freemium Model

                Freemium is a two-tiered model with a free tier and a premium plan. The free tier usually grants perpetual access to a restricted version of the product, either by limiting the accessible features (e.g., four of six features available) or placing caps on features (e.g., a limit of 20 downloads per month).

                Freemium product users can upgrade to a paid version to access the full features. In some cases, freemium users are charged a la carte for product usage.

                Reverse Trial

                In a reverse trial, a time-based approach coined by Elena Verna, Head of Growth at Dropbox, users start with full access to all features for a limited time during a trial phase. Then they get moved to a freemium plan with limited product features.

                With this system, they get the product’s maximum value from the beginning of the trial experience. If they want to regain access to full features, they need to purchase the paid plan.

                Trial With Payment

In a trial with payment, users are required to provide payment information upfront (a credit card) to gain full access to the product for a limited period of time. The trial is free, but on a specific date they will be charged to continue using the full suite of product features.

                Which Model is Right for You?

                Naturally, it depends on your product and potential customers. There is a ton of nuance in understanding and building your product strategy.

                Two helpful tools to leverage when exploring the right fit for you and your users are the ROPES framework and verb scoring.

The ROPES framework for product-led growth.

                Once you know what your customers expect and need, you can choose the trial offering that matches their journey.

                How Do I Convert Free Users to Paid Users?

                Converting free trial users to paid users is about demonstrating your product’s value. You can do this by strategically placing messaging throughout your site and/or app.

                Keep in mind that your free trial signups already know the product is good. That’s why they signed up in the first place. Your job is to convince them that the value they’ll get from the product is worth the price.

                Basically, you need them to conduct a cost-benefit analysis of your product and decide that your product comes out on top. Highlighting benefits, offering social proof, giving product tours, and boosting user engagement are just some of the techniques brands use to increase activation rates.

                You can’t invite this kind of thinking unless you know your customers well. Exceedingly well. Only once you know what triggers them to buy can you build a user experience that entices them to convert.

                9 Free Trial Conversion Strategies

                Let’s walk through some powerful free-to-paid conversion strategies. Use some or all of these to turn free trial users into paying customers. As always, experiment and test to find the techniques that produce the best results.


                1. Remind Users to Upgrade Early and Often

Many SaaS organizations make the mistake of waiting until the end of the free trial to prompt users to upgrade, often by direct outreach from the sales team. However, this approach wastes numerous earlier opportunities that could lead to a conversion.

                Start prompting users to upgrade from the beginning of their experience and remind them that they aren’t getting the full feature suite. Do this often using CTAs, in-app notifications, tooltips, popup overlays, onboarding videos, support messages, and marketing emails.

                Dropbox isn’t shy about prompting non-paying users to upgrade on each page of the dashboard.

                Dropbox uses a header row in the dashboard that continually prompts users to upgrade.

                2. Drive Users to Value Quickly

                As we’ve mentioned a few times, people use your product to benefit themselves. If you want them to think it’s worth the cost, help them achieve that value quickly.

                First, you have to understand what that “moment of value” is for your customers. This requires robust knowledge of your customers’ problems and needs. The moment they see value in your product may not be as intuitive as you think. For example, you might think the most valuable feature is Download when really it is Share or Cloud Storage.

                Next, drive your users toward that valuable moment by nudging user behavior. Checklists and progress bars are great tools here. Give your users a concise set of steps to follow that culminates in the moment of value.

                Trello uses an onboarding checklist to encourage users to set up quickly.

When a user begins a checklist, psychological motivators like the sunk cost effect and the endowed progress effect encourage them to complete it.

                Throughout the checklist, you walk free trialists through any tasks necessary to get to the moment of value, like adding contacts, filling out their profile, integrating other apps, etc. At the end, the user should achieve something that makes them think, “Oh, this product actually solves my problem.”

                Evernote's onboarding checklist prioritizes actions that help the user achieve value with the product.

                3. Present Gated Features Near Free Features

                By strategically placing prompts or highlights for premium features adjacent to free ones, you create an opportunity for trial users to envision how the paid features can enhance their experience.

                For example, PDF Converter lets you convert PDF files into other formats for free. However, the premium feature (a higher print quality) is positioned nearby.

PDF Converter positions its gated print-quality upgrade next to the free conversion feature.

                This ensures that users are consistently reminded of what they are missing, sparks curiosity, and demonstrates the tangible benefits of upgrading.

Free features displayed alongside gated ones.

                Using visual cues, such as icons, badges, or color contrasts, can further draw attention to these gated features. For example, a lock icon next to an advanced tool can indicate that it is part of the paid tier, prompting users to consider the upgrade.

                4. Make Your Calls-to-Action Clear and Consistent

                A call-to-action (CTA) is a quintessential marketing tool. Clear and consistent CTAs placed throughout your user journey are great ways to guide users toward the next step. In this case, the next step is a paid tier of your product.

                Your CTAs should be direct and easy to understand. There’s no room for ambiguity here. Users should immediately understand what will happen when they click that button.

                Avoid vague or overly complex language. Use straightforward phrases like “Upgrade Now,” “Unlock Premium Fonts,” or “Start Your Subscription.”

                Place your CTAs in locations where users are most likely to engage with them, such as:

                • Onboarding screens
                • Dashboards and home screens
                • Email campaigns
                • Global header
                • In-product

                Maintain a consistent design for your CTAs so they are recognizable across your platform. Use uniform colors, fonts, and button styles.

                It’s also helpful to accompany each CTA with some brief text that highlights the benefit. For instance, “Upgrade Now to Access Advanced Analytics” or “Unlock Unlimited Storage.” This helps remind users that the upgrade is worth their investment.

Canva uses a great call-to-action. It describes the benefits, shows what users get, and reminds them that they can cancel at any time. Canva uses upgrade buttons of similar design elsewhere in the app.

Canva’s “Try Canva Pro for free” call-to-action.

                5. Be Thoughtful About Which Features are Gated

                Generally, you want to give away enough value with the free version of your product to build a solid user base. This will help users make the connection that the paid version offers even more value.

                Offer free features that make users reliant on the product. You want them to build it into their personal and professional workflow.

                For instance, if your product involves storing users’ files, give away some storage space for free to bring them into the product, then charge for additional storage. They will be more likely to purchase your storage because their files are already there rather than switch to a new provider.

                Canva is a notable example of this. Creating documents is free, but exporting them into certain formats is gated behind a paywall. Which formats are gated? The ones associated with experts or business users.

Canva Pro’s gated export formats.

                6. Make Free Users Aware of Their Trial Time

                Keeping your users aware of their remaining time can create a sense of urgency and encourage them to consider transitioning to a paid tier before the trial expires.

                Clear and Frequent Reminders

                Send regular emails or in-app notifications to inform users about their trial status. These reminders should start as soon as they begin the trial and become more frequent as the trial period nears its end.

                Chipmunk keeps free users informed about their trial time limit, including an easy-to-understand visual indicator.
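
What “regular, escalating” looks like is a product decision, but the logic is simple to encode. Here’s a hypothetical Python sketch; the 14-day trial length and the weekly-then-daily cadence are illustrative assumptions, not a prescription:

```python
from datetime import date, timedelta

TRIAL_LENGTH_DAYS = 14  # assumed trial length

def should_remind(trial_start, today):
    """Weekly reminders at first, daily over the final three days."""
    days_elapsed = (today - trial_start).days
    days_left = TRIAL_LENGTH_DAYS - days_elapsed
    if days_left <= 0:
        return False  # trial is over; hand off to win-back messaging
    if days_left <= 3:
        return True   # daily reminders as the deadline approaches
    return days_elapsed % 7 == 0  # weekly reminders otherwise

start = date(2024, 6, 1)
for offset in range(TRIAL_LENGTH_DAYS):
    day = start + timedelta(days=offset)
    if should_remind(start, day):
        print(day, "-> send trial reminder")
```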

                Countdown Timers

                Incorporate countdown timers within your app or on your website. These visual cues serve as constant reminders of the trial period’s ticking clock, subtly urging users to make a decision.

                Slack's countdown timer is always present within the app.

                Highlight Benefits

                Each reminder should not only inform users about the time left but also emphasize the value and benefits of the paid version. Use these touchpoints to showcase features they haven’t explored yet or to highlight how the paid tier can solve specific pain points they’ve experienced.

Duolingo’s reminders highlight the benefits of upgrading.

                Offer Limited-Time Discounts

                As the trial period comes to a close, consider offering a limited-time discount for upgrading. This tactic leverages the sense of urgency created by the trial countdown and adds an additional incentive to convert.

                Storyist offers a 50% discount for upgrading before the trial ends.

                That said, we don’t always recommend discounting your product. It can be useful to get someone in the door, but it can also devalue your product and brand. Be very careful with discounts.

                7. Offer a Great Onboarding Experience

                The onboarding process is the first impression users have of your product or service. A positive experience can significantly influence their decision to upgrade.

                Provide a clear and concise step-by-step guide to help users navigate your product. Use tooltips, interactive tutorials, or walkthroughs to highlight key features and demonstrate how to use them effectively.

                Userpilot walks users through a product tour so there's no confusion.

                Emphasize the unique features and benefits of the paid version. Show users how these features can solve their problems or enhance their experience.

                It’s also smart to help users achieve quick success to boost their confidence and satisfaction with your product. These early wins can be as simple as completing a task, setting up their profile, or customizing their dashboard.

                8. Use Paywalls to Demonstrate Paid Features

                Paywalls demonstrate the value and benefits of premium features, which entices users to upgrade to unlock full access. When designed thoughtfully, they can drive conversions without causing frustration.

Vogue’s paywall.

                Place your paywalls strategically at points where users are likely to see the value of upgrading. These can include:

                • Feature usage: When a user attempts to access a feature that is only available in the paid tier, present a paywall that explains the benefits of that feature. For example, if your product is a photo editing tool, a paywall might appear when a user tries to use advanced filters or high-resolution exports.
                • Content access: Content-based platforms, such as news websites or educational sites, use paywalls to restrict access to premium articles, videos, or courses. Clearly communicate the added value of the premium content to encourage users to upgrade.
• Usage limits: Implement usage-based paywalls where users can access basic features for free but encounter limits on their usage. For example, a project management tool might allow a certain number of projects or tasks in the free version, with a paywall prompting an upgrade to manage more (see the sketch after this list).
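
To make the usage-limits pattern concrete, here’s a hypothetical Python sketch of the gating check. The three-project cap, plan names, and upgrade copy are all assumptions for illustration:

```python
FREE_PROJECT_LIMIT = 3  # assumed free-tier cap

def can_create_project(plan, project_count):
    """Return (allowed, upgrade_prompt) for a project-creation attempt."""
    if plan == "paid" or project_count < FREE_PROJECT_LIMIT:
        return True, None
    return False, (
        f"You've reached the {FREE_PROJECT_LIMIT}-project limit on the "
        "free plan. Upgrade Now to manage unlimited projects."
    )

print(can_create_project("free", 3))
# (False, "You've reached the 3-project limit on the free plan. ...")
```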

                Each paywall should clearly articulate the benefits of upgrading to the paid version. Use persuasive messaging to highlight key value propositions, such as enhanced features, better performance, exclusive content, etc.

                The Wall Street Journal's paywall includes clearly articulated lists of benefits.

                9. Clearly Label Your Paid Features

                Transparent and distinct labeling helps users understand what they are missing out on and how the paid version can enhance their experience. This fosters a sense of curiosity and desire.

                Ensure that the paid features are visibly differentiated from free ones. Use consistent visual cues such as icons, badges, or color schemes to indicate premium features.

                MailChimp places an impossible-to-miss call-out on features that could be better if the user upgrades.

                For example, you might use a lock icon or a different color for buttons and menus that lead to paid features. This visual differentiation helps users easily identify what they can unlock by upgrading.

                Use in-context prompts to highlight paid features during the user’s interaction with your product. For example, if a user is using a basic editing tool, a prompt might suggest, “Upgrade to access advanced editing options,” along with a brief description of the additional tools they would receive.

                Improve Your Free-to-Paid Conversion Strategy with The Right Disciplines

                While you may handle some strategies internally, improving your free-to-paid conversion rate requires a specific skill set and multiple disciplines. The Good’s Digital Experience Optimization Program™ offers a comprehensive solution tailored for SaaS, ecommerce, and product marketing teams.

                Clients like Adobe and The Telegraph have praised The Good for our ability to validate hypotheses, drive engagement, and achieve substantial growth.

                How it works: We conduct a full funnel analysis of your digital product using heatmap analysis, session recordings, and usability testing. Then, based on those insights, we build a custom program that includes road mapping, experimentation, and customer journey mapping.

                Ready to see how your strategy can be optimized? Schedule an introductory call and unlock your brand’s full potential.

                Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

                The post How to Convert Free Trial Users to Paying Customers appeared first on The Good.

                What Is Peeking And How Do I Avoid It? https://thegood.com/insights/what-is-peeking/ Mon, 22 Jan 2024 14:55:41 +0000 https://thegood.com/?post_type=insights&p=106779 Have you ever sacrificed the purity of a test to make a quicker decision? Maybe you’ve taken a look at the data before you reached your prescribed sample size and felt the test was already giving obvious results in one direction or the other. Almost every product owner running tests faces this dilemma. And trust me, […]

                Have you ever sacrificed the purity of a test to make a quicker decision?

                Maybe you’ve taken a look at the data before you reached your prescribed sample size and felt the test was already giving obvious results in one direction or the other.

                Almost every product owner running tests faces this dilemma. And trust me, I get it.

The pursuit of results at an adequate sample size takes patience, and there are plenty of reasons to rush to the finish line: As you wait for a test to run, you may be prolonging a negative experience on your site. Or you may be losing out on presenting a positive experience to all your visitors. You also could be struggling to hold off stakeholders who want a quick decision and implementation.

                But it’s crucial to let your tests run for their pre-established amount of time. If you don’t, you risk “peeking,” which increases error rates, causes False Positives, and invalidates results.

                What is peeking?

                Peeking is the act of looking at your A/B test results with the intent to take action before the test is complete.

                Because most experiments have a 70% chance of looking “significant” before they are truly done collecting sufficient data, peeking at test results too early can introduce bias and potentially alter the course of decision-making based on incomplete information. Waiting until the test is complete allows for a more accurate assessment of statistical significance.

A graph illustrating how test results can look “significant” before a test is complete.

                Think about it like this. If you flip a coin twice and get heads both times, you could incorrectly assume that the coin will land on heads 100% of the time.

However, if you flipped it enough times to reach a sufficient sample size, you’d get closer to the true rate of 50%.

                Testing is the same. If you don’t allow enough time for an experiment to run and are constantly peeking at the results to make faster decisions, you are likely to make incorrect assumptions.
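
You can see the damage peeking does with a quick simulation. The sketch below runs A/A tests (both arms share the same true conversion rate, so any “significant” result is a false positive) and compares checking daily against checking once at the end. The traffic numbers are illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
TRUE_RATE = 0.05      # both arms convert at 5%: an A/A test
DAILY_VISITORS = 500  # per arm, per day
DAYS, ALPHA, RUNS = 28, 0.05, 2000
Z_CRIT = norm.ppf(1 - ALPHA / 2)

def significant(conv_a, conv_b, n):
    """Two-sided, two-proportion z-test at equal sample sizes."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = np.sqrt(pooled * (1 - pooled) * 2 / n)
    return se > 0 and abs(conv_b - conv_a) / n / se > Z_CRIT

peek_fp = final_fp = 0
for _ in range(RUNS):
    a = rng.binomial(DAILY_VISITORS, TRUE_RATE, DAYS).cumsum()
    b = rng.binomial(DAILY_VISITORS, TRUE_RATE, DAYS).cumsum()
    n = DAILY_VISITORS * np.arange(1, DAYS + 1)
    looks = [significant(a[d], b[d], n[d]) for d in range(DAYS)]
    peek_fp += any(looks)   # acting on the first "significant" peek
    final_fp += looks[-1]   # one pre-planned look at the end

print(f"Peeking daily:  {peek_fp / RUNS:.0%} false positives")
print(f"One final look: {final_fp / RUNS:.0%} false positives")
```

With daily looks, the false positive rate typically lands several times higher than the 5% you planned for; with a single pre-planned look, it stays near 5%.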

Plenty of mistakes can be made when running a test, but peeking is trickier to avoid than the rest. It tempts even the most experienced experimentation practitioners.

                So, let’s talk about it and how you can avoid getting caught in its crosshairs.

                How To Avoid Peeking: Set Clear Minimum Standards Before You Run A Test & Stick To Them

                Before running a test, clearly define minimum standards to ensure the results are valid. These are the same standards to use when interpreting the results of a test.

                Pre-determine your Significance Level

                The general rule is to let your tests reach 90+% statistical significance, but the exact number can vary slightly depending on your team’s risk tolerance.

                NASA scientists likely need a 99.999999% statistical significance before feeling sure of a decision, while an ecommerce site owner might only need 85% statistical significance to feel confident in their decisions.

                Establishing the significance level helps your team come to a consensus about the level of error or False Positives you’re willing to accept in your test.


                Achieve Appropriate Sample Size

                Set a goal sample size that is representative of your audience and large enough to account for variability. It’s necessary to calculate your sample size before the test to determine how long to run a test to achieve rapid but reliable results.
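
A standard two-proportion power calculation is enough for a pre-test estimate. Here’s a minimal Python sketch; the 3% baseline and 10% relative lift are placeholder inputs you’d swap for your own numbers:

```python
from scipy.stats import norm

def visitors_per_variant(baseline, relative_lift, alpha=0.10, power=0.80):
    """Approximate sample size per variant for a two-sided
    two-proportion z-test (alpha=0.10 matches 90% significance)."""
    variant = baseline * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    variance = baseline * (1 - baseline) + variant * (1 - variant)
    return int((z_alpha + z_power) ** 2 * variance
               / (variant - baseline) ** 2) + 1

# e.g., detecting a 10% relative lift on a 3% baseline conversion rate
print(visitors_per_variant(0.03, 0.10))  # roughly 42,000 per variant
```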

                If the test is stopped before it reaches a significant number of visitors, the results may not be valid. That’s because when the number of sessions or conversions is low, there is a high likelihood that changes will be observed by chance.

                As the test collects more data, the conversion rates converge toward their true long-term values. This is known as “regression to the mean.” We often see a false positive on the first day of running an A/B test, and we expect those changes to regress to the mean or “normalize” over time.

                For example, we might see a novelty effect of existing users who are already familiar with your site who are reacting positively to the changes made in your experiment. That would result in a false positive that would normalize as users get used to the change. That’s why seeing 90+% statistical significance isn’t a stopping rule alone.

                Set a Minimum Test Duration

                Letting your test run for a pre-allotted amount of time is key to avoiding the pitfalls of peeking.

                We suggest a minimum of two weeks to account for two full business cycles. This leaves room for any unexpected variables (maybe your competitor is running a sale that week, which lowers your traffic volume, or there is a federal holiday, so fewer people are online shopping).

One important factor is developing a good understanding of the performance range of any test, and that range narrows as more data comes in. Testing tools may show statistical significance even with a small sample size, but even those tools will recommend running tests for at least two weeks.

Like I said, anything before that is peeking, which can leave you with false positives.

                For a test duration cap, everyone is different. As our Director of CRO and UX Strategy, Natalie Thomas, says:

                “Every team has a different tolerance for test duration. I know teams that will let a test run for six months and others that only want to prioritize initiatives that will see significance in two weeks. Having this litmus just assures folks are talking about their tolerance up-front.”

                – Natalie Thomas

                Set a Minimum Number of Conversions

Set a goal for the number of conversions or actions that will give you a large enough sample for your audience, so you can know whether the test was a winner.

This will vary based on the primary goal you’re focused on (ecommerce transactions versus inquiries, for example), so you’ll need to know the average number of conversions you get in a week.

                Look for Alignment with Secondary Goals 

Part of the test setup process is defining a primary goal (for us, typically transactions or conversion rate) that will determine whether a test is ‘successful.’ Secondary goals, such as adds to cart or visits to the next stage in the funnel, will provide more insight into behavior.

                If you’re at a point where you’re trying to analyze results, and these are not aligned (e.g., conversion rate is up, adds to cart are down, visits to product pages are unaffected), it could mean you don’t have enough data yet to tell the whole story.
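
One way to keep the team honest is to write these standards down as data before launch and check them together; significance alone never ends the test. A minimal sketch, with illustrative thresholds:

```python
from dataclasses import dataclass

@dataclass
class TestPlan:
    """Minimum standards, pre-registered before the test starts."""
    significance_level: float = 0.90  # your team's agreed threshold
    min_days: int = 14                # two full business cycles
    min_conversions: int = 300        # set from your weekly averages
    min_sample_size: int = 42_000     # from your power calculation

def ready_to_call(plan, days_run, conversions, sample_size, significance):
    """Callable only when EVERY standard is met, not just significance."""
    return (days_run >= plan.min_days
            and conversions >= plan.min_conversions
            and sample_size >= plan.min_sample_size
            and significance >= plan.significance_level)

plan = TestPlan()
print(ready_to_call(plan, days_run=9, conversions=350,
                    sample_size=45_000, significance=0.93))  # False: too soon
```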

                A/B Testing Like The Experts

                In the same way opening the oven before a cake is fully baked can impact the cake’s final consistency, prematurely analyzing test results can lead to skewed outcomes.

                When testing on a website or app, the goal is to gather user-centered evidence that helps you make a decision.

                You’re looking for a signal, not the final answer.

                But to be confident in that signal, you need to set your tests up with pre-established standards. It helps your team align on when to end the test and makes sure you avoid the trickiest pitfall of experimentation: peeking.


                The post What Is Peeking And How Do I Avoid It? appeared first on The Good.
