Optimization Strategy Articles - The Good https://thegood.com/insight-category/optimization-strategy/ Optimizing Digital Experiences Fri, 05 Dec 2025 20:57:53 +0000 How Do You Reduce Cancellations During SaaS Free Trials? https://thegood.com/insights/trial-optimization/ Fri, 05 Dec 2025 20:57:52 +0000

The post How Do You Reduce Cancellations During SaaS Free Trials? appeared first on The Good.

Leaders often assume users cancel because the product isn’t good enough. The reality is more nuanced. Users rarely cancel because your product lacks value. They cancel because they didn’t experience that value quickly enough, clearly enough, or in a way that made sense for their specific needs.

The stakes are high. According to recent industry data, the average SaaS free trial converts less than 25% of users to paying customers. That means more than three-quarters of your trial users walk away without ever becoming customers.

But the good news is that trial cancellations aren’t random. They follow patterns. Users drop off at predictable moments in their journey and struggle with the same features or tasks. Once you identify these patterns, you can systematically address them through trial optimization.

Understanding why trial optimization matters for reducing cancellations

Before diving into how to reduce cancellations, let’s be clear about what we mean by trial optimization and why it deserves your attention.

Trial optimization is the systematic process of improving every touchpoint in your free trial or freemium experience to increase the likelihood that users will see value, engage consistently, and ultimately convert to paying customers. It’s not about manipulation or dark patterns. It’s about removing unnecessary friction, clarifying value, and helping users succeed with your product.

The impact of effective trial optimization extends beyond conversion rates. When you optimize the trial experience, you also reduce customer acquisition costs, improve customer lifetime value, and build a stronger foundation for retention.

Understanding your specific trial model is the first step toward optimization. Different trial structures create different challenges and opportunities.

What is a freemium model?

The freemium model offers perpetual access to a restricted version of your product, either by limiting features or placing caps on usage. Think Spotify’s free tier with ads, or Canva’s basic design tools. The challenge with freemium is that users can stay indefinitely without converting. Your optimization goal is building reliance while strategically gating features that create urgency to upgrade.

What is a reverse trial?

In a reverse trial, users start with full access to all features for a limited time, then get moved to a freemium plan with limited capabilities. This approach, coined by growth leader Elena Verna, prioritizes maximum value upfront. Users experience everything your product can do, making the subsequent feature restrictions feel more pronounced. Trial optimization here focuses on ensuring users activate on premium features during that full-access window.

What is trial with payment?

This model requires payment information up front for full product access during a limited period. Users are charged automatically after the trial unless they cancel. The friction of providing credit card details means fewer signups but typically higher conversion rates, with opt-out trials converting at 49-60% compared to opt-in trials at 18-25%. Optimization here balances making signup worthwhile despite the friction while ensuring the experience justifies the automatic charge.

Five steps to audit and optimize your trial experience

Trial optimization looks different in each of these trial models, but one thing is true across the board: reducing cancellations requires a systematic approach.

You can’t fix what you don’t measure, and you can’t optimize what you don’t understand.

Here is a summary of the five-step framework for auditing your trial experience. For a detailed walkthrough, including specific templates and decision trees, see our article on auditing free user experiences.

Step 1: Identify drop-off points with data analysis

Examine your product analytics to pinpoint exactly where users abandon their trial journey.

  • Track activation drop-offs in your onboarding flow
  • Monitor which features users engage with versus ignore
  • Calculate time-to-value and compare against churn timing
  • Segment data by acquisition channel, trial type, and user cohort
  • Layer in session recordings to see what users actually do before leaving
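As a rough sketch of what this analysis looks like in practice, drop-off identification can start as simply as comparing user counts between consecutive funnel steps. The step names and counts below are hypothetical, not data from any real product:

```python
# Hypothetical funnel: step names and user counts are illustrative only.
funnel = [
    ("signed_up", 1000),
    ("completed_onboarding", 620),
    ("reached_first_value", 410),
    ("converted_to_paid", 95),
]

def drop_off_report(steps):
    """Return (step, share of users lost since the previous step) pairs."""
    return [
        (name, round(1 - n / prev_n, 3))
        for (_, prev_n), (name, n) in zip(steps, steps[1:])
    ]

for step, rate in drop_off_report(funnel):
    print(f"{step}: {rate:.1%} dropped off before this step")
```

The biggest percentage drop (here, the final conversion step) is usually where session recordings and interviews should focus first.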

Step 2: Conduct user interviews to understand the “why”

Numbers show where users leave. Conversations reveal why.

  • Interview 10-15 users, split between active trial users and those who churned
  • Ask what value they found, what confused them, and what would make them pay
  • Listen for the exact language they use to describe their experience
  • Note any competitors or alternatives they mention for market context

Step 3: Benchmark your experience against market standards

Your users compare you to every tool they’ve used. Conduct competitive analysis to gauge where you fall in the market.

  • Document how competitors structure their trial experiences
  • Screenshot monetization touchpoints, upgrade prompts, and limit notifications
  • Study products your users mention in interviews, even if indirect competitors
  • Identify where your experience creates more or less friction than market norms

Step 4: Map user actions with verb scoring

Break down every meaningful action in your product and score the friction required by running a verb scoring exercise.

  • List discrete actions users can take (create, share, export, invite, etc.)
  • Assign each a verb score from Anonymous to Gated
  • Look for inconsistencies in how similar actions are gated
  • Identify if you’re giving away too much or asking too soon
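A verb scoring exercise can be captured in a simple table and checked programmatically for inconsistencies. The tier names, scores, and actions below are illustrative assumptions; the original framework scores from Anonymous to Gated:

```python
# Hypothetical verb-scoring sketch. Lower score = less friction required
# before a user can perform the action. Tiers and actions are assumptions.
TIERS = {"anonymous": 0, "free_account": 1, "trial": 2, "gated": 3}

actions = {
    "view_demo": "anonymous",
    "create_project": "free_account",
    "share_project": "free_account",
    "export_csv": "trial",
    "export_pdf": "gated",  # similar to export_csv, but gated differently
}

def find_inconsistencies(actions, similar_groups):
    """Flag groups of similar actions that sit at different friction tiers."""
    flagged = []
    for group in similar_groups:
        tiers = {actions[a] for a in group if a in actions}
        if len(tiers) > 1:
            flagged.append((group, sorted(tiers, key=TIERS.get)))
    return flagged

print(find_inconsistencies(actions, [["export_pdf", "export_csv"]]))
```

Any group the check flags (here, two export actions gated at different tiers) is a candidate for the "inconsistencies in how similar actions are gated" review above.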

Step 5: Connect insights to create an optimization roadmap

Synthesize your findings to prioritize what to fix first.

  • Friction without reason: unnecessary barriers compared to competitors
  • Value leaks: popular free features that don’t drive conversion
  • Invisible gates: paywalls users hit without understanding why
  • Poorly timed friction: asking users to pay before they’ve seen value

Prioritize optimizations by impact (users affected), confidence (data supports it), effort (time to implement), and market alignment (are you an outlier).
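The four prioritization factors above can be combined into a simple score. The formula and 1-5 scales below are one reasonable sketch, not a prescribed method:

```python
# Hypothetical prioritization sketch scoring each opportunity on the four
# factors above. 1-5 scales and example opportunities are assumptions.
def priority_score(impact, confidence, effort, market_alignment):
    """Higher impact/confidence/alignment raise the score; effort lowers it."""
    return (impact * confidence * market_alignment) / effort

opportunities = {
    "simplify_signup": priority_score(impact=5, confidence=4, effort=2, market_alignment=4),
    "add_progress_bar": priority_score(impact=3, confidence=3, effort=1, market_alignment=3),
}

ranked = sorted(opportunities, key=opportunities.get, reverse=True)
print(ranked)
```

The exact weighting matters less than applying the same scoring consistently, so the roadmap reflects evidence rather than whoever argues loudest.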

Six strategies for reducing trial cancellations

Once you’ve audited your trial experience and identified optimization opportunities, you will have a clear roadmap for addressing issues.

Plenty of strategies might arise in your research. Here are a few themes we see often.

Accelerate time-to-first-value

The faster users experience value, the less likely they are to cancel. Industry benchmarks suggest that users should reach their first “aha moment” within 48 hours of signup.

Design your onboarding to guide users directly toward the action that delivers value. Use progress bars and checklists to create clear paths forward.

Remove any friction between signup and first value. If users need to integrate other tools, fill out profiles, or configure settings before experiencing core benefits, you’re creating opportunities for abandonment. Save non-essential setup for after users have seen value.

Provide personalized onboarding experiences

Companies using personalized experiences see conversion rates improve by up to 67%. Generic onboarding treats all users the same, but different user segments have different needs, different technical sophistication, and different use cases.

Segment users based on their role, company size, or stated goals during signup. A solo entrepreneur using your project management tool has different needs than a project manager at a 100-person company. Your onboarding should reflect these differences.

Use progressive disclosure to reveal features as they become relevant. Don’t overwhelm new users with every capability on day one. Instead, introduce advanced features once users have mastered the basics.

Implement strategic reminder systems

Trials of 7 to 14 days convert better than longer trials because they create urgency. But urgency only works if users remember they’re on a trial.

Send regular emails and in-app notifications informing users about remaining trial time. These reminders should do more than count down days. Each one should emphasize value, highlight features users haven’t explored, or address specific pain points.

Gate features strategically based on usage patterns

In our experience optimizing for SaaS, offering too many free features can actually hurt conversion rates. Users need to experience value from free features while simultaneously understanding what they’re missing from paid capabilities.

Place prompts for premium features adjacent to free ones. PDF Converter, for example, offers free file conversion but positions the premium, higher-quality option nearby. This ensures users understand the upgrade path without being pushy.

Use clear visual cues like lock icons, “Pro” badges, or color contrasts to differentiate free from paid features.

Provide proactive support during critical moments

Customer support engagement during trial periods can significantly boost conversion rates.

Don’t wait for users to ask for help. Implement triggered messages based on behavior patterns. If a user hasn’t logged in for three days, send a helpful email with tips. If someone tries to use a gated feature multiple times, offer a personalized demo or support call.

For high-value potential customers, consider human touchpoints. A quick call from customer success at day three of a 14-day trial can answer questions, provide personalized guidance, and significantly increase conversion likelihood.

Design thoughtful cancellation flows

Not every cancellation is preventable, but many are. When users attempt to cancel, use that moment as an opportunity to understand why and potentially offer alternatives.

Implement exit surveys that capture cancellation reasons. According to data on subscription churn, understanding why users leave is critical for preventing future cancellations. Are they leaving because of the price? Missing features? Poor onboarding? Bugs?

Based on cancellation reasons, offer segment-specific alternatives. If someone is canceling due to price, offer a discount or payment plan. If they barely used the product after the trial, extend the trial. If they’re leaving due to missing features, ask which features would keep them.

Common mistakes that increase trial cancellations

Even well-intentioned optimization efforts can backfire. Avoid these common mistakes that actually increase cancellation rates.

Making cancellation difficult

Some SaaS companies deliberately make cancellation difficult, requiring users to call or email rather than cancel with a simple click. This dark pattern might delay cancellations temporarily, but it can destroy trust and create negative word-of-mouth.

Make cancellation simple. The goal isn’t to trap users; it’s to create such a good experience that they don’t want to leave.

Gating core value too aggressively

If users can’t experience your product’s core value without upgrading, they’ll cancel before converting. The free version should deliver genuine utility while creating a desire for premium features.

Neglecting mobile trial experiences

With increasing mobile usage, trial experiences must work seamlessly across devices.

If your onboarding is desktop-optimized but breaks on mobile, you’re creating cancellations for a substantial user segment.

Sending generic email communications

Automated email sequences that ignore user behavior feel impersonal and often go unread. According to research on trial optimization, personalized communication based on user activity significantly outperforms generic campaigns.

If a user hasn’t logged in since signing up, an email about advanced features is irrelevant. If they’re actively using the product daily, countdown reminders may feel pushy. Segment communications based on engagement levels.
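The engagement-based routing described above can be sketched as a few simple rules. Segment names, thresholds, and message labels here are illustrative assumptions:

```python
# Hypothetical sketch of engagement-based email routing per the rules above.
# Thresholds and message names are assumptions, not a prescribed setup.
def pick_email(days_since_last_login, logins_last_7_days, trial_days_left):
    if days_since_last_login is None:      # never logged in since signup
        return "getting_started_nudge"     # advanced-feature emails are irrelevant
    if logins_last_7_days >= 5:            # active daily user
        return "advanced_tips"             # skip pushy countdown reminders
    if trial_days_left <= 3:
        return "trial_ending_value_recap"
    return "feature_highlight"

print(pick_email(days_since_last_login=None, logins_last_7_days=0, trial_days_left=10))
```

Even a rule set this small outperforms a single generic drip sequence, because each message at least matches the user's current engagement level.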

Trial optimization frequently asked questions

What’s the ideal trial length to minimize cancellations?

The optimal length depends on your product’s complexity and how quickly users can experience value. Simple products often perform better with 7-14 day trials that create urgency.

Complex B2B tools may need 30-60 days for users to properly evaluate capabilities. If you are completely lost, start with 14 days and adjust based on your activation data and time-to-value metrics.

Should I require a credit card for trial signup?

This decision significantly impacts both signup volume and conversion rates.

Opt-out trials (credit card required) convert higher but generate fewer signups. Opt-in trials (no credit card) convert lower but attract more users.

The right choice depends on whether you prioritize higher conversion rates per trial or a larger volume of trials and how much more utility the full tier offers versus a free trial.

Most product-led companies start with opt-in trials to maximize exposure, then consider opt-out trials once they’ve optimized the trial experience.

How can I tell if my trial cancellations are normal or problematic?

Start by comparing your trial-to-paid conversion rate against industry benchmarks (roughly 18-25% for opt-in trials and 49-60% for opt-out trials). Then track cohort-specific metrics. If certain user segments, acquisition channels, or trial lengths show notably different cancellation patterns, those differences reveal opportunities for targeted optimization.

What’s the most important metric to track for trial optimization?

While trial-to-paid conversion rate matters, activation rate is often more predictive.

Activation measures whether users complete key actions that indicate they’ve experienced value. Research shows users who reach activation are significantly more likely to convert.

Define your activation criteria based on behaviors that correlate with conversion, then optimize to increase the percentage of trial users who activate.
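A minimal sketch of measuring this, assuming a hypothetical per-user record of whether each trial user activated and converted:

```python
# Hypothetical sketch: activation rate, and conversion among activated users.
# User records and field names are illustrative, not real data.
users = [
    {"id": 1, "activated": True,  "converted": True},
    {"id": 2, "activated": True,  "converted": False},
    {"id": 3, "activated": False, "converted": False},
    {"id": 4, "activated": True,  "converted": True},
    {"id": 5, "activated": False, "converted": False},
]

def rate(users, key, subset=None):
    """Share of users (optionally filtered by a boolean field) with key=True."""
    pool = [u for u in users if subset is None or u[subset]]
    return sum(u[key] for u in pool) / len(pool) if pool else 0.0

activation_rate = rate(users, "activated")
conv_if_activated = rate(users, "converted", subset="activated")
print(activation_rate, round(conv_if_activated, 2))
```

Comparing conversion between activated and non-activated cohorts is how you validate that your chosen activation criteria actually correlate with conversion.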

How often should I test and iterate on my trial experience?

Trial optimization is continuous, not a one-time project.

High-performing SaaS companies test constantly. Start with your highest-impact opportunities identified during your audit, then implement a regular testing cadence.

Track results for statistical significance before making changes permanent. Plan quarterly reviews of your trial metrics to identify new optimization opportunities as your product and market evolve.

Can I reduce trial cancellations without changing my product?

Yes. Many cancellations stem from poor onboarding, unclear value communication, or inadequate support rather than product deficiencies.

You can significantly reduce cancellations by improving onboarding sequences, providing better in-app guidance, personalizing the trial experience, implementing proactive support, and strategically positioning upgrade prompts.

That said, if users consistently cite missing features or bugs when canceling, product improvements may be necessary alongside trial optimization.

Build a systematic approach to trial optimization

Reducing SaaS trial cancellations isn’t about quick fixes or growth hacks. It requires systematic analysis of your trial experience, a deep understanding of user behavior and needs, and continuous optimization based on data.

The five-step audit framework provides a structured approach: analyze data to find drop-off points, interview users to understand why they leave, benchmark against market expectations, map actions with verb scoring, and synthesize insights into a prioritized roadmap. Each step builds on the previous one to create a picture of optimization opportunities.

Implementation matters as much as analysis. Accelerate time-to-value, personalize onboarding, implement strategic reminders, gate features based on usage patterns, provide proactive support, and design thoughtful cancellation flows. These six strategies address the most common causes of trial cancellations, but keep in mind that your analysis will likely surface other unique issues.

Most importantly, treat trial optimization as an ongoing discipline rather than a one-time project. User expectations evolve, competitors improve their experiences, and your product adds features. Regular review and iteration ensure your trial experience continues performing as your business grows.

At The Good, we’ve helped SaaS companies reduce trial cancellations and improve conversion rates through our Digital Experience Optimization Program™. We conduct comprehensive audits using heatmaps, session recordings, and user research to identify exactly where trial users encounter friction. Then we build custom optimization roadmaps and validate improvements through experimentation.

Ready to reduce your trial cancellations and accelerate growth? Schedule an introductory call to discuss how we can optimize your trial experience for better conversion and retention.

The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company https://thegood.com/insights/intent-based-segmentation/ Fri, 21 Nov 2025 18:45:09 +0000

The post The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company appeared first on The Good.

Most SaaS companies segment users the wrong way. They group people by demographics, company size, or subscription tier. Basically, they look at who users are rather than what they’re trying to do.

The problem is that a freelance consultant and an enterprise project manager might both use your collaboration tool, but they have completely different goals, workflows, and definitions of success.

We recently worked with a leading enterprise SaaS company facing exactly this challenge. Their product served everyone from solo creators to enterprise teams, but the experience treated everyone the same.

Users logging in to track a single client project got bombarded with team collaboration features. Power users managing complex workflows couldn’t find advanced capabilities buried in generic menus.

Users felt overwhelmed by irrelevant features, engagement plateaued, and retention suffered.

The solution wasn’t another traditional segmentation model. It was intent-based segmentation: a framework for grouping users by what they’re actually trying to accomplish, then personalizing the experience to match those specific goals.

Understanding behavioral segmentation and where intent fits

Before diving into how to build intent-based clusters, it helps to understand where this approach fits within the broader landscape of user segmentation.

Most SaaS companies are familiar with demographic segmentation (company size, industry, role) and firmographic segmentation (ARR, team size, geographic location). These approaches tell you who your users are, but they don’t tell you what they’re trying to do, which is why they often fail to predict engagement or retention.

Behavioral segmentation takes a different approach. Instead of looking at static attributes, it focuses on how users actually interact with your product.

Behavioral segmentation divides users based on engagement. This includes actions like feature usage patterns, open frequency, purchase behavior, and time-to-value metrics.

Behavioral segmentation isn’t the only valid approach, but it is widely regarded as more effective than demographic segmentation alone, with substantial research backing it up.

Within behavioral segmentation, there are several approaches:

  • Usage-based segmentation looks at frequency and intensity of use.
  • Lifecycle segmentation tracks where users are in their journey.
  • Benefit-sought segmentation groups users by the outcomes they want to achieve.

Intent-based segmentation sits at the intersection of all three.

[Visual: intent-based segmentation at the center of different types of user segmentation]

It identifies clusters of users who share similar goals and workflows, then maps those patterns to create a more personalized experience.

Intent-based clusters answer the question: “What is this user trying to accomplish right now?”

In a recent client engagement that inspired this article, this distinction mattered. They had mountains of usage data showing which features people clicked, but no framework for understanding why certain feature combinations existed or what job users were trying to complete.

They knew “Business Professionals” used their tool, but that category was so broad it offered no actionable insights. A marketing manager building campaign timelines has completely different needs than a legal team tracking contract approvals, even though both might be classified as “business professionals.”

Intent-based clustering gave them that missing layer of insight.

Case study: How to spot the need for intent-based segmentation

Let’s talk more about the client engagement I mentioned. This is a great case study for when to use intent-based segmentation.

We work on a quarterly retainer with this client through our on-demand growth research services. So, when they mentioned struggling to personalize experiences and improve retention, we opened a research project that same day.

The team could see that certain users logged in daily and used five or more features. Great, right? Not really. When we dug deeper, we discovered something critical. Heavy feature usage didn’t predict retention. Some power users churned while casual users stuck around for years.

The issue wasn’t the quantity of features used; it was whether those features aligned with what users were trying to accomplish. A user coming in weekly to update a single client dashboard showed higher retention than someone exploring ten features that didn’t match their core workflow.

The symptoms: What teams told us was broken

During our stakeholder interviews, we heard the same frustrations across departments:

From product: “We know the top five features everyone uses, but that doesn’t help us understand why they’re using them or what to build next. Two users might both use our template feature, but one is building client proposals while the other is standardizing internal processes. They need completely different things from that feature.”

From marketing: “Our segments are too broad to be useful. ‘Business Professional’ could mean anyone from a solo consultant to an enterprise VP. When we send educational content, we can’t make it relevant because we don’t know what problem they’re trying to solve.”

From customer success: “We can see when someone is at risk of churning because their usage drops off, but we can’t predict it before it happens. By the time we notice, they’ve already decided the product isn’t right for them. We need to understand intent earlier so we can intervene proactively.”

From UX research: “Users think in terms of tasks, not features. They don’t say ‘I want to use the dependency mapping tool.’ They say, ‘I need to make sure the design team finishes before development starts.’ But our product talks about features, not outcomes.”

The underlying problem: Missing the ‘why’

What became clear was that the organization had plenty of data about behavior but no framework for understanding intent. They could answer questions like:

  • How many people use feature X?
  • What’s the average session duration?
  • Which users log in most frequently?

But they couldn’t answer the questions that actually mattered:

  • What are users trying to accomplish when they use feature X?
  • Why do some users stick around while others churn?
  • What combination of goals and workflows predicts long-term retention?

This may sound familiar: plenty of data about behavior, but no context about intent. Without understanding users’ different definitions of success, personalization defaults to generic “similar features” recommendations that miss the mark entirely.

The four-phase framework for creating intent-based user clusters

Based on our work with the enterprise SaaS client, we developed a systematic framework for building intent-based clusters from scratch.

The process has four distinct phases, each building on the previous one.

Think of this as a directional guide rather than a rigid formula. You can adapt the scope based on your resources and organizational complexity.


Phase 1: Capture institutional knowledge and identify gaps

Most organizations have more customer insight than they realize; it’s just scattered across teams, buried in old reports, and locked in individual team members’ heads. The first phase consolidates that knowledge and identifies what’s missing.

[Graphic: Phase 1 of the intent-based user segmentation process]

1. Conduct cross-functional interviews

Start by interviewing stakeholders who interact with users regularly: product managers, customer success, sales, support, and marketing. For our client, we conducted interviews with seven team members from UX research, growth, engagement, marketing, and analytics.

The goal isn’t consensus. You want to uncover patterns and contradictions instead. Focus your conversations on these questions:

  • How do you currently describe different user types?
  • What patterns have you noticed in how different users engage with the product?
  • What data exists about user behavior that isn’t being used to inform decisions?
  • Where do personalization efforts break down today?
  • What questions about users keep you up at night?

These conversations surface institutional knowledge that never makes it into documentation.

Capture everything. The contradictions are especially valuable—they reveal where teams operate with different mental models of the same users.

2. Audit existing research and reports

Next, gather relevant user research, data analysis, and customer insight. For our client, we analyzed eight existing reports, including retention data, cancellation surveys, user studies, engagement patterns, and conversion analysis.

Look for:

  • Marketing segmentation models (usually demographic-heavy)
  • User research studies (often small sample, rich insights)
  • Behavioral analytics reports (feature usage patterns, cohort analysis)
  • Customer journey maps (theoretical vs. actual paths)
  • Support ticket analysis (pain points and use cases)
  • Cancellation surveys (why users leave)

Pay attention to gaps between how different teams think about users. Marketing might segment by company size, while product segments by feature usage. Neither is wrong, but the lack of a unified framework means teams optimize for different definitions of success.

3. Define hypotheses about intent-based variables

Based on interviews and research, develop hypotheses about what variables might define user intent. This is where you move from “who uses our product” to “what are they trying to accomplish.”

For our client, we identified several dimensions that seemed to correlate with intent:

  • Primary workflow type: Are users managing team projects, client deliverables, or personal tasks?
  • Collaboration patterns: Solo work, small team coordination, or cross-functional orchestration?
  • Usage frequency: Daily operational tool or periodic project management?
  • Success metrics: Speed (quick task completion) vs. thoroughness (detailed planning and tracking)?
  • Document complexity: Simple task lists or multi-layered project hierarchies?

The goal is to create testable hypotheses that can be validated in Phase 2.

Phase 2: Validate clusters through user research and behavioral analysis

Once you have initial hypotheses, Phase 2 tests them against real user behavior and feedback. This is where hunches become data-backed insights.

[Visual: Phase 2 of the intent-based user segmentation process]

1. Develop provisional cluster groups

Transform your hypotheses into provisional clusters. For our client, we identified six distinct intent-based clusters. Let me illustrate with a fictional example of how this might work for a project management SaaS tool:

Sprint Executors: Users focused on rapid task completion and daily standup workflows. They need speed, simple task updates, and quick team coordination. Think startup teams moving fast with lightweight processes.

Client Project Coordinators: Users managing multiple client engagements simultaneously with strict deliverable timelines. They need client visibility controls, progress tracking, and professional reporting. Think agencies and consultancies.

Cross-Functional Orchestrators: Users coordinating complex projects across departments with dependencies and approval workflows. They need Gantt views, resource allocation, and stakeholder communication tools. Think enterprise program managers.

Personal Productivity Optimizers: Users who treat the tool as their second brain for personal task management and goal tracking. They need customization, recurring tasks, and minimal collaboration features. Think solopreneurs and executives.

Seasonal Campaign Managers: Users with predictable high-intensity periods followed by dormancy. They need templates, bulk operations, and the ability to archive/reactivate projects easily. Think retail operations teams or event planners.

Mobile-First Coordinators: Users who primarily access the tool from mobile devices for field work or on-the-go updates. They need streamlined mobile experiences and offline sync. Think field service teams or traveling consultants.

Each cluster gets a descriptive name that captures the user’s primary intent, not just their behavior. “Sprint Executor” tells you more about what someone is trying to do than “high-frequency user.”

2. Conduct targeted user research

With provisional clusters defined, recruit users who fit each profile and conduct interviews to understand:

  • Their primary use cases and goals when they first adopted the tool
  • How they discovered and currently use the product
  • Their typical workflows from start to finish
  • What defines success in their role
  • Pain points and unmet needs
  • How they decide which features to explore
  • What would make them cancel vs. what keeps them subscribed

For our client, we conducted three to four interviews per cluster, totaling around 24 user conversations. This gave us enough coverage to validate patterns without drowning in data.

The insights were eye-opening. We discovered that one cluster had the fastest time-to-value but the lowest feature adoption. They found what they needed immediately and never explored further. Another cluster showed the highest retention but needed the longest onboarding. They invested time up front because the tool was critical to their workflow.

3. Analyze behavioral data to confirm patterns

User interviews reveal what people say they do. Behavioral data shows what they actually do. Cross-reference your clusters against:

  • Feature usage sequences (which tools appear together in sessions)
  • Time-to-value metrics by cluster (how quickly do they get their first win)
  • Retention and churn patterns (which clusters stick around)
  • Upgrade and expansion behavior (which clusters grow their usage)
  • Support ticket themes (which clusters need help with what)
  • Feature adoption curves (how exploration patterns differ)

For our client, the data revealed critical differences. The “Sprint Executor” equivalent had fast initial adoption but plateaued quickly. They found their core workflow and stopped exploring.

The “Cross-Functional Orchestrator” cluster showed slow initial adoption but deep engagement over time. They needed to learn the tool thoroughly to unlock value.

These patterns weren’t visible in aggregate data. Only by segmenting users by intent could we see that different clusters had fundamentally different paths to retention.

4. Build detailed cluster profiles

For each validated cluster, create a comprehensive profile that becomes the foundation for personalization. For example:

Cluster name: Sprint Executors

Primary intent: Complete daily tasks quickly with minimal friction and maximum team visibility

Most-used features:

  • Quick-add task creation
  • Board view for visual sprint planning
  • Mobile app for on-the-go updates
  • Real-time team activity feed

Typical workflow patterns:

  • Morning standup with task assignments
  • Throughout the day: quick updates and status changes
  • End of day: marking tasks complete and planning tomorrow

Behavioral flags that identify this cluster:

  • Creates 5+ tasks in first week
  • Returns daily within first 14 days
  • Uses mobile app within first 7 days
  • Rarely uses advanced features like Gantt charts or dependencies

Retention drivers:

  • Speed of task completion
  • Team visibility and accountability
  • Mobile accessibility

Churn risks:

  • Tool feels too complex for simple needs
  • Feature bloat makes core actions harder to find
  • Forced upgrades to access speed-focused features

Personalization opportunities:

  • Streamlined onboarding focused on quick task creation
  • Mobile-first feature discovery
  • Templates for common sprint workflows
  • Integrations with communication tools

These profiles become the single source of truth that product, marketing, and customer success can all reference.

Phase 3: Develop indicators and personalization strategies

The final phase connects clusters to action. This is where the framework moves from insight to implementation.

1. Create behavioral flags for cluster identification

Most users won’t self-identify their intent at signup. You need to infer cluster membership from behavioral signals early in their journey. The key is identifying flags that appear within the first 7-14 days, early enough to personalize the experience before users decide whether the tool is right for them.


For reference, the “Sprint Executor” cluster in our fictional example:

  • Created 5+ tasks in first week
  • Logged in on 4+ separate days in first 14 days
  • Used mobile app within first 7 days
  • Board or list view used more than timeline/Gantt view (80%+ of sessions)
  • Invited at least one team member within first 10 days
  • Never explored advanced dependency features
  • Average session length under 10 minutes

Versus the “Client Project Coordinator” cluster:

  • Created 3+ separate projects within first week (indicating multiple clients)
  • Used folder or workspace organization features within first 5 days
  • Set up client-specific permissions or external sharing settings
  • Created custom views or reports within first 14 days
  • Longer average session times (20+ minutes per session)
  • Uses professional or client-specific terminology in project names
  • High usage of export or presentation features

The goal is to find the minimum viable signal set that reliably predicts cluster membership. Start with more flags and refine over time based on which actually correlate with long-term behavior.
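To make the flag logic concrete, here is a minimal sketch of cluster inference from early behavioral signals. Everything in it is illustrative: the field names and thresholds simply mirror the example flags above and would need tuning against your own retention data.

```python
# Hypothetical sketch: infer a user's intent cluster from early signals.
# Field names and thresholds mirror the example flags above; they are
# illustrative, not a production model.
from dataclasses import dataclass

@dataclass
class EarlySignals:
    tasks_created_week1: int
    active_days_first_14: int
    board_view_share: float      # fraction of sessions in board/list view
    projects_created_week1: int
    used_workspace_org: bool
    avg_session_minutes: float

def infer_cluster(s: EarlySignals) -> str:
    """Return the best-guess intent cluster, or 'unclassified'."""
    if (s.tasks_created_week1 >= 5 and s.active_days_first_14 >= 4
            and s.board_view_share >= 0.8 and s.avg_session_minutes < 10):
        return "sprint_executor"
    if (s.projects_created_week1 >= 3 and s.used_workspace_org
            and s.avg_session_minutes >= 20):
        return "client_project_coordinator"
    return "unclassified"  # fall back to the generic experience

# A fast-moving daily task user matches the Sprint Executor flags
print(infer_cluster(EarlySignals(7, 9, 0.9, 1, False, 6.5)))  # sprint_executor
```

Start with more signals than you think you need, then prune to the minimum set whose predictions hold up against 90-day retention.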

One critical finding from our client work: early behavioral flags predicted retention better than demographic data.

A user who exhibited “Client Project Coordinator” behaviors in week one showed 40% higher 90-day retention than the average user, regardless of their company size or job title.

2. Map personalization opportunities to each cluster

With clusters and flags defined, identify specific ways to personalize the experience across the user journey:

Onboarding sequences: Tailor the first-run experience to highlight features that match user intent. Show Sprint Executors how to set up their first sprint board, not the full feature catalog with Gantt charts and resource allocation tools they don’t need.

In-app messaging: Trigger contextual tips based on usage patterns. When a Client Project Coordinator creates their third project with similar structure, suggest project templates to save time.

Feature discovery: Recommend next-step features that align with cluster workflows. For Sprint Executors who’ve mastered basic task management, introduce the mobile app and integrations with their communication tools—not complex dependency mapping.

Content and education: Send targeted educational content that addresses cluster-specific goals. Client Project Coordinators get tips on professional reporting and client permissions. Sprint Executors get productivity hacks and team coordination strategies.

Upgrade paths: Present pricing tiers and feature upgrades that match cluster needs. Don’t push team collaboration features to Personal Productivity Optimizers who work solo and won’t use them.

Support prioritization: Route support tickets differently based on cluster. Client Project Coordinators might get priority support since they’re often managing paying clients. Seasonal Campaign Managers might get proactive check-ins before predicted busy periods.

For our client, this mapping revealed opportunities they’d completely missed. One cluster had been receiving generic “explore more features” emails when what they actually needed was advanced security capabilities for compliance requirements. Another cluster kept churning at the end of trial because onboarding emphasized features they’d never use instead of the speed-focused tools that matched their workflow.

Phase 4: Develop test concepts to validate impact

Turn personalization opportunities into testable hypotheses. Don’t roll everything out at once. Start with high-impact, low-effort tests that prove the value of intent-based segmentation.

For our client, we proposed several test concepts structured to validate clusters quickly and build organizational confidence in the framework. Here are a few examples.


Example Test 1: Intent-Based Onboarding Survey

Background: The organization lacked a way to identify user intent at the critical moment when users were most open to guidance: right after signup, but before they’d formed opinions about product fit.

Hypothesis: By asking users to self-identify their primary goal during their first meaningful session, we can segment them into actionable clusters that enable personalized feature discovery and messaging, resulting in a 5-10% lift in 3-month retention rates.

Test design: During the first session (after initial signup but before deep engagement), present a brief survey asking: “What brings you here today?” with options that map directly to identified clusters:

☐ Coordinate my team’s daily work (Sprint Executors)

☐ Manage multiple client projects (Client Project Coordinators)

☐ Organize complex cross-functional initiatives (Cross-Functional Orchestrators)

☐ Track my personal tasks and goals (Personal Productivity Optimizers)

☐ Plan seasonal campaigns or events (Seasonal Campaign Managers)

☐ Update projects while on the go (Mobile-First Coordinators)

☐ Something else (with optional text field)

Then immediately personalize their first experience based on their response: Sprint Executors see a streamlined task creation tutorial, Client Project Coordinators get guidance on setting up client workspaces, etc.

Success metrics:

  • Primary: 3-month retention rate by selected cluster (looking for 5-10% lift)
  • Secondary: Survey completion rate (target: >80%), feature adoption aligned with cluster (target: 20% lift), time to first value-generating action
  • Guardrails: No negative impact on day 2 or day 7 retention

Acceptance criteria for “winning test”:

  • Survey completion rate >80%
  • At least 60% of users select a pre-set option (vs. “something else”)
  • Statistically significant retention lift in at least one cluster
  • No degradation in key engagement metrics

Acceptance criteria for “learning test”:

  • More than 40% of users select “something else” (suggests clusters don’t match user mental models)
  • No statistically significant difference in retention (suggests clusters exist, but personalization approach needs refinement)

Audience: New paid subscribers on first day, trial users converting to paid, reactivated users returning after 30+ days dormant. Start with 25% of eligible users to minimize risk.

Timeline: 90 days to measure primary retention metric, but early signals (completion rate, feature adoption) available within 14 days.
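The branching in this test is simple enough to express as a lookup: each survey option maps to a cluster and a personalized first-run flow. The sketch below assumes hypothetical flow names; the option strings echo the survey above.

```python
# Sketch of Test 1's branching: each survey option maps to a cluster and a
# first-run flow. Flow names are hypothetical placeholders.
SURVEY_ROUTES = {
    "Coordinate my team's daily work":
        ("sprint_executor", "streamlined_task_tutorial"),
    "Manage multiple client projects":
        ("client_project_coordinator", "client_workspace_setup"),
    "Organize complex cross-functional initiatives":
        ("cross_functional_orchestrator", "dependency_mapping_intro"),
    "Track my personal tasks and goals":
        ("personal_productivity_optimizer", "personal_board_setup"),
    "Plan seasonal campaigns or events":
        ("seasonal_campaign_manager", "template_gallery_tour"),
    "Update projects while on the go":
        ("mobile_first_coordinator", "mobile_app_prompt"),
}

def route_signup(answer: str) -> tuple[str, str]:
    """Return (cluster, first-run flow); unmapped answers get the default."""
    return SURVEY_ROUTES.get(answer, ("something_else", "default_onboarding"))

print(route_signup("Manage multiple client projects"))
# ('client_project_coordinator', 'client_workspace_setup')
```

Keeping the routing table in one place also makes the "learning test" analysis easy: every answer that falls through to the default is a candidate missing cluster.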

Example Test 2: Cluster-Specific Feature Recommendations

Background: Generic in-app messaging had low click-through rates (<5%) and wasn’t driving feature adoption. Users felt bombarded by irrelevant suggestions.

Hypothesis: For users who match behavioral flags within the first 14 days, triggering personalized feature recommendations will increase feature adoption by 20% and engagement depth by 15%.

Test design: Identify users by behavioral flags, then trigger targeted in-app messages at contextually relevant moments:

  • Sprint Executors see mobile app download prompt after completing 5 tasks on desktop: “Update tasks on the go: get the mobile app”
  • Client Project Coordinators see reporting features after creating third project: “Impress clients with professional progress reports”
  • Cross-Functional Orchestrators see dependency mapping after creating complex project: “Map dependencies to keep cross-functional teams aligned”

Success metrics:

  • Primary: Feature adoption rate for recommended features (target: 20% lift vs. control)
  • Secondary: Overall engagement depth (features used per session), message click-through rate
  • Guardrails: No increase in feature abandonment (starting but not completing flows)

Audience: Users who match cluster behavioral flags within first 14 days. Test one cluster at a time to isolate impact.

Timeline: 30 days to measure feature adoption impact.

Example Test 3: Retention Email Campaigns by Cluster

Background: Generic “tips and tricks” email campaigns had 8% open rates and weren’t moving retention metrics. Content felt irrelevant to most recipients.

Hypothesis: Segmenting email campaigns by identified cluster will improve email engagement by 50% and show a measurable correlation with retention.

Test design: Replace generic weekly tips emails with cluster-specific content:

  • Sprint Executors: “5 ways to speed up your daily standup” / “Mobile shortcuts that save 2 hours per week”
  • Client Project Coordinators: “How to impress clients with professional project reports” / “3 ways to give clients visibility without overwhelming them”
  • Personal Productivity Optimizers: “Build your second brain: advanced filtering techniques” / “Automate your recurring tasks in 5 minutes”

Send to users identified through either the onboarding survey or behavioral flags. Track engagement and retention by cluster.

Success metrics:

  • Primary: Email open rates (target: 50% lift), click-through rates (target: 100% lift)
  • Secondary: Correlation between email engagement and 90-day retention
  • Guardrails: Unsubscribe rates remain stable or decrease

Audience: Users identified as belonging to specific clusters either through survey responses or behavioral flags, minimum 14 days after signup.

Timeline: 6 weeks for initial engagement metrics, 90 days for retention correlation.

Post-test analysis framework

For each test, we established a clear decision framework:

If “winning test”:

  • Roll out to 100% of eligible users
  • Begin development on next phase of personalization for that cluster
  • Use learnings to inform tests for other clusters
  • Document what worked to build organizational playbook

If “learning test”:

  • Analyze all “something else” responses for missing clusters or unclear framing
  • Review behavioral data to see if clusters exist, but personalization was wrong
  • Iterate on messaging, timing, or format
  • Decide whether to retest with refinements or try different approach

If negative impact:

  • Immediately roll back to the control experience
  • Conduct user interviews to understand what went wrong
  • Reassess cluster definitions or personalization approach
  • Consider whether the cluster exists but needs a different treatment

The key to successful testing is starting small, measuring rigorously, and being willing to learn from failures. Not every cluster will respond to every type of personalization, and that’s valuable information. The goal isn’t perfect personalization immediately; it’s continuous improvement based on what actually moves metrics.

Intent-based segmentation mistakes and how to avoid them

Based on our experience implementing this framework, here are the mistakes that will derail your efforts:

1. Starting with too many clusters

More isn’t better. Six well-defined clusters are more useful than fifteen overlapping ones. You need enough clusters to capture meaningfully different intents, but few enough that teams can actually remember and act on them. Start with 4-6 clusters and refine over time. If you find yourself creating clusters that differ only slightly, you’ve gone too granular.

2. Confusing demographics with intent

Job title, company size, or industry might correlate with intent, but they don’t define it. We’ve seen solo consultants behave like “Cross-Functional Orchestrators” and enterprise teams behave like “Sprint Executors.” Focus on what users are trying to accomplish, not who they are on paper.

3. Creating overlapping clusters

Each cluster should be distinct in its primary intent and workflow patterns. If you’re struggling to articulate how two clusters differ behaviorally, they’re probably the same cluster with different labels. Test this by asking: “If I saw someone’s usage data, could I confidently assign them to one cluster?”

4. Ignoring edge cases entirely

Some users will span multiple clusters or switch between them based on context. That’s fine. The framework should accommodate primary intent while recognizing that users are complex. A user might primarily be a “Client Project Coordinator” but occasionally use “Personal Productivity Optimizer” features for their own task management. Don’t force rigid categorization.

5. Skipping the validation step

Your initial hypotheses will be wrong in places. User research and behavioral data keep you honest and prevent confirmation bias. We’ve seen teams fall in love with theoretically elegant clusters that don’t actually exist in their user base, or miss entire segments because they didn’t fit the initial hypothesis.

6. Treating clusters as static

User intent evolves. Someone might start as a “Personal Productivity Optimizer” and grow into a “Client Project Coordinator” as their business scales. Review and refine your clusters quarterly based on new data, product changes, and market shifts.

7. Personalizing too aggressively too soon

Start with high-confidence, low-risk personalization (like targeted email content) before you completely diverge user experiences. You want to validate that clusters behave differently before you build entirely separate onboarding flows.

8. Forgetting to measure impact

Intent-based segmentation is valuable only if it improves outcomes. Define success metrics upfront (e.g., retention lifts, engagement depth, upgrade rates, support ticket reduction) and track them by cluster. If personalization isn’t moving these metrics, refine your approach.

Making intent-based segmentation work for your organization

The framework we’ve outlined works across product categories and company sizes, but implementation varies based on your resources and organizational maturity.

If you have limited data: Start with Phase 1 and Phase 2, using qualitative research to define clusters before investing in behavioral infrastructure. You can manually tag users based on interview responses and onboarding surveys, then personalize through targeted emails and customer success outreach. As you grow, build the data systems to automate cluster identification.

If you have rich behavioral data but limited research capabilities: Reverse the order. Start with data patterns and validate through targeted interviews. Look for natural groupings in your analytics that suggest different workflow types, then talk to representative users from each group to understand their intent.

If you’re a small team: Don’t let perfect be the enemy of good. Start with 3-4 obvious clusters based on your highest-level workflow differences. The founder of a 10-person startup probably has a better intuitive understanding of user intent than a 500-person company with siloed data. Write down what you know, test it with a few users, and start personalizing.

If you’re a large enterprise: The challenge is getting organizational alignment, not defining clusters. Use Phase 1 to surface where teams already operate with different mental models, then use data to arbitrate. Create executive sponsorship for the new framework so it becomes the shared language across product, marketing, and CS.

The key is starting somewhere. Most companies know their one-size-fits-all approach isn’t working, but they keep personalizing around variables that don’t actually predict what users are trying to accomplish.

Intent-based segmentation reorients everything around the question that actually matters: What is this user trying to accomplish, and how can we help them succeed at that specific goal?

Turn insights into retention that drives revenue

Understanding user intent is just the first step. The real value comes from translating those insights into personalized experiences that keep users engaged and drive measurable revenue growth.

At The Good, we’ve spent 16 years helping SaaS companies identify their most valuable user segments and optimize experiences around what actually drives retention. Our systematic approach to user segmentation goes beyond frameworks. We help you implement experimentation strategies that prove which personalization efforts move the needle on the metrics your board cares about.

Plenty of companies struggle to implement segmentation that’s actually actionable. They end up with beautiful personas gathering dust or broad categories that don’t inform product decisions.

Intent-based segmentation is different because it connects directly to behavior you can observe and experiences you can personalize.

If you’re struggling with generic experiences that fail to resonate with different user types, or if you know your segmentation could be better but aren’t sure where to start, let’s talk about how intent-based segmentation could transform your retention strategy and drive revenue growth.

Now It’s Your Turn

We harness user insights and unlock digital improvements beyond your conversion rate.

Let’s talk about putting digital experience optimization to work for you.

The post The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company appeared first on The Good.

Why Are Free Users Churning? A Growth Leader’s 5-Step Guide To Auditing The Free User Experience
https://thegood.com/insights/why-are-free-users-churning/ (Thu, 16 Oct 2025)

“My free users aren’t converting, where do I start?”

If you’re asking this question, you’re already ahead of most product leaders. You recognize the problem. But here’s what many miss: low conversion is a symptom, not the root cause.

SaaS churn often happens before users ever consider paying.

It’s common for users to hit friction points you didn’t know existed. They encounter gates that make no sense in context. They drop off at moments when just a bit more clarity could have kept them engaged.

The good news? You can fix this. But not by guessing. Not by copying what Dropbox or Notion does. And usually not by adding more features.

What you need is a systematic audit of your free or anonymous user experience. One that reveals exactly where users hit walls, why they bounce, and what you can do to keep them engaged long enough to see value.

This article walks through a five-step framework that SaaS product and growth leaders can use to audit their free experience and reduce churn. It’s the same approach we use with clients, adapted so you can run it internally. Fair warning: this takes work. But if you’re serious about improving SaaS user retention, it’s worth every hour.

Why your free experience impacts your retention rate

Before we get into the framework, let’s be clear about what we mean by “free experience.”

This includes any interaction where users engage with your product without paying. That could be a free trial, a freemium tier, anonymous tool usage, or limited feature access. It’s the first impression, the test drive, the “try before you buy” phase.

And it matters more than you think.

Most SaaS companies obsess over free-to-paid conversion rates. But conversion is a lagging indicator. By the time a user decides not to convert, the damage is already done. They disengaged days or weeks ago. They just didn’t tell you.

The real opportunity sits upstream. If you can identify and remove friction in the free experience, you don’t just improve conversion rates. You improve activation rates, engagement, time-to-value, and long-term retention. You build a user base that actually wants to pay because they’ve already seen the value.

Here’s how to find those friction points.

Step 1: Review your data for drop-off points

Start with what’s already happening in your product. Before you talk to anyone or look at competitors, you need to know exactly where users are getting stuck.

Dig into your product analytics. You’re looking for three things:

Activation drop-offs: Where do users abandon the onboarding flow? Which steps have the highest exit rates? If 60% of users drop off when asked to invite teammates, that’s a signal.

Feature engagement patterns: Which features do free users actually use? Which ones do they try once and never touch again? Are there features you’ve gated that users don’t even attempt to access?

Time-to-value analysis: How long does it take users to complete their first valuable action? And what percentage of users never get there? If your median time-to-value is three days, but 70% of users churn within 48 hours, you have a problem.

Set up a dashboard that tracks these metrics by cohort. New signups this week versus last month. Users from different acquisition channels. Free trial versus freemium. The patterns that emerge will guide your optimization priorities.
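As a starting point for that dashboard, step-to-step drop-off falls straight out of raw event counts. A quick sketch, with invented funnel steps and numbers standing in for your own analytics export:

```python
# Invented funnel counts; replace with your own product analytics export.
funnel = [
    ("signed_up",             10_000),
    ("completed_onboarding",   7_400),
    ("created_first_project",  4_100),
    ("invited_teammate",       1_600),  # the 61% exit here is the signal
]

for (step, n), (next_step, n_next) in zip(funnel, funnel[1:]):
    drop = 1 - n_next / n
    print(f"{step} -> {next_step}: {drop:.0%} drop-off")
```

Run per cohort (signup week, acquisition channel, trial vs. freemium) and the step with the steepest relative exit rate becomes your first candidate for investigation.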

Layer on session recordings and heatmaps to see exactly what’s happening at key drop-off points. Numbers tell you where the problem is. Qualitative data tells you why.

Watch 20-30 sessions of users who churned in their first week. What did they try to do? Where did they get stuck? What confusion or frustration is evident in their behavior?

This isn’t just a data review. It’s detective work. You’re building a picture of where your free experience breaks down.

Step 2: Talk to users (both active and churned)

Now that you’ve identified drop-off points in your analytics, it’s time to understand the human story behind those numbers. Conduct 10-15 interviews, split between two groups:

Active free users (people still using your product but haven’t upgraded): Why are they still here? What value are they getting? What would make them pay? What’s holding them back?

Churned users (people who tried your product and left): What were they trying to accomplish? Where did they get stuck? What made them give up? What would have kept them engaged?

Keep these conversations short (15-20 minutes) and focused. You’re not selling. You’re learning.

Sample questions for active free users:

  • What problem were you trying to solve when you first signed up?
  • Walk me through how you use [product] today.
  • What features do you wish you had access to?
  • What would need to change for you to consider upgrading?
  • If we removed [specific free feature], would you still use the product?

Sample questions for churned users:

  • What were you hoping to accomplish with [product]?
  • Where did you get stuck?
  • Was there a specific moment when you decided it wasn’t for you?
  • Did you consider other tools? What made you choose them instead?

Record these conversations (with permission) and transcribe them. The exact language users employ to describe their experience reveals friction points you’d never spot in analytics alone.

Pay special attention when users mention alternatives they considered or are currently using. This context becomes critical in the next step.

Step 3: Map what your users are being offered in the market

You now understand what’s happening in your product and why users make the decisions they do. The next question is: what are they comparing you against?

Your users don’t evaluate your free experience in a vacuum. They’re weighing it against every other tool they’ve tried, every competitor they’re considering, and every product they wish yours worked more like.

This step isn’t about copying competitors. It’s about understanding the full landscape of options your users are navigating.

Create a comprehensive inventory of how other products in your space (and adjacent to it) handle their free experiences. Document what your users are seeing elsewhere.

Here’s what to capture in a Figma or Notion file.


Set up a page with one row per product. For each one, document:

  • What features are available without registration
  • What requires an email address but remains free
  • Where the hard paywalls sit
  • How they communicate limits (countdown timers, credit displays, etc.)
  • Placement and messaging of upgrade prompts
  • Onboarding flows and activation sequences

Don’t limit yourself to direct competitors. Look at the tools your users mentioned in interviews. If they’re comparing your productivity tool to Notion, your design tool to Figma, or your automation platform to Zapier, study how those products handle free users.

Pro tip: Screenshot everything. Your inventory should include visual documentation of every monetization touchpoint, limit notification, and upgrade CTA. These screenshots become invaluable references when you’re making decisions about your own experience.

This exercise typically takes 8-12 hours for a thorough analysis of five to seven products. You’ll surface approaches you hadn’t considered and identify industry patterns that users have come to expect.

The goal here is context. When a user hits a limit in your product, they’re mentally comparing that experience to how Dropbox handles storage limits, how Canva displays upgrade options, or how Grammarly shows premium features. Understanding those reference points helps you design a free experience that meets or exceeds market expectations.

Step 4: Run a verb scoring exercise

With data, user insights, and market context in hand, it’s time to systematically evaluate your own product’s free experience. This is where verb scoring comes in.

Verb scoring evaluates the discrete actions users can take in your product and assigns each one a “score” based on the level of friction required. The six verb scores are:

  • Anonymous – Users can take this action without providing any information
  • Limited Anonymous Use – Users can take this action without registration, but only a limited number of times
  • Free with Registration – Users must register (email + basic info), but can take this action unlimited times for free
  • Limited Registered Use – Registered users can take this action, but with caps or restrictions
  • Trial with Payment – Users must provide payment information to access this action (even if they’re not charged immediately)
  • Gated – Only paying customers can take this action

List every meaningful action users can take in your product. Not features, but actions. “Create a document” is a verb. “Edit collaboratively” is a verb. “Export to PDF” is a verb. “Share via link” is a verb.

Then score each one. Where does it fall on the spectrum from Anonymous to Gated?

This exercise reveals your actual monetization strategy, not the one you think you have. You’ll often find that verbs are gated inconsistently, or that you’re giving away too much (or too little) at critical moments.

For a detailed walkthrough of verb scoring, including decision trees and examples, see our guide on verb scoring for product strategy.

Create a verb scoring matrix that maps all your verbs against these six scores. This becomes your baseline. It shows exactly where friction exists in your free experience, allowing you to compare it directly to what you documented in Step 3.
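In its simplest form, the matrix is just a mapping from each verb to one of the six scores. A sketch, with invented verbs and assignments for illustration:

```python
# Sketch of a verb scoring matrix: each meaningful action mapped to one of
# the six scores defined above. The verbs and assignments are illustrative.
verb_matrix = {
    "create a document":      "anonymous",
    "edit collaboratively":   "free_with_registration",
    "share via link":         "free_with_registration",
    "export to PDF":          "limited_registered_use",  # e.g. 3 per month
    "view version history":   "trial_with_payment",
    "set custom permissions": "gated",
}

# Quick audit: how is friction distributed across the free experience?
from collections import Counter
print(Counter(verb_matrix.values()).most_common())
```

A distribution skewed heavily toward the gated end of the spectrum, relative to what you documented in Step 3, is an early warning sign before you even get to Step 5.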

Step 5: Connect the dots between data, users, market context, and verb scoring

This is where the audit comes together. You now have four layers of insight:

  1. Quantitative and qualitative data: Where users drop off and what they’re doing (or not doing)
  2. User feedback: Why they drop off and what they’re thinking
  3. Market context: What alternatives they’re comparing you against
  4. Verb scoring matrix: Where friction exists in your own product

Lay them side by side. Look for patterns.

Here’s what you’re hunting for:

Friction without reason

Look out for verb scores that create unnecessary barriers relative to market norms. For example, if your data shows 40% of users bounce before registering, user interviews reveal confusion about what your product does, and your market analysis shows that competitors allow anonymous exploration, you’re likely losing users before they experience value. Your verb scoring can reveal that you’re gating too early.

Value leaks

Check for free features that users love but don’t move them toward conversion. If your most-used free features have no connection to paid capabilities, and users in interviews can’t articulate why they’d upgrade, you’re building a user base that will never pay. Your verb scoring might show you’re giving away too many “Free with Registration” verbs without strategic “Limited Registered Use” prompts.

Invisible gates

Paywalls that users hit without understanding why. Your data shows sudden drop-offs at specific upgrade prompts. User interviews reveal confusion about value or poor timing. Market analysis shows competitors explain premium benefits more clearly. Your verb scoring identifies which verbs are gated, but not whether those gates make sense to users.

Poorly timed friction

Limits or gates that appear before users have experienced enough value. Data shows high bounce rates at the first upgrade prompt. User interviews reveal frustration: “I hadn’t even figured out the basics yet.” Market analysis shows that similar tools delay friction until after activation. Your verb scoring might reveal that you’re using “Limited Anonymous Use” or “Trial with Payment” too early in the journey.

Market misalignment

Patterns where your verb scoring differs significantly from market norms, and your churn data supports that this matters. For instance, if every competitor allows free PDF exports but you gate this behind payment, churned users will likely mention it as a dealbreaker in interviews.

Create a prioritized list of friction points based on:

  • Impact (how many users are affected, based on your data?)
  • Confidence (do your user interviews confirm this is a problem?)
  • Effort (how hard is this to fix?)
  • Market expectation (is this friction standard, or are you an outlier?)

This becomes your retention optimization roadmap.
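The four criteria above can be folded into a single priority score for ranking. This is a minimal sketch with made-up weights, scores, and friction points; the formula itself is an assumption for illustration, not part of the audit framework:

```python
# Hypothetical friction points, each scored 1-5 on the four criteria.
# Higher impact, confidence, and market gap raise priority; effort lowers it.
friction_points = [
    {"name": "registration gate on first visit", "impact": 5, "confidence": 4, "effort": 2, "market_gap": 5},
    {"name": "PDF export behind paywall",        "impact": 3, "confidence": 5, "effort": 1, "market_gap": 4},
    {"name": "upgrade prompt before activation", "impact": 4, "confidence": 3, "effort": 3, "market_gap": 2},
]

def priority(fp: dict) -> float:
    # Illustrative scoring: reward impact, confidence, and being a
    # market outlier; penalize implementation effort.
    return (fp["impact"] * fp["confidence"] * fp["market_gap"]) / fp["effort"]

roadmap = sorted(friction_points, key=priority, reverse=True)
for fp in roadmap:
    print(f'{priority(fp):6.1f}  {fp["name"]}')
```

However you weight the criteria, the point is to make the ranking explicit and arguable, rather than settled by whoever speaks loudest in the planning meeting.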

Why this framework works

This five-step audit framework delivers three specific outcomes that improve SaaS user retention:

Get a clear path to higher retention rates: No more guessing. You’ll have a prioritized list of friction points ranked by impact and effort. Fix the top three and you’ll see measurable improvement in activation, engagement, and conversion.

Make data-driven decisions: Create a culture of user-centered decisions rather than those based on the highest-paid person’s opinion, historical choices, or a gut feeling. When you combine quantitative data, qualitative research, market context, and systematic verb scoring, arguments become easy to settle.

Prevent feature flops: Validate changes before implementation. You’ll know which gates to remove, which features to add to your free tier, and which upgrade prompts to reposition, all before you waste valuable development resources.

Teams that run this audit consistently report two things: first, they’re surprised by what they find. Assumptions they’d held for months or years turn out to be wrong. Second, the fixes are often simpler than expected. Sometimes all it takes is moving an upgrade prompt, clarifying messaging, or ungating a single feature.

Running this audit takes time (and that’s the point)

Let’s be honest: this framework requires a meaningful investment. Between data analysis, user interviews, market research, and verb scoring, you’re looking at 40-60 hours of work.

That’s assuming you have the right tools, know how to set up proper analytics, can recruit and interview users effectively, and have experience interpreting qualitative data.

For many SaaS teams, that’s exactly the problem. You know you need to audit your free experience. You know churn is killing growth. But your product team is building features, your growth team is running acquisition campaigns, and nobody has the bandwidth or expertise to run a proper retention audit.

That’s where The Good’s Digital Experience Optimization Program™ comes in.

We’ve run this exact process dozens of times for SaaS companies between product-market fit and scale. Companies like yours with $1M-$30M ARR and pressure to accelerate growth while battling churn.

Our team conducts the full audit, including data review, user research, market analysis, and verb scoring, and delivers a prioritized roadmap of friction points with specific recommendations. Then we help you implement, test, and optimize the changes.

The result? Clients typically see measurable improvements in activation and retention within 60-90 days. More importantly, they build an optimization discipline that compounds over time.

Want to see where your free experience is bleeding users? Schedule an introductory call to discuss how we can help you reduce churn and improve SaaS user retention.


The post Why Are Free Users Churning? A Growth Leader’s 5-Step Guide To Auditing The Free User Experience appeared first on The Good.

]]>
Fritz O’Connor Stays User-Centered and Leads with Data During Uncertain Times https://thegood.com/insights/fritz-oconnor/ Thu, 04 Sep 2025 20:09:59 +0000 https://thegood.com/?post_type=insights&p=110835 Building operational excellence in marketing isn’t just about implementing the latest tools or following industry best practices. It requires a deep understanding of customers, systematic thinking, and the ability to lead teams through uncertainty with data as your guide. Fritz O’Connor, former VP of Marketing at Ironman 4×4 America, exemplifies this approach. With over two […]

The post Fritz O’Connor Stays User-Centered and Leads with Data During Uncertain Times appeared first on The Good.

]]>
Building operational excellence in marketing isn’t just about implementing the latest tools or following industry best practices. It requires a deep understanding of customers, systematic thinking, and the ability to lead teams through uncertainty with data as your guide.

Fritz O’Connor, former VP of Marketing at Ironman 4×4 America, exemplifies this approach. With over two decades of experience spanning manufacturing, sales, and marketing leadership, Fritz has developed a methodology for building high-performing organizations that deliver results consistently, even in challenging circumstances.

A marketing leader built for manufacturing

Fritz’s career journey reads like a masterclass in understanding customers across different industries. Starting in the printing and paper industry, he cut his teeth in structured sales training programs that taught him the fundamentals of professional sales and business operations.

“I’ve spent my entire career in sales and marketing roles. Almost exclusively in the manufacturing sector for companies that make stuff,” Fritz explains. This foundation in manufacturing would prove invaluable throughout his career, giving him deep insight into the complexity of bringing physical products to market.

His two-decade tenure at GE further refined his skills across diverse business environments. “We always used to say we can work in any industry, anywhere in the world, and still get paid by the same company,” he recalls. This experience working across plastics, appliances, and GE Corporate gave him a unique perspective on how great companies operate at scale.

But it was during his time at GE Corporate that Fritz discovered what would become his career-defining framework: differential value proposition (DVP). Working in a marketing consulting role with virtually every business in GE’s global portfolio, he helped launch this customer-centric approach to messaging and positioning throughout the organization.

This systematic approach to understanding and serving customers became foundational to Fritz’s ongoing success.

Implementing systems and frameworks that take teams from features to solutions

Originally coined by the founder of Valkre Solutions, Jerry Alderman, the DVP framework transforms how companies think about customer messaging and competitive positioning. Fritz became a master at implementing this methodology across diverse organizations.

“What are you offering? Be it a product or service that is better than the customer’s next best alternative,” Fritz explains. This might seem simple, but the implications are profound. Rather than competing on features or price, DVP focuses on solving customer problems in ways that competitors simply cannot match.

The challenge, as Fritz learned during his GE implementation, is that DVP represents a fundamental shift in thinking. "Every business, product, or service has a value proposition, but not every value proposition is differential. So many companies have the same value proposition. The white space is that differential part."

"It's about switching thinking from a feature to a benefit. For example, a blue appliance is not a differential value proposition. It's a feature."

Fritz teaches teams to make this shift by leading with problems and solutions.

"It's how it makes the consumer or customer's life better, how it solves that problem. You have to identify what the problem is. You have to articulate how you can fix that problem in a different way, better than anybody else."

This shift from features to solutions requires teams to understand their customers' actual problems, not just their stated needs.

For leaders, this translates directly into more effective product messaging, clearer value propositions, and ultimately, higher conversion rates.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

Overcoming the "this is how we've always done it" challenge

One of Fritz's biggest career wins (and ongoing challenges) centers around implementing the Differential Value Proposition (DVP) methodology across organizations. The implementation at GE became both a success story and a learning experience in change management.

"As you can imagine, anytime you try and launch a new process in a company the size of GE, you can be met with resistance. Especially when you're coming out of corporate."

This resistance taught Fritz a crucial lesson about implementing change: "I don't view that as a challenge or a stumbling block, but as a fantastic and wonderful opportunity because when you flip those people, they become your biggest proponents."

His approach centers on listening first, then demonstrating value in the stakeholder's own language. "It's a listening journey. You've gotta understand what the challenges are of the people with whom you're working, whether it's an external customer or an internal customer."

"Proactively listen and walk in the shoes of the people I'm working with when I'm trying to introduce something as significant as DVP or other business tools."

This listening approach helps identify the real challenges and resistance points, making it possible to address them effectively.

The foundation: accountability, responsibility, and challenge

But having the right frameworks isn't enough. Fritz learned that execution depends on creating the right team culture. He is quick to credit his teams as the backbone of his successful projects, and one of the ways he supports them is with clear organizational principles.

"I have a few underlying business principles that I've gained along the way that are the foundational threads for me," Fritz explains. "One is, any team I work with or works for me, my job is to make them as successful as possible."

This people-first approach manifests through three guiding principles:

  • Accountability: Holding yourself and your team responsible for deliverables and outcomes
  • Responsibility: Taking ownership of significant business challenges
  • Challenge: Embracing difficult problems that create meaningful business impact

"The way I do that is through three guiding principles, which are accountability, responsibility, and challenge," Fritz notes. "I want to be entrusted with significant responsibility that is helping to solve a significant business challenge."

These principles translate into a simple but powerful operational mantra: deliver on time, complete with excellence.

"I know those all sound like buzzwords, but they're not meant to be. And we don't treat them as such. We treat them as very simple guiding principles to keep us focused."

Putting it all together at Ironman 4x4

When Fritz joined Ironman 4x4 America, he found the perfect opportunity to apply all of these frameworks.

Ironman 4x4 is a global company that sells off-road parts and accessories for 4x4 vehicles (lift kits, suspension parts, bumpers, etc.). They have been around since the 1950s, but were new to the United States, so Fritz had the opportunity to find new ways to market their complex "fitment" products, or parts that must work with specific vehicle makes and models. This complexity creates both technical and marketing challenges that Fritz's team had to solve systematically.

His sales background gave him an invaluable perspective on marketing effectiveness. "If you spend any time in sales, that means you're around customers, whether those are B2B or B2C customers. And you learn what's important to them."

This customer proximity taught him the critical principle of "show me, don't tell me." Rather than relying on feature lists or industry awards, effective marketing demonstrates value through customer experiences and outcomes.

"We always, in both sales and marketing, it's easy to get into the trap of just talking, talking, talking, describing stuff, talking about features and benefits. Talking about the industry's best. Nobody cares about your industry. They care about how your product or service is going to impact them."

The key to marketing complex products, Fritz knew, is understanding how customers think about their problems. Rather than leading with technical specifications, the focus should be on the customer's end goal and the emotional drivers behind their purchase decisions.

Fritz emphasizes the importance of demonstrating value rather than just describing it: "Really, visual storytelling, video storytelling, placing the customer in the scene so they understand your value. That ability comes from firsthand experience of seeing that happen in the sales arena."

A data-driven website replatforming

His POV shaped everything he was involved in at Ironman 4x4 America, from new product introduction processes to website optimization. Fritz implemented structured new product integration toll gates with clear deliverables and cross-functional accountability, ensuring every product launch was executed with precision across creative, digital, and channel marketing.

His customer-centered thinking and frameworks proved essential when his team tackled a complex website migration from an outdated platform to Shopify. The project was based on their understanding that a website change was necessary to better serve their audience and increase ecommerce sales.

Working with The Good on a DXO Program™, the Ironman 4x4 team executed the redesign and replatforming with data-driven methodology. Rather than relying on opinions about what the site should look like, they embraced rapid prototyping and continuous testing.

"Any decision made without data is just an opinion, right?" Fritz notes, referencing CEO Luke Schnacke's philosophy.

"We try to be very data-driven, which is why it was so important for us to work with The Good, to get that data and share it with the team managing the website replatforming so that they were making data-driven decisions on design and functionality."

They didn’t wait for a “perfect website” to figure out what customers wanted. They tested and got feedback throughout the entire process to make sure they were developing the right ideas.

"I realized we were never going to do it perfectly," Fritz recalls. The team was getting bogged down in opinions about checkout processes, product customizers, and overall site design. "We could end up using half our development budget on building something that doesn't perform."

"Ultimately, we agreed to launch and then test the heck out of it. We didn't want to overburden the development pipeline with projects that don't have a financial impact."

This represents a fundamental shift in thinking. They went from trying to build the perfect site to building a testable foundation for continuous improvement.

The beauty of working with The Good in this situation, Fritz explains, was "the rapid prototyping, the test and learn. We could very quickly get feedback and iterate and then test and learn again."

Multiplying results through partnership

Leveraging an external partnership accelerated progress beyond what internal resources could achieve alone and held the team accountable to the frameworks and goals of staying user-centered and data-driven.

"If you're not an expert, I would recommend doing a website project with a company like The Good. It wasn't a cost, it was an investment," Fritz emphasizes. "And I think that Ironman 4x4 is the beneficiary of the investment that they made with The Good as they migrated over to Shopify and learned about what customers would like."

The partnership enabled intentional, studied testing with proper dependencies and measurable results tracking.

"That whole test and learn methodology is done in a very structured, deliberate way. Making changes in a waterfall, with the proper dependencies articulated, and then tracking the measurable benefits of changes, and then tweaking accordingly from there."

This approach breeds confidence because it's entirely data-driven, removing guesswork from critical business decisions.

Lessons for marketing and sales leaders

For marketing and sales leaders looking to build similar operational excellence, Fritz's approach provides a roadmap: start with principles, understand your customers deeply, make decisions based on data, and never underestimate the power of strategic partnerships to unlock potential.

Start with principles, not tactics

Before implementing any marketing or optimization program, establish clear guiding principles. Fritz's framework of accountability, responsibility, and challenge provided a foundation that influenced every decision and created lasting organizational change.

Understand your customer's next best alternative

Move beyond feature-benefit messaging to understand what your customers would do if your solution didn't exist. This "next best alternative" thinking is the foundation of truly differential value propositions.

Convert resistance through understanding

When facing organizational resistance to change, focus on understanding stakeholder concerns rather than pushing solutions. Meet people where they are and demonstrate value in their language.

Embrace data-driven decision making

Resist the temptation to rely on opinions or best practices. Instead, create structured testing methodologies that let customer behavior guide optimization decisions.

Invest in external partnerships strategically

Recognize when external expertise can accelerate progress. The right partnerships provide capabilities and perspectives that internal teams may not possess, ultimately delivering better results faster.

Starting an optimization journey

Fritz's approach to building and scaling teams, including Ironman 4x4's US marketing operations, demonstrates how principled leadership, customer-centric thinking, and strategic partnerships can create sustainable competitive advantages.

"There's no obstacle too big that can't be overcome with data and optimization, right?" Fritz states emphatically. "The whole point of being data-driven and optimizing is to get time back and to become more efficient."

His advice for other leaders facing similar challenges?

"Get to yes. Figure out how to do it. Don't say, this is why I can't do it. Say this is how I'm going to do it. Here are things I need to do in order to do it. Then hold yourself accountable. Make it happen. Do it."

The secret, according to Fritz, lies in celebrating small wins that compound over time: "Little steps, I always like to say, celebrate the little wins. Go after the little wins because they compound on one another and then all of a sudden you're gonna look back and go, holy mackerel, I can't believe I am where I am."

Consistency ties it all together: "And it starts with data as your foundation and optimization as the accelerator."

For ecommerce leaders looking to build similar operational excellence, Fritz's framework provides a proven template: establish clear principles, understand customer problems deeply, make data-driven decisions, and never underestimate the power of strategic partnerships to accelerate growth.

Ready to optimize your ecommerce experience with data-driven methodology? Learn more about The Good's Digital Experience Optimization Program™ and discover how strategic partnerships can unlock your growth potential.


The Good helps ecommerce brands like Ironman 4x4 optimize their digital experiences through research-backed testing and strategic partnerships. Our team combines deep technical expertise with proven methodologies to deliver measurable results for growing brands.

The post Fritz O’Connor Stays User-Centered and Leads with Data During Uncertain Times appeared first on The Good.

]]>
How Does Experimentation Support Product-Led Growth? https://thegood.com/insights/experimentation-product-led-growth/ Mon, 25 Aug 2025 19:00:23 +0000 https://thegood.com/?post_type=insights&p=110784 The product-led growth (PLG) playbook is no longer a secret. Free trials, frictionless onboarding, viral mechanics. Many SaaS companies are following the same script. Yet despite implementing all the product-led growth best practices, most companies leveraging these strategies hit a growth plateau, watching competitors with seemingly similar products pull ahead. Here’s what they’re missing: the […]

The post How Does Experimentation Support Product-Led Growth? appeared first on The Good.

]]>
The product-led growth (PLG) playbook is no longer a secret. Free trials, frictionless onboarding, viral mechanics. Many SaaS companies are following the same script. Yet despite implementing all the product-led growth best practices, most companies leveraging these strategies hit a growth plateau, watching competitors with seemingly similar products pull ahead.

Here’s what they’re missing: the most successful product-led companies don’t just follow the playbook. They rewrite it based on what their actual users reveal through experimentation.

While everyone else copies best practices, companies that layer experimentation into their PLG strategy are discovering the specific insights that accelerate their growth. In a world where everyone has access to the same tactics, the ability to learn about your own users (and do it faster) becomes a moat.

Companies like Booking.com, Netflix, and Amazon didn’t achieve their dominance by following conventional wisdom. They made experimentation central to their success, running thousands of experiments annually to optimize their user experience. And you don’t need their resources to adopt their approach.

What is product-led growth?

Product-led growth is a strategy that emphasizes the product itself as the primary driver of customer acquisition, conversion, and retention.

Traditionally, companies have relied on sales and marketing tactics to create leads and drive customer adoption. Ads and websites had to do most of the selling, and the onus was on the potential user to read ads, navigate websites, choose between feature matrices, and, at times, go through a complicated sales process (on or off-site).

In a product-led growth model, companies remove as many obstacles as possible to acquiring free registered users. This approach often involves offering a free or freemium version of the product, allowing users to experience its value before committing to a paid subscription.

[Infographic: how product-led growth differs from traditional sales-led models]

If the experience is good enough to keep them using it, and the paid features are valuable enough, then the hope is that users will ultimately convert into paying customers. In this way, the product serves as the main vehicle for customer acquisition and expansion.

Just as a dealership lets you test drive a car, PLG companies let you test drive their product and discover the value on your own before making a purchase decision.

Companies that successfully implement a product-led growth strategy often benefit from increased customer loyalty, higher conversion rates, lower customer acquisition costs, and sustainable long-term growth.

The shift from “launch and learn” to “test and learn”

Plenty of companies, between product-market fit and scale, run their growth strategies on a “launch and learn” philosophy. They build features based on hunches, ship them to users, then analyze the results afterward. This approach can work, but when operating on a product-led growth model, product decisions carry outsized impact. The product experience influences pretty much every KPI from acquisition to retention.

When you launch first and learn later, you’re essentially gambling with your users’ experience. Every poorly conceived feature, every friction point, every missed opportunity represents lost revenue and potentially churned customers. More importantly, it represents wasted development resources that could have been deployed more strategically.

This is where experimentation comes in. Instead of “launch and learn,” companies can shift to “test and learn.” This means experimentation and analysis of results happen pre-launch, not after. Changes are validated with real users before full implementation, minimizing risk and maximizing ROI.

Experimentation before implementation gives you an understanding of real customer behavior and clearly indicates how you can repeat results by uncovering the why behind those behaviors.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

How experimentation amplifies PLG success

Experimentation is only helpful to a product-led growth strategy when it is done right. So what are some of the ways to implement that will amplify PLG success?

1. Systematic optimization across the customer journey

The most effective approach to PLG experimentation uses frameworks like ROPES (Registration, Onboarding, Product, Evangelize, Save) to systematically optimize each stage of the customer experience. Rather than randomly testing features, successful companies identify specific levers within each stage and experiment systematically.

For example:

  • Registration phase: Testing form length, social proof elements, and value propositions
  • Onboarding phase: Experimenting with tutorial formats, progress indicators, and time-to-value optimization
  • Product phase: Testing feature discoverability, UI changes, and user flow improvements
  • Evangelize phase: Optimizing sharing mechanisms, referral programs, and viral loops
  • Save phase: Testing retention tactics, upgrade prompts, and churn prevention strategies

This systematic approach ensures that experimentation efforts are strategic rather than scattered, creating compounding improvements across the entire user journey.

2. Accelerated learning through parallel testing

Traditional A/B testing approaches test one hypothesis at a time, which can drastically slow your learning velocity. Advanced PLG companies run multiple experiments simultaneously across different parts of their product experience, dramatically increasing the rate at which they gather insights.

The key to successful parallel testing is ensuring experiments don’t interfere with each other. As Natalie Thomas, our Director of UX and Strategy, explains: “It’s important to look at behavior goals to assess why your metrics improved after a series of tests. So if you’re running too many similar tests at once, it will be difficult to pinpoint and assess exactly which test led to the positive result.”

Successful parallel testing requires:

  • Creating testing roadmaps that cover independent product areas
  • Building small, cross-functional teams assigned to each area
  • Establishing clear metrics and success criteria for each test
  • Implementing proper statistical controls to avoid interference
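One concrete way to make "clear metrics and success criteria" operational is a standard two-proportion z-test on each experiment's conversion rate. This sketch uses textbook statistics and made-up sample numbers; it isn't The Good's specific methodology:

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing conversion rates of control (a) and variant (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up numbers: control converted 200 of 5,000 visitors, variant 260 of 5,000.
z, p = two_proportion_z(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Running this per experiment (with each test assigned to an independent traffic segment) is one simple guard against declaring winners on noise when many tests run in parallel.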

3. Rapid experimentation for faster innovation

Speed matters in PLG. Market opportunities disappear quickly, and user expectations evolve constantly.

So, one of the main objections to implementing an experimentation strategy is that testing cycles often take weeks or months to complete. But high-performing PLG companies have found ways to cut this time in half without losing statistical rigor. Key strategies include:

Supplementing A/B Tests with Rapid Testing: Not every hypothesis requires a full A/B test. Qualitative research, user interviews, and rapid prototyping can validate concepts quickly before investing in development.

Modular Testing Approaches: Instead of starting from scratch each time, successful teams create reusable components like design templates, testing frameworks, and analysis processes to reduce setup time.

AI-Powered Research: Using artificial intelligence as a research assistant to speed up data collection, user recruitment, and insight generation.

Prioritization Frameworks: Implementing systematic prioritization (like the ADVIS’R framework) to ensure high-impact experiments get fast-tracked through the process.

4. Data-driven feature development

Experimentation helps PLG companies avoid the biggest roadmap mistake: prioritizing low-impact features. Instead of building what seems logical, experimentation reveals what actually drives user behavior and business metrics.

This is particularly important as you scale beyond basic PLG practices. When you’re competing with other product-led companies, the quality of your feature decisions becomes a key differentiator. Companies that systematically test and validate features before full development consistently outperform those that rely on intuition.

The most successful approach combines quantitative testing with qualitative insights. This means not just measuring what users do, but understanding why they do it. This deeper understanding enables teams to build features that truly resonate with users rather than features that just check boxes.

5. Building an experimentation-first culture

An outcome of adding experimentation to a product-led growth strategy is that it will help build the practice into your company culture. To do that, you can follow a few key steps.

Start with infrastructure

Before you can effectively use experimentation to support PLG, you need the right infrastructure. This includes:

  • Testing platforms that can handle both simple A/B tests and complex multivariate experiments
  • Analytics systems that provide real-time insights into user behavior
  • Data pipelines that connect user actions to business outcomes
  • Collaboration tools that enable cross-functional teams to work together effectively

Establish clear processes

Successful experimentation requires discipline. Teams need clear processes for:

  • Hypothesis formation and validation
  • Test design and statistical planning
  • Resource allocation and project management
  • Results analysis and decision-making
  • Knowledge sharing and organizational learning

Foster cross-functional collaboration

The most impactful experiments often come from unexpected sources. Engineers closest to the code understand technical constraints and opportunities. Designers see user experience friction points. Customer success teams hear directly from users about pain points.

Creating space for these diverse perspectives to contribute to experimentation efforts often leads to breakthrough insights that no single team would discover independently.

The compound effect of systematic experimentation

What makes experimentation so powerful for PLG companies is its compound effect. Each successful experiment doesn’t just improve one metric. It teaches you something about your users that informs future experiments.

Over time, this creates an accelerating cycle of improvement. Companies that have been systematically experimenting for years possess a deep, nuanced understanding of their users that newcomers can’t easily replicate. This understanding becomes a sustainable competitive advantage.

Moreover, experimentation capabilities themselves improve with practice. Teams get faster at designing tests, more sophisticated in their analysis, and better at translating insights into action. The infrastructure and culture that support experimentation become organizational assets that compound over time.

Experimentation as your PLG multiplier

Product-led growth without experimentation is like driving with your eyes closed. You might reach your destination, but probably not efficiently, and certainly not safely. Experimentation transforms PLG from a collection of best practices into a systematic approach to user-centered product development.

The companies that win in today’s competitive SaaS landscape aren’t just those with the best products; they’re those that can consistently improve their products based on real user insights. They’ve made experimentation not just a tactic, but a core organizational capability.

Ready to transform your PLG strategy with systematic experimentation? The Good specializes in helping product-led companies build experimentation capabilities that drive sustainable growth.

Our Digital Experience Optimization Program™ combines strategic frameworks like ROPES with hands-on experimentation support to help you uncover the specific insights your business needs to scale. Let’s explore how experimentation can accelerate your growth →

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

The post How Does Experimentation Support Product-Led Growth? appeared first on The Good.

Regulated SaaS Companies Need a Different Approach to Growth. What Actually Works? https://thegood.com/insights/regulated-saas/ Fri, 08 Aug 2025 18:36:19 +0000 https://thegood.com/?post_type=insights&p=110753 The conversation happens on nearly every discovery call we have with a leader tasked with optimizing SaaS or software for regulated industries. It starts with optimism about growth potential, then quickly shifts to the reality of their constraints. Healthcare software companies can’t freely experiment with patient data. Financial technology firms face strict compliance requirements that […]

The conversation happens on nearly every discovery call we have with a leader tasked with optimizing SaaS or software for regulated industries. It starts with optimism about growth potential, then quickly shifts to the reality of their constraints.

Healthcare software companies can’t freely experiment with patient data. Financial technology firms face strict compliance requirements that limit onsite testing capabilities. Government contractors operate under security clearances that restrict user research. Insurance platforms must navigate complex regulatory frameworks. HR and ATS software handle sensitive employee data that requires careful privacy protection.

Experimentation seems nearly impossible under these circumstances, and the product-led growth strategies these teams see working for companies riding exponential growth waves like Linktree or Lovable can’t work for them.

These regulated SaaS companies still need to grow. They have the same fundamental challenges as any SaaS business: converting leads, reducing churn, and improving user experience. But the traditional growth toolkit doesn’t fit their reality, so let’s explore what can work.

The problem with product-led growth in regulated industries

Product-led growth has become the gold standard for SaaS success.

Companies like Canva, Grammarly, and Spotify have proven that letting users experience your product before purchasing leads to higher conversion rates, lower customer acquisition costs, and sustainable growth.

The strategy is to remove obstacles to product adoption, offer free trials or freemium versions, and let the product sell itself. These companies often move quickly and test new features relentlessly as a way to “hack” growth.

The product-led growth playbook includes:

  • Free trials and freemium models that give users immediate product access
  • Continuous A/B testing on live user experiences
  • Extensive user tracking and behavioral analytics to optimize conversion funnels
  • Rapid iteration based on user feedback and behavior data
  • Self-service onboarding that guides users to their “aha moment”
  • Viral growth loops, where users invite others or share content

And it works…for many. But regulated SaaS companies see these success stories and struggle to replicate them.

How do you offer a free trial for an HR tool that has to be rolled out across an entire organization to be useful? How do you minimize sign-up friction for a fintech software that requires bank information to function?

Experimenting with new features is too risky when a system failure, or a disruption to emergency calling in telecommunications, could result in massive fines.

Sometimes the stakes are too high for the product-led growth best practices that we see working in less-restrictive industries.

Regulated SaaS challenges are unique, and their growth solutions should be too

The challenges for this subset of SaaS companies are real and varied.

Compliance and privacy restrictions: Healthcare companies can’t freely test with patient data. Financial services face strict data handling requirements. Government contractors operate under security clearances.

Low traffic volume: Many regulated SaaS companies serve niche markets with limited user bases, making traditional A/B testing statistically impossible.

Long testing cycles: Collecting enough data across regions and customer segments to reach statistical significance can take years. And because different customers use different features in different locations, it’s difficult to design meaningful experiments that won’t disrupt service.

Risk-averse customers: Enterprise clients in regulated industries don’t want to be testing subjects for new features or experiences.

Resource constraints: Many regulated SaaS companies are highly technical but lack dedicated growth or UX teams.
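To make the low-traffic and long-cycle challenges concrete, here’s a rough Python sketch of how long a conventional A/B test would need to run for a niche product. All numbers (baseline conversion rate, lift, weekly traffic) are illustrative assumptions, not client data:

```python
import math

def required_sample_per_variant(baseline_rate, min_detectable_lift,
                                z_alpha=1.96, z_beta=0.84):
    """Approximate per-variant sample size for a two-proportion z-test
    (two-sided alpha = 0.05, power = 0.80)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar)) +
                 z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Assumed inputs: 4% baseline conversion, detecting a 10% relative lift,
# with only 600 trial-eligible visitors per week.
n = required_sample_per_variant(baseline_rate=0.04, min_detectable_lift=0.10)
weeks = math.ceil(2 * n / 600)  # two variants share the weekly traffic
```

Under these assumptions, the test needs tens of thousands of users per variant, which is more than two years of traffic. That’s why fast-moving A/B programs rarely fit niche, regulated products.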

Unique challenges require unique solutions, and that is what The Good can provide.

The alternative: off-site experiment-led growth

The solution isn’t to abandon growth optimization. It’s to use different methods that work within regulatory constraints.

This is where off-site experiment-led growth becomes the game-changer.

Experiment-led growth is a strategic approach that relies on continuous research, experimentation, and data-driven decision-making to drive business improvements. It allows teams to rapidly iterate on ideas that improve UX, marketing, and more.

Regulated SaaS can add an extra layer to experiment-led growth by taking things off-site or out of the product experience. Moving the growth tactics and experimentation away from the regulated environment and live user base gives teams the chance to make changes freely and quickly, gauge user reaction to those changes, and either launch with confidence or kill the ideas.

While product-led growth relies on in-product experimentation with real users, off-site experiment-led growth validates hypotheses and optimizes experiences before they ever touch your production environment. Instead of letting users test drive your product to discover value, you test drive your assumptions about users to deliver value immediately.

This approach flips the model to accommodate the constraints that regulated SaaS companies face. You no longer have to iterate on live systems with real customer data; you can run experiments in controlled environments that neither compromise compliance nor risk customer relationships. You gather the same kinds of insights that drive product-led growth success, but through methods that work within your constraints.

The result is a growth strategy that’s both data-driven and compliant, giving regulated SaaS companies access to the same optimization advantages that unrestricted companies enjoy, just through different means.


Off-site experiment-led growth tactics

Here are a few of the methods we use to deliver optimization outcomes for companies with the challenges and constraints outlined earlier in the article.

User testing

Because of the difficulty in getting customer data, there can be a disconnect between product teams and users.

Lookalike user testing solves this by bringing external participants who match the ideal customer profiles through your live experience. They complete tasks while thinking out loud, revealing friction points and confusion without exposing any sensitive data or requiring system changes.

This helps understand user behavior patterns, identify conversion barriers, and validate solutions, all without touching your production environment or compromising compliance.

AI-powered heatmaps and analytics

AI-generated heatmaps can predict user behavior with 92% accuracy without requiring any actual user data. These tools can analyze your interface and predict where users will look, what they’ll miss, and how long they’ll engage with different elements.

This is particularly valuable for regulated companies because you can understand user attention patterns and optimize layouts before the system is used.

Rapid testing

Experimentation is a proven way to get essential feedback on new features or website changes. And with A/B testing off the table in many regulated industries, rapid testing can fill in the gaps.

Unlike traditional A/B testing, rapid testing doesn’t require code changes, live traffic, or long research cycles. Instead, it uses a combination of techniques to validate hypotheses and inform decisions before anything goes live.

Rapid experimentation is not a one-size-fits-all process. Different scenarios call for different types of tests. Here are some common methods:

  • First-click tests: First-click tests evaluate whether users can intuitively find the primary action or information on a page.
  • Tree tests: Tree testing is a usability technique that helps you understand how users navigate through your website or app’s structure.
  • 5-second tests: 5-second tests assess a user’s immediate impression of a design or message.
  • Design surveys: Design surveys collect qualitative feedback on wireframes or mockups.
  • Preference tests: This test involves showing users two or more design variations and asking which they prefer and why. It’s perfect for narrowing down visual or messaging options before launching a formal test.
  • Card sorting: Card sorting is a research technique used to understand how users organize and categorize information.

These are just six of the many types of rapid experimentation.

While none deliver a 1:1 result when compared to A/B or multivariate testing, rapid experimentation offers a way for regulated SaaS companies to focus their development resources on work that has already shown positive signals from users.

For a tangible example, imagine a company struggling with positioning (a common challenge in technical, regulated industries). Five-second testing provides immediate feedback on messaging effectiveness. Users see your page for five seconds, then recall what they remember.

Competitive intelligence and market research

Structured competitive analysis and market research don’t require access to your own user base.

Understanding how competitors position themselves, what messaging resonates in your industry, and what user expectations exist can inform optimization decisions.

Also, studying growth strategies from businesses in similar industries with compliance or other constraints offers a starting point for new ideas that you can rapid test later on.

Getting started with optimization

Optimization can be intimidating and complex for regulated SaaS companies. Based on experiences working with teams like yours, here’s how to get started implementing growth optimization within your constraints.

1. Start with an audit or assessment of your current situation

Before making any changes, conduct a comprehensive audit of your current digital experience. This includes:

  • Technical tracking setup to understand what data you can legally collect
  • User journey mapping to identify critical conversion points
  • Competitive analysis to understand industry standards and opportunities
  • Stakeholder interviews to align on growth priorities and compliance requirements

2. Implement the methodologies we covered

Focus on techniques that provide insights without requiring on-site or in-product experimentation:

  • User testing with 5-7 participants per user type (you’ll get 80% of insights from this small sample)
  • Message testing to validate positioning and value propositions
  • Prototype testing for new features or flows before development
  • Heat mapping to understand attention patterns and interaction likelihood

3. Prioritize based on impact and compliance

Create a roadmap that balances growth potential with regulatory requirements. Focus on:

  • High-impact, low-risk optimizations that don’t require system changes
  • Messaging and positioning improvements that can be implemented quickly
  • User experience enhancements that reduce friction without compromising security
  • Qualification improvements to ensure you’re attracting the right prospects

4. Build your internal capabilities and outsource what you can’t

Many regulated SaaS companies lack dedicated growth resources. Consider:

  • Training technical teams on user experience principles
  • Establishing research processes that work within compliance frameworks
  • Creating feedback loops between customer-facing teams and product development
  • Implementing regular optimization cycles that don’t disrupt core operations
  • Outsourcing what you just can’t manage internally

Growth within constraints isn’t impossible

Regulated SaaS companies don’t need to accept mediocre growth because of their constraints. They need different approaches that work within their reality.

The key is recognizing that optimization isn’t restricted to product-led strategies or A/B testing. Understanding your users, validating your assumptions, and making data-driven decisions can deliver outcomes that are just as impactful.

Whether you’re in healthcare, financial services, government, or any other regulated industry, growth optimization is possible. It just requires the right toolkit and a willingness to think beyond traditional approaches.

Making off-site experiment-led growth work within your regulatory constraints starts with a conversation. Learn what’s actually possible when you have the right methodology and expertise guiding your optimization efforts by getting in touch with our team.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post Regulated SaaS Companies Need a Different Approach to Growth. What Actually Works? appeared first on The Good.

Building A Monetization Strategy That Doesn’t Leave Untapped Revenue in Your User Base https://thegood.com/insights/monetization-strategy/ Thu, 17 Jul 2025 15:22:34 +0000 https://thegood.com/?post_type=insights&p=110736 Product leaders are rightfully obsessed with acquisition. They pour resources into new sign-ups and track monthly active users religiously. But over many years of working with SaaS teams, there is something counterintuitive we’ve learned about this approach. The companies that scale fastest aren’t always the ones acquiring the most users. They’re often the ones who […]

Product leaders are rightfully obsessed with acquisition. They pour resources into new sign-ups and track monthly active users religiously.

But over many years of working with SaaS teams, there is something counterintuitive we’ve learned about this approach.

The companies that scale fastest aren’t always the ones acquiring the most users. They’re often the ones who build monetization strategies that focus on their existing user base. As users find more value in the tool and increase usage, pricing scales accordingly.

Realistically, every SaaS tool will hit a growth plateau. There aren’t infinite users that will find value in your product, even though we all wish there were.

The goal is to build growth into your monetization strategies so you don’t leave any untapped revenue in your existing user base. This ensures you don’t reach a premature growth plateau once net new users become stagnant.

The fundamental shift in monetization strategy from seats to value

Before we get started, throw your traditional SaaS monetization playbook out the window.

For years, companies have relied on seat-based pricing because it made sense. With each new hire, a new account or seat was purchased for tools. Revenue grew linearly with team size.

But now one person can do the work of two or three people. AI tools, automation, and productivity software mean that the relationship between users and value creation has completely shifted. When your customers can accomplish twice as much with half the team, seat-based pricing isn’t sustainable.

Smart companies are pivoting to value-based extraction. Instead of charging for the number of people using the software, they charge for the value it creates. This isn’t just about switching to usage-based pricing; it’s about fundamentally rethinking how you capture the value your product delivers.

Consider HubSpot’s evolution. Instead of sticking to their standard seat-based pricing model as the market has evolved, they’ve created a dynamic pricing system. Users can pay for seats at their specific account tier, but also have a layer of contact-based pricing, aligning cost with the actual value delivered rather than just the number of users.

They’ve also recently added token-based pricing for certain functions in the tool, like marketing email sends, AI features, and API calls. These changes allow them to maintain revenue growth even as customers reduce their seat count.

You’re trying to capture more of the consumer surplus

Most SaaS tools have a consumer surplus. There are features or outcomes that customers would pay more for, but don’t have to because of your pricing model.

You can never eliminate all surplus (you need happy customers), but you can likely capture more of it through strategic segmentation and value extraction.

Think about your demand curve. It’s not a straight line. Its slope varies by customer segment, use case, and willingness to pay. Most companies set one or two price points and leave massive value on the table. The companies that scale create multiple packages along that curve.

Netflix understood this when it evolved from a single $7.99 plan to Basic, Standard, and Premium tiers. Each tier captures different segments of willingness to pay while allowing customers to self-select into the option that works for them. However, the real insight wasn’t in the tiers themselves, but rather an understanding that different customer segments valued different features. Knowing that allowed Netflix to extract more value from customers who were willing to pay more while keeping price-sensitive customers from defecting.

Research changes everything

To get started on a monetization strategy based on value and capture more of the consumer surplus, companies have to build their understanding of what customers are willing to pay for.

Research from monetization and pricing expert Madhavan Ramanujam finds that roughly 20% of features drive 80% of willingness to pay. The challenge is making sure you aren’t over-indexing on features that customers don’t actually value while underdeveloping the ones that drive revenue.

The solution is systematic research that reveals what customers actually want to pay for. Here are three methods to make it happen:

Max diff analysis: Present customers with feature lists and ask them to identify the most and least important items. With enough volume, you can rank features by their impact on willingness to pay. Features that over 50% of customers want become your “leader” features or the core value proposition that justifies your price point.

Anchoring questions: Instead of asking customers what they’d pay (which doesn’t work), ask them to compare your value to a known competitor. “If Salesforce brings your team 100 points of value, where do we rank?” This gives you relative value positioning without the discomfort of direct pricing questions.

Van Westendorp pricing: Ask customers four questions about price sensitivity: What’s acceptable? What’s expensive but you’d consider it? What’s so expensive that you wouldn’t consider it? What’s so cheap that you’d question the quality? This reveals the psychological price boundaries for different customer segments, providing a window of tenable prices that capture both the price-sensitive and high-willingness-to-pay corners of the market.
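As a toy illustration of how Van Westendorp answers can be turned into a price band, here’s a simplified Python sketch. The survey responses are fabricated, and the “acceptable range” rule below is a deliberate simplification of the full four-curve intersection analysis:

```python
# Each tuple is one respondent's answers, in dollars:
# (too_cheap, cheap_but_good, getting_expensive, too_expensive)
responses = [
    (10, 20, 45, 70),
    (15, 25, 50, 80),
    (8, 18, 40, 65),
    (12, 22, 55, 90),
    (10, 30, 60, 85),
]

def acceptable_range(responses):
    """Return (low, high): the price band in which no more than half of
    respondents call the price 'too cheap' or 'too expensive'."""
    prices = range(min(r[0] for r in responses),
                   max(r[3] for r in responses) + 1)
    n = len(responses)
    acceptable = [
        p for p in prices
        if sum(p <= r[0] for r in responses) / n <= 0.5   # too cheap
        and sum(p >= r[3] for r in responses) / n <= 0.5  # too expensive
    ]
    return acceptable[0], acceptable[-1]

low, high = acceptable_range(responses)
```

With this fabricated data, the tenable window runs from the low teens to the high seventies, and you would then segment within that band rather than pick a single point.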


The Shopify monetization strategy: how to scale with your customers

An innovative and extremely effective monetization strategy allows you to grow with your customers. Shopify cracked this code by creating a model where their revenue increases as their customers become more successful. Instead of charging an ever-larger flat monthly fee, they take a percentage of gross merchandise volume (GMV).

This creates a virtuous cycle: Shopify is incentivized to help their customers succeed because customer success directly translates to revenue growth. When a merchant goes from $10,000 to $100,000 in monthly sales, Shopify’s revenue from that customer increases 10x.

Smaller businesses benefit from a proportional cost as they get started, and if businesses leave once they grow, Shopify doesn’t mind.

Shopify actually optimizes for this churn, not against it. As Archie Abrams, VP of Product and Head of Growth at Shopify, explains: “The way we think about churn [goes] back to Shopify’s mission and what we want to do, which is to increase the amount of entrepreneurship on the Internet.”

Instead of trying to prevent customers from leaving, Shopify focuses on lowering barriers to entry so more entrepreneurs can try starting businesses. They know most will fail, but the few who succeed generate massive value. This counterintuitive approach has helped Shopify power over 10% of U.S. e-commerce with $235 billion in GMV in 2023.

The beauty of this model lies in its retention through value creation, rather than friction. Traditional SaaS companies worry about churn because losing a customer means losing all their revenue. But when your revenue scales with customer success, churn becomes less of a concern. Your most successful customers are worth 10x or 100x more than your average customer, creating a natural buffer against churn.

Finding your untapped revenue

The process of discovering untapped revenue in your user base can be synthesized into a few steps:

Step 1: Segment your demand curve

Different customer segments have different willingness to pay. Enterprise customers might value security and compliance features, while SMBs prioritize ease of use and cost. Map these segments and understand what each values most.

Step 2: Identify value gaps

Look for places where customers are getting significant value but paying relatively little. These are your biggest opportunities for revenue expansion. Often, these are found in features that save customers time or help them make money.

Step 3: Create extraction mechanisms

Build pricing tiers, usage limits, or premium features that allow high-value customers to pay more for the value they receive. The key is making this feel like a fair exchange rather than a penalty.

The most effective monetization strategies combine multiple approaches. For example:

  • Base + usage: Provide a predictable subscription base with usage-based charges for additional value. This gives customers cost certainty while allowing you to capture upside from heavy users.
  • Tiered value: Create pricing tiers based on customer segments and use cases, not just feature lists. Each tier should feel designed for a specific type of customer.
  • Expansion revenue: Build mechanisms for customers to naturally increase their spending as they grow. This could be through additional seats, increased usage, or premium features.
  • Value-based upgrades: Tie pricing increases to value delivered, rather than just features added. When customers see clear ROI, they’re willing to pay more.
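The “base + usage” combination above is straightforward to model. A hypothetical Python sketch, where tier names, allowances, and prices are all invented for illustration:

```python
# Invented pricing tiers: flat subscription plus metered overage.
TIERS = {
    "starter": {"base": 49,  "included_events": 10_000, "overage_per_1k": 5.0},
    "growth":  {"base": 199, "included_events": 50_000, "overage_per_1k": 3.5},
}

def monthly_bill(tier, events_used):
    """Flat base fee plus a per-1,000-event charge beyond the allowance."""
    plan = TIERS[tier]
    overage_events = max(0, events_used - plan["included_events"])
    overage_blocks = -(-overage_events // 1000)  # ceiling division
    return plan["base"] + overage_blocks * plan["overage_per_1k"]

bill = monthly_bill("growth", events_used=62_500)
```

Light users get cost certainty from the base fee, while heavy users naturally expand revenue: the 12,500 events over allowance here add an overage charge on top of the subscription.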

Step 4: Test and iterate

Pricing optimization is an ongoing process. Test different approaches, measure customer response, and iterate based on data. The best monetization strategies evolve continuously.

A monetization strategy that works for the long term

The future of SaaS monetization is about aligning pricing with value creation rather than resource consumption. The untapped revenue in your user base is real, measurable, and accessible, and approaching it with a value-based strategy will help you capture it.

At The Good, we specialize in helping SaaS companies optimize their monetization strategies through data-driven research and strategic experimentation.

Our services can help you identify value gaps, design pricing experiments, and implement changes that drive meaningful revenue growth. Get in touch to learn how we can help you extract more value from your existing customers.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post Building A Monetization Strategy That Doesn’t Leave Untapped Revenue in Your User Base appeared first on The Good.

Why Feature Parity Isn’t Always the Goal: A Guide to Cross-device SaaS Strategy https://thegood.com/insights/feature-parity/ Wed, 02 Jul 2025 18:27:30 +0000 https://thegood.com/?post_type=insights&p=110703 Lots of SaaS product leaders believe feature parity is the holy grail. The assumption is that if users can do something on your desktop app, they should be able to do it on mobile, web, and in any other version of your tool as well. Your customers expect it, your competitors are doing it, so […]

Lots of SaaS product leaders believe feature parity is the holy grail. The assumption is that if users can do something on your desktop app, they should be able to do it on mobile, web, and in any other version of your tool as well. Your customers expect it, your competitors are doing it, so you’d better keep up.

This thinking is not only wrong, but also expensive and potentially harmful to your product strategy.

Today’s SaaS products exist across multiple “surfaces,” not just desktop and mobile apps, but also mobile web, browser extensions, widgets, and even smart TVs. Each surface represents a different way users can interact with your product, and each can serve a distinct purpose.

After working with dozens of scaling SaaS companies and analyzing surface strategies across hundreds of products, we’ve discovered that the most successful companies don’t aim for feature parity. Instead, they make deliberate, strategic decisions about which surfaces serve which purposes in their ecosystem.

Here’s the framework that’s helping product leaders at companies like Adobe, Slack, and emerging SaaS startups rethink their entire multi-surface strategy.

Organic growth spurs feature parity

The pressure to achieve feature parity stems from a fundamental misunderstanding of how users actually interact with different surfaces. Product teams often default to replicating their experience across surfaces without considering the strategic implications.

“Most products start with just one surface,” explains Natalie Thomas, Director of Strategy & UX at The Good. “Adobe started with a desktop app, and YouTube started on the web. Then they often bleed into other surfaces. The family of surfaces is likely to grow over time, and they are of different strategic importance.”

This organic growth pattern creates a dangerous assumption that every surface should eventually do everything the original surface does. But here’s what we’ve learned from analyzing successful SaaS ecosystems: the most strategic approach isn’t about matching features. It’s about defining distinct purposes.


The four strategic surface types every product leader should know

Rather than thinking in terms of feature parity, successful SaaS companies categorize their surfaces based on strategic purpose. This categorization is based on our analysis of high-performing SaaS ecosystems.

1. Replica surfaces

These are true feature-parity experiences where users expect identical functionality across platforms.

Example: Workplace productivity tools where users frequently switch between devices. Slack exemplifies this perfectly. You can upload documents, chat, huddle, and access virtually every feature across web, desktop, and mobile.

Slack as an example of replica surfaces, showing complete feature parity across devices

For collaboration tools, inconsistent experiences create friction in team workflows. Users expect to pick up exactly where they left off, regardless of device.

2. Utility surfaces

These platforms fundamentally can’t work without each other. One surface serves as a critical utility that supports the primary platform.

Example: TLDV’s Chrome extension functions as a utility for their web-based recording platform. “In this situation, we’re not really looking for feature parity in the Chrome extension because it really does serve as a utility that adds a lot of functionality and depth to what we are able to get out of the web experience,” notes Natalie.

TLDV example as a utility surface, showing how the Chrome extension strategically doesn’t have feature parity

Don’t waste product development resources building standalone functionality in utility surfaces. Their entire value comes from integration with the core platform.

3. Accessory/companion surfaces

These add value to the main platform but can’t function independently.

Example: Figma’s mobile app serves as a companion to their desktop design tool. Users can’t create designs on mobile, but they can preview prototypes and test user flows on actual devices.

Figma as an example of accessory/companion surfaces that adds value without feature parity

You can’t do anything without the main surface, but the accessory/companion adds value. The mobile app enhances the design process without attempting to replicate the full desktop experience.

4. Growth lever surfaces

These exist primarily to acquire new users, not to provide comprehensive functionality.

Example: Adobe’s free web tools, like online PDF converters, serve as growth levers. Users get limited functionality for free, experience the brand value, then convert to paid desktop or mobile experiences.

Adobe's free web tools act as growth levers rather than feature parity for their main tools

“A surface, especially one with very, very limited capabilities, can exist solely as a strategic growth lever. It doesn’t have to exist just to get feature parity or to add value to an existing platform. It can exist just to try to get new customers in the door,” explains Natalie.

What it looks like to intentionally limit feature parity

One of the most instructive examples of strategic surface limitation comes from Instagram’s deliberate choice to restrict posting capabilities on desktop. While it can frustrate users, it actually reveals Instagram’s strategic genius. By limiting posting to mobile, they:

  • Maintain their mobile-first brand identity
  • Prevent the platform from becoming a business publishing tool
  • Keep content creation spontaneous and authentic
  • Reduce operational complexity

Mobile-first continues to dominate 2025 SaaS trends, with companies prioritizing mobile user experiences over desktop feature replication.

The lesson? Sometimes the features you don’t build are more strategically important than the ones you do.

How to start building a surface strategy that avoids the feature parity trap

So, with all of this in mind, how do you build a great surface strategy? Instead of blindly building features across all surfaces, successful SaaS companies share a few strategies for making smarter surface decisions.

1. Let platform economics shape your strategy

Understanding how users discover and purchase your product should directly influence your surface strategy. The path differs dramatically between mobile apps and web/desktop experiences.

Mobile considerations:

  • App store optimization becomes critical
  • Apple retains approximately 30% of subscription revenue
  • Updates require user opt-in and are often batched
  • Attribution becomes increasingly difficult

Web/desktop considerations:

  • Direct-to-payment journeys possible
  • Immediate updates without user intervention
  • Better attribution tracking
  • More flexible pricing models

These fundamental differences should influence not just your pricing strategy, but also which surfaces you prioritize for different user segments.
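To make the economics concrete, here's a minimal sketch comparing net revenue per subscriber under the two models. The $20 price and the ~3% card-processing fee for direct web checkout are illustrative assumptions, not figures from this article; only the ~30% app store cut comes from the list above.

```python
# Illustrative comparison of net monthly revenue per subscriber.
# Assumptions for the sketch: $20/month price, ~30% app store cut,
# ~3% payment processing fee on direct web checkout.

def net_revenue(price: float, platform_fee_rate: float) -> float:
    """Revenue kept after the platform's cut."""
    return price * (1 - platform_fee_rate)

price = 20.00
mobile_net = net_revenue(price, 0.30)  # app store retains ~30%
web_net = net_revenue(price, 0.03)    # direct checkout, ~3% processing

print(f"Mobile net: ${mobile_net:.2f}")  # $14.00
print(f"Web net:    ${web_net:.2f}")     # $19.40
```

At these assumed rates, every mobile-acquired subscriber nets roughly 28% less per month, which is exactly the kind of difference that should shape which surfaces you prioritize for acquisition versus retention.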

2. Build where your users engage

How users engage with surfaces could shape your strategy. For example, mobile users are significantly more likely to opt into push notifications than desktop users.

While working on surface strategy for a leading SaaS company, our client shared, “Opt-in rates for push notifications on desktop are so low that the only avenue to do outreach to those existing dormant customers is through emails.”

In this case, the right call was to build push notification functionality into mobile only, because on desktop it was practically useless. The lesson applies across the board: build your retention features on surfaces where users actually engage, not where you think they should engage.

3. Design for authentication, not attribution

Cross-device attribution is getting harder thanks to privacy changes and cookie deprecation. Instead of fighting this trend with complex tracking, design surface experiences that get users logged in quickly.

“Once someone is logged in, all bets are off; we’ve got good information about them. But until then, they are anonymous and we’re generally not able to attribute data,” says Natalie.

This means prioritizing authentication flows over extensive anonymous functionality. In this case, depending on your growth initiatives, your surface strategy may prioritize guiding users toward logged-in states rather than providing comprehensive experiences for guest users.

4. Match your tools to your strategy

Most SaaS companies default to familiar tools like Google Analytics and Hotjar because they’ve historically focused on web experiences. But scaling to multiple surfaces requires different technology approaches.

Web-focused tools:

  • Google Analytics
  • Hotjar
  • Traditional A/B testing platforms

App-optimized tools:

  • Amplitude: Combines analytics and testing specifically for app experiences; allows product managers direct data access
  • Pendo: Integrates surveys, heat maps, and onboarding flows for mobile apps
  • Adobe Journey Optimizer: Enables in-product testing across surfaces

Choose tools that support your surface strategy rather than forcing your strategy to fit your existing tool stack. Surface strategy is a business decision that should be driven by user needs, revenue models, and competitive positioning, not technical capability.

5. Define success differently for each surface

A growth lever surface shouldn’t be measured the same way as a full-featured replica surface. Define success metrics that align with each surface’s strategic purpose:

  • Growth surfaces: Conversion rate to core platform; cost per qualified lead
  • Utility surfaces: Integration success rate; core platform usage lift
  • Companion surfaces: Feature adoption in main platform; user satisfaction
  • Replica surfaces: Cross-device workflow completion; feature usage parity

Stop measuring everything the same way. Different surfaces serve different purposes and should be evaluated accordingly.

6. Start with purpose, not capability

The wrong question: “Can we build this feature on mobile?” The right question: “Should this feature exist on mobile given our strategic purpose for this surface?”

Before building anything new, clearly define what strategic purpose each surface serves:

  • Growth lever: Limited functionality to drive awareness and conversion
  • Utility: Essential support that makes the core platform more valuable
  • Companion: Unique value that leverages platform-specific capabilities
  • Replica: Full feature parity for seamless cross-device workflows

Once you’re clear on purpose, feature decisions become much easier to make.

Building everything, everywhere, isn’t the answer. Many product teams default to feature parity because it feels “fair” to users. In reality, this often creates mediocre user experiences across all surfaces instead of excellent user experiences where they matter most.

Getting started with a surface strategy that doesn’t over-emphasize feature parity

The companies winning in the multi-surface SaaS landscape aren’t the ones with the most features across the most platforms. They’re the ones making the smartest strategic decisions about where to focus their development resources.

If you’re struggling with where to start, here are a few ideas:

  • Start with one surface audit. Pick your least strategic surface and honestly evaluate whether you’re over-building functionality that doesn’t serve your business goals or user needs in the name of feature parity.
  • Question your assumptions about user expectations. Users might actually prefer a focused, excellent user experience over a comprehensive one that is mediocre.
  • Align your team around surface strategy. Make sure product, engineering, and growth teams understand the strategic purpose of each surface, not just the feature requirements.

The goal isn’t to build less; it’s to build more strategically.

Ready to optimize your SaaS surface strategy? At The Good, we help scaling SaaS companies make smarter product decisions through data-driven audits and optimization. Our team has guided companies, from Adobe to emerging startups, in creating multi-surface user experiences that actually drive growth, rather than just checking feature boxes.

Schedule a strategic consultation to discover which surfaces are driving growth and which are consuming resources without strategic return. Let’s turn your multi-surface challenge into your competitive advantage.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post Why Feature Parity Isn’t Always the Goal: A Guide to Cross-device SaaS Strategy appeared first on The Good.

]]>
How User-Centered Prioritization Helps Improve Feature Adoption Rates https://thegood.com/insights/feature-adoption/ Thu, 29 May 2025 22:05:08 +0000 https://thegood.com/?post_type=insights&p=110621 Imagine launching a feature and knowing it will be a hit. What if you could flip the script on wasted development efforts and build only what your users truly crave? For most SaaS companies, a high feature adoption rate is linked to increased upgrades, retention, and loyalty. When users fully adopt a product, they integrate […]

The post How User-Centered Prioritization Helps Improve Feature Adoption Rates appeared first on The Good.

]]>
Imagine launching a feature and knowing it will be a hit. What if you could flip the script on wasted development efforts and build only what your users truly crave?

For most SaaS companies, a high feature adoption rate is linked to increased upgrades, retention, and loyalty. When users fully adopt a product, they integrate it into their daily workflow and continually find value.

But alarmingly, a reasonable feature adoption rate in SaaS is between 20% and 30%, and similarly, only 20% of launched features are used. We can all understand that some features won’t hit the mark, but should we really accept that up to 80% of the features we build will go unused?

I don’t think so.

By refining the prioritization process, we can make sure you’re working on the right features that will drive value for users and improve feature adoption rates. And it starts with understanding your user.
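To see where your own product falls against that 20–30% range, feature adoption rate is commonly computed as the share of active users who engaged with a feature in a given period. A minimal sketch, using hypothetical numbers:

```python
def adoption_rate(feature_users: int, active_users: int) -> float:
    """Fraction of active users who used the feature in the period."""
    if active_users == 0:
        return 0.0
    return feature_users / active_users

# Hypothetical monthly numbers for illustration
rate = adoption_rate(feature_users=450, active_users=3000)
print(f"{rate:.1%}")  # 15.0% -- below the 20-30% benchmark
```

The denominator matters: measure against users who could plausibly benefit from the feature (the target segment), not your entire user base, or niche features will always look like failures.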

Reasons for low feature adoption

As product teams focus on developing innovative capabilities and addressing technical debt, the gap between feature development and feature adoption widens.

Underperforming features are a drain on your company’s time and resources. But what causes low adoption rates in the first place?

  • Lack of awareness: The new feature isn’t presented/marketed to users in a compelling way
  • Wrong messaging: The marketing message doesn’t resonate with users, and they’re unaware of the benefits
  • Bad feature: The feature doesn’t actually address a user’s need or pain point

While these are the three most commonly cited reasons for low feature adoption, we’ve found that these symptoms often stem from underlying issues with how features are prioritized for development and release. Teams let internal assumptions, stakeholder requests, or competitive pressures (rather than genuine user insights) drive priorities. In turn, the wrong features are released, spurring feature bloat, low adoption rates, and more.

Think of that ‘AI-powered suggestion’ feature that no one uses. Was it truly solving a user need, or just a cool tech demo?

We’ve seen firsthand with clients how prioritization directly impacts performance. When companies prioritize effectively, they stay focused on what is proven to deliver results. And when they don’t, the opposite happens.

What is user-centered prioritization?

There are plenty of ways to address low feature adoption, but user-centered prioritization might be the Trojan horse you didn’t see coming.

User-centered prioritization is an approach that places the user at the heart of every decision regarding feature development and enhancement.

It’s a systematic way to ensure that the features you build truly solve your users’ problems, meet their needs, and provide the most value. This contrasts with traditional prioritization methods that might heavily weigh internal opinions, market trends, or ease of development.

With user-centered prioritization, you leverage user research, behavior analytics, and feedback loops to make data-driven development decisions. By understanding not just what users say they want, but how they actually behave, product teams can make more strategic choices about which features to build, when to release them, and how to position them for maximum adoption.

User-centered prioritization is the first step to higher feature adoption rates

We’ll get to some specific strategies in a minute, but for now, I want to provide some additional context on why user-centered prioritization is the first step to higher feature adoption.

It’s more than just a method; it’s a mindset. The core idea is to build products and services that truly solve user problems and provide a positive experience.

When faced with a long list of potential features or improvements, user-centered prioritization helps teams decide what matters most. It can:

  • Identify pain points and focus on features that directly address user frustrations or obstacles.
  • Hone in on the most frequent and important tasks users want to accomplish, then prioritize content and features that support these tasks.
  • Visualize the user journey and break it down into actionable user stories, prioritizing those with the highest potential impact on user satisfaction and business goals.
  • Classify and prioritize usability problems based on their impact on user task completion, frequency, and ease of fix.

To make any of this happen, you need a deep understanding of users. Prioritization begins with thorough user research. Use various methods such as interviews, surveys, observational studies, usability testing, and analyzing user data to gather insights. Try to build an understanding of how and where users will interact with the feature.

In essence, user-centered prioritization ensures that product development efforts are aligned with what users truly need and value, leading to ethical and successful products.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

Strategies for improving feature adoption rates with user-centered prioritization in the customer journey

So, what does it look like in action when a user-centered approach dictates feature development and deployment? Here are eight strategies.

1. Build the right features

The foundation is to build the right features, and to ensure you don’t do it in a vacuum. As we have covered, you need user research to understand the problems you are trying to solve.

Before moving to development, test concepts and prototypes with real users to ensure the feature addresses a need and has a clear value proposition.

Prioritize features that will deliver the most significant value to users, not just those that are “nice to have” or technically interesting.

2. Be clear about the value proposition of your feature

Users need to understand why a feature is beneficial for them and how it solves a problem, not just what it does. Articulate this clearly in all communications.

Each feature should address a distinct user pain point or enable a new, valuable capability. Ideally, use language that has been tested and proven to clearly convey the value of the feature.

3. Make onboarding frictionless

Segment users (by role, industry, goals, etc.) and tailor onboarding experiences. A marketing professional might need a different introduction than a developer.

Guide users to the core value of the product and its key features as quickly as possible. This is the moment they realize “this product is for me.”

Instead of static tours, interactively prompt users to take the desired action so that you’re teaching them while they accomplish tasks.

4. Create context in the app experience

Use subtle, in-context cues to highlight new features or explain specific UI elements when a user is in a relevant area.

For more significant feature announcements, use in-experience banners or modals that appear at relevant moments. Behavioral triggers can also deliver guidance based on what a user is currently doing or has done.

When a feature’s area is empty, use this space to explain the feature’s purpose and guide the user on how to get started. Make it easy to find the features your user is looking for.

5. Educate and clearly communicate with users

As mentioned above, prioritize in-app methods for immediate context, but be sure to supplement with marketing materials like:

  • Targeted emails that announce new features, explain their benefits, and link directly to the feature in the product. Segment these emails to ensure relevance.
  • Blogs that add in-depth explanations, use cases, and technical details for those who want them.
  • Webinars, live or recorded, that demo complex or high-impact features and answer questions.
  • Social media, including short, engaging content (videos, graphics) to announce features and drive interest.

6. Personalize the feature

Not all features are for every user. To make sure the right features are shown to the target user, you can hide or highlight features based on a user’s role or permissions.

Allow users to tailor their experience, making the most relevant features easily accessible, and use machine learning to suggest features or workflows based on a user’s past behavior or similar user segments.

7. Gather data and feedback

Instead of relying on feature adoption rates alone, gather supplemental data and feedback to understand why users are or aren’t adopting the feature. Use micro-surveys (e.g., after a user interacts with a new feature) to get immediate feedback on usability and value. Monitor overall satisfaction with NPS and CSAT surveys, conduct regular user interviews, and look for recurring issues in support tickets.

Make sure to analyze all this information across different user segments to identify differences and tailor strategies.

8. Iterate on the feature

Don’t just launch and leave a feature; continue iterating on the experience and messaging post-launch until you figure out what works. Test different onboarding flows, in-app messages, or feature designs to see what drives higher adoption.

Feature adoption is an ongoing process. Regularly review data, implement changes, and measure their impact. Don’t stop promoting after the announcement.

By adopting these strategies, SaaS companies can move beyond simply launching features to truly integrating them into their users’ workflows, maximizing the value delivered, and ultimately driving sustainable growth.

A good feature adoption rate is always improving

We’ve often touted the uselessness of benchmarks. And while they are meaningless for setting goals, they can help to paint a picture of industry averages and to set expectations. In the case of feature adoption rates, if you’re below that 20% mark, you should strongly consider building a more user-centered prioritization process.

Incorporating user feedback early and often can significantly reduce development time and costs. Instead of building features based on incorrect assumptions, you’ll focus resources where they’ll have the most impact, leading to higher ROI.

The direct link between user-centered prioritization and feature adoption is clear.

The days of simply building features and hoping for the best are over. If you’re ready to take a different approach, our team is available to support you.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post How User-Centered Prioritization Helps Improve Feature Adoption Rates appeared first on The Good.

]]>
The Biggest Roadmap Mistake: Prioritizing Low-Impact Features https://thegood.com/insights/feature-bloat/ Mon, 19 May 2025 19:43:44 +0000 https://thegood.com/?post_type=insights&p=110593 Picture this: Your product team just wrapped up the quarter with a bang. Fifteen new features shipped. The engineering and development teams are exhausted but proud. The roadmap is color-coded and beautiful. But then the metrics start to roll in. Conversion rates are flat. Churn is up. Customer satisfaction scores haven’t budged. Sound familiar? You’re […]

The post The Biggest Roadmap Mistake: Prioritizing Low-Impact Features appeared first on The Good.

]]>
Picture this: Your product team just wrapped up the quarter with a bang. Fifteen new features shipped. The engineering and development teams are exhausted but proud. The roadmap is color-coded and beautiful.

But then the metrics start to roll in. Conversion rates are flat. Churn is up. Customer satisfaction scores haven’t budged.

Sound familiar? You’re not alone.

Most SaaS companies are stuck in a feature factory, churning out functionality users don’t want, don’t use, or actively avoid. While your competitors are optimizing the core experiences that drive growth, you’re polishing the peripheral features.

Here’s the uncomfortable truth: You’re probably building the wrong things.

The hidden cost of feature bloat

Low-impact features aren’t just harmless additions to your product; they’re silent growth killers. Every hour spent building or optimizing a feature that doesn’t move the needle is an hour stolen from something that could grow your business.

But what exactly makes a feature “low-impact”? It’s not about whether the feature works or whether someone, somewhere, might find it useful. Low-impact features are those that:

  • Address edge cases rather than core user needs
  • Generate minimal usage after launch
  • Don’t correlate with key business metrics like retention or expansion revenue
  • Create more complexity than value

According to research by UserPilot, the average core feature adoption rate is 24.5%. That means more than 75% of features might as well not exist from a user perspective.

When a SaaS company prioritizes those extra features, it is likely suffering from feature bloat.

Feature bloat is costly for your team, your users, and your business. An excess of features creates complexity and detracts from your product’s core value. Sometimes, feature bloat can actually prevent your product from doing its main job.

The cost of feature bloat develops quickly. Some examples include:

Development opportunity cost: While your team builds that quirky reporting dashboard that three power users requested, your core onboarding flow continues to hemorrhage trial users.

User experience degradation: Every new feature is another decision your users have to make, another item in the navigation, another potential source of confusion. Research from the Nielsen Norman Group shows that feature bloat directly correlates with decreased user satisfaction, and other industry experts agree. Jared Spool calls it “experience rot” and often highlights the inevitable complexity creep and user experience decline that occurs when teams add features without ruthless prioritization.

Technical debt accumulation: Low-impact features still need maintenance, bug fixes, and updates. They create dependencies that slow down future development and increase the risk of breaking changes.

Low-impact features don’t just waste resources; they actively prevent you from building high-impact ones.

Consider a hypothetical case of a B2B SaaS platform that spent six months building an advanced scheduling feature requested by their largest enterprise client. The feature worked beautifully for that one client but sat unused by 98% of their user base. Meanwhile, their core product suffered a 60% drop-off rate during onboarding. This was a fixable problem that could have doubled their conversion rate.

The real kicker? That scheduling feature became a maintenance burden, requiring updates every time they changed their core platform. What started as a “quick win” became an ongoing resource drain.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

Warning signs you’re in the feature bloat trap

It isn’t always easy to identify if and when you’re prioritizing low-impact features. Here are some of the common red flags that might make you think twice about how you’re building your roadmap:

  • Lack of data: Decisions based on gut feeling rather than data-driven insights can easily lead to prioritizing the wrong things.
  • The squeaky wheel syndrome: Your roadmap is driven by whoever complains loudest, not by what data shows you should build.
  • Internal politics: Sometimes, features are prioritized based on the influence of certain stakeholders rather than their actual value to the user or the business.
  • Fear of risk: High-impact features often involve more risk and uncertainty. Teams might opt for safer, less impactful options to avoid potential failures.
  • Shiny object syndrome: New feature ideas consistently trump optimization of existing functionality, or the allure of new and trendy features can sometimes overshadow the importance of addressing core user needs.
  • Short-term focus: A focus on immediate gains can lead to neglecting long-term strategic goals and prioritizing quick wins over sustainable growth.
  • The metrics disconnect: You can’t clearly articulate how each planned feature connects to business outcomes like revenue, retention, or user satisfaction.
  • Poor prioritization framework: Without a clear and consistent framework for evaluating and prioritizing features, it’s easy to fall into the trap of prioritizing the wrong things.
  • The “just one more thing” mentality: Features keep getting added to releases because they seem small and easy.

The longer your team operates in any of these traps, the harder it is to change the behavior. So, if this resonates, try to get your team on board to shift behavior and implement some of the strategies we outline below.

A better way: Data-driven prioritization

The solution isn’t to stop building features; it’s to build and optimize the right ones. This means establishing clear criteria for what constitutes “high-impact” before you write a single line of code.

Start with the outcome, not the output

Instead of asking “What features should we build?” ask “What user behaviors drive business growth, and how can we encourage more of them?”

Implement continuous user research

Don’t just collect feature requests; use them as an opportunity to understand the underlying problems. Continuous research that includes things like regular user interviews, behavioral analytics, and feedback loops can help you distinguish between what users say they want and what actually drives value.

Continuous research also allows you to test assumptions before implementation. Including rapid testing in your workflow can help you get fast, early feedback on concepts from real users for better direction.

Let the data guide decisions

Base your prioritization decisions on data from user research, analytics, and market analysis so that you can focus on what users truly need and what will drive the most significant impact.

Use prioritization frameworks consistently

Tools like RICE (Reach, Impact, Confidence, Effort) or the ICE (Impact, Confidence, Ease) scoring model help you compare feature ideas objectively. The specific framework matters less than using one consistently.

At The Good, we use the ADVIS’R Prioritization Framework™ to guide our optimization strategy.
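To illustrate how a scoring framework works in practice, here is a minimal sketch of the generic RICE formula, score = (reach × impact × confidence) / effort. The feature names and numbers are hypothetical, and this is not The Good's ADVIS'R framework, just the widely used RICE model named above:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float       # users affected per period
    impact: float      # e.g. 0.25 = minimal ... 3 = massive
    confidence: float  # 0.0 - 1.0
    effort: float      # person-months

    def rice_score(self) -> float:
        # RICE: (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Hypothetical backlog for illustration
backlog = [
    Feature("Onboarding checklist", reach=4000, impact=2.0, confidence=0.8, effort=2),
    Feature("Custom report builder", reach=300, impact=1.0, confidence=0.5, effort=6),
]

# Highest score first; the ranking, not the raw number, drives the decision
for f in sorted(backlog, key=Feature.rice_score, reverse=True):
    print(f"{f.name}: {f.rice_score():.0f}")
```

The point of using a framework consistently is comparability: the absolute scores are rough, but a backlog scored the same way every quarter makes trade-offs visible and harder to override with the loudest voice in the room.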

Measure everything

For every feature you build, define success metrics upfront. If you can’t measure whether a feature is working, you can’t determine if it’s worth the investment.

Consider the indirect impact

Sometimes, a feature might not directly impact a North Star metric but could have a significant indirect impact. For example, improving the onboarding experience might not immediately increase conversion rates but could lead to higher user retention and lifetime value in the long run.

Focus on your most valuable users

Part of building and optimizing the right features means understanding your users. If you haven’t already, conduct a step-by-step user segmentation study to help identify your highest-value users. Then you can tailor feature prioritization and optimization to their use case before moving on to other segments. A feature that’s high-impact for one segment might be low-impact for another.

Embrace the power of “no”

The most successful product teams are ruthless about saying no to good ideas so they can say yes to great ones. Create explicit criteria for what doesn’t make the cut. It’s okay to say “no” to features that don’t align with your strategic goals or offer significant value.

Moving beyond the feature bloat factory

Breaking free from the low-impact feature trap requires discipline, but the payoff is substantial. Companies that master prioritization don’t just build better products; they build products faster, with fewer resources, and with much better business outcomes.

The goal isn’t to build everything your users request. It’s to understand what truly drives value and relentlessly focus on that.

Your roadmap should be a strategic weapon, not a wishlist. Every feature should earn its place through clear evidence that it will move the metrics that matter.

Stop building features. Start building value.

Struggling to identify which features truly drive growth? Our Digital Experience Optimization Program™ helps SaaS companies cut through the noise and focus on changes that move the needle.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post The Biggest Roadmap Mistake: Prioritizing Low-Impact Features appeared first on The Good.

]]>