saas Archives - The Good

How Do You Reduce Cancellations During SaaS Free Trials?
https://thegood.com/insights/trial-optimization/ | Fri, 05 Dec 2025 20:57:52 +0000

Leaders often assume users cancel because the product isn’t good enough. The reality is more nuanced. Users rarely cancel because your product lacks value. They cancel because they didn’t experience that value quickly enough, clearly enough, or in a way that made sense for their specific needs.

The stakes are high. According to recent industry data, the average SaaS free trial converts less than 25% of users to paying customers. That means more than three-quarters of your trial users walk away without ever becoming customers.

But the good news is that trial cancellations aren’t random. They follow patterns. Users drop off at predictable moments in their journey and struggle with the same features or tasks. Once you identify these patterns, you can systematically address them through trial optimization.

Understanding why trial optimization matters for reducing cancellations

Before diving into how to reduce cancellations, let’s be clear about what we mean by trial optimization and why it deserves your attention.

Trial optimization is the systematic process of improving every touchpoint in your free trial or freemium experience to increase the likelihood that users will see value, engage consistently, and ultimately convert to paying customers. It’s not about manipulation or dark patterns. It’s about removing unnecessary friction, clarifying value, and helping users succeed with your product.

The impact of effective trial optimization extends beyond conversion rates. When you optimize the trial experience, you also reduce customer acquisition costs, improve customer lifetime value, and build a stronger foundation for retention.

Understanding your specific trial model is the first step toward optimization. Different trial structures create different challenges and opportunities.

What is a freemium model?

The freemium model offers perpetual access to a restricted version of your product, either by limiting features or placing caps on usage. Think Spotify’s free tier with ads, or Canva’s basic design tools. The challenge with freemium is that users can stay indefinitely without converting. Your optimization goal is building reliance while strategically gating features that create urgency to upgrade.

What is a reverse trial?

In a reverse trial, users start with full access to all features for a limited time, then get moved to a freemium plan with limited capabilities. The term was coined by growth leader Elena Verna, and the approach prioritizes maximum value upfront. Users experience everything your product can do, making the subsequent feature restrictions feel more pronounced. Trial optimization here focuses on ensuring users activate on premium features during that full-access window.

What is trial with payment?

This model requires payment information up front for full product access during a limited period. Users are charged automatically after the trial unless they cancel. The friction of providing credit card details means fewer signups but typically higher conversion rates, with opt-out trials converting at 49-60% compared to opt-in trials at 18-25%. Optimization here balances making signup worthwhile despite the friction while ensuring the experience justifies the automatic charge.
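As a rough sketch of that trade-off, assume (purely for illustration) that requiring a card halves signup volume, and use the midpoints of the conversion ranges cited above. The signup numbers are assumptions, not benchmarks:

```python
# Hypothetical comparison of the two signup models. Signup volumes are
# assumptions; conversion rates are midpoints of the ranges cited above.
def expected_customers(signups: int, conversion_rate: float) -> float:
    """Expected paying customers from a trial cohort."""
    return signups * conversion_rate

# Assume requiring a card roughly halves signups (an assumption, not a benchmark).
opt_in = expected_customers(signups=1000, conversion_rate=0.215)  # midpoint of 18-25%
opt_out = expected_customers(signups=500, conversion_rate=0.545)  # midpoint of 49-60%

print(f"Opt-in:  {opt_in:.0f} expected customers")
print(f"Opt-out: {opt_out:.0f} expected customers")
```

Under those assumptions, the opt-out model still yields more customers (about 272 versus 215), but the break-even point shifts with how steeply your signups actually drop when you require a card, so run this with your own numbers.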

Five steps to audit and optimize your trial experience

Trial optimization looks different in each of these trial models, but one thing is true across the board: reducing cancellations requires a systematic approach.

You can’t fix what you don’t measure, and you can’t optimize what you don’t understand.

Here is a summary of the five-step framework for auditing your trial experience. For a detailed walkthrough, including specific templates and decision trees, see our article on auditing free user experiences.

Step 1: Identify drop-off points with data analysis

Examine your product analytics to pinpoint exactly where users abandon their trial journey.

  • Track activation drop-offs in your onboarding flow
  • Monitor which features users engage with versus ignore
  • Calculate time-to-value and compare against churn timing
  • Segment data by acquisition channel, trial type, and user cohort
  • Layer in session recordings to see what users actually do before leaving
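The drop-off analysis above can be sketched as a simple funnel computation over raw event logs. The event names and funnel order here are hypothetical; substitute the steps from your own onboarding flow:

```python
# Sketch: locate drop-off points by computing step-to-step conversion in
# an onboarding funnel from raw event logs. Event names are illustrative.
FUNNEL = ["signed_up", "created_project", "invited_teammate", "reached_aha"]

def funnel_dropoff(events: list[tuple[str, str]]) -> dict[str, float]:
    """events: (user_id, event_name) pairs. Returns, for each step after
    the first, the share of users from the previous step who reached it."""
    users_at = {step: {u for u, e in events if e == step} for step in FUNNEL}
    rates = {}
    prev = users_at[FUNNEL[0]]
    for step in FUNNEL[1:]:
        reached = users_at[step] & prev  # must have completed the prior step
        rates[step] = len(reached) / len(prev) if prev else 0.0
        prev = reached
    return rates
```

A low rate at one step, say half the users who created a project never inviting anyone, marks a candidate drop-off point to investigate with session recordings.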

Step 2: Conduct user interviews to understand the “why”

Numbers show where users leave. Conversations reveal why.

  • Interview 10-15 users, split between active trial users and those who churned
  • Ask what value they found, what confused them, and what would make them pay
  • Listen for the exact language they use to describe their experience
  • Note any competitors or alternatives they mention for market context

Step 3: Benchmark your experience against market standards

Your users compare you to every tool they’ve used. Conduct some competitive analysis to gauge where you fall in the market.

  • Document how competitors structure their trial experiences
  • Screenshot monetization touchpoints, upgrade prompts, and limit notifications
  • Study products your users mention in interviews, even if indirect competitors
  • Identify where your experience creates more or less friction than market norms

Step 4: Map user actions with verb scoring

Break down every meaningful action in your product and score the friction required by running a verb scoring exercise.

  • List discrete actions users can take (create, share, export, invite, etc.)
  • Assign each a verb score from Anonymous to Gated
  • Look for inconsistencies in how similar actions are gated
  • Identify if you’re giving away too much or asking too soon
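The exercise above reduces to a lookup: assign each action a friction tier, then compare similar actions for inconsistent gating. The tier names and example actions below are illustrative, not a standard scoring scale:

```python
# Sketch of a verb scoring exercise: map each discrete action to a
# friction tier, then flag inconsistencies between similar actions.
TIERS = ["anonymous", "account", "free_plan", "gated"]  # low -> high friction

verb_scores = {
    "create": "account",
    "share": "free_plan",
    "export": "gated",
    "invite": "anonymous",  # inviting is less gated than creating: suspicious
}

def flag_inconsistencies(scores: dict[str, str], similar: list[tuple[str, str]]) -> list[str]:
    """Flag pairs of similar actions gated at different friction tiers."""
    flags = []
    for a, b in similar:
        if TIERS.index(scores[a]) != TIERS.index(scores[b]):
            flags.append(f"{a} ({scores[a]}) vs {b} ({scores[b]})")
    return flags

print(flag_inconsistencies(verb_scores, [("create", "invite")]))
# ['create (account) vs invite (anonymous)']
```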

Step 5: Connect insights to create an optimization roadmap

Synthesize your findings to prioritize what to fix first.

  • Friction without reason: unnecessary barriers compared to competitors
  • Value leaks: popular free features that don’t drive conversion
  • Invisible gates: paywalls users hit without understanding why
  • Poorly timed friction: asking users to pay before they’ve seen value

Prioritize optimizations by impact (users affected), confidence (data supports it), effort (time to implement), and market alignment (are you an outlier).
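One way to turn that rubric into a ranking is a weighted score. The 1-5 scales, the formula, and the example opportunities are our assumptions for illustration, not a standard method:

```python
# Sketch of the prioritization rubric as a score: higher impact and
# confidence raise priority, higher effort lowers it, and being a
# market outlier adds a small bonus. Scales (1-5) are assumptions.
def priority_score(impact: int, confidence: int, effort: int, outlier: bool) -> float:
    return impact * confidence / effort + (1.0 if outlier else 0.0)

opportunities = {
    "simplify signup form": priority_score(5, 4, 2, outlier=True),
    "rework upgrade prompts": priority_score(4, 3, 3, outlier=False),
    "rebuild onboarding": priority_score(5, 2, 5, outlier=False),
}
for name, score in sorted(opportunities.items(), key=lambda kv: -kv[1]):
    print(f"{score:5.1f}  {name}")
```

Scores like these are a tiebreaker, not a verdict; revisit them as new data changes your confidence estimates.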

Six strategies for reducing trial cancellations

Once you’ve audited your trial experience and identified optimization opportunities, you will have a clear roadmap for addressing issues.

Plenty of strategies might arise in your research. Here are a few themes we see often.

Accelerate time-to-first-value

The faster users experience value, the less likely they are to cancel. Industry benchmarks suggest that users should reach their first “aha moment” within 48 hours of signup.

Design your onboarding to guide users directly toward the action that delivers value. Use progress bars and checklists to create clear paths forward.

Remove any friction between signup and first value. If users need to integrate other tools, fill out profiles, or configure settings before experiencing core benefits, you’re creating opportunities for abandonment. Save non-essential setup for after users have seen value.

Provide personalized onboarding experiences

Companies using personalized experiences see conversion rates improve by up to 67%. Generic onboarding treats all users the same, but different user segments have different needs, different technical sophistication, and different use cases.

Segment users based on their role, company size, or stated goals during signup. A solo entrepreneur using your project management tool has different needs than a project manager at a 100-person company. Your onboarding should reflect these differences.

Use progressive disclosure to reveal features as they become relevant. Don’t overwhelm new users with every capability on day one. Instead, introduce advanced features once users have mastered the basics.

Implement strategic reminder systems

Trials of 7 to 14 days convert better than longer trials because they create urgency. But urgency only works if users remember they’re on a trial.

Send regular emails and in-app notifications informing users about remaining trial time. These reminders should do more than count down days. Each one should emphasize value, highlight features users haven’t explored, or address specific pain points.

Gate features strategically based on usage patterns

In our experience optimizing for SaaS, offering too many free features can actually hurt conversion rates. Users need to experience value from free features while simultaneously understanding what they’re missing from paid capabilities.

Place prompts for premium features adjacent to free ones. PDF Converter, for example, offers free file conversion but positions the premium, higher-quality option nearby. This approach makes the upgrade path clear without feeling pushy.

Use clear visual cues like lock icons, “Pro” badges, or color contrasts to differentiate free from paid features.

Provide proactive support during critical moments

Customer support engagement during trial periods can significantly boost conversion rates.

Don’t wait for users to ask for help. Implement triggered messages based on behavior patterns. If a user hasn’t logged in for three days, send a helpful email with tips. If someone tries to use a gated feature multiple times, offer a personalized demo or support call.

For high-value potential customers, consider human touchpoints. A quick call from customer success at day three of a 14-day trial can answer questions, provide personalized guidance, and significantly increase conversion likelihood.

Design thoughtful cancellation flows

Not every cancellation is preventable, but many are. When users attempt to cancel, use that moment as an opportunity to understand why and potentially offer alternatives.

Implement exit surveys that capture cancellation reasons. According to data on subscription churn, understanding why users leave is critical for preventing future cancellations. Are they leaving because of the price? Missing features? Poor onboarding? Bugs?

Based on cancellation reasons, offer segment-specific alternatives. If someone is canceling due to price, offer a discount or payment plan. If they barely used the product after the trial, extend the trial. If they’re leaving due to missing features, ask which features would keep them.

Common mistakes that increase trial cancellations

Even well-intentioned optimization efforts can backfire. Avoid these common mistakes that actually increase cancellation rates.

Making cancellation difficult

Some SaaS companies deliberately make cancellation difficult, requiring users to call or email rather than cancel with a simple click. This dark pattern might delay cancellations temporarily, but it can destroy trust and create negative word-of-mouth.

Make cancellation simple. The goal isn’t to trap users; it’s to create such a good experience that they don’t want to leave.

Gating core value too aggressively

If users can’t experience your product’s core value without upgrading, they’ll cancel before converting. The free version should deliver genuine utility while creating a desire for premium features.

Neglecting mobile trial experiences

With increasing mobile usage, trial experiences must work seamlessly across devices.

If your onboarding is desktop-optimized but breaks on mobile, you’re creating cancellations for a substantial user segment.

Sending generic email communications

Automated email sequences that ignore user behavior feel impersonal and often go unread. According to research on trial optimization, personalized communication based on user activity significantly outperforms generic campaigns.

If a user hasn’t logged in since signing up, an email about advanced features is irrelevant. If they’re actively using the product daily, countdown reminders may feel pushy. Segment communications based on engagement levels.

Trial optimization frequently asked questions

What’s the ideal trial length to minimize cancellations?

The optimal length depends on your product’s complexity and how quickly users can experience value. Simple products often perform better with 7-14 day trials that create urgency.

Complex B2B tools may need 30-60 days for users to properly evaluate capabilities. If you are completely lost, start with 14 days and adjust based on your activation data and time-to-value metrics.

Should I require a credit card for trial signup?

This decision significantly impacts both signup volume and conversion rates.

Opt-out trials (credit card required) convert higher but generate fewer signups. Opt-in trials (no credit card) convert lower but attract more users.

The right choice depends on whether you prioritize higher conversion rates per trial or a larger volume of trials and how much more utility the full tier offers versus a free trial.

Most product-led companies start with opt-in trials to maximize exposure, then consider opt-out trials once they’ve optimized the trial experience.

How can I tell if my trial cancellations are normal or problematic?

Track cohort-specific metrics. If certain user segments, acquisition channels, or trial lengths show notably different cancellation patterns, those differences reveal opportunities for targeted optimization.

What’s the most important metric to track for trial optimization?

While trial-to-paid conversion rate matters, activation rate is often more predictive.

Activation measures whether users complete key actions that indicate they’ve experienced value. Research shows users who reach activation are significantly more likely to convert.

Define your activation criteria based on behaviors that correlate with conversion, then optimize to increase the percentage of trial users who activate.
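As a sketch, activation can be expressed as a set of key actions and measured per cohort. The action names here are illustrative; derive your own from the behaviors that correlate with conversion:

```python
# Sketch: define activation as completing a set of key actions, then
# measure the activation rate of a trial cohort. Action names are
# placeholders for your own correlated behaviors.
ACTIVATION_CRITERIA = {"created_project", "invited_teammate", "used_core_feature"}

def is_activated(user_actions: set[str]) -> bool:
    return ACTIVATION_CRITERIA <= user_actions  # completed all key actions

def activation_rate(cohort: dict[str, set[str]]) -> float:
    """cohort maps user_id -> set of actions that user has completed."""
    if not cohort:
        return 0.0
    return sum(is_activated(actions) for actions in cohort.values()) / len(cohort)
```

Tracking this rate per signup cohort shows whether onboarding changes are moving more users to value, often before the trial-to-paid number shifts.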

How often should I test and iterate on my trial experience?

Trial optimization is continuous, not a one-time project.

High-performing SaaS companies test constantly. Start with your highest-impact opportunities identified during your audit, then implement a regular testing cadence.

Track results for statistical significance before making changes permanent. Plan quarterly reviews of your trial metrics to identify new optimization opportunities as your product and market evolve.
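For the significance check, a two-proportion z-test is a common choice when comparing conversion rates between a control and a variant. This is a minimal stdlib sketch, not a full testing framework:

```python
from math import sqrt, erf

# Sketch: two-sided two-proportion z-test for a conversion-rate change.
def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """p-value for the difference between two conversion rates, where
    conv_* is the number of conversions and n_* the number of trials."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (built from erf).
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

For example, 200/1000 conversions in control versus 250/1000 in the variant yields a p-value below 0.05, so a lift of that size on that sample would typically be treated as significant.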

Can I reduce trial cancellations without changing my product?

Yes. Many cancellations stem from poor onboarding, unclear value communication, or inadequate support rather than product deficiencies.

You can significantly reduce cancellations by improving onboarding sequences, providing better in-app guidance, personalizing the trial experience, implementing proactive support, and strategically positioning upgrade prompts.

That said, if users consistently cancel, citing missing features or bugs, product improvements may be necessary alongside trial optimization.

Build a systematic approach to trial optimization

Reducing SaaS trial cancellations isn’t about quick fixes or growth hacks. It requires systematic analysis of your trial experience, a deep understanding of user behavior and needs, and continuous optimization based on data.

The five-step audit framework provides a structured approach: analyze data to find drop-off points, interview users to understand why they leave, benchmark against market expectations, map actions with verb scoring, and synthesize insights into a prioritized roadmap. Each step builds on the previous one to create a picture of optimization opportunities.

Implementation matters as much as analysis. Accelerate time-to-value, personalize onboarding, implement strategic reminders, gate features based on usage patterns, provide proactive support, and design thoughtful cancellation flows. These six strategies address the most common causes of trial cancellations, but keep in mind that your analysis will likely surface other unique issues.

Most importantly, treat trial optimization as an ongoing discipline rather than a one-time project. User expectations evolve, competitors improve their experiences, and your product adds features. Regular review and iteration ensure your trial experience continues performing as your business grows.

At The Good, we’ve helped SaaS companies reduce trial cancellations and improve conversion rates through our Digital Experience Optimization Program™. We conduct comprehensive audits using heatmaps, session recordings, and user research to identify exactly where trial users encounter friction. Then we build custom optimization roadmaps and validate improvements through experimentation.

Ready to reduce your trial cancellations and accelerate growth? Schedule an introductory call to discuss how we can optimize your trial experience for better conversion and retention.

MaxDiff Analysis: A Case Study On How to Identify Which Benefits Actually Build Customer Trust
https://thegood.com/insights/maxdiff-analysis/ | Wed, 26 Nov 2025 17:56:30 +0000

When a SaaS company approached us after noticing friction in their trial-to-paid conversion funnel, they had a specific challenge: their website was generating demo requests, but prospects weren’t converting to customers. User research revealed a trust problem. Potential buyers were saying things like, “I need more proof this will actually work for a company like ours,” and “How do I know this won’t be another failed implementation?”

The company had assembled a list of proof points they could showcase on their homepage: years in business, number of integrations, customer counts, implementation guarantees, security certifications, industry awards, analyst recognition, and more. But they only had space to highlight four of these benefits prominently below their hero section. They faced the classic messaging dilemma: which trust signals would actually move the needle with prospects evaluating B2B software?

This is where MaxDiff analysis becomes valuable. Instead of relying on stakeholder opinions or generic best practices, we could let their target buyers vote with data on what mattered most.

What makes MaxDiff analysis different from other survey methods

MaxDiff analysis (short for Maximum Difference Scaling) is a research methodology that forces trade-offs. Rather than asking people to rate items individually on a scale, MaxDiff presents sets of options and asks participants to identify the most and least important items in each set. This forced-choice format reveals true preferences because people can’t rate everything as “very important.”

Here’s why this matters: traditional rating scales often produce compressed results where everything scores high. When you ask customers, “How important is X on a scale of 1-10?” most people will hover around 7 or 8 for anything remotely relevant. You end up with a spreadsheet full of similar numbers and no clear direction.

MaxDiff cuts through that noise. By repeatedly asking “which of these five options matters most to you, and which matters least?” across different combinations, you build a statistical picture of relative importance. The math behind MaxDiff generates a best-worst score for each item, showing not just which options are preferred, but by how much.

For digital experience optimization, this methodology is particularly useful when you need to prioritize limited real estate on a website, determine which features to build first, or figure out which messaging will actually differentiate your brand.

How we structured the MaxDiff study for maximum insight

In the project for our client, we started by defining the target audience precisely. The company was a B2B SaaS platform serving mid-market operations teams, so we recruited 60 participants who matched their customer profile: director-level or above at companies with 50-500 employees, working in operations or supply chain roles, currently using at least two SaaS tools in their workflow, and actively evaluating solutions within the past six months.

From the initial audit and stakeholder interviews, we identified 11 potential trust signals the company could emphasize on its homepage. These included things like:

  • Concrete numbers (customer counts, uptime percentages, integrations available)
  • Credentials (security certifications, enterprise clients)
  • Promises (implementation timelines, support response times, money-back guarantees)
  • And more

Each represented something the company could truthfully claim, but we needed to know which ones would build the most trust with prospects evaluating the platform.

The survey design was straightforward. Each participant saw these 11 benefits randomized into multiple sets of five items. For each set, they selected the most important factor and the least important factor when considering whether to adopt this type of software. Participants completed several rounds of these comparisons, seeing different combinations each time.

This approach gave us enough data points to calculate a robust best-worst score for each benefit: the number of times it was selected as “most important” minus the number of times it was selected as “least important.” Positive scores indicate strong preference, negative scores indicate low importance, and the magnitude of a score shows the strength of feeling.
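That scoring is simple to compute from the raw choices. A minimal sketch, using illustrative choice data rather than the study's actual responses:

```python
from collections import Counter

# Best-worst scoring: times an item was picked "most important" minus
# times it was picked "least important". Choice data is illustrative.
def best_worst_scores(choices: list[tuple[str, str]]) -> dict[str, int]:
    """choices holds one (most_important, least_important) pair per
    MaxDiff question answered, across all participants."""
    best = Counter(most for most, _ in choices)
    worst = Counter(least for _, least in choices)
    return {item: best[item] - worst[item] for item in set(best) | set(worst)}

scores = best_worst_scores([
    ("g2_rating", "headcount"),
    ("g2_rating", "gartner_mention"),
    ("customer_count", "headcount"),
])
# scores: g2_rating +2, customer_count +1, gartner_mention -1, headcount -2
```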

The results revealed a clear hierarchy of trust signals

When we analyzed the MaxDiff results, the pattern was striking. The top-scoring benefits shared a common theme: they provided concrete evidence of proven reliability and satisfied users. The bottom-scoring benefits? They emphasized company scale and marketing visibility.

(Figure: SaaS trust signals ranked by MaxDiff best-worst score.)

The four highest-scoring trust signals were clear winners. G2 or Capterra ratings scored 38 points, the highest of any item, indicating their importance was nearly universal. The number of active customers scored 30 points. An implementation guarantee (“live in 30 days or your money back”) scored 25 points. And SOC 2 Type II certification scored 16 points.

These weren’t arbitrary marketing metrics. They were the specific signals that would make someone think, “this platform delivers real value and other companies trust them.”

The middle tier included operational details that registered as minor positives but weren’t decisive: the number of successful implementations (7 points) and the availability of 24/7 support (6 points). These signals suggested competence but didn’t particularly move the needle on trust.

Then came the surprises. Years in business scored -5 points, meaning it was slightly more often selected as “least important” than “most important.” The number of integrations available scored -11 points. The claim of AI-powered features scored -15 points. Employee headcount scored -36 points. And recognition as a Gartner Cool Vendor scored -55 points, the lowest of any item.

Think about what prospects were telling us: “I don’t care that you have 200 employees or that Gartner mentioned you. Show me that real companies like mine trust you and that you’ll actually deliver on your promises.”

Why customers rejected company-focused metrics

The findings revealed an insight into trust-building that extends beyond this single company. B2B buyers weigh social proof and reliability guarantees far more heavily than they weigh indicators of company scale or industry recognition.

When a business talks about its employee headcount or analyst mentions, prospects interpret this as the company talking about itself. These metrics answer the question “How big is your business?” but not “Will this solve my problem?” From the buyer’s perspective, a larger team or Gartner mention doesn’t necessarily correlate with better software or smoother implementation.

By contrast, user reviews and customer counts answer the implicit question every prospect has: “Did this work for companies like mine?” A guarantee directly addresses risk: “What happens if implementation fails?” Security certifications address legitimacy: “Is this platform secure enough for our data?”

The AI-powered features claim scored poorly, likely because it felt trendy rather than practical. Prospects for this specific business weren’t primarily concerned about cutting-edge technology; they wanted a platform that would reliably solve their workflow problems. Leading with an AI angle, while possibly true, didn’t address the core decision-making criteria.

Years in business scored negatively for similar reasons. While longevity can signal stability, in this context, it didn’t address the prospect’s immediate concerns about implementation speed and user adoption. A company could be around for years while providing clunky software with poor support.

From insight to implementation: turning research into revenue

The MaxDiff analysis gave the company a clear action plan. We recommended implementing a four-part trust signal section directly below their homepage hero, featuring the top four scoring benefits in order of importance.

This meant reworking their existing homepage structure. Previously, they had emphasized their implementation guarantee in the hero area while burying customer counts and ratings further down the page. The research showed this approach had it backward. Prospects needed to see evidence of customer satisfaction first, then the implementation guarantee as additional reassurance.

We also recommended removing or de-emphasizing several elements they had been proud of. The employee headcount mention, the Gartner recognition, and several other low-scoring items were either removed entirely or moved to less prominent positions on the site. The goal was to prevent low-value signals from crowding out high-value ones.

The broader lesson here extends beyond this single homepage optimization. The MaxDiff results provided a messaging hierarchy that the company could apply across its entire go-to-market strategy. Email campaigns, landing pages, sales conversations, demo decks, and even their LinkedIn company page could now emphasize the trust signals that actually mattered to prospects.

When MaxDiff analysis makes sense for your business

MaxDiff is particularly valuable when you’re facing a prioritization problem with limited data. It works best in these scenarios:

  • You have more options than you can implement. Whether that’s features to build, benefits to highlight, or messages to test, MaxDiff helps you choose wisely when you can’t do everything at once.
  • Stakeholder opinions are conflicting. When internal debates about priorities can’t be resolved through argument, customer data settles the question. MaxDiff provides quantitative evidence for decision-making.
  • You need to differentiate in a crowded market. If competitors are all saying similar things, MaxDiff reveals which specific claims will break through. Often, the winning messages are ones companies overlook because they seem “obvious” or “not unique enough.”
  • You’re optimizing for a specific audience segment. Generic research about “customers in general” often produces generic insights. MaxDiff works best when you recruit participants who precisely match your target customer profile.

The methodology has limitations worth noting. It requires careful setup, and you need to know which options to test before you start.

If you don’t include the right benefits in your initial list, you won’t discover them through MaxDiff.

It also works best with a reasonably sized set of options (typically 5-15 items).

And the results tell you about relative importance, not absolute importance; everything could theoretically matter, but MaxDiff reveals the hierarchy.

How to use MaxDiff findings in your optimization strategy

Once you have MaxDiff results, the application extends beyond simply reordering homepage elements. The insights should inform your entire digital experience.

Your messaging architecture should reflect the importance hierarchy. High-scoring benefits deserve prominent placement, repetition across pages, and detailed explanation. Low-scoring benefits can either be removed or repositioned as supporting rather than leading messages.

Your testing roadmap should prioritize changes based on MaxDiff findings. If customer reviews scored highest in your study, test different ways of showcasing reviews before you test other elements. Let the data guide your experimentation priorities.

Your content strategy should emphasize what customers care about. If service guarantees scored highly, create content that explains the guarantee in detail, shares stories of when it was honored, and addresses common concerns. Build your editorial calendar around the topics MaxDiff revealed as important.

Your sales enablement should align with customer priorities. If the research showed that prospects value licensing credentials, make sure your sales team knows to emphasize this early in conversations. Create collateral that highlights the trust signals that matter most.

The most effective companies use MaxDiff as one tool in a broader research program. They combine it with qualitative research to understand why certain benefits matter, behavioral analytics to see how users interact with different messages, and continuous testing to validate that the predicted preferences translate into actual conversion improvements.

Turning guesswork into growth

The SaaS company we worked with started with a dozen possible messages and no clear sense of which would build trust most effectively with B2B buyers. After the MaxDiff analysis, they had a data-backed hierarchy that let them confidently restructure their homepage and broader messaging strategy.

This is the power of asking prospects the right questions in the right way. Not “do you like this?” which produces inflated scores for everything. Not “rank these 11 items,” which overwhelms participants and produces unreliable data. Instead, repeated forced choices reveal the true importance of each element.

If you’re struggling with similar prioritization challenges (too many options, limited space, stakeholder disagreement about what matters), MaxDiff analysis might be the tool that breaks through the noise. It transforms subjective opinion into statistical evidence, letting your prospects vote on what will actually convince them to choose your platform.

Ready to discover which messages actually resonate with your customers? The Good’s Digital Experience Optimization Program™ includes research methodologies like MaxDiff analysis to help you prioritize changes based on real customer preferences, not guesswork.

The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company https://thegood.com/insights/intent-based-segmentation/ Fri, 21 Nov 2025 18:45:09 +0000 https://thegood.com/?post_type=insights&p=111181 Most SaaS companies segment users the wrong way. They group people by demographics, company size, or subscription tier. Basically, they look at who users are rather than what they’re trying to do. The problem is that a freelance consultant and an enterprise project manager might both use your collaboration tool, but they have completely different […]

The post The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company appeared first on The Good.

]]>
Most SaaS companies segment users the wrong way. They group people by demographics, company size, or subscription tier. Basically, they look at who users are rather than what they’re trying to do.

The problem is that a freelance consultant and an enterprise project manager might both use your collaboration tool, but they have completely different goals, workflows, and definitions of success.

We recently worked with a leading enterprise SaaS company facing exactly this challenge. Their product served everyone from solo creators to enterprise teams, but the experience treated everyone the same.

Users logging in to track a single client project got bombarded with team collaboration features. Power users managing complex workflows couldn’t find advanced capabilities buried in generic menus.

Users felt overwhelmed by irrelevant features, engagement plateaued, and retention suffered.

The solution wasn’t another traditional segmentation model. It was intent-based segmentation: a framework for grouping users by what they’re actually trying to accomplish, then personalizing the experience to match those specific goals.

Understanding behavioral segmentation and where intent fits

Before diving into how to build intent-based clusters, it helps to understand where this approach fits within the broader landscape of user segmentation.

Most SaaS companies are familiar with demographic segmentation (company size, industry, role) and firmographic segmentation (ARR, team size, geographic location). These approaches tell you who your users are, but they don’t tell you what they’re trying to do, which is why they often fail to predict engagement or retention.

Behavioral segmentation takes a different approach. Instead of looking at static attributes, it focuses on how users actually interact with your product.

Behavioral segmentation divides users based on engagement. This includes actions like feature usage patterns, login frequency, purchase behavior, and time-to-value metrics.

Behavioral segmentation is widely regarded as more effective than demographic segmentation alone, and a substantial body of research backs that up.

Within behavioral segmentation, there are several approaches:

  • Usage-based segmentation looks at frequency and intensity of use.
  • Lifecycle segmentation tracks where users are in their journey.
  • Benefit-sought segmentation groups users by the outcomes they want to achieve.

Intent-based segmentation sits at the intersection of all three.

[Visual: intent-based segmentation at the center of different types of user segmentation]

It identifies clusters of users who share similar goals and workflows, then maps those patterns to create a more personalized experience.

Intent-based clusters answer the question: “What is this user trying to accomplish right now?”

In a recent client engagement that inspired this article, this distinction mattered. They had mountains of usage data showing which features people clicked, but no framework for understanding why certain feature combinations existed or what job users were trying to complete.

They knew “Business Professionals” used their tool, but that category was so broad it offered no actionable insights. A marketing manager building campaign timelines has completely different needs than a legal team tracking contract approvals, even though both might be classified as “business professionals.”

Intent-based clustering gave them that missing layer of insight.

Case study: How to spot the need for intent-based segmentation

Let’s talk more about the client engagement I mentioned. This is a great case study for when to use intent-based segmentation.

We work with this client on a quarterly retainer through our on-demand growth research services. So, when they mentioned struggling with how to personalize experiences and improve retention, we opened up a research project that same day.

The team could see that certain users logged in daily and used five or more features. Great, right? Not really. When we dug deeper, we discovered something critical. Heavy feature usage didn’t predict retention. Some power users churned while casual users stuck around for years.

The issue wasn’t the quantity of features used; it was whether those features aligned with what users were trying to accomplish. A user coming in weekly to update a single client dashboard showed higher retention than someone exploring ten features that didn’t match their core workflow.

The symptoms: What teams told us was broken

During our stakeholder interviews, we heard the same frustrations across departments:

From product: “We know the top five features everyone uses, but that doesn’t help us understand why they’re using them or what to build next. Two users might both use our template feature, but one is building client proposals while the other is standardizing internal processes. They need completely different things from that feature.”

From marketing: “Our segments are too broad to be useful. ‘Business Professional’ could mean anyone from a solo consultant to an enterprise VP. When we send educational content, we can’t make it relevant because we don’t know what problem they’re trying to solve.”

From customer success: “We can see when someone is at risk of churning because their usage drops off, but we can’t predict it before it happens. By the time we notice, they’ve already decided the product isn’t right for them. We need to understand intent earlier so we can intervene proactively.”

From UX research: “Users think in terms of tasks, not features. They don’t say ‘I want to use the dependency mapping tool.’ They say, ‘I need to make sure the design team finishes before development starts.’ But our product talks about features, not outcomes.”

The underlying problem: Missing the ‘why’

What became clear was that the organization had plenty of data about behavior but no framework for understanding intent. They could answer questions like:

  • How many people use feature X?
  • What’s the average session duration?
  • Which users log in most frequently?

But they couldn’t answer the questions that actually mattered:

  • What are users trying to accomplish when they use feature X?
  • Why do some users stick around while others churn?
  • What combination of goals and workflows predicts long-term retention?

This may sound familiar: plenty of data about behavior, but no context about intent. Without understanding users’ different definitions of success, personalization defaults to recommending “similar features” and misses the mark entirely.

The four-phase framework for creating intent-based user clusters

Based on our work with the enterprise SaaS client, we developed a systematic framework for building intent-based clusters from scratch.

The process has four distinct phases, each building on the previous one.

Think of this as a directional guide rather than a rigid formula. You can adapt the scope based on your resources and organizational complexity.


Phase 1: Capture institutional knowledge and identify gaps

Most organizations have more customer insight than they realize; it’s just scattered across teams, buried in old reports, and locked in individual team members’ heads. The first phase consolidates that knowledge and identifies what’s missing.

[Graphic: Phase 1 of the intent-based user segmentation process]

1. Conduct cross-functional interviews

Start by interviewing stakeholders who interact with users regularly: product managers, customer success, sales, support, and marketing. For our client, we conducted interviews with seven team members from UX research, growth, engagement, marketing, and analytics.

The goal isn’t consensus. You want to uncover patterns and contradictions instead. Focus your conversations on these questions:

  • How do you currently describe different user types?
  • What patterns have you noticed in how different users engage with the product?
  • What data exists about user behavior that isn’t being used to inform decisions?
  • Where do personalization efforts break down today?
  • What questions about users keep you up at night?

These conversations surface institutional knowledge that never makes it into documentation.

Capture everything. The contradictions are especially valuable: they reveal where teams operate with different mental models of the same users.

2. Audit existing research and reports

Next, gather relevant user research, data analysis, and customer insight. For our client, we analyzed eight existing reports, including retention data, cancellation surveys, user studies, engagement patterns, and conversion analysis.

Look for:

  • Marketing segmentation models (usually demographic-heavy)
  • User research studies (often small sample, rich insights)
  • Behavioral analytics reports (feature usage patterns, cohort analysis)
  • Customer journey maps (theoretical vs. actual paths)
  • Support ticket analysis (pain points and use cases)
  • Cancellation surveys (why users leave)

Pay attention to gaps between how different teams think about users. Marketing might segment by company size, while product segments by feature usage. Neither is wrong, but the lack of a unified framework means teams optimize for different definitions of success.

3. Define hypotheses about intent-based variables

Based on interviews and research, develop hypotheses about what variables might define user intent. This is where you move from “who uses our product” to “what are they trying to accomplish.”

For our client, we identified several dimensions that seemed to correlate with intent:

  • Primary workflow type: Are users managing team projects, client deliverables, or personal tasks?
  • Collaboration patterns: Solo work, small team coordination, or cross-functional orchestration?
  • Usage frequency: Daily operational tool or periodic project management?
  • Success metrics: Speed (quick task completion) vs. thoroughness (detailed planning and tracking)?
  • Document complexity: Simple task lists or multi-layered project hierarchies?

The goal is to create testable hypotheses that can be validated in Phase 2.

Phase 2: Validate clusters through user research and behavioral analysis

Once you have initial hypotheses, Phase 2 tests them against real user behavior and feedback. This is where hunches become data-backed insights.

[Visual: Phase 2 of the intent-based user segmentation process]

1. Develop provisional cluster groups

Transform your hypotheses into provisional clusters. For our client, we identified six distinct intent-based clusters. Let me illustrate with a fictional example of how this might work for a project management SaaS tool:

Sprint Executors: Users focused on rapid task completion and daily standup workflows. They need speed, simple task updates, and quick team coordination. Think startup teams moving fast with lightweight processes.

Client Project Coordinators: Users managing multiple client engagements simultaneously with strict deliverable timelines. They need client visibility controls, progress tracking, and professional reporting. Think agencies and consultancies.

Cross-Functional Orchestrators: Users coordinating complex projects across departments with dependencies and approval workflows. They need Gantt views, resource allocation, and stakeholder communication tools. Think enterprise program managers.

Personal Productivity Optimizers: Users treat the tool as their second brain for personal task management and goal tracking. They need customization, recurring tasks, and minimal collaboration features. Think solopreneurs and executives.

Seasonal Campaign Managers: Users with predictable high-intensity periods followed by dormancy. They need templates, bulk operations, and the ability to archive/reactivate projects easily. Think retail operations teams or event planners.

Mobile-First Coordinators: Users who primarily access the tool from mobile devices for field work or on-the-go updates. They need streamlined mobile experiences and offline sync. Think field service teams or traveling consultants.

Each cluster gets a descriptive name that captures the user’s primary intent, not just their behavior. “Sprint Executor” tells you more about what someone is trying to do than “high-frequency user.”

2. Conduct targeted user research

With provisional clusters defined, recruit users who fit each profile and conduct interviews to understand:

  • Their primary use cases and goals when they first adopted the tool
  • How they discovered and currently use the product
  • Their typical workflows from start to finish
  • What defines success in their role
  • Pain points and unmet needs
  • How they decide which features to explore
  • What would make them cancel vs. what keeps them subscribed

For our client, we conducted three to four interviews per cluster, totaling around 24 user conversations. This gave us enough coverage to validate patterns without drowning in data.

The insights were eye-opening. We discovered that one cluster had the fastest time-to-value but the lowest feature adoption. They found what they needed immediately and never explored further. Another cluster showed the highest retention but needed the longest onboarding. They invested time up front because the tool was critical to their workflow.

3. Analyze behavioral data to confirm patterns

User interviews reveal what people say they do. Behavioral data shows what they actually do. Cross-reference your clusters against:

  • Feature usage sequences (which tools appear together in sessions)
  • Time-to-value metrics by cluster (how quickly do they get their first win)
  • Retention and churn patterns (which clusters stick around)
  • Upgrade and expansion behavior (which clusters grow their usage)
  • Support ticket themes (which clusters need help with what)
  • Feature adoption curves (how exploration patterns differ)

For our client, the data revealed critical differences. The “Sprint Executor” equivalent had fast initial adoption but plateaued quickly. They found their core workflow and stopped exploring.

The “Cross-Functional Orchestrator” cluster showed slow initial adoption but deep engagement over time. They needed to learn the tool thoroughly to unlock value.

These patterns weren’t visible in aggregate data. Only by segmenting users by intent could we see that different clusters had fundamentally different paths to retention.

4. Build detailed cluster profiles

For each validated cluster, create a comprehensive profile that becomes the foundation for personalization. For example:

Cluster name: Sprint Executors

Primary intent: Complete daily tasks quickly with minimal friction and maximum team visibility

Most-used features:

  • Quick-add task creation
  • Board view for visual sprint planning
  • Mobile app for on-the-go updates
  • Real-time team activity feed

Typical workflow patterns:

  • Morning standup with task assignments
  • Throughout the day: quick updates and status changes
  • End of day: marking tasks complete and planning tomorrow

Behavioral flags that identify this cluster:

  • Creates 5+ tasks in first week
  • Returns daily within first 14 days
  • Uses mobile app within first 7 days
  • Rarely uses advanced features like Gantt charts or dependencies

Retention drivers:

  • Speed of task completion
  • Team visibility and accountability
  • Mobile accessibility

Churn risks:

  • Tool feels too complex for simple needs
  • Feature bloat makes core actions harder to find
  • Forced upgrades to access speed-focused features

Personalization opportunities:

  • Streamlined onboarding focused on quick task creation
  • Mobile-first feature discovery
  • Templates for common sprint workflows
  • Integrations with communication tools

These profiles become the single source of truth that product, marketing, and customer success can all reference.

Phase 3: Develop indicators and personalization strategies

The final phase connects clusters to action. This is where the framework moves from insight to implementation.

1. Create behavioral flags for cluster identification

Most users won’t self-identify their intent at signup. You need to infer cluster membership from behavioral signals early in their journey. The key is identifying flags that appear within the first 7-14 days, early enough to personalize the experience before users decide whether the tool is right for them.

[Visual: Phase 3 of the intent-based user segmentation process]

For reference, the “Sprint Executor” cluster in our fictional example:

  • Created 5+ tasks in first week
  • Logged in on 4+ separate days in first 14 days
  • Used mobile app within first 7 days
  • Board or list view used more than timeline/Gantt view (80%+ of sessions)
  • Invited at least one team member within first 10 days
  • Never explored advanced dependency features
  • Average session length under 10 minutes

Versus the “Client Project Coordinator” cluster:

  • Created 3+ separate projects within first week (indicating multiple clients)
  • Used folder or workspace organization features within first 5 days
  • Set up client-specific permissions or external sharing settings
  • Created custom views or reports within first 14 days
  • Longer average session times (20+ minutes per session)
  • Uses professional or client-specific terminology in project names
  • High usage of export or presentation features

The goal is to find the minimum viable signal set that reliably predicts cluster membership. Start with more flags and refine over time based on which actually correlate with long-term behavior.

One critical finding from our client work: early behavioral flags predicted retention better than demographic data.

A user who exhibited “Client Project Coordinator” behaviors in week one showed 40% higher 90-day retention than the average user, regardless of their company size or job title.
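The flag logic above can be sketched as a simple rule-based classifier. This is an illustrative sketch only: the field names, thresholds, and cluster labels are assumptions drawn from the hypothetical flags listed above, not a production model or a real product’s event schema.

```python
from dataclasses import dataclass

@dataclass
class FirstTwoWeeks:
    """Hypothetical early-usage signals collected in a user's first 14 days."""
    tasks_created: int
    active_days: int
    used_mobile_app: bool
    projects_created: int
    used_workspace_org: bool
    board_view_share: float  # fraction of sessions spent in board/list view

def classify(user: FirstTwoWeeks) -> str:
    """Assign a provisional intent cluster from early behavioral flags.

    Thresholds mirror the illustrative flags above; in practice you would
    tune them against which flags actually correlate with 90-day retention.
    """
    # Multiple projects plus workspace organization suggests client work
    if user.projects_created >= 3 and user.used_workspace_org:
        return "Client Project Coordinator"
    # Fast, frequent, mobile, board-centric usage suggests sprint workflows
    if (user.tasks_created >= 5 and user.active_days >= 4
            and user.used_mobile_app and user.board_view_share >= 0.8):
        return "Sprint Executor"
    return "Unclassified"  # fall back until more signal accumulates
```

Starting with a transparent rule set like this, rather than a black-box clustering model, makes it easy for product and marketing teams to audit why a user was assigned to a cluster and to refine the thresholds over time.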

2. Map personalization opportunities to each cluster

With clusters and flags defined, identify specific ways to personalize the experience across the user journey:

Onboarding sequences: Tailor the first-run experience to highlight features that match user intent. Show Sprint Executors how to set up their first sprint board, not the full feature catalog with Gantt charts and resource allocation tools they don’t need.

In-app messaging: Trigger contextual tips based on usage patterns. When a Client Project Coordinator creates their third project with similar structure, suggest project templates to save time.

Feature discovery: Recommend next-step features that align with cluster workflows. For Sprint Executors who’ve mastered basic task management, introduce the mobile app and integrations with their communication tools, not complex dependency mapping.

Content and education: Send targeted educational content that addresses cluster-specific goals. Client Project Coordinators get tips on professional reporting and client permissions. Sprint Executors get productivity hacks and team coordination strategies.

Upgrade paths: Present pricing tiers and feature upgrades that match cluster needs. Don’t push team collaboration features to Personal Productivity Optimizers who work solo and won’t use them.

Support prioritization: Route support tickets differently based on cluster. Client Project Coordinators might get priority support since they’re often managing paying clients. Seasonal Campaign Managers might get proactive check-ins before predicted busy periods.

For our client, this mapping revealed opportunities they’d completely missed. One cluster had been receiving generic “explore more features” emails when what they actually needed was advanced security capabilities for compliance requirements. Another cluster kept churning at the end of the trial because onboarding emphasized features they’d never use instead of the speed-focused tools that matched their workflow.

Phase 4: Develop test concepts to validate impact

Turn personalization opportunities into testable hypotheses. Don’t roll everything out at once. Start with high-impact, low-effort tests that prove the value of intent-based segmentation.

For our client, we proposed several test concepts structured to validate clusters quickly and build organizational confidence in the framework. Here are a few examples.

[Visual: Phase 4 of the intent-based user segmentation process]

Example Test 1: Intent-Based Onboarding Survey

Background: The organization lacked a way to identify user intent at the critical moment when users were most open to guidance: right after signup, but before they’d formed opinions about product fit.

Hypothesis: By asking users to self-identify their primary goal during their first meaningful session, we can segment them into actionable clusters that enable personalized feature discovery and messaging, improving 3-month retention rates by 5-10%.

Test design: During the first session (after initial signup but before deep engagement), present a brief survey asking: “What brings you here today?” with options that map directly to identified clusters:

☐ Coordinate my team’s daily work (Sprint Executors)

☐ Manage multiple client projects (Client Project Coordinators)

☐ Organize complex cross-functional initiatives (Cross-Functional Orchestrators)

☐ Track my personal tasks and goals (Personal Productivity Optimizers)

☐ Plan seasonal campaigns or events (Seasonal Campaign Managers)

☐ Update projects while on the go (Mobile-First Coordinators)

☐ Something else (with optional text field)

Then immediately personalize their first experience based on their response: Sprint Executors see a streamlined task creation tutorial, Client Project Coordinators get guidance on setting up client workspaces, etc.
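The survey-to-personalization routing is essentially a lookup table. Here’s a minimal sketch; the answer strings mirror the survey options above, but the flow identifiers (`streamlined_task_creation_tutorial`, `generic_onboarding`, etc.) are hypothetical names invented for illustration.

```python
# Map each survey answer to its intent cluster (from the test design above).
SURVEY_TO_CLUSTER = {
    "Coordinate my team's daily work": "Sprint Executors",
    "Manage multiple client projects": "Client Project Coordinators",
    "Organize complex cross-functional initiatives": "Cross-Functional Orchestrators",
    "Track my personal tasks and goals": "Personal Productivity Optimizers",
    "Plan seasonal campaigns or events": "Seasonal Campaign Managers",
    "Update projects while on the go": "Mobile-First Coordinators",
}

# First-run experience per cluster (two shown; remaining clusters omitted
# for brevity and fall through to the generic default).
FIRST_RUN_FLOW = {
    "Sprint Executors": "streamlined_task_creation_tutorial",
    "Client Project Coordinators": "client_workspace_setup_guide",
}

def route_first_run(answer: str) -> str:
    """Return the onboarding flow for a survey answer.

    'Something else' (or any unmapped answer) gets the control experience,
    which also gives you the signal for the 'learning test' criteria.
    """
    cluster = SURVEY_TO_CLUSTER.get(answer)
    return FIRST_RUN_FLOW.get(cluster, "generic_onboarding")
```

Keeping the mapping in data rather than branching logic makes it trivial to add, rename, or retire clusters as the framework is refined.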

Success metrics:

  • Primary: 3-month retention rate by selected cluster (looking for 5-10% lift)
  • Secondary: Survey completion rate (target: >80%), feature adoption aligned with cluster (target: 20% lift), time to first value-generating action
  • Guardrails: No negative impact on day 2 or day 7 retention

Acceptance criteria for “winning test”:

  • Survey completion rate >80%
  • 60% of users select a pre-set option (vs. “something else”)
  • Statistically significant retention lift in at least one cluster
  • No degradation in key engagement metrics

Acceptance criteria for “learning test”:

  • More than 40% of users select “something else” (suggests clusters don’t match user mental models)
  • No statistically significant difference in retention (suggests clusters exist, but personalization approach needs refinement)

Audience: New paid subscribers on first day, trial users converting to paid, reactivated users returning after 30+ days dormant. Start with 25% of eligible users to minimize risk.

Timeline: 90 days to measure primary retention metric, but early signals (completion rate, feature adoption) available within 14 days.

Example Test 2: Cluster-Specific Feature Recommendations

Background: Generic in-app messaging had low click-through rates (<5%) and wasn’t driving feature adoption. Users felt bombarded by irrelevant suggestions.

Hypothesis: For users who match behavioral flags within the first 14 days, triggering personalized feature recommendations will increase feature adoption by 20% and engagement depth by 15%.

Test design: Identify users by behavioral flags, then trigger targeted in-app messages at contextually relevant moments:

  • Sprint Executors see mobile app download prompt after completing 5 tasks on desktop: “Update tasks on the go: get the mobile app”
  • Client Project Coordinators see reporting features after creating third project: “Impress clients with professional progress reports”
  • Cross-Functional Orchestrators see dependency mapping after creating complex project: “Map dependencies to keep cross-functional teams aligned”

Success metrics:

  • Primary: Feature adoption rate for recommended features (target: 20% lift vs. control)
  • Secondary: Overall engagement depth (features used per session), message click-through rate
  • Guardrails: No increase in feature abandonment (starting but not completing flows)

Audience: Users who match cluster behavioral flags within first 14 days. Test one cluster at a time to isolate impact.

Timeline: 30 days to measure feature adoption impact.

Example Test 3: Retention Email Campaigns by Cluster

Background: Generic “tips and tricks” email campaigns had 8% open rates and weren’t moving retention metrics. Content felt irrelevant to most recipients.

Hypothesis: Segmenting email campaigns by identified cluster will improve email engagement by 50% and show a measurable correlation with retention.

Test design: Replace generic weekly tips emails with cluster-specific content:

  • Sprint Executors: “5 ways to speed up your daily standup” / “Mobile shortcuts that save 2 hours per week”
  • Client Project Coordinators: “How to impress clients with professional project reports” / “3 ways to give clients visibility without overwhelming them”
  • Personal Productivity Optimizers: “Build your second brain: advanced filtering techniques” / “Automate your recurring tasks in 5 minutes”

Send to users identified through either the onboarding survey or behavioral flags. Track engagement and retention by cluster.

Success metrics:

  • Primary: Email open rates (target: 50% lift), click-through rates (target: 100% lift)
  • Secondary: Correlation between email engagement and 90-day retention
  • Guardrails: Unsubscribe rates remain stable or decrease

Audience: Users identified as belonging to specific clusters either through survey responses or behavioral flags, minimum 14 days after signup.

Timeline: 6 weeks for initial engagement metrics, 90 days for retention correlation.

Post-test analysis framework

For each test, we established a clear decision framework:

If “winning test”:

  • Roll out to 100% of eligible users
  • Begin development on next phase of personalization for that cluster
  • Use learnings to inform tests for other clusters
  • Document what worked to build organizational playbook

If “learning test”:

  • Analyze all “something else” responses for missing clusters or unclear framing
  • Review behavioral data to see if clusters exist, but personalization was wrong
  • Iterate on messaging, timing, or format
  • Decide whether to retest with refinements or try different approach

If negative impact:

  • Immediately roll back to the control experience
  • Conduct user interviews to understand what went wrong
  • Reassess cluster definitions or personalization approach
  • Consider whether the cluster exists but needs a different treatment

The key to successful testing is starting small, measuring rigorously, and being willing to learn from failures. Not every cluster will respond to every type of personalization, and that’s valuable information. The goal isn’t perfect personalization immediately; it’s continuous improvement based on what actually moves metrics.

Intent-based segmentation mistakes and how to avoid them

Based on our experience implementing this framework, here are the mistakes that will derail your efforts:

1. Starting with too many clusters

More isn’t better. Six well-defined clusters are more useful than fifteen overlapping ones. You need enough clusters to capture meaningfully different intents, but few enough that teams can actually remember and act on them. Start with 4-6 clusters and refine over time. If you find yourself creating clusters that differ only slightly, you’ve gone too granular.

2. Confusing demographics with intent

Job title, company size, or industry might correlate with intent, but they don’t define it. We’ve seen solo consultants behave like “Cross-Functional Orchestrators” and enterprise teams behave like “Sprint Executors.” Focus on what users are trying to accomplish, not who they are on paper.

3. Creating overlapping clusters

Each cluster should be distinct in its primary intent and workflow patterns. If you’re struggling to articulate how two clusters differ behaviorally, they’re probably the same cluster with different labels. Test this by asking: “If I saw someone’s usage data, could I confidently assign them to one cluster?”

4. Ignoring edge cases entirely

Some users will span multiple clusters or switch between them based on context. That’s fine. The framework should accommodate primary intent while recognizing that users are complex. A user might primarily be a “Client Project Coordinator” but occasionally use “Personal Productivity Optimizer” features for their own task management. Don’t force rigid categorization.

5. Skipping the validation step

Your initial hypotheses will be wrong in places. User research and behavioral data keep you honest and prevent confirmation bias. We’ve seen teams fall in love with theoretically elegant clusters that don’t actually exist in their user base, or miss entire segments because they didn’t fit the initial hypothesis.

6. Treating clusters as static

User intent evolves. Someone might start as a “Personal Productivity Optimizer” and grow into a “Client Project Coordinator” as their business scales. Review and refine your clusters quarterly based on new data, product changes, and market shifts.

7. Personalizing too aggressively too soon

Start with high-confidence, low-risk personalization (like targeted email content) before you completely diverge user experiences. You want to validate that clusters behave differently before you build entirely separate onboarding flows.

8. Forgetting to measure impact

Intent-based segmentation is valuable only if it improves outcomes. Define success metrics upfront (e.g., retention lifts, engagement depth, upgrade rates, support ticket reduction) and track them by cluster. If personalization isn’t moving these metrics, refine your approach.
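Measuring impact by cluster doesn’t require heavy tooling to start. A minimal sketch of per-cluster retention and lift, assuming each user record carries an assigned cluster and a 90-day retention flag (the data shape is an assumption for illustration):

```python
from collections import defaultdict

def retention_by_cluster(users):
    """users: iterable of (cluster, retained_90d) pairs.

    Returns {cluster: retention_rate} so you can compare clusters
    and track each one against its own baseline over time.
    """
    counts = defaultdict(lambda: [0, 0])  # cluster -> [retained, total]
    for cluster, retained in users:
        counts[cluster][1] += 1
        counts[cluster][0] += int(retained)
    return {c: kept / total for c, (kept, total) in counts.items()}

def lift(test_rate: float, control_rate: float) -> float:
    """Relative lift of a personalized experience over the control."""
    return (test_rate - control_rate) / control_rate
```

A 0.55 test retention rate against a 0.50 control is a 10% relative lift, which is how targets like the “5-10% retention lift” above would be evaluated (with a significance test layered on top before declaring a winner).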

Making intent-based segmentation work for your organization

The framework we’ve outlined works across product categories and company sizes, but implementation varies based on your resources and organizational maturity.

If you have limited data: Start with Phase 1 and Phase 2, using qualitative research to define clusters before investing in behavioral infrastructure. You can manually tag users based on interview responses and onboarding surveys, then personalize through targeted emails and customer success outreach. As you grow, build the data systems to automate cluster identification.

If you have rich behavioral data but limited research capabilities: Reverse the order. Start with data patterns and validate through targeted interviews. Look for natural groupings in your analytics that suggest different workflow types, then talk to representative users from each group to understand their intent.

If you’re a small team: Don’t let perfect be the enemy of good. Start with 3-4 obvious clusters based on your highest-level workflow differences. The founder of a 10-person startup probably has a better intuitive understanding of user intent than a 500-person company with siloed data. Write down what you know, test it with a few users, and start personalizing.

If you’re a large enterprise: The challenge is getting organizational alignment, not defining clusters. Use Phase 1 to surface where teams already operate with different mental models, then use data to arbitrate. Create executive sponsorship for the new framework so it becomes the shared language across product, marketing, and CS.

The key is starting somewhere. Most companies know their one-size-fits-all approach isn’t working, but they keep personalizing around variables that don’t actually predict what users are trying to accomplish.

Intent-based segmentation reorients everything around the question that actually matters: What is this user trying to accomplish, and how can we help them succeed at that specific goal?

Turn insights into retention that drives revenue

Understanding user intent is just the first step. The real value comes from translating those insights into personalized experiences that keep users engaged and drive measurable revenue growth.

At The Good, we’ve spent 16 years helping SaaS companies identify their most valuable user segments and optimize experiences around what actually drives retention. Our systematic approach to user segmentation goes beyond frameworks. We help you implement experimentation strategies that prove which personalization efforts move the needle on the metrics your board cares about.

Plenty of companies struggle to implement segmentation that’s actually actionable. They end up with beautiful personas gathering dust or broad categories that don’t inform product decisions.

Intent-based segmentation is different because it connects directly to behavior you can observe and experiences you can personalize.

If you’re struggling with generic experiences that fail to resonate with different user types, or if you know your segmentation could be better but aren’t sure where to start, let’s talk about how intent-based segmentation could transform your retention strategy and drive revenue growth.

Now It’s Your Turn

We harness user insights and unlock digital improvements beyond your conversion rate.

Let’s talk about putting digital experience optimization to work for you.

The post The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company appeared first on The Good.

Why Are Free Users Churning? A Growth Leader’s 5-Step Guide To Auditing The Free User Experience
https://thegood.com/insights/why-are-free-users-churning/
Published Thu, 16 Oct 2025

“My free users aren’t converting, where do I start?”

If you’re asking this question, you’re already ahead of most product leaders. You recognize the problem. But here’s what many miss: conversion is a symptom, not the root cause of the problem.

SaaS churn often happens before users ever consider paying.

It’s common for users to hit friction points you didn’t know existed. They encounter gates that make no sense in context. They drop off at moments when just a bit more clarity could have kept them engaged.

The good news? You can fix this. But not by guessing. Not by copying what Dropbox or Notion does. And usually not by adding more features.

What you need is a systematic audit of your free or anonymous user experience. One that reveals exactly where users hit walls, why they bounce, and what you can do to keep them engaged long enough to see value.

This article walks through a five-step framework that SaaS product and growth leaders can use to audit their free experience and reduce churn. It’s the same approach we use with clients, adapted so you can run it internally. Fair warning: this takes work. But if you’re serious about improving SaaS user retention, it’s worth every hour.

Why your free experience impacts your retention rate

Before we get into the framework, let’s be clear about what we mean by “free experience.”

This includes any interaction where users engage with your product without paying. That could be a free trial, a freemium tier, anonymous tool usage, or limited feature access. It’s the first impression, the test drive, the “try before you buy” phase.

And it matters more than you think.

Most SaaS companies obsess over free-to-paid conversion rates. But conversion is a lagging indicator. By the time a user decides not to convert, the damage is already done. They disengaged days or weeks ago. They just didn’t tell you.

The real opportunity sits upstream. If you can identify and remove friction in the free experience, you don’t just improve conversion rates. You improve activation rates, engagement, time-to-value, and long-term retention. You build a user base that actually wants to pay because they’ve already seen the value.

Here’s how to find those friction points.

Step 1: Review your data for drop-off points

Start with what’s already happening in your product. Before you talk to anyone or look at competitors, you need to know exactly where users are getting stuck.

Dig into your product analytics. You’re looking for three things:

Activation drop-offs: Where do users abandon the onboarding flow? Which steps have the highest exit rates? If 60% of users drop off when asked to invite teammates, that’s a signal.

Feature engagement patterns: Which features do free users actually use? Which ones do they try once and never touch again? Are there features you’ve gated that users don’t even attempt to access?

Time-to-value analysis: How long does it take users to complete their first valuable action? And what percentage of users never get there? If your median time-to-value is three days, but 70% of users churn within 48 hours, you have a problem.

Set up a dashboard that tracks these metrics by cohort. New signups this week versus last month. Users from different acquisition channels. Free trial versus freemium. The patterns that emerge will guide your optimization priorities.
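If your analytics tool can export event-level data, a drop-off report can be as simple as counting unique users per funnel step. Here is a minimal Python sketch of that idea (the step names and event tuples are hypothetical, not from any specific analytics platform):

```python
# Hypothetical event log: (user_id, onboarding_step_completed)
events = [
    ("u1", "signup"), ("u1", "create_project"),
    ("u2", "signup"), ("u2", "create_project"), ("u2", "invite_teammates"),
    ("u3", "signup"),
    ("u4", "signup"), ("u4", "create_project"),
]

FUNNEL = ["signup", "create_project", "invite_teammates", "first_export"]

def funnel_dropoff(events, funnel):
    """Count unique users reaching each step, then the % lost between steps."""
    users_at_step = {step: set() for step in funnel}
    for user_id, step in events:
        if step in users_at_step:
            users_at_step[step].add(user_id)
    report = []
    for prev, curr in zip(funnel, funnel[1:]):
        reached, continued = len(users_at_step[prev]), len(users_at_step[curr])
        drop_pct = round(100 * (1 - continued / reached)) if reached else 0
        report.append((prev, curr, drop_pct))
    return report

for prev, curr, drop_pct in funnel_dropoff(events, FUNNEL):
    print(f"{prev} -> {curr}: {drop_pct}% drop-off")
```

Run a report like this weekly by cohort (signup week, acquisition channel, trial vs. freemium), and the steps with the steepest drops become your first investigation targets.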

Layer on session recordings and heatmaps to see exactly what’s happening at key drop-off points. Numbers tell you where the problem is. Qualitative data tells you why.

Watch 20-30 sessions of users who churned in their first week. What did they try to do? Where did they get stuck? What confusion or frustration is evident in their behavior?

This isn’t just a data review. It’s detective work. You’re building a picture of where your free experience breaks down.

Step 2: Talk to users (both active and churned)

Now that you’ve identified drop-off points in your analytics, it’s time to understand the human story behind those numbers. Conduct 10-15 interviews, split between two groups:

Active free users (people still using your product but haven’t upgraded): Why are they still here? What value are they getting? What would make them pay? What’s holding them back?

Churned users (people who tried your product and left): What were they trying to accomplish? Where did they get stuck? What made them give up? What would have kept them engaged?

Keep these conversations short (15-20 minutes) and focused. You’re not selling. You’re learning.

Sample questions for active free users:

  • What problem were you trying to solve when you first signed up?
  • Walk me through how you use [product] today.
  • What features do you wish you had access to?
  • What would need to change for you to consider upgrading?
  • If we removed [specific free feature], would you still use the product?

Sample questions for churned users:

  • What were you hoping to accomplish with [product]?
  • Where did you get stuck?
  • Was there a specific moment when you decided it wasn’t for you?
  • Did you consider other tools? What made you choose them instead?

Record these conversations (with permission) and transcribe them. The exact language users employ to describe their experience reveals friction points you’d never spot in analytics alone.

Pay special attention when users mention alternatives they considered or are currently using. This context becomes critical in the next step.

Step 3: Map what your users are being offered in the market

You now understand what’s happening in your product and why users make the decisions they do. The next question is: what are they comparing you against?

Your users don’t evaluate your free experience in a vacuum. They’re weighing it against every other tool they’ve tried, every competitor they’re considering, and every product they wish yours worked more like.

This step isn’t about copying competitors. It’s about understanding the full landscape of options your users are navigating.

Create a comprehensive inventory of how other products in your space (and adjacent to it) handle their free experiences. Document what your users are seeing elsewhere.

Here’s what to capture in a Figma or Notion file.

An example from The Good showing what to capture in Figma when auditing SaaS tools and answering why are free users churning?

Set up a page with one row per product. For each one, document:

  • What features are available without registration
  • What requires an email address but remains free
  • Where the hard paywalls sit
  • How they communicate limits (countdown timers, credit displays, etc.)
  • Placement and messaging of upgrade prompts
  • Onboarding flows and activation sequences

Don’t limit yourself to direct competitors. Look at the tools your users mentioned in interviews. If they’re comparing your productivity tool to Notion, your design tool to Figma, or your automation platform to Zapier, study how those products handle free users.

Pro tip: Screenshot everything. Your database should include visual documentation of every monetization touchpoint, limit notification, and upgrade CTA. These screenshots become invaluable references when you’re making decisions about your own experience.

This exercise typically takes 8-12 hours for a thorough analysis of five to seven products. You’ll surface approaches you hadn’t considered and identify industry patterns that users have come to expect.

The goal here is context. When a user hits a limit in your product, they’re mentally comparing that experience to how Dropbox handles storage limits, how Canva displays upgrade options, or how Grammarly shows premium features. Understanding those reference points helps you design a free experience that meets or exceeds market expectations.

Step 4: Run a verb scoring exercise

With data, user insights, and market context in hand, it’s time to systematically evaluate your own product’s free experience. This is where verb scoring comes in.

Verb scoring evaluates the discrete actions users can take in your product and assigns each one a “score” based on the level of friction required to perform it. The six verb scores are:

  • Anonymous – Users can take this action without providing any information
  • Limited Anonymous Use – Users can take this action without registration, but only a limited number of times
  • Free with Registration – Users must register (email + basic info), but can take this action unlimited times for free
  • Limited Registered Use – Registered users can take this action, but with caps or restrictions
  • Trial with Payment – Users must provide payment information to access this action (even if they’re not charged immediately)
  • Gated – Only paying customers can take this action
A chart from The Good outlining verb score, definition and purpose.

List every meaningful action users can take in your product. Not features, but actions. “Create a document” is a verb. “Edit collaboratively” is a verb. “Export to PDF” is a verb. “Share via link” is a verb.

Then score each one. Where does it fall on the spectrum from Anonymous to Gated?

This exercise reveals your actual monetization strategy, not the one you think you have. You’ll often find that verbs are gated inconsistently, or that you’re giving away too much (or too little) at critical moments.

For a detailed walkthrough of verb scoring, including decision trees and examples, see our guide on verb scoring for product strategy.

Create a verb scoring matrix that maps all your verbs against these six scores. This becomes your baseline. It shows exactly where friction exists in your free experience, allowing you to compare it directly to what you documented in Step 3.
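To make the baseline concrete, the matrix can start as a plain mapping from verbs to scores. A minimal Python sketch (the example verbs echo the ones above; the scores are the six defined earlier):

```python
# The six verb scores, ordered from least to most friction
SCORES = [
    "Anonymous",
    "Limited Anonymous Use",
    "Free with Registration",
    "Limited Registered Use",
    "Trial with Payment",
    "Gated",
]

# Hypothetical product verbs mapped to their current score
verb_matrix = {
    "create a document": "Free with Registration",
    "edit collaboratively": "Limited Registered Use",
    "export to PDF": "Gated",
    "share via link": "Anonymous",
}

def friction_level(verb):
    """Return a 0-5 friction index for a verb (higher = more friction)."""
    return SCORES.index(verb_matrix[verb])

# Sort verbs from least to most friction to see the baseline at a glance
for verb in sorted(verb_matrix, key=friction_level):
    print(f"{friction_level(verb)}  {verb_matrix[verb]:<24} {verb}")
```

Even this simple view makes inconsistencies visible: if “export to PDF” sits at the highest friction level while every competitor in your Step 3 inventory offers it free, that row is worth a hard look.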

Step 5: Connect the dots between data, users, market context, and verb scoring

This is where the audit comes together. You now have four layers of insight:

  1. Quantitative and qualitative data: Where users drop off and what they’re doing (or not doing)
  2. User feedback: Why they drop off and what they’re thinking
  3. Market context: What alternatives they’re comparing you against
  4. Verb scoring matrix: Where friction exists in your own product

Lay them side by side. Look for patterns.

Here’s what you’re hunting for:

Friction without reason

Look out for verb scores that create unnecessary barriers relative to market norms. For example, if your data shows 40% of users bounce before registering, user interviews reveal confusion about what your product does, and your market analysis shows that competitors allow anonymous exploration, you’re likely losing users before they experience value. Your verb scoring can reveal that you’re gating too early.

Value leaks

Check for free features that users love but don’t move them toward conversion. If your most-used free features have no connection to paid capabilities, and users in interviews can’t articulate why they’d upgrade, you’re building a user base that will never pay. Your verb scoring might show you’re giving away too many “Free with Registration” verbs without strategic “Limited Registered Use” prompts.

Invisible gates

Paywalls that users hit without understanding why. Your data shows sudden drop-offs at specific upgrade prompts. User interviews reveal confusion about value or poor timing. Market analysis shows competitors explain premium benefits more clearly. Your verb scoring identifies which verbs are gated, but not whether those gates make sense to users.

Poorly timed friction

Limits or gates that appear before users have experienced enough value. Data shows high bounce rates at the first upgrade prompt. User interviews reveal frustration: “I hadn’t even figured out the basics yet.” Market analysis shows that similar tools delay friction until after activation. Your verb scoring might reveal that you’re using “Limited Anonymous Use” or “Trial with Payment” too early in the journey.

Market misalignment

Patterns where your verb scoring differs significantly from market norms, and your churn data supports that this matters. For instance, if every competitor allows free PDF exports but you gate this behind payment, your churned user interviews will likely mention this as a dealbreaker.

Create a prioritized list of friction points based on:

  • Impact (how many users are affected, based on your data?)
  • Confidence (do your user interviews confirm this is a problem?)
  • Effort (how hard is this to fix?)
  • Market expectation (is this friction standard, or are you an outlier?)

This becomes your retention optimization roadmap.
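One lightweight way to turn those four criteria into a ranked roadmap is a simple additive score. A sketch (the 1-5 scales, example friction points, and the formula are illustrative assumptions, not a standard):

```python
# Each friction point scored 1-5 on the four criteria from the list above.
# Effort is subtracted so that easier fixes rank higher; the weighting
# scheme here is illustrative, adjust it to your own priorities.
friction_points = [
    {"name": "registration wall before value", "impact": 5, "confidence": 4, "effort": 2, "market": 5},
    {"name": "PDF export gated",               "impact": 3, "confidence": 5, "effort": 1, "market": 4},
    {"name": "upgrade prompt on day one",      "impact": 4, "confidence": 3, "effort": 3, "market": 3},
]

def priority(fp):
    """Higher impact, confidence, and market-outlier scores raise priority;
    higher effort lowers it."""
    return fp["impact"] + fp["confidence"] + fp["market"] - fp["effort"]

roadmap = sorted(friction_points, key=priority, reverse=True)
for fp in roadmap:
    print(f"{priority(fp):>2}  {fp['name']}")
```

The point isn’t precision. It’s forcing every proposed fix through the same four questions so the loudest opinion in the room doesn’t set the roadmap.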

Why this framework works

This five-step audit framework delivers three specific outcomes that improve SaaS user retention:

Get a clear path to higher retention rates: No more guessing. You’ll have a prioritized list of friction points ranked by impact and effort. Fix the top three and you’ll see measurable improvement in activation, engagement, and conversion.

Make data-driven decisions: Create a culture of user-centered decisions rather than those based on the highest-paid person’s opinion, historical choices, or a gut feeling. When you combine quantitative data, qualitative research, market context, and systematic verb scoring, arguments become easy to settle.

Prevent feature flop: Validate changes before implementation. You’ll know which gates to remove, which features to add to your free tier, and which upgrade prompts to reposition, all before you waste valuable development resources.

Teams that run this audit consistently report two things: first, they’re surprised by what they find. Assumptions they’d held for months or years turn out to be wrong. Second, the fixes are often simpler than expected. Sometimes all it takes is moving an upgrade prompt, clarifying messaging, or ungating a single feature.

Running this audit takes time (and that’s the point)

Let’s be honest: this framework requires a meaningful investment. Between data analysis, user interviews, market research, and verb scoring, you’re looking at 40-60 hours of work.

That’s assuming you have the right tools, know how to set up proper analytics, can recruit and interview users effectively, and have experience interpreting qualitative data.

For many SaaS teams, that’s exactly the problem. You know you need to audit your free experience. You know churn is killing growth. But your product team is building features, your growth team is running acquisition campaigns, and nobody has the bandwidth or expertise to run a proper retention audit.

That’s where The Good’s Digital Experience Optimization Program™ comes in.

We’ve run this exact process dozens of times for SaaS companies between product-market fit and scale. Companies like yours with $1M-$30M ARR and pressure to accelerate growth while battling churn.

Our team conducts the full audit, including data review, user research, market analysis, and verb scoring, and delivers a prioritized roadmap of friction points with specific recommendations. Then we help you implement, test, and optimize the changes.

The result? Clients typically see measurable improvements in activation and retention within 60-90 days. More importantly, they build an optimization discipline that compounds over time.

Want to see where your free experience is bleeding users? Schedule an introductory call to discuss how we can help you reduce churn and improve SaaS user retention.

FREE RESOURCE: How Top AI Tools Turn Free Users Into Paying Customers

The post Why Are Free Users Churning? A Growth Leader’s 5-Step Guide To Auditing The Free User Experience appeared first on The Good.

How to Validate Website Design Changes: A Decision Framework
https://thegood.com/insights/website-design-changes/
Published Thu, 28 Aug 2025

How do you know if that new homepage design, updated pricing page, or streamlined onboarding flow will actually improve conversions before you build it?

The default answer has been A/B testing. But while A/B testing remains the gold standard for high-stakes decisions, it’s not always the right tool for every design change. Many teams have fallen into the trap of either testing everything (creating bottlenecks and slowing innovation) or testing nothing (making changes based purely on intuition).

There’s a better way. By understanding when different validation* methods are most appropriate, SaaS teams can make faster, more confident design decisions while maintaining the rigor needed for their most critical changes.

*Note: We know validation is a bad word in the research community because it implies “proving you’re right,” but we feel it’s easier to read and more quickly comprehensible for those not in research disciplines. We’re using “validation” in this article, but “evaluation” or “confirm or disconfirm” would be more accurate in other settings.

The real cost of a bad experimentation strategy

When teams lack a clear strategy for validating decisions, they create what researcher Jared Spool calls “Experience Rot” – the gradual deterioration of user experience quality from moving too slowly or focusing solely on economic outcomes rather than user needs.

The costs manifest in several ways:

  • Opportunity cost: Market opportunities disappear while waiting for test results that may not even be necessary
  • Resource waste: Development time gets tied up in prolonged testing initiatives for low-risk changes
  • Analysis paralysis: Teams debate endlessly about what to test next instead of making decisions
  • Competitive disadvantage: Competitors gain ground while you’re stuck in lengthy validation cycles

The key is matching your experimentation method to the decision you’re making, rather than forcing every design change through the same validation process.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

A framework for design validation decisions

The path to better validation starts with two fundamental questions about any proposed design change:

  1. Is this strategically important? Does this change significantly impact key business metrics or user experience?
  2. What’s the potential risk? What happens if this change performs worse than expected?

Using these dimensions, you can map any design change into one of four validation approaches:

A decision making framework for validating decisions regarding website design changes.
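Because the framework reduces to two yes/no questions, it can be expressed as a tiny decision function. A sketch (the labels follow the quadrants described in this article):

```python
def validation_approach(strategically_important: bool, high_risk: bool) -> str:
    """Map the framework's two questions to one of the validation approaches."""
    if not strategically_important:
        return "Deprioritize"
    if not high_risk:
        return "Just ship it"
    return "Validate (A/B test or rapid test)"

# Hypothetical examples of each quadrant:
print(validation_approach(True, False))   # adding testimonials to pricing page
print(validation_approach(False, False))  # tweaking the footer layout
print(validation_approach(True, True))    # full pricing page redesign
```

A one-screen rule like this is easy to pin in a team wiki so that every proposed change gets triaged the same way before anyone debates test design.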

High Strategic Importance + Low Risk = Just ship it

If you can’t identify meaningful downsides to a design change but know it’s strategically important, you probably don’t need to validate it at all. These are your quick wins.

Examples for SaaS teams:

  • Adding customer testimonials to your pricing page
  • Improving mobile responsiveness
  • Fixing broken links or outdated screenshots
  • Adding clearer error messages in your product

Why this works: The upside is clear, the downside is minimal, and the time spent testing could be better invested elsewhere.

Low Strategic Importance = Deprioritize

Not every design change needs validation because not every change is worth making. Some modifications simply aren’t worth the time and resources, regardless of the validation method you might use.

Examples of low-impact changes:

  • Minor color adjustments to non-critical elements
  • Changing footer layouts
  • Tweaking secondary page designs that get little traffic
  • Adjusting spacing that doesn’t affect usability

When to reconsider: If data later shows these areas are creating friction, they can move up in priority.

High Strategic Importance + High Risk = Validation territory

This is where both A/B testing and rapid testing methods become valuable. The critical next decision becomes: can you reach statistical significance within an acceptable timeframe, and are you technically capable of running the experiment?

When to use A/B testing vs rapid testing

This decision tree helps determine if your website design changes should be tested or if another approach should be used.

When to use A/B testing for design changes

A/B testing remains your best option for design changes when:

  • You have sufficient traffic on the experience: Generally, you need 1,000+ visitors per week to the page being tested
  • The change is reversible: You can easily switch back if the results are negative
  • You need statistical confidence: Stakes are high enough to justify the time investment
  • Technical capability exists: Your team can implement and track the test properly
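To sanity-check the “sufficient traffic” criterion, you can estimate how long a test would need to run. A rough sketch using the common 16·p·(1−p)/δ² sample-size rule of thumb (approximately 95% confidence and 80% power; the numbers are illustrative, not a substitute for a proper power calculation):

```python
def weeks_to_significance(baseline_rate, min_detectable_lift, weekly_visitors):
    """Rough per-variant sample size via the 16*p*(1-p)/delta^2 rule of thumb,
    then how many weeks a 50/50 split of weekly traffic needs to reach it."""
    delta = baseline_rate * min_detectable_lift  # absolute difference to detect
    n_per_variant = 16 * baseline_rate * (1 - baseline_rate) / delta ** 2
    return (2 * n_per_variant) / weekly_visitors

# e.g. a 4% signup rate, hoping to detect a 20% relative lift,
# with 1,000 visitors per week to the tested page
print(round(weeks_to_significance(0.04, 0.20, 1000), 1))  # -> 19.2
```

Note how quickly this math turns against low-traffic pages: at 1,000 weekly visitors, detecting a modest lift on a 4% conversion rate takes months, which is exactly when rapid testing becomes the better tool.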

Examples of SaaS use cases for A/B testing:

  • Complete homepage redesigns
  • Pricing page layouts and messaging
  • Sign-up flow modifications
  • Core product onboarding changes
  • High-traffic landing page variations

When to use rapid testing for design changes

When A/B testing isn’t right due to traffic constraints, technical limitations, or time pressures, rapid testing provides a faster path to validation.

Rapid testing methods work particularly well for SaaS design validation because they can:

  • Validate concepts before development: Test wireframes and mockups before building
  • Narrow down options: Compare multiple design variations quickly
  • Identify usability issues: Spot problems before they reach real users
  • Provide qualitative insights: Understand the “why” behind user preferences

Examples of SaaS use cases for rapid testing:

  • New feature naming and messaging
  • Dashboard navigation restructuring
  • Enterprise sales page designs (low traffic)
  • Value proposition clarity testing
  • Multi-option comparisons (6-8 variations)

The natural next question might be “which rapid testing method should I use?” Here is another decision tree framework to help answer that.

This framework is a guide to determining which rapid testing method is best suited for your website design changes.

Incorporate your experimentation strategy into your design process

With a decision-making strategy for how and what to test, you’ll need to incorporate the strategy into your design process. The most successful SaaS teams don’t treat validation as an afterthought. They build it into their process from the beginning:

  • During ideation: Use rapid testing to validate concepts and narrow options before detailed design work
  • During design: Test wireframes and mockups to identify issues before development
  • Before launch: Use A/B testing for high-stakes changes, rapid testing for others
  • After launch: Continue testing iterations based on user feedback and performance data

The compounding benefits of a sound experimentation strategy

The goal isn’t to replace A/B testing with rapid methods or vice versa. Both have their place in a mature experimentation strategy. The key is understanding when each approach provides the most value for your specific situation and constraints.

Teams that master this balanced approach to validation see remarkable improvement, including:

  • 50% better A/B test win rates (because rapid testing helps identify winning concepts)
  • Faster time-to-market for design improvements
  • More confident decision-making across the organization
  • Better team morale from seeing results from their work more quickly

Perhaps most importantly, they avoid the extremes of either testing nothing (high risk) or testing everything (slow progress).

For SaaS teams serious about optimization, the question isn’t whether to validate design changes; it’s whether you’re using the right validation method for each decision.

Start by auditing your current design change process. Are you testing changes that should be implemented immediately? Are you implementing changes that should be tested? By aligning your validation approach with the strategic importance and risk level of each change, you can move faster without sacrificing confidence in your decisions.

And if you aren’t sure how to get started, our team can help.


The post How to Validate Website Design Changes: A Decision Framework appeared first on The Good.

How Does Experimentation Support Product-Led Growth?
https://thegood.com/insights/experimentation-product-led-growth/
Published Mon, 25 Aug 2025

The post How Does Experimentation Support Product-Led Growth? appeared first on The Good.

The product-led growth (PLG) playbook is no longer a secret. Free trials, frictionless onboarding, viral mechanics. Many SaaS companies are following the same script. Yet despite implementing all the product-led growth best practices, most companies leveraging these strategies hit a growth plateau, watching competitors with seemingly similar products pull ahead.

Here’s what they’re missing: the most successful product-led companies don’t just follow the playbook. They rewrite it based on what their actual users reveal through experimentation.

While everyone else copies best practices, companies that layer experimentation into their PLG strategy are discovering the specific insights that accelerate their growth. In a world where everyone has access to the same tactics, the ability to learn about your own users (and do it faster) becomes a moat.

Companies like Booking.com, Netflix, and Amazon didn’t achieve their dominance by following conventional wisdom. They made experimentation central to their success, running thousands of experiments annually to optimize their user experience. And you don’t need their resources to adopt their approach.

What is product-led growth?

Product-led growth is a strategy that emphasizes the product itself as the primary driver of customer acquisition, conversion, and retention.

Traditionally, companies have relied on sales and marketing tactics to create leads and drive customer adoption. Ads and websites had to do most of the selling, and the onus was on the potential user to read ads, navigate websites, choose between feature matrices, and, at times, go through a complicated sales process (on or off-site).

In a product-led growth model, companies remove as many obstacles as possible to acquiring free registered users. This approach often involves offering a free or freemium version of the product, allowing users to experience its value before committing to a paid subscription.

An infographic comparison of how experimentation product led growth differs from traditional sales models.

If the experience is good enough to keep them using it, and the paid features are valuable enough, then the hope is that users will ultimately convert into paying customers. In this way, the product serves as the main vehicle for customer acquisition and expansion.

Just as a dealership lets you test drive a car, these companies let you test drive their product and discover its value on your own before making a purchase decision.

Companies that successfully implement a product-led growth strategy often benefit from increased customer loyalty, higher conversion rates, lower customer acquisition costs, and sustainable long-term growth.

The shift from “launch and learn” to “test and learn”

Plenty of companies, between product-market fit and scale, run their growth strategies on a “launch and learn” philosophy. They build features based on hunches, ship them to users, then analyze the results afterward. This approach can work, but when operating on a product-led growth model, product decisions carry outsized impact. The product experience influences pretty much every KPI from acquisition to retention.

When you launch first and learn later, you’re essentially gambling with your users’ experience. Every poorly conceived feature, every friction point, every missed opportunity represents lost revenue and potentially churned customers. More importantly, it represents wasted development resources that could have been deployed more strategically.

This is where experimentation comes in. Instead of “launch and learn,” companies can shift to “test and learn.” This means experimentation and analysis of results happen pre-launch, not after. Changes are validated with real users before full implementation, minimizing risk and maximizing ROI.

Experimentation before implementation gives you an understanding of real customer behavior and clearly indicates how you can repeat results by uncovering the why behind those behaviors.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

How experimentation amplifies PLG success

Experimentation is only helpful to a product-led growth strategy when it is done right. So what are some implementation approaches that will amplify PLG success?

1. Systematic optimization across the customer journey

The most effective approach to PLG experimentation uses frameworks like ROPES (Registration, Onboarding, Product, Evangelize, Save) to systematically optimize each stage of the customer experience. Rather than randomly testing features, successful companies identify specific levers within each stage and experiment systematically.

For example:

  • Registration phase: Testing form length, social proof elements, and value propositions
  • Onboarding phase: Experimenting with tutorial formats, progress indicators, and time-to-value optimization
  • Product phase: Testing feature discoverability, UI changes, and user flow improvements
  • Evangelize phase: Optimizing sharing mechanisms, referral programs, and viral loops
  • Save phase: Testing retention tactics, upgrade prompts, and churn prevention strategies

This systematic approach ensures that experimentation efforts are strategic rather than scattered, creating compounding improvements across the entire user journey.

2. Accelerated learning through parallel testing

Traditional A/B testing approaches test one hypothesis at a time, which can drastically slow your learning velocity. Advanced PLG companies run multiple experiments simultaneously across different parts of their product experience, dramatically increasing the rate at which they gather insights.

The key to successful parallel testing is ensuring experiments don’t interfere with each other. As Natalie Thomas, our Director of UX and Strategy, explains: “It’s important to look at behavior goals to assess why your metrics improved after a series of tests. So if you’re running too many similar tests at once, it will be difficult to pinpoint and assess exactly which test led to the positive result.”

Successful parallel testing requires:

  • Creating testing roadmaps that cover independent product areas
  • Building small, cross-functional teams assigned to each area
  • Establishing clear metrics and success criteria for each test
  • Implementing proper statistical controls to avoid interference
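One widely used statistical control for parallel testing is layered, hash-based assignment: each experiment lives in its own “layer,” and a user’s variant in one layer is independent of their variant in any other. Here’s a minimal sketch in Python (the layer and variant names are hypothetical, not from any specific platform):

```python
import hashlib

def assign_variant(user_id: str, layer: str, variants: list) -> str:
    """Deterministically bucket a user into one variant within a layer.

    Salting the hash with the layer name makes assignments effectively
    independent across layers, so two experiments running in different
    layers won't be correlated for the same user.
    """
    digest = hashlib.sha256(f"{layer}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user lands in independent buckets per layer:
onboarding = assign_variant("user-42", "onboarding", ["control", "new_tutorial"])
pricing = assign_variant("user-42", "pricing_page", ["control", "social_proof"])
```

Because assignment is a pure function of user ID and layer, it is stable across sessions and requires no shared state between the teams running each test.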

3. Rapid experimentation for faster innovation

Speed matters in PLG. Market opportunities disappear quickly, and user expectations evolve constantly.

One of the main objections to implementing an experimentation strategy is that testing cycles often take weeks or months to complete. But high-performing PLG companies have found ways to cut this time in half without losing statistical rigor. Key strategies include:

Supplementing A/B Tests with Rapid Testing: Not every hypothesis requires a full A/B test. Qualitative research, user interviews, and rapid prototyping can validate concepts quickly before investing in development.

Modular Testing Approaches: Instead of starting from scratch each time, successful teams create reusable components like design templates, testing frameworks, and analysis processes to reduce setup time.

AI-Powered Research: Using artificial intelligence as a research assistant to speed up data collection, user recruitment, and insight generation.

Prioritization Frameworks: Implementing systematic prioritization (like the ADVIS’R framework) to ensure high-impact experiments get fast-tracked through the process.

4. Data-driven feature development

Experimentation helps PLG companies avoid the biggest roadmap mistake: prioritizing low-impact features. Instead of building what seems logical, experimentation reveals what actually drives user behavior and business metrics.

This is particularly important as you scale beyond basic PLG practices. When you’re competing with other product-led companies, the quality of your feature decisions becomes a key differentiator. Companies that systematically test and validate features before full development consistently outperform those that rely on intuition.

The most successful approach combines quantitative testing with qualitative insights. This means not just measuring what users do, but understanding why they do it. This deeper understanding enables teams to build features that truly resonate with users rather than features that just check boxes.

5. Building an experimentation-first culture

A welcome side effect of adding experimentation to a product-led growth strategy is that it builds the practice into your company culture. To get there, follow a few key steps.

Start with infrastructure

Before you can effectively use experimentation to support PLG, you need the right infrastructure. This includes:

  • Testing platforms that can handle both simple A/B tests and complex multivariate experiments
  • Analytics systems that provide real-time insights into user behavior
  • Data pipelines that connect user actions to business outcomes
  • Collaboration tools that enable cross-functional teams to work together effectively

Establish clear processes

Successful experimentation requires discipline. Teams need clear processes for:

  • Hypothesis formation and validation
  • Test design and statistical planning
  • Resource allocation and project management
  • Results analysis and decision-making
  • Knowledge sharing and organizational learning

Foster cross-functional collaboration

The most impactful experiments often come from unexpected sources. Engineers closest to the code understand technical constraints and opportunities. Designers see user experience friction points. Customer success teams hear directly from users about pain points.

Creating space for these diverse perspectives to contribute to experimentation efforts often leads to breakthrough insights that no single team would discover independently.

The compound effect of systematic experimentation

What makes experimentation so powerful for PLG companies is its compound effect. Each successful experiment doesn’t just improve one metric. It teaches you something about your users that informs future experiments.

Over time, this creates an accelerating cycle of improvement. Companies that have been systematically experimenting for years possess a deep, nuanced understanding of their users that newcomers can’t easily replicate. This understanding becomes a sustainable competitive advantage.

Moreover, experimentation capabilities themselves improve with practice. Teams get faster at designing tests, more sophisticated in their analysis, and better at translating insights into action. The infrastructure and culture that support experimentation become organizational assets that compound over time.

Experimentation as your PLG multiplier

Product-led growth without experimentation is like driving with your eyes closed. You might reach your destination, but probably not efficiently, and certainly not safely. Experimentation transforms PLG from a collection of best practices into a systematic approach to user-centered product development.

The companies that win in today’s competitive SaaS landscape aren’t just those with the best products; they’re those that can consistently improve their products based on real user insights. They’ve made experimentation not just a tactic, but a core organizational capability.

Ready to transform your PLG strategy with systematic experimentation? The Good specializes in helping product-led companies build experimentation capabilities that drive sustainable growth.

Our Digital Experience Optimization Program™ combines strategic frameworks like ROPES with hands-on experimentation support to help you uncover the specific insights your business needs to scale. Let’s explore how experimentation can accelerate your growth →

The post How Does Experimentation Support Product-Led Growth? appeared first on The Good.

5 SaaS Growth Strategies That Work (Based On Analysis Of 15 Top AI Tools) https://thegood.com/insights/saas-growth-strategies/ Wed, 13 Aug 2025 20:42:36 +0000 https://thegood.com/?post_type=insights&p=110756

The AI boom isn’t just about better technology; it’s about smarter growth strategies. While everyone’s talking about features and capabilities, there is another, equally compelling story that lies in how these tools convert free users into paying customers at unprecedented rates.

We dove deep into the user experiences of 15 top AI tools, documenting over 100 monetization touchpoints, upgrade pathways, and conversion tactics. What we found were five distinct patterns that drive revenue for these leaders.

These strategies aren’t just for AI. They’re blueprints that any SaaS tool can adapt to accelerate its own growth. Here’s what we learned.

The data behind the patterns

Our analysis covered tools spanning text generation (ChatGPT, Claude), search (Perplexity), design (Ideogram, Leonardo.AI), video creation (Runway), and productivity (Grammarly, QuillBot). Each tool was examined across four critical areas:

  • Monetization elements: Upgrade CTAs, limit notifications, premium feature gates, and more
  • Monetization pathways: The specific user journeys from free to paid
  • Pricing and payment screens: Where users actually convert when they decide to upgrade
  • Missed opportunities: Places where tools could be driving more conversions

What emerged were five clear patterns that high-converting tools use consistently.

Pattern #1: The progressive squeeze

The strategy: Start with subtle hints, then gradually increase conversion prompts as users become more invested.

Who’s doing it: Claude, ChatGPT, and Perplexity have mastered this approach.

How it works: These tools begin with gentle upgrade suggestions embedded in the interface. A small CTA in the sidebar, a mention of plan limits in account settings. As users engage more, the messaging becomes increasingly direct.

Claude exemplifies this perfectly. New users see a subtle “Free plan” indicator and a small upgrade CTA. After several conversations, users get friendly notifications about approaching limits. Only when limits are actually hit does Claude present the strong upgrade push with clear urgency messaging.

A screenshot from Claude as an example of effective SaaS growth strategies.

ChatGPT follows a similar pattern but with more touchpoints. Multiple upgrade opportunities appear once logged in, but the real conversion push happens when users try to upload files or access advanced features.

A screenshot from ChatGPT as an example of effective SaaS growth strategies.

Why it converts: Users invest time and mental energy before hitting any hard walls. By the time they reach limits, they’re already committed to the tool and see clear value in upgrading rather than switching to alternatives.

The missed opportunity: Many tools go straight to hard limits without the progressive buildup, losing users who might have converted with a gentler approach.
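The escalation logic behind a progressive squeeze can be as simple as mapping a user’s consumption against the free limit to increasingly direct messaging. A minimal sketch, with thresholds and copy that are purely illustrative (not taken from Claude or ChatGPT):

```python
def upgrade_prompt(used: int, limit: int):
    """Return an escalating upgrade message based on how much of the
    free allowance a user has consumed. Thresholds are illustrative."""
    ratio = used / limit
    if ratio >= 1.0:
        # Hard wall: the only point where urgency messaging appears.
        return "You've reached your free limit. Upgrade to keep going."
    if ratio >= 0.8:
        # Friendly heads-up as the limit approaches.
        return "You're approaching your free limit."
    if ratio >= 0.5:
        # Gentle nudge once the user is clearly invested.
        return "Enjoying the product? See what Pro unlocks."
    return None  # New users see only a quiet sidebar CTA.
```

The design point is that urgency only appears after investment; users early in the trial see nothing stronger than ambient CTAs.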

FREE RESOURCE


How Top AI Tools Turn Free Users Into Paying Customers


Opting In To Optimization

Pattern #2: The feature tease

The strategy: Show users exactly what they’re missing by displaying premium features prominently, then gating access.

Who’s doing it: Ideogram, Grammarly, and Leonardo.AI excel at this approach.

How it works: These tools don’t hide their premium features. Instead, they showcase them prominently with visual cues like lock icons, blurred previews, or “Pro” badges. Users can see the feature, understand its value, and often interact with locked elements that trigger upgrade modals.

Ideogram shows locked features upfront on the dashboard, displays private galleries as gated sections, and lets users click through to see upgrade benefits. When users generate images, editing options appear with clear visual indicators of which features require upgrading.

A screenshot from Ideogram as an example of effective SaaS growth strategies.

Grammarly shows blurred premium suggestions alongside free ones, lets users see statistics with tone analysis grayed out, and provides partial feature previews that create curiosity about the full experience.

A screenshot from Grammarly as an example of effective SaaS growth strategies.

Why it converts: Curiosity combined with FOMO creates powerful motivation. When users can see exactly what they’re missing and how it would solve their problems, the upgrade decision becomes much easier.

Implementation tip: The key is showing enough value to create desire while maintaining a clear visual hierarchy between free and premium features.

Pattern #3: The moment of need

The strategy: Present upgrade options precisely when users are most invested and would benefit most from premium features.

Who’s doing it: Runway, QuillBot, and Character.AI time their conversion prompts perfectly.

How it works: Instead of generic upgrade CTAs, these tools interrupt workflows at strategic moments when users are actively trying to accomplish something and would most benefit from premium features.

Runway waits until users want to export in 4K resolution or remove watermarks, both of which are moments when they’re already committed to using the generated content.

A screenshot from Runway as an example of effective SaaS growth strategies.

QuillBot triggers upgrade prompts when users hit word limits mid-task, not during idle browsing.

A screenshot from QuillBot as an example of effective SaaS growth strategies.

Why it converts: Perfect timing equals the highest conversion rates. When users are already invested in a task and premium features would immediately solve their problem, the upgrade becomes a logical next step rather than an interruption.

The psychology: This taps into the completion bias. Once users start a task, they’re motivated to finish it, making them more likely to pay to remove obstacles.

Pattern #4: The transparent countdown

The strategy: Create urgency and build trust by clearly showing usage limits, remaining credits, and reset timers.

Who’s doing it: Perplexity, Grammarly, and Copy.AI have perfected transparent limit communication.

How it works: Instead of surprising users with sudden limits, these tools constantly communicate remaining usage through progress bars, countdown timers, and clear messaging about when limits reset.

Perplexity shows “2 queries remaining today” with each search, giving users clear visibility into their usage without anxiety.

A screenshot from Perplexity as an example of effective SaaS growth strategies.

Grammarly displays credit counts and refill timers for AI features, so users can plan their usage accordingly.

A screenshot from Grammarly as an example of effective SaaS growth strategies.

Copy.AI uses a prominent word count progress bar that updates in real-time, showing exactly how much of their monthly limit has been used.

A screenshot from Copy.AI as an example of effective SaaS growth strategies.

Why it converts: Transparency builds trust while creating healthy urgency. Users appreciate knowing where they stand and can make informed decisions about when to upgrade rather than feeling tricked by hidden limits.

The trust factor: When users trust that limits are fair and clearly communicated, they’re more likely to see upgrading as a reasonable business transaction rather than being forced into paying.
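Mechanically, a transparent countdown is just remaining quota plus a visible reset time. A minimal sketch, assuming a daily limit that resets at 00:00 UTC (real products may use rolling windows or billing-cycle boundaries instead):

```python
from datetime import datetime, timedelta, timezone

def usage_status(used: int, daily_limit: int, now: datetime) -> dict:
    """Summarize remaining quota and when it resets (daily at 00:00 UTC).

    The reset schedule is an assumption for illustration only.
    """
    remaining = max(daily_limit - used, 0)
    # Next midnight UTC: roll forward one day, then zero out the time.
    next_reset = (now + timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    return {
        "remaining": remaining,
        "resets_in": next_reset - now,
        "message": f"{remaining} queries remaining today",
    }

status = usage_status(3, 5, datetime(2025, 12, 5, 20, 0, tzinfo=timezone.utc))
# status["message"] == "2 queries remaining today"
```

The resulting message mirrors the Perplexity-style copy above, and the `resets_in` delta is what drives a visible refill timer.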

Pattern #5: The omnipresent nudge

The strategy: Place multiple upgrade touchpoints throughout the interface without being intrusive.

Who’s doing it: ChatGPT, QuillBot, and Ideogram have mastered multi-touchpoint conversion.

How it works: These tools strategically place upgrade opportunities at different points in the user journey, including header CTAs, sidebar reminders, settings page options, and feature-specific prompts. The key is making each touchpoint feel contextual rather than repetitive.

ChatGPT places upgrade CTAs in the dropdown menu, file upload tooltips, model selection interfaces, and account settings. Each serves a different user intent and provides value beyond just asking for payment.

A screenshot from ChatGPT as an example of effective SaaS growth strategies.

QuillBot integrates upgrade opportunities into the workflow, for example, in premium mode selectors, feature benefit explanations, and contextual prompts that feel helpful rather than pushy.

QuillBot’s upgrade integrations are a good example of effective SaaS growth strategies.

Why it converts: Repetition without annoyance increases recall and provides multiple chances to convert users at different readiness levels. Some users need to see upgrade options multiple times before they’re ready to act.

The balance: The key is ensuring each touchpoint provides value or information, rather than simply asking for money repeatedly.

The standout performers

While all 15 tools showed growth-focused design, three stood out for their sophisticated monetization strategies:

Claude excels at the Progressive Squeeze, building user investment before presenting upgrade opportunities. Their limit messaging feels helpful rather than restrictive, and the upgrade pathway is seamless.

Ideogram masters the Feature Tease, showcasing premium capabilities so effectively that users understand the upgrade value before reaching any limits. Their visual hierarchy makes premium features aspirational rather than frustrating.

Perplexity nails the Transparent Countdown, creating urgency without anxiety through clear limit communication and value-focused messaging.

Common missed opportunities

Our analysis revealed several patterns where even successful tools leave money on the table:

  • Timing failures: Many tools show upgrade prompts during onboarding when users haven’t yet experienced value, rather than waiting for engagement.
  • Value communication gaps: Some tools gate features without clearly explaining the benefits, leading to confusion rather than desire.
  • Conversion pathway friction: Several tools send users to generic pricing pages rather than contextual upgrade flows that maintain momentum.
  • Limit surprises: Tools that suddenly cut off functionality without warning create frustration rather than conversion motivation.

Applying these patterns to your SaaS growth strategies

These AI growth strategies aren’t limited to AI tools. The underlying principles work for any SaaS looking to improve free-to-paid conversion:

Start with your user journey mapping

Identify key moments where users experience value and where they encounter limitations. These are your conversion opportunity points.

Audit your current upgrade messaging

Are you using the Progressive Squeeze, or do you jump straight to hard limits? Are you showing users what they’re missing with Feature Teasing?

Review your limit communication

Do users understand their usage limits and when they reset? Transparent countdown messaging reduces churn and builds trust.

Optimize your touchpoint strategy

Map where upgrade CTAs appear in your interface and ensure each serves a specific user need rather than just asking for payment.

Test your conversion timing

Are you presenting upgrade options when users are most invested (Moment of Need) or just when it’s convenient for your UI?

What does this mean for your growth strategy?

AI tools are teaching us that successful monetization isn’t always about restricting features; it can be about showcasing value, building trust, and timing conversion opportunities perfectly. The tools growing fastest aren’t necessarily those with the best AI models, but those with the smartest user experience design.

These patterns work because they align business needs with user psychology. Instead of seeing limits as barriers, users experience them as natural progression points toward greater value.

The AI boom provides a unique laboratory for studying growth tactics at scale. These tools process millions of users and can iterate rapidly, revealing what actually drives conversions versus what we think should work.

As AI capabilities become more commoditized, user experience (including monetization design) becomes the key differentiator. The tools implementing these patterns now are building sustainable competitive advantages that will persist even as the underlying technology evolves.

Taking action on these insights

The most successful SaaS companies will adapt these AI growth strategies to their own products before their competitors catch on. Start by analyzing your current monetization approach against these five patterns:

  1. Map your user journey to identify Progressive Squeeze opportunities
  2. Audit your feature visibility to implement Feature Teasing where appropriate
  3. Review your limit communication to adopt Transparent Countdown principles
  4. Time your conversion prompts to leverage the Moment of Need psychology
  5. Optimize your touchpoint strategy using Omnipresent Nudge best practices

The data from these 15 AI tools provides a roadmap, but implementation requires careful testing and optimization for your specific user base and value proposition.

Ready to apply these AI growth strategies to accelerate your SaaS growth? The Good specializes in analyzing user experiences and implementing conversion optimization strategies that turn insights into revenue. Our team has helped dozens of SaaS companies optimize their monetization flows using data-driven approaches just like this analysis.

Get your personalized monetization strategy audit. We’ll analyze your current user experience against these proven patterns and create a prioritized optimization roadmap tailored to your product and audience. Schedule a consultation with our team to discover how these AI growth strategies can accelerate your revenue growth.

The post 5 SaaS Growth Strategies That Work (Based On Analysis Of 15 Top AI Tools) appeared first on The Good.

Regulated SaaS Companies Need a Different Approach to Growth. What Actually Works? https://thegood.com/insights/regulated-saas/ Fri, 08 Aug 2025 18:36:19 +0000 https://thegood.com/?post_type=insights&p=110753

The conversation happens on nearly every discovery call we have with a leader tasked with optimizing SaaS or software for regulated industries. It starts with optimism about growth potential, then quickly shifts to the reality of their constraints.

Healthcare software companies can’t freely experiment with patient data. Financial technology firms face strict compliance requirements that limit onsite testing capabilities. Government contractors operate under security clearances that restrict user research. Insurance platforms must navigate complex regulatory frameworks. HR and ATS software handle sensitive employee data that requires careful privacy protection.

Experimentation seems nearly impossible under these circumstances, and the product-led growth strategies these teams see working for companies riding exponential growth waves, like Linktree or Lovable, can’t work for them.

These regulated SaaS companies still need to grow. They have the same fundamental challenges as any SaaS business: converting leads, reducing churn, and improving user experience. But the traditional growth toolkit doesn’t fit their reality, so let’s explore what can work.

The problem with product-led growth in regulated industries

Product-led growth has become the gold standard for SaaS success.

Companies like Canva, Grammarly, and Spotify have proven that letting users experience your product before purchasing leads to higher conversion rates, lower customer acquisition costs, and sustainable growth.

The strategy is to remove obstacles to product adoption, offer free trials or freemium versions, and let the product sell itself. These companies often move quickly and test new features relentlessly as a way to “hack” growth.

The product-led growth playbook includes:

  • Free trials and freemium models that give users immediate product access
  • Continuous A/B testing on live user experiences
  • Extensive user tracking and behavioral analytics to optimize conversion funnels
  • Rapid iteration based on user feedback and behavior data
  • Self-service onboarding that guides users to their “aha moment”
  • Viral growth loops where users invite others or share content

And it works…for many. But regulated SaaS companies see these success stories and struggle to replicate them.

How do you offer a free trial for an HR tool that has to be rolled out across an entire organization to be useful? How do you minimize sign-up friction for fintech software that requires bank information to function?

Experimenting with new features is too risky when a system failure or a disruption to emergency calling in telecommunications could result in massive fines.

Sometimes the stakes are too high for the product-led growth best practices that we see working in less-restrictive industries.

Regulated SaaS challenges are unique, and their growth solutions should be too

The challenges for this subset of SaaS companies are real and varied.

Compliance and privacy restrictions: Healthcare companies can’t freely test with patient data. Financial services face strict data handling requirements. Government contractors operate under security clearances.

Low traffic volume: Many regulated SaaS companies serve niche markets with limited user bases, making traditional A/B testing statistically impossible.

Long testing cycles: Collecting enough data across regions and customer segments to reach statistical significance can take years. Different customers use different features in different geographical locations, making it difficult to design meaningful experiments that won’t disrupt service.

Risk-averse customers: Enterprise clients in regulated industries don’t want to be testing subjects for new features or experiences.

Resource constraints: Many regulated SaaS companies are highly technical but lack dedicated growth or UX teams.
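The low-traffic constraint is easy to make concrete with a standard sample-size calculation for a two-proportion test. A back-of-envelope sketch using the normal approximation (the 3% baseline and 10% relative lift are hypothetical numbers, chosen only for illustration):

```python
from math import ceil, sqrt
from statistics import NormalDist

def samples_per_arm(base_rate: float, lift: float,
                    alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per variant to detect a relative lift
    in conversion rate (two-sided test, normal approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    p_bar = (p1 + p2) / 2
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Detecting a 10% relative lift on a 3% baseline takes tens of
# thousands of users per arm -- far beyond niche-market traffic.
n = samples_per_arm(0.03, 0.10)
```

At roughly 50,000+ users per arm, a niche product seeing a few hundred trials per month would need years to finish a single test, which is exactly why traditional A/B testing breaks down here.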

Unique challenges require unique solutions, and that is what The Good can provide.

The alternative: off-site experiment-led growth

The solution isn’t to abandon growth optimization. It’s to use different methods that work within regulatory constraints.

This is where off-site experiment-led growth becomes the game-changer.

Experiment-led growth is a strategic approach that relies on continuous research, experimentation, and data-driven decision-making to drive business improvements. It allows teams to rapidly iterate on ideas that improve UX, marketing, and more.

Regulated SaaS can add an extra layer to experiment-led growth by taking things off-site or out of the product experience. Moving the growth tactics and experimentation away from the regulated environment and live user base gives teams the chance to make changes freely and quickly, gauge user reaction to those changes, and either launch with confidence or kill the ideas.

While product-led growth relies on in-product experimentation with real users, off-site experiment-led growth validates hypotheses and optimizes experiences before they ever touch your production environment. Instead of letting users test drive your product to discover value, you test drive your assumptions about users to deliver value immediately.

This approach flips the model to accommodate the constraints that regulated SaaS companies face. You no longer have to iterate on live systems with real customer data. Instead, you can conduct experiments in controlled environments that don’t compromise compliance or risk customer relationships, gathering the same kinds of insights that drive product-led growth success through methods that work within your constraints.

The result is a growth strategy that’s both data-driven and compliant, giving regulated SaaS companies access to the same optimization advantages that unrestricted companies enjoy, just through different means.

Off-site experiment-led growth tactics

Here are a few of the methods we use to deliver optimization outcomes for companies with the challenges and constraints outlined earlier in the article.

User testing

Because of the difficulty in getting customer data, there can be a disconnect between product teams and users.

Lookalike user testing solves this by recruiting external participants who match your ideal customer profile and running them through your live experience. They complete tasks while thinking out loud, revealing friction points and confusion without exposing any sensitive data or requiring system changes.

This helps you understand user behavior patterns, identify conversion barriers, and validate solutions, all without touching your production environment or compromising compliance.

AI-powered heatmaps and analytics

AI-generated heatmaps can predict user behavior with 92% accuracy without requiring any actual user data. These tools can analyze your interface and predict where users will look, what they’ll miss, and how long they’ll engage with different elements.

This is particularly valuable for regulated companies because you can understand user attention patterns and optimize layouts before the system ever reaches real users.

Rapid testing

Experimentation is a proven way to get essential feedback on new features or website changes. And with A/B testing off the table in many regulated industries, rapid testing can fill in the gaps.

Unlike traditional A/B testing, rapid testing doesn’t require code changes, live traffic, or long research cycles. Instead, it uses a combination of techniques to validate hypotheses and inform decisions before anything goes live.

Rapid experimentation is not a one-size-fits-all process. Different scenarios call for different types of tests. Here are some common methods:

  • First-click tests: Evaluate whether users can intuitively find the primary action or information on a page.
  • Tree tests: Reveal how users navigate your website or app’s structure.
  • 5-second tests: Assess a user’s immediate impression of a design or message.
  • Design surveys: Collect qualitative feedback on wireframes or mockups.
  • Preference tests: Show users two or more design variations and ask which they prefer and why. Perfect for narrowing down visual or messaging options before launching a formal test.
  • Card sorting: Reveals how users organize and categorize information.

These are just six of the many types of rapid experimentation.

While none is a 1:1 substitute for A/B or multivariate testing, rapid experimentation lets regulated SaaS companies focus their development resources on work that has already shown positive signals from users.

For a tangible example, imagine a company struggling with positioning (a common challenge in technical, regulated industries). Five-second testing provides immediate feedback on messaging effectiveness: users see your page for five seconds, then report what they remember.

Competitive intelligence and market research

Structured competitive analysis and market research don’t require access to your own user base.

Understanding how competitors position themselves, what messaging resonates in your industry, and what user expectations exist can inform optimization decisions.

Gathering growth strategies from businesses in similar industries with compliance or other constraints also offers a starting point for new ideas you can rapid test later.

Getting started with optimization

Optimization can be intimidating and complex for regulated SaaS companies. Based on experiences working with teams like yours, here’s how to get started implementing growth optimization within your constraints.

1. Start with an audit or assessment of your current situation

Before making any changes, conduct a comprehensive audit of your current digital experience. This includes:

  • Technical tracking setup to understand what data you can legally collect
  • User journey mapping to identify critical conversion points
  • Competitive analysis to understand industry standards and opportunities
  • Stakeholder interviews to align on growth priorities and compliance requirements

2. Implement the methodologies we covered

Focus on techniques that provide insights without requiring on-site or in-product experimentation:

  • User testing with 5-7 participants per user type (you’ll get 80% of insights from this small sample)
  • Message testing to validate positioning and value propositions
  • Prototype testing for new features or flows before development
  • Heat mapping to understand attention patterns and interaction likelihood

3. Prioritize based on impact and compliance

Create a roadmap that balances growth potential with regulatory requirements. Focus on:

  • High-impact, low-risk optimizations that don’t require system changes
  • Messaging and positioning improvements that can be implemented quickly
  • User experience enhancements that reduce friction without compromising security
  • Qualification improvements to ensure you’re attracting the right prospects

4. Build your internal capabilities and outsource what you can’t

Many regulated SaaS companies lack dedicated growth resources. Consider:

  • Training technical teams on user experience principles
  • Establishing research processes that work within compliance frameworks
  • Creating feedback loops between customer-facing teams and product development
  • Implementing regular optimization cycles that don’t disrupt core operations
  • Outsourcing what you just can’t manage internally

Growth within constraints isn’t impossible

Regulated SaaS companies don’t need to accept mediocre growth because of their constraints. They need different approaches that work within their reality.

The key is recognizing that optimization isn’t restricted to product-led strategies or A/B testing. Understanding your users, validating your assumptions, and making data-driven decisions can deliver outcomes that are just as impactful.

Whether you’re in healthcare, financial services, government, or any other regulated industry, growth optimization is possible. It just requires the right toolkit and a willingness to think beyond traditional approaches.

Making off-site experiment-led growth work within your regulatory constraints starts with a conversation. Learn what’s actually possible when you have the right methodology and expertise guiding your optimization efforts by getting in touch with our team.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post Regulated SaaS Companies Need a Different Approach to Growth. What Actually Works? appeared first on The Good.

Building A Monetization Strategy That Doesn’t Leave Untapped Revenue in Your User Base https://thegood.com/insights/monetization-strategy/ Thu, 17 Jul 2025 15:22:34 +0000


Product leaders are rightfully obsessed with acquisition. They pour resources into new sign-ups and track monthly active users religiously.

But over many years of working with SaaS teams, there is something counterintuitive we’ve learned about this approach.

The companies that scale fastest aren’t always the ones acquiring the most users. They’re often the ones who build monetization strategies that focus on their existing user base. As users find more value in the tool and increase usage, pricing scales accordingly.

Realistically, every SaaS tool will hit a growth plateau. There aren’t infinite users who will find value in your product, even though we all wish there were.

The goal is to build growth into your monetization strategies so you don’t leave any untapped revenue in your existing user base. This ensures you don’t reach a premature growth plateau once net new users become stagnant.

The fundamental shift in monetization strategy from seats to value

Before we get started, throw your traditional SaaS monetization playbook out the window.

For years, companies relied on seat-based pricing because it made sense: each new hire meant another seat purchased, and revenue grew linearly with team size.

But now one person can do the work of two or three people. AI tools, automation, and productivity software mean that the relationship between users and value creation has completely shifted. When your customers can accomplish twice as much with half the team, seat-based pricing isn’t sustainable.

Smart companies are pivoting to value-based extraction. Instead of charging for the number of people using the software, they charge for the value it creates. This isn’t just about switching to usage-based pricing; it’s about fundamentally rethinking how you capture the value your product delivers.

Consider HubSpot’s evolution. Instead of sticking to their standard seat-based pricing model as the market has evolved, they’ve created a dynamic pricing system. Users can pay for seats at their specific account tier, but also have a layer of contact-based pricing, aligning cost with the actual value delivered rather than just the number of users.

They’ve also recently added token-based pricing for certain functions in the tool, like marketing email sends, AI features, and API calls. These changes allow them to maintain revenue growth even as customers reduce their seat count.

You’re trying to capture more of the consumer surplus

Most SaaS tools have a consumer surplus. There are features or outcomes that customers would pay more for, but don’t have to because of your pricing model.

You can never eliminate all surplus (you need happy customers), but you can likely capture more of it through strategic segmentation and value extraction.

Think about your demand curve. It’s not a straight line. It’s a complex slope that varies by customer segment, use case, and willingness to pay. Most companies set one or two price points and leave massive value on the table. The companies that scale create multiple packages along that curve.
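To make this concrete, here is a toy sketch. The willingness-to-pay figures and tier prices below are invented purely for illustration; the point is how much revenue a single price point leaves on the table compared to tiers placed along the same demand curve.

```python
# Toy demand curve: one willingness-to-pay figure per customer
# (hypothetical numbers, not from any real dataset).
wtp = [8, 8, 8, 12, 12, 18, 18, 25]

def revenue(price_points):
    # Each customer buys the most expensive tier they can afford,
    # or nothing if every tier is above their willingness to pay.
    total = 0
    for w in wtp:
        affordable = [p for p in price_points if p <= w]
        if affordable:
            total += max(affordable)
    return total

print(revenue([12]))         # single price point → 60
print(revenue([8, 12, 18]))  # three tiers → 102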

Netflix understood this when it evolved from a single $7.99 plan to Basic, Standard, and Premium tiers. Each tier captures a different segment of willingness to pay while allowing customers to self-select into the option that works for them. However, the real insight wasn’t in the tiers themselves, but in the understanding that different customer segments valued different features. That knowledge allowed Netflix to extract more value from customers who were willing to pay more while keeping price-sensitive customers from defecting.

Research changes everything

To get started on a monetization strategy based on value and capture more of the consumer surplus, companies have to build their understanding of what customers are willing to pay for.

Research from monetization and pricing expert Madhavan Ramanujam says that 20% of features drive 80% of willingness to pay. The challenge is to make sure you aren’t over-indexing on features that customers don’t actually value while underdeveloping the ones that drive revenue.

The solution is systematic research that reveals what customers actually want to pay for. Here are three methods to make it happen:

Max diff analysis: Present customers with feature lists and ask them to identify the most and least important items. With enough volume, you can rank features by their impact on willingness to pay. Features that over 50% of customers want become your “leader” features or the core value proposition that justifies your price point.
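A minimal sketch of the scoring math: a simple count-based MaxDiff score is the number of times a feature was picked as most important minus the times it was picked as least important, normalized by how often it was shown. The feature names and tallies below are invented for illustration.

```python
# Hypothetical best/worst tallies from a MaxDiff survey.
tallies = {
    "SSO & audit logs": {"best": 42, "worst": 5, "shown": 120},
    "Custom reports": {"best": 35, "worst": 12, "shown": 120},
    "Dark mode": {"best": 8, "worst": 51, "shown": 120},
}

# Score = (picked best - picked worst) / times shown.
scores = {
    feature: (t["best"] - t["worst"]) / t["shown"]
    for feature, t in tallies.items()
}

for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {score:+.2f}")
```

Features that land at the top of this ranking across segments are candidates for your “leader” features; ones near the bottom are candidates to deprioritize.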

Anchoring questions: Instead of asking customers what they’d pay (which doesn’t work), ask them to compare your value to a known competitor. “If Salesforce brings your team 100 points of value, where do we rank?” This gives you relative value positioning without the discomfort of direct pricing questions.

Van Westendorp pricing: Ask customers four questions about price sensitivity: What’s acceptable? What’s expensive but you’d consider it? What’s so expensive that you wouldn’t consider it? What’s so cheap that you’d question the quality? This reveals the psychological price boundaries for different customer segments, providing a window of tenable prices that capture both the price-sensitive and high-willingness-to-pay corners of the market.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

The Shopify monetization strategy: how to scale with your customers

One of the most effective monetization strategies is one that grows with your customers. Shopify cracked this code by creating a model where their revenue increases as their customers become more successful. Instead of charging an ever-larger flat monthly fee, they take a percentage of gross merchandise volume (GMV).

This creates a virtuous cycle: Shopify is incentivized to help their customers succeed because customer success directly translates to revenue growth. When a merchant goes from $10,000 to $100,000 in monthly sales, Shopify’s revenue from that customer increases 10x.
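A toy comparison shows why revenue tracks merchant success under this model. The 2% take rate and $299 flat fee are invented numbers, not Shopify’s actual pricing.

```python
# Revenue per merchant under a flat fee vs. a GMV take rate.
# All numbers are hypothetical, chosen only to illustrate the shape.
TAKE_RATE = 0.02  # 2% of gross merchandise volume
FLAT_FEE = 299    # flat monthly subscription

def monthly_revenue(gmv, model):
    return FLAT_FEE if model == "flat" else TAKE_RATE * gmv

for gmv in (10_000, 100_000, 1_000_000):
    print(gmv, monthly_revenue(gmv, "flat"), monthly_revenue(gmv, "take_rate"))
```

When a merchant’s GMV grows 10x, take-rate revenue grows 10x while flat-fee revenue stays flat, which is exactly the alignment described above.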

Smaller businesses benefit from a proportional cost as they get started, and if businesses leave once they grow, Shopify doesn’t mind.

Shopify actually optimizes for this churn, not against it. As Archie Abrams, VP of Product and Head of Growth at Shopify, explains: “The way we think about churn [goes] back to Shopify’s mission and what we want to do, which is to increase the amount of entrepreneurship on the Internet.”

Instead of trying to prevent customers from leaving, Shopify focuses on lowering barriers to entry so more entrepreneurs can try starting businesses. They know most will fail, but the few who succeed generate massive value. This counterintuitive approach has helped Shopify power over 10% of U.S. e-commerce with $235 billion in GMV in 2023.

The beauty of this model lies in its retention through value creation, rather than friction. Traditional SaaS companies worry about churn because losing a customer means losing all their revenue. But when your revenue scales with customer success, churn becomes less of a concern. Your most successful customers are worth 10x or 100x more than your average customer, creating a natural buffer against churn.

Finding your untapped revenue

The process of discovering untapped revenue in your user base can be synthesized into a few steps:

Step 1: Segment your demand curve

Different customer segments have different willingness to pay. Enterprise customers might value security and compliance features, while SMBs prioritize ease of use and cost. Map these segments and understand what each values most.

Step 2: Identify value gaps

Look for places where customers are getting significant value but paying relatively little. These are your biggest opportunities for revenue expansion. Often, these are found in features that save customers time or help them make money.

Step 3: Create extraction mechanisms

Build pricing tiers, usage limits, or premium features that allow high-value customers to pay more for the value they receive. The key is making this feel like a fair exchange rather than a penalty.

The most effective monetization strategies combine multiple approaches. For example:

  • Base + usage: Provide a predictable subscription base with usage-based charges for additional value. This gives customers cost certainty while allowing you to capture upside from heavy users.
  • Tiered value: Create pricing tiers based on customer segments and use cases, not just feature lists. Each tier should feel designed for a specific type of customer.
  • Expansion revenue: Build mechanisms for customers to naturally increase their spending as they grow. This could be through additional seats, increased usage, or premium features.
  • Value-based upgrades: Tie pricing increases to value delivered, rather than just features added. When customers see clear ROI, they’re willing to pay more.
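The “base + usage” combination above can be sketched as a simple invoice calculation. The plan price, included allowance, and overage rate are hypothetical.

```python
def monthly_invoice(base_fee, included_units, used_units, overage_rate):
    """Base subscription plus a per-unit charge for usage over the allowance."""
    overage = max(0, used_units - included_units)
    return base_fee + overage * overage_rate

# A customer on a $99 plan with 10,000 included API calls who used
# 14,500 calls at $0.01 per extra call:
print(monthly_invoice(99, 10_000, 14_500, 0.01))  # → 144.0
```

The base fee gives the customer cost certainty; the overage term is where you capture upside from heavy users.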

Step 4: Test and iterate

Pricing optimization is an ongoing process. Test different approaches, measure customer response, and iterate based on data. The best monetization strategies evolve continuously.

A monetization strategy that works for the long term

The future of SaaS monetization is about aligning pricing with value creation rather than resource consumption. The untapped revenue in your user base is real, measurable, and accessible, and approaching it with a value-based strategy will help you capture it.

At The Good, we specialize in helping SaaS companies optimize their monetization strategies through data-driven research and strategic experimentation.

Our services can help you identify value gaps, design pricing experiments, and implement changes that drive meaningful revenue growth. Get in touch to learn how we can help you extract more value from your existing customers.


The post Building A Monetization Strategy That Doesn’t Leave Untapped Revenue in Your User Base appeared first on The Good.

How to Drive Account Expansion with Collaborator & Team Features That Stick https://thegood.com/insights/account-expansion/ Sat, 12 Jul 2025 18:40:04 +0000


Every user who finds genuine value in your product has a network of colleagues, teammates, and stakeholders who could benefit from the same solution.

Yet lots of companies treat their existing users as endpoints instead of starting points. They focus on acquiring new customers rather than leveraging the growth potential already sitting in their user base.

There are plenty of strategies for maximizing your existing user base, including growth loops and positive network effects, which we’ve covered in other articles. Today I want to focus on an underrated revenue opportunity: strategically building features that drive collaboration.

Why account expansion through collaboration can beat traditional sales tactics

The traditional approach to account expansion relies on sales teams identifying upgrade opportunities and convincing decision-makers at a company of the value. However, a new group of buyers is not accounted for in this model.

“Citizen SaaS buyers” now influence 40% of all company SaaS spending. These aren’t IT decision-makers; they’re everyday users who either A) find tools so valuable that they eventually buy them for their teams or B) see how much more effective it would be if more team members used them, so they advocate for upgrades.

These users typically start as single-seat or individual account holders. Instead of following a traditional sales-led expansion path, they discover upgrade paths through team and collaboration features. This style of collaboration-focused expansion makes upgrading feel like an extension of getting work done. It focuses on what naturally happens when users find real value.

How does it work in action?

This model of account expansion creates self-reinforcing cycles where user actions naturally drive more user actions. Unlike traditional sales funnels that end with a purchase, growth loops turn a user interaction into a potential expansion opportunity.

Here’s how a collaboration-driven growth loop works:

  • User finds value → Individual user discovers your product solves a real problem
  • User enhances value through collaboration → To maximize the solution, they need to involve teammates
  • Collaboration creates shared investment → Team builds workflows, templates, and shared resources
  • Shared investment increases dependency → Team becomes reliant on collaborative workflows
  • Dependency drives expansion → Team needs more features, seats, or capabilities
  • Expansion enables bigger problems → Larger teams tackle more complex challenges
  • Bigger problems require more collaboration → Loop repeats at a larger scale

This isn’t just theory. We’ve seen this pattern drive expansion in everything from design tools to project management platforms. The key is designing features that naturally create more collaboration opportunities. Eventually, revenue grows through authentic value creation rather than time-intensive upselling.

Understanding the types of collaboration and team features

Before diving into strategy, it’s helpful to understand the different types of collaboration features that SaaS companies use to turn individual users into team advocates. These features work best when they feel like natural extensions of your core product value rather than bolted-on additions.

Sharing and access features

These are the foundations of most collaboration strategies. Users can share specific content, projects, or workspaces with colleagues. Examples include shared documents, project folders, dashboard links, or design files. The key is making sharing feel essential to getting work done rather than optional.


Notion has clear shared workspaces, allowing groups of individual users or “teams” to share documents, templates, and files.

Invitations

Direct invitation systems let users add colleagues to their accounts or workspaces. This includes features like “Add team member,” workspace invitations, or role-based access controls. The most effective invitation systems make it obvious why adding someone will improve the work for everyone involved.


Google Meet offers pre-meeting invite capabilities and makes it simple to add new attendees to a meeting with multiple invitation calls-to-action, and even provides suggestions of individuals you can add.

Real-time collaboration

Features that let multiple people work on the same thing simultaneously. This includes co-editing documents, collaborative whiteboards, shared design files, or synchronized data entry. Real-time collaboration often creates the strongest expansion pull because it makes individual work feel incomplete.


Figma is a masterclass in real-time collaboration, with shared files, “jam sessions” or timed working sessions, and even name tags on cursors to see where collaborators are in the file.

Communication and feedback tools

Built-in ways for team members to communicate about shared work. This includes comment threads, @mentions, approval workflows, or status updates. These features keep conversations contextual to the work, making your product the natural hub for project communication.


Airtable offers commenting, tagging, and assignment features throughout the tool, allowing teams to notify each other and host conversations in relevant project spaces.

Permission and role management

Systems that let users control who can see or edit what. This includes viewer/editor roles, department-level access, guest permissions, or approval hierarchies. Good permission systems make it safe and easy to include external stakeholders in workflows.


TLDV clearly outlines the sharing permissions on videos with levels of access, including “my team,” “my organization,” and individual users. There are also general access links if you want to share beyond account holders.

Workflow and process sharing

Features that let users create templates, processes, or automated workflows that others can use. This includes shared templates, workflow automation, or standardized processes. When teams build shared workflows, they create a collective investment in your platform.


Canva has brand kits, controls, and templates that can be shared amongst your team to help standardize and speed up your design workflows.

Social and activity features

Elements that show what team members are working on and create visibility into collaborative work. This includes activity feeds, presence indicators, or team dashboards. These features help teams stay coordinated while showcasing the value of collaborative work.


Slack offers great visibility into who is online with the green or transparent status dot next to users in the sidebar, giving easy indicators of who is available for active collaboration.


7 tactics for building collaboration features that drive account expansion and keep users around

The good news is you don’t have to pick just one type of collaboration feature. You can combine multiple types to create comprehensive collaborative experiences that make teamwork feel natural and essential. Here are some essential tactics to help you do just that.

1. Make sharing more valuable than working alone

The biggest barrier to collaboration isn’t technical; it’s behavioral. Users default to working alone unless collaboration is obviously easier and more valuable than individual work.

Design your product so that collaborative features provide immediate, obvious benefits that individual work can’t match. Don’t just make collaboration possible; make it essential for getting the best results.

For example, Figma revolutionized design by making real-time collaboration the default experience. Instead of designers working in isolation and then sharing static files, Figma made the design process inherently collaborative. Stakeholders could see work in progress, provide feedback in context, and feel involved in the creative process. This didn’t just improve design quality; it naturally expanded usage to include project managers, developers, and executives who previously only saw final designs.

2. Build features that create shared investment

When users invest time in building collaborative structures, they create switching costs that extend beyond individual preferences. The more a team builds together, the harder it becomes to leave your platform.

Provide tools that enable users to create shared resources, templates, and workflows that become more valuable as more people contribute to them. Make it easy to start collaborative structures and painful to abandon them.

A good example is Notion’s template system, which creates significant shared investment. When a team builds a comprehensive project management template with custom properties, linked databases, and automated workflows, they’re not just organizing their current work; they’re creating a system that becomes more valuable as more team members contribute to it. Removing team members from the workspace breaks the system, creating a natural resistance to downsizing.

3. Make collaboration visible and desirable

When users see colleagues accessing information, participating in decisions, or benefiting from workflows they can’t access, they naturally want to be included. Visibility drives demand for inclusion.

Make collaboration visible and valuable. Show users what they’re missing when they’re not part of collaborative workflows. Create transparency around who’s involved in what work, and make it easy to request access or suggest inclusion.

One example is Slack’s channel system, which creates visibility that drives expansion. When important decisions happen in channels users can’t access, they naturally request to be added. When they see colleagues sharing resources, celebrating wins, or coordinating work in channels they can observe but not participate in, they want to create their own channels for their work. This visibility drives organic expansion as users advocate for broader team adoption.

4. Include stakeholders who don’t use your product daily

Most SaaS tools start with individual users and try to expand outward. A better approach is to identify who needs to be involved for your primary users to be successful, and then build features that naturally include those stakeholders.

Map out who needs to be involved for your users to achieve their goals. Design features that make it easy to include those stakeholders in workflows, even if they’re not primary users of your product.

Miro understood that successful brainstorming sessions require diverse perspectives. Instead of building a tool just for facilitators, they created features that make it easy to include participants who might never use Miro independently. Guest access, simple sharing links, and intuitive contribution tools mean that workshop participants don’t need to be Miro experts to add value. This naturally expands usage to include executives, clients, and cross-functional team members who become advocates for broader adoption.

5. Recognize when users need help and suggest collaboration

The most effective collaboration features activate automatically when users hit natural collaboration points in their workflow. Instead of requiring users to remember to invite colleagues, smart systems recognize when teamwork would be valuable and make it easy to initiate.

Identify the moments in your user workflows where collaboration would be most valuable. Build features that recognize these moments and proactively suggest or facilitate collaboration.

Canva’s team features activate when users create designs that would benefit from collaboration. When a user creates a brand template, the platform suggests inviting brand managers. When they start a campaign design, it recommends involving marketing team members. When they build a presentation, it offers to share the deck with stakeholders for feedback. These suggestions feel helpful rather than pushy because they activate at moments when collaboration genuinely improves outcomes.

6. Support different work styles and schedules

Not all collaboration happens in real time. Some of the most powerful collaborative features work across time zones, schedules, and work styles. Asynchronous collaboration features often drive more expansion because they’re less dependent on coordinating schedules.

Build collaboration features that work when team members aren’t online simultaneously. Focus on features that let people contribute when it’s convenient for them while maintaining context for others.

Loom’s video messaging creates asynchronous collaboration opportunities that naturally expand usage. When someone creates a video explanation of a complex process, they often need to share it with multiple stakeholders who weren’t part of the original conversation. The video becomes a shared resource that multiple team members reference, comment on, and build upon. This creates natural expansion as teams recognize the value of asynchronous video communication for knowledge sharing.

7. Use access control as an expansion tool

Most SaaS companies think about permissions as security features. The smartest ones also use permissions as expansion features. Well-designed permission systems create natural opportunities for users to expand access as their needs grow.

Design permission systems that make it easy to grant appropriate access to new stakeholders without overwhelming them or compromising security. Use permission requests as expansion opportunities rather than barriers.

For example, Dropbox’s permission system creates natural expansion opportunities. When users want to share folders with specific access levels, they’re guided through options that often result in upgrading accounts to accommodate more users or storage. The permission system protects files and creates moments where users recognize the value of bringing more people into their workflows.

Ready to organically drive account expansion?

Collaboration-driven account expansion isn’t just about adding team features to your product. It’s about understanding how work really gets done and building features that make collaboration feel natural, valuable, and necessary.

The SaaS companies that master this approach turn every user into a potential growth engine. They create products so collaborative that teams can’t imagine working any other way. When collaboration becomes essential to how work gets done, account expansion becomes inevitable.

At The Good, we’ve helped SaaS companies identify and build collaboration features that drive meaningful account expansion. Our Digital Experience Optimization Program™ takes a systematic approach to understanding user behavior, designing collaborative experiences, and optimizing for sustainable growth.

Ready to transform your users into your most effective growth engine? Let’s explore how collaboration-driven expansion can accelerate your growth.


The post How to Drive Account Expansion with Collaborator & Team Features That Stick appeared first on The Good.
