MaxDiff Analysis: A Case Study On How to Identify Which Benefits Actually Build Customer Trust

When a SaaS company approached us after noticing friction in their trial-to-paid conversion funnel, they had a specific challenge: their website was generating demo requests, but prospects weren’t converting to customers. User research revealed a trust problem. Potential buyers were saying things like, “I need more proof this will actually work for a company like ours,” and “How do I know this won’t be another failed implementation?”

The company had assembled a list of proof points they could showcase on their homepage: years in business, number of integrations, customer counts, implementation guarantees, security certifications, industry awards, analyst recognition, and more. But they only had space to highlight four of these benefits prominently below their hero section. They faced the classic messaging dilemma: which trust signals would actually move the needle with prospects evaluating B2B software?

This is where MaxDiff analysis becomes valuable. Instead of relying on stakeholder opinions or generic best practices, we could let their target buyers vote with data on what mattered most.

What makes MaxDiff analysis different from other survey methods

MaxDiff analysis (short for Maximum Difference Scaling) is a research methodology that forces trade-offs. Rather than asking people to rate items individually on a scale, MaxDiff presents sets of options and asks participants to identify the most and least important items in each set. This forced-choice format reveals true preferences because people can’t rate everything as “very important.”

Here’s why this matters: traditional rating scales often produce compressed results where everything scores high. When you ask customers, “How important is X on a scale of 1-10?” most people will hover around 7 or 8 for anything remotely relevant. You end up with a spreadsheet full of similar numbers and no clear direction.

MaxDiff cuts through that noise. By repeatedly asking “which of these five options matters most to you, and which matters least?” across different combinations, you build a statistical picture of relative importance. The math behind MaxDiff generates a best-worst score for each item, showing not just which options are preferred, but by how much.

For digital experience optimization, this methodology is particularly useful when you need to prioritize limited real estate on a website, determine which features to build first, or figure out which messaging will actually differentiate your brand.

How we structured the MaxDiff study for maximum insight

In the project for our client, we started by defining the target audience precisely. The company was a B2B SaaS platform serving mid-market operations teams, so we recruited 60 participants who matched their customer profile: director-level or above at companies with 50-500 employees, working in operations or supply chain roles, currently using at least two SaaS tools in their workflow, and actively evaluating solutions within the past six months.

From the initial audit and stakeholder interviews, we identified 11 potential trust signals the company could emphasize on its homepage. These included things like:

  • Concrete numbers (customer counts, uptime percentages, integrations available)
  • Credentials (security certifications, enterprise clients)
  • Promises (implementation timelines, support response times, money-back guarantees)
  • And more

Each represented something the company could truthfully claim, but we needed to know which ones would build the most trust with prospects evaluating the platform.

The survey design was straightforward. Each participant saw these 11 benefits randomized into multiple sets of five items. For each set, they selected the most important factor and the least important factor when considering whether to adopt this type of software. Participants completed several rounds of these comparisons, seeing different combinations each time.

This approach gave us enough data points to calculate a robust best-worst score for each benefit: the number of times it was selected as “most important” minus the number of times it was selected as “least important.” Positive scores indicate a strong preference, negative scores indicate low importance, and the magnitude of the score shows the strength of feeling.
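For the technically inclined, here is a minimal sketch of that calculation in Python. The item labels come from this case study, but everything else is simplified: the set generation skips the balancing a real MaxDiff design applies (controlling how often items appear and co-appear), and the sample responses are placeholders rather than real data.

```python
import random
from collections import Counter

ITEMS = [
    "G2/Capterra ratings", "Active customer count", "Implementation guarantee",
    "SOC 2 Type II certification", "Successful implementations", "24/7 support",
    "Years in business", "Integrations available", "AI-powered features",
    "Employee headcount", "Gartner Cool Vendor recognition",
]

def build_choice_sets(items, n_sets=8, set_size=5, rng=None):
    """Randomize items into choice sets for one respondent. A production
    design would also balance item appearances and co-appearances."""
    rng = rng or random.Random()
    return [rng.sample(items, set_size) for _ in range(n_sets)]

def best_worst_scores(responses, items):
    """responses: (best_item, worst_item) picks pooled across all respondents
    and sets. Score = times picked "most important" - times picked "least"."""
    best = Counter(b for b, _ in responses)
    worst = Counter(w for _, w in responses)
    return {item: best[item] - worst[item] for item in items}

# Placeholder usage: one respondent who always picks the first item shown as
# "most important" and the last as "least important" in each set.
sets_shown = build_choice_sets(ITEMS, rng=random.Random(42))
responses = [(s[0], s[-1]) for s in sets_shown]
ranked = sorted(best_worst_scores(responses, ITEMS).items(), key=lambda kv: -kv[1])
print(ranked)
```

Commercial survey tools typically go further and fit multinomial logit or hierarchical Bayes models to the raw choices, but the simple count-based score above matches the calculation described in this article.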

The results revealed a clear hierarchy of trust signals

When we analyzed the MaxDiff results, the pattern was striking. The top-scoring benefits shared a common theme: they provided concrete evidence of proven reliability and satisfied users. The bottom-scoring benefits? They emphasized company scale and marketing visibility.

A chart showing the ranking of MaxDiff analysis SaaS trust signals.

The four highest-scoring trust signals were clear winners. G2 or Capterra ratings scored 38 points, the highest score in the study, indicating near-universal importance. The number of active customers scored 30 points. An implementation guarantee (“live in 30 days or your money back”) scored 25 points. And SOC 2 Type II certification scored 16 points.

These weren’t arbitrary marketing metrics. They were the specific signals that would make someone think, “This platform delivers real value, and other companies trust it.”

The middle tier included operational details that registered as minor positives but weren’t decisive: the number of successful implementations (7 points) and the availability of 24/7 support (6 points). These signals suggested competence but didn’t particularly move the needle on trust.

Then came the surprises. Years in business scored -5 points, meaning it was slightly more often selected as “least important” than “most important.” The number of integrations available scored -11 points. Claims of AI-powered features scored -15 points. Employee headcount scored -36 points. And recognition as a Gartner Cool Vendor scored -55 points, the lowest score in the study.

Think about what prospects were telling us: “I don’t care that you have 200 employees or that Gartner mentioned you. Show me that real companies like mine trust you and that you’ll actually deliver on your promises.”

Why customers rejected company-focused metrics

The findings revealed an insight into trust-building that extends beyond this single company. B2B buyers weigh social proof and reliability guarantees far more heavily than they weigh indicators of company scale or industry recognition.

When a business talks about its employee headcount or analyst mentions, prospects interpret this as the company talking about itself. These metrics answer the question “How big is your business?” but not “Will this solve my problem?” From the buyer’s perspective, a larger team or Gartner mention doesn’t necessarily correlate with better software or smoother implementation.

By contrast, user reviews and customer counts answer the implicit question every prospect has: “Did this work for companies like mine?” A guarantee directly addresses risk: “What happens if implementation fails?” Security certifications address legitimacy: “Is this platform secure enough for our data?”

The AI-powered features claim scored poorly, likely because it felt trendy rather than practical. Prospects for this specific business weren’t primarily concerned about cutting-edge technology; they wanted a platform that would reliably solve their workflow problems. Leading with an AI angle, while possibly true, didn’t address the core decision-making criteria.

Years in business scored negatively for similar reasons. While longevity can signal stability, in this context it didn’t address the prospect’s immediate concerns about implementation speed and user adoption. A company can be in business for years while still shipping clunky software with poor support.

From insight to implementation: turning research into revenue

The MaxDiff analysis gave the company a clear action plan. We recommended implementing a four-part trust signal section directly below their homepage hero, featuring the top four scoring benefits in order of importance.

This meant reworking their existing homepage structure. Previously, they had emphasized their implementation guarantee in the hero area while burying customer counts and ratings further down the page. The research showed this approach had it backward. Prospects needed to see evidence of customer satisfaction first, then the implementation guarantee as additional reassurance.

We also recommended removing or de-emphasizing several elements they had been proud of. The employee headcount mention, the Gartner recognition, and several other low-scoring items were either removed entirely or moved to less prominent positions on the site. The goal was to prevent low-value signals from crowding out high-value ones.

The broader lesson here extends beyond this single homepage optimization. The MaxDiff results provided a messaging hierarchy that the company could apply across its entire go-to-market strategy. Email campaigns, landing pages, sales conversations, demo decks, and even their LinkedIn company page could now emphasize the trust signals that actually mattered to prospects.

When MaxDiff analysis makes sense for your business

MaxDiff is particularly valuable when you’re facing a prioritization problem with limited data. It works best in these scenarios:

  • You have more options than you can implement. Whether that’s features to build, benefits to highlight, or messages to test, MaxDiff helps you choose wisely when you can’t do everything at once.
  • Stakeholder opinions are conflicting. When internal debates about priorities can’t be resolved through argument, customer data settles the question. MaxDiff provides quantitative evidence for decision-making.
  • You need to differentiate in a crowded market. If competitors are all saying similar things, MaxDiff reveals which specific claims will break through. Often, the winning messages are ones companies overlook because they seem “obvious” or “not unique enough.”
  • You’re optimizing for a specific audience segment. Generic research about “customers in general” often produces generic insights. MaxDiff works best when you recruit participants who precisely match your target customer profile.

The methodology has limitations worth noting. It requires careful setup, and you need to know which options to test before you start: if you don’t include the right benefits in your initial list, you won’t discover them through MaxDiff. It also works best with a reasonably sized set of options (typically 5-15 items). And the results tell you about relative importance, not absolute importance; everything could theoretically matter, but MaxDiff reveals the hierarchy.

How to use MaxDiff findings in your optimization strategy

Once you have MaxDiff results, the application extends beyond simply reordering homepage elements. The insights should inform your entire digital experience.

Your messaging architecture should reflect the importance hierarchy. High-scoring benefits deserve prominent placement, repetition across pages, and detailed explanation. Low-scoring benefits can either be removed or repositioned as supporting rather than leading messages.

Your testing roadmap should prioritize changes based on MaxDiff findings. If customer reviews scored highest in your study, test different ways of showcasing reviews before you test other elements. Let the data guide your experimentation priorities.

Your content strategy should emphasize what customers care about. If service guarantees scored highly, create content that explains the guarantee in detail, shares stories of when it was honored, and addresses common concerns. Build your editorial calendar around the topics MaxDiff revealed as important.

Your sales enablement should align with customer priorities. If the research showed that prospects value licensing credentials, make sure your sales team knows to emphasize this early in conversations. Create collateral that highlights the trust signals that matter most.

The most effective companies use MaxDiff as one tool in a broader research program. They combine it with qualitative research to understand why certain benefits matter, behavioral analytics to see how users interact with different messages, and continuous testing to validate that the predicted preferences translate into actual conversion improvements.

Turning guesswork into growth

The SaaS company we worked with started with 11 possible messages and no clear sense of which would build trust most effectively with B2B buyers. After the MaxDiff analysis, they had a data-backed hierarchy that let them confidently restructure their homepage and broader messaging strategy.

This is the power of asking prospects the right questions in the right way. Not “do you like this?”, which produces inflated scores for everything. Not “rank these 11 items,” which overwhelms participants and produces unreliable data. Instead, repeated forced choices reveal the true importance of each element.

If you’re struggling with similar prioritization challenges (too many options, limited space, stakeholder disagreement about what matters), MaxDiff analysis might be the tool that breaks through the noise. It transforms subjective opinion into statistical evidence, letting your prospects vote on what will actually convince them to choose your platform.

Ready to discover which messages actually resonate with your customers? The Good’s Digital Experience Optimization Program™ includes research methodologies like MaxDiff analysis to help you prioritize changes based on real customer preferences, not guesswork.

The Exact Framework We Used To Build Intent-Based User Clusters That Drive Retention For A Leading SaaS Company

Most SaaS companies segment users the wrong way. They group people by demographics, company size, or subscription tier. Basically, they look at who users are rather than what they’re trying to do.

The problem is that a freelance consultant and an enterprise project manager might both use your collaboration tool, but they have completely different goals, workflows, and definitions of success.

We recently worked with a leading enterprise SaaS company facing exactly this challenge. Their product served everyone from solo creators to enterprise teams, but the experience treated everyone the same.

Users logging in to track a single client project got bombarded with team collaboration features. Power users managing complex workflows couldn’t find advanced capabilities buried in generic menus.

Users felt overwhelmed by irrelevant features, engagement plateaued, and retention suffered.

The solution wasn’t another traditional segmentation model. It was intent-based segmentation: a framework for grouping users by what they’re actually trying to accomplish, then personalizing the experience to match those specific goals.

Understanding behavioral segmentation and where intent fits

Before diving into how to build intent-based clusters, it helps to understand where this approach fits within the broader landscape of user segmentation.

Most SaaS companies are familiar with demographic segmentation (company size, industry, role) and firmographic segmentation (ARR, team size, geographic location). These approaches tell you who your users are, but they don’t tell you what they’re trying to do, which is why they often fail to predict engagement or retention.

Behavioral segmentation takes a different approach. Instead of looking at static attributes, it focuses on how users actually interact with your product.

Behavioral segmentation divides users based on engagement. This includes actions like feature usage patterns, open frequency, purchase behavior, and time-to-value metrics.

While not the only valid approach, behavioral segmentation is widely regarded as more effective than demographic segmentation alone, because it reflects what users actually do rather than who they appear to be on paper.

Within behavioral segmentation, there are several approaches:

  • Usage-based segmentation looks at frequency and intensity of use.
  • Lifecycle segmentation tracks where users are in their journey.
  • Benefit-sought segmentation groups users by the outcomes they want to achieve.

Intent-based segmentation sits at the intersection of all three.

A visual portraying intent-based segmentation at the center of different types of user segmentation.

It identifies clusters of users who share similar goals and workflows, then maps those patterns to create a more personalized experience.

Intent-based clusters answer the question: “What is this user trying to accomplish right now?”

In a recent client engagement that inspired this article, this distinction mattered. They had mountains of usage data showing which features people clicked, but no framework for understanding why certain feature combinations existed or what job users were trying to complete.

They knew “Business Professionals” used their tool, but that category was so broad it offered no actionable insights. A marketing manager building campaign timelines has completely different needs than a legal team tracking contract approvals, even though both might be classified as “business professionals.”

Intent-based clustering gave them that missing layer of insight.

Case study: How to spot the need for intent-based segmentation

Let’s talk more about the client engagement I mentioned. This is a great case study for when to use intent-based segmentation.

We work with this client on a quarterly retainer through our on-demand growth research services. So, when they mentioned struggling with how to personalize experiences and improve retention, we opened a research project that same day.

The team could see that certain users logged in daily and used five or more features. Great, right? Not really. When we dug deeper, we discovered something critical. Heavy feature usage didn’t predict retention. Some power users churned while casual users stuck around for years.

The issue wasn’t the quantity of features used; it was whether those features aligned with what users were trying to accomplish. A user coming in weekly to update a single client dashboard showed higher retention than someone exploring ten features that didn’t match their core workflow.

The symptoms: What teams told us was broken

During our stakeholder interviews, we heard the same frustrations across departments:

From product: “We know the top five features everyone uses, but that doesn’t help us understand why they’re using them or what to build next. Two users might both use our template feature, but one is building client proposals while the other is standardizing internal processes. They need completely different things from that feature.”

From marketing: “Our segments are too broad to be useful. ‘Business Professional’ could mean anyone from a solo consultant to an enterprise VP. When we send educational content, we can’t make it relevant because we don’t know what problem they’re trying to solve.”

From customer success: “We can see when someone is at risk of churning because their usage drops off, but we can’t predict it before it happens. By the time we notice, they’ve already decided the product isn’t right for them. We need to understand intent earlier so we can intervene proactively.”

From UX research: “Users think in terms of tasks, not features. They don’t say ‘I want to use the dependency mapping tool.’ They say, ‘I need to make sure the design team finishes before development starts.’ But our product talks about features, not outcomes.”

The underlying problem: Missing the ‘why’

What became clear was that the organization had plenty of data about behavior but no framework for understanding intent. They could answer questions like:

  • How many people use feature X?
  • What’s the average session duration?
  • Which users log in most frequently?

But they couldn’t answer the questions that actually mattered:

  • What are users trying to accomplish when they use feature X?
  • Why do some users stick around while others churn?
  • What combination of goals and workflows predicts long-term retention?

This may sound familiar. They have data about behavior but lack context about intent. Without understanding the users’ different definitions of success, they use generic personalization that recommends “similar features” and misses the mark entirely.

The four-phase framework for creating intent-based user clusters

Based on our work with the enterprise SaaS client, we developed a systematic framework for building intent-based clusters from scratch.

The process has four distinct phases, each building on the previous one.

Think of this as a directional guide rather than a rigid formula. You can adapt the scope based on your resources and organizational complexity.


Phase 1: Capture institutional knowledge and identify gaps

Most organizations have more customer insight than they realize; it’s just scattered across teams, buried in old reports, and locked in individual team members’ heads. The first phase consolidates that knowledge and identifies what’s missing.

A graphic of phase 1 in the intent-based user segmentation process.

1. Conduct cross-functional interviews

Start by interviewing stakeholders who interact with users regularly: product managers, customer success, sales, support, and marketing. For our client, we conducted interviews with seven team members from UX research, growth, engagement, marketing, and analytics.

The goal isn’t consensus. You want to uncover patterns and contradictions instead. Focus your conversations on these questions:

  • How do you currently describe different user types?
  • What patterns have you noticed in how different users engage with the product?
  • What data exists about user behavior that isn’t being used to inform decisions?
  • Where do personalization efforts break down today?
  • What questions about users keep you up at night?

These conversations surface institutional knowledge that never makes it into documentation.

Capture everything. The contradictions are especially valuable—they reveal where teams operate with different mental models of the same users.

2. Audit existing research and reports

Next, gather relevant user research, data analysis, and customer insight. For our client, we analyzed eight existing reports, including retention data, cancellation surveys, user studies, engagement patterns, and conversion analysis.

Look for:

  • Marketing segmentation models (usually demographic-heavy)
  • User research studies (often small sample, rich insights)
  • Behavioral analytics reports (feature usage patterns, cohort analysis)
  • Customer journey maps (theoretical vs. actual paths)
  • Support ticket analysis (pain points and use cases)
  • Cancellation surveys (why users leave)

Pay attention to gaps between how different teams think about users. Marketing might segment by company size, while product segments by feature usage. Neither is wrong, but the lack of a unified framework means teams optimize for different definitions of success.

3. Define hypotheses about intent-based variables

Based on interviews and research, develop hypotheses about what variables might define user intent. This is where you move from “who uses our product” to “what are they trying to accomplish.”

For our client, we identified several dimensions that seemed to correlate with intent:

  • Primary workflow type: Are users managing team projects, client deliverables, or personal tasks?
  • Collaboration patterns: Solo work, small team coordination, or cross-functional orchestration?
  • Usage frequency: Daily operational tool or periodic project management?
  • Success metrics: Speed (quick task completion) vs. thoroughness (detailed planning and tracking)?
  • Document complexity: Simple task lists or multi-layered project hierarchies?

The goal is to create testable hypotheses that can be validated in Phase 2.

Phase 2: Validate clusters through user research and behavioral analysis

Once you have initial hypotheses, Phase 2 tests them against real user behavior and feedback. This is where hunches become data-backed insights.

A visual of phase 2 in the intent-based user segmentation process.

1. Develop provisional cluster groups

Transform your hypotheses into provisional clusters. For our client, we identified six distinct intent-based clusters. Let me illustrate with a fictional example of how this might work for a project management SaaS tool:

Sprint Executors: Users focused on rapid task completion and daily standup workflows. They need speed, simple task updates, and quick team coordination. Think startup teams moving fast with lightweight processes.

Client Project Coordinators: Users managing multiple client engagements simultaneously with strict deliverable timelines. They need client visibility controls, progress tracking, and professional reporting. Think agencies and consultancies.

Cross-Functional Orchestrators: Users coordinating complex projects across departments with dependencies and approval workflows. They need Gantt views, resource allocation, and stakeholder communication tools. Think enterprise program managers.

Personal Productivity Optimizers: Users who treat the tool as their second brain for personal task management and goal tracking. They need customization, recurring tasks, and minimal collaboration features. Think solopreneurs and executives.

Seasonal Campaign Managers: Users with predictable high-intensity periods followed by dormancy. They need templates, bulk operations, and the ability to archive/reactivate projects easily. Think retail operations teams or event planners.

Mobile-First Coordinators: Users who primarily access the tool from mobile devices for field work or on-the-go updates. They need streamlined mobile experiences and offline sync. Think field service teams or traveling consultants.

Each cluster gets a descriptive name that captures the user’s primary intent, not just their behavior. “Sprint Executor” tells you more about what someone is trying to do than “high-frequency user.”

2. Conduct targeted user research

With provisional clusters defined, recruit users who fit each profile and conduct interviews to understand:

  • Their primary use cases and goals when they first adopted the tool
  • How they discovered and currently use the product
  • Their typical workflows from start to finish
  • What defines success in their role
  • Pain points and unmet needs
  • How they decide which features to explore
  • What would make them cancel vs. what keeps them subscribed

For our client, we conducted three to four interviews per cluster, totaling around 24 user conversations. This gave us enough coverage to validate patterns without drowning in data.

The insights were eye-opening. We discovered that one cluster had the fastest time-to-value but the lowest feature adoption. They found what they needed immediately and never explored further. Another cluster showed the highest retention but needed the longest onboarding. They invested time up front because the tool was critical to their workflow.

3. Analyze behavioral data to confirm patterns

User interviews reveal what people say they do. Behavioral data shows what they actually do. Cross-reference your clusters against:

  • Feature usage sequences (which tools appear together in sessions)
  • Time-to-value metrics by cluster (how quickly do they get their first win)
  • Retention and churn patterns (which clusters stick around)
  • Upgrade and expansion behavior (which clusters grow their usage)
  • Support ticket themes (which clusters need help with what)
  • Feature adoption curves (how exploration patterns differ)

For our client, the data revealed critical differences. The “Sprint Executor” equivalent had fast initial adoption but plateaued quickly. They found their core workflow and stopped exploring.

The “Cross-Functional Orchestrator” cluster showed slow initial adoption but deep engagement over time. They needed to learn the tool thoroughly to unlock value.

These patterns weren’t visible in aggregate data. Only by segmenting users by intent could we see that different clusters had fundamentally different paths to retention.
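To illustrate the kind of cross-referencing involved, here is a minimal sketch of a retention-by-cluster breakdown in Python. The table, column names, dates, and 90-day threshold are all hypothetical; the point is that the retention split only becomes visible once users are grouped by intent.

```python
import pandas as pd

# Hypothetical analytics export: one row per user with an assigned cluster,
# a signup date, and a most-recent-activity date.
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "cluster": ["Sprint Executor"] * 3 + ["Cross-Functional Orchestrator"] * 3,
    "signup": pd.to_datetime(["2025-01-01"] * 6),
    "last_active": pd.to_datetime([
        "2025-01-20", "2025-04-15", "2025-02-01",  # fast start, mixed staying power
        "2025-05-01", "2025-04-20", "2025-06-10",  # slow start, deep engagement
    ]),
})

# Count a user as retained at 90 days if still active 90+ days after signup.
users["retained_90d"] = (users["last_active"] - users["signup"]).dt.days >= 90

# The blended average would hide the gap between these two groups.
print(users.groupby("cluster")["retained_90d"].mean())
```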

4. Build detailed cluster profiles

For each validated cluster, create a comprehensive profile that becomes the foundation for personalization. For example:

Cluster name: Sprint Executors

Primary intent: Complete daily tasks quickly with minimal friction and maximum team visibility

Most-used features:

  • Quick-add task creation
  • Board view for visual sprint planning
  • Mobile app for on-the-go updates
  • Real-time team activity feed

Typical workflow patterns:

  • Morning standup with task assignments
  • Throughout the day: quick updates and status changes
  • End of day: marking tasks complete and planning tomorrow

Behavioral flags that identify this cluster:

  • Creates 5+ tasks in first week
  • Returns daily within first 14 days
  • Uses mobile app within first 7 days
  • Rarely uses advanced features like Gantt charts or dependencies

Retention drivers:

  • Speed of task completion
  • Team visibility and accountability
  • Mobile accessibility

Churn risks:

  • Tool feels too complex for simple needs
  • Feature bloat is making core actions harder to find
  • Forced upgrades to access speed-focused features

Personalization opportunities:

  • Streamlined onboarding focused on quick task creation
  • Mobile-first feature discovery
  • Templates for common sprint workflows
  • Integrations with communication tools

These profiles become the single source of truth that product, marketing, and customer success can all reference.

Phase 3: Develop indicators and personalization strategies

The third phase connects clusters to action. This is where the framework moves from insight to implementation.

1. Create behavioral flags for cluster identification

Most users won’t self-identify their intent at signup. You need to infer cluster membership from behavioral signals early in their journey. The key is identifying flags that appear within the first 7-14 days: early enough to personalize the experience before users decide whether the tool is right for them.

A visual of phase 3 in the intent-based user segmentation process.

For reference, the “Sprint Executor” cluster in our fictional example:

  • Created 5+ tasks in first week
  • Logged in on 4+ separate days in first 14 days
  • Used mobile app within first 7 days
  • Board or list view used more than timeline/Gantt view (80%+ of sessions)
  • Invited at least one team member within first 10 days
  • Never explored advanced dependency features
  • Average session length under 10 minutes

Versus the “Client Project Coordinator” cluster:

  • Created 3+ separate projects within first week (indicating multiple clients)
  • Used folder or workspace organization features within first 5 days
  • Set up client-specific permissions or external sharing settings
  • Created custom views or reports within first 14 days
  • Longer average session times (20+ minutes per session)
  • Uses professional or client-specific terminology in project names
  • High usage of export or presentation features

The goal is to find the minimum viable signal set that reliably predicts cluster membership. Start with more flags and refine over time based on which actually correlate with long-term behavior.
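As a sketch of what this looks like in practice, a first-pass identifier can be a handful of boolean rules over early usage metrics. The field names and thresholds below are illustrative, borrowed from the fictional flags above; in a real system you would tune them against observed long-term behavior.

```python
from dataclasses import dataclass

@dataclass
class EarlyUsage:
    """Hypothetical first-14-day metrics pulled from product analytics."""
    tasks_created_week1: int
    active_days_first14: int
    used_mobile_first7: bool
    board_or_list_share: float  # share of sessions spent in board/list views
    projects_created_week1: int
    used_external_sharing: bool

def assign_cluster(u: EarlyUsage) -> str:
    """Rule-based cluster assignment using the illustrative flags above.
    Order matters: check the most specific signal combinations first."""
    if u.projects_created_week1 >= 3 and u.used_external_sharing:
        return "Client Project Coordinator"
    if (u.tasks_created_week1 >= 5 and u.active_days_first14 >= 4
            and u.used_mobile_first7 and u.board_or_list_share >= 0.8):
        return "Sprint Executor"
    return "Unclassified"  # fall back until more signals accumulate

print(assign_cluster(EarlyUsage(7, 6, True, 0.9, 1, False)))  # Sprint Executor
```

Explicit rules like these keep the logic auditable; once particular flags prove predictive, they can graduate into a trained classifier.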

One critical finding from our client work: early behavioral flags predicted retention better than demographic data.

A user who exhibited “Client Project Coordinator” behaviors in week one showed 40% higher 90-day retention than the average user, regardless of their company size or job title.

2. Map personalization opportunities to each cluster

With clusters and flags defined, identify specific ways to personalize the experience across the user journey:

Onboarding sequences: Tailor the first-run experience to highlight features that match user intent. Show Sprint Executors how to set up their first sprint board, not the full feature catalog with Gantt charts and resource allocation tools they don’t need.

In-app messaging: Trigger contextual tips based on usage patterns. When a Client Project Coordinator creates their third project with similar structure, suggest project templates to save time.

Feature discovery: Recommend next-step features that align with cluster workflows. For Sprint Executors who’ve mastered basic task management, introduce the mobile app and integrations with their communication tools—not complex dependency mapping.

Content and education: Send targeted educational content that addresses cluster-specific goals. Client Project Coordinators get tips on professional reporting and client permissions. Sprint Executors get productivity hacks and team coordination strategies.

Upgrade paths: Present pricing tiers and feature upgrades that match cluster needs. Don’t push team collaboration features to Personal Productivity Optimizers who work solo and won’t use them.

Support prioritization: Route support tickets differently based on cluster. Client Project Coordinators might get priority support since they’re often managing paying clients. Seasonal Campaign Managers might get proactive check-ins before predicted busy periods.

For our client, this mapping revealed opportunities they’d completely missed. One cluster had been receiving generic “explore more features” emails when what they actually needed was advanced security capabilities for compliance requirements. Another cluster kept churning at the end of trial because onboarding emphasized features they’d never use instead of the speed-focused tools that matched their workflow.

Phase 4: Develop test concepts to validate impact

Turn personalization opportunities into testable hypotheses. Don’t roll everything out at once. Start with high-impact, low-effort tests that prove the value of intent-based segmentation.

For our client, we proposed several test concepts structured to validate clusters quickly and build organizational confidence in the framework. Here are a few examples.

A visual of phase 4 in the intent-based user segmentation process.

Example Test 1: Intent-Based Onboarding Survey

Background: The organization lacked a way to identify user intent at the critical moment when users were most open to guidance: right after signup, but before they’d formed opinions about product fit.

Hypothesis: By asking users to self-identify their primary goal during their first meaningful session, we can segment them into actionable clusters that enable personalized feature discovery and messaging, improving 3-month retention rates by 5-10%.

Test design: During the first session (after initial signup but before deep engagement), present a brief survey asking: “What brings you here today?” with options that map directly to identified clusters:

☐ Coordinate my team’s daily work (Sprint Executors)

☐ Manage multiple client projects (Client Project Coordinators)

☐ Organize complex cross-functional initiatives (Cross-Functional Orchestrators)

☐ Track my personal tasks and goals (Personal Productivity Optimizers)

☐ Plan seasonal campaigns or events (Seasonal Campaign Managers)

☐ Update projects while on the go (Mobile-First Coordinators)

☐ Something else (with optional text field)

Then immediately personalize their first experience based on their response: Sprint Executors see a streamlined task creation tutorial, Client Project Coordinators get guidance on setting up client workspaces, etc.

Success metrics:

  • Primary: 3-month retention rate by selected cluster (looking for 5-10% lift)
  • Secondary: Survey completion rate (target: >80%), feature adoption aligned with cluster (target: 20% lift), time to first value-generating action
  • Guardrails: No negative impact on day 2 or day 7 retention

Acceptance criteria for “winning test”:

  • Survey completion rate >80%
  • >60% of users select a pre-set option (vs. “something else”)
  • Statistically significant retention lift in at least one cluster
  • No degradation in key engagement metrics

Acceptance criteria for “learning test”:

  • >40% of users select “something else” (suggests clusters don’t match user mental models)
  • No statistically significant difference in retention (suggests clusters exist, but personalization approach needs refinement)

Audience: New paid subscribers on first day, trial users converting to paid, reactivated users returning after 30+ days dormant. Start with 25% of eligible users to minimize risk.

Timeline: 90 days to measure primary retention metric, but early signals (completion rate, feature adoption) available within 14 days.
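For teams wondering how to judge that retention lift once the 90-day window closes, a two-proportion z-test is one simple option. The sketch below uses made-up counts; mature experimentation programs often layer on sequential testing or Bayesian methods and correct for evaluating multiple clusters at once.

```python
from math import sqrt
from statistics import NormalDist

def retention_lift_test(retained_a, n_a, retained_b, n_b):
    """Two-proportion z-test: control (a) vs. personalized onboarding (b).
    Returns the observed lift in retention rate and a two-sided p-value."""
    p_a, p_b = retained_a / n_a, retained_b / n_b
    pooled = (retained_a + retained_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical counts: 3-month retention in control vs. survey-personalized arm.
lift, p = retention_lift_test(retained_a=420, n_a=1000, retained_b=470, n_b=1000)
print(f"lift={lift:.1%}, p={p:.3f}")
```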

Example Test 2: Cluster-Specific Feature Recommendations

Background: Generic in-app messaging had low click-through rates (<5%) and wasn’t driving feature adoption. Users felt bombarded by irrelevant suggestions.

Hypothesis: For users who match behavioral flags within the first 14 days, triggering personalized feature recommendations will increase feature adoption by 20% and engagement depth by 15%.

Test design: Identify users by behavioral flags, then trigger targeted in-app messages at contextually relevant moments:

  • Sprint Executors see mobile app download prompt after completing 5 tasks on desktop: “Update tasks on the go: get the mobile app”
  • Client Project Coordinators see reporting features after creating third project: “Impress clients with professional progress reports”
  • Cross-Functional Orchestrators see dependency mapping after creating complex project: “Map dependencies to keep cross-functional teams aligned”

Success metrics:

  • Primary: Feature adoption rate for recommended features (target: 20% lift vs. control)
  • Secondary: Overall engagement depth (features used per session), message click-through rate
  • Guardrails: No increase in feature abandonment (starting but not completing flows)

Audience: Users who match cluster behavioral flags within first 14 days. Test one cluster at a time to isolate impact.

Timeline: 30 days to measure feature adoption impact.
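Under the hood, recommendations like these reduce to event-driven rules: when a user who matches a cluster’s behavioral flags crosses a contextual threshold, the corresponding message fires. Here is a minimal sketch with hypothetical event names, reusing the message copy from the examples above.

```python
from typing import Optional

# Hypothetical rule table: (cluster, event, threshold) -> in-app message.
TRIGGERS = {
    ("Sprint Executor", "desktop_tasks_completed", 5):
        "Update tasks on the go: get the mobile app",
    ("Client Project Coordinator", "projects_created", 3):
        "Impress clients with professional progress reports",
    ("Cross-Functional Orchestrator", "complex_projects_created", 1):
        "Map dependencies to keep cross-functional teams aligned",
}

def message_for(cluster: str, event: str, count: int) -> Optional[str]:
    """Return the message to show, or None if no rule fires. Matching the
    exact threshold means each message fires at most once per user."""
    return TRIGGERS.get((cluster, event, count))

print(message_for("Sprint Executor", "desktop_tasks_completed", 5))
```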

Example Test 3: Retention Email Campaigns by Cluster

Background: Generic “tips and tricks” email campaigns had 8% open rates and weren’t moving retention metrics. Content felt irrelevant to most recipients.

Hypothesis: Segmenting email campaigns by identified cluster will improve email engagement by 50% and show a measurable correlation with retention.

Test design: Replace generic weekly tips emails with cluster-specific content:

  • Sprint Executors: “5 ways to speed up your daily standup” / “Mobile shortcuts that save 2 hours per week”
  • Client Project Coordinators: “How to impress clients with professional project reports” / “3 ways to give clients visibility without overwhelming them”
  • Personal Productivity Optimizers: “Build your second brain: advanced filtering techniques” / “Automate your recurring tasks in 5 minutes”

Send to users identified through either the onboarding survey or behavioral flags. Track engagement and retention by cluster.

Success metrics:

  • Primary: Email open rates (target: 50% lift), click-through rates (target: 100% lift)
  • Secondary: Correlation between email engagement and 90-day retention
  • Guardrails: Unsubscribe rates remain stable or decrease

Audience: Users identified as belonging to specific clusters either through survey responses or behavioral flags, minimum 14 days after signup.

Timeline: 6 weeks for initial engagement metrics, 90 days for retention correlation.

Post-test analysis framework

For each test, we established a clear decision framework:

If “winning test”:

  • Roll out to 100% of eligible users
  • Begin development on next phase of personalization for that cluster
  • Use learnings to inform tests for other clusters
  • Document what worked to build organizational playbook

If “learning test”:

  • Analyze all “something else” responses for missing clusters or unclear framing
  • Review behavioral data to see if clusters exist, but personalization was wrong
  • Iterate on messaging, timing, or format
  • Decide whether to retest with refinements or try different approach

If negative impact:

  • Immediately roll back to the control experience
  • Conduct user interviews to understand what went wrong
  • Reassess cluster definitions or personalization approach
  • Consider whether the cluster exists but needs a different treatment

The key to successful testing is starting small, measuring rigorously, and being willing to learn from failures. Not every cluster will respond to every type of personalization, and that’s valuable information. The goal isn’t perfect personalization immediately; it’s continuous improvement based on what actually moves metrics.

Intent-based segmentation mistakes and how to avoid them

Based on our experience implementing this framework, here are the mistakes that will derail your efforts:

1. Starting with too many clusters

More isn’t better. Six well-defined clusters are more useful than fifteen overlapping ones. You need enough clusters to capture meaningfully different intents, but few enough that teams can actually remember and act on them. Start with 4-6 clusters and refine over time. If you find yourself creating clusters that differ only slightly, you’ve gone too granular.

2. Confusing demographics with intent

Job title, company size, or industry might correlate with intent, but they don’t define it. We’ve seen solo consultants behave like “Cross-Functional Orchestrators” and enterprise teams behave like “Sprint Executors.” Focus on what users are trying to accomplish, not who they are on paper.

3. Creating overlapping clusters

Each cluster should be distinct in its primary intent and workflow patterns. If you’re struggling to articulate how two clusters differ behaviorally, they’re probably the same cluster with different labels. Test this by asking: “If I saw someone’s usage data, could I confidently assign them to one cluster?”

4. Ignoring edge cases entirely

Some users will span multiple clusters or switch between them based on context. That’s fine. The framework should accommodate primary intent while recognizing that users are complex. A user might primarily be a “Client Project Coordinator” but occasionally use “Personal Productivity Optimizer” features for their own task management. Don’t force rigid categorization.

5. Skipping the validation step

Your initial hypotheses will be wrong in places. User research and behavioral data keep you honest and prevent confirmation bias. We’ve seen teams fall in love with theoretically elegant clusters that don’t actually exist in their user base, or miss entire segments because they didn’t fit the initial hypothesis.

6. Treating clusters as static

User intent evolves. Someone might start as a “Personal Productivity Optimizer” and grow into a “Client Project Coordinator” as their business scales. Review and refine your clusters quarterly based on new data, product changes, and market shifts.

7. Personalizing too aggressively too soon

Start with high-confidence, low-risk personalization (like targeted email content) before you completely diverge user experiences. You want to validate that clusters behave differently before you build entirely separate onboarding flows.

8. Forgetting to measure impact

Intent-based segmentation is valuable only if it improves outcomes. Define success metrics upfront (e.g., retention lifts, engagement depth, upgrade rates, support ticket reduction) and track them by cluster. If personalization isn’t moving these metrics, refine your approach.

Making intent-based segmentation work for your organization

The framework we’ve outlined works across product categories and company sizes, but implementation varies based on your resources and organizational maturity.

If you have limited data: Start with Phase 1 and Phase 2, using qualitative research to define clusters before investing in behavioral infrastructure. You can manually tag users based on interview responses and onboarding surveys, then personalize through targeted emails and customer success outreach. As you grow, build the data systems to automate cluster identification.

If you have rich behavioral data but limited research capabilities: Reverse the order. Start with data patterns and validate through targeted interviews. Look for natural groupings in your analytics that suggest different workflow types, then talk to representative users from each group to understand their intent.

If you’re a small team: Don’t let perfect be the enemy of good. Start with 3-4 obvious clusters based on your highest-level workflow differences. The founder of a 10-person startup probably has a better intuitive understanding of user intent than a 500-person company with siloed data. Write down what you know, test it with a few users, and start personalizing.

If you’re a large enterprise: The challenge is getting organizational alignment, not defining clusters. Use Phase 1 to surface where teams already operate with different mental models, then use data to arbitrate. Create executive sponsorship for the new framework so it becomes the shared language across product, marketing, and CS.

The key is starting somewhere. Most companies know their one-size-fits-all approach isn’t working, but they keep personalizing around the wrong variables that don’t actually predict what users are trying to accomplish.

Intent-based segmentation reorients everything around the question that actually matters: What is this user trying to accomplish, and how can we help them succeed at that specific goal?

Turn insights into retention that drives revenue

Understanding user intent is just the first step. The real value comes from translating those insights into personalized experiences that keep users engaged and drive measurable revenue growth.

At The Good, we’ve spent 16 years helping SaaS companies identify their most valuable user segments and optimize experiences around what actually drives retention. Our systematic approach to user segmentation goes beyond frameworks. We help you implement experimentation strategies that prove which personalization efforts move the needle on the metrics your board cares about.

Plenty of companies struggle to implement segmentation that’s actually actionable. They end up with beautiful personas gathering dust or broad categories that don’t inform product decisions.

Intent-based segmentation is different because it connects directly to behavior you can observe and experiences you can personalize.

If you’re struggling with generic experiences that fail to resonate with different user types, or if you know your segmentation could be better but aren’t sure where to start, let’s talk about how intent-based segmentation could transform your retention strategy and drive revenue growth.

Now It’s Your Turn

We harness user insights and unlock digital improvements beyond your conversion rate.

Let’s talk about putting digital experience optimization to work for you.

From Data Collector to Data Connector: Embracing Research Democratization

As AI capabilities expand and research teams stay lean, many researchers find themselves supporting hundreds, if not thousands, of colleagues in their organizations. For them, the model of centralized research is creating bottlenecks that slow decision-making and limit the reach of customer insights.

“The fundamental shift that people have to make is that you’re no longer a data collector. You’re a data connector,” says Ari Zelmanov, former police detective and current research leader. In Ari’s view, as teams get leaner and tools get better at executing research tasks, the job of the researcher becomes standing up repositories, socializing learning mechanisms, and creating the systems that empower organizations to act on good information.

We spoke with research leaders who've successfully made this transition, helping their organizations evolve from siloed specialist teams into customer-centric learning cultures. Their approaches varied, but one theme was clear: when you empower others to answer their own questions, you don't diminish your value, you multiply it.

The 'D' word holding us back

Before diving into solutions, there's an elephant we need to address: Democratization. Many researchers worry that democratizing research will lead to poor methodologies, incorrect conclusions, or devalued expertise. But Ari feels the argument is moot.

"The only people arguing about democratization are researchers," says Ari. "Nobody else is arguing about it. We're infighting about something that we have zero control over. It's happening."

I tend to feel like anyone arguing about democratization is missing one critical point: customer centricity isn't just one person's job.

Anton Krotov, a researcher at an organization of over 10,000 people, was in the fortunate position of being deeply trusted by his colleagues. So much so that they believed research could answer all of their questions.

“I had already established a reputation. I was fortunate that I didn't need to sell the value of research. Quite the opposite. People came to me with too many requests. They believed research could do everything for them. I needed to set up boundaries.”

Overwhelmed with requests from colleagues, Anton realized that the solution wasn't saying no—it was saying yes in a different way. Rather than becoming a bottleneck, Anton chose to become a bridge.


Connect teams through shared intelligence

Good intelligence is the responsibility of many disciplines, not just research. To get answers quickly, Ari's teams use what he calls the "Moneyball" approach to research, a framework that prioritizes speed and accessibility over methodological purity:

"Product teams are incentivized to move fast. So, how do you make research fit into that in a way that makes sense? We built something called Moneyball Research. It's super simple: start with what you know. It could be in your repository, it could be what you know. Then you go to what data is accessible within 24 to 48 hours. That's usually internal analytics, CSAT tickets, NPS, sales conversations, and tribal knowledge. Then—and only then—do you go to primary research."

This approach shifts conversations away from methods and focuses instead on what teams need to know and how confident they need to be. "Then it's up to the researcher to be the doctor. Diagnose that, determine how they're going to collect that evidence given the time, money, and level of rigor."

René Bastijans, lead researcher at a growth-stage startup, has found creative ways to loop colleagues into data collection. His sales team is trained to lightly survey prospects during sales calls and report back to the wider team.

"We've trained our sales team to ask for specific data and enter it into Salesforce. Researchers and the product team have access to these data, and therefore, sales has allowed us to keep a pretty good pulse on the market."

This creates a healthy feedback loop that keeps everyone abreast of evolving user needs while extending the research team's reach without expanding headcount.

Invite colleagues into the research process

While it might seem counterintuitive to share methodologies and research responsibilities, successful research leaders see democratization as an opportunity rather than a threat.

To remove research bottlenecks, Anton ran internal workshops to upskill his colleagues on doing their own research. This proactive approach to education focused on tailoring training to his colleagues' specific needs: "I try to cover the cases that will be really applicable, so I don't offer any cookie-cutter material and don't go much into theory. It's really tailored to their day-to-day work."

The key is meeting people where they are and giving them tools that fit their contexts. Not everyone needs to become a master researcher, but many can learn to conduct basic customer interviews or query data effectively.

Brittany Lang, UX Research Manager and M.S. in Information Science, uses project reviews as a time to cultivate a shared point of view and continually refine her thinking.

“Before we socialize research plans, I usually take a look at it, or I have someone else on my team take a look at it. It doesn't have to be your manager that's reviewing something, but can someone give you feedback?

“It's nice when coworkers leave comments and I can see what other people on the team have said and we can agree or challenge, and then have a discussion about it. I also learn in those moments too. When I'm looking at how members of my team have reviewed other work, where they're coming from and their perspective, I learn a lot from them in those moments.”

Facilitate low-risk learning

It takes more than a few ambitious researchers to imbue a company’s culture with a learning mindset, which is why rituals and learning programs are so important.

Anton’s employer formalized this approach to building safe learning environments through a program called "Gigs for Growth," a repository of side projects from different departments where employees can apply to work on learning opportunities outside their typical scope.

"It's like a company green light that you can work on learning during your full-time gig and outside of your typical work scope. Something that you would never otherwise be able to touch in the company."

Under this program, researchers can support QA engineers, sales can support marketing, and everyone gets exposure to new perspectives that inform their primary roles. "You get some really new experiences that otherwise you wouldn't be able to."

At The Good, we like to build regular, low-stakes opportunities for knowledge sharing and skill development. One of our approaches is a ritual called "Random Question of the Week": during a bi-weekly meeting, team members share client questions that stumped them or that they felt they could have answered better.

These conversations help build shared perspectives that then get turned into artifacts:

  • FAQ entries for brief, punchy answers
  • Articles for long-form perspectives
  • Policies or SOPs that outline ways of working

The result is that teams become more aligned, can answer tough questions on the spot, and save time by referring to their collective knowledge instead of rehashing the same discussions.

Another effective ritual is "Critique & Share" sessions, where team members bring questions, websites they admire, or work they're developing to get fresh perspectives from colleagues who haven't been deep in the weeds of a particular project.

Maggie Paveza, Senior Strategist at The Good, shares that these sessions have helped her break the ice when building a shared point of view.

"It's pretty informal and often we're not showing our own work, so it feels less intimidating to ask your team members, 'why do you think this competitor is using this strategy,' than if it were your own work," explains Maggie.

The power of being a data connector

"The fundamental problem that research as an industry has is we've been myopically focused on the front end of the equation," says Ari. "Data collection, statistical significance, theoretical saturation—insert whatever fancy academic word you want in here. But the real power comes on the back end of the equation."

That back end is about connection, synthesis, and empowerment. When researchers shift from being data collectors to data connectors, they don't lose their expertise; they amplify it.

As Anton puts it, "Where soil is right, then you can do things. Praise people for when they do things great. You can learn from mistakes, you can learn from success."

The goal isn't to turn everyone into a researcher. It's to create an environment where customer insights flow freely, where good questions get asked by many disciplines, and where learning happens continuously rather than in bursts.

Making the shift

Building a customer-centric learning culture doesn't happen overnight, but it starts with understanding where your organization is open to change and being constructive about how you facilitate it.

Look for teams and individuals who are already curious about customers. Find the places where people are asking good questions but lack the tools or confidence to find answers. Then meet them there with the right combination of education, tools, and support.

"At the end of the day, it's about empowering decision-making," says Ari. And in a world where customer expectations evolve quickly and research teams are lean, that empowerment might be the most valuable thing researchers can provide.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post From Data Collector to Data Connector: Embracing Research Democratization appeared first on The Good.

Continuous Research: The Secret Weapon For Effective Product Teams https://thegood.com/insights/continuous-research/ Fri, 25 Apr 2025 05:35:22 +0000

Traditional product building happens in sequential phases. Following a waterfall methodology, long phases of upfront research are followed by long periods of building or implementing, before the research begins again.

But this episodic style is falling out of favor with forward-thinking teams. The best product organizations are embracing continuous research, an always-on approach to gathering insights that allows for more user-centered, effective products.

In one study, 83% of designers, product managers, and researchers agreed that research should be conducted at every stage of the product development life cycle. Yet only 36% of them conduct research studies after launch.

How can they bridge the gap? With continuous research.

What is continuous research?

Continuous research is an “always-on” style of research, where product teams put practices and systems in place to habitually learn from users. Rather than conducting isolated research studies or sprints, it focuses on integrating regular research activities into the product development cycle.

Why continuous research?

The benefits of continuous research are plentiful. Gathering insights regularly means responding quickly to user needs and wants, making more data-driven decisions, and reducing spend on changes that don’t work.

According to research by McKinsey, there is a direct correlation between financial success and de-risking development by continually listening, testing, and iterating with end-users. Continuous research methods are proven to positively impact the bottom line, and you can feel good knowing that they also make your customers’ lives better.

However, the under-touted benefit of continuous research is that it makes everyone’s job at your company easier. Product teams get their questions answered faster. Developers don’t waste time building features users find unfriendly. Sales can more easily connect with customers.

No one has to wait to get on a roadmap, because there is a constant cycle of feedback and user connection that is otherwise unattainable.

Continuous research methods

So, what specifically counts as continuous research? Plenty of methods would fall under this umbrella. Here are a few of our favorites to paint a picture of what continuous research looks like in action.

Regular user interviews

Set aside recurring time on your calendar to fill with customer interviews. This consistent, lightweight user research can gather immediate feedback on new features or designs.

Regular usability testing

Find time to observe users interacting with your product. Do this often, and you will uncover patterns to improve your UX.

Ongoing collection of CSAT or NPS scores

Keep a record of customer satisfaction scores (CSAT) or Net Promoter Scores (NPS) to understand over time whether users are happy with your product. This consistent record will help you determine if product optimizations have helped or hurt your experience.
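As a quick illustration, here is a minimal sketch of tracking NPS month over month, assuming survey responses are stored as 0-10 likelihood-to-recommend scores with dates (the data below is made up):

    import pandas as pd

    # Hypothetical survey responses: one row per answer.
    responses = pd.DataFrame({
        "date": pd.to_datetime(["2025-01-10", "2025-01-22", "2025-02-05",
                                "2025-02-19", "2025-03-03", "2025-03-20"]),
        "score": [9, 6, 10, 8, 3, 9],
    })

    def nps(scores: pd.Series) -> float:
        """NPS = % promoters (9-10) minus % detractors (0-6), from -100 to 100."""
        promoters = (scores >= 9).mean()
        detractors = (scores <= 6).mean()
        return 100 * (promoters - detractors)

    # Compare the score before and after product optimizations ship.
    monthly_nps = responses.groupby(responses["date"].dt.to_period("M"))["score"].apply(nps)
    print(monthly_nps)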

Cohort comparison through onboarding surveys

Conduct onboarding surveys and then compare cohorts over time to identify trends that may not be apparent in individual feedback sessions.
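For example, a minimal pandas sketch of that comparison might look like the following; the survey item and values are hypothetical:

    import pandas as pd

    # Hypothetical onboarding survey: one row per new user's response.
    responses = pd.DataFrame({
        "signup_date": pd.to_datetime(["2025-01-05", "2025-01-20", "2025-02-03",
                                       "2025-02-17", "2025-03-02", "2025-03-28"]),
        "ease_of_setup": [4, 3, 4, 5, 2, 3],  # 1-5 survey item
    })

    # Bucket users into monthly signup cohorts and compare average scores.
    responses["cohort"] = responses["signup_date"].dt.to_period("M")
    cohort_scores = responses.groupby("cohort")["ease_of_setup"].agg(["mean", "count"])
    print(cohort_scores)  # a trend across cohorts, not just one-off feedback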

Lightweight prototype testing

Get feedback on designs from initial prototype to mid-fidelity to fully mocked up. Use the consistent feedback to iterate quickly and make immediate changes as you go.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

Continuous research implementation strategies

With the benefits and methods clear, you might be ready to shift your culture towards continuous research. If so, here are a few implementation strategies to set you off on the right foot.

Start small and build up consistency

Begin with a single recurring research touchpoint, such as weekly user interviews or bi-weekly prototype testing sessions. You don’t need a comprehensive, always-on strategy when you’re just starting out. Starting small will get you into the habit, and then you can find ways to expand your efforts.

Put research on the calendar

Blocking time on the calendar for research will get you into the continuous research habit. Consider something like a Friday afternoon standing meeting that you can fill with customer interviews.

Teresa Torres, a well-respected expert and author of Continuous Discovery Habits, suggests you talk to customers every week. “Continuous discovery means weekly touchpoints with customers by the team building the product, where they conduct small research activities in pursuit of a desired outcome.”

The emphasis is on turning research from something you pause to do into something you always do. Putting it on the calendar at a consistent cadence will help you stick to it.

Get the whole team involved

One of the best parts of continuous research is that it benefits the whole team… and the whole team can be involved! While continuous research may be led by a researcher, it can also be effectively led by product managers who incorporate it into their regular schedule.

Even if other teams don’t lead the process, get them involved by:

  • Having salespeople pose one specific research question in each call
  • Having designers build prototype testing into their workflows
  • Sharing research findings across the organization

We have lots of expert insights on how to make B2B research work harder and get your team involved in the process here.

Complement ongoing feedback with strategic research

Another great recommendation from Teresa Torres is to complement ongoing feedback with occasional deeper discovery work. When you have a higher-risk change or question, take the necessary time to do a deep dive into the data, testing, and analysis.

An always-on research strategy should ensure you’re solving the right problems and that you’re doing it effectively. A combination of lighter, continuous, and deep-dive research will make sure that happens.

Build your toolkit

Tools and technologies that enable continuous feedback will be a lifesaver during busy weeks when it would be easy to let research fall by the wayside. Set up automations and find the tools that make it easier to keep scheduling users, collecting data, and surfacing insights.

In the end, the value of continuous research comes from rapidly applying insights. These implementation strategies will create explicit pathways for research findings to influence product decisions within days, not months.

Who should leverage continuous research?

A shift to continuous research represents a cultural shift in product development. It’s not just a changing methodology; it’s a truly user-centered approach where customer feedback continuously shapes product direction. Most product teams would benefit from implementing an always-on research strategy, but it’s particularly valuable in a few circumstances.

Teams with limited resources

It might seem counterintuitive, but continuous research is particularly valuable for teams that don’t have a big research budget. Even without the dollars to fund big studies, teams can leverage it to uncover customer insights that guide development.

Growth-stage startups

It’s ideal for startups that are moving quickly to build and make decisions. They’re mostly throwing things at the wall to see what sticks, but continuous research can act as the safety net or gut check for those ideas. Run it by a customer and get some quick feedback instead of waiting to make mistakes in-market.

Products in rapidly evolving markets

If you’re in a market that is changing quickly, like AI, it’s a good idea to implement continuous research. It helps you adapt to evolving consumer needs more efficiently and to keep up with rapidly developing technological advancements. When you have an always-on research schedule, you can get your questions answered more quickly and implement changes shortly after.

Why “always-on” should be your new normal

Studies show that the compartmentalization of design, development, and research stages of product development “increases the risk of losing the voice of the consumer or of relying too heavily on one iteration of that voice.” Don’t let your organization fall into this trap.

User insights help teams innovate faster and build better products. The best teams today are those that learn from their customers as they build, putting the user experience at the center of product development and optimization. Consistent feedback loops allow them to deliver constant value and effectively respond to market changes.

As competition intensifies in SaaS, continuous research could be the difference between products that thrive and those that die.

If your team sees the value of continuous research but doesn’t have the resources to manage it in-house, The Good can help.

Our team of experts will be an on-demand research (and design and strategy) team that helps you get things done faster. No more waiting months to get your ideas on the roadmap.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post Continuous Research: The Secret Weapon For Effective Product Teams appeared first on The Good.

The UX Fundamentals That Every Product Manager Needs To Know https://thegood.com/insights/ux-for-product-managers/ Fri, 14 Feb 2025 20:03:48 +0000

A well-designed user experience (UX) can be the difference between a successful product and one that struggles to gain traction.

Product managers (PMs) play a crucial role in defining product vision and strategy, but to truly create products that resonate with users, they must understand UX fundamentals.

We’ve seen this time and time again with our SaaS clients. When PMs incorporate UX principles into their decision-making, they ensure their products are not only functional but also intuitive, engaging, and aligned with user needs.

If you aren’t upskilled yet, don’t worry. In this article, we’re diving into the fundamentals every product manager should know.

What is UX?

The term user experience, or UX for short, describes the overall experience of using a product (e.g., a website or digital application). It covers all aspects of a user’s interactions, including perception.

The key components of UX are, intuitively, the elements that impact a user’s experience with a product. These include:

  • Interactions: The various ways that users directly and indirectly interact with a product or service.
  • Perceptions: The subjective experiences a user has with a product or service, including their emotions and personal beliefs.
  • Context: How and where users interact with the product or service. The general environment impacting the experience.
  • Usability: The practical elements of an experience like accessibility and task completion.

Why do product managers need to know UX fundamentals?

As a product manager, understanding UX fundamentals is crucial for creating products that are both functional and delightful for users. This is true whether or not you have an internal UX team.

Knowing UX fundamentals enables you to be a better collaborator. Even if you have specialists on your side, knowing the fundamentals of UX allows you to converse, collaborate, and incorporate the user into all product decision-making.

If you’re working on upskilling your PM role to include UX fundamentals, good for you. You’re doing the work that will not only deliver better business outcomes but will also make the internet a better place.

Here are some key UX principles and practices that every product manager should know.

UX research

One core pillar of UX can’t be ignored: always start with research.

UX research is fundamental to successful product development and optimization. It uncovers how your customers interact with your site and the journeys they take to purchase.

Beginning to understand the experience means digging into data and leaning on UX research methods to discover how users feel when they’re on your site. Here are a few important ones to keep in mind.

Discovery research

Discovery research helps you understand use cases and user needs. It can ground you in what problems to solve and what is going on in the market.

Example: Desk research is a type of discovery research that collects material or data from public sources like reports.

Generative research

Generative methods are great for understanding what’s happening on a website and forming hypotheses about what would work better. They help you identify problems that need solving and the product interventions that could support users.

Example: Interviews, surveys, and market research are all forms of generative research.

Different types of generative research that are fundamental to UX research.

Generative research is great for ideation, but in order to move forward with confidence that your solutions will actually work, you need evaluative research.

Evaluative research

Evaluative research helps you understand task completion, satisfaction, and whether users are able to accomplish core tasks within your app.

Example: Task completion analysis and user testing are evaluative research methods that offer insight into how your experience is working.

Evaluative research also helps you test treatments against the current experience, understand if core user needs are met, and if the solution will generate positive outcomes for the business.

Example: A/B testing is a form of evaluative research that delivers validation on changes.
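To make that concrete, here is a minimal sketch of how a team might check whether a variant’s conversion rate beats the control’s with a two-proportion z-test; the counts are invented, and a production test would also account for sample-size planning:

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
        """Two-sided z-test for a difference in two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)  # shared rate under the null
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return p_a, p_b, p_value

    # Hypothetical results: control converted 120/2400, variant 162/2400.
    p_a, p_b, p_value = two_proportion_z(120, 2400, 162, 2400)
    print(f"control {p_a:.1%}, variant {p_b:.1%}, p = {p_value:.3f}")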

Competitive analysis and journey mapping

Competitive analysis and journey mapping help you understand if your app delivers on value promises better or worse than other solutions. Map out the customer experience for your product and that of competitors to uncover where you excel and where you might fall short.

These are the fundamentals. You can learn more about the UX research process here, but by understanding these pillars, you’re already on your way to a more effective product.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

User-centered design

Along with UX research, another core fundamental of UX is user-centered design.

User-centered design is the iterative process of putting users at the center of the entire product design and development process. It leans on empathy and a core understanding of user needs to create experiences that align business goals with what users need.

Visual design

Before we dive into the tools used to keep design user-centered, it’s important for PMs to understand the principles of visual design in general.

Visual design is a combination of both graphic design and user experience (UX) design that uses aspects of the site, such as brand identity, internal consistency, and visually communicated goals, to provide a unified, cohesive experience to its users.

A strong visual design doesn’t detract from the site’s content, usability, or conversion potential. Instead, it enhances these functions by creating an engaging and trustworthy experience for users.

You can explore the principles (with examples) in this visual design deep dive.

Now, with visual design in mind, how do teams make sure beautiful visuals actually keep the user as the priority? User-centered design leverages tools like wireframes and prototypes to get user feedback throughout the design process.

Wireframing

A wireframe is a simple (often greyscale) illustration of an application or website page. Wireframes outline components of the page (like text, images, and navigation) in a hierarchical format. They are an early blueprint that UX designers can validate against user needs and business goals before committing to a final design.

Designers create wireframes to visualize and evaluate core flow and features. Those basic designs are then leveraged in user testing, and put in front of real or look-alike users for feedback.

Examples of different wireframe fidelities.

Prototyping

Prototyping is an essential part of the UX design process and can unlock your team’s ability to validate ideas before you send them to development.

In literal terms, a prototype is a first or early model of a proposed design passed to the development team before being coded onto the website. For product teams, prototypes are early samples of a product intentionally designed for testing. It is a quick way to get something to evaluate with users.

They can range from simple pen and paper sketches to highly interactive mockups in tools such as Figma. With prototypes, you can get user feedback on pages or app elements, which can be used to iterate your way to a better digital experience for your users.

To illustrate the idea, consider redesigning your website’s landing page. You might sketch ideas out in a wireframe and get either internal or external feedback before layering on your brand design and sending it to development for implementation.

Inclusive design

The last fundamental of user-centered design that is important to understand is inclusivity.

Not all users are the same, and therefore, it’s crucial to design a product that is functional and helpful regardless of the user’s personal circumstances.

Inclusive design is about providing an equitable experience for everyone. The goal is to never sacrifice the experience of one community of users for another. The implementation of an inclusive feature should address the needs of a minoritized community without negatively impacting mainstream users.

User-centered design is for all your users, so don’t ignore a subset that may need special accommodations or features.

Usability

Usability is a measure of how well a user can achieve specific goals within a product. It is the product’s ability to deliver an experience that a user can efficiently and easily navigate.

Another UX fundamental, usability includes elements like learnability, error prevention and recovery, memorability, and efficiency.

Learnability

This considers how easily a new user can understand and start using a system effectively. A highly learnable interface has clear cues, intuitive navigation, and minimal need for instructions or prior experience.

Error Prevention and Recovery

Error prevention and recovery is a system’s ability to minimize user mistakes and help them recover quickly if they occur. This includes designing intuitive workflows, providing clear error messages, offering undo options, and preventing irreversible actions.

Memorability

Memorability is how easily users can remember how to use a system after a period of inactivity. A memorable interface reduces the need for relearning and allows users to quickly regain proficiency when they return.

Efficiency

Efficiency in this context refers to how quickly and easily users can complete tasks once they are familiar with the system. High efficiency means fewer steps, minimal friction, and optimized workflows that reduce time and effort.

Iterative testing and feedback

PMs should consider the importance of iterative testing and feedback as one of the fundamentals of UX.

Similar to UX research, this incorporates forms of evaluative research like rapid testing and A/B testing to validate ideas, but it also includes simpler elements like feedback loops: systems set up with sales and customer success that send you user feedback. When you make changes, send them back to those users or teams to make sure they accomplish what they set out to.

An ethos of UX is to never let timing hold you back from staying user-centered. The best product launches consider UX even on tight timelines. If you need to launch quickly, you should at least perform what Emma Leyden, human-centered product leader, calls a “gut check.”

“Your ‘gut check’ can be done in low-effort ways. It won’t give you the most confident answer, but something as simple as showing a design to friends and family before you launch can teach you a lot.”

As a good rule of thumb, Emma encourages having some kind of user research scheduled every week, even if it’s as simple as letting someone see or use the prototype of a product and voicing their thoughts aloud. You can learn a lot about the usability of a product with this kind of approach.

For more sophisticated companies, a best practice is to conduct regular UX audits on your product to encourage iterative improvement. Another good strategy is to create theme-based roadmaps that center around the user experience.

UX metrics of success

UX success metrics vary based on the goals of your company and your product. To get your footing on what kind of metrics are used for UX success, consider this framework.

Google, a UX leader, created a simple methodology to track progress toward goals that aren’t directly related to business outcomes. Called the HEART framework, it measures the quality of the user experience via five metrics:

  • Happiness: Measures of user attitudes, often collected via survey. For example:
    • Satisfaction
    • Perceived ease of use
    • Net Promoter Score
  • Engagement: Level of user involvement. For example:
    • Number of visits per user per week
    • Number of photos uploaded per user per day
    • Number of shares
  • Adoption: Gaining new users of a product or feature. For example:
    • Upgrades to the latest version
    • New subscriptions created
    • Purchases made by new users
  • Retention: The rate at which existing users are returning. For example:
    • Number of active users remaining present over time
    • Renewal rate or churn rate
    • Repeat purchases
  • Task Success: Efficiency, effectiveness, and error rate. For example:
    • Search result success
    • Time to upload a photo
    • Profile creation complete

You can apply HEART to a specific feature or the entire product, using it to identify goals, signals, and metrics for each of the five categories.
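To make two of those signals concrete, here is a minimal sketch that computes weekly Engagement (visits per active user) and Retention (share of one week’s users who return the next) from a simple event log; the log format and values are hypothetical:

    import pandas as pd

    # Hypothetical event log: one row per user action.
    events = pd.DataFrame({
        "user_id": [1, 1, 2, 3, 1, 2, 1],
        "date": pd.to_datetime(["2025-01-06", "2025-01-08", "2025-01-07",
                                "2025-01-09", "2025-01-14", "2025-01-15",
                                "2025-01-21"]),
    })
    events["week"] = events["date"].dt.to_period("W")

    # Engagement: average visits per active user per week.
    visits = events.groupby(["week", "user_id"]).size()
    engagement = visits.groupby("week").mean()

    # Retention: share of each week's active users who return the next week.
    weekly_users = events.groupby("week")["user_id"].agg(set)
    retention = {
        str(week): len(users & weekly_users.get(week + 1, set())) / len(users)
        for week, users in weekly_users.items()
    }
    print(engagement)
    print(retention)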

The HEART framework is a good starting point for keeping teams on track to deliver better user-centered products.

The goal is to integrate user-centered metrics into the decision-making process across the organization.

How PMs and UX teams can work together for better products

Now that you know the fundamentals of UX, you are set up to do your job as a PM more effectively and with the user in mind.

By embracing UX principles, product managers can ensure their products are not only functional but also delightful and effective. Integrating user-centered design, research, and continuous testing into product development leads to better outcomes for both users and businesses.

For product managers with internal UX teams, here are a few ways to collaborate:

  • Hold a kickoff meeting with both PMs and UX designers to outline objectives, assumptions, etc.
  • Conduct joint research sessions (e.g., customer interviews).
  • Develop shared documentation outlining key findings.
  • Regularly review prototypes together based on feedback loops.

If you don’t have the internal resources for UX research and design, you can hire an unbiased perspective and receive actionable recommendations. Get in touch with The Good if you’re ready to get started.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post The UX Fundamentals That Every Product Manager Needs To Know appeared first on The Good.

B2B Research Doesn’t Have To Be So Hard https://thegood.com/insights/b2b-research/ Mon, 03 Feb 2025 19:05:30 +0000

Whether your users are knowledge workers busy with deadlines or car mechanics who rarely leave the garage, connecting with folks who buy and use B2B software products is challenging.

“B2B is definitely more complex,” says Hannah Shamji, former psychotherapist and current B2B research specialist. “There are so many stakeholders involved in any type of decision. Having to juggle all of those, it’s just much more of a web.”

Tools needed throughout the workday are specialized—whether for accountants, mechanics or marketers—so your participant pool is smaller by design. But it’s not just the size of the total addressable market that makes the research challenging. It can be hard to compel B2B users and decision-makers to participate in a study.

We’ve heard numerous examples of “hard to compel” users:

  • High-earning executives who can’t spare an hour
  • Managers responsible for a large task load
  • Operators who spend near-zero time in their inbox

“There’s no incentive that you can pay that would buy their time,” says Benson Low, a 20-year veteran of UX Research and a board member of ResearchOps.

In Low’s experience, compelling those who make a B2B purchase decision (often c-suite executives) with financial incentives alone doesn’t work. “They’re not going to care if you’re paying them $1,000. They probably wouldn’t give you their time.”

Researching in the enterprise space adds another layer of complexity to the recruitment process.

“You can almost play the same playbook in small to medium-sized businesses—same research methodology, approach, even recruitment,” says Low. But in large organizations, the approach will look quite different.

“When it starts getting difficult is when your organization has an account management team supporting specific businesses. That’s where you have to work with that account management team first before you even reach out to customers.”

Overcoming the challenge

We talked to four experts with a combined 80+ years of experience in product about how they circumvent the challenges of B2B research. Although their approaches varied, one theme was clear: their path to meaningful B2B research has been through relationships.

“It’s about relationships, ultimately.”

Read on to hear how four research pros overcome challenges in B2B research.

1. Learn the problem space before you talk to customers

Because actual users will likely be hard to connect with, using their time to learn the basic details of their role or industry would be a waste.

As such, Low recommends doing internal research before even talking to customers.

“Know the product and how it's positioned in the market. Understand the business. Then you can design the right capabilities, right research sequencing, etc.”

Our experts mentioned several methods for this discovery:

  • Listening to sales calls
  • Interviewing CX staff
  • Reviewing service blueprints
  • Scouring communities to learn about the users your product serves

Internal research gives you the foundation you need to interpret customer feedback meaningfully and ensures you don’t waste precious customer interviews just coming up to speed on their lingo.

2. Relationships, relationships, relationships

Marketing, sales, and CX have a wealth of knowledge and connections that can ease the research process.

Paul Stevens, a UX leader who’s been in digital design long enough to have A/B tested print mailers, heralds the power of a relationship with your sales team.

“Really good salespeople will get you into a customer. They’ve got relationships; they can get you in. They can make that all-important introduction,” says Stevens. “You need to be best friends with your sales team.”

But salespeople won’t be ready to give up their contacts without some established trust, so our experts emphasized building trust and connection rather than focusing on information extraction.

To earn their trust, show them you’re aligned with their goals. Understand their OKRs so you can frame your initiatives in a way that is mutually beneficial. “You essentially have to research the stakeholder in order to get them to buy in on the research,” says Shamji.

3. Use team members as research proxies

Beyond just making intros, your sales team can actually be a partner in research, working prototypes and early feedback into their sales calls in a lightweight form of “testing.”

René Bastijans, who describes himself as a “recovering Product Manager,” is currently a lead researcher at a growth-stage startup. As a research team of one in a company of over 100 employees, he’s found ways to loop his colleagues into the research process.

His sales team is trained to lightly survey prospects during sales calls and report back to the wider team. This creates a healthy feedback loop that keeps everyone abreast of evolving user needs.

“We’ve trained our sales team to ask for specific data and enter it into Salesforce. Researchers and the product team have access to these data, and therefore, sales has allowed us to keep a pretty good pulse on the market.”

But it’s not just a few questions here and there that sales can support with. Bastijans works with the sales team to get quick feedback on product updates in a lightweight form of testing.

“We give them a couple of slides and they slot them into their conversation when they speak with prospects to get input from real people. That’s been working really well for us.”

Stevens advocates for relying on your team to conduct field research where you can’t.

“If you’re in a global organization, you want to do research in a country that you’re not in, and you can’t fly around the world to do it, you can put an education program together. Find like-minded people within the org. In my experience, there have been marketers in each country and they are usually aligned with design and research.”

Sending them out with a camera, a notepad, and a directive to report back on what they see has allowed Stevens to extend his reach beyond global borders. And he says teams love to participate. “I’ve never had anybody snicker. I’ve had them say, ‘That’s fantastic; how can I get involved?’”

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

4. Stand up a Customer Advisory Board

Because B2B customers are impossible to engage en masse, one way to circumvent the challenge of recruitment is to create a “board” of customers that gives regular feedback in exchange for value. As Low describes, the relationship should benefit both parties.

“There's a benefit for them in that we provide them with training, support, discounts, or access to new features. On the flip side, we expect them to give us feedback—how their business has been going and what their needs are.”

Sometimes called “Sales Advisory Boards” or “Customer Advisory Boards,” these can start as a partner to either sales or product, but their insights will support various disciplines within the organization.

Stevens has had success in previous roles with what he calls “Customer Days,” in which the Customer Advisory Board spends an entire day on-site, rotating between different practice groups to provide insight to various business functions. “It’s not a full day of UX testing, but researchers will have a slot to talk with them.” It’s a great way to regularly solicit the perspectives of your customers.

5. Create an expert-level playbook for B2B interviews

When you do get a chance to interview B2B clients and users, Low emphasizes the need to take it seriously: put senior staff on the task and do adequate preparation.

“Make sure you pay the utmost respect to those hard-to-recruit participants. You want to make the best use of their time and be able to ask questions effectively while being able to protect the organization's brand, reputation, and business.”

Low recommends an extensive preparation process for B2B interviews, including researching the participant’s background, reviewing their account details, and chatting with their account manager, which Low says may reveal any potentially sensitive discussion points. “You just don't want anything to impact the business unintentionally.”

But preparation alone doesn’t guarantee an effective conversation—you have to have experience as well. Low recommends that only very experienced moderators conduct conversations with existing clients. “You don’t want them to be in a power dynamic issue. Then they can’t execute the research effectively.”

Ineffective moderation risks producing research that can’t be used and leaving the participant feeling their time was wasted.

“Especially considering how small this panel likely is and how small the population is. You likely don't have too many enterprise customers, and you might want to talk to them again next year. So, build a rapport and make sure that you are able to access them. If not yourself, then your peers—other researchers, designers, or product managers on your team.”

6. Use automations to maintain trust (and stay sane)

If you’ve been introduced to customers via your sales team, it can feel like you’ve won a golden ticket. But our experts remind us that trust needs to be maintained, so they’ve built workflows that foster trust and transparency.

Bastijans uses Zapier automations that push updates at critical customer touchpoints:

  • When a prospect books an interview
  • When interviews take place

Automations Slack the sales team when a conversation is booked or takes place, and auto-magically import CRM data and update a Notion page. “Zapier has been a really huge help for me just automating mundane tasks that I would have to otherwise do manually.” For Bastijans’ research team of one, he’s been able to ramp up his output without upping the workload.
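Outside of Zapier, the same kind of notification can be hand-rolled in a few lines. Here is a minimal sketch assuming a Slack incoming-webhook URL (the URL and details are placeholders; Zapier wires this up without any code):

    import requests

    SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

    def notify_interview_booked(prospect: str, company: str, when: str) -> None:
        """Ping the sales channel when a research interview gets booked."""
        message = (f"Research interview booked with {prospect} "
                   f"({company}) on {when}.")
        resp = requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)
        resp.raise_for_status()  # fail loudly if the webhook rejects the post

    notify_interview_booked("Jane Doe", "Acme Corp", "2025-05-02 10:00 PT")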

7. Hire an outsider to play a neutral third party

Depending on your research objectives, it can be hard to solicit honest feedback from your recruits. To circumvent this issue, Low recommends occasionally using outside firms to act as a neutral third party.

“If you can’t do the research because of baggage you have representing your company, you might do it in a roundabout way by getting a third party involved. This way, an independent researcher, consultancy, or research firm might do this centrally, saying ‘we’re just doing industry research,’ they can interview all sorts of customers without damaging anything.”

Agencies can be especially useful in projects that involve talking to the customers of your competitors, says Low. While participants might be hesitant to give honest feedback to a direct competitor of a company they’ve been loyal to, agencies can frame their work more neutrally to enable participants to give candid feedback.

“Essentially, you’re trying to find a Switzerland. Someone that is unbiased with no interconnection that could cloud the insights that you want to get out. So you get, from a data perspective, cleaner insights.”

Plus, agencies can often work much faster, says Low. “The difficult B2B customers that you can’t get to, or have constraints or limitations to access, an independent consultancy might do much quicker.”

8. Make sure the insights stick

While it’s one thing to find the workflows and relationships that enable excellent research, the endeavor is fruitless unless you know how to stick the landing, says Shamji. “It's great to have all the data, but are they going to action on it? Is it going to help make decisions?”

With many teams globally distributed and an average ratio of 1 researcher per 50 developers, the average researcher is, as Stevens puts it, “a very, very, very small fish in a very big pond.”

As a result, our experts say that visibility is key.

To build visibility and buy-in, Stevens suggests a healthy dose of self-promotion: of yourself, the importance of your role, and the outcomes that your research enables.

“As soon as you've got any results, you have to publicize it as much as you can, but especially the right eyes. Depending on the relationships that you have in the business, how comfortable you are, and what the C-Suite is like, there's nothing wrong with dropping a Slack message to the CEO.”

Bastijans solicits buy-in and builds visibility through what he calls “Learning Lunches,” a 25-minute presentation with Q&A, designed to circulate the latest research and keep the team rowing in the same direction.

And for research teams in their infancy, Low says it’s especially important to advocate for the importance of research within your organization. “When you’re establishing a research team, people don’t know what we do.” Rituals like Slack memos, Learning Lunches, and direct conversations can go a long way toward building user-centered thinking within your organization.

The importance of B2B research

Despite the numerous challenges of B2B research, our experts assured us that it is workable.

The sales journey is complex, the personas are many, and the execution needs to be handled delicately. It’s why Shamji sees B2B contexts as the best application for UX research.

“B2B is just more of an obvious place for research,” says Shamji. “All of the touch points like customer success and sales—they're all seeing different parts of the process, so it just kind of warrants a researcher's 360 view of what's going on.”

It’s also why Low says AI isn’t coming for the enterprise researcher’s job anytime soon. Today’s AI interventions just aren’t prepared for the task. “I actually don’t think AI is going to take over our jobs.”

The post B2B Research Doesn’t Have To Be So Hard appeared first on The Good.

What is The Strategic Value of Ongoing User Research in SaaS? https://thegood.com/insights/saas-user-research/ Thu, 21 Nov 2024 15:17:13 +0000

Whether you’re unearthing new use cases for a core audience, testing value propositions, or mitigating the risk of a feature flop via experimentation, savvy product teams leverage research throughout the product life cycle to improve usability and increase retention.

Judd Antin perhaps put the value of user experience research (UXR) best:

“When research makes a product more usable and accessible, engagement goes up, and churn rates go down. Companies need that for the bottom line. Users get a better product. Win-win.”

Still, the latest reports put typical UX research staffing at a ratio of one researcher to every 50 developers. Perhaps as a result of this imbalance, research roadmaps can be excessively long and often fail to respond to the real-time needs of product specialists.

“Getting on a roadmap can be very tricky,” says Heidi Dean, Principal Product-Led Growth Manager at Adobe. “With a limited amount of internal resources shared across a matrixed environment, you sometimes have to rely on outside help to get the insights you need.”

When DIY Just Won’t Do

In the era of “founder mode,” many product managers (PMs) and product marketing managers (PMMs) are circumventing internal resources and doing ad-hoc research themselves. And while the DIY approach can be a solution to long lead times, it’s not always feasible. A careful combination of training, tooling, and time is required for non-researchers to do their own research.

Take, for example, moderated user interviews and usability studies. “It’s a skill set that I’ve had to try and hone,” says Dean. With permission, desire, and training, Dean has upskilled in the methodologies, but she’s conscious that not everyone in a product role has the time or opportunity to do so. “Sometimes it’s hard to find the time to recruit and talk to customers,” she says.

It’s not just the lack of formal training preventing PMs from DIY-ing it. Access barriers and time also play a significant role.

While formal research teams are generally equipped with tools like Lyssna, Rally, and Pendo, many research tools operate on a per-seat basis. As a result, access to “seats” is often tightly guarded—and PMs are often left off the roster.

These access barriers can make ad-hoc projects hard to streamline. In previous roles, Dean has seen this play out as a permission-seeking exercise that adds lead time to even simple projects. “There’s a lot of overhead that comes with getting access to a system like that. It can be a heavy lift.”

Add to that the challenge of fitting research into an already-packed schedule, and the barriers to DIY research can feel insurmountable. “It’s not an easy thing to slot into existing work and commitments,” says Dean.

Using Outside Experts to Supplement Research

Luckily, with support from firms like The Good, you don’t need to be an expert in user testing to get quick insights. PMs with already-packed research roadmaps and busy schedules hire outside experts like us to cut the line and get results quicker.

“Using part of our budget to gain customer insights has been invaluable for decision-making. The insights from user research have helped us unlock new opportunities and validate hypotheses,” says Dean.

The impact isn’t just doing more with less, but doing it reliably faster, says Gabrielle Nouhra, a software Director of Product Marketing who leverages The Good for UX research, rapid testing, and on-site experimentation, and who thinks of The Good as “an extension to the product team.”

“The speed at which we obtain actionable findings has been impressive. We are receiving rapid results within weeks and taking immediate action based on the findings, unlike past survey research that often took much longer to yield insights.”

The Multiplying Force of Long-term Partners

Operating somewhat behind the scenes, outside vendors can be a multiplying force that enables product managers, according to Dean. “Your team’s work is additive to our roadmap and helps us meet the demands of our stakeholders looking for customer insights.”

If a good research partner amplifies their impact, why aren’t more product teams leveraging research vendors?

It all comes down to cost and time.

Whether it’s familiarizing them with your business model, metrics, or past insights, standing up a relationship with a new vendor is work, and the process is imprecise. “You try to do the best download that you can, but things are always going to get missed or misinterpreted,” says Dean.

That investment cost is why Dean says that when comparing a one-off vendor to a long-term partner on retainer, “there’s no comparison.”

“When you work with somebody long term, they learn your products, the organization and your stakeholders. They understand the pain points that you’re dealing with, and then you just develop a shorthand.”

Retainer relationships mean time saved, which is why, in Dean’s view, dollars spent with a long-term partner go a lot farther. “Using a partner to help with our research needs has been an efficient use of our resources,” she says.

That manifests in not just time saved but a less arduous process altogether. “It’s streamlining things. It makes everything easier. I can get a lot more done using you guys than even I can with my team.”

Assuring the Success of the Relationship

Once you’ve found the right partner and begun building a backlog of research, the results are compounding. Partners with historical product knowledge can mitigate the pain of reorgs by retaining institutional knowledge.

They can also act as a scaffolding to support new hires. Nouhra knows this firsthand. When she was onboarded to a new role, her first task was to review existing research. Having both a catalog of existing, high-quality research and a partner at The Good who could walk her through it has been an empowering resource that enabled her to dive in quickly.

“You brought me up to speed on so much when I joined—beyond test results and our catalog of research, you were able to share what product updates had been proposed, which were implemented, and what the tradeoffs were to get them live. This gave me a headstart with my product and cross-functional teams.”

Knowing that all good things take time, we asked Dean and Nouhra for their tips for a lasting, high-impact partner. Here’s what they said.

Invest in Up-front Relationship Building

While Nouhra raves about the time-savings of a good partner, she cautions that the dividends are born of an up-front investment. Acting purposefully at the outset can set you up for success. “Take the time to invest in the upfront so that you can reap the benefits of the partnership down the line.”

“Having a partner that's always by your side, you've already done the investment. You can actually get a lot more out of it in the short term because they know the background, and they know your customers, and they know your site experience.”

Include Your Vendor in the Scoping Conversations

When other internal stakeholders are involved, Dean recommends letting the vendor in early, even during the scoping phase. That way, they can ask clarifying questions and quickly speak about the budget implications of various methodologies. It’s an approach that saves time, and it helps identify assumptions and biases that might otherwise arise if the conversation stayed internal, according to Dean.

“When the vendor asks questions, it can draw out the unspoken details. It comes across as ‘I want to make sure we do the best work for you guys.’ So there's a built-in trust that we're all trying to get to, and there's a joint exercise of figuring out what that is.”

Establish a Client-side Conduit

To assure mutual success, Dean recommends assigning a single person to own the vendor-stakeholder relationship and to step in if any issues arise.

While it’s possible that no issues will arise, in Dean’s view, just having someone client-side own the relationship makes her stakeholders feel supported. “They know that I'm personally vested in their success. I'm not just throwing them over the fence to a vendor.”

Getting Started

The benefits of user research in SaaS are proven, and they aren’t one-time wins. Conducting frequent, consistent research delivers compounding results. Your whole organization can benefit from the learnings if you pass user insights between development, sales, marketing, product, and more.

But, if like many product leaders, conducting your own research gets put on the back-burner due to competing initiatives, ditch the one-off engagement approach.

To get the results you’re looking for, you need to commit to a long-term research partner. Invest time and resources upfront, and you’ll be rewarded with insights that will propel growth. The longer you engage with the right partner, the easier it will be to glean more insights and, in turn, improve your product experience, marketing, and more.

If you want to understand whether The Good might be that long-term partner for your business, check out our program and get in touch. We start with a thorough audit of your current practices and digital experience to ensure you get everything you need (and nothing you don’t) from working with us.

The post What is The Strategic Value of Ongoing User Research in SaaS? appeared first on The Good.

How Emma Leyden’s Approach to Human-Centered Product Management Delivers Results https://thegood.com/insights/human-centered-product-management/ Thu, 03 Oct 2024 20:32:29 +0000

It’s no secret that building a great product requires a solid understanding of your customer. But how do you capture that information, and what do you do with it when you’ve found it?

Emma Leyden knows. As a senior product manager who specializes in human-centered design, she’s spent years using unique research techniques to create explosive product growth.

Recently, Emma was with Polygence, a high-growth seed-stage startup that connects students with mentors. Before that, she was with IDEO, an award-winning global design firm, where she honed her ability to uncover user insights and use them to drive product decisions. And before that, she worked at Title Nine, an ecommerce women’s clothing company.

Emma is committed to designing user-centric and data-driven products that drive engagement. She believes that if you create great experiences for customers, business value will follow.

“I’ve really always been focused on understanding the deep needs,” she tells us. “Once I figure out a user’s need, I can use those insights to inform product decisions and fuel growth.”

Emma brings a robust basket of tools to the table: UX research, design, agile methodologies, and a honed skill for marrying user needs with business objectives. We had the pleasure of working with Emma when she was with IDEO, and we caught up with her recently to get her best tips on how today’s product leaders can make a measurable impact on their organizations.

Today, we’re sharing insights from that chat, including:

  • What human-centered product management is
  • Why Emma can’t live without experimentation
  • Her favorite tools for product research
  • How to pick an agency partner

A Human-Centered Approach to Product Management

There is something uniquely refreshing about talking to a digital leader who remembers there is a real person on the other side of the screen. For Emma Leyden, a human-centered approach to product management keeps her focused on what really matters: the end user.

“As a product manager, you should feel as if the customer is in the room with you as you’re making product decisions,” Emma says. “You should be able to speak with the voice of the customer because you’ve talked to so many people, but also because you’ve synthesized enough and you understand your audience.”

What Is Human-Centered Product Management?

Human-centered product management is exactly what it sounds like: a style of work that has a person anchoring everything you do. It can show up in the way you talk to customers, experiment, or lead your team.

Talking To Your Customers

In some cases, human-centered product management means literally bringing customers into the office. For example, in one role, Emma’s team saw sports bra sales fall 5% year over year. To uncover why, she brought in five women who consistently shop for sports bras to learn what’s important to them when buying a bra.

After her conversations with real customers, she learned about the vulnerability and body issue challenges of purchasing a bra online. She also learned that the site’s photography wasn’t meeting their needs, and color choices within the same style were important.

“It wasn’t an extensive study. It was just five women, but talking to them helped us change the way we merchandised our products.” Once you learn your customer, you can take a human-centered approach to those decisions.

Data & Experimentation

Though crucial to the success of human-centered product management, physically talking to the customer isn’t the only way to inform your work. Emma incorporates a diverse mix of strategies to ensure she is making the right decisions.

“Everything should be data-driven when you're making a product decision,” Emma says. “Every single stakeholder, including developers, designers, and leaders, will ask you for data to justify your decision. So you always have to have data to back it up and also to track whether your enhancements improved something.”

Where does the data come from? Experimentation. “Experimentation is a tool to gut-check your decisions,” Emma tells us. It’s an important way to identify possible improvements as well as validate what you think you know. “It might not give you 100% or even 80% confidence, but it can tell you if you’re headed in the right direction or not.”

It embodies the “human-centered” spirit by keeping your personal bias out of the picture.

“I can't live without A/B testing. There have been so many times in my career I found myself too close to the product. I thought I had a handle on things, but an A/B test showed that I wasn’t right. It’s actually fun to be surprised like that.”

Data and experimentation can tell stories about customers if you know how to listen. The right metrics, when considered together, can paint a picture of delight, frustration, and everything in between, keeping leaders focused on the customer experience.
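
Emma’s point about partial confidence is easy to make concrete. Below is a minimal sketch, with hypothetical numbers of our own (not Emma’s data or method), of the kind of directional readout an A/B test produces: a two-proportion z-test on conversion rates.

```python
# A minimal sketch of the directional "gut check" an A/B test provides:
# a two-proportion z-test on conversion rates. All numbers are hypothetical.
from math import sqrt
from statistics import NormalDist

def ab_test_confidence(conv_a, n_a, conv_b, n_b):
    """Return variant B's lift over A and the one-sided confidence that B wins."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (rate_b - rate_a) / se
    confidence = NormalDist().cdf(z)  # e.g. 0.92 means 92% confident B beats A
    lift = (rate_b - rate_a) / rate_a
    return lift, confidence

# Hypothetical test: 120/2,000 conversions on control, 150/2,000 on the variant.
lift, conf = ab_test_confidence(120, 2000, 150, 2000)
print(f"Lift: {lift:+.1%}, confidence that B beats A: {conf:.0%}")
```

A readout like “+25% lift at 97% confidence” won’t settle every debate, but, exactly as Emma describes, it tells you whether you’re headed in the right direction before you commit.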

Leading Your Team

Human-centered product management means building products and making decisions based on and for the user. But it also means collaborating with your team and connecting with them where they are.

“Having strong leadership skills is important, but also I think strong collaboration skills are key as well.” Collaborating with different experts, in many cases, means learning to speak their language. “For example, if you have the skill to say to a designer, ‘Here’s where the engineer is coming from’ or ‘Here’s how they’re going to interpret your design,’ you can make a more efficient communication channel, which makes the team work faster and ship better products.”

A good product manager can bridge the gaps and translate messages across teams because of that human-centered approach. It also keeps you open-minded and ready for the unexpected. It's important as a product leader to bring in people who might not always be part of the ideation phase but can offer a lot of valuable input. That’s because creativity doesn't just come from the top.

“I have a deep belief that everyone is creative. I think that engineers are some of the most creative people in any organization. When I say that, CEOs look at me shocked, but engineers are closest to the work and want to ship products that will actually be used, so they have a good idea of what should be built.”

Using Creative Research Methods to Gain Confidence In Product Decisions

The good news for product leaders everywhere is that you don’t need millions of dollars and thousands of customer conversations to take a human-centered approach. Sometimes, you just need a cardboard box.

If you’re going to embrace creativity from unexpected places across your organization, why not get a little “out-of-the-box” in your own techniques and tactics? Emma shared some of her favorite tools for creative product research. Hopefully, these drive home the point that anyone can take a human-centered approach to product management regardless of budget, time, or audience size.

Low-Fidelity Prototyping

User testing with prototypes is one of Emma’s main tools for getting a gut check before taking design and messaging to production.

“The point of a prototype is to communicate the absolute bare minimum of a feature enhancement and see how users react,” she says. “You can throw a prototype together, put it in front of someone, and learn a lot quickly.”

Emma uses two types of prototyping:

  • Prototyping for designers: Designers are visual people, so to help them understand what you want from a product, you have to give them something visual. It doesn’t have to be pretty, but a quick mockup or sketch is a powerful way to bridge the communication gap.
  • Prototyping for users: These don’t need to be fully developed, but they have to be something users can actually use. “This does not need to be fancy,” she tells us. “I’ve literally made prototypes with cardboard and put them in front of users.”

“One of the beauties of prototyping is that when you take the design elements out of it, you're stripping down the feedback,” Emma says. It prevents users from being influenced by unimportant details or things that can be tested and changed later. It lets you focus on the functionality and usability of a product.

Out-of-the-box User Research

Emma also likes to use creative, outside-the-box UX research techniques to uncover insights to inform design and product decisions. Here are a few of the fun examples she shared that may come in handy for your own efforts.

The Scavenger Hunt Approach for Discoverability Insights

The scavenger hunt approach is useful when you’re trying to validate whether users can find information. In this test, Emma asks a user (or a group of users) to find a piece of information on a website, on a webpage, or in a document.

How they search and how long it takes them to find the information helps you understand their mental models and whether the site, page, or document matches their thinking.

"In one case, we knew a specific piece of information was key from a previous test, but we had to validate if users could easily find it," Emma explains. "People were scanning through the document like crazy, and we quickly learned that what we thought was obvious on page six was actually buried too deep."

Hot Dot Voting for Honest Feedback

Hot dot voting is an exercise where Emma gives users access to a product’s digital workspace. She then asks them to add green dots to the portions that resonate with them and red dots to the portions they find confusing or frustrating.

[Image: a hot dot voting mockup used as a tool for human-centered product management]

"The beauty of this method is that it gives people time to think,” Emma tells us. “They’re not being put on the spot to say something they like or dislike at the moment, which can lead to biased answers. Instead, they reflect quietly and provide more thoughtful responses."

Ultimately, this technique produces valuable conversations about the product. The facilitator gets a chance to see the themes people follow when exploring or using the product.

Turning Creative Research & Design Into Business Success

Emma’s human-centered approach has served her well in her career. She has a long history of creating impactful change at every organization she’s been a part of. Sticking to her unique approach has delivered huge results and some key learnings along the way.

Turning User Insights Into A 733% Sales Increase

In one instance, Emma delved into an underperforming product at Polygence. It was intended to serve as an add-on to the core product, but customers weren’t buying. After talking with users (students and program mentors), operational staff, and salespeople, she discovered that the product had been built to serve two very distinct user needs. Customers found this dichotomy confusing.

The solution was to split the product into two separate products, each serving a different purpose. The new products were given clean messaging and offered to different customer segments alongside the core product.

Emma’s research-driven approach produced a 733% sales increase. “This is an example of where good research and strategic thinking can help you make simple choices that make a big impact on business metrics,” she tells us.

Using Experimentation To Increase Clicks By 250%

In another case, Emma learned that what users say they want isn’t always what they really want. At IDEOU, customers requested more price transparency, so Emma’s team displayed course prices throughout the website. Unfortunately, this had a negative impact on sales.

After running A/B tests, she learned that user feedback didn’t match user behavior. When her team removed the prices, clicks to the enroll button increased by 250%.

“This was a clear example (that actually happens often) where a user says they want something, but their behavior is actually different,” Emma says. “Experimentation is important because it helps you understand how much to follow what users say.”

Finding an External Partner for Product Success

While Emma has had plenty of success on her own, she’s no stranger to calling in external partners who can make her optimization team stronger.

Hiring an agency is a lot like finding a romantic partner. You can’t grab just anyone. You have to find the one that’s right for you. Emma tries to look deeper into potential relationships with external partners, beyond the initial pitch.

“When you get a slide deck from an agency, they’ll try to show you how they’re going to move the needle and get good results,” she tells us. “But I think you should go beyond that. You should try to understand if they truly understand your business and if they align with your values.”

Furthermore, Emma likes asking hard questions. She wants to know, for instance, what happens if a test has a strong negative result. How the agency responds will tell you a lot. Do they answer honestly or do they sugarcoat their response?

How does she build good agency relationships? It starts with that practiced vetting process.

Emma believes relationships with agencies should be collaborative. Neither side should dictate the relationship, what needs to be done to move the needle, or the pace. You and the agency should be on the same team with the same priorities, with each side bringing different perspectives to the table.

“Once you start a working relationship, everything beyond the kickoff call should feel mutual. It should not feel like they are talking at you for the whole time, and then you get to ask a question at the end.”

Why was Emma attracted to The Good? We value the same kind of partnership that Emma requires in an agency. We both recognize that great product development comes from a collaborative effort between the internal stakeholders who know the customers well and external partners who know optimization.

One Final Piece of Advice? Approach Product Growth With Nuance For The Best Results

There are some folks with a “test everything” mindset, where nothing is launched without testing, while other leaders advocate for almost the opposite: “founder mode,” which is about instinct and speed. So, to wrap up our conversation, we asked Emma a question that comes up frequently in the industry: Is there one “right” approach to product growth?

“Your approach depends on the size and status of your company,” Emma says. “If you’re a small seed stage startup, you’re launching and learning and doing your experimentation post-launch. But a more established company has an expectation of a certain experience, so they have to be more thoughtful about what and how often they launch.”

While product intuition is important, keep in mind that we all have our biases. Sometimes, it’s hard to see our products from different perspectives, which is why testing matters. If you feel the need to launch quickly, you should at least perform what she calls a “gut check.”

“Your ‘gut check’ can be done in low-effort ways. It won’t give you the most confident answer, but something as simple as showing a design to friends and family before you launch can teach you a lot.”

As a good rule of thumb, Emma encourages having some kind of user research scheduled every week, even if it’s as simple as having someone use a product prototype and voice their thoughts aloud. You can learn a lot about a product’s usability with this kind of approach.

Getting Results As A Product Leader

Emma’s incredible results are a testament to her human-centered approach to product design. We hope more product managers take such a deep interest in their customers to design incredible products and experiences.

Good product leaders like Emma know that staying user-centered and making informed, data-backed decisions is the key to success. Hiring an agency like The Good can help you do just that. Our team can amplify your impact with the tools, techniques, and expertise you just can’t find in a single hire.

Learn more about our Digital Experience Optimization Program™. We bring all the pieces you need to complete an optimization puzzle and build a better digital journey.

The post How Emma Leyden’s Approach to Human-Centered Product Management Delivers Results appeared first on The Good.

]]>
This Is The Best Heatmap Software For Researchers (Yes, It Downsamples, And That Is OK) https://thegood.com/insights/hotjar/ Fri, 14 Jun 2024 14:52:25 +0000 https://thegood.com/?post_type=insights&p=108742 A researcher is only as good as their tools. If you want to make the best decisions, you need to arm yourself with the best information. Analytics data, user interviews, and surveys are helpful in their own ways, but there is powerful insight in observing people use the site or app. This gives you a […]

The post This Is The Best Heatmap Software For Researchers (Yes, It Downsamples, And That Is OK) appeared first on The Good.

]]>
A researcher is only as good as their tools. If you want to make the best decisions, you need to arm yourself with the best information.

Analytics data, user interviews, and surveys are helpful in their own ways, but there is powerful insight in observing people use the site or app.

This gives you a clear, comprehensive, and unbiased view of their experience.

How do you get this valuable view? With a tool like Hotjar.

Hotjar is one of our favorite research tools. It’s a staple of our workflow and a key way we develop insights to optimize our clients’ digital experiences.

Sometimes, new clients will ask us to use their preferred tool, but we usually resist. That’s how confident we are in Hotjar’s value. Right now, it’s the best heat mapping software on the market for professional researchers.

We’d like to take a moment to explain what makes Hotjar so great and how it helps us create better experiences for our clients. We’ll also address a common criticism of Hotjar’s platform.

What is Hotjar?

Hotjar is an analytics tool that helps digital brands understand how users interact with their websites. It provides insights into user behavior through visual representations so you can identify areas for improvement and enhance the overall user experience.

[Image: Hotjar webpage header]

Unlike traditional analytics tools that offer simple numerical data, Hotjar provides visual feedback through heatmaps, session recordings, surveys, user interviews, and feedback polls.

These tools let you see exactly how users navigate your site, where they click, and how far they scroll. This information helps you make data-driven decisions to optimize your website, which ultimately leads to a better digital experience for everyone (not to mention higher conversion rates and increased revenue for you).

What are Heatmaps, Scroll Maps, and Click Maps?

Before we dig into why Hotjar is the best heatmap software, let’s get an understanding of the tool’s primary value.

If you look at analytics data alone, it can seem like conversions just happen on their own. But in reality, dozens of little variables affect how and when your visitors decide to take action.

For instance, a visitor might read some content, explore some images, or watch a video on the page before finally taking the next step. These “footprints” can provide key insight to help you optimize the experience and drive more conversions.

Unfortunately, you can’t get this kind of data out of the box in Google Analytics. (That isn’t to say Google Analytics is a bad tool, but it doesn’t provide everything you need.) And even if you have custom tracking set up to tell a fuller story, all it can do is tell you what’s happening. It won’t show you.

Therefore, you need specialty tools to show exactly what your visitors do on your site: heatmaps, scroll maps, click maps, and session recordings.

Heatmaps: Where People Pay Attention

A Nielsen eye-tracking study made pretty big waves when it proved what we all suspected: people don’t read on the web. We scan.

In fact, we scan in a fairly predictable F-shaped pattern. We start on the far left-hand side, scan to the right, and then drop down and to the left to repeat.

The result is that some spots on the page get the majority of our attention. Other spots are basically ignored.

[Image: eye-tracking heatmap showing the F-shaped scanning pattern]

The image from that Nielsen study is an example of a heatmap. It shows us where users focus their attention. We can use it to learn whether design elements are effective and how to optimize the page.

Areas that receive a lot of attention are shown in warmer colors, like red and orange, and areas that receive little attention get cooler colors, like green and blue.

For instance, consider the following two images. When the baby is facing forward, the face receives the majority of the reader’s attention (indicated by the hot red spot). The title and text are far “cooler,” meaning they get less attention.

[Image: heatmap of a baby looking forward]

But look what happens when the baby faces the content. The attention on the face gets transferred to the text.

[Image: heatmap of a baby looking at the text]

The direction of a face is a simple visual cue, but we wouldn’t see its effect without the help of the heatmap.

Obviously, this is a simple example. It’s not always so cut-and-dry. However, it shows us that heatmaps help us understand what our users are paying attention to. Armed with that information, we can create an experience that meets their needs and our goals.
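
If you’re curious how a tool gets from raw clicks to that warm-to-cool picture, here’s a minimal sketch in Python. It’s our own illustration with made-up coordinates, not Hotjar’s actual implementation: bucket the interactions into a grid, then map each cell’s count to a color.

```python
# A minimal sketch (ours, not Hotjar's) of how heatmap software turns raw
# click coordinates into a warm/cool picture: bucket clicks into a grid,
# then map each cell's count onto a cold-to-hot palette.
import numpy as np

# Made-up click coordinates on a 1200x800 page, clustered around one element.
rng = np.random.default_rng(0)
clicks_x = rng.normal(loc=600, scale=120, size=2_000).clip(0, 1199)
clicks_y = rng.normal(loc=300, scale=80, size=2_000).clip(0, 799)

# Bucket the clicks into a coarse grid; each cell's count is its "heat".
heat, _, _ = np.histogram2d(clicks_y, clicks_x, bins=(8, 12),
                            range=[[0, 800], [0, 1200]])

# Normalize to 0..1 and label each cell from cool (blue) to hot (red).
palette = np.array(["blue", "green", "yellow", "orange", "red"])
labels = palette[(heat / heat.max() * (len(palette) - 1)).astype(int)]
print(labels)  # the hottest cells sit where the clicks concentrate
```

Real tools render this as a smooth color overlay on a screenshot of the page, but the underlying idea is the same: more interactions in an area means a hotter color.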

Scroll Maps: Whether People Consume Your Content

You design long, beautiful pages. But does anyone read them? Do they actually make the experience better for your users?

Scroll maps help us understand where people scroll to on a page and how long they spend there. These maps use the same hot-cold color grading as heat maps. If users spend a lot of time in one area, the map shows it as red or orange. If they never scroll to a part of the page at all, it gets the super-cold blue.

Check out the following scroll map example. Essentially, this map tells us that no one scrolls below the fold.

[Image: scroll map showing engagement concentrated above the fold]

Suppose this page’s juiciest offer is below the fold. In this case, most users will never see it because they don’t have a reason to explore further.

Does this mean information farther down the page is less valuable to users? Not necessarily. The following scroll map shows a page that’s almost entirely hot, meaning users care about all of the content.

[Image: Purdy and Figg scroll map, almost entirely hot]

Scroll maps are another powerful tool to help you optimize your pages. Like heatmaps, they tell you what users care about and offer insight into improving your pages.

Click Maps: Whether People are Close to Converting

The click is one of our most valuable signals because it represents engagement with the content. In some cases, a click indicates a prized conversion.

If people click your call to action, it’s a sign that your page is well-optimized. If they click elsewhere, it means they find something else more valuable or need more information.

Click maps show where someone clicks on your page. They reveal whether your users are interested in what they’re looking at.

Let’s look at an example. In this click map, you’ll notice most of the clicking takes place around the selector tabs on the left (represented by the warm zone). There’s also some clicking on the menu and the logo.

[Image: Hotjar click map]

This map indicates that the page is working as intended. Users interact with the intended components and then explore other areas of the site.

Click tracking is part of Hotjar’s heat-mapping feature, but it doesn’t just show you where clicks happen. You also get to see where users moved their cursors, another layer of user behavior insight that makes us love Hotjar.

5 Features That Make Hotjar the Best Tool for Researchers

Now that you understand Hotjar’s value offering, let’s explore what specifically makes it the best tool on the market.

1. Separate Instances for Each Map

Having access to lots of different types of data is great, but some tools pump them all into the same report, which paints a muddy picture and makes accurate analysis difficult.

We love that Hotjar provides heatmaps, scroll maps, and click maps in separate instances with clear markings. This separation helps our team focus on the information we’re looking for so there are no misunderstandings.

[Image: separate instances for each map in Hotjar]

2. Filters for Session Recordings

Session recordings typically take a while to sort through, especially if you have many of them. Even at playback speeds faster than real time, watching them all adds up.

This means we end up spending a lot of time watching dozens of irrelevant recordings for each page, often without learning anything valuable. It’s a major time suck.

Fortunately, Hotjar lets us filter our recordings to reduce the number of sessions we’re forced to watch. We can quickly drill down to the sessions that have the most impact on whatever we’re trying to learn.

Here’s a list of all the filters you can apply to your bank of session recordings.

  • Path/URL – Explore where users have or haven’t navigated. You can focus on viewed pages, specific landing pages, exit pages, or traffic channels.
  • Session – Refine data based on broader details about the session, such as new/returning users, country, duration, or page count.
  • Behavior – This includes actions performed/experienced by users during the session, such as clicks, events, rage clicks, entered text, refreshed page, U-turns, or errors.
  • User Attributes – Sessions from specific users based on custom attributes you’ve passed to Hotjar from your data.
  • Technology – Refine collected session data based on technology used during a session, such as device, screen resolution, browser, operating system, or Hotjar user ID.
  • Feedback – Filter sessions where a user submitted feedback through a feedback widget or Net Promoter Score widget.
  • Experiment – Explore sessions based on inclusion in an experiment.
  • Date Filter – Filter sessions based on relative or custom date ranges.

Our favorite filters include device type, landing page, pages visited, duration, relevance (engagement), and new vs returning user.

3. Keyboard Shortcuts for Quick Navigation

When you’re watching session recordings, Hotjar enables you to stop/play or go forward/back using the keyboard. This is a huge time saver, letting you bounce around recordings quickly to find the information you need.

Some competitors allow this kind of movement, but their buttons are small and out of the way. You have to click them manually, which takes your attention away from the video. As far as we know, no one else provides keyboard shortcuts so you can zip through the recording with ease.

4. Quickly Find Usability Issues

There’s only so far people will go to find what they need on your website or in your app. If your digital experience is hard to use, you’ll struggle to convert visitors. It’s simple logic.

In a HubSpot survey, 76% of respondents said the most important factor in a website’s design is the ability to find what they’re looking for.

Using Hotjar is a great way to identify usability issues that prevent users from taking the next step, whether that’s completing a purchase, opening a new user account, consuming content, or whatever else they need.

For instance, if you notice users opening your menu and hovering around without clicking, it tells you they couldn’t locate something they wanted. Maybe it’s worth testing different menu structures to facilitate a better experience.

5. Identify Moments of “Rage”

Sometimes, users become so frustrated that they click repeatedly to make the site work. This is often caused by slow page speed, confusion, or broken elements.

These are serious moments of frustration that you must avoid.

We like that Hotjar’s click maps can show you where users rage clicked. This helps us focus on the biggest causes of frustration in their experience.

[Image: Hotjar click map highlighting rage clicks]

Any rage-click issues you identify are easy wins. Solve them quickly before other users experience the same frustration.
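
Under the hood, a rage click is just a pattern in the event stream: several clicks in a small area within a short window. Here’s a minimal sketch of one way to flag them; the thresholds and logic are our own illustrative heuristic, not Hotjar’s actual definition.

```python
# A minimal sketch of one heuristic for flagging "rage clicks": several
# clicks within a small radius in a short time window. These thresholds
# are our own illustration, not Hotjar's actual definition.
from dataclasses import dataclass

@dataclass
class Click:
    t: float  # seconds into the session
    x: int    # page coordinates
    y: int

def find_rage_clicks(clicks, min_clicks=4, window=2.0, radius=24):
    """Return (start_index, click_count) for each run of rapid same-spot clicks."""
    runs, i = [], 0
    while i < len(clicks):
        j = i
        while (j + 1 < len(clicks)
               and clicks[j + 1].t - clicks[i].t <= window
               and abs(clicks[j + 1].x - clicks[i].x) <= radius
               and abs(clicks[j + 1].y - clicks[i].y) <= radius):
            j += 1
        if j - i + 1 >= min_clicks:
            runs.append((i, j - i + 1))
        i = j + 1
    return runs

session = [Click(1.0, 300, 420), Click(1.3, 302, 421), Click(1.6, 299, 419),
           Click(1.8, 301, 420), Click(9.5, 700, 80)]
print(find_rage_clicks(session))  # [(0, 4)] -> four rapid clicks on one element
```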

Downsampling: A Common But Misguided Criticism of Hotjar

Whenever you consider analytics tools, you’ll likely read complaints of downsampling. Some tools use it. Tools that don’t use it often plaster it over their marketing as a point of value.

Downsampling refers to the practice of showing you a random sample of your total session data instead of the full 100%.

Many analytics tools use downsampling for their free or lower-priced tiers and then encourage you to upgrade to higher-priced options to get access to 100% of your data. Essentially, this means that lower-tier accounts never see some of their data.

Downsampling can pose problems for detailed and precise metrics, such as conversion rates, in tools like Google Analytics. Calculating these nuanced numbers requires a complete understanding of the total number of visitors and sessions. Any reduction in the data can skew the results.

However, when using Hotjar for heat mapping, the situation is different.

Heatmaps are primarily used to identify patterns of user behavior on your website. Whether you’re looking at 100% of your sessions or a subset of sessions, the trends and themes that emerge from the data are usually consistent.

Some tools claim to be superior because they don’t downsample, but in our opinion, this really isn’t a concern when it comes to heat mapping. The fear of missing out on data is often exaggerated to encourage users to switch to more expensive plans.

Even with a sample of data, Hotjar’s ability to visualize user interactions, such as clicks and scrolls, allows you to make informed decisions about your website optimizations.
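
If you want to convince yourself, here’s a minimal simulation in Python. It’s our own toy example, not Hotjar’s methodology: the hotspot ranking computed from a 10% random sample of clicks matches the ranking from the full click log.

```python
# A toy demonstration (ours, not Hotjar's methodology) of why downsampling
# rarely changes what a heatmap tells you: the hotspot ranking from a 10%
# random sample matches the ranking from the full click log.
import random
from collections import Counter

random.seed(42)

# Simulate 50,000 clicks spread across page regions of uneven popularity.
regions = ["hero CTA", "nav menu", "pricing link", "footer", "logo"]
weights = [0.45, 0.25, 0.15, 0.10, 0.05]
all_clicks = random.choices(regions, weights=weights, k=50_000)

def ranking(clicks):
    """Order regions from hottest to coldest by click count."""
    return [region for region, _ in Counter(clicks).most_common()]

sample = random.sample(all_clicks, k=5_000)  # a 10% downsample

print("Full data ranking: ", ranking(all_clicks))
print("10% sample ranking:", ranking(sample))
# With this many sessions, the two rankings almost always match:
# the hot spots stay hot, which is all a heatmap needs.
```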

Go with Hotjar for Reliable Insights

Hotjar provides reliable and valuable data to help you understand user behavior patterns. The insights you gain are still robust and actionable. The app is simple to use for beginners and pros alike.

Hotjar is what we use to optimize sites like Pendleton, The Economist, and Fully. If you want to empower yourself (and your organization) with the best information to optimize your digital experience, Hotjar is the way to go.

You can sign up for a free account here.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post This Is The Best Heatmap Software For Researchers (Yes, It Downsamples, And That Is OK) appeared first on The Good.

]]>
Holiday Shopping Trends: Separating Short-Term Hype from Long-Term Help https://thegood.com/insights/ecommerce-holiday-shopping-trends/ Wed, 17 Oct 2018 17:13:40 +0000 https://thegood.com/?post_type=insights&p=86592 Of all the holiday shopping trends, the one most important to your ecommerce website is increased traffic. Shoppers come alive during the last quarter of the year, and they’re looking for something to buy. Ecommerce professionals who know how to convert that traffic into sales are a valuable asset to any company. Ecommerce professionals who […]

The post Holiday Shopping Trends: Separating Short-Term Hype from Long-Term Help appeared first on The Good.

]]>
Of all the holiday shopping trends, the one most important to your ecommerce website is increased traffic. Shoppers come alive during the last quarter of the year, and they’re looking for something to buy. Ecommerce professionals who know how to convert that traffic into sales are a valuable asset to any company.

Ecommerce professionals who waste the opportunity by chasing the newest holiday shopping trends end up having to explain why they missed their targets. Many retailers pour their energy into trendy tips and tricks for squeezing more sales out of holiday traffic, only to miss their holiday revenue goals. So what’s the real key to drawing more revenue from the holiday season in the long term?

Instead of joining the race to the bottom of the pricing barrel through seasonal discounts (and slashing profits along with prices), leverage your increased holiday traffic to give your customers what they really want.

You don’t have to give away the store if you focus on the seven principles in this article and avoid the temptation to compete on price alone.

Successful ecommerce executives and managers realize the methods people use to purchase products have changed, but the fundamental desires that drive those purchases haven’t changed at all.

In this article, we’ll list those desires and point you to proven tactics you can use to boost revenues—not just during the holiday shopping season, but all year long.


How to convert traffic into sales during the holidays (and beyond)

Holiday shopping trends tend to revolve around the excitement generated by Black Friday deals and lower prices in general.

Steep discounts aren’t all your best prospects want, though. They have other desires of equal or greater importance.

They come to you looking for solutions. Give them more than low prices and you’ll make the competition look like hucksters.

Here are seven tactics you can use to keep shoppers (and sales) on track.

1. Don’t make your visitors think.

Make it quick and easy for shoppers to buy from you, and they’ll come back often. They’ll also be more likely to stay on your site and check out with an order.

If your site takes more than a few seconds to load or the navigation options aren’t easy to understand, most of the bumped-up seasonal traffic you get will go elsewhere to shop.

It’s that simple.

You don’t have to look further than your own experience to verify the truth of that concept. When you’re shopping online, you don’t want to sit around waiting for a page to load, or spend countless minutes searching for an item. You want things fast and obvious.

Give your shoppers what they want: a seamless experience. Eliminate friction and confusion wherever possible. If you need help figuring out where your stuck points are, ask The Good for a landing page assessment.

2. Put your visitors in the driver’s seat

Shoppers love it when you place the controls in their hands. They’ll reward you with more sales and rave reviews.

They come to you to solve a problem, but that doesn’t mean you have to pitch ideas to them constantly. If you’ve ever walked into a department store and been shadowed by an overzealous salesperson, you know what that feels like.

Sometimes, you just want to be left alone to shop (but you also want help readily available when you need it). You can create that same experience on your ecommerce website.

Power shoppers, in particular, want to be in control of their time. You can make that happen by providing:

  • Clear navigation
  • Product search functionality
  • Ready access to essential information
  • Easy access to product research
  • Easy-to-reach customer service
  • No-hassle checkout procedure

By helping shoppers control their time, you’re making shopping easy for them. Your visitors won’t have to jump from website to website to find what they want. You provide everything they need and the right atmosphere to shop—on your site.

When prospects feel in control of their shopping, they’re more likely to buy.


3. Provide choices but limit them

Prospects want plenty of choices, but they don’t want to be confused. Do this right and they’ll never forget you.

Some people like the red model; others like the blue. Some want lightweight fabric, and others want heavier material. The more you’re able to give shoppers what they want, the more goods you’ll sell—within reason.

Study after study shows that shoppers buy more when the options are limited. Give them too many choices and you’ll send them into a state known as “cognitive overload.”

The question, then, is how to let shoppers have their cake and eat it, too. You can do that by using strategic, iterative A/B testing to determine the most effective ways to display an abundance of product options while avoiding cognitive overload.

Your customers do want more choices, but they don’t want to be overwhelmed. It’s a fine line to walk.

4. Delight your visitors

Your website visitors want to be engaged. Make them happy. Keep them curious. Don’t be dull. They’ll come back to have more fun (and buy more goods).

How can you make your ecommerce site more enjoyable for your prospects? Is there an interactive feature you can build (without sacrificing speed) to help them learn and have fun at the same time?

Does your brand lend itself to humor or lightheartedness? Make them laugh and they’ll tell others. Don’t be inappropriate, of course, but a good chuckle can endear your brand to the right prospects.

Have you heard of the Michelin Man? The Jolly Green Giant? How about Mickey Mouse or the Pillsbury Doughboy? All of those brand mascots inject a sense of good-natured fun into the selling process…and shoppers love to be entertained.

5. Be available and authentic

Have you ever tried to ask a customer service question on an ecommerce website and ended up hopping from link to link and page to page trying to figure out how to get past the FAQ and stock responses?

Did you enjoy the experience?

Your visitors don’t like getting the runaround either. They want fast, accurate answers. Give them personal, qualified support, and they’ll love you for it.

Here are a few of the most common ecommerce customer service nightmares:

  • Answers are buried on an FAQ page that forces you to read every question to find the one you need
  • The live chat keeps you waiting forever
  • It’s impossible to find a customer service contact method at all

Anything that barricades your prospects from getting the help they need is going to cost you in conversions and revenue.

Personal service goes beyond customer service, though. The entire shopping experience and every point of contact should convey to the shopper: “You are important to us. You really do matter.”

Personal service makes prospects feel special. The more special they feel, the more likely they are to become your customers and come back again.

6. Earn your visitors’ trust

Today’s shoppers want transparency. They respond to openness and honesty. Give them all the information they need to erase their doubts, and you’ll get the order.

Shoppers want full disclosure, not fine print. Make your shipping rates and return policy clear and simple. If your policies aren’t customer-friendly, change them, or at least be up-front and state them early in the shopping funnel. Never try to hide the bad news.

When you make a mistake—and you eventually will—admit it promptly and make it right. Few things will get you roasted on social media quicker than trying to blame your shortcomings on your customers. Stand up for your staff, but don’t make excuses for them.

Transparency develops loyalty and trust.

7. Win visitors on the best experience, not the best deal.

Shoppers know the lowest price isn’t always the best buy. Work on boosting your brand reputation (social status) instead of trying to figure out how to get cheaper labor and materials to reach rock-bottom prices.

It’s easy to get caught up in a price war. Let the competition lower prices until they price themselves out of business, or the quality drops so low shoppers stop trusting them.

That’s not your path.

When you strengthen this tactic and the six tactics already mentioned (make it easy, give them control, provide choices, engage them, make them feel special, and be honest), you’re building social currency and laying a firm, lasting foundation.

You’ll prosper while your competitors spring up to take cheap shots and then disappear. A good reputation will last much longer and provide an immensely higher return on investment than below-cost prices ever will.

Give the bargain shoppers a pleasant surprise now and then, but don’t make them your primary audience. You probably can’t compete with Amazon on price alone, for example, but you can blow them out of the water on quality and customer service.

Build your brand and it will stand.


Ecommerce websites enjoy traffic booms during the holiday shopping season. Many will slash prices in an attempt to make more sales and gain more attention.

Prices are important, but don’t be baited into thinking they’re the only thing shoppers care about. There’s much more to ecommerce than pop-ups and marketing copy with “SALE!” plastered in bold red letters all over the page.

Don’t be tempted to chase after the latest holiday shopping trends. Stay the course and grow your customer experience. Go back over the seven-point list of conversion optimization principles provided in this article, and you’ll soon be on the path to long-term success online.

For help figuring out your next step, contact The Good to help uncover stuck points and leverage holiday shopping traffic to break sales records this year (and next).

The post Holiday Shopping Trends: Separating Short-Term Hype from Long-Term Help appeared first on The Good.

]]>