Natalie Thomas - The Good (https://thegood.com) - Optimizing Digital Experiences

How to Validate Website Design Changes: A Decision Framework
https://thegood.com/insights/website-design-changes/ - Thu, 28 Aug 2025

The post How to Validate Website Design Changes: A Decision Framework appeared first on The Good.

How do you know if that new homepage design, updated pricing page, or streamlined onboarding flow will actually improve conversions before you build it?

The default answer has been A/B testing. But while A/B testing remains the gold standard for high-stakes decisions, it’s not always the right tool for every design change. Many teams have fallen into the trap of either testing everything (creating bottlenecks and slowing innovation) or testing nothing (making changes based purely on intuition).

There’s a better way. By understanding when different validation* methods are most appropriate, SaaS teams can make faster, more confident design decisions while maintaining the rigor needed for their most critical changes.

*Note: We know “validation” is a loaded term in the research community because it implies proving you’re right, but we feel it’s easier to read and more quickly understood by people outside research disciplines. We’re using “validation” in this article, though “evaluation” or “confirm or disconfirm” would be more precise in other settings.

The real cost of a bad experimentation strategy

When teams lack a clear strategy for validating decisions, they create what researcher Jared Spool calls “Experience Rot” – the gradual deterioration of user experience quality from moving too slowly or focusing solely on economic outcomes rather than user needs.

The costs manifest in several ways:

  • Opportunity cost: Market opportunities disappear while waiting for test results that may not even be necessary
  • Resource waste: Development time gets tied up in prolonged testing initiatives for low-risk changes
  • Analysis paralysis: Teams debate endlessly about what to test next instead of making decisions
  • Competitive disadvantage: Competitors gain ground while you’re stuck in lengthy validation cycles

The key is matching your experimentation method to the decision you’re making, rather than forcing every design change through the same validation process.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

A framework for design validation decisions

The path to better validation starts with two fundamental questions about any proposed design change:

  1. Is this strategically important? Does this change significantly impact key business metrics or user experience?
  2. What’s the potential risk? What happens if this change performs worse than expected?

Using these dimensions, you can map any design change into one of four validation approaches:

A decision-making framework for validating decisions regarding website design changes.

High Strategic Importance + Low Risk = Just ship it

If you can’t explain meaningful downsides to a design change but know it’s strategically important, you probably don’t need to validate it at all. These are your quick wins.

Examples for SaaS teams:

  • Adding customer testimonials to your pricing page
  • Improving mobile responsiveness
  • Fixing broken links or outdated screenshots
  • Adding clearer error messages in your product

Why this works: The upside is clear, the downside is minimal, and the time spent testing could be better invested elsewhere.

Low Strategic Importance = Deprioritize

Not every design change needs validation because not every change is worth making. Some modifications simply aren’t worth the time and resources, regardless of the validation method you might use.

Examples of low-impact changes:

  • Minor color adjustments to non-critical elements
  • Changing footer layouts
  • Tweaking secondary page designs that get little traffic
  • Adjusting spacing that doesn’t affect usability

When to reconsider: If data later shows these areas are creating friction, they can move up in priority.

High Strategic Importance + High Risk = Validation territory

This is where both A/B testing and rapid testing methods become valuable. The critical next decision becomes: can you reach statistical significance within an acceptable timeframe, and are you technically capable of running the experiment?
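The two questions above reduce to a small triage function. A minimal sketch (the function name and return labels are illustrative, not from the framework itself):

```python
def choose_validation_approach(strategically_important: bool, high_risk: bool) -> str:
    """Triage a proposed design change using the two framework questions."""
    if not strategically_important:
        return "deprioritize"  # not worth making, regardless of risk
    if not high_risk:
        return "just ship it"  # clear upside, minimal downside
    return "validate"          # high stakes: choose A/B or rapid testing next

choose_validation_approach(True, False)  # e.g. adding testimonials -> "just ship it"
choose_validation_approach(True, True)   # e.g. pricing page redesign -> "validate"
```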

When to use A/B testing vs rapid testing

This decision tree helps determine if your website design changes should be tested or if another approach should be used.

When to use A/B testing for design changes

A/B testing remains your best option for design changes when:

  • You have sufficient traffic on the experience: Generally, you need 1,000+ visitors per week to the page being tested
  • The change is reversible: You can easily switch back if the results are negative
  • You need statistical confidence: Stakes are high enough to justify the time investment
  • Technical capability exists: Your team can implement and track the test properly

Examples of SaaS use cases for A/B testing:

  • Complete homepage redesigns
  • Pricing page layouts and messaging
  • Sign-up flow modifications
  • Core product onboarding changes
  • High-traffic landing page variations

When to use rapid testing for design changes

When A/B testing isn’t right due to traffic constraints, technical limitations, or time pressures, rapid testing provides a faster path to validation.

Rapid testing methods work particularly well for SaaS design validation because they can:

  • Validate concepts before development: Test wireframes and mockups before building
  • Narrow down options: Compare multiple design variations quickly
  • Identify usability issues: Spot problems before they reach real users
  • Provide qualitative insights: Understand the “why” behind user preferences

Examples of SaaS use cases for rapid testing:

  • New feature naming and messaging
  • Dashboard navigation restructuring
  • Enterprise sales page designs (low traffic)
  • Value proposition clarity testing
  • Multi-option comparisons (6-8 variations)

The natural next question might be “which rapid testing method should I use?” Here is another decision tree framework to help answer that.

This framework is a guide to determining which rapid testing method is best suited for your website design changes.

Incorporate your experimentation strategy into your design process

With a decision-making strategy for how and what to test in hand, the next step is incorporating it into your design process. The most successful SaaS teams don’t treat validation as an afterthought; they build it into their process from the beginning:

  • During ideation: Use rapid testing to validate concepts and narrow options before detailed design work
  • During design: Test wireframes and mockups to identify issues before development
  • Before launch: Use A/B testing for high-stakes changes, rapid testing for others
  • After launch: Continue testing iterations based on user feedback and performance data

The compounding benefits of a sound experimentation strategy

The goal isn’t to replace A/B testing with rapid methods or vice versa. Both have their place in a mature experimentation strategy. The key is understanding when each approach provides the most value for your specific situation and constraints.

Teams that master this balanced approach to validation see remarkable improvement, including:

  • 50% better A/B test win rates (because rapid testing helps identify winning concepts)
  • Faster time-to-market for design improvements
  • More confident decision-making across the organization
  • Better team morale from seeing results from their work more quickly

Perhaps most importantly, they avoid the extremes of either testing nothing (high risk) or testing everything (slow progress).

For SaaS teams serious about optimization, the question isn’t whether to validate design changes; it’s whether you’re using the right validation method for each decision.

Start by auditing your current design change process. Are you testing changes that should be implemented immediately? Are you implementing changes that should be tested? By aligning your validation approach with the strategic importance and risk level of each change, you can move faster without sacrificing confidence in your decisions.

And if you aren’t sure how to get started, our team can help.

Building A Monetization Strategy That Doesn’t Leave Untapped Revenue in Your User Base
https://thegood.com/insights/monetization-strategy/ - Thu, 17 Jul 2025

The post Building A Monetization Strategy That Doesn’t Leave Untapped Revenue in Your User Base appeared first on The Good.

Product leaders are rightfully obsessed with acquisition. They pour resources into new sign-ups and track monthly active users religiously.

But over many years of working with SaaS teams, there is something counterintuitive we’ve learned about this approach.

The companies that scale fastest aren’t always the ones acquiring the most users. They’re often the ones who build monetization strategies that focus on their existing user base. As users find more value in the tool and increase usage, pricing scales with that value.

Realistically, every SaaS tool will hit a growth plateau. There aren’t infinite users that will find value in your product, even though we all wish there were.

The goal is to build growth into your monetization strategies so you don’t leave any untapped revenue in your existing user base. This ensures you don’t reach a premature growth plateau once net new users become stagnant.

The fundamental shift in monetization strategy from seats to value

Before we get started, throw your traditional SaaS monetization playbook out the window.

For years, companies have relied on seat-based pricing because it made sense. With each new hire, a new account or seat was purchased for tools. Revenue grew linearly with team size.

But now one person can do the work of two or three people. AI tools, automation, and productivity software mean that the relationship between users and value creation has completely shifted. When your customers can accomplish twice as much with half the team, seat-based pricing isn’t sustainable.

Smart companies are pivoting to value-based extraction. Instead of charging for the number of people using the software, they charge for the value it creates. This isn’t just about switching to usage-based pricing; it’s about fundamentally rethinking how you capture the value your product delivers.

Consider HubSpot’s evolution. Instead of sticking to their standard seat-based pricing model as the market has evolved, they’ve created a dynamic pricing system. Users can pay for seats at their specific account tier, but also have a layer of contact-based pricing, aligning cost with the actual value delivered rather than just the number of users.

They’ve also recently added token-based pricing for certain functions in the tool, like marketing email sends, AI features, and API calls. These changes allow them to maintain revenue growth even as customers reduce their seat count.

You’re trying to capture more of the consumer surplus

Most SaaS tools have a consumer surplus. There are features or outcomes that customers would pay more for, but don’t have to because of your pricing model.

You can never eliminate all surplus (you need happy customers), but you can likely capture more of it through strategic segmentation and value extraction.

Think about your demand curve. It’s not a straight line. It’s a complex slope that varies by customer segment, use case, and willingness to pay. Most companies set one or two price points and leave massive value on the table. The companies that scale create multiple packages along that curve.

Netflix understood this when it evolved from a single $7.99 plan to Basic, Standard, and Premium tiers. Each tier captures different segments of willingness to pay while allowing customers to self-select into the option that works for them. However, the real insight wasn’t in the tiers themselves, but rather an understanding that different customer segments valued different features. Knowing that allowed Netflix to extract more value from customers who were willing to pay more while keeping price-sensitive customers from defecting.

Research changes everything

To get started on a monetization strategy based on value and capture more of the consumer surplus, companies have to build their understanding of what customers are willing to pay for.

Research from monetization and pricing expert Madhavan Ramanujam says that 20% of features drive 80% of willingness to pay. The challenge is to make sure you aren’t over-indexing on features that customers don’t actually value while underdeveloping the ones that drive revenue.

The solution is systematic research that reveals what customers actually want to pay for. Here are three methods to make it happen:

MaxDiff analysis: Present customers with feature lists and ask them to identify the most and least important items. With enough volume, you can rank features by their impact on willingness to pay. Features that over 50% of customers want become your “leader” features, or the core value proposition that justifies your price point.
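The simplest way to analyze MaxDiff responses is count-based scoring (best picks minus worst picks, over appearances); production studies often use more rigorous models such as hierarchical Bayes. A sketch with hypothetical feature names:

```python
from collections import Counter

def maxdiff_scores(tasks):
    """Count-based MaxDiff scoring: (times chosen best - times chosen worst)
    divided by times shown, per feature. Each task is a tuple of
    (features_shown, chosen_best, chosen_worst)."""
    best, worst, shown = Counter(), Counter(), Counter()
    for features, chosen_best, chosen_worst in tasks:
        shown.update(features)
        best[chosen_best] += 1
        worst[chosen_worst] += 1
    return {f: (best[f] - worst[f]) / shown[f] for f in shown}

# Three hypothetical choice tasks over four hypothetical features
tasks = [
    (("sso", "api", "export", "themes"), "sso", "themes"),
    (("sso", "export", "themes", "api"), "api", "themes"),
    (("api", "sso", "export", "themes"), "sso", "export"),
]
scores = maxdiff_scores(tasks)  # "sso" ranks highest, "themes" lowest
```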

Anchoring questions: Instead of asking customers what they’d pay (which doesn’t work), ask them to compare your value to a known competitor. “If Salesforce brings your team 100 points of value, where do we rank?” This gives you relative value positioning without the discomfort of direct pricing questions.

Van Westendorp pricing: Ask customers four questions about price sensitivity: What’s acceptable? What’s expensive but you’d consider it? What’s so expensive that you wouldn’t consider it? What’s so cheap that you’d question the quality? This reveals the psychological price boundaries for different customer segments, providing a window of tenable prices that capture both the price-sensitive and high-willingness-to-pay corners of the market.
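As a deliberately simplified sketch of how those four answers become a price window: the full Price Sensitivity Meter intersects four cumulative curves, but the medians of the two extreme answers give a rough first cut. All prices here are hypothetical:

```python
from statistics import median

def acceptable_price_range(responses):
    """Each response holds one respondent's four Van Westendorp answers.
    The median 'too cheap' answer is the price below which half the sample
    would question quality; the median 'too expensive' answer is the price
    above which half would walk away. Between them sits a rough window."""
    floor = median(r["too_cheap"] for r in responses)
    ceiling = median(r["too_expensive"] for r in responses)
    return floor, ceiling

# Hypothetical monthly prices from three respondents
survey = [
    {"too_cheap": 5, "acceptable": 15, "expensive": 25, "too_expensive": 40},
    {"too_cheap": 8, "acceptable": 20, "expensive": 30, "too_expensive": 50},
    {"too_cheap": 6, "acceptable": 18, "expensive": 28, "too_expensive": 45},
]
acceptable_price_range(survey)  # (6, 45)
```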


The Shopify monetization strategy: how to scale with your customers

An innovative and extremely effective monetization strategy allows you to grow with your customers. Shopify cracked this code by creating a model where their revenue increases as their customers become more successful. Instead of charging an ever-larger flat monthly fee, they take a percentage of gross merchandise volume (GMV).

This creates a virtuous cycle: Shopify is incentivized to help their customers succeed because customer success directly translates to revenue growth. When a merchant goes from $10,000 to $100,000 in monthly sales, Shopify’s revenue from that customer increases 10x.

Smaller businesses benefit from a proportional cost as they get started, and if businesses leave once they grow, Shopify doesn’t mind.

Shopify actually optimizes for this churn, not against it. As Archie Abrams, VP of Product and Head of Growth at Shopify, explains: “The way we think about churn [goes] back to Shopify’s mission and what we want to do, which is to increase the amount of entrepreneurship on the Internet.”

Instead of trying to prevent customers from leaving, Shopify focuses on lowering barriers to entry so more entrepreneurs can try starting businesses. They know most will fail, but the few who succeed generate massive value. This counterintuitive approach has helped Shopify power over 10% of U.S. e-commerce with $235 billion in GMV in 2023.

The beauty of this model lies in its retention through value creation, rather than friction. Traditional SaaS companies worry about churn because losing a customer means losing all their revenue. But when your revenue scales with customer success, churn becomes less of a concern. Your most successful customers are worth 10x or 100x more than your average customer, creating a natural buffer against churn.

Finding your untapped revenue

The process of discovering untapped revenue in your user base can be synthesized into a few steps:

Step 1: Segment your demand curve

Different customer segments have different willingness to pay. Enterprise customers might value security and compliance features, while SMBs prioritize ease of use and cost. Map these segments and understand what each values most.

Step 2: Identify value gaps

Look for places where customers are getting significant value but paying relatively little. These are your biggest opportunities for revenue expansion. Often, these are found in features that save customers time or help them make money.

Step 3: Create extraction mechanisms

Build pricing tiers, usage limits, or premium features that allow high-value customers to pay more for the value they receive. The key is making this feel like a fair exchange rather than a penalty.

The most effective monetization strategies combine multiple approaches. For example:

  • Base + usage: Provide a predictable subscription base with usage-based charges for additional value. This gives customers cost certainty while allowing you to capture upside from heavy users.
  • Tiered value: Create pricing tiers based on customer segments and use cases, not just feature lists. Each tier should feel designed for a specific type of customer.
  • Expansion revenue: Build mechanisms for customers to naturally increase their spending as they grow. This could be through additional seats, increased usage, or premium features.
  • Value-based upgrades: Tie pricing increases to value delivered, rather than just features added. When customers see clear ROI, they’re willing to pay more.
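The "base + usage" combination above can be sketched as a simple billing rule (the plan numbers are hypothetical):

```python
def monthly_bill(base_fee: float, included_units: int,
                 unit_price: float, units_used: int) -> float:
    """Base + usage pricing: a predictable subscription covering some usage,
    plus metered charges for consumption beyond the allowance."""
    overage = max(0, units_used - included_units)
    return base_fee + overage * unit_price

# Hypothetical plan: $49/month including 1,000 API calls, $0.02 per extra call
monthly_bill(49.0, 1000, 0.02, 800)    # light user pays just the base: 49.0
monthly_bill(49.0, 1000, 0.02, 6000)   # heavy user: 49 + 5,000 * 0.02 = 149.0
```

This is the structure that gives customers cost certainty while capturing upside from heavy users.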

Step 4: Test and iterate

Pricing optimization is an ongoing process. Test different approaches, measure customer response, and iterate based on data. The best monetization strategies evolve continuously.

A monetization strategy that works for the long term

The future of SaaS monetization is about aligning pricing with value creation rather than resource consumption. The untapped revenue in your user base is real, measurable, and accessible, and approaching it with a value-based strategy will help you capture it.

At The Good, we specialize in helping SaaS companies optimize their monetization strategies through data-driven research and strategic experimentation.

Our services can help you identify value gaps, design pricing experiments, and implement changes that drive meaningful revenue growth. Get in touch to learn how we can help you extract more value from your existing customers.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

What Mentorship Looks Like In Today’s Flat, Lean, And Growing Orgs
https://thegood.com/insights/mentorship/ - Thu, 26 Jun 2025

The post What Mentorship Looks Like In Today’s Flat, Lean, And Growing Orgs appeared first on The Good.


The org chart isn’t what it used to be. As hierarchies flatten and teams are stretched thin, the traditional mentorship model (where wisdom flows from senior to junior) is shifting.

Maybe you find yourself as the most senior person in your function, surrounded by brilliant colleagues who work in completely different disciplines. Or maybe you’re managing a team while still figuring out your own career trajectory. The old playbook of “find a mentor who’s two levels above you” doesn’t apply when there are only three levels total.

We spoke with three product professionals navigating these workplace realities to gain a deeper understanding of what mentorship looks like today. From their perspective, mentorship isn’t disappearing. It just takes a little creativity to find these days.

Their stories show that finding mentorship requires intention and a willingness to look beyond your immediate team. And those who crack the code on growing alongside a mentor, despite flat, resource-constrained environments, see higher job satisfaction and better retention.

A great mentor understands your why

If you’re lucky enough to have a manager with expertise in your discipline, they can be a great source of mentorship. But according to Brittany Lang, UX Research Manager and proud mentor, growing talent is an often-overlooked aspect of management. “I think a lot of times there’s just not a lot of energy put into growing people,” says Brittany. “It’s extra effort, but it’s important if you wanna keep people.”

For Brittany, her approach to growing people starts with understanding their “why.”

“It's the most important thing to do as a research leader—to understand who my people are and what they want out of this job.”

Understanding her team members’ driving purpose helps her keep her team motivated to cross the finish line.

“If I'm asking them to do something and I can't give them an explanation of why or how it connects to those goals they have and their ‘why,’ then I'm losing them—and they're losing out. It's a lose-lose, and it should be a win-win situation.”

Beyond understanding their why, Brittany ensures that each team member has had opportunities to grow by keeping track of what they've accomplished and what they still need to do. It’s a part of how she keeps her team motivated. And the way she sees it, when her team is intrinsically motivated to do the work, it’s mutually beneficial to the company and the employee. “That's the dream,” she says.


Manager ≠ mentor

Not everyone is so lucky to have a manager like Brittany.

Sumita Paulson, UX Designer & Strategist, who has been both a mentor and a mentee throughout her 15-year career, says not all managers are mentors. “In a period of 15 years, I’ve only had two real mentors,” says Sumita.

Sumita notes that while it might be your manager's job to make sure you have work, “they don’t always make it their job to make sure it’s rewarding and tee you up for your next big success.” Whether due to the time pressures of their role or the lack of organizational structure to support it, managers don’t always see it as their job to tend to their employees’ careers.

“It’s not often that you find someone who aligns with your goals and wants to help you get there. A mentor is someone who is first amenable to and interested in helping you grow, then takes a proactive role.”

Sumita has found that managers who have taken a proactive approach to understanding her goals and interests are the ones who have created fertile ground for a mentor-mentee relationship. “Being interested in you as a person is the key thing.”

To spot a potential mentor, Sumita advises paying attention to who shows earnest interest in your goals.

“If they’re asking broader and more intentional questions beyond the job, that’s a sign that they want to get a better sense of who someone is as a person and what is interesting to them. They are starting to invest in your story.”

Mentorship can come from anywhere

Managers sometimes demonstrate a willingness to mentor. But where should you look for mentorship if your organization is relatively flat?

At one startup, Data Analyst and UX Researcher Anton Krotov was the sole research expert among a team of experts, without a research manager. “I was working with people outside of my field completely. So I was a senior person, and there was nobody else with a more senior expertise to ask advice from.”

Anton found himself as the sole researcher among a team of extremely talented and senior colleagues whom he needed to confidently serve—developers, product managers, designers, etc. That’s when he embarked on finding mentors outside of his company.

He leveraged outside mentors to help him upskill on new methodologies related to his role and to understand the ethical considerations of working with children.

“When I started to work with educational products oriented for very early school-aged kids, like primary school kids, I needed to do some in-person research, like focus groups. But I came with experience mostly in usability studies with adult people who articulate their wants and needs very differently. So what I needed to do was to find an anthropologist-slash-psychologist who was working with kids and could really explain to me how to do that right.”

Key to that relationship's success was working with a mentor who gave him homework, which Anton explained “could expand their value beyond our 30-minute time slot.” The value went beyond education and included accountability and reflection.

“That real value person-to-person mentorship gives to you is reinforcement. You come back to your mentor, bringing the results of your first try, second try, and you discuss that. That is the most valuable tool in upskilling.

I haven't found anything yet that would've beaten mentorship in terms of result, return on investment, confidence, and the feedback of my colleagues who saw me now more capable than before.”

Making mentorship work—in any org structure

Whether you're a manager looking to develop your team or an individual contributor seeking growth, mentorship might mean finding creative places to establish and develop relationships.

Mentorship doesn't have to look like the traditional model. It can be cross-functional, external, or even peer-to-peer. What matters is the intentionality behind the relationship and the commitment to growth on both sides.

Developing talent isn't just about individual growth; it's about organizational resilience. As Brittany noted, when team members are intrinsically motivated and growing in their roles, everyone wins.


From Data Collector to Data Connector: Embracing Research Democratization
https://thegood.com/insights/research-democratization/ - Mon, 16 Jun 2025

The post From Data Collector to Data Connector: Embracing Research Democratization appeared first on The Good.

As AI capabilities expand and research teams stay lean, many researchers find themselves supporting hundreds, if not thousands, of colleagues in their organizations. For them, the model of centralized research is creating bottlenecks that slow decision-making and limit the reach of customer insights.

“The fundamental shift that people have to make is that you’re no longer a data collector. You’re a data connector,” says Ari Zelmanov, former police detective and current research leader. In Ari’s view, as teams get leaner and tools get better at executing research tasks, the job of the researcher becomes standing up repositories, socializing learning mechanisms, and creating the systems that empower organizations to act on good information.

We spoke with research leaders who've successfully made this transition, transforming their teams from siloed specialists into customer-centric learning cultures. Their approaches varied, but one theme was clear: when you empower others to answer their own questions, you don't diminish your value; you multiply it.

The d-word holding us back

Before diving into solutions, there's an elephant we need to address: democratization. Many researchers worry that democratizing research will lead to poor methodologies, incorrect conclusions, or devalued expertise. But Ari feels the argument is moot.

"The only people arguing about democratization are researchers," says Ari. "Nobody else is arguing about it. We're infighting about something that we have zero control over. It's happening."

I tend to feel like anyone arguing about democratization is missing one critical point: customer centricity isn't just one person's job.

Anton Krotov, Researcher in an organization of over 10,000 people, was in the fortunate position of being very trusted by his colleagues. So much so that they believed research could answer all of their questions.

“I had already established a reputation. I was fortunate that I didn't need to sell the value of research. Quite the opposite. People came to me with too many requests. They believed research could do everything for them. I needed to set up boundaries.”

Overwhelmed with requests from colleagues, Anton realized that the solution wasn't saying no—it was saying yes in a different way. Rather than becoming a bottleneck, Anton chose to become a bridge.


Connect teams through shared intelligence

Good intelligence is the responsibility of many disciplines, not just research. To get answers quickly, Ari's teams use what he calls the "Moneyball" approach to research, a framework that prioritizes speed and accessibility over methodological purity:

"Product teams are incentivized to move fast. So, how do you make research fit into that in a way that makes sense? We built something called Moneyball Research. It's super simple: start with what you know. It could be in your repository, it could be what you know. Then you go to what data is accessible within 24 to 48 hours. That's usually internal analytics, CSAT tickets, NPS, sales conversations, and tribal knowledge. Then—and only then—do you go to primary research."

This approach shifts conversations away from methods and focuses instead on what teams need to know and how confident they need to be. "Then it's up to the researcher to be the doctor. Diagnose that, determine how they're going to collect that evidence given the time, money, and level of rigor."
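The Moneyball triage order can be sketched as a small function; the labels below are paraphrases of Ari's tiers, not his wording:

```python
def moneyball_next_step(answered_in_repository: bool,
                        internal_data_available: bool) -> str:
    """Sketch of the Moneyball triage order: exhaust cheap evidence
    before commissioning primary research."""
    if answered_in_repository:
        return "start with what you already know"
    if internal_data_available:  # analytics, CSAT tickets, NPS, sales calls
        return "pull data accessible within 24-48 hours"
    return "plan primary research"
```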

René Bastijans, lead researcher at a growth-stage startup, has found creative ways to loop colleagues into data collection. His sales team is trained to lightly survey prospects during sales calls and report back to the wider team.

"We've trained our sales team to ask for specific data and enter it into Salesforce. Researchers and the product team have access to these data, and therefore, sales has allowed us to keep a pretty good pulse on the market."

This creates a healthy feedback loop that keeps everyone abreast of evolving user needs while extending the research team's reach without expanding headcount.

Invite colleagues into the research process

While it might seem counterintuitive to share methodologies and research responsibilities, successful research leaders see democratization as an opportunity rather than a threat.

To remove research bottlenecks, Anton ran internal workshops to upskill his colleagues on doing their own research. This proactive approach to education focused on tailoring training to his colleagues' specific needs: "I try to cover the cases that will be really applicable, so I don't offer any cookie-cutter material and don't go much into theory. It's really tailored to their day-to-day work."

The key is meeting people where they are and giving them tools that fit their contexts. Not everyone needs to become a master researcher, but many can learn to conduct basic customer interviews or query data effectively.

Brittany Lang, UX Research Manager and M.S. in Information Science, uses project reviews as a time to cultivate a shared point of view and continually refine her thinking.

“Before we socialize research plans, I usually take a look at it, or I have someone else on my team take a look at it. It doesn't have to be your manager that's reviewing something, but can someone give you feedback?

It's nice when coworkers leave comments and I can see what other people on the team have said and we can agree or challenge, and then have a discussion about it. I also learn in those moments too. When I'm looking at how members of my team have reviewed other work, where they're coming from and their perspective, I learn a lot from them in those moments.”

Facilitate low-risk learning

It takes more than a few ambitious researchers to imbue a company’s culture with a learning mindset, which is why rituals and learning programs are so important.

Anton’s employer formalized this approach to building safe learning environments through a program called "Gigs for Growth," a repository of side projects from different departments where employees can apply to work on learning opportunities outside their typical scope.

"It's like a company green light that you can work on learning during your full-time gig and outside of your typical work scope. Something that you would never otherwise be able to touch in the company."

Under this program, researchers can support QA engineers, sales can support marketing, and everyone gets exposure to new perspectives that inform their primary roles. "You get some really new experiences that otherwise you wouldn't be able to."

At The Good, we like to build regular, low-stakes opportunities for knowledge sharing and skill development. One such approach is a ritual called "Random Question of the Week": during a bi-weekly meeting, team members share client questions that stumped them or that they felt they could have answered better.

These conversations help build shared perspectives that then get turned into artifacts:

  • FAQ entries for brief, punchy answers
  • Articles for long-form perspectives
  • Policies or SOPs that outline ways of working

The result is that teams become more aligned, can answer tough questions on the spot, and save time by referring to their collective knowledge instead of rehashing the same discussions.

Another effective ritual is "Critique & Share" sessions, where team members bring questions, websites they admire, or work they're developing to get fresh perspectives from colleagues who haven't been deep in the weeds of a particular project.

Maggie Paveza, Senior Strategist at The Good, shares that it has helped her break the ice when building a shared P.O.V.

"It's pretty informal and often we're not showing our own work, so it feels less intimidating to ask your team members, 'why do you think this competitor is using this strategy,' than if it were your own work," explains Maggie.

The power of being a data connector

"The fundamental problem that research as an industry has is we've been myopically focused on the front end of the equation," says Ari. "Data collection, statistical significance, theoretical saturation—insert whatever fancy academic word you want in here. But the real power comes on the back end of the equation."

That back end is about connection, synthesis, and empowerment. When researchers shift from being data collectors to data connectors, they don't lose their expertise; they amplify it.

As Anton puts it, "Where soil is right, then you can do things. Praise people for when they do things great. You can learn from mistakes, you can learn from success."

The goal isn't to turn everyone into a researcher. It's to create an environment where customer insights flow freely, where good questions get asked by many disciplines, and where learning happens continuously rather than in bursts.

Making the shift

Building a customer-centric learning culture doesn't happen overnight, but it starts with understanding where your organization is open to change and being constructive about how you facilitate it.

Look for teams and individuals who are already curious about customers. Find the places where people are asking good questions but lack the tools or confidence to find answers. Then meet them there with the right combination of education, tools, and support.

"At the end of the day, it's about empowering decision-making," says Ari. And in a world where customer expectations evolve quickly and research teams are lean, that empowerment might be the most valuable thing researchers can provide.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post From Data Collector to Data Connector: Embracing Research Democratization appeared first on The Good.

]]>
How To Make User-Centered Decisions When A/B Testing Doesn’t Make Sense https://thegood.com/insights/why-rapid-test/ Fri, 23 May 2025 20:04:02 +0000 https://thegood.com/?post_type=insights&p=110602


]]>
The right tool for the right job. It’s a principle that applies everywhere, from construction sites to surgical suites, yet for digital product development, many teams are singularly focused on A/B testing.

Don’t get me wrong, A/B testing is incredibly powerful. It’s the gold standard for high-stakes, high-traffic decisions where statistical significance matters most. But when it becomes your only tool, you create unnecessary constraints that can paralyze decision-making and slow innovation.

The reality is that different decisions require different levels of rigor, confidence, and investment. Luckily, there is a complementary approach that fills critical gaps in your experimentation toolkit. By understanding when each method is most appropriate, teams can make faster, more informed optimizations while maintaining the rigor needed for their most high-stakes decisions.

Creating “experience rot”

A/B testing borrowed its methodology from medical intervention studies, where 95% confidence intervals and statistical significance aren’t just nice-to-haves; they’re life-or-death requirements.

But we’re not running clinical trials, and we don’t always need the same level of assurance in product decisions to move toward the right outcome.

An infographic of the evidence hierarchy inherited from medical disciplines.

A/B testing can be overkill for the decisions product teams need to make daily. Yet teams have become so committed to this single methodology that they’ve created what researcher Jared Spool calls “Experience Rot,” the gradual deterioration of user experience quality from teams moving too slowly or focusing solely on economic outcomes.

The costs of slow testing cycles are tangible and measurable:

  • Market opportunities disappear while waiting for test results
  • Competitors gain ground during lengthy testing phases
  • Development resources get tied up in prolonged testing initiatives
  • Customer frustration builds as issues remain unfixed
  • Decision fatigue sets in as teams debate what to test next

But the problem runs deeper than just speed. Many teams face contexts where A/B testing simply isn’t feasible. Regulatory challenges in healthcare and finance, low-traffic scenarios for B2B products, technical constraints, and organizational politics all create barriers to traditional experimentation.

By the time a test idea passes through all the bureaucratic hoops and layers of oversight at an organization, it’s often no longer lean enough to justify testing. Without an alternative testing method, teams are left without any data at all.

So, how do we:

  1. Circumvent the challenges of A/B testing, and
  2. Prevent experience rot?

Enter rapid testing

Rapid testing isn’t about cutting corners or accepting lower-quality insights. It’s about matching your research method to the decision you’re trying to make, rather than forcing every question through the same rigorous, but often slow, process.

Like A/B testing, rapid testing helps you understand if your solutions are working. Unlike A/B testing, rapid tests are conducted with smaller sample sizes, completed in days rather than weeks or months, and often provide qualitative insights that A/B tests can miss.

“The speed at which we obtain actionable findings has been impressive,” says Gabrielle Nouhra, Software Director of Product Marketing, who leverages rapid testing with The Good for research and experimentation. “We are receiving rapid results within weeks and taking immediate action based on the findings.”

The key is understanding when each approach makes sense. Not every decision requires the same level of rigor, and smart product teams create systems that allow critical insights to move faster.


A framework for decision making

So, how do you decide when to use rapid testing versus A/B testing? The decision starts with two critical questions: Is this strategically important? And what’s the potential risk? With those two questions in mind, you can map your ideas on a simple 2×2.

A framework to use for decision making and deciding why to rapid test.

High Strategic Importance + Low Risk = Just Ship It. If you can’t identify meaningful downsides to a change but know it’s strategically important, you probably don’t need to test it at all. These are your quick wins.

Low Strategic Importance = Deprioritize. Not everything needs to be tested. Some changes simply aren’t worth the time and resources, regardless of the method you use.

High Strategic Importance + High Risk = Test Territory. This is where both A/B testing and rapid testing live. The next decision point becomes: Can you reach statistical significance within an acceptable timeframe? Are you technically capable of running the experiment?

If the test isn’t technically feasible or traffic constraints make the time-to-significance longer than is acceptable, rapid testing becomes your best option for de-risking the decision.
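Purely as an illustration, the framework above can be restated as a small lookup. (The function name and the string labels are ours, chosen to mirror the quadrants; this is a sketch of the logic, not a formal rule.)

```python
def choose_approach(strategically_important: bool, high_risk: bool,
                    ab_test_feasible: bool) -> str:
    """Map the 2x2 framework (plus feasibility) onto a next step.

    A plain-English restatement of the decision framework above;
    the labels are illustrative, not prescriptive.
    """
    if not strategically_important:
        return "deprioritize"   # not worth testing, regardless of method
    if not high_risk:
        return "just ship it"   # a quick win; testing would only slow it down
    # High importance + high risk: test territory. Feasibility decides which tool.
    return "A/B test" if ab_test_feasible else "rapid test"
```

For example, a strategically important, high-risk change on a low-traffic page (`choose_approach(True, True, False)`) lands on rapid testing.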

A decision tree to determine whether to test something and why to rapid test or use another approach.

Rapid testing in practice

Rapid testing encompasses various methodologies, each suited to different types of questions. Here are just a few examples:

First-Click Testing helps confirm where users would naturally click to complete a task. Perfect for interface design decisions and navigation optimization.

Preference Testing goes beyond simple A/B comparisons to evaluate multiple options, often six to eight variations, helping teams understand which labels, designs, or approaches resonate most with their target audience.

Tree Testing reveals where users might stray from their intended path, using nested structures to understand navigation behavior without the distraction of full visual design.

A framework to use when determining which rapid testing method is best suited for your needs.

The beauty of these methods lies in their speed and specificity. Rather than testing entire page redesigns, rapid testing allows you to validate specific hypotheses quickly. Which onboarding segments will users self-identify with? Where should we place a new feature to maximize engagement? Which design elements increase trust among new visitors?

Rapid tests can also guide our A/B testing strategy. If we’re weighing multiple naming options within an app experience and simply want to understand which label users find most accurate or most representative, running a rapid test can narrow down the options and help us decide what to A/B test.

Building a rapid testing practice

Implementing rapid testing effectively requires more than just choosing the right method. Teams that see the best results follow several key principles:

  1. Impact pre-mortems: Before testing, clearly define what success looks like and what impact you expect if implemented. This helps connect testing activities to business outcomes and prevents post-hoc justification of results.
  2. Acuity of purpose: Keep tests focused on specific questions rather than trying to evaluate everything at once. A/B testing often encourages comprehensive evaluations, but rapid testing works best with precise hypotheses.
  3. Pre-defined success criteria: Establish clear benchmarks before you start testing. If 80% of users can complete a task, is that a win? What about 60%? Define these thresholds upfront to avoid moving goalposts when results come in.
  4. Mute context: When testing specific elements, remove unnecessary context that might distract from the core question. Full-page designs can overwhelm participants and dilute feedback on the element being tested.
  5. Sunlight: Even experienced researchers benefit from collaborative review of test plans. Transparency builds confidence in the process, and a peer review of test designs helps identify potential issues before execution.
  6. Share: Circulate your impact, what you’ve learned about your audience, and get people excited about the work. The goal is to build visibility, create a case for why this work is valuable, and encourage people to make decisions with data.

The compound effects of speed

Teams that successfully implement rapid testing alongside their existing A/B testing programs see remarkable results. Our clients report 50% higher A/B test win rates, better customer satisfaction scores, and significantly faster time to insights.

But perhaps most importantly, they report better team morale. There’s something energizing about seeing results from your work quickly, about being able to iterate and improve based on real user feedback rather than lengthy committee discussions.

It’s never too late to pivot. The idea is to move away from long-cycle decision making, where we send something through the whole design and development cycle only to end up with a lackluster outcome, toward a process that gives us quick, early signals.

Making the transition

The goal isn’t to replace A/B testing. It remains the gold standard for high-stakes, high-traffic decisions. But by adding rapid testing to your toolkit, you can accelerate the decisions that don’t require months of statistical validation while still maintaining confidence in your choices.

As decision scientist Annie Duke writes in Thinking in Bets, “What makes a great decision is not that it has a great outcome. It’s the result of a good process.” Rapid testing gives teams a process for rational de-risking that emphasizes both speed and quality.

The question isn’t whether you should test your ideas; it’s whether you’re using the right testing method for each decision. In a world where speed increasingly determines competitive advantage, teams that master this balance will consistently outpace those stuck with only one tool in their kit.

Ready to accelerate your decision-making process? Our team specializes in helping product teams implement rapid testing alongside existing experimentation programs. Get in touch to learn how we can help you cut testing time without sacrificing insight quality.


The post How To Make User-Centered Decisions When A/B Testing Doesn’t Make Sense appeared first on The Good.

]]>
Three Green Flags to Look For in a Research Vendor https://thegood.com/insights/research-vendor/ Wed, 09 Apr 2025 16:19:11 +0000 https://thegood.com/?post_type=insights&p=110459


]]>
It seems like everyone is talking about the flattening of the talent stack these days. Tech leads are doubling as product managers, product managers are playing designer, and researchers are lending their talents to the insights team. Anyone with a laptop is doing more with less.

Perhaps no corner of the product industry is witnessing democratization more than UX research.

Despite “research” working its way into the job descriptions of more and more disciplines, experienced, high-caliber researchers will always have their place in industry. Whether it’s to supplement your team’s capacity, tap into deep expertise, or get an objective outside perspective, research vendors are valuable for a host of reasons. But between traditional agencies and the recent rise of independent and fractional talent, how do you know you’re talking to someone with the chops to execute at a high level?

We asked product research experts Hannah Shamji and Jon MacDonald how to spot a great research vendor. Read on to hear their perspective on what “green flags” to look for when vetting your next research partner.

They Ask a Healthy Number of Questions

Most experienced researchers have chosen the wrong method at least once in their careers. And the outcome is always disappointing. “Picking the wrong research method leaves you with results you effectively can’t use, and is a huge waste of resources,” says Jon MacDonald, Founder and CEO at The Good.

Fortunately, that painful lesson has an upside: lasting learning value. Careful to avoid diving headfirst into a low-utility approach, experienced researchers ask plenty of questions before jumping into execution. This ensures they understand both the problem space and how the research will be acted on.

“If they are asking questions, it tells you they want to understand the business context,” says Hannah Shamji, former psychotherapist turned customer researcher. “If they’re just jumping in and not really scoping things out, it’s probably a sign they’re not the right fit.”

That heavy lifting up front helps shape a clear scope, but the conversation is more than just a learning exercise. A strong vendor will then massage the methodology to fit the business challenge. “I think it’s important to not lead with a method unless you have a very clear diagnosis,” says Hannah.

Jon agrees. “A good researcher will avoid a cookie-cutter approach,” says Jon. It’s why his team kicks off every project with a conversation designed to uncover nuance, align on business goals, and extract the institutional knowledge embedded within the team. It’s a process Jon calls “diagnosing before prescribing.” And it’s why The Good doesn’t respond to RFPs.

“If a scope is completely mapped out before involving a vendor, we often find that it’s poorly suited to yield the outcomes they’re after,” says Jon.

By forming scope through a collaborative process that starts with a conversation, research vendors are well-equipped to help craft an approach that’s appropriate and effective.


They Can Walk You Through the Tradeoffs

If you’re at the stage where you’re vetting research vendors, you probably have some idea of how to get the job done, e.g., through a survey or customer interviews. But Hannah warns that because research is so “accessible-sounding,” it’s common to chat with clients who start out asking for one form of research but really need another.

From Hannah’s point of view, a true expert will help you navigate the tradeoffs of one method vs another. They’ll help you understand how an approach impacts your time, budget, and expected outcomes. “There are a lot of easy, accessible go-tos like running a survey and talking to customers, but there are so many other forms of research that can close the gap,” Hannah says.

“The difference between an executioner and consultants is that if you want someone to do, that’s a slightly different hire. If you want someone who will help you navigate the tradeoffs, it’s a different conversation.”

Jon agrees.

“Our clients love chatting through their needs with us because we’re really good at helping them outline the constraints and requirements of the task at hand and figuring out where to get the most leverage. We’re a thought partner. So by being brought in early enough, we can help them think through what they need to learn with new research versus where we can rely on historical or secondary research.”

In an ideal world, we would execute at the perfect balance of depth, speed, and cost. But at the speed of business today, most contexts leave us wanting for either time, budget, or rigor. A good research vendor will help you navigate the tradeoffs and make an informed decision.

“Sometimes you need to be scrappy, sometimes you need to go deep,” Hannah explains. “Being able to juggle your timeline and adapt the methods to your needs is key. Not everything needs significant rigor.”

As such, Hannah recommends being up front with your vendor and communicating what your priorities are—being honest about your budget, when you can act on the findings, who’s involved, and what’s in your power to change (versus what authority lives on another team). This context will help your vendor deliver “just enough research.”

They Are Flexible in Their Collaboration Style

For Hannah, research services are best delivered in a way that meets the team where they are. That tailored collaboration style is what Jon calls a “one size fits one” approach.

As such, our experts believe a strong research vendor tailors their engagement to the company's needs, understanding that research roles can shift depending on the stage of the business. "Depending on who’s involved, I think about research differently," Hannah says. "There are certain stages where it’s not helpful to bring in a vendor with a buttoned-up process."

For instance, Hannah finds that early-stage founders seeking product-market fit may benefit more from hands-on coaching than outsourced research, “so they can stay close to the data and be at the frontline of it.”

For those early-stage founders, Hannah recommends working with a partner who will open up their process or even take a more coaching-based approach. That way, the feedback loops are faster and the learnings are gathered first-hand. “You want to own the process yourself and minimize the gap between learning and doing.”

This manifests in conversational snapshots of the data as it’s rolling in. "Sometimes I will drip out the findings as I get them because I know they need to move," Hannah notes. “I’ve had sales [people] jump on a call with me in the middle of me doing research because they just want to ask some questions to fill in the gaps with what I’m learning.”

For Jon, it’s about figuring out how involved a partner wants to be. “Some people want email updates almost daily, others just want a report in their inbox when things are wrapped. We try to work in a way that gives them their desired level of input and transparency.” This kind of adaptability ensures that research remains a business enabler rather than a bottleneck.

How To Choose The Right Research Partner

Choosing the right research vendor isn’t just about credentials or experience; it’s about fit.

To set yourself up for success, look for vendors who:

  • Ask the right questions and diagnose problems before prescribing solutions.
  • Can communicate tradeoffs to determine a path forward that fits your needs
  • Are flexible in their collaboration style, tailoring their approach to the company’s stage and objectives.

By keeping these green flags in mind, businesses can ensure they partner with a research vendor who will deliver value, not just data.


The post Three Green Flags to Look For in a Research Vendor appeared first on The Good.

]]>
How to Make the Move From Intuition-led to Data-driven https://thegood.com/insights/intuition-led-to-data-driven/ Fri, 28 Mar 2025 21:10:55 +0000 https://thegood.com/?post_type=insights&p=110423


]]>
If your bookshelf looks anything like mine, I don’t have to extol the virtues of data-driven practices to you. Case studies from HBR have shown that A/B testing increased revenue at Bing by 10-25% each year, and companies that used data to drive decisions were best positioned to navigate the COVID-19 crisis. But while 83% of CEOs want a data-driven organization, the reality is that many organizations are still largely intuition-run. In those contexts, it takes more than a compelling argument to turn the tide.

If you’re spearheading the shift from an intuition-driven to a data-driven practice, it can be an uphill battle and a lonely one at that. We spoke with Hanna Grevelius, CPO at Golf Gamebook & Advisor, and Maggie Paveza, Digital Strategist at The Good, about how they’ve navigated data-imperfect conditions throughout their careers and successfully advocated for data-first principles.

Whether you’re working with limited data or as your company’s first A/B testing specialist, their stories make one thing clear: doing it alone doesn’t have to be so daunting.

Keep reading to hear about:

  • How they learned to work with data
  • How to leverage data to build prioritization intuition
  • When guessing is appropriate
  • How to be an advocate for data-first practices

1. It’s OK to learn on the job

For those with only a passable knowledge of statistics, it can seem intimidating to dive headfirst into data-driven decision making. But it doesn’t take a data science degree to be able to act on good data. In fact, few teams employ full-time analysts at early stages of growth. Most teams get by early on with the skills of a few generalists, who, it turns out, often learn on the job.

“Quantitative methods are something that I’ve learned in my career,” says Maggie Paveza, Senior Digital Strategist at The Good. Having previously worked as a UX Researcher at Usertesting.com, Maggie started with a strong foundation in qualitative research before adding quantitative methods to her toolkit, which she says helps her tell a fuller story. “The qualitative research forms the why; the quantitative research forms the what.”

For Hanna Grevelius, CPO at Golf Gamebook, her relationship with data grew out of close collaboration with Product Managers.

“My role when I started was in support, answering customer support emails. In trying to understand the scalability of issues, I got to work and talk a lot to product managers who really helped me understand we need to look at the data to know: is it one person who experienced the bug? Is it from a specific version of the app? Is it related to the device or operating system they were on?”

Hanna says learning how to dig for data helped her contextualize customer pain. And through that practice, she built the skills necessary to transition into Product Management. “It was through support that I started to understand that we should look into the data, then eventually I moved over to work on Product Management.”

When she added A/B testing to her toolkit, that took her passion for data to a whole new level.

“It’s so clear when you A/B test that even a small change can have a big impact. When you start seeing the difference, that really sparks an interest.”


2. Use data to define your focus

Once Hanna could confidently dive into the data, she started to use it in her practice, evaluating where traffic hits the app most frequently and focusing on those high-value, high-traffic areas first. This exercise in opportunity sizing taught her that it’s ok to shift focus in light of new data.

Maggie takes a similar approach to prioritization. She uses traffic data to understand what areas of a site or app are highly trafficked, and before proposing a test, she always verifies that an A/B test would see significance within an acceptable amount of time.

“We rely on prioritization methodologies to understand if running a test in an area would have a significant revenue impact and if an A/B test would reach significance in a number of weeks or longer.”
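As a rough illustration of that feasibility check, here is a back-of-the-envelope weeks-to-significance estimate. It uses the common rule-of-thumb power approximation (roughly 95% confidence, 80% power); the function name and all input figures are hypothetical, and a real test plan should use a proper power calculation.

```python
import math

def ab_test_weeks(baseline_rate: float, min_detectable_lift: float,
                  weekly_visitors: float) -> int:
    """Rough weeks-to-significance for a 50/50 two-variant A/B test.

    Applies the rule of thumb n ~= 16 * p(1-p) / delta^2 per variant
    (alpha = 0.05, power = 0.80), where delta is the absolute effect size.
    Illustrative only; not a substitute for a real power calculation.
    """
    p = baseline_rate
    delta = baseline_rate * min_detectable_lift  # relative lift -> absolute
    n_per_variant = 16 * p * (1 - p) / delta**2
    return math.ceil(2 * n_per_variant / weekly_visitors)

# e.g. a 3% baseline conversion rate, hoping to detect a 10% relative lift,
# with 5,000 visitors a week reaching the page:
weeks = ab_test_weeks(0.03, 0.10, 5000)  # -> 21
```

At roughly 21 weeks, most teams would call that time-to-significance unacceptable, which is exactly the situation where the framework points to rapid testing instead.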

If you’re just starting out with a new property, Maggie and Hanna both suggest building a foundational understanding of traffic patterns and regularly refining your strategy. Priorities often shift as a result.

3. In the absence of data, start with a guess

One valuable skill that came later in their careers was understanding the value of a lead. Boosting form fills can feel invigorating, but without an understanding of what portion of that audience might become a deal later, it’s hard to know if your work is making a difference. Assigning a dollar amount to a lead is a powerful tool to evaluate your performance.

But if you’re joining an organization without mature data practices, leads often have no value assigned. And without institutional knowledge, it can be intimidating to make a guesstimate. But to Hanna, it’s worth starting with a guess to set initial priorities.

Hanna advises using a rough calculation to estimate the value of a metric (with things like average deal value and percent of pipeline that converts), which can help you get an early read.

“Over time, you can start adjusting it higher or lower. But trying to put a value on it and making decisions based on that is the best way to still work in a data-driven way even when you don’t have all the answers.”
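A back-of-the-envelope version of that calculation might look like this. (Every name and figure below is a hypothetical placeholder, not a benchmark; the point, as Hanna says, is having some number to prioritize with.)

```python
def estimated_lead_value(avg_deal_value: float,
                         lead_to_opportunity_rate: float,
                         opportunity_close_rate: float) -> float:
    """Rough dollar value of a single lead.

    Each input is a guess to revisit as real pipeline data accumulates;
    the estimate exists so prioritization can be data-informed today.
    """
    return avg_deal_value * lead_to_opportunity_rate * opportunity_close_rate

# e.g. a $20,000 average deal, 25% of leads becoming opportunities,
# and 20% of those opportunities closing:
value = estimated_lead_value(20_000, 0.25, 0.20)  # roughly $1,000 per lead
```

With even a rough per-lead value in hand, a lift in form fills becomes a revenue conversation instead of a vanity metric, and the estimate can be adjusted up or down as Hanna describes.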

Hanna warns that an estimate is just that, and that staying above board about where the data comes from is key to retaining trust.

“What’s really important in that estimation reporting is that you’re always super clear that you’re estimating—that it could be a lot higher and a lot lower, because if you start making critical budget decisions on it, you can end up in a dangerous situation.”

4. Be the change you want to see

For those who know the clarity that data can bring to the decision-making process, working within a data-poor organization can be challenging. But Hanna says it’s fairly easy to lead others to data advocacy, even if you’re not in a C-suite. “Most people nowadays want to be data-driven,” Hanna says. In her opinion, it doesn’t take a fancy title to turn others into advocates.

“If you are working in an org where you are the only person who is responsible for testing, the best thing you can do is try to spread that knowledge. Get them involved and feel a sense of ownership. Try to make it so that you’re not the only one who cares about A/B testing and being data-driven.”

In order to build stewardship throughout the organization, Hanna’s advice is to share your reasoning: walk colleagues through the potential upside of testing and the risks of not testing. “That can help people who are not so interested in testing to be a bit more curious and to want to understand.”

In Hanna’s experience, your passion can be quite contagious. “Data and testing, it opens up a world that is so fun.”

As for how she does it, Hanna shares her excitement by showing rather than telling. “As soon as you have the test going, share a bit of the data early on,” she says. Rather than being cagey about how inaccurate early test data is, she uses it as a teaching moment.

“All of us who work in the testing space know that data from one day or three days is probably going to be completely wrong, and you can say that also. But show it to that person. Show that ‘this is super early, we have no idea if this is going to be correct or not, and stat sig, but after one day this is what it looks like.’”

And of course, once you run successful tests down the line, Maggie’s experience tells her that there is nothing more powerful than sharing a win with your team.

Artfully navigating the shift

Advocating for data-driven decision-making in intuition-led companies isn’t always easy, but it’s a challenge worth taking on.

As Maggie and Hanna’s experiences show, starting small, whether by learning on the job, prioritizing based on data, making informed estimates, or sharing early insights, can lead to big shifts in mindset.

By fostering curiosity and collaboration, you can help transform your organization’s approach to decision-making, making data a natural and valued part of the process.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post How to Make the Move From Intuition-led to Data-driven appeared first on The Good.

]]>
B2B Research Doesn’t Have To Be So Hard https://thegood.com/insights/b2b-research/ Mon, 03 Feb 2025 19:05:30 +0000 https://thegood.com/?post_type=insights&p=110269 Whether your users are knowledge workers busy with deadlines or car mechanics who rarely leave the garage, connecting with folks who buy and use B2B software products is challenging. “B2B is definitely more complex,” says Hannah Shamji, former psychotherapist and current B2B research specialist. “There are so many stakeholders involved in any type of decision. […]

The post B2B Research Doesn’t Have To Be So Hard appeared first on The Good.

]]>
Whether your users are knowledge workers busy with deadlines or car mechanics who rarely leave the garage, connecting with folks who buy and use B2B software products is challenging.

“B2B is definitely more complex,” says Hannah Shamji, former psychotherapist and current B2B research specialist. “There are so many stakeholders involved in any type of decision. Having to juggle all of those, it’s just much more of a web.”

Tools needed throughout the workday are specialized—whether for accountants, mechanics, or marketers—so your participant pool is smaller by design. But it’s not just the size of the total addressable market that makes the research challenging. It can be hard to compel B2B users and decision-makers to participate in a study.

We’ve heard numerous examples of “hard to compel” users:

  • High-earning executives who can’t spare an hour
  • Managers responsible for a large task load
  • Operators who spend near-zero time in their inbox

“There’s no incentive that you can pay that would buy their time,” says Benson Low, a 20-year veteran of UX Research and a board member of ResearchOps.

In Low’s experience, compelling those who make a B2B purchase decision (often c-suite executives) with financial incentives alone doesn’t work. “They’re not going to care if you’re paying them $1,000. They probably wouldn’t give you their time.”

Researching in the enterprise space adds another layer of complexity to the recruitment process.

“You can almost play the same playbook in small to medium-sized businesses—same research methodology, approach, even recruitment,” says Low. But, in large organizations, the approach will look quite different.

“When it starts getting difficult is when your organization has an account management team supporting specific businesses. That’s where you have to work with that account management team first before you even reach out to customers.”

Overcoming the challenge

We talked to four experts with a combined 80+ years of experience in product about how they circumvent the challenges of B2B research. Although their approaches varied, one theme was clear: their path to meaningful B2B research has been through relationships.

“It’s about relationships, ultimately.”

Read on to hear how four research pros overcome challenges in B2B research.

1. Learn the problem space before you talk to customers

Because actual users will likely be hard to connect with, using their time to learn the basic details of their role or industry would be a waste.

As such, Low recommends doing internal research before even talking to customers.

“Know the product and how it's positioned in the market. Understand the business. Then you can design the right capabilities, right research sequencing, etc.”

Our experts mentioned several methods for this discovery:

  • Listening to sales calls
  • Interviewing CX staff
  • Reviewing service blueprints
  • Scouring communities to learn about the users your product serves

Internal research gives you the foundation you need to interpret customer feedback meaningfully and ensures that you don’t waste precious customer interviews just coming up to speed with their lingo.

2. Relationships, relationships, relationships

Marketing, sales, and CX have a wealth of knowledge and connections that can ease the research process.

Paul Stevens, a UX leader who’s been in digital design long enough to have A/B tested print mailers, heralds the power of a relationship with your sales team.

“Really good salespeople will get you into a customer. They’ve got relationships; they can get you in. They can make that all-important introduction,” says Stevens. “You need to be best friends with your sales team.”

But salespeople won’t be ready to give up their contacts without some established trust, so our experts emphasized building trust and connection rather than focusing on information extraction.

To earn their trust, show them you’re aligned with their goals. Understand their OKRs so you can frame your initiatives in a way that is mutually beneficial. “You essentially have to research the stakeholder in order to get them to buy in on the research,” says Shamji.

3. Use team members as research proxies

Beyond just making intros, your sales team can actually be a partner in research, working prototypes and early feedback into their sales calls in a lightweight form of “testing.”

René Bastijans, who describes himself as a “recovering Product Manager,” is currently a lead researcher at a growth-stage startup. As a research team of one in a company of over 100 employees, he’s found ways to loop his colleagues into the research process.

His sales team is trained to lightly survey prospects during sales calls and report back to the wider team. This creates a healthy feedback loop that keeps everyone abreast of evolving user needs.

“We’ve trained our sales team to ask for specific data and enter it into Salesforce. Researchers and the product team have access to these data, and therefore, sales has allowed us to keep a pretty good pulse on the market.”

But it’s not just a few questions here and there that sales can support with. Bastijans works with the sales team to get quick feedback on product updates in a lightweight form of testing.

“We give them a couple of slides and they slot them into their conversation when they speak with prospects to get input from real people. That’s been working really well for us.”

Stevens advocates for relying on your team to conduct field research where you can’t.

“If you’re in a global organization, you want to do research in a country that you’re not in, and you can’t fly around the world to do it, you can put an education program together. Find like-minded people within the org. In my experience, there have been marketers in each country and they are usually aligned with design and research.”

Sending them out with a camera, a notepad, and a directive to report back on what they see has allowed Stevens to extend his reach beyond global borders. And he says teams love to participate. “I’ve never had anybody snicker. I’ve had them say ‘that’s fantastic; how can I get involved?’”

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

4. Stand up a Customer Advisory Board

Because B2B customers are impossible to engage en masse, one way to circumvent the challenge of recruitment is to create a “board” of customers that gives regular feedback in exchange for value. As Low describes, the relationship should benefit both parties.

“There's a benefit for them in that we provide them with training, support, discounts, or access to new features. On the flip side, we expect them to give us feedback—how their business has been going and what their needs are.”

Sometimes called “Sales Advisory Boards” or “Customer Advisory Boards,” these can start as a partner to either sales or product, but their insights will support various disciplines within the organization.

Stevens has had success in previous roles with what he calls “Customer Days,” in which the Customer Advisory Board spends an entire day on-site, rotating between different practice groups to provide insight to various business functions. “It’s not a full day of UX testing, but researchers will have a slot to talk with them.” It’s a great way to regularly solicit the perspectives of your customers.

5. Create an expert-level playbook for B2B interviews

When you do get a chance to interview B2B clients and users, Low emphasizes the need to take it seriously: put senior staff on the task and do adequate preparation.

“Make sure you pay the utmost respect to those hard-to-recruit participants. You want to make the best use of their time and be able to ask questions effectively while being able to protect the organization's brand, reputation, and business.”

Low recommends an extensive preparation process for B2B interviews, including researching the participant’s background, reviewing their account details, and chatting with their account manager, which Low says may reveal any potentially sensitive discussion points. “You just don't want anything to impact the business unintentionally.”

But preparation alone doesn’t guarantee an effective conversation—you have to have experience as well. Low recommends that only very experienced moderators conduct conversations with existing clients. “You don’t want them to be in a power dynamic issue. Then they can’t execute the research effectively.”

Ineffective moderation risks producing research that can’t be acted on and leaving the participant feeling their time was wasted.

“Especially considering how small this panel likely is and how small the population is. You likely don't have too many enterprise customers, and you might want to talk to them again next year. So, build a rapport and make sure that you are able to access them. If not yourself, then your peers—other researchers, designers, or product managers on your team.”

6. Use automations to maintain trust (and stay sane)

If you’ve been introduced to customers via your sales team, it can feel like you’ve won a golden ticket. But our experts remind us that trust needs to be maintained, so they’ve built workflows that foster trust and transparency.

Bastijans uses Zapier automations that push updates at critical customer touchpoints:

  • When a prospect books an interview
  • When interviews take place

Automations Slack the sales team when a conversation is booked or takes place, and auto-magically import CRM data and update a Notion page. “Zapier has been a really huge help for me just automating mundane tasks that I would have to otherwise do manually.” For Bastijans’ research team of one, he’s been able to ramp up his output without upping the workload.
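A touchpoint alert like Bastijans describes doesn’t require any particular tool; at its core it is just formatting a message when an event fires. The sketch below is hypothetical: the message wording and fields are invented for illustration, not his actual Zapier setup.

```python
import json

def build_booking_alert(prospect_name, company, interview_time):
    """Format a Slack-style message announcing a booked interview.

    Returns the JSON payload an incoming webhook would accept;
    actually sending it would be one HTTP POST to your (hypothetical)
    webhook URL, or a step in a no-code tool like Zapier.
    """
    text = (f":calendar: Research interview booked with {prospect_name} "
            f"({company}) on {interview_time}.")
    return json.dumps({"text": text})

payload = build_booking_alert("Jane Doe", "Acme Corp", "2025-03-04 10:00")
print(payload)
```

The point is the workflow, not the code: any event (interview booked, interview completed) that automatically notifies the account owner keeps sales in the loop without manual effort.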

7. Hire an outsider to play a neutral third party

Depending on your research objectives, it can be hard to solicit honest feedback from your recruits. To circumvent this issue, Low recommends occasionally using outside firms to act as a neutral third party.

“If you can’t do the research because of baggage you have representing your company, you might do it in a roundabout way by getting a third party involved. This way, an independent researcher, consultancy, or research firm might do this centrally, saying ‘we’re just doing industry research,’ they can interview all sorts of customers without damaging anything.”

Agencies can be especially useful in projects that involve talking to the customers of your competitors, says Low. While participants might be hesitant to give honest feedback to a direct competitor of a company they’ve been loyal to, agencies can frame their work more neutrally to enable participants to give candid feedback.

“Essentially, you’re trying to find a Switzerland. Someone that is unbiased with no interconnection that could cloud the insights that you want to get out. So you get, from a data perspective, cleaner insights.”

Plus, agencies can often work much faster, says Low. “The difficult B2B customers that you can’t get to, or have constraints or limitations to access, an independent consultancy might do much quicker.”

8. Make sure the insights stick

While it’s one thing to find the workflows and relationships that enable excellent research, the endeavor is fruitless unless you know how to stick the landing, says Shamji. “It's great to have all the data, but are they going to action on it? Is it going to help make decisions?”

With many teams globally distributed and an average ratio of 1 researcher per 50 developers, the average researcher is, as Stevens puts it, “a very, very, very small fish in a very big pond.”

As a result, our experts say that visibility is key.

To build visibility and buy-in, Stevens suggests a healthy dose of self-promotion: of yourself, the importance of your role, and the outcomes that your research enables.

“As soon as you've got any results, you have to publicize it as much as you can, but especially the right eyes. Depending on the relationships that you have in the business, how comfortable you are, and what the C-Suite is like, there's nothing wrong with dropping a Slack message to the CEO.”

Bastijans solicits buy-in and builds visibility through what he calls “Learning Lunches,” a 25-minute presentation with Q&A, designed to circulate the latest research and keep the team rowing in the same direction.

And for research teams in their infancy, Low says it’s especially important to advocate for the importance of research within your organization. “When you’re establishing a research team, people don’t know what we do.” Rituals like Slack memos, Learning Lunches, and direct conversations can go a long way toward building user-centered thinking within your organization.

The importance of B2B research

Despite the numerous challenges of B2B research, our experts assured us that it is workable.

The sales journey is complex, the personas are many, and the execution needs to be handled delicately. It’s why Shamji sees B2B contexts as the best application for UX research.

“B2B is just more of an obvious place for research,” says Shamji. “All of the touch points like customer success and sales—they're all seeing different parts of the process, so it just kind of warrants a researcher's 360 view of what's going on.”

It’s also why Low says AI isn’t coming for the enterprise researcher’s job anytime soon. Today’s AI interventions just aren’t prepared for the task. “I actually don’t think AI is going to take over our jobs.”

]]>
7 Expert Tips On How Retail Brands Can Launch A DTC Ecommerce Channel https://thegood.com/insights/omnichannel/ Fri, 10 Jan 2025 20:45:19 +0000 https://thegood.com/?post_type=insights&p=110208 Emarketer predicts that direct-to-consumer sales will peak at 14.9% of all ecommerce sales in 2025, yet many brands still rely solely on retail sales as a business strategy. New channels let you tap into additional audiences and volume, spurring growth and helping your business through harder times. After all, there’s a reason our parents always […]

The post 7 Expert Tips On How Retail Brands Can Launch A DTC Ecommerce Channel appeared first on The Good.

]]>
Emarketer predicts that direct-to-consumer sales will peak at 14.9% of all ecommerce sales in 2025, yet many brands still rely solely on retail sales as a business strategy.

New channels let you tap into additional audiences and volume, spurring growth and helping your business through harder times. After all, there’s a reason our parents always told us not to put all of our eggs in one basket.

We’re big fans of an omnichannel approach, so we talked to a group of experts on the topic to get their thoughts on why and how to navigate it like a pro.

The case for DTC

For many retail products, adding a DTC channel is a surefire way to increase margins and grow your business.

However, adding a DTC channel isn’t just a way to grow revenue and margin. The upsides also include better relationships with your customers and greater insights to help you grow your brand.

Build stronger relationships with your customers

Dustin Kochis, VP of Sales at Ka’Chava, sees DTC as a powerful tool for connecting with customers. “We view DTC as a way to build stronger relationships with our customers. It’s allowed us to educate them on our mission and products in ways that retail can’t.”

Building these relationships via DTC is an approach that even legacy brands can leverage. Take Andy Wang of KC HiLiTES: when Wang acquired the decades-old company, he knew he wanted to take them digital. But it wasn’t just the margins he was after.

Wang had a vision for a new brand identity that he wasn’t satisfied to leave in the hands of retail partners to represent. To emphasize the brand’s quality and heritage, he built an image-driven website that not only sells products but tells the brand story better than retail alone could.

“We went direct to consumer not only because of the revenue and margin, but because it gives us the ability to control our own destiny. If you can build a quality relationship with your customer, that becomes a moat for your business.”

Own your data

While the benefits of DTC for education and relationships are laudable, there’s a third benefit of having an ecommerce channel that is arguably just as important: owning your data.

“Having a direct customer channel is everything because it allows you to learn about your customers,” says Sam Selby, Co-founder, COO & President at Used Mobile Homes USA.

Wang agrees. “People are too fixated on margin and revenue. There’s a treasure chest of information when users touch your website or interact with your content.”

Once Wang built his DTC arm, the benefits of owned data began to fuel his business in new ways.

“When we went direct to consumer, we got all of the emails and the psychographic details. It’s a huge help in understanding who your customers are. When you go through distribution, that gets lost,” Wang told us. “That’s another really important thing that has become an asset. Using emails, analytics, and attribution models, that is where the gold is.”

The use cases for good data go both ways. When Myra Ryder, Director of Brand Strategy at Ka’Chava, brought on a new VP of sales to help them go into retail, the existing data from DTC channels was critical in helping them to form a go-to-market strategy. “You really need the data,” says Ryder.

7 tips for thoughtfully entering DTC

If you’re sold on its benefits, the next logical step is to do some digging into how to make the transition from retail to DTC painless. Here are seven tips from the experts to help you make it as smooth as possible.

1. Define your DTC value proposition

Launching a DTC channel isn’t just about making your products available. It’s about understanding what unique value you’ll offer customers through this new channel.

According to Selby, to succeed, you need to articulate why someone should buy directly from you versus through retail. “You have to figure out what your unique value is because you’re not going to out-Amazon Amazon, and you’re not going to out-Target Target.”

2. Align your DTC and retail strategies

One common concern for retail-driven brands is how a DTC channel might impact existing retail partnerships.

Dustin Kochis, VP of Retail Strategy and Sales at Ka’Chava, emphasizes the importance of alignment. “Your DTC and retail strategies should complement each other, not compete. We’ve found success by ensuring that our retail presence reinforces the brand while DTC drives education and deeper customer engagement.”

For example, offering exclusive products or bundles online can differentiate your DTC channel without undercutting retail partners on price.

Selby shares, “To capture incremental customers without cannibalizing between channels, make sure you’re not just duplicating what your retail partners already offer.”

3. Establish a clear DTC marketing plan

Most business leaders already know the value of a marketing plan, but when opening a new channel, it becomes even more important.

Myra and Dustin from Ka’Chava say, “Whether it’s through partnerships, Instagram, or other ad campaigns, often there’s a familiarity with the brand once the consumer sees the product in-store. Leverage those educational, top-of-funnel touchpoints to stay ahead and top of mind.”

KC HiLiTES beefed up its digital marketing plans to generate a fast margin when it launched DTC. “We used a series of growth hacks to generate that margin. Then we had money to dump into marketing and could control the brand perception at scale,” said Andy.

4. Know what value your retail partners bring

Selby emphasized the importance of knowing the value of retail partners, measuring their value, and finding ways to optimize the channel so you can make the most of the DTC transition. “You really need to understand how retail is different because that will help you take advantage of DTC most effectively.”

Wang finds insight in the pricing models of retail partners. “If the price is the same anywhere you can find it on the web, at the end of the day, that distributor or dealer has to add some value beyond price. If your brand is powerful enough, it makes it so that that dealer has to add some strategic value.” That could come in extra exposure, a new customer base, or strategic learnings from their other partnerships.

“We only work with the partners that add value to our company as a whole,” Wang concluded.

5. Invest in expertise

To thrive in DTC, your website needs to offer a seamless and engaging experience. Myra Ryder, Director of Brand Strategy at Ka’Chava, highlights the role of storytelling and user-centric design: “Customers expect more than just a product listing online. They want an experience that reflects your brand’s story and values. Your website should inspire trust, simplify the buying process, and communicate your unique benefits.”

She adds, “You need to find specialists and give it 110%… don’t divert people and time from other teams.” For Myra, that meant partnering with specialists like Dustin to own retail entry and The Good to optimize their DTC website.

6. Prepare for omnichannel success

Moving to an omnichannel approach comes with logistical and operational complexities. Andy Wang advises brands to prepare for changes in demand and fulfillment.

“When you add a DTC channel, you’re not just adding revenue—you’re adding complexity. From inventory management to customer service, everything needs to scale to meet new demands.”

This can include integrating inventory systems, ensuring consistent pricing, and aligning marketing efforts across channels.

And while channel conflict can set you back, there are plenty of ways to combat it. Even tweaks to the actual product can make your omnichannel strategy more successful. When Ka’Chava entered retail, for example, they changed the packaging to reflect what DTC customers learned about via the website and shrunk the retail package to land at a price point that was more palatable for new customers.

7. Don’t fret cannibalization

Kochis says there is “no magic formula” for predicting how a new channel will impact another channel’s sales, and the endeavor is short-sighted anyway.

When brought on to help Ka’Chava enter retail, he warned the team about the possibility of seeing an initial dip in ecommerce sales, but he said not to worry. “It’s a long play,” says Kochis.

Unlock a DTC strategy like the experts

Launching DTC is not a set-it-and-forget-it endeavor. As we established, data is your biggest ally in DTC. Track everything—from website traffic and conversion rates to customer feedback. Use these insights to continually refine your approach.

Expanding into DTC can unlock new opportunities for growth and customer engagement, but it requires careful planning and execution. By defining your value proposition, aligning with retail strategies, optimizing the digital experience, and adapting to omnichannel demands, you can set your brand up for success.

Ready to take the leap? We have many resources on DTC ecommerce from 15+ years of optimizing for brands like yours. Check them out here.

]]>
Which Rapid Testing Method Should I Use? https://thegood.com/insights/rapid-experimentation/ Wed, 18 Dec 2024 16:00:00 +0000 https://thegood.com/?post_type=insights&p=110108 “Research” often means “identify problems to solve.” But it can also mean “verify that proposed solutions actually solve problems.” The most buzzy way to get that validation is via A/B testing. But many don’t have the budget, appetite, time, or team to even get started. Enter: Rapid testing. Like A/B testing, rapid testing helps you […]

The post Which Rapid Testing Method Should I Use? appeared first on The Good.

]]>
“Research” often means “identify problems to solve.” But it can also mean “verify that proposed solutions actually solve problems.”

The most buzzy way to get that validation is via A/B testing. But many don’t have the budget, appetite, time, or team to even get started.

Enter: Rapid testing.

Like A/B testing, rapid testing helps you understand if your solutions are actually working.

Unlike A/B testing, rapid tests are fast, done with small sample sizes, and offer a level of qualitative insight not afforded via experimentation alone.

Rapid testing is no substitute for A/B testing, but it has a ton of applications:

  • Get a gut check when true A/B testing is not a viable option
  • Understand where new features might be confusing or unclear
  • Evaluate time-to-success and pass/fail rates of task flows
  • Narrow down your options from many to few when deciding what messages to test in the market

Think of it as your canary in the coal mine. A utility to mitigate the risk of feature flop.

In this article, we’ll explore what rapid experimentation is, its benefits, the types of rapid tests you can run, and when to use each. If you’re looking to de-risk your decisions and innovate faster, keep reading for a framework to get you started.

What is rapid experimentation?

Rapid experimentation, or rapid testing, refers to a collection of tactics we use to get quick feedback for operational decisions. This type of testing helps teams make agile decisions around design, copy, and other site elements.

Rapid experimentation is a lean approach to validating ideas, designs, or features in a quick, iterative manner. It focuses on qualitative insights and directional data.

Instead of waiting weeks for results, you can gather actionable insights in days or even hours. This method enables teams to:

  • Understand whether users grasp a new concept
  • Identify potential usability issues
  • Test multiple variations of an idea before committing to development

In short, rapid experimentation helps you answer the question: “Am I moving in the right direction?”

Why do teams use rapid experimentation?

Rapid experimentation delivers value in multiple ways, particularly for SaaS teams that need to move fast and make data-informed decisions.

While rapid testing relies on less rigorously screened participants and smaller sample sizes than traditional A/B testing, the tradeoff is dramatically faster results. Rapid testing delivers value by:

  • Speeding up results: Unlike A/B testing, which can take weeks to produce reliable results, rapid tests can be designed, executed, and analyzed in days. This speed allows teams to iterate quickly.
  • Limiting the politics of A/B testing: Rapid test data, rather than executive opinion, informs which A/B tests get run.
  • Narrowing down many ideas: When you need to identify the best few ideas out of many, rapid testing is an efficient way to do so.
  • Lowering costs: Because rapid tests require smaller sample sizes and fewer resources, they’re accessible to teams with limited budgets.
  • Identifying problems early: Rapid experimentation helps uncover potential usability issues or misunderstandings before they’re baked into a feature or product. This can save significant rework down the line.
  • Increasing qualitative depth: Where A/B testing provides numbers, rapid tests provide context. Understanding the “why” behind user behavior can inform better solutions.
  • De-risking decisions: By testing ideas early and often, teams can reduce the risk of releasing features or products that fail to meet user needs.

What are the types of rapid tests?

Rapid experimentation is not a one-size-fits-all process. Different scenarios call for different types of tests.

Here are some common methods:

Task Completion Analysis

Task completion analysis allows us to quickly test new ideas to understand time-on-task and success rates.

Typically, users are asked to complete a specific task, such as signing up for a trial or finding a key feature. Teams observe where users struggle and measure success rates, time-to-completion, and drop-off points.
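Scoring a task completion study is simple arithmetic over the session log. A sketch using invented session data:

```python
from statistics import median

def summarize_task(sessions):
    """Success rate, median time-on-task, and drop-off points.

    Each session is (completed: bool, seconds: float, dropped_at: str|None).
    """
    completed = [s for s in sessions if s[0]]
    success_rate = len(completed) / len(sessions)
    median_time = median(s[1] for s in completed) if completed else None
    drop_offs = {}
    for done, _, step in sessions:
        if not done and step:
            drop_offs[step] = drop_offs.get(step, 0) + 1
    return success_rate, median_time, drop_offs

sessions = [
    (True, 42.0, None), (True, 55.0, None), (False, 30.0, "pricing step"),
    (True, 61.0, None), (False, 20.0, "pricing step"),
]
rate, med, drops = summarize_task(sessions)
print(f"{rate:.0%} success, median {med:.0f}s, drop-offs: {drops}")
```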

First-Click Tests

First-click tests evaluate whether users can intuitively find the primary action or information on a page. Participants are given a task and asked to click where they think they should start. This is ideal for evaluating navigation or CTA placement.

Tree Test

Tree testing is a usability technique that helps you understand how users navigate through your website or app’s structure. It focuses on how well people can find information within a system.

By stripping away visual elements and focusing solely on the structure (the “tree”), you can identify whether the content organization makes sense or if users are getting lost.

Sentiment Analysis

Sentiment analysis lets us preview how users might react to a treatment by evaluating their emotions and opinions about a product or experience. Typically, feedback is collected through surveys, reviews, or user interviews, and responses are analyzed as positive, neutral, or negative. Teams use this data to uncover pain points, gauge satisfaction, and prioritize improvements.
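At its simplest, classifying responses as positive, neutral, or negative can be done with a small keyword lexicon. The sketch below is illustrative only: the lexicon and responses are made up, and real projects would typically use a trained model or a dedicated sentiment library instead.

```python
# Naive lexicon-based sentiment tally for open-ended survey responses.
# The word lists and responses are hypothetical examples.
from collections import Counter

POSITIVE = {"love", "easy", "clear", "great", "helpful"}
NEGATIVE = {"confusing", "slow", "hate", "hard", "cluttered"}

def classify(response: str) -> str:
    words = set(response.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

responses = [
    "I love how easy the signup was",
    "The pricing page felt confusing and cluttered",
    "It works, I guess",
]

# Tally sentiment across all responses
print(Counter(classify(r) for r in responses))
```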

5-Second Tests

5-second tests assess a user’s immediate impression of a design or message. They show participants an interface or design for five seconds and then ask them what they remember or understand. This is great for identifying which value propositions or headlines are most memorable.

Design Surveys

Design surveys collect qualitative feedback on wireframes or mockups. They can help validate designs before investing in development to implement them on your site.

Preference Tests

This test involves showing users two or more design variations and asking which they prefer and why. It’s perfect for narrowing down visual or messaging options before launching a formal test.
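Because preference tests use small samples, it helps to check whether a split is likely real or just noise. A standard way to do this is an exact two-sided binomial test against a 50/50 null; the counts below are hypothetical:

```python
# Exact two-sided binomial test: how surprising is this preference split
# if participants actually had no preference (a fair 50/50 coin)?
from math import comb

def binomial_two_sided_p(successes: int, n: int) -> float:
    """P-value for a split at least as lopsided as the one observed."""
    observed = abs(successes - n / 2)
    return sum(
        comb(n, k) * 0.5**n
        for k in range(n + 1)
        if abs(k - n / 2) >= observed
    )

# e.g. 39 of 50 participants preferred variant A (78%)
p = binomial_two_sided_p(39, 50)
print(f"p = {p:.4f}")
```

A 78% preference among 50 participants is very unlikely under a fair coin; with rapid-test sample sizes, though, treat the result as directional evidence rather than proof.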

Card Sorting

Card sorting is a research technique used to understand how users organize and categorize information. You present participants with a set of cards, each representing a piece of content or functionality, and ask them to group these cards in a way that makes sense to them.

This process reveals how people naturally think about and structure information. It lets you uncover insights into how users might intuitively organize menu items, product categories, or any other structured content on your site. Ultimately, this helps you design a website or app that aligns with their expectations.
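A common way to summarize open card-sort results is a co-occurrence count: how often each pair of cards was grouped together across participants. The sketch below assumes hypothetical card names and sort data:

```python
# Count how often each pair of cards was grouped together across
# participants in an open card sort. All sort data is hypothetical.
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a set of cards.
sorts = [
    [{"Pricing", "Plans"}, {"Docs", "Tutorials", "API Reference"}],
    [{"Pricing", "Plans", "Docs"}, {"Tutorials", "API Reference"}],
    [{"Pricing", "Plans"}, {"Docs", "API Reference"}, {"Tutorials"}],
]

pair_counts = Counter()
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together most often suggest categories users expect.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```

Pairs with high counts (here, "Pricing" and "Plans" in all three sorts) are strong candidates to live under the same menu or category.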

These are just eight of the many types of rapid experimentation.

How to choose the right method for your scenario

With so many options, it can be challenging to know which rapid testing method to use in a given situation. Each method has strengths and weaknesses, and choosing the wrong one can result in wasted effort or inconclusive results.

If you’re interested in getting started with rapid testing but aren’t sure which method is right for your scenario, we devised a simple way to narrow down the options.

A framework developed by The Good for determining which rapid testing method to use.

The decision tree walks you through a series of questions to identify which rapid testing method best suits your needs.

A few caveats:

  • There are more methods than those covered here; this is just a sample
  • Test types can be combined in some instances
  • There are always exceptions to the rule

There’s no substitute for experience, but if you’re just getting started with this kind of research, I hope this gives you a head start.

Using this framework ensures you select the method best suited to your goals, saving time and effort while delivering more meaningful results.

The Telegraph used rapid testing to increase registrations

So, what might rapid testing look like in action?

During a Digital Experience Optimization Program™, we worked with The Telegraph to improve their paywall experience as part of their goal to reach a million subscribers.

In the first phase of any DXO Program™, our team conducted a thorough audit of the end-to-end customer experience to uncover the biggest barriers and opportunities for conversion. Armed with a research plan and a strategic roadmap, we moved on to the next phase: taking hypothesized improvements and testing them with The Telegraph’s ideal audience to confirm they would move the needle before investing in implementation.

Thanks to rapid testing, we were able to design, test, and decide on the first phase of implementations in a matter of days.

One rapid test we ran for The Telegraph assessed site banner color and layout. When shown two banner variants, visitors had a clear preference — 78% of participants found content easier to read against a yellow background. Recall tests also showed visitors were more likely to remember key details in this variant as well, further supporting it as the preferable option.

Two banner variants we ran for The Telegraph; the yellow was the winner.

We ran over 20 similar tests to assess cookie notification placement and design, desktop and mobile paywall presentation, brand headlines, offer messaging, and more. Each test leveraged the method relevant to the hypothesis we hoped to validate with experimentation. We chose the testing methodology using a similar thought process to the rapid testing decision tree framework shared earlier.

And the best part? We did this in just a few weeks, something that would have been impossible to accomplish via A/B testing due to resource constraints. David Humber, Head of Conversion at The Telegraph, also credits the efficiency and effectiveness of the rapid tests to having a team of external experts come in. “You do less spinning of the wheels because you’re having somebody come in that’s got this additional expertise as their bread and butter.”

Overall, identifying small wins in numerous places added up to a significant impact for The Telegraph in both improved metrics and an understanding of the customer.

Upskill your team with external support

While rapid experimentation is a powerful tool, getting started can feel overwhelming. How do you design effective tests? What metrics should you measure? And how do you ensure your insights lead to meaningful improvements?

This is where The Good can help. Our team specializes in UX research and digital experience optimization for SaaS companies. From designing and executing rapid tests to implementing insights, we’re here to guide you every step of the way. With our proven frameworks and expertise, you can:

  • Validate ideas faster and more effectively
  • Reduce the risk of feature flops
  • Build a culture of experimentation within your team

Ready to get started? Contact us to learn how we can help you make better decisions faster.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post Which Rapid Testing Method Should I Use? appeared first on The Good.

]]>