experimentation Archives - The Good

Fritz O’Connor Stays User-Centered and Leads with Data During Uncertain Times
https://thegood.com/insights/fritz-oconnor/ | Thu, 04 Sep 2025

Building operational excellence in marketing isn’t just about implementing the latest tools or following industry best practices. It requires a deep understanding of customers, systematic thinking, and the ability to lead teams through uncertainty with data as your guide.

Fritz O’Connor, former VP of Marketing at Ironman 4×4 America, exemplifies this approach. With over two decades of experience spanning manufacturing, sales, and marketing leadership, Fritz has developed a methodology for building high-performing organizations that deliver results consistently, even in challenging circumstances.

A marketing leader built for manufacturing

Fritz’s career journey reads like a masterclass in understanding customers across different industries. Starting in the printing and paper industry, he cut his teeth in structured sales training programs that taught him the fundamentals of professional sales and business operations.

“I’ve spent my entire career in sales and marketing roles. Almost exclusively in the manufacturing sector for companies that make stuff,” Fritz explains. This foundation in manufacturing would prove invaluable throughout his career, giving him deep insight into the complexity of bringing physical products to market.

His two-decade tenure at GE further refined his skills across diverse business environments. “We always used to say we can work in any industry, anywhere in the world, and still get paid by the same company,” he recalls. This experience working across plastics, appliances, and GE Corporate gave him a unique perspective on how great companies operate at scale.

But it was during his time at GE Corporate that Fritz discovered what would become his career-defining framework: differential value proposition (DVP). Working in a marketing consulting role with virtually every business in GE’s global portfolio, he helped launch this customer-centric approach to messaging and positioning throughout the organization.

This systematic approach to understanding and serving customers became foundational to Fritz’s ongoing success.

Implementing systems and frameworks that take teams from features to solutions

Originally coined by the founder of Valkre Solutions, Jerry Alderman, the DVP framework transforms how companies think about customer messaging and competitive positioning. Fritz became a master at implementing this methodology across diverse organizations.

“What are you offering, be it a product or service, that is better than the customer’s next best alternative?” Fritz explains. The question might seem simple, but the implications are profound. Rather than competing on features or price, DVP focuses on solving customer problems in ways that competitors simply cannot match.

The challenge, as Fritz learned during his GE implementation, is that DVP represents a fundamental shift in thinking. "Every business, product, or service has a value proposition, but not every value proposition is differential. So many companies have the same value proposition. The white space is that differential part."

"It's about switching thinking from a feature to a benefit. For example, a blue appliance is not a differential value proposition. It's a feature."

Fritz teaches teams to make this shift by leading with problems and solutions.

"It's how it makes the consumer or customer's life better, how it solves that problem. You have to identify what the problem is. You have to articulate how you can fix that problem in a different way, better than anybody else."

This shift from features to solutions requires teams to understand their customers' actual problems, not just their stated needs.

For leaders, this translates directly into more effective product messaging, clearer value propositions, and ultimately, higher conversion rates.

Overcoming the "this is how we've always done it" challenge

One of Fritz's biggest career wins (and ongoing challenges) centers around implementing the Differential Value Proposition (DVP) methodology across organizations. The implementation at GE became both a success story and a learning experience in change management.

"As you can imagine, anytime you try and launch a new process in a company the size of GE, you can be met with resistance. Especially when you're coming out of corporate."

This resistance taught Fritz a crucial lesson about implementing change: "I don't view that as a challenge or a stumbling block, but as a fantastic and wonderful opportunity because when you flip those people, they become your biggest proponents."

His approach centers on listening first, then demonstrating value in the stakeholder's own language. "It's a listening journey. You've gotta understand what the challenges are of the people with whom you're working, whether it's an external customer or an internal customer."

"Proactively listen and walk in the shoes of the people I'm working with. When I'm trying to introduce something as significant as DVP or other business tools."

This listening approach helps identify the real challenges and resistance points, making it possible to address them effectively.

The foundation: accountability, responsibility, and challenge

But having the right frameworks isn't enough. Fritz learned that execution depends on creating the right team culture. He is quick to credit his teams as the backbone of his successful projects, and one of the ways he supports them is with clear organizational principles.

"I have a few underlying business principles that I've gained along the way that are the foundational threads for me," Fritz explains. "One is, any team I work with or works for me, my job is to make them as successful as possible."

This people-first approach manifests through three guiding principles:

  • Accountability: Holding yourself and your team responsible for deliverables and outcomes
  • Responsibility: Taking ownership of significant business challenges
  • Challenge: Embracing difficult problems that create meaningful business impact

"The way I do that is through three guiding principles, which are accountability, responsibility, and challenge," Fritz notes. "I want to be entrusted with significant responsibility that is helping to solve a significant business challenge."

These principles translate into a simple but powerful operational mantra: deliver on time, complete with excellence.

"I know those all sound like buzzwords, but they're not meant to be. And we don't treat them as such. We treat them as very simple guiding principles to keep us focused."

Putting it all together at Ironman 4x4

When Fritz joined Ironman 4x4 America, he found the perfect opportunity to apply all of these frameworks.

Ironman 4x4 is a global company that sells off-road parts and accessories for 4x4 vehicles (lift kits, suspension parts, bumpers, etc.). They have been around since the 1950s, but were new to the United States, so Fritz had the opportunity to find new ways to market their complex "fitment" products, or parts that must work with specific vehicle makes and models. This complexity creates both technical and marketing challenges that Fritz's team had to solve systematically.

His sales background gave him an invaluable perspective on marketing effectiveness. "If you spend any time in sales, that means you're around customers, whether those are B2B or B2C customers. And you learn what's important to them."

This customer proximity taught him the critical principle of "show me, don't tell me." Rather than relying on feature lists or industry awards, effective marketing demonstrates value through customer experiences and outcomes.

"We always, in both sales and marketing, it's easy to get into the trap of just talking, talking, talking, describing stuff, talking about features and benefits. Talking about the industry's best. Nobody cares about your industry. They care about how your product or service is going to impact them."

The key to marketing complex products, Fritz knew, is understanding how customers think about their problems. Rather than leading with technical specifications, the focus should be on the customer's end goal and the emotional drivers behind their purchase decisions.

Fritz emphasizes the importance of demonstrating value rather than just describing it: "Really, visual storytelling, video storytelling, placing the customer in the scene so they understand your value. That ability comes from firsthand experience of seeing that happen in the sales arena."

A data-driven website replatforming

His POV shaped everything he was involved in at Ironman 4x4 America, from new product introduction processes to website optimization. Fritz implemented structured new product integration toll gates with clear deliverables and cross-functional accountability, ensuring every product launch was executed with precision across creative, digital, and channel marketing.

His customer-centered thinking and frameworks proved essential when his team tackled a complex website migration from an outdated platform to Shopify. The project was based on their understanding that a website change was necessary to better serve their audience and increase ecommerce sales.

Working with The Good on a DXO Program™, the Ironman 4x4 team executed the redesign and replatforming with data-driven methodology. Rather than relying on opinions about what the site should look like, they embraced rapid prototyping and continuous testing.

"Any decision made without data is just an opinion, right?" Fritz notes, referencing CEO Luke Schnacke's philosophy.

"We try to be very data-driven, which is why it was so important for us to work with The Good, to get that data and share it with the team managing the website replatforming so that they were making data-driven decisions on design and functionality."

They didn’t wait for a “perfect website” to figure out what customers wanted. They tested and got feedback throughout the entire process to make sure they were developing the right ideas.

"I realized we were never going to do it perfectly," Fritz recalls. The team was getting bogged down in opinions about checkout processes, product customizers, and overall site design. "We could end up using half our development budget on building something that doesn't perform."

"Ultimately, we agreed to launch and then test the heck out of it. We didn't want to overburden the development pipeline with projects that don't have a financial impact."

This represents a fundamental shift in thinking. They went from trying to build the perfect site to building a testable foundation for continuous improvement.

The beauty of working with The Good in this situation, Fritz explains, was "the rapid prototyping, the test and learn. We could very quickly get feedback and iterate and then test and learn again."

Multiplying results through partnership

Leveraging an external partnership accelerated progress beyond what internal resources could achieve alone and held the team accountable to the frameworks and goals of staying user-centered and data-driven.

"If you're not an expert, I would recommend doing a website project with a company like The Good. It wasn't a cost, it was an investment," Fritz emphasizes. "And I think that Ironman 4x4 is the beneficiary of the investment that they made with The Good as they migrated over to Shopify and learned about what customers would like."

The partnership enabled intentional, studied testing with proper dependencies and measurable results tracking.

"That whole test and learn methodology is done in a very structured, deliberate way. Making changes in a waterfall, with the proper dependencies articulated, and then tracking the measurable benefits of changes, and then tweaking accordingly from there."

This approach breeds confidence because it's entirely data-driven, removing guesswork from critical business decisions.

Lessons for marketing and sales leaders

For marketing and sales leaders looking to build similar operational excellence, Fritz's approach provides a roadmap: start with principles, understand your customers deeply, make decisions based on data, and never underestimate the power of strategic partnerships to unlock potential.

Start with principles, not tactics

Before implementing any marketing or optimization program, establish clear guiding principles. Fritz's framework of accountability, responsibility, and challenge provided a foundation that influenced every decision and created lasting organizational change.

Understand your customer's next best alternative

Move beyond feature-benefit messaging to understand what your customers would do if your solution didn't exist. This "next best alternative" thinking is the foundation of truly differential value propositions.

Convert resistance through understanding

When facing organizational resistance to change, focus on understanding stakeholder concerns rather than pushing solutions. Meet people where they are and demonstrate value in their language.

Embrace data-driven decision making

Resist the temptation to rely on opinions or best practices. Instead, create structured testing methodologies that let customer behavior guide optimization decisions.

Invest in external partnerships strategically

Recognize when external expertise can accelerate progress. The right partnerships provide capabilities and perspectives that internal teams may not possess, ultimately delivering better results faster.

Starting an optimization journey

Fritz's approach to building and scaling teams, including Ironman 4x4's US marketing operations, demonstrates how principled leadership, customer-centric thinking, and strategic partnerships can create sustainable competitive advantages.

"There's no obstacle too big that can't be overcome with data and optimization, right?" Fritz states emphatically. "The whole point of being data-driven and optimizing is to get time back and to become more efficient."

His advice for other leaders facing similar challenges?

"Get to yes. Figure out how to do it. Don't say, this is why I can't do it. Say this is how I'm going to do it. Here are things I need to do in order to do it. Then hold yourself accountable. Make it happen. Do it."

The secret, according to Fritz, lies in celebrating small wins that compound over time: "Little steps, I always like to say, celebrate the little wins. Go after the little wins because they compound on one another and then all of a sudden you're gonna look back and go, holy mackerel, I can't believe I am where I am."

Consistency matters just as much: "And it starts with data as your foundation and optimization as the accelerator."

For ecommerce leaders looking to build similar operational excellence, Fritz's framework provides a proven template: establish clear principles, understand customer problems deeply, make data-driven decisions, and never underestimate the power of strategic partnerships to accelerate growth.

Ready to optimize your ecommerce experience with data-driven methodology? Learn more about The Good's Digital Experience Optimization Program™ and discover how strategic partnerships can unlock your growth potential.


The Good helps ecommerce brands like Ironman 4x4 optimize their digital experiences through research-backed testing and strategic partnerships. Our team combines deep technical expertise with proven methodologies to deliver measurable results for growing brands.

Regulated SaaS Companies Need a Different Approach to Growth. What Actually Works?
https://thegood.com/insights/regulated-saas/ | Fri, 08 Aug 2025

The conversation happens on nearly every discovery call we have with a leader tasked with optimizing SaaS or software for regulated industries. It starts with optimism about growth potential, then quickly shifts to the reality of their constraints.

Healthcare software companies can’t freely experiment with patient data. Financial technology firms face strict compliance requirements that limit onsite testing capabilities. Government contractors operate under security clearances that restrict user research. Insurance platforms must navigate complex regulatory frameworks. HR and ATS software handle sensitive employee data that requires careful privacy protection.

Experimentation seems nearly impossible under these circumstances, and the product-led growth strategies these teams see working for companies riding exponential growth waves like Linktree or Lovable can’t work for them.

These regulated SaaS companies still need to grow. They have the same fundamental challenges as any SaaS business: converting leads, reducing churn, and improving user experience. But the traditional growth toolkit doesn’t fit their reality, so let’s explore what can work.

The problem with product-led growth in regulated industries

Product-led growth has become the gold standard for SaaS success.

Companies like Canva, Grammarly, and Spotify have proven that letting users experience your product before purchasing leads to higher conversion rates, lower customer acquisition costs, and sustainable growth.

The strategy is to remove obstacles to product adoption, offer free trials or freemium versions, and let the product sell itself. These companies often move quickly and test new features relentlessly as a way to “hack” growth.

The product-led growth playbook includes:

  • Free trials and freemium models that give users immediate product access
  • Continuous A/B testing on live user experiences
  • Extensive user tracking and behavioral analytics to optimize conversion funnels
  • Rapid iteration based on user feedback and behavior data
  • Self-service onboarding that guides users to their “aha moment”
  • Viral growth loops, where users invite others or share content

And it works…for many. But regulated SaaS companies see these success stories and struggle to replicate them.

How do you offer a free trial for an HR tool that has to be rolled out across an entire organization to be useful? How do you minimize sign-up friction for a fintech software that requires bank information to function?

Experimenting with new features is too risky when system failure or emergency calling disruptions in telecommunications could result in massive fines.

Sometimes the stakes are too high for the product-led growth best practices that we see working in less-restrictive industries.

Regulated SaaS challenges are unique, and their growth solutions should be too

The challenges for this subset of SaaS companies are real and varied.

Compliance and privacy restrictions: Healthcare companies can’t freely test with patient data. Financial services face strict data handling requirements. Government contractors operate under security clearances.

Low traffic volume: Many regulated SaaS companies serve niche markets with limited user bases, making traditional A/B testing statistically impractical (see the sketch below).

Long testing cycles: Because regulated companies must collect data across different regions and customer segments, reaching statistical significance can take years. Different customers use different features across various geographical locations, making it difficult to design meaningful experiments that won’t disrupt service.

Risk-averse customers: Enterprise clients in regulated industries don’t want to be testing subjects for new features or experiences.

Resource constraints: Many regulated SaaS companies are highly technical but lack dedicated growth or UX teams.
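To make the traffic problem concrete, here is a minimal sketch of the standard two-proportion power calculation that determines A/B test sample sizes. The baseline conversion rate and lift below are illustrative assumptions, not figures from any client.

```python
# Rough sample size per variant for an A/B test, using the standard
# two-proportion power calculation (alpha = 0.05, power = 0.80).
# The baseline conversion rate and expected lift are illustrative.
from scipy.stats import norm

def sample_size_per_variant(p_baseline, relative_lift, alpha=0.05, power=0.80):
    p1 = p_baseline
    p2 = p_baseline * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # power threshold
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# A niche product converting at 2%, testing for a 10% relative lift,
# needs roughly 80,000 visitors per variant. For a low-traffic site,
# that can mean years before a single test reaches significance.
print(sample_size_per_variant(0.02, 0.10))
```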

Unique challenges require unique solutions, and that is what The Good can provide.

The alternative: off-site experiment-led growth

The solution isn’t to abandon growth optimization. It’s to use different methods that work within regulatory constraints.

This is where off-site experiment-led growth becomes the game-changer.

Experiment-led growth is a strategic approach that relies on continuous research, experimentation, and data-driven decision-making to drive business improvements. It allows teams to rapidly iterate on ideas that improve UX, marketing, and more.

Regulated SaaS can add an extra layer to experiment-led growth by taking things off-site or out of the product experience. Moving the growth tactics and experimentation away from the regulated environment and live user base gives teams the chance to make changes freely and quickly, gauge user reaction to those changes, and either launch with confidence or kill the ideas.

While product-led growth relies on in-product experimentation with real users, off-site experiment-led growth validates hypotheses and optimizes experiences before they ever touch your production environment. Instead of letting users test drive your product to discover value, you test drive your assumptions about users to deliver value immediately.

This approach flips the model to accommodate the constraints regulated SaaS companies face. You no longer have to iterate on live systems with real customer data; instead, you can run experiments in controlled environments that don’t compromise compliance or risk customer relationships. You gather the same kinds of insights that drive product-led growth success, just through methods that work within your constraints.

The result is a growth strategy that’s both data-driven and compliant, giving regulated SaaS companies access to the same optimization advantages that unrestricted companies enjoy, just through different means.

Off-site experiment-led growth tactics

Here are a few of the methods we use to deliver optimization outcomes for companies with the challenges and constraints outlined earlier in the article.

User testing

Because of the difficulty in getting customer data, there can be a disconnect between product teams and users.

Lookalike user testing solves this by bringing external participants who match the ideal customer profiles through your live experience. They complete tasks while thinking out loud, revealing friction points and confusion without exposing any sensitive data or requiring system changes.

This helps you understand user behavior patterns, identify conversion barriers, and validate solutions, all without touching your production environment or compromising compliance.

AI-powered heatmaps and analytics

AI-generated heatmaps can predict user behavior with 92% accuracy without requiring any actual user data. These tools can analyze your interface and predict where users will look, what they’ll miss, and how long they’ll engage with different elements.

This is particularly valuable for regulated companies because you can understand user attention patterns and optimize layouts before the system is used.

Rapid testing

Experimentation is a proven way to get essential feedback on new features or website changes. And with A/B testing off the table in many regulated industries, rapid testing can fill in the gaps.

Unlike traditional A/B testing, rapid testing doesn’t require code changes, live traffic, or long research cycles. Instead, it uses a combination of techniques to validate hypotheses and inform decisions before anything goes live.

Rapid experimentation is not a one-size-fits-all process. Different scenarios call for different types of tests. Here are some common methods:

  • First-click tests: First-click tests evaluate whether users can intuitively find the primary action or information on a page.
  • Tree tests: Tree testing is a usability technique that helps you understand how users navigate through your website or app’s structure.
  • 5-second tests: 5-second tests assess a user’s immediate impression of a design or message.
  • Design surveys: Design surveys collect qualitative feedback on wireframes or mockups.
  • Preference tests: This test involves showing users two or more design variations and asking which they prefer and why. It’s perfect for narrowing down visual or messaging options before launching a formal test.
  • Card sorting: Card sorting is a research technique used to understand how users organize and categorize information.

These are just six of the many types of rapid experimentation.

While none deliver a 1:1 result when compared to A/B or multivariate testing, rapid experimentation offers a way for regulated SaaS companies to focus their development resources on work that has already shown positive signals from users.
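As one example of how these quick tests can still be read quantitatively, here is a minimal sketch of checking a preference test for real signal with an exact binomial test; the participant counts are hypothetical.

```python
# Is a preference-test split more than noise? An exact binomial test asks
# how likely this result would be if users had no real preference (p = 0.5).
# Participant counts are hypothetical.
from scipy.stats import binomtest

chose_b, total = 27, 40  # 27 of 40 participants preferred variant B
result = binomtest(chose_b, total, p=0.5, alternative="two-sided")
print(f"{chose_b}/{total} preferred B, p-value = {result.pvalue:.3f}")
# A small p-value (e.g., < 0.05) is a positive signal worth building on;
# otherwise, iterate on the designs or sharpen the research question.
```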

For a tangible example, imagine a company struggling with positioning (a common challenge in technical, regulated industries). Five-second testing provides immediate feedback on messaging effectiveness: users see your page for five seconds, then report what they remember.

Competitive intelligence and market research

Structured competitive analysis and market research don’t require access to your own user base.

Understanding how competitors position themselves, what messaging resonates in your industry, and what user expectations exist can inform optimization decisions.

Gathering growth strategies from businesses in similar industries facing compliance or other constraints also offers a starting point for new ideas that you can rapid test later on.

Getting started with optimization

Optimization can be intimidating and complex for regulated SaaS companies. Based on experiences working with teams like yours, here’s how to get started implementing growth optimization within your constraints.

1. Start with an audit or assessment of your current situation

Before making any changes, conduct a comprehensive audit of your current digital experience. This includes:

  • Technical tracking setup to understand what data you can legally collect
  • User journey mapping to identify critical conversion points
  • Competitive analysis to understand industry standards and opportunities
  • Stakeholder interviews to align on growth priorities and compliance requirements

2. Implement the methodologies we covered

Focus on techniques that provide insights without requiring on-site or in-product experimentation:

  • User testing with 5-7 participants per user type (you’ll get roughly 80% of insights from this small sample; see the sketch after this list)
  • Message testing to validate positioning and value propositions
  • Prototype testing for new features or flows before development
  • Heat mapping to understand attention patterns and interaction likelihood
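The 5-7 participant guideline above traces back to Nielsen and Landauer's problem-discovery model. Here is a quick sketch of the arithmetic, assuming their commonly cited average discovery rate of about 31% per user; your product's actual rate may differ.

```python
# Nielsen & Landauer's model: the share of usability problems found by n
# test users is 1 - (1 - L)^n, where L is the chance one user surfaces a
# given problem (~0.31 on average in their studies; assumed here).
L_RATE = 0.31
for n in (1, 3, 5, 7, 10):
    found = 1 - (1 - L_RATE) ** n
    print(f"{n:2d} users -> ~{found:.0%} of problems found")
# 5 users -> ~84%, which is why a small sample captures most insights.
```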

3. Prioritize based on impact and compliance

Create a roadmap that balances growth potential with regulatory requirements. Focus on:

  • High-impact, low-risk optimizations that don’t require system changes
  • Messaging and positioning improvements that can be implemented quickly
  • User experience enhancements that reduce friction without compromising security
  • Qualification improvements to ensure you’re attracting the right prospects

4. Build your internal capabilities and outsource what you can’t

Many regulated SaaS companies lack dedicated growth resources. Consider:

  • Training technical teams on user experience principles
  • Establishing research processes that work within compliance frameworks
  • Creating feedback loops between customer-facing teams and product development
  • Implementing regular optimization cycles that don’t disrupt core operations
  • Outsourcing what you just can’t manage internally

Growth within constraints isn’t impossible

Regulated SaaS companies don’t need to accept mediocre growth because of their constraints. They need different approaches that work within their reality.

The key is recognizing that optimization isn’t restricted to product-led strategies or A/B testing. Understanding your users, validating your assumptions, and making data-driven decisions can deliver outcomes that are just as impactful.

Whether you’re in healthcare, financial services, government, or any other regulated industry, growth optimization is possible. It just requires the right toolkit and a willingness to think beyond traditional approaches.

Making off-site experiment-led growth work within your regulatory constraints starts with a conversation. Learn what’s actually possible when you have the right methodology and expertise guiding your optimization efforts by getting in touch with our team.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

Is “Test and Learn” or “Launch and Learn” Better?
https://thegood.com/insights/test-and-learn-vs-launch-and-learn/ | Sun, 02 Mar 2025

If you’ve worked in SaaS or digital media for a while, you’re likely familiar with the long-running debate between “test and learn” and “launch and learn.”

A hot topic in the 2010s, it pits the merits of shipping fast against the merits of validating pre-launch. Over time, it’s been argued under different names like “test everything” or “founder-led growth.” But it all boils down to the same question: Is it better to validate before or after launching?

Every so often, it’s worth revisiting these hotly debated topics to ground ourselves (and our products) in strategic decision-making.

We have much more data, knowledge, and tech than when the debate started. So, let’s take a look at where the “test and learn” vs. “launch and learn” debate stands and how to decide which approach is best for you.

Defining “test and learn” and “launch and learn”

First, full transparency. We’re big advocates for a “test and learn” culture.

As one of the first players in conversion rate optimization, The Good pioneered many of the strategies that support experimentation-led growth. We wholeheartedly believe that all ideas are hypotheses to be tested.

However, we also understand that everything has nuance, and there is no one-size-fits-all approach to product optimization. Each method has merits, depending on the business context.

Here’s how “test and learn” and “launch and learn” stack up in an apples-to-apples comparison.

Test and learn

Definition: Test and Learn is an iterative approach focused on experimentation, where hypotheses are tested through small-scale trials before broader implementation. It emphasizes data collection and analysis to inform decisions.

Methodology:

  • Establish clear hypotheses before testing.
  • Conduct controlled experiments (e.g., A/B testing) to evaluate specific variables (see the sketch below).
  • Use metrics to assess performance against predefined KPIs.
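A minimal sketch of that last step, assessing an A/B test's metrics with a two-proportion z-test; the session and conversion counts are invented for illustration.

```python
# Compare conversion rates from a controlled A/B test with a
# two-proportion z-test. Counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [132, 168]  # control, variant
sessions = [4950, 5010]
z_stat, p_value = proportions_ztest(conversions, sessions)
print(f"control {conversions[0] / sessions[0]:.2%} vs "
      f"variant {conversions[1] / sessions[1]:.2%}, p = {p_value:.3f}")
# A low p-value supports rolling out the variant; a high one means
# the observed difference may just be noise.
```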

Objectives:

  • Minimize risk by validating ideas before full-scale rollout.
  • Foster innovation through iterative learning, allowing teams to pivot based on results.
  • Provide insights into customer behavior and preferences.
  • Refine products or marketing strategies based on empirical data.
  • Encourage a culture of continuous improvement and adaptation based on feedback.

Challenges:

  • Time-consuming nature of extensive testing cycles.
  • Potential for analysis paralysis if not managed properly.

Launch and learn

Definition: Launch and Learn focuses on quickly deploying products or features to the market, with the understanding that adjustments will be made based on real-world user feedback post-launch.

Methodology:

  • Rapidly launch new offerings.
  • Implement a feedback loop to continuously gather user input post-launch.
  • Leverage insights to inform subsequent iterations.

Objectives:

  • Accelerate time-to-market for new products or features.
  • Gather immediate insights from actual users.
  • Identify market needs faster based on real-world usage.
  • Retain flexibility to pivot based on immediate user reactions.

Challenges:

  • Difficulty in managing customer expectations post-launch.
  • Risks potential negative user experiences.

Aspect | Test and Learn | Launch and Learn
Approach | Iterative experimentation | Rapid deployment
Focus | Data-driven decision-making | Real-world feedback
Risk Management | Minimizes risk through controlled tests and careful validation | Accepts risk with quick market entry
Feedback Timing | Pre-launch and post-launch insights | Post-launch insights only
Innovation Style | Encourages confidence and constant refinement | Promotes fast iteration based on user response

4 considerations when picking the right approach for your product

So, which one is better?

As with most things in optimization (and the world), the answer is “it depends.”

While there is no blanket approach to experimentation, there are some important considerations that can help guide your decision.

1. Stage of growth

Whether you choose a “test and learn” or “launch and learn” approach often depends on your company size and resource availability. Let your stage of growth be the primary guide for which experimentation approach you use.

For companies looking to find product-market fit, a “launch and learn” approach is often executed to get fast, real-time feedback. But to take your business from product-market fit to scale, it’s crucial to move past product-led growth best practices and take a “test and learn” approach.

When you are just starting to implement PLG practices, you may rely on hunches or best guesses. But as you grow, experimentation should happen pre-launch.

2. Risk level

Another consideration when picking an experimentation strategy is the level of risk associated with the changes or launch.

For example, if you’re working on a feature or journey that impacts the core user experience, you should always “test and learn” prior to launch. It would be a pretty big risk to “launch and learn” something broken in the core product experience and suddenly see your churn rate skyrocket.

However, if you’re launching a fix for a feature that is already broken, finding a quick, usable solution is more important than adhering to a strict “test and learn” approach.

3. Confidence level

Product management leader Marc Abraham advocates for a confidence check before launch to understand how much or little testing is needed.

He outlines the confidence levels as:

  • “High Confidence: Our confidence in the upcoming release is high because we tested it thoroughly internally, have launched a similar product or feature before, or if there’s an issue the fallout will be small.”
  • “Low Confidence: Our confidence in the upcoming release is low because we haven’t fully tested it, it’s based on new technology, or it creates a totally new user experience.”

These are great guidelines for getting started. And if you are still unsure, you can perform what Emma Leyden calls a “gut check.”

“Your ‘gut check’ can be done in low-effort ways. It won’t give you the most confident answer, but something as simple as showing a design to friends and family before you launch can teach you a lot.”

While product intuition is important, remember we all have our biases. Sometimes, it’s hard to see our products from different perspectives, which is why testing or validating your ideas prior to launch is essential.

4. Product nature

Build it, and they will come!

That’s the motto of many “launch and learn” advocates, and rightfully so. If there is no product built in the first place, there is nothing to learn about.

But that’s only true if what you’re launching is simple and functional.

The complexity (or simplicity) of the product/feature can be a major consideration when deciding on your experimentation approach. Complex, high-investment products should use “test and learn” to validate the user experience and justify the investment pre-launch.

Whatever you choose, make sure you learn

While we’re champions of “test and learn,” we know that time-crunched growth leaders don’t always have that luxury. The most important takeaway is to never launch and leave.

Regardless of the approach, the goal should always be to learn. Collect and analyze both quantitative and qualitative data and use those insights to iterate.

Abraham says, “I view releasing something without learning from it as a cardinal sin. It’s very important to continuously learn from real users and actual usage (or not) about your key hypotheses.”

Experiment-led growth

If you’re ready to move from product-market fit to scale and would like to improve your experiment-led growth practices, The Good can help.

We build a culture of experimentation within SaaS companies and spur growth through better UX across the product lifecycle.

Our methodologies discover untapped opportunities and improve KPIs, including registration, activation, engagement, monetization, expansion, and win-back.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

Compound Learnings From Your Experimentation Program With An End Of Year Roadmapping Exercise
https://thegood.com/insights/compound-learnings/ | Fri, 22 Sep 2023

A dream without a plan is just a wish. You’ve probably heard this from a coach or mentor at some point, and that’s because it is applicable to most things in life. I’m here to remind you that it’s also true for improving your digital experience and optimization process.

You have a goal to deliver the best experience in your industry, whether it’s ecommerce, SaaS, digital media, or something else. You want to convert more of your visitors, grow your user base, or increase subscribers.

If you’re thinking, “I already have a plan and plenty of experiments in progress,” first: congratulations! We love to hear that. And second, this is actually the perfect article for you.

I’m sharing a step-by-step exercise for the end of the year that takes learnings from past experiments or tests and helps you leverage them to compound results.  

This will inform the plan that will turn your dream of a better digital experience for your customers into a reality.

Why should you be thinking about this now?

As Q3 quickly comes to a close, the holiday season and planning for next year are right around the corner.

For most ecommerce, SaaS, or digital media companies, you’re probably beginning some sort of holiday campaign that was planned months ago.

The end of the year is mapped out, so make time on your calendar next quarter to start considering your roadmap for next year. What are your goals for your digital experience? And how are you learning from the past year to inform the next steps to reach them?

For many companies, the best way to start answering these questions is with a three-step optimization process that reviews your digital experience and tests from the past year to help inform your plan for the future.

We recently went through this optimization process with a client, so let’s take a look at what this looks like in action.

Step 1: Review Key Analytics Data

The first step in the optimization process is to review progress from the year.

Specifically, run a data report in Google Analytics for a 12-month period and pay special attention to the following:

  • How session count changed over the year
  • Where that traffic came from
  • How conversions changed by:
    • Device type
    • Top user groups
    • Channel groupings
    • KPIs

This helps you understand where there was growth and where to focus your efforts next year.
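If you'd rather pull this report programmatically than click through the GA4 UI, a sketch with Google's `google-analytics-data` Python client might look like the following. The property ID is a placeholder, and the dimension and metric names assume a standard GA4 setup (newer properties may report key events rather than `conversions`).

```python
# Pull 12 months of sessions and conversions, broken down by month,
# channel grouping, and device, from the GA4 Data API.
# Assumes GOOGLE_APPLICATION_CREDENTIALS is set for authentication.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property="properties/123456789",  # placeholder: your GA4 property ID
    date_ranges=[DateRange(start_date="365daysAgo", end_date="today")],
    dimensions=[
        Dimension(name="month"),
        Dimension(name="sessionDefaultChannelGroup"),
        Dimension(name="deviceCategory"),
    ],
    metrics=[Metric(name="sessions"), Metric(name="conversions")],
)
for row in client.run_report(request).rows:
    print([d.value for d in row.dimension_values],
          [m.value for m in row.metric_values])
```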

Here’s what some of the key learnings from the analysis might look like:

Example Key Learning #1

Learning: Traffic trends show the highest session count in X month

Details: Breakdown by source and medium shows a decrease in session count YOY from Google organic and CPC, with a larger increase in traffic associated with social campaigns, email, etc.

Opportunity: Prep for email campaign for mid-year

Example Key Learning #2

Learning: X% increase in goal conversion rate compared to the previous year.

Details: There was an increase in overall goal completions compared to the previous year, slowing down in X months. We see similar growth in mobile and desktop conversion rates (+X%, X%)

Opportunity: Examine mobile experience to continue growth in this area

Example Key Learning #3

Learning: Form submissions increased

Details: Looking at the form submission trends, we see an increase across all goals with the largest impact on X form submissions

Opportunity: Revisit form analysis on key landing pages

These learnings and the associated details give the context that will inform any seasonal opportunities, growth areas, or sticking points.

Step 2: Review Tests From The Past Year

Next, if you have an active conversion optimization or experimentation program, review:

  • All of the tests that you have run in the past year
  • The results of those tests
  • The learnings from the results

Each learning can become a follow-up concept, with the metric results gathered from the test helping prioritize a roadmap.

For example, if a test on a certain element of the site did not produce a huge impact on conversions, the learning may be that you need to make bigger changes in that area of the site in order to have an impact. This is something you want to pull out and keep in mind if you decide to test other optimizations to that element next year.
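One lightweight way to turn those learnings into a prioritized backlog is an ICE-style score (impact times confidence times ease). This is a generic prioritization sketch, not The Good's proprietary method, and the concepts and scores are invented.

```python
# Score follow-up concepts on impact, confidence (how strong the
# supporting test data is), and ease, then sort. Values are invented.
from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    impact: int      # expected effect on the target KPI, 1-10
    confidence: int  # strength of evidence from past tests, 1-10
    ease: int        # inverse of effort to design and run, 1-10

    @property
    def ice(self) -> int:
        return self.impact * self.confidence * self.ease

backlog = [
    Concept("Bigger redesign of PDP hero (small tweak moved nothing)", 7, 5, 3),
    Concept("Rework mobile landing page CTA", 6, 8, 7),
    Concept("Shorten lead form on key landing pages", 5, 7, 8),
]
for concept in sorted(backlog, key=lambda c: c.ice, reverse=True):
    print(f"{concept.ice:3d}  {concept.name}")
```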

A good process for this step is to outline all of the tests that you have run, then one-by-one refresh yourself and your team on the hypothesis, background, and learnings from the experiments.

If you’re presenting this to the C-suite or other organizational stakeholders, we’d recommend you start with an overview slide with site area and key metrics and then one slide per test to get into the details. Review the main highlights and only get into the details if they are particularly relevant (or if your stakeholders are interested).

Step 3: Identify Pending Questions From Experiments and Conduct New Research

The next step is to identify any pending questions from your past experiments and supplement them with research as necessary.

For example, you may want to run new session recordings or user tests on winning or losing variants of experiments to understand what additional opportunities exist for optimization, or why your hypothesis may have failed.

In action, this could look something like this. If the mobile variant of a paid landing page test didn’t perform as well as the desktop variant, you may establish new research questions and run additional user testing.

In this scenario, let’s say you revisited session recordings of users who opted into the mobile variant of your experiment. You can see users are missing the main CTA and then abandoning. This behavior suggests that users do not feel this is the primary action they should take on the page, so there may be an opportunity to rework how you present the CTA in your next iteration of this test, positioning it to guide users toward that goal.

By reviewing what was learned in the first test (that the mobile variant wasn’t performing as well as desktop) and supplementing the research, you turned a single result into a compound learning (identifying that the CTA wasn’t clear enough).

This illustrates how the exercise can surface new opportunities for a future roadmap. You have the background from the original research and insights from the variant that didn’t perform as hypothesized, and you’ll be able to apply both to a future roadmap.

In summary, after reviewing tests, do an experience review of key pages, based on past learnings and current user behavior. Use that information to compound learnings and identify opportunities to build a better user experience.

(Bonus) Step 4: Prioritize Opportunities & Allow Your Learnings to Compound

Now that you’ve done the 3-step optimization process, it’s time to ideate on opportunities. This is also where you begin to prioritize a roadmap for the following year.

Here is a bonus, in-depth article to guide you through the process: How to Build an Efficient A/B Testing Roadmap.

The goal of this step is to get your roadmap together, taking into account the exercises we just went through. So, make sure your roadmap includes:

  • Opportunities identified by analytics data from the previous year
  • Follow-up concepts based on tests you ran last year
  • Uncovered sticking points from your experience review of key pages
  • Any testing opportunities you have put off to revisit in the following year

The deliverable should outline the areas of the site you want to test, prioritized by which concept you would run first.

Consistency Compounds, So Keep Your Learnings Top Of Mind All Year Long

That’s a peek into some of our strategies for making sure learnings compound YoY. The longer you run an experimentation program, the better your experience becomes. This is because you’re consistently compounding your learnings about users and improving your optimization process.

To summarize, before you create next year’s roadmap, make sure to:

  1. Review key analytics data
  2. Review tests from the past year
  3. Identify pending questions and supplement them with research

The exercise of reviewing your site data, experiments, and learnings from the past year will help you build on any optimizations to create an even better digital experience.

Keep in mind that visibility into learnings can help guide future concepts throughout the year, not just at the end. A great way to do this is to keep a dashboard of your key data, learnings, and insights handy. But, that’s a topic for a different day.

If you’d like your learnings to compound and your wins or results to do the same, we can help you launch or expand your experimentation program. The Good works with ecommerce and product marketing teams to optimize the digital experience with research, validation, and implementation. Get in touch here.

Encouraging A “Test And Learn” Culture To Keep Innovation Fresh After 30 Years In Business
https://thegood.com/insights/studio-m-case-study/ | Fri, 17 Mar 2023

It’s 10:30 am on a Wednesday (but it could be any day of the week) when a bell chimes in the office. The Studio M team kicks back from their workstations and takes a moment to consider how they might make the task they’re working on easier or more efficient.

At 10:45 am, the team returns to work, their creative juices replenished and in full flow.

Innovation runs through Studio M’s veins. The 80s brainchild of Curt and Sue Todd has evolved from its humble beginnings to become one of the most prolific family-run home and garden DTC and wholesale businesses. Today, it’s famed for inventing MailWraps®, the first decorative magnetic mailbox cover, and its most recent patented product, Art Poles. But it’s the constant commitment to a culture of innovation and experimentation that has given the brand its competitive edge.

Maura Godat joined the company 10 years ago as an intern—a favor from her brother-in-law during her college days. Over the past decade, she has risen to become the Creative Team Director, overseeing resource management and timeline planning for the expanding product and marketing teams.

[Image: Studio M homepage]

There has been a lot of change at the company since Maura started—something that she handles strategically and thoughtfully with her team. “If we’re talking about a general sense of change in an organization, it’s really just about making sure people feel like they’re part of the change,” she says.

Involving the entire team in decisions, reassuring people that their opinions are valued, and encouraging them to feel part of the change has been a huge driver behind the success of Studio M’s experimentation program. “Everything is more successful when everyone in the organization, whatever the project they’re working on, feels like a part of the solution,” says Maura.

The brand is always coming up with new product ideas and iterating on and improving products that are already out there. “We add a little twist to make them better,” says Maura. Studio M has always had an experimental streak, and the brand is not afraid to explore and play. As the company has embraced a continuous improvement mindset, Maura has helped her team implement lean principles. “We’re always pushing continuous iteration and improvement across all teams, whether it’s production, warehouse, admin, or creative. Everybody’s challenged,” she explains.

It’s this willingness to think outside the box and innovate while still trusting the data that has made The Good’s partnership with Studio M so successful.

Building testing and innovation into the heart of the decision-making process

Studio M wanted to put itself in a position where it had little to no competition, and focusing on a unique product mix was a big part of that journey. “Things you just don’t see from other vendors–those were the things that stuck around,” says Maura when describing how the brand chose to explore different product lines.

But, as is the case with many small, family-owned businesses, it’s hard to test everything. This was a struggle for Maura, who’s a keen advocate for A/B testing anything and everything. “It’s super helpful when you’re trying to make an informed decision, but for us, it’s not always feasible because it’s a lot of extra effort to implement two versions of something or to even track it at all,” she admits.

It wasn’t just products that were the subject of Studio M’s testing and innovation, though. As part of its growth plan, Maura and her team implemented many different marketing tactics, including the most effective strategy—PPC campaigns. Pay Per Click ads were huge for the business, but when the traffic started pouring in, Maura quickly realized she had to turn her attention to conversion.

“At that point, we realized it was time to focus on the customer journey once they get to the site,” says Maura. “We wanted to know what shoppers were seeing and what was making them go away.”

Partnering with an expert optimization team was all part of the learning process, and Maura was keen to work alongside someone who could take the reins.

“Partnering with The Good was huge because, as a small company with small teams, we just don’t have the resources to always do the due diligence and dig through the data to decide what the outcome of each test was.”

Maura Godat, Creative Team Director, Studio M

The biggest value-add for Maura was she could leave experimentation in our capable hands and give back her own time and energy to her team.

Moving beyond assumptions with a data-driven partner

When you’ve been working at a business for 10 years like Maura has, it’s easy to assume you know what customers want and what’s important to them. Maura didn’t want to fall into this trap, so it was important for her to partner with an expert that could go beyond assumptions and use tangible, data-led evidence.

Our partnership with Studio M was an excellent match. We’re both insight-led companies that aren’t afraid to dig around to find what works. “Reading about The Good’s processes, case studies, and site teardowns—it was just very evident that your approach is data-driven,” Maura said about choosing to work with us.

It was important for Maura to get tangible outcomes—something that she and her teams had struggled with due to a lack of resources and time. And, with a background of relentless testing and experimentation, it was very clear that Studio M didn’t just want a surface-level site refresh. They wanted evidence of the impact of every new iteration–and there was a lot to test. A jam-packed product catalog, multiple product variations, and over a decade’s worth of content all lent themselves perfectly to multiple tests.

But to begin with, we focused on improving three core areas:

  1. The product page experience
  2. Category pages
  3. Navigation

One of the biggest challenges we faced was the sheer breadth and depth of Studio M’s product catalog. As a result, we did a lot of testing with the goal of exposing shoppers to a wider range of products. For starters, we adjusted the content shown in the top-level navigation bar and provided links back to related categories on product-level pages. We also had a winning mega menu design, pictured below.

[Image: Studio M’s winning mega menu design, showing the different collections and products available on the site]

We also wanted to shine a spotlight on one of Studio M’s unique differentiators—its library of talented artists. To do this, we featured artists across the site and linked back to their collections, creating deeper shopper connections and exposing customers to new products in the process.

Winning back time and scaling innovation: how our integrated partnership created a seamless research and testing hub for a busy team

The Good’s custom Digital Experience Optimization Program™ suited Maura and Studio M’s needs perfectly. We agreed on a done-for-you process, meaning Maura could take back her time for other parts of the business while we conducted the experimentation and research behind the scenes.

We were keen to work closely with Studio M to identify where they could benefit from optimization, particularly across such a huge product catalog. But we also wanted to make sure that every product had its moment in the limelight and that the brand’s unique differentiators were brought to the forefront.

The integration with Studio M was seamless—made possible, no doubt, by the businesses’ commitment to testing and experimentation.

Trust played a big part in our successful partnership, and it was a huge help having the entire team on board—something that Maura has been fostering over her time at Studio M.

The best part was our ability to help Maura and the team shake up decades-old assumptions and continue to push for innovation.

“Sometimes, our team thought something would be super impactful for shoppers, but it wasn't. So it just goes to show that it's always worth testing. It's always worth experimenting.”

Maura Godat, Creative Team Director, Studio M

So, while we took care of the data, testing, and tangible outcomes, Maura and her team leveraged their experience in the home and garden industry and their well-honed intuition. “I think if you make every decision based on data it’s going to slow you down big time,” says Maura. “So you have to go with your experience and your gut sometimes, but it’s nice to have that balance of a team that’s looking at the data and digging in to show a clear winner.”

We were, to all intents and purposes, an extension of Maura's existing team. We were given free rein to run with the information we had—and while it was a collaborative effort, we were there to make Maura's job easier, not harder. "When you feel like you have a partner who's just got it, it saves so much of your time and energy," says Maura. "It really feels like a help instead of a handhold. It's just super refreshing."

The innovation bell chimes again: pairing years of industry experience with creative new ways of thinking

It’s 10:30 am and the bell chimes. It’s that time of the day again—time to test, innovate, and experiment, and we’re right here beside the Studio M team.

“You have a super dynamic, dedicated team that really feels like an extension of our team that helps drive growth by making our site the best it can be for users,” says Maura. “Putting yourselves in the shoes of the user and doing a deep dive into what that experience is was incredibly valuable for us. We can’t wait to get back into it—it was just a really fun time.”

As a small brand that often didn't have the traffic to reach a conclusive test result, Studio M leaned on the partnership with The Good for recommendations.

“Sometimes traffic wasn’t as high as we anticipated it being, or a clear winner wasn’t presenting itself. The Good was able to look at different data points in different ways and see that, while it might not be meeting a specific goal, it still accomplishes something else.”

Maura Godat, Creative Team Director, Studio M

We were able to take the years of experience and industry knowledge that Studio M has and pair that with thoughtful and creative ways to look at the test results and data.
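
Why does low traffic make tests inconclusive? The sample size a conclusive A/B test needs grows quickly as conversion rates and detectable lifts shrink. Below is a standard two-proportion sample-size estimate; the inputs are invented for illustration, not Studio M's numbers.

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_variant(baseline_cr, relative_mde, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift in conversion rate."""
    p1 = baseline_cr                       # control conversion rate
    p2 = baseline_cr * (1 + relative_mde)  # variant rate we want to detect
    z_alpha = norm.ppf(1 - alpha / 2)      # two-sided significance threshold
    z_beta = norm.ppf(power)               # desired statistical power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A 2% baseline conversion rate and a 10% relative lift:
print(sample_size_per_variant(0.02, 0.10))  # roughly 80,000 visitors per variant
```

At numbers like these, it's easy to see why a smaller brand's tests often can't wait for significance on the primary goal, and why secondary data points become so valuable.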

“Build and they will come”: Implementing a test and learn culture in your own organization

Encouraging a test-and-learn culture has allowed Studio M to continue to innovate after decades in business. Business and consumer preferences are in a constant state of motion, and the only way to stay on top and gain a competitive edge is to learn and learn some more.

If you want to become an innovative powerhouse like Studio M, follow in their footsteps—build your innovative product and drive traffic to your site. Once you have the traffic coming in, turn your attention to conversions to create an enjoyable customer experience.

We are here to help.


The post Encouraging A “Test And Learn” Culture To Keep Innovation Fresh After 30 Years In Business appeared first on The Good.

Being Bold Brings Big Benefits (like a £530K increase in annualized revenue) https://thegood.com/insights/emanualonline-case-study/ Fri, 03 Mar 2023 04:23:03 +0000 https://thegood.com/?post_type=insights&p=103194 Every successful leader embraces experiments in the face of unknown outcomes. This fearlessness is innate for Chad Shen-Ina, owner of eManualOnline, who isn’t afraid to challenge the status quo when it’s not working. Chad’s focused goals and flexible mindset made for an easy partnership as we developed a strategy and tackled each goal head-on. Find out […]

Every successful leader embraces experiments in the face of unknown outcomes. This fearlessness is innate for Chad Shen-Ina, owner of eManualOnline, who isn’t afraid to challenge the status quo when it’s not working.

Chad’s focused goals and flexible mindset made for an easy partnership as we developed a strategy and tackled each goal head-on.

Find out how Chad's fearless approach with The Good helped eManualOnline achieve impressive results like:

  • A 50% win rate
  • £530K annualized revenue gains
  • 9:1 ROI

An industry leader

Trusted by more than a million customers for over 15 years, eManualOnline offers over 2.5 million digital repair manuals for download, making it the largest repair manual database online.

Their site boasts an impressive catalog across all industries, including automotive, household electronics, construction equipment, and more.

And their manuals aren't just for professionals. eManualOnline is for everyone curious about repair, whether you're a professional mechanic, a repair shop, or someone who loves a good DIY project.

eManualOnline has a long-standing commitment to providing quality manuals with quality service at a reasonable price, something they’ve maintained through all of their years of service.

From simple tweaks to bold reworks of complex elements

Goals provide the foundation of any meaningful experimentation. When goals are set, they become the guiding force driving us toward success. Unclear goals and lack of direction lead to failed projects and wasted time.

Fortunately for us, eManualOnline shared many goals that helped shape our testing approach. These included:

  • Finding high-priority improvement areas with analytics data
  • Increasing conversion rates, a common goal among our clients
  • Increasing directional guidance: Finding ways to guide users to the most relevant product pages in as little time as possible
  • Simplifying the purchase process
  • Minimizing the clicks to checkout
  • Improving side and top navigation to better match user intent

Conducting a site-wide audit illuminated 3 key challenges

After establishing specific goals, we conducted an audit of eManualOnline to identify areas of friction and opportunities for improvement.

Using both qualitative and quantitative data provided insights into not only how customers interact on the site but why these interactions occur.

Our team gathered insights from this research, which illuminated 3 key challenges.

We reviewed heat maps and session recordings of customers scrolling up and down category pages, revealing a potential need for better filtering options and content hierarchy.

User testing revealed that users weren't sure how eManualOnline delivers its products. Mixed messages throughout the site left customers wary and distrustful of the website.

For example, various locations mentioned mixed delivery methods: DVDs, downloads, links, and emails. Users were likely to lose confidence if delivery methods were unclear.

image showing user testing for eManualOnline checkout process
User testing revealed confusion among customers and doubt in eManualOnline’s website credibility.

Additional session recordings showed that users landing from ads were likely to bounce, but those who engaged had a higher-than-typical likelihood of using breadcrumbs.

This indicated that the products they were seeing might not match their needs. So, while users were bouncing, they would likely be willing to dig deeper into the site, given the right pathway.

screenshot of eManualOnline product page showing users making use of breadcrumbs
Through these session recordings, we found users willing to use breadcrumb navigation.

User testing of eManualOnline showing high bounce rate in product pages
Product pages had high bounce rates.

Once we identified these challenges, we explored competitive and comparative examples to draw inspiration for our A/B testing strategy and find ways to maximize paid traffic from ads.


For example, we noticed that Target uses promoted filters on their collections pages to encourage users to filter for the most relevant results for their needs.

We hypothesized that emphasizing the number of results and rearranging content on the page encourages users to filter more narrowly, reducing the time it takes to find what they need. For example, placing categories above the fold and product tiles below it.

screenshot of Target's category page for men's wear that served as a basis for eManualOnline

Next, we found inspiration from Etsy's PDPs (product detail pages), which display icons with a description to clearly communicate that the product is a digital download.

Etsy product detail page that served as inspiration

And finally, we drew additional inspiration from Etsy’s product pages. We found they offer related searches and categories to entice users to stay on the site rather than navigating off-site to comparison shop.

screenshot of related searches shown on the Etsy product pages which served as an inspiration for our work with eManualOnline

Putting our research-backed ideas to the test

We used the above research and examples to inspire our testing strategy and devise hypotheses to tackle eManualOnline’s challenges, resulting in great wins and conversion rate increases.

1. Category page filters

When we first tested the category pages, we found users were hit with content fatigue, unsure of where to go and what to do next.

Although users found what they wanted using the side filters, heatmaps and session recordings revealed users constantly scrolling up and down on category pages, potentially indicating a need for better filtering options and content hierarchy.

Our hypothesis? Simplifying the content in the side filters will increase usability and increase transactions.

Our results? We A/B tested 1 variant against the control. Variant 1 removed superfluous categories from the sidebar.

Overall, variant 1 had a 10.84% advantage over the control at 98% statistical significance.

Our learnings? Simplifying the sidebar with fewer extraneous options improved usability and increased conversions.
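
For readers curious how a result like "10.84% at 98% statistical significance" gets checked, a two-proportion z-test is one common method. The session and conversion counts below are hypothetical, chosen only to mirror the shape of the result; they are not eManualOnline's actual data.

```python
from scipy.stats import norm

def two_proportion_z_test(conversions_a, sessions_a, conversions_b, sessions_b):
    """Two-sided z-test comparing control (a) and variant (b) conversion rates."""
    p_a = conversions_a / sessions_a
    p_b = conversions_b / sessions_b
    pooled = (conversions_a + conversions_b) / (sessions_a + sessions_b)
    se = (pooled * (1 - pooled) * (1 / sessions_a + 1 / sessions_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return (p_b - p_a) / p_a, p_value

# Hypothetical counts: 100,000 sessions per arm, 2,075 vs. 2,300 conversions.
lift, p = two_proportion_z_test(2_075, 100_000, 2_300, 100_000)
print(f"lift: {lift:.2%}, p-value: {p:.4f}")  # ~10.84% lift, p well below 0.02
```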

2. Offline download delivery priming

Our user testing revealed that users were confused about how eManualOnline delivers the manuals, as some are digital downloads and others are physical editions.

Because of the mixed delivery method messages throughout the site, customers felt a lack of trust when confronted with the website.

Our hypothesis? Highlighting delivery methods will clarify any confusion and increase transactions.

Our results? We A/B tested two versions: variant 1, the control, and variant 2, which made delivery methods clear at various touchpoints.

Variant 2 showed a 14% lift over the control, with an overall lift in per-session value of 13% at 99% statistical significance.

Our learnings? Clarifying access methods for offline downloads results in stronger purchase intent.
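
Per-session value lifts like the 13% above are evaluated on revenue per session rather than on conversion counts. One rough way to compare the two arms, again using simulated data rather than real order values:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Simulated per-session revenue: most sessions buy nothing; purchases carry an order value.
control = rng.choice([0.0, 25.0], size=50_000, p=[0.980, 0.020])
variant = rng.choice([0.0, 25.0], size=50_000, p=[0.977, 0.023])

# Welch's t-test compares mean per-session value without assuming equal variances.
t_stat, p_value = ttest_ind(variant, control, equal_var=False)
print(f"control: {control.mean():.3f}, variant: {variant.mean():.3f}, p = {p_value:.4f}")
```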

3. Promoted categories – Collections pages

During our session recordings, we found users willing to navigate through breadcrumbs, but a still-high bounce rate on product pages, which left room for ad spend improvements.

Our hypothesis? Adding navigational elements to the collections page would encourage deeper page depth from paid ads, decrease bounce rates, and lead to increased transactions.

Our results? We A/B tested two versions: variant 1, the control, and variant 2, which added promoted category pills to collections pages.

Variant 2 showed a 23.72% lift over the control, with an overall lift in per-session value of 23.73% at 98% statistical significance.

Our learnings? Promoting categories and using category pills helps users find the best-fit products and leads to an overall increase in conversions.

When another challenge struck, The Good was there

When challenges arise (and they always do), it's always good to have a helpful friend. That's something eManualOnline found when a Google algorithm change removed them from Google Shopping.

Because their products are digital downloads rather than physical goods, Google removed eManualOnline's listings. This presented a major challenge because, as with many businesses, a significant portion of revenue could be attributed to Google Shopping.

But, even though their site went through a stressful event, they recovered some of their revenue through the Digital Experience Optimization Program™ improvements from The Good.

Working together led to over £500,000 in revenue gains

While eManualOnline was on the Digital Experience Optimization Program™, we had a 50% win rate on our testing.

Because of their willingness to try new things instead of sticking with the familiar, we were able to make both bold changes and small tweaks, which yielded large dividends of £530K in annualized revenue gains.
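
Annualized figures like that £530K come from straightforward arithmetic: the extra orders attributable to the lift, multiplied by average order value, projected across twelve months. The inputs below are invented to show the shape of the calculation, not eManualOnline's actual traffic or order value.

```python
def annualized_gain(monthly_sessions, baseline_cr, relative_lift, avg_order_value):
    """Projected 12-month revenue gain from a sustained conversion-rate lift."""
    baseline_orders = monthly_sessions * baseline_cr
    extra_orders = baseline_orders * relative_lift
    return extra_orders * avg_order_value * 12

# e.g. 400k monthly sessions, a 2% conversion rate, a blended 12% lift, £46 AOV
print(f"£{annualized_gain(400_000, 0.02, 0.12, 46):,.0f}")  # £529,920, about £530K
```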

Experimentation is an active science that, when successful, can lead to extraordinary things.  


The post Being Bold Brings Big Benefits (like a £530K increase in annualized revenue) appeared first on The Good.

5 Tests We Would Run On (Almost) Any Ecommerce Site https://thegood.com/insights/5-tests-to-run-on-any-website/ Mon, 19 Dec 2022 20:25:55 +0000 https://thegood.com/?post_type=insights&p=102355 This might hurt to hear, but it doesn’t matter how great your products are if your website visitors can’t quickly find what they need. A minor inconvenience or delay in the user experience can make the difference between someone completing a purchase or losing interest and leaving. While it can be a challenge to come […]

This might hurt to hear, but it doesn’t matter how great your products are if your website visitors can’t quickly find what they need.

A minor inconvenience or delay in the user experience can make the difference between someone completing a purchase or losing interest and leaving.

While it can be a challenge to come up with the right areas for optimization, after a decade-plus in the business, we've identified patterns across our clients. There are specific areas of ecommerce websites that, when optimized, consistently deliver results. And there are a few tests that we've had success with time and time again.

Our team analyzed our vault of patterns and these are 5 of our favorite data-backed tests to help you gather insights on your customers and increase conversions.

Let’s dive in and explore the test ideas we would run on (almost) any ecommerce site!

Test Idea #1: Quality Tiles

The Test Idea: Replace product tiles on the category page with “quality tiles” that feature different brand messaging.

Why We Love It: Testing product tiles with different key messages (brand values, free shipping, etc.) teaches the brand what its customers care about… and increases conversions.

The Quality Tiles Test In Action

One of our clients, Beckett Simonon, an online leather goods retailer, needed a long-term optimization program to guarantee sustainable improvements to their sitewide conversion rates.

To identify potential conversion blockers, we analyzed Beckett Simonon’s website analytics and user experience flow through usability testing and heat maps to uncover lucrative conversion opportunities.

Our research revealed that users:

  1. Relied heavily on product images to decide on purchases
  2. Didn’t understand Beckett Simonon’s differentiated value

Our team hypothesized that focusing the company’s messaging throughout key image-driven moments would help users understand their product’s unique values, like ethical responsibility.

So, we A/B tested different messaging: variant 1 focused on the company's ethical responsibility practices, and variant 2 on its enduring product quality.

company values laid out in tiles

We tested the product tile variations on the Category page. Below you can see the control with their original product imagery and the variant with our quality tiles.

Test idea being applied to product page of shoe company

The ethical responsibility variant includes language surrounding the company's sustainability efforts, like ethical labor conditions and reputable suppliers.

We found that the Ethical Responsibility test variant produced a 5% higher conversion rate than the control, resulting in a return on investment of 237%!

Why It Works

Emphasizing sustainability aligns customers with Beckett Simonon’s values. It connects shoppers to the brand.

Other companies might offer high-quality leather boots but may not source their products from suppliers who care about the environment as Beckett Simonon does.

The test taught the brand what the customers care about and gave customers the extra confidence they needed to make a purchase.

A quality tile test could offer similar results for your brand.

If customers linger on your category pages but never commit to the purchase, try a quality tile test to reveal what your customers care about and how you can deliver that message through the shopping journey.

Test out various messages like:

  • Your brand’s values
  • Your USP (unique selling proposition)
  • Special offers like free shipping

Test Idea #2: Categories In Your Navigation

The Test Idea: Add popular product categories as the items in your navigation.

Why We Love It: There is almost always a way to test something in the navigation and optimize for a better customer experience. Putting categories front and center reduces friction and propels visitors through the sales funnel.

The Category Navigation Test In Action

One client came to us with a menu listing items in their header navigation like Shop, Connect, Discover, Rewards, etc. On top of that, the Shop drop-down listed product categorization with labels like Collections and Themes.

This categorization wasn’t clear enough for new visitors. We had the opportunity to raise awareness of the product catalog by surfacing top categories in the persistent menu navigation.

So, we tested adding top product categories into the top-level menu navigation for better browsing opportunities, product catalog awareness, and increased transactions.

test idea of adding categories to navigation menu

Our variant increased conversion rates over the control, resulting in over $100,000 of revenue gains.

Why It Works

Clear categorization helps visitors easily find the product they want. That means a quicker time between landing on the website and making a purchase decision… and a better customer experience.

Test by using your original navigation menu as the control and your categories for the variant. If you’re unsure which categories to feature in your top-level navigation bar, use Google Analytics to find your top categories and most-visited web pages.

An easily navigable website encourages visitors to explore various product category pages, ultimately reducing any friction along the way.
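
Most A/B testing tools handle the split for you, but the core mechanic behind a navigation test like this is deterministic bucketing: each visitor is consistently assigned to the same arm. A minimal sketch, with placeholder experiment and visitor names:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, arms=("control", "category-nav")) -> str:
    """Hash visitor + experiment so each visitor always sees the same navigation."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# The same visitor lands in the same arm on every visit.
print(assign_variant("visitor-1234", "top-nav-categories"))
```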


Test Idea #3: The Etsy Test

The Test Idea: Feature similar products or “other customers also viewed” items to inspire on-site comparison and decrease abandonment.

Why We Love It: Offering your customers alternatives keeps them interacting and engaging with your website, instead of going to a competitor for comparison shopping.

The Etsy Test In Action

Etsy displays similar products above the fold, encouraging shoppers to stay on-site even if the product they clicked wasn’t what they needed or wanted.

They also add the product prices below the image if customers want to shop for the best deal on similar products.

Etsy product page showing prices below the products

Why It Works

This test is excellent for brands that run a lot of Google Shopping ads. Most people who land on a site from a Google product listing will be comparison shopping. They know the product they are looking for, but don’t necessarily know or care about the brand.

This means the standard practice of surfacing add-ons on the product page to increase average order value won’t be relevant to their needs.

Instead, this test aims to keep people on-site by surfacing appealing alternatives to the product they are already viewing. Let them comparison shop the products on your site rather than comparing your brand to other Google Shopping listings.

Etsy displays similar items above the fold to encourage shoppers to stay on-site even if the product they landed on wasn’t quite the right fit.
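
If you want to prototype a "customers also viewed" module before committing to a platform feature, simple co-viewing counts are a reasonable starting point. A minimal sketch with made-up session data:

```python
from collections import Counter, defaultdict
from itertools import combinations

# Made-up browsing sessions: the products each visitor viewed.
sessions = [
    ["boots-a", "boots-b", "belt-1"],
    ["boots-a", "boots-c"],
    ["boots-b", "boots-a", "boots-c"],
]

# Count how often each pair of products appears in the same session.
co_views = defaultdict(Counter)
for viewed in sessions:
    for a, b in combinations(sorted(set(viewed)), 2):
        co_views[a][b] += 1
        co_views[b][a] += 1

def also_viewed(product_id, k=4):
    """Top-k candidates for a 'customers also viewed' module on a product page."""
    return [product for product, _ in co_views[product_id].most_common(k)]

print(also_viewed("boots-a"))  # ['boots-b', 'boots-c', 'belt-1']
```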

Test Idea #4: Instructional Search

The Test Idea: Test a friendly, instructional search prompt.

Why We Love It: Instructional search encourages intentional browsing, improves UX, and boosts conversions.

The Instructional Search Test In Action

During research for one client, user tests and session recordings revealed that customers primarily navigate through the search bar. We also found that customers only engage with select menu categories.

Our team hypothesized that adding friendly microcopy and enhancing search bar visibility would encourage the use of search and, in turn, increase transactions.

We put it to the test. We visually emphasized the search bar with a white background and updated the language to "Try 'search term.'"

It delivered over $3,000,000 in revenue gains.

winning test idea of increasing search bar visibility

Why It Works

According to Comprend, 59% of web visitors frequently use a website’s internal search navigation, and 15% would rather use the search function than the hierarchical menu.

You don't want your customers to zombie scroll through endless products to find what they need. Internal search helps customers easily locate what they're looking for.

If you run this test on your site, you could try:

  • Adding friendly microcopy to your search bar (include a key product or category for inspiration)
  • Exposing the search bar on mobile
  • Adding a white or light background to the search bar

Test Idea #5: The Buy Box

The Test Idea: Dig into the content in the buy box area to find areas for optimization. Test the buy box layout, product descriptions, or reviews.

Why We Love It: When you optimize the buy box, you improve how quickly and easily customers can understand your product… and make a purchase.

The Buy Box Test In Action

For one client, heat maps and session recordings showed how customers interact with the mobile product pages. Users struggled to see product images, which led to a lack of engagement with other elements in the buy box.

We decided to test out a new layout. Instead of placing the product image first, we prioritized the product name and description, placing the product image below.

winning test of changing layout that increased conversion rate by 26.3%

Decluttering and reorganizing their buy box resulted in a 26% increase in conversions.

Why It Works

Moving around the content in the buy box helps us understand where and what the customer needs to see to make a purchase decision.

Putting key product information above the fold, especially on mobile, removes any guesswork for the customer.

If customers are getting stuck on the product page, test variations of your buy box layout to optimize the user experience. And remember, oftentimes less is more, so don't be shy about testing out reductions in the amount of content or text you feature.

The ultimate goal is to optimize everything on your product page, but the buy box can be a great place to start.

Understand Your Customer & Boost Conversions With 5 Of Our Favorite Tests

In this article, we unlocked our secret vault of test ideas to help inspire your optimization efforts.

While every website is different, these are 5 of the tests we’d run on almost any site to learn more about the user and deliver a better customer experience.

  1. Replace product tiles on the category page with "quality tiles" that feature different messaging to educate customers about your brand.
  2. Test categories in the top-level navigation to reduce friction on your website, propelling customers further down the funnel.
  3. Try the Etsy test to cross-sell similar products and encourage customers to comparison shop on your website.
  4. Test a friendly, instructional search prompt to encourage intentional browsing and improve UX.
  5. Optimize content in the buy box area to improve how quickly and easily customers can understand your product and complete a purchase.

If you don’t have time to optimize your own site, we’re here to help.

Our services dive deep into your site to uncover opportunities to improve the customer experience and maximize conversions.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post 5 Tests We Would Run On (Almost) Any Ecommerce Site appeared first on The Good.

How Laird Superfood’s Team Saw 17:1 ROI On Optimization Efforts with a “Stay Curious, Test To Learn” Mindset https://thegood.com/insights/laird-superfood/ Thu, 01 Dec 2022 17:59:04 +0000 https://thegood.com/?post_type=insights&p=102287 At the heart of a successful ecommerce team is unending curiosity. There’s a relentless pursuit to expose and resolve customer pain points over and over again. Exemplary ecommerce teams leave no stone unturned, making incremental changes based on data and methodical testing. To see it in action, look no further than the ecommerce pros at […]

At the heart of a successful ecommerce team is unending curiosity. There’s a relentless pursuit to expose and resolve customer pain points over and over again.

Exemplary ecommerce teams leave no stone unturned, making incremental changes based on data and methodical testing.

To see it in action, look no further than the ecommerce pros at Laird Superfood: Alisha Runckel, Vice President of Ecommerce and Growth, and Angela Williams, Ecommerce Manager.

Alisha and Angela’s “stay curious and test to learn” mindset has contributed to a best-in-class ecommerce site, a robust subscriber base, and the tripling of their conversion rate, subsequently resulting in hundreds of thousands of dollars in incremental revenue.

We talked to them about the principles that led to their success and what it takes to go from having a few loyal fans to building a subscription empire with sold-out releases.

Laird Superfood: Plant-Powered & Delicious

Laird Superfood aims to provide great-tasting plant-based products that are high-quality, convenient, and available to all. Die-hard fans will know them for their original lineup of plant-based creamers, but in recent years they have expanded to offer premium coffee, baking mixes, and even snack foods with the acquisition of Picky Bars.

The company was founded on the idea that nourishing, plant-based foods can fuel you all day. It began when its founder, pro surfer Laird Hamilton, started adding plant-based foods to his morning coffee to increase his surfing performance. The result was a coffee blend with fantastic flavor and tons of energy.

In 2015, Laird and his friend Paul Hodge launched Laird Superfood. Today they still operate as if products are made for friends and family.

The company believes that the foods you eat should be good for you and the earth, and should enable you to perform at your highest level. They value integrity, authenticity, health, and performance.

A New Website & 3 Key Principles Laid The Foundation Of Laird Superfood’s Success

Today, Laird Superfood has a model website that every ecommerce company would love to emulate. But it wasn’t always that way.

When Alisha started at Laird Superfood, the company was small – just a handful of people on the corporate team. She joined to lead their marketing efforts and learned right away that their ecommerce website was full of disjointed code and lacked the flexibility to test or change anything easily.

At the time, Laird Superfood was a one-size-fits-all, “batch-and-blast” brand. They didn’t segment customers or investigate data cohorts. They had no way to personalize the experience for customers.

Alisha quickly understood that the company needed a clean slate. Alongside a contract developer, she relaunched the site on a new theme and migrated several systems to new apps better suited to the company's needs.

With a new site came the capability to test, learn, and implement. This was essential, as Alisha was determined to lead the team using three key principles:

  • Data-driven decision-making
  • Testing everything
  • Always listening to customers

Following those three principles at every step of their growth was foundational to their success.

“There are no silver bullets,” she told us. “Ecommerce success is an accumulation of good decisions made over time.”

One of the crucial good decisions Alisha made was hiring Angela. Snagged from the customer service team, Angela has a natural inclination toward solving problems for customers with the goal of endearing them to the brand.

With a new set of tools at her disposal, Angela quickly learned the value of testing and became fundamental to the ecommerce team. “This team would be lost without Angela,” Alisha told us.

Laird Superfood was poised for success with a new site and a growing team. But as all clever ecommerce leaders know, those are just the first steps towards the ultimate goal of building a seamless customer experience that converts.

Fostering A Culture Of Experimentation On The Ecommerce Team

Wary of ego plaguing her effort to scale Laird Superfood’s ecommerce vertical the right way, Alisha focused on the antidote: making sure the whole team embodies a “question everything” mentality. This mindset drives them to use the data, challenge their assumptions, and stay curious about website users.

“I think curiosity wins in this business,” Alisha told us. “If you can stay curious and continue to ask questions, you’re going to be successful. Asking, ‘Why are they doing that? Why is that happening?’ every day is so important.”

Alisha knew that gut-based decisions are often based on inaccurate information. The only way to make smart, incremental changes is to base every decision on data.

“When people follow their gut instincts, it can be successful to a point, but if you’re not testing it first, I think you’re going to be very surprised. So we’ve prioritized testing for everything that we do.”

After too many assumptions, you could end up building your website on a house of cards—making decisions based on incorrect assumption after incorrect assumption.

And without experimentation, if by chance you happen to be right, you don’t know why you’re right. But the why is critical to the learning process. You can’t replicate an effect if you don’t know why it happened.

“Sometimes it’s better when you’re wrong because you learn something new and that forms other strings of hypotheses,” she says.

Take their navigation, for example. At first, they wanted to modernize the menu by consolidating everything under one Shop button. Consolidating categories under “Shop” would allow the team to make other important site destinations front and center.

After testing, however, they quickly learned that it wasn’t the right approach for their brand. Their customers prefer to see those primary shopping categories broken out on the page. That was just one assumption they threw out thanks to testing.

In order to foster a culture of curiosity and experimentation in the team, Alisha and Angela place a high value on collaboration and asking questions. They’re politely critical about everything, no matter how small or seemingly irrelevant. They shop online and explore their competitors throughout the week for inspiration, then share their ideas in a weekly brainstorming session.

“Nobody feels like they can’t ask questions,” Alisha says. “Nobody feels like they don’t have a voice. As a manager and a leader, that is – by far – the most important thing.”


Using Data To Understand The Customer & Build A More Personalized Experience

Alisha and Angela’s curiosity-first mindset infiltrated every component of the site: the navigation, header, footer, product filtering, the type of homepage content, product detail page structures, and more.

“We prioritize the user’s experience on our site. So, we focus on projects that will improve their experience, whether that be adding information, improving flow, or making it easier for customers to find what they’re looking for,” shares Angela.

No element, image, or bit of copy was spared from their consideration. Even seemingly simple elements, like the language of their shipping threshold offer, are tested.

“[Testing the shipping threshold messaging was] just a simple test, but one that allowed us to use the right language that resonated the most with our customers,” Angela said. It seems like a small element, but it helped the team better understand what the customer values.

While they continued to fine-tune their site with incremental changes (like tweaks to messaging), the Laird Superfood team simultaneously took on mammoth projects aimed at delivering a more personalized customer experience.

For example, together, Alisha and Angela built Laird Superfood’s Daily Ritual Quiz. If you’re new to the brand and you’ve never tried their products, the quiz will help you find the best products for your needs, based on your responses. At the end, you get the chance to subscribe to those products. These subscriptions offer a number of perks, like free coupons, extra reward points, and more. It’s a powerful way to help new customers find exactly what they need.

The quiz certainly wasn’t easy. It required a number of workarounds and customizations, especially when it came to taking payment. But the results are undeniable. They acquired thousands of new subscribers in the first month.

What's more, the quiz documents user responses and builds a profile that helps them personalize future content and offers within their CRM. If a user says they don't like turmeric, Laird Superfood will never talk to them about turmeric. If a user likes sweet foods, Laird Superfood will focus its messaging on the sweetest products for that user.

These initiatives provide an exceptional experience for their users and collect valuable data that they can use for ongoing relationships with customers.

And ultimately, they increase conversions.

Success Working With The Good On The Digital Experience Optimization Program™

In many ways, The Good is a third player on Laird Superfood’s ecommerce team. There is a shared perspective that to optimize your website, you must start with data, research, and listening to the customer.

When Alisha was originally exploring optimization partners, she immediately recognized The Good’s data-centered mindset and the appreciation of incremental change that were the foundation of her own growth philosophy.

During the first conversation, “there was no doubt in my mind that working with The Good was the right decision for us to make,” she said.

Both teams are highly collaborative, each offering testing ideas for improvement. The Good then designs and develops variants for testing and analyzes the data being collected.

“The Good has definitely been pivotal in our increase in conversion rate,” says Angela. “We rely on them for a lot. The concepting, the design for the test, the execution, and then analyzing the A/B test results to see if we should run iterations or implement winning tests on our website. That partnership has been really pivotal in our success so far.”

The results are undeniable.

Since 2019, Laird Superfood and The Good have implemented 57 experiments with a win rate of 54%.

This year so far, Laird Superfood’s ROI is 17:1 (meaning they earned $17 for every $1 they spent).

These results came from incremental changes and the relentless “test to learn” mindset we keep mentioning.

For example, one test focused on replacing an unnecessary element with social proof. Through a combination of preference testing, competitive analysis, and A/B testing, The Good helped Laird Superfood create a new homepage design.

With the validation of A/B testing, Laird Superfood and The Good removed out-of-place "Add to Cart" buttons and replaced them with each product's star rating. This helps users understand the degree to which other customers like those products (social proof).

The results were a 6.63% uplift in conversion rate, which is incredible for such a seemingly small change.

Screenshot of Laird Superfood website before and after optimization

To be clear, this was just one variant of many experiments to find the highest-converting version of this element. Good testing requires testing many ideas to learn about your users.

Most importantly, small changes like this produce valuable information that can be used on subsequent tests. We learned that Laird Superfood customers are motivated by social proof, specifically ratings and reviews of other customers.

So, this one test could lead to a series of social proof experiments that influence countless elements of the ecommerce store and all of the business’s marketing activities, thereby improving the customer experience and driving more potential customers to product detail pages.

A Powerful Partnership With 17:1 ROI

Alisha and Angela believe working with a collaborative and inspired agency is one of the many smart moves that made Laird Superfood successful.

Small changes to the conversion rate, when stacked up, can unlock serious revenue, but only if you abandon your assumptions and approach every opportunity objectively.

“Your conversion rate is so important,” Alisha told us. “If you have a 2% or 2.5% conversion rate, you're leaving 97.5% of the people out. So if you can capture even half a percent more, that's meaningful. And it pays for itself.”

Together, the ecommerce team at Laird Superfood and The Good turned the site into a conversion beast that emphasizes the customer experience. They used data to make smart decisions that boost the customer experience and let incremental changes add up over time.

Alisha calls their partnership with The Good a “no-brainer” and The Good feels the same way. Alisha and Angela’s culture of curiosity, objective thinking, and methodical experimentation make them powerful ecommerce experts.

And something we all agree on: the key is to “stay curious and test to learn.”


The post How Laird Superfood's Team Saw 17:1 ROI On Optimization Efforts with a “Stay Curious, Test To Learn” Mindset appeared first on The Good.

How to Build an Efficient A/B Testing Roadmap https://thegood.com/insights/building-ab-testing-roadmap/ Thu, 06 Aug 2020 17:43:27 +0000 https://thegood.com/?post_type=insights&p=93162 If you’re thinking about starting an optimization program for your digital product, you’re likely inundated with questions about how to get started – specifically, how to build an efficient A/B testing roadmap. How should you decide what to optimize on your site, and in what order? How do you identify high-value testing opportunities? How can […]

If you’re thinking about starting an optimization program for your digital product, you’re likely inundated with questions about how to get started – specifically, how to build an efficient A/B testing roadmap.

How should you decide what to optimize on your site, and in what order? How do you identify high-value testing opportunities? How can you prevent multiple tests from interfering with each other? 

The optimization process can be a bit overwhelming for any stage of business, which is why creating an A/B testing roadmap is such a critical step. The process of creating an optimization roadmap is essential because it requires you to define your goals, align with stakeholders, and assess priorities and risks; it’s not just about outlining a testing schedule. 

Whether you're a researcher, an analyst, a marketer, or an optimization specialist, this insight is designed to connect those from any discipline with the clear steps they need to create an optimization roadmap. We'll cover how to:

  • Start with clear objectives
  • Conduct research to establish website challenges
  • State the challenges and isolate testing opportunities
  • Step away and get inspired
  • Formulate testing hypotheses
  • Prioritize your tests

Start with clear objectives

To set a testing program off to a good start, teams and individuals should make sure they are aligned on what they hope to achieve. The objectives you select may depend on a number of factors, including the maturity of your brand, the saturation of the market, and the competitive landscape.

Examples of common A/B testing roadmap objectives include: 

  • Improve conversion rate to purchase
  • Increase average order value
  • Increase new user product engagement

For teams just starting out, we recommend focusing on only 1-3 primary testing objectives and ranking those in order of priority. Increasing conversion rates might be more important to your team than improving email signups, so knowing where to put your time and attention and aligning across the testing team (and other stakeholders) is a non-negotiable first step.   

Conduct research to establish website challenges

When you’re aligned on what you’re aiming to improve with your testing program, start the research. Research is conducted to set baselines, surface patterns, and establish challenges. 

There are two types of research, and both are important: quantitative and qualitative.

Quantitative research techniques are used to establish metric baselines and surface potential friction points in the user journey. Qualitative research adds a human element to data patterns; it tells a story that the data sometimes can't on its own.

Quantitative Research: Every team approaches research differently, but our method relies on starting with the data. By looking at one to two years of analytics reports, we form hypotheses for what’s happening across the user journey. Scope should include reviewing things like channel mix, landing pages, time-on-site, and events across the site experience.

After a thorough data analysis, you should have a good understanding of two things: optimization areas and baseline metrics.

  • Optimization Areas are the pages or areas of the site that can be optimized to improve the customer experience and influence your established goals.
  • Baseline Metrics are numbers that represent how your product or website is performing today in areas that are important to you. (Advanced reports will include how those metrics change based on traffic channel, device type, landing page, or seasonal fluctuation.) A quick sketch of computing these baselines follows below.
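
A minimal sketch of computing those baselines, assuming a session-level export from your analytics tool (the columns and values below are illustrative):

```python
import pandas as pd

# Illustrative analytics export: one row per session.
sessions = pd.DataFrame({
    "channel":   ["paid", "paid", "organic", "email", "organic", "paid"],
    "device":    ["mobile", "desktop", "mobile", "mobile", "desktop", "mobile"],
    "converted": [0, 1, 0, 1, 0, 0],
})

# Overall baseline conversion rate, then the segmented view advanced reports describe.
print(f"site-wide conversion rate: {sessions['converted'].mean():.1%}")
print(sessions.groupby(["channel", "device"])["converted"].agg(["mean", "size"]))
```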

Qualitative Research: When it comes to telling a story with the data, we look to qualitative research. Qualitative methodologies like conducting user testing, cataloging session recordings, and designing open format surveys are non-negotiable for our team; while the data can show us where users are dropping off, qualitative methods tell us why. 

State the challenges and isolate A/B testing roadmap opportunities

After thorough research, you should have clarity on the established challenges. For our team, this manifests in a literal list of friction points that need to be addressed, but for the purpose of demonstration, let’s imagine that we wrote all of our challenges on yellow sticky notes. 

A/B testing roadmap begins with established website challenges like in this image

A note on challenges: as you compile a database of challenges, stay user-focused rather than product-focused. The problem with product-focused challenges is that they tend to hint at a solution you've already thought up. Their prescriptive nature will have you optimizing through brute force rather than thoughtful finessing. User-focused challenges work to address user needs and improve the customer experience. By focusing on user needs, you'll unearth new ways to solve the challenges presented to you in the research phase. So stay user-focused.

Looking at the well-researched, established challenges in front of you, it’s time to divide and conquer. 


When it comes to sussing out what makes a good testing opportunity, our team uses a combination of a point system and a gravity method, but every team does this differently. A simple version of defining testing opportunities is to put challenges into three buckets: Implement, Test, & Consider (a code sketch of this triage follows the list below).

  • Implement. This bucket should contain all of the problems that are so low-risk that it’s an easy decision to just solve them immediately.
    • Examples: website bugs, form field errors, and missing content. 
  • Test. This bucket is generally the largest group of concepts. Use this bucket for challenges where the solution may be less clear, the challenge could be solved with multiple solutions, or the test itself will teach us something valuable about our audience.
    • Examples: hero messaging where internal interests are divided, filter layouts where the important filter categories are undefined. 
  • Consider. This bucket is usually a small but mighty list of challenges just not suited for testing, either requiring deeper consideration or great resources to address.
    • Examples: platform limitations, reshooting product images, or rethinking product names. 
divide your established website challenges into three categories like in this image
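
Here is that triage sketched in code, as promised above. The challenges, risk labels, and rules are placeholders; real triage calls involve more judgment than a two-branch function can capture.

```python
# Placeholder research findings; a real list would come out of your research phase.
challenges = [
    {"name": "Broken promo-code field", "risk": "low", "solution_clear": True},
    {"name": "Hero messaging misses the mark", "risk": "medium", "solution_clear": False},
    {"name": "Product photos feel dated", "risk": "high", "solution_clear": False},
]

def triage(challenge):
    """Route each finding into the Implement, Test, or Consider bucket."""
    if challenge["risk"] == "low" and challenge["solution_clear"]:
        return "Implement"  # low-risk, obvious fix: just do it
    if challenge["risk"] != "high":
        return "Test"       # several plausible solutions: run an experiment
    return "Consider"       # needs deeper thought or greater resources

for challenge in challenges:
    print(f"{triage(challenge):<9} <- {challenge['name']}")
```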

Step away and get inspired 

Once you've compiled a deep bench of testing opportunities, much of the hard work (for this round) is done, so give yourself a pat on the back, acknowledge the milestone, and go explore. This is where a hunger for good user experiences comes in handy.

In order to gain some fresh perspective, it's at this point that we recommend stepping away from the problem in front of you. This could mean looking at competitive user experiences, drawing on your experience in the real world, or sleeping on the problem.

One way our team at The Good formalizes this process is with a collaborative weekly meeting where we evaluate web experiences for three buckets of content: stealable, not stealable, and questionable. This open-format approach tends to be quite fun, and it’s been a great way to maintain a culture of experimentation and collaboration. These sessions train our eyes and inspire debate, but they also fuel inspiration that we bring to the design phase. Win-win! 

As you explore, take note of the moments when you say “that might work for this challenge” – those are the hypotheses forming and they represent the spark of inspiration. You’ll capture your hypotheses in the next step. 

Formulate Testing Hypotheses

I hesitate to call hypothesizing a whole step unto itself, because for our team, hypotheses are an important part of the design process; the design and hypothesis happen in a constant dance where who’s in the lead can shift and change. 

Maggie Paveza, a strategist here at The Good, says:

“I typically have the ‘what I hope to impact’ part of the hypothesis down as a result of the research phase, but the ‘how I plan to do it’ really comes out of creating the design or having a seed of an idea.”

Whether the chicken or the egg comes first, the important step here is to catalog a hypothesis and attach it (either physically on post-its, or digitally in a project management tool) to the challenge that it’s solving for.

This assures that you don’t lose sight of the user challenge, which is our driving force, after all! 

Helpful Hint: Hypotheses are not simply the inverse of challenges. Remember how we advised on surfacing user-focused challenges? A user-focused challenge can inspire multiple hypotheses, and a hypothesis can solve for multiple challenges. Read more on crafting a good hypothesis.

Prioritize your tests

Once your testing opportunities are defined and you’ve accumulated several worthy testing hypotheses, many folks will want to jump right into design. But you probably don’t have the resources or desire to just start designing every solution at once. This is where careful A/B testing roadmap prioritization comes in. 

Be warned that prioritizing is not always simple. For individuals with some experience, identifying the biggest opportunities will probably be second nature. For teams, however, there are usually politics involved, which is where various established prioritization models* can come in handy.

For those just beginning a testing program, we recommend keeping it simple: organize your tests by funnel point (or page) and select a few particularly exciting opportunities across various points of the conversion funnel. This minimizes decision paralysis and has the benefit of keeping your team motivated; working on what's exciting will keep your team invested in the process long enough to gain momentum. One simple scoring approach is sketched below.

the next step in the A/B testing roadmap, prioritize your test concepts
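
As one example of the simple scoring promised above: an ICE-style score (impact, confidence, ease, each rated 1-10) is among the simplest of the established prioritization models. The test ideas and ratings here are invented for illustration.

```python
# Hypothetical test ideas with 1-10 ratings from the team.
ideas = [
    {"name": "Category filter cleanup", "impact": 7, "confidence": 6, "ease": 8},
    {"name": "Checkout flow redesign", "impact": 9, "confidence": 5, "ease": 2},
    {"name": "Search bar microcopy", "impact": 5, "confidence": 7, "ease": 9},
]

def ice_score(idea):
    """Multiply impact, confidence, and ease into a single sortable score."""
    return idea["impact"] * idea["confidence"] * idea["ease"]

for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{ice_score(idea):>4}  {idea['name']}")
```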

Prioritize, design, critique, then repeat

Once you have a prioritized list of testing opportunities, congratulations! You have created a testing roadmap. But simply having a roadmap does not mean that the prioritization is done. 

At this point, you’re ready to move on to design. As you work through the design phase, allow yourself the authority to re-prioritize your roadmap. We recommend regular meetings to collectively evaluate upcoming test hypotheses and designs before they go to development. 

Formalizing this pre-development review is a valuable way to improve your testing skills and keep the visual design aligned to brand guidelines. But there is a more important outcome of this meeting, which is that a natural micro-prioritization happens simultaneously.

Reviews will occasionally reveal that a design cannot be executed within the testing environment or that additional creative and/or development resources are needed. In those cases, you may find that you either need to simplify a design or altogether deprioritize a concept while you compile needed collateral. This micro-prioritization assures your team maintains momentum. They’ll progress with easier tests while compiling the needed resources for more complex challenges in the meantime. The resulting sprints will contain a healthy mix of testing opportunities with varying levels of ease and impact, and your team will learn a lot in the process. 

As you get more sophisticated with experimentation, make sure you're armed with a great prioritization system like the ADVIS'R Prioritization Framework™. It's best for teams that are already running experiments, want to develop a more systematic approach to experimentation, and have oversight from a decision-maker who wants transparency into the process.

ADVIS'R Model

What’s next for your A/B testing roadmap?

As you build your A/B testing capabilities, don't let overthinking get in the way of actually launching tests. Eventually you may want to plan your testing roadmap for the clearest results, but in the early days of experimentation, done is better than perfect.

Young testing teams often want to increase their A/B testing velocity, but taking the proper approach to measuring the impacts of your tests will help your team grow in expertise. As you launch and close your tests, measuring the impact can be as much fun as finding the opportunities in the first place! Evaluate testing outcomes with a keen eye for iteration and other potential tests.

As you tackle your ideas and your existing roadmap grows shorter, be sure to conduct periodic conversion research to surface new opportunities and keep an open feedback loop with your audience. We’re all about helping new testing teams cultivate a culture of experimentation, so if you’re looking for expert advice on how to build the strength and collaborative skills of your new testing team, reach out to us.

Happy testing! 

*A word on prioritization models: Prioritization models come in all shapes and sizes, and there is no one right way. These models help in surfacing the biggest opportunities, overcoming bias, and putting effort in the right place. But the right model for your team depends on factors including who is in the room and what the culture of the organization is. Ease, for instance, might be very important to a team of one, but not as important to a team with a highly experienced A/B test developer.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post How to Build an Efficient A/B Testing Roadmap appeared first on The Good.

How to Conduct High Impact User Testing: Part 3 – Analyzing the Results https://thegood.com/insights/user-testing-part-3/ Fri, 19 Jul 2019 22:35:04 +0000 https://thegood.com/?post_type=insights&p=90664 This is the third and final installment of our three-part series, How to Conduct High Impact User Testing. To start from the beginning, read Part 1: Thoughtful Panel Selection. Analyzing and interpreting the results of your user testing sessions can be a daunting task. A standard test with five to ten participants can potentially produce […]

This is the third and final installment of our three-part series, How to Conduct High Impact User Testing. To start from the beginning, read Part 1: Thoughtful Panel Selection.

Analyzing and interpreting the results of your user testing sessions can be a daunting task. A standard test with five to ten participants can potentially produce sixty or more UX issues. It’s important that you know how to filter through these issues so you don’t end up wasting your time developing solutions that don’t address the broader usability issues at hand. 

We previously covered in parts one and two of this series how to select appropriate user testers and what questions you should be asking those users. In this final part we’ll be focusing on how you can turn the data that was collected in your user testing sessions into quick wins, A/B test ideas, and bigger strategies. 

How to properly analyze and interpret your testing results

After you’ve invested your time and resources into selecting a great panel of testers, formulating tasks and questions, then collecting all of the results of the testing session, you’re ready to interpret those results into usable findings.

Through our experience (listening to dozens of user recordings every day!), we’ve identified a number of advantageous tips that you can use to get the most value out of your user research.

Return to Your Original Objectives

As you start to analyze your user testing sessions, reflect on the objectives you created when you first launched your user tests. What were the goals that you and your team established for the test? Reflecting on these objectives will allow you to generate valuable insights that you can share with your team. 

Don’t take user tester suggestions literally

User testers often suggest solutions to the micro-frustrations they encounter, but users are notoriously bad at thinking outside the box, and we caution you against letting these off-the-cuff "solutions" directly guide your design process. In our experience we've heard some pretty wild suggestions from user testers, but oftentimes these suggestions don't address the root of the issue.

As the old adage (most likely apocryphal, but regularly cited) goes, Henry Ford said, “If I had asked people what they wanted, they would have said faster horses.” While user frustrations are valid, their suggestions are typically laser-focused on the problem directly in front of them and lack the data and context that researchers are armed with.

So rather than getting caught up with tedious UX fixes, wait until enough feedback is collected to roll up these issues into a hypothesis that will address greater opportunities.

Capture quotes that reveal misunderstandings, not just opinions

Direct user quotes are a great way to understand which page elements miss the mark, but make sure to capture misunderstandings as well. Users will frequently say they're likely to do something, then get lost along the way.

Quotes often carry subjective feedback, but capturing the language users choose is valuable because it helps you understand sentiment, both positive and negative. Even when what a user says doesn't match their actions, it shows you what they're noticing. This information is most easily found in direct quotes, because it's nearly impossible to tell what users expect to find on your site from heatmaps or analytics alone.

Don’t trust the user. Be skeptical and read between the lines

Listening to what your user testers say is important, but it's even more important to pay attention to what they do. Users will often rate their experience on a site favorably despite struggling far more than they would on a better-optimized site. Even when users spend longer than average hunting for pertinent information, they often rate the site's ease of navigation highly. So take users' opinions with a grain of salt.

Users are often willing to forgive and forget a subpar site experience because they believe they have failed the site, rather than that the site has failed them. Even when a user offers no verbal feedback, pay attention to how long they spend on each task and what they fail to notice.
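
To make that say-versus-do gap concrete, here is a hypothetical sketch that flags sessions where a participant rated a task as easy but took much longer than the median time on task. The session data, the 1.3x threshold, and the 1-5 rating scale are all assumptions for illustration.

```python
# Hypothetical sketch: flag sessions where a participant rated a task
# "easy" yet took much longer than the median time on task, the
# forgiving self-report pattern described above. Data are made up.
from statistics import median

sessions = [
    # (participant, seconds on task, self-rating: 1 = hard .. 5 = easy)
    ("P1", 95, 5),
    ("P2", 240, 5),  # struggled, but still rated the task easy
    ("P3", 110, 4),
    ("P4", 300, 2),  # struggled and said so; no mismatch here
]

typical = median(seconds for _, seconds, _ in sessions)
for participant, seconds, rating in sessions:
    if seconds > 1.3 * typical and rating >= 4:
        print(f"{participant}: {seconds}s vs. median {typical:.0f}s, "
              f"rated {rating}/5; re-watch this recording")
```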


Prioritize the identified issues

Prioritize the issues you've identified across your user testing sessions so you can determine which problems are most critical. It's easy to underestimate how many problems even a handful of user testers will uncover on your site, so once you have a complete list, organize it by impact and significance. You don't want to get caught up in tedious, minor UI fixes while larger, glaring issues go unaddressed.

Save yourself time and create a spreadsheet of all the problems you've identified. Organize them by how frequently each issue occurred and how much impact it had on the overall user experience, then use the spreadsheet as a checklist as you decide which usability issues to tackle first. One lightweight way to rank the list is sketched below.
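
Here is a minimal sketch of that issue log, assuming a simple 1-3 impact scale and a frequency-times-impact severity score. The scale, the scoring, and the example issues are assumptions; use whatever scheme fits your team.

```python
# Minimal sketch of the issue log described above. The 1-3 impact
# scale, the severity score, and the example issues are assumptions.
import csv

issues = [
    # (issue, frequency = participants who hit it, impact: 1 = minor .. 3 = blocking)
    ("Couldn't find the size chart on product pages", 6, 2),
    ("Promo code field mistaken for site search", 3, 3),
    ("Footer links too small to tap on mobile", 2, 1),
]

# Rank by a simple frequency x impact severity score.
ranked = sorted(issues, key=lambda row: row[1] * row[2], reverse=True)

with open("usability_issues.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["issue", "frequency", "impact", "severity"])
    for issue, frequency, impact in ranked:
        writer.writerow([issue, frequency, impact, frequency * impact])
```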

It’s time to start testing!

Analyzing and interpreting the results of your testing sessions is easily the most time-consuming, but also the most valuable, part of the user research process. Once you've invested the time and resources in a series of user testing sessions, it's crucial to draw as much insight and value as you can from the results.

We hope the tips we provided can help you run a more successful and efficient user testing program in the future. This concludes our three-part series on user testing. If you missed parts one or two, make sure you read those as well and let us know your thoughts!

If you’re looking for the best usability testing platform to use for your next round of user research, make sure to check our comprehensive list of every usability testing tool imaginable.

About the authors:

Natalie Thomas is the Director of CRO & UX Strategy at The Good. She works alongside ecommerce and lead generation brands every day to produce sustainable, long-term growth strategies.

Maggie Paveza is a CRO Strategist at The Good. She has over five years of experience in UX research and Human-Computer Interaction, and serves as the team's expert on user research.

The post How to Conduct High Impact User Testing: Part 3 – Analyzing the Results appeared first on The Good.

]]>