MaxDiff Analysis: A Case Study On How to Identify Which Benefits Actually Build Customer Trust

When a SaaS company approached us after noticing friction in their trial-to-paid conversion funnel, they had a specific challenge: their website was generating demo requests, but prospects weren’t converting to customers. User research revealed a trust problem. Potential buyers were saying things like, “I need more proof this will actually work for a company like ours,” and “How do I know this won’t be another failed implementation?”

The company had assembled a list of proof points they could showcase on their homepage: years in business, number of integrations, customer counts, implementation guarantees, security certifications, industry awards, analyst recognition, and more. But they only had space to highlight four of these benefits prominently below their hero section. They faced the classic messaging dilemma: which trust signals would actually move the needle with prospects evaluating B2B software?

This is where MaxDiff analysis becomes valuable. Instead of relying on stakeholder opinions or generic best practices, we could let their target buyers vote with data on what mattered most.

What makes MaxDiff analysis different from other survey methods

MaxDiff analysis (short for Maximum Difference Scaling) is a research methodology that forces trade-offs. Rather than asking people to rate items individually on a scale, MaxDiff presents sets of options and asks participants to identify the most and least important items in each set. This forced-choice format reveals true preferences because people can’t rate everything as “very important.”

Here’s why this matters: traditional rating scales often produce compressed results where everything scores high. When you ask customers, “How important is X on a scale of 1-10?” most people will hover around 7 or 8 for anything remotely relevant. You end up with a spreadsheet full of similar numbers and no clear direction.

MaxDiff cuts through that noise. By repeatedly asking “which of these five options matters most to you, and which matters least?” across different combinations, you build a statistical picture of relative importance. The math behind MaxDiff generates a best-worst score for each item, showing not just which options are preferred, but by how much.
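
To make that arithmetic concrete, here's a minimal sketch of the best-worst calculation using made-up counts (none of these numbers come from the study):

```python
# Toy example: a best-worst score is the number of times an item was
# picked "most important" minus the times it was picked "least important".
picks = {
    "Customer reviews":   {"best": 22, "worst": 2},
    "Years in business":  {"best": 5,  "worst": 9},
    "Employee headcount": {"best": 1,  "worst": 18},
}

for item, counts in picks.items():
    score = counts["best"] - counts["worst"]
    print(f"{item}: {score:+d}")

# Customer reviews: +20
# Years in business: -4
# Employee headcount: -17
```

Production MaxDiff tools typically go further, fitting hierarchical Bayes or multinomial logit models to the raw choices, but the count difference is the intuition behind the score.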

For digital experience optimization, this methodology is particularly useful when you need to prioritize limited real estate on a website, determine which features to build first, or figure out which messaging will actually differentiate your brand.

How we structured the MaxDiff study for maximum insight

For this client project, we started by defining the target audience precisely. The company was a B2B SaaS platform serving mid-market operations teams, so we recruited 60 participants who matched their customer profile: director-level or above at companies with 50-500 employees, working in operations or supply chain roles, currently using at least two SaaS tools in their workflow, and actively evaluating solutions within the past six months.

From the initial audit and stakeholder interviews, we identified 11 potential trust signals the company could emphasize on its homepage. These included things like:

  • Concrete numbers (customer counts, uptime percentages, integrations available)
  • Credentials (security certifications, enterprise clients)
  • Promises (implementation timelines, support response times, money-back guarantees)
  • And more

Each represented something the company could truthfully claim, but we needed to know which ones would build the most trust with prospects evaluating the platform.

The survey design was straightforward. Each participant saw these 11 benefits randomized into multiple sets of five items. For each set, they selected the most important factor and the least important factor when considering whether to adopt this type of software. Participants completed several rounds of these comparisons, seeing different combinations each time.

This approach gave us enough data points to calculate a robust best-worst score for each benefit: the number of times it was selected as “most important” minus the number of times it was selected as “least important.” Positive scores indicate strong preference, negative scores indicate low importance, and the magnitude of the score shows the strength of feeling.
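
For a sense of the mechanics, here is a rough simulation sketch. The benefit names, the number of rounds, and the respondents' answers are all placeholders; real studies typically use balanced designs (so each item appears equally often) and, of course, real humans:

```python
import random
from collections import Counter

BENEFITS = [f"Benefit {i}" for i in range(1, 12)]  # 11 placeholder items

def make_choice_sets(items, set_size=5, rounds=7, rng=random):
    """Randomize items into choice sets; each round shows a different
    combination of five. Real studies usually balance how often each
    item appears, rather than sampling purely at random."""
    return [rng.sample(items, set_size) for _ in range(rounds)]

scores = Counter()
for respondent_id in range(60):  # mirrors the study's 60 participants
    rng = random.Random(respondent_id)
    for choice_set in make_choice_sets(BENEFITS, rng=rng):
        best = rng.choice(choice_set)  # stand-in for a real "most important"
        worst = rng.choice([b for b in choice_set if b != best])
        scores[best] += 1
        scores[worst] -= 1

for benefit, score in scores.most_common():
    print(f"{benefit}: {score:+d}")
```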

The results revealed a clear hierarchy of trust signals

When we analyzed the MaxDiff results, the pattern was striking. The top-scoring benefits shared a common theme: they provided concrete evidence of proven reliability and satisfied users. The bottom-scoring benefits? They emphasized company scale and marketing visibility.

[Chart: SaaS trust signals ranked by MaxDiff best-worst score]

The four highest-scoring trust signals were clear winners. G2 or Capterra ratings scored 38 points, the highest score in the study, indicating this benefit was nearly universal in its importance. The number of active customers scored 30 points. An implementation guarantee (“live in 30 days or your money back”) scored 25 points. And SOC 2 Type II certification scored 16 points.

These weren’t arbitrary marketing metrics. They were the specific signals that would make someone think, “this platform delivers real value and other companies trust them.”

The middle tier included operational details that registered as minor positives but weren’t decisive: the number of successful implementations (7 points), availability of 24/7 support (6 points). These signals suggested competence but didn’t particularly move the needle on trust.

Then came the surprises. Years in business scored -5 points, indicating it was slightly more often selected as “least important” than “most important.” The number of integrations available scored -11 points. Claims of AI-powered features scored -15 points. Employee headcount scored -36 points. And recognition as a Gartner Cool Vendor scored -55 points, the lowest score in the study.

Think about what prospects were telling us: “I don’t care that you have 200 employees or that Gartner mentioned you. Show me that real companies like mine trust you and that you’ll actually deliver on your promises.”

Why customers rejected company-focused metrics

The findings revealed an insight into trust-building that extends beyond this single company. B2B buyers weigh social proof and reliability guarantees far more heavily than they weigh indicators of company scale or industry recognition.

When a business talks about its employee headcount or analyst mentions, prospects interpret this as the company talking about itself. These metrics answer the question “How big is your business?” but not “Will this solve my problem?” From the buyer’s perspective, a larger team or Gartner mention doesn’t necessarily correlate with better software or smoother implementation.

By contrast, user reviews and customer counts answer the implicit question every prospect has: “Did this work for companies like mine?” A guarantee directly addresses risk: “What happens if implementation fails?” Security certifications address legitimacy: “Is this platform secure enough for our data?”

The AI-powered features claim scored poorly, likely because it felt trendy rather than practical. Prospects for this specific business weren’t primarily concerned about cutting-edge technology; they wanted a platform that would reliably solve their workflow problems. Leading with an AI angle, while possibly true, didn’t address the core decision-making criteria.

Years in business scored negatively for similar reasons. While longevity can signal stability, in this context, it didn’t address the prospect’s immediate concerns about implementation speed and user adoption. A company could be around for years while providing clunky software with poor support.

From insight to implementation: turning research into revenue

The MaxDiff analysis gave the company a clear action plan. We recommended implementing a four-part trust signal section directly below their homepage hero, featuring the top four scoring benefits in order of importance.

This meant reworking their existing homepage structure. Previously, they had emphasized their implementation guarantee in the hero area while burying customer counts and ratings further down the page. The research showed this approach had it backward. Prospects needed to see evidence of customer satisfaction first, then the implementation guarantee as additional reassurance.

We also recommended removing or de-emphasizing several elements they had been proud of. The employee headcount mention, the Gartner recognition, and several other low-scoring items were either removed entirely or moved to less prominent positions on the site. The goal was to prevent low-value signals from crowding out high-value ones.

The broader lesson here extends beyond this single homepage optimization. The MaxDiff results provided a messaging hierarchy that the company could apply across its entire go-to-market strategy. Email campaigns, landing pages, sales conversations, demo decks, and even their LinkedIn company page could now emphasize the trust signals that actually mattered to prospects.

When MaxDiff analysis makes sense for your business

MaxDiff is particularly valuable when you’re facing a prioritization problem with limited data. It works best in these scenarios:

  • You have more options than you can implement. Whether that’s features to build, benefits to highlight, or messages to test, MaxDiff helps you choose wisely when you can’t do everything at once.
  • Stakeholder opinions are conflicting. When internal debates about priorities can’t be resolved through argument, customer data settles the question. MaxDiff provides quantitative evidence for decision-making.
  • You need to differentiate in a crowded market. If competitors are all saying similar things, MaxDiff reveals which specific claims will break through. Often, the winning messages are ones companies overlook because they seem “obvious” or “not unique enough.”
  • You’re optimizing for a specific audience segment. Generic research about “customers in general” often produces generic insights. MaxDiff works best when you recruit participants who precisely match your target customer profile.

The methodology has limitations worth noting. It requires careful setup, and you need to know which options to test before you start: if you don't include the right benefits in your initial list, you won't discover them through MaxDiff. It also works best with a reasonably sized set of options (typically 5-15 items). And the results tell you about relative importance, not absolute importance; everything could theoretically matter, but MaxDiff reveals the hierarchy.

How to use MaxDiff findings in your optimization strategy

Once you have MaxDiff results, the application extends beyond simply reordering homepage elements. The insights should inform your entire digital experience.

Your messaging architecture should reflect the importance hierarchy. High-scoring benefits deserve prominent placement, repetition across pages, and detailed explanation. Low-scoring benefits can either be removed or repositioned as supporting rather than leading messages.

Your testing roadmap should prioritize changes based on MaxDiff findings. If customer reviews scored highest in your study, test different ways of showcasing reviews before you test other elements. Let the data guide your experimentation priorities.

Your content strategy should emphasize what customers care about. If service guarantees scored highly, create content that explains the guarantee in detail, shares stories of when it was honored, and addresses common concerns. Build your editorial calendar around the topics MaxDiff revealed as important.

Your sales enablement should align with customer priorities. If the research showed that prospects value licensing credentials, make sure your sales team knows to emphasize this early in conversations. Create collateral that highlights the trust signals that matter most.

The most effective companies use MaxDiff as one tool in a broader research program. They combine it with qualitative research to understand why certain benefits matter, behavioral analytics to see how users interact with different messages, and continuous testing to validate that the predicted preferences translate into actual conversion improvements.

Turning guesswork into growth

The SaaS company we worked with started with 11 possible messages and no clear sense of which would build trust most effectively with B2B buyers. After the MaxDiff analysis, they had a data-backed hierarchy that let them confidently restructure their homepage and broader messaging strategy.

This is the power of asking prospects the right questions in the right way. Not “do you like this?”, which produces inflated scores for everything. Not “rank these 11 items,” which overwhelms participants and produces unreliable data. But rather a series of repeated forced choices that reveals the true importance of each element.

If you’re struggling with similar prioritization challenges (too many options, limited space, stakeholder disagreement about what matters), MaxDiff analysis might be the tool that breaks through the noise. It transforms subjective opinion into statistical evidence, letting your prospects vote on what will actually convince them to choose your platform.

Ready to discover which messages actually resonate with your customers? The Good’s Digital Experience Optimization Program™ includes research methodologies like MaxDiff analysis to help you prioritize changes based on real customer preferences, not guesswork.


How User-Centered Prioritization Helps Improve Feature Adoption Rates

Imagine launching a feature and knowing it will be a hit. What if you could flip the script on wasted development efforts and build only what your users truly crave?

For most SaaS companies, a high feature adoption rate is linked to increased upgrades, retention, and loyalty. When users fully adopt a product, they integrate it into their daily workflow and continually find value.

But alarmingly, a “reasonable” feature adoption rate in SaaS is considered to be between 20% and 30%, and only about 20% of launched features are ever used. We can all understand that some features won’t hit the mark, but should we really accept that up to 80% of the features we build will go unused?

I don’t think so.

By refining the prioritization process, we can make sure you’re working on the right features that will drive value for users and improve feature adoption rates. And it starts with understanding your user.
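
Before diving in, it helps to pin down the metric itself. As a working definition (teams vary on the exact denominator, so treat this as one common convention rather than a standard), feature adoption rate is the share of active users who actually use the feature:

```python
def feature_adoption_rate(feature_users: int, active_users: int) -> float:
    """One common convention: the share of active users who used the
    feature in a given period. Agree on the denominator (all active
    users vs. users the feature is relevant to) before benchmarking."""
    return feature_users / active_users if active_users else 0.0

# e.g. 240 of 1,000 monthly active users touched the feature
print(f"{feature_adoption_rate(240, 1_000):.0%}")  # 24%
```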

Reasons for low feature adoption

As product teams focus on developing innovative capabilities and addressing technical debt, the gap between feature development and feature adoption widens.

Underperforming features are a drain on your company’s time and resources. But what causes low adoption rates in the first place?

  • Lack of awareness: The new feature isn’t presented/marketed to users in a compelling way
  • Wrong messaging: The marketing message doesn’t resonate with users, and they’re unaware of the benefits
  • Bad feature: The feature doesn’t actually address a user’s need or pain point

While these are the three most commonly cited reasons for low feature adoption, we’ve found that these symptoms often stem from underlying issues with how features are prioritized for development and release. Teams let internal assumptions, stakeholder requests, or competitive pressures (rather than genuine user insights) drive priorities. In turn, the wrong features are released, spurring feature bloat, low adoption rates, and more.

Think of that ‘AI-powered suggestion’ feature that no one uses. Was it truly solving a user need, or just a cool tech demo?

We’ve seen firsthand with clients how prioritization directly impacts performance. When companies prioritize effectively, they stay focused on what is proven to deliver results. And when they don’t, the opposite happens.

What is user-centered prioritization?

There are plenty of ways to address low feature adoption, but user-centered prioritization might be the lever you’re overlooking.

User-centered prioritization is an approach that places the user at the heart of every decision regarding feature development and enhancement.

It’s a systematic way to ensure that the features you build truly solve your users’ problems, meet their needs, and provide the most value. This contrasts with traditional prioritization methods that might heavily weigh internal opinions, market trends, or ease of development.

With user-centered prioritization, you leverage user research, behavior analytics, and feedback loops to make data-driven development decisions. By understanding not just what users say they want, but how they actually behave, product teams can make more strategic choices about which features to build, when to release them, and how to position them for maximum adoption.

User-centered prioritization is the first step to higher feature adoption rates

We’ll get to some specific strategies in a minute, but for now, I want to provide some additional context on why user-centered prioritization is the first step to higher feature adoption.

It’s more than just a method; it’s a mindset. The core idea is to build products and services that truly solve user problems and provide a positive experience.

When faced with a long list of potential features or improvements, user-centered prioritization helps teams decide what matters most. It can:

  • Identify pain points and focus on features that directly address user frustrations or obstacles.
  • Home in on the most frequent and important tasks users want to accomplish, then prioritize content and features that support these tasks.
  • Visualize the user journey and break it down into actionable user stories, prioritizing those with the highest potential impact on user satisfaction and business goals.
  • Classify and prioritize usability problems based on their impact on user task completion, frequency, and ease of fix.

To make any of this happen, you need a deep understanding of users. Prioritization begins with thorough user research. Use various methods such as interviews, surveys, observational studies, usability testing, and analyzing user data to gather insights. Try to build an understanding of how and where users will interact with the feature.

In essence, user-centered prioritization ensures that product development efforts are aligned with what users truly need and value, leading to ethical and successful products.


Strategies for improving feature adoption rates with user-centered prioritization in the customer journey

So, what does it look like in action when a user-centered approach dictates feature development and deployment? Here are eight strategies.

1. Build the right features

The foundation is building the right features, and ensuring you don’t do it in a vacuum. As we have covered, you need user research to understand the problems you are trying to solve.

Before moving to development, test concepts and prototypes with real users to ensure the feature addresses a need and has a clear value proposition.

Prioritize features that will deliver the most significant value to users, not just those that are “nice to have” or technically interesting.

2. Be clear about the value proposition of your feature

Users need to understand why a feature is beneficial for them and how it solves a problem, not just what it does. Articulate this clearly in all communications.

Each feature should address a distinct user pain point or enable a new, valuable capability. Ideally, use language that has been tested and proven to clearly convey the value of the feature.

3. Make onboarding frictionless

Segment users (by role, industry, goals, etc.) and tailor onboarding experiences. A marketing professional might need a different introduction than a developer.

Guide users to the core value of the product and its key features as quickly as possible. This is the moment they realize “this product is for me.”

Instead of static tours, interactively prompt users to take the desired action so that you’re teaching them while they accomplish tasks.

4. Create context in the app experience

Use subtle, in-context cues to highlight new features or explain specific UI elements when a user is in a relevant area.

For more significant feature announcements, use in-experience banners or modals that appear at relevant moments. Behavioral triggers can also deliver guidance based on what a user is currently doing or has done.

When a feature’s area is empty, use this space to explain the feature’s purpose and guide the user on how to get started. Make it easy to find the features your user is looking for.

5. Educate and clearly communicate with users

As mentioned above, prioritize in-app methods for immediate context, but be sure to supplement with marketing materials like:

  • Targeted emails that announce new features, explain their benefits, and link directly to the feature in the product. Segment these emails to ensure relevance.
  • Blog posts that add in-depth explanations, use cases, and technical details for those who want them.
  • Live or recorded webinars that demo complex or high-impact features and answer questions.
  • Social media posts with short, engaging content (videos, graphics) to announce features and drive interest.

6. Personalize the feature

Not all features are for every user. To be sure the right features are being shown to the target user, you can hide or highlight features based on a user’s role or permissions.

Allow users to tailor their experience, making the most relevant features easily accessible, and use machine learning to suggest features or workflows based on a user’s past behavior or similar user segments.

7. Gather data and feedback

Instead of relying on feature adoption rates alone, gather supplemental data and feedback to understand why users are or aren’t adopting the feature. Use micro-surveys (e.g., after a user interacts with a new feature) to get immediate feedback on usability and value. Monitor overall satisfaction with NPS and CSAT surveys, conduct regular user interviews, and look for recurring issues in support tickets.

Make sure to analyze all this information across different user segments to identify differences and tailor strategies.
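
For the satisfaction piece, the NPS calculation itself is simple. Here's a minimal sketch using the standard promoters-minus-detractors formula; the responses are invented for illustration:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: percent of promoters (9-10) minus
    percent of detractors (0-6), on the usual 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]  # made-up survey answers
print(round(nps(responses)))  # 30
```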

8. Iterate on the feature

Don’t just launch and leave a feature; you can continue iterating on the experience and messaging post-launch until you figure out what works. Test different onboarding flows, in-app messages, or feature designs to see what drives higher adoption.

Feature adoption is an ongoing process. Regularly review data, implement changes, and measure their impact. Don’t stop promoting after the announcement.

By adopting these strategies, SaaS companies can move beyond simply launching features to truly integrating them into their users’ workflows, maximizing the value delivered, and ultimately driving sustainable growth.

A good feature adoption rate is always improving

We’ve often touted the uselessness of benchmarks. And while they are meaningless for setting goals, they can help paint a picture of industry averages and set expectations. In the case of feature adoption rates, if you’re below that 20% mark, you should strongly consider building a more user-centered prioritization process.

Incorporating user feedback early and often can significantly reduce development time and costs. Instead of building features based on incorrect assumptions, you’ll focus resources where they’ll have the most impact, leading to higher ROI.

The direct link between user-centered prioritization and feature adoption is clear.

The days of simply building features and hoping for the best are over. If you’re ready to take a different approach, our team is available to help.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

How to Make the Move From Intuition-led to Data-driven

If your bookshelf looks anything like mine, I don’t have to extol the virtues of data-driven practices to you. Case studies from HBR have shown that A/B testing increased revenue at Bing by 10-25% each year, and companies that used data to drive decisions were best-positioned to navigate the COVID-19 crisis. But while 83% of CEOs want a data-driven organization, the reality is that many organizations are still largely intuition-run. In those contexts, it takes more than a compelling argument to turn the tide.

If you’re spearheading the shift from an intuition-driven to a data-driven practice, it can be an uphill battle, and a lonely one at that. We spoke with Hanna Grevelius, CPO at Golf Gamebook & Advisor, and Maggie Paveza, Digital Strategist at The Good, about how they’ve navigated data-imperfect conditions throughout their careers and successfully advocated for data-first principles.

Whether you’re working with limited data or as your company’s first A/B testing specialist, their stories make one thing clear: doing it alone doesn’t have to be so daunting.

Keep reading to hear about:

  • How they learned to work with data
  • How to leverage data to build prioritization intuition
  • When guessing is appropriate
  • How to be an advocate for data-first practices

1. It’s OK to learn on the job

For those with only a passable knowledge of statistics, it can seem intimidating to dive headfirst into data-driven decision making. But it doesn’t take a data science degree to be able to act on good data. In fact, few teams employ full-time analysts at early stages of growth. Most teams get by early on with the skills of a few generalists, who, it turns out, often learn on the job.

“Quantitative methods are something that I’ve learned in my career,” says Maggie Paveza, Senior Digital Strategist at The Good. Having previously worked as a UX Researcher at Usertesting.com, Maggie started with a strong foundation in qualitative research before adding quantitative methods to her toolkit, which she says helps her tell a fuller story. “The qualitative research forms the why; the quantitative research forms the what.”

For Hanna Grevelius, CPO at Golf Gamebook, her relationship with data started with close collaboration with product managers.

“My role when I started was in support, answering customer support emails. In trying to understand the scalability of issues, I got to work and talk a lot to product managers who really helped me understand we need to look at the data to know: is it one person who experienced the bug? Is it from a specific version of the app? Is it related to the device or operating system they were on?”

Hanna says learning how to dig for data helped her contextualize customer pain. And through that practice, she built the skills necessary to transition into product management. “It was through support that I started to understand that we should look into the data, then eventually I moved over to work on Product Management.”

When she added A/B testing to her toolkit, that took her passion for data to a whole new level.

“It’s so clear when you A/B test that even a small change can have a big impact. When you start seeing the difference, that really sparks an interest.”

2. Use data to define your focus

Once Hanna could confidently dive into the data, she started to use it in her practice, evaluating where traffic hits the app most frequently and focusing on those high-value, high-traffic areas first. This exercise in opportunity sizing taught her that it’s OK to shift focus in light of new data.

Maggie takes a similar approach to prioritization. She uses traffic data to understand what areas of a site or app are highly trafficked, and before proposing a test, she always verifies that an A/B test would see significance within an acceptable amount of time.

“We rely on prioritization methodologies to understand if running a test in an area would have a significant revenue impact and if an A/B test would reach significance in a number of weeks or longer.”
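
As a rough illustration of that kind of pre-test check (a generic two-proportion power calculation, not The Good’s specific methodology, and with invented traffic numbers), you can estimate a test’s duration from the baseline conversion rate, the lift you want to detect, and weekly traffic:

```python
from statistics import NormalDist

def weeks_to_significance(baseline, relative_lift, weekly_visitors,
                          alpha=0.05, power=0.80):
    """Estimate how many weeks a 50/50 A/B test needs to detect a
    relative lift over a baseline conversion rate (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_beta = NormalDist().inv_cdf(power)           # ~0.84
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    n_per_arm = ((z_alpha + z_beta) ** 2
                 * (p1 * (1 - p1) + p2 * (1 - p2))
                 / (p2 - p1) ** 2)
    return 2 * n_per_arm / weekly_visitors

# e.g. 3% baseline conversion, 10% relative lift, 20,000 visitors/week
print(round(weeks_to_significance(0.03, 0.10, 20_000), 1))  # ~5.3 weeks
```

If the answer comes back in months rather than weeks, that’s a signal to test a bolder change or a higher-traffic page instead.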

If you’re just starting out with a new property, Maggie and Hanna both suggest building a foundational understanding of traffic patterns and regularly refining your strategy. Priorities often shift as a result.

3. In the absence of data, start with a guess

One valuable skill that came later in their careers was understanding the value of a lead. Boosting form fills can feel invigorating, but without an understanding of what portion of that audience might become a deal later, it’s hard to know if your work is making a difference. Assigning a dollar amount to a lead is a powerful tool to evaluate your performance.

But if you’re joining an organization without mature data practices, leads often have no value assigned. And without institutional knowledge, it can be intimidating to make a guesstimate. To Hanna, though, it’s worth starting with a guess to set initial priorities.

Hanna advises using a rough calculation to estimate the value of a metric (with things like average deal value and percent of pipeline that converts), which can help you get an early read.

“Over time, you can start adjusting it higher or lower. But trying to put a value on it and making decisions based on that is the best way to still work in a data-driven way even when you don’t have all the answers.”
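
In code, that back-of-the-envelope calculation is a one-liner. The inputs here are invented and, as Hanna notes below, should be treated as estimates to revisit:

```python
def estimated_lead_value(avg_deal_value: float,
                         lead_to_close_rate: float) -> float:
    """Rough lead value: what an average closed deal is worth, scaled
    by the share of leads that eventually close. Both inputs start as
    guesses; adjust them as real pipeline data accumulates."""
    return avg_deal_value * lead_to_close_rate

# e.g. $12,000 average deal and ~5% of leads eventually closing
print(estimated_lead_value(12_000, 0.05))  # 600.0 -> ~$600 per lead
```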

Hanna warns that an estimate is just that, and that staying above board about where the data comes from is key to retaining trust.

“What’s really important in that estimation reporting is that you’re always super clear that you’re estimating—that it could be a lot higher and a lot lower, because if you start making critical budget decisions on it, you can end up in a dangerous situation.”

4. Be the change you want to see

For those who know the clarity that data can bring to the decision-making process, working within a data-poor organization can be challenging. But Hanna says it’s fairly easy to lead others to data advocacy, even if you’re not in the C-suite. “Most people nowadays want to be data-driven,” Hanna says. In her opinion, it doesn’t take a fancy title to turn others into advocates.

“If you are working in an org where you are the only person who is responsible for testing, the best thing you can do is try to spread that knowledge. Get them involved and feel a sense of ownership. Try to make it so that you’re not the only one who cares about A/B testing and being data-driven.”

To build stewardship throughout the organization, Hanna’s advice is to share your thinking: walk colleagues through the potential upside of testing, and the risks of not testing. “That can help people who are not so interested in testing to be a bit more curious and to want to understand.”

In Hanna’s experience, your passion can be quite contagious. “Data and testing, it opens up a world that is so fun.”

As for how she does it, Hanna shares her excitement by showing rather than telling. “As soon as you have the test going, share a bit of the data early on,” she says. Rather than being cagey about how inaccurate early test data is, she uses it as a teaching moment.

“All of us who work in the testing space know that data from one day or three days is probably going to be completely wrong, and you can say that also. But show it to that person. Show that ‘this is super early, we have no idea if this is going to be correct or not, or stat sig, but after one day this is what it looks like.’”

And of course, once you run successful tests down the line, Maggie’s experience tells her that there is nothing more powerful than sharing a win with your team.

Artfully navigating the shift

Advocating for data-driven decision-making in intuition-led companies isn’t always easy, but it’s a challenge worth taking on.

As Maggie and Hanna’s experiences show, starting small, whether by learning on the job, prioritizing based on data, making informed estimates, or sharing early insights, can lead to big shifts in mindset.

By fostering curiosity and collaboration, you can help transform your organization’s approach to decision-making, making data a natural and valued part of the process.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.
