improve user experience Archives - The Good

How Kalah Arsenault’s Team Stood Up An A/B Testing Program & Doubled Volume With A New Prioritization Model
https://thegood.com/insights/marketing-optimization/
Thu, 20 Feb 2025 21:16:49 +0000

The post How Kalah Arsenault’s Team Stood Up An A/B Testing Program & Doubled Volume With A New Prioritization Model appeared first on The Good.

Optimization isn’t a one-size-fits-all practice. Each organization has unique data, needs, and goals, on top of the always-evolving technology stack that supports experimentation.

So, as a leader, it’s important to adapt. Kalah Arsenault knows this well.

Over the course of her career, she’s been tasked with everything from turning data into actionable insights and advocating for data-driven analysis to building experimentation programs.

Currently, she leads the marketing optimization team at Autodesk, the global leader in 3D design, engineering, and entertainment software.

We had the chance to sit down with her and get the inside scoop on:

  • Standing up an A/B testing program
  • A simple prioritization model that drives impact
  • Measuring and circulating optimization learnings

Marketing optimization for a leading software company

As the marketing optimization team lead, Kalah digs into all the nooks and crannies of the company’s marketing efforts to make them more effective and efficient.

The marketing optimization team at Autodesk sits on the operations team at the intersection between marketing operations and technology. Partnering with marketing teams to improve campaign effectiveness, Kalah and the marketing optimization team bridge the gap between data, marketing know-how, and testing expertise.

When shakeups a few years ago halted all A/B testing on the Autodesk website, Kalah was eager to partner with the website team to re-enable experimentation. A self-proclaimed marketing, analytics & optimization enthusiast, Kalah brings a consistent data-backed ethos to her work. And her background teed her up for success. Kalah jump-started her professional life in advertising and ecommerce. The experience working in stakeholder-facing roles gave her a unique ability to turn data into stories and prove the value of iterating your way to success.

Standing up an A/B testing program

The challenge was clear. Without an experimentation program in place, the team was left without the data needed to fuel good decision-making.

“The data will tell you what is the right choice and it takes decision-making out of the process,” she said when asked how data plays a role in her decision-making. In her view, data doesn’t just shape the process, it is the process: “Experimentation and data can be the decision-making process.”

So, it was crucial to get the A/B testing program back on its feet in order to bring that clarity to the work she was doing day-to-day.

To start, Kalah and her team put their experience into practice, creating an A/B testing roadmap. This was a crucial step, requiring them to define goals, align with stakeholders, and assess the priorities and risks of optimization. On top of the complexity of rebuilding the A/B testing program, a new organizational structure added a further obstacle: working across different marketing teams.

The optimization and web teams worked together to establish clear parameters, agreements, and definitions of what could or could not be tested. There is now a huge, pre-approved sandbox to play in, allowing optimizers the chance to find iterations that improve UX and marketing KPIs.

Whether you’re a researcher, an analyst, a marketer, or an optimization specialist, a well-made roadmap connects you with the clear steps needed to begin experimenting.

For Kalah, this meant:

  • Identifying objectives for the testing program
  • Establishing marketing and website challenges
  • Isolating testing opportunities
  • Formulating testing hypotheses
  • Prioritizing testing opportunities

With frameworks in place, they were ready to get back to work.

While other optimization leaders can follow a similar strategy of aligning with stakeholders and building a roadmap, standing up an A/B testing program is no small feat. So, if you don’t have the resources or a dedicated team like Autodesk, she has some advice.

“What I primarily suggest is hiring someone who specializes in the practice. I think the expertise to identify optimization opportunities, design the tests, see it through implementation, measure the results, and provide recommendations and next steps is incredibly impactful.”

And while there are some savvy marketers who can do this, she emphasizes that “it's a separate skill set and expertise.” So whether you hire for that as a full-time role or look to agencies to bring in that expertise, Kalah strongly recommends companies consider experts to lead the charge.

For example, “at a high level, a test may show one version outperforming another,” she says. “But digging deeper often reveals different results by segment, whether by job profile, country, or industry. We aim to look beyond primary KPIs to fully understand what’s driving the outcome.” That level of nuance is hard to find in a busy marketer, so it’s best to have dedicated optimizers around who can take the time to know and understand audiences.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

A prioritization model to drive velocity

With the A/B testing program back up and running, Kalah and her team had their plates full finding efficiencies and improvements across Autodesk’s marketing efforts.

Not only were there opportunities identified in their research, but teams across the organization were submitting requests and ideas for their consideration.

The list was long. The optimization team wasn’t sure what the “most important first thing” to work on was, and marketing stakeholders didn’t understand why their projects weren’t top of the list for testing. There was an opportunity to clarify and get more done quickly.

The solution? A prioritization model aimed at:

  • Increasing testing volume
  • Aligning teams
  • Saving time

While lots of testing folks would hear “prioritization model” and go straight to the mathematical elements, Kalah needed a model that was simple, easy to calculate, and transparent for all parties.

Kalah and her team built out an auto-calculated prioritization model as part of their optimization requests intake process. It involves three elements:

  • Business impact: Measured by whether the request aligns with the marketing plan, which is agreed upon by everyone from the CMO to entry-level marketing team members.
  • Level of effort: Internal criteria that identify a higher or lower level of effort.
  • Urgency: Assessed with questions like: Does it need to be executed immediately? Does it impact a larger project immediately? Does this effort have backing from a senior leader in marketing?

The intake process asks questions related to these criteria, and logic set up in Asana then auto-calculates the priority of the experiment or optimization. “This is what saves us time and energy,” she says. It eliminates looping conversations and the time spent manually prioritizing requests within the team.
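The exact logic lives in Asana, but the shape of an auto-calculated model like this is easy to sketch. The weights, answer scales, and request names below are purely illustrative assumptions, not Autodesk’s actual criteria:

```python
# Illustrative sketch of an auto-calculated prioritization score built from
# the three intake elements above. All weights and scales are hypothetical.

def priority_score(business_impact: int, effort: int, urgency: int) -> float:
    """Score a test request from three intake answers.

    business_impact: 1-5, alignment with the agreed marketing plan
    effort:          1-5, internal level-of-effort estimate (5 = hardest)
    urgency:         1-5, from the urgency intake questions
    """
    # Higher impact and urgency raise priority; higher effort lowers it.
    return (2 * business_impact + urgency) / effort

requests = {
    "homepage hero test": priority_score(5, 2, 4),
    "footer link reorder": priority_score(2, 1, 1),
    "checkout flow redesign": priority_score(5, 5, 3),
}

# Sort the intake queue so the team always knows what to pick up next.
queue = sorted(requests, key=requests.get, reverse=True)
```

Because the score falls out of the intake answers automatically, nobody has to re-litigate priorities in a meeting; the queue order is transparent to every requester.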

Kalah emphasizes the power of this setup. “We don't do mathematical calculations to assess the level of business impact or length of time to reach statistical significance. That's too resource-intensive, and we'd be spending all our time assessing and prioritizing. With our automated prioritization model, we can spend our time on launching and analyzing tests and making business impact.”

And it worked.

“We were able to double the number of tests our team took on within one year. Compared to last year, we doubled the volume of testing with a new operating and prioritization model.”

Measuring marketing optimization success

The volume of tests is just one of the key metrics Kalah identified for measuring her team’s success.

It can be tough to find just one KPI to prove the value of optimization, given the nature of working across teams, products, and audiences. So, instead of focusing on 1:1 measurement, they look at a variety of metrics, including:

  • Volume of tests
  • Volume of analyses
  • Customer satisfaction score
  • How many marketers are seeing/learning from the findings

In the end, her team’s goal is to look at how marketing campaigns are performing and then give advice on how to make them better. So, while each test or optimization has its own KPI related to growth, as a team they are measured more holistically.

With prioritization and test volume locked down, she is ready to move the needle on insights shared.

“I'd really love to put more energy towards amplifying the impact of each test and getting the findings out to as many marketing teams as possible. We've already seen this working through a newsletter that's sharing our testing results and analysis work. We've also been hosting quarterly brown bag style meetings with the most universally applicable test results that marketers could implement themselves.”

This year, Kalah is also hoping to find new ways to turn insights into action. “I am also hoping to dive into data visualizations and figuring out how to make our findings more snackable and basically getting to a place where people want to read them and it's easy and enjoyable.”

These goals directly align with her team’s measurements for success. Other optimization leaders can take a page from Kalah’s playbook here: let individual tests focus on marketing metrics, and measure departmental success by insights shared, experiments run, or other relevant measures.

How can you replicate some of Kalah’s success?

Kalah’s advice to those new to optimization is simple yet impactful: start small, and stay curious. “Get to know your data, experiment with tools, and don’t be afraid to make tweaks,” she says. “You might be surprised at the impact even small changes can have.”

Her overarching message is one of optimism and opportunity. “Optimization is about evolving and improving—for your customers, your organization, and yourself,” she concludes.

Yet, good optimization leaders know that you can’t do it all alone. Internally, Kalah’s team employs a mix of full-time employees, contractors, and agency partners to meet the demands of scaling optimization efforts. “Contractors and agencies can help manage peaks in the workload,” she notes.

“I come from an agency background. I've always been a fan of working with full-time employees, but I realized as we're trying to scale and grow the amount of impact we're making as a team, it's really important to have contractors or agency partners to support higher demand and the peaks and valleys of work.”

By embracing a data-driven mindset, prioritizing strategically, and fostering cross-team collaboration, Kalah exemplifies what it means to lead impactful optimization efforts. If you need an expert partner to help manage a robust roadmap, get to know our Digital Experience Optimization Program™.

How To Read A Heatmap Like An Expert Researcher: Patterns To Look Out For
https://thegood.com/insights/how-to-read-a-heatmap/
Mon, 10 Feb 2025 17:49:14 +0000

The post How To Read A Heatmap Like An Expert Researcher: Patterns To Look Out For appeared first on The Good.

Wondering why users leave your site without converting?

You may have a gut-instinct answer to the question. You might even have ideas for how to tweak design, rewrite headlines, or add new features in an attempt to get users to stick around. But guesswork isn’t a strategy.

Expert researchers don’t guess. They use data, and one of their most powerful tools is the heatmap.

When used correctly, heatmaps reveal where users hesitate, what grabs their attention, and where they drop off—all critical insights for optimizing conversions. But the real magic isn’t simply generating a heatmap; it’s knowing how to read it.

In this guide, we’ll break down how to read a heatmap like an expert so you can stop guessing and start making informed, high-impact changes to your website.

Intro to heatmap analysis

Heatmap analysis shows real data points that represent actual human behavior. And when those behaviors form discernible patterns, we use these patterns to form hypotheses about user wants and needs.

Heatmaps can help answer questions about user behavior and uncover sticking points in the customer journey.

Like footprints in the sand, heatmaps show us where users have been. And we use that information to infer intent.

Types of heatmaps

At The Good, we primarily use three types of heatmaps: Click maps, movement maps, and scroll maps.

These types of heatmaps provide insights that answer critical UX and conversion questions, such as:

  • Are users seeing my key content?
  • What elements are they engaging with?
  • Where are they paying the most attention?

By analyzing these patterns, we can pinpoint where users get stuck, what’s drawing their attention, and where they drop off—and take action to improve the experience.

Scroll Maps

Scroll maps visually depict typical scroll depth on any web page. Key insights you can glean from scroll maps:

  • Where users drop off (high exit points may signal a false bottom)
  • Whether important content is being seen
  • If users are scanning or engaged

Tools typically use scales to show you the portion of users who scroll to different parts of your page. Red or “hot” areas of your heatmap indicate that all or almost all of your users have seen that part of the page. As you move down the page, the colors get “colder” according to the percentage of users who scroll down to that point.

The lines on the page below indicate where 25%, 50%, and 75% of users dropped off, meaning they left the page or clicked on something, therefore not scrolling further.

While shallow scrolling is not inherently negative, it may indicate lost user attention or that a page does not look scrollable.

The same goes for deeper scroll depths. It is not inherently positive or negative to see a deeper scroll depth. Depending on the surrounding context, deeper scroll depth may indicate that users are failing to find meaningful content higher on the page, and therefore go looking by scrolling down.
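The 25%, 50%, and 75% drop-off lines above can be derived directly from raw analytics if you export each session’s maximum scroll depth. A minimal sketch, assuming made-up pixel depths rather than any particular tool’s implementation:

```python
# Sketch: derive the drop-off lines a scroll map draws, given each
# session's maximum scroll depth in pixels. Data is illustrative.

def drop_off_lines(max_depths: list[int],
                   fractions=(0.25, 0.50, 0.75)) -> dict[float, int]:
    """Depth (px) by which roughly each fraction of sessions had stopped scrolling."""
    depths = sorted(max_depths)
    n = len(depths)
    # The 25% line sits at the depth the shallowest quarter never passed.
    return {f: depths[min(int(f * n), n - 1)] for f in fractions}

sessions = [400, 800, 1200, 1600, 2000, 2400, 2800, 3200]
lines = drop_off_lines(sessions)
```

Plotting those three depths as horizontal lines over a page screenshot reproduces the annotated scroll map described above.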

Movement Maps

Movement maps show where users have hovered their mouse on a page. They are valuable because they show us where the majority of user attention is focused. Movement maps can show:

  • What content users are reading or skimming
  • Where their attention is most concentrated
  • Whether key information is being overlooked

Movement maps help us infer what content is most valuable to users during the decision-making process.

Reading movement maps is similar to reading eye-tracking heatmaps. For many users, mouse movement follows their gaze, so knowing where the mouse travels tells us what content users are reading or paying attention to.

Based on our experience, concentrated left-to-right movement over text generally indicates intentional reading, since many mouse movements tend to follow the user’s eyes.

In this example, we see side-to-side movement over FAQs, indicating users are reading each question to determine which one may reveal helpful information about the services being offered. We looked at movement clusters in the FAQs, which, when paired with data about the most highly clicked FAQ items, helped us determine what questions users needed answered to have the confidence to purchase services.

In contrast, up-and-down movement may indicate areas that users are simply skimming rather than intently reading. Take this example: seeing vertical movement patterns indicated to us that users may be scanning the resources available (rather than reading). User testers told us that the content did not look worthwhile, so those two bits of data together told us this area needed some fresh content and a redesign.
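The reading-versus-skimming distinction above can be expressed as a simple heuristic over raw mouse coordinates. The threshold and sample points here are assumptions for illustration, not a validated model:

```python
# Heuristic sketch: classify mouse samples over one page region as
# "reading" (mostly horizontal travel) or "scanning" (mostly vertical).
# The 2:1 threshold and the coordinates below are illustrative.

def classify_movement(points: list[tuple[int, int]]) -> str:
    """points: (x, y) mouse positions sampled over one page region."""
    dx = sum(abs(b[0] - a[0]) for a, b in zip(points, points[1:]))
    dy = sum(abs(b[1] - a[1]) for a, b in zip(points, points[1:]))
    if dx == 0 and dy == 0:
        return "idle"
    # Horizontal travel dominating vertical travel suggests line-by-line reading.
    return "reading" if dx > 2 * dy else "scanning"

faq_hover = [(100, 500), (260, 505), (420, 510), (120, 540), (300, 545)]
sidebar_skim = [(900, 200), (905, 400), (910, 600), (915, 800)]
```

Run over many sessions, a classifier like this lets you quantify how much of a page’s movement heat is intentional reading versus quick scanning.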

Click Maps

Click maps show us what elements users click on most commonly. Click maps can uncover insights including:

  • What elements drive engagement (or get ignored)
  • If users are clicking on non-clickable elements (indicating confusion)
  • Which navigation links or CTAs attract the most interest

Hot spots, shown in red, have the highest concentration of clicks. Transparent blue spots represent a low density of clicks.

In the click map below, we see a list of “All Products” with one notable hot spot in the middle of the menu. What, you may wonder, is in the middle of the list that is drawing so many clicks?

The answer is in the name: Paints. Here we see an example of a company with a clear specialty and a large portion of their sales going to one category. Yet, when we saw this heatmap we realized they were making the user work hard to find these most popular products by burying them in the middle of the list.
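Under the hood, a click map is essentially raw click coordinates bucketed into grid cells. A rough sketch of how a hot spot like the “Paints” one could be surfaced from exported click data (cell size and coordinates are made up):

```python
# Sketch: bucket raw click coordinates into a coarse grid and find the
# hottest cell, the way a click map's red spot is derived. Data is made up.
from collections import Counter

def hottest_cell(clicks: list[tuple[int, int]],
                 cell: int = 100) -> tuple[tuple[int, int], int]:
    """Return ((col, row), count) of the grid cell with the most clicks."""
    grid = Counter((x // cell, y // cell) for x, y in clicks)
    return grid.most_common(1)[0]

# Clicks on an "All Products" menu: most land on one mid-list item.
clicks = [(150, 420), (160, 430), (155, 415), (158, 428), (300, 90), (40, 700)]
cell, count = hottest_cell(clicks)
```

The heatmap tool then maps each cell’s count to a color scale, red for the densest cells, transparent blue for the sparsest.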

18 Heatmap patterns to look out for

To get the most value out of heatmaps, researchers have to analyze how different heatmap elements interact, compare trends across pages, and validate findings with other data sources like session recordings or analytics.

In this section, we’ll walk through the most common heatmap patterns, what they look like, and what they reveal about user behavior so you can start making smarter, data-backed decisions.

The Spot Specific Pattern

Where it appears: Click map

What it looks like: Highly concentrated heat activity on an individual spot in a sea of text.

What it means: Users might have a specific interest related to a need. They could also be clicking on a non-clickable element within a paragraph or looking for information that is slightly buried within other text. It may be an indicator that you need to rearrange a menu or better highlight certain features of a product.

Gapped Patterns

Where it appears: Click map

What it looks like: In a list of items, there is one that gets no heat activity.

What it means: It usually means that a user doesn’t know what to expect if they were to click here, or they are simply disinterested.

Primacy vs Recency Pattern

Where it appears: Click map

What it looks like: Concentrated click activity on the first and last items in a list.

What it means: Typical of menus: users often engage most with the first and last items in any list. The pattern takes its name from the psychological phenomena (the primacy and recency effects) whereby people best recall the first and last items in a list.

Filter Hot Spots

Where it appears: Click map

What it looks like: Concentrated clicks on certain areas of a filter, and a lack of clicks on other areas of a filter.

What it means: Users generally rely heavily on certain filters and less on others. Knowing what filters are helpful to users might tell us how we should rearrange filters or give us context for what users care about in their products.

Consistent Browsing Pattern

Where it appears: Click map

What it looks like: Strong click patterns across products on category pages.

What it means: This tells us that users are interested in a variety of products on the category page and are clicking on various product pages.

Spotted Browsing Pattern

Where it appears: Click map

What it looks like: A strong concentration of clicks on only a few product images on category pages.

What it means: This tells us that users are most interested in specific products. These might be flagship products (as in this example).

Strong Pagination Pattern

Where it appears: Click map

What it looks like: Concentrated activity on the pagination with little activity on filters or product tiles.

What it means: Users might not have very intentional browsing behavior, and instead of engaging with product tiles and narrowing down their search, they are simply going from page to page to see all products.

Click Indecision

Where it appears: Movement map

What it looks like: Horizontal heat patterns in the space between two or more adjacent clickable elements. Can be found on menu navigation or even dual CTAs.

What it means: Users are hovering between clickable elements. They might be experiencing a bit of uncertainty in their browsing experience. They’re not sure where to click because both options are similar in nature or unclear.

F-Shaped Reading

Where it appears: Movement map

What it looks like: Concentrations of heat in the shape of an F on the page. Users read horizontally across the top, then again slightly lower down, and finally scan vertically down the left side of the page.

What it means: Users are assessing the content on the page but they are not necessarily reading it.


Commitment Reading

Where it appears: Movement map

What it looks like: Blocks of heat activity usually on content pages or chunks of text.

What it means: Users are high-intent and they’re learners. These patterns show strong interest in the information displayed and intentional reading.


Layer Cake Pattern

Where it appears: Movement map

What it looks like: Users read headlines but overlook the associated subtext.

What it means: They are interested in the content but are reviewing the page at a high level.

The downside of this pattern is that users could miss content related to their needs, weakening the content’s ability to prompt the desired course of action.

Scrolling Pattern

Where it appears: Movement map

What it looks like: A vertical heat pattern that travels down the page. On low-traffic pages, this might be represented by dots that align in a vertical fashion, as with the example here.

What it means: This signifies that users are scrolling down the page without necessarily reading the content. They might be looking for something they are not finding, or the content might be arranged in a way that invites scanning. If this is paired with very little click engagement, we might assume the content is not very valuable.

Truncated Scanning

Where it appears: Movement map

What it looks like: Users skip a consistently repeated word in a text.

What it means: Users are reading content faster, likely because the content is repetitive and it’s easy to recall the skipped word.

Dropdown Residue

Where it appears: Movement map

What it looks like: A spotted heat residue in a rectangular fashion positioned below the top navigation.

What it means: This is residual activity of users strongly considering items in the drop-down menu or some drop-down element on the page. Residue will be concentrated in the areas where users are actually scanning and considering the content.

Image Hover

Where it appears: Movement map

What it looks like: Heat activity around images on a page. Could be on a category page or rows of photos.

What it means: The imagery is dynamic: a secondary image appears when users hover over the primary image, so users hover around the image to see the second photo.

Content Avoidance

Where it appears: Movement map

What it looks like: The inverse of the image-specific pattern, content avoidance happens when people explicitly avoid an area with their mouse, almost creating a frame.

What it means: This might mean that users perceive this as an ad and are intentionally avoiding it, or have “banner blindness” and simply don’t see the content as relevant to their visit.

False Bottom

Where it appears: Scroll map

What it looks like: On scroll maps, there is a high drop-off on the page (drop-off is above the halfway mark on the page).

What it means: Users might perceive that they’ve reached the end of the page. This is extremely common when email signups are in the middle of a page (see example right) and when there is a strong color contrast, full-bleed section early in the page. These things signal the footer is coming, so they often make users think they’ve seen everything they need to see.

Halted Pattern

Where it appears: Scroll map

What it looks like: Drop-off is right above the fold, and nearly no users scroll below it.

What it means: Either most users are finding something to click on above the fold, there is a high bounce/abandon rate, or there is a false bottom. It could also be some combination of the three.

What is the best tool for heat mapping?

Not all heatmaps are created equal. The best heat mapping tool is the one that provides clear, actionable insights without adding unnecessary complexity.

For most teams, Hotjar will be a great go-to solution. It’s lightweight, easy to set up, and provides a suite of heatmaps—including click maps, scroll maps, and movement maps—that help you understand user behavior at a glance.

Why Hotjar?

  • Comprehensive Behavior Tracking: Hotjar captures how users interact with your site—where they click, how far they scroll, and what elements they hover over.
  • Fast Insights, No Heavy Lifting: Unlike enterprise tools that require complex setup, Hotjar makes it easy to get started and see results quickly.
  • Paired with Session Recordings: Heatmaps alone tell part of the story; Hotjar lets you connect heatmap insights to real visitor session recordings for deeper analysis.

While it’s our top pick, if Hotjar isn’t the right fit, another good option is Microsoft Clarity.

Turning heatmap data into actionable strategies

Reading a heatmap like an expert researcher isn’t just about spotting red and blue zones—it’s about understanding the “why” behind user behavior and knowing what to do next.

But if you don’t have the time or resources to build a research team, you don’t have to go it alone. At The Good, we specialize in turning heatmap data into clear, actionable strategies that drive real results.

Want to skip the learning curve and get expert insights now? Let’s talk.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

A Guide For Preventing Form Fatigue To Increase Conversions & Improve UX
https://thegood.com/insights/form-fatigue/
Mon, 27 Jan 2025 19:21:18 +0000

The post A Guide For Preventing Form Fatigue To Increase Conversions & Improve UX appeared first on The Good.

While terms like scroll fatigue or decision fatigue are commonplace in UX, a quick search for resources on form fatigue doesn’t surface much. But, with over 15 years of experience optimizing digital experiences, we know how prevalent it can be.

Drawing from those years of experience improving SaaS platforms, we’ve identified and addressed form fatigue across various products. In this article, we’ll show you how to uncover and fix it effectively.

Keep reading to learn:

  • Research methods for uncovering form fatigue
  • User behavior patterns that indicate your users suffer from form fatigue
  • Actionable strategies to improve form fatigue and increase conversions

What is form fatigue?

Form fatigue occurs when a user gets frustrated or exhausted by the complexity or length of a digital form. Poor form design directly contributes to this sense of fatigue and causes users to abandon the form.

Psychologically, users are conditioned to prefer experiences that require minimal cognitive effort. We want experiences that accomplish our goals simply and quickly. When a user experience does not meet those instincts, conversion rates drop.

Form fatigue is typically caused by things like:

  • Content fatigue: When excessive textual/visual content on a page overwhelms users, hindering their ability to find relevant content for successful task completion.
  • Heavy cognitive load: When undue mental effort is required to accomplish a task, causing analysis paralysis or frustration, leading to abandonment.
  • High interaction cost: When a task or interaction requires significant time and/or effort to accomplish, possibly creating frustration and resulting in abandonment.

How to identify form fatigue

When working on a product day in and day out, you might be too close to the forms to know if fatigue is happening. That is where research can help.

Getting an external, real user perspective can expose things like content fatigue, heavy cognitive load, or high interaction cost in your forms.

So, the best way to identify form fatigue is through user research. While there are plenty of methods, the best for this particular scenario include:

  • Session recordings
  • Heatmaps
  • Scroll maps
  • Click maps
  • User tests

With your raw data in hand, look out for some specific patterns that might indicate form fatigue:

  • Scanning: A user scrolls over content (text or images) at a higher scroll rate on mobile, while on desktop they might hover over some words or phrases, or completely skip over content altogether.
  • Halted Scrolling: The user pauses, possibly to engage with content or reorient themselves; the pause may also indicate that the user perceives a false bottom.
  • U-turns: When a user back navigates to the previous page they were just on, using either breadcrumbs or the back button.

These research patterns can point to moments when users are experiencing form fatigue and the digital experience can be optimized.
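Some of these patterns can be pulled straight out of exported analytics. For example, U-turns can be counted from session page paths; the path data and matching rule below are illustrative:

```python
# Sketch: count U-turns (immediate back-navigations) in a session's
# page path, one of the form-fatigue signals above. Paths are made up.

def count_u_turns(path: list[str]) -> int:
    """A U-turn is A -> B -> A: the user bounces straight back to page A."""
    return sum(
        1
        for i in range(len(path) - 2)
        if path[i] == path[i + 2] and path[i] != path[i + 1]
    )

# Two bounces around the signup form before the user finally proceeds.
session = ["/pricing", "/signup", "/pricing", "/signup", "/signup/step-2"]
u_turns = count_u_turns(session)
```

A high U-turn rate around a form page is a strong prompt to review that form with session recordings.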

7 ways to prevent form fatigue

If you suspect form fatigue or uncover evidence of it in your research, don’t fret. There is plenty you can do to fix it. For companies building new forms, these tips can also be used to prevent form fatigue in the first place.

1. Execute the 10 principles of good form design

The first, and arguably the most important, way to limit form fatigue is to understand and act on the principles of good form design. Website forms are one of your most important onsite elements. They are the crux of a user’s path to conversion.

Bad form design can cause users to drop off during critical conversion opportunities, leaving them frustrated or confused, while great form design creates a seamless user experience that can increase conversion rates and leave users feeling excited about a product or company.

These are the ten established form design principles to help you create better experiences:

  1. Priming: Prepare users by setting clear expectations about the form’s purpose, length, and benefits before they begin.
  2. Error Prevention: Design forms to minimize user mistakes by using constraints, clear labels, and smart defaults.
  3. Error Recovery: Make it easy for users to identify, understand, and fix errors with real-time validation and clear messaging.
  4. Feedback: Provide immediate, actionable responses to user inputs to build confidence and guide progression.
  5. Proximity: Group related fields together logically to make forms easier to navigate and process mentally.
  6. Convention: Follow familiar design patterns to ensure users can complete the form intuitively without unnecessary friction.
  7. Momentum: Encourage users to keep going by visually or textually reinforcing their progress through the form.
  8. Proof: Build trust and reduce hesitation with evidence like security assurances, testimonials, or recognizable logos.
  9. Demonstrated Value: Highlight the benefits of completing the form so users feel their effort is worthwhile.
  10. Perceived Effort Level: Design forms to appear simple and manageable by minimizing visible fields and breaking longer forms into steps.

To learn more, we explore these principles and include 32 good form design examples in this companion article.
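As a concrete illustration of principles 2–4 (error prevention, error recovery, and feedback), real-time validation can be sketched as a small pure function. This is a minimal sketch; the field, rules, and message copy are illustrative assumptions, not a prescription:

```typescript
// Illustrative sketch: real-time email validation with actionable
// messages. The rules and copy here are assumptions.
interface ValidationResult {
  valid: boolean;
  message: string; // actionable feedback shown next to the field
}

function validateEmail(value: string): ValidationResult {
  if (value.trim() === "") {
    // Prevention: catch the empty field before submit
    return { valid: false, message: "Please enter your email address." };
  }
  if (!value.includes("@")) {
    // Recovery: say what is wrong and how to fix it
    return { valid: false, message: "That doesn't look like an email; it should contain an @." };
  }
  // Feedback: confirm success to build confidence
  return { valid: true, message: "Looks good!" };
}
```

Wiring this to the field's `input` event, rather than validating only on submit, gives users immediate feedback while they type.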

2. Ask for minimal information upfront

In research and testing for clients, we have found that asking for less information upfront may help prevent form fatigue and, in turn, increase initial registrations. The highest-converting forms ask for only the information necessary to register, saving additional details for post-registration. That could be as little as an email address, or it might include a name and other essentials.

Once the user is registered, they can be guided through additional steps to personalize the account to their needs: adding personal information, adjusting settings, setting shipping preferences, choosing a plan, placing orders, and so on.

3. Reduce form length perception

For forms that can’t reduce the information required, research shows users’ perception of form length can be as important as the actual length.

You can reduce perceived effort with strategies like:

  • Chunking forms into steps: Break longer forms into smaller, manageable sections and use clear step titles (e.g., “Step 1: Account Details”).
  • Collapsible sections: Use collapsible form fields to make the interface less overwhelming while still providing access to all necessary fields.
  • Auto-advance fields: Automatically move users to the next field when input is complete (e.g., credit card information split into boxes).
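The chunking strategy above can be sketched as a small helper that splits a flat field list into titled steps. The field names and step titles below are hypothetical:

```typescript
// A hypothetical field list; names and titles are illustrative.
interface Field {
  name: string;
  label: string;
}

interface Step {
  title: string;
  fields: Field[];
}

// Split a flat field list into steps of at most `perStep` fields,
// so one long form reads as several short, manageable screens.
function chunkIntoSteps(fields: Field[], titles: string[], perStep: number): Step[] {
  const steps: Step[] = [];
  for (let i = 0; i < fields.length; i += perStep) {
    const n = steps.length;
    steps.push({
      title: titles[n] ?? `Step ${n + 1}`, // fall back to a numbered title
      fields: fields.slice(i, i + perStep),
    });
  }
  return steps;
}

const formFields: Field[] = [
  { name: "email", label: "Email" },
  { name: "password", label: "Password" },
  { name: "firstName", label: "First name" },
  { name: "lastName", label: "Last name" },
  { name: "company", label: "Company" },
];

const steps = chunkIntoSteps(formFields, ["Step 1: Account Details", "Step 2: About You"], 2);
// → three steps: 2 + 2 + 1 fields
```

Rendering one step at a time, with a visible step title and progress indicator, keeps the perceived effort low even when the total field count stays the same.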

4. Make clear suggestions

Simplify decision-making by limiting options and highlighting recommended choices. You can use autofill and predictive text to reduce manual input and create an intuitive, logical flow that guides users naturally through the form.

5. Optimize for mobile or desktop

At this point, we shouldn’t even have to say it, but you’d be surprised how often teams forget to tailor the experience for the correct device. Form fatigue is exasperated when the design doesn’t function on the user interface being navigated. The design should adapt for mobile or desktop users, regardless of whether you are an app-first or desktop-first product.

One essential way to do this is by adjusting keyboard inputs. For example, when a field is asking for a zip code or phone number, default to the numeric keyboard on mobile to make it as simple as possible to fill out the form.
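A minimal sketch of that idea: map each field's purpose to the standard HTML `inputmode` value so the right mobile keyboard appears. The field-purpose names are illustrative assumptions; the returned values are standard HTML:

```typescript
// Map a field's purpose to the standard HTML `inputmode` value so
// mobile browsers open the most convenient keyboard. The purpose
// names are illustrative; the returned values are standard HTML.
type FieldPurpose = "zip" | "phone" | "email" | "url" | "text";

function inputModeFor(purpose: FieldPurpose): string {
  switch (purpose) {
    case "zip":
      return "numeric"; // digit-only keypad
    case "phone":
      return "tel"; // phone keypad
    case "email":
      return "email"; // keyboard featuring @ and .
    case "url":
      return "url"; // keyboard featuring / and .com
    default:
      return "text"; // standard keyboard
  }
}

// In markup this corresponds to, e.g.:
//   <input name="zip" inputmode="numeric" pattern="[0-9]*">
```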

6. Use gamification to entertain

Gamifying the form-filling experience can motivate users to complete it. So, when you have an extensive form that needs filling and can’t be simplified, add elements like milestones, progress rewards, and personal messages to keep users entertained and motivated. Celebrate small wins when users complete sections and consider unlocking discounts, offers, or badges as users complete each step. It’s hard to be fatigued when you’re having fun.

7. Leverage post-signup emails

Preventing form fatigue can also happen by supplementing information in other ways. Use post-signup emails to collect information that isn’t imperative to registration. For example, a user’s birthday could come in handy for rewards later on, but it is better to collect it post-signup to prevent form fatigue.

Additionally, the email body can link the user to connect new apps to their account, access more discounts, watch tutorials, download resources, or contact their team.

Many SaaS companies also send emails from a real person to encourage users to respond if they have questions or need help. These personal follow-ups can also help recapture users who abandon the form initially.

To prevent form fatigue in UX design, focus on strategies that simplify and streamline the user’s form-filling experience. Remember, the goal is to make form completion feel easy and painless for the user.

Ready to eliminate form fatigue and boost conversions?

Form fatigue can quietly undermine your UX efforts, leading to missed conversions and frustrated users. However, with thoughtful research, clear design principles, and actionable strategies, you can create forms that not only engage users but also encourage them to complete the journey.

At The Good, we specialize in helping businesses like yours eliminate friction and create digital experiences that drive results. See this form improvement example from our work with Helium 10.

If you’re ready to optimize your forms and increase conversions, reach out to our team today. Let’s work together to turn your users into loyal customers.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post A Guide For Preventing Form Fatigue To Increase Conversions & Improve UX appeared first on The Good.

]]>
Which Rapid Testing Method Should I Use? https://thegood.com/insights/rapid-experimentation/ Wed, 18 Dec 2024 16:00:00 +0000 https://thegood.com/?post_type=insights&p=110108 “Research” often means “identify problems to solve.” But it can also mean “verify that proposed solutions actually solve problems.” The most buzzy way to get that validation is via A/B testing. But many don’t have the budget, appetite, time, or team to even get started. Enter: Rapid testing. Like A/B testing, rapid testing helps you […]

The post Which Rapid Testing Method Should I Use? appeared first on The Good.

]]>
“Research” often means “identify problems to solve.” But it can also mean “verify that proposed solutions actually solve problems.”

The most buzzy way to get that validation is via A/B testing. But many don’t have the budget, appetite, time, or team to even get started.

Enter: Rapid testing.

Like A/B testing, rapid testing helps you understand if your solutions are actually working.

Unlike A/B testing, rapid tests are fast, done with small sample sizes, and offer a level of qualitative insight not afforded via experimentation alone.

Rapid testing is no substitute for A/B testing, but it has a ton of applications:

  • Get a gut check when true A/B testing is not a viable option
  • Understand where new features might be confusing or unclear
  • Evaluate time-to-success and pass/fail rates of task flows
  • Narrow down your options from many to few when deciding what messages to test in the market

Think of it as your canary in the coal mine. A utility to mitigate the risk of feature flop.

In this article, we’ll explore what rapid experimentation is, its benefits, the types of rapid tests you can run, and when to use each. If you’re looking to de-risk your decisions and innovate faster, keep reading for a framework to get you started.

What is rapid experimentation?

Rapid experimentation or rapid testing refers to a collection of tactics we use to get quick feedback for operational decisions. This type of testing helps teams make agile decisions around design, copy, and other site elements.

Rapid experimentation is a lean approach to validating ideas, designs, or features in a quick, iterative manner. It focuses on qualitative insights and directional data.

Instead of waiting weeks for results, you can gather actionable insights in days or even hours. This method enables teams to:

  • Understand whether users grasp a new concept
  • Identify potential usability issues
  • Test multiple variations of an idea before committing to development

In short, rapid experimentation helps you answer the question: “Am I moving in the right direction?”

Why do teams use rapid experimentation?

Rapid experimentation delivers value in multiple ways, particularly for SaaS teams that need to move fast and make data-informed decisions.

While rapid testing uses less-qualified participants and smaller sample sizes than traditional A/B testing, the tradeoff is exponentially faster results. Rapid testing delivers value by:

  • Speeding up results: Unlike A/B testing, which can take weeks to produce reliable results, rapid tests can be designed, executed, and analyzed in days. This speed allows teams to iterate quickly.
  • Limiting the politics of A/B testing: Rapid test data, rather than executive opinions, informs which A/B tests get run.
  • Narrowing down many ideas: When you need to identify the best few ideas out of many, rapid testing is an efficient way to do so.
  • Lowering costs: Because rapid tests require smaller sample sizes and fewer resources, they’re accessible to teams with limited budgets.
  • Identifying problems early: Rapid experimentation helps uncover potential usability issues or misunderstandings before they’re baked into a feature or product. This can save significant rework down the line.
  • Increasing qualitative depth: Where A/B testing provides numbers, rapid tests provide context. Understanding the “why” behind user behavior can inform better solutions.
  • De-risking decisions: By testing ideas early and often, teams can reduce the risk of releasing features or products that fail to meet user needs.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox.

What are the types of rapid tests?

Rapid experimentation is not a one-size-fits-all process. Different scenarios call for different types of tests.

Here are some common methods:

Task Completion Analysis

Task completion analysis allows us to quickly test new ideas to understand time-on-task and success rates.

Typically, users are asked to complete a specific task, such as signing up for a trial or finding a key feature. Teams observe where users struggle and measure success rates, time-to-completion, and drop-off points.
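Scoring such a test can be as simple as computing a success rate and a median time-on-task. A minimal sketch, assuming a simple record per attempt (the data shape and numbers are illustrative):

```typescript
// One recorded attempt at a task; this shape is a simplifying assumption.
interface Attempt {
  completed: boolean;
  seconds: number; // time-on-task
}

// Fraction of attempts that succeeded.
function successRate(attempts: Attempt[]): number {
  if (attempts.length === 0) return 0;
  const done = attempts.filter((a) => a.completed).length;
  return done / attempts.length;
}

// Median time among completed attempts only, since abandoned
// attempts would skew the timing data.
function medianTimeToComplete(attempts: Attempt[]): number {
  const times = attempts
    .filter((a) => a.completed)
    .map((a) => a.seconds)
    .sort((x, y) => x - y);
  if (times.length === 0) return NaN;
  const mid = Math.floor(times.length / 2);
  return times.length % 2 === 1 ? times[mid] : (times[mid - 1] + times[mid]) / 2;
}

const attempts: Attempt[] = [
  { completed: true, seconds: 42 },
  { completed: true, seconds: 65 },
  { completed: false, seconds: 120 }, // drop-off
  { completed: true, seconds: 51 },
];
// successRate(attempts) → 0.75; medianTimeToComplete(attempts) → 51
```

The median is used deliberately here; with the tiny samples typical of rapid tests, one slow participant can badly distort a mean.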

First-Click Tests

First-click tests evaluate whether users can intuitively find the primary action or information on a page. Participants are given a task and asked to click where they think they should start. This is ideal for evaluating navigation or CTA placement.

Tree Test

Tree testing is a usability technique that helps you understand how users navigate through your website or app’s structure. It focuses on how well people can find information within a system.

By stripping away visual elements and focusing solely on the structure (the “tree”), you can identify whether the content organization makes sense or if users are getting lost.

Sentiment Analysis

Sentiment analysis lets us preview how users might respond and react to a treatment. It allows us to evaluate user emotions and opinions about a product or experience. Typically, feedback is collected through surveys, reviews, or user interviews, and responses are analyzed to identify positive, neutral, or negative sentiments. Teams use this data to uncover pain points, gauge satisfaction, and prioritize improvements.

5-Second Tests

5-second tests assess a user’s immediate impression of a design or message. They show participants an interface or design for five seconds and then ask what they remember or understand. This is great for identifying which value propositions or headlines are most memorable.

Design Surveys

Design surveys collect qualitative feedback on wireframes or mockups. They can help validate designs before investing in development to implement them on your site.

Preference Tests

This test involves showing users two or more design variations and asking which they prefer and why. It’s perfect for narrowing down visual or messaging options before launching a formal test.

Card Sorting

Card sorting is a research technique used to understand how users organize and categorize information. You present participants with a set of cards, each representing a piece of content or functionality, and ask them to group these cards in a way that makes sense to them.

This process reveals how people naturally think about and structure information. It lets you uncover insights into how users might intuitively organize menu items, product categories, or any other structured content on your site. Ultimately, this helps you design a website or app that aligns with their expectations.

These are just six of the many types of rapid experimentation.

How to choose the right method for your scenario

With so many options, it can be challenging to know which rapid testing method to use in a given situation. Each method has strengths and weaknesses, and choosing the wrong one can result in wasted effort or inconclusive results.

If you’re interested in getting started with rapid testing but aren’t sure which method is right for your scenario, we devised a simple way to narrow down the options.

A framework developed by The Good for determining which rapid testing method to use.

In this decision tree, you can ask questions to help understand which rapid testing method best suits your needs.

A few caveats:

  • There are more methods than are covered here; this is just a sample
  • Test types can be used in combination in some instances, and
  • There are always exceptions to the rule

There’s no substitute for experience, but if you’re just getting started with this kind of research, I hope this gives you a head start.

Using this framework ensures you select the method best suited to your goals, saving time and effort while delivering more meaningful results.

The Telegraph used rapid testing to increase registrations

So, what might rapid testing look like in action?

During a Digital Experience Optimization Program™, we worked with The Telegraph to improve their paywall experience as a part of their goal to reach a million subscribers.

As the first part of any DXO Program™, our team conducted a thorough audit of the end-to-end customer experience to uncover the biggest barriers and opportunities for conversion. Once we had the research plan and were armed with a strategic roadmap, it was on to the next phase of the program: taking hypothesized improvements and testing them with The Telegraph’s ideal audience to confirm they would move the needle before investing in implementation.

Thanks to rapid testing, we were able to design, test, and decide on the first phase of implementations in a matter of days.

One rapid test we ran for The Telegraph assessed site banner color and layout. When shown two banner variants, visitors had a clear preference: 78% of participants found content easier to read against a yellow background. Recall tests also showed visitors were more likely to remember key details in this variant, further supporting it as the preferable option.

Two banner variants used in a rapid test The Good conducted for The Telegraph.
Two banner variants we ran for The Telegraph; the yellow was the winner.
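If you want a quick sanity check that a preference split like 78% vs. 22% isn't just noise, a standard normal approximation to the binomial test works. The participant count below is an assumption for illustration, since the article doesn't state the sample size:

```typescript
// z-score of an observed preference share against the 50/50 null
// hypothesis, using the normal approximation to the binomial.
function preferenceZScore(prefer: number, total: number): number {
  const p = prefer / total;
  const se = Math.sqrt(0.25 / total); // standard error under p = 0.5
  return (p - 0.5) / se;
}

// Hypothetical: 50 participants, 39 preferring the yellow banner (78%).
const z = preferenceZScore(39, 50);
// |z| > 1.96 corresponds to p < 0.05 (two-sided) under this approximation.
const clearPreference = Math.abs(z) > 1.96;
```

With the small samples typical of rapid tests, treat this as directional evidence rather than proof; the qualitative "why" behind the preference matters as much as the number.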

We ran over 20 similar tests to assess cookie notification placement and design, desktop and mobile paywall presentation, brand headlines, offer messaging, and more. Each test leveraged the method relevant to the hypothesis we hoped to validate with experimentation. We chose the testing methodology using a similar thought process to the rapid testing decision tree framework shared earlier.

And the best part? We did this in just a few weeks, something that would have been impossible to accomplish via A/B testing due to resource constraints. David Humber, Head of Conversion at The Telegraph, also credits the efficiency and effectiveness of the rapid tests to having a team of external experts come in. “You do less spinning of the wheels because you’re having somebody come in that’s got this additional expertise as their bread and butter.”

Overall, identifying small wins in numerous places added up to a significant impact for The Telegraph in both improved metrics and an understanding of the customer.

Upskill your team with external support

While rapid experimentation is a powerful tool, getting started can feel overwhelming. How do you design effective tests? What metrics should you measure? And how do you ensure your insights lead to meaningful improvements?

This is where The Good can help. Our team specializes in UX research and digital experience optimization for SaaS companies. From designing and executing rapid tests to implementing insights, we’re here to guide you every step of the way. With our proven frameworks and expertise, you can:

  • Validate ideas faster and more effectively
  • Reduce the risk of feature flop
  • Build a culture of experimentation within your team

Ready to get started? Contact us to learn how we can help you make better decisions faster.

Find out what stands between your company and digital excellence with a custom 5-Factors Scorecard™.

The post Which Rapid Testing Method Should I Use? appeared first on The Good.

]]>
Directional Guidance: What It Is And How To Improve It https://thegood.com/insights/directional-guidance/ Mon, 26 Feb 2024 19:56:12 +0000 https://thegood.com/?post_type=insights&p=106975 On any road, hundreds of visual and sensory cues offer directional guidance. Speed bumps signal the driver to slow down, the rest stop sign reminds them they can take a break on a long trip, and rumble strips alert when they’re swerving into dangerous territory. Curbs keep drivers and pedestrians on separate sections of the […]

The post Directional Guidance: What It Is And How To Improve It appeared first on The Good.

]]>
On any road, hundreds of visual and sensory cues offer directional guidance.

Speed bumps signal the driver to slow down, the rest stop sign reminds them they can take a break on a long trip, and rumble strips alert when they’re swerving into dangerous territory. Curbs keep drivers and pedestrians on separate sections of the road, while curb cuts offer an optional designated crossing area.

All of these indicators intuitively keep us on the right path while occasionally offering alternative routes or opportunities we may not have been thinking of. Similarly, in digital experience design, directional guidance nudges users on a path toward their end goal.

It helps users find what they are looking for by adding visibility to elements that will increase their motivation or intent. It displays compelling options of where the user can go next.

The placement of specific website elements can either guide a user toward their ideal product or take them off track. Strategizing and keeping directional guidance top of mind as you optimize can help direct users through a complicated digital journey.

What is directional guidance in UX?

Directional guidance in UX is a sum of strategies, tactics, or elements that optimization experts implement to help users accomplish a specific goal on a website.

It’s an umbrella term that encompasses anything put on a user’s path to help them find what they want.

“Directional guidance doesn’t just increase the findability of what a user knows they want, it increases the discoverability of things a user didn’t even know they wanted. It adds utility across a website so users can build a mental model of what is in the company’s catalog and how to get there.”

Natalie Thomas, Director of Digital Experience Optimization & UX Strategy at The Good

In digital experience design, directional guidance can be as direct as a clear call to action and easy-to-use navigation or as indirect as personalizing recommendations and surfacing relevant content. It’s like a personal website concierge telling you, “right this way.”

Directional guidance is one of The Good’s six Heuristics for Digital Experience Optimization™. They are:

  1. Priming & Expectation Setting
  2. Trust & Authority
  3. Ease
  4. Benefits & Unique Selling Points
  5. Directional Guidance
  6. Incentives

5 ways to improve directional guidance (with examples)

So, what are some specific ways to improve directional guidance on your website?

Of course, we have to caveat that you should develop strategies relevant to your specific goals and users, but here are a few ideas to get the wheels turning.

1. Add suggested products to predictive search

Increasing the use of search is a great way to encourage intentional browsing, but often users need a helping hand to guide them to relevant products or pages.

Featuring popular or relevant products in search suggestions can improve product discoverability, increase the helpfulness of search, and help users quickly and easily navigate the site.

suggested product in predictive search to improve directional guidance in a website

Deeper customizations might include featuring different products based on user segment, search terms entered, seasonality, or the user’s geographic area.

2. Change sort order

Sort orders often default to standard settings that don’t support user goals.

Testing alternative default sort orders (by popularity, by price) can help users quickly discover the products that are right for them and improve directional guidance.

For example, we tested sorting products by featured rather than families to improve the visibility of product listings and increase engagement on category pages.

This resulted in an 8.5% increase in conversion rate.

filtering options in an online shoe brand's website

3. Add quick links

Even within a well-organized menu, users can struggle to navigate to relevant information quickly.

Showcasing what actions users can take with “quick links” promotes directness toward relevant pages.

exposed categories on website

Improving directional guidance with quick links can encourage deeper page depth from paid ads, decrease bounce rates, and lead to increased transactions.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox every week.

4. Improve visual hierarchy of mobile menu

A key principle of visual design, visual hierarchy, is crucial to improving directional guidance.

Many websites have poor content hierarchy, particularly in mobile navigation. Users can suffer from indecision without the guidance of a well-organized and directional menu.

Separating shopping-focused links from other kinds of content in the navigation can increase directional guidance and decrease distraction for would-be shoppers.

winning test for visual design hierarchy

For one of our clients, re-ordering the popular navigation menu links in line with the primacy and recency effect positively impacted conversions, leading to over $4MM in annual revenue gains.

5. Add sticky elements

Another way to improve directional guidance is to make key elements sticky and, therefore, easily accessible to users as they browse.

Sticky CTA

When users are considering a product, they will scroll through product detail pages to find details that make them more confident to purchase. Adding a sticky CTA, like a sticky add-to-cart or buy button, lets users add the product to their cart the moment they decide.

Adding a sticky CTA is great for brands with longer PDPs, specifically increasing engagement with product details, reducing friction, and increasing adds to cart.

example of sticky CTA to improve directional guidance in an app

Sticky search

High-intent users often have an idea of what product they are looking for, and search users generally convert 5-7x better than non-search users. Using search can help them quickly find the product they have in mind.

So, making the search bar sticky improves directional guidance by encouraging use of search and helping guide shoppers to right-fit products.

This is especially true for sites that have a large amount of high-intent users, many SKUs, or sites where users primarily navigate with the search bar.

screenshot of sticky search bar on a website

What are examples of poor directional guidance?

Now that you have five ways to improve directional guidance, what are some signs that your site suffers from poor directional guidance?

Low visibility or low discoverability of items: If your items or products are hidden behind multiple clicks or proverbial corners, your users can’t find what they are looking for (or discover something they don’t know they need!)

Unclear system status: If there is a break in communication between the computer or digital product and the user, then you have poor directional guidance. For example, giving an error message right when a user starts typing their password before they have even clicked ‘login.’

Content fatigue: When your site has too much content (text, images, links, etc.), the user might not find the one product that is meant for them which will trigger a purchase decision.

Confusing or unclear language: Speaking in brand language that isn’t clear to the user removes information scent and prevents them from moving down the funnel.

These are just a few things to look out for in your user testing and research to uncover poor directional guidance in your digital experience.

Is wayfinding the same as directional guidance?

At this point, UX practitioners may be asking themselves, “how is this different from wayfinding?”

Wayfinding falls under the umbrella of directional guidance but is not the same thing.

To differentiate the terms, our team often uses the analogy of an airport. Wayfinding is like hanging signs in the airport. Signs are helpful, but imagine if all the information you needed in an airport were on signs alone. You wouldn’t know what to read or look at next.

Instead, airports analyze foot traffic and incorporate strategic pathways, seating areas, audio cues, and symbols to both get you where you need to go AND offer helpful stops along the way. This is directional guidance.

Wayfinding is the signs and cues pointing you to your gate, while the directional guidance might be a water fountain and bathroom along the way for a convenient stop before your flight. Things you may not have realized you needed, placed strategically to help you uncover your needs.

For digital experience design, it is similar.

“Wayfinding is about navigation, while directional guidance is about having the right information on the page, in the right place, so that users know what to buy or sign up for.”

 Maggie Paveza, User Experience and Optimization Strategist at The Good

Are directional cues the same as directional guidance?

Another common mistake is treating “directional cues” as interchangeable with “directional guidance.”

Directional guidance and directional cues work together to keep the user on their path, but directional cues specifically are visual hints that guide a user to the most important elements on the screen.

There are explicit directional cues and implicit directional cues, including:

  • Explicit directional cues:
    • Eye gaze
    • Arrows
    • Gesturing or pointing
    • Object positioning
    • Lines
  • Implicit directional cues:
    • Contrasting colors
    • White space
    • Visual hierarchy
    • Framing or encapsulation

Again, directional guidance is the umbrella term, and directional cues may support that overarching goal of guiding the user to where they need to go.

The importance of directional guidance in digital experience optimization

The job of product marketing and ecommerce leaders is to guide the user to the best-fit product for them. Directional guidance is the umbrella term for doing just that.

It’s a combination of many strategies, tactics, or elements, including wayfinding, feature discovery, merchandising, information architecture, bundling, navigation, and more. Finding the right way to make these all work together for your user is the key to optimization.

It can be a lot to accomplish without an external, user-centered POV. If you’d like support in your efforts, contact us.


The post Directional Guidance: What It Is And How To Improve It appeared first on The Good.

]]>
How (And When) To Use a Sticky Add to Cart Button For Improved User Experience https://thegood.com/insights/sticky-add-to-cart/ Fri, 05 Aug 2022 14:32:54 +0000 https://thegood.com/?post_type=insights&p=100436 Brands want to make it as easy as possible for customers to shop on their mobile devices. Browsing through products on a tiny screen can be frustrating, not to mention it can be time-consuming to scroll all the way back to the top of a page just to add a product to the cart. Enter […]

The post How (And When) To Use a Sticky Add to Cart Button For Improved User Experience appeared first on The Good.

]]>
Brands want to make it as easy as possible for customers to shop on their mobile devices. Browsing through products on a tiny screen can be frustrating, not to mention it can be time-consuming to scroll all the way back to the top of a page just to add a product to the cart.

Enter the sticky add to cart button.

This fun-sounding feature provides a simple solution for ecommerce brands wanting to convert more sales on mobile. We’ve tried and tested it with a number of brands to bring you this comprehensive guide to the sticky add to cart button.

What is a sticky add to cart button?

The sticky add to cart button is a design element ecommerce brands use to keep the add to cart button visible on a mobile screen while customers are browsing. It can hover near the top or the bottom of the screen to give shoppers the chance to tap it wherever they are on the page.

While sticky add to cart buttons can increase conversions on desktop sites, we’re diving deep into the mobile version for the purpose of this piece. This is because sticky add to cart buttons are particularly useful on smaller screens where it’s easy to lose the add to cart button.

Screenshot example of Touchland's online mobile version with a sticky add to cart button.

Hand sanitizer brand Touchland implements a sticky add to cart bar on its mobile site but not on the desktop version.

Screenshot example of Touchland's desktop version without a sticky add to cart button.

Sticky add to cart buttons can include:

  • Simple, action-focused copy (i.e. “add to cart”)
  • Branded colors
  • The name of the product
  • An image of the product

The importance of mobile conversion optimization for ecommerce brands

79% of smartphone users purchased a product through their mobile device in the last six months of 2021. An increasing number of shoppers are turning to their handheld devices to browse products on the go, but to turn window shoppers into real customers, store owners have to make it easy to purchase.

In the past, shoppers happily browsed products on their mobile devices but would turn to their desktops when they wanted to make a purchase. This is changing as an increasing number of ecommerce brands implement mobile-friendly features on their site. To stand out and enjoy the lion’s share of sales, brands need to optimize the mobile version of their sites and make it as easy as possible for customers to purchase mid-browse.

Sticky add to cart buttons provide an effective solution since they are a consistent reminder that shoppers can make a purchase directly from their mobile. This ultimately improves the customer experience to increase sales.

Should you use a sticky add to cart button?

You should always start your conversion optimization program with quantitative and qualitative research. This will lead you to data-backed testing ideas, rather than just assumptions and intuition.

Here are a few things you can try to determine whether a sticky add to cart test would be likely to show a lift in conversions on your site:

  • User testing: Ask users that fit your target customer profile to browse your mobile site and provide their feedback. Identify pain points and consider if a sticky add to cart button might solve them.
  • Heat and scroll mapping: Use a heatmap tool to see where users are dropping off on your product page. If there’s a high drop-off, a sticky add to cart may help encourage users to scroll further.
  • Analytics: Data is your friend when it comes to any kind of A/B testing. If you already have a sticky add to cart button and lots of people are clicking it but not proceeding with the sale, you know there’s a point of friction further along the checkout process that’s increasing cart abandonment.
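The click-versus-completion check described in the analytics point reduces to a small calculation. The function and the threshold below are illustrative assumptions, not industry benchmarks — set the threshold from your own historical data.

```typescript
// Hypothetical sketch: flag likely checkout friction from analytics counts.
// If many shoppers click add to cart but few complete a purchase,
// the friction probably sits later in the checkout flow.

function checkoutFriction(
  addToCartClicks: number,
  purchases: number,
  threshold = 0.25 // illustrative cutoff, not a benchmark
): { completionRate: number; frictionSuspected: boolean } {
  const completionRate = addToCartClicks === 0 ? 0 : purchases / addToCartClicks;
  return { completionRate, frictionSuspected: completionRate < threshold };
}
```

With 1,000 add to cart clicks and only 100 purchases, the 10% completion rate falls below the cutoff and the friction flag trips.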

One particular client of ours increased their conversions by over 4% simply by making their add to cart button sticky.

The plant-based food brand used product pages that featured robust content about their product benefits – giving users a lot of valuable information but requiring a lot of scrolling. This meant that if users did engage with the product page content, they didn’t have an easy way to return to the top.

We implemented a sticky add to cart button on product pages to increase the visibility of purchasing actions and increase engagement on the page.

Screenshot example of the winning test: a sticky add to cart button on the product detail page.

The test included a control and a variant. The control version didn’t include a sticky add to cart button, but the variant displayed a different sticky add to cart button depending on the action a shopper was taking:

  • Scroll down: Showed sticky add to cart button in a contrasting color
  • Scroll up: Showed sticky add to cart button and the menu

If a shopper hadn’t selected a size, the sticky add to cart button included the product name, star rating, and review count. If they had chosen a size, the button included all of the above plus the product price.
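The variant's display logic can be sketched as a pure function. The field and function names below are illustrative assumptions — the actual test was built in an A/B testing tool, not hand-rolled like this.

```typescript
// Sketch of the variant's behavior described above. Names and shapes
// are illustrative assumptions, not the client's actual code.

type ScrollDirection = "up" | "down";

interface StickyBarState {
  contrastingColor: boolean; // scroll down: button in a contrasting color
  showMenu: boolean;         // scroll up: button plus the menu
  label: string[];           // content shown on the bar
}

function stickyBarState(
  direction: ScrollDirection,
  product: { name: string; rating: string; reviews: string; price: string },
  sizeSelected: boolean
): StickyBarState {
  const label = [product.name, product.rating, product.reviews];
  if (sizeSelected) label.push(product.price); // price appears once a size is chosen
  return {
    contrastingColor: direction === "down",
    showMenu: direction === "up",
    label,
  };
}
```

Keeping the decision logic pure like this makes each branch of the variant easy to unit test before it ever reaches live traffic.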

The variant produced a 4.09% lift over the control version. Based on the lift in per session value, we estimate that implementing the winning variation would produce noticeable revenue gains for the online store.

5 elements to consider when creating a sticky add to cart button

Sticky add to cart buttons have the potential to pump up your conversion rates – but, like any design element, there’s a knack to getting them right. Ultimately, what works for one brand might not work for another and vice versa, so we highly recommend performing user testing and also A/B testing your sticky add to cart buttons to find one that performs well.

Before you commit to one approach, consider these important elements:

  • The device: Will you be implementing a sticky add to cart on mobile, desktop, or both? The design of your button should vary depending on the device (we highly recommend a sticky add to cart for your mobile site)
  • Customer intent: What do shoppers want to do on the product page? Is there content they might want to read rather than be greeted with a very obvious “buy now” button?
  • Button text: What information do shoppers need to know before they add a product to their cart? Can you add it to the sticky button? We recommend including some persuasive elements like review count and star ratings, but don’t cram your button with information
  • Where it’s shown: Does it activate on scroll up or scroll down? If you have a robust navigation, make sure the button isn’t taking up too much valuable screen real estate alongside other sticky navigational elements. Also decide at what point on the page users should first see the sticky add to cart – we like to show it after they’ve scrolled past a certain amount of content, for example past the buy box area.
  • Other checkout elements: Sticky add to cart buttons can bolster other checkout elements – we added the shipping time near a sticky add to cart button for our client Wheelership which worked really well for them

There are a number of sticky cart apps in the Shopify App Store that let you create and customize a floating bar for your site or use one of the existing templates.

Enjoying this article?

Subscribe to our newsletter, Good Question, to get insights like this sent straight to your inbox every week.

How to test sticky add to cart buttons

Sticky add to cart buttons are very flexible. There are a ton of tools and apps you can use to quickly implement one on your site but, to ensure the most success, you should A/B test your buttons.

Start with a control version that doesn’t include a sticky add to cart button and create a variant or variants that include different elements. Here are some of the elements you can change and experiment with during the testing phase:

  • Colors: Test different colors, including neutral colors and colors that contrast with your brand
  • Copy: Play around with the text on your buttons
  • Elements: Experiment with including persuasive content like star ratings, review count, price, and a product photo
  • Incentives: Try adding an incentive, like free shipping, or entice shoppers with an upsell

You can also change the button to match the shopper’s intent. For example, with the client we mentioned above, we implemented a different colored button with different text depending on whether a shopper was scrolling up or down the page. We also changed the copy on the button to include the price if the customer had already selected a size.

Let’s take a look at a sample sticky add to cart A/B test with two different variants.

  • Control: No sticky add to cart
  • Variant #1: Sticky add to cart in the same color as the website header that includes the product name and price
  • Variant #2: Sticky add to cart in the same color as the website header that doesn’t include the product name and price

If variant #1 outperforms variant #2, you can run another test with an additional set of variants:

  • Control: No sticky add to cart
  • Variant #1: Sticky add to cart in the same color as the website header that includes the product name and price
  • Variant #2: Sticky add to cart in a contrasting color that includes the product name and price

Again, if variant #1 outperforms variant #2, you can assume that shoppers prefer a branded add to cart button. From there, you can continue to test the copy and the included elements until you find a variant that outperforms all the others.
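Consistent bucketing matters for sequential tests like these: a returning visitor should always land in the same bucket. A minimal sketch of deterministic, hash-based assignment — hypothetical, since in practice your A/B testing tool handles this for you:

```typescript
// Deterministic visitor bucketing (illustrative sketch). The same
// visitor ID always maps to the same bucket across sessions.

function hashString(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // simple unsigned 32-bit rolling hash
  }
  return h;
}

function assignBucket(visitorId: string, buckets: string[]): string {
  return buckets[hashString(visitorId) % buckets.length];
}

const buckets = ["control", "variant-1", "variant-2"];
```

A design note: hashing a stable visitor ID (rather than randomizing per page view) is what keeps the experience consistent and the test results clean.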

8 examples of sticky add to cart button designs from mobile ecommerce sites

Ecommerce sites are quickly wising up to the conversion gains of sticky add to cart buttons. We’ve put together a list of some of the top brands that are implementing these features in their own unique way.

1. Gymshark

Gymshark changes the copy depending on where the shopper is located. For UK customers, the call to action says “add to bag” and for US shoppers it says “add to cart”.
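Locale-dependent copy like this reduces to a simple lookup with a safe default. The mapping below is an illustrative assumption, not Gymshark's actual code.

```typescript
// Illustrative sketch: regional CTA copy with a fallback.

const ctaByRegion: Record<string, string> = {
  GB: "Add to bag",  // UK shoppers
  US: "Add to cart", // US shoppers
};

function ctaCopy(region: string, fallback = "Add to cart"): string {
  return ctaByRegion[region] ?? fallback; // unknown regions get the default
}
```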

2. Skims

Skims’ sticky add to cart button is more of a banner that runs the full length of the page. It’s branded to fit in seamlessly with the rest of the site.

Screenshot example of the brand, Skim's "Add To Bag" button.

3. Bailey Nelson

Eyewear brand Bailey Nelson features its checkout button at the top of the page. The copy includes the name of the product, the color, and the price.

Screenshot example of the brand, Bailey Nelson's checkout button at the top of the page with a light purple "Select & Buy" button.

4. Holland Cooper

Holland Cooper has a much larger sticky bar than most sites and it only appears when a customer has scrolled almost all the way to the bottom of the page. The banner encourages shoppers to “make a selection” and the copy includes the name of the product, the color, and the price.

Screenshot example of the brand, Holland Cooper's light gray "Please Make a Selection" sticky bar.

5. Lazy Oaf

Lazy Oaf’s sticky buy button blends in perfectly with the monochrome color scheme of the site. It’s strategically placed above another sticky button that asks shoppers if they want 10% off.

Screenshot example of the brand, Lazyoaf's "Add to bag" button design.

6. Ruggable

Ruggable’s quick buy button is a banner at the bottom of the page. It’s a bright yellow that contrasts with the site’s more neutral blue branding.

Screenshot example of Ruggable's bright, bold and yellow add to cart button.

7. Caraway

Caraway’s popup add to cart button allows shoppers to select the color of their chosen product from a dropdown menu wherever they are on the page.

Website screenshot of Caraway's popup add to cart button for selecting product color options.

8. Greetabl

Greetabl actually uses its sticky add to cart button to promote an upsell when a customer already has an item in their cart. Shoppers have the option to “add bonus” or “skip bonus”.

Website screenshot of Greetabl additional bonuses at checkout.

Increase conversions with a sticky add to cart button

Convenience is key for mobile shopping. Customers should be able to add items to their cart without having to scroll all the way back up the page. Sticky add to cart buttons make this possible, improving the user experience and providing ample opportunity for shoppers to convert.

Start by testing different variants to find one that works best for your brand and continue to tweak and test until you reach your desired conversion rates. If you haven’t already, test out a sticky add to cart on your mobile site. And, if you’d like a helping hand, consider exploring the Digital Experience Optimization Program™ for customized, actionable recommendations for your brand.

Now It’s Your Turn

We harness user insights and unlock digital improvements beyond your conversion rate.

Let’s talk about putting digital experience optimization to work for you.

The post How (And When) To Use a Sticky Add to Cart Button For Improved User Experience appeared first on The Good.

]]>
Understanding The UX Research Process: A Guide For Ecommerce Brands https://thegood.com/insights/ux-research-process/ Thu, 14 Apr 2022 20:30:51 +0000 https://thegood.com/?post_type=insights&p=98453 Understanding the user experience is crucial in knowing how your customers interact with your site and the journeys they take to purchase. Without this information, you’re essentially flailing around in the dark and potentially losing out on conversions and sales.  But beginning to understand the experience means digging deep into data and leaning on UX […]

The post Understanding The UX Research Process: A Guide For Ecommerce Brands appeared first on The Good.

]]>

Key Takeaways

By the end of this article, you should have the knowledge and resources to “check the box” in these areas:

  • Understand the UX research process and its power in building a better customer experience
  • Leverage a series of research methods that span the different stages of the UX research process
  • Use UX research to improve your ecommerce website and make the internet a better place

Understanding the user experience is crucial in knowing how your customers interact with your site and the journeys they take to purchase. Without this information, you’re essentially flailing around in the dark and potentially losing out on conversions and sales. 

But beginning to understand the experience means digging deep into data and leaning on UX research methods to discover how shoppers really feel when they’re on your site. 

Often, it’s not enough to just take their word for it. Instead, you can combine a handful of research methods that bring to light their true behavior through cold hard facts (analytics) and unique perspectives (getting the information directly from your customers). 

The UX research process: an overview

The user experience research process is traditionally made up of four distinct stages: discover, explore, test, and listen. Here at The Good, we break these out even further to better meet the needs of ecommerce leaders. 

We’ll dive deeper into each step in our process later in the article, but for now, here’s an overview. 

  1. Set goals: Before jumping into ‘discover’, we recommend level setting on the project by clearly outlining your goals.
  2. Establish context: Here you begin the discovery process, uncovering the information you don’t already know and eliminating any assumptions you might be making about shoppers and their behavior.
  3. Research: Next, execute your research to understand the problems and friction points your customers face. This is our version of the ‘explore’ phase.
  4. Synthesize: An extension of your research or ‘explore’ phase, organize your findings and pinpoint ways to address your user needs.
  5. Optimize: This is our version of the ‘test’ and ‘listen’ phases of the UX research process. Begin to implement tests and tweak/optimize your efforts through an iterative cycle.

There are multiple methods that can be used throughout these stages, which we’ll discuss later on. But when used correctly and strategically, this process can reap huge rewards: research shows that for every $1 you invest in UX, you can bring in $100 in return. On top of this, a website that has been designed with user intent in mind can raise conversion rates by 200% – and, in some cases, by up to 400%.

Why UX research is so important for ecommerce brands

Aside from the dramatic increase in ROI, UX research is paramount for ensuring every decision you and your team make is backed by data. The more you know, the more strategic and laser-focused you can be with your design and every element you put on each page. 

As a result, every effort you take will be far more effective because you’re making design decisions based on actual fact rather than what you think customers want or a vague gut instinct. 

The most important thing is to remain consistent with your UX efforts. The design research process isn’t a one-and-done situation; it’s an ongoing system that improves your site and its conversion rate incrementally with every new finding and every new insight. 

The best part is you’re creating a site that you know shoppers will enjoy using and will be successful at using. 

The focus of this piece is on how ecommerce brands can use UX research to tighten up the sales cycle and improve the customer journey. Digital experience optimization at its core is the marrying of user goals and business goals. Your research will show you the areas for improvement on your site, and meet user needs in the process. This will lead to more sales, happier customers, and increased retention rates. 

We’ll be deep-diving into the types of user research methods we use and explain how each one fits into the overall process and what outcomes it can generate. We’ll also highlight some of the UX research tools we use to get there. 

Our UX research process for ecommerce brands: a guide 

Let’s take a look at how we uncover key website optimization opportunities for clients. 

We use primary research to glean insights into the customer journey, combining qualitative and quantitative research methods to gain a 360-degree perspective of shoppers.

  • Quantitative: analytics data, heatmap data, customer surveys
  • Qualitative: user testing, heuristic analysis, comparative or competitive reviews 

Here’s how that looks in action.

5 steps in our user experience research process

Step 1: Define goals, constraints, and requirements

It’s crucial to start your research process with a goal setting exercise. If you don’t know why you’re here, your research will be scattered. 

Define your goals

Goals will typically fall into quantitative or qualitative buckets. Quantitative goals are measurable while qualitative goals are more subjective.

Common ecommerce quantitative research goals include: 

  • Improve conversion rates by a certain amount
  • Decrease customer acquisition cost by a specific percentage
  • Increase average order value by a dollar amount
  • Increase return visits to your site
  • Decrease cart abandonment by a certain percentage
  • Improve customer satisfaction scores

Common qualitative ecommerce research goals include less measurable ideas such as: 

  • Improve our homepage
  • Better represent our company values in the path to purchase
  • Make our content easier to manage

List Constraints or Requirements

After you put together your goals, look at them and ask yourself: “what else do I need to consider to accomplish this goal?” This usually produces a list of constraints or requirements.

Often a goal has additional considerations like technical limitations or objectives from the leadership team that will impact the research and optimization process. 

Establish Key Research Questions

To wrap up step one, brainstorm and establish your key research questions and assumptions.

What are you trying to find out? What questions do you hope you’ll answer through this UX research process?

Step 2: Define the context

Next up in your UX research process is to define the context of your situation by looking at top channels, user demographics, and journeys.

You probably have existing traffic to your site, so to set the stage for your research you’ll need to consider where those visitors come from and how they are behaving. With Google Analytics you can uncover a wealth of data to inform this step in the process.

As you review data, you should be able to: 

  • Understand who is coming to your site: Take a look at demographics like age and gender to build a more realistic user profile. Then review how they landed on your page – was it via a social media ad? Organic traffic? This is key to understanding the customer journey.
  • Understand the context or mindset of your shoppers: There are often key ideas you’ll need to keep in mind as you conduct research. For example, are they comparison shopping from Google? Did they see an ad and are visiting your site for the first time? Make sure you have enough context before moving on. 
  • Identify user problems or challenges: With your data, you should be able to home in on any points of friction. For example, is there a steep drop-off at checkout? Or do users tend to disappear after checking out your product page?

Step 3: Do The Research

The most important step in the UX research process is, of course, actually doing the research!

You’ve established your research goals, context, and questions, so now you can create a plan for each and begin the execution.

Let’s take a few research questions as examples, and analyze how we might put together a plan for each. 

Research Question 1: What words and images are users engaging with? 

Your site is basically a digital storefront. When someone walks into a brick-and-mortar store, they want it to look nice and to make it easy to find the products they’re looking for. It’s the same for an online store.

To gauge the effectiveness of the content already on your site – to see what’s working and what’s not – consider using observational research.

Observational Research: 

  • Click, movement, and scroll maps: use tools to determine which parts of the page users focus on the most, which elements they click, and how much of the page they read
  • Over the shoulder observation: watch and track how visitors actually use your site, drawing a map of their journey to identify any sticking points 

Knowing which media elements are most effective means you can build pages that are conversion-driven with every scroll.

Research Question 2: What stands in between user desire and fulfillment? 

Your website might be the prettiest site in the world, but if it’s not converting, there’s something wrong. When we look at conversion effectiveness, we’re essentially checking to see how visitors use your site.

User Feedback: 

  • Surveys & polls: send a set of research questions to a segment of your audience to identify their attitudes and preferences
  • User testing: run user tests with a handful of people (who aren’t necessarily customers) to get their feedback on what they like and dislike about your site. Usertesting.com and UserInput.io are two great tools for this step
  • Reviews theming: analyze the reviews that customers have left and thematically group them to find areas for improvement

Research Question 3: What parts of our customer experience are out of date? 

Customer preferences change over time, and what was once a slick website and fulfillment process can quickly become stale in the wake of new technologies, like up-and-coming payment methods and new communication channels. Test out the “stickiness” of your site and fulfillment process to see if there are any points that might cause customers to drop off or not come back. 

In this scenario, heuristic research could be a great fit. 

Heuristic Research:

  • Heuristic analysis: gather a small group of trained professionals to analyze the usability of your site using heuristic principles
  • Comparative and competitive analysis: analyze competitor sites to find industry trends 
  • Test purchase and unboxing analysis: go through the entire fulfillment process from start to finish to pinpoint any problem areas or places that could be improved

Experiencing a customer journey from start to finish helps pinpoint any problem areas or places that could be improved. 

Step 4: Synthesize Your Results 

Once your research is complete, take a breath and take a step back. 

It’s time to group your research findings by theme so that you know what to address, and in what order. We recommend sticking all of the challenges you uncovered onto the wall (using post-its, or whatever you have on hand), and beginning to separate them by theme. 

Once you have meaningful groups, you can put together an optimization roadmap.

The process of creating an optimization roadmap is essential because it requires you to define your goals, align with stakeholders, and assess priorities and risks. It’s not just about outlining a testing schedule, though that is a key part of the process. 

At the end of this step, you’ll have a list of testing concepts in their prioritized order so that you’re ready to get started optimizing.  
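One common way to produce that prioritized order is a simple scoring model such as ICE (impact, confidence, ease). The sketch below is a generic illustration of that idea, not The Good's internal prioritization model.

```typescript
// Generic ICE prioritization sketch: score each testing concept on
// impact, confidence, and ease (each 1-10), then sort descending.

interface TestConcept {
  name: string;
  impact: number;     // expected effect on the goal metric
  confidence: number; // how sure you are, based on research
  ease: number;       // how cheap/fast it is to build and run
}

function prioritize(concepts: TestConcept[]): TestConcept[] {
  const score = (c: TestConcept) => c.impact * c.confidence * c.ease;
  return [...concepts].sort((a, b) => score(b) - score(a)); // highest score first
}
```

A high-impact, well-evidenced, cheap-to-build concept floats to the top of the roadmap; speculative or expensive ideas sink toward the bottom.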

Step 5: Optimize 

In step 5, you’re tackling the challenges you uncovered during your UX research process.

It’s time to stop thinking about your issues, and start solving them. Assuming you’ve uncovered a host of findings, this step holds you accountable to actually do something, like run A/B tests, with the research you conducted.

Other common ecommerce user research methods 

For ecommerce brands working on conversion optimization, we recommend sticking to a strategy similar to what we outlined above. 

But, as we mentioned, different research methods fit into the different stages of the design process and you might need to dip your toes into other areas. 

Here’s a thorough list of methods that might come in handy during your own research, broken out based on the “classic” UX research process stages for clarity.

Research methods for the “discover” stage

  • Field study: observing people in their natural environment or their own context 
  • User interview: asking users open-ended questions on a specific topic with the goal of learning more about it 
  • Diary study: encouraging end users to self-report their behavior and experiences over a set period of time  
  • Requirements gathering: bringing together all stakeholders involved to discuss the goals of the research project and to iron out the finer details

Research methods for the “explore” stage

  • Journey mapping: visualizing the route a customer takes to purchase using analytics and website behavioral data 
  • Comparative/competitive analysis: understanding the features, functions, and flows of competitor sites 
  • Heatmap analysis: using heat mapping tools to determine which parts of the site and page visitors spend the most time on 
  • Write user stories: highlighting user pain points with short, simple descriptions of who they are, what they want, and why they want it
  • Persona building: creating detailed fictional characters to represent each segment of your target audience and building out user personas
  • Task analysis: studying the tasks customers perform to reach their goals on your site
  • Design review: examining your existing UX design to uncover any friction points or usability issues 

Research methods for the “test” stage

  • User testing: asking a user group to share their experiences while actively navigating through your site to collect information about usability
  • Over the shoulder observation: watching as focus groups interact directly with your site to determine what actions they take 
  • Accessibility evaluation: assessing your content and design against the most common problems to bring to light the most challenging areas
  • Benchmark testing: evaluating your website through key metrics to measure its performance against benchmark figures

Research methods for the “listen” stage

  • Analytics review: measuring and analyzing activity and user behavior on your website 
  • Survey: sending a set of research questions to a segment of your audience to identify their attitudes and preferences
  • Search-log analysis: digging into your on-site search function to discover what shoppers are searching for and how well your content meets their needs
  • FAQ review: checking out the most common questions you receive from customers via support channels or other means 

You don’t have to use all of these research techniques. Cherry-pick two to three from each stage and use those to gather a variety of data and insights. As the graph below shows, some methods prove more popular than others.

Graph of UX research method popularity across process stages (source: NN/g)

User experience research with The Good

Our goal is to turn more visitors into buyers, and our finely honed UX research method allows us to really get into the nitty-gritty of how visitors use your site. 

Through a series of research methods that span the different stages of the research process, we gain a deep understanding of who your customers are, how they use your site, and what features will make their journey considerably more enjoyable. 

“The Good’s research expertise and dedication to improving the user experience has made them a valuable partner. Their fresh insight and recommendations have helped us move the needle on customer engagement and drive product growth. What we learn enables informed decision making that’s truly user-driven and evidence-based.”

– Aditya Lakshminarayan, Product Marketing Manager at Adobe Document Cloud. 

Keep in mind that the research process is not a one-and-done situation. Ideally, you’re getting fresh data monthly or quarterly and fine-tuning your understanding of your customers accordingly. But, at a minimum, you should be collecting consistent data across seasons and re-evaluating your site on an annual basis.  

If you are interested in contracting expert support for the iterative research process, we can help. Learn more about how The Good can help with your UX research needs.

The post Understanding The UX Research Process: A Guide For Ecommerce Brands appeared first on The Good.

]]>