Every marketing experimentation process has to have a solid hypothesis.
That’s a must – unless you want to wander in the dark and steer your experimentation program toward a dead end.
Hypothesizing is the second phase of our SHIP optimization process here at Invesp.
It comes after we have completed the research phase.
In other words, we don’t just pull a hypothesis out of thin air – we always make sure it is grounded in research data.
But a research-backed hypothesis isn’t guaranteed to be correct. In fact, plenty of hypotheses produce inconclusive results or end up disproved.
The main idea of having a hypothesis in marketing experimentation is to help you gain insights – regardless of the testing outcome.
By the time you finish reading this article, you’ll know:
- Essential tips for crafting a hypothesis for marketing experiments
- How a marketing experiment hypothesis works
- How experts develop a solid hypothesis
The Basics: Marketing Experimentation Hypothesis
A hypothesis is a research-based statement that aims to explain an observed trend and proposes a solution that will improve the result. This statement is an educated, testable prediction about what will happen.
It has to be stated in declarative form and not as a question.
“If we add magnification info, product videos, and virtual mirror buttons, will that improve engagement?” is not declarative, but “Improving the experience of product pages by adding magnification info, product videos, and virtual mirror buttons will increase engagement” is.
Here’s a quick example of how a hypothesis should be phrased:
- Replacing ___ with __ will increase [conversion goal] by [%], because:
- Removing ___ and __ will decrease [conversion goal] by [%], because:
- Changing ___ into __ will not affect [conversion goal], because:
- Improving ___ by ___ will increase [conversion goal], because:
As you can see from the above sentences, a good hypothesis is written in clear and simple language. Reading your hypothesis should tell your team members exactly what you thought was going to happen in an experiment.
Another important element of a good hypothesis is that it defines the variables in easy-to-measure terms, like who the participants are, what changes during the testing, and what the effect of the changes will be:
Example: Let’s say this is our hypothesis:
Displaying full look items on every “continue shopping & view your bag” pop-up and highlighting the value of having a full look will improve the visibility of a full look, encourage visitors to add multiple items from the same look, and thereby increase the average order value and item quantity through cross-selling by 3%.
Who are the participants:
Visitors.
What changes during the testing:
Displaying full look items on every “continue shopping & view your bag” pop-up and highlighting the value of having a full look…
What the effect of the changes will be:
It will improve the visibility of a full look, encourage visitors to add multiple items from the same look, and thereby increase the average order value and item quantity through cross-selling by 3%.
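To make this concrete, here is a minimal sketch of how you might capture those variables in structured form. The field names and the rendering function are our own illustration, not a standard format:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    participants: str          # who the experiment targets
    change: str                # what changes during the test
    expected_effect: str       # what we believe the change will do
    kpi: str                   # the conversion goal being measured
    predicted_lift_pct: float  # the predicted change in the KPI

# The full-look hypothesis above, broken into its measurable parts:
full_look = Hypothesis(
    participants="Visitors",
    change=('Displaying full look items on every "continue shopping & view '
            'your bag" pop-up and highlighting the value of having a full look'),
    expected_effect=("improve the visibility of a full look and encourage "
                     "visitors to add multiple items from the same look"),
    kpi="average order value (via cross-selling)",
    predicted_lift_pct=3.0,
)

def as_statement(h: Hypothesis) -> str:
    """Render the structured hypothesis back into a declarative sentence."""
    return (f"{h.change} will {h.expected_effect}, increasing "
            f"{h.kpi} by {h.predicted_lift_pct:.0f}%.")

print(as_statement(full_look))
```

Forcing the hypothesis through a structure like this makes it obvious when one of the measurable pieces – the audience, the change, or the predicted effect – is missing.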
Don’t bite off more than you can chew! Answering some scientific questions can involve more than one experiment, each with its own hypothesis. So, make sure your hypothesis is a specific statement relating to a single experiment.
How a Marketing Experimentation Hypothesis Works
Let’s assume you have done conversion research and identified a list of issues (UX or conversion-related problems) and potential revenue opportunities on the site. The next thing you’d want to do is prioritize the issues and determine which ones will have the most impact on the bottom line.
Having ranked the issues, you need to test them to determine which solution works best. At this point, you don’t have a clear solution for the problems identified. So, to get better results and avoid wasting traffic on poor test designs, you need to make sure that your testing plan is guided.
This is where a hypothesis comes into play.
You need to craft a hypothesis for every problem you’re aiming to address – unless the problem is a technical issue that can be fixed right away without the need to hypothesize or test.
One important thing you should note about an experimentation hypothesis is that it can be implemented in different ways.
This means that one hypothesis can have four or five different tests as illustrated in the image above. Khalid Saleh, the Invesp CEO, explains:
“There are several ways that can be used to support one single hypothesis. Each and every way is a possible test scenario. And that means you also have to prioritize the test design you want to start with. Ultimately the name of the game is you want to find the idea that has the biggest possible impact on the bottom line with the least amount of effort. We use almost 18 different metrics to score all of those.”
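The 18 metrics Khalid mentions aren’t spelled out here, but the “biggest impact for the least effort” idea can be illustrated with a simplified ICE-style score (impact × confidence ÷ effort). A minimal sketch, with hypothetical test ideas and weights:

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int      # expected effect on the bottom line, 1 (low) to 5 (high)
    confidence: int  # how strongly the research supports it, 1 to 5
    effort: int      # implementation cost, 1 (cheap) to 5 (expensive)

def score(idea: TestIdea) -> float:
    # ICE-style score: biggest impact for the least amount of effort wins.
    return idea.impact * idea.confidence / idea.effort

ideas = [
    TestIdea("Add product video to product pages", impact=4, confidence=4, effort=2),
    TestIdea("Rewrite copy around paid-plan benefits", impact=3, confidence=3, effort=1),
    TestIdea("Redesign the whole landing page", impact=5, confidence=2, effort=5),
]

for idea in sorted(ideas, key=score, reverse=True):
    print(f"{score(idea):5.1f}  {idea.name}")
```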
In one of our recent tests – launched after watching video recordings, viewing heatmaps, and conducting expert reviews – we noticed that:
- Visitors were scrolling to the bottom of the page to fill out a calculator in order to get a free diet plan.
- Branding was missing.
- There were too many free diet plans, which made it hard for visitors to choose and understand the offer.
- There was no value proposition on the page.
- The copy didn’t mention the benefits of the paid program.
- There was no clear CTA for the next action.
To help you understand, let’s have a look at what the original page looked like before we worked on it:
Our aim was to make the shopping experience seamless for visitors and make the page more appealing and less confusing. Here is how we phrased the hypothesis for the page above:
Improving the experience of opt-in landing pages by making the free offer accessible above the fold and highlighting the next action with a clear CTA will increase engagement with the offer and increase the conversion rate by 1%.
For this particular hypothesis, we had two design variations aligned to it:
The two designs above are different, but they are aligned to one hypothesis. This goes to show how one hypothesis can be implemented in different ways. Looking at the two variations above – which one do you think won?
Yes, you’re right, V2 was the winner.
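The article doesn’t detail how the winner was called, but a common way to decide whether a variation like V2 genuinely beat the control is a two-proportion z-test on conversion counts. A minimal sketch, with hypothetical traffic numbers:

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test comparing conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: control converted 480/12000, V2 converted 590/12000.
z, p = two_proportion_z(conv_a=480, n_a=12000, conv_b=590, n_b=12000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 -> the lift is unlikely to be chance
```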
Since there are many ways to implement one hypothesis, a failed test doesn’t necessarily mean that the hypothesis was wrong. Khalid adds:
“A single failure of a test doesn’t mean that the hypothesis is incorrect. Nine times out of ten it’s because of the way you’ve implemented the hypothesis. Look at the way you’ve coded and look at the copy you’ve used – you are more likely going to find something wrong with it. Always be open.”
So there are three things you should keep in mind when it comes to marketing experimentation hypotheses:
- It takes a while to fully test a hypothesis.
- A single failure doesn’t necessarily mean that the hypothesis is incorrect.
- Whether a hypothesis is proved or disproved, you can still learn something about your users.
How Experts Develop a Solid Hypothesis
Developing a hypothesis that informs future testing is never easy – it takes a lot of research behind the scenes and tons of ideas to begin with. So, I reached out to six CRO experts for tips and advice to help you understand more about developing a solid hypothesis and what to include in it.
Maurice says that a solid hypothesis should have no more than one goal:
Maurice Beerthuyzen – CRO/CXO Lead at ClickValue
“Creating a hypothesis doesn’t begin at the hypothesis itself. It starts with research. What do you notice in your data, customer surveys, and other sources? Do you understand what happens on your website?
When you notice an opportunity, it is tempting to base a single A/B test on one hypothesis: create hypothesis A, run a single test, and then move on to the next test with another hypothesis.
But it is very rare that you solve your problem with only one hypothesis. Often a test raises several other questions – questions you can answer by running further tests based on that same hypothesis. We should not come up with a new hypothesis for every test.
Another common mistake is filling the hypothesis with multiple goals, expecting it to move conversion rate, average order value, and/or click-through rate all at once. That is possible, of course, but when you run your test, your hypothesis can only have one goal at a time.
And what if you have two goals? Split the hypothesis: create a secondary hypothesis for your second goal. Every test has one primary goal. What if you find a winner on your secondary hypothesis? Rerun the test with the secondary hypothesis as the primary one.”
Jon believes that a strong hypothesis is built upon three pillars:
Jon MacDonald – President and Founder of The Good
- Respond to an established challenge – The challenge must have a strong background based on data, and the background should state an established challenge that the test is looking to address. Example: “Sign up form lacks proof of value, incorrectly assuming if users are on the page, they already want the product.”
- Propose a specific solution – What is the single thing we believe will address the stated challenge? Example: “Adding an image of the dashboard as a background to the signup form…”.
- State the assumed impact – The assumed impact should reference one specific, measurable optimization goal that was established prior to forming a hypothesis. Example: “…will increase signups.”
So, if your hypothesis doesn’t have a specific, measurable goal like “will increase signups,” you’re not really stating a test hypothesis!
Matt uses his own hypothesis builder to collate important data points into a single hypothesis.
Matt Beischel – Founder of Corvus CRO
Like Jon, Matt breaks down his hypothesis-writing process into three sections. Unlike Jon’s, Matt’s sections are:
- Comprehension
- Response
- Outcome
“I set it up so that the names neatly match ‘CRO.’ It’s a sort of ‘mad-libs’ style fill-in-the-blank where each input is an important piece of information for building out a robust hypothesis. I consider these the minimum required data points for a good hypothesis; if you can’t completely fill out the form, then you don’t have a good hypothesis. Here’s a breakdown of each data point:
Comprehension – Identifying something that can be improved upon
- Problem: “What is a problem we have?”
- Observation Method: “How did we identify the problem?”
Response – Change that can cause improvement
- Variation: “What change do we think could solve the problem?”
- Location: “Where should the change occur?”
- Scope: “What are the conditions for the change?”
- Audience: “Who should the change affect?”
Outcome – Measurable result of the change that determines success
- Behavior Change: “What change in behavior are we trying to affect?”
- Primary KPI: “What is the important metric that determines business impact?”
- Secondary KPIs: “Other metrics that will help reinforce/refute the Primary KPI”
Something else to consider is that I have a “user first” approach to formulating hypotheses. My process above is always considered within the context of how it would first benefit the user. Now, I do feel that a successful experiment should satisfy the needs of BOTH users and businesses, but always be in favor of the user.
Notice that “Behavior Change” is the first thing listed in Outcome, not primary business KPI. Sure, at the end of the day you are working for the business’s best interests (both strategically and financially), but placing the user first will better inform your decision making and prioritization; there’s a reason that things like personas, user stories, surveys, session replays, reviews, etc. exist after all.
A business-first ideology is how you end up with dark patterns and damaged brand credibility.”
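Matt’s fill-in-the-blank form translates naturally into code. Here is a minimal sketch of such a builder – the field names come from his Comprehension/Response/Outcome list, while the sentence template and the validation rule are our own illustration of his point that an incomplete form means you don’t have a good hypothesis yet:

```python
REQUIRED_FIELDS = [
    "problem", "observation_method",                     # Comprehension
    "variation", "location", "scope", "audience",        # Response
    "behavior_change", "primary_kpi", "secondary_kpis",  # Outcome
]

def build_hypothesis(**fields: str) -> str:
    # Refuse to produce a hypothesis if any required data point is missing.
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        raise ValueError(f"Incomplete hypothesis; missing: {', '.join(missing)}")
    return (
        "Because we saw {problem} (observed via {observation_method}), "
        "we believe that {variation} at {location}, scoped to {scope}, "
        "for {audience} will drive {behavior_change}, measured primarily "
        "by {primary_kpi} and supported by {secondary_kpis}."
    ).format(**fields)

# Hypothetical example values, just to show the shape of a finished statement:
print(build_hypothesis(
    problem="drop-off on the signup form",
    observation_method="funnel analysis and session replays",
    variation="adding social proof next to the form",
    location="the signup page",
    scope="desktop and mobile",
    audience="new visitors",
    behavior_change="more form completions",
    primary_kpi="signup rate",
    secondary_kpis="bounce rate and time on page",
))
```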
One of the many mistakes CROs make when writing a hypothesis is focusing on wins rather than insights. Shiva advises against this mindset:
Shiva Manjunath – Marketing Manager and CRO at Gartner
“Test to learn, not test to win.
It’s a very simple reframe of hypotheses but can have a magnitude of difference. Here’s an example:
Test to Win Hypothesis: If I put a product video in the middle of the product page, I will improve add to cart rates and improve CVR.
Test to Learn Hypothesis: If I put a product video on the product page, there will be high engagement with the video, and it will positively influence traffic.
What you’re doing is framing your hypothesis, and your test, in a way that lets you learn as much as you can. That is where you gain marketing insights. The more you run ‘marketing insight’ tests, the more you will win. Why? As you compound marketing-insight learnings, your win velocity increases as a product of what you’ve learned. Then you’ll have a higher chance of winning your tests – and the more you win, the more you’ll be able to drive business results.”
Lorenzo says it’s okay to focus on achieving a certain result as long as you are also getting an answer to: “Why is this event happening or not happening?”
Lorenzo Carreri – CRO Consultant
“When I come up with a hypothesis for a new or iterative experiment, I always try to find an answer to a question.
It could be something related to a problem people have or an opportunity to achieve a result or a way to learn something.
The main question I want to answer is “Why is this event happening or not happening?”
The question is driven by data, both qualitative and quantitative.
The structure I use for stating my hypothesis is:
From [data source], I noticed [this problem/opportunity] among [this audience of users] on [this page or multiple pages]. So I believe that by [offering this experiment solution], [this KPI] will [increase/decrease/stay the same].”
Jakub Linowski says that hypotheses are meant to hold researchers accountable:
Jakub Linowski – Chief Editor of GoodUI
“They do this by making your change and prediction more explicit. A typical hypothesis may be expressed as:
If we change (X), then it will have some measurable effect (A).
Unfortunately, this oversimplified format can also become a heavy burden on your experiment design because of its extreme reductionism. However you decide to format your hypotheses, here are three suggestions for more flexibility, so you don’t limit yourself.
One Or More Changes
To break out of the first limitation, we have to admit that our experiments may contain a single or multiple changes. Whereas the classic hypothesis encourages a single change or isolated variable, it’s not the only way we can run experiments. In the real world, it’s quite normal to see multiple design changes inside a single variation. One valid reason for doing this is when wishing to optimize a section of a website while aiming for a greater effect. As more positive changes compound together, there are times when teams decide to run bigger experiments. An experiment design (along with your hypotheses) therefore should allow for both single or multiple changes.
One Or More Metrics
A second limitation of many hypotheses is that they often ask us to only make a single prediction at a time. There are times when we might like to make multiple guesses or predictions to a set of metrics. A simple example of this might be a trade-off experiment with a guess of increased sales but decreased trial signups. Being able to express single or multiple metrics in our experimental designs should therefore be possible.
Estimates, Directional Predictions, Or Unknowns
Finally, traditional hypotheses also tend to force very simple directional predictions by asking us to guess whether something will increase or decrease. In reality, however, the fidelity of predictions can be higher or lower. On one hand, I’ve seen and made experiment estimations that contain specific numbers from prior data (ex: increase sales by 14%). While at other times it should also be acceptable to admit the unknown and leave the prediction blank. One example of this is when we are testing a completely novel idea without any prior data in a highly exploratory type of experiment. In such cases, it might be dishonest to make any sort of predictions and we should allow ourselves to express the unknown comfortably.”
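Jakub’s three suggestions amount to loosening the data model behind a hypothesis. A minimal sketch of such a structure – the type names and example values are hypothetical – where changes and predicted metrics are lists, and each prediction may carry a numeric estimate, a direction only, or an honest unknown:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Prediction:
    metric: str
    direction: Optional[str] = None       # "increase"/"decrease", or None if unknown
    estimate_pct: Optional[float] = None  # e.g. 14.0 when prior data supports a number

@dataclass
class Experiment:
    changes: list[str] = field(default_factory=list)             # one or more changes
    predictions: list[Prediction] = field(default_factory=list)  # one or more metrics

# A hypothetical trade-off experiment like the one Jakub describes:
trade_off = Experiment(
    changes=["Show pricing earlier", "Shorten the trial signup form"],
    predictions=[
        Prediction("sales", direction="increase", estimate_pct=14.0),
        Prediction("trial signups", direction="decrease"),
        Prediction("support tickets"),  # novel territory: honestly unknown
    ],
)
```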
Conclusion
So there you have it! Before you jump into launching a test, make sure that your hypothesis is solid and backed by research. Ask yourself the questions below when crafting a hypothesis for marketing experimentation:
- Is the hypothesis backed by research?
- Can the hypothesis be tested?
- Does the hypothesis provide insights?
- Does the hypothesis set the expectation that there will be an explanation behind the results of whatever you’re testing?
Don’t worry! Hypothesizing may seem like a complicated process, but it isn’t in practice, especially when you have done proper research.
If you enjoyed reading this article and you’d love to get the best CRO content – delivered by the best experts in the industry – straight to your inbox every week, please subscribe here.