The Role of Statistical Significance in Increasing Conversion Rates: 6 Things You Need to Know
The whole point of statistical significance is to determine whether the difference between two metrics is meaningful or just a fluke. In this post, I'll cover six things you need to know to properly determine statistical significance for conversion metric A/B tests, as well as for broader analytics work.
1. Exactly what it means
"The change resulted in a 20% increase in conversion with a 90% confidence level." Unfortunately, this statement is not at all equivalent to another, very similar one: "The chances of increasing conversion by 20% are 90%." So what is it really about?
The 20% is the lift we recorded in one sample. At most, we might speculate that this lift would hold up if we kept the test running forever. But it does not mean we are 90% likely to see a 20% increase in conversion, or an increase of "at least" 20%, or of "approximately" 20%.
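To make this concrete, here is a minimal Python sketch; the baseline rate, the "true" lift, and the sample sizes are illustrative assumptions, not data from the article. It simulates running the same A/B test many times against a genuine 20% lift and shows how widely the lift observed in any single sample scatters around it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions: 5% baseline conversion, a genuine 20% relative
# lift (i.e. 6% in the variant), and 3,000 visitors per arm.
baseline_rate = 0.05
variant_rate = 0.06
visitors_per_arm = 3_000
n_simulations = 10_000

# Re-run the "same" A/B test many times and record the observed relative lift.
conv_a = rng.binomial(visitors_per_arm, baseline_rate, n_simulations)
conv_b = rng.binomial(visitors_per_arm, variant_rate, n_simulations)
observed_lift = conv_b / conv_a - 1  # equal arm sizes, so the rates cancel

print(f"mean observed lift:  {observed_lift.mean():.1%}")
print(f"5th-95th percentile: {np.percentile(observed_lift, 5):.1%} "
      f"to {np.percentile(observed_lift, 95):.1%}")
```

With these numbers, the 5th to 95th percentile band typically stretches from roughly no lift at all to around 40%, which is exactly why a single observed 20% should not be read as a guaranteed long-run figure.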
The 90% is the probability that there is any change in conversion at all, not a 20% change specifically. Loosely speaking: if ten A/B tests produced this result and we let all ten run indefinitely, about one of them (90% confidence leaves 10% for "no real difference") would probably settle back near the original conversion rate, i.e. no change. Of the remaining nine, some might show a lift far smaller than 20%, and others might exceed it.
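To show where a figure like "90% confidence" comes from in the first place, here is a sketch of a standard pooled two-proportion z-test. The conversion counts are invented so that they roughly reproduce the quoted statement (a 20% observed lift at about 90% confidence); they are not taken from any real test.

```python
from math import sqrt
from scipy.stats import norm

def ab_test_confidence(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Pooled two-proportion z-test. Returns (observed relative lift,
    two-sided confidence level that the difference is not just noise)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))      # two-sided p-value
    return p_b / p_a - 1, 1 - p_value

# Hypothetical counts: 150/3,000 conversions for A, 180/3,000 for B.
lift, confidence = ab_test_confidence(150, 3_000, 180, 3_000)
print(f"observed lift: {lift:.0%}, confidence level: {confidence:.0%}")
```

Note that the roughly 91% this prints only quantifies how unlikely the observed gap would be if there were no real difference; it says nothing about whether the long-run lift will stay anywhere near 20%.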
If we misinterpret these numbers, we take on a lot of risk when we roll out the winning variant. It's easy to get excited when a test shows a big conversion lift at a 95% confidence level, but it's wiser not to expect too much until the test has run its full course.