T-Tests and P-Values in Python

T-Tests and P-Values

Let's say we're running an A/B test. We'll fabricate some data that randomly assigns order amounts from customers in sets A and B, with B being a little bit higher:
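The input cell would look something like the sketch below, using scipy's stats.ttest_ind on two normally distributed samples; the means of 25 and 26, the standard deviation of 5, and the sample size of 10,000 are assumptions here, so the exact statistics will vary from run to run: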
In [1]:
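import numpy as np
from scipy import stats

# Fabricated order amounts: B's true mean is a little higher than A's
A = np.random.normal(25.0, 5.0, 10000)
B = np.random.normal(26.0, 5.0, 10000)

stats.ttest_ind(A, B)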
Out[1]:
Ttest_indResult(statistic=-14.633287515087083, pvalue=3.0596466523801155e-48)
The t-statistic is a measure of the difference between the two sets, expressed in units of standard error. Put differently, it's the size of the difference relative to the variation in the data. A large absolute t value means there's probably a real difference between the two sets; you have "significance". The p-value is the probability of seeing a difference at least this extreme if there were actually no real difference between the two sets, so a low p-value also implies "significance." If you're looking for a "statistically significant" result, you want to see a very low p-value and a high t-statistic (well, a high absolute value of the t-statistic, more precisely). In the real world, statisticians seem to put more weight on the p-value.
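To make "units of standard error" concrete, here's a rough sketch of the pooled two-sample t-statistic computed by hand, applied to the A and B arrays above (this is the equal-variance form that ttest_ind uses by default):

n1, n2 = len(A), len(B)
# Pooled estimate of the common variance across both samples
sp2 = ((n1 - 1) * A.var(ddof=1) + (n2 - 1) * B.var(ddof=1)) / (n1 + n2 - 2)
# Difference in means, expressed in units of its standard error
t = (A.mean() - B.mean()) / np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

That value should match the statistic reported by ttest_ind above.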
Let's change things up so both A and B are just random, generated under the same parameters. So there's no "real" difference between the two:
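Something like this sketch, keeping A as-is and regenerating B from the same assumed distribution: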
In [2]:
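# B now comes from the same distribution as A - no real difference
B = np.random.normal(25.0, 5.0, 10000)

stats.ttest_ind(A, B)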
Out[2]:
Ttest_indResult(statistic=-0.5632806182026571, pvalue=0.5732501292213643)
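Our t-statistic is now small and our p-value is high, as you'd hope. Does sample size make a difference? The next cell presumably repeats the comparison with ten times as many samples, along these lines: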
In [3]:
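# Same parameters as before, but 100,000 samples in each set
A = np.random.normal(25.0, 5.0, 100000)
B = np.random.normal(25.0, 5.0, 100000)

stats.ttest_ind(A, B)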
Out[3]:
Ttest_indResult(statistic=0.7294273914972799, pvalue=0.4657411216867331)
Our p-value actually got a little lower, and the t-statistic a little larger, but still not enough to declare a real difference. So, you could have reached the right decision with just 10,000 samples instead of 100,000. Even a million samples doesn't help, so if we were to keep running this A/B test for years, you'd never achieve the result you're hoping for:
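A sketch of that cell, with an assumed one million samples per set: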
In [4]:
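# Crank the sample size up to one million per set - still no real difference
A = np.random.normal(25.0, 5.0, 1000000)
B = np.random.normal(25.0, 5.0, 1000000)

stats.ttest_ind(A, B)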
Out[4]:
Ttest_indResult(statistic=-0.9330159426646413, pvalue=0.350811849590674)
If we compare the same set to itself, by definition we get a t-statistic of 0 and p-value of 1:
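Something like this sketch, passing the same array twice: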
In [5]:
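# Comparing a sample against itself: no difference at all, by construction
stats.ttest_ind(A, A)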
Out[5]:
Ttest_indResult(statistic=0.0, pvalue=1.0)
The threshold of significance on the p-value is really just a judgment call. As everything is a matter of probabilities, you can never definitively say that an experiment's results are "significant". But you can use the t-statistic and p-value as a measure of significance, and look at trends in these metrics as the experiment runs to see if there might be something real happening between the two sets.
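For example, if you settled on the conventional (but still arbitrary) threshold of 0.05, the decision logic might look like this sketch:

alpha = 0.05  # significance threshold - ultimately a judgment call
result = stats.ttest_ind(A, B)
if result.pvalue < alpha:
    print("Significant difference:", result.statistic, result.pvalue)
else:
    print("No significant difference detected:", result.statistic, result.pvalue)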
