
Three A/B Testing Mistakes I Keep Seeing (And How to Avoid Them)

Over the past few years, I have seen many recurring mistakes in how people design A/B tests and analyze their results. In this article, I want to highlight three of them and explain how to avoid them.

Using Mann–Whitney to compare medians

The first mistake is the incorrect use of the Mann–Whitney test. This method is widely misunderstood and frequently misused, as many people treat it as a non-parametric “t-test” for medians. In fact, the Mann–Whitney test is designed to determine whether there is a shift between two distributions.

When applying the Mann–Whitney test, the hypotheses are defined as follows:

  • H0: the two distributions are identical
  • H1: one distribution is stochastically greater than the other (the distributions are shifted relative to each other)

We must always consider the assumptions of the test. There are only two:

  • Observations are i.i.d.
  • The distributions have the same shape

How to compute the Mann–Whitney statistic:

  1. Pool the two samples and sort all observations by magnitude.
  2. Assign ranks to all observations (ties receive the average of the tied ranks).
  3. Compute the U statistic for each sample: U_i = R_i − n_i(n_i + 1)/2, where R_i is the sum of the ranks of sample i and n_i is its size.
  4. Take the minimum of the two values, U = min(U1, U2).
  5. Use statistical tables for the Mann–Whitney U test to find the probability of observing this value of U or lower.
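As a sanity check, the steps above can be sketched in Python. The ranking is done with `scipy.stats.rankdata`, and the two toy samples are invented for illustration:

```python
import numpy as np
from scipy.stats import rankdata, mannwhitneyu

def mann_whitney_u(x, y):
    """U statistic computed by following the steps above."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # steps 1-2: pool both samples, sort, and assign ranks (ties get average ranks)
    ranks = rankdata(np.concatenate([x, y]))
    r1 = ranks[: len(x)].sum()               # sum of ranks of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2      # step 3: U for sample 1
    u2 = len(x) * len(y) - u1                # U for sample 2
    return min(u1, u2)                       # step 4: take the minimum

x = [1.2, 3.4, 2.2, 5.0]
y = [0.8, 2.9, 1.1]
u = mann_whitney_u(x, y)
# cross-check: scipy reports U1 for its first argument, so take the min over both orders
u_scipy = min(mannwhitneyu(x, y).statistic, mannwhitneyu(y, x).statistic)
```

`scipy.stats.mannwhitneyu` also returns the p-value directly, which replaces the table lookup in step 5.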

Since we now know that this test should not be used to compare medians, what should we use instead?

Fortunately, in 1945 the statistician Frank Wilcoxon introduced a rank-based test for paired observations, now known as the Wilcoxon signed-rank test.

The hypotheses for this test match what we originally expected:

  • H0: the median of the paired differences is zero
  • H1: the median of the paired differences is not zero

How to calculate the Wilcoxon signed-rank test statistic:

  1. For each paired observation, calculate the difference, keeping both its absolute value and sign.

  2. Sort the absolute differences from smallest to largest and assign ranks.

  3. Compute the test statistic: W = min(W+, W−), where W+ is the sum of the ranks of the positive differences and W− is the sum of the ranks of the negative differences.

  4. The statistic W follows a known distribution. When n is larger than roughly 20, it is approximately normally distributed. This allows us to compute the probability of observing W under the null hypothesis and determine statistical significance.

Some intuition behind the formula: under the null hypothesis, positive and negative differences are equally likely at every magnitude, so the ranks should split roughly evenly and both W+ and W− should be close to n(n + 1)/4. A very small W means that almost all of the large differences point in the same direction, which is unlikely when the true median difference is zero.
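A minimal sketch of this computation, assuming paired before/after measurements (the numbers below are made up for illustration):

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def signed_rank_w(before, after):
    """W statistic computed by following the steps above."""
    d = np.asarray(after, float) - np.asarray(before, float)  # step 1: paired differences
    d = d[d != 0]                        # zero differences are conventionally dropped
    ranks = rankdata(np.abs(d))          # step 2: rank the absolute differences
    w_pos = ranks[d > 0].sum()           # sum of ranks of positive differences
    w_neg = ranks[d < 0].sum()           # sum of ranks of negative differences
    return min(w_pos, w_neg)             # step 3: W = min(W+, W-)

before = [10.0, 12.0, 9.5, 11.0, 13.0]
after  = [10.8, 11.5, 10.2, 12.5, 13.4]
w = signed_rank_w(before, after)
# cross-check: for the two-sided test, scipy's statistic is the same min(W+, W-)
w_scipy = wilcoxon(np.asarray(after) - np.asarray(before)).statistic
```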

Using bootstrapping everywhere and for every dataset

The second mistake is applying bootstrapping all the time. I’ve often seen people bootstrap every dataset without first verifying whether bootstrapping is appropriate in that context.

The key assumption behind bootstrapping is:

==The sample must be representative of the population from which it was drawn.==

If the sample is biased and poorly represents the population, the bootstrapped statistics will also be biased. That’s why it’s crucial to examine proportions across different cohorts and segments.

For example, if your sample contains only women, while your overall customer base has an equal gender split, bootstrapping is not appropriate.
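When the assumption does hold, a percentile bootstrap for the median of a skewed metric might look like the sketch below; the synthetic "revenue per user" data and the 95% confidence level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(sample, n_boot=5000, alpha=0.05):
    """Percentile-bootstrap CI for the median.

    Only valid if `sample` is representative of the population it was drawn from.
    """
    sample = np.asarray(sample, float)
    # draw n_boot resamples with replacement, each the size of the original sample
    idx = rng.integers(0, len(sample), size=(n_boot, len(sample)))
    medians = np.median(sample[idx], axis=1)
    return np.quantile(medians, [alpha / 2, 1 - alpha / 2])

# synthetic right-skewed metric, e.g. revenue per user
revenue = rng.exponential(scale=30.0, size=500)
lo, hi = bootstrap_ci(revenue)
```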

Always using default Type I and Type II error values

Last but not least is the habit of blindly using default experiment parameters. The overwhelming majority of analysts and data scientists simply stick with the defaults: a 5% Type I error rate and a 20% Type II error rate (i.e., 80% test power).

Let’s start with an obvious question: why don’t we just set both Type I and Type II error rates to 0%?

==Because doing so would require an infinite sample size, meaning the experiment would never end.==

Clearly, that’s not practical. We must strike a balance between the number of samples we can collect and acceptable error rates.

I encourage people to consider all relevant product constraints.

The most convenient way to do this is to create a table of these trade-offs and discuss it with the product managers and other people responsible for the product.


For a company like Netflix, even a 1% MDE can translate into substantial profit. For a small startup, that’s not true. Google, on the other hand, can easily run experiments involving tens of millions of users, making it reasonable to set the Type I error rate as low as 0.1% to gain higher confidence in the results.
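To make the trade-off concrete, here is a sketch of the standard normal-approximation sample-size formula for a two-sided two-proportion test; the 10% baseline conversion rate and 1 p.p. MDE are made-up numbers:

```python
from scipy.stats import norm

def n_per_group(p_base, mde_abs, alpha=0.05, power=0.80):
    """Approximate sample size per variant for a two-sided two-proportion z-test."""
    p1, p2 = p_base, p_base + mde_abs
    z_alpha = norm.ppf(1 - alpha / 2)          # quantile for the Type I error rate
    z_beta = norm.ppf(power)                   # quantile for the desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)        # variance term for the two groups
    return (z_alpha + z_beta) ** 2 * var / mde_abs ** 2

n_default = n_per_group(0.10, 0.01)               # alpha = 5%, power = 80%
n_strict = n_per_group(0.10, 0.01, alpha=0.001)   # Google-style stricter alpha
```

With the defaults, detecting a 1 p.p. lift from a 10% baseline needs roughly 15,000 users per variant; tightening alpha to 0.1% more than doubles that, which is why such settings only make sense at a very large scale.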



Our path to excellence is paved with mistakes. Let’s make them!
