%%
tags:: #ProductGrowth
started_date:: [[2022-12-16]]
published::
project:: #ProjectKnowledgeBase
up:: [[Positive Experiments MOC]]
##### Research and idea capturing
- There’s something cooking on obsidian with this https://speero.com/blueprints/should-i-run-an-ab-test
%%
###### [[Product Growth]] | Updated [[2022-12-16]]
# Please, please don't AB test that
Should you AB test everything?
No, AB testing is one decision-making tool out of many available.
Yes, it is the most reliable evidence you can get for product-development decisions.
However, depending on the type of product, it may not be practical or worth the cost.
I use this model with every product team I onboard to experimentation.
![[Pasted image 20221216125509.png]]
—
## It usually goes like this.
##### Make a quick video please.
![[1.png]]
![[2.png]]
![[3.png]]
"PJ, should we test this headline image?"
"You tell me, is there a plausible downside?"
"Well, maybe. We don't know if this will decrease conversions."
"What makes the team believe this change is the right use of time? Do we have a well-formed hypothesis?"
"I mean, we've done some research on competition."
"Is this change big enough for the traffic levels on this page? Are we maximizing our chances to detect an impact and avoid an inconclusive test after two weeks of waiting?"
"I don't think so... it's just the headline image..."
"Is it worth it to enlarge the experiment?"
"Actually, it is. We could merge with another test we want to run that makes sense running together."
"So there is your decision."
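The "is this change big enough for the traffic levels" question is, underneath, a statistical-power question. As a rough sketch (not the author's tool — a standard two-proportion sample-size approximation, with illustrative numbers and a hypothetical function name), you can estimate how many visitors per variant you'd need before committing to a test:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, mde_rel, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test.

    p_base  -- baseline conversion rate (e.g. 0.05 for 5%)
    mde_rel -- minimum detectable effect, relative (e.g. 0.10 for a +10% lift)
    """
    p1 = p_base
    p2 = p_base * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# A 5% baseline with a +10% relative lift needs roughly 31k visitors
# per variant -- if the page gets far less traffic in two weeks, the
# test will likely come back inconclusive, just as the dialogue warns.
print(sample_size_per_variant(0.05, 0.10))
```

Halving the detectable effect roughly quadruples the required sample, which is why merging the headline change with another test can turn an underpowered experiment into a worthwhile one.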
—
Trustworthy controlled experiments, done correctly, are not as simple and easy as they sound.
There are just so many ways they can go wrong.
Continue reading » [[Top 20 data science guidelines for proper testing (soon)]].
---
> A/B testing is not an insurance policy for critical thinking or knowing your users. Inappropriately suggesting to A/B test is a good way to sound smart in a meeting at best, and cargo cult science at worst.
---
# Related to