Lesson: Data-Driven Decisions with Mike Greenfield
Founder, Data Geek, Entrepreneur
Step #4 Impact: Experiments that are good for business
There are lots of wrong ways to do A/B testing. One is testing things that aren't going to have a material impact on the business; if you're doing that, you're probably wasting your time. If you're testing a change to the FAQ page, chances are that's not something you should be A/B testing; you should just change it. I've seen a lot of cases where companies are A/B testing small things that don't matter that much and they're not A/B testing big things that make a big difference, maybe because the big thing is a little more technically challenging to A/B test. So that's one way people can do A/B testing the wrong way.
I think just coming up with back-of-the-envelope calculations and asking, "How much of an impact is this going to have on the business, on the product?" is really important. Make a guess: "Okay, we've got this element on the sign-up page, and making this change could increase sign-ups by 20%; what does that do for the business?" Then spend five minutes to map that out. Doing that answers two things: one, does 20% even matter? And two, if you keep guessing "20%, 20%, 20%" and you run a whole bunch of tests and never see anything anywhere close to 20%, you're going to have to re-calibrate.
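As a rough illustration of that kind of five-minute estimate (the traffic, conversion, and value numbers below are made-up assumptions, not figures from the lesson), the math might look something like this in Python:

```python
# Back-of-the-envelope impact estimate for a hypothetical sign-up page change.
# All numbers here are assumptions for illustration only.

monthly_visitors = 100_000        # visitors hitting the sign-up page each month
baseline_signup_rate = 0.05       # current sign-up conversion rate (5%)
assumed_lift = 0.20               # guessed relative lift from the change (20%)
value_per_signup = 30.0           # rough value of one new sign-up, in dollars

baseline_signups = monthly_visitors * baseline_signup_rate
extra_signups = baseline_signups * assumed_lift
extra_value = extra_signups * value_per_signup

print(f"Extra sign-ups per month: {extra_signups:.0f}")
print(f"Extra value per month:    ${extra_value:,.0f}")
# If this number is too small to matter, the test probably isn't worth running.
```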
Focusing on the big things is really important with respect to A/B testing. Something else is being fairly rigorous about your methodology if you can: actually saying, "50% of our users are going to see A this week and 50% of our users are going to see B this week," and not, "Everybody's going to see A for a week and everybody's going to see B for another week." Things can change from week to week, and you may see the numbers fluctuate as a result, so be reasonably rigorous there.
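One common way to get that kind of concurrent 50/50 split, sketched here as a generic technique rather than anything specific from the lesson, is to assign each user deterministically by hashing their ID, so the same user always sees the same variant while both variants run over the same period:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "signup_page_test") -> str:
    """Deterministically assign a user to 'A' or 'B' (50/50) for an experiment.

    Hashing the user ID together with the experiment name keeps the assignment
    stable for a user across visits, while both variants run concurrently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "A" if bucket < 50 else "B"

# Example: the same user always gets the same variant.
print(assign_variant("user-12345"))
print(assign_variant("user-12345"))
```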
But at the same time, sometimes it makes sense from a business perspective to decide sooner. If there's a 0.1% difference between A and B and it's not statistically meaningful, but it doesn't really matter one way or the other which one wins by 0.1%, sometimes you just have to decide. For instance, say you're sending a seasonal email, a Christmas email. You know you're going to send a million of these emails, you've sent a quarter of them, and A seems to be better. It's not quite statistically significant, but you're pretty sure B is not going to end up way better than A; at most trivially better, if A has just been unlucky so far. It might make sense to say, "Okay, let's just send everybody A. We can't wait forever. We're not trying to cure cancer here; we're just trying to see which email is more effective, and our time horizon is limited, so let's just decide." In some cases being more decisive makes sense, but it's a subtle thing.
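To make that "is it close enough to just call it" judgment concrete, here is a small sketch using a standard two-proportion z-test on made-up interim numbers (the send and click counts are assumptions for illustration, not figures from the lesson):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical interim results after sending about 1/4 of a million-email campaign.
sent_a, clicks_a = 125_000, 3_875   # variant A: sends and clicks (assumed numbers)
sent_b, clicks_b = 125_000, 3_750   # variant B: sends and clicks (assumed numbers)

rate_a = clicks_a / sent_a
rate_b = clicks_b / sent_b

# Two-proportion z-test: is the observed difference bigger than noise?
pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
z = (rate_a - rate_b) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {rate_a:.3%}  B: {rate_b:.3%}  diff: {rate_a - rate_b:.3%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
# Even if p is above 0.05, a tiny absolute difference on a time-limited send
# can be a good enough reason to just pick the apparent winner and move on.
```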