Marketing Experiments - Best Practices vs. Testing

The super talented folks at MarketingExperiments do something very interesting on their blog: they run contests where they share multiple versions of an ad or a landing page and ask readers to guess which version will get the higher CTR or conversion rate. Then they run actual A/B/MVT split tests with those variations and share the results.

You can see the last few contests here, here and here.

If you plan to go through those links, I suggest you keep some spare time in hand - the most interesting part of those posts is the comments that readers leave.

Now, here’s the thing: a few commenters are exceptionally good at predicting the outcome of these split tests. Not just that - they also substantiate their predictions with very compelling arguments about why a particular version should win. I think if these commenters were actually designing those ads or landing pages, they probably wouldn’t run those experiments at all - or if they did, they’d test something more complex.

I think when I split test headlines for an ad, most of the time I’m just trying to make up for my lack of experience - a more experienced marketer would probably just know which headline would work. On the other hand, I know for sure you can’t predict the optimum amount of discount using theories or best practices alone. There are way too many variables involved. Only a well-designed controlled test can reveal the discount that’d generate maximum profit.
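To make that concrete, here’s a minimal sketch of how you might compare two discount levels in a split test - a two-proportion z-test on the conversion counts, followed by a profit-per-visitor check. All the numbers (visitor counts, conversions, price, margin) are hypothetical, purely for illustration:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value via the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical split test: 10%-off vs 20%-off coupon, 1000 visitors each
z, p = two_proportion_z(100, 1000, 150, 1000)

# More conversions doesn't automatically mean more profit -
# the deeper discount eats into margin (price/margin made up)
price, margin = 100.0, 0.40
profit_10 = (price * margin - price * 0.10) * (100 / 1000)  # profit per visitor
profit_20 = (price * margin - price * 0.20) * (150 / 1000)  # profit per visitor
```

In this made-up example the 20%-off variant converts significantly better, yet the profit per visitor works out the same for both - exactly the kind of interaction between conversion rate and margin that no best-practices list can predict for you.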

So, maybe there are two types of factors that influence consumer behavior: those you can predict fairly accurately if you have the right experience and know the best practices, and those that no amount of best practices or theory will help you predict. For the latter, you just have to test and find out.

I’m not sure if anybody has ever tried grouping these factors into separate buckets. The grouping would certainly be dynamic and contextual, not a static list - perhaps something along the following lines:

Best practices bucket:
Ad/E-mail Headline
Marketing Message
(Testing not needed – if you have an expert with 10 years of relevant experience)

Testing bucket:
Communication timing
(Always test)

Updated later:
Two of my readers pointed out that there’s nothing wrong with testing even if you are an experienced marketer. Well, I’m not against the idea of testing, but you have to understand that there’s always a transaction cost involved whenever you run a test - so you need a damn good reason for testing. “Google Website Optimizer is free” or “my ESP supports A/B testing for free, so why not use the feature” is not a reason.

If you have two or three equally good ideas for an ad copy, landing page, or whatever, then you have a strong business case for testing. But if all your ideas are losers - because you lack relevant experience or knowledge - testing will only help you choose the best of the worst. That’s a local maximum. The sad part is that you and your bosses will think you’re really optimizing your marketing.
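Here’s a toy illustration of that local-maximum trap (every conversion rate below is made up): a split test of three weak headlines will dutifully crown a “winner”, even though all of them sit below what stronger copy could reach.

```python
# Hypothetical observed conversion rates from a test of three weak headlines
weak_variants = {"headline_a": 0.018, "headline_b": 0.021, "headline_c": 0.019}

# The test picks the best of the bunch...
winner = max(weak_variants, key=weak_variants.get)

# ...but that's only a local maximum if every idea tested was weak.
# A strong headline (again, a made-up number) could sit well above it.
expert_rate = 0.035
gap = expert_rate - weak_variants[winner]
```

The test itself behaves perfectly - it reliably identifies the best variant you gave it. The problem is the input set, which no amount of statistical rigor can fix.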

Just because we have the tools to test doesn’t mean we should test all our half-baked ideas.

Anyway, the point I was trying to make is this: I think there are certain things in marketing that simply cannot be optimized without testing - offers and timing are two that come to mind. The variables that influence the outcome of offers and timing (macro trends, disposable income, the latest fad, seasonality, etc.) are so complex that it’s practically impossible for anyone to predict the returns without testing. On the other hand, e-mail subject lines, ad copy headlines, unique value propositions, etc. can certainly benefit from testing, but a really good copywriter or marketer can do an equally good job without it.

Look at the comment below from D Bnonn Tennant (the winner of the last MarketingExperiments contest). Once you’ve read it alongside the original post, you’ll know what I was talking about.

March 14th, 2012 at 06:52 | #5

Unlike the previous commenters, I’m not convinced the treatment is the winner.
The treatment’s eyepath is lousy. Readers have to choose between reading the left or right column first. The natural inclination is to read the right column, which leads to the bottom of the page — so there’s friction in coming back up into the red box. It’s going the wrong way (bottom to top).
There’s also the fact that the treatment has its headline in inverted text superimposed over an image — both techniques which, even alone, massively reduce readership.
However, there are clear problems with the control as well. Although all the text is in a single column, the copy is much poorer. The lack of specificity could well be the deciding factor — the treatment’s headline is far more compelling (concrete value, “your choice” etc).
There’s also the fact that the left-hand image draws the eye straight down the page into what look like quite unrelated numbered options. Potentially bypassing the signup form altogether.
I’m going to pick the treatment for the headline (should have the greatest effect on conversions), improved copy, and reduced distractions. But I’m not going to be surprised if this is a surprise underdog win.