Tuesday 20 August 2013

All-purpose announcement …

I am, strangely, reminded of the late and great Peter Sellers' all-purpose party political speech, when I read the following announcement about an upcoming conference:

"Business Analysis is growing at a phenomenal pace. Organisations are increasingly recognising the importance of business analysis in achieving successful change and many are investing heavily in developing this capability. The skills and techniques of business analysts are invaluable in shaping and forming business change overall as well as making the most of opportunities presented by new technologies. With the backdrop of a global economy recovering from a deep recession, organisations are in a constant state of fast-moving change and can't afford to get things wrong. Business Analysis capability is key to identifying what's needed and developing solutions that equip organisations for the future."

Just substitute almost any field of endeavour for the term 'Business Analysis' and you will see what I mean.


Wednesday 7 August 2013

Work(s) in progress


Forschungswerk (literally “research works”), a Nürnberg-based marketing research company, has recently announced[1] (via www.marktforschung.de) the availability of a procedure for statistically testing shifts or changes in NPS (Net Promoter Score).

As many readers would know (i.e. all two of you), when people think about NPS as a metric, they basically fall into two camps: those who think it is rubbish, and those who think it is the bee’s knees.

Regardless, NPS is here to stay.  So the question naturally arises: how do we know what a ‘good’ NPS result is, and, probably of greater importance, how do we know when a change or difference in NPS is statistically significant?

To recap, NPS is simply the difference between the proportion of respondents who give an answer of 9 or 10 and the proportion who give an answer of 0 thru 6 on an 11-point recommendation scale.

It can in fact be easily computed in the statistical package of your choice with the following recoding scheme …

0 thru 6 = -100
7 or 8 = 0
9 or 10 = 100

… and then calculating the average score[2].  This then gives you enormous flexibility to (say) crosstab NPS by almost anything you fancy (i.e. there is no need to compute NPS ‘offline’ for desired groupings of respondents).
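To make the recoding concrete, here is a minimal sketch (my own illustration, using pandas and made-up column names such as 'recommend' and 'region') of that approach: recode the 0–10 score to -100/0/100, and NPS is then just the mean of the recoded variable, whether for the whole sample or for any grouping you care to crosstab by.

```python
import pandas as pd

def recode_nps(score):
    """Map a 0-10 recommendation score to -100 (detractor), 0 (passive), 100 (promoter)."""
    if score <= 6:
        return -100
    elif score <= 8:
        return 0
    else:
        return 100

# Illustrative data only - 'recommend' and 'region' are hypothetical column names.
df = pd.DataFrame({
    "recommend": [10, 9, 8, 3, 6, 10, 7, 2, 9, 5],
    "region":    ["North", "North", "North", "North", "North",
                  "South", "South", "South", "South", "South"],
})

df["nps_score"] = df["recommend"].apply(recode_nps)

overall_nps = df["nps_score"].mean()                        # NPS for the whole sample
nps_by_region = df.groupby("region")["nps_score"].mean()    # NPS 'crosstabbed' by region

print(overall_nps)
print(nps_by_region)
```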

Unfortunately, the standard statistical significance tests do not apply when comparing different NPS results.  For a start, it is not too difficult to show that NPS has twice the sampling variability of a simple proportion (which does, of course, bring into question its suitability as a tracking metric – oops, here come those letters again).
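One way to see the point about sampling variability (a rough numerical sketch of my own, not anything from the announcement): since NPS is the mean of the -100/0/100 recoded scores, its sampling variance can be written down directly from the promoter and detractor proportions and compared with that of a single percentage.

```python
# Var(NPS) follows from the -100/0/100 recoding: the recoded score equals 100
# with probability p_promoter, -100 with probability p_detractor, 0 otherwise.
def nps_variance(p_promoter, p_detractor, n):
    """Sampling variance of NPS (on the -100..+100 scale) for a sample of size n."""
    return 100**2 * (p_promoter + p_detractor - (p_promoter - p_detractor)**2) / n

def proportion_variance(p, n):
    """Sampling variance of a simple percentage (on the 0..100 scale)."""
    return 100**2 * p * (1 - p) / n

n = 1000
# Example: 25% promoters, 25% detractors (NPS = 0) vs a simple 50% proportion.
print(nps_variance(0.25, 0.25, n))     # 5.0
print(proportion_variance(0.50, n))    # 2.5  -> half the NPS variance
```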

There is, however, a way out, and one that I suspect the Forschungswerk people have taken; namely, to utilise a permutation test approach.  This is kind of hard to explain, but essentially, the null hypothesis that there is no difference between two NPS values is equivalent to saying that the two sets of results come from the same (statistical) population.  And if they come from the same population, then clearly we could validly mix both sets of respondents into one group and redraw our two samples, without materially changing the results.

And that is just what permutation testing does, except that it does it many, many times.  For example, we could redraw the two samples from the combined respondent group 500 or 1,000 times, and see on how many of those occasions the two NPS values were more different than they were in the original two samples.  If it is fewer than 5% of occasions, then we can say that there is a statistically significant difference between the two original NPS values at p < .05.
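For the curious, here is a minimal sketch of such a permutation test (my own illustration, not Forschungswerk’s actual procedure): the recoded -100/0/100 scores from the two samples are pooled, repeatedly re-split at random into two groups of the original sizes, and the observed NPS difference is compared against the resulting reference distribution.

```python
import random

def nps(scores):
    """NPS of a list of recoded scores (-100 / 0 / 100)."""
    return sum(scores) / len(scores)

def permutation_test(sample_a, sample_b, n_resamples=1000, seed=42):
    """Two-sided permutation p-value for the difference in NPS between two samples."""
    rng = random.Random(seed)
    observed_diff = abs(nps(sample_a) - nps(sample_b))
    pooled = sample_a + sample_b
    n_a = len(sample_a)
    count_extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)                       # mix both sets of respondents together
        resampled_diff = abs(nps(pooled[:n_a]) - nps(pooled[n_a:]))
        if resampled_diff >= observed_diff:       # at least as different as the original samples
            count_extreme += 1
    return count_extreme / n_resamples

# Hypothetical example: two waves of recoded scores.
wave_1 = [100] * 40 + [0] * 30 + [-100] * 30   # NPS = +10
wave_2 = [100] * 30 + [0] * 30 + [-100] * 40   # NPS = -10
p_value = permutation_test(wave_1, wave_2)
print(p_value)   # significant at p < .05 if this falls below 0.05
```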

This same approach can in fact be used to test the difference between other metrics that are equally intractable from a statistical point of view, such as CVA ratios.

Clearly this approach isn’t ideal, since most standard packages used by the marketing research community won’t do it.  But there are some Excel™-based routines available (e.g. the excellent and home-grown PopTools[3]).