Statistical Significance vs. Practical Significance

Earlier in this chapter, you saw how researchers can use certain procedures to gauge whether a statistically significant finding is also meaningful in a practical sense. Unfortunately, many researchers do not rely on computed effect size indices, strength-of-association measures, or power analyses to help them avoid the mistake of "making a mountain out of a molehill." They simply use the six-step version of hypothesis testing and then get excited if the results turn out to be statistically significant.

Having results turn out to be statistically significant can cause researchers to go into a trance in which they willingly allow "the tail to wag the dog." That's what happened, I think, to the researchers who conducted a study comparing the attitudes of two groups of women. In their technical report, they first indicated that the two means turned out to be 67.88 and 71.24 (on a scale that ranged from 17 to 85) and then stated, "despite the small difference in means, there was a significant difference."
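To see how an effect size index guards against this mistake, here is a minimal sketch in Python. The two means come from the study described above, but the standard deviation and group sizes are hypothetical (the report supplied neither); they are chosen only to show how a difference that is small in standardized terms can still come out statistically significant when the samples are large:

```python
from math import sqrt
from statistics import NormalDist

# Means reported in the study (attitude scale ran from 17 to 85)
mean_1, mean_2 = 67.88, 71.24

# HYPOTHETICAL values: the report gave neither SDs nor group sizes.
sd = 12.0   # assumed common within-group standard deviation
n = 400     # assumed number of women in each group

# Cohen's d: the mean difference expressed in standard-deviation units
d = (mean_2 - mean_1) / sd
print(f"Cohen's d = {d:.2f}")  # ~0.28, "small" by Cohen's benchmarks

# The same difference tested for statistical significance
se = sd * sqrt(2 / n)                   # standard error of the mean difference
z = (mean_2 - mean_1) / se
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p (normal approximation)
print(f"z = {z:.2f}, p = {p:.5f}")      # ~3.96, p < .0001: "significant"
```

The point of the sketch: as the assumed sample sizes grow, the p-value shrinks toward zero, but Cohen's d stays fixed at the same modest value. Only the latter speaks to practical significance.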

To me, the 11 words just quoted conjure up the image of statistical procedures functioning as some kind of magic powder that can be sprinkled on one's data to transform a "molehill" of a mean difference into a "mountain" that deserves others' attention. However, statistical analyses lack that kind of magical power! Had the researchers who obtained those means of 67.88 and 71.24 not been blinded by the allure of statistical significance, they would have focused their attention on the small difference rather than on the significant difference. And had they done this, their final words might have been "although there was a significant difference, the difference in means was small."

(From Chapter 10, p. 255)

Copyright © 2012 Schuyler W. Huck. All rights reserved.
