Chapter 8: Misconceptions

When people read, hear, or prepare research summaries, they sometimes hold misconceptions about what is or isn't "sound practice" in the collection, analysis, and interpretation of data. Here are some common (and dangerous) misconceptions associated with the content of Chapter 8.

  1. Other things held constant, a small sample can detect a small effect size whereas a large sample is needed to detect a large effect size.
  2. The term "effect size" has a single, agreed-upon meaning among applied researchers.
  3. Type I errors are always worse than Type II errors.
  4. Research summaries that contain estimates of effect size are superior to those that do not contain such estimates.
  5. In an a priori power analysis, it's better to use a standardized effect size instead of an unstandardized effect size.
  6. In any study, if the researcher makes the level of significance more rigorous so as to decrease the chances of a Type I error, then he/she must "suffer the consequences" of an increase in the likelihood of a Type II error.
  7. When the Bonferroni adjustment procedure is used, each of the separate tests must be conducted using the same "reduced" alpha level.
  8. There is an inverse relationship between sample size and the probability of a Type I error.
  9. The level of significance is something that the researcher has control over, and there's nothing the reader of a research summary can do about the alpha level.
  10. If a researcher reports a "trend toward significance," this means that changes in the study's participants are occurring such that future studies are likely to reveal significant results.
  11. If a researcher says, in the presentation of results, that "p < .05," this clearly means that (1) the level of significance has been set equal to .05 and (2) the null hypothesis has been rejected.
  12. There is agreement among researchers not to use the term "highly significant" unless p < .001.
  13. When engaged in hypothesis testing, it's OK for a researcher to talk about "near misses" (e.g., a situation where p = .06 when the level of significance has been set equal to .05).
  14. All researchers use one asterisk to signify that p < .05, two asterisks to signify that p < .01, and three asterisks to signify that p < .001.
  15. If a researcher says nothing about the level of significance he/she used but simply reports that "p < .001," you would likely be correct in thinking that the researcher set alpha equal to .001 before analyzing the data.
  16. If a researcher reports that "p < .0000," you can be fully confident that a Type I error was not made.
  17. It would be unethical for anyone on the "receiving end" of a research report to alter the researcher's alpha (say by changing .05 to .001) and thereby conceivably reverse the researcher's decision to reject (or fail to reject) the null hypothesis.
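The first misconception gets the relationship between sample size and effect size backwards: other things held constant, the smaller the true effect, the larger the sample needed to detect it. A minimal sketch of that relationship, using the approximate power formula for a one-sided one-sample z-test (the effect sizes 0.2 and 0.8 are Cohen's conventional "small" and "large" benchmarks; the function name `z_power` is my own, not from the book):

```python
import math
from statistics import NormalDist

def z_power(d, n, alpha=0.05):
    """Approximate power of a one-sided one-sample z-test:
    the probability of rejecting H0 when the true standardized
    effect size is d, with sample size n and significance level alpha."""
    z_crit = NormalDist().inv_cdf(1 - alpha)           # critical z for this alpha
    return NormalDist().cdf(d * math.sqrt(n) - z_crit)

# A large effect (d = 0.8) is detected easily even with a small sample ...
print(round(z_power(0.8, n=20), 2))    # high power with only 20 cases

# ... but a small effect (d = 0.2) with the same n usually goes undetected,
print(round(z_power(0.2, n=20), 2))    # low power

# and needs roughly ten times the sample before power is comparable.
print(round(z_power(0.2, n=200), 2))   # power recovers near n = 200
```

Note that this calculation also illustrates misconception 5's terrain: an a priori power analysis can be run from either a standardized effect size (as here) or an unstandardized one paired with a variability estimate; neither form is inherently "better."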

Copyright © 2012

Schuyler W. Huck
All rights reserved.


Site URL: www.readingstats.com
