Chapter 8: Misconceptions
When people read, hear, or prepare research summaries,
they sometimes have misconceptions about what is or isn't "sound
practice" regarding the collection, analysis, and interpretation
of data. Here are some of these common (and dangerous) misconceptions
associated with the content of Chapter 8.
 Other things held constant, a small sample can detect
a small effect size whereas a large sample is needed to detect a large
effect size.
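(The relationship actually runs the other way: smaller effects demand larger samples. As a minimal sketch, the usual normal-approximation formula for a two-sample comparison gives the required n per group; the 1.96 and 0.8416 constants assume a two-sided test at alpha = .05 with power = .80, and the effect sizes below are just illustrative.)

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.8416):
    """Approximate n per group for a two-sample z-test to detect a
    standardized effect d (two-sided alpha = .05, power = .80)."""
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.8))  # large effect: roughly 25 per group
print(n_per_group(0.2))  # small effect: roughly 393 per group
```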
 The term "effect size" has a single, agreed-upon
meaning among applied researchers.
 Type I errors are always worse than Type II errors.
 Research summaries that contain estimates of effect
size are superior to those studies that do not contain such estimates.
 In an a priori power analysis, it's better to use
a standardized effect size instead of an unstandardized effect size.
 In any study, if the researcher makes the level of
significance more rigorous so as to decrease the chances of a Type I
error, then he/she must "suffer the consequences" of an increase
in the likelihood of a Type II error.
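(The tradeoff holds only with everything else fixed; the "consequences" can be offset by enlarging the sample. A rough normal-approximation sketch with illustrative numbers — d = .5, and critical z values 1.96 and 3.29 for two-sided alpha = .05 and .001:)

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def power(d, n, z_alpha):
    """Approximate power of a two-sided two-sample z-test with n per
    group and standardized effect d (negligible lower tail ignored)."""
    return phi(d * math.sqrt(n / 2) - z_alpha)

print(power(0.5, 64, 1.96))   # alpha = .05: power near .80
print(power(0.5, 64, 3.29))   # alpha = .001: power drops sharply...
print(power(0.5, 150, 3.29))  # ...but a larger sample restores it
```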
 When the Bonferroni adjustment procedure is used,
each of the separate tests must be conducted using the same "reduced"
alpha level.
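(By Boole's inequality, the adjustment only requires that the per-test alpha levels sum to the overall familywise alpha; the equal alpha/m split is conventional, not mandatory. A minimal sketch with four hypothetical tests and made-up p-values:)

```python
# Per-test alphas need only sum to the familywise alpha (Boole's
# inequality); the equal alpha/m split is the most common choice,
# not the only valid one.
equal_split   = [0.05 / 4] * 4            # the usual "reduced" alpha
unequal_split = [0.02, 0.01, 0.01, 0.01]  # unequal, but also sums to .05

p_values = [0.015, 0.030, 0.004, 0.020]   # hypothetical results

for split in (equal_split, unequal_split):
    decisions = [p < a for p, a in zip(p_values, split)]
    print(decisions)  # which null hypotheses get rejected
```

Note that the two (equally legitimate) splits can lead to different rejection decisions for the same p-values.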
 There is an inverse relationship between sample size
and the probability of a Type I error.
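(When the null hypothesis is true, the probability of a Type I error is simply alpha, whatever the sample size. A small Monte Carlo sketch, assuming a two-sample z-test on N(0, 1) data; the n values, replication count, and seed are illustrative:)

```python
import math
import random

def type1_rate(n, reps=2000, alpha_z=1.96, seed=1):
    """Share of two-sample z-tests that reject when H0 is true
    (both groups drawn from the same N(0, 1) population)."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        m1 = sum(rng.gauss(0, 1) for _ in range(n)) / n
        m2 = sum(rng.gauss(0, 1) for _ in range(n)) / n
        z = (m1 - m2) / math.sqrt(2 / n)
        if abs(z) > alpha_z:
            rejections += 1
    return rejections / reps

print(type1_rate(10))   # close to .05
print(type1_rate(200))  # still close to .05, not smaller
```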
 The level of significance is something that the
researcher has control over, and there's nothing the reader of a
research summary can do about the alpha level.
 If a researcher reports a "trend toward significance,"
this means that changes in the study's participants are occurring such
that future studies are likely to reveal significant results.
 If a researcher says, in the presentation of results,
that "p < .05," this clearly means that (1) the level of
significance has been set equal to .05 and (2) the null hypothesis has
been rejected.
 There is an agreement among researchers not to use
the term "highly significant" unless p < .001.
 When engaged in hypothesis testing, it's OK for a
researcher to talk about "near misses" (e.g., a situation
where p = .06 when the level of significance has been set equal to .05).
 All researchers use one asterisk to signify that
p < .05, two asterisks to signify that p < .01, and three asterisks
to signify that p < .001.
 If a researcher says nothing about the level of significance
he/she used but simply reports that "p < .001," you would
likely be correct in thinking that the researcher set alpha equal to
.001 before analyzing the data.
 If a researcher reports that "p < .0000,"
you can be fully confident that a Type I error was not made.
 It would be unethical for anyone on the "receiving
end" of a research report to alter the researcher's alpha (say
by changing .05 to .001) and thereby conceivably reverse the researcher's
decision to reject (or fail to reject) the null hypothesis.
