OUTLINE FOR CHAPTER 7 (Part 2)

Hypothesis Testing

(Note: Steps 1, 2, 4, & 6 are covered on the outline for the 1st half of Chapter 7)

  1. The Final 2 Steps (#5 & #3) of the Basic Hypothesis Testing Procedure
    1. Step #5: The Criterion for Evaluating the Sample Evidence
      1. The primary question: "Are the sample data inconsistent with what would likely occur if Ho were true?"
      2. Three possible outcomes when the numerical summary of the sample data is compared against Ho's pinpoint number . . . and what should be decided about Ho
        1. They're identical . . . and therefore Ho obviously cannot be rejected
        2. A difference exists, but it's a "small" difference (i.e., small enough to be within the limits of expected sampling error) . . . and therefore it's not logical to reject Ho
        3. A difference exists, and it's a "big" difference (i.e., larger than what's likely to have been produced simply by sampling error) . . . and thus Ho should be rejected
      3. Two procedures for deciding whether an observed difference (between the numerical summary of the sample data and Ho's pinpoint number) should be considered "small" or "large"
        1. Compare the calculated value (obtained in Step #4) against a tabled "critical value"
        2. Compare the data-based p-value (obtained in Step #4) against the "level of significance"
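The two procedures in Step #5 always lead to the same decision about Ho. Here is a minimal sketch using a two-tailed one-sample z-test and Python's standard-library `statistics.NormalDist`; the sample numbers (mean, sigma, n) are invented purely for illustration.

```python
from statistics import NormalDist

# Hypothetical example: testing Ho: mu = 100 (all numbers invented)
mu0 = 100      # Ho's pinpoint number
xbar = 103.2   # numerical summary of the sample data
sigma = 15     # known population standard deviation
n = 50         # sample size
alpha = .05    # level of significance (chosen in Step #3)

z = (xbar - mu0) / (sigma / n ** 0.5)  # calculated value (Step #4)

# Procedure 1: compare the calculated value against a tabled critical value
critical = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical z
reject_by_critical = abs(z) > critical

# Procedure 2: compare the data-based p-value against the level of significance
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value
reject_by_p = p < alpha

# The two procedures never disagree about Ho
assert reject_by_critical == reject_by_p
print(f"z = {z:.3f}, critical = {critical:.3f}, p = {p:.4f}")
```

Here the observed difference (103.2 vs. 100) turns out to be "small": z falls inside the critical value and, equivalently, p exceeds alpha, so Ho is not rejected.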
    2. Step #3: Selecting the Level of Significance
      1. The level of significance as a "scientific cut-off point" for deciding whether Ho should be rejected
      2. Popular levels of significance: .05 and .01 (and sometimes .001) . . . with .05 used most often
      3. Different ways researchers talk about the level of significance: alpha, α, p = .05, p < .05
      4. Type I & Type II errors (and how α defines the probability of the former and influences that of the latter when α is changed)
      5. Two points of possible confusion regarding the level of significance
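Because α defines the probability of a Type I error, a simulation in which Ho is actually true should wrongly reject Ho in roughly α of the samples. The sketch below illustrates this with invented population values (mu0 = 50, sigma = 10) and repeated z-tests.

```python
import random
from statistics import NormalDist

random.seed(1)
alpha = .05
n, mu0, sigma = 30, 50, 10            # invented values; Ho: mu = 50 is TRUE here
critical = NormalDist().inv_cdf(1 - alpha / 2)

trials = 10_000
rejections = 0
for _ in range(trials):
    sample = [random.gauss(mu0, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    z = (xbar - mu0) / (sigma / n ** 0.5)
    if abs(z) > critical:             # Ho wrongly rejected: a Type I error
        rejections += 1

type1_rate = rejections / trials
print(f"Type I error rate: {type1_rate:.3f} (alpha = {alpha})")
```

Lowering α (say, from .05 to .01) makes Type I errors rarer but, as the outline notes, simultaneously raises the risk of a Type II error.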
  2. Results That Are Highly Significant and Near Misses
    1. The kind of p that causes a result to be "highly significant"
    2. Using different p-levels within the same study (even though a constant α is used)
    3. The kind of p that causes a result to be a "near miss"
    4. Four alternative labels for an outcome that is a "near miss"
    5. The all-or-nothing approach to hypothesis testing
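The labels above can be pictured as regions on the p-value scale. The cutoffs in this sketch (.001 for "highly significant," the .05–.10 band for a "near miss") are illustrative assumptions, not rules taken from the chapter, and the outline's four alternative near-miss labels are not reproduced here.

```python
def describe_result(p, alpha=.05):
    """Illustrative labels only; the cutoffs are assumptions, not fixed rules."""
    if p < .001:
        return "highly significant"   # p far below alpha
    if p < alpha:
        return "significant"          # Ho rejected
    if p < .10:
        return "near miss"            # just beyond alpha; Ho still NOT rejected
    return "not significant"

for p in (.0004, .03, .06, .40):
    print(f"p = {p}: {describe_result(p)}")
```

Under the strict all-or-nothing approach, only two of these outcomes matter: p < α (reject Ho) or p ≥ α (fail to reject Ho); a "near miss" counts fully as a failure to reject.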
  3. A Few Cautions
    1. Two meanings of "alpha"
    2. The importance of Ho
    3. The ambiguity of the word "hypothesis"
    4. When p is reported to be equal to or less than zero
    5. The meaning of the term "significant"
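The fourth caution above can be shown concretely: a p-value can never truly be zero (or negative), but software that rounds to a few decimal places can print one that way. A small sketch with an invented, genuinely tiny p-value:

```python
# Caution: software often rounds tiny p-values, making them *look* like zero.
p = 3.2e-7                      # a hypothetical, genuinely tiny p-value
rounded = f"p = {p:.3f}"        # what a 3-decimal printout shows
print(rounded)                  # displays "p = 0.000", but p is NOT zero

# An honest report avoids claiming the impossible:
report = "p < .001" if p < .001 else f"p = {p:.3f}"
print(report)
```

So a printout of "p = .000" should be read (and reported) as p < .001, not as p equal to zero.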

Copyright © 2012

Schuyler W. Huck
All rights reserved.


Site URL: www.readingstats.com
