OUTLINE FOR CHAPTER 4 IN THE 6th EDITION

Reliability and Validity

  1. The Meaning of Reliability and the Reliability Coefficient
    1. A good synonym for "reliability"
    2. Three possible forms of the "reliability question"
    3. The continuum of possible values for the reliability coefficient
  2. Different Approaches to Reliability
    1. Test-retest and the coefficient of stability
    2. Parallel-forms/alternate-forms/equivalent-forms and the coefficient of equivalence
    3. Internal consistency:
      1. Split-half (regular and Guttman)
      2. KR20 and KR21
      3. Cronbach's alpha
    4. Interrater reliability:
      1. Percent agreement
      2. Pearson's r
      3. Kendall's coefficient of concordance
      4. Cohen's kappa
      5. Intraclass correlation (ICC)
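Of the internal-consistency measures listed above, Cronbach's alpha is the most widely reported. As a hedged illustration (the data and function name here are hypothetical, not from the chapter), alpha can be computed from the item variances and the variance of respondents' total scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of k items, each a list of
    the same n respondents' scores on that item.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: 3 items answered by 5 respondents
items = [
    [2, 4, 3, 5, 1],
    [3, 4, 3, 5, 2],
    [2, 5, 4, 4, 1],
]
print(round(cronbach_alpha(items), 3))  # alpha near .94 for these made-up scores
```

Note that population variances are used consistently for both items and totals; using sample variances throughout gives the same alpha, since the n/(n-1) correction cancels.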
  3. The Standard Error of Measurement
    1. The abbreviation, SEM
    2. Using the SEM to build "confidence bands"
    3. Reliability, SEM, and the metric of the test
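The SEM and its confidence bands follow directly from a test's standard deviation and reliability coefficient. A minimal sketch (the SD of 15 and reliability of .91 are hypothetical values, chosen only because they resemble common IQ-scale figures):

```python
from math import sqrt

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - r_xx),
    expressed in the metric of the test itself."""
    return sd * sqrt(1 - reliability)

def confidence_band(score, sd, reliability, z=1.96):
    """Approximate band around an observed score
    (z = 1.96 for a roughly 95% band, z = 1.00 for roughly 68%)."""
    half_width = z * sem(sd, reliability)
    return (score - half_width, score + half_width)

# Hypothetical test: SD = 15, reliability coefficient = .91
print(round(sem(15, 0.91), 2))        # SEM = 4.5 score points
print(confidence_band(100, 15, 0.91))
```

This also makes the outline's third point concrete: because the SEM is scaled by the test's SD, two tests with identical reliability coefficients can have very different SEMs in raw-score units.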
  4. Warnings About Reliability
    1. Different reliability procedures conceptualize "consistency" differently
    2. Reliability coefficients say something about scores, not the actual instrument
    3. Reliability coefficients only estimate reliability
    4. With tests having tight time limits, internal consistency coefficients are spuriously high
    5. Reliability is not the only criterion that should be used when evaluating test data
  5. The Meaning of Validity and Its Relationship to Reliability
    1. A good synonym for "validity"
    2. High validity implies high reliability . . . but the reverse is not true
  6. Different Kinds of Validity
    1. Content validity
    2. Criterion-related validity:
      1. Concurrent
      2. Predictive
    3. Construct validity:
      1. Convergent and discriminant validity
      2. The "known-groups" approach
      3. Factor analysis
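Concurrent criterion-related validity is typically indexed by correlating scores on the new instrument with scores on an established criterion measure. A hedged sketch (both score lists below are invented for illustration):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical concurrent-validity check: scores on a new test
# versus scores on an established criterion measure
new_test  = [10, 12, 14, 15, 18, 20]
criterion = [ 9, 13, 13, 16, 17, 21]
print(round(pearson_r(new_test, criterion), 3))  # a high validity coefficient here
```

The same computation serves predictive validity when the criterion scores are gathered later, and convergent/discriminant validity when the second measure is another instrument rather than a criterion.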
  7. Warnings About Validity Claims
    1. Validity is different from reliability; hence, reliability does not establish validity
    2. Validity is tied to the scores produced by an instrument, not the instrument itself
    3. Efforts at assessing content validity require competent "judges"
    4. Efforts at assessing criterion-related validity require a high-quality criterion
    5. Validity often relies on correlation . . . so remember the warnings of Chapter 3
  8. Three Final Comments
    1. How high should reliability and validity be?
    2. Using multiple methods to assess instrument quality
    3. Does the presence of highly reliable and valid data guarantee a good study?

Copyright © 2012

Schuyler W. Huck
All rights reserved.


Site URL: www.readingstats.com
