Quiz (Chapter 4)


Reliability and Validity

The Meaning of Reliability and the Reliability Coefficient
  1. What is a really good one-word synonym for "reliable"?
    Accurate
    Consistent
    Correlated
    Punctual
    Valid
  2. Reliability coefficients can assume numerical values anywhere between ___ and ___ .
    -1.0 and +1.0
    0 and +1.0
Different Approaches to Reliability
  1. Whereas the number produced by test-retest reliability is called the "coefficient of ______," the number produced by parallel-forms reliability is called the "coefficient of ______."
    equivalence; homogeneity
    equivalence; stability
    homogeneity; equivalence
    homogeneity; stability
    stability; equivalence
    stability; homogeneity
  2. What other term(s) is/are sometimes used in place of the term "parallel-forms reliability"?
    Alternate-forms
    Equivalent-forms
    Both of the above
  3. The reliability procedures known as "split-half," "K-R #20," and "alpha" are similar in that they are focused on _______ reliability.
    test-retest
    interrater
    internal consistency
    parallel-forms
  4. In estimating a test's split-half reliability, half of the examinees respond to the odd-numbered items while the other half respond to the even-numbered items.
    True
    False
  5. When the K-R #20 reliability method is used, examinees will be tested ___ time(s).
    1
    2
    3
    4
  6. Who invented the reliability procedure that's often called "alpha" or "coefficient alpha"?
    Cronbach
    Kendall
    Kuder
    Richardson
  7. Which internal consistency reliability procedure--K-R #20 OR Cronbach's alpha--fits a situation where data are collected on a 5-point scale that goes from "Strongly Agree" to "Strongly Disagree"?
    K-R #20
    Cronbach's alpha
  8. Which of the reliability procedures that focus on internal consistency makes use of the Spearman-Brown formula?
    Split-half
    K-R #20
    Cronbach's alpha
  9. If applied to the same right/wrong (i.e., 0 or 1) data from a test, the split-half reliability coefficient will always turn out the same as the K-R #20 reliability coefficient.
    True
    False
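The three internal consistency procedures named above can be made concrete with a small computation. Below is a minimal Python sketch (using numpy, with made-up 0/1 item data); when the item and total-score variances are computed the same way, K-R #20 and Cronbach's alpha give identical results on dichotomous data, whereas the split-half coefficient generally does not.

```python
import numpy as np

# Hypothetical right/wrong (0 or 1) scores: 6 examinees x 4 items.
X = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])
n_items = X.shape[1]
total_var = X.sum(axis=1).var()          # variance of examinees' total scores

# Split-half: correlate odd-item totals with even-item totals, then
# step the half-test correlation up with the Spearman-Brown formula.
odd = X[:, 0::2].sum(axis=1)
even = X[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# K-R #20: defined only for dichotomous (0/1) items.
p = X.mean(axis=0)                       # proportion passing each item
q = 1 - p
kr20 = (n_items / (n_items - 1)) * (1 - (p * q).sum() / total_var)

# Cronbach's alpha: same formula but with item variances, so it also
# handles rating-scale (e.g., 5-point agree/disagree) data.
item_vars = X.var(axis=0)
alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)
```

For 0/1 items each item variance equals p*q, which is why `kr20` and `alpha` agree exactly here while `split_half` does not.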
Interrater Reliability
  1. The coefficient of concordance is symbolized by the letter ___ .
    C
    H
    Q
    W
  2. Interrater reliability will turn out equal to ____ if all raters are in full agreement.
    0
    1
    10
    100
  3. What name is associated with the interrater reliability technique that uses rank-order data and produces a "coefficient of concordance"?
    Cohen
    Cronbach
    Kendall
    Richardson
    Spearman
  4. What name is associated with the interrater reliability technique that uses nominal data and produces a kappa coefficient?
    Cohen
    Cronbach
    Kendall
    Richardson
    Spearman
  5. What do the letters ICC stand for?
    Internal consistency coefficient
    Intermediate coefficient of consistency
    Interval-curved classification
    Intraclass correlation
  6. Although many techniques can be used to assess interrater reliability, Pearson's r is not one of them.
    True
    False
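The two name-brand techniques asked about above, Kendall's coefficient of concordance (W) for rank-order data and Cohen's kappa for nominal data, are short formulas. A minimal Python sketch with invented ratings, assuming untied ranks for W and exactly two raters for kappa:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for an m-raters x n-objects
    matrix of untied ranks. W = 1 means the raters agree completely."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters'
    nominal classifications of the same subjects."""
    a, b = np.asarray(a), np.asarray(b)
    p_obs = (a == b).mean()                              # observed agreement
    p_chance = sum((a == c).mean() * (b == c).mean()     # agreement expected
                   for c in np.union1d(a, b))            # by chance alone
    return (p_obs - p_chance) / (1 - p_chance)

# Three raters rank four essays identically, so W comes out 1.0.
agree = np.array([[1, 2, 3, 4],
                  [1, 2, 3, 4],
                  [1, 2, 3, 4]])
w_full = kendalls_w(agree)                               # -> 1.0

# Two raters sort four subjects into nominal categories 0 and 1.
kappa = cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0])         # -> 0.5
```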
The Standard Error of Measurement
  1. The size of the SEM is ____ related to the amount of reliability present in the data.
    directly
    indirectly
  2. If a student's score on a test is 82 and if the SEM = 4, that student's 95% confidence band would extend from __ to __.
    80-84
    78-86
    74-90
    70-94
    None of the above
  3. Which assessment of consistency--reliability OR SEM--is expressed "in" the same units as the scores around which confidence bands are built?
    Reliability
    SEM
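The arithmetic behind the confidence-band question above is short enough to script. A minimal Python sketch using the standard formula SEM = SD * sqrt(1 - reliability) and z = 1.96 for a 95% band; the worked numbers are the quiz's own (observed score 82, SEM = 4):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r_xx).
    Higher reliability -> smaller SEM (they are indirectly related)."""
    return sd * math.sqrt(1 - reliability)

def confidence_band(score, sem_value, z=1.96):
    """Band around an observed score; z = 1.96 gives the usual 95% band."""
    return score - z * sem_value, score + z * sem_value

# E.g., SD = 10 and reliability = .84 yield SEM = 4.
lo, hi = confidence_band(82, 4)   # -> (74.16, 89.84), roughly 74 to 90
```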
Warnings About Reliability
  1. High test-retest reliability implies high internal consistency reliability; conversely, low test-retest reliability implies low internal consistency reliability.
    True
    False
  2. Reliability "resides" in the measuring instrument itself, not in the scores obtained by using the measuring instrument.
    True
    False
  3. Measures of internal consistency will be ______ if a test is administered under highly speeded conditions.
    too high
    too low
    unaffected
  4. Which of these statements is better: (a) We estimated the reliability of our data. (b) We determined the reliability of our data.
    a
    b
    Neither statement is superior to the other
Validity
  1. What is a really good one-word synonym for "valid"?
    Accurate
    Authoritative
    Correlated
    Reliable
    Standardized
The Relationship Between Reliability and Validity
  1. If reliability is very, very high . . . then validity must also be very, very high.
    True
    False
  2. If validity is very, very high . . . then reliability must also be very, very high.
    True
    False
Different Kinds of Validity
  1. Content validity normally ____ expressed by means of a numerical coefficient.
    is
    is not
  2. The term "criterion-related validity" covers two approaches: predictive and _____.
    concurrent
    construct
    content
  3. A validity coefficient normally takes the form of a ______ .
    mean
    SD
    correlation
    percentage
  4. Which of these is a construct?
    Height
    Hair color
    Happiness
    Date of birth
  5. To support the convergent and discriminant validity of a new test, correlation coefficients must turn out to be positive and negative in sign, respectively.
    True
    False
  6. Which of the main validity procedures (content, criterion-related, or construct) is sometimes dealt with by the statistical technique of factor analysis?
    Content
    Criterion-related
    Construct
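As the questions above note, a criterion-related validity coefficient normally takes the form of a correlation. A minimal Python sketch with invented data, in which a hiring test (the predictor) is correlated with supervisor ratings collected later (the criterion), illustrating predictive validity:

```python
import numpy as np

# Hypothetical data: aptitude-test scores at hiring (predictor) and
# supervisor ratings gathered six months later (criterion).
test_scores = np.array([55, 62, 70, 71, 80, 88])
criterion = np.array([2.1, 2.4, 3.0, 2.8, 3.6, 3.9])

# The predictive validity coefficient is simply the Pearson correlation
# between the test scores and the criterion measure.
validity_coef = np.corrcoef(test_scores, criterion)[0, 1]
```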
Warnings About Reliability and Validity
  1. Reliability is a necessary but not sufficient condition for validity.
    True
    False
  2. Where does the validity of a new test reside, in the test itself or in the scores produced by an administration of the test?
    In the test
    In the scores
  3. What might cause an honest researcher to claim that his/her test has high content validity when in fact it has very little content validity when evaluated by several judges?
    Nonnormal data
    Poor reliability
    Not enough examinees
    Bad judges
  4. What might cause an honest researcher to claim that his/her test has high criterion-related validity when in fact it has very little criterion-related validity?
    Nonnormal data
    Poor reliability
    Not enough examinees
    A lousy criterion
Two Final Comments
  1. How high should reliability and validity coefficients be before we can confidently call them "high enough"?
    .50
    .75
    .90
    .95
    It depends
  2. If a researcher conducts a study wherein the data are perfectly reliable and valid, it's still possible for the researcher's data-based conclusions to be utterly worthless, even if it's the case that an important research question was under investigation.
    True
    False

 

Copyright © 2012

Schuyler W. Huck
All rights reserved.


Site URL: www.readingstats.com
