e-Articles (Chapter 4)
Here are the titles of some full-length research articles that illustrate concepts discussed in this chapter. To view any article on-line, simply click on its title.
Illustrates how factor analysis, group discrimination, and convergent/discriminant correlations can be used to assess validity.
In this study, kappa was used to assess the degree of agreement among five raters who independently "scored" each of 50 newspaper articles by classifying it into certain categories. (See the second and third paragraphs of the "Abstract" as well as the "Methods" and "Results" sections of the full article.)
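The computation itself is not shown in this summary. As a rough sketch of what agreement among five raters involves, the following computes Fleiss' kappa, the multi-rater generalization of Cohen's kappa (assuming that is the variant appropriate to this design); the data are hypothetical, not from the study.

```python
# Fleiss' kappa for N subjects rated by n raters into k categories.
# Input: one row per subject; row[j] = number of raters who assigned
# the subject to category j (each row must sum to n).

def fleiss_kappa(counts):
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    # Observed agreement: proportion of rater pairs agreeing, per subject.
    p_subject = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_subject) / n_subjects
    # Chance agreement from the marginal category proportions.
    total = n_subjects * n_raters
    k = len(counts[0])
    p_cat = [sum(row[j] for row in counts) / total for j in range(k)]
    p_e = sum(p * p for p in p_cat)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical counts: 3 articles, 5 raters, 2 categories.
# All five raters agree on every article, so kappa comes out 1.0.
ratings = [[5, 0], [5, 0], [0, 5]]
kappa = fleiss_kappa(ratings)
```

Kappa corrects the raw percentage of agreement for the agreement expected by chance, which is why it is preferred over simple percent agreement in studies like this one.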
The primary focus of this study is on reliability and validity. Techniques used to assess reliability were Cronbach's alpha and test-retest correlations; techniques used to assess construct validity included factor analysis and pre-post comparisons for a group that went through a five-day training workshop.
Illustrates concern for the reliability of a measuring instrument following its revision. Focus is on both internal consistency (as measured by coefficient alpha) and test-retest correlations.
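Coefficient alpha, named in several of these summaries, can be illustrated with a minimal sketch; the formula is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), and the data below are hypothetical, not from any of the articles.

```python
# Cronbach's alpha (coefficient alpha) from a person-by-item score matrix.

def variance(xs):
    # Sample variance (n - 1 in the denominator).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    # scores[person][item]
    k = len(scores[0])
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical data: two perfectly parallel items, so alpha equals 1.0.
scores = [[1, 1], [2, 2], [3, 3], [4, 4]]
alpha = cronbach_alpha(scores)
```

In practice alpha is computed over many items and respondents; values near 1 indicate that the items hang together as measures of a single construct.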
Illustrates (1) the use of Cronbach's alpha to estimate reliability and (2) the use of factor analysis to assess construct validity. These reliability and validity concerns were directed to each of three parts of the instrument used to provide data for the study. (See the "Instrumentation" section of the authors' discussion of "Methodology.")
Illustrates a concern for content validity and internal consistency. Regarding reliability, a split-half technique was applied, with reference made to the "odd-even strategy" and "the Spearman-Brown formula." (See the section entitled "Instrumentation.")
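The "odd-even strategy" and "the Spearman-Brown formula" mentioned in this summary can be sketched as follows; the data are hypothetical, and the article's own computations are not reproduced here.

```python
# Split-half reliability with the odd-even strategy: correlate the two
# half-test totals, then step the correlation up to full-test length
# with the Spearman-Brown formula r_full = 2r / (1 + r).

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(scores):
    # scores[person][item]
    odd = [sum(row[0::2]) for row in scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in scores]  # items 2, 4, 6, ...
    r_half = pearson(odd, even)
    return 2 * r_half / (1 + r_half)           # Spearman-Brown step-up

# Hypothetical data: each person scores the same on every item, so the
# two halves correlate perfectly and the stepped-up estimate is 1.0.
scores = [[p, p, p, p] for p in range(1, 6)]
r_full = split_half_reliability(scores)
```

The Spearman-Brown correction is needed because the correlation between two half-tests underestimates the reliability of the full-length instrument.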
Copyright © 2012
Schuyler W. Huck