OUTLINE FOR CHAPTER 9 IN THE 6th EDITION
Statistical Inferences Concerning Bivariate Correlation Coefficients
- Introduction
  - The distinction between descriptive/inferential concerns about an r
  - Inferences on r and the relative popularity of hypothesis testing vs. confidence intervals
- Statistical Tests Involving a Single Correlation Coefficient
  - The inferential purpose
  - The null hypothesis:
    - The usual situation, H0: ρ = 0
    - Other possibilities
  - Deciding if r is statistically significant
    - Comparing r against the critical value
    - Comparing p against alpha
  - One-tailed and two-tailed tests on r
  - Tests on specific kinds of correlations (e.g., r, r_s, r_pb, etc.)
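The significance test this section outlines is usually carried out by converting the sample r into a t statistic, t = r·sqrt(n − 2) / sqrt(1 − r²), evaluated with n − 2 degrees of freedom. A minimal sketch in Python (the sample values and the critical value are illustrative, not taken from the chapter):

```python
import math

def t_for_r(r, n):
    """Convert a sample r (based on n pairs of scores) into the t
    statistic used to test H0: rho = 0, with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# Hypothetical example: r = .50 computed from n = 20 pairs.
t = t_for_r(0.50, 20)               # about 2.45
critical_t = 2.101                  # two-tailed critical t, alpha = .05, df = 18
significant = abs(t) > critical_t   # True: r is statistically significant
```

This mirrors the "comparing r against the critical value" route; the "comparing p against alpha" route reaches the same reject/retain decision by converting t into a p-value.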
- Tests on Many Correlation Coefficients (Each Treated Separately)
  - Tests on the entries of a correlation matrix
  - Tests on many correlation coefficients, with results presented in a passage of text
  - The Bonferroni adjustment technique
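The Bonferroni adjustment is arithmetically simple: with k separate tests and a desired familywise alpha, each individual test is evaluated against alpha/k. A sketch (the numbers are illustrative):

```python
def bonferroni_alpha(familywise_alpha, k):
    """Per-test significance level when k correlations are each tested
    separately but the familywise Type I error rate is to be capped."""
    return familywise_alpha / k

# Hypothetical example: six rs tested with a familywise alpha of .05;
# each correlation's p must fall below .05 / 6 to be declared significant.
per_test_alpha = bonferroni_alpha(0.05, 6)
```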
- Tests on Reliability and Validity Coefficients
- Statistically Comparing Two Correlation Coefficients
  - Comparing the correlation coefficients from two different samples against each other
  - Comparing r_xz and r_yz in one sample where there are 3 variables (X, Y, & Z)
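For the first of these comparisons, two rs from independent samples, the usual procedure transforms each r with Fisher's r-to-z and divides the difference by its standard error, sqrt(1/(n1 − 3) + 1/(n2 − 3)). A sketch with hypothetical values (the within-one-sample comparison of r_xz and r_yz calls for a different, dependent-samples test not shown here):

```python
import math

def compare_independent_rs(r1, n1, r2, n2):
    """z statistic for H0: rho1 = rho2 when r1 and r2 come from two
    independent samples (Fisher r-to-z transformation; atanh(r) = z)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# Hypothetical example: r = .60 (n = 53) vs. r = .20 (n = 53).
z = compare_independent_rs(0.60, 53, 0.20, 53)
# |z| > 1.96 rejects H0 at the two-tailed .05 level
```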
- The Use of Confidence Intervals Around Correlation Coefficients
  - Two possible reasons to build a CI around a sample r, only one of which involves an H0
  - The "rule" for determining whether H0: ρ = 0 should be rejected when it is evaluated via a CI
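Such a CI is typically built on the Fisher z scale and transformed back to the r scale; the "rule" just mentioned is that H0: ρ = 0 is rejected (two-tailed) exactly when the interval excludes zero. A sketch with hypothetical values:

```python
import math

def r_confidence_interval(r, n, z_crit=1.96):
    """Approximate 95% CI for rho: move r to Fisher's z scale, go
    z_crit standard errors (1 / sqrt(n - 3)) out each way, transform back."""
    z = math.atanh(r)
    half_width = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half_width), math.tanh(z + half_width)

# Hypothetical example: r = .50 from n = 30 pairs.
lo, hi = r_confidence_interval(0.50, 30)
# The interval excludes 0, so H0: rho = 0 would be rejected at the .05 level.
```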
- Cautions
  - Relationship strength, effect size, and power
  - Three underlying assumptions:
    - The notions of "linearity," "homoscedasticity," and "normality"
    - Assessing the plausibility of these assumptions
  - Causality and correlation
  - Attenuation:
    - What causes it
    - Correcting for it
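The standard correction for attenuation divides the observed r by the square root of the product of the two measures' reliability coefficients. A sketch (the values are hypothetical):

```python
import math

def correct_for_attenuation(r_xy, rel_x, rel_y):
    """Estimate the correlation between the true scores on X and Y, given
    the observed r_xy and the reliability coefficients of X and Y."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical example: observed r = .40, reliabilities .80 and .70.
r_true = correct_for_attenuation(0.40, 0.80, 0.70)   # about .53
```

Because the denominator is at most 1, the corrected coefficient is never smaller than the observed one, which is why unreliable measures are said to "attenuate" r.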
