Quiz Over Chapter 9 of the 6th Edition


Statistical Inferences Concerning Bivariate Correlation Coefficients

Introduction

  1. (T/F)  The notions of "sample" & "population" are irrelevant when dealing inferentially with correlation.
  2. (T/F)  Researchers usually build CIs around their sample values of r rather than deal with null hypotheses.

Statistical Tests Involving a Single Correlation Coefficient
  1. Do researchers very often make inferences based upon a single correlation that comes from a single sample?
  2. (T/F)  The "direction" of the researcher's inference moves from the population to the sample.
  3. What pinpoint number is usually in the null hypothesis?
  4. Does the typical researcher indicate, either in words or symbols, the null hypothesis that he/she tested?
  5. Look at Excerpt 9.5. Express in symbols the null hypothesis that most likely was tested . . . and rejected.
  6. (T/F)  The sample value of r, computed from the collected data, typically serves as the calculated value.
  8. If r is compared against a critical value, Ho will be rejected if the former is __ (smaller/larger) than the latter. (A computational sketch of this kind of test follows this list.)
  8. (Yes/No)  Do researchers usually include the critical value in their research report?
  9. Can tests on correlation coefficients be conducted in a one-tailed fashion?
  10. (T/F)  Inferential tests can be done on Pearson & Spearman correlations but not on biserial or point-biserial correlations.
  11. (T/F) If a researcher tests a correlation coefficient but does not indicate the type of correlation, you should guess that it was Pearson's r.
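
Items 6-8 above describe the usual test of a single r against the pinpoint null value. One common route is to convert the sample r into a t statistic with n - 2 degrees of freedom (an equivalent route is to compare r itself against a tabled critical value). Below is a minimal Python sketch, assuming SciPy is available and assuming H0: ρ = 0; the r and n are made up, and notice that the result depends on n as well as on the size of r.

    # Hypothetical numbers for illustration only; not taken from any excerpt.
    import math
    from scipy import stats   # assumed available; used only for the p-value

    r, n = 0.31, 50                                  # sample correlation and sample size
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)   # test statistic with df = n - 2
    p = 2 * stats.t.sf(abs(t), df=n - 2)             # two-tailed p-value
    print(round(t, 2), round(p, 3))                  # reject H0: rho = 0 if p < alpha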

Tests on Many Correlation Coefficients (Each of Which is Treated Separately)
  1. Do researchers very often set up and test more than 1 correlational null hypothesis in the same study?
  2. (T/F)  If two or more rs are tested in the same study, the null hypothesis will likely be the same in all tests.
  3. Look at Excerpt 9.12. If there had been 8 coping strategies rather than 6, how many null hypotheses would have been tested in this excerpt's table?
  4. If a researcher computes all possible bivariate correlations among 5 variables, what would the Bonferroni-corrected alpha level be if the researcher wants to keep the Type I error risk at 5% for the full set of tests? (A sketch of the Bonferroni arithmetic follows this list.)
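
Item 4 above involves two steps: counting how many bivariate correlations exist among the variables, and dividing the desired studywide alpha by that count. A minimal Python sketch of the Bonferroni arithmetic, using only the figures named in the question:

    from math import comb

    k = 5                                        # number of variables in item 4
    num_tests = comb(k, 2)                       # all possible bivariate rs: k(k-1)/2
    alpha_family = 0.05                          # Type I error risk desired for the full set
    alpha_per_test = alpha_family / num_tests    # Bonferroni-corrected alpha for each test
    print(num_tests, alpha_per_test)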

Tests on Reliability and Validity Coefficients
  1. Can a researcher set up and test a null hypothesis concerning a reliability or validity coefficient?
  2. Look at Excerpt 9.16. In order for the GMAT to have accounted for 80% of the variability among the final MBA GPAs, how high would the correlation have needed to be? (A short computational sketch follows this list.)
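
Item 2 above turns on the fact that the proportion of explained variance is the square of the correlation, so moving from a proportion back to a correlation means taking a square root. A minimal sketch (the .80 figure comes straight from the question):

    import math

    proportion_explained = 0.80                  # variance in final MBA GPAs accounted for
    r_needed = math.sqrt(proportion_explained)   # the r whose square equals .80
    print(round(r_needed, 2))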

Statistically Comparing Two Correlation Coefficients
  1. (T/F) If a researcher inferentially compares two correlations, there might be 1 group involved or 2 groups.
  2. If the correlation between height and weight for a sample of men is compared with the correlation between height and weight in a sample of women, how many inferences would be made to the 2 populations? (A sketch of this kind of comparison follows this list.)
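
A common way to compare the rs from two independent samples (as in item 2's men-versus-women example) is to convert each r with Fisher's r-to-z transformation and then form a z test on the difference. A minimal Python sketch; the rs and ns are hypothetical, and SciPy is assumed only for the normal-curve p-value.

    import math
    from scipy import stats

    r1, n1 = 0.55, 80    # height-weight r and n for the sample of men (hypothetical)
    r2, n2 = 0.40, 90    # height-weight r and n for the sample of women (hypothetical)

    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher's r-to-z transformation
    se = math.sqrt(1/(n1 - 3) + 1/(n2 - 3))      # standard error of the difference in z units
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))                # two-tailed p-value
    print(round(z, 2), round(p, 3))              # reject H0: rho1 = rho2 if p < alpha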

The Use of Confidence Intervals Around Correlation Coefficients
  1. Which is more popular: setting up and testing a correlational Ho or building a CI around the sample value of r?
  2. If a researcher discovered that r = .13 and that the 95% CI extended from .06 to .20, would he/she claim p < .05? (A sketch of how such an interval is built follows this list.)
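
The interval in item 2 is the kind usually built by transforming r to Fisher's z, moving 1.96 standard errors in each direction on the z scale, and transforming the end points back to the r metric. A minimal Python sketch; the r of .13 comes from the question, while the n is a made-up value chosen so the interval lands near .06 to .20.

    import math

    r, n = 0.13, 750                          # r from item 2; n is hypothetical
    z = math.atanh(r)                         # Fisher's r-to-z
    se = 1 / math.sqrt(n - 3)                 # standard error on the z scale
    lo, hi = z - 1.96 * se, z + 1.96 * se
    ci = (math.tanh(lo), math.tanh(hi))       # back-transform the end points to r units
    print(round(ci[0], 2), round(ci[1], 2))   # an interval that excludes 0 implies p < .05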

Cautions
  1. Can a researcher's r turn out to be close to zero and yet still be significantly different from zero?
  2. Which of these would constitute better evidence that there is, in the population, a strong relationship between the two variables that a researcher has measured and found to be significantly related:
    1. p < .0001
    2. r² = .70
    3. n = 10,000
  3. Do many researchers concern themselves with the notions of "power" and "effect size" when testing their correlations?
  4. If a correlation coefficient is found to be statistically significant, and if a power analysis is then conducted, is it fair to assume that a "strong" relationship exists if the statistical test is shown to have had "high" power?
  5. Do many folks concern themselves with the notions of "linearity" & "homoscedasticity" when testing their correlations?
  6. Which of these terms is a fairly good synonym for the term "homoscedasticity"?
    1. Equal means
    2. Equal variances
    3. Equal correlations
    4. Equal variables
  7. (T/F)  If a bivariate correlation coefficient has been found to be statistically significant with p < .05 (or better yet, with p < .01), the researcher can legitimately infer that a causal relationship exists between the 2 variables.
  8. Attenuation causes r to ____ (underestimate/overestimate) ρ, the magnitude of the correlation in the population. (A sketch of the classical correction for attenuation follows this list.)
  9. What causes attenuation?
    1. an n that's too small
    2. measurement errors
    3. inadequate statistical power
  10. Who is connected to the procedure that goes by the name "r-to-z transformation"?
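
Items 8-9 above refer to attenuation and its classical (Spearman) correction, in which the observed r is divided by the square root of the product of the two measures' reliability coefficients to estimate the correlation between the underlying true scores. A minimal Python sketch with made-up numbers:

    import math

    r_observed = 0.40            # observed correlation between the two measures (hypothetical)
    rel_x, rel_y = 0.80, 0.70    # reliability coefficients of the two measures (hypothetical)

    r_corrected = r_observed / math.sqrt(rel_x * rel_y)   # correction for attenuation
    print(round(r_corrected, 2))                          # the corrected estimate exceeds the observed r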

These Questions are Supposed to be a Bit More Challenging
  1. Look at Excerpt 9.18. If the value of the second r had turned out equal to .26 (rather than equal to -.22), how many of the 3 results would have been significant?
  2. It's a good guess that the study from which Excerpt 9.3 was taken involved a ___ (small/large) sample size.
  3. In Excerpt 9.4, an r of .13 turned out to be nonsignificant. In Excerpt 9.12, an r of .14 turned out to be significant (with p < .001). Assuming that both of these rs were of the Pearson product-moment variety, how could the results be so different when the sample correlation coefficients are almost identical?


Click here for answers.

 

Copyright © 2012

Schuyler W. Huck
All rights reserved.
