p vs. alpha or the Test Statistic vs. the CV

Dear Students,

As you know, most researchers who set up and test a null hypothesis hope that they will be able to "reject the null." There are, of course, a few exceptions to this general state of affairs . . . but they constitute a tiny minority. In the vast majority of cases, the researcher hopes that he/she will be able to REJECT THE NULL.

In order for the typical researcher to get what he/she wants, the sample evidence must turn out to be inconsistent with what we'd expect to happen if the null were true. That should make sense. If the sample data turned out to be CONSISTENT with Ho, the researcher would have no scientific grounds to reject the null. It's only when the sample data turn out to be INCONSISTENT with Ho that the researcher can logically reject his/her null.

To determine whether the sample evidence is INCONSISTENT with Ho, the researcher will do one of two things.

If a computer has analyzed the sample data, the researcher will look to see what the computer says "p" is (given the sample evidence that's been entered into the computer, as well as the null hypothesis that's being tested). If the value of "p" is SMALLER than the level of significance, then the data are viewed as being INCONSISTENT with Ho . . . and this gives the researcher the right to REJECT THE NULL. For example, if alpha is set equal to .05 and the computer says that the data-based "p" is equal to .02, the researcher will REJECT THE NULL. This researcher would probably report both the calculated value (computed by the computer) AND the p-value. For instance, if the researcher has done a t-test and ended up with a calculated t-value of 4.83, he/she would likely summarize things by saying "t = 4.83, p = .02," and we would know that the null was rejected because .02 is smaller than .05. (.05 is the most common level of significance used by researchers.)
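For those of you who find code helpful, the p-versus-alpha decision rule can be sketched in a few lines. (This is just an illustration of the logic; the function name and numbers are my own, and I'm using Python.)

```python
# The p-value approach: reject the null when the data-based "p"
# turns out to be SMALLER than the chosen level of significance.

def decision_by_p(p, alpha=0.05):
    """Return the researcher's decision about Ho, given p and alpha."""
    return "reject the null" if p < alpha else "fail to reject the null"

# The example from above: alpha is set at .05 and the computer says p = .02.
print(decision_by_p(0.02, 0.05))   # the null is rejected, since .02 < .05

# If p had come out larger than alpha, the decision would go the other way.
print(decision_by_p(0.31, 0.05))   # fail to reject the null
```

Notice that alpha must be chosen BEFORE looking at the data; the code takes it as a preset default for exactly that reason.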
If the researcher does not have a computer that can compute a data-based "p," he/she will have to (a) compute a calculated value by putting the sample data into a statistical formula, and (b) compare that calculated value against the appropriate critical value pulled out of a statistical table. If the calculated value turns out to be LARGER than the critical value, then the data are viewed as being INCONSISTENT with Ho . . . and this gives the researcher the right to REJECT THE NULL. For example, if a researcher conducts a t-test and determines that his/her t = 4.83, that calculated value would be compared against a critical t. If, at the .05 level of significance, the critical t = 2.61, then the researcher would REJECT THE NULL because the calculated value is larger than the critical value. The researcher, in this case, would summarize things by saying "t = 4.83, p < .05."

Note that in the first of the two approaches, the researcher will REJECT THE NULL if p turns out to be SMALLER than the level of significance. If, on the other hand, the researcher uses the second approach, he/she will REJECT THE NULL if the calculated value is LARGER than the critical value. So . . . in the first approach the researcher is hoping for a small p, while in the second approach he/she is hoping for a large calculated value.

Finally, you need to realize that it doesn't matter which of the two approaches the researcher uses. Both procedures lead to the exact same decision about Ho. In other words, if the researcher is able to REJECT THE NULL because p is SMALLER than alpha, then it is guaranteed that the sample data would lead to a calculated value that's LARGER than the critical value. Or, if the researcher must make a fail-to-reject decision because p is LARGER than alpha, then it is guaranteed that the calculated value, if it's computed, will be SMALLER than the critical value.

Sky Huck
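P.S. For those of you who like to verify things numerically, the "guaranteed" agreement between the two approaches can be checked with a short script. Python's standard library includes the normal distribution but not the t distribution, so this sketch uses a two-tailed z-test instead of a t-test; the calculated value of 2.3 is simply a made-up number of my own.

```python
# Checking that the p-vs-alpha rule and the calculated-vs-critical rule
# always reach the same decision, using a two-tailed z-test.
from statistics import NormalDist

std_normal = NormalDist()          # standard normal: mean 0, sd 1
alpha = 0.05

# Approach 1: compute the data-based p and compare it to alpha.
z_calculated = 2.3                 # a made-up calculated value
p = 2 * (1 - std_normal.cdf(abs(z_calculated)))   # two-tailed p
reject_by_p = p < alpha

# Approach 2: compare the calculated value to the tabled critical value.
z_critical = std_normal.inv_cdf(1 - alpha / 2)    # about 1.96 when alpha = .05
reject_by_cv = abs(z_calculated) > z_critical

print(round(p, 4), round(z_critical, 2))
print(reject_by_p, reject_by_cv)   # both approaches reach the same decision
```

Here p comes out near .02, which is SMALLER than .05, and at the same time 2.3 is LARGER than the critical value of about 1.96 . . . so both rules say REJECT THE NULL. Try other values of z_calculated: the two booleans will always match, because p falls below alpha exactly when the calculated value climbs above the critical value.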