The other hypothesis testing doc covered some extreme basics, and left a lot of questions unanswered (but what about the geometry of error!?). While I probably won't ever have time to explore some of those concerns, this doc will cover some more intermediate topics in hypothesis testing.
$t$-Tests
A Motivating Example
Suppose we are performing clinical trials, and want to measure the effectiveness of a drug over placebo at reducing cholesterol. Let $X_1, \dots, X_n$ be the test group's measured drops in cholesterol and let $Y_1, \dots, Y_m$ be the control group's measured drops in cholesterol. Then $X_1, \dots, X_n$ are iid $\mathcal{N}(\mu_X, \sigma_X^2)$ and $Y_1, \dots, Y_m$ are iid $\mathcal{N}(\mu_Y, \sigma_Y^2)$, and our null hypothesis is that $\mu_X = \mu_Y$. Using Slutsky's Lemma and the CLT, we have

$$\frac{\bar{X}_n - \bar{Y}_m - (\mu_X - \mu_Y)}{\sqrt{\hat{\sigma}_X^2 / n + \hat{\sigma}_Y^2 / m}} \xrightarrow[n, m \to \infty]{(d)} \mathcal{N}(0, 1).$$
However, this requires both sample sizes to go to infinity, and convergence will be extremely slow if they do not do so at the same rate. Since we rarely have a linear relationship between control and test group sizes (for ethical reasons), asymptotic hypothesis tests of the previous document will not be very useful.
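To see how badly things can break when the group sizes are unbalanced, here is a minimal simulation sketch (assuming numpy; the specific sizes, trial count, and level are arbitrary choices) that runs the asymptotic two-sided test at level $\alpha = 0.05$ when both means are equal:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 5            # very unbalanced group sizes
trials = 20_000
z = 1.96                  # two-sided N(0, 1) critical value at alpha = 0.05

rejections = 0
for _ in range(trials):
    x = rng.normal(0.0, 1.0, n)   # both groups share the same mean,
    y = rng.normal(0.0, 1.0, m)   # so the null hypothesis is true
    t = (x.mean() - y.mean()) / np.sqrt(x.var() / n + y.var() / m)
    rejections += abs(t) > z

print(rejections / trials)  # noticeably above 0.05: the asymptotic test over-rejects
```

With $m = 5$, the denominator is dominated by a poorly estimated $\hat{\sigma}_Y^2$, so the type 1 error lands well above the nominal level.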
Small Sample Sizes
Essentially, the above limit was a statement of the form

$$\lim_{n, m \to \infty} \mathbb{P}[T_{n,m} > q_\alpha] = \alpha,$$

i.e., a guarantee on the type 1 error that holds only in the limit. For small samples, we instead want test statistics whose exact, nonasymptotic distributions we know.
The $\chi^2$ and $t$ Distributions
Set the following notation. Let

$$\hat{\sigma}_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X}_n)^2$$

be the sample variance of a sample of $n$ independent Gaussian random variables with variance $\sigma^2$, and let

$$s_n^2 = \frac{1}{n - 1} \sum_{i=1}^n (X_i - \bar{X}_n)^2$$

be the unbiased estimator of the sample variance.
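As a quick aside, numpy exposes both estimators through the `ddof` argument; a minimal sketch (the sample itself is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=10)

biased = x.var(ddof=0)     # divides by n:      \hat{\sigma}_n^2 above
unbiased = x.var(ddof=1)   # divides by n - 1:  s_n^2 above
print(biased, unbiased)    # unbiased = biased * n / (n - 1)
```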
The $\chi^2$ Distributions
The $\chi^2$ distribution with $k$ degrees of freedom, written $\chi^2_k$, is the distribution of the sum of the squares of $k$ independent samples from $\mathcal{N}(0, 1)$. Equivalently (and more geometrically), if $Z \sim \mathcal{N}(0, I_k)$ is a $k$-dimensional Gaussian random variable with unit variance, then $\|Z\|_2^2 \sim \chi^2_k$. This provides a geometric reason why the mean increases with $k$: the average squared magnitude of a Gaussian random vector will increase with the number of dimensions (more positive numbers to sum).
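This equivalence is easy to check numerically; here is a minimal sketch (assuming numpy and scipy; $k$ and the sample count are arbitrary) that compares squared norms of Gaussian vectors against the $\chi^2_k$ CDF with a Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, trials = 5, 100_000

# ||Z||^2 for Z a k-dimensional standard Gaussian
z = rng.normal(size=(trials, k))
sq_norms = (z ** 2).sum(axis=1)

# Large p-value: no evidence the squared norms deviate from chi^2_k
print(stats.kstest(sq_norms, stats.chi2(df=k).cdf))
```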
Let $V \sim \chi^2_k$. Then

- $\mathbb{E}[V] = k$; $\operatorname{Var}(V) = 2k$. For large $k$, we have by CLT that

$$\frac{V - k}{\sqrt{2k}} \xrightarrow[k \to \infty]{(d)} \mathcal{N}(0, 1).$$

- If $W \sim \chi^2_j$ is independent of $V$, then $V + W \sim \chi^2_{k + j}$.
Cochran's Theorem
Theorem: Let $X_1, \dots, X_n$ be iid $\mathcal{N}(\mu, \sigma^2)$. Then

- $\bar{X}_n$ is independent of $\hat{\sigma}_n^2$.
- $\frac{n \hat{\sigma}_n^2}{\sigma^2} = \frac{(n - 1) s_n^2}{\sigma^2} \sim \chi^2_{n-1}$.
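Both claims are easy to probe by simulation; a minimal sketch (assuming numpy; the parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma, trials = 8, 1.0, 2.0, 100_000

x = rng.normal(mu, sigma, size=(trials, n))
means = x.mean(axis=1)                 # \bar{X}_n for each replication
vars_ = x.var(axis=1, ddof=0)          # \hat{\sigma}_n^2 for each replication

# Independence implies zero correlation between \bar{X}_n and \hat{\sigma}_n^2
print(np.corrcoef(means, vars_)[0, 1])   # ~ 0

# n * \hat{\sigma}_n^2 / sigma^2 should have the chi^2_{n-1} mean, n - 1 = 7
print((n * vars_ / sigma ** 2).mean())   # ~ 7
```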
The Student's $t$-Distribution
Let $Z$ be standard normal, let $V$ be $\chi^2_k$, and assume $Z$ and $V$ are independent. Then the random variable

$$T = \frac{Z}{\sqrt{V / k}}$$

has as its distribution the Student's $t$ distribution with $k$ degrees of freedom, written $t_k$.
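A minimal numerical check of this construction (assuming numpy and scipy; $k$ and the sample count are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, trials = 4, 100_000

z = rng.normal(size=trials)              # Z ~ N(0, 1)
v = rng.chisquare(df=k, size=trials)     # V ~ chi^2_k, independent of Z
t_samples = z / np.sqrt(v / k)           # Z / sqrt(V / k) should be t_k

# Large p-value: consistent with the t_k distribution
print(stats.kstest(t_samples, stats.t(df=k).cdf))
```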
The Student's $t$-Test
This is the first nonasymptotic test we see in this class. One important thing to note is that when we do nonasymptotic hypothesis testing, we cannot escape the fact that we don't know our distribution. This means we always place an assumption on the underlying distribution of the sample. In the case of the Student's $t$-test (note the "Student" in "Student's $t$-test" is there not only for historical reasons, but also to distinguish it from Welch's $t$-test), we assume that our data are Gaussian.
The One Sample Student's $t$-Test
- Assume $X_1, \dots, X_n$ are iid Gaussian $\mathcal{N}(\mu, \sigma^2)$.
- The null hypothesis is that $\mu = \mu_0$, and the alternate hypothesis is either that $\mu \neq \mu_0$ (for the two-sided test), or $\mu > \mu_0$ (for the one-sided test).
- The test statistic is

$$T_n = \sqrt{n} \, \frac{\bar{X}_n - \mu_0}{s_n}.$$

Note that under the null, we have

$$T_n = \frac{\sqrt{n} \, (\bar{X}_n - \mu_0) / \sigma}{\sqrt{\frac{(n - 1) s_n^2 / \sigma^2}{n - 1}}} = \frac{Z}{\sqrt{V / (n - 1)}},$$

where $Z \sim \mathcal{N}(0, 1)$ and $V \sim \chi^2_{n-1}$ are independent (by Cochran's Theorem). Thus, the test statistic follows a $t$ distribution with $n - 1$ degrees of freedom and therefore its quantiles are known.
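Putting this into code, a minimal sketch (assuming numpy and scipy; the sample and null value are arbitrary) that computes the statistic and two-sided p-value by hand, then checks against scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=12)   # small Gaussian sample
mu0 = 0.0                                     # null hypothesis H0: mu = mu0

n = len(x)
t_stat = np.sqrt(n) * (x.mean() - mu0) / x.std(ddof=1)
p_two_sided = 2 * stats.t(df=n - 1).sf(abs(t_stat))

print(t_stat, p_two_sided)
print(stats.ttest_1samp(x, mu0))   # scipy's one sample t-test agrees
```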
The Two Sample Welch's $t$-Test
Returning to our cholesterol example, we can consider the scenario of testing for the difference of means of two samples using a $t$ distribution. As our null hypothesis is that $\mu_X = \mu_Y$, or equivalently that $\mu_X - \mu_Y = 0$, we have a test statistic of the form

$$T = \frac{\bar{X}_n - \bar{Y}_m}{\sqrt{s_X^2 / n + s_Y^2 / m}}.$$

Thus, this is a one-sided example of the Welch $t$-test. This is opposed to the Student's $t$-test, where the test statistic follows a $t$ distribution exactly. In this case, the test statistic is only approximately $t$-distributed, particularly because the denominator involves something that is approximately (and very nearly so) a scaled $\chi^2$ distribution.
Theorem (Welch-Satterthwaite): We have $T \approx t_\nu$ in distribution, where

$$\nu = \frac{\left( \frac{s_X^2}{n} + \frac{s_Y^2}{m} \right)^2}{\frac{(s_X^2 / n)^2}{n - 1} + \frac{(s_Y^2 / m)^2}{m - 1}}.$$
Remark: If the variances are known to be equal, the test statistic (computed with the pooled sample variance) follows exactly a $t_{n + m - 2}$ distribution, hence the test becomes a two sample Student's $t$-test.
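A minimal sketch of the full Welch recipe (assuming numpy and scipy; the data are arbitrary, with deliberately unequal variances), checked against scipy's implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10)    # test group
y = rng.normal(0.0, 3.0, size=25)    # control group, unequal variance

n, m = len(x), len(y)
vx, vy = x.var(ddof=1), y.var(ddof=1)

t_stat = (x.mean() - y.mean()) / np.sqrt(vx / n + vy / m)

# Welch-Satterthwaite degrees of freedom
nu = (vx / n + vy / m) ** 2 / ((vx / n) ** 2 / (n - 1) + (vy / m) ** 2 / (m - 1))

p_two_sided = 2 * stats.t(df=nu).sf(abs(t_stat))
print(t_stat, nu, p_two_sided)
print(stats.ttest_ind(x, y, equal_var=False))   # scipy's Welch test agrees
```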
Tests based on MLEs
Briefly, these are some other tests, built on the asymptotic normality of estimators such as the MLE.
Wald's Test
Consider an iid sample $X_1, \dots, X_n$ with statistical model $(E, \{P_\theta\}_{\theta \in \Theta})$, where $\Theta \subseteq \mathbb{R}^d$, and let $\theta_0 \in \Theta$ be fixed and given. Let $\theta^*$ be the true parameter under the model. Consider the null hypothesis $H_0 : \theta^* = \theta_0$ and let $\hat{\theta}_n$ be the MLE.
If $H_0$ is true, then by the asymptotic normality of the MLE, we have

$$\sqrt{n} \, I(\theta_0)^{1/2} (\hat{\theta}_n - \theta_0) \xrightarrow[n \to \infty]{(d)} \mathcal{N}(0, I_d).$$
Hence, by plugging the MLE into the Fisher information (justified by Slutsky's Theorem), we have a test statistic

$$W_n = n \, (\hat{\theta}_n - \theta_0)^\top I(\hat{\theta}_n) \, (\hat{\theta}_n - \theta_0)$$

such that

$$W_n \xrightarrow[n \to \infty]{(d)} \chi^2_d.$$
Definition: Wald's Test is any test (one or two sided) based on the above test statistic.
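As a concrete instance, here is a minimal sketch (assuming numpy and scipy) for the Bernoulli model, where the MLE is the sample mean and the Fisher information $I(p) = \frac{1}{p(1 - p)}$ is known in closed form; the true parameter and sample size are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

p0 = 0.5                                  # null hypothesis H0: p = p0
x = rng.binomial(1, 0.6, size=200)        # data actually drawn with p = 0.6
n = len(x)

p_hat = x.mean()                          # MLE of p
fisher = 1.0 / (p_hat * (1.0 - p_hat))    # I(p_hat), plugged in via Slutsky

w = n * fisher * (p_hat - p0) ** 2        # Wald statistic, ~ chi^2_1 under H0
print(w, stats.chi2(df=1).sf(w))          # statistic and p-value
```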
Wald's Test For Implicit Hypotheses
Similar to above, suppose our null hypothesis is of the form $H_0 : g(\theta^*) = 0$ for some continuously differentiable function $g : \mathbb{R}^d \to \mathbb{R}^k$ (with $k < d$). Suppose an asymptotically normal estimator $\hat{\theta}_n$ is available with asymptotic covariance $\Sigma(\theta^*)$. Let

$$\Gamma(\theta) = \nabla g(\theta)^\top \, \Sigma(\theta) \, \nabla g(\theta).$$

Then by the Delta method, we have

$$\sqrt{n} \left( g(\hat{\theta}_n) - g(\theta^*) \right) \xrightarrow[n \to \infty]{(d)} \mathcal{N}(0, \Gamma(\theta^*)).$$

By Slutsky's Theorem, we can plug $\hat{\theta}_n$ into $\Gamma$, hence we have a test statistic of the form

$$W_n = n \, g(\hat{\theta}_n)^\top \, \Gamma(\hat{\theta}_n)^{-1} \, g(\hat{\theta}_n) \xrightarrow[n \to \infty]{(d)} \chi^2_k \quad \text{under } H_0.$$
Definition: Wald's Test for Implicit Hypotheses is any test (one or two sided) based on the above test statistic for some function $g$.
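To make the recipe concrete, here is a minimal sketch (assuming numpy and scipy; the model, parameters, and sample size are illustrative choices) testing $g(p_1, p_2) = p_1 - p_2 = 0$ for paired independent Bernoulli coordinates, so $d = 2$, $k = 1$, and $\Sigma(\theta) = \operatorname{diag}(p_1(1 - p_1), p_2(1 - p_2))$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500

# Each observation is a pair of independent Bernoulli coordinates
x = rng.binomial(1, 0.55, size=n)
y = rng.binomial(1, 0.50, size=n)

theta_hat = np.array([x.mean(), y.mean()])      # MLE of (p1, p2)
grad_g = np.array([1.0, -1.0])                  # gradient of g(p1, p2) = p1 - p2
sigma = np.diag(theta_hat * (1 - theta_hat))    # plug-in asymptotic covariance

gamma = grad_g @ sigma @ grad_g                 # Gamma = (grad g)^T Sigma (grad g)
w = n * (theta_hat[0] - theta_hat[1]) ** 2 / gamma   # ~ chi^2_1 under H0
print(w, stats.chi2(df=1).sf(w))                # statistic and p-value
```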