Test Statistic
A test statistic is a statistic generated for hypothesis testing in causal inference.
Description
A test statistic is compared to a critical value: H0 is rejected if the test statistic exceeds the critical value (in absolute value, for a two-sided test).
A critical value should reflect a tolerance for Type I error.
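As a concrete illustration (not prescribed by this page), the sketch below applies the framework to a hypothetical two-sided one-sample t-test of H0: mu = 0 using numpy and scipy; the sample values and the 0.05 Type I error tolerance are assumptions made only for the example.

import numpy as np
from scipy import stats

data = np.array([0.3, 1.1, -0.4, 0.8, 1.5, 0.2, 0.9, 1.2])   # hypothetical sample
alpha = 0.05                                                  # tolerance for Type I error
n = len(data)

# t statistic for H0: mu = 0
t_stat = np.mean(data) / (np.std(data, ddof=1) / np.sqrt(n))

# Two-sided critical value from the t distribution with n - 1 degrees of freedom
critical_value = stats.t.ppf(1 - alpha / 2, df=n - 1)

reject_h0 = abs(t_stat) > critical_value   # reject H0 when the statistic exceeds the cutoff
print(round(t_stat, 3), round(critical_value, 3), reject_h0)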
P-Values
A p-value is the probability, assuming the null hypothesis is true, of obtaining a test statistic at least as extreme as the one observed. This is effectively a reformulation of the above framework: a significance level is selected (commonly 0.05; equivalently, a confidence level such as 95%) and H0 is rejected if the p-value falls below it.
The advantage of formulating inference through p-values is that anyone can decide on their own tolerance for Type I error, determine the corresponding significance level, and re-evaluate the inference. In contexts where several significance levels ought to be considered (e.g., 0.1, 0.05, and 0.01 simultaneously), it can also be more succinct to report p-values than to report a separate decision at each level.
The disadvantage of this formulation is that p-values are easily misunderstood (e.g., it is tempting to read a p-value of 0.02 as meaning there is a 98% probability that the result is correct, when it actually means that, if the null hypothesis were true, there would be only a 2% probability of observing a test statistic at least this extreme).
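Continuing the same hypothetical one-sample t-test, the sketch below shows the p-value reformulation with numpy and scipy; the data and the particular set of significance levels are again assumptions made only for illustration.

import numpy as np
from scipy import stats

data = np.array([0.3, 1.1, -0.4, 0.8, 1.5, 0.2, 0.9, 1.2])   # same hypothetical sample
n = len(data)
t_stat = np.mean(data) / (np.std(data, ddof=1) / np.sqrt(n))

# p-value: probability, under H0, of a statistic at least as extreme as the one observed
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)

# A single p-value supports a decision at whatever significance level a reader prefers
for alpha in (0.10, 0.05, 0.01):
    print(f"alpha={alpha:.2f}: reject H0 -> {p_value < alpha}")   # reject when p < alpha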
Usage
There are a variety of test statistics. Most tests are designed for specific contexts, where certain assumptions hold. See the pages below for usage guidance:
Collinearity test
Hosmer-Lemeshow test
Kolmogorov-Smirnov test
Lagrange multiplier test
Likelihood-ratio test
Pearson test
Wald test