Differences between revisions 1 and 4 (spanning 3 versions)
Revision 1 as of 2023-10-28 16:10:39
Size: 436
Comment: Initial commit
Revision 4 as of 2025-01-10 14:52:51
Size: 2024
Comment: Killing Econometrics page
Deletions are marked like this. Additions are marked like this.
Line 3: Line 3:
- A '''test statistic''' is a statistic generated for hypothesis testing in statistical inference.
+ A '''test statistic''' is a statistic generated for hypothesis testing in [[Statistics/CausalInference|causal inference]].
Line 11: Line 11:
- == Usage ==
+ == Description ==
Line 17: Line 17:
- ----
+ A critical value should reflect a tolerance for Type I error.
Line 21: Line 21:
- == t-test ==
+ === P-Values ===

A '''p-value''' is the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true. This is effectively a reformulation of the above framework: a critical level is selected (commonly 0.05; or conversely a confidence level is selected, like 95%) and ''H,,0,,'' is rejected if the p-value falls below it.

The advantage of formulating inference through p-values is that any reader can apply their personal tolerance for Type I error, determine the corresponding critical level, and re-evaluate the inference. In contexts where a variety of critical levels ought to be considered (e.g., 0.1, 0.05, and 0.01 simultaneously), it can also be more succinct to report p-values than to annotate each result.

The disadvantage of this formulation is that p-values are more easily misunderstood (e.g., it is possible to misinterpret a p-value of 0.02 as meaning there is a 98% probability that the result is correct, when it truly means that, if the null hypothesis were true, there would only be a 2% probability of observing a result at least this extreme).
Line 27: Line 33:
- == Z-test ==
+ == Usage ==
Line 29: Line 35:
- ----
+ There are a variety of test statistics. Most tests are designed for application in specific contexts, when certain assumptions hold. See the pages below for usage guidance:
Line 31: Line 37:


- == Chi-squared test ==
- ----
- == F-test ==
 * [[Statistics/CollinearityTest|Collinearity test]]
 * [[Statistics/HosmerLemeshowTest|Hosmer-Lemeshow test]]
 * [[Statistics/KolmogorovSmirnovTest|Kolmogorov-Smirnov test]]
 * [[Statistics/LagrangeMultiplierTest|Lagrange multiplier test]]
 * [[Statistics/LikelihoodRatioTest|Likelihood-ratio test]]
 * [[Statistics/PearsonTest|Pearson test]]
 * [[Statistics/WaldTest|Wald test]]

Test Statistic

A test statistic is a statistic generated for hypothesis testing in causal inference.


Description

A test statistic is compared to a critical value. H0 is rejected if:

[image: test.svg]

A critical value should reflect a tolerance for Type I error.
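The rejection rule above can be sketched numerically. This is a minimal illustration (not from the original page), assuming a two-sided z-test with made-up sample values and a 5% tolerance for Type I error:

```python
from statistics import NormalDist

# Hypothetical sample: test whether the population mean differs from 0.
# Assumed setup: known population sd of 1, n = 25 observations.
sample_mean = 0.5
pop_sd = 1.0
n = 25

# z statistic: how many standard errors the sample mean is from H0's mean (0).
z = (sample_mean - 0) / (pop_sd / n ** 0.5)  # 2.5

# Critical value for a two-sided test at the 0.05 significance level
# (a 2.5% tail on each side), reflecting a 5% tolerance for Type I error.
critical = NormalDist().inv_cdf(1 - 0.05 / 2)  # ~1.96

# H0 is rejected because the test statistic exceeds the critical value.
reject = abs(z) > critical
```

The critical value is fixed by the chosen tolerance before the data are examined; only the comparison `abs(z) > critical` depends on the sample.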

P-Values

A p-value is the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true. This is effectively a reformulation of the above framework: a critical level is selected (commonly 0.05; or conversely a confidence level is selected, like 95%) and H0 is rejected if the p-value falls below it.

The advantage of formulating inference through p-values is that any reader can apply their personal tolerance for Type I error, determine the corresponding critical level, and re-evaluate the inference. In contexts where a variety of critical levels ought to be considered (e.g., 0.1, 0.05, and 0.01 simultaneously), it can also be more succinct to report p-values than to annotate each result.

The disadvantage of this formulation is that p-values are more easily misunderstood (e.g., it is possible to misinterpret a p-value of 0.02 as meaning there is a 98% probability that the result is correct, when it truly means that, if the null hypothesis were true, there would only be a 2% probability of observing a result at least this extreme).
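The p-value formulation can be sketched as follows. This is a minimal illustration (not from the original page), assuming a two-sided z-test with a hypothetical observed statistic:

```python
from statistics import NormalDist

z = 2.5  # hypothetical observed test statistic (two-sided z-test)

# p-value: probability, under H0, of a statistic at least as extreme as z.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # ~0.0124

# Anyone can re-evaluate the inference under their own Type I tolerance:
for alpha in (0.1, 0.05, 0.01):
    print(alpha, "reject" if p_value < alpha else "fail to reject")
```

Here the same p-value supports rejection at the 0.1 and 0.05 levels but not at 0.01, which is why reporting the p-value itself is more succinct than annotating each level separately.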


Usage

There are a variety of test statistics. Most tests are designed for application in specific contexts, when certain assumptions hold. See the pages below for usage guidance:

 * Collinearity test
 * Hosmer-Lemeshow test
 * Kolmogorov-Smirnov test
 * Lagrange multiplier test
 * Likelihood-ratio test
 * Pearson test
 * Wald test


CategoryRicottone

Statistics/TestStatistic (last edited 2025-04-08 15:28:58 by DominicRicottone)