{{distinguish|text=the [[P-factor]]}}
{{DISPLAYTITLE:''p''-value}}
In [[statistical hypothesis testing|null-hypothesis significance testing]], the '''''p''-value'''{{NoteTag|1=Italicisation, capitalisation and hyphenation of the term vary. For example, [[AMA style]] uses "''P'' value", [[APA style]] uses "''p'' value", and the [[American Statistical Association]] uses "''p''-value". In all cases, the "p" stands for [[probability]].<ref>{{cite web | title = ASA House Style | url = http://magazine.amstat.org/wp-content/uploads/STATTKadmin/style%5B1%5D.pdf | work = Amstat News | publisher = American Statistical Association }}</ref>}} is the [[probability]] of obtaining test results at least as extreme as the [[Realization (probability)|result actually observed]], under the assumption that the [[null hypothesis]] is correct.<ref>{{cite web | vauthors = Aschwanden C |author-link = Christie Aschwanden| title = Not Even Scientists Can Easily Explain P-values | url = https://fivethirtyeight.com/features/not-even-scientists-can-easily-explain-p-values/ | website = FiveThirtyEight | access-date = 11 October 2019 | archive-url = https://web.archive.org/web/20190925221600/https://fivethirtyeight.com/features/not-even-scientists-can-easily-explain-p-values/ | archive-date = 25 September 2019 | date = 2015-11-24 }}</ref><ref name="ASA">{{cite journal | vauthors = Wasserstein RL, Lazar NA |date= 7 March 2016 |title = The ASA's Statement on p-Values: Context, Process, and Purpose |journal= The American Statistician |volume = 70 |issue = 2 |pages = 129–133 |doi= 10.1080/00031305.2016.1154108 |doi-access = free }}</ref> A very small ''p''-value means that such an extreme observed [[Outcome (probability)|outcome]] would be very unlikely ''under the null hypothesis''.

Even though reporting ''p''-values of statistical tests is common practice in [[academic publishing|academic publications]] of many quantitative fields, misinterpretation and [[misuse of p-values]] is widespread and has been a major topic in [[mathematics]] and [[metascience]].<ref>{{cite journal | vauthors = Hubbard R, Lindsay RM |title=Why ''P'' Values Are Not a Useful Measure of Evidence in Statistical Significance Testing |journal=[[Theory & Psychology]] |year=2008 |volume=18 |issue=1 |pages=69–88 |doi=10.1177/0959354307086923 |s2cid=143487211 }}</ref><ref>{{cite journal | vauthors = Munafò MR, Nosek BA, Bishop DV, Button KS, Chambers CD, du Sert NP, Simonsohn U, Wagenmakers EJ, Ware JJ, Ioannidis JP | display-authors = 6 | title = A manifesto for reproducible science | journal = Nature Human Behaviour | volume = 1 | page = 0021 | date = January 2017 | issue = 1 | pmid = 33954258 | pmc = 7610724 | doi = 10.1038/s41562-016-0021 | s2cid = 6326747 | doi-access = free | author-link1 = John Ioannidis }}</ref>
In 2016, the [[American Statistical Association]] (ASA) made a formal statement that "''p''-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone" and that "a ''p''-value, or statistical significance, does not measure the size of an effect or the importance of a result" or "evidence regarding a model or hypothesis".<ref>{{Cite journal |last1=Wasserstein |first1=Ronald L. |last2=Lazar |first2=Nicole A. |date=2016-04-02 |title=The ASA Statement on p-Values: Context, Process, and Purpose |journal=The American Statistician |language=en |volume=70 |issue=2 |pages=129–133 |doi=10.1080/00031305.2016.1154108 |s2cid=124084622 |issn=0003-1305 |doi-access=free }}</ref> That said, a task force convened by the ASA in 2019 issued a statement on statistical significance and replicability, concluding: "''p''-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data".<ref name="ASA2019">{{cite journal | last1=Benjamini | first1=Yoav | last2=De Veaux | first2=Richard D. | last3=Efron | first3=Bradley | last4=Evans | first4=Scott | last5=Glickman | first5=Mark | last6=Graubard | first6=Barry I. | last7=He | first7=Xuming | last8=Meng | first8=Xiao-Li | last9=Reid | first9=Nancy M. | last10=Stigler | first10=Stephen M. | last11=Vardeman | first11=Stephen B. | last12=Wikle | first12=Christopher K. | last13=Wright | first13=Tommy | last14=Young | first14=Linda J. | last15=Kafadar | first15=Karen | title=ASA President's Task Force Statement on Statistical Significance and Replicability | journal=Chance | publisher=Informa UK Limited | volume=34 | issue=4 | date=2021-10-02 | issn=0933-2480 | doi=10.1080/09332480.2021.2003631 | pages=10–11 | doi-access=free }}</ref>
{{blockquote |text=The error that a practising statistician would consider the more important to avoid (which is a subjective judgment) is called the error of the first kind. The first demand of the mathematical theory is to deduce such test criteria as would ensure that the probability of committing an error of the first kind would equal (or approximately equal, or not exceed) a preassigned number α, such as α = 0.05 or 0.01, etc. This number is called the level of significance. |author=Jerzy Neyman |source="The Emergence of Mathematical Statistics"<ref name="Neyman1976">{{cite book | chapter = The Emergence of Mathematical Statistics: A Historical Sketch with Particular Reference to the United States | title = On the History of Statistics and Probability | page = 161 | year = 1976 | last = Neyman | first = Jerzy | author-link = Jerzy Neyman | place = New York | publisher = Marcel Dekker Inc | editor-last = Owen | editor-first = D.B. | series = Textbooks and Monographs | url = https://openlibrary.org/works/OL18334563W/On_the_history_of_statistics_and_probability?edition=key%3A/books/OL5206547M}}</ref>}}
In a significance test, the null hypothesis <math>H_0</math> is rejected if the ''p''-value is less than or equal to a predefined threshold value [[Alpha|<math>\alpha</math>]], which is referred to as the alpha level or [[statistical significance|significance level]]. <math>\alpha</math> is not derived from the data, but rather is set by the researcher before examining the data. <math>\alpha</math> is commonly set to 0.05, though lower alpha levels are sometimes used. The 0.05 value (equivalent to 1 chance in 20) was originally proposed by [[Ronald Fisher]] in 1925 in his famous book "[[Statistical Methods for Research Workers]]".<ref>{{Citation |last=Fisher |first=R. A. |title=Statistical Methods for Research Workers |date=1992 |work=Breakthroughs in Statistics: Methodology and Distribution |series=Springer Series in Statistics |pages=66–70 |editor-last=Kotz |editor-first=Samuel |place=New York, NY |publisher=Springer |language=en |doi=10.1007/978-1-4612-4380-9_6 |isbn=978-1-4612-4380-9 |editor2-last=Johnson |editor2-first=Norman L.}}</ref>
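The rejection rule above can be sketched with a small, hypothetical example (not taken from the article): a two-sided test of coin fairness after observing 15 heads in 20 flips, using only the Python standard library.

```python
from math import comb

def binomial_two_sided_p(heads, flips):
    """Two-sided p-value under the fair-coin null hypothesis, obtained by
    doubling the smaller tail probability (capped at 1)."""
    tail = min(heads, flips - heads)
    lower = sum(comb(flips, k) for k in range(tail + 1)) / 2**flips
    return min(1.0, 2 * lower)

alpha = 0.05                      # significance level, fixed before seeing the data
p = binomial_two_sided_p(15, 20)  # observed: 15 heads in 20 flips
print(round(p, 4))                # ~0.0414
print("reject H0" if p <= alpha else "fail to reject H0")
```

Since p ≈ 0.041 ≤ 0.05, the null hypothesis of a fair coin is rejected at the conventional 0.05 level, but it would not be rejected at the stricter 0.01 level.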
Different ''p''-values based on independent sets of data can be combined, for instance using [[Fisher's combined probability test]].
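Fisher's combined probability test uses the statistic <math>X = -2 \sum_i \ln p_i</math>, which follows a chi-squared distribution with <math>2k</math> degrees of freedom when all <math>k</math> null hypotheses are true. Because the degrees of freedom are even, the chi-squared survival function reduces to a finite sum, so the combination can be sketched with the standard library alone (the three input p-values are illustrative):

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) is chi-squared with 2k degrees
    of freedom under the joint null. For even df = 2k the survival function
    is exp(-x/2) * sum_{i<k} (x/2)^i / i!, so no special functions are needed."""
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    k = len(pvalues)
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

combined = fisher_combined_p([0.10, 0.20, 0.30])
print(round(combined, 4))  # ~0.1152
```

Three individually non-significant p-values here combine to about 0.115, still above the conventional 0.05 threshold; with more concordant small p-values the combined value drops quickly.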
The ''p''-value is a function of the chosen test statistic <math>T</math> and is therefore a [[random variable]]. If the null hypothesis fixes the probability distribution of <math>T</math> precisely (e.g. <math>H_0: \theta = \theta_0,</math> where <math>\theta</math> is the only parameter), and if that distribution is continuous, then when the null hypothesis is true, the ''p''-value is [[Uniform distribution (continuous)|uniformly distributed]] between 0 and 1. Regardless of the truth of <math>H_0</math>, the ''p''-value is not fixed; if the same test is repeated independently with fresh data, one will typically obtain a different ''p''-value in each iteration.
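The uniformity of the ''p''-value under a true null hypothesis can be checked by simulation. A minimal sketch (the z-test setup and sample sizes are illustrative assumptions, standard library only): draw many samples from the null distribution, compute a two-sided ''p''-value for each, and verify that the fraction of ''p''-values below any threshold is roughly that threshold.

```python
import math
import random

def z_test_p(sample):
    """Two-sided p-value for H0: mean = 0 with known unit variance."""
    z = sum(sample) / math.sqrt(len(sample))           # standard normal under H0
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # Phi(|z|)
    return 2 * (1 - phi)

random.seed(1)
# 10,000 independent repetitions of the experiment, all under the null.
pvals = [z_test_p([random.gauss(0, 1) for _ in range(30)]) for _ in range(10_000)]

# Under H0 the p-values are (approximately) uniform on (0, 1):
for threshold in (0.05, 0.25, 0.5):
    frac = sum(p < threshold for p in pvals) / len(pvals)
    print(threshold, round(frac, 3))  # each fraction should be close to the threshold
```

In particular, about 5% of the simulated ''p''-values fall below 0.05: a true null hypothesis is rejected at the α = 0.05 level about one time in twenty, which is exactly the type I error rate the threshold is designed to control.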
Usually only a single ''p''-value relating to a hypothesis is observed, so the ''p''-value is interpreted by a significance test, and no effort is made to estimate the distribution it was drawn from. When a collection of ''p''-values is available (e.g. when considering a group of studies on the same subject), the distribution of significant ''p''-values is sometimes called a ''p''-curve.<ref name="Head2015">{{cite journal | vauthors = Head ML, Holman L, Lanfear R, Kahn AT, Jennions MD | title = The extent and consequences of p-hacking in science | journal = PLOS Biology | volume = 13 | issue = 3 | article-number = e1002106 | date = March 2015 | pmid = 25768323 | pmc = 4359000 | doi = 10.1371/journal.pbio.1002106 | doi-access = free }}</ref>
A ''p''-curve can be used to assess the reliability of scientific literature, such as by detecting publication bias or [[p-hacking|''p''-hacking]].
<ref name="Head2015"/><ref name="Simonsohn2014">{{cite journal | vauthors = Simonsohn U, Nelson LD, Simmons JP | title = ''p''-Curve and Effect Size: Correcting for Publication Bias Using Only Significant Results | journal = Perspectives on Psychological Science | volume = 9 | issue = 6 | pages = 666–681 | date = November 2014 | pmid = 26186117 | doi = 10.1177/1745691614553988 | s2cid = 39975518 }}</ref>
=== Misuse ===
{{Main|Misuse of p-values}}
According to the ASA, there is widespread agreement that ''p''-values are often misused and misinterpreted.<ref name="ASA" /> One practice that has been particularly criticized is accepting the alternative hypothesis for any ''p''-value nominally less than 0.05 without other supporting evidence. Although ''p''-values are helpful in assessing how incompatible the data are with a specified statistical model, contextual factors must also be considered, such as "the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis".<ref name="ASA" /> Another concern is that the ''p''-value is often misunderstood as being the probability that the null hypothesis is true.<ref name="ASA" /><ref>{{cite journal | vauthors = Colquhoun D | title = An investigation of the false discovery rate and the misinterpretation of p-values | journal = Royal Society Open Science | volume = 1 | issue = 3 | article-number = 140216 | date = November 2014 | pmid = 26064558 | pmc = 4448847 | doi = 10.1098/rsos.140216 | arxiv = 1407.5296 | bibcode = 2014RSOS....140216C }}</ref> ''p''-values and significance tests also say nothing about the possibility of drawing conclusions from a sample to a population.
Some statisticians have proposed abandoning ''p''-values and focusing more on other inferential statistics,<ref name="ASA" /> such as [[confidence intervals]],<ref>{{cite journal | vauthors = Lee DK | title = Alternatives to P value: confidence interval and effect size | journal = Korean Journal of Anesthesiology | volume = 69 | issue = 6 | pages = 555–562 | date = December 2016 | pmid = 27924194 | pmc = 5133225 | doi = 10.4097/kjae.2016.69.6.555 }}</ref><ref>{{cite journal | vauthors = Ranstam J | title = Why the P-value culture is bad and confidence intervals a better alternative | journal = Osteoarthritis and Cartilage | volume = 20 | issue = 8 | pages = 805–808 | date = August 2012 | pmid = 22503814 | doi = 10.1016/j.joca.2012.04.001 | doi-access = free }}</ref> [[Likelihood principle#The law of likelihood|likelihood ratios]],<ref>{{cite journal | vauthors = Perneger TV | title = Sifting the evidence. Likelihood ratios are alternatives to P values | journal = BMJ | volume = 322 | issue = 7295 | pages = 1184–1185 | date = May 2001 | pmid = 11379590 | pmc = 1120301 | doi = 10.1136/bmj.322.7295.1184 }}</ref><ref>{{cite book | vauthors = Royall R |chapter=The Likelihood Paradigm for Statistical Evidence |title = The Nature of Scientific Evidence |pages=119–152 |doi = 10.7208/chicago/9780226789583.003.0005 |language=en |year=2004 |isbn=978-0-226-78957-6 }}</ref> or [[Bayes factors]],<ref>{{cite web | vauthors = Schimmack U |title=Replacing p-values with Bayes-Factors: A Miracle Cure for the Replicability Crisis in Psychological Science |url = https://replicationindex.wordpress.com/2015/04/30/replacing-p-values-with-bayes-factors-a-miracle-cure-for-the-replicability-crisis-in-psychological-science/ |website=Replicability-Index |access-date=7 March 2017 |date=30 April 2015 }}</ref><ref>{{cite journal | vauthors = Marden JI |title = Hypothesis Testing: From p Values to Bayes Factors |journal = Journal of the American Statistical Association |date=December 2000 |volume=95 |issue=452 |pages=1316–1320 |doi = 10.2307/2669779 |jstor=2669779 }}</ref><ref>{{cite journal | vauthors = Stern HS | title = A Test by Any Other Name: P Values, Bayes Factors, and Statistical Inference | journal = Multivariate Behavioral Research | volume = 51 | issue = 1 | pages = 23–29 | date = 16 February 2016 | pmid = 26881954 | pmc = 4809350 | doi = 10.1080/00273171.2015.1099032 }}</ref> but there is heated debate on the feasibility of these alternatives.<ref>{{cite journal | vauthors = Murtaugh PA | title = In defense of P values | journal = Ecology | volume = 95 | issue = 3 | pages = 611–617 | date = March 2014 | pmid = 24804441 | doi = 10.1890/13-0590.1 | bibcode = 2014Ecol...95..611M | url = https://zenodo.org/record/894459 }}</ref><ref>{{cite web |url = https://fivethirtyeight.com/features/statisticians-found-one-thing-they-can-agree-on-its-time-to-stop-misusing-p-values/ |title = Statisticians Found One Thing They Can Agree On: It's Time To Stop Misusing P-Values | vauthors = Aschwanden C |author-link = Christie Aschwanden |website=FiveThirtyEight |date= 7 March 2016 }}</ref> Others have suggested removing fixed significance thresholds and interpreting ''p''-values as continuous indices of the strength of evidence against the null hypothesis.<ref>{{cite journal | vauthors = Amrhein V, Korner-Nievergelt F, Roth T | title = The earth is flat (''p'' > 0.05): significance thresholds and the crisis of unreplicable research | journal = PeerJ | volume = 5 | article-number = e3544 | year = 2017 | pmid = 28698825 | pmc = 5502092 | doi = 10.7717/peerj.3544 | author1-link = Valentin Amrhein | doi-access = free }}</ref><ref>{{cite journal | vauthors = Amrhein V, Greenland S | title = Remove, rather than redefine, statistical significance | journal = Nature Human Behaviour | volume = 2 | issue = 1 | page = 4 | date = January 2018 | pmid = 30980046 | doi = 10.1038/s41562-017-0224-0 | s2cid = 46814177 | author1-link = Valentin Amrhein }}</ref> Yet others have suggested reporting, alongside ''p''-values, the prior probability of a real effect that would be required to obtain a false positive risk (i.e. the probability that there is no real effect) below a pre-specified threshold (e.g. 5%).<ref>{{cite journal | vauthors = Colquhoun D | title = The reproducibility of research and the misinterpretation of ''p''-values | journal = Royal Society Open Science | volume = 4 | issue = 12 | article-number = 171085 | date = December 2017 | pmid = 29308247 | pmc = 5750014 | doi = 10.1098/rsos.171085 }}</ref>
That said, in 2019 a task force convened by the ASA considered the use of statistical methods in scientific studies, specifically hypothesis tests and ''p''-values, and their connection to replicability.<ref name="ASA2019" /> Its statement notes that "Different measures of uncertainty can complement one another; no single measure serves all purposes", citing the ''p''-value as one of these measures. It also stresses that ''p''-values can provide valuable information, both when the specific value is considered and when it is compared to some threshold. In general, it stresses that "''p''-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data". This sentiment was further supported by a comment in [[Nature Human Behaviour]] whose authors, responding to recommendations to redefine statistical significance to ''p'' ≤ 0.005, proposed that "researchers should transparently report and justify all choices they make when designing a study, including the alpha level."<ref>Lakens, D., Adolfi, F.G., Albers, C.J. et al. Justify your alpha. Nat Hum Behav 2, 168–171 (2018). https://doi.org/10.1038/s41562-018-0311-x</ref>
== Calculation ==
{{Anchor|Optional stopping}}
The difference between the two meanings of "extreme" appears when we consider sequential hypothesis testing, or optional stopping, for the fairness of the coin. In general, optional stopping changes how the ''p''-value is calculated.<ref>{{Cite journal |last=Goodman |first=Steven |date=2008-07-01 |title=A Dirty Dozen: Twelve P-Value Misconceptions |url=https://www.sciencedirect.com/science/article/pii/S0037196308000620 |journal=Seminars in Hematology |series=Interpretation of Quantitative Research |volume=45 |issue=3 |pages=135–140 |doi=10.1053/j.seminhematol.2008.04.003 |pmid=18582619 |issn=0037-1963 |url-access=subscription }}</ref><ref>{{Cite journal |last=Wagenmakers |first=Eric-Jan |date=October 2007 |title=A practical solution to the pervasive problems of p values |url=http://link.springer.com/10.3758/BF03194105 |journal=Psychonomic Bulletin & Review |language=en |volume=14 |issue=5 |pages=779–804 |doi=10.3758/BF03194105 |pmid=18087943 |issn=1069-9384}}</ref> Suppose we design the experiment as follows:
* Flip the coin twice. If both come up heads or both come up tails, end the experiment.
* Else, flip the coin 4 more times.
This experiment has 7 types of outcomes: 2 heads, 2 tails, 5 heads 1 tail, ..., 1 head 5 tails. We now calculate the ''p''-value of the "3 heads 3 tails" outcome.
If we use the test statistic <math>\text{heads}/\text{tails}</math>, then under the null hypothesis (that the coin is fair) the two-sided ''p''-value is exactly equal to 1, and both the one-sided left-tail ''p''-value and the one-sided right-tail ''p''-value are exactly equal to <math>19/32</math>.
If we consider every outcome that has equal or lower probability than "3 heads 3 tails" as "at least as extreme", then the ''p''-value is exactly <math>1/2.</math>
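Both notions of "at least as extreme" for this stopping rule can be checked by enumerating the seven outcome types exactly; a sketch of the computation described above, using exact rational arithmetic:

```python
from fractions import Fraction
import math

# Outcome types of the stopping rule as (heads, tails) -> probability.
# Stop after 2 flips on HH or TT; otherwise flip 4 more times (6 flips total).
dist = {(2, 0): Fraction(1, 4), (0, 2): Fraction(1, 4)}
for h in range(1, 6):  # 6-flip totals: first two flips mixed (2 ways), 4 free flips
    dist[(h, 6 - h)] = Fraction(2 * math.comb(4, h - 1), 64)
assert sum(dist.values()) == 1  # the seven outcome types exhaust the sample space

def stat(h, t):
    """Test statistic heads/tails (taken as infinite when tails = 0)."""
    return Fraction(h, t) if t else math.inf

obs = stat(3, 3)  # observed outcome: 3 heads, 3 tails
right = sum(p for (h, t), p in dist.items() if stat(h, t) >= obs)
left = sum(p for (h, t), p in dist.items() if stat(h, t) <= obs)
two_sided = sum(p for (h, t), p in dist.items()
                if stat(h, t) >= obs or stat(h, t) <= obs)
by_prob = sum(p for p in dist.values() if p <= dist[(3, 3)])

print(right, left, two_sided, by_prob)  # 19/32 19/32 1 1/2
```

The enumeration reproduces the values in the text: both one-sided ''p''-values are 19/32, the two-sided ''p''-value is 1 (every outcome is at least as extreme as the observed statistic in one direction or the other), and the probability-based definition gives 1/2.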
''P''-value computations date back to the 1700s, where they were computed for the [[human sex ratio]] at birth, and used to compute statistical significance compared to the null hypothesis of equal probability of male and female births.<ref>{{cite book |title=The Descent of Human Sex Ratio at Birth |url=https://archive.org/details/descenthumansexr00bria |url-access=limited | vauthors = Brian E, Jaisson M |author-link1=Éric Brian |author-link2=Marie Jaisson |chapter=Physico-Theology and Mathematics (1710–1794) |pages=[https://archive.org/details/descenthumansexr00bria/page/n17 1]–25 |year=2007 |publisher=Springer Science & Business Media |isbn=978-1-4020-6036-6}}</ref> [[John Arbuthnot]] studied this question in 1710,<ref>{{cite journal| vauthors = Arbuthnot J |s2cid=186209819|title=An argument for Divine Providence, taken from the constant regularity observed in the births of both sexes|journal=[[Philosophical Transactions of the Royal Society of London]] | volume=27| pages=186–190 | year=1710 | url = http://www.york.ac.uk/depts/maths/histstat/arbuthnot.pdf|doi=10.1098/rstl.1710.0011|issue=325–336|doi-access=free}}</ref><ref name="Conover1999">{{cite book | vauthors = Conover WJ |title=Practical Nonparametric Statistics |edition=Third |year=1999 |publisher=Wiley |isbn=978-0-471-16068-7 |pages=157–176 |chapter=Chapter 3.4: The Sign Test }}</ref><ref name="Sprent1989">{{cite book | vauthors = Sprent P |title=Applied Nonparametric Statistical Methods |edition=Second |year=1989 |publisher=Chapman & Hall |isbn=978-0-412-44980-2 }}</ref><ref>{{cite book |title = The History of Statistics: The Measurement of Uncertainty Before 1900 | vauthors = Stigler SM |publisher=Harvard University Press |year=1986 |isbn=978-0-67440341-3 |pages=[https://archive.org/details/historyofstatist00stig/page/225 225–226]}}</ref> and examined birth records in London for each of the 82 years from 1629 to 1710. In every year, the number of males born in London exceeded the number of females. Considering more male or more female births as equally likely, the probability of the observed outcome is 1/2<sup>82</sup>, or about 1 in 4,836,000,000,000,000,000,000,000; in modern terms, the ''p''-value. This is vanishingly small, leading Arbuthnot to conclude that this was not due to chance, but to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the ''p'' = 1/2<sup>82</sup> significance level. This and other work by Arbuthnot is credited as "… the first use of significance tests …"<ref name="Bellhouse2001">{{cite book | vauthors = Bellhouse P |title = Statisticians of the Centuries |editor1-link=Chris Heyde |editor2-link=Eugene Seneta | veditors = Heyde CC, Seneta E |year=2001 |publisher=Springer |isbn=978-0-387-95329-8 |pages=39–42 |chapter=John Arbuthnot}}</ref> the first example of reasoning about statistical significance,<ref name="Hald1998">{{cite book | vauthors = Hald A |title=A History of Mathematical Statistics from 1750 to 1930 |year=1998 |publisher=Wiley |page=65 |chapter=Chapter 4. Chance or Design: Tests of Significance}}</ref> and "… perhaps the first published report of a [[non-parametric test|nonparametric test]] …",<ref name="Conover1999" /> specifically the [[sign test]]; see details at {{section link|Sign test|History}}.
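Arbuthnot's arithmetic is easy to reproduce. The following sketch is illustrative only (it is not part of the historical record); it uses Python's exact rational arithmetic so the tiny probability is not lost to rounding:

```python
from fractions import Fraction

# Null hypothesis: a male-majority year and a female-majority year are
# equally likely, so 82 consecutive male-majority years have probability (1/2)^82.
years = 82
p_value = Fraction(1, 2 ** years)

print(p_value)         # 1/4835703278458516698824704, i.e. about 1 in 4.836e24
print(float(p_value))  # roughly 2.07e-25
```

The exact denominator 4,835,703,278,458,516,698,824,704 is the "about 1 in 4,836,000,000,000,000,000,000,000" quoted above.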
The ''p''-value was first formally introduced by [[Karl Pearson]], in his [[Pearson's chi-squared test]],<ref name="Pearson1900">{{cite journal | vauthors = Pearson K | author-link = Karl Pearson | title = On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling | doi = 10.1080/14786440009463897 | journal = Philosophical Magazine |series=Series 5 | volume = 50 | issue = 302 | pages = 157–175 | year = 1900 |url = http://www.economics.soton.ac.uk/staff/aldrich/1900.pdf }}</ref> using the [[chi-squared distribution]] and notated as capital P.<ref name="Pearson1900" /> The ''p''-values for the [[chi-squared distribution]] (for various values of χ<sup>2</sup> and degrees of freedom), now notated as ''P,'' were calculated in {{Harv|Elderton|1902}}, collected in {{Harv|Pearson|1914|pp=xxxi–xxxiii, 26–28|loc=Table XII}}.
[[Ronald Fisher]] formalized and popularized the use of the ''p''-value in statistics,<ref>{{Cite journal |last1=Biau |first1=David Jean |last2=Jolles |first2=Brigitte M. |last3=Porcher |first3=Raphaël |date=2010 |title=P Value and the Theory of Hypothesis Testing: An Explanation for New Researchers |journal=Clinical Orthopaedics and Related Research |volume=468 |issue=3 |pages=885–892 |doi=10.1007/s11999-009-1164-4 |issn=0009-921X |pmc=2816758 |pmid=19921345}}</ref><ref>{{Cite journal |last=Brereton |first=Richard G. |date=2021 |title=P values and multivariate distributions: Non-orthogonal terms in regression models |url=https://linkinghub.elsevier.com/retrieve/pii/S0169743921000320 |journal=Chemometrics and Intelligent Laboratory Systems |language=en |volume=210 |article-number=104264 |doi=10.1016/j.chemolab.2021.104264|url-access=subscription }}</ref> with it playing a central role in his approach to the subject.<ref>{{citation | vauthors = Hubbard R, Bayarri MJ |title=Confusion Over Measures of Evidence (''p''′s) Versus Errors (α′s) in Classical Statistical Testing |journal=The American Statistician |volume=57 |year=2003 |issue=3 |pages=171–178 [p. 171] |doi=10.1198/0003130031856 |s2cid=55671953 }}</ref> In his highly influential book ''[[Statistical Methods for Research Workers]]'' (1925), Fisher proposed the level ''p'' = 0.05, or a 1 in 20 chance of being exceeded by chance, as a limit for [[statistical significance]], and applied this to a normal distribution (as a two-tailed test), thus yielding the rule of two standard deviations (on a normal distribution) for statistical significance (see [[68–95–99.7 rule]]).{{sfn|Fisher|1925|p=47|loc=Chapter [http://psychclassics.yorku.ca/Fisher/Methods/chap3.htm III. Distributions]}}{{NoteTag| 1 = To be more specific, the ''p'' = 0.05 corresponds to about 1.96 standard deviations for a normal distribution (two-tailed test), and 2 standard deviations corresponds to about a 1 in 22 chance of being exceeded by chance, or ''p'' ≈ 0.045; Fisher notes these approximations.}}{{sfn|Dallal|2012|loc=Note 31: [http://www.jerrydallal.com/LHSP/p05.htm Why P=0.05?]}}
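The two-tailed correspondence between ''p'' = 0.05 and about 1.96 standard deviations can be checked directly with the standard library's `NormalDist` (an illustrative sketch, not part of Fisher's text):

```python
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, standard deviation 1

# Two-tailed test: p = 0.05 splits into 0.025 in each tail,
# so the cutoff is the 97.5th percentile of the standard normal.
z = std_normal.inv_cdf(1 - 0.05 / 2)
print(round(z, 2))  # 1.96

# Conversely, a cutoff of exactly 2 standard deviations corresponds to
# p ≈ 0.0455, about a 1-in-22 chance, as Fisher noted.
p_two_sd = 2 * (1 - std_normal.cdf(2.0))
print(round(p_two_sd, 4))  # 0.0455
```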
He then computed a table of values, similar to Elderton but, importantly, reversed the roles of χ<sup>2</sup> and ''p.'' That is, rather than computing ''p'' for different values of χ<sup>2</sup> (and degrees of freedom ''n''), he computed values of χ<sup>2</sup> that yield specified ''p''-values, specifically 0.99, 0.98, 0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.02, and 0.01.{{sfn|Fisher|1925|pp=78–79, 98|loc=Chapter [http://psychclassics.yorku.ca/Fisher/Methods/chap4.htm IV. Tests of Goodness of Fit, Independence and Homogeneity; with Table of χ<sup>2</sup>], [http://psychclassics.yorku.ca/Fisher/Methods/tabIII.gif Table III. Table of χ<sup>2</sup>]}} That allowed computed values of χ<sup>2</sup> to be compared against cutoffs and encouraged the use of ''p''-values (especially 0.05, 0.02, and 0.01) as cutoffs, instead of computing and reporting ''p''-values themselves. The same type of tables were then compiled in {{Harv|Fisher|Yates|1938}}, which cemented the approach.{{sfn|Dallal|2012|loc=Note 31: [http://www.jerrydallal.com/LHSP/p05.htm Why P=0.05?]}}
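Fisher's reversal, finding the χ<sup>2</sup> cutoff for a chosen ''p'' rather than ''p'' for an observed χ<sup>2</sup>, can be illustrated for one degree of freedom with only the standard library, since a χ<sup>2</sup> variable with 1 degree of freedom is the square of a standard normal. (This is a sketch; for general degrees of freedom one would typically use a statistics library's inverse survival function instead.)

```python
from statistics import NormalDist

def chi2_cutoff_1df(p: float) -> float:
    """chi-squared value (1 degree of freedom) whose upper-tail probability is p.

    Uses chi2(1 df) = Z**2 for standard normal Z, so
    P(Z**2 > c) = p  <=>  c = inv_cdf(1 - p/2)**2.
    """
    return NormalDist().inv_cdf(1 - p / 2) ** 2

# A few of the p levels Fisher tabulated, for n = 1:
for p in (0.99, 0.95, 0.50, 0.10, 0.05, 0.01):
    print(f"p = {p:4}:  chi2 = {chi2_cutoff_1df(p):6.3f}")
```

The familiar cutoff 3.841 appears at ''p'' = 0.05, and 6.635 at ''p'' = 0.01.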
== Related indices ==
The ''E-value'' can refer to two concepts, both of which are related to the ''p''-value and both of which play a role in [[multiple comparisons|multiple testing]]. First, [[E-values|it corresponds to a generic, more robust alternative to the p-value]] that can deal with ''optional continuation'' of experiments. Second, it is also used to abbreviate "expect value", the [[Conditional expectation|expected]] number of times one would obtain a test statistic at least as extreme as the one actually observed, if one assumes that the null hypothesis is true.<ref>{{cite web | url = https://blast.ncbi.nlm.nih.gov/Blast.cgi?CMD=Web&PAGE_TYPE=BlastDocs&DOC_TYPE=FAQ | work = National Institutes of Health | title = Definition of E-value }}</ref> This expect-value is the product of the number of tests and the ''p''-value.
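In the "expect value" sense, the computation is a single multiplication. The sketch below is illustrative (the function name is hypothetical, and this is not NCBI's actual implementation):

```python
def expect_value(p_value: float, num_tests: int) -> float:
    """Expected number of results at least as extreme as the one observed,
    under the null hypothesis, across num_tests independent tests."""
    return num_tests * p_value

# With 500 tests, a single p-value of 0.001 corresponds to expecting
# half an equally extreme result purely by chance:
print(expect_value(0.001, 500))  # 0.5
```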
The [[Q-value (statistics)|''q''-value]] is the analog of the ''p''-value with respect to the [[False discovery rate#Related concepts|positive false discovery rate]].<ref>{{Cite journal| vauthors = Storey JD |date=2003|title=The positive false discovery rate: a Bayesian interpretation and the q-value|journal=The Annals of Statistics|volume=31|issue=6|pages=2013–2035|doi=10.1214/aos/1074290335|doi-access=free}}</ref> It is used in [[Multiple comparisons problem|multiple hypothesis testing]] to maintain statistical power while minimizing the [[false positive rate]].<ref>{{cite journal | vauthors = Storey JD, Tibshirani R | title = Statistical significance for genomewide studies | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 100 | issue = 16 | pages = 9440–9445 | date = August 2003 | pmid = 12883005 | pmc = 170937 | doi = 10.1073/pnas.1530509100 | doi-access = free | bibcode = 2003PNAS..100.9440S }}</ref>
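A minimal sketch of how false-discovery-rate-adjusted values are computed from a list of ''p''-values, using the Benjamini–Hochberg step-up procedure (a common stand-in for Storey's ''q''-value when the proportion of true nulls is taken as 1; Storey's estimator itself is more involved):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (monotone step-up)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity:
    # adjusted value at rank k is min over ranks >= k of p * m / rank.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

print([round(q, 4) for q in bh_adjust([0.01, 0.04, 0.03, 0.005])])
# [0.02, 0.04, 0.04, 0.02]
```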
The [[Probability of Direction|Probability of Direction (''pd'')]] is the [[Bayesian statistics|Bayesian]] numerical equivalent of the ''p''-value.<ref name="makowski2019indices">{{cite journal | vauthors = Makowski D, Ben-Shachar MS, Chen SH, Lüdecke D | title = Indices of Effect Existence and Significance in the Bayesian Framework | journal = Frontiers in Psychology | volume = 10 | page = 2767 | date = 10 December 2019 | pmid = 31920819 | pmc = 6914840 | doi = 10.3389/fpsyg.2019.02767 | doi-access = free }}</ref> It corresponds to the proportion of the [[Posterior probability|posterior distribution]] that is of the median's sign, typically varying between 50% and 100%, and representing the certainty with which an effect is positive or negative.
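Given draws from a posterior distribution, ''pd'' reduces to a counting exercise. The sketch below (hypothetical function and sample values) uses the majority sign as a proxy for the median's sign, which coincides with it in the generic case:

```python
def probability_of_direction(posterior_samples):
    """Proportion of posterior draws sharing the median's sign.
    The majority sign equals the median's sign when more than half
    of the draws fall on one side of zero."""
    n = len(posterior_samples)
    positive = sum(1 for s in posterior_samples if s > 0)
    return max(positive, n - positive) / n

# Illustrative posterior draws of an effect size:
draws = [0.3, 0.1, -0.05, 0.2, 0.4, 0.15, -0.1, 0.25]
print(probability_of_direction(draws))  # 0.75: 6 of 8 draws are positive
```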
[[Second-generation p-values]] extend the concept of p-values by not considering extremely small, practically irrelevant [[effect size]]s as significant.<ref>{{cite journal | vauthors = Blume JD, Greevy RA, Welty VF, Smith JR, Dupont WD | title = An Introduction to Second-Generation p-Values | journal = The American Statistician | year = 2019 | volume = 73 | issue = sup1 | pages = 157–167 | doi = 10.1080/00031305.2018.1537893 | doi-access = free }}</ref>
* {{cite journal |last=Pearson |first=Karl |title=On the probability that two independent distributions of frequency are really samples of the same population, with special reference to recent work on the identity of Trypanosome strains |date=1914 |journal=Biometrika |volume=10 |pages=85–154 |doi=10.1093/biomet/10.1.85 }}
* {{cite book |title = Statistical Methods for Research Workers | vauthors = Fisher RA |author-link = Ronald Fisher |year=1925 |publisher=Oliver & Boyd |location = Edinburgh, Scotland |isbn = 978-0-05-002170-5|title-link=Statistical Methods for Research Workers }}
* {{cite book |title = The Design of Experiments |edition=9th | vauthors = Fisher RA |author-link = Ronald Fisher |orig-date=1935 |year=1971 |publisher=Macmillan |isbn = 978-0-02-844690-5}}
* {{cite book | vauthors = Fisher RA, Yates F | title = Statistical tables for biological, agricultural and medical research | year = 1938 | location = London, England }}
* {{cite book |title = The history of statistics: the measurement of uncertainty before 1900 | vauthors = Stigler SM |author-link = Stephen M. Stigler |year = 1986 |publisher = Belknap Press of Harvard University Press |location = Cambridge, Mass |isbn = 978-0-674-40340-6|url-access = registration |url = https://archive.org/details/historyofstatist00stig }}
* {{cite journal | vauthors = Hubbard R, Armstrong JS | author2-link = J. Scott Armstrong |title = Why We Don't Really Know What Statistical Significance Means: Implications for Educators |doi = 10.1177/0273475306288399 |url = https://hops.wharton.upenn.edu/ideas/pdf/Armstrong/StatisticalSignificance.pdf |journal = Journal of Marketing Education |volume=28 |issue=2 |pages=114–120 |year=2006 |archive-url = https://web.archive.org/web/20060518054857/http://hops.wharton.upenn.edu/ideas/pdf/Armstrong/StatisticalSignificance.pdf |archive-date=May 18, 2006 |hdl=2092/413 |s2cid=34729227 |hdl-access=free }}
* {{cite journal | vauthors = Hubbard R, Lindsay RM |doi = 10.1177/0959354307086923 |title = Why ''P'' Values Are Not a Useful Measure of Evidence in Statistical Significance Testing |journal = Theory & Psychology |volume = 18 |issue = 1 |pages = 69–88 |year = 2008 |s2cid = 143487211 |url = http://wiki.bio.dtu.dk/~agpe/papers/pval_notuseful.pdf <!-- paper that explains the difference between Fisher's evidential [[p-value|''p''-value]] and the Neyman–Pearson [[Type I error rate]] ''α'' --> |access-date = 2015-08-28 |archive-url = https://web.archive.org/web/20161021014340/http://wiki.bio.dtu.dk/~agpe/papers/pval_notuseful.pdf |archive-date = 2016-10-21 |url-access = subscription }}
* {{cite journal | vauthors = Stigler S |author-link = Stephen Stigler |title = Fisher and the 5% level |doi = 10.1007/s00144-008-0033-3 |journal = Chance | volume = 21 | issue = 4 | page = 12 |date=December 2008 |doi-access = free }}
* {{cite book |title = The Little Handbook of Statistical Practice | vauthors = Dallal GE |year=2012 |url = http://www.tufts.edu/~gdallal/LHSP.HTM}}
* {{cite journal | vauthors = Biau DJ, Jolles BM, Porcher R | title = P value and the theory of hypothesis testing: an explanation for new researchers | journal = Clinical Orthopaedics and Related Research | volume = 468 | issue = 3 | pages = 885–892 | date = March 2010 | pmid = 19921345 | pmc = 2816758 | doi = 10.1007/s11999-009-1164-4 }}
* {{cite book | vauthors = Reinhart A |title=Statistics Done Wrong: The Woefully Complete Guide |publisher=[[No Starch Press]] |url = http://statisticsdonewrong.com |isbn = 978-1-59327-620-1 |page = 176 |year = 2015 }}
* {{cite journal |author-last1=Benjamini |author-first1=Yoav |author-link1=Yoav Benjamini |author-last2=De Veaux |author-first2=Richard D. |author-last3=Efron |author-first3=Bradley |author-link3=Bradley Efron |author-last4=Evans |author-first4=Scott |display-authors=etal |title=The ASA President's Task Force Statement on Statistical Significance and Replicability |journal=The Annals of Applied Statistics |year=2021 |volume=15 |issue=3 |doi=10.1214/21-AOAS1501 |doi-access=free}}
*{{cite journal | last1 = Benjamin | first1 = Daniel J. | last2 = Berger | first2 = James O. | last3 = Johannesson | first3 = Magnus | last4 = Nosek | first4 = Brian A. | last5 = Wagenmakers | first5 = E.-J. | last6 = Berk | first6 = Richard | last7 = Bollen | first7 = Kenneth A. | last8 = Brembs | first8 = Björn | last9 = Brown | first9 = Lawrence | last10 = Camerer | first10 = Colin | last11 = Cesarini | first11 = David | last12 = Chambers | first12 = Christopher D. | last13 = Clyde | first13 = Merlise | last14 = Cook | first14 = Thomas D. | last15 = De Boeck | first15 = Paul | last16 = Dienes | first16 = Zoltan | last17 = Dreber | first17 = Anna | last18 = Easwaran | first18 = Kenny | last19 = Efferson | first19 = Charles | last20 = Fehr | first20 = Ernst | last21 = Fidler | first21 = Fiona | last22 = Field | first22 = Andy P. | last23 = Forster | first23 = Malcolm | last24 = George | first24 = Edward I. | last25 = Gonzalez | first25 = Richard | last26 = Goodman | first26 = Steven | last27 = Green | first27 = Edwin | last28 = Green | first28 = Donald P. | last29 = Greenwald | first29 = Anthony G. | last30 = Hadfield | first30 = Jarrod D. | last31 = Hedges | first31 = Larry V. | last32 = Held | first32 = Leonhard | last33 = Hua Ho | first33 = Teck | last34 = Hoijtink | first34 = Herbert | last35 = Hruschka | first35 = Daniel J. | last36 = Imai | first36 = Kosuke | last37 = Imbens | first37 = Guido | last38 = Ioannidis | first38 = John P. A. | last39 = Jeon | first39 = Minjeong | last40 = Jones | first40 = James Holland | last41 = Kirchler | first41 = Michael | last42 = Laibson | first42 = David | last43 = List | first43 = John | last44 = Little | first44 = Roderick | last45 = Lupia | first45 = Arthur | last46 = Machery | first46 = Edouard | last47 = Maxwell | first47 = Scott E. | last48 = McCarthy | first48 = Michael | last49 = Moore | first49 = Don A. | last50 = Morgan | first50 = Stephen L. | last51 = Munafó | first51 = Marcus | last52 = Nakagawa | first52 = Shinichi | last53 = Nyhan | first53 = Brendan | last54 = Parker | first54 = Timothy H. | last55 = Pericchi | first55 = Luis | last56 = Perugini | first56 = Marco | last57 = Rouder | first57 = Jeff | last58 = Rousseau | first58 = Judith | last59 = Savalei | first59 = Victoria | last60 = Schönbrodt | first60 = Felix D. | last61 = Sellke | first61 = Thomas | last62 = Sinclair | first62 = Betsy | last63 = Tingley | first63 = Dustin | last64 = Van Zandt | first64 = Trisha | last65 = Vazire | first65 = Simine | last66 = Watts | first66 = Duncan J. | last67 = Winship | first67 = Christopher | last68 = Wolpert | first68 = Robert L. | last69 = Xie | first69 = Yu | last70 = Young | first70 = Cristobal | last71 = Zinman | first71 = Jonathan | last72 = Johnson | first72 = Valen E. | title = Redefine statistical significance | journal = Nature Human Behaviour | date = 1 September 2017 | volume = 2 | issue = 1 | pages = 6–10 | eissn = 2397-3374 | doi = 10.1038/s41562-017-0189-z | pmid = 30980045 | s2cid = 256726352 | hdl = 10281/184094 | hdl-access = free }}
{{refend}}
* {{YouTube|5Z9OIYA8He8|StatQuest: P Values, clearly explained}}
* {{YouTube|UFhJefdVCjE|StatQuest: P-value pitfalls and power calculations}}
* [https://fivethirtyeight.com/features/science-isnt-broken/ Science Isn't Broken – Article on how ''p''-values can be manipulated and an interactive tool to visualize it.]
{{Clear}}
Latest revision as of 10:09, 23 December 2025
In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct.[1][2] A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Even though reporting p-values of statistical tests is common practice in academic publications of many quantitative fields, misinterpretation and misuse of p-values is widespread and has been a major topic in mathematics and metascience.[3][4]
In 2016, the American Statistical Association (ASA) made a formal statement that "p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone" and that "a p-value, or statistical significance, does not measure the size of an effect or the importance of a result" or "evidence regarding a model or hypothesis".[5] However, a 2019 ASA task force issued a statement on statistical significance and replicability, concluding: "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data".[6]
Basic concepts
In statistics, every conjecture concerning the unknown probability distribution of a collection of random variables representing the observed data in some study is called a statistical hypothesis. If we state one hypothesis only and the aim of the statistical test is to see whether this hypothesis is tenable, but not to investigate other specific hypotheses, then such a test is called a null hypothesis test.
As our statistical hypothesis will, by definition, state some property of the distribution, the null hypothesis is the default hypothesis under which that property does not exist. The null hypothesis is typically that some parameter (such as a correlation or a difference between means) in the populations of interest is zero. Our hypothesis might specify the probability distribution of the data precisely, or it might only specify that it belongs to some class of distributions. Often, we reduce the data to a single numerical statistic, e.g. T, whose marginal probability distribution is closely connected to a main question of interest in the study.
The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic T. The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis.
Loosely speaking, rejection of the null hypothesis implies that there is sufficient evidence against it.
As a particular example, if a null hypothesis states that a certain summary statistic T follows the standard normal distribution N(0, 1), then the rejection of this null hypothesis could mean that (i) the mean of T is not 0, or (ii) the variance of T is not 1, or (iii) T is not normally distributed. Different tests of the same null hypothesis would be more or less sensitive to different alternatives. However, even if we do manage to reject the null hypothesis for all three alternatives, and even if we know that the distribution is normal and the variance is 1, the null hypothesis test does not tell us which non-zero values of the mean are now most plausible. The more independent observations from the same probability distribution one has, the more accurate the test will be, and the higher the precision with which one will be able to determine the mean value and show that it is not equal to zero; but this will also increase the importance of evaluating the real-world or scientific relevance of this deviation.
Definition and interpretation
Definition
The p-value is the probability under the null hypothesis of obtaining a real-valued test statistic at least as extreme as the one obtained. Consider an observed test statistic t from an unknown distribution T. Then the p-value p is the probability, computed under the null hypothesis H0, of observing a test-statistic value at least as "extreme" as t. That is:
- p = Pr(T ≥ t | H0) for a one-sided right-tail test-statistic distribution.
- p = Pr(T ≤ t | H0) for a one-sided left-tail test-statistic distribution.
- p = 2 min{Pr(T ≥ t | H0), Pr(T ≤ t | H0)} for a two-sided test-statistic distribution. If the distribution of T is symmetric about zero, then p = Pr(|T| ≥ |t| | H0).
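To make these tail definitions concrete, here is a minimal Python sketch under the simplifying assumption (for illustration only) that the test statistic is standard normal under the null hypothesis; the function names are hypothetical, not a standard API:

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF, computed from the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def p_right(t: float) -> float:
    """One-sided right-tail p-value: Pr(T >= t) under H0."""
    return 1.0 - phi(t)

def p_left(t: float) -> float:
    """One-sided left-tail p-value: Pr(T <= t) under H0."""
    return phi(t)

def p_two_sided(t: float) -> float:
    """Two-sided p-value: twice the smaller tail, capped at 1."""
    return min(1.0, 2.0 * min(p_left(t), p_right(t)))
```

For example, p_two_sided(1.96) is about 0.05, the familiar two-sided 5% cutoff for a normal statistic.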
Interpretations
The error that a practising statistician would consider the more important to avoid (which is a subjective judgment) is called the error of the first kind. The first demand of the mathematical theory is to deduce such test criteria as would ensure that the probability of committing an error of the first kind would equal (or approximately equal, or not exceed) a preassigned number α, such as α = 0.05 or 0.01, etc. This number is called the level of significance.
In a significance test, the null hypothesis is rejected if the p-value is less than a predefined threshold value α, which is referred to as the alpha level or significance level. α is not derived from the data, but rather is set by the researcher before examining the data. α is commonly set to 0.05, though lower alpha levels are sometimes used. The 0.05 value (a 1-in-20 chance) was originally proposed by Ronald Fisher in 1925 in his famous book Statistical Methods for Research Workers.[8]
Different p-values based on independent sets of data can be combined, for instance using Fisher's combined probability test.
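As a sketch of how such a combination works, the following Python snippet implements Fisher's method: under H0, X = −2 Σ ln p_i follows a chi-squared distribution with 2k degrees of freedom, whose survival function has a closed form for even degrees of freedom. The helper name is an assumption, not a standard API:

```python
import math

def fisher_combined(pvalues):
    """Combine independent p-values with Fisher's method.

    Under H0, X = -2 * sum(ln p_i) is chi-squared with 2k degrees
    of freedom; for even df the survival function is
    Pr(X >= x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!.
    """
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)  # x/2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total
```

Combining a single p-value returns it unchanged, and combining [0.05, 0.05] gives roughly 0.017, more significant than either alone.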
Distribution
The p-value is a function of the chosen test statistic T and is therefore a random variable. If the null hypothesis fixes the probability distribution of T precisely (for example, H0: θ = θ0, where θ is the only parameter), and if that distribution is continuous, then when the null hypothesis is true, the p-value is uniformly distributed between 0 and 1. Regardless of the truth of H0, the p-value is not fixed; if the same test is repeated independently with fresh data, one will typically obtain a different p-value in each iteration.
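The uniformity claim can be checked empirically. The following Python sketch simulates a one-sample Z-test on data generated under a true null hypothesis and confirms that about 5% of the resulting p-values fall below 0.05:

```python
import math
import random

def z_test_p(sample):
    """Right-tail p-value of a one-sample Z-test with known sigma = 1."""
    z = sum(sample) / math.sqrt(len(sample))
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # Pr(Z >= z)

random.seed(0)
pvals = []
for _ in range(20000):
    sample = [random.gauss(0.0, 1.0) for _ in range(10)]  # H0 is true
    pvals.append(z_test_p(sample))

# Under a true simple null with a continuous statistic, p ~ Uniform(0, 1):
frac_below_05 = sum(p < 0.05 for p in pvals) / len(pvals)
mean_p = sum(pvals) / len(pvals)
```

With 20,000 replications, frac_below_05 lands close to 0.05 and mean_p close to 0.5, as uniformity predicts.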
Usually only a single p-value relating to a hypothesis is observed, so the p-value is interpreted by a significance test, and no effort is made to estimate the distribution it was drawn from. When a collection of p-values is available (e.g. when considering a group of studies on the same subject), the distribution of significant p-values is sometimes called a p-curve.[9] A p-curve can be used to assess the reliability of scientific literature, such as by detecting publication bias or p-hacking.[9][10]
Distribution for composite hypothesis
In parametric hypothesis testing problems, a simple or point hypothesis refers to a hypothesis where the parameter's value is assumed to be a single number. In contrast, in a composite hypothesis the parameter's value is given by a set of numbers. When the null-hypothesis is composite (or the distribution of the statistic is discrete), then when the null-hypothesis is true the probability of obtaining a p-value less than or equal to any number between 0 and 1 is still less than or equal to that number. In other words, it remains the case that very small values are relatively unlikely if the null-hypothesis is true, and that a significance test at level is obtained by rejecting the null-hypothesis if the p-value is less than or equal to .[11][12]
For example, when testing the null hypothesis that a distribution is normal with a mean less than or equal to zero against the alternative that the mean is greater than zero (H0: μ ≤ 0 vs. H1: μ > 0, variance known), the null hypothesis does not specify the exact probability distribution of the appropriate test statistic. In this example that would be the Z-statistic belonging to the one-sided one-sample Z-test. For each possible value of the theoretical mean, the Z-test statistic has a different probability distribution. In these circumstances the p-value is defined by taking the least favorable null-hypothesis case, which is typically on the border between null and alternative. This definition ensures the complementarity of p-values and alpha levels: a significance level of α means one only rejects the null hypothesis if the p-value is less than or equal to α, and the hypothesis test will indeed have a maximum type-1 error rate of α.
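The least-favorable-case construction can be illustrated numerically. In this hypothetical sketch (made-up sample mean and sample size), the tail probability of the observed mean is evaluated at several parameter values inside the composite null μ ≤ 0; it is largest at the boundary μ = 0, which is therefore the value used to define the p-value:

```python
import math

def p_at_mu(xbar: float, n: int, mu: float) -> float:
    """Pr(sample mean >= xbar) when the true mean is mu (sigma = 1)."""
    z = (xbar - mu) * math.sqrt(n)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical data: observed mean 0.5 from n = 25; null is mu <= 0.
xbar, n = 0.5, 25
tail_probs = {mu: p_at_mu(xbar, n, mu) for mu in [-1.0, -0.5, -0.1, 0.0]}

# The tail probability grows with mu, so the supremum over the
# composite null mu <= 0 is attained at the boundary mu = 0:
p_value = p_at_mu(xbar, n, 0.0)
```

Here p_value is about 0.006, while the tail probabilities at more negative μ are smaller still, so rejecting when p ≤ α controls the type-1 error rate over the whole composite null.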
Usage
The p-value is widely used in statistical hypothesis testing, specifically in null hypothesis significance testing. In this method, before conducting the study, one first chooses a model (the null hypothesis) and the alpha level α (most commonly 0.05). After analyzing the data, if the p-value is less than α, that is taken to mean that the observed data is sufficiently inconsistent with the null hypothesis for the null hypothesis to be rejected. However, that does not prove that the null hypothesis is false. The p-value does not, in itself, establish probabilities of hypotheses. Rather, it is a tool for deciding whether to reject the null hypothesis.[13]
Misuse
According to the ASA, there is widespread agreement that p-values are often misused and misinterpreted.[2] One practice that has been particularly criticized is accepting the alternative hypothesis for any p-value nominally less than 0.05 without other supporting evidence. Although p-values are helpful in assessing how incompatible the data are with a specified statistical model, contextual factors must also be considered, such as "the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis".[2] Another concern is that the p-value is often misunderstood as being the probability that the null hypothesis is true.[2][14] p-values and significance tests also say nothing, by themselves, about whether conclusions drawn from a sample generalize to a population.
Some statisticians have proposed abandoning p-values and focusing more on other inferential statistics,[2] such as confidence intervals,[15][16] likelihood ratios,[17][18] or Bayes factors,[19][20][21] but there is heated debate on the feasibility of these alternatives.[22][23] Others have suggested removing fixed significance thresholds and interpreting p-values as continuous indices of the strength of evidence against the null hypothesis.[24][25] Yet others have suggested reporting, alongside p-values, the prior probability of a real effect that would be required to keep the false positive risk (i.e. the probability that there is no real effect) below a pre-specified threshold (e.g. 5%).[26]
However, in 2019 an ASA task force was convened to consider the use of statistical methods in scientific studies, specifically hypothesis tests and p-values, and their connection to replicability.[6] Its statement notes that "Different measures of uncertainty can complement one another; no single measure serves all purposes", citing the p-value as one of these measures. The task force also stresses that p-values can provide valuable information both when the specific value is considered and when it is compared to some threshold. In general, it stresses that "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data". This sentiment was further supported by a comment in Nature Human Behaviour which, in response to recommendations to redefine statistical significance to p ≤ 0.005, proposed that "researchers should transparently report and justify all choices they make when designing a study, including the alpha level."[27]
Calculation
Usually, T is a test statistic. A test statistic is the output of a scalar function of all the observations. This statistic provides a single number, such as a t-statistic or an F-statistic. As such, the test statistic follows a distribution determined by the function used to define that test statistic and the distribution of the input observational data.
For the important case in which the data are hypothesized to be a random sample from a normal distribution, depending on the nature of the test statistic and the hypotheses of interest about its distribution, different null hypothesis tests have been developed. Some such tests are the z-test for hypotheses concerning the mean of a normal distribution with known variance, the t-test based on Student's t-distribution of a suitable statistic for hypotheses concerning the mean of a normal distribution when the variance is unknown, and the F-test based on the F-distribution of yet another statistic for hypotheses concerning the variance. For data of other nature, for instance, categorical (discrete) data, test statistics might be constructed whose null hypothesis distribution is based on normal approximations to appropriate statistics obtained by invoking the central limit theorem for large samples, as in the case of Pearson's chi-squared test.
Thus computing a p-value requires a null hypothesis, a test statistic (together with deciding whether the researcher is performing a one-tailed test or a two-tailed test), and data. Even though computing the test statistic on given data may be easy, computing the sampling distribution under the null hypothesis, and then computing its cumulative distribution function (CDF), is often a difficult problem. Today, this computation is done using statistical software, often via numeric methods (rather than exact formulae); in the early and mid 20th century, it was instead done via tables of values, and one interpolated or extrapolated p-values from these discrete values. Rather than using a table of p-values, Fisher instead inverted the CDF, publishing a list of values of the test statistic for given fixed p-values; this corresponds to computing the quantile function (inverse CDF).
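Fisher's inversion of the CDF corresponds to computing a quantile function. As an illustration only (using generic bisection rather than any exact formula or tabulated values), the following Python sketch recovers the familiar two-sided 5% cutoff of about 1.96 for a standard normal statistic:

```python
import math

def norm_sf(z: float) -> float:
    """Pr(Z >= z) for a standard normal Z."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def critical_value(alpha: float) -> float:
    """Invert the survival function by bisection: find the z with
    Pr(Z >= z) = alpha (the direction Fisher's tables tabulated)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_sf(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

z_05 = critical_value(0.025)   # about 1.96 for a two-sided 5% test
```

The same bisection idea works for any continuous, monotone survival function whose forward direction is computable.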
Example
Testing the fairness of a coin
As an example of a statistical test, an experiment is performed to determine whether a coin flip is fair (equal chance of landing heads or tails) or unfairly biased (one outcome being more likely than the other).
Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The full data would be a sequence of twenty symbols, each "H" or "T". The statistic on which one might focus could be the total number of heads. The null hypothesis is that the coin is fair, and coin tosses are independent of one another. If a right-tailed test is considered, which would be the case if one is actually interested in the possibility that the coin is biased towards falling heads, then the p-value of this result is the chance of a fair coin landing on heads at least 14 times out of 20 flips. That probability can be computed from binomial coefficients as

Pr(at least 14 heads) = (1/2^20) [C(20,14) + C(20,15) + ... + C(20,20)] = 60460/1048576 ≈ 0.058.
This probability is the p-value, considering only extreme results that favor heads. This is called a one-tailed test. However, one might be interested in deviations in either direction, favoring either heads or tails, in which case the two-tailed p-value may instead be calculated. As the binomial distribution is symmetrical for a fair coin, the two-sided p-value is simply twice the single-sided p-value computed above: 0.115.
In the above example:
- Null hypothesis (H0): The coin is fair, with Pr(heads) = 0.5.
- Test statistic: Number of heads.
- Alpha level (designated threshold of significance): 0.05.
- Observation O: 14 heads out of 20 flips.
- Two-tailed p-value of observation O given H0 = 2 × min(Pr(no. of heads ≥ 14 heads), Pr(no. of heads ≤ 14 heads)) = 2 × min(0.058, 0.978) = 2 × 0.058 = 0.115.
The Pr(no. of heads ≤ 14) = 1 − Pr(no. of heads ≥ 14) + Pr(no. of heads = 14) = 1 − 0.058 + 0.036 = 0.978; however, the symmetry of this binomial distribution makes finding the smaller of the two probabilities an unnecessary computation. Here, the calculated p-value exceeds 0.05, meaning that the data fall within the range of what would happen 95% of the time, if the coin were fair. Hence, the null hypothesis is not rejected at the 0.05 level.
However, had one more head been obtained, the resulting p-value (two-tailed) would have been 0.0414 (4.14%), in which case the null hypothesis would be rejected at the 0.05 level.
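The numbers in this example can be reproduced with a few lines of Python using exact binomial tail sums:

```python
from math import comb

def binom_right_tail(k: int, n: int) -> float:
    """Pr(at least k heads in n fair flips)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

def two_sided_p(k: int, n: int) -> float:
    """Twice the smaller of the two tail probabilities, capped at 1."""
    left = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    right = binom_right_tail(k, n)
    return min(1.0, 2.0 * min(left, right))

p_one = binom_right_tail(14, 20)   # about 0.058
p_two = two_sided_p(14, 20)        # about 0.115
```

Calling two_sided_p(15, 20) gives about 0.0414, matching the 15-heads case above.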
Optional stopping
The difference between the two meanings of "extreme" appears when we consider sequential hypothesis testing, or optional stopping, for the fairness of the coin. In general, optional stopping changes how the p-value is calculated.[28][29] Suppose we design the experiment as follows:
- Flip the coin twice. If both come up heads or both come up tails, end the experiment.
- Else, flip the coin 4 more times.
This experiment has 7 types of outcomes: 2 heads, 2 tails, 5 heads 1 tail, ..., 1 head 5 tails. We now calculate the p-value of the "3 heads 3 tails" outcome.
If we use the test statistic D = (number of heads) − (number of tails), then under the null hypothesis (i.e. a fair coin) the two-sided p-value of "3 heads 3 tails" is exactly equal to 1, and both the one-sided left-tail p-value and the one-sided right-tail p-value are exactly equal to 19/32.
If we consider every outcome that has equal or lower probability than "3 heads 3 tails" as "at least as extreme", then the p-value is exactly 1/2: the observed outcome has probability 3/16, and the outcomes no more probable than it ("3 heads 3 tails", "4 heads 2 tails", "2 heads 4 tails", "5 heads 1 tail" and "1 head 5 tails") have probabilities summing to 1/2.
However, had we planned to simply flip the coin 6 times no matter what happens, the second definition would make the p-value of "3 heads 3 tails" exactly 1, since with 6 fixed flips "3 heads 3 tails" is the single most probable outcome.
Thus, the "at least as extreme" definition of p-value is deeply contextual and depends on what the experimenter planned to do even in situations that did not occur.
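To make this context-dependence concrete, the following Python sketch enumerates every outcome of the stopping-rule experiment with exact probabilities and computes the "equal or lower probability" p-value of the "3 heads 3 tails" outcome:

```python
from fractions import Fraction
from itertools import product

def outcomes():
    """Yield (heads, tails, probability) for the stopping rule:
    flip twice; stop if the two flips agree, else flip four more."""
    half = Fraction(1, 2)
    for first in product("HT", repeat=2):
        if first[0] == first[1]:
            yield first.count("H"), first.count("T"), half ** 2
        else:
            for rest in product("HT", repeat=4):
                seq = first + rest
                yield seq.count("H"), seq.count("T"), half ** 6

def p_value_3h3t():
    """Total probability of outcome types no more probable than
    '3 heads 3 tails' (the 'equal or lower probability' definition)."""
    probs = {}
    for h, t, p in outcomes():
        probs[(h, t)] = probs.get((h, t), Fraction(0)) + p
    observed = probs[(3, 3)]
    return sum(p for p in probs.values() if p <= observed)
```

Exact rational arithmetic via Fraction avoids rounding issues; the function returns exactly 1/2, whereas a fixed six-flip design would give 1 for the same data.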
History
P-value computations date back to the 1700s, where they were computed for the human sex ratio at birth, and used to compute statistical significance compared to the null hypothesis of equal probability of male and female births.[30] John Arbuthnot studied this question in 1710,[31][32][33][34] and examined birth records in London for each of the 82 years from 1629 to 1710. In every year, the number of males born in London exceeded the number of females. Considering more male or more female births as equally likely, the probability of the observed outcome is 1/2^82, or about 1 in 4,836,000,000,000,000,000,000,000; in modern terms, the p-value. This is vanishingly small, leading Arbuthnot to conclude that this was not due to chance, but to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the p = 1/2^82 significance level. This and other work by Arbuthnot is credited as "… the first use of significance tests …",[35] the first example of reasoning about statistical significance,[36] and "… perhaps the first published report of a nonparametric test …",[32] specifically the sign test; see the sign test article for details.
The same question was later addressed by Pierre-Simon Laplace, who instead used a parametric test, modeling the number of male births with a binomial distribution:[37]
In the 1770s Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls. He concluded by calculation of a p-value that the excess was a real, but unexplained, effect.
The p-value was first formally introduced by Karl Pearson, in his Pearson's chi-squared test,[38] using the chi-squared distribution and notated as capital P.[38] The p-values for the chi-squared distribution (for various values of χ2 and degrees of freedom), now notated as P, were calculated in Elderton (1902) and collected in Pearson's Tables for Statisticians and Biometricians (1914).
Ronald Fisher formalized and popularized the use of the p-value in statistics,[39][40] with it playing a central role in his approach to the subject.[41] In his highly influential book Statistical Methods for Research Workers (1925), Fisher proposed the level p = 0.05, or a 1-in-20 chance of being exceeded by chance, as a limit for statistical significance, and applied this to a normal distribution (as a two-tailed test), thus yielding the rule of two standard deviations (on a normal distribution) for statistical significance (see 68–95–99.7 rule).
He then computed a table of values, similar to Elderton's but, importantly, reversed the roles of χ2 and p. That is, rather than computing p for different values of χ2 (and degrees of freedom n), he computed values of χ2 that yield specified p-values, specifically 0.99, 0.98, 0.95, 0.90, 0.80, 0.70, 0.50, 0.30, 0.20, 0.10, 0.05, 0.02, and 0.01. That allowed computed values of χ2 to be compared against cutoffs and encouraged the use of p-values (especially 0.05, 0.02, and 0.01) as cutoffs, instead of computing and reporting p-values themselves. The same type of tables were then compiled in Fisher & Yates (1938), which cemented the approach.
As an illustration of the application of p-values to the design and interpretation of experiments, in his following book The Design of Experiments (1935), Fisher presented the lady tasting tea experiment, which is the archetypal example of the p-value.
To evaluate a lady's claim that she (Muriel Bristol) could distinguish by taste how tea is prepared (first adding the milk to the cup, then the tea, or first tea, then milk), she was sequentially presented with 8 cups: 4 prepared one way, 4 prepared the other, and asked to determine the preparation of each cup (knowing that there were 4 of each). In that case, the null hypothesis was that she had no special ability, the test was Fisher's exact test, and the p-value was 1/C(8,4) = 1/70 ≈ 0.014, so Fisher was willing to reject the null hypothesis (consider the outcome highly unlikely to be due to chance) if all were classified correctly. (In the actual experiment, Bristol correctly classified all 8 cups.)
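The p-value of the tea-tasting experiment can be reproduced with a short hypergeometric tail computation (the function name is illustrative, not a standard API):

```python
from math import comb

def tea_p_value(correct: int, cups: int = 8, per_kind: int = 4) -> float:
    """Fisher's exact (hypergeometric) right tail: the probability of
    identifying at least `correct` of the milk-first cups by pure
    guessing, given the taster knows there are `per_kind` of each kind."""
    total = comb(cups, per_kind)
    tail = sum(
        comb(per_kind, k) * comb(cups - per_kind, per_kind - k)
        for k in range(correct, per_kind + 1)
    )
    return tail / total

p_all_correct = tea_p_value(4)   # 1/70, about 0.014
```

With 6 cups (3 of each), tea_p_value(3, 6, 3) gives exactly 1/20 = 0.05, the borderline case Fisher used when discussing experimental design.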
Fisher reiterated the p = 0.05 threshold and explained its rationale, stating:
It is usual and convenient for experimenters to take 5 per cent as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the greater part of the fluctuations which chance causes have introduced into their experimental results.
He also applied this threshold to the design of experiments, noting that had only 6 cups been presented (3 of each), a perfect classification would have only yielded a p-value of 1/C(6,3) = 1/20 = 0.05, which would not have met this level of significance. Fisher also underlined the interpretation of p as the long-run proportion of values at least as extreme as the data, assuming the null hypothesis is true.
In later editions, Fisher explicitly contrasted the use of the p-value for statistical inference in science with the Neyman–Pearson method, which he terms "Acceptance Procedures". Fisher emphasizes that while fixed levels such as 5%, 2%, and 1% are convenient, the exact p-value can be used, and the strength of evidence can and will be revised with further experimentation. In contrast, decision procedures require a clear-cut decision, yielding an irreversible action, and the procedure is based on costs of error, which, he argues, are inapplicable to scientific research.
Related indices
The E-value can refer to two concepts, both of which are related to the p-value and both of which play a role in multiple testing. First, it corresponds to a generic, more robust alternative to the p-value that can deal with optional continuation of experiments. Second, it is also used to abbreviate "expect value", which is the expected number of times that one expects to obtain a test statistic at least as extreme as the one that was actually observed if one assumes that the null hypothesis is true.[42] This expect-value is the product of the number of tests and the p-value.
The q-value is the analog of the p-value with respect to the positive false discovery rate.[43] It is used in multiple hypothesis testing to maintain statistical power while minimizing the false positive rate.[44]
The Probability of Direction (pd) is the Bayesian numerical equivalent of the p-value.[45] It corresponds to the proportion of the posterior distribution that is of the median's sign, typically varying between 50% and 100%, and representing the certainty with which an effect is positive or negative.
Second-generation p-values extend the concept of p-values by not considering extremely small, practically irrelevant effect sizes as significant.[46]
See also
- Student's t-test
- Bonferroni correction
- Counternull
- Fisher's method of combining p-values
- Generalized p-value
- Harmonic mean p-value
- Holm–Bonferroni method
- Multiple comparisons problem
- p-rep
- p-value fallacy
Notes
References
<templatestyles src="Reflist/styles.css" />
- ↑ Script error: No such module "citation/CS1".
- ↑ a b c d e Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ a b Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ a b Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Lakens, D., Adolfi, F.G., Albers, C.J. et al. Justify your alpha. Nat Hum Behav 2, 168–171 (2018). https://doi.org/10.1038/s41562-018-0311-x
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ a b Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ a b Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Script error: No such module "Citation/CS1".
- ↑ Blume, J. D., Greevy, R. A., Welty, V. F., Smith, J. R. & Dupont, W. D. An Introduction to Second-Generation p-Values. The American Statistician (2019). https://www.tandfonline.com/doi/full/10.1080/00031305.2018.1537893
External links
- Free online p-values calculators for various specific tests (chi-square, Fisher's F-test, etc.).
- Understanding p-values, including a Java applet that illustrates how the numerical values of p-values can give quite misleading impressions about the truth or falsity of the hypothesis under test.
- StatQuest: P Values, clearly explained (YouTube: https://www.youtube.com/watch?v=5Z9OIYA8He8)
- StatQuest: P-value pitfalls and power calculations (YouTube: https://www.youtube.com/watch?v=UFhJefdVCjE)
- Science Isn't Broken - Article on how p-values can be manipulated and an interactive tool to visualize it.