<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Heteroskedasticity-consistent_standard_errors</id>
	<title>Heteroskedasticity-consistent standard errors - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://debianws.lexgopc.com/wiki143/index.php?action=history&amp;feed=atom&amp;title=Heteroskedasticity-consistent_standard_errors"/>
	<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Heteroskedasticity-consistent_standard_errors&amp;action=history"/>
	<updated>2026-05-07T15:42:03Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://debianws.lexgopc.com/wiki143/index.php?title=Heteroskedasticity-consistent_standard_errors&amp;diff=5301945&amp;oldid=prev</id>
		<title>imported&gt;Bazsola: /* Problem */</title>
		<link rel="alternate" type="text/html" href="http://debianws.lexgopc.com/wiki143/index.php?title=Heteroskedasticity-consistent_standard_errors&amp;diff=5301945&amp;oldid=prev"/>
		<updated>2025-06-14T01:47:10Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Problem&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Asymptotic variances under heteroskedasticity}}&lt;br /&gt;
The topic of &amp;#039;&amp;#039;&amp;#039;heteroskedasticity-consistent&amp;#039;&amp;#039;&amp;#039; (&amp;#039;&amp;#039;&amp;#039;HC&amp;#039;&amp;#039;&amp;#039;) &amp;#039;&amp;#039;&amp;#039;standard errors&amp;#039;&amp;#039;&amp;#039; arises in [[statistics]] and [[econometrics]] in the context of [[linear regression]] and [[time series analysis]]. These are also known as &amp;#039;&amp;#039;&amp;#039;heteroskedasticity-robust standard errors&amp;#039;&amp;#039;&amp;#039; (or simply &amp;#039;&amp;#039;&amp;#039;robust standard errors&amp;#039;&amp;#039;&amp;#039;), &amp;#039;&amp;#039;&amp;#039;Eicker–Huber–White standard errors&amp;#039;&amp;#039;&amp;#039; (also &amp;#039;&amp;#039;&amp;#039;Huber–White standard errors&amp;#039;&amp;#039;&amp;#039; or &amp;#039;&amp;#039;&amp;#039;White standard errors&amp;#039;&amp;#039;&amp;#039;),&amp;lt;ref&amp;gt;{{cite web|last1=Kleiber |first1=C. |last2=Zeileis |first2=A. |year=2006 |url=http://www.r-project.org/useR-2006/Slides/Kleiber+Zeileis.pdf |title=Applied Econometrics with R |work=UseR-2006 conference |archive-url=https://web.archive.org/web/20070422030316/http://www.r-project.org/useR-2006/Slides/Kleiber%2BZeileis.pdf |archive-date=April 22, 2007 |url-status=dead }}&amp;lt;/ref&amp;gt; to recognize the contributions of [[Friedhelm Eicker]],&amp;lt;ref&amp;gt;{{Cite book |last=Eicker |first=Friedhelm |chapter=Limit Theorems for Regression with Unequal and Dependent Errors |title=Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability |year=1967 |volume=5 |issue=1 |pages=59–82 |chapter-url=http://projecteuclid.org/euclid.bsmsp/1200512981 |mr=0214223 |zbl=0217.51201 }}&amp;lt;/ref&amp;gt; [[Peter J. 
Huber]],&amp;lt;ref&amp;gt;{{Cite book | last=Huber| first=Peter J.| chapter=The behavior of maximum likelihood estimates under nonstandard conditions| title=Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability| year=1967| volume=5| issue=1| pages=221–233| chapter-url=http://projecteuclid.org/euclid.bsmsp/1200512988|  mr = 0216620| zbl=0212.21504}}&amp;lt;/ref&amp;gt; and [[Halbert White]].&amp;lt;ref&amp;gt;{{Cite journal |last=White |first=Halbert |title=A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity |journal=[[Econometrica]] |volume=48 |pages=817–838 |year=1980 |doi=10.2307/1912934 |issue=4 |mr=575027 |jstor=1912934 |citeseerx=10.1.1.11.7646 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In regression and time-series modelling, basic forms of models make use of the assumption that the errors or disturbances &amp;#039;&amp;#039;u&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;i&amp;#039;&amp;#039;&amp;lt;/sub&amp;gt; have the same variance across all observation points. When this is not the case, the errors are said to be heteroskedastic, or to have [[heteroskedasticity]], and this behaviour will be reflected in the residuals &amp;lt;math display=&amp;quot;inline&amp;quot;&amp;gt;\widehat{u}_i &amp;lt;/math&amp;gt; estimated from a fitted model. Heteroskedasticity-consistent standard errors are used to allow the fitting of a model whose residuals are heteroskedastic. The first such approach was proposed by Huber (1967), and improved procedures have since been developed for cross-sectional data, [[time-series]] data and [[GARCH| GARCH estimation]].&lt;br /&gt;
&lt;br /&gt;
Heteroskedasticity-consistent standard errors that differ from classical standard errors may indicate model misspecification. Substituting heteroskedasticity-consistent standard errors does not resolve this misspecification, which may lead to bias in the coefficients. In most situations, the problem should be found and fixed.&amp;lt;ref&amp;gt;{{Cite journal|last1=King|first1=Gary|last2=Roberts|first2=Margaret E.|date=2015|title=How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It|url=https://www.cambridge.org/core/product/identifier/S1047198700011670/type/journal_article|journal=Political Analysis|language=en|volume=23|issue=2|pages=159–179|doi=10.1093/pan/mpu015|issn=1047-1987}}&amp;lt;/ref&amp;gt; Other types of standard error adjustments, such as [[clustered standard errors]] or [[Newey–West estimator|HAC standard errors]], may be considered as extensions to HC standard errors.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
Heteroskedasticity-consistent standard errors were introduced by [[Friedhelm Eicker]],&amp;lt;ref&amp;gt;{{Cite journal|title=Asymptotic Normality and Consistency of the Least Squares Estimators for Families of Linear Regressions|year=1963|doi=10.1214/aoms/1177704156|url=https://projecteuclid.org/euclid.aoms/1177704156|last1=Eicker|first1=F.|journal=The Annals of Mathematical Statistics|volume=34|issue=2|pages=447–456|doi-access=free}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite journal|title=Limit theorems for regressions with unequal and dependent errors|journal=Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics|date=January 1967|volume=5|issue=1|pages=59–83|url=https://projecteuclid.org/euclid.bsmsp/1200512981|last1=Eicker|first1=Friedhelm}}&amp;lt;/ref&amp;gt; and popularized in econometrics by [[Halbert White]].&lt;br /&gt;
&lt;br /&gt;
==Problem==&lt;br /&gt;
Consider the linear regression model for the scalar &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;&lt;br /&gt;
y = \mathbf{x}^{\top} \boldsymbol{\beta} + \varepsilon, \,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathbf{x}&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;k&amp;#039;&amp;#039; × 1 column vector of explanatory variables (features), &amp;lt;math&amp;gt;\boldsymbol{\beta}&amp;lt;/math&amp;gt; is a &amp;#039;&amp;#039;k&amp;#039;&amp;#039; × 1 column vector of parameters to be estimated, and &amp;lt;math&amp;gt;\varepsilon&amp;lt;/math&amp;gt; is the [[Errors and residuals|error term]].&lt;br /&gt;
&lt;br /&gt;
The [[ordinary least squares]] (OLS) estimator is&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;&lt;br /&gt;
\widehat \boldsymbol{\beta}_\mathrm{OLS} = (\mathbf{X}^{\top} \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{y}. \,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathbf{y}&amp;lt;/math&amp;gt; is a vector of observations &amp;lt;math&amp;gt;y_i&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\mathbf{X}&amp;lt;/math&amp;gt; denotes the matrix of stacked &amp;lt;math&amp;gt;\mathbf{x}_i&amp;lt;/math&amp;gt; values observed in the data.&lt;br /&gt;
&lt;br /&gt;
If the [[errors and residuals in statistics|sample errors]] have equal variance &amp;lt;math&amp;gt;\sigma^2&amp;lt;/math&amp;gt; and are [[uncorrelated]], then the least-squares estimate of &amp;lt;math&amp;gt;\boldsymbol{\beta}&amp;lt;/math&amp;gt; is [[BLUE]] (best linear unbiased estimator), and its variance is estimated with&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;\hat{\mathbb{V}}\left[\widehat\boldsymbol\beta_\mathrm{OLS}\right] = s^2 (\mathbf{X}^{\top}\mathbf{X})^{-1}, \quad s^2 = \frac{\sum_{i=1}^n \widehat \varepsilon_i^2}{n-k} &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\widehat \varepsilon_i = y_i - \mathbf{x}_i^{\top} \widehat \boldsymbol{\beta}_\mathrm{OLS}&amp;lt;/math&amp;gt; are the regression residuals.&lt;br /&gt;
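&lt;br /&gt;
The OLS estimator and the classical variance formula above can be sketched numerically as follows (a minimal illustration in Python with NumPy; the simulated data, seed, and variable names are assumptions made for this example):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3

# Simulated design matrix with an intercept column (illustrative data).
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(size=n)  # homoskedastic errors here

# OLS estimate: beta_hat = (X'X)^{-1} X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Classical variance estimate: s^2 (X'X)^{-1}, with s^2 = RSS / (n - k)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)
V_classical = s2 * np.linalg.inv(X.T @ X)
se_classical = np.sqrt(np.diag(V_classical))
```

Note that &amp;lt;code&amp;gt;np.linalg.solve&amp;lt;/code&amp;gt; is used for the point estimate rather than forming the inverse explicitly, which is numerically preferable.&lt;br /&gt;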
&lt;br /&gt;
When the error terms do not have constant variance (i.e., the assumption of &amp;lt;math&amp;gt; \mathbb{E}[\varepsilon\varepsilon^{\top}] = \sigma^2 \mathbf{I}_n&amp;lt;/math&amp;gt; is untrue), the OLS estimator loses its desirable properties.  The formula for variance now cannot be simplified:&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt; \mathbb{V}\left[\widehat\boldsymbol\beta_\mathrm{OLS}\right] = \mathbb{V}\big[ (\mathbf{X}^{\top}\mathbf{X})^{-1} \mathbf{X}^{\top}\mathbf{y} \big] = (\mathbf{X}^{\top}\mathbf{X})^{-1} \mathbf{X}^{\top} \mathbf{\Sigma} \mathbf{X} (\mathbf{X}^{\top}\mathbf{X})^{-1}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; \mathbf{\Sigma} = \mathbb{V}[\varepsilon].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the OLS point estimator remains unbiased, it is not &amp;quot;best&amp;quot; in the sense of having minimum mean square error, and the OLS variance estimator &amp;lt;math&amp;gt;\hat{\mathbb{V}} \left[ \widehat \boldsymbol{\beta}_\mathrm{OLS} \right]&amp;lt;/math&amp;gt; does not provide a consistent estimate of the variance of the OLS estimates.&lt;br /&gt;
&lt;br /&gt;
For any non-linear model (for instance [[logit]] and [[probit]] models), however, heteroskedasticity has more severe consequences: the [[maximum likelihood estimation|maximum likelihood estimates]] of the parameters will be biased (in an unknown direction), as well as inconsistent (unless the likelihood function is modified to correctly take into account the precise form of heteroskedasticity).&amp;lt;ref&amp;gt;{{cite web |first=Dave |last=Giles |title=Robust Standard Errors for Nonlinear Models |work=Econometrics Beat |date=May 8, 2013 |url=http://davegiles.blogspot.com/2013/05/robust-standard-errors-for-nonlinear.html }}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite journal |first=Michael |last=Guggisberg |title=Misspecified Discrete Choice Models and Huber-White Standard Errors |journal=[[Journal of Econometric Methods]] |year=2019 |volume=8 |issue=1 |doi=10.1515/jem-2016-0002 }}&amp;lt;/ref&amp;gt; As pointed out by [[William Greene (economist)|Greene]], “simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption.”&amp;lt;ref&amp;gt;{{cite book |last=Greene |first=William H. |author-link=William Greene (economist) |title=Econometric Analysis |edition=Seventh |location=Boston |publisher=Pearson Education |year=2012 |isbn=978-0-273-75356-8 |pages=692–693 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Solution==&lt;br /&gt;
&lt;br /&gt;
If the regression errors &amp;lt;math&amp;gt;\varepsilon_i&amp;lt;/math&amp;gt; are independent, but have distinct variances &amp;lt;math&amp;gt;\sigma^2_i&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;\mathbf{\Sigma} = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2)&amp;lt;/math&amp;gt; which can be estimated with &amp;lt;math&amp;gt;\widehat\sigma_i^2 = \widehat \varepsilon_i^2&amp;lt;/math&amp;gt;. This provides White&amp;#039;s (1980) estimator, often referred to as &amp;#039;&amp;#039;HCE&amp;#039;&amp;#039; (heteroskedasticity-consistent estimator):&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
\hat{\mathbb{V}}_\text{HCE} \big[ \widehat \boldsymbol{\beta}_\text{OLS} \big] &amp;amp;= \frac{1}{n} \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^{\top} \bigg)^{-1} \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^\top \widehat{\varepsilon}_i^2 \bigg) \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^{\top} \bigg)^{-1} \\&lt;br /&gt;
&amp;amp;= ( \mathbf{X}^{\top} \mathbf{X} )^{-1} ( \mathbf{X}^{\top} \operatorname{diag}(\widehat \varepsilon_1^2, \ldots, \widehat \varepsilon_n^2)  \mathbf{X} ) ( \mathbf{X}^{\top} \mathbf{X})^{-1},&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where as above &amp;lt;math&amp;gt;\mathbf{X}&amp;lt;/math&amp;gt; denotes the matrix of stacked &amp;lt;math&amp;gt;\mathbf{x}_i^{\top}&amp;lt;/math&amp;gt; values from the data. The estimator can be derived in terms of the [[generalized method of moments]] (GMM).&lt;br /&gt;
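&lt;br /&gt;
The sandwich form above can be computed directly. The following sketch (Python with NumPy; the simulated heteroskedastic data are an illustrative assumption) contrasts the HCE standard errors with the classical ones:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1.0, 5.0, size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=x, size=n)  # error variance grows with x

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# White sandwich: (X'X)^{-1} [X' diag(e_i^2) X] (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
se_hce = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

# Classical standard errors (inconsistent under heteroskedasticity).
s2 = resid @ resid / (n - X.shape[1])
se_classical = np.sqrt(np.diag(s2 * XtX_inv))
```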
&lt;br /&gt;
Also often discussed in the literature (including White&amp;#039;s paper) is the covariance matrix &amp;lt;math&amp;gt;\mathbf{\Omega}&amp;lt;/math&amp;gt; of the limiting distribution of the &amp;lt;math&amp;gt;\sqrt{n}&amp;lt;/math&amp;gt;-scaled estimation error, along with its estimator &amp;lt;math&amp;gt;\widehat\mathbf{\Omega}_n&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;&lt;br /&gt;
\sqrt{n}(\widehat \boldsymbol{\beta}_n - \boldsymbol{\beta})  \, \xrightarrow{d} \, \mathcal{N}(\mathbf{0}, \mathbf{\Omega}),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;&lt;br /&gt;
\mathbf{\Omega} = \mathbb{E}[\mathbf{x} \mathbf{x}^{\top}]^{-1} \mathbb{V}[\mathbf{x} \varepsilon] \mathbb{E}[\mathbf{x} \mathbf{x}^{\top}]^{-1},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
\widehat\mathbf{\Omega}_n &amp;amp;= \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^{\top} \bigg)^{-1} \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^{\top} \widehat \varepsilon_i^2 \bigg) \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^{\top} \bigg)^{-1} \\&lt;br /&gt;
&amp;amp;= n ( \mathbf{X}^{\top} \mathbf{X} )^{-1} ( \mathbf{X}^{\top} \operatorname{diag}(\widehat \varepsilon_1^2, \ldots, \widehat \varepsilon_n^2)  \mathbf{X} ) ( \mathbf{X}^{\top} \mathbf{X})^{-1}&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus,&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\widehat \mathbf{\Omega}_n = n \cdot \hat{\mathbb{V}}_\text{HCE}[\widehat \boldsymbol{\beta}_\text{OLS}]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\widehat \mathbb{V}[\mathbf{x} \varepsilon] = \frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^{\top} \widehat \varepsilon_i^2 = \frac{1}{n} \mathbf{X}^{\top} \operatorname{diag}(\widehat \varepsilon_1^2, \ldots, \widehat \varepsilon_n^2)  \mathbf{X}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Precisely which covariance matrix is of concern is a matter of context.&lt;br /&gt;
&lt;br /&gt;
Alternative estimators have been proposed in MacKinnon &amp;amp; White (1985) that correct for unequal variances of regression residuals due to different [[Leverage (statistics)|leverage]].&amp;lt;ref&amp;gt;{{Cite journal |last1=MacKinnon |first1=James G. |author-link=James G. MacKinnon |last2=White |first2=Halbert |author2-link=Halbert White |title=Some Heteroskedastic-Consistent Covariance Matrix Estimators with Improved Finite Sample Properties |journal=[[Journal of Econometrics]] |volume=29 |issue=3 |pages=305–325 |year=1985 |doi=10.1016/0304-4076(85)90158-7 |hdl=10419/189084 |hdl-access=free }}&amp;lt;/ref&amp;gt; Unlike White&amp;#039;s asymptotic estimator, their estimators are unbiased when the data are homoskedastic.&lt;br /&gt;
&lt;br /&gt;
Of the four widely available options, often denoted HC0–HC3, the HC3 specification appears to work best, with tests relying on the HC3 estimator featuring better power and empirical rejection rates closer to the nominal [[Statistical hypothesis testing#Definition of terms|size]], especially in small samples. The larger the sample, the smaller the difference between the estimators.&amp;lt;ref&amp;gt;{{Cite journal |last=Long |first=J. Scott |last2=Ervin |first2=Laurie H. |date=2000 |title=Using Heteroscedasticity Consistent Standard Errors in the Linear Regression Model |url=https://www.jstor.org/stable/2685594 |journal=The American Statistician |volume=54 |issue=3 |pages=217–224 |doi=10.2307/2685594 |issn=0003-1305|url-access=subscription }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
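&lt;br /&gt;
The HC0–HC3 variants differ only in how the squared residuals are rescaled, with HC2 and HC3 dividing by powers of &amp;lt;math&amp;gt;1 - h_i&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;h_i&amp;lt;/math&amp;gt; is the leverage of observation &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;. A minimal sketch of all four (Python with NumPy; the weighting formulas follow MacKinnon &amp;amp; White (1985), while the simulated data are an illustrative assumption):&lt;br /&gt;

```python
import numpy as np

def hc_covariances(X, y):
    """Covariance matrices HC0-HC3 for an OLS fit (illustrative sketch)."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    resid = y - X @ (XtX_inv @ X.T @ y)
    # Leverages: diagonal of the hat matrix H = X (X'X)^{-1} X'.
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)
    weights = {
        "HC0": resid**2,                    # White (1980)
        "HC1": resid**2 * n / (n - k),      # degrees-of-freedom correction
        "HC2": resid**2 / (1.0 - h),        # leverage adjustment
        "HC3": resid**2 / (1.0 - h) ** 2,   # jackknife-like adjustment
    }
    return {
        name: XtX_inv @ (X.T @ (X * w[:, None])) @ XtX_inv
        for name, w in weights.items()
    }

rng = np.random.default_rng(2)
n = 100
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + x + rng.normal(scale=np.abs(x) + 0.5, size=n)
covs = hc_covariances(X, y)
```

Because the HC2 and HC3 weights inflate every squared residual, the resulting variance estimates are never smaller than HC0 entry by entry on the diagonal.&lt;br /&gt;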
&lt;br /&gt;
An alternative to explicitly modelling the heteroskedasticity is to use a [[Resampling (statistics)|resampling method]] such as the [[Bootstrapping (statistics)#Wild bootstrap|wild bootstrap]]. Heteroskedasticity-robust standard errors nevertheless remain useful, because the [[Bootstrapping (statistics)#Methods for bootstrap confidence intervals|studentized bootstrap]], which standardizes the resampled statistic by its standard error, yields an asymptotic refinement.&amp;lt;ref&amp;gt;{{Cite book |last=Davison |first=Anthony C. |url=http://worldcat.org/oclc/740960962 |title=Bootstrap methods and their application |date=2010 |publisher=Cambridge Univ. Press |isbn=978-0-521-57391-7 |oclc=740960962}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
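&lt;br /&gt;
A basic (non-studentized) wild bootstrap can be sketched as follows: the fitted values are perturbed by sign-flipped residuals, the model is refit on each replicate, and the spread of the replicated coefficients estimates their sampling variability (Python with NumPy; the data and the choice of Rademacher weights are illustrative assumptions):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(3)
n, B = 200, 999
x = rng.uniform(0.0, 2.0, size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + x, size=n)  # heteroskedastic errors

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
fitted = X @ beta_hat
resid = y - fitted

# Wild bootstrap: multiply each residual by an independent Rademacher weight
# (+1 or -1 with equal probability), rebuild y, and refit the regression.
slopes = np.empty(B)
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)
    y_star = fitted + v * resid
    slopes[b] = (XtX_inv @ X.T @ y_star)[1]

se_wild = slopes.std(ddof=1)  # wild-bootstrap standard error of the slope
```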
&lt;br /&gt;
Instead of accounting for the heteroskedastic errors, most linear models can be transformed to feature homoskedastic error terms (unless the error term is heteroskedastic by construction, e.g. in a [[linear probability model]]). One way to do this is to use [[weighted least squares]], which also has improved efficiency properties.&lt;br /&gt;
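&lt;br /&gt;
As a sketch of the weighted least squares transformation, suppose the error standard deviation is known to be proportional to a regressor &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; (an assumption made for this example); dividing each observation by &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; then yields homoskedastic errors:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
x = rng.uniform(1.0, 4.0, size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(scale=x, size=n)  # error sd proportional to x_i

# Weighted least squares: divide every row by sigma_i (proportional to x_i here),
# so the transformed errors have constant variance.
w = 1.0 / x
Xw = X * w[:, None]
yw = y * w
beta_wls = np.linalg.solve(Xw.T @ Xw, Xw.T @ yw)
```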
&lt;br /&gt;
==See also==&lt;br /&gt;
{{Div col|colwidth=20em}}&lt;br /&gt;
*[[Delta method]]&lt;br /&gt;
*[[Generalized least squares]]&lt;br /&gt;
*[[Generalized estimating equation]]s&lt;br /&gt;
*[[Weighted least squares]], an alternative formulation&lt;br /&gt;
*[[White test]], a test for whether heteroskedasticity is present&lt;br /&gt;
*[[Newey–West estimator]]&lt;br /&gt;
*[[Quasi-maximum likelihood estimate]]&lt;br /&gt;
{{Div col end}}&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
* [[EViews]]: EViews version 8 offers three different methods for robust least squares: M-estimation (Huber, 1973), S-estimation (Rousseeuw and Yohai, 1984), and MM-estimation (Yohai, 1987).&amp;lt;ref&amp;gt;{{Cite web|url=http://www.eviews.com/EViews8/ev8ecrobust_n.html|title=EViews 8 Robust Regression}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Julia (programming language)|Julia]]: the &amp;lt;code&amp;gt;CovarianceMatrices&amp;lt;/code&amp;gt; package offers several methods for heteroskedastic robust variance covariance matrices.&amp;lt;ref&amp;gt;[https://github.com/gragusa/CovarianceMatrices.jl CovarianceMatrices: Robust Covariance Matrix Estimators]&amp;lt;/ref&amp;gt; &lt;br /&gt;
* [[MATLAB]]: See the &amp;lt;code&amp;gt;hac&amp;lt;/code&amp;gt; function in the Econometrics toolbox.&amp;lt;ref&amp;gt;{{cite web |title=Heteroskedasticity and autocorrelation consistent covariance estimators |work=Econometrics Toolbox |url=https://www.mathworks.com/help/econ/hac.html}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Python (programming language)|Python]]: the &amp;lt;code&amp;gt;statsmodels&amp;lt;/code&amp;gt; package offers various robust standard error estimates; see [http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.RegressionResults.html statsmodels.regression.linear_model.RegressionResults] for further details.&lt;br /&gt;
* [[R (programming language)|R]]: the &amp;lt;code&amp;gt;vcovHC()&amp;lt;/code&amp;gt; command from the {{mono|sandwich}} package.&amp;lt;ref&amp;gt;[https://cran.r-project.org/web/packages/sandwich/index.html sandwich: Robust Covariance Matrix Estimators]&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite book |first1=Christian |last1=Kleiber |first2=Achim |last2=Zeileis |title=Applied Econometrics with R |location=New York |publisher=Springer |year=2008 |isbn=978-0-387-77316-2 |pages=106–110 |url=https://books.google.com/books?id=86rWI7WzFScC&amp;amp;pg=PA106 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[RATS (statistical package)|RATS]]: {{mono|robusterrors}} option is available in many of the regression and optimization commands ({{mono|linreg}}, {{mono|nlls}}, etc.).&lt;br /&gt;
* [[Stata]]: &amp;lt;code&amp;gt;robust&amp;lt;/code&amp;gt; option applicable in many pseudo-likelihood based procedures.&amp;lt;ref&amp;gt;See online help for [https://www.stata.com/manuals15/p_robust.pdf &amp;lt;code&amp;gt;_robust&amp;lt;/code&amp;gt;] option and [https://www.stata.com/manuals15/rregress.pdf &amp;lt;code&amp;gt;regress&amp;lt;/code&amp;gt;] command.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Gretl]]: the option &amp;lt;code&amp;gt;--robust&amp;lt;/code&amp;gt; to several estimation commands (such as &amp;lt;code&amp;gt;ols&amp;lt;/code&amp;gt;) in the context of a cross-sectional dataset produces robust standard errors.&amp;lt;ref&amp;gt;{{cite web |title=Robust covariance matrix estimation |work=Gretl User&amp;#039;s Guide, chapter 22 |url=http://gretl.sourceforge.net/gretl-help/gretl-guide.pdf }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{Reflist}}&lt;br /&gt;
&lt;br /&gt;
==Further reading==&lt;br /&gt;
* {{cite journal |first=David A. |last=Freedman |author-link=David A. Freedman |title=On The So-Called &amp;#039;Huber Sandwich Estimator&amp;#039; and &amp;#039;Robust Standard Errors&amp;#039; |journal=The American Statistician |volume=60 |year=2006 |issue=4 |pages=299–302 |doi=10.1198/000313006X152207 |s2cid=6222876 }}&lt;br /&gt;
* {{Cite book |first=James W. |last=Hardin |chapter=The Sandwich Estimate of Variance |pages=45–74 |title=Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later |editor-first=Thomas B. |editor-last=Fomby |editor2-first=R. Carter |editor2-last=Hill |location=Amsterdam |publisher=Elsevier |year=2003 |isbn=0-7623-1075-8 }}&lt;br /&gt;
* {{Cite journal |last1=Hayes |first1=Andrew F. |last2=Cai |first2=Li |title=Using heteroskedasticity-consistent standard error estimators in OLS regression: An introduction and software implementation |journal=Behavior Research Methods |volume=39 |issue=4 |pages=709–722 |doi=10.3758/BF03192961 |pmid=18183883 |year=2007 |doi-access=free }}&lt;br /&gt;
* {{cite journal |first1=Gary |last1=King |author-link=Gary King (political scientist) |first2=Margaret E. |last2=Roberts |title=How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It |journal=[[Political Analysis (journal)|Political Analysis]] |volume=23 |issue=2 |year=2015 |pages=159–179 |doi=10.1093/pan/mpu015 |url=http://nrs.harvard.edu/urn-3:HUL.InstRepos:13572089 }}&lt;br /&gt;
* {{Cite book |last=Wooldridge |first=Jeffrey M. |author-link=Jeffrey Wooldridge |chapter=Heteroskedasticity-Robust Inference after OLS Estimation |title=Introductory Econometrics : A Modern Approach |edition=Fourth |location=Mason |publisher=South-Western |year=2009 |isbn=978-0-324-66054-8 |pages=265–271 }}&lt;br /&gt;
* Buja, Andreas, et al. &amp;quot;Models as approximations: a conspiracy of random regressors and model deviations against classical inference in regression.&amp;quot; Statistical Science (2015): 1. [http://www-stat.wharton.upenn.edu/~buja/PAPERS/Buja_et_al_A_Conspiracy-rev1.pdf pdf]&lt;br /&gt;
&lt;br /&gt;
[[Category:Regression analysis]]&lt;br /&gt;
[[Category:Simultaneous equation methods (econometrics)]]&lt;br /&gt;
[[Category:Estimation methods]]&lt;br /&gt;
[[Category:Regression with time series structure]]&lt;/div&gt;</summary>
		<author><name>imported&gt;Bazsola</name></author>
	</entry>
</feed>