
Autocorrelation


In the regression context, the classical linear regression model assumes that there is no correlation between the disturbance terms. Symbolically:

Cov(u_i, u_j) = E(u_i u_j) = 0,   i ≠ j



If there is dependence between the error terms, we have autocorrelation. Symbolically:

Cov(u_i, u_j) ≠ 0,   i ≠ j



Forms of autocorrelation:

(1) first-order autocorrelation:   u_t = f(u_{t-1})

(2) higher-order autocorrelation:  u_t = f(u_{t-1}, u_{t-2}, ...)



Usually we assume that the autocorrelation between the error terms is linear, namely:

u_t = a_1 u_{t-1} + v_t

where v_t satisfies all the CLRM assumptions.



We concentrate on first-order autocorrelation, u_t = a_1 u_{t-1} + v_t. The estimated formula for a_1 is

â_1 = ( Σ_{t=2}^{T} û_t û_{t-1} ) / ( Σ_{t=2}^{T} û_{t-1}² )

The coefficient of autocorrelation is

ρ̂ = ( Σ_{t=2}^{T} û_t û_{t-1} ) / ( Σ_{t=1}^{T} û_t² ) ≈ â_1
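
As a quick numerical illustration, the following Python sketch (assuming NumPy is available; the residual series û_t is simulated here rather than taken from a fitted regression) computes â_1 by regressing û_t on û_{t-1} and ρ̂ from the ratio of sums above. For a long series the two estimates are close.

import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for OLS residuals: an AR(1) series with a_1 = 0.6 (illustrative value).
T = 500
a1_true = 0.6
u_hat = np.zeros(T)
for t in range(1, T):
    u_hat[t] = a1_true * u_hat[t - 1] + rng.normal()

# a_1 hat: OLS slope from regressing u_t on u_{t-1} (no intercept).
a1_hat = np.sum(u_hat[1:] * u_hat[:-1]) / np.sum(u_hat[:-1] ** 2)

# rho hat: sum of u_t * u_{t-1} over the full sum of squared residuals.
rho_hat = np.sum(u_hat[1:] * u_hat[:-1]) / np.sum(u_hat ** 2)

print(a1_hat, rho_hat)  # the two estimates are close to each other and to 0.6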



The first-order autocorrelation of u_t can then be expressed as:

u_t = ρ u_{t-1} + v_t



ρ takes values in [-1, 1]. When ρ > 0, the disturbances are positively autocorrelated; when ρ < 0, they are negatively autocorrelated.



If u_t is a first-order autoregressive (AR(1)) process, it has mean, variance and covariances as follows:

E(u_t) = E(ρ u_{t-1} + v_t) = ρ E(u_{t-1}) + E(v_t) = 0

Var(u_t) = E(u_t²) = E(ρ u_{t-1} + v_t)² = ρ² Var(u_t) + σ_v²,  so that  σ_u² = σ_v² / (1 − ρ²)

Cov(u_t, u_{t-1}) = E(u_t u_{t-1}) = E[(ρ u_{t-1} + v_t) u_{t-1}] = ρ σ_u²

Cov(u_t, u_{t-s}) = ρ^s σ_u²

Collecting these, the covariance matrix of the disturbance vector is

Ω = E(UU') = σ_u² ×
    |  1         ρ         ρ²        ...   ρ^(T-1)  |
    |  ρ         1         ρ         ...   ρ^(T-2)  |
    |  ρ²        ρ         1         ...   ρ^(T-3)  |
    |  ...       ...       ...       ...   ...      |
    |  ρ^(T-1)   ρ^(T-2)   ρ^(T-3)   ...   1        |
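
These moment formulas are easy to check by simulation. The Python sketch below (assuming NumPy; ρ = 0.7 and σ_v = 1 are illustrative values, not taken from the text) generates a long AR(1) disturbance series and compares the sample moments with σ_v²/(1 − ρ²) and ρ^s σ_u².

import numpy as np

rng = np.random.default_rng(1)

rho, sigma_v = 0.7, 1.0            # illustrative AR(1) parameters
T = 100_000                        # long series so that sample moments settle down

u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal(scale=sigma_v)
u = u[1000:]                       # drop a burn-in so the series is effectively stationary

sigma_u2 = sigma_v**2 / (1 - rho**2)               # theoretical Var(u_t)
print(np.var(u), sigma_u2)                         # sample vs theoretical variance
print(np.mean(u[1:] * u[:-1]), rho * sigma_u2)     # Cov(u_t, u_{t-1}) = rho * sigma_u^2
print(np.mean(u[3:] * u[:-3]), rho**3 * sigma_u2)  # Cov(u_t, u_{t-3}) = rho^3 * sigma_u^2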


OLS estimation in the presence of autocorrelation

The OLS estimator β̂_1 is still unbiased, but its variance is no longer minimum:

Var(β̂_1) = E(β̂_1 − β_1)² = E( Σ_{t=1}^{T} x_t u_t / Σ_{t=1}^{T} x_t² )²
          = ( 1 / Σ_{t=1}^{T} x_t² )² E( x_1 u_1 + x_2 u_2 + ... + x_T u_T )²
          = ( 1 / Σ_{t=1}^{T} x_t² )² [ Σ_{t=1}^{T} x_t² E(u_t²) + 2 Σ_{t<s} x_t x_s E(u_t u_s) ]

When x_t and u_t are both positively autocorrelated, the cross-product term is positive, so the usual OLS formula (which ignores it) underestimates the variance of β̂_1. The OLS estimator no longer has minimum variance.
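
The underestimation can be seen in a small Monte Carlo experiment. The Python sketch below (NumPy only; all parameter values are illustrative) draws a positively autocorrelated regressor and a positively autocorrelated AR(1) disturbance, then compares the sampling variance of β̂_1 across replications with the average variance reported by the usual OLS formula s² / Σ(x_t - x̄)².

import numpy as np

rng = np.random.default_rng(2)

T, reps = 100, 2000
beta0, beta1 = 1.0, 1.0            # illustrative true coefficients
rho_x, rho_u = 0.8, 0.8            # positive autocorrelation in both x_t and u_t

def ar1(rho, n):
    """Zero-mean AR(1) series of length n with unit-variance innovations."""
    z = np.zeros(n)
    for t in range(1, n):
        z[t] = rho * z[t - 1] + rng.normal()
    return z

slopes, formula_var = [], []
for _ in range(reps):
    x = ar1(rho_x, T)
    u = ar1(rho_u, T)
    y = beta0 + beta1 * x + u
    b1, b0 = np.polyfit(x, y, 1)                          # OLS slope and intercept
    resid = y - (b0 + b1 * x)
    s2 = np.sum(resid ** 2) / (T - 2)                     # usual estimate of sigma^2
    slopes.append(b1)
    formula_var.append(s2 / np.sum((x - x.mean()) ** 2))  # textbook Var(beta1_hat)

print("sampling variance of beta1_hat:", np.var(slopes))
print("average variance from the usual OLS formula:", np.mean(formula_var))
# With both series positively autocorrelated, the usual formula typically
# reports a variance well below the true sampling variance.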



Detection of autocorrelation


1.  Graphical method

(1) Run the regression on the given data, calculate the residuals û_t, and draw the residual figure (the residuals plotted against time).

(2) Analyze the residual figures.
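
A residual-against-time plot for step (1) can be produced as follows (a sketch assuming NumPy and Matplotlib; u_hat is a placeholder array standing in for the residuals from your own regression):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
u_hat = rng.normal(size=80)        # placeholder: replace with your OLS residuals

plt.plot(u_hat, marker="o")        # residuals against time (observation index)
plt.axhline(0.0, linestyle="--")   # zero line helps reveal runs of same-signed residuals
plt.xlabel("t")
plt.ylabel("residual")
plt.title("Residuals over time")
plt.show()

Long runs of residuals with the same sign suggest positive autocorrelation; rapid sign switching from one observation to the next suggests negative autocorrelation.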


2.  DW test (Durbin-Watson test)

The Durbin-Watson d statistic:

d = Σ_{t=2}^{n} (û_t − û_{t-1})² / Σ_{t=1}^{n} û_t²





The d statistic is simply the ratio of the sum of squared differences in successive residuals to the residual sum of squares (RSS). A great advantage of the d statistic is that it is simple to compute from the estimated residuals.
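
A direct translation of the d formula into code (a Python sketch assuming NumPy; u_hat again stands for the residuals of the fitted regression):

import numpy as np

def durbin_watson(u_hat):
    """d = sum_{t=2}^{n} (u_t - u_{t-1})^2 / sum_{t=1}^{n} u_t^2."""
    u_hat = np.asarray(u_hat, dtype=float)
    return np.sum(np.diff(u_hat) ** 2) / np.sum(u_hat ** 2)

# Example with placeholder residuals: values near 2 suggest no first-order
# serial correlation; values near 0 or 4 suggest positive or negative
# serial correlation respectively.
rng = np.random.default_rng(4)
print(durbin_watson(rng.normal(size=50)))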







Assumptions underlying the d statistic:

(1) The regression model includes an intercept term.

(2) The explanatory variables, the X's, are non-stochastic, or fixed in repeated sampling.

(3) The disturbances u_t are generated by the first-order autoregressive scheme: u_t = ρ u_{t-1} + ε_t.

(4) The error term u_t is assumed to be normally distributed.

(5) The regression model does not include lagged values of the dependent variable among the explanatory variables.

(6) There are no missing observations in the data.




The probability distribution of the d statistic depends in a complicated way on the X values present in a given sample. As a result, there is no unique critical value that will lead to the rejection or acceptance of the null hypothesis that there is no first-order serial correlation in the disturbances u_t.






DW critical values and decision rules:

Expanding the numerator of d (and using Σ û_{t-1}² ≈ Σ û_t² for large samples),

d = Σ (û_t − û_{t-1})² / Σ û_t²
  = ( Σ û_t² + Σ û_{t-1}² − 2 Σ û_t û_{t-1} ) / Σ û_t²
  ≈ 2 ( 1 − Σ û_t û_{t-1} / Σ û_t² )

Now let's define ρ̂ = Σ û_t û_{t-1} / Σ û_t², so that

d ≈ 2 (1 − ρ̂),   0 ≤ d ≤ 4

If ρ̂ = 0, then d = 2 and there is no serial correlation.

Decision zones for the d statistic, with lower and upper critical values d_L and d_U:

    [0, d_L]            positive serial correlation (reject H0)
    [d_L, d_U]          inconclusive
    [d_U, 4 − d_U]      no serial correlation (do not reject H0)
    [4 − d_U, 4 − d_L]  inconclusive
    [4 − d_L, 4]        negative serial correlation (reject H0)




The closer d is to 0, the greater the evidence of positive serial correlation; the closer d is to 4, the greater the evidence of negative serial correlation.




The mechanics of the Durbin-Watson test are as follows: (1) run the OLS regression and obtain the residuals; (2) compute the d statistic; (3) for the given sample size and given number of explanatory variables, find the critical d_L and d_U values; (4) follow the decision rules to reject or accept the null hypothesis.
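
Putting step (4) into code, a sketch of the decision rule (plain Python; the d_L and d_U values below are placeholders only, since the actual critical values must be looked up in a Durbin-Watson table for your sample size and number of explanatory variables):

def dw_decision(d, d_L, d_U):
    """Classify a Durbin-Watson d statistic given tabulated critical values d_L < d_U."""
    if d < d_L:
        return "reject H0: evidence of positive serial correlation"
    if d < d_U:
        return "inconclusive"
    if d <= 4 - d_U:
        return "do not reject H0: no first-order serial correlation"
    if d <= 4 - d_L:
        return "inconclusive"
    return "reject H0: evidence of negative serial correlation"

# Placeholder critical values; replace with the tabulated d_L, d_U for your data.
d_L, d_U = 1.5, 1.7
print(dw_decision(0.9, d_L, d_U))   # positive serial correlation
print(dw_decision(2.1, d_L, d_U))   # no first-order serial correlation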




The d test has a great drawback: if d falls in the indecisive zone, one cannot conclude that first-order autocorrelation does or does not exist.

