3. SEEMINGLY UNRELATED REGRESSIONS (SUR)

[1] Examples

Demand for some commodities:
y_Nike,t = x_Nike,t′β_Nike + ε_Nike,t;
y_Reebok,t = x_Reebok,t′β_Reebok + ε_Reebok,t,
where y_Nike,t is the quantity demanded of Nike sneakers, x_Nike,t is a k_Nike×1 vector of regressors such as the unit price of Nike sneakers, prices of other sneakers, income, etc., and t indexes time.

Grunfeld's investment data:
I: gross investment ($million)
F: market value of the firm at the end of the previous year
C: value of the firm's capital at the end of the previous year
I_it = β_1i + β_2i F_it + β_3i C_it + ε_it, where i = GM, CH (Chrysler), GE, etc.
Notice that although the same regressors are used for each i, the values of the regressors are different across different i.

CAPM (Capital Asset Pricing Model)
r_it - r_ft: excess return on security i over a risk-free security.
r_mt - r_ft: excess market return.
r_it - r_ft = α_i + β_i(r_mt - r_ft) + ε_it.
Notice that the values of the regressors are the same for every security.
VAR (Vector Autoregressions)
g_CPI,t = growth rate of CPI (inflation rate)
g_GDP,t = growth rate of nominal GDP
g_CPI,t = α_CPI + β_CPI g_CPI,t-1 + γ_CPI g_GDP,t-1 + ε_CPI,t
g_GDP,t = α_GDP + β_GDP g_CPI,t-1 + γ_GDP g_GDP,t-1 + ε_GDP,t
Notice that the values of the regressors are the same.

Cobb-Douglas cost function system
A cost function is a function of output and input prices. Assume M inputs (labor, capital, land, material, energy, etc.).
Cobb-Douglas cost function:

ln C = α + β_y ln Y + Σ_{j=1}^M β_j ln(P_j) + ε_c.   (CD.1)

Share functions (s_j = P_j x_j/C, where x_j = quantity of input j used):

s_j = ∂ln C/∂ln P_j = β_j + ε_j,  j = 1, ..., M.   (CD.2)

Observe that Σ_{j=1}^M s_j = 1. This implies that Σ_{j=1}^M β_j = 1 and Σ_{j=1}^M ε_j = 0.

Translog cost function system
Assume three inputs (j = 1, 2, and 3 index the inputs: capital (k), labor (l), and fuel (f), respectively).
Translog cost function:

ln C = α + Σ_{j=1}^3 β_j ln P_j + 0.5 Σ_{j=1}^3 Σ_{i=1}^3 δ_ji ln P_j ln P_i + Σ_{j=1}^3 γ_y,j ln P_j ln Y + θ_y ln Y + 0.5 θ_yy (ln Y)² + ε_c,   (T.1)

where δ_ji = δ_ij. (See Greene Ch. 14.) This is a generalization of the Cobb-Douglas cost function.
If the production function is homothetic, γ_y,1 = γ_y,2 = γ_y,3 = 0.
If the production function is Cobb-Douglas, δ_ji = 0, γ_y,j = 0, and θ_yy = 0, for all j and i.
If the production function is Cobb-Douglas with constant returns to scale, δ_ji = 0, γ_y,j = 0, and θ_yy = 0, for all j and i; and θ_y = 1.

Share functions:

s_1 = ∂ln C/∂ln P_1 = β_1 + Σ_{i=1}^3 δ_1i ln P_i + γ_y,1 ln Y + ε_1;
s_2 = ∂ln C/∂ln P_2 = β_2 + Σ_{i=1}^3 δ_2i ln P_i + γ_y,2 ln Y + ε_2;   (T.2)
s_3 = ∂ln C/∂ln P_3 = β_3 + Σ_{i=1}^3 δ_3i ln P_i + γ_y,3 ln Y + ε_3.

Note that s_1 + s_2 + s_3 = 1. Thus, Σ_{j=1}^3 β_j = 1; Σ_{j=1}^3 δ_ji = 0 for each i; Σ_{j=1}^3 γ_y,j = 0; and Σ_{j=1}^3 ε_j = 0.
[2] Basic Model

Model:
y_1 = X_1β_1 + ε_1;
y_2 = X_2β_2 + ε_2;
 :
y_n = X_nβ_n + ε_n,
where y_i is T×1, X_i is T×k_i, β_i is k_i×1, ε_i = (ε_i1, ..., ε_iT)′ is T×1, and i = 1, ..., n.

Assumptions on the model:
1) SIC hold. (In fact, WIC are sufficient.)
2) cov(ε_it, ε_jt) = E(ε_it ε_jt) = σ_ij for any t, i and j; cov(ε_it, ε_it) = var(ε_it) = σ_ii.
3) cov(ε_it, ε_js) = E(ε_it ε_js) = 0 for any t ≠ s.
4) The ε_it are normally distributed. (For simplicity; not required.)

Implication of nonzero σ_ij: an unobservable macro shock at time t can influence all of the y_it.
Form of E(ε_iε_j′):

                                            E(ε_i1ε_j1)  E(ε_i1ε_j2)  ...  E(ε_i1ε_jT)
E(ε_iε_j′) = E[(ε_i1, ..., ε_iT)′(ε_j1, ..., ε_jT)] =
                                            E(ε_i2ε_j1)  E(ε_i2ε_j2)  ...  E(ε_i2ε_jT)
                                            :            :                 :
                                            E(ε_iTε_j1)  E(ε_iTε_j2)  ...  E(ε_iTε_jT)

     σ_ij  0     ...  0
  =  0     σ_ij  ...  0      =  σ_ij I_T.
     :     :          :
     0     0     ...  σ_ij

Matrix representation of the model:
y_1 = X_1β_1 + ε_1
y_2 = X_2β_2 + ε_2
 :
y_n = X_nβ_n + ε_n

     y_1        X_1  0    ...  0       β_1       ε_1
     y_2    =   0    X_2  ...  0       β_2   +   ε_2
     :          :    :         :       :         :
     y_n        0    0    ...  X_n     β_n       ε_n

or compactly y = Xβ + ε, where the stacked y is nT×1, X is nT×k with k = Σ_i k_i, β is k×1, and ε is nT×1.
Digression to Kronecker Products

Let A = [a_ij] be m×n and B = [b_ij] be p×q. The two matrices do not have to be of the same dimensions. Then,

          a_11 B  a_12 B  ...  a_1n B
A ⊗ B =   a_21 B  a_22 B  ...  a_2n B      (an mp × nq matrix).
          :       :            :
          a_m1 B  a_m2 B  ...  a_mn B

Example:

     1  2          2  0                1·B  2·B       2  0  4  0
A =         ;  B =         ;  A ⊗ B =             =   1  3  2  6
     3  4          1  3                3·B  4·B       6  0  8  0
                                                      3  9  4  12

Facts: Let A, B, C, and D be conformable matrices. Then,
(A+B) ⊗ C = (A ⊗ C) + (B ⊗ C);
(A ⊗ B)(C ⊗ D) = AC ⊗ BD, if AC and BD can be defined;
(A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}, if both A and B are invertible;
(A ⊗ B)′ = A′ ⊗ B′.
End of Digression
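These facts are easy to check numerically. A minimal Python sketch using numpy's np.kron, with the example matrices from above (the check matrices C and D are arbitrary illustrations):

import numpy as np

# Matrices from the example above
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 0.0], [1.0, 3.0]])

AkB = np.kron(A, B)   # 4 x 4 Kronecker product

# Fact: (A kron B)^{-1} = A^{-1} kron B^{-1}
print(np.allclose(np.linalg.inv(AkB),
                  np.kron(np.linalg.inv(A), np.linalg.inv(B))))   # True

# Fact: (A kron B)' = A' kron B'
print(np.allclose(AkB.T, np.kron(A.T, B.T)))                      # True

# Fact: (A kron B)(C kron D) = AC kron BD
C = np.array([[1.0, 1.0], [0.0, 2.0]])
D = np.array([[3.0, 1.0], [1.0, 1.0]])
print(np.allclose(np.kron(A, B) @ np.kron(C, D),
                  np.kron(A @ C, B @ D)))                         # True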
Covariance matrix of ε in the system y = Xβ + ε:

Let Σ = [σ_ij], an n×n matrix. (Note that σ_ij = σ_ji.)

                     E(ε_1ε_1′)  E(ε_1ε_2′)  ...  E(ε_1ε_n′)
Cov(ε) = E(εε′) =    E(ε_2ε_1′)  E(ε_2ε_2′)  ...  E(ε_2ε_n′)
                     :           :                :
                     E(ε_nε_1′)  E(ε_nε_2′)  ...  E(ε_nε_n′)

     σ_11 I_T  σ_12 I_T  ...  σ_1n I_T
  =  σ_21 I_T  σ_22 I_T  ...  σ_2n I_T     =  Σ ⊗ I_T  ≡  Ω.
     :         :              :
     σ_n1 I_T  σ_n2 I_T  ...  σ_nn I_T

If we let ε_t = (ε_1t, ..., ε_nt)′, then the ε_t are iid N(0, Σ).
[3] OLS and GLS

Two possible OLS estimators:
OLS on individual equation i: b_i = (X_i′X_i)^{-1}X_i′y_i, for i = 1, ..., n.
OLS on the stacked system y = Xβ + ε: b = (X′X)^{-1}X′y; partition b = (b_(1)′, b_(2)′, ..., b_(n)′)′ conformably with β.

Proposition 1: b_(j) = b_j, for j = 1, 2, ..., n. That is, OLS on the stacked system just reproduces the equation-by-equation OLS estimators.
<Proof>
Since X is block diagonal,

        X_1′X_1  0        ...  0
X′X =   0        X_2′X_2  ...  0
        :        :             :
        0        0        ...  X_n′X_n
and thus

              (X_1′X_1)^{-1}  0               ...  0
(X′X)^{-1} =  0               (X_2′X_2)^{-1}  ...  0    ;
              :               :                    :
              0               0               ...  (X_n′X_n)^{-1}

        X_1′y_1
X′y =   X_2′y_2   .
        :
        X_n′y_n
Thus,

     b_(1)                         (X_1′X_1)^{-1}X_1′y_1      b_1
b =  b_(2)   = (X′X)^{-1}X′y  =    (X_2′X_2)^{-1}X_2′y_2   =  b_2   .
     :                             :                          :
     b_(n)                         (X_n′X_n)^{-1}X_n′y_n      b_n

Implication: Note that Cov(ε) = Σ ⊗ I_T ≠ σ²I_nT. So, in general, OLS on y = Xβ + ε is not efficient, and neither are the individual OLS estimators. You can use the individual OLS estimators b_i with Cov(b_i) = σ_ii(X_i′X_i)^{-1}, but they are inefficient.
Proposition 2: Let Ω^{-1} = (Σ ⊗ I_T)^{-1} = Σ^{-1} ⊗ I_T. Then, the (infeasible) GLS estimator

β̂ = (X′Ω^{-1}X)^{-1}X′Ω^{-1}y

is unbiased, efficient, consistent, and asymptotically efficient.
<Proof> Obvious.

Structure of the GLS estimator:
Denote Σ^{-1} = [σ^ij]. Then,

             σ^11 X_1′X_1  σ^12 X_1′X_2  ...  σ^1n X_1′X_n
X′Ω^{-1}X =  σ^21 X_2′X_1  σ^22 X_2′X_2  ...  σ^2n X_2′X_n   ;
             :             :                  :
             σ^n1 X_n′X_1  σ^n2 X_n′X_2  ...  σ^nn X_n′X_n

             Σ_{j=1}^n σ^1j X_1′y_j
X′Ω^{-1}y =  Σ_{j=1}^n σ^2j X_2′y_j    .
             :
             Σ_{j=1}^n σ^nj X_n′y_j

(Justify this by yourself.)
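The estimator can be computed from these blocks, or equivalently by building Ω^{-1} = Σ^{-1} ⊗ I_T directly. A minimal numpy sketch of the infeasible GLS estimator, assuming Σ is known and the data are held as lists of per-equation arrays (the function name and data layout are my own, not from the notes):

import numpy as np

def sur_gls(y_list, X_list, Sigma):
    """SUR GLS: beta = (X'(Sigma^{-1} kron I_T)X)^{-1} X'(Sigma^{-1} kron I_T)y."""
    n = len(y_list)
    T = y_list[0].shape[0]
    y = np.concatenate(y_list)                 # stacked y, (nT,)
    ks = [X.shape[1] for X in X_list]
    X = np.zeros((n * T, sum(ks)))             # block-diagonal stacked X
    col = 0
    for i, Xi in enumerate(X_list):
        X[i * T:(i + 1) * T, col:col + ks[i]] = Xi
        col += ks[i]
    Oinv = np.kron(np.linalg.inv(Sigma), np.eye(T))   # Omega^{-1}
    XOX = X.T @ Oinv @ X
    beta = np.linalg.solve(XOX, X.T @ Oinv @ y)
    cov = np.linalg.inv(XOX)                   # Cov(beta_GLS)
    return beta, cov

The same function gives the feasible versions discussed later, once Σ is replaced by an estimate.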
Efficiency gain of GLS over OLS:

Digression 1: Suppose

     A_11  A_12               A^11  A^12
A =              ;  A^{-1} =              ,
     A_21  A_22               A^21  A^22

where A_11 and A_22 are invertible square matrices. Then,

A^11 = (A_11 - A_12(A_22)^{-1}A_21)^{-1} = (A_11)^{-1} + (A_11)^{-1}A_12(A_22 - A_21(A_11)^{-1}A_12)^{-1}A_21(A_11)^{-1};
A^12 = -(A_11)^{-1}A_12(A_22 - A_21(A_11)^{-1}A_12)^{-1} = -(A_11 - A_12(A_22)^{-1}A_21)^{-1}A_12(A_22)^{-1};
A^21 = -(A_22 - A_21(A_11)^{-1}A_12)^{-1}A_21(A_11)^{-1} = -(A_22)^{-1}A_21(A_11 - A_12(A_22)^{-1}A_21)^{-1};
A^22 = (A_22 - A_21(A_11)^{-1}A_12)^{-1} = (A_22)^{-1} + (A_22)^{-1}A_21(A_11 - A_12(A_22)^{-1}A_21)^{-1}A_12(A_22)^{-1}.
End of Digression 1

Digression 2 (Review): Let B = [b_ij] be an n×n symmetric matrix, and let c = (c_1, ..., c_n)′. Then, the scalar c′Bc is called a quadratic form in B. If c′Bc > (<) 0 for any nonzero vector c, B is called positive (negative) definite. If c′Bc ≥ (≤) 0 for any nonzero c, B is called positive (negative) semidefinite.
Let B be a symmetric and square matrix given by:

     b_11  b_12  ...  b_1n
B =  b_21  b_22  ...  b_2n   .
     :     :          :
     b_n1  b_n2  ...  b_nn

Define the principal minors by:

|B_1| = b_11;  |B_2| = det[b_11 b_12; b_21 b_22];  |B_3| = det[b_11 b_12 b_13; b_21 b_22 b_23; b_31 b_32 b_33];  ...

B is positive definite iff |B_1|, |B_2|, ..., |B_n| are all positive.
B is negative definite iff |B_1| < 0, |B_2| > 0, |B_3| < 0, ....
End of Digression 2

Digression 3 (Review): Let θ̂_1 and θ̂_2 be two p×1 unbiased estimators. Let c = (c_1, ..., c_p)′ be any nonzero vector. Then, θ̂_1 is said to be efficient relative to θ̂_2 iff, for every such c,

var(c′θ̂_1) ≤ var(c′θ̂_2)
⇔ c′Cov(θ̂_1)c - c′Cov(θ̂_2)c ≤ 0
⇔ c′[Cov(θ̂_2) - Cov(θ̂_1)]c ≥ 0
⇔ [Cov(θ̂_2) - Cov(θ̂_1)] is positive semidefinite.
If θ̂_1 is more efficient than θ̂_2, then var(θ̂_1,j) ≤ var(θ̂_2,j) for any j = 1, ..., p. But the reverse is not true.
End of Digression 3

Return to the efficiency gain of GLS over OLS:
Consider the case of two equations:

y_1 = X_1β_1 + ε_1;
y_2 = X_2β_2 + ε_2,

with Σ = [σ_11, σ_12; σ_12, σ_22] and Σ^{-1} = [σ^11, σ^12; σ^12, σ^22].
Σ must be positive definite; that is, σ_11 > 0 and σ_11σ_22 - (σ_12)² > 0.

Cov(β̂) = (X′Ω^{-1}X)^{-1} = [σ^11 X_1′X_1, σ^12 X_1′X_2; σ^12 X_2′X_1, σ^22 X_2′X_2]^{-1} = [A_11, A_12; A_21, A_22]^{-1} (say) = [A^11, A^12; A^21, A^22].

→ Cov(β̂_1) = A^11; Cov(β̂_2) = A^22.
Using the fact that A^11 = (A_11 - A_12(A_22)^{-1}A_21)^{-1},

Cov(β̂_1) = [σ^11 X_1′X_1 - {(σ^12)²/σ^22}X_1′X_2(X_2′X_2)^{-1}X_2′X_1]^{-1}
= [σ^11 X_1′X_1 - {(σ^12)²/σ^22}X_1′X_1 + {(σ^12)²/σ^22}X_1′X_1 - {(σ^12)²/σ^22}X_1′X_2(X_2′X_2)^{-1}X_2′X_1]^{-1}
= [{σ^11 - (σ^12)²/σ^22}X_1′X_1 + {(σ^12)²/σ^22}X_1′M(X_2)X_1]^{-1}, where M(X_2) = I_T - X_2(X_2′X_2)^{-1}X_2′
= [(1/σ_11)X_1′X_1 + {(σ^12)²/σ^22}X_1′M(X_2)X_1]^{-1}
= σ_11[X_1′X_1 + {σ_11(σ^12)²/σ^22}X_1′M(X_2)X_1]^{-1}
= σ_11[X_1′X_1 + {(σ_12)²/(σ_11σ_22 - (σ_12)²)}X_1′M(X_2)X_1]^{-1}
= σ_11[X_1′X_1 + m X_1′M(X_2)X_1]^{-1},

where m = (σ_12)²/(σ_11σ_22 - (σ_12)²), and m ≥ 0.

Note that Cov(b_1) = σ_11(X_1′X_1)^{-1}. Thus, β̂_1 is more efficient than b_1, because:

[(1/σ_11)Cov(β̂_1)]^{-1} - [(1/σ_11)Cov(b_1)]^{-1} = m X_1′M(X_2)X_1 is psd
⇒ Cov(b_1) - Cov(β̂_1) is psd
⇒ β̂_1 is more efficient.

Similarly, we can show that β̂_2 is more efficient than b_2.
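This comparison is easy to check numerically. In the sketch below, the regressors are simulated and the Σ entries are arbitrary illustrative values; the matrix Cov(b_1) - Cov(β̂_1) should have only nonnegative eigenvalues:

import numpy as np

rng = np.random.default_rng(0)
T = 50
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])
X2 = np.column_stack([np.ones(T), rng.normal(size=T)])
s11, s22, s12 = 2.0, 1.0, 0.8        # Sigma entries; pd since s11*s22 - s12**2 > 0

# Cov of equation-1 OLS: sigma_11 (X1'X1)^{-1}
cov_ols = s11 * np.linalg.inv(X1.T @ X1)

# Cov of equation-1 GLS: sigma_11 [X1'X1 + m X1'M(X2)X1]^{-1}
m = s12**2 / (s11 * s22 - s12**2)
M2 = np.eye(T) - X2 @ np.linalg.inv(X2.T @ X2) @ X2.T
cov_gls = s11 * np.linalg.inv(X1.T @ X1 + m * X1.T @ M2 @ X1)

# All eigenvalues of the difference are >= 0 (up to rounding): the gain is psd
print(np.linalg.eigvalsh(cov_ols - cov_gls))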
There are three possible cases in which β̂_1 is as efficient as b_1 (i.e., m X_1′M(X_2)X_1 = 0):
1) σ_12 = 0 → m = 0.
2) X_1 = X_2 → X_1′M(X_2)X_1 = X_1′M(X_1)X_1 = 0.
3) X_2 = [X_1, W] → X_1′M(X_2)X_1 = X_1′·0 = 0.
For 1) and 2), β̂ and b are equally efficient. But for 3), β̂_2 is still more efficient than b_2.

Three cases in which OLS = GLS:

Case I: σ_ij = 0 for any i ≠ j.
Σ = diag(σ_11, ..., σ_nn) → Σ^{-1} = diag(1/σ_11, ..., 1/σ_nn).

             (1/σ_11)X_1′X_1  0                ...  0
X′Ω^{-1}X =  0                (1/σ_22)X_2′X_2  ...  0                  ;
             :                :                     :
             0                0                ...  (1/σ_nn)X_n′X_n

             (1/σ_11)X_1′y_1
X′Ω^{-1}y =  (1/σ_22)X_2′y_2    .
             :
             (1/σ_nn)X_n′y_n
Thus,

β̂ = (X′Ω^{-1}X)^{-1}X′Ω^{-1}y

      σ_11(X_1′X_1)^{-1}  0                  ...  0                    (1/σ_11)X_1′y_1
  =   0                   σ_22(X_2′X_2)^{-1} ...  0                    (1/σ_22)X_2′y_2
      :                   :                       :                    :
      0                   0                  ...  σ_nn(X_n′X_n)^{-1}   (1/σ_nn)X_n′y_n

      (X_1′X_1)^{-1}X_1′y_1
  =   (X_2′X_2)^{-1}X_2′y_2    =  b.
      :
      (X_n′X_n)^{-1}X_n′y_n

Case II: X_1 = X_2 = ... = X_n. This is the case where the values of the regressors are the same for all equations. Call the common T×k regressor matrix X_o. Then,

      X_o  0    ...  0
X =   0    X_o  ...  0     =  I_n ⊗ X_o.
      :    :         :
      0    0    ...  X_o
Then,

β̂ = [X′Ω^{-1}X]^{-1}X′Ω^{-1}y
  = [(I_n ⊗ X_o)′(Σ^{-1} ⊗ I_T)(I_n ⊗ X_o)]^{-1}(I_n ⊗ X_o)′(Σ^{-1} ⊗ I_T)y
  = [Σ^{-1} ⊗ X_o′X_o]^{-1}(Σ^{-1} ⊗ X_o′)y
  = (Σ ⊗ (X_o′X_o)^{-1})(Σ^{-1} ⊗ X_o′)y
  = (I_n ⊗ (X_o′X_o)^{-1}X_o′)y

      (X_o′X_o)^{-1}X_o′y_1
  =   (X_o′X_o)^{-1}X_o′y_2    =  b.
      :
      (X_o′X_o)^{-1}X_o′y_n

Case III: X_1, X_2, ..., X_m ⊂ X_{m+1}, X_{m+2}, ..., X_n; that is, the regressors of each of the first m equations are contained among the regressors of every one of the remaining equations. Then β̂_i = b_i for i = 1, 2, ..., m. But the β̂_j are still more efficient than the b_j for j = m+1, ..., n.
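Case II is also easy to verify numerically: with a common regressor matrix, GLS reproduces equation-by-equation OLS whatever Σ is. A short check using the sur_gls sketch above (all values illustrative):

import numpy as np

rng = np.random.default_rng(1)
T, n = 40, 3
X0 = np.column_stack([np.ones(T), rng.normal(size=T)])   # same X in every equation
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
y_list = [X0 @ np.array([1.0, 2.0]) + rng.normal(size=T) for _ in range(n)]

beta_gls, _ = sur_gls(y_list, [X0] * n, Sigma)           # from the earlier sketch
beta_ols = np.concatenate([np.linalg.solve(X0.T @ X0, X0.T @ y) for y in y_list])
print(np.allclose(beta_gls, beta_ols))                   # True: OLS = GLS here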
[4] Feasible GLS Estimator

The GLS estimator defined above is not feasible in the sense that it depends on the unknown covariance matrix Σ. A feasible GLS (FGLS) estimator is a GLS estimator obtained by replacing Σ with a consistent estimate of it. Feasible GLS estimators cannot be said to be unbiased, but they are consistent and asymptotically equivalent to the infeasible counterpart.

(1) Two-Step Feasible GLS
Let

s_ij = (1/T)(y_i - X_i b_i)′(y_j - X_j b_j) = (1/T)Σ_{t=1}^T e_ti e_tj,

where b_i is the OLS estimator for equation i and e_ti is the corresponding OLS residual at time t; and let S = [s_ij]. It can be shown that plim_{T→∞} s_ij = σ_ij. The two-step feasible GLS estimator is then given by:

β̂_F = (X′(S^{-1} ⊗ I_T)X)^{-1}X′(S^{-1} ⊗ I_T)y.

(2) Iterative Feasible GLS
Using the two-step FGLS estimator, recompute S. Using this recomputed S, recompute the feasible GLS estimator. Repeat this procedure until the value of the FGLS estimator does not change (see the sketch below).
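A minimal sketch of both feasible estimators, built on the sur_gls function above (the function name, tolerances, and convergence details are my own choices):

import numpy as np

def sur_fgls(y_list, X_list, iterate=False, tol=1e-8, max_iter=500):
    """Feasible SUR GLS: two-step (iterate=False) or iterated (iterate=True)."""
    T = y_list[0].shape[0]
    # Step 1: equation-by-equation OLS
    betas = [np.linalg.solve(X.T @ X, X.T @ y) for y, X in zip(y_list, X_list)]
    for _ in range(max_iter):
        # s_ij = (1/T) sum_t e_ti e_tj, from the current residuals
        E = np.column_stack([y - X @ b for y, X, b in zip(y_list, X_list, betas)])
        S = E.T @ E / T
        beta, cov = sur_gls(y_list, X_list, S)   # GLS with Sigma replaced by S
        new = np.split(beta, np.cumsum([X.shape[1] for X in X_list])[:-1])
        done = all(np.allclose(b, nb, atol=tol) for b, nb in zip(betas, new))
        betas = new
        if not iterate or done:
            break
    return np.concatenate(betas), cov, S

With iterate=True, the S and β updates are repeated until the coefficients stop changing, which is the iterative FGLS described in (2).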
(3) Facts:
The two-step and iterative feasible GLS estimators are asymptotically equivalent to the MLE under the normality assumption on the ε's. In fact, the iterative feasible GLS estimator equals the MLE, numerically.
[5] MLE of β and Σ

Consider a simple regression model: y = Xβ + ε; ε ~ N(0_T, Ω). Then,

ln L = -(T/2)ln(2π) - (1/2)ln[det(Ω)] - (1/2)(y - Xβ)′Ω^{-1}(y - Xβ).

The log-likelihood function of the SUR model y = Xβ + ε, ε ~ N(0_{nT}, Σ ⊗ I_T), is therefore

ln L = -(nT/2)ln(2π) - (1/2)ln[det(Σ ⊗ I_T)] - (1/2)(y - Xβ)′(Σ^{-1} ⊗ I_T)(y - Xβ).

Facts:
Let A and B be p×p and q×q matrices. Then, det(A ⊗ B) = [det(A)]^q[det(B)]^p (Theil, p. 305). Thus, ln[det(Σ ⊗ I_T)] = T ln[det(Σ)] = -T ln[det(Σ^{-1})].
Also, (y - Xβ)′(Σ^{-1} ⊗ I_T)(y - Xβ) = Σ_i Σ_j σ^ij (y_i - X_iβ_i)′(y_j - X_jβ_j).

Hence,

ln L = -(nT/2)ln(2π) + (T/2)ln[det(Σ^{-1})] - (1/2)(y - Xβ)′(Σ^{-1} ⊗ I_T)(y - Xβ)
     = -(nT/2)ln(2π) + (T/2)ln[det(Σ^{-1})] - (1/2)Σ_i Σ_j σ^ij (y_i - X_iβ_i)′(y_j - X_jβ_j)
     = -(nT/2)ln(2π) + (T/2)ln[det(Σ^{-1})] - (1/2)Σ_i σ^ii (y_i - X_iβ_i)′(y_i - X_iβ_i) - Σ_i Σ_{j<i} σ^ij (y_i - X_iβ_i)′(y_j - X_jβ_j).

Facts:
∂ln[det(A)]/∂a_ij = a^ji for A = [a_ij] and A^{-1} = [a^ij]. For a symmetric A, ∂ln[det(A)]/∂a_ii = a^ii, and ∂ln[det(A)]/∂a_ij = 2a^ij for j ≠ i.
Maximize ln L with respect to β and the σ^ii and σ^ij to get the MLE of β and Σ.

Fact: Let A be a symmetric p×p matrix and x a p×1 vector. Then, ∂(x′Ax)/∂x = 2Ax.

FOC:
∂ln L/∂β = X′(Σ^{-1} ⊗ I_T)(y - Xβ) = 0;
∂ln L/∂σ^ii = (T/2)σ_ii - (1/2)(y_i - X_iβ_i)′(y_i - X_iβ_i) = 0;
∂ln L/∂σ^ij = Tσ_ij - (y_i - X_iβ_i)′(y_j - X_jβ_j) = 0, for j ≠ i.

Thus, the MLE estimators β̂_ML and Σ̂_ML solve:

β̂_ML = [X′(Σ̂_ML^{-1} ⊗ I_T)X]^{-1}X′(Σ̂_ML^{-1} ⊗ I_T)y;
σ̂_ij,ML = (y_i - X_i β̂_i,ML)′(y_j - X_j β̂_j,ML)/T.
[6] Testing Hypotheses

Let β̂ be a feasible GLS estimator; let S be a consistent estimator of Σ; and let C = est. Cov(β̂) = [X′(S^{-1} ⊗ I_T)X]^{-1}.

(1) Testing linear hypotheses: H_o: Rβ = r, where R and r are known matrices with m rows. Under H_o,

W = (Rβ̂ - r)′[RCR′]^{-1}(Rβ̂ - r) →d χ²(m) as T → ∞.

Example 1:
ln(Q_coke,t) = β_coke,1 + β_coke,2 ln(P_coke,t) + β_coke,3 ln(P_pep,t) + ε_coke,t;
ln(Q_pep,t) = β_pep,1 + β_pep,2 ln(P_coke,t) + β_pep,3 ln(P_pep,t) + ε_pep,t;
β = (β_coke,1, β_coke,2, β_coke,3, β_pep,1, β_pep,2, β_pep,3)′.

H_o: the own-price elasticities are the same, i.e., H_o: β_coke,2 - β_pep,3 = 0
→ R = (0, 1, 0, 0, 0, -1), r = 0.
Example 2: Assume that X_1, ..., X_n contain the same number (k) of variables.
H_o: β_1 = β_2 = ... = β_n.

      I_k   -I_k   0     ...  0     0
      0     I_k   -I_k   ...  0     0
R =   :     :      :          :     :       ((n-1)k × nk);   r = 0_{(n-1)k×1}.
      0     0      0     ...  I_k  -I_k

(2) Testing nonlinear hypotheses: H_o: w(β) = 0, where w is an m×1 vector of functions of β. Under H_o,

W = w(β̂)′[W(β̂) C W(β̂)′]^{-1} w(β̂) →d χ²(m) as T → ∞, where W(β) = ∂w(β)/∂β′.

Example:
ln(Q_coke,t) = β_coke,1 + β_coke,2 ln(P_coke,t) + β_coke,3 ln(P_pep,t) + ε_coke,t;
ln(Q_pep,t) = β_pep,1 + β_pep,2 ln(P_coke,t) + β_pep,3 ln(P_pep,t) + ε_pep,t.

H_o: β_coke,2 β_pep,3 = 1 → w(β) = β_coke,2 β_pep,3 - 1;
W(β) = (0, β_pep,3, 0, 0, 0, β_coke,2).
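Both tests are one-liners once β̂ and C are available. A sketch for the linear case, with R and r as in Example 1 (wald_linear is an illustrative name; scipy is used only for the p-value):

import numpy as np
from scipy import stats

def wald_linear(beta_hat, C, R, r):
    """W = (R b - r)'[R C R']^{-1}(R b - r), approx. chi2(m) under H0."""
    R = np.atleast_2d(R)
    d = R @ beta_hat - np.atleast_1d(r)
    W = float(d @ np.linalg.solve(R @ C @ R.T, d))
    m = R.shape[0]
    return W, stats.chi2.sf(W, m)        # statistic and p-value

# Example 1: equal own-price elasticities, beta_coke,2 - beta_pep,3 = 0
R = np.array([[0.0, 1.0, 0.0, 0.0, 0.0, -1.0]])
# W, p = wald_linear(beta_hat, C, R, 0.0)   # beta_hat, C from the FGLS step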
(3) Testing the diagonality of Σ

Digression to LM test: Let θ be an unknown parameter vector (p×1). H_o: w(θ) = 0. Let θ̂_R be the restricted MLE which maximizes ln L_T subject to w(θ) = 0. Let s_T(θ) = ∂ln L_T/∂θ and I_T(θ) = -E[∂²ln L_T(θ_o)/∂θ∂θ′]. Then,

LM = s_T(θ̂_R)′[I_T(θ̂_R)]^{-1}s_T(θ̂_R).

End of Digression

Note that under H_o: Σ is diagonal, the restricted MLEs of the β_i are the OLS estimators of the β_i; the restricted MLE of σ_ii is s_ii = (y_i - X_i b_i)′(y_i - X_i b_i)/T; and the restricted MLE of σ_ij (i ≠ j) is 0.

Breusch and Pagan (1980, ReStud): Let r_ij = s_ij/(s_ii s_jj)^{1/2} (the estimated correlation coefficient between e_i and e_j). Under H_o,

LM = T Σ_i Σ_{j<i} r_ij² →d χ²[n(n-1)/2].

One does not need to compute the unrestricted MLE. This statistic is obtained under the assumption of normal errors.
Question: Is this statistic still chi-squared even if the errors are not normal?
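Since only OLS residuals are needed, the statistic takes a few lines to compute. A sketch assuming the same per-equation data layout as before (the function name is my own):

import numpy as np
from scipy import stats

def bp_lm_diagonality(y_list, X_list):
    """Breusch-Pagan LM test of H0: Sigma is diagonal, from OLS residuals."""
    T = y_list[0].shape[0]
    E = np.column_stack([y - X @ np.linalg.solve(X.T @ X, X.T @ y)
                         for y, X in zip(y_list, X_list)])   # OLS residuals e_ti
    S = E.T @ E / T                                          # s_ij
    R = S / np.sqrt(np.outer(np.diag(S), np.diag(S)))        # r_ij
    n = E.shape[1]
    lm = T * sum(R[i, j] ** 2 for i in range(n) for j in range(i))
    df = n * (n - 1) // 2
    return lm, stats.chi2.sf(lm, df)                         # statistic, p-value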
[7] Autocorrelation

(1) AR(1)
y_it = x_it′β_i + ε_it;  ε_it = ρ_i ε_i,t-1 + v_it;  (v_1t, ..., v_nt)′ ~ N(0, Σ).
Use the OLS estimates of the β_i to estimate the ρ_i's, as we did in ECN 55. Transform each equation using the Prais-Winsten or Cochrane-Orcutt method (see the sketch after this list):

t = 1:  √(1 - ρ_i²) y_i1 = √(1 - ρ_i²) x_i1′β_i + √(1 - ρ_i²) ε_i1;
t = 2:  y_i2 - ρ_i y_i1 = (x_i2 - ρ_i x_i1)′β_i + v_i2;
 :
t = T:  y_iT - ρ_i y_i,T-1 = (x_iT - ρ_i x_i,T-1)′β_i + v_iT.

(Cochrane-Orcutt drops the first transformed observation; Prais-Winsten keeps it.) Then, do SUR on the transformed equations.

(2) AR(p): Similar to the above procedures.
(3) MA(q): Procedures are complicated.
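A sketch of this two-step procedure, estimating each ρ_i by the OLS-residual autoregression, applying the Prais-Winsten transform, and then running SUR with the sur_fgls function above (illustrative only; no iteration over ρ, and |ρ_i| < 1 is assumed):

import numpy as np

def prais_winsten_sur(y_list, X_list):
    """AR(1) SUR: rho_i from OLS residuals, Prais-Winsten transform, then FGLS."""
    ys, Xs = [], []
    for y, X in zip(y_list, X_list):
        e = y - X @ np.linalg.solve(X.T @ X, X.T @ y)       # OLS residuals
        rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])          # AR(1) coefficient
        w = np.sqrt(1 - rho**2)                             # requires |rho| < 1
        yt = np.concatenate([[w * y[0]], y[1:] - rho * y[:-1]])
        Xt = np.vstack([w * X[:1], X[1:] - rho * X[:-1]])
        ys.append(yt)
        Xs.append(Xt)
    return sur_fgls(ys, Xs)        # (beta, cov, S) for the transformed system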
[8] Application

Use taba5_a.wf1 (EViews data set) from the CD attached to the textbook.

Grunfeld's investment data:
I: gross investment ($million)
F: market value of the firm at the end of the previous year
CS: value of the firm's capital at the end of the previous year
I_it = β_1i + β_2i F_it + β_3i CS_it + ε_it,
where i = GM (1), CH (Chrysler, 2), GE (3), WE (Westinghouse, 4), and US (U.S. Steel, 5).

GLS
Read the work file using EViews. Go to Objects\New Object.... Choose System and click on the OK button. Then, an empty window will pop up. Type the following in the window:

i1 = c(1)+c(2)*f1+c(3)*cs1
i2 = c(4)+c(5)*f2+c(6)*cs2
i3 = c(7)+c(8)*f3+c(9)*cs3
i4 = c(10)+c(11)*f4+c(12)*cs4
i5 = c(13)+c(14)*f5+c(15)*cs5

Click on Proc\Estimate.... Then, you will see the menu for the estimation of systems of equations. Choose Seemingly Unrelated Regression. For two-step GLS, choose Iterate Coefs. For iterative GLS, choose Sequential. Do not use One-Step Coefs or Simultaneous.
<Two-Step GLS> Estimation Results:

System: SUR
Estimation Method: Seemingly Unrelated Regression (Marquardt)
Sample: 1935 1954
Included observations: 20
Total system (balanced) observations 100
Linear estimation after one-step weighting matrix

         Coefficient   Std. Error   t-Statistic   Prob.
C(1)     -6.364        89.4593      -.8495        0.073
C(2)     0.0493        0.069        5.570868      0.0000
C(3)     0.38746       0.03768      .68047        0.0000
C(4)     0.504304      .583         0.043804      0.965
C(5)     0.069546      0.06898      4.573         0.000
C(6)     0.308545      0.05864      .997          0.0000
C(7)     -.4389        5.5859       -0.87936      0.387
C(8)     0.0379        0.063        3.040936      0.003
C(9)     0.30783       0.0050       5.937         0.0000
C(10)    .088877       6.58804      0.73975       0.863
C(11)    0.057009      0.036        5.0746        0.0000
C(12)    0.04506       0.040        .007400       0.366
C(13)    85.435        .8774        0.763543      0.4473
C(14)    0.0478        0.054784     .85344        0.0674
C(15)    0.39999       0.7795       3.9956        0.004

Determinant residual covariance 6.8E+3

Equation: I1 = C(1)+C(2)*F1+C(3)*CS1
Observations: 20
R-squared            0.9074      Mean dependent var   608.000
Adjusted R-squared   0.947       S.D. dependent var   309.5746
S.E. of regression   9.388       Sum squared resid    4430.9
Durbin-Watson stat   0.936490

Equation: I2 = C(4)+C(5)*F2+C(6)*CS2
Observations: 20
R-squared            0.986       Mean dependent var   86.350
Adjusted R-squared   0.90493     S.D. dependent var   4.7556
S.E. of regression   3.40980     Sum squared resid    3056.985
Durbin-Watson stat   .97509
Equation: I3 = C(7)+C(8)*F3+C(9)*CS3
Observations: 20
R-squared            0.687636    Mean dependent var   0.900
Adjusted R-squared   0.650887    S.D. dependent var   48.58450
S.E. of regression   8.70654     Sum squared resid    4009.
Durbin-Watson stat   0.96757

Equation: I4 = C(10)+C(11)*F4+C(12)*CS4
Observations: 20
R-squared            0.7649      Mean dependent var   4.8950
Adjusted R-squared   0.69444     S.D. dependent var   9.09
S.E. of regression   0.5670      Sum squared resid    898.49
Durbin-Watson stat   .59005

Equation: I5 = C(13)+C(14)*F5+C(15)*CS5
Observations: 20
R-squared            0.4959      Mean dependent var   405.4600
Adjusted R-squared   0.353954    S.D. dependent var   9.359
S.E. of regression   03.969      Sum squared resid    83763.0
Durbin-Watson stat   .0798
views/residuals/graphs

[Five residual plots, one per equation (I1 Residuals through I5 Residuals), over the sample period 1935-1954.]
views/residuals/correlation matrix

       I1          I2          I3          I4          I5
I1     1.000000    -0.9870     0.695       0.56947     -0.39933
I2     -0.9870     1.000000    0.00657     0.3834      0.38408
I3     0.695       0.00657     1.000000    0.776898    0.48637
I4     0.56947     0.3834      0.776898    1.000000    0.698954
I5     -0.39933    0.38408     0.48637     0.698954    1.000000

Testing H_o: c(1) = c(4), c(2) = c(5), c(3) = c(6):
views/Wald Coefficient Tests.... Type: c(1) = c(4), c(2) = c(5), and c(3) = c(6).

Wald Test:
System: SUR

Null Hypothesis: C(1)=C(4)
                 C(2)=C(5)
                 C(3)=C(6)

Chi-square    8.63969     Probability   0.034606
<Iterative GLS>

System: SUR
Estimation Method: Iterative Seemingly Unrelated Regression (Marquardt)
Date: 08/9/0 Time: 8:57
Sample: 1935 1954
Included observations: 20
Total system (balanced) observations 100
Sequential weighting matrix & coefficient iteration
Convergence achieved after: 3 weight matrices, 4 total coef iterations

         Coefficient   Std. Error   t-Statistic   Prob.
C(1)     -73.0379      84.7963      -.0534        0.043
C(2)     0.953         0.0043       6.04448       0.0000
C(3)     0.38945       0.0385       .679          0.0000
C(4)     .37834        .6335        0.04477       0.8385
C(5)     0.06745       0.070        3.943997      0.000
C(6)     0.305066      0.06067      .7030         0.0000
C(7)     -6.37654      4.96084      -0.656089     0.535
C(8)     0.03709       0.0770       3.45          0.003
C(9)     0.6954        0.073        5.3896        0.0000
C(10)    4.488934      6.0064       0.74545       0.458
C(11)    0.05386       0.0094       5.386         0.0000
C(12)    0.06469       0.037038     0.7466        0.4768
C(13)    38.00         94.6080      .458757       0.483
C(14)    0.088600      0.04578      .956796       0.0537
C(15)    0.30930       0.7830       .64987        0.003

Determinant residual covariance 5.97E+3

Equation: I1 = C(1)+C(2)*F1+C(3)*CS1
Observations: 20
R-squared            0.9970      Mean dependent var   608.000
Adjusted R-squared   0.9055      S.D. dependent var   309.5746
S.E. of regression   9.74073     Sum squared resid    464.3
Durbin-Watson stat   0.93677

Equation: I2 = C(4)+C(5)*F2+C(6)*CS2
Observations: 20
R-squared            0.90565     Mean dependent var   86.350
Adjusted R-squared   0.900043    S.D. dependent var   4.7556
S.E. of regression   3.50807     Sum squared resid    30.956
Durbin-Watson stat   .885
Equation: I3 = C(7)+C(8)*F3+C(9)*CS3
Observations: 20
R-squared            0.6690      Mean dependent var   0.900
Adjusted R-squared   0.630083    S.D. dependent var   48.58450
S.E. of regression   9.54949     Sum squared resid    4843.93
Durbin-Watson stat   0.89809

Equation: I4 = C(10)+C(11)*F4+C(12)*CS4
Observations: 20
R-squared            0.70750     Mean dependent var   4.8950
Adjusted R-squared   0.66666     S.D. dependent var   9.09
S.E. of regression   .03336      Sum squared resid    069.496
Durbin-Watson stat   .4739

Equation: I5 = C(13)+C(14)*F5+C(15)*CS5
Observations: 20
R-squared            0.390335    Mean dependent var   405.4600
Adjusted R-squared   0.3860      S.D. dependent var   9.359
S.E. of regression   06.7753     Sum squared resid    9386.3
Durbin-Watson stat   0.967353
<GLS with AR(1)>

EViews estimates the β's and ρ's jointly using a nonlinear GLS method. Type:

i1 = c(1)+c(2)*f1+c(3)*cs1+[ar(1)=c(4)]
i2 = c(5)+c(6)*f2+c(7)*cs2+[ar(1)=c(8)]
i3 = c(9)+c(10)*f3+c(11)*cs3+[ar(1)=c(12)]
i4 = c(13)+c(14)*f4+c(15)*cs4+[ar(1)=c(16)]
i5 = c(17)+c(18)*f5+c(19)*cs5+[ar(1)=c(20)]

The following results are from two-step nonlinear GLS (Iterate Coefs).

System: SUR
Estimation Method: Seemingly Unrelated Regression (Marquardt)
Date: 08/9/0 Time: 9:00
Sample: 1936 1954
Included observations: 20
Total system (balanced) observations 95
Iterate coefficients after one-step weighting matrix
Convergence achieved after: 1 weight matrix, 50 total coef iterations

         Coefficient   Std. Error   t-Statistic   Prob.
C(1)     -93.5646      90.0353      -.03999       0.30
C(2)     0.0978        0.07643      5.50973       0.0000
C(3)     0.4374        0.04694      8.997976      0.0000
C(4)     0.56          0.748        3.03005       0.0034
C(5)     -9.99356      .79400       -0.7809       0.437
C(6)     0.08055       0.08744      4.95565       0.000
C(7)     0.38708       0.0774       4.43375       0.0000
C(8)     -0.7537       0.46         -.0409        0.309
C(9)     -6.0          3.468        -0.80408      0.439
C(10)    0.0468        0.0335       3.09067       0.008
C(11)    0.06          0.03875      3.0689        0.007
C(12)    0.48330       0.7903       .699858       0.0086
C(13)    4.87474       8.87484      0.54456       0.5879
C(14)    0.04896       0.089        4.35373       0.000
C(15)    0.05995       0.05753      .35943        0.596
C(16)    0.38645       0.6864       .38640        0.095
C(17)    34.409        93.5550      0.694386      0.4896
C(18)    0.5774        0.04046      3.88450       0.000
C(19)    -0.04437      0.5078       -0.097330     0.97
C(20)    0.77540       0.6074       4.789968      0.0000
Determinant residual covariance .6E+3

Equation: I1 = C(1)+C(2)*F1+C(3)*CS1+[AR(1)=C(4)]
Observations: 19
R-squared            0.94840     Mean dependent var   63.3053
Adjusted R-squared   0.937768    S.D. dependent var   30.069
S.E. of regression   77.3854     Sum squared resid    8986.90
Durbin-Watson stat   .397833

Equation: I2 = C(5)+C(6)*F2+C(7)*CS2+[AR(1)=C(8)]
Observations: 19
R-squared            0.90887     Mean dependent var   88.53579
Adjusted R-squared   0.890646    S.D. dependent var   4.47399
S.E. of regression   4.0456      Sum squared resid    959.93
Durbin-Watson stat   .7844

Equation: I3 = C(9)+C(10)*F3+C(11)*CS3+[AR(1)=C(12)]
Observations: 19
R-squared            0.737358    Mean dependent var   05.936
Adjusted R-squared   0.684830    S.D. dependent var   47.080
S.E. of regression   6.405       Sum squared resid    0455.6
Durbin-Watson stat   .989

Equation: I4 = C(13)+C(14)*F4+C(15)*CS4+[AR(1)=C(16)]
Observations: 19
R-squared            0.73566     Mean dependent var   44.4684
Adjusted R-squared   0.66879     S.D. dependent var   8.4806
S.E. of regression   0.500       Sum squared resid    656.906
Durbin-Watson stat   .464709

Equation: I5 = C(17)+C(18)*F5+C(19)*CS5+[AR(1)=C(20)]
Observations: 19
R-squared            0.56855     Mean dependent var   45.756
Adjusted R-squared   0.486       S.D. dependent var   4.974
S.E. of regression   89.36507    Sum squared resid    979.7
Durbin-Watson stat   .565597