BASIC THEORY AND EXAMPLES: SIMPLE, MULTIPLE AND LOGISTIC REGRESSION
Mètodes Estadístics, UPF, Winter 2014

Outline:
1. Data description and statistical inference
2. Simple and multiple regression
3. Robust s.e. (optional)
4. Case influence statistics; multiple regression and multicollinearity
5. Logistic regression

Data file

Random sample of size n = 800 from a population. Variables: expenditure, income, gender (1/0, boy = 1), vote (1/0, party A = 1). The data file is on the web (two options, .sav and .txt):

library(foreign)
data = read.spss("http://www.econ.upf.edu/~satorra/dades/m2014dadessim.sav")
data = read.table("http://www.econ.upf.edu/~satorra/dades/m2013regressiosamp.txt", header = T)

names(data)
"Lrenda" "Ldespeses" "Genere" "Vot"

head(data)
  Lrenda Ldespeses Genere Vot
1  9.477     4.503      1   1
2 11.435     6.147      1   0
3 10.686     4.961      0   0
4 10.407     3.993      0   0
5 10.814     5.746      0   0
6  9.944     4.950      0   1

First six rows of the data file:

  Lrenda Ldespeses Genere Vot
1  9.477     4.503      1   1
2 11.435     6.147      1   0
3 10.686     4.961      0   0
4 10.407     3.993      0   0
5 10.814     5.746      0   0
6  9.944     4.950      0   1

Univariate analysis (replicate with SPSS)

Basic description:
attach(data)
renda = exp(Lrenda)
summary(renda)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   1306   11250   23970   36940   44270  528600

Means and standard deviations:
apply(data, 2, mean)
   Lrenda  Ldespeses    Genere       Vot
10.031189   4.978471  0.515000  0.550000
apply(data, 2, sd)
   Lrenda  Ldespeses    Genere       Vot
1.0032099  0.6615195 0.5000876 0.4978049

Univariate distribution (renda, Ldespeses):
summary(renda)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   1306   11250   23970   36940   44270  528600
sd(renda) = 44358.15
summary(Ldespeses)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  2.232   4.572   4.968   4.978   5.421   7.759

Renda: mean and sd

Mean: 36940, sd = 44358.15
> 44358.15/sqrt(800)
[1] 1568.297
> 36890 + 2*(44358.15/sqrt(800))
[1] 40026.59
> 36890 - 2*(44358.15/sqrt(800))
[1] 33753.41
95% CI: (33753.41, 40026.59)
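The same interval can be obtained directly in R; a minimal sketch, assuming renda has been created as above (t.test gives the exact t-based interval, the second line reproduces the +/- 2 s.e. approximation used on the slide):

# 95% CI for the population mean of renda, one-sample t interval
t.test(renda, conf.level = 0.95)$conf.int
# manual approximation: mean +/- 2 * sd / sqrt(n)
mean(renda) + c(-2, 2) * sd(renda) / sqrt(length(renda))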

Histogram

Figure: frequency histogram of the variable renda (x-axis: renda, 0 to 5e+05; y-axis: Frequency)

Histogram of the log of renda

Lrenda = log(renda)

Figure: frequency histogram of the variable log of renda (y-axis: Frequency)
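A minimal sketch to reproduce these two histograms, assuming renda and Lrenda are available as above (titles taken from the figure captions):

hist(renda, main = "Histograma (freq.) de variable renda", xlab = "renda")
hist(Lrenda, main = "Histograma (freq.) de la variable log de renda", xlab = "Lrenda")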

Discuss, for this data set:
1. Type of variables, type of distribution of the continuous variables
2. Standardized variable X: x* = (x_i - x̄) / s_x  ( scale(Lrenda) )
3. Inference on the population mean income µ (estimation, confidence interval, ...)
4. Sample size for a given precision: inference on mean income, on vot = 1, ...
(A sketch of items 2-4 in R follows below.)
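A minimal sketch of items 2-4, assuming the data are attached as above; the +/- 1000 margin of error in item 4 is an illustrative choice, not a value from the slides:

# 2. standardized variable (mean 0, sd 1)
z <- as.numeric(scale(Lrenda))
c(mean(z), sd(z))

# 3. estimate and 95% CI for the mean income
t.test(renda)$conf.int

# 4. approximate sample size for a +/- 1000 margin of error on mean income
me <- 1000
ceiling((2 * sd(renda) / me)^2)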

Bivariate relation: scatterplot

Figure: scatterplot of despeses vs. renda

Bivariate relation: scatterplot

Figure: scatterplot of Ldespeses vs. renda

Bivariate relation: scatterplot

Figure: scatterplot of Ldespeses vs. Lrenda
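A minimal sketch to reproduce the three scatterplots; it assumes despeses is obtained as exp(Ldespeses), mirroring renda = exp(Lrenda) above:

despeses <- exp(Ldespeses)
plot(renda, despeses)     # raw scales
plot(renda, Ldespeses)    # log expenditure vs. raw income
plot(Lrenda, Ldespeses)   # both variables in logs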

Correlation coefficient, r

> cor(renda, despeses)
[1] 0.2613614
> cor(renda, Ldespeses)
[1] 0.32058
> cor(Lrenda, Ldespeses)
[1] 0.4385204
> round(cor(Lrenda, Ldespeses), 2)
[1] 0.44
> (cor(Lrenda, Ldespeses))^2
[1] 0.1923001

The correlation coefficient r between log expenditure and log income is r = 0.44. Its square, r² = 0.1923, is the coefficient of determination R² of the next topic, regression.

Conditional expectation function: E(Y | X)

Linear regression:
- Simple linear regression: Y = α + βX + ɛ, where ɛ is independent of (uncorrelated with) X
- Multiple linear regression: Y = α + β1 X1 + β2 X2 + ... + βk Xk + ɛ, where ɛ is independent of (uncorrelated with) X1, ..., Xk

Nomenclature: α is the intercept (the constant); the βs are regression coefficients. In multiple regression, β1, ..., βk are partial regression coefficients. ɛ is the disturbance term of the model.

Figure: Regression effect: scatterplot of scale(Ldespeses) vs. scale(Lrenda), showing the line Y = X and the regression line

Figure: Francis Galton's data (1822-1911): regression line of sons' height vs. father's height

Simple regression example (standardized data)

library(texreg)
texreg(lm(scale(Ldespeses) ~ scale(Lrenda)), scriptsize = T, stars = c(.05))

                Model 1
(Intercept)     0.00 (0.03)
scale(Lrenda)   0.44 (0.03)
R²              0.19
Adj. R²         0.19
Num. obs.       800
* p < 0.05

Table: Regression with standardized data

Simple regression example

Regression model: Y = α + βX + ɛ, ɛ ~ (0, σ²ɛ), where Y = Ldespeses, X = Lrenda.

Estimates of α, β and R²:
a: 2.08 (0.21)
b: 0.29 (0.02)
R²        0.19
Adj. R²   0.19
Num. obs. 800
*** p < 0.001, ** p < 0.01, * p < 0.05, · p < 0.1

Table: Results table

- 19% of the variation of Y is explained by the variation of X.
- The regression coefficient of Y on X is positive, 0.29, and highly significant (p < 0.001).
- A one-unit increase in X is associated with an increase of 0.29 in the expected value of Y (variables expressed in logarithms).
- Beta coefficients: for Lrenda, see the next slide.
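The lm call behind this table is not shown on the slide; a minimal sketch of how such estimates can be obtained in R, assuming the data are attached as above:

fit <- lm(Ldespeses ~ Lrenda)
summary(fit)    # a, b, standard errors, R^2
confint(fit)    # 95% confidence intervals for alpha and beta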

The beta coefficient (standardized regression coefficients)

These are the regression coefficients when the variables are standardized; in that case α = 0.

coef.beta = 0.28916*(1.0032099)/0.6615195    # b * sd(Lrenda) / sd(Ldespeses)
[1] 0.4385179
> (coef.beta)^2
[1] 0.192298
> cor(Ldespeses, Lrenda)
[1] 0.4385204
> (cor(Ldespeses, Lrenda))^2
[1] 0.1923001

Multiple regression

re = lm(Ldespeses ~ Lrenda + Genere)
texreg(re)

Regression model: Y = α + β1 X1 + β2 X2 + ɛ, ɛ ~ (0, σ²ɛ), where Y = Ldespeses, X1 = Lrenda, X2 = Genere.

Table: Multiple regression
             Estimates
(Intercept)  2.98 (0.20)
Lrenda       0.23 (0.02)
Genere       -0.55 (0.04)
R²           0.35
Adj. R²      0.35
n            800

This is an OLS analysis.

The linear multiple regression model (a bit of theory)

- It assumes that the regression function E(Y | X) is linear in its inputs X1, X2, ..., Xk; i.e. E(Y) = α + β1 X1 + ... + βk Xk.
- β1 is the expected change in Y when we increase X1 by one unit, ceteris paribus (all the other variables held constant).
- For prediction purposes, it can sometimes outperform fancier, more complicated models, especially in situations with small sample size.
- It applies to transformed variables, so it encompasses a large variety of functions for E(Y | X).
- The X variables are required to be continuous or binary.
- We have Y = E(Y | X) + ɛ, where the disturbance term ɛ is a random variable assumed to be independent of X, typically with variance that does not change with X (homoscedastic residuals).
- For the fitted model, we have Ŷ = a + b1 X1 + ... + bk Xk, where the bs are partial regression coefficients (usually obtained by OLS), and e = Y - Ŷ defines the residuals.

Note that E(Y | X1) is different from E(Y | X1, X2) or E(Y | X1, X2, ..., Xk). So the regression coefficient b1 for X1 will typically change depending on which additional variables, besides X1, we condition on. In causal analysis, researchers are interested in the change in Y when we change X1. This is a complicated issue that can only be answered properly with more context regarding the design of the data collection. So far we have been dealing only with a conditional expectation model (no elements have been introduced yet for proper causal analysis).
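A minimal sketch in R of the quantities above, using the course data assumed attached earlier (illustrative, not from the slides):

re <- lm(Ldespeses ~ Lrenda + Genere)
yhat <- fitted(re)    # Y-hat = a + b1*Lrenda + b2*Genere
e    <- resid(re)     # e = Y - Y-hat
coef(lm(Ldespeses ~ Lrenda))            # b for Lrenda conditioning on Lrenda only
coef(lm(Ldespeses ~ Lrenda + Genere))   # b for Lrenda changes once Genere is added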

Multiple regression

1. 35% of the variation of Y is explained by the joint variation of Lrenda and Genere.
2. Compare the regression coefficient of Lrenda in the simple and the multiple regression: 0.29 versus 0.23.
3. Interpretation of the regression coefficients: partial regression coefficients, i.e. the variation of Y when we vary X1 ceteris paribus (controlling for) the other explanatory variables.
4. Does expenditure differ by gender?
5. ...

Figure: Residuals vs. Fitted plot (observations 164, 305 and 201 labeled)

library(faraway); prplot(re, 1)

Figure: partial residual plot, beta*Lrenda + residuals vs. Lrenda

library(faraway); prplot(re, 2)

Figure: partial residual plot, beta*Genere + residuals vs. Genere

Multiple regression example: Paisos.sav data

Question: do calories in the diet affect life expectancy? SPSS syntax.

Reading the data

library(foreign)
data = read.spss("http://www.econ.upf.edu/~satorra/dades/paisos.sav")
attach(data)
names(data)
CALORIES[(CALORIES == 9999)] = NA

Missing values?

> ESPVIDA
  [1] 46.4 52.1 47.5 39.0 50.7 53.5 44.9 50.2 55.6 43.5 47.5 56.5 45.6 47.3 51.0 46.5 60.4 46.
 [27] 65.5 55.0 47.6 49.4 61.5 56.0 68.5 44.5 56.0 51.5 71.9 67.7 53.7 60.5 63.6 51.0 62.7 62.
 [53] 64.8 63.3 69.6 57.5 68.8 51.3 67.9 69.9 66.4 65.2 70.3 69.3 66.0 73.6 71.2 70.0 58.8 67.
 [79] 67.5 71.5 73.6 64.9 72.8 76.0 71.3 73.8 70.2 66.3 67.6 62.9 70.8 70.5 71.7 69.0 72.5 70.
[105] 71.1 77.6 74.6 69.7 71.6 77.0 73.1 75.5 75.3 76.5 77.6 78.6 70.5 74.8 77.6 76.2 74.9 77.
[131] 76.0 78.2 76.9 75.3 78.2 79.5 75.7 78.0 72.0 76.1 68.5 66.0 63.1 67.1 75.3 63.7 71.1 43.
[157] 55.4 47.0 74.2 48.3
>
> CALORIES
  [1] 1680 2021 1610 1695 9999 1957 2162 1941 2019 2556 1989 2135 1827 1821 2259 2257 2395 227
 [27] 2380 2243 2532 1691 2316 2206 2729 2390 1897 2685 2275 2306 1989 2201 3336 2491 2755 262
 [53] 2255 2985 2342 2706 2587 2297 3031 3252 2663 2744 2548 2678 1883 2607 3683 2670 2120 333
 [79] 2861 2407 2670 2288 2239 2820 3609 2583 2696 2824 3380 2705 2884 2582 2622 3638 2750 318
[105] 3298 3793 3414 2751 9999 3782 2791 3389 3779 3150 3567 3144 9999 3249 3186 3181 2535 350
[131] 3676 3518 3338 3622 2945 2909 9999 3565 9999 3238 3319 2122 3310 3175 2833 1899 2834 152
[157] 1640 1505 2745 9999
> CALORIES[(CALORIES == 9999)] = NA
> CALORIES
  [1] 1680 2021 1610 1695   NA 1957 2162 1941 2019 2556 1989 2135 1827 1821 2259 2257 2395 227
 [27] 2380 2243 2532 1691 2316 2206 2729 2390 1897 2685 2275 2306 1989 2201 3336 2491 2755 262
 [53] 2255 2985 2342 2706 2587 2297 3031 3252 2663 2744 2548 2678 1883 2607 3683 2670 2120 333
 [79] 2861 2407 2670 2288 2239 2820 3609 2583 2696 2824 3380 2705 2884 2582 2622 3638 2750 318
[105] 3298 3793 3414 2751   NA 3782 2791 3389 3779 3150 3567 3144   NA 3249 3186 3181 2535 350
[131] 3676 3518 3338 3622 2945 2909   NA 3565   NA 3238 3319 2122 3310 3175 2833 1899 2834 152
[157] 1640 1505 2745   NA

Matrix plot

Figure: scatterplot matrix of ESPVIDA, CALORIES, SANITAT, NIVELL, ALFAB, DIARIS, TV, AGRICULT

Simple regression: ESPVIDA vs. CALORIES

              Model 1
(Intercept)   28.7629 (2.7159)
CALORIES      0.0135 (0.0010)
R²            0.5481
Adj. R²       0.5451
Num. obs.     152
* p < 0.05

length(ESPVIDA) = 160

Table: Statistical models

Regression: ESPVIDA vs. CALORIES, ALFAB

              Model 1
(Intercept)   28.1428 (1.9458)
CALORIES      0.0062 (0.0009)
ALFAB         0.2714 (0.0227)
R²            0.7698
Adj. R²       0.7667
Num. obs.     152
* p < 0.05

Table: Statistical models

ESPVIDA vs. CALORIES, ALFAB, NBAIX, NALT, HABMETG

              Model 1
(Intercept)   49.0691 (3.0704)
CALORIES      0.0030 (0.0008910)
ALFAB         0.1170 (0.0268)
NBAIXTRUE     6.6066 (1.3683)
NALTTRUE      4.8071 (1.0609)
HABMETG       0.0001752 (0.0000526)
R²            0.8589
Adj. R²       0.8535
Num. obs.     136
* p < 0.05

Table: Statistical models

table(NIVELL):  baix mitja alt : 47 54 59

NALT = NIVELL == "alt"; NBAIX = NIVELL == "baix"; HABMETG[HABMETG == 99999] = NA
re = lm(ESPVIDA ~ CALORIES + ALFAB + NBAIX + NALT + HABMETG)

Figure: Partial regression plot: ESPVIDA versus CALORIES (beta*CALORIES + residuals vs. CALORIES)

Figure: Partial regression plot: ESPVIDA versus ALFAB (beta*ALFAB + residuals vs. ALFAB)

(Optional) Regression with regular s.e.

http://diffuseprior.wordpress.com/2012/06/15/standard-robust-and-clustered-standard-errors-co

r1 = lm(Ldespeses ~ Lrenda + Genere)

            Estimate Std. Error t value Pr(>|t|)
(Intercept)  2.97911    0.19970   14.92   <2e-16 ***
Lrenda       0.22736    0.01927   11.80   <2e-16 ***
Genere      -0.54637    0.03867  -14.13   <2e-16 ***

# get X matrix/predictors
X <- model.matrix(r1)
# number of obs
n <- dim(X)[1]
# number of predictors
k <- dim(X)[2]
# calculate standard errors as in the table above:
# square root of the diagonal elements of vcov = (X'X)^{-1} * sigma^2
se <- sqrt(diag(solve(crossprod(X)) * as.numeric(crossprod(resid(r1))/(n - k))))
> se
(Intercept)      Lrenda      Genere
 0.19969731  0.01927412  0.03866520

(Optional) Regression with heteroscedasticity-robust s.e.

r1 = lm(Ldespeses ~ Lrenda + Genere)
X <- model.matrix(r1)
n <- dim(X)[1]
k <- dim(X)[2]
# residual vector
u <- matrix(resid(r1))
# meat part: Sigma is a diagonal matrix with u^2 as elements
meat1 <- t(X) %*% diag(diag(crossprod(t(u)))) %*% X
# degrees-of-freedom adjustment
dfc <- n/(n - k)
# sandwich: sqrt of the diagonal of (X'X)^{-1} meat (X'X)^{-1}, with df correction
se <- sqrt(dfc * diag(solve(crossprod(X)) %*% meat1 %*% solve(crossprod(X))))
> se
(Intercept)      Lrenda      Genere
 0.19980279  0.01945393  0.03799626

(Optional) Regression with s.e. robust to clustering

# clustered standard errors in regression
# by: http://thetarzan.wordpress.com/2011/06/11/clustered-standard-errors-in-r/
cl <- function(dat, fm, cluster){
  require(sandwich, quietly = TRUE)
  require(lmtest, quietly = TRUE)
  M <- length(unique(cluster))
  N <- length(cluster)
  K <- fm$rank
  dfc <- (M/(M-1))*((N-1)/(N-K))
  uj <- apply(estfun(fm), 2, function(x) tapply(x, cluster, sum))
  vcovCL <- dfc*sandwich(fm, meat = crossprod(uj)/N)
  coeftest(fm, vcovCL)
}

(Optional) Regression with s.e. robust to clustering

r1 = lm(Ldespeses ~ Lrenda + Genere)
summary(r1)
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  2.97911    0.19970   14.92   <2e-16 ***
Lrenda       0.22736    0.01927   11.80   <2e-16 ***
Genere      -0.54637    0.03867  -14.13   <2e-16 ***

clust = sample(1:40, 800, replace = T)
> tabulate(clust)
 [1] 14 22 16 16 23 21 25 19 21 29 17 20 21 22 16 19 23 17 19 22 23 25 26 17 17
[26] 10 25 26 24 22 21 17 14 18 21 15 17 26 16 18

cl(cbind(Ldespeses, Genere, clust), fit, clust)
            Estimate Std. Error t value  Pr(>|t|)
(Intercept) 2.139410   0.126312  16.938 < 2.2e-16 ***
Lrenda     -0.182044   0.011657 -15.617 < 2.2e-16 ***
Genere      0.459628   0.031029  14.813 < 2.2e-16 ***

Additional material on simple and multiple regression
1. Course web M2014, M2012Setmanes12: details of simple and multiple linear regression + SPSS syntax
2. IDRE UCLA: SPSS Web Books, Regression with SPSS

Binary dependent variable Y

- Until now Y was a continuous variable.
- Logistic regression (and probit regression): Y is binary.
- As in ordinary regression, the explanatory variables can be continuous or binary.

Why doesn't linear regression work?

- The relation is non-linear.
- The error term is heteroscedastic.
- The error term does not have a normal distribution.

Example: Y = Vot, X = Lrenda

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  2.89760    0.15615   18.56   <2e-16 ***
Lrenda      -0.23403    0.01549  -15.11   <2e-16 ***
Multiple R-squared: 0.2224

Ŷ = 2.89 - 0.23 Lrenda, R² = 0.22

Figure: Vot vs. Lrenda (scatterplot of the binary Vot against Lrenda)

Logistic regression (the model)

Suppose Y_i ~ Bernoulli(π_i), with π_i = P(Y_i = 1), i = 1, ..., n.

Probabilities, odds, logit:
- odds: o = π / (1 - π)
- logit: L = ln(o), i.e. L_i = ln( π_i / (1 - π_i) ), and conversely π_i = 1 / (1 + e^(-L_i))

Linear model for the logit: L_i = β0 + β1 X1 + β2 X2 + β3 X3

Non-linear model for the probability:
π_i = 1 / (1 + e^(-(β0 + β1 X1 + β2 X2 + β3 X3)))
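A minimal numeric sketch of the logit-probability relation in R (the three logit values are illustrative, not from the slides):

L <- c(-2, 0, 2)          # logits
p <- 1 / (1 + exp(-L))    # probabilities: about 0.119, 0.5, 0.881
log(p / (1 - p))          # back to the logits: -2, 0, 2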

Fitting the logistic regression

Example: Y = Vot, X = Lrenda

π_i = 1 / (1 + e^(-L_i)), so the fitted probability is π̂_i = 1 / (1 + e^(-(12.389 - 1.208 Lrenda_i)))

glm(Vot ~ Lrenda, family = "binomial")

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   12.389      1.027   12.07   <2e-16 ***
Lrenda        -1.208      0.101  -11.96   <2e-16 ***
Number of Fisher Scoring iterations: 4

L̂ = 12.389 - 1.208 Lrenda

Figure: Vot vs. Lrenda: linear versus logistic fits
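A minimal sketch to reproduce this comparison, assuming the data are attached as above (the plotting grid and line type are illustrative choices):

fit.lin <- lm(Vot ~ Lrenda)
fit.log <- glm(Vot ~ Lrenda, family = "binomial")
x <- seq(6, 14, length.out = 200)
plot(Lrenda, Vot)
lines(x, predict(fit.lin, newdata = data.frame(Lrenda = x)), lty = 2)            # linear model
lines(x, predict(fit.log, newdata = data.frame(Lrenda = x), type = "response"))  # logistic model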

Interpretation of the parameters

L̂ = 12.389 - 1.208 Lrenda

exp(-1.208) = 0.2987943
0.2987943 - 1 = -0.7012057

The odds decrease by about 70% when X → X + 1.

% increase/decrease of the odds: (exp(β) - 1) × 100

Vot versus Lrenda + Genere

glm(formula = Vot ~ Lrenda + Genere, family = "binomial")

            Estimate Std. Error z value Pr(>|z|)
(Intercept)  12.2964     1.2207   10.07   <2e-16 ***
Lrenda       -1.3238     0.1229  -10.77   <2e-16 ***
Genere        2.6149     0.2017   12.97   <2e-16 ***

L̂ = 12.2964 - 1.3238 Lrenda + 2.6149 Genere

> 100*(exp(-1.3238) - 1)
[1] -73.38779
> 100*(exp(2.6149) - 1)
[1] 1266.585

- Lrenda → Lrenda + 1 decreases the odds of Vot = 1 by 73%, controlling for gender.
- The odds of Vot = 1 for boys (Genere = 1) are 1266% higher than those for girls (Genere = 0), controlling for income.
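A minimal sketch that turns these fitted logits into predicted probabilities for boys and girls at a fixed income; the value Lrenda = 10 is an illustrative choice, not from the slides:

fit2 <- glm(Vot ~ Lrenda + Genere, family = "binomial")
newd <- data.frame(Lrenda = 10, Genere = c(1, 0))    # boy vs. girl at Lrenda = 10
predict(fit2, newdata = newd, type = "response")     # P(Vot = 1) for each case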

Figure: Logistic curves of Vot vs. Lrenda, marginal (simple regression) and conditional (multiple regression, curves for boys and for girls)

(Optional) More on logistic regression: lrm

library(rms)
lrm(Vot ~ Lrenda + Genere, y = T, x = T)

Logistic Regression Model

Obs 800 (0: 360, 1: 440), max |deriv| 5e-07
Model Likelihood Ratio Test: LR chi2 413.74, d.f. 2, Pr(> chi2) <0.0001
Discrimination Indexes: R2 0.540, g 2.372, gr 10.718, gp 0.379, Brier 0.140
Rank Discrim. Indexes: C 0.881, Dxy 0.763, gamma 0.764, tau-a 0.378

          Coef     S.E.    Wald Z  Pr(>|Z|)
Intercept 12.2964  1.2207   10.07  <0.0001
Lrenda    -1.3238  0.1229  -10.77  <0.0001
Genere     2.6149  0.2017   12.97  <0.0001

(Optional) More on logistic regression: e^b and the % increment of the odds

Suppose the fitted logistic regression is L = -2 + 2x, and consider a unit increase of x, x → x + 1, so Logit2 = -2 + 2(x + 1). The % increment of the odds is (e^b - 1)·100 = (e^2 - 1)·100 ≈ 639%.

### when p is around 0.5
x = 1
Logit1 = -2 + 2*x
Logit2 = -2 + 2*(x+1)
prob1 = 1/(1 + exp(-Logit1))
prob2 = 1/(1 + exp(-Logit2))
((prob2 - prob1)/prob1)*100
> prob1
[1] 0.5
> prob2
[1] 0.8807971

### p = very low
x = 1

Optional: Case influence

Figure: Y vs. X scatterplot with labeled cases and regression lines fitted to all cases, excluding 2 and 11, excluding 1, and excluding 1 and 14

(Optional): multicollinearity

datafile = read.table("/users/albertsatorra/rstudio/datasets/regressiomulticol.dat")
reg = lm(y ~ X1 + factor(X2) + X3)
summary(reg)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.4987     1.4834   0.336  0.73751
X1           -0.8042     3.3921  -0.237  0.81310
factor(X2)1   3.1534     1.7499   1.802  0.07475 .
factor(X2)2   5.4445     1.8216   2.989  0.00357 **
factor(X2)3   7.3730     2.4295   3.035  0.00311 **
X3            5.8343     3.3368   1.748  0.08365 .
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 6.082 on 94 degrees of freedom
Multiple R-squared: 0.433, Adjusted R-squared: 0.4028
F-statistic: 14.36 on 5 and 94 DF, p-value: 1.961e-10


(Optional) s.e. robust to cluster effects: library rms

library(rms)
fit = lrm(Vot ~ Lrenda + Genere, y = T, x = T)
length(Vot)
[1] 800
# assume we have a cluster variable clust
clust = sample(1:40, 800, replace = T)
robcov(fit, cluster = clust)

Logistic Regression Model

Obs 800 (0: 360, 1: 440), max |deriv| 5e-07
Model Likelihood Ratio Test: LR chi2 413.74, d.f. 2, Pr(> chi2) <0.0001
Discrimination Indexes: R2 0.540, g 2.372, gr 10.718, gp 0.379, Brier 0.140
Rank Discrim. Indexes: C 0.881, Dxy 0.763, gamma 0.764, tau-a 0.378

          Coef     S.E.    Wald Z  Pr(>|Z|)
Intercept 12.2964  1.3397    9.18  <0.0001
Lrenda    -1.3238  0.1359   -9.74  <0.0001
Genere     2.6149  0.1790   14.61  <0.0001

(Optional) s.e. robust to cluster effects: bootstrap

> bootcov(fit, cluster = clust)
Logistic Regression Model
lrm(formula = Vot ~ Lrenda + Genere, x = T, y = T)

          Coef     S.E.    Wald Z  Pr(>|Z|)
Intercept 12.2964  1.4004    8.78  <0.0001
Lrenda    -1.3238  0.1420   -9.32  <0.0001
Genere     2.6149  0.1814   14.41  <0.0001

Additional material on logistic regression
1. Course web, M2014: Slides Logit Regression, more details on logistic regression + other material in the logistic regression section.
2. IDRE UCLA: SPSS Data Analysis Examples, Logit Regression; R Data Analysis Examples, Logit Regression.

Data file

Random sample of size n = 1000 from a population. Variables:

data = read.table("http://www.econ.upf.edu/~satorra/m/dadesregressio2014.txt", header = T)
# data = read.spss("http://www.econ.upf.edu/~satorra/m/dadesme2014.sav")
# data = as.data.frame(data)
names(data)
"Y1" "Y2" "X1" "X2" "X3" "X4" "X5" "X6"

> head(data)
      Y1 Y2    X1    X2   X3   X4 X5 X6
1 -19.18  0  0.96  2.78 0.84 2.32  0  2
2 -19.66  0  4.25 -0.44 3.82 3.24  0  2
3 -24.35  1  2.47  1.04 2.85 3.23  0  4
4 -20.75  0  3.10  0.90 1.66 2.63  0  2
5 -22.46  0  2.60  1.36 1.84 2.36  0  3
6 -22.82  1 -0.17  3.95 0.77 2.49  1  3

Y1 is log expenses; Y2 is voting; X5 is home (= 1); X6 is categorical; X1 to X4 are indicators related to income (latent).

Figure: Matrix plot of the new data set (pairwise scatterplots of Y1, Y2, X1, X2, X3, X4, X5)
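A minimal sketch to reproduce a matrix plot of this kind, assuming data has been read with read.table as above:

# pairwise scatterplots of the variables shown in the figure
pairs(data[, c("Y1", "Y2", "X1", "X2", "X3", "X4", "X5")])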

Multiple regression

fit1 = lm(Y1 ~ X1 + X2 + X3 + X4 + X5 + factor(X6))
summary(fit1)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -22.98561    0.71205 -32.281   <2e-16 ***
X1            0.20459    0.17081   1.198   0.2313
X2           -0.28337    0.16639  -1.703   0.0889 .
X3           -0.04619    0.05681  -0.813   0.4163
X4            0.00525    0.11536   0.046   0.9637
X5           -0.07055    0.12551  -0.562   0.5742
factor(X6)2   2.83711    0.14656  19.359   <2e-16 ***
factor(X6)3  -0.08415    0.12816  -0.657   0.5116
factor(X6)4   0.02103    0.12875   0.163   0.8703

Residual standard error: 1.437 on 991 degrees of freedom
Multiple R-squared: 0.454, Adjusted R-squared: 0.4496
F-statistic: 103 on 8 and 991 DF, p-value: < 2.2e-16

Multiple regression (excluding X2)

fit1 = lm(Y1 ~ X1 + X3 + X4 + X5 + factor(X6))
summary(fit1)

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -24.074327   0.313959 -76.680  < 2e-16 ***
X1            0.472673   0.066398   7.119 2.09e-12 ***
X3           -0.042238   0.056819  -0.743    0.457
X4            0.002787   0.115466   0.024    0.981
X5           -0.100423   0.124402  -0.807    0.420
factor(X6)2   2.822129   0.146432  19.273  < 2e-16 ***
factor(X6)3  -0.070031   0.128018  -0.547    0.584
factor(X6)4   0.028213   0.128803   0.219    0.827
---
Residual standard error: 1.438 on 992 degrees of freedom
Multiple R-squared: 0.4524, Adjusted R-squared: 0.4485
F-statistic: 117.1 on 7 and 992 DF, p-value: < 2.2e-16

Diagnostics in linear multiple regression (Fox's library(car))

library(car)
fit1 = lm(Y1 ~ X1 + X2 + X3 + X4 + X5 + factor(X6))

# Evaluate collinearity
vif(fit1)              # variance inflation factors
sqrt(vif(fit1)) > 2    # problem? VIF > 4?

ncvTest(fit1)
Non-constant Variance Score Test
Variance formula: ~ fitted.values
Chisquare = 0.2619034  Df = 1  p = 0.6088155

> durbinWatsonTest(fit1)
 lag Autocorrelation D-W Statistic p-value
   1     -0.03516656      2.069103   0.286
 Alternative hypothesis: rho != 0

## Multicollinearity
> vif(fit1)
                GVIF Df GVIF^(1/(2*Df))
X1         14.969943  1        3.869101
X2         14.524505  1        3.811103
X3          1.136309  1        1.065978
X4          1.632153  1        1.277557
X5          1.907518  1        1.381129
factor(X6)  1.401702  3        1.057895

Component + Residual Plots

Figure: component + residual plot of Y1 against each of X1, X2, X3, X4, X5 and factor(X6)

Regression with a principal component

> F1 = princomp(cbind(X1, X2, X3, X4))$scores[, 1]
> fit1 = lm(Y1 ~ F1 + X5 + factor(X6))
> summary(fit1)

Call:
lm(formula = Y1 ~ F1 + X5 + factor(X6))

Residuals:
    Min      1Q  Median      3Q     Max
-4.6418 -0.9351  0.0393  0.9297  4.1060

Coefficients:
              Estimate Std. Error  t value Pr(>|t|)
(Intercept) -23.20763    0.11828 -196.204  < 2e-16 ***
F1            0.30851    0.03699    8.341 2.43e-16 ***
X5           -0.10188    0.12490   -0.816    0.415
factor(X6)2   2.82883    0.14666   19.288  < 2e-16 ***
factor(X6)3  -0.06937    0.12767   -0.543    0.587
factor(X6)4   0.02032    0.12875    0.158    0.875
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.439 on 994 degrees of freedom
Multiple R-squared: 0.4509, Adjusted R-squared: 0.4482
F-statistic: 163.3 on 5 and 994 DF, p-value: < 2.2e-16

Logistic regression

library(rms)
lrm(formula = Y2 ~ X1 + X2 + X3 + X4 + X5 + factor(X6))

Obs 1000 (0: 537, 1: 463), max |deriv| 1e-07
Model Likelihood Ratio Test: LR chi2 405.30, d.f. 8, Pr(> chi2) <0.0001
Discrimination Indexes: R2 0.445, g 2.003, gr 7.412, gp 0.341, Brier 0.162
Rank Discrim. Indexes: C 0.842, Dxy 0.685, gamma 0.686, tau-a 0.341

           Coef     S.E.    Wald Z  Pr(>|Z|)
Intercept  -2.0695  1.2476   -1.66   0.0972
X1          0.0250  0.3004    0.08   0.9336
X2          1.3035  0.2989    4.36   <0.0001
X3         -0.0816  0.0967   -0.84   0.3987
X4         -0.0963  0.2039   -0.47   0.6365
X5         -0.1624  0.1986   -0.82   0.4134
X6=2       -2.6178  0.2996   -8.74   <0.0001
X6=3        0.3319  0.2102    1.58   0.1144
X6=4        0.5034  0.2117    2.38   0.0174