On the general understanding of the empirical Bayes method
Transcript
On the general understanding of the empirical Bayes method
Judith Rousseau (Université Paris Dauphine, Paris, France) and Botond Szabó (Budapest University of Technology and Economics, Budapest, Hungary)
ERCIM 2014, Pisa

Table of contents: 1 Introduction; 2 General theorem on EB; 3 Examples (Gaussian white noise model, nonparametric regression, density function problem); 4 Epilogue.

Motivation
Applications: genetics, Clark & Swanson (2005); contextual region classification, Lazebnik et al. (2009); high-dimensional classification, Chen et al. (2008); robotics, Schauerte et al. (2013).
Although the empirical Bayes method is widely used in practice, it does not have a full theoretical underpinning.

Bayes vs frequentist approach
Statistical model: consider a collection of distributions $\mathcal{P} = \{P_\theta : \theta \in \Theta\}$.
Frequentist school: model $X^{(n)} \sim P_{\theta_0}$ for some true $\theta_0 \in \Theta$; goal: recover $\theta_0$ via an estimator $\hat\theta(X^{(n)})$.
Bayesian school: model $\theta \sim \Pi$ (prior), $X^{(n)} \mid \theta \sim P_\theta$; goal: update our belief about $\theta$ via the posterior $\theta \mid X^{(n)}$.
Frequentist Bayes: investigate Bayesian techniques from a frequentist perspective, i.e. assume that there exists a true $\theta_0$ and study the behaviour of the posterior $\theta \mid X^{(n)}$ under $P_{\theta_0}$.

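As a minimal concrete illustration of both viewpoints (my own example, not on the slides), take the conjugate normal location model, with a prior variance $\lambda^2$ that anticipates the hyper-parameter discussed next: data $X_1,\ldots,X_n \mid \theta \sim N(\theta,1)$ i.i.d. and prior $\theta \sim \Pi_\lambda = N(0,\lambda^2)$. Then

```latex
% Conjugate posterior in the toy normal model (illustration only):
\[
\theta \mid X^{(n)} \sim N\!\Big(\frac{n\lambda^2}{1+n\lambda^2}\,\bar X_n,\;\frac{\lambda^2}{1+n\lambda^2}\Big),
\qquad \bar X_n = \frac1n\sum_{i=1}^n X_i .
\]
```

A frequentist analysis of this posterior checks that, under $P_{\theta_0}$, it concentrates around $\theta_0$ at rate $1/\sqrt{n}$ for any fixed $\lambda > 0$; the nonparametric setting below is more delicate precisely because there the hyper-parameter can affect the rate.
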
Adaptive Bayes
Assume that we have a family of prior distributions indexed by a hyper-parameter $\lambda$: $\{\Pi_\lambda : \lambda \in \Lambda\}$.
Problem: in nonparametric models the posterior crucially depends on the prior, hence on the hyper-parameter.
Question: how to choose $\lambda$? A fixed $\lambda$, chosen without strong prior belief, can be misleading. Instead, use the data to find $\lambda$, i.e. adaptive techniques:
Hierarchical Bayes: endow $\lambda$ with a hyper-prior $\pi(\lambda)$.
Empirical Bayes: estimate $\lambda$ from the data $X^{(n)}$.

Empirical Bayes method
EB method: use a frequentist estimator for the hyper-parameter $\lambda$.
Marginal likelihood empirical Bayes: plug the marginal maximum likelihood estimator
$\hat\lambda_n = \arg\max_{\lambda \in \Lambda} \int_\Theta e^{\ell_n(\theta)}\,\Pi_\lambda(d\theta)$,
where $\ell_n(\theta)$ is the log-likelihood, into the posterior:
$\Pi_{\hat\lambda_n}(\cdot \mid X^{(n)}) = \Pi_\lambda(\cdot \mid X^{(n)})\big|_{\lambda=\hat\lambda_n}$.
This mimics the HB method. Other frequentist estimators for $\hat\lambda_n$: MM, MRE, ... The method is widely used in the literature, BUT a full theoretical justification is missing.

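For concreteness, here is a minimal sketch of marginal-likelihood empirical Bayes in the Gaussian sequence model treated later in the talk, where conjugacy makes the marginal likelihood explicit. The code is my own illustration (the function names, the grid search, and the choice of truth are assumptions, not material from the slides):

```python
import numpy as np

# Marginal-likelihood empirical Bayes in the Gaussian sequence model
# X_i = theta_{0,i} + Z_i / sqrt(n), with prior Pi_alpha = prod_i N(0, i^{-1-2*alpha}).
# Conjugacy gives X_i ~ N(0, i^{-1-2*alpha} + 1/n) marginally, so the marginal
# likelihood is explicit and the MMLE can be found by a simple grid search.

def marginal_loglik(alpha, x, n):
    i = np.arange(1, len(x) + 1)
    var = i ** (-1.0 - 2.0 * alpha) + 1.0 / n            # marginal variance of X_i
    return -0.5 * np.sum(np.log(2 * np.pi * var) + x ** 2 / var)

def eb_posterior(x, n, alpha_grid):
    # MMLE over the grid, then the plug-in (empirical Bayes) posterior, coordinate-wise.
    alpha_hat = alpha_grid[int(np.argmax([marginal_loglik(a, x, n) for a in alpha_grid]))]
    i = np.arange(1, len(x) + 1)
    post_var = 1.0 / (n + i ** (1.0 + 2.0 * alpha_hat))  # = 1 / (n + 1/prior_var)
    return alpha_hat, n * post_var * x, post_var         # (alpha_hat, post. mean, post. var)

rng = np.random.default_rng(0)
n, N, beta = 1000, 500, 1.0
i = np.arange(1, N + 1)
theta0 = i ** (-0.5 - beta)                              # a typical element of Theta_beta
x = theta0 + rng.standard_normal(N) / np.sqrt(n)
alpha_hat, post_mean, post_var = eb_posterior(x, n, np.linspace(0.1, 3.0, 59))
print(alpha_hat, np.sum((post_mean - theta0) ** 2))
```

The only numerical work is the one-dimensional maximisation over $\alpha$; everything else is in closed form, which is what makes this model a convenient test case for the general theory.
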
Theoretical investigation
Frequentist analysis: consider a loss $L$ and a collection of nested sub-classes $\{\Theta_\beta : \beta \in B\}$.
Minimax risk: $r_{n,\beta} = \inf_{\hat\theta_n \in T_n} \sup_{\theta \in \Theta_\beta} E_\theta L(\hat\theta_n, \theta)$.
Do we have an adaptive contraction rate, i.e.
$\inf_{\theta_0 \in \Theta_\beta} E_{\theta_0}\,\Pi_{\hat\lambda_n}\big(\theta : L(\theta, \theta_0) \le M r_{n,\beta} \mid X^{(n)}\big) \to 1$
for all $\beta \in B$ and a large enough constant $M > 0$?
Literature: specific models: Florens & Simoni (2012), Knapik et al. (2012), Szabó et al. (2013), Serra & Krivobokova (2014). Comparing EB and HB in parametric models: Petrone et al. (2014). General nonparametric models, BUT only for well-behaved estimators $\hat\lambda_n$: Donnet et al. (2014).

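For orientation, a standard benchmark (assumed here for illustration, not stated on the slide): in the Gaussian white noise example below, with $L$ the $\ell_2$-distance and Sobolev-type classes $\Theta_\beta$, the minimax risk is polynomial in $n$,

```latex
% Classical benchmark for the white noise model over Sobolev-type classes:
\[
L(\theta,\theta_0)=\|\theta-\theta_0\|_2,
\qquad
r_{n,\beta}\;\asymp\; n^{-\beta/(1+2\beta)} ,
\]
```

so adaptation means that the EB posterior attains the rate $n^{-\beta/(1+2\beta)}$ simultaneously for all $\beta$, without knowing the regularity $\beta$ of the truth.
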
The set of possible hyper-parameters $\lambda$
Determine the location of $\hat\lambda_n$: define $\varepsilon_n(\lambda) = \varepsilon_n(\lambda, \theta_0)$ such that
$\Pi_\lambda\big(\theta : \|\theta - \theta_0\|_2 \le K\varepsilon_n(\lambda)\big) = e^{-n\varepsilon_n^2(\lambda)}$,
for some $K > 0$ (specified later).
Denote $m_n = \inf_{\beta \in B} r_{n,\beta}$ and assume that $m_n \gtrsim 1/\sqrt{n}$. Define the set $\Lambda_n = \{\lambda : \varepsilon_n(\lambda) \ge m_n\}$ and let $\varepsilon_{n,0} = \min\{\varepsilon_n(\lambda) : \lambda \in \Lambda_n\}$.
Finally, define the set of probable hyper-parameters
$\Lambda_0 = \{\lambda : \varepsilon_n(\lambda) \le M_n \varepsilon_{n,0}\} \cup \Lambda_n^c$,
for some $M_n$ tending to infinity. Our first goal is to show that $\hat\lambda_n \in \Lambda_0$.

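The quantity $\varepsilon_n(\lambda)$ is defined only implicitly. As a toy illustration of the definition (entirely my own, not part of the talk), one can solve the defining equation numerically in a one-dimensional conjugate model, where the prior ball mass is an explicit normal probability:

```python
import math

# Solve  Pi_lambda( |theta - theta_0| <= K * eps ) = exp(-n * eps^2)  for eps,
# with the one-dimensional prior Pi_lambda = N(0, lambda^2).  In the nonparametric
# examples of the talk the same equation is handled asymptotically, not numerically.

def prior_ball_mass(eps, lam, theta0, K):
    # Pi_lambda(|theta - theta0| <= K*eps) for theta ~ N(0, lam^2),
    # written via the survival function Q(t) = P(Z > t) for numerical stability.
    Q = lambda t: 0.5 * math.erfc(t / math.sqrt(2.0))
    a = (theta0 - K * eps) / lam
    b = (theta0 + K * eps) / lam
    return Q(a) - Q(b)

def eps_n(lam, theta0, n, K=2.0, lo=1e-10, hi=10.0, iters=200):
    # g(eps) = log prior ball mass + n*eps^2 is increasing in eps; bisect for its root.
    g = lambda eps: math.log(prior_ball_mass(eps, lam, theta0, K)) + n * eps ** 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

for lam in (0.3, 1.0, 3.0):
    print(lam, eps_n(lam, theta0=1.0, n=100))
```

Priors that place more mass near $\theta_0$ solve the equation at a smaller $\varepsilon$, which is the sense in which a small $\varepsilon_n(\lambda)$ flags a "good" hyper-parameter value.
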
Conditions
Following Donnet et al. (2014) we introduce some assumptions.
Entropy (hyper): discretize $\Lambda_0^c$ into $N_n$ balls $B(\lambda_i, u_n, \|\cdot\|)$ and assume that, for some $w_n \le M_n$,
$\log N_n = o\big(w_n^2\, n\varepsilon_{n,0}^2\big)$.
Transformation: let $\psi_{\lambda,\lambda'} : \Theta \to \Theta$ be such that if $\theta \sim \Pi_\lambda(\cdot)$ then $\psi_{\lambda,\lambda'}(\theta) \sim \Pi_{\lambda'}(\cdot)$ for $\lambda, \lambda' \in \Lambda$, and introduce the notation
$dQ^\theta_{\lambda,n}(X^{(n)}) = \sup_{\|\lambda' - \lambda\| \le u_n} e^{\ell_n(\psi_{\lambda,\lambda'}(\theta))(X^{(n)})}\, d\mu(X^{(n)})$.
Boundedness: for all $\theta \in B(\theta_0, \varepsilon_n(\lambda), \|\cdot\|_2)$,
$Q^\theta_{\lambda,n}(\mathcal{X}^n) \le e^{c\, n\varepsilon_n^2(\lambda)}$, with $c < 1$.
Sieve: for all $\lambda \in \Lambda_0^c$ assume that there exists $\Theta_n(\lambda)$ such that
$\int_{\Theta_n(\lambda)^c} Q^\theta_{\lambda,n}(\mathcal{X}^n)\, \Pi_\lambda(d\theta) \le e^{-w_n^2\, n\varepsilon_{n,0}^2}$.

Conditions II
Tests: for all $\lambda \in \Lambda_0^c$ and all $\theta \in \Theta_n(\lambda)$ there exist tests $\varphi_{n,i}(\theta)$ such that
$E_{\theta_0}(\varphi_{n,i}(\theta)) \le e^{-c_1 n d^2(\theta, \theta_0)}$, $\quad \sup_{d(\theta, \theta') \le \zeta d(\theta, \theta_0)} Q^{\theta'}_{\lambda_i,n}\big(1 - \varphi_{n,i}(\theta)\big) \le e^{-c_1 n d^2(\theta, \theta_0)}$,
with $0 < \zeta < 1$, and for some large enough $C > 0$,
$\{\|\theta - \theta_0\|_2 > C\varepsilon_n(\lambda),\ \theta \in \Theta_n(\lambda)\} \subset \{d(\theta, \theta_0) \ge \varepsilon_n(\lambda),\ \theta \in \Theta_n(\lambda)\}$.
Entropy: for all $u \ge C\varepsilon_n(\lambda)$,
$\log N\big(\zeta u,\ \{u \le d(\theta, \theta_0) \le 2u\} \cap \Theta_n(\lambda),\ d(\cdot,\cdot)\big) \le c_1 n u^2 / 2$.
Local metric exchange: there exist $M_1, M_2 > 0$ and $\lambda_n \in \Lambda_0$ satisfying $\varepsilon_n(\lambda_n) \le M_1 \varepsilon_{n,0}$ such that
$\{\|\theta - \theta_0\|_2 \le \varepsilon_n(\lambda_n)\} \subset B_n\big(\theta_0, M_2 \varepsilon_n(\lambda_n), \{KL(p_\theta, p_{\theta_0}), V_{0,k}(p_\theta, p_{\theta_0})\}\big)$.

Main theorem
Theorem: assume that all the above conditions hold. Then for all $\theta_0 \in \Theta$, with $P_{\theta_0}$-probability tending to one, $\hat\lambda_n \in \Lambda_0$.
Our next goal is to give upper bounds for the EB contraction rate. Following Donnet et al. (2014), assume in addition:
Uniform likelihood ratio: $\sup_{\lambda \in \Lambda_0} \sup_{\theta \in B(\theta_0, \lambda)} P_{\theta_0}\Big(\inf_{\|\lambda' - \lambda\| \le u_n} \ell_n(\psi_{\lambda,\lambda'}(\theta)) - \ell_n(\theta_0) \le -K_5\, n\varepsilon_{n,0}^2\Big) = o\big(1 / N_n(u_n)\big)$.
Stronger entropy (hyper): $N_n(u_n) = o\big((n\varepsilon_{n,0}^2)^{k/2}\big)$.
Theorem: under the preceding conditions the empirical Bayes posterior distribution contracts around the truth at rate $M_n \varepsilon_{n,0}$:
$\Pi_{\hat\lambda_n}\big(\theta : \|\theta - \theta_0\|_2 \le M_n \varepsilon_{n,0} \mid X^{(n)}\big) \xrightarrow{P_{\theta_0}} 1$.

Gaussian white noise model
Model: we observe the sequence $X^{(n)} = (X_1, X_2, \ldots)$ satisfying
$X_i = \theta_{0,i} + (1/\sqrt{n})\, Z_i$, $\quad i = 1, 2, \ldots$,
where $\theta_0 = (\theta_{0,1}, \theta_{0,2}, \ldots)$ is the unknown infinite-dimensional parameter and the $Z_i$ are i.i.d. standard normal random variables.
Sub-classes: $\Theta_\beta(M) = \{\theta \in \ell_2 : \theta_i^2 \le M i^{-1-2\beta}\}$.
Priors:
$\Pi_\alpha(\cdot) = \bigotimes_{i=1}^\infty N(0, i^{-1-2\alpha})$, see Knapik et al. (2012);
$\Pi_\tau(\cdot) = \bigotimes_{i=1}^\infty N(0, \tau^2 i^{-1-2\alpha})$, see Szabó et al. (2013);
$\Pi_N(\cdot) = \bigotimes_{i=1}^N g(\cdot)$, where $G_1 e^{-G_2 |t|^\alpha} \le g(t) \le G_3 e^{-G_4 |t|^\alpha}$, see Arbel et al. (2012) for HB;
$\Pi_\gamma(\cdot) = \bigotimes_{i=1}^\infty N(0, e^{-\gamma i})$, see Castillo et al. (2014) for HB.

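Because the model and the Gaussian priors are conjugate, both the plug-in posterior and the marginal likelihood driving the MMLE are explicit; the following standard computation, written for $\Pi_\alpha$ (spelled out here for convenience, not copied from the slides), is what the code sketch above exploits:

```latex
% Conjugate computations under \Pi_\alpha (standard; coordinate-wise, independent over i):
\[
\theta_i \mid X^{(n)} \sim N\!\Big(\frac{n}{\,n + i^{1+2\alpha}\,}\,X_i,\ \frac{1}{\,n + i^{1+2\alpha}\,}\Big),
\qquad
X_i \sim N\!\big(0,\ i^{-1-2\alpha} + n^{-1}\big)\ \text{marginally},
\]
```

so the marginal log-likelihood maximised by $\hat\alpha_n$ is $L_n(\alpha) = -\tfrac12 \sum_i \big(\log(i^{-1-2\alpha} + n^{-1}) + X_i^2/(i^{-1-2\alpha} + n^{-1})\big)$ up to a constant.
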
GWN model with regularity hyper-parameter
Prior: $\Pi_\alpha(\cdot) = \bigotimes_{i=1}^\infty N(0, i^{-1-2\alpha})$. All the conditions of our theorems are met.
Upper bound on $\varepsilon_n(\alpha)$ (for $\theta_0 \in \Theta_\beta$): by the concentration inequality of van der Vaart & van Zanten (2008),
$n\varepsilon_n^2(\alpha) = -\log \Pi_\alpha\big(\theta : \|\theta - \theta_0\|_2 \le K\varepsilon_n(\alpha)\big) \le \varphi_{\alpha,\theta_0}\big(K\varepsilon_n(\alpha)/2\big)$,
where the concentration function $\varphi_{\alpha,\theta_0}$ is the sum of a centered small-ball term and an RKHS approximation term.
Centered small ball, Li & Shao (2001): $-\log \Pi_\alpha\big(\theta : \|\theta\|_2 \le K\varepsilon_n(\alpha)/2\big) \lesssim (K/2)^{-1/\alpha}\, \varepsilon_n(\alpha)^{-1/\alpha}$.
RKHS term: $\inf\big\{\|h\|_{\mathbb{H}^\alpha}^2 : h \in \mathbb{H}^\alpha,\ \|h - \theta_0\|_2 \le \varepsilon_n(\alpha)\big\} \le \sum_{i \le C\varepsilon_n^{-1/\beta}} i^{1+2\alpha}\, \theta_{0,i}^2 \lesssim \varepsilon_n(\alpha)^{-(1+2\alpha-2\beta)/\beta}$.
Solution: $\varepsilon_n(\alpha) \asymp n^{-(\alpha \wedge \beta)/(1+2\alpha)}$.
EB rate: $\alpha_0 = \beta$, hence $\varepsilon_{n,0} = \varepsilon_n(\alpha_0) \asymp n^{-\beta/(1+2\beta)}$.

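The stated solution follows from balancing $n\varepsilon_n^2$ against each of the two terms separately (a short verification, not spelled out on the slide):

```latex
% Balancing n*eps^2 against the small-ball and RKHS terms:
\[
n\varepsilon_n^2 \asymp \varepsilon_n^{-1/\alpha}
  \;\Longrightarrow\; \varepsilon_n \asymp n^{-\alpha/(1+2\alpha)},
\qquad
n\varepsilon_n^2 \asymp \varepsilon_n^{-(1+2\alpha-2\beta)/\beta}
  \;\Longrightarrow\; \varepsilon_n \asymp n^{-\beta/(1+2\alpha)} .
\]
```

The larger of the two solutions dominates, giving $\varepsilon_n(\alpha) \asymp n^{-(\alpha\wedge\beta)/(1+2\alpha)}$, which as a function of $\alpha$ is minimised at $\alpha_0 = \beta$, yielding the minimax rate $n^{-\beta/(1+2\beta)}$.
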
GWN model with scaling hyper-parameter
Prior: $\Pi_\tau(\cdot) = \bigotimes_{i=1}^\infty N(0, \tau^2 i^{-1-2\alpha})$, with fixed $\alpha > 0$. All the conditions of our theorems are met.
Upper bound on $\varepsilon_n(\tau)$ (for $\theta_0 \in \Theta_\beta$): similarly to the regularity hyper-parameter case,
$n\varepsilon_n^2(\tau) \lesssim -\log \Pi_\tau\big(\theta : \|\theta\|_2 \le K\varepsilon_n(\tau)/2\big) + \inf\big\{\|h\|_{\mathbb{H}^\tau}^2 : h \in \mathbb{H}^\tau,\ \|h - \theta_0\|_2 \le \varepsilon_n(\tau)\big\} \lesssim \tau^{1/\alpha}\, \varepsilon_n(\tau)^{-1/\alpha} + \tau^{-2} \sum_{i \le \varepsilon_n(\tau)^{-1/\beta}} i^{2(\alpha-\beta)}$.
EB posterior rate:
$\varepsilon_n(\tau_0) \asymp n^{-\beta/(1+2\beta)}$ if $\beta < \alpha + 1/2$;
$\varepsilon_n(\tau_0) \asymp n^{-\beta/(1+2\beta)} (\log n)^{1/(1+2\beta)}$ if $\beta = \alpha + 1/2$;
$\varepsilon_n(\tau_0) \asymp n^{-(1/2+\alpha)/(2+2\alpha)}$ if $\beta > \alpha + 1/2$.

Nonparametric regression model
Fixed design: assume that we observe $Y_1, Y_2, \ldots, Y_n$ satisfying
$Y_i = f_0(x_i) + Z_i$, $\quad i = 1, 2, \ldots, n$,
where the $Z_i$ are i.i.d. standard Gaussian random variables and $x_i = i/n$.
Series decomposition: denote by $\theta_0 = (\theta_{0,1}, \theta_{0,2}, \ldots)$ the Fourier coefficients of the regression function $f_0 \in L_2(M)$: $f_0(t) = \sum_{j=1}^\infty \theta_{0,j}\, \psi_j(t)$.
Prior: $\Pi_\alpha(\cdot) = \bigotimes_{i=1}^\infty N(0, i^{-1-2\alpha})$ on $\theta$.
EB posterior rate: for $\theta_0 \in S^\beta(M)$ we have $\varepsilon_n(\alpha_0) \asymp n^{-\beta/(1+2\beta)}$.

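A minimal sketch of the series reduction behind this slide (my own illustration; the cosine basis and the specific $f_0$ are assumptions): empirical basis coefficients of the fixed-design data behave like the Gaussian sequence model of the previous example.

```python
import numpy as np

# Empirical coefficients of fixed-design regression data behave like the sequence
# model: theta_hat_j = theta_{0,j} + noise of size O(1/sqrt(n)), coordinate-wise.

def psi(j, t):
    # An orthonormal cosine basis on [0, 1]: psi_1 = 1, psi_j = sqrt(2) cos((j-1) pi t).
    return np.ones_like(t) if j == 1 else np.sqrt(2.0) * np.cos((j - 1) * np.pi * t)

rng = np.random.default_rng(1)
n, J = 2000, 50
x = np.arange(1, n + 1) / n                           # fixed design x_i = i/n
f0 = lambda t: np.sin(2 * np.pi * t) + 0.5 * t        # some regression function
y = f0(x) + rng.standard_normal(n)                    # Y_i = f_0(x_i) + Z_i

theta_hat = np.array([np.mean(y * psi(j, x)) for j in range(1, J + 1)])
theta_true = np.array([np.mean(f0(x) * psi(j, x)) for j in range(1, J + 1)])
print(np.std(theta_hat - theta_true) * np.sqrt(n))    # approximately 1
```

From here the EB machinery of the white noise example applies essentially unchanged, which is why the same rate $n^{-\beta/(1+2\beta)}$ appears.
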
Density function problem
Model: let $X_1, X_2, \ldots, X_n$ be an i.i.d. sample from a density $f_0$ on $[0,1]$. Assume that the density takes the form
$f_0(x) = \exp\big(\sum_{j=1}^\infty \theta_{0,j}\, \varphi_j(x) - c(\theta_0)\big)$, $\quad \theta_0 \in \ell_2$,
where $(\varphi_j)$ is an orthonormal basis of $L_2([0,1])$.
Prior: log-linear priors, Rivoirard & Rousseau (2012):
$f_\theta(x) = \exp\big(\sum_{j=1}^\infty \theta_j\, \varphi_j(x) - c(\theta)\big)$, $\quad \theta \in \ell_2$,
where the parameter $\theta \in \ell_2$ follows $\Pi_\tau(\cdot) = \bigotimes_{i=1}^\infty N(0, \tau^2 i^{-1-2\alpha})$ or $\Pi_N(\cdot) = \bigotimes_{i=1}^N g(\cdot)$.

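To see the log-linear parametrisation in action, here is a small sketch (mine, with a cosine basis, a truncated expansion, and a grid approximation of $c(\theta)$ as assumptions) that evaluates $f_\theta$ and the log-likelihood entering the marginal likelihood:

```python
import numpy as np

# Truncated log-linear density f_theta(x) = exp( sum_j theta_j phi_j(x) - c(theta) )
# on [0, 1]; c(theta) is approximated by averaging exp(sum_j theta_j phi_j) on a grid.

def phi(j, x):
    return np.sqrt(2.0) * np.cos(j * np.pi * x)       # orthonormal basis on [0, 1]

def log_density(theta, x, grid=np.linspace(0.0, 1.0, 2001)):
    s = lambda t: sum(th * phi(j + 1, t) for j, th in enumerate(theta))
    c = np.log(np.mean(np.exp(s(grid))))              # c(theta) ~ log of int_0^1 exp(...) dx
    return s(x) - c

rng = np.random.default_rng(2)
data = rng.uniform(size=200)                          # placeholder sample on [0, 1]
theta = np.array([0.5, -0.3, 0.1])                    # first three coefficients
print(np.sum(log_density(theta, data)))               # log-likelihood l_n(theta)
```

Unlike the Gaussian examples, the marginal likelihood is not available in closed form here, so the maximisation over the hyper-parameter requires numerical work in practice; the general theorems above cover this case as well.
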
Summary
We characterized the set $\Lambda_0$ to which the marginal likelihood estimator $\hat\lambda_n$ belongs with probability tending to one. We gave an upper bound on the EB contraction rate. We investigated various examples: Gaussian white noise (reproducing several specific results from the literature), nonparametric regression, and the density function problem.

Future/ongoing work
Extensions: consider other, more complex models; consider other metrics (at the moment we work with the $L_2$-norm); lower bounds on the contraction rates of the EB posterior; inverse problems; investigate the coverage properties of EB credible sets (under the polished tail assumption, see Szabó et al. (2014)); use the EB coverage results to derive general theorems on hierarchical Bayes credible sets.