INDIRECT ADAPTIVE CONTROL

OUTLINE
1. Introduction
   a. Main properties
   b. Running example
2. Adaptive parameter estimation
   a. Parameterized system model
   b. Linear parametric model
   c. Normalized gradient algorithm
   d. Normalized least-squares algorithm
   e. Discrete-time version of adaptive algorithms
3. Identification and robustness
   a. Parametric convergence and persistency of excitation
   b. Robustness of adaptive algorithms
4. Indirect adaptive control
   a. Model reference control
   b. Pole placement control
5. Adaptive observers

1. INTRODUCTION

Dynamic systems are characterized by their structures and parameters:

Linear:     Σ_l : x'(t) = A(θ)x + B(θ)u + d,   y = Cx + Du + v;
Nonlinear:  Σ_n : x'(t) = f(x, u, d, θ),       y = h(x, u) + v,

where x is the state vector, u is the control input, d is a disturbance, y is the output, v is noise, and θ is the vector of parameters.

Control system design steps:
1. Modeling: the plant P is represented by a model P_m ∈ {Σ_l, Σ_n}.
2. Control design: a controller C is designed for the model P_m (with uncertainty Δ).
3. Implementation: the controller C is applied to the actual plant P.
Stability, robustness, performance???

a. Main properties

Parameter estimation is the use of a collection of available system signals y and u, based on certain system structure information (Σ_l or Σ_n), to produce estimates θ(t) of the system parameters θ. It appears at step 1.

Adaptive parameter estimation is a dynamic estimation procedure that produces updated parameter estimates on-line. It appears at steps 2 & 3.

Adaptive parameter estimation is crucial for indirect adaptive control design, where the controller parameters θ_c(t) are some continuous functions of the estimates θ(t).

[Figure: the general scheme of adaptive control: controller C(θ_c), plant P_m(θ), and a strategy for control adjustment.]
[Figure: the scheme of indirect adaptive control: adaptive parameter estimation produces θ(t); control parameter derivation produces θ_c(t).]

Key issues in the classical adaptive parameter estimation:
- linear parameterization of system models,
- linear representation of parametric error models,
- stable design of adaptive estimation algorithms,
- analytical proof of system stability,
- parameter convergence,
- robustness of adaptive estimation.
Realization: continuous-time, discrete-time.

b. Running example

Moving vehicle:
- V is the velocity (the regulated variable), V' = dV/dt is the acceleration;
- m is the unknown vehicle mass;
- F_e is the engine force, F_e = k N_e, where N_e is the torque and k is an unknown conversion coefficient;
- F_f is the friction force, F_f = ρ V, where ρ is an unknown friction coefficient;
- F_l is the load force (unknown, dependent on the road profile).

The first-order dynamics (Newton's second law):

m V'(t) = F_e − (F_f + F_l) = k N_e − ρ V − F_l.

Define the state variable x = V, the control input u = N_e, and the disturbance d = −F_l / m:

x'(t) = −a x + b u + d,
y = x + v,                                                  (1)

where y is the output, v is the measurement noise, a = ρ/m, b = k/m.

Note: the engine from the introduction lecture has the same model.

Features:
- the constant parameters a > 0 and b > 0 are unknown, so (1) is a variant of Σ_l;
- the time-varying signals d and v are unknown but bounded;
- the unperturbed, noise-free case: d = v = 0;
- the reference signal is r = V_d, where V_d is the desired velocity.

Control problem (asymptotic tracking): x(t) → r(t) as t → +∞.
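As a numerical aside (not part of the original lecture), the model (1) can be integrated with a simple forward Euler scheme; the values a = 0.5, b = 1 are the illustrative ones used in the simulations later in the lecture, and the function name is ours.

```python
def simulate(a=0.5, b=1.0, u=lambda t: 1.0, d=lambda t: 0.0,
             x0=0.0, h=1e-3, T=20.0):
    """Forward-Euler integration of x'(t) = -a*x + b*u(t) + d(t),
    i.e. model (1) with noise-free output y = x."""
    x, t = x0, 0.0
    xs = [x0]
    for _ in range(int(T / h)):
        x += h * (-a * x + b * u(t) + d(t))
        t += h
        xs.append(x)
    return xs

# For a constant input u = 1 the velocity settles at the static gain b/a = 2.
xs = simulate()
```

In practice a = ρ/m and b = k/m are unknown, so such a simulator only plays the role of the "true" plant against which the estimators below can be tested.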

A variant of the solution: u = b⁻¹[(a − a_m) y + b_m r], where a_m > 0 and b_m are the parameters of the reference model:

x_m'(t) = −a_m x_m + b_m r.

The closed-loop system then has the form:

x'(t) = −a_m x + b_m r + w,   w = d + (a − a_m) v.

In the noise-free case (d = v = 0, hence w = 0) the variable x has the desired dynamics!

To design the control u we have to estimate the unknown parameters a and b! Let us try to solve this problem for the noise-free case; we will analyze the robustness issue later. In this case the model (1) can be rewritten as follows:

y'(t) = −a y + b u.                                         (2)

2. ADAPTIVE PARAMETER ESTIMATION

a. Parameterized system model

Consider a linear time-invariant SISO system described by the differential equation

P(s)[y](t) = Z(s)[u](t),

where y(t), u(t) are the measured output and input as before;

P(s) = sⁿ + p_{n−1} s^{n−1} + … + p₁ s + p₀,
Z(s) = z_m s^m + z_{m−1} s^{m−1} + … + z₁ s + z₀

are polynomials in s, with s being the differentiation operator (s[x](t) = x'(t)); p_i, i = 0, …, n−1, and z_j, j = 0, …, m, are the unknown but constant parameters to be estimated.

Note: for n = 1, m = 0 this is (2) with p₀ = a and z₀ = b.

The objective: estimate the values p_i, i = 0, …, n−1, and z_j, j = 0, …, m, using the signals y(t) and u(t) available for on-line measurement (no a priori accessible datasets).

Parameterization: let

Λ(s) = sⁿ + λ_{n−1} s^{n−1} + … + λ₁ s + λ₀

be a stable polynomial (all zeros are in Re[s] < 0). Then the system can be represented as follows:

(P(s)/Λ(s))[y](t) = (Z(s)/Λ(s))[u](t)
⇔ y(t) − ((Λ(s) − P(s))/Λ(s))[y](t) = (Z(s)/Λ(s))[u](t)
⇔ y(t) = (Z(s)/Λ(s))[u](t) + ((Λ(s) − P(s))/Λ(s))[y](t).      (3)

Define the parameter vector

θ* = [z_m, z_{m−1}, …, z₁, z₀, λ_{n−1} − p_{n−1}, …, λ₁ − p₁, λ₀ − p₀]ᵀ ∈ R^{n+m+1}

and the regressor function

φ(t) = [(s^m/Λ(s))[u](t), …, (1/Λ(s))[u](t), (s^{n−1}/Λ(s))[y](t), …, (1/Λ(s))[y](t)]ᵀ.

Then (3) can be expressed in the equivalent form

y(t) = θ*ᵀ φ(t).                                            (4)

In (4):
- the vector θ* contains all unknown parameters of the system;
- the regressor φ(t) can be computed using the filters s^i/Λ(s), i = 0, …, n−1.

Another variant of implementation:

ω₁'(t) = A_λ ω₁(t) + b̄ u(t),   ω₂'(t) = A_λ ω₂(t) + b̄ y(t),

where ω₁(t) ∈ Rⁿ, ω₂(t) ∈ Rⁿ and A_λ is the companion matrix of Λ(s):

A_λ = [0 1 0 … 0; 0 0 1 … 0; …; −λ₀ −λ₁ … −λ_{n−1}],   b̄ = [0, …, 0, 1]ᵀ.

Then we generate the regressor φ(t) from

φ(t) = [(C_m ω₁(t))ᵀ, ω₂ᵀ(t)]ᵀ ∈ R^{n+m+1},   C_m = [I_{m+1}, 0_{(m+1)×(n−m−1)}]

(φ(t) = [ω₁ᵀ(t), ω₂ᵀ(t)]ᵀ for m = n − 1).

b. Linear parametric model

The linear parametric model has the form

y(t) = θ*ᵀ φ(t),   t ≥ 0,                                   (4)

where θ* ∈ R^{n_θ} is an unknown parameter vector, y(t) is a known (measured) signal, φ(t) ∈ R^{n_θ} is a known vector signal (the regressor), and n_θ = n + m + 1 is the dimension of the model.

Features:
1) The model (4) is commonly seen in system modeling when the unknown system parameters can be separated from known signals.
2) The components of φ(t) may contain nonlinear and/or filtered functions of y(t) and u(t) (or some other system signals).
3) Adaptive parameter estimation based on y(t), u(t) starts from a linear parametric model.

Let θ(t) be the estimate of θ* obtained from an adaptive update law, and let θ̃(t) = θ(t) − θ* be the parametric error; then define the estimation error

ε(t) = θᵀ(t)φ(t) − y(t) = θᵀ(t)φ(t) − θ*ᵀφ(t) = θ̃ᵀ(t)φ(t).      (5)

Example

y'(t) = −a y + b u.                                         (2)

It has the parameterized form above with P(s) = s + p₀, Z(s) = z₀, where p₀ = a, z₀ = b, m = 0, n = 1.

The filter: Λ(s) = s + 1.

The parameter vector: θ* = [θ₁*, θ₂*]ᵀ = [b, 1 − a]ᵀ, n_θ = 2.

The regressor: φ(t) = [(1/(s+1))[u](t), (1/(s+1))[y](t)]ᵀ.

The fast implementation: φ(t) = [ω₁(t), ω₂(t)]ᵀ for

ω₁'(t) = −ω₁(t) + u(t),   ω₂'(t) = −ω₂(t) + y(t),

i.e. A_λ = −1, b̄ = 1.

The estimation error for the estimate θ(t) = [θ₁(t), θ₂(t)]ᵀ:

ε(t) = θᵀ(t)φ(t) − y(t) = ω₁(t)θ₁(t) + ω₂(t)θ₂(t) − y(t)
     = ω₁(t)(θ₁(t) − b) + ω₂(t)(θ₂(t) − 1 + a) = θ̃ᵀ(t)φ(t).

c. Normalized gradient algorithm

How to update θ(t)? How to minimize the error ε(t) = θᵀ(t)φ(t) − y(t) = θ̃ᵀ(t)φ(t)?

The idea is to choose the derivative of θ(t) in a steepest descent direction in order to minimize the normalized quadratic cost functional

J(t, θ) = ε²(t)/(2m²(t)) = θ̃ᵀ(t)φ(t)φᵀ(t)θ̃(t)/(2m²(t)) = (θ(t) − θ*)ᵀφ(t)φᵀ(t)(θ(t) − θ*)/(2m²(t)),

where m(t) is a normalizing signal not depending (explicitly) on θ(t). The idea of the choice of m(t): φᵀ(t)φ(t)/m²(t) has to be bounded (we return to this issue later).

The steepest descent direction of J(t, θ) is

∂J(t, θ)/∂θ = (ε(t)/m²(t)) ∂ε/∂θ = ε(t)φ(t)/m²(t),

therefore:

θ'(t) = −Γ ε(t)φ(t)/m²(t),   θ(0) = θ₀,   t ≥ 0,            (6)

where Γ = Γᵀ > 0 is a design matrix gain and θ₀ is an initial estimate of θ*.

For (6) an admissible choice of the normalizing function m(t) is

m²(t) = 1 + κ φᵀ(t)φ(t),

where κ > 0 is a design parameter.

Example

The estimation error and the regressor:

ε(t) = ω₁(t)θ₁(t) + ω₂(t)θ₂(t) − y(t),   φ(t) = [ω₁(t), ω₂(t)]ᵀ.

The cost functional and its derivative:

J(t, θ) = ε²(t)/(2m²(t)) = [ω₁(t)θ₁(t) + ω₂(t)θ₂(t) − y(t)]²/(2m²(t)),
∂J(t, θ)/∂θ = (ε(t)/m²(t)) [ω₁(t), ω₂(t)]ᵀ.

The normalized gradient algorithm for Γ = γI, γ > 0 and κ = 1:

θ'(t) = −γ (ε(t)/(1 + ω₁²(t) + ω₂²(t))) [ω₁(t), ω₂(t)]ᵀ.
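The running example can now be simulated end to end. The sketch below (our code, forward Euler with an illustrative step and horizon) implements the plant, the filters ω₁, ω₂ and the normalized gradient law with Γ = γI, κ = 1; for the persistently exciting input u(t) = sin(t) the estimates approach θ* = [b, 1 − a]ᵀ = [1, 0.5]ᵀ.

```python
import math

def estimate(a=0.5, b=1.0, gamma=1.0, h=1e-2, T=200.0):
    """Normalized gradient estimator (6) for y' = -a*y + b*u with
    regressor filters w1' = -w1 + u, w2' = -w2 + y (Lambda(s) = s + 1).
    True parameters: theta* = [b, 1 - a]."""
    y = w1 = w2 = th1 = th2 = 0.0
    t = 0.0
    for _ in range(int(T / h)):
        u = math.sin(t)
        eps = th1 * w1 + th2 * w2 - y           # estimation error (5)
        m2 = 1.0 + w1 * w1 + w2 * w2            # m^2 = 1 + phi^T phi
        dy = -a * y + b * u                     # plant
        dw1 = -w1 + u                           # regressor filters
        dw2 = -w2 + y
        dth1 = -gamma * eps * w1 / m2           # gradient law (6)
        dth2 = -gamma * eps * w2 / m2
        y += h * dy
        w1 += h * dw1
        w2 += h * dw2
        th1 += h * dth1
        th2 += h * dth2
        t += h
    return th1, th2

th1, th2 = estimate()  # approaches (b, 1 - a) = (1.0, 0.5)
```

Running it reproduces qualitatively Simulation 1 below.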

Lemma 1. The adaptive algorithm (6) guarantees:
(i) θ(t), θ̃(t) and ε(t)/m(t) are bounded (belong to L_∞);
(ii) ε(t)/m(t) and θ'(t) belong to L₂.

Proof. Introduce the positive definite (Lyapunov) function V(θ̃) = θ̃ᵀΓ⁻¹θ̃ (θ̃ = θ − θ*); then

V'(t) = 2θ̃ᵀ(t)Γ⁻¹θ̃'(t) = −2θ̃ᵀ(t)Γ⁻¹Γε(t)φ(t)/m²(t) = −2ε(t)θ̃ᵀ(t)φ(t)/m²(t) = −2ε²(t)/m²(t) ≤ 0.      (7)

Since V' ≤ 0 we have θ̃(t) ∈ L_∞ and θ(t) ∈ L_∞: all these signals are bounded. The boundedness of ε(t)/m(t) follows from the boundedness of θ̃(t) and the inequality

|ε(t)|/m(t) = |θ̃ᵀ(t)φ(t)|/√(1 + κφᵀ(t)φ(t)) ≤ (‖φ(t)‖/√(1 + κ‖φ(t)‖²)) ‖θ̃(t)‖ ≤ κ^{−1/2} ‖θ̃(t)‖.

Then the boundedness of θ'(t) follows from the inequality

‖θ'(t)‖ = ‖Γε(t)φ(t)/m²(t)‖ ≤ ‖Γ‖ (|ε(t)|/m(t)) (‖φ(t)‖/m(t)) ≤ κ^{−1/2} ‖Γ‖ |ε(t)|/m(t).      (i)

Proof of Lemma 1, continued. Let us rewrite the equality (7) in the form 2ε²(t)/m²(t) = −V'(t) and integrate it:

∫₀^∞ 2ε²(τ)/m²(τ) dτ = V(0) − V(∞) ≤ V(0) = (θ₀ − θ*)ᵀΓ⁻¹(θ₀ − θ*) < ∞,

therefore ε(t)/m(t) ∈ L₂. From the inequality ‖θ'(t)‖ ≤ κ^{−1/2}‖Γ‖ |ε(t)|/m(t) we obtain that θ'(t) also belongs to L₂.      (ii)

The lemma is proven.

Note: we did NOT prove that lim_{t→∞} θ(t) = θ*!

Discussion:

1) The algorithm has equilibria where θ'(t) = 0; from (6),

θ'(t) = −Γε(t)φ(t)/m²(t) = 0 ⇔ φ(t) = 0 or ε(t) = φᵀ(t)θ̃(t) = 0,

so θ(t) = θ* is not the unique equilibrium of (6) (the usual drawback of any gradient algorithm)!

2) V(t) = θ̃ᵀ(t)Γ⁻¹θ̃(t) is a measure of deviation of θ(t) from θ*, and from (7):

V(t) = [θ(t) − θ*]ᵀΓ⁻¹[θ(t) − θ*] ≤ V(0) = [θ₀ − θ*]ᵀΓ⁻¹[θ₀ − θ*].

3) From Lemma 1 we have ε(t)/m(t) ∈ L₂ ∩ L_∞, which yields lim_{t→∞} ε(t)/m(t) = 0 (under mild additional conditions).

4) From (7) the function V is nonincreasing (V' ≤ 0) and bounded from below (V ≥ 0), thus there exists lim_{t→∞} V(t) = V_∞ for some constant V_∞ ≥ 0:
   V_∞ = 0 ⇒ lim_{t→∞} θ(t) = θ*;
   V_∞ > 0 ⇒ only the distance of θ(t) to θ* converges, not necessarily θ(t) itself.

5) Even if θ'(t) ∈ L_∞ ∩ L₂ (Lemma 1) and lim_{t→∞} θ'(t) = 0, this does not imply lim_{t→∞} θ(t) = θ_∞ for some constant vector θ_∞. A counterexample:

θ(t) = sin(√(1+t)),   θ'(t) = 0.5 cos(√(1+t))/√(1+t):

here θ'(t) ∈ L_∞ and lim_{t→∞} θ'(t) = 0, while lim_{t→∞} θ(t) does not exist.

Example

Plant: y'(t) = −a y + b u.

Adaptive estimator:

θ'(t) = −γ (ε(t)/(1 + ω₁²(t) + ω₂²(t))) [ω₁(t), ω₂(t)]ᵀ,   ε(t) = ω₁θ₁ + ω₂θ₂ − y,
ω₁'(t) = −ω₁(t) + u(t),   ω₂'(t) = −ω₂(t) + y(t).

Simulation 1: a = 0.5, b = 1, γ = 1 and u(t) = sin(t).
[Figure: time histories of y, u and of the estimates θ₁ → b, θ₂ → 1 − a.]

Simulation 2: the same plant and input with another gain γ.
[Figure: time histories of y, u and of the estimates θ₁, θ₂.]

Simulation 3: a = 0.5, b = 1, γ = 1 and u(t) = e^{−t} cos(t).
[Figure: time histories of y, u, the estimates θ₁, θ₂ and the Lyapunov function V; the estimates stop moving while V stays away from zero.]

Conclusions: the convergence of the adjusted estimates θ(t) to their ideal values θ* depends on the input u:
- y, u oscillating ⇒ θ(t) → θ*;
- y → const, u → const (set-point) ⇒ θ(t) does not converge to θ*.

d. Normalized least-squares algorithm

θ'(t) = −P(t)ε(t)φ(t)/m²(t),   θ(0) = θ₀,   t ≥ 0,          (8)
P'(t) = −P(t)(φ(t)φᵀ(t)/m²(t))P(t),   P(0) = P₀ = P₀ᵀ > 0,   t ≥ 0,      (9)
m²(t) = 1 + κ φᵀ(t)P(t)φ(t),

where κ > 0 is a design parameter, θ₀ is the initial estimate of θ* and P₀ is the initial value of the gain matrix P(t) ∈ R^{n_θ×n_θ}.

Note: if P(t) = Γ for all t ≥ 0, then (8) reduces to (6); the dimension of (6) is n_θ = n + m + 1, while the dimension of (8), (9) is n_θ + n_θ².

Example (n_θ = 2, κ = 1, φ = [ω₁, ω₂]ᵀ, P symmetric):

θ'(t) = −(ε(t)/m²(t)) [P₁₁ω₁ + P₁₂ω₂, P₁₂ω₁ + P₂₂ω₂]ᵀ,
P'(t) = −(1/m²(t)) [(P₁₁ω₁ + P₁₂ω₂)², (P₁₁ω₁ + P₁₂ω₂)(P₁₂ω₁ + P₂₂ω₂); (P₁₁ω₁ + P₁₂ω₂)(P₁₂ω₁ + P₂₂ω₂), (P₁₂ω₁ + P₂₂ω₂)²],
m²(t) = 1 + P₁₁ω₁² + 2P₁₂ω₁ω₂ + P₂₂ω₂².
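A minimal sketch (our code, forward Euler, the 2-parameter running example) of the least-squares update (8), (9), with the symmetric 2x2 gain matrix stored componentwise; the initial gain P₀ = 10I and the horizon are illustrative choices, not values from the lecture.

```python
import math

def ls_estimate(a=0.5, b=1.0, kappa=1.0, h=1e-2, T=100.0, p0=10.0):
    """Normalized least-squares estimator (8)-(9) for y' = -a*y + b*u,
    theta* = [b, 1 - a]; P = [[P11, P12], [P12, P22]], P(0) = p0*I."""
    y = w1 = w2 = th1 = th2 = 0.0
    P11, P12, P22 = p0, 0.0, p0
    t = 0.0
    for _ in range(int(T / h)):
        u = math.sin(t)
        eps = th1 * w1 + th2 * w2 - y           # estimation error
        g1 = P11 * w1 + P12 * w2                # g = P*phi
        g2 = P12 * w1 + P22 * w2
        m2 = 1.0 + kappa * (w1 * g1 + w2 * g2)  # m^2 = 1 + kappa*phi^T P phi
        # theta' = -P*phi*eps/m^2 and P' = -(P*phi)(P*phi)^T/m^2
        th1 += h * (-g1 * eps / m2)
        th2 += h * (-g2 * eps / m2)
        P11 += h * (-g1 * g1 / m2)
        P12 += h * (-g1 * g2 / m2)
        P22 += h * (-g2 * g2 / m2)
        y += h * (-a * y + b * u)               # plant
        w1 += h * (-w1 + u)                     # regressor filters
        w2 += h * (-w2 + y)
        t += h
    return th1, th2

theta = ls_estimate()  # approaches (b, 1 - a) = (1.0, 0.5)
```

Note how the gain matrix only shrinks along excited directions, which is the mechanism behind Lemma 2(iv) below.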

Lemma 2. The adaptive algorithm (8), (9) guarantees:
(i) P(t) = Pᵀ(t) > 0 for all t ≥ 0; P(t) and P'(t) are bounded;
(ii) θ(t) and ε(t)/m̄(t) are bounded (belong to L_∞), where m̄²(t) = 1 + φᵀ(t)φ(t);
(iii) ε(t)/m(t), ε(t)/m̄(t) and θ'(t) belong to L₂;
(iv) there exist a constant matrix P_∞ ∈ R^{n_θ×n_θ} and a constant vector θ_∞ ∈ R^{n_θ} such that lim_{t→∞} P(t) = P_∞, lim_{t→∞} θ(t) = θ_∞.

Proof. First, P(t) = Pᵀ(t) and P'(t) is bounded by the construction of the algorithm (9):

P'(t) = −P(t)φ(t)φᵀ(t)P(t)/(1 + κφᵀ(t)P(t)φ(t)).

Second, the identity P(t)P⁻¹(t) ≡ I_{n_θ} implies (d/dt)P⁻¹(t) = −P⁻¹(t)P'(t)P⁻¹(t) = φ(t)φᵀ(t)/m²(t); then, integrating this equality, we obtain:

P⁻¹(t) = P₀⁻¹ + ∫₀ᵗ φ(τ)φᵀ(τ)/m²(τ) dτ,   t ≥ 0.            (10)

Hence P⁻¹(t) ≥ P₀⁻¹ > 0, so 0 < P(t) ≤ P₀ and P(t) is bounded.      (i)

Consider the positive definite function V(t, θ̃) = θ̃ᵀP⁻¹(t)θ̃ (ε(t) = θ̃ᵀ(t)φ(t)); then

V'(t) = θ̃'ᵀ(t)P⁻¹(t)θ̃(t) + θ̃ᵀ(t)((d/dt)P⁻¹(t))θ̃(t) + θ̃ᵀ(t)P⁻¹(t)θ̃'(t)
      = −2ε(t)θ̃ᵀ(t)φ(t)/m²(t) + θ̃ᵀ(t)φ(t)φᵀ(t)θ̃(t)/m²(t)
      = −2ε²(t)/m²(t) + ε²(t)/m²(t) = −ε²(t)/m²(t) ≤ 0,   t ≥ 0.      (11)

Hence, V(t) = V[t, θ̃(t)] is bounded, and using (10) we obtain:

V(t) = θ̃ᵀ(t)P₀⁻¹θ̃(t) + θ̃ᵀ(t)[∫₀ᵗ φ(τ)φᵀ(τ)/m²(τ) dτ]θ̃(t) < ∞,   t ≥ 0.

Therefore θ̃ᵀ(t)P₀⁻¹θ̃(t) is bounded, so θ̃(t) and θ(t) are bounded. The boundedness of ε(t)/m̄(t) follows from the proven boundedness of θ̃(t) and the inequality

|ε(t)|/m̄(t) = |θ̃ᵀ(t)φ(t)|/√(1 + φᵀ(t)φ(t)) ≤ (‖φ(t)‖/√(1 + ‖φ(t)‖²)) ‖θ̃(t)‖ ≤ ‖θ̃(t)‖.      (ii)

Rewriting the equality (11) in the form ε²(t)/m²(t) = −V'(t) and integrating it, we obtain:

∫₀^∞ ε²(τ)/m²(τ) dτ = V(0) − V(∞) ≤ (θ₀ − θ*)ᵀP₀⁻¹(θ₀ − θ*) < ∞,

therefore ε(t)/m(t) ∈ L₂; and since

ε²(t)/m̄²(t) = (ε²(t)/m²(t))(m²(t)/m̄²(t)) with m²(t)/m̄²(t) = (1 + κφᵀPφ)/(1 + φᵀφ) bounded,

also ε(t)/m̄(t) ∈ L₂. Since P(t) = Pᵀ(t) > 0 is bounded we can write P(t) = P^{1/2}(t)P^{1/2}(t) with P^{1/2}(t) bounded, and

‖θ'(t)‖ = ‖P(t)φ(t)ε(t)/m²(t)‖ ≤ ‖P^{1/2}(t)‖ (‖P^{1/2}(t)φ(t)‖/m(t)) (|ε(t)|/m(t)) ≤ κ^{−1/2}‖P^{1/2}(t)‖ |ε(t)|/m(t),

because m²(t) = 1 + κφᵀ(t)P(t)φ(t) implies ‖P^{1/2}(t)φ(t)‖/m(t) ≤ κ^{−1/2}. Therefore θ'(t) ∈ L₂.      (iii)

The integration of the differential equation (9) gives, for t₂ ≥ t₁ ≥ 0:

P(t₂) = P(t₁) − ∫_{t₁}^{t₂} P(τ)φ(τ)φᵀ(τ)P(τ)/m²(τ) dτ  ⇒  P(t₁) ≥ ∫_{t₁}^{t₂} P(τ)φ(τ)φᵀ(τ)P(τ)/m²(τ) dτ ≥ 0.

For any z ∈ R^{n_θ} we have zᵀP(t₁)z ≥ zᵀ[∫_{t₁}^{t₂} P(τ)φ(τ)φᵀ(τ)P(τ)/m²(τ) dτ]z ≥ 0; consequently, the scalar function

f(t, z) = zᵀ[∫₀ᵗ P(τ)φ(τ)φᵀ(τ)P(τ)/m²(τ) dτ]z

has the properties: it is a nondecreasing function of t; it is upper and lower bounded. Then there exists f_z such that lim_{t→∞} f(t, z) = f_z, and hence lim_{t→∞} P(t) = P_∞.

Note that (ε(t) = φᵀ(t)θ̃(t)):

θ̃'(t) = θ'(t) = −P(t)φ(t)φᵀ(t)θ̃(t)/m²(t) = −P(t)((d/dt)P⁻¹(t))θ̃(t),

so (d/dt)[P⁻¹(t)θ̃(t)] = ((d/dt)P⁻¹(t))θ̃(t) + P⁻¹(t)θ̃'(t) = 0 and θ̃(t) = P(t)P₀⁻¹θ̃(0). Therefore

lim_{t→∞} θ(t) = θ* + lim_{t→∞} P(t)P₀⁻¹θ̃(0) = θ* + P_∞P₀⁻¹(θ₀ − θ*) = θ_∞.      (iv)

Discussion:

1) The algorithm (8), (9) can be presented in the form

θ̃'(t) = −P(t)((d/dt)P⁻¹(t))θ̃(t) = A(t)θ̃(t),   θ̃(t) = P(t)P₀⁻¹θ̃(0),

thus it is a linear time-varying system!!! The same holds for the algorithm (6):

θ̃'(t) = θ'(t) = −Γε(t)φ(t)/m²(t) = −Γ(φ(t)φᵀ(t)/m²(t))θ̃(t) = B(t)θ̃(t).

2) Uniform stability: ‖θ̃(t)‖ = ‖P(t)P₀⁻¹θ̃(0)‖ ≤ c‖θ̃(0)‖ for some c > 0.

3) The least-squares algorithm (8), (9) minimizes a cost function which is an integral of squared errors at many time instants with a penalty on the initial estimate θ(0) = θ₀:

J(t, θ) = ∫₀ᵗ (θᵀ(t)φ(τ) − y(τ))²/(2m²(τ)) dτ + (1/2)[θ(t) − θ₀]ᵀP₀⁻¹[θ(t) − θ₀].

Compare with the gradient descent algorithm (6): J(t, θ) = ε²(t)/(2m²(t)).

Example

Plant: y'(t) = −a y + b u.

Estimator ((8), (9) with κ = 1, φ = [ω₁, ω₂]ᵀ):

θ'(t) = −(ε(t)/m²(t)) [P₁₁ω₁ + P₁₂ω₂, P₁₂ω₁ + P₂₂ω₂]ᵀ,
P'(t) = −(1/m²(t)) (P(t)φ(t))(P(t)φ(t))ᵀ.

Simulation 1: a = 0.5, b = 1, P₀ = I and u(t) = sin(t).
[Figure: y, u and the estimates θ₁ → b, θ₂ → 1 − a.]

Simulation 2: a = 0.5, b = 1, P₀ = 5I and u(t) = sin(t).
[Figure: y, u and the estimates θ₁, θ₂ converging faster for the larger initial gain.]

Simulation 3: a = 0.5, b = 1, P₀ = 5I and u(t) = e^{−t} cos(t); σ(t) = ‖P(t)‖.
[Figure: y, u, the estimates θ₁, θ₂, the Lyapunov function V and the gain norm σ; the estimates freeze while V stays away from zero.]

Conclusions:
- the rate of convergence of the algorithm (8), (9) is a more complex issue than for (6);
- the convergence of the adjusted estimates θ(t) to their ideal values θ* depends on the input u:
  y, u oscillating ⇒ θ(t) → θ*;  y → const, u → const (set-point) ⇒ θ(t) does not converge to θ*.

e. Discrete-time version of adaptive algorithms

Continuous time t ∈ R₊ is replaced by discrete time t ∈ {t₀, t₀ + T, t₀ + 2T, …}, where T > 0 is the sampling period.

The normalized gradient algorithm:

θ(t + T) = θ(t) − Γφ(t)ε(t)/m²(t),   θ(t₀) = θ₀,
0 < Γ = Γᵀ < 2I_{n_θ},   m²(t) = κ + φᵀ(t)φ(t),   κ > 0.

The normalized least-squares algorithm:

θ(t + T) = θ(t) − P(t)φ(t)ε(t)/m²(t),   θ(t₀) = θ₀,
P(t + T) = P(t) − P(t)(φ(t)φᵀ(t)/m²(t))P(t),   P(t₀) = P₀ = P₀ᵀ > 0,
m²(t) = κ + φᵀ(t)P(t)φ(t),   κ > 0.

Properties: θ(t), ε(t)/m(t), ε(t)/m̄(t) and P(t) = Pᵀ(t) > 0 are bounded; ε(t)/m(t), ε(t)/m̄(t) and θ(t + T) − θ(t) belong to L₂.
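A hedged sketch of the discrete-time normalized gradient update on a static linear parametric model y(t) = θ*ᵀφ(t); the true vector θ* = [2, −1]ᵀ and the sinusoidal regressor are our illustrative choices, with Γ = I (inside the stated range 0 < Γ < 2I) and κ = 1.

```python
import math

def discrete_gradient(steps=500, gamma=1.0, kappa=1.0):
    """theta(k+1) = theta(k) - gamma*phi(k)*eps(k)/m2(k),
    m2(k) = kappa + phi(k)^T phi(k), applied to y(k) = theta*^T phi(k)."""
    th = [0.0, 0.0]
    tstar = [2.0, -1.0]                         # illustrative true parameters
    for k in range(steps):
        phi = [math.sin(k), math.cos(k)]        # persistently exciting
        y = tstar[0] * phi[0] + tstar[1] * phi[1]
        eps = th[0] * phi[0] + th[1] * phi[1] - y
        m2 = kappa + phi[0] ** 2 + phi[1] ** 2
        th[0] -= gamma * eps * phi[0] / m2
        th[1] -= gamma * eps * phi[1] / m2
    return th

th = discrete_gradient()  # converges to [2.0, -1.0]
```

In the noise-free PE case the error contracts geometrically, which is why a few hundred steps suffice here.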

3. IDENTIFICATION AND ROBUSTNESS

Identification: parameter convergence. Robustness: d ≠ 0, v ≠ 0.

a. Parametric convergence and persistency of excitation

Lemma 3. For the gradient algorithm (6) or the least-squares algorithm (8), (9), if m(t) and φ'(t) ∈ L_∞, then lim_{t→∞} ε(t) = 0.

Proof. ε(t)/m(t) ∈ L₂ ∩ L_∞ and θ(t), θ'(t) ∈ L_∞ from Lemmas 1, 2. Since ε(t) = φᵀ(t)θ̃(t) we have ε'(t) = φ'ᵀ(t)θ̃(t) + φᵀ(t)θ̃'(t). Hence: φ'(t) ∈ L_∞ ⇒ ε'(t) ∈ L_∞; m(t) ∈ L_∞ ⇒ ε(t) ∈ L₂ ∩ L_∞; together these give lim_{t→∞} ε(t) = 0.

Under the conditions of Lemma 3, asymptotically ε(t) = Σ_{i=1}^{n_θ} [θ_i(t) − θ_i*]φ_i(t) → 0. What does this give for the estimates?

a) φ(t) = [1, 0, …, 0]ᵀ: θ₁(t) → θ₁*, but θ_i(t) → ? for 2 ≤ i ≤ n_θ;
b) φ(t) = [1, 1, …, 1]ᵀ: only the sum Σ_{i=1}^{n_θ} [θ_i(t) − θ_i*] → 0;
c) φ(t) = [sin(ω₁t), …, sin(ω_{n_θ}t)]ᵀ, ω_i > 0, ω_i ≠ ω_j for i ≠ j:
   Σ_{i=1}^{n_θ} [θ_i(t) − θ_i*] sin(ω_i t) → 0 ⇒ θ_i(t) → θ_i*, i = 1, …, n_θ.

Definition 1. A bounded vector signal φ(t) ∈ R^q is exciting over the finite time interval [σ, σ + δ], δ > 0, σ ≥ 0, if for some α > 0

∫_σ^{σ+δ} φ(τ)φᵀ(τ) dτ ≥ α I_q.

Definition 2. A bounded vector signal φ(t) ∈ R^q is Persistently Exciting (PE) if there exist δ > 0 and α > 0 such that

∫_σ^{σ+δ} φ(τ)φᵀ(τ) dτ ≥ α I_q   for all σ ≥ 0.

Equivalently, φ(t) is PE iff there exist ρ > 0, δ > 0 such that

∫_{t₁}^{t₂} φ(τ)φᵀ(τ) dτ ≥ ρ(t₂ − t₁) I_q   for all t₂ − t₁ ≥ δ

(positive definite in average).

The idea: rank[φ(t)φᵀ(t)] = 1 at each instant, but rank[∫_t^{t+δ} φ(τ)φᵀ(τ) dτ] = q.

Example. φ(t) = [1, 1]ᵀ:

φ(τ)φᵀ(τ) = [1 1; 1 1],   ∫_σ^{σ+δ} φ(τ)φᵀ(τ) dτ = δ [1 1; 1 1],

which is singular for every δ: not PE.

φ(t) = [1, e^{−t}]ᵀ:

∫_σ^{σ+δ} φ(τ)φᵀ(τ) dτ = [δ, e^{−σ}(1 − e^{−δ}); e^{−σ}(1 − e^{−δ}), 0.5 e^{−2σ}(1 − e^{−2δ})],

which is positive definite on each finite window but degenerates as σ → ∞: exciting over some finite intervals, not PE.

φ(t) = [1, sin(t)]ᵀ:

∫_σ^{σ+δ} φ(τ)φᵀ(τ) dτ ≥ λ(δ) I₂,   with λ(δ) ≥ ρδ, ρ = 0.4 for δ > 5: PE!!!

[Figure: λ(δ) versus δ against the line ρδ.]

φ(t) = [cos(t), sin(t)]ᵀ:

∫_σ^{σ+δ} φ(τ)φᵀ(τ) dτ = [0.5δ + 0.25(sin 2(σ+δ) − sin 2σ), 0.25(cos 2σ − cos 2(σ+δ)); 0.25(cos 2σ − cos 2(σ+δ)), 0.5δ − 0.25(sin 2(σ+δ) − sin 2σ)] ≥ (0.5δ − 1) I₂ > 0 for δ > 2: PE.
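The excitation integrals above can be checked numerically. The helper below (our code) approximates the 2-dimensional Gram matrix over a window by a midpoint rule and returns its smallest eigenvalue through the closed-form 2x2 formula; a window-independent positive lower bound on this eigenvalue is exactly the PE condition.

```python
import math

def min_eig_of_gram(phi, delta=2 * math.pi, n=10000, sigma=0.0):
    """Approximate G = int_sigma^{sigma+delta} phi(t) phi(t)^T dt for a
    2-D signal phi and return the smallest eigenvalue of G."""
    h = delta / n
    g11 = g12 = g22 = 0.0
    for i in range(n):
        t = sigma + (i + 0.5) * h               # midpoint rule
        p1, p2 = phi(t)
        g11 += p1 * p1 * h
        g12 += p1 * p2 * h
        g22 += p2 * p2 * h
    tr, det = g11 + g22, g11 * g22 - g12 * g12
    return 0.5 * (tr - math.sqrt(max(tr * tr - 4.0 * det, 0.0)))

# [1, 1]^T: the Gram matrix is singular on every window -> not PE.
lam_const = min_eig_of_gram(lambda t: (1.0, 1.0))
# [cos t, sin t]^T over one period: G = pi*I -> PE with alpha = pi.
lam_rot = min_eig_of_gram(lambda t: (math.cos(t), math.sin(t)))
```

Sweeping sigma would confirm that the bound for the rotating signal is uniform in the window position, as Definition 2 requires.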

Normalized gradient algorithm (6) (θ̃(t) = θ(t) − θ*, ε(t) = φᵀ(t)θ̃(t)):

θ̃'(t) = −Γε(t)φ(t)/m²(t) = −Γ(φ(t)φᵀ(t)/m²(t))θ̃(t) = B(t)θ̃(t).

Let Φ(t, t₀) be the state transition matrix of this linear time-varying system; then θ̃(t) = Φ(t, t₀)θ̃(t₀).

φ(t) is PE ⇒ φ(t)/m(t) (m²(t) = 1 + κφᵀ(t)φ(t)) is PE ⇒ η(t) = Φᵀ(t, t₀)φ(t)/m(t) is PE:

∃ρ > 0, δ > 0:  ∫_{t₀}^{t₀+δ} η(τ)ηᵀ(τ) dτ ≥ ρ(δ) I_{n_θ},   ∀t₀ ≥ 0.

Consider the Lyapunov function V(θ̃) = θ̃ᵀΓ⁻¹θ̃:

V'(t) = −2ε²(t)/m²(t) = −2θ̃ᵀ(t)(φ(t)φᵀ(t)/m²(t))θ̃(t) = −2θ̃ᵀ(t₀)η(t)ηᵀ(t)θ̃(t₀);

integrating this equality over [t₀, t₀ + δ] we obtain (V' ≤ 0):

V(t₀ + δ) = V(t₀) − 2θ̃ᵀ(t₀)[∫_{t₀}^{t₀+δ} η(τ)ηᵀ(τ) dτ]θ̃(t₀) ≤ V(t₀) − 2ρ(δ)‖θ̃(t₀)‖² ≤ [1 − 2ρ(δ)λ_min(Γ)]V(t₀),

so V contracts over every interval of length δ, hence lim_{t→∞} V(t) = 0 and lim_{t→∞} θ(t) = θ*.

Normalized least-squares algorithm (8), (9):

θ̃(t) = P(t)P₀⁻¹θ̃(0),   P⁻¹(t) = P₀⁻¹ + ∫₀ᵗ φ(τ)φᵀ(τ)/m²(τ) dτ,   P(t) = Pᵀ(t) > 0 for all t ≥ 0.

φ(t) is PE ⇒ φ(t)/m(t) (m²(t) = 1 + κφᵀ(t)P(t)φ(t), P(t) ≤ P₀) is PE:

∃ρ > 0, δ > 0:  ∫_σ^{σ+δ} φ(τ)φᵀ(τ)/m²(τ) dτ ≥ ρ I_{n_θ},   ∀σ ≥ 0.

Then P⁻¹(t) grows without bound as t → ∞, and consequently

lim_{t→∞} P(t) = 0  ⇒  lim_{t→∞} θ̃(t) = lim_{t→∞} P(t)P₀⁻¹θ̃(0) = 0.

Lemma 4. For the gradient algorithm (6) or the least-squares algorithm (8), (9), if φ(t) is PE, then lim_{t→∞} θ(t) = θ*.

Discussion: what ensures the PE property of the regressor

φ(t) = [(C_m ω₁(t))ᵀ, ω₂ᵀ(t)]ᵀ,

where ω₁(t) ∈ Rⁿ, ω₂(t) ∈ Rⁿ and, for a Hurwitz matrix A_λ,

ω₁'(t) = A_λω₁(t) + b̄u(t),   ω₂'(t) = A_λω₂(t) + b̄y(t)?

PE of φ(t) is inherited from PE of ω₁(t) and ω₂(t), i.e. from PE of u(t) and y(t). Since (1) is a linear system, the PE of y(t) is determined by the input u(t)! Hence: PE of u(t) ⇒ PE of φ(t) (as we already observed in the example).

Example

Plant: y'(t) = −a y + b u, a = 0.5, b = 1 and u(t) = sin(t).

Gradient algorithm: γ = 1.
[Figure: y, u and the estimates θ₁ → b, θ₂ → 1 − a.]

Least-squares algorithm: P₀ = I.
[Figure: y, u and the estimates θ₁ → b, θ₂ → 1 − a.]

Here u(t) = sin(t) ⇒ y(t) → α sin(t + β) ⇒ ω_i(t) → α_i sin(t + β_i) due to

ω₁'(t) = −ω₁(t) + u(t),   ω₂'(t) = −ω₂(t) + y(t),

so φ(t) = [ω₁(t), ω₂(t)]ᵀ behaves asymptotically like [cos(t), sin(t)]ᵀ up to amplitudes and phases: PE.

b. Robustness of adaptive algorithms

Until now the noise-free case with d(t) = 0 and v(t) = 0 has been considered for

Σ_l : x'(t) = A(θ)x + B(θ)u + d,   y = Cx + Du + v.

What happens if d(t) ≠ 0 or v(t) ≠ 0? (Only the case d(t) ≠ 0 will be considered.)

Example

Plant: y'(t) = −a y + b u + d(t), a = 0.5, b = 1 and u(t) = sin(t), d(t) = 0.5 sin(3t).

[Figure: the estimates of the gradient algorithm (6) and of the least-squares algorithm (8), (9) under the disturbance, together with the Lyapunov function V and the gain norm σ.]

φ(t) is PE ⇒ robustness!!!

u(t) = e^{−t} cos(t):
[Figure: the estimates of (6) and (8), (9) and the function V under the non-exciting input.]

u(t) = sin(t), d(t) = 0.5 sin(t):
[Figure: the estimates of (6) and (8), (9) and the function V under a disturbance at the input frequency.]

Conclusion: the disturbance can seriously modify the system behavior.

Linear parametric model with modeling errors:

y(t) = φᵀ(t)θ* + δ(t),   t ≥ 0,

where θ* ∈ R^{n_θ} is an unknown parameter vector, φ(t) ∈ R^{n_θ} is a known regressor, y(t) is a measured output, and δ(t) represents the system modeling errors:

|δ(t)| ≤ c₁‖φ(t)‖ + c₂,   c₁ > 0, c₂ > 0.

Let θ(t) be the estimate of θ* and define the estimation error

ε(t) = φᵀ(t)θ(t) − y(t) = φᵀ(t)θ̃(t) − δ(t),   t ≥ 0,

where θ̃(t) = θ(t) − θ* is the parametric error.

Modified gradient algorithm (6):

θ'(t) = −Γε(t)φ(t)/m²(t) + Γf(t),   θ(0) = θ₀,   m²(t) = 1 + κφᵀ(t)φ(t),   κ > 0,      (12)

where Γ = Γᵀ > 0 is a design matrix gain and f(t) ∈ R^{n_θ} is the modification term for robustness.

Stability & robustness analysis for nonlinear systems: Lyapunov function theory. Take V(θ̃) = θ̃ᵀΓ⁻¹θ̃; then, using θ̃ᵀφ = ε + δ,

V'(t) = −2ε(t)θ̃ᵀ(t)φ(t)/m²(t) + 2θ̃ᵀ(t)f(t) = −2ε²(t)/m²(t) − 2ε(t)δ(t)/m²(t) + 2θ̃ᵀ(t)f(t).

Note:

|δ(t)|/m(t) ≤ (c₁‖φ(t)‖ + c₂)/√(1 + κφᵀ(t)φ(t)) ≤ c₁/√κ + c₂/m(t).

Then

V'(t) ≤ −2(|ε(t)|/m(t))[|ε(t)|/m(t) − (c₁/√κ + c₂/m(t))] + 2θ̃ᵀ(t)f(t).

The simplest modification:

f(t) = f_s(t),   f_s(t) = ε(t)φ(t)/m²(t) if |ε(t)|/m(t) < c₁/√κ + c₂/m(t);   f_s(t) = 0 otherwise

(the update is switched off while the error is below the disturbance level), which gives V'(t) ≤ 0.

A dead zone modification:

θ'(t) = −Γ[ε(t) − f_d(t)]φ(t)/m²(t),
f_d(t) = ε(t) if |ε(t)|/m(t) < c₁/√κ + c₂/m(t);   f_d(t) = [c₁m(t)/√κ + c₂] sign[ε(t)] otherwise,

which again gives V'(t) ≤ 0.

[Figure: the dead-zone maps ε − f_s(ε) and ε − f_d(ε).]

σ-modification:

f(t) = −σθ(t):   θ'(t) = −σΓθ(t) − Γε(t)φ(t)/m²(t)   ⇒   θ(t) ∈ L_∞.
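A sketch of the dead-zone idea on the running example (our code, forward Euler): the gradient update is simply frozen whenever the normalized error is below the assumed disturbance level c₁/√κ + c₂/m(t). The bound c₂ = 0.5 matches the disturbance amplitude used here (the filter 1/(s+1) has unit L₁ gain, so the filtered disturbance satisfies |δ(t)| ≤ 0.5), while c₁ = 0 and the horizon are illustrative choices.

```python
import math

def dead_zone_estimate(a=0.5, b=1.0, gamma=1.0, c1=0.0, c2=0.5,
                       h=1e-2, T=100.0):
    """Gradient estimator with a dead zone for y' = -a*y + b*u + d(t):
    the update is switched off while |eps|/m < c1 + c2/m (kappa = 1),
    so the bounded disturbance cannot drive the estimates away."""
    y = w1 = w2 = th1 = th2 = 0.0
    t = 0.0
    for _ in range(int(T / h)):
        u = math.sin(t)
        d = 0.5 * math.sin(3.0 * t)            # bounded disturbance
        eps = th1 * w1 + th2 * w2 - y
        m2 = 1.0 + w1 * w1 + w2 * w2
        m = math.sqrt(m2)
        if abs(eps) / m >= c1 + c2 / m:        # outside the dead zone
            th1 += h * (-gamma * eps * w1 / m2)
            th2 += h * (-gamma * eps * w2 / m2)
        y += h * (-a * y + b * u + d)
        w1 += h * (-w1 + u)
        w2 += h * (-w2 + y)
        t += h
    return th1, th2

th_dz = dead_zone_estimate()
```

The price of the dead zone is accuracy: adaptation stops once the error enters the zone, so the estimates stay bounded but need not reach θ*.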

Projection: assume that the set of admissible values for θ* is given, i.e.

θ* ∈ Ω = {θ ∈ R^{n_θ} : ‖θ‖ ≤ M},   M > 0.

Projection has to ensure that θ(t) ∈ Ω for all t ≥ 0; therefore

f(t) = 0   if ‖θ(t)‖ < M, or ‖θ(t)‖ = M and θᵀ(t)Γε(t)φ(t) ≥ 0;
f(t) = (θ(t)θᵀ(t)/(θᵀ(t)Γθ(t))) Γε(t)φ(t)/m²(t)   otherwise

(so that θᵀ(t)θ'(t) = 0 on the boundary of Ω). Inside the ball: do nothing. On an attempt to exit the ball: project.

[Figure: the ball ‖θ‖ = M containing θ*, with the estimate θ(t) sliding along the boundary.]

The properties of the modified algorithms:
- boundedness of θ(t), θ'(t) and ε(t)/m(t) (they belong to L_∞): ROBUSTNESS!
- ε(t)/m(t) and θ'(t) belong to L₂ in the noise-free case (d(t) = 0), so the nominal quality is preserved;
- but what about the ESTIMATION accuracy under disturbances?

Example

Plant: y'(t) = −a y + b u + d(t), a = 0.5, b = 1 and u(t) = e^{−t} cos(t); the algorithm (6) with γ = 0.1.

[Figure: y, u and the estimates θ₁, θ₂ for d(t) = 0.5 sin(0.3t).]

The modified algorithms under the disturbance d(t) = 0.5 sin(0.3t):

Dead zone algorithm: [Figure: bounded estimates θ₁, θ₂ against b and 1 − a.]
σ-modification (σ = 0.1): [Figure: bounded estimates θ₁, θ₂.]
Projection (M = 1.5): [Figure: bounded estimates θ₁, θ₂.]

The modified algorithms in the noise-free case d(t) = 0:

Dead zone algorithm: [Figure: estimates θ₁, θ₂ against b and 1 − a.]
σ-modification (σ = 0.1): [Figure: estimates θ₁, θ₂.]
Projection (M = 1.5): [Figure: estimates θ₁, θ₂.]

SUMMARY

1. Adaptive parameter estimation:
   a. Parameterized system model: y(t) = φᵀ(t)θ*.
   b. Linear parametric model: ε(t) = φᵀ(t)θ(t) − y(t) = φᵀ(t)θ̃(t), θ̃(t) = θ(t) − θ*.
   c. Normalized gradient algorithm: θ'(t) = −Γε(t)φ(t)/m²(t).
   d. Normalized least-squares algorithm: θ'(t) = −P(t)ε(t)φ(t)/m²(t), P'(t) = −P(t)(φ(t)φᵀ(t)/m²(t))P(t).
   e. Discrete-time version of adaptive algorithms.
2. Identification and robustness:
   f. Parametric convergence and PE (PE gives convergence and estimation robustness).
   g. Robustness of adaptive algorithms (robustness versus estimation accuracy).

Example

Oscillating pendulum: φ ∈ [−π, π) is the pendulum angle, f is the (controlling or exciting) input applied to the support, d is a disturbance influencing the support also.

Nonlinear model:

y''(t) = −ω² sin(y) − ρ y'(t) + b cos(y) f(t) + d(t),       (13)

where y = φ ∈ [−π, π) is the measured angle, and y', y'' are the angular velocity and acceleration; ρ > 0 is an unknown friction coefficient, ω > 0 is an unknown natural frequency, b > 0 is an unknown control gain. 3 unknown parameters + nonlinearity.

Define u₁ = sin(y) and u₂ = cos(y) f:

y''(t) + ρ y'(t) = −ω² u₁(t) + b u₂(t) + d(t)

for n = 2, m = 0 and the vector u = [u₁, u₂]ᵀ.

Define the polynomials:

P(s) = s² + p₁s,   p₁ = ρ;   Z₁(s) = z₁ = −ω²;   Z₂(s) = z₂ = b;

then the noise-free model (13) has the form

P(s)[y](t) = Z₁(s)[u₁](t) + Z₂(s)[u₂](t).

Parameterization for Λ(s) = s² + λ₁s + λ₀:

(P(s)/Λ(s))[y](t) = (Z₁(s)/Λ(s))[u₁](t) + (Z₂(s)/Λ(s))[u₂](t)
⇔ y(t) = ((Λ(s) − P(s))/Λ(s))[y](t) + (Z₁(s)/Λ(s))[u₁](t) + (Z₂(s)/Λ(s))[u₂](t)
⇔ ȳ(t) = φᵀ(t)θ*,

the parameterized system model for (here Λ(s) − P(s) = (λ₁ − ρ)s + λ₀, with λ₀ known)

ȳ(t) = y(t) − λ₀(1/Λ(s))[y](t),
φ(t) = [(s/Λ(s))[y](t), (1/Λ(s))[u₁](t), (1/Λ(s))[u₂](t)]ᵀ,
θ* = [λ₁ − ρ, −ω², b]ᵀ,

and the filters can be realized as

ω₀'(t) = A_λω₀(t) + b̄y(t),   ω₁'(t) = A_λω₁(t) + b̄u₁(t),   ω₂'(t) = A_λω₂(t) + b̄u₂(t),
A_λ = [0 1; −λ₀ −λ₁],   b̄ = [0, 1]ᵀ.

Simulation: ω = 1, ρ = 0.1, b = 0.5, f(t) = sin(3t), λ₀ = 1, λ₁ = 1, γ = 0.1; the cases d(t) = 0 and d(t) = 0.5 sin(0.3t).

[Figure: phase portraits (φ, φ') for both cases; the estimates θ₁ → λ₁ − ρ, θ₂ → −ω², θ₃ → b of the normalized gradient algorithm and of its dead zone modification, for d(t) = 0 and d(t) = 0.5 sin(0.3t).]

4. INDIRECT ADAPTIVE CONTROL

Adjustment of control parameters:
- direct (from an adaptive control law / Lyapunov analysis);
- indirect (from adaptive estimates of the system parameters).

Indirect adaptive control design:
1) adaptive estimation of the plant parameters;
2) calculation of control parameters.

[Figure: the loop u → C(θ_c) → P_m(θ*) → y, with adaptive parameter estimation producing θ(t) and control parameter derivation producing θ_c(t).]

The main steps:
1) adaptive estimation algorithm design;
2) reference model selection;
3) controller structure construction;
4) controller parameter calculation;
5) stability and robustness analysis.

a. Model reference control

Example

Plant: y'(t) = −a y + b u + d.

Adaptive estimation algorithm (θ* = [θ₁*, θ₂*]ᵀ = [b, 1 − a]ᵀ):

θ'(t) = −γ (ε(t)/m²(t)) [ω₁(t), ω₂(t)]ᵀ,   m²(t) = 1 + ω₁²(t) + ω₂²(t),
ω₁'(t) = −ω₁(t) + u(t),   ω₂'(t) = −ω₂(t) + y(t)   (A_λ = −1, b̄ = 1).

Reference model: y_m'(t) = −a_m y_m + b_m r(t), where r(t) is the reference signal to be tracked and a_m > 0 (the reference model is stable).

Controller structure: u = θ₁ᶜ y + θ₂ᶜ r.

Controller parameter calculation: u = b⁻¹[(a − a_m)y + b_m r] gives y'(t) = −a_m y + b_m r + d; therefore, with a = 1 − θ₂ and b = θ₁,

θ₁ᶜ = θ₁⁻¹(1 − θ₂ − a_m),   θ₂ᶜ = θ₁⁻¹ b_m.

Division by θ₁ requires a projection modification of the adaptation algorithm:

θ₁'(t) = −γ (ε(t)/m²(t)) ω₁(t) + f(t),
f(t) = 0 if θ₁(t) > b_min, or θ₁(t) = b_min and ε(t)ω₁(t) ≤ 0;
f(t) = γ (ε(t)/m²(t)) ω₁(t) otherwise,

where b_min > 0 is the lower bound for b, i.e. b ≥ b_min.
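The whole loop of this example can be sketched in code (ours, forward Euler, with illustrative initial estimates and horizon): estimate θ* = [b, 1 − a]ᵀ on-line, derive the controller gains by certainty equivalence, and keep θ₁(t) ≥ b_min; here the projection is implemented simply as clipping at b_min rather than the exact switching term f(t).

```python
import math

def indirect_mrac(a=0.5, b=1.0, am=1.0, bm=1.0, b_min=0.1,
                  gamma=1.0, h=5e-3, T=150.0):
    """Indirect MRAC for y' = -a*y + b*u (noise-free case): normalized
    gradient estimation of theta* = [b, 1-a], controller gains
    th1c = (1 - th2 - am)/th1, th2c = bm/th1, reference model
    ym' = -am*ym + bm*r."""
    y = ym = w1 = w2 = 0.0
    th1, th2 = 0.8, 0.3                    # initial estimates (th1 >= b_min)
    t = 0.0
    for _ in range(int(T / h)):
        r = math.sin(t)
        th1c = (1.0 - th2 - am) / th1      # certainty-equivalence gains
        th2c = bm / th1
        u = th1c * y + th2c * r
        eps = th1 * w1 + th2 * w2 - y
        m2 = 1.0 + w1 * w1 + w2 * w2
        # gradient update; projection realized as clipping at b_min
        th1 = max(b_min, th1 + h * (-gamma * eps * w1 / m2))
        th2 += h * (-gamma * eps * w2 / m2)
        y += h * (-a * y + b * u)          # plant
        ym += h * (-am * ym + bm * r)      # reference model
        w1 += h * (-w1 + u)                # regressor filters
        w2 += h * (-w2 + y)
        t += h
    return y, ym, (th1, th2)

y, ym, th = indirect_mrac()
```

With the persistently exciting reference r(t) = sin(t) the estimates converge, so the closed loop recovers the reference-model behavior.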

Simulation: a = 0.5, b = 1, a_m = 1, b_m = 1, b_min = 0.1.

[Figure: y, u, r and the controller parameters θ₁ᶜ, θ₂ᶜ for d(t) = 0.5.]
[Figure: the same signals for d(t) = 0.5 sin(3t).]
[Figure: the same signals for d(t) = 0.5 sin(0.3t).]

The general procedure:

P(s)[y](t) = k_p Z(s)[u](t) + d(t),   t ≥ 0,                (14)

where y(t), u(t) are the measured output and input as before;

P(s) = sⁿ + p_{n−1} s^{n−1} + … + p₁ s + p₀,
Z(s) = s^m + z_{m−1} s^{m−1} + … + z₁ s + z₀;

k_p, p_i, i = 0, …, n−1, and z_j, j = 0, …, m−1, are the unknown but constant parameters.

Assumption 1 (necessary). A bound k_min with 0 < k_min ≤ |k_p| and sign(k_p) are given.
Assumption 2 (desired). Bounds are known: k_p^min ≤ k_p ≤ k_p^max; p_i^min ≤ p_i ≤ p_i^max, i = 0, …, n−1; z_j^min ≤ z_j ≤ z_j^max, j = 0, …, m−1.

1) Adaptive estimation algorithm design:

y(t) = k_p (Z(s)/Λ(s))[u](t) + ((Λ(s) − P(s))/Λ(s))[y](t),
Λ(s) = sⁿ + λ_{n−1} s^{n−1} + … + λ₁ s + λ₀,

so that

y(t) = θ*ᵀφ(t),
θ* = [k_p, k_p z_{m−1}, …, k_p z₀, λ_{n−1} − p_{n−1}, …, λ₀ − p₀]ᵀ,
φ(t) = [(C_m ω₁(t))ᵀ, ω₂ᵀ(t)]ᵀ,   C_m = [I_{m+1}, 0_{(m+1)×(n−m−1)}],
ω₁'(t) = A_λω₁(t) + b̄u(t),   ω₂'(t) = A_λω₂(t) + b̄y(t).      (15)

Normalized gradient algorithm with projection (Assumption 2):

θ'(t) = g(t) + f(t),   θ(0) = θ₀,   t ≥ 0,                  (16)
g(t) = −Γε(t)φ(t)/m²(t),   ε(t) = θᵀ(t)φ(t) − y(t),   m²(t) = 1 + κφᵀ(t)φ(t),

f_k(t) = 0 if θ_k^min < θ_k(t) < θ_k^max, or θ_k(t) = θ_k^min and g_k(t) ≥ 0, or θ_k(t) = θ_k^max and g_k(t) ≤ 0;
f_k(t) = −g_k(t) otherwise,   k = 1, …, n_θ,

which keeps θ(t) inside the admissible box.

Properties: θ(t), θ'(t), ε(t)/m(t) ∈ L_∞ and θ'(t), ε(t)/m(t) ∈ L₂.

2) Reference model selection:

P_m(s)[y_m](t) = r(t),                                      (17)

where P_m(s) is a stable polynomial of degree n − m and r(t) is a bounded and piecewise continuous reference input signal.

3) Controller structure construction:
  u(t) = ω₁^c(t)ᵀ θ₁^c + ω₂^c(t)ᵀ θ₂^c + θ₃^c y(t) + θ₄^c r(t),   (8)
where θ₁^c ∈ Rⁿ⁻¹, θ₂^c ∈ Rⁿ⁻¹, θ₃^c, θ₄^c ∈ R are the controller parameters,
  ω₁^c(t) = (a(s)/Λ_c(s))[u](t),   ω₂^c(t) = (a(s)/Λ_c(s))[y](t),   a(s) = [1, s, …, s^{n−2}]ᵀ,
and Λ_c(s) = s^{n−1} + λ^c_{n−2} s^{n−2} + … + λ^c₁ s + λ^c₀ is a stable polynomial.

A variant of realization:
  ω̇₁^c(t) = A_{λc} ω₁^c(t) + b_c u(t),   ω̇₂^c(t) = A_{λc} ω₂^c(t) + b_c y(t),
where (A_{λc}, b_c) is the controllable canonical realization of 1/Λ_c(s).

[Block diagram: the reference r(t) enters through the gain θ₄^c; the plant is P(s)[y](t) = k_p Z(s)[u](t) + d(t); the filters a(s)/Λ_c(s) acting on u and y feed back through θ₁^c and θ₂^c, and the output y feeds back through θ₃^c.]

The controller parameter equation:
  a(s)ᵀθ₁^c P(s) + [a(s)ᵀθ₂^c + θ₃^c Λ_c(s)] k_p Z(s) = Λ_c(s)[P(s) − k_p θ₄^c Z(s) P_m(s)].   (9)

Multiply (9) on y(t) and substitute (4) for the case d(t) = 0:
  a(s)ᵀθ₁^c P(s)[y](t) + [a(s)ᵀθ₂^c + θ₃^c Λ_c(s)] k_p Z(s)[y](t) =
    = Λ_c(s) P(s)[y](t) − k_p θ₄^c Λ_c(s) Z(s) P_m(s)[y](t),
  a(s)ᵀθ₁^c k_p Z(s)[u](t) + [a(s)ᵀθ₂^c + θ₃^c Λ_c(s)] k_p Z(s)[y](t) =
    = Λ_c(s) k_p Z(s)[u](t) − k_p θ₄^c Λ_c(s) Z(s) P_m(s)[y](t).

Now divide both sides by Λ_c(s) k_p Z(s) (Z(s) and Λ_c(s) are stable polynomials):
  (a(s)ᵀθ₁^c / Λ_c(s))[u](t) + ((a(s)ᵀθ₂^c + θ₃^c Λ_c(s)) / Λ_c(s))[y](t) = u(t) − θ₄^c P_m(s)[y](t),
i.e.
  ω₁^c(t)ᵀθ₁^c + ω₂^c(t)ᵀθ₂^c + θ₃^c y(t) = u(t) − θ₄^c P_m(s)[y](t).

Substitution of the control (8) gives
  θ₄^c P_m(s)[y](t) = θ₄^c r(t) = θ₄^c P_m(s)[y_m](t)   ⇒   P_m(s)[y − y_m](t) = 0.

4) Controller parameter calculation:
  θ₄^c = k_p⁻¹.
Denote B(s) = Λ_c(s)[P(s) − Z(s) P_m(s)]; then (9) takes the form
  a(s)ᵀθ₁^c P(s) + [a(s)ᵀθ₂^c + θ₃^c Λ_c(s)] k_p Z(s) = B(s).
The left hand side is a polynomial of degree 2n−2 whose coefficients depend linearly on θ₁^c, θ₂^c and θ₃^c; the right hand side is a polynomial of degree 2n−2 with constant coefficients (the leading sⁿ terms of P(s) and Z(s)P_m(s) cancel since both are monic). Equating the coefficients with the same powers of s we obtain the solution:
  θ₁^c = Θ₁(p_{n−1}, …, p₀; z_{m−1}, …, z₀; λ^c_{n−2}, …, λ^c₀),
  θ₂^c = Θ₂(p_{n−1}, …, p₀; z_{m−1}, …, z₀; λ^c_{n−2}, …, λ^c₀),
  θ₃^c = Θ₃(p_{n−1}, …, p₀; z_{m−1}, …, z₀; λ^c_{n−2}, …, λ^c₀),
and, by certainty equivalence, the same functions evaluated at the estimates:
  θ₁^c = Θ₁(θ; λ^c_{n−2}, …, λ^c₀),   θ₂^c = Θ₂(θ; …),   θ₃^c = Θ₃(θ; …).

Example 1:
  θ₁^c = −θ₁⁻¹(θ₂ + a_m),   θ₂^c = θ₁⁻¹ b_m.

Theorem 1. Under Assumption 1 and if all zeros of Z(s) are stable:
  (i) y(t), θ(t), θ^c(t), ω₁(t), ω₂(t) ∈ L∞;
  (ii) y(t) − y_m(t) ∈ L₂ and lim_{t→∞} [y(t) − y_m(t)] = 0.
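For n = 2, m = 1 the coefficient-matching step reduces to a 3×3 linear system. An illustrative NumPy sketch (the function name `mrc_gains` and the test polynomials are assumptions; P(s) = s² + p₁s + p₀, Z(s) = s + z₀, Λ_c(s) = s + λ₀, P_m(s) = s + p_{m0}):

```python
import numpy as np

def mrc_gains(p1, p0, z0, kp, lam0, pm0):
    """Coefficient matching for (9) with n=2, m=1.
    Unknowns x = [th1c, th2c, th3c]; th4c = 1/kp."""
    # B(s) = (s+lam0)*(P(s) - Z(s)*P_m(s)); the s^2 terms of P and Z*P_m cancel
    c1, c0 = p1 - z0 - pm0, p0 - z0 * pm0
    B = np.array([c1, c0 + lam0 * c1, lam0 * c0])   # coeffs of s^2, s^1, s^0
    # left-hand side: th1c*P(s) + kp*th2c*(s+z0) + kp*th3c*(s+lam0)*(s+z0)
    M = np.array([[1.0, 0.0, kp],
                  [p1, kp, kp * (lam0 + z0)],
                  [p0, kp * z0, kp * lam0 * z0]])
    return np.linalg.solve(M, B), 1.0 / kp
```

The matrix M is invertible exactly when P(s) and Z(s) are coprime, which is why stable (and non-shared) plant zeros are required.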

b. Pole placement control

The pole placement equation:
  A*(s) = C(s) Q(s) P(s) + D(s) Z(s),   (10)
where A*(s) is the desired characteristic polynomial of the closed loop system, and C(s), D(s) are the polynomials of the pole placement control
  u(t) = ((Λ_c(s) − C(s)Q(s)) / Λ_c(s))[u](t) + (D(s) / Λ_c(s))[r − y](t),   (11)
where r(t) is a bounded and piecewise continuous reference input signal with Q(s)[r](t) = 0:
  (a) r(t) = 0   ⇒   Q(s) = 1;
  (b) r(t) = c   ⇒   Q(s) = s;
  (c) r(t) = c e^{−at}   ⇒   Q(s) = s + a, a > 0.

According to (11) the control is a dynamical system:
  C(s) Q(s)[u](t) = D(s)[r − y](t).   (12)

Controller structure (with a(s) = [1, s, …, s^{deg Λ_c − 1}]ᵀ):
  u(t) = θ₁^cᵀ (a(s)/Λ_c(s))[u](t) + θ₂^cᵀ (a(s)/Λ_c(s))[y − r](t) + θ₃^c {y(t) − r(t)}.

Properties:
1) multiplying both sides of (10) on y(t) we obtain:
  A*(s)[y](t) = C(s)Q(s)P(s)[y](t) + D(s)Z(s)[y](t) =
    = C(s)Q(s)P(s)[y](t) + Z(s){D(s)[r](t) − C(s)Q(s)[u](t)} =
    = Z(s)D(s)[r](t),   (13)
using (12) and the plant equation P(s)[y](t) = Z(s)[u](t) (here d(t) = 0 and k_p is absorbed into Z(s)).
  r(t) ∈ L∞ and A*(s) is stable ⇒ y(t) ∈ L∞.

2) multiplying both sides of (10) on u(t) we obtain:
  A*(s)[u](t) = C(s)Q(s)P(s)[u](t) + D(s)Z(s)[u](t) =
    = P(s)D(s)[r − y](t) + D(s)P(s)[y](t) = P(s)D(s)[r](t).
  r(t) ∈ L∞ and A*(s) is stable ⇒ u(t) ∈ L∞.

3) using (10), (13) and Q(s)[r](t) = 0 we get:
  A*(s)[y − r](t) = Z(s)D(s)[r](t) − (C(s)Q(s)P(s) + D(s)Z(s))[r](t) = −C(s)P(s)Q(s)[r](t) = 0
  ⇒ lim_{t→∞} {y(t) − r(t)} = 0.

Assumption 3. Q(s)P(s) and Z(s) are coprime.

Theorem 2. Under Assumption 3 all signals are bounded and lim_{t→∞} [y(t) − r(t)] = 0.
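The Diophantine equation (10) can be solved numerically by coefficient matching, i.e. a Sylvester-type linear system over the coefficients of C(s) and D(s). A minimal NumPy sketch (`solve_diophantine` is an illustrative helper, not part of the lecture; Q(s) is assumed already multiplied into the first polynomial):

```python
import numpy as np

def solve_diophantine(qp, z, a_star, deg_c, deg_d):
    """Solve C(s)*QP(s) + D(s)*Z(s) = A*(s) for the coefficients of C and D.
    Polynomials are coefficient arrays, highest power first."""
    n = len(a_star)                       # one equation per coefficient of A*
    cols = []
    for i in range(deg_c + 1):            # columns generated by C's coefficients
        c = np.zeros(deg_c + 1); c[i] = 1.0
        cols.append(np.convolve(c, qp))
    for i in range(deg_d + 1):            # columns generated by D's coefficients
        d = np.zeros(deg_d + 1); d[i] = 1.0
        cols.append(np.convolve(d, z))
    # left-pad every column so all products align with A*'s coefficients
    A = np.column_stack([np.concatenate([np.zeros(n - len(c)), c]) for c in cols])
    if A.shape[0] == A.shape[1]:
        x = np.linalg.solve(A, a_star)
    else:
        x = np.linalg.lstsq(A, a_star, rcond=None)[0]
    return x[:deg_c + 1], x[deg_c + 1:]
```

The system is uniquely solvable when Q(s)P(s) and Z(s) are coprime (Assumption 3), which is exactly the non-singularity condition of the Sylvester matrix.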

SUMMARY

Direct adaptive control: structure + parameterization + restrictions (minimum phase). The adjustable controller C(θ^c) acts on the plant P_m(θ^{c*}), and the controller parameters θ^c(t) are adjusted directly by the adaptive law.

Indirect adaptive control: the controller C(θ^c) acts on the plant P_m(θ*); the plant parameters θ(t) are estimated adaptively, and the controller parameters θ^c(t) are derived from them (certainty equivalence).

Example. Indirect adaptive control vs. robust control.

Plant: ẏ = a y + b u + d, with a measurement noise v(t) on the output.
Assumption: |a| ≤ ā and 0 < b_min ≤ b are known bounds, a_m > 0, r(t) = 0.
Indirect adaptive control: normalized gradient descent algorithm with projection, as above.
Robust control: u = −k y with k = b_min⁻¹(ā + a_m), so the closed-loop pole satisfies a − b k ≤ ā − (ā + a_m) = −a_m for all admissible a, b.

Simulation (a = 0.5, b = 1, ā = 1.5, b_min = 0.1, a_m = 5, d(t) = 0, v(t) = 0):
[Plots of the adaptive output y_a with its estimate of a, the robust output y_r, and the controls u_a, u_r.]
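The worst-case bound behind the robust gain can be checked numerically. A small sketch (the sampled parameter ranges are illustrative): with k = b_min⁻¹(ā + a_m), every admissible pair (a, b) yields a closed-loop pole a − b k at or to the left of −a_m.

```python
import numpy as np

# Robust (non-adaptive) control of dy/dt = a*y + b*u with u = -k*y:
# the closed-loop pole is a - b*k. With k = (a_bar + a_m)/b_min and
# |a| <= a_bar, b >= b_min, we get a - b*k <= a_bar - (a_bar + a_m) = -a_m.
a_bar, b_min, a_m = 1.5, 0.1, 5.0
k = (a_bar + a_m) / b_min

rng = np.random.default_rng(1)
a_samples = rng.uniform(-a_bar, a_bar, 1000)
b_samples = rng.uniform(b_min, 10.0, 1000)
worst_pole = np.max(a_samples - b_samples * k)
```

The price of robustness is high gain: k grows like 1/b_min, whereas the adaptive controller uses the estimated b itself.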

[Plots of y_a, y_r, u_a, u_r for a_m = 10 in three cases: d(t) = 0, v(t) = 0; d(t) = 5 sin(5t), v(t) = 0; d(t) = 0, v(t) = 0.1 sin(t).]

5. ADAPTIVE OBSERVERS

A nonlinear system in state space presentation:
  ẋ = A x + B(y) u + φ(y),   y = C x,   (14)
where x ∈ Rⁿ, u ∈ Rᵐ, y ∈ Rᵖ are the state, the input (control) and the measurable output; A, C are constant and known, and the functions B(y) and φ(y) are continuous and known.

Everything is known except the state x (it is not measurable) ⇒ the state observer design:
  x̂̇ = A x̂ + B(y) u + φ(y) + L[y − C x̂],
where x̂ is the estimate of x and L is the observer matrix gain such that A − LC is Hurwitz.

Assumption 1. x(t), u(t) ∈ L∞ for all t ≥ 0.

The estimation error e = x − x̂:
  ė = ẋ − x̂̇ = {A x + B(y) u + φ(y)} − {A x̂ + B(y) u + φ(y) + L[y − C x̂]} = [A − LC] e.
The matrix A − LC is Hurwitz (design of L) ⇒ x̂(t) ∈ L∞, lim_{t→∞} [x̂(t) − x(t)] = 0.
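For a concrete 2×2 pair (A, C), the gain L can be found by matching the characteristic polynomial of A − LC to a desired one. A quick sketch (the system matrices and target poles here are illustrative, not from the lecture):

```python
import numpy as np

# Observer design for an illustrative 2x2 pair (A, C): choose L so that
# det(sI - (A - L@C)) = (s+5)(s+6) = s^2 + 11 s + 30.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
# For C = [1, 0]: det(sI - (A - L@C)) = s^2 + (3 + l1) s + (2 + 3*l1 + l2),
# so l1 = 8, l2 = 4 place the observer poles at -5 and -6.
L = np.array([[8.0], [4.0]])
eigs = np.linalg.eigvals(A - L @ C)

# the estimation error obeys de/dt = (A - L@C) e -> exponential decay
e = np.array([1.0, -1.0])
dt = 1e-3
for _ in range(int(10.0 / dt)):
    e = e + dt * ((A - L @ C) @ e)
```

Any initial estimation error decays at the rate set by the chosen observer poles, regardless of the (bounded) input.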

A nonlinear system with parametric uncertainty:
  ẋ = A x + B(y) u + φ(y) + G(y, u) θ,   y = C x,   (15)
where θ ∈ R^q is the vector of unknown parameters and G(y, u) is a known continuous function.

The adaptive observer:
  x̂̇ = A x̂ + B(y) u + φ(y) + L[y − C x̂] + G(y, u) θ̂ − Ω θ̂̇,   (16)
  Ω̇ = [A − LC] Ω − G(y, u),   (17)
  θ̂̇ = −γ Ωᵀ Cᵀ [y − C x̂],   γ > 0,   (18)
where θ̂ ∈ R^q is the estimate of θ and Ω ∈ R^{n×q} is an auxiliary filter variable.

The state estimation error e = x − x̂:
  ė = [A − LC] e + G(y, u)[θ − θ̂] + Ω θ̂̇.
A − LC is Hurwitz + properties of θ̂(t) and θ̂̇(t) ⇒ properties of e(t).

The auxiliary error δ = e + Ω[θ − θ̂]:
  δ̇ = ė + Ω̇[θ − θ̂] − Ω θ̂̇ =
    = {[A − LC] e + G(y, u)[θ − θ̂] + Ω θ̂̇} + {[A − LC] Ω − G(y, u)}[θ − θ̂] − Ω θ̂̇ = [A − LC] δ.
A − LC is Hurwitz ⇒ δ(t) ∈ L∞, lim_{t→∞} δ(t) = 0.
A − LC is Hurwitz + y(t), u(t) ∈ L∞ (Assumption 1) ⇒ Ω(t) ∈ L∞.

The parameter estimation error θ̃(t) = θ − θ̂(t):
  θ̃̇ = −θ̂̇ = γ Ωᵀ Cᵀ [y − C x̂] = γ Ωᵀ Cᵀ C e = γ Ωᵀ Cᵀ C [δ − Ω θ̃].
Intuition: lim_{t→∞} δ(t) = 0 ⇒ θ̃̇ = −γ h(t) h(t)ᵀ θ̃ for t big enough, where h(t) = Ω(t)ᵀ Cᵀ.

Assumption 2. h(t) is PE: there exist ρ > 0 and ℓ > 0 such that
  ∫_t^{t+ℓ} h(τ) h(τ)ᵀ dτ ≥ ρ I_q   for all t ≥ 0.

Assumption 2 ⇒ θ̃(t) ∈ L∞, lim_{t→∞} θ̃(t) = 0; together with the properties of δ(t) ⇒ e(t) ∈ L∞, lim_{t→∞} e(t) = 0.

Theorem 3. Under Assumptions 1 and 2 all signals in (15)–(18) are bounded and
  lim_{t→∞} [x̂(t) − x(t)] = 0,   lim_{t→∞} [θ̂(t) − θ] = 0.

Example. Oscillating pendulum:
  ÿ = −ω² sin(y) − ρ ẏ + b cos(y) f(t) + d(t),
where y = φ ∈ [−π, π) is the measured angle, ẏ and ÿ are the angular velocity and acceleration; ρ > 0 is a known friction coefficient, ω > 0 is an unknown natural frequency, b > 0 is an unknown control gain.

Presentation in the form (15) for x₁ = y, x₂ = ẏ, u = f and d(t) = 0:
  ẋ₁ = x₂,   ẋ₂ = −ρ x₂ − ω² sin(x₁) + b cos(x₁) u(t),   y = x₁,
i.e.
  A = [0 1; 0 −ρ],   C = [1 0],   B(y) = 0,   φ(y) = 0,
  G(y, u) = [0 0; −sin(y) cos(y)·u],   θ = [ω², b]ᵀ.

Both assumptions are satisfied for this example.
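The adaptive observer (16)–(18) can be simulated for this pendulum. A Python sketch with assumed parameter values (ω = 1, ρ = 0.1, b = 0.5, L = [2, 2]ᵀ, γ = 5 and the forcing sin(3t) are illustrative choices, not necessarily the lecture's):

```python
import numpy as np

def adaptive_observer_pendulum(T=150.0, dt=2e-3, gamma=5.0):
    """Adaptive observer (16)-(18) for the pendulum in the form (15)."""
    rho, omega2, b = 0.1, 1.0, 0.5        # assumed true parameters
    theta_true = np.array([omega2, b])
    A = np.array([[0.0, 1.0], [0.0, -rho]])
    C = np.array([[1.0, 0.0]])
    L = np.array([[2.0], [2.0]])          # makes A - L@C Hurwitz
    x = np.array([1.5, 0.0])              # true state (x2 is unmeasured)
    xh = np.zeros(2)                      # state estimate
    th = np.array([0.5, 0.2])             # estimate of theta = [omega^2, b]
    Om = np.zeros((2, 2))                 # auxiliary filter (17)
    for k in range(int(T / dt)):
        u = np.sin(3.0 * k * dt)
        y = x[0]
        G = np.array([[0.0, 0.0], [-np.sin(y), np.cos(y) * u]])
        e_y = y - xh[0]
        dth = -gamma * (Om.T @ C.T).ravel() * e_y             # (18)
        dxh = A @ xh + G @ th + (L * e_y).ravel() - Om @ dth  # (16)
        dOm = (A - L @ C) @ Om - G                            # (17)
        dx = np.array([x[1],
                       -rho * x[1] - omega2 * np.sin(x[0])
                       + b * np.cos(x[0]) * u])               # plant (15)
        x = x + dt * dx
        xh = xh + dt * dxh
        Om = Om + dt * dOm
        th = th + dt * dth
    return x, xh, th, theta_true
```

The decaying free swing (large initial angle) plus the persistent forcing keeps h(t) exciting, so both the state and the parameter estimates improve over the run.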

Simulation (ρ = 0.1, b = 0.5, f(t) = sin(3t)):
[Plots of the angle φ, the velocity φ′ and the estimates of ω and b for d(t) = 0, d(t) = 0.5 sin(t) and d(t) = 0.5 sin(6t).]

INDIRECT ADAPTIVE CONTROL

OUTLINE
1. Introduction
   a. Main properties
   b. Running example
2. Adaptive parameter estimation
   a. Parameterized system model
   b. Linear parametric model
   c. Normalized gradient algorithm
   d. Normalized least-squares algorithm
   e. Discrete-time version of adaptive algorithms
3. Identification and robustness
   a. Parametric convergence and persistency of excitation
   b. Robustness of adaptive algorithms
4. Indirect adaptive control
   a. Model reference control
   b. Pole placement control
5. Adaptive observers