Nondifferentiable Convex Functions


1 Nondifferentiable Convex Functions DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis Carlos Fernandez-Granda

2 Applications
Subgradients
Optimization methods

3 Regression

The aim is to learn a function h that relates a response or dependent variable y to several observed variables x_1, x_2, ..., x_p, known as covariates, features or independent variables

The response is assumed to be of the form

  y = h(x) + z

where x ∈ R^p contains the features and z is noise

4 Linear regression

The regression function h is assumed to be linear:

  y^(i) = (x^(i))^T β + z^(i),  1 ≤ i ≤ n

Our aim is to estimate β ∈ R^p from the data

5 Linear regression

In matrix form: stack the responses y^(1), ..., y^(n) in a vector y, the features in a matrix X whose ith row is (x^(i))^T, and the noise in a vector z, so that

  y = X β + z

6 Sparse linear regression

Only a subset of the features are relevant: a model-selection problem

Two objectives:
- Good fit to the data: ||X β − y||_2^2 should be as small as possible
- Using a small number of features: β should be as sparse as possible

7 Sparse linear regression

If only entries j and l of β are nonzero, the model

  y = X β + z

reduces to

  y = β_j x_j + β_l x_l + z

where x_j and x_l are the corresponding columns of X; all other columns are multiplied by zero coefficients

8 Sparse linear regression with 2 features

  y := α x_1 + z
  X := [x_1 x_2]
  ||x_1||_2 = 1,  ||x_2||_2 = 1,  ⟨x_1, x_2⟩ = ρ

9 Least squares: not sparse

  β_LS = (X^T X)^{-1} X^T y
       = (1/(1 − ρ²)) [1 −ρ; −ρ 1] [x_1^T y; x_2^T y]
       = (1/(1 − ρ²)) [1 −ρ; −ρ 1] [α + x_1^T z; αρ + x_2^T z]
       = [α; 0] + (1/(1 − ρ²)) [⟨x_1 − ρ x_2, z⟩; ⟨x_2 − ρ x_1, z⟩]
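The closed-form least-squares expression on this slide can be checked numerically. The sketch below builds two unit-norm features with inner product ρ (all sizes and values are hypothetical) and compares the normal-equations solution with the formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unit-norm features with inner product rho (hypothetical values).
n, rho, alpha = 50, 0.5, 3.0
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n)
v -= (u @ v) * u                         # orthogonalize v against u
v /= np.linalg.norm(v)
x1 = u
x2 = rho * u + np.sqrt(1 - rho**2) * v   # unit norm, <x1, x2> = rho
X = np.column_stack([x1, x2])

z = 0.01 * rng.standard_normal(n)        # small noise
y = alpha * x1 + z

# Least-squares estimate via the normal equations.
beta_ls = np.linalg.solve(X.T @ X, X.T @ y)

# Closed-form expression from the slide.
beta_formula = np.array([alpha, 0.0]) + np.array(
    [x1 @ z - rho * (x2 @ z), x2 @ z - rho * (x1 @ z)]) / (1 - rho**2)

print(np.allclose(beta_ls, beta_formula))  # True: the two expressions coincide
```

Note that both entries of β_LS are perturbed by the noise, which is why the least-squares estimate is not sparse.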

10 The lasso

Idea: use l1-norm regularization to promote sparse coefficients

  β_lasso := arg min_β (1/2) ||y − X β||_2^2 + λ ||β||_1

11 Nonnegative weighted sums

The weighted sum of m convex functions f_1, ..., f_m

  f := sum_{i=1}^m α_i f_i

is convex if α_1, ..., α_m ∈ R are nonnegative

Proof: For any x, y and θ ∈ (0, 1),

  f(θ x + (1 − θ) y) = sum_{i=1}^m α_i f_i(θ x + (1 − θ) y)
                     ≤ sum_{i=1}^m α_i (θ f_i(x) + (1 − θ) f_i(y))
                     = θ f(x) + (1 − θ) f(y)

16 Regularized least-squares

Regularized least-squares cost functions such as

  (1/2) ||A x − y||_2^2 + λ ||x||_1

are convex: they are nonnegative weighted sums of convex functions

17 It works

[Plot: lasso coefficient values as a function of the regularization parameter]

18 Ridge regression doesn't work

[Plot: ridge-regression coefficient values as a function of the regularization parameter]

19 Prostate cancer data set

8 features (age, weight, results of clinical analyses)

Response: prostate-specific antigen (PSA), associated with cancer

Training set: 60 patients
Test set: 37 patients

20 Prostate cancer data set

[Plot: coefficient values, training loss and test loss as a function of λ]

21 Principal component analysis

Given n data vectors x_1, x_2, ..., x_n ∈ R^d:

1. Center the data: c_i = x_i − av(x_1, x_2, ..., x_n), 1 ≤ i ≤ n
2. Group the centered data as columns of a matrix C = [c_1 c_2 ⋯ c_n]
3. Compute the SVD of C

The left singular vectors are the principal directions. The principal values are the coefficients of the centered vectors in the basis of principal directions.
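The three steps can be sketched in a few lines of numpy; the data below are synthetic and the stretched direction is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 points in R^2, stretched along (1, 1)/sqrt(2).
n = 200
direction = np.array([1.0, 1.0]) / np.sqrt(2)
points = np.outer(3 * rng.standard_normal(n), direction) \
       + 0.1 * rng.standard_normal((n, 2))
X = points.T                              # d x n, one data vector per column

# 1-2. Center the data and group it as columns of a matrix C.
C = X - X.mean(axis=1, keepdims=True)

# 3. Compute the SVD of C: left singular vectors = principal directions,
#    rows of diag(s) @ Vt = coefficients in that basis (principal values).
U, s, Vt = np.linalg.svd(C, full_matrices=False)

first_pc = U[:, 0]
print(abs(first_pc @ direction))          # close to 1: PC1 aligns with (1, 1)
```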

22 25 Examples

[Two numerical examples: centered data matrices C with scatter plots showing the principal directions]

26 Outliers

Problem: outliers distort the principal directions

Model: data equals low-rank component plus sparse component,

  Y = L + S

Idea: fit the model to the data, then apply PCA to L

27 Robust PCA

Data: Y ∈ R^{n×m}

Robust PCA estimator of the low-rank component:

  L_RPCA := arg min_L ||L||_* + λ ||Y − L||_1

where λ > 0 is a regularization parameter and ||·||_1 is the l1 norm of the vectorized matrix

Robust PCA estimator of the sparse component: S_RPCA := Y − L_RPCA

28 Example

[Plot: error of the low-rank and sparse estimates as a function of λ]

29 λ = 1/√n: [resulting L and S estimates]

30 Large λ: [resulting L and S estimates]

31 Small λ: [resulting L and S estimates]

32 Background subtraction

33 Background subtraction

Build a matrix Y with the vectorized frames as columns

A static image yields

  Y = [x x ⋯ x] = x [1 1 ⋯ 1],

a rank-1 matrix

Slowly varying background: low rank
Rapidly varying foreground: sparse

34 42 [Three example frames, each decomposed into its low-rank component (background) and its sparse component (foreground)]

43 Applications
Subgradients
Optimization methods

44 Gradient

A differentiable function f: R^n → R is convex if and only if for every x, y ∈ R^n

  f(y) ≥ f(x) + ∇f(x)^T (y − x)

45 Gradient

[Plot: the first-order approximation at x lies below f(y)]

46 Subgradient

A subgradient of f: R^n → R at x ∈ R^n is a vector g ∈ R^n such that

  f(y) ≥ f(x) + g^T (y − x),  for all y ∈ R^n

Geometrically, the hyperplane

  H_g := { y ∈ R^{n+1} : y[n+1] = f(x) + g^T ([y[1] ⋯ y[n]]^T − x) }

is a supporting hyperplane of the epigraph of f at x

The set of all subgradients at x is called the subdifferential

47 Subgradients

48 Subgradient of differentiable function

If a convex function is differentiable at a point, its only subgradient there is the gradient

49 Proof

Assume g is a subgradient at x. For any α ≥ 0 and any standard-basis vector e_i,

  f(x + α e_i) ≥ f(x) + g^T (α e_i) = f(x) + g[i] α
  f(x − α e_i) ≥ f(x) − g^T (α e_i) = f(x) − g[i] α

Combining both inequalities,

  (f(x) − f(x − α e_i)) / α ≤ g[i] ≤ (f(x + α e_i) − f(x)) / α

Letting α → 0 implies

  g[i] = ∂f(x) / ∂x[i]

52 Subgradient

A function f: R^n → R is convex if and only if it has a subgradient at every point

It is strictly convex if and only if for all x ∈ R^n there exists g ∈ R^n such that

  f(y) > f(x) + g^T (y − x),  for all y ≠ x

53 Optimality condition for nondifferentiable functions

If 0 is a subgradient of f at x, then for all y ∈ R^n

  f(y) ≥ f(x) + 0^T (y − x) = f(x)

so x is a minimizer

Under strict convexity the minimum is unique

56 Sum of subgradients

Let g_1 and g_2 be subgradients at x ∈ R^n of f_1: R^n → R and f_2: R^n → R. Then g := g_1 + g_2 is a subgradient of f := f_1 + f_2 at x

Proof: For any y ∈ R^n

  f(y) = f_1(y) + f_2(y)
       ≥ f_1(x) + g_1^T (y − x) + f_2(x) + g_2^T (y − x)
       = f(x) + g^T (y − x)

60 Subgradient of scaled function

Let g_1 be a subgradient at x ∈ R^n of f_1: R^n → R. For any η ≥ 0, g_2 := η g_1 is a subgradient of f_2 := η f_1 at x

Proof: For any y ∈ R^n

  f_2(y) = η f_1(y)
         ≥ η ( f_1(x) + g_1^T (y − x) )
         = f_2(x) + g_2^T (y − x)

64 Subdifferential of absolute value

[Plot of f(x) = |x|]

65 Subdifferential of absolute value

At x ≠ 0, f(x) = |x| is differentiable, so the only subgradient is g = sign(x)

At x = 0, we need

  f(0 + y) ≥ f(0) + g (y − 0),  i.e.  |y| ≥ g y

which holds if and only if |g| ≤ 1

68 Subdifferential of l1 norm

g is a subgradient of the l1 norm at x ∈ R^n if and only if

  g[i] = sign(x[i])  if x[i] ≠ 0
  |g[i]| ≤ 1         if x[i] = 0
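This characterization is easy to probe numerically. The sketch below picks one valid subgradient (zero on the zero entries, one of many admissible choices) and checks the defining inequality at random points; the particular x is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def l1_subgradient(x):
    """One valid subgradient of the l1 norm at x: sign(x[i]) on the support,
    0 on the zero entries (any value in [-1, 1] would also work there)."""
    return np.sign(x)

x = np.array([2.0, 0.0, -1.5, 0.0])
g = l1_subgradient(x)

# Subgradient inequality: ||y||_1 >= ||x||_1 + g^T (y - x) for all y.
for _ in range(1000):
    y = 5 * rng.standard_normal(4)
    assert np.sum(np.abs(y)) >= np.sum(np.abs(x)) + g @ (y - x) - 1e-12
print("subgradient inequality holds at all sampled points")
```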

69 Proof

g is a subgradient of ||·||_1 at x if and only if g[i] is a subgradient of |·| at x[i] for all 1 ≤ i ≤ n

70 Proof

If g is a subgradient of ||·||_1 at x, then for any y ∈ R

  |y| = |x[i]| + ||x + (y − x[i]) e_i||_1 − ||x||_1
      ≥ |x[i]| + ||x||_1 + g^T (y − x[i]) e_i − ||x||_1
      = |x[i]| + g[i] (y − x[i])

so g[i] is a subgradient of |·| at x[i] for all 1 ≤ i ≤ n

73 Proof

If g[i] is a subgradient of |·| at x[i] for 1 ≤ i ≤ n, then for any y ∈ R^n

  ||y||_1 = sum_{i=1}^n |y[i]|
          ≥ sum_{i=1}^n ( |x[i]| + g[i] (y[i] − x[i]) )
          = ||x||_1 + g^T (y − x)

so g is a subgradient of ||·||_1 at x

76 78 Subdifferential of l1 norm

[Illustrations of the subdifferential of the l1 norm]

79 Subdifferential of the nuclear norm

Let X ∈ R^{m×n} be a rank-r matrix with SVD U S V^T, where U ∈ R^{m×r}, V ∈ R^{n×r} and S ∈ R^{r×r}

A matrix G is a subgradient of the nuclear norm at X if and only if

  G := U V^T + W

where W satisfies

  ||W|| ≤ 1,  U^T W = 0,  W V = 0
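The characterization can also be probed numerically. The sketch below builds a random rank-2 matrix and one valid W (the sizes, the rank and the 0.9 scaling are hypothetical choices), then checks the subgradient inequality; `np.linalg.norm(..., 'nuc')` computes the nuclear norm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random rank-2 matrix X and its rank-r SVD factors.
m, n, r = 6, 5, 2
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
U, s, Vt = np.linalg.svd(X)
Ur, Vr = U[:, :r], Vt[:r, :].T

# W orthogonal to col(X) and row(X), scaled so that ||W|| < 1.
M = rng.standard_normal((m, n))
W = (np.eye(m) - Ur @ Ur.T) @ M @ (np.eye(n) - Vr @ Vr.T)
W *= 0.9 / np.linalg.norm(W, 2)
G = Ur @ Vr.T + W                          # candidate subgradient at X

# Subgradient inequality: ||Y||_* >= ||X||_* + <G, Y - X> for all Y.
nuc = lambda Z: np.linalg.norm(Z, 'nuc')
for _ in range(200):
    Y = 3 * rng.standard_normal((m, n))
    assert nuc(Y) >= nuc(X) + np.sum(G * (Y - X)) - 1e-8
print("G is a valid subgradient of the nuclear norm at X")
```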

80 Proof

By Pythagoras' theorem, for any x ∈ R^n with unit l2 norm,

  ||P_row(X) x||_2^2 + ||P_row(X)⊥ x||_2^2 = ||x||_2^2 = 1

The rows of U V^T lie in row(X) and the rows of W in row(X)⊥, so

  ||G||^2 := max_{||x||_2 = 1} ||G x||_2^2
           = max_{||x||_2 = 1} ||U V^T x||_2^2 + ||W x||_2^2
           = max_{||x||_2 = 1} ||U V^T P_row(X) x||_2^2 + ||W P_row(X)⊥ x||_2^2
           ≤ max_{||x||_2 = 1} ||U V^T||^2 ||P_row(X) x||_2^2 + ||W||^2 ||P_row(X)⊥ x||_2^2
           ≤ 1

since ||U V^T|| = 1 and ||W|| ≤ 1

86 Hölder's inequality for matrices

For any matrix A ∈ R^{m×n},

  ||A||_* = sup_{ ||B|| ≤ 1, B ∈ R^{m×n} } ⟨A, B⟩

87 Proof

Since ||G|| ≤ 1, Hölder's inequality implies that for any matrix Y ∈ R^{m×n}

  ||Y||_* ≥ ⟨G, Y⟩ = ⟨G, X⟩ + ⟨G, Y − X⟩ = ⟨U V^T, X⟩ + ⟨W, X⟩ + ⟨G, Y − X⟩

88 Proof

U^T W = 0 implies

  ⟨W, X⟩ = ⟨W, U S V^T⟩ = ⟨U^T W, S V^T⟩ = 0

and

  ⟨U V^T, X⟩ = tr(V U^T X) = tr(V U^T U S V^T) = tr(V^T V S) = tr(S) = ||X||_*

94 Proof

Therefore, for any matrix Y ∈ R^{m×n},

  ||Y||_* ≥ ⟨G, X⟩ + ⟨G, Y − X⟩ = ||X||_* + ⟨G, Y − X⟩

so G is a subgradient of the nuclear norm at X

95 Sparse linear regression with 2 features

  y := α x_1 + z
  X := [x_1 x_2]
  ||x_1||_2 = 1,  ||x_2||_2 = 1,  ⟨x_1, x_2⟩ = ρ

96 Analysis of lasso estimator

Let α ≥ 0. Then

  β_lasso = [ α + x_1^T z − λ ; 0 ]

as long as

  |x_2^T z − ρ x_1^T z| / (1 − ρ) ≤ λ ≤ α + x_1^T z

97 Lasso estimator

[Plot: lasso coefficient values as a function of the regularization parameter]

98 Optimality condition for nondifferentiable functions

If 0 is a subgradient of f at x, then

  f(y) ≥ f(x) + 0^T (y − x) = f(x)  for all y ∈ R^n

Under strict convexity the minimum is unique

99 Proof

The cost function is strictly convex if n ≥ 2 and |ρ| < 1

Aim: show that there is a subgradient equal to 0 at a 1-sparse solution

100 Proof

The gradient of the quadratic term

  q(β) := (1/2) ||X β − y||_2^2

at β_lasso equals

  ∇q(β_lasso) = X^T (X β_lasso − y)

101 Proof

If only the first entry of β_lasso is nonzero and nonnegative, then

  g_l1 := [1; γ]

is a subgradient of the l1 norm at β_lasso for any γ ∈ R such that |γ| ≤ 1

In that case

  g_lasso := ∇q(β_lasso) + λ g_l1

is a subgradient of the cost function at β_lasso

If g_lasso = 0 then β_lasso is the unique solution

104 Proof

  g_lasso := X^T (X β_lasso − y) + λ [1; γ]
           = X^T ( β_lasso[1] x_1 − α x_1 − z ) + λ [1; γ]
           = [ x_1^T ( β_lasso[1] x_1 − α x_1 − z ) + λ ;
               x_2^T ( β_lasso[1] x_1 − α x_1 − z ) + λ γ ]
           = [ β_lasso[1] − α − x_1^T z + λ ;
               ρ β_lasso[1] − ρ α − x_2^T z + λ γ ]

108 Proof

  g_lasso = [ β_lasso[1] − α − x_1^T z + λ ;
              ρ β_lasso[1] − ρ α − x_2^T z + λ γ ]

Equal to 0 if

  β_lasso[1] = α + x_1^T z − λ

  γ = ( ρ α + x_2^T z − ρ β_lasso[1] ) / λ
    = ( x_2^T z − ρ x_1^T z ) / λ + ρ

112 Proof

We still need to check that this is a valid subgradient at β_lasso, i.e.

  β_lasso[1] is nonnegative, which holds if λ ≤ α + x_1^T z

  |γ| ≤ 1, which holds if

    |x_2^T z − ρ x_1^T z| / λ + ρ ≤ 1,  i.e.  λ ≥ |x_2^T z − ρ x_1^T z| / (1 − ρ)

113 Robust PCA

Data: Y ∈ R^{n×m}

Robust PCA estimator of the low-rank component:

  L_RPCA := arg min_L ||L||_* + λ ||Y − L||_1

where λ > 0 is a regularization parameter and ||·||_1 is the l1 norm of the vectorized matrix

Robust PCA estimator of the sparse component: S_RPCA := Y − L_RPCA

114 Example

  Y := [small numerical matrix whose entries include a corrupted entry α]

115 Analysis of robust PCA estimator

The robust PCA estimates of both components are exact for any value of α as long as

  2/√30 < λ < 2/3

116 Example

[Plot: error of the low-rank and sparse estimates as a function of λ]

117 Optimality + uniqueness condition

Let Y := L + S where L, S ∈ R^{m×n} and L = U_L S_L V_L^T has rank r, with U_L ∈ R^{m×r}, V_L ∈ R^{n×r}, S_L ∈ R^{r×r}

Assume there exists G := U_L V_L^T + W where W satisfies ||W|| < 1, U_L^T W = 0, W V_L = 0, and there also exists a matrix G_l1 satisfying

  G_l1[i, j] = sign(S[i, j])  if S[i, j] ≠ 0,   (1)
  |G_l1[i, j]| < 1            otherwise,         (2)

such that G + λ G_l1 = 0

Then the solution to the robust PCA problem is unique and equal to L

118 Optimality + uniqueness condition

G := U_L V_L^T + W is a subgradient of the nuclear norm at L

G_l1 is a subgradient of the l1-norm term at L

G + λ G_l1 is a subgradient of the cost function at L

G + λ G_l1 = 0 implies that L is a solution (uniqueness is more difficult to prove)

119 Example

We want to show that the candidate decomposition, with L the rank-1 component and S containing the single nonzero entry of magnitude α, is the solution

120 123 Example

[SVD of the candidate L, yielding the rank-1 factors and U_L V_L^T]

124 132 Example

[G := U_L V_L^T + W, with the entries of G_l1 free wherever S is zero; imposing G + λ G_l1 = 0, W V_L = 0 and U_L^T W = 0 determines all entries of W and G_l1]

133 136 Example

The remaining requirements: |G_l1[i, j]| < 1 wherever S[i, j] = 0, which holds when λ > 2/√30, and ||W|| < 1, which holds when λ < 2/3

137 Applications
Subgradients
Optimization methods

138 Subgradient method

Optimization problem: minimize f(x), where f is convex but nondifferentiable

Subgradient-method iteration:

  x^(0) = arbitrary initialization
  x^(k+1) = x^(k) − α_k g^(k)

where g^(k) is a subgradient of f at x^(k)

139 Least-squares regression with l1-norm regularization

  minimize (1/2) ||A x − y||_2^2 + λ ||x||_1

A subgradient at x^(k):

  g^(k) = A^T (A x^(k) − y) + λ sign(x^(k))

Subgradient-method iteration:

  x^(0) = arbitrary initialization
  x^(k+1) = x^(k) − α_k ( A^T (A x^(k) − y) + λ sign(x^(k)) )
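A sketch of this iteration on a synthetic problem (the sizes, the sparsity and the step-size schedule are hypothetical choices). Because the subgradient method is not a descent method, the code tracks the best cost seen so far.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic l1-regularized least-squares problem.
m, p, lam = 30, 10, 0.5
A = rng.standard_normal((m, p))
x_true = np.zeros(p); x_true[:3] = [2.0, -1.0, 1.5]
y = A @ x_true + 0.05 * rng.standard_normal(m)

def cost(x):
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(p)
best = cost(x)
alpha0 = 0.01                              # diminishing steps alpha0 / sqrt(k+1)
for k in range(5000):
    g = A.T @ (A @ x - y) + lam * np.sign(x)   # a subgradient at x
    x = x - alpha0 / np.sqrt(k + 1) * g
    best = min(best, cost(x))              # keep the best cost seen so far

print(best < cost(np.zeros(p)))            # True: the best iterate improves
```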

142 Convergence of subgradient method

It is not a descent method

The convergence rate can be shown to be O(1/ε²)

Diminishing step sizes are necessary for convergence

Experiment: minimize (1/2) ||A x − y||_2^2 + λ ||x||_1 with A ∈ R^{m×n}, y = A x + z, where x is 100-sparse and z is iid Gaussian

143 Convergence of subgradient method

[Log-scale plot: (f(x^(k)) − f(x*)) / f(x*) versus k for step sizes α_0, α_0/√k and α_0/k]

144 Convergence of subgradient method

[Plot: (f(x^(k)) − f(x*)) / f(x*) versus k for step sizes α_0, α_0/√n and α_0/n]

145 Composite functions

An interesting class of functions for data analysis:

  f(x) + h(x)

with f convex and differentiable, and h convex but not differentiable

Example: (1/2) ||A x − y||_2^2 + λ ||x||_1

146 Motivation

Aim: minimize a convex differentiable function f

Idea: iteratively minimize the first-order approximation, while staying close to the current point:

  x^(0) = arbitrary initialization
  x^(k+1) = arg min_x f(x^(k)) + ∇f(x^(k))^T (x − x^(k)) + (1/(2 α_k)) ||x − x^(k)||_2^2

where α_k is a parameter that determines how close we stay

147 Motivation

The linear approximation plus the l2 term is convex:

  f(x^(k)) + ∇f(x^(k))^T (x − x^(k)) + (1/(2 α_k)) ||x − x^(k)||_2^2

Setting its gradient with respect to x to zero,

  ∇f(x^(k)) + (1/α_k) (x − x^(k)) = 0

so the minimizer is

  x^(k+1) = arg min_x f(x^(k)) + ∇f(x^(k))^T (x − x^(k)) + (1/(2 α_k)) ||x − x^(k)||_2^2
          = x^(k) − α_k ∇f(x^(k))

i.e. the iteration is exactly gradient descent

151 Proximal gradient method

Idea: minimize the local first-order approximation plus h:

  x^(k+1) = arg min_x f(x^(k)) + ∇f(x^(k))^T (x − x^(k)) + (1/(2 α_k)) ||x − x^(k)||_2^2 + h(x)
          = arg min_x (1/2) || x − ( x^(k) − α_k ∇f(x^(k)) ) ||_2^2 + α_k h(x)
          = prox_{α_k h} ( x^(k) − α_k ∇f(x^(k)) )

Proximal operator:

  prox_h(y) := arg min_x h(x) + (1/2) ||y − x||_2^2

152 Proximal gradient method

Method to solve the optimization problem

  minimize f(x) + h(x)

where f is differentiable and prox_h is tractable

Proximal-gradient iteration:

  x^(0) = arbitrary initialization
  x^(k+1) = prox_{α_k h} ( x^(k) − α_k ∇f(x^(k)) )

153 Interpretation as a fixed-point method

A vector x* is a solution to minimize f(x) + h(x) if and only if it is a fixed point of the proximal-gradient iteration for any α > 0:

  x* = prox_{α h} ( x* − α ∇f(x*) )

154 Proof

x* is the solution to

  min_x α h(x) + (1/2) || x* − α ∇f(x*) − x ||_2^2

if and only if there is a subgradient g of h at x* such that

  α ∇f(x*) + α g = 0

x* minimizes f + h if and only if there is a subgradient g of h at x* such that

  ∇f(x*) + g = 0

The two conditions are identical

158 Proximal operator of l1 norm

The proximal operator of the l1 norm is the soft-thresholding operator

  prox_{α ||·||_1}(y) = S_α(y)

where α > 0 and

  S_α(y)[i] := y[i] − sign(y[i]) α   if |y[i]| ≥ α
               0                     otherwise
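A minimal implementation of S_α, with a brute-force check that it minimizes the scalar proximal objective (the particular y and α values are hypothetical):

```python
import numpy as np

def soft_threshold(y, alpha):
    """Proximal operator of alpha * ||.||_1: entrywise soft thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)

# Entries with |y[i]| >= alpha shrink toward 0 by alpha; the rest become 0.
y = np.array([3.0, 0.5, -2.0, -0.3, 0.0])
print(soft_threshold(y, 1.0))              # entries: 2, 0, -1, 0, 0

# Brute-force check of the scalar problem: S_alpha(y) minimizes
# alpha * |x| + (y - x)^2 / 2 over x.
alpha, yv = 1.0, -2.0
xs = np.linspace(-5, 5, 100001)
w = alpha * np.abs(xs) + 0.5 * (yv - xs) ** 2
assert abs(xs[np.argmin(w)] - soft_threshold(np.array([yv]), alpha)[0]) < 1e-3
```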

159 Proof

  α ||x||_1 + (1/2) ||y − x||_2^2 = sum_{i=1}^m ( α |x[i]| + (1/2) (y[i] − x[i])² )

decomposes into independent scalar problems, so we can just consider

  w(x) := α |x| + (1/2) (y − x)² = (y² + x²)/2 + α |x| − y x

160 Proof

If x ≥ 0,

  w(x) = (y² + x²)/2 − (y − α) x
  w′(x) = x − (y − α)

If y ≥ α, the minimum over x ≥ 0 is at x := y − α
If y < α, the minimum over x ≥ 0 is at 0

166 Proof

If x < 0,

  w(x) = (y² + x²)/2 − (y + α) x
  w′(x) = x − (y + α)

If y ≤ −α, the minimum over x < 0 is at x := y + α
If y > −α, the minimum over x < 0 is approached at 0

172 Proof

If −α ≤ y ≤ α, the minimum is at x := 0

If y ≥ α, the minimum is at x := y − α or at x := 0, but

  w(y − α) = α (y − α) + α²/2 = α y − α²/2 ≤ y²/2 = w(0)

because (y − α)² ≥ 0, so the minimum is at x := y − α

The same argument applies for y ≤ −α

177 Iterative Shrinkage-Thresholding Algorithm (ISTA)

The proximal gradient method for the problem

  minimize (1/2) ||A x − y||_2^2 + λ ||x||_1

is called ISTA

ISTA iteration:

  x^(0) = arbitrary initialization
  x^(k+1) = S_{α_k λ} ( x^(k) − α_k A^T (A x^(k) − y) )
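A sketch of ISTA on a synthetic lasso problem (the sizes and the constant step 1/L are hypothetical choices). The final check verifies the optimality condition derived earlier: 0 must be a subgradient of the cost at the output.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic lasso problem.
m, p, lam = 40, 15, 0.5
A = rng.standard_normal((m, p))
x_true = np.zeros(p); x_true[:2] = [3.0, -2.0]
y = A @ x_true + 0.05 * rng.standard_normal(m)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # constant step 1/L
x = np.zeros(p)
for _ in range(2000):
    x = soft_threshold(x - alpha * A.T @ (A @ x - y), alpha * lam)

# Optimality: g[i] = -lam * sign(x[i]) on the support, |g[i]| <= lam elsewhere.
g = A.T @ (A @ x - y)
assert np.all(np.abs(g[x == 0]) <= lam + 1e-6)
assert np.allclose(g[x != 0], -lam * np.sign(x[x != 0]), atol=1e-6)
print("ISTA output satisfies the lasso optimality conditions")
```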

178 Fast Iterative Shrinkage-Thresholding Algorithm (FISTA)

ISTA can be accelerated using Nesterov's accelerated gradient method

FISTA iteration:

  x^(0) = arbitrary initialization
  z^(0) = x^(0)
  x^(k+1) = S_{α_k λ} ( z^(k) − α_k A^T (A z^(k) − y) )
  z^(k+1) = x^(k+1) + (k / (k + 3)) ( x^(k+1) − x^(k) )
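The FISTA iteration can be sketched on the same kind of synthetic lasso problem (all sizes are hypothetical); after a few hundred iterations the cost is essentially optimal.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic lasso problem.
m, p, lam = 40, 15, 0.5
A = rng.standard_normal((m, p))
x_true = np.zeros(p); x_true[:2] = [3.0, -2.0]
y = A @ x_true + 0.05 * rng.standard_normal(m)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cost(v):
    return 0.5 * np.sum((A @ v - y) ** 2) + lam * np.sum(np.abs(v))

alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # constant step 1/L
x = np.zeros(p)
z = x.copy()
for k in range(500):
    x_new = soft_threshold(z - alpha * A.T @ (A @ z - y), alpha * lam)
    z = x_new + k / (k + 3) * (x_new - x)  # momentum (extrapolation) step
    x = x_new

print("final lasso cost:", cost(x))        # near the optimal value
```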

179 Convergence of proximal gradient method

Without acceleration:
- A descent method
- Convergence rate can be shown to be O(1/ε) with constant step or backtracking line search

With acceleration:
- Not a descent method
- Convergence rate can be shown to be O(1/√ε) with constant step or backtracking line search

Experiment: minimize (1/2) ||A x − y||_2^2 + λ ||x||_1 with A ∈ R^{m×n}, y = A x_0 + z, x_0 sparse and z iid Gaussian

180 Convergence of proximal gradient method

[Plot: (f(x^(k)) − f(x*)) / f(x*) versus k for the subgradient method (α_0/√k), ISTA and FISTA]

181 Coordinate descent

Idea: solve the n-dimensional problem

  minimize c(x[1], x[2], ..., x[n])

by solving a sequence of 1D problems

Coordinate-descent iteration:

  x^(0) = arbitrary initialization
  x^(k+1)[i] = arg min_α c(x^(k)[1], ..., α, ..., x^(k)[n])

for some 1 ≤ i ≤ n

182 Coordinate descent

Convergence is guaranteed for functions of the form

  f(x) + sum_{i=1}^n h_i(x[i])

where f is convex and differentiable and h_1, ..., h_n are convex

183 Least-squares regression with l1-norm regularization

  h(x) := (1/2) ||A x − y||_2^2 + λ ||x||_1

The solution to the subproblem min_{x[i]} h(x[1], ..., x[i], ..., x[n]) is

  x*[i] = S_λ(γ_i) / ||A_i||_2^2

where A_i is the ith column of A and

  γ_i := sum_{l=1}^m A_{li} ( y[l] − sum_{j≠i} A_{lj} x[j] )
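This coordinate-descent update can be sketched by cycling over the coordinates on a synthetic problem (all sizes are hypothetical); after convergence the output satisfies the lasso optimality condition |A_i^T (A x − y)| ≤ λ for every coordinate.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic lasso problem.
m, p, lam = 40, 15, 0.5
A = rng.standard_normal((m, p))
x_true = np.zeros(p); x_true[:2] = [3.0, -2.0]
y = A @ x_true + 0.05 * rng.standard_normal(m)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(p)
for _ in range(300):                       # 300 full cycles over the coordinates
    for i in range(p):
        # Residual with the contribution of coordinate i removed.
        r = y - A @ x + A[:, i] * x[i]
        gamma_i = A[:, i] @ r
        x[i] = soft_threshold(gamma_i, lam) / (A[:, i] @ A[:, i])

g = A.T @ (A @ x - y)
assert np.max(np.abs(g)) <= lam + 1e-5     # lasso optimality condition
print("coordinate descent converged to a lasso solution")
```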


Διαβάστε περισσότερα

Concrete Mathematics Exercises from 30 September 2016

Concrete Mathematics Exercises from 30 September 2016 Concrete Mathematics Exercises from 30 September 2016 Silvio Capobianco Exercise 1.7 Let H(n) = J(n + 1) J(n). Equation (1.8) tells us that H(2n) = 2, and H(2n+1) = J(2n+2) J(2n+1) = (2J(n+1) 1) (2J(n)+1)

Διαβάστε περισσότερα

Econ 2110: Fall 2008 Suggested Solutions to Problem Set 8 questions or comments to Dan Fetter 1

Econ 2110: Fall 2008 Suggested Solutions to Problem Set 8  questions or comments to Dan Fetter 1 Eon : Fall 8 Suggested Solutions to Problem Set 8 Email questions or omments to Dan Fetter Problem. Let X be a salar with density f(x, θ) (θx + θ) [ x ] with θ. (a) Find the most powerful level α test

Διαβάστε περισσότερα

The ε-pseudospectrum of a Matrix

The ε-pseudospectrum of a Matrix The ε-pseudospectrum of a Matrix Feb 16, 2015 () The ε-pseudospectrum of a Matrix Feb 16, 2015 1 / 18 1 Preliminaries 2 Definitions 3 Basic Properties 4 Computation of Pseudospectrum of 2 2 5 Problems

Διαβάστε περισσότερα

Fractional Colorings and Zykov Products of graphs

Fractional Colorings and Zykov Products of graphs Fractional Colorings and Zykov Products of graphs Who? Nichole Schimanski When? July 27, 2011 Graphs A graph, G, consists of a vertex set, V (G), and an edge set, E(G). V (G) is any finite set E(G) is

Διαβάστε περισσότερα

Lecture 2: Dirac notation and a review of linear algebra Read Sakurai chapter 1, Baym chatper 3

Lecture 2: Dirac notation and a review of linear algebra Read Sakurai chapter 1, Baym chatper 3 Lecture 2: Dirac notation and a review of linear algebra Read Sakurai chapter 1, Baym chatper 3 1 State vector space and the dual space Space of wavefunctions The space of wavefunctions is the set of all

Διαβάστε περισσότερα

Solution Series 9. i=1 x i and i=1 x i.

Solution Series 9. i=1 x i and i=1 x i. Lecturer: Prof. Dr. Mete SONER Coordinator: Yilin WANG Solution Series 9 Q1. Let α, β >, the p.d.f. of a beta distribution with parameters α and β is { Γ(α+β) Γ(α)Γ(β) f(x α, β) xα 1 (1 x) β 1 for < x

Διαβάστε περισσότερα

Lecture 2. Soundness and completeness of propositional logic

Lecture 2. Soundness and completeness of propositional logic Lecture 2 Soundness and completeness of propositional logic February 9, 2004 1 Overview Review of natural deduction. Soundness and completeness. Semantics of propositional formulas. Soundness proof. Completeness

Διαβάστε περισσότερα

2. Let H 1 and H 2 be Hilbert spaces and let T : H 1 H 2 be a bounded linear operator. Prove that [T (H 1 )] = N (T ). (6p)

2. Let H 1 and H 2 be Hilbert spaces and let T : H 1 H 2 be a bounded linear operator. Prove that [T (H 1 )] = N (T ). (6p) Uppsala Universitet Matematiska Institutionen Andreas Strömbergsson Prov i matematik Funktionalanalys Kurs: F3B, F4Sy, NVP 2005-03-08 Skrivtid: 9 14 Tillåtna hjälpmedel: Manuella skrivdon, Kreyszigs bok

Διαβάστε περισσότερα

Tridiagonal matrices. Gérard MEURANT. October, 2008

Tridiagonal matrices. Gérard MEURANT. October, 2008 Tridiagonal matrices Gérard MEURANT October, 2008 1 Similarity 2 Cholesy factorizations 3 Eigenvalues 4 Inverse Similarity Let α 1 ω 1 β 1 α 2 ω 2 T =......... β 2 α 1 ω 1 β 1 α and β i ω i, i = 1,...,

Διαβάστε περισσότερα

Ordinal Arithmetic: Addition, Multiplication, Exponentiation and Limit

Ordinal Arithmetic: Addition, Multiplication, Exponentiation and Limit Ordinal Arithmetic: Addition, Multiplication, Exponentiation and Limit Ting Zhang Stanford May 11, 2001 Stanford, 5/11/2001 1 Outline Ordinal Classification Ordinal Addition Ordinal Multiplication Ordinal

Διαβάστε περισσότερα

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 19/5/2007

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 19/5/2007 Οδηγίες: Να απαντηθούν όλες οι ερωτήσεις. Αν κάπου κάνετε κάποιες υποθέσεις να αναφερθούν στη σχετική ερώτηση. Όλα τα αρχεία που αναφέρονται στα προβλήματα βρίσκονται στον ίδιο φάκελο με το εκτελέσιμο

Διαβάστε περισσότερα

Lecture 34: Ridge regression and LASSO

Lecture 34: Ridge regression and LASSO Lecture 34: Ridge regression and LASSO Ridge regression Consider linear model X = Z β + ε, β R p and Var(ε) = σ 2 I n. The LSE is obtained from the minimization problem min X Z β R β 2 (1) p A type of

Διαβάστε περισσότερα

Partial Differential Equations in Biology The boundary element method. March 26, 2013

Partial Differential Equations in Biology The boundary element method. March 26, 2013 The boundary element method March 26, 203 Introduction and notation The problem: u = f in D R d u = ϕ in Γ D u n = g on Γ N, where D = Γ D Γ N, Γ D Γ N = (possibly, Γ D = [Neumann problem] or Γ N = [Dirichlet

Διαβάστε περισσότερα

k A = [k, k]( )[a 1, a 2 ] = [ka 1,ka 2 ] 4For the division of two intervals of confidence in R +

k A = [k, k]( )[a 1, a 2 ] = [ka 1,ka 2 ] 4For the division of two intervals of confidence in R + Chapter 3. Fuzzy Arithmetic 3- Fuzzy arithmetic: ~Addition(+) and subtraction (-): Let A = [a and B = [b, b in R If x [a and y [b, b than x+y [a +b +b Symbolically,we write A(+)B = [a (+)[b, b = [a +b

Διαβάστε περισσότερα

Lecture 34 Bootstrap confidence intervals

Lecture 34 Bootstrap confidence intervals Lecture 34 Bootstrap confidence intervals Confidence Intervals θ: an unknown parameter of interest We want to find limits θ and θ such that Gt = P nˆθ θ t If G 1 1 α is known, then P θ θ = P θ θ = 1 α

Διαβάστε περισσότερα

ORDINAL ARITHMETIC JULIAN J. SCHLÖDER

ORDINAL ARITHMETIC JULIAN J. SCHLÖDER ORDINAL ARITHMETIC JULIAN J. SCHLÖDER Abstract. We define ordinal arithmetic and show laws of Left- Monotonicity, Associativity, Distributivity, some minor related properties and the Cantor Normal Form.

Διαβάστε περισσότερα

Quadratic Expressions

Quadratic Expressions Quadratic Expressions. The standard form of a quadratic equation is ax + bx + c = 0 where a, b, c R and a 0. The roots of ax + bx + c = 0 are b ± b a 4ac. 3. For the equation ax +bx+c = 0, sum of the roots

Διαβάστε περισσότερα

Inverse trigonometric functions & General Solution of Trigonometric Equations. ------------------ ----------------------------- -----------------

Inverse trigonometric functions & General Solution of Trigonometric Equations. ------------------ ----------------------------- ----------------- Inverse trigonometric functions & General Solution of Trigonometric Equations. 1. Sin ( ) = a) b) c) d) Ans b. Solution : Method 1. Ans a: 17 > 1 a) is rejected. w.k.t Sin ( sin ) = d is rejected. If sin

Διαβάστε περισσότερα

ST5224: Advanced Statistical Theory II

ST5224: Advanced Statistical Theory II ST5224: Advanced Statistical Theory II 2014/2015: Semester II Tutorial 7 1. Let X be a sample from a population P and consider testing hypotheses H 0 : P = P 0 versus H 1 : P = P 1, where P j is a known

Διαβάστε περισσότερα

Optimal Parameter in Hermitian and Skew-Hermitian Splitting Method for Certain Two-by-Two Block Matrices

Optimal Parameter in Hermitian and Skew-Hermitian Splitting Method for Certain Two-by-Two Block Matrices Optimal Parameter in Hermitian and Skew-Hermitian Splitting Method for Certain Two-by-Two Block Matrices Chi-Kwong Li Department of Mathematics The College of William and Mary Williamsburg, Virginia 23187-8795

Διαβάστε περισσότερα

Pg The perimeter is P = 3x The area of a triangle is. where b is the base, h is the height. In our case b = x, then the area is

Pg The perimeter is P = 3x The area of a triangle is. where b is the base, h is the height. In our case b = x, then the area is Pg. 9. The perimeter is P = The area of a triangle is A = bh where b is the base, h is the height 0 h= btan 60 = b = b In our case b =, then the area is A = = 0. By Pythagorean theorem a + a = d a a =

Διαβάστε περισσότερα

Second Order Partial Differential Equations

Second Order Partial Differential Equations Chapter 7 Second Order Partial Differential Equations 7.1 Introduction A second order linear PDE in two independent variables (x, y Ω can be written as A(x, y u x + B(x, y u xy + C(x, y u u u + D(x, y

Διαβάστε περισσότερα

The Simply Typed Lambda Calculus

The Simply Typed Lambda Calculus Type Inference Instead of writing type annotations, can we use an algorithm to infer what the type annotations should be? That depends on the type system. For simple type systems the answer is yes, and

Διαβάστε περισσότερα

Homework 8 Model Solution Section

Homework 8 Model Solution Section MATH 004 Homework Solution Homework 8 Model Solution Section 14.5 14.6. 14.5. Use the Chain Rule to find dz where z cosx + 4y), x 5t 4, y 1 t. dz dx + dy y sinx + 4y)0t + 4) sinx + 4y) 1t ) 0t + 4t ) sinx

Διαβάστε περισσότερα

Tutorial on Multinomial Logistic Regression

Tutorial on Multinomial Logistic Regression Tutorial on Multinomial Logistic Regression Javier R Movellan June 19, 2013 1 1 General Model The inputs are n-dimensional vectors the outputs are c-dimensional vectors The training sample consist of m

Διαβάστε περισσότερα

New bounds for spherical two-distance sets and equiangular lines

New bounds for spherical two-distance sets and equiangular lines New bounds for spherical two-distance sets and equiangular lines Michigan State University Oct 8-31, 016 Anhui University Definition If X = {x 1, x,, x N } S n 1 (unit sphere in R n ) and x i, x j = a

Διαβάστε περισσότερα

The challenges of non-stable predicates

The challenges of non-stable predicates The challenges of non-stable predicates Consider a non-stable predicate Φ encoding, say, a safety property. We want to determine whether Φ holds for our program. The challenges of non-stable predicates

Διαβάστε περισσότερα

Math 446 Homework 3 Solutions. (1). (i): Reverse triangle inequality for metrics: Let (X, d) be a metric space and let x, y, z X.

Math 446 Homework 3 Solutions. (1). (i): Reverse triangle inequality for metrics: Let (X, d) be a metric space and let x, y, z X. Math 446 Homework 3 Solutions. (1). (i): Reverse triangle inequalit for metrics: Let (X, d) be a metric space and let x,, z X. Prove that d(x, z) d(, z) d(x, ). (ii): Reverse triangle inequalit for norms:

Διαβάστε περισσότερα

Απόκριση σε Μοναδιαία Ωστική Δύναμη (Unit Impulse) Απόκριση σε Δυνάμεις Αυθαίρετα Μεταβαλλόμενες με το Χρόνο. Απόστολος Σ.

Απόκριση σε Μοναδιαία Ωστική Δύναμη (Unit Impulse) Απόκριση σε Δυνάμεις Αυθαίρετα Μεταβαλλόμενες με το Χρόνο. Απόστολος Σ. Απόκριση σε Δυνάμεις Αυθαίρετα Μεταβαλλόμενες με το Χρόνο The time integral of a force is referred to as impulse, is determined by and is obtained from: Newton s 2 nd Law of motion states that the action

Διαβάστε περισσότερα

5. Choice under Uncertainty

5. Choice under Uncertainty 5. Choice under Uncertainty Daisuke Oyama Microeconomics I May 23, 2018 Formulations von Neumann-Morgenstern (1944/1947) X: Set of prizes Π: Set of probability distributions on X : Preference relation

Διαβάστε περισσότερα

4.6 Autoregressive Moving Average Model ARMA(1,1)

4.6 Autoregressive Moving Average Model ARMA(1,1) 84 CHAPTER 4. STATIONARY TS MODELS 4.6 Autoregressive Moving Average Model ARMA(,) This section is an introduction to a wide class of models ARMA(p,q) which we will consider in more detail later in this

Διαβάστε περισσότερα

Approximation of distance between locations on earth given by latitude and longitude

Approximation of distance between locations on earth given by latitude and longitude Approximation of distance between locations on earth given by latitude and longitude Jan Behrens 2012-12-31 In this paper we shall provide a method to approximate distances between two points on earth

Διαβάστε περισσότερα

Πρόβλημα 1: Αναζήτηση Ελάχιστης/Μέγιστης Τιμής

Πρόβλημα 1: Αναζήτηση Ελάχιστης/Μέγιστης Τιμής Πρόβλημα 1: Αναζήτηση Ελάχιστης/Μέγιστης Τιμής Να γραφεί πρόγραμμα το οποίο δέχεται ως είσοδο μια ακολουθία S από n (n 40) ακέραιους αριθμούς και επιστρέφει ως έξοδο δύο ακολουθίες από θετικούς ακέραιους

Διαβάστε περισσότερα

forms This gives Remark 1. How to remember the above formulas: Substituting these into the equation we obtain with

forms This gives Remark 1. How to remember the above formulas: Substituting these into the equation we obtain with Week 03: C lassification of S econd- Order L inear Equations In last week s lectures we have illustrated how to obtain the general solutions of first order PDEs using the method of characteristics. We

Διαβάστε περισσότερα

Congruence Classes of Invertible Matrices of Order 3 over F 2

Congruence Classes of Invertible Matrices of Order 3 over F 2 International Journal of Algebra, Vol. 8, 24, no. 5, 239-246 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/.2988/ija.24.422 Congruence Classes of Invertible Matrices of Order 3 over F 2 Ligong An and

Διαβάστε περισσότερα

Section 8.3 Trigonometric Equations

Section 8.3 Trigonometric Equations 99 Section 8. Trigonometric Equations Objective 1: Solve Equations Involving One Trigonometric Function. In this section and the next, we will exple how to solving equations involving trigonometric functions.

Διαβάστε περισσότερα

Matrices and vectors. Matrix and vector. a 11 a 12 a 1n a 21 a 22 a 2n A = b 1 b 2. b m. R m n, b = = ( a ij. a m1 a m2 a mn. def

Matrices and vectors. Matrix and vector. a 11 a 12 a 1n a 21 a 22 a 2n A = b 1 b 2. b m. R m n, b = = ( a ij. a m1 a m2 a mn. def Matrices and vectors Matrix and vector a 11 a 12 a 1n a 21 a 22 a 2n A = a m1 a m2 a mn def = ( a ij ) R m n, b = b 1 b 2 b m Rm Matrix and vectors in linear equations: example E 1 : x 1 + x 2 + 3x 4 =

Διαβάστε περισσότερα

6.1. Dirac Equation. Hamiltonian. Dirac Eq.

6.1. Dirac Equation. Hamiltonian. Dirac Eq. 6.1. Dirac Equation Ref: M.Kaku, Quantum Field Theory, Oxford Univ Press (1993) η μν = η μν = diag(1, -1, -1, -1) p 0 = p 0 p = p i = -p i p μ p μ = p 0 p 0 + p i p i = E c 2 - p 2 = (m c) 2 H = c p 2

Διαβάστε περισσότερα

Second Order RLC Filters

Second Order RLC Filters ECEN 60 Circuits/Electronics Spring 007-0-07 P. Mathys Second Order RLC Filters RLC Lowpass Filter A passive RLC lowpass filter (LPF) circuit is shown in the following schematic. R L C v O (t) Using phasor

Διαβάστε περισσότερα

Fourier Series. MATH 211, Calculus II. J. Robert Buchanan. Spring Department of Mathematics

Fourier Series. MATH 211, Calculus II. J. Robert Buchanan. Spring Department of Mathematics Fourier Series MATH 211, Calculus II J. Robert Buchanan Department of Mathematics Spring 2018 Introduction Not all functions can be represented by Taylor series. f (k) (c) A Taylor series f (x) = (x c)

Διαβάστε περισσότερα

Srednicki Chapter 55

Srednicki Chapter 55 Srednicki Chapter 55 QFT Problems & Solutions A. George August 3, 03 Srednicki 55.. Use equations 55.3-55.0 and A i, A j ] = Π i, Π j ] = 0 (at equal times) to verify equations 55.-55.3. This is our third

Διαβάστε περισσότερα

Μηχανική Μάθηση Hypothesis Testing

Μηχανική Μάθηση Hypothesis Testing ΕΛΛΗΝΙΚΗ ΔΗΜΟΚΡΑΤΙΑ ΠΑΝΕΠΙΣΤΗΜΙΟ ΚΡΗΤΗΣ Μηχανική Μάθηση Hypothesis Testing Γιώργος Μπορμπουδάκης Τμήμα Επιστήμης Υπολογιστών Procedure 1. Form the null (H 0 ) and alternative (H 1 ) hypothesis 2. Consider

Διαβάστε περισσότερα

Exercises 10. Find a fundamental matrix of the given system of equations. Also find the fundamental matrix Φ(t) satisfying Φ(0) = I. 1.

Exercises 10. Find a fundamental matrix of the given system of equations. Also find the fundamental matrix Φ(t) satisfying Φ(0) = I. 1. Exercises 0 More exercises are available in Elementary Differential Equations. If you have a problem to solve any of them, feel free to come to office hour. Problem Find a fundamental matrix of the given

Διαβάστε περισσότερα

HOMEWORK 4 = G. In order to plot the stress versus the stretch we define a normalized stretch:

HOMEWORK 4 = G. In order to plot the stress versus the stretch we define a normalized stretch: HOMEWORK 4 Problem a For the fast loading case, we want to derive the relationship between P zz and λ z. We know that the nominal stress is expressed as: P zz = ψ λ z where λ z = λ λ z. Therefore, applying

Διαβάστε περισσότερα

Math221: HW# 1 solutions

Math221: HW# 1 solutions Math: HW# solutions Andy Royston October, 5 7.5.7, 3 rd Ed. We have a n = b n = a = fxdx = xdx =, x cos nxdx = x sin nx n sin nxdx n = cos nx n = n n, x sin nxdx = x cos nx n + cos nxdx n cos n = + sin

Διαβάστε περισσότερα

= λ 1 1 e. = λ 1 =12. has the properties e 1. e 3,V(Y

= λ 1 1 e. = λ 1 =12. has the properties e 1. e 3,V(Y Stat 50 Homework Solutions Spring 005. (a λ λ λ 44 (b trace( λ + λ + λ 0 (c V (e x e e λ e e λ e (λ e by definition, the eigenvector e has the properties e λ e and e e. (d λ e e + λ e e + λ e e 8 6 4 4

Διαβάστε περισσότερα

ANSWERSHEET (TOPIC = DIFFERENTIAL CALCULUS) COLLECTION #2. h 0 h h 0 h h 0 ( ) g k = g 0 + g 1 + g g 2009 =?

ANSWERSHEET (TOPIC = DIFFERENTIAL CALCULUS) COLLECTION #2. h 0 h h 0 h h 0 ( ) g k = g 0 + g 1 + g g 2009 =? Teko Classes IITJEE/AIEEE Maths by SUHAAG SIR, Bhopal, Ph (0755) 3 00 000 www.tekoclasses.com ANSWERSHEET (TOPIC DIFFERENTIAL CALCULUS) COLLECTION # Question Type A.Single Correct Type Q. (A) Sol least

Διαβάστε περισσότερα

ω ω ω ω ω ω+2 ω ω+2 + ω ω ω ω+2 + ω ω+1 ω ω+2 2 ω ω ω ω ω ω ω ω+1 ω ω2 ω ω2 + ω ω ω2 + ω ω ω ω2 + ω ω+1 ω ω2 + ω ω+1 + ω ω ω ω2 + ω

ω ω ω ω ω ω+2 ω ω+2 + ω ω ω ω+2 + ω ω+1 ω ω+2 2 ω ω ω ω ω ω ω ω+1 ω ω2 ω ω2 + ω ω ω2 + ω ω ω ω2 + ω ω+1 ω ω2 + ω ω+1 + ω ω ω ω2 + ω 0 1 2 3 4 5 6 ω ω + 1 ω + 2 ω + 3 ω + 4 ω2 ω2 + 1 ω2 + 2 ω2 + 3 ω3 ω3 + 1 ω3 + 2 ω4 ω4 + 1 ω5 ω 2 ω 2 + 1 ω 2 + 2 ω 2 + ω ω 2 + ω + 1 ω 2 + ω2 ω 2 2 ω 2 2 + 1 ω 2 2 + ω ω 2 3 ω 3 ω 3 + 1 ω 3 + ω ω 3 +

Διαβάστε περισσότερα

Modern Bayesian Statistics Part III: high-dimensional modeling Example 3: Sparse and time-varying covariance modeling

Modern Bayesian Statistics Part III: high-dimensional modeling Example 3: Sparse and time-varying covariance modeling Modern Bayesian Statistics Part III: high-dimensional modeling Example 3: Sparse and time-varying covariance modeling Hedibert Freitas Lopes 1 hedibert.org 13 a amostra de Estatística IME-USP, October

Διαβάστε περισσότερα

Statistics 104: Quantitative Methods for Economics Formula and Theorem Review

Statistics 104: Quantitative Methods for Economics Formula and Theorem Review Harvard College Statistics 104: Quantitative Methods for Economics Formula and Theorem Review Tommy MacWilliam, 13 tmacwilliam@college.harvard.edu March 10, 2011 Contents 1 Introduction to Data 5 1.1 Sample

Διαβάστε περισσότερα

D Alembert s Solution to the Wave Equation

D Alembert s Solution to the Wave Equation D Alembert s Solution to the Wave Equation MATH 467 Partial Differential Equations J. Robert Buchanan Department of Mathematics Fall 2018 Objectives In this lesson we will learn: a change of variable technique

Διαβάστε περισσότερα

Figure A.2: MPC and MPCP Age Profiles (estimating ρ, ρ = 2, φ = 0.03)..

Figure A.2: MPC and MPCP Age Profiles (estimating ρ, ρ = 2, φ = 0.03).. Supplemental Material (not for publication) Persistent vs. Permanent Income Shocks in the Buffer-Stock Model Jeppe Druedahl Thomas H. Jørgensen May, A Additional Figures and Tables Figure A.: Wealth and

Διαβάστε περισσότερα

Notes on the Open Economy

Notes on the Open Economy Notes on the Open Econom Ben J. Heijdra Universit of Groningen April 24 Introduction In this note we stud the two-countr model of Table.4 in more detail. restated here for convenience. The model is Table.4.

Διαβάστε περισσότερα

Problem Set 3: Solutions

Problem Set 3: Solutions CMPSCI 69GG Applied Information Theory Fall 006 Problem Set 3: Solutions. [Cover and Thomas 7.] a Define the following notation, C I p xx; Y max X; Y C I p xx; Ỹ max I X; Ỹ We would like to show that C

Διαβάστε περισσότερα

Local Approximation with Kernels

Local Approximation with Kernels Local Approximation with Kernels Thomas Hangelbroek University of Hawaii at Manoa 5th International Conference Approximation Theory, 26 work supported by: NSF DMS-43726 A cubic spline example Consider

Διαβάστε περισσότερα

derivation of the Laplacian from rectangular to spherical coordinates

derivation of the Laplacian from rectangular to spherical coordinates derivation of the Laplacian from rectangular to spherical coordinates swapnizzle 03-03- :5:43 We begin by recognizing the familiar conversion from rectangular to spherical coordinates (note that φ is used

Διαβάστε περισσότερα

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 24/3/2007

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 24/3/2007 Οδηγίες: Να απαντηθούν όλες οι ερωτήσεις. Όλοι οι αριθμοί που αναφέρονται σε όλα τα ερωτήματα μικρότεροι του 10000 εκτός αν ορίζεται διαφορετικά στη διατύπωση του προβλήματος. Αν κάπου κάνετε κάποιες υποθέσεις

Διαβάστε περισσότερα

ΕΛΛΗΝΙΚΗ ΔΗΜΟΚΡΑΤΙΑ ΠΑΝΕΠΙΣΤΗΜΙΟ ΚΡΗΤΗΣ. Ψηφιακή Οικονομία. Διάλεξη 10η: Basics of Game Theory part 2 Mαρίνα Μπιτσάκη Τμήμα Επιστήμης Υπολογιστών

ΕΛΛΗΝΙΚΗ ΔΗΜΟΚΡΑΤΙΑ ΠΑΝΕΠΙΣΤΗΜΙΟ ΚΡΗΤΗΣ. Ψηφιακή Οικονομία. Διάλεξη 10η: Basics of Game Theory part 2 Mαρίνα Μπιτσάκη Τμήμα Επιστήμης Υπολογιστών ΕΛΛΗΝΙΚΗ ΔΗΜΟΚΡΑΤΙΑ ΠΑΝΕΠΙΣΤΗΜΙΟ ΚΡΗΤΗΣ Ψηφιακή Οικονομία Διάλεξη 0η: Basics of Game Theory part 2 Mαρίνα Μπιτσάκη Τμήμα Επιστήμης Υπολογιστών Best Response Curves Used to solve for equilibria in games

Διαβάστε περισσότερα

Overview. Transition Semantics. Configurations and the transition relation. Executions and computation

Overview. Transition Semantics. Configurations and the transition relation. Executions and computation Overview Transition Semantics Configurations and the transition relation Executions and computation Inference rules for small-step structural operational semantics for the simple imperative language Transition

Διαβάστε περισσότερα

CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS

CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS EXERCISE 01 Page 545 1. Use matrices to solve: 3x + 4y x + 5y + 7 3x + 4y x + 5y 7 Hence, 3 4 x 0 5 y 7 The inverse of 3 4 5 is: 1 5 4 1 5 4 15 8 3

Διαβάστε περισσότερα

The Probabilistic Method - Probabilistic Techniques. Lecture 7: The Janson Inequality

The Probabilistic Method - Probabilistic Techniques. Lecture 7: The Janson Inequality The Probabilistic Method - Probabilistic Techniques Lecture 7: The Janson Inequality Sotiris Nikoletseas Associate Professor Computer Engineering and Informatics Department 2014-2015 Sotiris Nikoletseas,

Διαβάστε περισσότερα

Structural and Multidisciplinary Optimization. P. Duysinx, P. Tossings*

Structural and Multidisciplinary Optimization. P. Duysinx, P. Tossings* Structural and Multidisciplinary Optimization P. Duysinx, P. Tossings* 2017-2018 * CONTACT Patricia TOSSINGS Institut de Mathematique B37), 0/57 Telephone: 04/366.93.73. Email: Patricia.Tossings@ulg.ac.be

Διαβάστε περισσότερα

b. Use the parametrization from (a) to compute the area of S a as S a ds. Be sure to substitute for ds!

b. Use the parametrization from (a) to compute the area of S a as S a ds. Be sure to substitute for ds! MTH U341 urface Integrals, tokes theorem, the divergence theorem To be turned in Wed., Dec. 1. 1. Let be the sphere of radius a, x 2 + y 2 + z 2 a 2. a. Use spherical coordinates (with ρ a) to parametrize.

Διαβάστε περισσότερα

Section 7.6 Double and Half Angle Formulas

Section 7.6 Double and Half Angle Formulas 09 Section 7. Double and Half Angle Fmulas To derive the double-angles fmulas, we will use the sum of two angles fmulas that we developed in the last section. We will let α θ and β θ: cos(θ) cos(θ + θ)

Διαβάστε περισσότερα

Solutions to Exercise Sheet 5

Solutions to Exercise Sheet 5 Solutions to Eercise Sheet 5 jacques@ucsd.edu. Let X and Y be random variables with joint pdf f(, y) = 3y( + y) where and y. Determine each of the following probabilities. Solutions. a. P (X ). b. P (X

Διαβάστε περισσότερα

Introduction to the ML Estimation of ARMA processes

Introduction to the ML Estimation of ARMA processes Introduction to the ML Estimation of ARMA processes Eduardo Rossi University of Pavia October 2013 Rossi ARMA Estimation Financial Econometrics - 2013 1 / 1 We consider the AR(p) model: Y t = c + φ 1 Y

Διαβάστε περισσότερα

Lecture 15 - Root System Axiomatics

Lecture 15 - Root System Axiomatics Lecture 15 - Root System Axiomatics Nov 1, 01 In this lecture we examine root systems from an axiomatic point of view. 1 Reflections If v R n, then it determines a hyperplane, denoted P v, through the

Διαβάστε περισσότερα

6. MAXIMUM LIKELIHOOD ESTIMATION

6. MAXIMUM LIKELIHOOD ESTIMATION 6 MAXIMUM LIKELIHOOD ESIMAION [1] Maximum Likelihood Estimator (1) Cases in which θ (unknown parameter) is scalar Notational Clarification: From now on, we denote the true value of θ as θ o hen, view θ

Διαβάστε περισσότερα

CRASH COURSE IN PRECALCULUS

CRASH COURSE IN PRECALCULUS CRASH COURSE IN PRECALCULUS Shiah-Sen Wang The graphs are prepared by Chien-Lun Lai Based on : Precalculus: Mathematics for Calculus by J. Stuwart, L. Redin & S. Watson, 6th edition, 01, Brooks/Cole Chapter

Διαβάστε περισσότερα

Jesse Maassen and Mark Lundstrom Purdue University November 25, 2013

Jesse Maassen and Mark Lundstrom Purdue University November 25, 2013 Notes on Average Scattering imes and Hall Factors Jesse Maassen and Mar Lundstrom Purdue University November 5, 13 I. Introduction 1 II. Solution of the BE 1 III. Exercises: Woring out average scattering

Διαβάστε περισσότερα

Lecture 10 - Representation Theory III: Theory of Weights

Lecture 10 - Representation Theory III: Theory of Weights Lecture 10 - Representation Theory III: Theory of Weights February 18, 2012 1 Terminology One assumes a base = {α i } i has been chosen. Then a weight Λ with non-negative integral Dynkin coefficients Λ

Διαβάστε περισσότερα

MATH1030 Linear Algebra Assignment 5 Solution

MATH1030 Linear Algebra Assignment 5 Solution 2015-16 MATH1030 Linear Algebra Assignment 5 Solution (Question No. referring to the 8 th edition of the textbook) Section 3.4 4 Note that x 2 = x 1, x 3 = 2x 1 and x 1 0, hence span {x 1, x 2, x 3 } has

Διαβάστε περισσότερα

SCITECH Volume 13, Issue 2 RESEARCH ORGANISATION Published online: March 29, 2018

SCITECH Volume 13, Issue 2 RESEARCH ORGANISATION Published online: March 29, 2018 Journal of rogressive Research in Mathematics(JRM) ISSN: 2395-028 SCITECH Volume 3, Issue 2 RESEARCH ORGANISATION ublished online: March 29, 208 Journal of rogressive Research in Mathematics www.scitecresearch.com/journals

Διαβάστε περισσότερα

w o = R 1 p. (1) R = p =. = 1

w o = R 1 p. (1) R = p =. = 1 Πανεπιστήµιο Κρήτης - Τµήµα Επιστήµης Υπολογιστών ΗΥ-570: Στατιστική Επεξεργασία Σήµατος 205 ιδάσκων : Α. Μουχτάρης Τριτη Σειρά Ασκήσεων Λύσεις Ασκηση 3. 5.2 (a) From the Wiener-Hopf equation we have:

Διαβάστε περισσότερα

Iterated trilinear fourier integrals with arbitrary symbols

Iterated trilinear fourier integrals with arbitrary symbols Cornell University ICM 04, Satellite Conference in Harmonic Analysis, Chosun University, Gwangju, Korea August 6, 04 Motivation the Coifman-Meyer theorem with classical paraproduct(979) B(f, f )(x) :=

Διαβάστε περισσότερα

Θεωρία Πληροφορίας και Κωδίκων

Θεωρία Πληροφορίας και Κωδίκων Θεωρία Πληροφορίας και Κωδίκων Δρ. Νικόλαος Κολοκοτρώνης Λέκτορας Πανεπιστήμιο Πελοποννήσου Τμήμα Επιστήμης και Τεχνολογίας Υπολογιστών Τέρμα Οδού Καραϊσκάκη, 22100 Τρίπολη E mail: nkolok@uop.gr Web: http://www.uop.gr/~nkolok/

Διαβάστε περισσότερα