Neural Networks - Prolog 2 - Logistic Regression


Marcin Junczys-Dowmunt, Machine Translation Marathon, September 13, 2016

1 Introduction to Neural Networks

1.1 Prolog 2: Logistic Regression

2 The Iris Data Set

Iris setosa, Iris virginica, Iris versicolor

In [5]: import pandas
        data = pandas.read_csv("iris.csv", header=None,
                               names=["Sepal length", "Sepal width",
                                      "Petal length", "Petal width", "Species"])
        data[:8]

Out[5]: (the first 8 rows of the data frame, columns Sepal length, Sepal width, Petal length, Petal width, Species: two Iris-setosa, three Iris-virginica and three Iris-versicolor rows; the numeric measurements were not preserved in this transcript)

Set of classes: $C = \{c_1, c_2, \dots, c_k\}$, $|C| = k$

Training data:

$$X = \begin{bmatrix} 1 & x_1^{(1)} & \dots & x_n^{(1)} \\ 1 & x_1^{(2)} & \dots & x_n^{(2)} \\ \vdots & \vdots & & \vdots \\ 1 & x_1^{(m)} & \dots & x_n^{(m)} \end{bmatrix} \qquad y = \begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)} \end{bmatrix} = \begin{bmatrix} c_2 \\ c_1 \\ \vdots \\ c_1 \end{bmatrix}$$

$$\dim X = m \times (n + 1) \qquad \dim y = m \times 1$$
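As a quick illustration of these shapes (a toy sketch added here, not code from the original notebook; the feature values below are made up and only the dimensions matter):

    import numpy as np

    # m = 3 examples with n = 2 features each (made-up values)
    features = np.array([[5.1, 3.5],
                         [6.3, 2.9],
                         [4.9, 3.0]])
    m, n = features.shape

    # prepend a column of ones so that theta_0 acts as the bias term
    X = np.hstack((np.ones((m, 1)), features))

    print(X.shape)   # (3, 3), i.e. m x (n + 1)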

Training data (with indicator matrix):

$$X = \begin{bmatrix} 1 & x_1^{(1)} & \dots & x_n^{(1)} \\ \vdots & \vdots & & \vdots \\ 1 & x_1^{(m)} & \dots & x_n^{(m)} \end{bmatrix} \qquad Y = \begin{bmatrix} \delta(c_1, y^{(1)}) & \dots & \delta(c_k, y^{(1)}) \\ \delta(c_1, y^{(2)}) & \dots & \delta(c_k, y^{(2)}) \\ \vdots & & \vdots \\ \delta(c_1, y^{(m)}) & \dots & \delta(c_k, y^{(m)}) \end{bmatrix}$$

$$\delta(x, y) = \begin{cases} 1 & \text{if } x = y \\ 0 & \text{otherwise} \end{cases}$$

$$\dim X = m \times (n + 1) \qquad \dim Y = m \times k$$

In [6]: import numpy as np

        m = len(data)
        X = np.matrix(data[["Sepal length", "Sepal width", "Petal length", "Petal width"]])
        X = np.hstack((np.ones(m).reshape(m, 1), X))
        y = np.matrix(data[["Species"]]).reshape(m, 1)

        def mapY(y, c):
            n = len(y)
            yBi = np.matrix(np.zeros(n)).reshape(n, 1)
            yBi[y == c] = 1.
            return yBi

        def indicatorMatrix(y):
            classes = np.unique(y.tolist())
            Y = mapY(y, classes[0])
            for c in classes[1:]:
                Y = np.hstack((Y, mapY(y, c)))
            Y = np.matrix(Y).reshape(len(y), len(classes))
            return Y

        Y = indicatorMatrix(y)
        print(Y[:10])

(output: the first 10 rows of the 0/1 indicator matrix; the printed values were not preserved)
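For comparison, pandas can build the same 0/1 indicator matrix directly with its one-hot encoding helper. This is an added sketch that assumes the data frame and the Y matrix from the cells above, and that both orderings of the class columns coincide (they do here, since both are sorted alphabetically):

    Y_alt = np.matrix(pandas.get_dummies(data["Species"]).values, dtype=float)
    print(Y_alt.shape)                 # (150, 3), i.e. m x k for the full Iris set
    print(np.array_equal(Y_alt, Y))    # True when the column order matches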


2.1 Logistic function

$$g(x) = \frac{1}{1 + e^{-x}}$$
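A quick sanity check of the logistic function, added here as a sketch (it is not part of the original notebook): g(0) = 0.5, and the outputs saturate towards 0 and 1 for large negative and positive inputs, which is what lets us read them as probabilities.

    import numpy as np

    def g(x):
        return 1.0 / (1.0 + np.exp(-x))

    for x in [-5.0, -1.0, 0.0, 1.0, 5.0]:
        print(x, round(g(x), 4))
    # -5.0 0.0067   -1.0 0.2689   0.0 0.5   1.0 0.7311   5.0 0.9933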

2.2 Logistic regression model

For a single feature vector:

$$h_\theta(x) = g\Big(\sum_{i=0}^n \theta_i x_i\Big) = \frac{1}{1 + e^{-\sum_{i=0}^n \theta_i x_i}}$$

More compact in matrix form (batched):

$$h_\theta(X) = g(X\theta) = \frac{1}{1 + e^{-X\theta}}$$

2.3 The cost function (binary cross-entropy)

Computed across the training batch:

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^m \Big[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log\big(1 - h_\theta(x^{(i)})\big) \Big]$$

Essentially the same in matrix form with vectorized elementary functions.
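A small worked check, added here for reference: with the all-zero parameter vector every prediction is $h_\theta(x) = g(0) = 0.5$, so each training example contributes $-\log 0.5$ and

$$J(\mathbf{0}) = -\frac{1}{m}\sum_{i=1}^m \big[ y^{(i)} \log 0.5 + (1 - y^{(i)}) \log 0.5 \big] = \log 2 \approx 0.6931,$$

independently of the labels. This is a useful reference value for the initial cost printed by the code below.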

2.4 And its gradient

Computed across the training batch, for a single parameter:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^m \big(h_\theta(x^{(i)}) - y^{(i)}\big)\, x_j^{(i)}$$

In matrix form:

$$\nabla J(\theta) = \frac{1}{m} X^T \big(h_\theta(X) - y\big)$$

Looks the same as for linear regression, how come?

In [10]: def h(theta, X):
             return 1.0 / (1.0 + np.exp(-X * theta))

         def J(h, theta, X, y):
             m = len(y)
             s1 = np.multiply(y, np.log(h(theta, X)))
             s2 = np.multiply((1 - y), np.log(1 - h(theta, X)))
             return -np.sum(s1 + s2, axis=0) / m

         def dJ(h, theta, X, y):
             return 1.0 / len(y) * (X.T * (h(theta, X) - y))

In [14]: # Divide data into train and test set
         XTrain, XTest = X[:100], X[100:]
         YTrain, YTest = Y[:100], Y[100:]

         # Initialize theta with zeroes
         theta = np.zeros(5).reshape(5, 1)

         # Select only the first column for binary classification (setosa vs. rest)
         YTrain0 = YTrain[:, 0]

         print(J(h, theta, XTrain, YTrain0))
         print(dJ(h, theta, XTrain, YTrain0))

(output: the initial cost and the 5 x 1 gradient vector; the printed values were not preserved)

2.5 Let's plug them into SGD

In [15]: def mbSGD(h, fJ, fdJ, theta, X, y,
                   alpha=0.01, maxSteps=10000, batchSize=10):
             i = 0
             b = batchSize
             m = X.shape[0]
             while i < maxSteps:
                 start = (i * b) % m
                 end = ((i + 1) * b) % m
                 if end <= start:
                     end = m
                 Xbatch = X[start:end]
                 Ybatch = y[start:end]
                 theta = theta - alpha * fdJ(h, theta, Xbatch, Ybatch)
                 i += 1
             return theta

         thetaBest = mbSGD(h, J, dJ, theta, XTrain, YTrain0,
                           alpha=0.01, maxSteps=10000, batchSize=10)
         print(thetaBest)

(output: the learned 5 x 1 parameter vector)

2.6 Let's calculate test set probabilities

In [16]: probs = h(thetaBest, XTest)
         probs[:10]

Out[16]: (a 10 x 1 column of probabilities: entries of order 1e-1 for the setosa examples and between 1e-5 and 1e-2 for the others; the mantissas were not preserved)

2.7 Are we done already?

2.8 The decision function (binary case)

$$c = \begin{cases} 1 & \text{if } P(y = 1 \mid x; \theta) > 0.5 \\ 0 & \text{otherwise} \end{cases} \qquad P(y = 1 \mid x; \theta) = h_\theta(x)$$

In [17]: # testClass picks the class column of the indicator matrix (0 = setosa here)
         YTestBi = YTest[:, testClass]

         def classifyBi(X):
             prob = h(thetaBest, X).item()
             return (1, prob) if prob > 0.5 else (0, prob)

         acc = 0.0
         for i, rest in enumerate(YTestBi):
             cls, prob = classifyBi(XTest[i])
             print(int(YTestBi[i].item()), "<=>", cls, "-- prob:", round(prob, 4))
             acc += cls == YTestBi[i].item()

         print("Accuracy:", acc / len(XTest))
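Why does the gradient look exactly like the linear-regression one? Because the derivative of the logistic function, g'(z) = g(z)(1 - g(z)), cancels against the derivative of the log terms of the cross-entropy, leaving only (h_theta(x) - y) x_j. A quick way to convince ourselves that the dJ implementation above matches J is a finite-difference check; the sketch below is an added verification, not part of the original notebook, and assumes h, J, dJ, XTrain and YTrain0 from the cells above.

    eps = 1e-5
    thetaCheck = np.matrix(np.random.normal(0, 0.1, 5)).reshape(5, 1)
    analytic = dJ(h, thetaCheck, XTrain, YTrain0)

    numeric = np.zeros((5, 1))
    for j in range(5):
        e = np.zeros((5, 1))
        e[j] = eps
        # central difference approximation of the j-th partial derivative
        numeric[j] = (J(h, thetaCheck + e, XTrain, YTrain0)
                      - J(h, thetaCheck - e, XTrain, YTrain0)).item() / (2 * eps)

    print(np.max(np.abs(analytic - numeric)))   # should be tiny (around 1e-9 or smaller)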

(output: 50 lines of "gold <=> predicted -- prob", one per test example; every prediction matches the gold label)
Accuracy: 1.0

3 Multi-class logistic regression

3.1 Method 1: One-against-all

We create three binary models h_theta1, h_theta2, h_theta3, one for each class.
We select the class with the highest probability.

Is this property true?

$$\sum_{c=1,\dots,3} h_{\theta_c}(x) = \sum_{c=1,\dots,3} P(y = c \mid x; \theta_c) = 1$$

3.2 Softmax

The multi-class version of the logistic function is the softmax function:

$$\mathrm{softmax}(k, x_1, \dots, x_n) = \frac{e^{x_k}}{\sum_{i=1}^n e^{x_i}}$$

$$P(y = c \mid x; \theta_1, \dots, \theta_k) = \mathrm{softmax}(c, \theta_1^T x, \dots, \theta_k^T x)$$
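In general, no: the three binary models are trained independently, so nothing forces their outputs to add up to one. As a toy numeric illustration (added here, not from the original slides), if the three sigmoids return 0.9, 0.6 and 0.2 for the same x, the "probabilities" sum to 1.7. The softmax function introduced next builds the normalization in, so its outputs always sum to exactly 1.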

Do we now have the following property?

$$\sum_{c=1,\dots,3} P(y = c \mid x; \theta_c) = 1$$

In [20]: def softmax(X):
             return softmaxUnsafe(X - np.max(X, axis=1))

         def softmaxUnsafe(X):
             return np.exp(X) / np.sum(np.exp(X), axis=1)

         # note: this rebinds the name X to a toy 1 x 5 example
         X = np.matrix([2.1, 0.5, 0.8, 0.9, 3.2]).reshape(1, 5)
         P = softmax(X)
         print(X)
         print("Suma X =", np.sum(X, axis=1), "\n")
         print(P)
         print("Suma P =", np.sum(P, axis=1))

[[ 2.1  0.5  0.8  0.9  3.2]]
Suma X = [[ 7.5]]

(the softmax-transformed row P; its individual values were not preserved)
Suma P = [[ 1.]]

In [21]: def trainMaxEnt(X, Y):
             n = X.shape[1]
             thetas = []
             for c in range(Y.shape[1]):
                 print("Training classifier for class %d" % c)
                 YBi = Y[:, c]
                 theta = np.matrix(np.random.normal(0, 0.1, n)).reshape(n, 1)
                 thetaBest = mbSGD(h, J, dJ, theta, X, YBi,
                                   alpha=0.01, maxSteps=10000, batchSize=10)
                 print(thetaBest)
                 thetas.append(thetaBest)
             return thetas

         thetas = trainMaxEnt(XTrain, YTrain);

Training classifier for class 0
Training classifier for class 1
Training classifier for class 2
(each followed by the learned 5 x 1 parameter vector; the printed values were not preserved)
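The detour through softmaxUnsafe is a numerical-stability trick: since softmax(x) = softmax(x - c) for any constant c, subtracting the row maximum changes nothing mathematically but keeps the exponentials from overflowing. A small added demonstration (not from the original notebook):

    big = np.matrix([1000.0, 1001.0, 1002.0])
    print(softmaxUnsafe(big))   # overflows: prints nan values and emits RuntimeWarnings
    print(softmax(big))         # stable: roughly [[ 0.09  0.24  0.67]]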

3.3 Multi-class decision function

$$c = \arg\max_{i \in \{1,\dots,k\}} P(y = i \mid x; \theta_1, \dots, \theta_k) = \arg\max_{i \in \{1,\dots,k\}} \mathrm{softmax}(i, \theta_1^T x, \dots, \theta_k^T x)$$

In [22]: def classify(thetas, X):
             regs = np.matrix([(X * theta).item() for theta in thetas])
             probs = softmax(regs)
             return np.argmax(probs), probs

         print(YTest[:10])

         YTestCls = YTest * np.matrix((0, 1, 2)).T
         print(YTestCls[:10])

         acc = 0.0
         for i in range(len(YTestCls)):
             cls, probs = classify(thetas, XTest[i])
             correct = int(YTestCls[i].item())
             print(correct, "<=>", cls, " - ", correct == cls, np.round(probs, 4).tolist())
             acc += correct == cls

         print("Accuracy =", acc / float(len(XTest)))

(output: the first 10 rows of the YTest indicator matrix, then the first 10 class indices:)

[[ 2.]
 [ 0.]
 [ 2.]
 [ 2.]
 [ 2.]
 [ 0.]
 [ 1.]
 [ 1.]
 [ 2.]
 [ 2.]]

(followed by 50 comparison lines "gold <=> predicted - True/False [rounded probabilities]"; every comparison is True)

Accuracy = 1.0
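classify above scores one example at a time against the three separate parameter vectors. As a bridge to Method 2 below, the same decisions can be computed for the whole test set in one shot by stacking the vectors into a single 5 x 3 matrix; this is an added sketch that assumes thetas, XTest, YTestCls and softmax from the cells above.

    ThetaAll = np.hstack(thetas)              # 5 x 3 parameter matrix, one column per class
    probsAll = softmax(XTest * ThetaAll)      # 50 x 3 matrix of class probabilities
    predAll = np.argmax(probsAll, axis=1)     # 50 x 1 column of predicted class indices
    print(np.mean(predAll == YTestCls))       # fraction correct; 1.0 here, matching the loop above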

3.4 Method 2: Multi-class training

$$\Theta = (\theta^{(1)}, \dots, \theta^{(c)})$$

$$h_\Theta(x) = [P(k \mid x, \Theta)]_{k=1,\dots,c} = \mathrm{softmax}(\Theta x)$$

The cost function (categorical cross-entropy):

$$J(\Theta) = -\frac{1}{m} \sum_{i=1}^m \sum_{k=1}^c \delta(y^{(i)}, k) \log P(k \mid x^{(i)}, \Theta)$$

$$\delta(x, y) = \begin{cases} 1 & \text{if } x = y \\ 0 & \text{otherwise} \end{cases}$$

And its gradient:

$$\frac{\partial J(\Theta)}{\partial \Theta_{j,k}} = -\frac{1}{m} \sum_{i=1}^m \big(\delta(y^{(i)}, k) - P(k \mid x^{(i)}, \Theta)\big)\, x_j^{(i)}$$

$$\nabla J(\Theta) = \frac{1}{m} X^T \big(h_\Theta(X) - Y\big)$$

where Y is the indicator matrix.

In [23]: def h_mc(Theta, X):
             return softmax(X * Theta)

         def J_mc(h, Theta, X, Y):
             # not used by mbSGD (only the gradient is needed)
             return 0

         def dJ_mc(h, Theta, X, Y):
             return 1.0 / len(Y) * (X.T * (h(Theta, X) - Y))

         n = XTrain.shape[1]
         k = YTrain.shape[1]

         Theta = np.matrix(np.random.normal(0, 0.1, n * k)).reshape(n, k)

         ThetaBest = mbSGD(h_mc, J_mc, dJ_mc, Theta, XTrain, YTrain,
                           alpha=0.01, maxSteps=50000, batchSize=10)
         print(ThetaBest)

         def classify(Theta, X):
             probs = h_mc(Theta, X)
             return np.argmax(probs, axis=1), probs

         YTestCls = YTest * np.matrix((0, 1, 2)).T

         acc = 0.0
         for i in range(len(YTestCls)):
             cls, probs = classify(ThetaBest, XTest[i])
             correct = int(YTestCls[i].item())
             print(correct, "<=>", cls, " - ", correct == cls, np.round(probs, 4).tolist())
             acc += correct == cls

         print("Accuracy =", acc / float(len(XTest)))

(output: the learned 5 x 3 parameter matrix, then 50 comparison lines "gold <=> predicted - True [rounded probabilities]"; every comparison is True)
Accuracy = [[ 1.]]
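J_mc above is left as a stub that returns 0 because mbSGD only ever calls the gradient function. If we also wanted to monitor the cost during training, a sketch following the categorical cross-entropy formula above could look like this (an addition, not code from the original notebook; the small constant guards against log(0)):

    def J_mc_full(h, Theta, X, Y):
        m = X.shape[0]
        P = h(Theta, X)                                   # m x k matrix of class probabilities
        return -np.sum(np.multiply(Y, np.log(P + 1e-12))) / m

    print(J_mc_full(h_mc, ThetaBest, XTrain, YTrain))     # cross-entropy of the trained model on the training data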

3.5 Feature engineering

In [24]: n = 2
         sgd = True

(the rest of this cell and its plots were not preserved in the transcript)

3.6 A more difficult example: The MNIST task

3.7 Reuse all the code

In [27]: mnistXTrain, mnistYTrain = toMatrix(read(dataset="training"), maxItems=60000)
         mnistYTrainI = indicatorMatrix(mnistYTrain)
         mnistXTest, mnistYTest = toMatrix(read(dataset="testing"))
         mnistYTestI = indicatorMatrix(mnistYTest)

         n = mnistXTrain.shape[1]
         k = mnistYTrainI.shape[1]

         mnistTheta = np.matrix(np.random.normal(0, 0.1, n * k)).reshape(n, k)
         mnistThetaBest = mbSGD(h_mc, J_mc, dJ_mc, mnistTheta, mnistXTrain, mnistYTrainI,
                                alpha=0.1, maxSteps=20000, batchSize=20)
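read, toMatrix and show are MNIST helper functions whose definitions are not part of this transcript. Judging from how the images are indexed later (image[:, 1:].reshape(28, 28)), each row of the data matrix holds a leading bias term followed by the 784 flattened pixels. A minimal, hypothetical sketch of such a toMatrix, assuming a reader that yields (label, 28 x 28 pixel array) pairs, might look like the following; the real helper in the original notebook may differ.

    def toMatrix(samples, maxItems=None):
        # samples: iterable of (label, 28 x 28 numpy array) pairs (hypothetical interface)
        Xrows, yrows = [], []
        for i, (label, pixels) in enumerate(samples):
            if maxItems is not None and i >= maxItems:
                break
            # bias term followed by the flattened image; scaling to [0, 1] is an assumption
            Xrows.append(np.hstack(([1.0], pixels.reshape(784) / 255.0)))
            yrows.append(label)
        X = np.matrix(Xrows)
        y = np.matrix(yrows).reshape(len(yrows), 1)
        return X, y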

In [28]: cls, probs = classify(mnistThetaBest, mnistXTest)
         print("Accuracy: ", np.sum(cls == mnistYTest) / len(mnistYTest))

Accuracy: (the printed value was not preserved in this transcript)

3.8 Let's look at a few examples

In [29]: np.set_printoptions(precision=2)

         for i, image in enumerate(mnistXTest[:10]):
             show(image[:, 1:].reshape(28, 28))
             print(cls[i], mnistYTest[i], probs[i])

(output: each of the first 10 test images is displayed, followed by the predicted class, the gold class and the 10-dimensional probability vector; the predicted/gold pairs are 7/7, 2/2, 1/1, 0/0, 4/4, 1/1, 4/4, 9/9, 6/5 and 9/9, so only the ninth image, a 5 misread as a 6, is classified incorrectly)

4 OK, but what about Neural Networks?

Actually, we already trained a whole bunch of neural networks during the lecture!

4.1 The Neuron

4.2 What's this?

4.3 All we need for a single-layer single-neuron regression network:

Model:

$$h_\theta(x) = \sum_{i=0}^n \theta_i x_i$$

Cost function (MSE):

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^m \big(h_\theta(x^{(i)}) - y^{(i)}\big)^2$$

Gradient:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^m \big(h_\theta(x^{(i)}) - y^{(i)}\big)\, x_j^{(i)}$$

Optimization algorithm: any variant of SGD

4.4 And that?

4.5 All we need for a single-layer single-neuron binary classifier:

Model:

$$h_\theta(x) = \sigma\Big(\sum_{i=0}^n \theta_i x_i\Big) = P(c = 1 \mid x, \theta)$$

Cost function (binary cross-entropy):

$$J(\theta) = -\frac{1}{m} \sum_{i=1}^m \big[ y^{(i)} \log P(c = 1 \mid x^{(i)}, \theta) + (1 - y^{(i)}) \log\big(1 - P(c = 1 \mid x^{(i)}, \theta)\big) \big]$$

Gradient:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^m \big(h_\theta(x^{(i)}) - y^{(i)}\big)\, x_j^{(i)}$$

Optimization algorithm: any variant of SGD

4.6 And what are these?

4.7 All we need to train a single-layer multi-class classifier

Model:

$$h_\Theta(x) = [P(k \mid x, \Theta)]_{k=1,\dots,c} \quad \text{where} \quad \Theta = (\theta^{(1)}, \dots, \theta^{(c)})$$

Cost function $J(\Theta)$ (categorical cross-entropy):

$$J(\Theta) = -\frac{1}{m} \sum_{i=1}^m \sum_{k=1}^c \delta(y^{(i)}, k) \log P(k \mid x^{(i)}, \Theta)$$

Gradient $\nabla J(\Theta)$:

$$\frac{\partial J(\Theta)}{\partial \Theta_{j,k}} = -\frac{1}{m} \sum_{i=1}^m \big(\delta(y^{(i)}, k) - P(k \mid x^{(i)}, \Theta)\big)\, x_j^{(i)}$$

Optimization algorithm: any variant of SGD

4.8 Tomorrow: How do we train this monster?

4.9 Or something like this:
