Need for a soft decoder. MAP algorithm. Soft bit calculation. Performance. Viterbi Equalizer. Decoder. Kalle Ruttik

Need for a soft decoder

y → Viterbi Equalizer → x̂ → Decoder → b̂

Kalle Ruttik, Communications Laboratory, Helsinki University of Technology, May 2, 2007

- The Viterbi equalizer provides only the ML bit sequence x̂.
- The ML sequence x̂ contains hard bits.
- The decoder following the Viterbi equalizer therefore has to operate on hard bits.

Soft decoder performance

[Figure: BER versus SNR, comparing the performance of a Viterbi decoder with hard and with soft inputs.]

The performance of the decoder can be improved if the decoder operates on soft values. A soft value contains not only the bit value but also reliability information. For example, the soft value could be the bit probability.

Soft bit calculation

In order to have soft values at the input of the decoder, the equalizer should generate a soft output. The Viterbi algorithm generates at its output only the ML sequence (hard bits). The soft information can be expressed as the a-posteriori probability of a bit. At the input of the equalizer the soft information is described as the probability of the input symbol; at the output it can be expressed as the a-posteriori probability of the symbol. In the case of binary bits the soft information can be expressed as a log-likelihood ratio (LLR).
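As a minimal illustration of what a "soft value" is, the sketch below converts between a bit probability and its LLR, assuming the common convention $L = \log\frac{p(x=1)}{p(x=0)}$: the sign carries the hard decision and the magnitude the reliability.

```python
import numpy as np

def prob_to_llr(p1):
    """LLR of a binary bit from its probability of being 1: L = log(p1 / (1 - p1))."""
    return np.log(p1 / (1.0 - p1))

def llr_to_prob(llr):
    """Inverse mapping: p(bit = 1) = 1 / (1 + exp(-L))."""
    return 1.0 / (1.0 + np.exp(-llr))

# A bit that is 90% likely to be 1 carries a positive, fairly large LLR.
print(prob_to_llr(0.9))    # ~ 2.197
print(llr_to_prob(2.197))  # ~ 0.9
```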

Soft bit as a likelihood ratio

- For deciding a binary bit value according to the Bayesian decision approach, we have to calculate the LLR for the bit and compare it to the decision threshold.
- The log-likelihood ratio is a sufficient statistic: it contains all the information needed for making the optimal decision.
- In our setup (equalizer followed by decoder) we do not want to make the decision at the output of the equalizer yet, but postpone it to the decoder stage. We therefore provide the decoder input with a sufficient statistic.
- For a bit sequence, the LLR of a bit has to be calculated by first marginalising the bit probabilities. Marginalisation means integrating (summing) over the values of the non-relevant variables.

Marginalisation

Marginalisation of a distribution of a variable is the removal of the impact of the non-relevant variables. For example, the marginal distribution of the variable $x_1$ from the distribution of three variables $x_1, x_2, x_3$ is

$$p(x_1) = \int \int p(x_1, x_2, x_3)\, dx_2\, dx_3 .$$

For discrete variables we can replace the integral over the distribution by a sum over the possible values:

$$p(x_1) = \sum_{x_2} \sum_{x_3} p(x_1, x_2, x_3).$$

A further simplification: if the variables are independent, the probability can be split as $p(x_1)\,p(x_2)\,p(x_3)$, and the marginal probability of $x_1$ becomes

$$p(x_1) = p(x_1) \sum_{x_2} p(x_2) \sum_{x_3} p(x_3) = p(x_1),$$

because we sum over probabilities: the sums over all possible values of $x_2$ and $x_3$ are 1.

Soft bit

The LLR contains a sufficient statistic for making the optimal decision about the bit:

$$L(x) = \log \frac{p(x = 1 \mid y)}{p(x = 0 \mid y)}.$$

Replacing both of the conditional probabilities by using Bayes' formula,

$$p(x = 1 \mid y) = \frac{p(y \mid x = 1)\, p(x = 1)}{p(y)}, \qquad p(x = 0 \mid y) = \frac{p(y \mid x = 0)\, p(x = 0)}{p(y)},$$

with $p(y) = p(y \mid x = 0)\,p(x = 0) + p(y \mid x = 1)\,p(x = 1)$. If the symbols are a priori equally probable, we can write

$$L(x) = \log \frac{p(y \mid x = 1)}{p(y \mid x = 0)}.$$
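A small numeric sketch of this marginalisation, with hypothetical probabilities over three binary variables: the marginal of one bit is obtained by summing the joint probabilities over the non-relevant bits, and the LLR follows directly.

```python
import numpy as np

# Hypothetical joint distribution p(x1, x2, x3) over binary variables,
# indexed as p[x1, x2, x3]; the entries must sum to 1.
rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()

# Marginalise out x2 and x3: p(x1) = sum over x2, x3 of p(x1, x2, x3)
p_x1 = p.sum(axis=(1, 2))

# LLR of x1 from the marginal probabilities
llr_x1 = np.log(p_x1[1] / p_x1[0])
print(p_x1, llr_x1)
```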

Soft information from a symbol sequence

Assume that the symbols are observed in additive white noise. For a symbol sequence we have, from marginalisation,

$$p(x_k = 1 \mid y_1, \dots, y_N) = \sum_{\mathbf{x}:\, x_k = 1} p(x_1, \dots, x_N \mid y_1, \dots, y_N),$$

i.e. we sum over all sequences in which $x_k = 1$ (and correspondingly for $x_k = 0$). Based on Bayes' formula,

$$p(x_1, \dots, x_N \mid y_1, \dots, y_N) = \frac{p(y_1, \dots, y_N \mid x_1, \dots, x_N)\, p(x_1, \dots, x_N)}{p(y_1, \dots, y_N)}.$$

For AWGN the noise samples are independent, so the likelihood factorises:

single path: $p(y_1, \dots, y_N \mid x_1, \dots, x_N) = \prod_{k'} p(y_{k'} \mid x_{k'})$,

multipath (Markov model): $p(y_1, \dots, y_N \mid x_1, \dots, x_N) = \prod_{k'} p(y_{k'} \mid s_{k'}, x_{k'})$,

where $s_{k'}$ denotes the channel state. The LLR of bit $x_k$ is then

$$L(x_k) = \log \frac{\sum_{\mathbf{x}:\, x_k = 1} \prod_{k'} p(y_{k'} \mid s_{k'}, x_{k'})}{\sum_{\mathbf{x}:\, x_k = 0} \prod_{k'} p(y_{k'} \mid s_{k'}, x_{k'})}.$$

MAP

The last equation is the general form of the maximum a-posteriori probability (MAP) estimation algorithm. The optimal algorithm for estimating not only the bit values but also their reliability is MAP estimation for each bit in the received codeword. For calculating the likelihood ratio from the a-posteriori probabilities we have to sum over all possible bit sequences:

- sum the probabilities of the bit sequences where the bit $x_k = 1$ and those where the bit $x_k = 0$,
- take the logarithm of the ratio of these two probabilities.

This is different compared to the Viterbi algorithm, where we selected only one ML path: MAP sums over all possible codewords (paths in the trellis).
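The brute-force form of this sum, before any trellis structuring, can be written directly. A sketch, assuming BPSK symbols ±1, an illustrative two-tap channel h = [1, 0.3], a channel starting in the all −1 state, and unit noise variance (all of these are assumptions for illustration):

```python
import numpy as np
import itertools

def map_llrs_bruteforce(y, h, sigma2=1.0):
    """A-posteriori LLR of each bit by summing over ALL BPSK sequences.
    Exponential in N -- only for tiny examples; the trellis (BCJR)
    algorithm computes the same quantities efficiently."""
    y = np.asarray(y)
    N, L = len(y), len(h)
    num = np.zeros(N)   # accumulates p(y, x_k = +1)
    den = np.zeros(N)   # accumulates p(y, x_k = -1)
    for bits in itertools.product([-1.0, 1.0], repeat=N):
        x = np.concatenate([-np.ones(L - 1), bits])  # assumed initial channel state
        # noiseless channel output for this hypothesis
        z = np.array([sum(h[j] * x[L - 1 + k - j] for j in range(L)) for k in range(N)])
        lik = np.exp(-np.sum((y - z) ** 2) / (2 * sigma2))
        for k in range(N):
            (num if bits[k] > 0 else den)[k] += lik
    return np.log(num / den)

y = [1.3, 0.1, 1.0, 1.4, 0.9]   # received samples as printed in the later example
print(map_llrs_bruteforce(y, h=np.array([1.0, 0.3])))
```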

Comparison of MAP and Viterbi algorithms

The Viterbi algorithm estimates the whole sequence; the MAP algorithm estimates each bit separately. Minimising the sequence (codeword) error does not mean that the bit error is minimised.

Viterbi algorithm:
- estimates the ML sequence; the estimate x̂ is a vector of hard bits corresponding to the ML sequence,
- the error probability is the probability that, given the received symbol vector y, the estimated sequence x̂ is not equal to the transmitted codeword x: $p(\hat{\mathbf{x}} \neq \mathbf{x} \mid \mathbf{y})$,
- the Viterbi algorithm minimises the block error ratio (BLER),
- it gives no information about the reliability of the estimate.

MAP algorithm:
- estimates the a-posteriori probability of each bit and minimises the bit error probability,
- x̂ is a vector of likelihood ratios, one for each bit,
- the error probability is the probability that, given the received symbol vector y, the estimate $\hat{x}_k$ of each bit $k$ is not equal to the transmitted bit $x_k$: $p(\hat{x}_k \neq x_k \mid \mathbf{y})$,
- the sign of the likelihood ratio gives the bit value; the magnitude describes the reliability, i.e. how probable the decision is.

BCJR

The implementation of MAP decoding of a linear code was proposed by Bahl, Cocke, Jelinek, and Raviv (BCJR) in 1974. For a general trellis the algorithm is also known as the forward-backward algorithm or the Baum-Welch algorithm. The algorithm is more complex than the Viterbi algorithm. When the information bits are not equally likely, the MAP decoder performs much better than the Viterbi decoder. It is used in iterative processing (turbo codes); for turbo processing the BCJR algorithm is slightly modified.

Derivation of the MAP decoding algorithm in a trellis

The BCJR MAP algorithm for binary transmission finds the marginal probability that the received bit was 1 or 0. Since a 1 or a 0 in a given position can occur in many different codewords, we have to sum over the probabilities of all these codewords. The decision is made by using the likelihood ratio of these marginal distributions for $x = 1$ and $x = 0$. The calculation can be structured by using the trellis diagram:

- for every state sequence there is a unique path through the trellis,
- codewords sharing a common state have a common bit sequence, which therefore needs to be computed only once,
- the objective of the decoder is to examine the states s and compute the APPs associated with the state transitions.

Marginal probability

[Figure: code tree with branch labels of the form input/output bits (e.g. 0/00, 1/11), visualising all codewords.]

The probabilities of the codewords are visualised in the code tree. For independent bits the probability of one codeword is the product of the probabilities of the individual bits in the codeword. The marginal probability, read from the code tree, of some particular bit being 1 or 0 corresponds to the sum of the probabilities over all the codewords where this bit is 1 or 0. A structured way to calculate the marginal probability is to use the trellis.

Example: calculation of marginal probabilities

For three bits $c_1, c_2, c_3$ the joint probabilities are $p(0,0,0)$, $p(0,0,1)$, $p(0,1,0)$, $p(0,1,1)$, $p(1,0,0)$, $p(1,0,1)$, $p(1,1,0)$, $p(1,1,1)$, and

$$p_{post}(c_2 = 0) = \frac{\sum_{c_2 = 0} p(c_1, c_2, c_3)}{\sum_{c_2 = 0} p(c_1, c_2, c_3) + \sum_{c_2 = 1} p(c_1, c_2, c_3)}, \qquad
p_{post}(c_2 = 1) = \frac{\sum_{c_2 = 1} p(c_1, c_2, c_3)}{\sum_{c_2 = 0} p(c_1, c_2, c_3) + \sum_{c_2 = 1} p(c_1, c_2, c_3)}.$$

Example: a-posteriori probability

For independent samples we can separate the joint probability, $p(c_1, c_2, c_3) = p(c_1)\,p(c_2)\,p(c_3)$, so that

$$p_{post}(c_2 = 0) = \frac{p(c_2 = 0) \sum_{c_1} p(c_1) \sum_{c_3} p(c_3)}{p(c_2 = 0) \sum_{c_1} p(c_1) \sum_{c_3} p(c_3) + p(c_2 = 1) \sum_{c_1} p(c_1) \sum_{c_3} p(c_3)}.$$

The likelihood ratio becomes

$$\frac{p_{post}(c_2 = 1)}{p_{post}(c_2 = 0)} = \frac{p(c_2 = 1)\,\big(p(c_1 = 0) + p(c_1 = 1)\big)\,\big(p(c_3 = 0) + p(c_3 = 1)\big)}{p(c_2 = 0)\,\big(p(c_1 = 0) + p(c_1 = 1)\big)\,\big(p(c_3 = 0) + p(c_3 = 1)\big)}.$$

The computation can be simplified by summing over all possible beginnings and ends of the codewords separately. In this simple example the sums reduce to 1; for a general code this is not the case.
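A quick numeric check of this example, with hypothetical independent per-bit probabilities: summing the eight codeword probabilities over $c_1$ and $c_3$ reproduces $p(c_2)$ exactly, as the slide argues.

```python
import itertools
import numpy as np

# Hypothetical independent per-bit probabilities p(c_i = 1)
p1 = np.array([0.8, 0.6, 0.3])

def p_word(bits):
    """Probability of one codeword under independent bits."""
    return np.prod([p1[i] if b else 1 - p1[i] for i, b in enumerate(bits)])

num = sum(p_word(w) for w in itertools.product([0, 1], repeat=3) if w[1] == 1)
den = sum(p_word(w) for w in itertools.product([0, 1], repeat=3) if w[1] == 0)
print(num, den)            # 0.6 and 0.4: the sums over c1 and c3 reduce to 1
print(np.log(num / den))   # LLR of c2
```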

Derivation of the forward-backward algorithm

For a general HMM the marginalisation can be simplified by grouping together the codewords that have common terms:

$$L(x_k) = \log \frac{\sum_{\mathbf{x}:\, x_k = 1} \prod_{k'} p(y_{k'} \mid s_{k'}, x_{k'})}{\sum_{\mathbf{x}:\, x_k = 0} \prod_{k'} p(y_{k'} \mid s_{k'}, x_{k'})}.$$

The sum runs over all the symbol sequences where $x_k = 1$ (respectively 0), and the product over all the state transitions along the path corresponding to a particular codeword. Since the pair $(s_{k'}, x_{k'})$ determines the next state, the product can equally be written over state pairs, $\prod_{k'} p(y_{k'} \mid s_{k'}, s_{k'+1})$. Let us denote the transition at stage $k'$ from state $s_{k'} = S_i$ to state $s_{k'+1} = S_j$ in the next stage as

$$M_{k',j,i} = p\big(y_{k'},\, s_{k'} = S_i,\, s_{k'+1} = S_j\big) = p(y_{k'} \mid s_{k'}, x_{k'}).$$

We can regroup: all the codewords passing through a given transition contain that one common factor $p(y_k, s_k, s_{k+1}, x_k = 1)$, so

$$p(\mathbf{y}, x_k = 1) = \sum_{s_k \in S} \Big[\prod_{k' < k} p(y_{k'} \mid s_{k'}, s_{k'+1})\Big]\; p(y_k \mid s_k, s_{k+1}, x_k = 1)\; \Big[\prod_{k' > k} p(y_{k'} \mid s_{k'}, s_{k'+1})\Big],$$

where

- $A_{k,i} = \prod_{k' < k} p(y_{k'} \mid s_{k'}, s_{k'+1})$ is called the forward metric,
- $B_{k,j} = \prod_{k' > k} p(y_{k'} \mid s_{k'}, s_{k'+1})$ is called the backward metric,
- $\alpha_{k,i} = \log A_{k,i}$ is the (log of the) sum of the probabilities along all the paths that, starting from the beginning of the trellis, merge into state $i$ at stage $k$,
- $\beta_{k,j} = \log B_{k,j}$ is the (log of the) sum of the probabilities along all the paths that, starting from the end of the trellis, merge into state $j$ at stage $k+1$.

Calculation of the metrics

A and B can be calculated recursively. For a particular state $S_i$ we can write

$$A_{k,i} = p(y_{k-1}, y_{k-2}, \dots, y_1,\, s_k = S_i) = \sum_{i'} M_{k,i,i'}\, A_{k-1,i'},$$

$$B_{k,j} = p(y_{k+1}, y_{k+2}, \dots, y_N \mid s_{k+1} = S_j) = \sum_{j'} M_{k+1,j,j'}\, B_{k+1,j'}.$$
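A compact sketch of these recursions in the probability domain, using state-indexed NumPy arrays; the layout M[k, j, i] for the branch probabilities $M_{k,j,i}$ (from state i to state j at stage k) is an assumption of this sketch.

```python
import numpy as np

def forward_backward(M, A0, BN):
    """Forward/backward metrics for a trellis.
    M  : array of shape (N, S, S); M[k, j, i] = p(y_k, s_{k+1}=S_j | s_k=S_i)
    A0 : initial state distribution, shape (S,)
    BN : final-state weighting, shape (S,)
    Returns A of shape (N+1, S) and B of shape (N+1, S)."""
    N, S, _ = M.shape
    A = np.zeros((N + 1, S)); A[0] = A0
    B = np.zeros((N + 1, S)); B[N] = BN
    for k in range(N):                 # forward recursion: A_{k+1,j} = sum_i M_{k,j,i} A_{k,i}
        A[k + 1] = M[k] @ A[k]
        A[k + 1] /= A[k + 1].sum()     # normalise for numerical stability
    for k in range(N - 1, -1, -1):     # backward recursion: B_{k,i} = sum_j M_{k,j,i} B_{k+1,j}
        B[k] = M[k].T @ B[k + 1]
        B[k] /= B[k].sum()
    return A, B
```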

Illustration: forward and backward metrics

[Figure: trellis diagrams illustrating the recursions, e.g. $A_{k,1} = M_{k,1,1} A_{k-1,1} + M_{k,1,2} A_{k-1,2}$ in the forward direction and $B_{k,1} = M_{k+1,1,1} B_{k+1,1} + M_{k+1,1,2} B_{k+1,2}$ in the backward direction.]

$$A_{k,i} = \sum_{i'} M_{k,i,i'}\, A_{k-1,i'}, \qquad B_{k,j} = \sum_{j'} M_{k+1,j,j'}\, B_{k+1,j'}$$

Calculation in the log domain

For numerical stability it is better to compute in the logarithmic domain:

$$\log\big(A_{k,i}\, M_{k,i,j}\, B_{k,j}\big) = \log A_{k,i} + \log M_{k,i,j} + \log B_{k,j} = \alpha_{k,i} + \mu_{k,i,j} + \beta_{k,j},$$

with $\alpha_{k,i} = \log A_{k,i}$, $\mu_{k,i,j} = \log M_{k,i,j}$, $\beta_{k,j} = \log B_{k,j}$. The recursions become

$$\alpha_{k,i} = \log \sum_{i'} e^{\mu_{k,i,i'} + \alpha_{k-1,i'}}, \qquad \beta_{k,j} = \log \sum_{j'} e^{\mu_{k+1,j,j'} + \beta_{k+1,j'}}.$$

Metric for the AWGN channel

For the equalizer in an AWGN channel the probability calculation simplifies. With

$$p(y_k \mid x_k, s_k) = \frac{1}{\sqrt{2\pi\sigma_N^2}} \exp\!\left(-\frac{\big(y_k - f(x_k, s_k, h_{ch})\big)^2}{2\sigma_N^2}\right),$$

the log-metric is

$$\log p(y_k \mid x_k, s_k) = -\log\sqrt{2\pi\sigma_N^2} - \frac{1}{2\sigma_N^2}\Big(y_k^2 - 2 y_k f(x_k, s_k, h_{ch}) + f(x_k, s_k, h_{ch})^2\Big).$$

Dropping the terms that are the same for all branches,

$$\mu_k \propto \frac{1}{2\sigma_N^2}\Big(2 y_k f(x_k, s_k, h_{ch}) - f(x_k, s_k, h_{ch})^2\Big).$$
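The same recursion in the log domain, using scipy's logsumexp for the $\log \sum e$ operation, together with the simplified AWGN branch metric. A sketch: f_branch stands for the noiseless branch output $f(x_k, s_k, h_{ch})$, and the per-stage maximum is subtracted for normalisation as on the slides.

```python
import numpy as np
from scipy.special import logsumexp

def awgn_branch_metric(y_k, f_branch, sigma2):
    """Simplified AWGN log-metric: terms common to all branches dropped."""
    return (2.0 * y_k * f_branch - f_branch ** 2) / (2.0 * sigma2)

def forward_log(mu, alpha0):
    """Log-domain forward recursion.
    mu[k, j, i] = branch log-metric from state i to state j at stage k."""
    N, S, _ = mu.shape
    alpha = np.full((N + 1, S), -np.inf); alpha[0] = alpha0
    for k in range(N):
        # alpha_{k+1,j} = logsumexp over i of (mu_{k,j,i} + alpha_{k,i})
        alpha[k + 1] = logsumexp(mu[k] + alpha[k][None, :], axis=1)
        alpha[k + 1] -= alpha[k + 1].max()   # normalisation for stability
    return alpha
```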

Initialization

Suppose the decoder starts and ends in known states. Then

$$A_{0,s} = \begin{cases} 1, & s = S_0 \\ 0, & \text{otherwise.} \end{cases}$$

If the final state is known,

$$B_{N,s} = \begin{cases} 1, & s_N = S_0 \\ 0, & \text{otherwise.} \end{cases}$$

If the final state of the trellis is unknown, $B_{N,j} = \frac{1}{2^m}$ for every state $s_N$ (with $2^m$ trellis states).

Metric calculation

MAP algorithm: we sum over all the codewords where the symbol is $x_k = 0$ and where it is $x_k = 1$. At a node we combine the probabilities merging into this node:

$$A_{k,i} = \sum_{s_k = S_i} p(y_{k-1}, y_{k-2}, \dots, y_1,\, s_k = S_i,\, s_{k-1}, \dots, s_1).$$

We do not reject the other path, but sum the probabilities together: the probability of the part of the codeword continuing from the trellis state where the paths merge will be the same for both codewords. Similarly we can calculate the backward metric B by starting at the end of the trellis.

Algorithm

1. Initialize the forward and backward metrics $\alpha_0$, $\beta_N$.
2. Compute the branch metrics $\mu_{k,i,j}$.
3. Compute the forward metrics $\alpha_{k,i}$.
4. Compute the backward metrics $\beta_{k,j}$.
5. Compute the APP L-values $L(b)$.
6. (Optional) Compute hard decisions.

Example: MAP equalizer

Use MAP bit estimation for estimating the transmitted bits when the channel is $h_{ch} = 1 \cdot \delta(t) + 0.3 \cdot \delta(t-1)$. The received signal is $y_{rec} = [1.3, 0.1, 1, 1.4, 0.9]$. The input is a sequence of binary bits b modulated as $x \in \{+1, -1\}$. The $E_b/N_0$ is 2 dB. Estimate the most likely transmitted bits. Notice that we are using the same values as in the example about the Viterbi equalizer.
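The steps above can be put together for a two-tap BPSK channel of this form. A sketch, not the lecture's reference implementation: it assumes the state is the previous symbol (state 0 meaning −1), a known start state, an unknown end state, the simplified AWGN metric from the previous slide, and an illustrative noise variance.

```python
import numpy as np
from scipy.special import logsumexp

def map_equalizer_llrs(y, h, sigma2):
    """Per-bit LLRs for a 2-tap ISI channel via the forward-backward algorithm.
    State = previous BPSK symbol: state 0 -> x_{k-1} = -1, state 1 -> x_{k-1} = +1."""
    N = len(y)
    sym = np.array([-1.0, 1.0])
    # mu[k, i, j] = log-metric of leaving state i with new symbol sym[j] at stage k
    mu = np.empty((N, 2, 2))
    for k in range(N):
        for i in range(2):
            for j in range(2):
                f = h[0] * sym[j] + h[1] * sym[i]       # noiseless branch output
                mu[k, i, j] = (2 * y[k] * f - f ** 2) / (2 * sigma2)
    alpha = np.full((N + 1, 2), -np.inf); alpha[0, 0] = 0.0   # known start state
    beta = np.zeros((N + 1, 2))                               # unknown end state
    for k in range(N):                    # forward: branch (i -> j) ends in state j
        for j in range(2):
            alpha[k + 1, j] = logsumexp(mu[k, :, j] + alpha[k, :])
        alpha[k + 1] -= alpha[k + 1].max()
    for k in range(N - 1, -1, -1):        # backward
        for i in range(2):
            beta[k, i] = logsumexp(mu[k, i, :] + beta[k + 1, :])
        beta[k] -= beta[k].max()
    # LLR(x_k): sum of alpha + mu + beta over branches with symbol +1 vs -1
    llr = np.empty(N)
    for k in range(N):
        up = logsumexp([alpha[k, i] + mu[k, i, 1] + beta[k + 1, 1] for i in range(2)])
        dn = logsumexp([alpha[k, i] + mu[k, i, 0] + beta[k + 1, 0] for i in range(2)])
        llr[k] = up - dn
    return llr

y = np.array([1.3, 0.1, 1.0, 1.4, 0.9])
print(map_equalizer_llrs(y, h=[1.0, 0.3], sigma2=0.5))  # sigma2 illustrative
```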

Example: Solution. Metric assignment

The metric is calculated from the received value and the value on the trellis branch:

$$\mu = \frac{1}{|h_{ch}|^2}\left(\frac{1}{2\sigma_N^2}\, 2 y_k f(x_k, s_k, h_{ch}) - \frac{1}{2\sigma_N^2}\, f(x_k, s_k, h_{ch})^2\right),$$

where

$$f(x_k, s_k, h_{ch}) = h_1 x_k + h_2 x_{k-1} + \dots + h_L x_{k-L+1}$$

is the noiseless mean value of the corresponding trellis branch. The metric is normalised by the total channel power $|h_{ch}|^2$.

[Figure: trellis with the mean value ($\pm 1.3$ or $\pm 0.7$, i.e. $\pm 1 \pm 0.3$) on each branch, and the resulting metric on each trellis branch at each stage: 6.64, 4.24, 8.8, 3.1, 0.8, 2.53, 0.59, 1.48, 4.54, 1.42, 3.49, 2.53, 1.17, 2.7, 1.14, 1.4.]

Example: Solution. Backward calculation

Start from the end of the trellis and calculate the sum of the probabilities:

1. Initialise the last stage probability to 1 (in the log domain, to 0): $\beta_{5,0} = 0$.
2. Using the state probabilities at stage $k+1$ and the probabilities on the transitions, calculate the probabilities for the states at stage $k$: $\beta_{k,i} = \log \sum_j e^{\mu_{k+1,i,j} + \beta_{k+1,j}}$. Note that if the values are in the log domain, for the sum you have to convert them back to the probability domain.
3. Normalise (needed for numerical stability): $\beta_{k,\#} \leftarrow \beta_{k,\#} - \max_{\#} \beta_{k,\#}$.

[Figure: backward trellis with the stage values $\beta_{stage,state}$ for stages 0 to 5.]
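For this channel the branch means take only the four values named in the figure. A quick sketch computing them and the corresponding normalised branch metrics; the noise variance is a hypothetical placeholder, not the value implied by the example's Eb/N0.

```python
import numpy as np

h = np.array([1.0, 0.3])
sym = np.array([-1.0, 1.0])
# Noiseless mean value on each branch: f = h1*x_k + h2*x_{k-1}
f = np.array([[h[0] * xk + h[1] * xk1 for xk in sym] for xk1 in sym])
print(f)   # [[-1.3  0.7], [-0.7  1.3]] -- the +/-1.3, +/-0.7 of the trellis figure

def branch_metric(y_k, f, sigma2=0.5):   # sigma2 illustrative
    """Metric of one branch, normalised by the channel power |h|^2."""
    return (2 * y_k * f - f ** 2) / (2 * sigma2) / np.dot(h, h)

print(branch_metric(1.3, f))   # metrics of all four branches for y_k = 1.3
```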

Example: Solution. Backward calculation, stage 5 to 4

Only the transition into the known final state contributes, so the new state probabilities are

$$\beta_{4,0} = \mu_{5,0,0} + \beta_{5,0} = 0.8, \qquad \beta_{4,1} = \mu_{5,1,0} + \beta_{5,0} = -1.17.$$

After normalisation: $\beta_{4,0} = 0.8 - 0.8 = 0$, $\beta_{4,1} = -1.17 - 0.8 = -1.97$.

Example: Solution. Backward calculation, stage 4 to 3

The states' backward probabilities are

$$\beta_{3,0} = \log\big(e^{\mu_{4,0,0}+\beta_{4,0}} + e^{\mu_{4,0,1}+\beta_{4,1}}\big) = \log\big(e^{3.1+0} + e^{-4.54-1.97}\big) \approx 3.1,$$

$$\beta_{3,1} = \log\big(e^{\mu_{4,1,0}+\beta_{4,0}} + e^{\mu_{4,1,1}+\beta_{4,1}}\big) = \log\big(e^{2.53+0} + e^{-1.4-1.97}\big) \approx 2.53.$$

After normalisation: $\beta_{3,0} = 3.1 - 3.1 = 0$, $\beta_{3,1} = 2.53 - 3.1 = -0.57$.

Example: Solution. Backward calculation, stage 3 to 2

$$\beta_{2,0} = \log\big(e^{\mu_{3,0,0}+\beta_{3,0}} + e^{\mu_{3,0,1}+\beta_{3,1}}\big) = \log\big(e^{-8.8+0} + e^{1.48-0.57}\big) \approx 0.91,$$

$$\beta_{2,1} = \log\big(e^{\mu_{3,1,0}+\beta_{3,0}} + e^{\mu_{3,1,1}+\beta_{3,1}}\big) = \log\big(e^{-3.49+0} + e^{1.14-0.57}\big) \approx 0.59.$$

After normalisation: $\beta_{2,0} = 0.91 - 0.91 = 0$, $\beta_{2,1} = 0.59 - 0.91 = -0.32$.

Example: Solution. Backward calculation, stage 2 to 1

$$\beta_{1,0} = \log\big(e^{\mu_{2,0,0}+\beta_{2,0}} + e^{\mu_{2,0,1}+\beta_{2,1}}\big) \approx 0.88,$$

$$\beta_{1,1} = \log\big(e^{\mu_{2,1,0}+\beta_{2,0}} + e^{\mu_{2,1,1}+\beta_{2,1}}\big) \approx 1.24.$$

After normalisation: $\beta_{1,0} = 0.88 - 0.88 = 0$, $\beta_{1,1} = 1.24 - 0.88 = 0.36$.

Example: Solution. Forward calculation

Start from the beginning of the trellis and calculate the sum of the probabilities:

1. Initialise the probability of the first state at stage 0 to 1 (in the log domain, to 0): $\alpha_{0,0} = 0$.
2. Using the state probabilities at stage $k$ and the probabilities on the transitions, calculate the probabilities for the states at stage $k+1$: $\alpha_{k,i} = \log \sum_j e^{\mu_{k,i,j} + \alpha_{k-1,j}}$. Note that if the values are in the log domain, for the sum you have to convert them back to the probability domain.
3. Normalise (needed for numerical stability): $\alpha_{k,\#} \leftarrow \alpha_{k,\#} - \max_{\#} \alpha_{k,\#}$.

[Figure: forward trellis with the stage values $\alpha_{stage,state}$ for stages 0 to 5.]

Example: Solution. Forward calculation, stage 0 to 1

The new state probabilities are

$$\alpha_{1,0} = \mu_{1,0,0} + \alpha_{0,0} = 6.64, \qquad \alpha_{1,1} = \mu_{1,1,0} + \alpha_{0,0} = -2.53.$$

After normalisation: $\alpha_{1,0} = 6.64 + 2.53 = 9.17$, $\alpha_{1,1} = -2.53 + 2.53 = 0$.

Example: Solution. Forward calculation, stage 1 to 2

$$\alpha_{2,0} = \log\big(e^{\mu_{2,0,0}+\alpha_{1,0}} + e^{\mu_{2,0,1}+\alpha_{1,1}}\big) \approx 1.42, \qquad
\alpha_{2,1} = \log\big(e^{\mu_{2,1,0}+\alpha_{1,0}} + e^{\mu_{2,1,1}+\alpha_{1,1}}\big) \approx 2.7.$$

After normalisation: $\alpha_{2,0} = 1.42 - 1.42 = 0$, $\alpha_{2,1} = 2.7 - 1.42 = 1.28$.

Example: Solution. Forward calculation, stage 2 to 3

$$\alpha_{3,0} = \log\big(e^{\mu_{3,0,0}+\alpha_{2,0}} + e^{\mu_{3,0,1}+\alpha_{2,1}}\big) \approx 4.73, \qquad
\alpha_{3,1} = \log\big(e^{\mu_{3,1,0}+\alpha_{2,0}} + e^{\mu_{3,1,1}+\alpha_{2,1}}\big) \approx 1.66.$$

After normalisation: $\alpha_{3,0} = 4.73 + 1.66 = 6.4$, $\alpha_{3,1} = 1.66 - 1.66 = 0$.

[Figure: the completed forward trellis, with the normalised values $\alpha_{1,0} = 9.17$, $\alpha_{2,1} = 1.28$ and $\alpha_{3,0} = 6.4$ marked on the states.]

Bit a-posteriori probability

The bit a-posteriori probability is calculated by combining the forward probability in the states at stage $k$, the backward probability in the states at stage $k+1$, and the transition probabilities:

$$L(x_k) = \log\frac{p(\mathbf{y} \mid x_k = 1)\,p(x_k = 1)}{p(\mathbf{y} \mid x_k = 0)\,p(x_k = 0)} = \log\frac{\sum_{(i,j):\, x_k = 1} e^{\alpha_{k,i} + \mu_{k,i,j} + \beta_{k+1,j}}}{\sum_{(i,j):\, x_k = 0} e^{\alpha_{k,i} + \mu_{k,i,j} + \beta_{k+1,j}}}.$$

Example: a-posteriori for the bits, bit 1

At stage 0 only state 0 is reachable, so

$$L(x_1) = \big(\alpha_{0,0} + \mu_{1,0,1} + \beta_{1,1}\big) - \big(\alpha_{0,0} + \mu_{1,0,0} + \beta_{1,0}\big) = (-2.53 + 0.36) - (6.64 + 0) = -8.81.$$

Example: a-posteriori for the bits, bit 2

$$L(x_2) = \log\frac{e^{\alpha_{1,0} + \mu_{2,0,1} + \beta_{2,1}} + e^{\alpha_{1,1} + \mu_{2,1,1} + \beta_{2,1}}}{e^{\alpha_{1,0} + \mu_{2,0,0} + \beta_{2,0}} + e^{\alpha_{1,1} + \mu_{2,1,0} + \beta_{2,0}}} \approx 3.14.$$

Example: a-posteriori for the bits, bit 3

$$L(x_3) = \log\frac{e^{\alpha_{2,0} + \mu_{3,0,1} + \beta_{3,1}} + e^{\alpha_{2,1} + \mu_{3,1,1} + \beta_{3,1}}}{e^{\alpha_{2,0} + \mu_{3,0,0} + \beta_{3,0}} + e^{\alpha_{2,1} + \mu_{3,1,0} + \beta_{3,0}}} \approx 5.64.$$

Example: a-posteriori for the bits, bit 4

$$L(x_4) = \log\frac{e^{\alpha_{3,0} + \mu_{4,0,1} + \beta_{4,1}} + e^{\alpha_{3,1} + \mu_{4,1,1} + \beta_{4,1}}}{e^{\alpha_{3,0} + \mu_{4,0,0} + \beta_{4,0}} + e^{\alpha_{3,1} + \mu_{4,1,0} + \beta_{4,0}}} \approx 1.41.$$