In Equation (3.121), the transition probability is expressed in a general time-dependent form. The marginal probability that a Markov process is in state j at time m, P_j(m), can be expressed as

    P_j(m) = \sum_{i=1}^{N} P_i(m-1)\, a_{ij}(m-1, m)    (3.122)

A Markov chain is defined by the following set of parameters:

number of states N

state probability vector

    p^T(m) = [p_1(m), p_2(m), \ldots, p_N(m)]

and the state transition matrix

    A(m-1,m) = \begin{bmatrix}
    a_{11}(m-1,m) & a_{12}(m-1,m) & \cdots & a_{1N}(m-1,m) \\
    a_{21}(m-1,m) & a_{22}(m-1,m) & \cdots & a_{2N}(m-1,m) \\
    \vdots & \vdots & \ddots & \vdots \\
    a_{N1}(m-1,m) & a_{N2}(m-1,m) & \cdots & a_{NN}(m-1,m)
    \end{bmatrix}

Homogeneous and Inhomogeneous Markov Chains

A Markov chain with time-invariant state transition probabilities is known as a homogeneous Markov chain. For a homogeneous Markov process, the probability of a transition from state i to state j is independent of the time of the transition m, as expressed in the following equation:

    Prob(x(m) = j \mid x(m-1) = i) = a_{ij}(m-1, m) = a_{ij}    (3.123)

Inhomogeneous Markov chains have time-dependent transition probabilities. In most applications of Markov chains, homogeneous models are used because they usually provide an adequate model of the signal process, and because homogeneous Markov models are easier to train and use.
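As a brief illustration of Equation (3.122), the following sketch propagates the state probability vector of a homogeneous Markov chain via p(m) = p(m-1)A; the three-state transition matrix and the initial distribution are hypothetical values chosen purely for the example.

    import numpy as np

    # Hypothetical 3-state homogeneous chain: A[i, j] holds the transition
    # probability a_ij = Prob(x(m) = j | x(m-1) = i); each row sums to one.
    A = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.3, 0.6]])

    p = np.array([1.0, 0.0, 0.0])  # initial state probability vector p(0)

    # Equation (3.122): P_j(m) = sum_i P_i(m-1) a_ij, i.e. p(m) = p(m-1) A
    for m in range(1, 11):
        p = p @ A

    print(p)        # marginal state probabilities at m = 10
    print(p.sum())  # still 1.0: probability is conserved at every step

For a chain like this one (irreducible and aperiodic), continued iteration drives p towards the stationary distribution of the chain.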
Markov models are considered in Chapter 5.

3.6 Transformation of a Random Process

Figure 3.14 Transformation of a random process x(m) to an output process y(m) = h[x(m)].

In this section we consider the effect of filtering or transformation of a random process on its probability density function.
Figure 3.14 shows a generalised mapping operator h(·) that transforms a random input process X into an output process Y. The input and output signals x(m) and y(m) are realisations of the random processes X and Y respectively. If x(m) and y(m) are both discrete-valued, such that x(m) \in \{x_1, \ldots, x_N\} and y(m) \in \{y_1, \ldots, y_M\}, then we have

    P_Y(y(m) = y_j) = \sum_{x_i \to y_j} P_X(x(m) = x_i)    (3.124)

where the summation is taken over all values of x(m) that map to y(m) = y_j. Now consider the transformation of a discrete-time, continuous-valued process. The probability that the output process Y has a value in the range y(m) < Y < y(m) + \Delta y is

    Prob[y(m) < Y < y(m) + \Delta y] = \int_{x(m):\, y(m) < Y < y(m) + \Delta y} f_X(x(m))\, dx(m)    (3.125)

where the integration is taken over all the values of x(m) that yield an output in the range y(m) to y(m) + \Delta y.
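As a minimal illustration of Equation (3.124), the sketch below uses a made-up discrete distribution and the many-to-one mapping y = x^2, and accumulates the input probabilities over the preimage of each output value.

    # Hypothetical discrete-valued input: values x_i and probabilities P_X(x_i).
    x_values = [-2, -1, 0, 1, 2]
    p_x = [0.1, 0.2, 0.4, 0.2, 0.1]

    def h(x):
        return x * x  # example mapping y = h(x)

    # Equation (3.124): P_Y(y_j) = sum of P_X(x_i) over all x_i with h(x_i) = y_j
    p_y = {}
    for xi, pi in zip(x_values, p_x):
        p_y[h(xi)] = p_y.get(h(xi), 0.0) + pi

    print(p_y)  # {4: 0.2, 1: 0.4, 0: 0.4} -- probabilities still sum to one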
as inFigure 3.15) Equation (3.125) becomesProb( y (m)< Y < y(m) + û\ ) = Prob(x(m)< X < x(m) + û[ )(3.126)82Probability Modelsy=e x∆yx∆xFigure 3.15 An example of a monotonic one-to-one mapping.or, in terms of the cumulative distribution functionsFY ( y (m) + û\ )− FY ( y(m)) = FX (x(m) + û[ ) − FX (x(m))(3.127)Multiplication of the left-hand side of Equation (3.127) by ∆y/∆y and theright-hand side by ∆x/∆x and re-arrangement of the terms yieldsFY ( y (m) + û\ )− FY ( y (m) ) û[ FX (x(m) + û[ )− FX (x(m) )=û\û\û[(3.128)Now as the intervals ∆x and ∆y tend to zero, Equation (3.128) becomesf Y ( y ( m) ) =∂x ( m )f X ( x ( m) )∂y ( m )(3.129)where fY(y(m)) is the probability density function.
In Equation (3.129), substitution of x(m) = h^{-1}(y(m)) yields

    f_Y(y(m)) = \frac{\partial h^{-1}(y(m))}{\partial y(m)}\, f_X\left(h^{-1}(y(m))\right)    (3.130)

Equation (3.130) gives the pdf of the output signal in terms of the pdf of the input signal and the transformation.

Figure 3.16 A log-normal distribution.

Example 3.11 Transformation of a Gaussian process to a log-normal process. Log-normal pdfs are used for modelling positive-valued processes such as power spectra.
If a random variable x(m) has a Gaussian pdf, as in Equation (3.80), then the non-negative valued variable y(m) = exp(x(m)) has a log-normal distribution (Figure 3.16), obtained using Equation (3.130) as

    f_Y(y(m)) = \frac{1}{\sqrt{2\pi}\, \sigma_x\, y(m)} \exp\left(-\frac{[\ln y(m) - \mu_x]^2}{2\sigma_x^2}\right)    (3.131)

Conversely, if the input y to a logarithmic function has a log-normal distribution, then the output x = ln y is Gaussian. The mapping functions for translating the mean and variance of a log-normal distribution to those of a normal distribution can be derived as

    \mu_x = \ln \mu_y - \frac{1}{2} \ln\left(1 + \sigma_y^2 / \mu_y^2\right)    (3.132)

    \sigma_x^2 = \ln\left(1 + \sigma_y^2 / \mu_y^2\right)    (3.133)

where (\mu_x, \sigma_x^2) and (\mu_y, \sigma_y^2) are the means and variances of x and y respectively. The inverse mapping relations for the translation of the mean and variance of a normal variable to those of a log-normal variable are

    \mu_y = \exp(\mu_x + \sigma_x^2 / 2)    (3.134)

    \sigma_y^2 = \mu_y^2 \left[\exp(\sigma_x^2) - 1\right]    (3.135)
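As a numerical check on Equations (3.132)-(3.135), the sketch below exponentiates samples from a Gaussian with made-up moments and compares the sample mean and variance of the resulting log-normal variable with the closed-form mappings.

    import numpy as np

    rng = np.random.default_rng(0)
    mu_x, sigma_x = 0.5, 0.8  # hypothetical Gaussian mean and standard deviation

    x = rng.normal(mu_x, sigma_x, 1_000_000)
    y = np.exp(x)             # log-normally distributed samples

    # Equations (3.134) and (3.135): log-normal moments from the Gaussian ones.
    mu_y = np.exp(mu_x + sigma_x**2 / 2)
    var_y = mu_y**2 * (np.exp(sigma_x**2) - 1)
    print(y.mean(), mu_y)     # sample mean vs analytic mean
    print(y.var(), var_y)     # sample variance vs analytic variance

    # Equations (3.132) and (3.133): invert back to the Gaussian moments.
    print(np.log(mu_y) - 0.5 * np.log(1 + var_y / mu_y**2))  # recovers mu_x
    print(np.log(1 + var_y / mu_y**2))                       # recovers sigma_x**2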
3.6.2 Many-to-One Mapping of Random Signals

Figure 3.17 Illustration of a many-to-one transformation y = h(x).

Now consider the case where the transformation h(·) is a non-monotonic function such as that shown in Figure 3.17. Assuming that the equation y(m) = h[x(m)] has K roots, there are K different values of x(m) that map to the same y(m). The probability that a realisation of the output process Y has a value in the range y(m) to y(m) + \Delta y is given by

    Prob(y(m) < Y < y(m) + \Delta y) = \sum_{k=1}^{K} Prob(x_k(m) < X < x_k(m) + \Delta x_k)    (3.136)

where x_k is the kth root of y(m) = h(x(m)). Following a development similar to that of Section 3.6.1, Equation (3.136) can be written as

    \frac{F_Y(y(m) + \Delta y) - F_Y(y(m))}{\Delta y}\, \Delta y = \sum_{k=1}^{K} \frac{F_X(x_k(m) + \Delta x_k) - F_X(x_k(m))}{\Delta x_k}\, \Delta x_k    (3.137)

Equation (3.137) can be rearranged as

    \frac{F_Y(y(m) + \Delta y) - F_Y(y(m))}{\Delta y} = \sum_{k=1}^{K} \frac{\Delta x_k}{\Delta y} \cdot \frac{F_X(x_k(m) + \Delta x_k) - F_X(x_k(m))}{\Delta x_k}    (3.138)

Now as the intervals \Delta x_k and \Delta y tend to zero, Equation (3.138) becomes

    f_Y(y(m)) = \sum_{k=1}^{K} \left|\frac{\partial x_k(m)}{\partial y(m)}\right| f_X(x_k(m)) = \sum_{k=1}^{K} \frac{1}{|h'(x_k(m))|}\, f_X(x_k(m))    (3.139)

where h'(x_k(m)) = \partial h(x_k(m)) / \partial x_k(m). Note that for a monotonic function, K = 1 and Equation (3.139) reduces to Equation (3.130).
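As an illustration of Equation (3.139), the sketch below evaluates the pdf of y(m) = x^2(m) for a zero-mean, unit-variance Gaussian input; this mapping has K = 2 roots, x_{1,2} = ±\sqrt{y}, each with |h'(x_k)| = 2\sqrt{y}, and the analytic result is compared with an empirical estimate from transformed samples.

    import numpy as np

    def gauss(x):
        return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard Gaussian pdf

    def f_y(y):
        # Equation (3.139) for y = x^2: roots +/- sqrt(y), |h'| = 2 sqrt(y)
        root = np.sqrt(y)
        return (gauss(root) + gauss(-root)) / (2 * root)

    rng = np.random.default_rng(0)
    y_samples = rng.standard_normal(1_000_000) ** 2

    # Empirical density: fraction of samples in a narrow window around y0.
    width = 0.05
    for y0 in (0.5, 1.0, 2.0):
        in_window = (y_samples > y0 - width / 2) & (y_samples < y0 + width / 2)
        print(y0, f_y(y0), in_window.mean() / width)  # analytic vs empirical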
Equation (3.139) can be expressed as

    f_Y(y(m)) = \sum_{k=1}^{K} |J(x_k(m))|^{-1}\, f_X(x_k(m))    (3.140)

where J(x_k(m)) = h'(x_k(m)) is called the Jacobian of the transformation. For a multi-variate transformation of a vector-valued process such as

    y(m) = H(x(m))    (3.141)

the pdf of the output y(m) is given by

    f_Y(y(m)) = \sum_{k=1}^{K} |J(x_k(m))|^{-1}\, f_X(x_k(m))    (3.142)

where |J(x)|, the Jacobian of the transformation H(·), is the determinant of the matrix of derivatives

    J(x) = \begin{bmatrix}
    \partial y_1 / \partial x_1 & \partial y_1 / \partial x_2 & \cdots & \partial y_1 / \partial x_P \\
    \vdots & \vdots & \ddots & \vdots \\
    \partial y_P / \partial x_1 & \partial y_P / \partial x_2 & \cdots & \partial y_P / \partial x_P
    \end{bmatrix}    (3.143)

For a monotonic linear vector transformation such as

    y = Hx    (3.144)

the pdf of y becomes

    f_Y(y) = |J|^{-1}\, f_X(H^{-1} y)    (3.145)

where |J| is the Jacobian of the transformation.

Example 3.12 The input-output relation of a P × P linear transformation matrix H is given by

    y = Hx    (3.146)

The Jacobian of the linear transformation H is |H|.
Assume that the input x is a zero-mean Gaussian P-variate process with covariance matrix \Sigma_{xx} and probability density function

    f_X(x) = \frac{1}{(2\pi)^{P/2} |\Sigma_{xx}|^{1/2}} \exp\left(-\frac{1}{2} x^T \Sigma_{xx}^{-1} x\right)    (3.147)

From Equations (3.145)-(3.147), the pdf of the output y is given by

    f_Y(y) = \frac{|H|^{-1}}{(2\pi)^{P/2} |\Sigma_{xx}|^{1/2}} \exp\left(-\frac{1}{2} y^T H^{-T} \Sigma_{xx}^{-1} H^{-1} y\right)
           = \frac{|H|^{-1}}{(2\pi)^{P/2} |\Sigma_{xx}|^{1/2}} \exp\left(-\frac{1}{2} y^T \Sigma_{yy}^{-1} y\right)    (3.148)

where \Sigma_{yy} = H \Sigma_{xx} H^T and |\Sigma_{yy}|^{1/2} = |H|\, |\Sigma_{xx}|^{1/2}. Note that a linear transformation of a Gaussian process yields another Gaussian process.
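A small numerical check of Example 3.12, using an arbitrary (assumed invertible) matrix H and a made-up input covariance, confirms that the sample covariance of y = Hx approaches \Sigma_{yy} = H \Sigma_{xx} H^T:

    import numpy as np

    rng = np.random.default_rng(0)
    P = 3

    # Hypothetical symmetric positive definite input covariance Sigma_xx.
    A = rng.standard_normal((P, P))
    cov_x = A @ A.T + P * np.eye(P)
    H = rng.standard_normal((P, P))  # transformation matrix, assumed invertible

    # Zero-mean Gaussian input vectors, transformed row-wise: y = H x.
    x = rng.multivariate_normal(np.zeros(P), cov_x, size=500_000)
    y = x @ H.T

    # Compare the sample covariance of y with Sigma_yy = H Sigma_xx H^T.
    cov_y = H @ cov_x @ H.T
    print(np.abs(np.cov(y.T) - cov_y).max())  # small; shrinks as samples grow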
3.7 Summary

The theory of statistical processes is central to the development of signal processing algorithms. We began this chapter with basic definitions of deterministic signals, random signals and random processes. A random process generates random signals, and the collection of all signals that can be generated by a random process is the space of the process. Probabilistic models and statistical measures, originally developed for random variables, were extended to model random signals. Although random signals are completely described in terms of probabilistic models, for many applications it may be sufficient to characterise a process in terms of a set of relatively simple statistics such as the mean, the autocorrelation function, the covariance and the power spectrum.
Much of the theory and application of signal processing is concerned with the identification, extraction, and utilisation of structures and patterns in a signal process. The correlation and its Fourier transform, the power spectrum, are particularly important because they can be used to identify patterns in a stochastic process. We considered the concepts of stationary, ergodic stationary and non-stationary processes.
The concept of a stationary process is central to the theory of linear time-invariant systems; furthermore, even non-stationary processes can be modelled by a chain of stationary subprocesses, as described in Chapter 5 on hidden Markov models. For signal processing applications, a number of useful pdfs were considered, including the Gaussian, the mixture Gaussian, the Markov and the Poisson processes. These pdf models are employed extensively in the remainder of this book. Signal processing normally involves the filtering or transformation of an input signal to an output signal. We derived general expressions for the pdf of the output of a system in terms of the pdf of the input.