Channel Equalization and Blind Deconvolution (Vaseghi, Advanced Digital Signal Processing and Noise Reduction), page 6
The Bayesian estimate of the channel input given the equalizer output can be expressed in a general form as

\hat{x}(m) = \arg\min_{\hat{x}(m)} \int_X C(x(m), \hat{x}(m)) \, f_{X|Z}(x(m) \mid z(m)) \, dx(m)    (15.97)

where C(x(m), \hat{x}(m)) is a cost function and f_{X|Z}(x(m)|z(m)) is the posterior pdf of the channel input signal. The choice of the cost function determines the type of the estimator, as described in Chapter 4. Using a uniform cost function in Equation (15.97) yields the maximum a posteriori (MAP) estimate

\hat{x}^{MAP}(m) = \arg\max_{x(m)} f_{X|Z}(x(m) \mid z(m)) = \arg\max_{x(m)} f_E(z(m) - x(m)) \, P_X(x(m))    (15.98)

Now, as an example, consider an M-ary pulse amplitude modulation system, and let {a_i, i = 1, ..., M} denote the set of M pulse amplitudes with a probability mass function

P_X(x(m)) = \sum_{i=1}^{M} P_i \, \delta(x(m) - a_i)    (15.99)

The pdf of the equalizer output z(m) can be expressed as the mixture pdf

f_Z(z(m)) = \sum_{i=1}^{M} P_i \, f_E(z(m) - a_i)    (15.100)

The posterior density of the channel input is

P_{X|Z}(x(m) = a_i \mid z(m)) = \frac{1}{f_Z(z(m))} \, f_E(z(m) - a_i) \, P_X(x(m) = a_i)    (15.101)

and the MAP estimate is obtained from

\hat{x}^{MAP}(m) = \arg\max_{a_i} \left( f_E(z(m) - a_i) \, P_X(x(m) = a_i) \right)    (15.102)

Note that the classification of the continuous-valued equalizer output z(m) into one of M discrete channel input symbols is basically a non-linear process.
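As an illustration of Equations (15.101) and (15.102), the following minimal sketch (not from the book; the function names, the 4-ary alphabet, and the Gaussian model for the convolutional noise pdf f_E are assumptions for the example) classifies one equalizer output sample into a symbol:

```python
import numpy as np

def map_symbol(z, amplitudes, priors, noise_pdf):
    # Eq. (15.102): pick the symbol maximising f_E(z - a_i) * P(a_i);
    # the normalised scores are the posteriors of Eq. (15.101).
    scores = np.array([noise_pdf(z - a) * p for a, p in zip(amplitudes, priors)])
    return amplitudes[int(np.argmax(scores))], scores / scores.sum()

# Assumed example: 4-ary PAM, equiprobable symbols, Gaussian convolutional noise
sigma_e = 0.4
gauss = lambda e: np.exp(-e**2 / (2 * sigma_e**2)) / (np.sqrt(2 * np.pi) * sigma_e)
symbols = [-3.0, -1.0, 1.0, 3.0]
sym, post = map_symbol(0.8, symbols, [0.25] * 4, gauss)
print(sym)   # -> 1.0, the amplitude nearest to z under equal priors
```

With equal priors the Gaussian likelihood is largest for the amplitude closest to z, which is why the MAP rule degenerates to a nearest-symbol decision in that case.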
Substitution of the zero-mean Gaussian model for the convolutional noise e(m) in Equation (15.102) yields

\hat{x}^{MAP}(m) = \arg\max_{a_i} \left( P_X(x(m) = a_i) \exp\left( -\frac{[z(m) - a_i]^2}{2\sigma_e^2} \right) \right)    (15.103)

Note that when the symbols are equiprobable, the MAP estimate reduces to a simple threshold decision device. Figure 15.13 shows a channel equalizer followed by an M-level quantiser. In this system, the output of the equalizer filter is passed to an M-ary decision circuit. The decision device, which is essentially an M-level quantiser, classifies the channel output into one of M valid symbols. The output of the decision device is taken as an internally generated desired signal to direct the equalizer adaptation.

15.5.2 Equalization of a Binary Digital Channel

Consider a binary PAM communication system with an input symbol alphabet {a_0, a_1} and symbol probabilities P(a_0) = P_0 and P(a_1) = P_1 = 1 - P_0. The pmf of the amplitude of the channel input signal can be expressed as

P(x(m)) = P_0 \delta(x(m) - a_0) + P_1 \delta(x(m) - a_1)    (15.104)

Assume that at the output of the linear adaptive equalizer in Figure 15.13, the convolutional noise v(m) is a zero-mean Gaussian process with variance \sigma_v^2.
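The decision-directed adaptation described above can be sketched as follows. This is a minimal illustration, not the book's implementation: the binary alphabet, FIR channel, tap count, and step size are all assumptions chosen for the example. The quantiser output acts as the internally generated desired signal that drives an LMS update.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(z, symbols):
    # M-level quantiser: with equiprobable symbols the MAP rule of
    # Eq. (15.103) reduces to picking the nearest valid symbol.
    symbols = np.asarray(symbols)
    return symbols[np.argmin(np.abs(symbols - z))]

# Assumed setup: binary symbols through a mild FIR channel (no training data).
symbols = [-1.0, 1.0]
x = rng.choice(symbols, size=5000)                      # channel input
y = np.convolve(x, [1.0, 0.25], mode="full")[:len(x)]   # channel output

n_taps, mu = 7, 0.01
w = np.zeros(n_taps)
w[0] = 1.0                                              # leading-spike initialisation
sq_err = []
for m in range(n_taps - 1, len(y)):
    u = y[m - n_taps + 1 : m + 1][::-1]                 # equalizer input vector
    z = w @ u                                           # equalizer output
    d = quantize(z, symbols)                            # internally generated desired signal
    e = d - z
    w += mu * e * u                                     # LMS update driven by the decision
    sq_err.append(e * e)

print(np.mean(sq_err[:200]) > np.mean(sq_err[-200:]))   # error power decreases
```

Because the assumed channel introduces only mild intersymbol interference, the early decisions are already reliable, and the error power falls as the equalizer converges.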
Therefore the pdf of the equalizer output z(m) = x(m) + v(m) is a mixture of two Gaussian pdfs and can be described as

f_Z(z(m)) = \frac{P_0}{\sqrt{2\pi}\,\sigma_v} \exp\left( -\frac{[z(m) - a_0]^2}{2\sigma_v^2} \right) + \frac{P_1}{\sqrt{2\pi}\,\sigma_v} \exp\left( -\frac{[z(m) - a_1]^2}{2\sigma_v^2} \right)    (15.105)

The MAP estimate of the channel input signal is

\hat{x}(m) =
  a_0,  if  \frac{P_0}{\sqrt{2\pi}\,\sigma_v} \exp\left( -\frac{[z(m) - a_0]^2}{2\sigma_v^2} \right) > \frac{P_1}{\sqrt{2\pi}\,\sigma_v} \exp\left( -\frac{[z(m) - a_1]^2}{2\sigma_v^2} \right)
  a_1,  otherwise    (15.106)

For the case when the channel alphabet consists of a_0 = -a, a_1 = a and P_0 = P_1, the MAP estimator is identical to the signum function sgn(z(m)), and the error signal is given by

e(m) = z(m) - \mathrm{sgn}(z(m)) \, a    (15.107)

[Figure 15.14: Comparison of the error functions produced by the hard non-linearity of the sign function of Equation (15.107) and the soft non-linearity of Equation (15.108), shown for \sigma = 10^{-2} a and in the limit \sigma \Rightarrow 0.]

Figure 15.14 shows the error signal as a function of z(m). An undesirable property of a hard non-linearity, such as the sgn(·) function, is that it produces a large error signal at those instances when z(m) is around zero, where a decision based on the sign of z(m) is most likely to be incorrect. A large error signal based on an incorrect decision would have an unsettling effect on the convergence of the adaptive equalizer.
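The comparison drawn in Figure 15.14 can also be made numerically. A minimal sketch (the values of a and σ are assumptions for the example); it uses the fact that the ratio appearing in the soft non-linearity of Equation (15.108) equals tanh(a z(m)/σ²):

```python
import numpy as np

def hard_error(z, a):
    # Eq. (15.107): e(m) = z(m) - sgn(z(m)) * a
    return z - np.sign(z) * a

def soft_error(z, a, sigma):
    # Eq. (15.108); (exp(2az/s^2) - 1) / (exp(2az/s^2) + 1) = tanh(a*z/s^2)
    return z - a * np.tanh(a * z / sigma**2)

a, sigma = 1.0, 0.5
z = 0.01                                   # ambiguous sample near the threshold
print(abs(hard_error(z, a)))               # ~ a: a large, possibly misleading error
print(abs(soft_error(z, a, sigma)))        # small: adaptation is barely disturbed

z = 1.6                                    # confident sample, far from the threshold
print(hard_error(z, a), soft_error(z, a, sigma))   # both ~ 0.6: the two rules agree
```

Near the decision threshold the hard rule commits fully to an unreliable sign decision, while the soft rule shrinks its symbol estimate toward zero; away from the threshold the two coincide.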
It is desirable to have an error function that produces small error signals when z(m) is around zero. Nowlan and Hinton proposed a soft non-linearity of the following form:

e(m) = z(m) - \frac{e^{2az(m)/\sigma^2} - 1}{e^{2az(m)/\sigma^2} + 1} \, a    (15.108)

The error e(m) is small when the magnitude of z(m) is small, and large when the magnitude of z(m) is large.

15.6 Equalization Based on Higher-Order Statistics

The second-order statistics of a random process, namely the autocorrelation function or its Fourier transform, the power spectrum, are central to the development of linear estimation theory, and form the basis of most statistical signal processing methods such as Wiener filters and linear predictive models. An attraction of the correlation function is that a Gaussian process of known mean vector can be completely described in terms of its covariance matrix, and many random processes can be well characterised by Gaussian or mixture-Gaussian models.
A shortcoming of second-order statistics is that they do not include the phase characteristics of the process. Therefore, given the channel output, it is not possible to estimate the channel phase from the second-order statistics. Furthermore, as a Gaussian process of known mean depends entirely on the autocovariance function, it follows that blind deconvolution, based on a Gaussian model of the channel input, cannot estimate the channel phase.

Higher-order statistics, and the probability models based on them, can model both the magnitude and the phase characteristics of a random process. In this section, we consider blind deconvolution based on higher-order statistics and their Fourier transforms, known as the higher-order spectra. The prime motivation for using higher-order statistics is their ability to model the phase characteristics.
Further motivations are the potential of higher-order statistics to model channel non-linearities, and to estimate a non-Gaussian signal observed in a high level of Gaussian noise.

15.6.1 Higher-Order Moments, Cumulants and Spectra

The kth-order moment of a random variable X is defined as

m_k = E[x^k] = (-j)^k \left. \frac{\partial^k \Phi_X(\omega)}{\partial \omega^k} \right|_{\omega=0}    (15.109)

where \Phi_X(\omega) is the characteristic function of the random variable X, defined as

\Phi_X(\omega) = E[\exp(j\omega x)]    (15.110)

From Equations (15.109) and (15.110), the first moment of X is m_1 = E[x], the second moment of X is m_2 = E[x^2], and so on. The joint kth-order moment (k = k_1 + k_2) of two random variables X_1 and X_2 is defined as

E[x_1^{k_1} x_2^{k_2}] = (-j)^{k_1 + k_2} \left. \frac{\partial^{k_1} \partial^{k_2} \Phi_{X_1 X_2}(\omega_1, \omega_2)}{\partial \omega_1^{k_1} \partial \omega_2^{k_2}} \right|_{\omega_1 = \omega_2 = 0}    (15.111)

and in general the joint kth-order moment of N random variables is defined as

m_k = E[x_1^{k_1} x_2^{k_2} \cdots x_N^{k_N}] = (-j)^k \left. \frac{\partial^k \Phi(\omega_1, \omega_2, \ldots, \omega_N)}{\partial \omega_1^{k_1} \partial \omega_2^{k_2} \cdots \partial \omega_N^{k_N}} \right|_{\omega_1 = \omega_2 = \cdots = \omega_N = 0}    (15.112)
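The moment definitions above are easy to check by simulation. A small sketch (the Gaussian test distribution and sample size are assumptions for the example), estimating m_1, m_2 and m_4 from samples:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
x = rng.normal(0.0, sigma, size=200_000)

# Sample estimates of the moments m_k = E[x^k] of Eq. (15.109)
m1, m2, m4 = np.mean(x), np.mean(x**2), np.mean(x**4)
print(m1)   # ~ 0
print(m2)   # ~ sigma^2 = 4
print(m4)   # ~ 3 * sigma^4 = 48 for a zero-mean Gaussian
```

The fourth moment illustrates a property used later in the section: for a Gaussian variable, all moments are fixed by the mean and variance alone (here m_4 = 3σ⁴).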
where k = k_1 + k_2 + ... + k_N and the joint characteristic function is

\Phi(\omega_1, \omega_2, \ldots, \omega_N) = E[\exp(j(\omega_1 x_1 + \omega_2 x_2 + \cdots + \omega_N x_N))]    (15.113)

Now the higher-order moments can be applied to the characterization of discrete-time random processes. The kth-order moment of a random process x(m) is defined as

m_x(\tau_1, \tau_2, \ldots, \tau_{k-1}) = E[x(m) \, x(m + \tau_1) \, x(m + \tau_2) \cdots x(m + \tau_{k-1})]    (15.114)

Note that the second-order moment E[x(m) x(m + \tau)] is the autocorrelation function.

Cumulants

Cumulants are similar to moments; the difference is that the moments of a random process are derived from the characteristic function \Phi_X(\omega), whereas the cumulant generating function C_X(\omega) is defined as the logarithm of the characteristic function:

C_X(\omega) = \ln \Phi_X(\omega) = \ln E[\exp(j\omega x)]    (15.115)

Using a Taylor series expansion of the term E[\exp(j\omega x)] in Equation (15.115), the cumulant generating function can be expanded as

C_X(\omega) = \ln\left( 1 + m_1 (j\omega) + \frac{m_2}{2!} (j\omega)^2 + \frac{m_3}{3!} (j\omega)^3 + \cdots + \frac{m_n}{n!} (j\omega)^n + \cdots \right)    (15.116)

where m_k = E[x^k] is the kth moment of the random variable x.
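The cumulant generating function of Equation (15.115) can be probed empirically. A sketch under assumed parameters: estimate C_X(ω) = ln E[exp(jωx)] from samples and recover the first two cumulants by central differences at ω = 0. For a Gaussian test variable ln Φ_X is exactly quadratic in ω, so the finite differences are exact up to sampling noise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=100_000)   # assumed test distribution

def log_cf(w):
    # Empirical cumulant generating function C_X(w) = ln E[exp(j w x)], Eq. (15.115)
    return np.log(np.mean(np.exp(1j * w * x)))

h = 0.5
# c1 = -j C'(0) and c2 = (-j)^2 C''(0), approximated by central differences
c1 = (log_cf(h) - log_cf(-h)).imag / (2 * h)
c2 = -(log_cf(h) - 2 * log_cf(0.0) + log_cf(-h)).real / h**2
print(c1)   # ~ mean = 0.5
print(c2)   # ~ variance = 1.0
```

This is only a numerical illustration of the derivative relation; for non-Gaussian distributions the step h would introduce truncation error from higher cumulants.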
The kth-order cumulant of a random variable is defined as

c_k = (-j)^k \left. \frac{\partial^k C_X(\omega)}{\partial \omega^k} \right|_{\omega=0}    (15.117)

From Equations (15.116) and (15.117), we have

c_1 = m_1    (15.118)

c_2 = m_2 - m_1^2    (15.119)

c_3 = m_3 - 3 m_1 m_2 + 2 m_1^3    (15.120)

and so on. The general form of the kth-order (k = k_1 + k_2 + ... + k_N) joint cumulant generating function is

c_{k_1 \cdots k_N} = (-j)^{k_1 + \cdots + k_N} \left. \frac{\partial^{k_1 + \cdots + k_N} \ln \Phi_X(\omega_1, \ldots, \omega_N)}{\partial \omega_1^{k_1} \cdots \partial \omega_N^{k_N}} \right|_{\omega_1 = \omega_2 = \cdots = \omega_N = 0}    (15.121)

The cumulants of a zero-mean random process x(m) are given as

c_x = E[x(m)] = m_x = 0    (mean)    (15.122)

c_x(k) = E[x(m) x(m + k)] - E[x(m)]^2 = m_x(k) - m_x^2 = m_x(k)    (covariance)    (15.123)

c_x(k_1, k_2) = m_x(k_1, k_2) - m_x [m_x(k_1) + m_x(k_2) + m_x(k_2 - k_1)] + 2(m_x)^3 = m_x(k_1, k_2)    (skewness)    (15.124)

c_x(k_1, k_2, k_3) = m_x(k_1, k_2, k_3) - m_x(k_1) m_x(k_3 - k_2) - m_x(k_2) m_x(k_3 - k_1) - m_x(k_3) m_x(k_2 - k_1)    (15.125)

and so on.
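The moment-to-cumulant relations of Equations (15.118)–(15.120) can be verified on sampled data. A sketch assuming an exponential test variable, whose cumulants are known in closed form (c_k = (k-1)! b^k for scale b):

```python
import numpy as np

rng = np.random.default_rng(0)
b = 2.0
x = rng.exponential(scale=b, size=500_000)   # assumed skewed test variable

m1, m2, m3 = np.mean(x), np.mean(x**2), np.mean(x**3)

# Cumulants from moments, Eqs. (15.118)-(15.120)
c1 = m1
c2 = m2 - m1**2
c3 = m3 - 3 * m1 * m2 + 2 * m1**3

# For an exponential variable c_k = (k-1)! * b^k, so expect 2, 4, 16
print(c1, c2, c3)
```

A deliberately skewed distribution is used here because its third cumulant is non-zero; for a Gaussian variable the same computation would return c_3 ≈ 0.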
Note that m_x(k_1, k_2, \ldots, k_N) = E[x(m) x(m + k_1) x(m + k_2) \cdots x(m + k_N)]. The general formulation of the kth-order cumulant of a random process x(m) (Rosenblatt) is defined as

c_x(k_1, k_2, \ldots, k_n) = m_x(k_1, k_2, \ldots, k_n) - m_x^G(k_1, k_2, \ldots, k_n),   for n = 3, 4, ...    (15.126)

where m_x^G(k_1, k_2, \ldots, k_n) is the kth-order moment of a Gaussian process having the same mean and autocorrelation as the random process x(m). From Equation (15.126), it follows that for a Gaussian process, the cumulants of order greater than 2 are identically zero.

Higher-Order Spectra

The kth-order spectrum of a signal x(m) is defined as the (k-1)-dimensional Fourier transform of the kth-order cumulant sequence:

C_X(\omega_1, \ldots, \omega_{k-1}) = \frac{1}{(2\pi)^{k-1}} \sum_{\tau_1 = -\infty}^{\infty} \cdots \sum_{\tau_{k-1} = -\infty}^{\infty} c_x(\tau_1, \ldots, \tau_{k-1}) \, e^{-j(\omega_1 \tau_1 + \cdots + \omega_{k-1} \tau_{k-1})}    (15.127)

For the case k = 2, the second-order spectrum is the power spectrum, given as

C_X(\omega) = \frac{1}{2\pi} \sum_{\tau = -\infty}^{\infty} c_x(\tau) \, e^{-j\omega\tau}    (15.128)

The bispectrum is defined as

C_X(\omega_1, \omega_2) = \frac{1}{(2\pi)^2} \sum_{\tau_1 = -\infty}^{\infty} \sum_{\tau_2 = -\infty}^{\infty} c_x(\tau_1, \tau_2) \, e^{-j(\omega_1 \tau_1 + \omega_2 \tau_2)}    (15.129)

and the trispectrum is

C_X(\omega_1, \omega_2, \omega_3) = \frac{1}{(2\pi)^3} \sum_{\tau_1 = -\infty}^{\infty} \sum_{\tau_2 = -\infty}^{\infty} \sum_{\tau_3 = -\infty}^{\infty} c_x(\tau_1, \tau_2, \tau_3) \, e^{-j(\omega_1 \tau_1 + \omega_2 \tau_2 + \omega_3 \tau_3)}    (15.130)

Since the term e^{j\omega\tau} is periodic with a period of 2\pi, it follows that higher-order spectra are periodic in each \omega_k with a period of 2\pi.

15.6.2 Higher-Order Spectra of Linear Time-Invariant Systems

Consider a linear time-invariant system with an impulse response sequence {h_k}, input signal x(m) and output signal y(m).
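The statement that cumulants of order greater than 2 vanish for a Gaussian process, which is what makes the bispectrum useful for detecting non-Gaussianity, can be illustrated numerically. A sketch with assumed test processes, estimating the third-order cumulant c_x(k_1, k_2) of Equation (15.124) at zero lag from white (independent-sample) data:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def third_cumulant(x, k1, k2):
    # Sample estimate of c_x(k1, k2) = E[x(m) x(m+k1) x(m+k2)]
    # for a zero-mean process (Eq. (15.124) with m_x = 0).
    kmax = max(k1, k2)
    m = np.arange(len(x) - kmax)
    return np.mean(x[m] * x[m + k1] * x[m + k2])

gauss = rng.normal(size=N)                 # Gaussian: cumulants of order > 2 vanish
skewed = rng.exponential(size=N) - 1.0     # zero-mean but skewed (non-Gaussian)

print(third_cumulant(gauss, 0, 0))    # ~ 0
print(third_cumulant(skewed, 0, 0))   # ~ 2, the third central moment of Exp(1)
```

Summing such cumulant estimates over lags with the exponential kernel of Equation (15.129) would give a bispectrum estimate; for the Gaussian process every term, and hence the bispectrum, is near zero.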