Far and away the most common decision problem in signal processing is determining which of several signals occurs in data contaminated by additive noise. Specializing to the case when one of two possible signals is present, the data models are

$\mathcal{M}_0: \; r(l) = s_0(l) + n(l), \quad 0 \le l < L$

$\mathcal{M}_1: \; r(l) = s_1(l) + n(l), \quad 0 \le l < L$

where $s_0(l)$ and $s_1(l)$ denote the known signals and $n(l)$ the additive noise.
We form the discrete-time observations into a vector: $\mathbf{r} = \operatorname{col}[r(0), \dots, r(L-1)]$. Now the models become

$\mathcal{M}_i: \; \mathbf{r} = \mathbf{s}_i + \mathbf{n}, \quad i = 0, 1$
By far the easiest detection problem to solve occurs when the noise vector consists of statistically independent, identically distributed, Gaussian random variables, what is commonly termed white Gaussian noise. The mean of white noise is usually taken to be zero (the zero-mean assumption is realistic for the detection problem: if the mean were non-zero, simply subtracting it from the observed sequence results in a zero-mean noise component) and each component's variance is $\sigma^2$. The equal-variance assumption implies the noise characteristics are unchanging throughout the entire set of observations. The probability density of the noise vector evaluated at $\mathbf{r} - \mathbf{s}_i$ equals that of a Gaussian random vector having independent components with mean $\mathbf{s}_i$:

$p_{\mathbf{n}}(\mathbf{r} - \mathbf{s}_i) = \left(\frac{1}{2\pi\sigma^2}\right)^{L/2} \exp\left\{-\frac{1}{2\sigma^2}(\mathbf{r} - \mathbf{s}_i)^T(\mathbf{r} - \mathbf{s}_i)\right\}$

The resulting detection problem is similar to the Gaussian example we previously examined, the difference here being a non-zero mean (the signal) under both models. The logarithm of the likelihood ratio becomes

$\ln \Lambda(\mathbf{r}) = \frac{1}{2\sigma^2}\left[(\mathbf{r} - \mathbf{s}_0)^T(\mathbf{r} - \mathbf{s}_0) - (\mathbf{r} - \mathbf{s}_1)^T(\mathbf{r} - \mathbf{s}_1)\right] \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \ln \eta$

and the usual simplifications yield

$\left[\mathbf{r}^T\mathbf{s}_1 - \frac{\mathbf{s}_1^T\mathbf{s}_1}{2}\right] - \left[\mathbf{r}^T\mathbf{s}_0 - \frac{\mathbf{s}_0^T\mathbf{s}_0}{2}\right] \underset{\mathcal{M}_0}{\overset{\mathcal{M}_1}{\gtrless}} \sigma^2 \ln \eta$

The model-specific components on the left side express the signal processing operations for each model. If more than two signals were assumed possible, quantities such as these would need to be computed for each signal and the largest selected.
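As a concrete illustration, this decision rule can be simulated directly. The sketch below (using hypothetical example signals and parameter values, with NumPy assumed) computes the model-specific statistics $\mathbf{r}^T\mathbf{s}_i - \mathbf{s}_i^T\mathbf{s}_i/2$ and compares their difference against the threshold $\sigma^2 \ln \eta$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example signals and parameters (assumptions for illustration)
L = 64
l = np.arange(L)
s0 = np.cos(2 * np.pi * 5 * l / L)   # signal under model M0
s1 = np.cos(2 * np.pi * 9 * l / L)   # signal under model M1
sigma2 = 0.5                         # noise variance sigma^2
eta = 1.0                            # likelihood-ratio threshold

# Simulate one observation under model M1: r = s1 + n, n white Gaussian
r = s1 + rng.normal(scale=np.sqrt(sigma2), size=L)

# Model-specific terms: r^T s_i - E_i / 2, with signal energy E_i = s_i^T s_i
stat0 = r @ s0 - 0.5 * (s0 @ s0)
stat1 = r @ s1 - 0.5 * (s1 @ s1)

# Decide M1 when the difference exceeds sigma^2 * ln(eta)
decision = "M1" if stat1 - stat0 > sigma2 * np.log(eta) else "M0"
print(decision)
```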
Each term in the computations for the optimum detector has a signal processing interpretation. When expanded, the term $\mathbf{s}_i^T\mathbf{s}_i$ equals $\sum_{l=0}^{L-1} s_i^2(l)$, the signal energy $E_i$. The remaining term, $\mathbf{r}^T\mathbf{s}_i$, is the only one involving the observations and hence constitutes the sufficient statistic for the additive white Gaussian noise detection problem. An abstract, but physically relevant, interpretation of this important quantity comes from the theory of linear vector spaces. In that context, the quantity $\mathbf{r}^T\mathbf{s}_i$ would be termed the projection of $\mathbf{r}$ onto $\mathbf{s}_i$. From the Schwarz inequality, we know that the largest value of this projection occurs when these vectors are proportional to each other. Thus, a projection measures how alike two vectors are: they are completely alike when they are parallel (proportional to each other) and completely dissimilar when orthogonal (the projection is zero). In effect, the projection operation removes those components from the observations which are orthogonal to the signal, thereby generalizing the familiar notion of filtering a signal contaminated by broadband noise. In filtering, the signal-to-noise ratio of a bandlimited signal can be drastically improved by lowpass filtering; the output would consist only of the signal and "in-band" noise. The projection serves a similar role, ideally removing those "out-of-band" components (the orthogonal ones) and retaining the "in-band" ones (those parallel to the signal).
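The projection interpretation can be checked numerically. In the sketch below (again with an assumed example signal), the observation vector is split into components parallel and orthogonal to the signal; the orthogonal component contributes nothing to the sufficient statistic $\mathbf{r}^T\mathbf{s}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example signal (an assumption for illustration)
L = 64
l = np.arange(L)
s = np.cos(2 * np.pi * 3 * l / L)

# Observations: signal plus white Gaussian noise
r = s + rng.normal(size=L)

# Split r into components parallel and orthogonal to s
s_unit = s / np.linalg.norm(s)
r_par = (r @ s_unit) * s_unit
r_orth = r - r_par

# The orthogonal component is invisible to the sufficient statistic r^T s...
print(np.isclose(r @ s, r_par @ s))   # True
# ...because the residual is orthogonal to the signal (up to roundoff)
print(np.isclose(r_orth @ s, 0.0))    # True
```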