A common detection problem in array processing is to determine whether a signal is present ($\mathcal{H}_1$) or not ($\mathcal{H}_0$) in the array output. In this case, the optimal detector relies on filtering the array output with a matched filter having an impulse response based on the assumed signal. Letting the signal under $\mathcal{H}_1$ be denoted simply by $s(l)$, the optimal detector consists of
$$\sum_{l=0}^{L-1} X(l)\,s(l) \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \gamma$$
or, equivalently, comparing the output at time $L-1$ of the matched filter $h(l) = s(L-1-l)$ with the threshold $\gamma$. The false-alarm and detection probabilities are given by
$$P_F = Q\!\left(\frac{\gamma}{\sigma\sqrt{E}}\right), \qquad P_D = Q\!\left(\frac{\gamma - E}{\sigma\sqrt{E}}\right),$$
where $E = \sum_{l=0}^{L-1} s^2(l)$ is the signal energy and $\sigma^2$ is the noise variance. The accompanying figure displays the probability of detection as a function of the signal-to-noise ratio for several values of false-alarm probability. Given an estimate of the expected signal-to-noise ratio, these curves can be used to assess the trade-off between the false-alarm and detection probabilities.
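A minimal Python sketch of this detector follows, assuming a hypothetical 100-sample sinusoidal signal and unit-variance white Gaussian noise; $Q(\cdot)$ and its inverse are computed with scipy's Gaussian survival functions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

L = 100                                        # number of observations
sigma = 1.0                                    # noise standard deviation
s = np.sin(2 * np.pi * 0.05 * np.arange(L))    # assumed signal s(l)
E = np.sum(s**2)                               # signal energy

# Choose the threshold for a desired false-alarm probability:
# P_F = Q(gamma / (sigma sqrt(E)))  =>  gamma = sigma sqrt(E) Q^{-1}(P_F)
P_F = 1e-2
gamma = sigma * np.sqrt(E) * norm.isf(P_F)

# Predicted detection probability: P_D = Q((gamma - E) / (sigma sqrt(E)))
P_D = norm.sf((gamma - E) / (sigma * np.sqrt(E)))
print(f"threshold = {gamma:.2f}, predicted P_D = {P_D:.3f}")

# One trial under H1: observe X(l) = s(l) + N(l) and compare the
# matched-filter output sum_l X(l) s(l) against the threshold.
X = s + sigma * rng.standard_normal(L)
statistic = np.dot(X, s)
print("signal present" if statistic > gamma else "noise only")
```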
The important parameter determining detector performance derived in this example is the signal-to-noise ratio $E/\sigma^2$: the larger it is, the smaller the false-alarm probability becomes for a given detection probability (generally speaking). Signal-to-noise ratios can be measured in many different ways. For example, one measure might be the ratio of the rms signal amplitude to the rms noise amplitude. Note that the one important for the detection problem is much different. The signal portion is the sum of the squared signal values over the entire set of observed values - the signal energy $E$; the noise portion is the variance of each noise component - the noise power $\sigma^2$. Thus, energy can be increased in two ways that increase the signal-to-noise ratio: the signal can be made larger or the observations can be extended to encompass a larger number of values.
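This distinction is easy to check numerically. The sketch below (using the same hypothetical signal as above) computes the energy form of the signal-to-noise ratio, $E/\sigma^2$, and shows both routes to increasing it: doubling the amplitude quadruples it, while doubling the observation length doubles it.

```python
import numpy as np

sigma = 1.0                                    # noise standard deviation
s = np.sin(2 * np.pi * 0.05 * np.arange(100))  # baseline signal

def energy_snr(signal, sigma):
    """Detection-relevant SNR: signal energy over noise power."""
    return np.sum(signal**2) / sigma**2

print(energy_snr(s, sigma))              # baseline
print(energy_snr(2 * s, sigma))          # doubled amplitude: 4x the SNR
print(energy_snr(np.tile(s, 2), sigma))  # doubled duration: 2x the SNR
```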
To illustrate this point, two signals having the same energy are shown in the accompanying figure. When these signals are viewed in the presence of additive noise, the signal on the left is visible because its amplitude is larger; the one on the right is much more difficult to discern. The instantaneous signal-to-noise ratio - the ratio of signal amplitude to average noise amplitude - is the important visual cue. However, the kind of signal-to-noise ratio that determines detection performance belies the eye. The matched filter outputs have similar maximal values, indicating that total signal energy rather than amplitude determines the performance of a matched-filter detector.
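The figure's signals are not reproduced here, but a stand-in experiment makes the same point: a short, large-amplitude pulse and a long, small-amplitude pulse of identical energy (both hypothetical) produce comparable matched-filter peaks despite very different visibility in noise.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0

pulse_short = np.zeros(200)
pulse_short[:25] = 2.0       # energy = 25 * 2^2 = 100
pulse_long = np.zeros(200)
pulse_long[:100] = 1.0       # energy = 100 * 1^2 = 100
assert np.isclose(np.sum(pulse_short**2), np.sum(pulse_long**2))

for s in (pulse_short, pulse_long):
    x = s + sigma * rng.standard_normal(s.size)   # noisy observations
    out = np.correlate(x, s, mode="full")         # matched filtering
    print(f"peak matched-filter output: {out.max():.1f}")
```

Both peaks cluster around the common signal energy (100), even though only the short, large-amplitude pulse stands out visually against the noise.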
The optimal detection paradigm for the additive, white Gaussian noise problem has a relatively simple solution: construct FIR filters whose unit-sample responses are related to the presumed signals and compare the filtered outputs with a threshold. We may well wonder which assumptions made in this problem are most questionable in "real-world" applications. Noise is additive in most cases. In many situations, the additive noise present in observed data is Gaussian. Because of the Central Limit Theorem, if numerous noise sources impinge on a measuring device, their superposition will be Gaussian to a great extent. As we know from the discussion on the Central Limit Theorem, glibly appealing to the Central Limit Theorem is not without hazards; the non-Gaussian detection problem will be discussed in some detail later.

Interestingly, the weakest assumption is the "whiteness" of the noise. Note that the observation sequence is obtained as a result of sampling the sensor outputs. Assuming white noise samples does not mean that the continuous-time noise was white. White noise in continuous time has infinite variance and cannot be sampled; discrete-time white noise has a finite variance with a constant power spectrum. The Sampling Theorem suggests that a signal is represented accurately by its samples only if we choose a sampling frequency commensurate with the signal's bandwidth. One should note that fidelity of representation does not mean that the sample values are independent. In most cases, satisfying the Sampling Theorem means that the samples are correlated. As shown in Sampling and Random Sequences, the correlation function of sampled noise equals samples of the original correlation function. Writing $K_N(\tau)$ for the continuous-time noise correlation function and $T_s$ for the sampling interval, the sampled noise is white only when $K_N(kT_s) = 0$ for $k \neq 0$: the samples of the correlation function at locations other than the origin must all be zero. While some correlation functions have this property, many examples satisfy the Sampling Theorem but do not yield uncorrelated samples. In many practical situations, undersampling the noise will reduce inter-sample correlation. Thus, we obtain uncorrelated samples either by deliberately undersampling, which wastes signal energy, or by imposing anti-aliasing filters that have a bandwidth larger than the signal and sampling at the signal's Nyquist rate. Since the noise power spectrum usually extends to higher frequencies than the signal, this intentional undersampling can result in larger noise variance. In either case, by trying to make the problem at hand match the solution, we are actually reducing performance! Rather than working around it, we need a direct approach to attacking the correlated-noise issue that arises in virtually all sampled-data detection problems.
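The correlation of faithfully sampled noise is easy to demonstrate. The sketch below (all parameters assumed for illustration) band-limits white noise with a lowpass "anti-aliasing" filter, samples it above the noise's Nyquist rate, and estimates the autocorrelation of the samples; the nonzero off-origin values show that satisfying the Sampling Theorem leaves the samples correlated.

```python
import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(2)
white = rng.standard_normal(100_000)   # wideband noise at the simulation rate

# Lowpass anti-aliasing filter; cutoff at 0.1 of the simulation Nyquist frequency.
h = firwin(numtaps=101, cutoff=0.1)
band_limited = lfilter(h, 1.0, white)

# Keep every 2nd value: this still oversamples the band-limited noise
# (its Nyquist rate would allow keeping only every 10th value).
samples = band_limited[::2]

# Estimate the normalized autocorrelation at the first few lags.
r0 = np.mean(samples * samples)
for k in range(5):
    rk = np.mean(samples[: samples.size - k] * samples[k:])
    print(f"lag {k}: normalized correlation {rk / r0:+.3f}")
```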