Many signals are either partly or wholly stochastic, or random. Important examples include human speech, vibration in machines, and CDMA communication signals. Given the ever-present noise in electronic systems, it can be argued that almost all signals are at least partly stochastic. Such signals may have a distinct average spectral structure that reveals important information (such as for speech recognition or early detection of damage in machinery). Spectrum analysis of any single block of data using window-based deterministic spectrum analysis, however, produces a random spectrum that may be difficult to interpret. For such situations, the classical statistical spectrum estimation methods described in this module can be used.
The goal in classical statistical spectrum analysis is to estimate P_x(ω), the power spectral density (PSD) across frequency of the stochastic signal x(n). That is, the goal is to find the expected (mean, or average) energy density of the signal as a function of frequency. (For zero-mean signals, this equals the variance of each frequency sample.) Since the spectrum of each block of signal samples is itself random, we must average the squared spectral magnitudes over a number of blocks of data to estimate the mean. There are two main classical approaches: the periodogram and the auto-correlation methods.
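As a quick illustration of this averaging idea, the following minimal numpy sketch (the block length, noise variance, and number of blocks are arbitrary assumptions, not values from the text) shows that averaging the squared spectral magnitudes of many blocks of zero-mean white noise converges to a flat PSD at the noise variance:

```python
import numpy as np

# Minimal sketch (assumed parameters): estimate the mean squared spectral
# magnitude of zero-mean white noise by averaging over many blocks.
rng = np.random.default_rng(0)
N = 64           # block length (assumption)
K = 2000         # number of independent blocks to average (assumption)
sigma2 = 2.0     # noise variance (assumption)

x = rng.normal(scale=np.sqrt(sigma2), size=(K, N))
# Average |DFT|^2 / N over blocks; for white noise this approaches the
# variance sigma2 at every frequency, i.e. a flat PSD.
psd = np.mean(np.abs(np.fft.fft(x, axis=1))**2, axis=0) / N
```

With K = 1 the same computation is just the raw squared spectrum of one block, which fluctuates wildly about the true PSD; the averaging is what makes the estimate usable.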
The periodogram method divides the signal into a number of shorter (and often overlapped) blocks of data, computes the squared magnitude of the windowed (and usually zero-padded) DFT, X_i(ω_k), of each block, and averages them to estimate the power spectral density. That is, the squared magnitudes of the DFTs of L possibly overlapped length-N windowed blocks of signal (each probably with zero-padding) are averaged to estimate the power spectral density:

P̂_x(ω_k) = (1/L) Σ_{i=1}^{L} |X_i(ω_k)|²

For a fixed total number of samples, this introduces a tradeoff: larger individual data blocks provide better frequency resolution due to the use of a longer window, but there are then fewer blocks to average, so the estimate has higher variance and appears more noisy. The best tradeoff depends on the application. Overlapping blocks by a factor of two to four increases the number of averages and reduces the variance, but since the same data is being reused, still more overlapping does not further reduce the variance. As with any window-based spectrum estimation procedure, the window function introduces broadening and sidelobes into the power spectrum estimate. That is, the periodogram produces an estimate of the spectrum of the windowed signal, not of P_x(ω) itself.
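The periodogram procedure above can be sketched as follows. The block length, step size, window choice, and zero-padded FFT length here are illustrative assumptions (they mirror the 64-sample blocks and 16-sample step discussed later), not values prescribed by the method itself:

```python
import numpy as np

def periodogram_average(x, block_len=64, step=16, nfft=1024):
    """Average squared magnitudes of windowed, zero-padded DFTs of
    overlapped blocks to estimate the power spectral density.
    block_len/step/nfft defaults are illustrative assumptions."""
    window = np.hanning(block_len)
    # Normalize by the window energy so the estimate is unbiased
    # for unit-variance white noise.
    norm = np.sum(window**2)
    starts = list(range(0, len(x) - block_len + 1, step))
    psd = np.zeros(nfft)
    for s in starts:
        block = x[s:s + block_len] * window   # window each block
        psd += np.abs(np.fft.fft(block, nfft))**2  # zero-padded DFT
    return psd / (len(starts) * norm)
```

For unit-variance white noise input, the returned estimate is approximately flat at 1; a longer `block_len` sharpens the frequency resolution but, for fixed data length, leaves fewer blocks to average.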
The first example figure shows the non-negative frequencies of the DFT (zero-padded to 1024 total samples) of 64 samples of a real-valued stochastic signal. With no averaging, the power spectrum is very noisy and difficult to interpret, other than noting a significant reduction in spectral energy above about half the Nyquist frequency. Various peaks and valleys appear in the lower frequencies, but it is impossible to say from this figure whether they represent actual structure in the power spectral density (PSD) or simply random variation in this single realization. The second figure shows the same frequencies of a length-1024 DFT of a length-1024 signal. While the frequency resolution has improved, there is still no averaging, so it remains difficult to understand the power spectral density of this signal. Certain small peaks in frequency might represent narrowband components in the spectrum, or may just be random noise peaks. The third figure shows a power spectral density computed by averaging the squared magnitudes of length-1024 zero-padded DFTs of 508 length-64 blocks of data (overlapped by a factor of four, or a 16-sample step between blocks). While the frequency resolution corresponds to that of a length-64 truncation window, the averaging greatly reduces the variance of the spectral estimate and allows the user to reliably conclude that the signal consists of lowpass broadband noise with a flat power spectrum up to half the Nyquist frequency, with a stronger narrowband frequency component at around 0.65 radians.
The averaging necessary to estimate a power spectral density can be performed in the discrete-time domain, rather than in frequency, using the auto-correlation method. The squared magnitude of the spectrum, |X(ω)|² = X(ω)X*(ω), corresponds in the discrete-time domain, from the DTFT multiplication and conjugation properties, to the signal convolved with the time-reverse of itself, or its auto-correlation

r(n) = Σ_k x(k) x*(k + n)

We can thus compute the squared magnitude of the spectrum of a signal by computing the DFT of its auto-correlation. For stochastic signals, the power spectral density is an expectation, or average, and by linearity of expectation can be found by transforming the average of the auto-correlation. For a finite block of N signal samples, the average of the auto-correlation values, c(n), is

c(n) = (1/(N − |n|)) Σ_{k=0}^{N−1−|n|} x(k) x(k + n)

Note that with increasing lag n, fewer values are averaged, so they introduce more noise into the estimated power spectrum. By windowing the auto-correlation before transforming it to the frequency domain, a less noisy power spectrum is obtained, at the expense of less resolution. The multiplication property of the DTFT shows that the windowing smooths the resulting power spectrum via convolution with the DTFT W(ω) of the window:

Ŝ(ω) = DTFT{w(n) c(n)} = (1/2π) C(ω) * W(ω)

This yields another important interpretation of how the auto-correlation method works: it estimates the power spectral density by averaging the power spectrum over nearby frequencies, through convolution with the window function's transform, to reduce variance. Just as with the periodogram approach, there is always a variance vs. resolution tradeoff. The periodogram and the auto-correlation method give similar results for a similar amount of averaging; the user should simply note that in the periodogram case, the window introduces smoothing of the spectrum via frequency convolution before squaring the magnitude, whereas the auto-correlation method convolves the squared magnitude with W(ω).
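The auto-correlation method can be sketched as follows. The maximum lag, window choice, and FFT length are illustrative assumptions; the 1/(N − |n|) normalization matches the averaged auto-correlation described above:

```python
import numpy as np

def autocorr_psd(x, max_lag=32, nfft=1024):
    """Auto-correlation method sketch: average the auto-correlation,
    window it, and transform to frequency. max_lag/nfft are assumptions."""
    N = len(x)
    # Averaged auto-correlation: c(n) = 1/(N-|n|) sum_k x(k) x(k+n),
    # computed for lags -max_lag..max_lag (real-valued signal assumed).
    lags = np.arange(-max_lag, max_lag + 1)
    c = np.array([np.dot(x[:N - abs(n)], x[abs(n):]) / (N - abs(n))
                  for n in lags])
    # Window the auto-correlation: trades resolution for lower variance.
    cw = c * np.hanning(len(c))
    # Arrange so lag 0 sits at index 0 (negative lags wrap to the end),
    # then take the DFT; the result is real for a symmetric sequence.
    return np.real(np.fft.fft(np.roll(cw, -max_lag), nfft))
```

For unit-variance white noise the estimate is again approximately flat at 1. Reducing `max_lag` (or using a narrower lag window) smooths the spectrum further, at the cost of frequency resolution, which is exactly the variance vs. resolution tradeoff described above.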