One extension of parametric estimation theory necessary for its application to array processing is the estimation of signal parameters. We assume that we observe a signal $s(\ell; \boldsymbol{\theta})$, whose characteristics are known save a few parameters $\boldsymbol{\theta}$, in the presence of noise.
Signal parameters, such as amplitude, time origin, and frequency if the signal is sinusoidal, must be determined in some way. In many cases of interest, we would find it difficult to justify a particular form for the unknown parameters' a priori density. Because of such uncertainties, the minimum mean-squared error and maximum a posteriori estimators cannot be used in many cases. The minimum mean-squared error linear estimator does not require this density, but it is most fruitfully used when the unknown parameter appears in the problem in a linear fashion (such as signal amplitude, as we shall see).
Linear minimum mean-squared error estimator
The only parameter that is linearly related to a signal is the amplitude. Consider, therefore, the problem where the observations at an array's output are modeled as
$$ r(\ell) = \theta s(\ell) + n(\ell), \qquad \ell = 0, \dots, L-1 . $$
The signal waveform $s(\ell)$ is known and its energy normalized to be unity ($\sum_{\ell} s^2(\ell) = 1$). The linear estimate of the signal's amplitude is assumed to be of the form $\widehat{\theta} = \sum_{\ell} h(\ell) r(\ell)$, where $h(\ell)$ minimizes the mean-squared error. To use the Orthogonality Principle, an inner product must be defined for scalars; little choice avails itself but multiplication as the inner product of two scalars. The Orthogonality Principle states that the estimation error must be orthogonal to all linear transformations defining the kind of estimator being sought:
$$ E\!\left[\left(\theta - \sum_{k} h(k) r(k)\right) \sum_{\ell} \tilde{h}(\ell) r(\ell)\right] = 0 \quad \text{for all } \tilde{h}(\cdot). $$
Manipulating this equation to make the universality constraint more transparent results in
$$ \sum_{\ell} \tilde{h}(\ell)\, E\!\left[\left(\theta - \sum_{k} h(k) r(k)\right) r(\ell)\right] = 0 \quad \text{for all } \tilde{h}(\cdot). $$
Written in this way, the expected value must be 0 for each value of $\ell$ to satisfy the constraint. Thus, the unit-sample response $h(\ell)$ of the estimator of the signal's amplitude must satisfy
$$ E[\theta r(\ell)] = \sum_{k} h(k) E[r(k) r(\ell)] \quad \text{for all } \ell. $$
Assuming that the signal's amplitude has zero mean and is statistically independent of the zero-mean noise, the expected values in this equation are given by
$$ E[\theta r(\ell)] = \sigma_{\theta}^2 s(\ell), \qquad E[r(k) r(\ell)] = \sigma_{\theta}^2 s(k) s(\ell) + K_n(k, \ell), $$
where $K_n(k, \ell)$ is the covariance function of the noise. The equation that must be solved for the unit-sample response $h(\ell)$ of the optimal linear MMSE estimator of signal amplitude becomes
$$ \sigma_{\theta}^2 s(\ell) = \sum_{k} h(k) \left[\sigma_{\theta}^2 s(k) s(\ell) + K_n(k, \ell)\right] \quad \text{for all } \ell. $$
This equation is easily solved once phrased in matrix notation. Letting $\mathbf{K}_n$ denote the covariance matrix of the noise, $\mathbf{s}$ the signal vector, and $\mathbf{h}$ the vector of coefficients, this equation becomes
$$ \sigma_{\theta}^2 \mathbf{s} = \left(\sigma_{\theta}^2 \mathbf{s} \mathbf{s}^{T} + \mathbf{K}_n\right) \mathbf{h}. $$
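As a numerical sanity check, this matrix equation can be solved directly with standard linear algebra. The following is a minimal NumPy sketch; the waveform shape, dimension, and variance value are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8
sigma_theta2 = 2.0                     # assumed a priori amplitude variance

# Unit-energy signal waveform (illustrative choice)
s = np.sin(2 * np.pi * np.arange(L) / L)
s /= np.linalg.norm(s)

# An arbitrary positive-definite noise covariance matrix K_n
A = rng.standard_normal((L, L))
K_n = A @ A.T + L * np.eye(L)

# Solve  sigma_theta2 * s = (sigma_theta2 * s s^T + K_n) h  for h
h = np.linalg.solve(sigma_theta2 * np.outer(s, s) + K_n, sigma_theta2 * s)

# The residual of the equation should vanish to machine precision
residual = sigma_theta2 * s - (sigma_theta2 * np.outer(s, s) + K_n) @ h
print(np.max(np.abs(residual)))
```

Because the system matrix is positive definite, a single `np.linalg.solve` call suffices; no explicit matrix inverse is needed.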
The matched filter for colored-noise problems consisted of the dot product between the vector of observations and $\mathbf{K}_n^{-1} \mathbf{s}$ (see the detector result). Assume that the solution to the linear estimation problem is proportional to the detection-theoretic one: $\mathbf{h} = c \mathbf{K}_n^{-1} \mathbf{s}$, where $c$ is a scalar constant. This proposed solution satisfies the equation; the MMSE estimate of signal amplitude corresponds to applying a matched filter to the observations with
$$ c = \frac{\sigma_{\theta}^2}{1 + \sigma_{\theta}^2 \mathbf{s}^{T} \mathbf{K}_n^{-1} \mathbf{s}}. $$
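The claim that the matched-filter-proportional vector solves the estimation equation is easy to verify numerically. A small sketch under the same notation (noise covariance $\mathbf{K}_n$, unit-energy signal $\mathbf{s}$, amplitude variance $\sigma_{\theta}^2$; all numerical values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8
sigma_theta2 = 2.0                     # assumed a priori amplitude variance

s = rng.standard_normal(L)
s /= np.linalg.norm(s)                 # unit-energy signal

A = rng.standard_normal((L, L))
K_n = A @ A.T + L * np.eye(L)          # positive-definite noise covariance

# Direct solution of  sigma_theta2 * s = (sigma_theta2 * s s^T + K_n) h
h_direct = np.linalg.solve(sigma_theta2 * np.outer(s, s) + K_n,
                           sigma_theta2 * s)

# Matched-filter-proportional solution  h = c * K_n^{-1} s
Kinv_s = np.linalg.solve(K_n, s)
c = sigma_theta2 / (1 + sigma_theta2 * (s @ Kinv_s))
h_matched = c * Kinv_s

print(np.allclose(h_direct, h_matched))   # the two solutions agree
```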
The mean-squared estimation error of signal amplitude is given by
$$ E[\epsilon^2] = E\!\left[\left(\theta - \widehat{\theta}\right)^2\right] = \sigma_{\theta}^2 \left(1 - \mathbf{s}^{T} \mathbf{h}\right). $$
Substituting the vector expression for $\mathbf{h}$ yields the result that the mean-squared estimation error equals the proportionality constant $c$ defined earlier:
$$ E[\epsilon^2] = \frac{\sigma_{\theta}^2}{1 + \sigma_{\theta}^2 \mathbf{s}^{T} \mathbf{K}_n^{-1} \mathbf{s}} = c. $$
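That the mean-squared error coincides with the proportionality constant can also be checked by simulation. A hedged sketch follows; Gaussian amplitude and noise are assumed here only to generate data (the derivation itself uses only second moments), and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
L, trials = 8, 200_000
sigma_theta2 = 2.0                     # assumed a priori amplitude variance

s = rng.standard_normal(L)
s /= np.linalg.norm(s)                 # unit-energy signal
A = rng.standard_normal((L, L))
K_n = A @ A.T + L * np.eye(L)          # positive-definite noise covariance

# Optimal linear MMSE filter  h = c * K_n^{-1} s
Kinv_s = np.linalg.solve(K_n, s)
c = sigma_theta2 / (1 + sigma_theta2 * (s @ Kinv_s))
h = c * Kinv_s

# Simulate r = theta * s + n with zero-mean, independent theta and n
theta = rng.normal(0.0, np.sqrt(sigma_theta2), trials)
noise = rng.multivariate_normal(np.zeros(L), K_n, trials)
r = theta[:, None] * s + noise

theta_hat = r @ h
empirical_mse = np.mean((theta - theta_hat) ** 2)
print(empirical_mse, c)                # empirical MSE is close to c
```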
Thus, the linear filter that produces the optimal estimate of signal amplitude is equivalent to the matched filter used to detect the signal's presence. We have found this situation to occur when estimates of unknown parameters are needed to solve the detection problem (see Detection in the Presence of Uncertainties). If we had not assumed the noise to be Gaussian, however, the detection-theoretic result would be different, but the estimator would be unchanged. To repeat, this invariance occurs because the linear MMSE estimator requires no assumptions on the noise's amplitude characteristics.
Let the noise be white so that its covariance matrix is proportional to the identity matrix ($\mathbf{K}_n = \sigma_n^2 \mathbf{I}$). The weighting factor in the minimum mean-squared error linear estimator is proportional to the signal waveform:
$$ \widehat{\theta} = \frac{\sigma_{\theta}^2}{\sigma_n^2 + \sigma_{\theta}^2} \sum_{\ell} s(\ell) r(\ell). $$
This proportionality constant depends only on the relative variances of the noise and the parameter. If the noise variance $\sigma_n^2$ can be considered to be much smaller than the a priori variance $\sigma_{\theta}^2$ of the amplitude, then this constant does not depend on these variances and equals unity. Otherwise, the variances must be known.
We find the mean-squared estimation error to be
$$ E[\epsilon^2] = \frac{\sigma_{\theta}^2}{1 + \sigma_{\theta}^2 / \sigma_n^2} = \frac{\sigma_{\theta}^2 \sigma_n^2}{\sigma_n^2 + \sigma_{\theta}^2}. $$
This error is significantly reduced from its nominal value $\sigma_{\theta}^2$ only when the variance of the noise is small compared with the a priori variance of the amplitude. Otherwise, this admittedly optimum amplitude estimate performs poorly, and we might as well have ignored the data and "guessed" that the amplitude was zero; in other words, the problem is difficult in this case.
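The white-noise special case can be sketched the same way. The weighting factor and error below follow the closed-form expressions in the text; the variance values and waveform are illustrative assumptions.

```python
import numpy as np

L = 16
sigma_n2 = 1.0                         # assumed noise variance (white noise)
sigma_theta2 = 4.0                     # assumed a priori amplitude variance

s = np.cos(2 * np.pi * np.arange(L) / L)
s /= np.linalg.norm(s)                 # unit-energy signal

# Closed-form white-noise results: weighting factor and mean-squared error
weight = sigma_theta2 / (sigma_n2 + sigma_theta2)
mse = sigma_theta2 / (1 + sigma_theta2 / sigma_n2)

# Cross-check against the general solution h = c * K_n^{-1} s
K_n = sigma_n2 * np.eye(L)
Kinv_s = np.linalg.solve(K_n, s)
c = sigma_theta2 / (1 + sigma_theta2 * (s @ Kinv_s))
h = c * Kinv_s

print(np.allclose(h, weight * s), np.isclose(c, mse))
```

With these values the weighting factor is 0.8 rather than unity: the noise variance is not negligible relative to the amplitude's a priori variance, so the estimator shrinks the matched-filter output toward zero.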