The size of the errors encountered in the time-delay estimation
problem can be more accurately assessed by a bounding technique tailored to the problem: the Ziv-Zakai bound
(Weiss and Weinstein; Ziv and Zakai). The derivation of this
bound relies on results from detection theory
(Chazan, Zakai, and Ziv).
This result is an example of detection and estimation theory
complementing each other to advantage. Consider the detection problem in which we must distinguish the
signals $s(t-\tau)$ and $s(t-\tau-\Delta)$
while observing them in the presence of white noise
that is not necessarily Gaussian. Let hypothesis
$\mathcal{M}_0$ represent the case in which the delay, denoted by our
parameter symbol $\theta$, equals $\tau$ and
$\mathcal{M}_1$ the case in which $\theta = \tau + \Delta$. The
suboptimum test statistic
consists of estimating the delay, then determining the closest
a priori delay to the estimate.
By using this ad hoc hypothesis test as an essential
part of the derivation, the bound can apply to many situations. Furthermore, by not restricting the type of parameter estimate,
the bound applies to any estimator. The probability of error for the optimum hypothesis test (derived from the likelihood
ratio) is denoted by $P_e(\tau, \tau+\Delta)$. Assuming equally likely hypotheses, the probability
of error resulting from the ad hoc test must
be greater than that of the optimum:
$$P_e(\tau, \tau+\Delta) \le \frac{1}{2}\left[\Pr\!\left(\epsilon \ge \frac{\Delta}{2} \,\Big|\, \mathcal{M}_0\right) + \Pr\!\left(\epsilon < -\frac{\Delta}{2} \,\Big|\, \mathcal{M}_1\right)\right]$$
Here, $\epsilon$ denotes the estimation error appropriate to the
hypothesis: $\epsilon = \hat{\theta} - \tau$ under $\mathcal{M}_0$ and $\epsilon = \hat{\theta} - (\tau+\Delta)$ under $\mathcal{M}_1$.
The delay is assumed to range uniformly between 0 and
$L$. Combining this restriction with the hypothesized
delays yields bounds on both $\tau$ and $\Delta$:
$0 \le \tau < L - \Delta$ and $0 \le \Delta < L$.
Simple manipulations show that the integral of this
inequality with respect to $\tau$
over the possible range of delays is given by
$$\int_0^{L-\Delta} P_e(\tau, \tau+\Delta)\,d\tau \le \frac{L}{2}\Pr\!\left(|\epsilon| \ge \frac{\Delta}{2}\right)$$
Here again, the issue of the discrete nature of
the delay becomes a consideration; this step in the derivation implicitly assumes that the delay is continuous valued. This
approximation can be tolerated more readily as it involves integration rather than differentiation (as in the
Cramér-Rao bound).
Note that if we define
$\overline{P}(\Delta/2) = \Pr\left(|\epsilon| \ge \Delta/2\right)$, the rearranged inequality
$$\overline{P}\!\left(\frac{\Delta}{2}\right) \ge \frac{2}{L}\int_0^{L-\Delta} P_e(\tau, \tau+\Delta)\,d\tau$$
bounds $\overline{P}(\cdot)$ from below by the optimum detector's performance; $\overline{P}(\cdot)$ is the complementary distribution function
(the complementary distribution function of a
probability distribution function $P_X(x)$
is defined to be $\overline{P}_X(x) = 1 - P_X(x)$, the probability that a random variable exceeds
$x$) of the magnitude of the average estimation
error. Multiplying this inequality
by $\Delta/2$
and integrating over the range of possible delays, the result is
$$\frac{1}{2}\int_0^{L}\Delta\,\overline{P}\!\left(\frac{\Delta}{2}\right)d\Delta \ge \frac{1}{L}\int_0^{L}\Delta\int_0^{L-\Delta}P_e(\tau, \tau+\Delta)\,d\tau\,d\Delta$$
The reason for these rather obscure manipulations is
now revealed: because $\overline{P}(\cdot)$
is related to the probability distribution function of
the absolute error through $\mathcal{E}[\epsilon^2] = \frac{1}{2}\int_0^{\infty}\Delta\,\Pr\!\left(|\epsilon| \ge \frac{\Delta}{2}\right)d\Delta$, the left side of this inequality can be no larger than the mean-squared error
$\mathcal{E}[\epsilon^2]$. The general Ziv-Zakai bound for the mean-squared
estimation error of signal delay is thus expressed as
$$\mathcal{E}[\epsilon^2] \ge \frac{1}{L}\int_0^{L}\Delta\int_0^{L-\Delta}P_e(\tau, \tau+\Delta)\,d\tau\,d\Delta$$
In many cases, the optimum probability of error
does not depend on $\tau$, the time origin of the observations. This lack of
dependence is equivalent to ignoring edge effects and simplifies calculation of the bound:
$$\mathcal{E}[\epsilon^2] \ge \frac{1}{L}\int_0^{L}\Delta\,(L-\Delta)\,P_e(\Delta)\,d\Delta$$
Thus, the Ziv-Zakai bound for
time-delay estimation relates the mean-squared estimation error for delay to the probability of error incurred by the optimal
detector that is deciding whether a nonzero delay is present or not.
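As a concrete illustration, the single-integral form of the bound can be evaluated numerically for any assumed detector performance. The sketch below is minimal; the error-probability model $P_e(\Delta) = Q(a\Delta)$ and the constants in it are hypothetical choices, not part of the derivation above.

```python
import math

def q_func(x):
    """Complementary Gaussian distribution function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def zzb_second_form(pe, L, n=20000):
    """Evaluate (1/L) * integral_0^L  Delta*(L - Delta)*pe(Delta) dDelta
    by the composite trapezoidal rule; pe maps a delay offset to the
    optimum detector's error probability."""
    h = L / n
    total = 0.0
    for k in range(n + 1):
        d = k * h
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * d * (L - d) * pe(d)
    return total * h / L

# Hypothetical example: Pe(Delta) = Q(a*Delta) for an assumed constant a.
L, a = 1.0, 8.0
bound = zzb_second_form(lambda d: q_func(a * d), L)
# Since Pe never exceeds 1/2, the bound can never exceed L**2 / 12.
```

With `pe` replaced by the optimum error probability appropriate to the noise at hand, the same quadrature produces the corresponding Ziv-Zakai bound.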
To apply this bound to time-delay estimates (unbiased or not),
the optimum probability of error for the type of noise and the relative delay between the two signals must be determined.
Substituting this expression into either integral yields the Ziv-Zakai bound.
The general behavior of this bound at parameter extremes can be
evaluated in some cases. Note that the Cramér-Rao bound in this problem approaches infinity as either the noise variance
grows or the observation interval shrinks to 0 (either forces the signal-to-noise ratio to approach 0). This result is
unrealistic, as the actual delay is bounded, lying between 0 and
$L$. In this very noisy situation, one should ignore the
observations and "guess"
any reasonable
value for the delay; the resulting estimation error is smaller. The probability of error approaches
$1/2$ in this situation no matter what the delay
may be. Considering the simplified form of the
Ziv-Zakai bound, the integral in the second form becomes
$$\frac{1}{L}\int_0^{L}\Delta\,(L-\Delta)\,\frac{1}{2}\,d\Delta = \frac{L^2}{12}$$
in this extreme case.
The Ziv-Zakai bound is exactly the variance of a random variable
uniformly distributed over $[0, L]$. The Ziv-Zakai bound thus predicts the size of
mean-squared errors more accurately than does the Cramér-Rao bound.
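The zero-SNR extreme can be checked directly: an estimator that ignores the observations and always guesses the midpoint $L/2$ of the delay range attains a mean-squared error equal to the variance of a uniform random variable, matching the bound in this limit. A small sketch (the value of $L$ is arbitrary):

```python
# Mean-squared error of the "guess the midpoint" estimator for a delay
# uniformly distributed over [0, L]; L is an arbitrary illustration value.
L = 3.0
n = 200001
# Deterministic average of (theta - L/2)^2 over a fine grid of true delays.
mse = sum((k * L / (n - 1) - L / 2) ** 2 for k in range(n)) / n
uniform_variance = L**2 / 12
```

The computed `mse` agrees with `uniform_variance`, the value the Ziv-Zakai bound takes when the probability of error equals $1/2$.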
Let the noise be Gaussian of variance $\sigma^2$
and the signal have energy $E$. The probability of error resulting from the
likelihood ratio test is given by
$$P_e(\Delta) = Q\!\left(\sqrt{\frac{E\left[1-\rho(\Delta)\right]}{2\sigma^2}}\right)$$
The quantity $\rho(\Delta) = \frac{1}{E}\int s(t)\,s(t-\Delta)\,dt$
is the normalized autocorrelation function of the
signal evaluated at the delay $\Delta$.
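To make this expression concrete, the sketch below evaluates $P_e(\Delta)$ for a sampled waveform, computing $\rho(\Delta)$ directly from the signal at integer-sample delays. The Gaussian pulse, its width, and the noise variance are hypothetical choices for illustration only.

```python
import math

def q_func(x):
    """Complementary Gaussian distribution function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Hypothetical sampled signal: a Gaussian pulse (unit sample period).
s = [math.exp(-0.5 * ((n - 50) / 8.0) ** 2) for n in range(101)]
E = sum(v * v for v in s)                    # signal energy

def rho(lag):
    """Normalized autocorrelation of s at an integer delay."""
    return sum(s[n] * s[n - lag] for n in range(lag, len(s))) / E

def pe(lag, noise_var):
    """Optimum error probability for distinguishing the signal from its
    delayed version in white Gaussian noise:
    Q(sqrt(E * (1 - rho(lag)) / (2 * sigma^2)))."""
    return q_func(math.sqrt(E * (1.0 - rho(lag)) / (2.0 * noise_var)))

sigma2 = 0.5       # assumed noise variance
```

When the two signals coincide ($\Delta = 0$), $\rho = 1$ and the optimum detector can do no better than guessing, so $P_e = 1/2$; the error probability falls as the delay offset grows.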
Evaluation of the Ziv-Zakai bound for a general
signal is very difficult in this Gaussian noise case. Fortunately, the normalized autocorrelation function can be
bounded by a relatively simple expression to yield a more manageable result. The key quantity
$1-\rho(\Delta)$ in the probability of error expression can be
rewritten using Parseval's Theorem:
$$1-\rho(\Delta) = \frac{1}{2\pi E}\int_{-\infty}^{\infty}|S(\omega)|^2\left[1-\cos(\omega\Delta)\right]d\omega$$
Using the inequality $1-\cos x \le \frac{x^2}{2}$,
$1-\rho(\Delta)$ is bounded from above by
$\frac{\beta^2\Delta^2}{2}$, where
$$\beta^2 = \frac{1}{2\pi E}\int_{-\infty}^{\infty}\omega^2\,|S(\omega)|^2\,d\omega$$
is the square of the root-mean-squared (RMS) signal
bandwidth.
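By Parseval's Theorem, this frequency-domain definition of $\beta^2$ equals the time-domain ratio $\int \dot{s}^2(t)\,dt \big/ \int s^2(t)\,dt$, which is convenient to compute. A sketch for a hypothetical Gaussian pulse $s(t) = e^{-t^2/(2T^2)}$, whose RMS bandwidth works out to $\beta = 1/(T\sqrt{2})$:

```python
import math

T = 0.5                      # hypothetical pulse width parameter
dt = 1e-4
ts = [(-5 * T) + k * dt for k in range(int(10 * T / dt) + 1)]

# Gaussian pulse and its time derivative.
s = [math.exp(-t * t / (2 * T * T)) for t in ts]
sdot = [(-t / (T * T)) * math.exp(-t * t / (2 * T * T)) for t in ts]

# beta^2 = integral of sdot^2 over integral of s^2 (Parseval's Theorem);
# the sample spacing dt cancels in the ratio.
beta2 = sum(v * v for v in sdot) / sum(v * v for v in s)
beta = math.sqrt(beta2)

critical_delay = 2.0 / beta  # twice the reciprocal RMS bandwidth
```

For this pulse the numerical value of `beta2` agrees with the analytical $1/(2T^2)$, and the critical delay equals $2\sqrt{2}\,T$.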
Because $Q(\cdot)$
is a decreasing function and $1-\rho(\Delta)$ can never exceed 2, we have
$$P_e(\Delta) \ge Q\!\left(a\,\min(\Delta, \delta)\right)$$
where $a$
is a combination of all of the constants involved in
the argument of $Q(\cdot)$:
$a = \frac{\beta}{2}\sqrt{\frac{E}{\sigma^2}}$. This quantity varies with the product of the
signal-to-noise ratio $E/\sigma^2$
and the squared RMS bandwidth
$\beta^2$. The parameter
$\delta = 2/\beta$ is known as the
critical delay and is
twice the reciprocal RMS bandwidth. We can use this lower bound for the probability of error in the Ziv-Zakai bound to
produce a lower bound on the mean-squared estimation error. The integral in the second form of the bound yields the
complicated, but computable result
$$\mathcal{E}[\epsilon^2] \ge \frac{L^2}{6}\,Q(a\,d) + \frac{1}{4a^2}\,P_{\chi_3^2}\!\left(a^2 d^2\right) - \frac{2 - \left(a^2 d^2 + 2\right)e^{-a^2 d^2/2}}{3\,L\,a^3\,\sqrt{2\pi}}, \qquad d = \min(\delta, L)$$
The quantity $P_{\chi_3^2}(\cdot)$
is the probability distribution function of a $\chi^2$
random variable having three degrees of
freedom.
This distribution function has
the "closed-form" expression
$P_{\chi_3^2}(x) = 1 - 2Q(\sqrt{x}) - \sqrt{\frac{2x}{\pi}}\,e^{-x/2}$. Thus, the threshold effects in this
expression for the mean-squared estimation error depend on the relation between the critical delay and the signal duration.
In most cases, the minimum $d$ equals the critical delay
$\delta$, with the opposite choice possible for very low
bandwidth signals.
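Because the closed-form result is intricate, it is worth verifying numerically. The sketch below implements the closed form, using the stated expression for $P_{\chi_3^2}$, and compares it against direct quadrature of $\frac{1}{L}\int_0^L \Delta(L-\Delta)\,Q(a\min(\Delta,\delta))\,d\Delta$; the values of $a$, $\delta$, and $L$ are arbitrary illustration choices.

```python
import math

def q_func(x):
    """Complementary Gaussian distribution function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def chi2_3_cdf(x):
    """Closed-form distribution function of a chi-squared variate with
    three degrees of freedom: 1 - 2*Q(sqrt(x)) - sqrt(2x/pi)*exp(-x/2)."""
    return 1.0 - 2.0 * q_func(math.sqrt(x)) \
               - math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

def zzb_closed_form(a, delta, L):
    """Closed-form Ziv-Zakai bound obtained from Pe(D) >= Q(a*min(D, delta))."""
    d = min(delta, L)
    z2 = (a * d) ** 2
    return ((L**2 / 6.0) * q_func(a * d)
            + chi2_3_cdf(z2) / (4.0 * a**2)
            - (2.0 - (z2 + 2.0) * math.exp(-z2 / 2.0))
              / (3.0 * L * a**3 * math.sqrt(2.0 * math.pi)))

def zzb_quadrature(a, delta, L, n=200000):
    """Midpoint-rule evaluation of
    (1/L) * integral_0^L  D*(L - D)*Q(a*min(D, delta)) dD."""
    h = L / n
    total = 0.0
    for k in range(n):
        mid = (k + 0.5) * h
        total += mid * (L - mid) * q_func(a * min(mid, delta))
    return total * h / L

a, delta, L = 6.0, 0.4, 1.0      # arbitrary illustration values
closed = zzb_closed_form(a, delta, L)
numeric = zzb_quadrature(a, delta, L)
```

The two evaluations agree to within the quadrature error, which also spot-checks the closed-form expression for $P_{\chi_3^2}$.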
The Ziv-Zakai bound and the Cramér-Rao bound for the
time-delay estimation problem are shown in
[link]. Note how the Ziv-Zakai bound matches the
Cramér-Rao bound only for large signal-to-noise ratios, where they both equal
$1/(4a^2) = \sigma^2/(E\beta^2)$. For smaller values, the former bound is much
larger and provides a better indication of the size of the estimation errors. These errors arise because of the "cycle
skipping" phenomenon described earlier. The Ziv-Zakai bound describes them well, whereas the Cramér-Rao bound
ignores them.