To fix the instability of the Shannon representation, we assume that the signal is slightly more bandlimited than before, and instead of using the characteristic function $\chi_{[-\Omega,\Omega]}$ we multiply by another function $\hat g$, very similar in form to the characteristic function, but decaying at its boundaries in a smoother fashion (i.e. having more derivatives). A candidate function $\hat g$ is sketched in the accompanying figure.
Now, it is a property of the Fourier transform that increased smoothness in one domain translates into faster decay in the other. Thus, we can fix our instability problem by choosing $g$ so that $\hat g$ is smooth and
$$\hat g(\xi)=1,\quad |\xi|\le\Omega',\qquad \hat g(\xi)=0,\quad |\xi|\ge\Omega,$$
where $[-\Omega',\Omega']$, $\Omega'<\Omega$, is the smaller band to which we now assume the signal is limited. By choosing the smoothness of $\hat g$ suitably large, we can, for any given $k$, choose $g$ to satisfy
$$|g(t)|\le C_k\,(1+|t|)^{-k},\qquad t\in\mathbb{R},$$
for some constant $C_k>0$.
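The smoothness–decay tradeoff can be checked numerically. Below is a minimal Python sketch; the raised-cosine kernel and the parameter `beta` are illustrative choices, not from the text. Its Fourier transform has one more continuous derivative than the characteristic function, and its time decay improves accordingly from $1/|t|$ to $1/|t|^3$.

```python
import numpy as np

def sinc_kernel(t):
    # Shannon kernel sinc(t) = sin(pi t)/(pi t); its Fourier transform is the
    # characteristic function of a band, and it decays only like 1/|t|.
    return np.sinc(t)

def raised_cosine_kernel(t, beta=0.5):
    # Illustrative smoother kernel: its Fourier transform equals 1 on an inner
    # band, rolls off like a cosine, and vanishes outside a slightly larger
    # band.  One extra continuous derivative in frequency already improves the
    # time decay from 1/|t| to 1/|t|^3.
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * beta * t) ** 2
    near = np.abs(denom) < 1e-12
    h = np.sinc(t) * np.cos(np.pi * beta * t) / np.where(near, 1.0, denom)
    limit = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))  # removable singularity
    return np.where(near, limit, h)

# Far from the origin the smoother kernel is orders of magnitude smaller.
t = 50.5  # a half-integer, away from the common zeros of the two kernels
print(abs(sinc_kernel(t)), abs(raised_cosine_kernel(t)))
```

A $C^\infty$ cutoff $\hat g$, as used in the text, would give decay faster than any polynomial rate; the raised cosine is merely the simplest closed-form example of the phenomenon.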
Using such a $g$, we can rewrite the sampling expansion as
$$f(t)=\frac{1}{2\Omega}\sum_{n\in\mathbb{Z}} f\!\Big(\frac{n}{2\Omega}\Big)\, g\!\Big(t-\frac{n}{2\Omega}\Big).$$
Thus, we have the new representation of $f$ in terms of the rapidly decaying translates of $g$, where we gain stability from our additional assumption that the signal is bandlimited on $[-\Omega',\Omega']$.
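As a sanity check, here is a hedged Python sketch of this representation. The kernel $g$ (a raised cosine with $\hat g=1$ on $[-1,1]$ and $\hat g=0$ outside $[-2,2]$), the test signal $f(t)=\operatorname{sinc}^2(t)$, and the choice $\Omega=2$, $\Omega'=1$ are all illustrative assumptions:

```python
import numpy as np

def g(t):
    # Raised-cosine kernel with \hat g = 1 on [-1, 1] and \hat g = 0 outside
    # [-2, 2] (parameters T = 1/3, beta = 1/3 realize exactly these bands).
    T, beta = 1.0 / 3.0, 1.0 / 3.0
    x = np.asarray(t, dtype=float) / T
    denom = 1.0 - (2.0 * beta * x) ** 2
    near = np.abs(denom) < 1e-12
    h = np.sinc(x) * np.cos(np.pi * beta * x) / np.where(near, 1.0, denom)
    limit = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    return np.where(near, limit, h) / T

def f(t):
    # Test signal: sinc^2 has a triangular spectrum supported on [-1, 1],
    # i.e. it is bandlimited to the inner band where \hat g = 1.
    return np.sinc(t) ** 2

# Stable representation with Omega = 2 (grid n/4) and Omega' = 1:
#   f(t) = (1/(2 Omega)) * sum_n f(n/(2 Omega)) g(t - n/(2 Omega))
Omega, N = 2.0, 400
delta = 1.0 / (2.0 * Omega)
n = np.arange(-N, N + 1)
t0 = 0.3
recon = delta * np.sum(f(n * delta) * g(t0 - n * delta))
print(recon, f(t0))  # the truncated sum matches f(t0) to high accuracy
```

Because both $f$ and $g$ decay quickly, the truncated sum already agrees with the exact value to many digits.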
Does this assumption really hurt? No, not really, because if our signal is really bandlimited to $[-\Omega,\Omega]$ and not to the smaller band $[-\Omega',\Omega']$, we can always take a slightly larger bandwidth, say $\lambda\Omega$ where $\lambda$ is a little larger than one, and carry out the same analysis as above. Doing so would only mean slightly oversampling the signal (a small cost).
Recall that in the end we want to convert analog signals into bit streams. Thus far, we have the two representations
$$f(t)=\sum_{n\in\mathbb{Z}} f\!\Big(\frac{n}{2\Omega}\Big)\,\operatorname{sinc}(2\Omega t-n)
\qquad\text{and}\qquad
f(t)=\frac{1}{2\lambda\Omega}\sum_{n\in\mathbb{Z}} f\!\Big(\frac{n}{2\lambda\Omega}\Big)\, g\!\Big(t-\frac{n}{2\lambda\Omega}\Big).$$
Shannon's Theorem tells us that if $\operatorname{supp}\hat f\subset[-\Omega,\Omega]$, we should sample $f$ at the Nyquist rate $2\Omega$ (which is twice the bandwidth of $\hat f$) and then take the binary representation of the samples. Our more stable representation says to slightly oversample $f$ and then convert to a binary representation. Both representations offer perfect reconstruction, although in the more stable representation one is saddled with the additional task of choosing an appropriate $g$.
In practical situations, we shall be interested in approximating $f$ on an interval $[-T,T]$ for some $T>0$, and not for all time.
Questions we still want to answer include:
1. How many bits do we need to represent a bandlimited signal $f$ on some interval $[-T,T]$, measured in a given norm?
2. Using this methodology, what is the optimal way of encoding?
3. How is the optimal encoding implemented?
Towards this end, we define
$$\operatorname{sinc}(t):=\frac{\sin \pi t}{\pi t}.$$
Then for any $f$ with $\operatorname{supp}\hat f\subset[-\Omega,\Omega]$, we can write
$$f(t)=\sum_{n\in\mathbb{Z}} f\!\Big(\frac{n}{2\Omega}\Big)\,\operatorname{sinc}(2\Omega t-n).$$
In other words, samples at $0,\ \pm\tfrac{1}{2\Omega},\ \pm\tfrac{2}{2\Omega},\ \dots$ are sufficient to reconstruct $f$. Recall also that $\operatorname{sinc}(t)$ decays poorly, like $1/|t|$ (leading to numerical instability). We can overcome this problem by slight over-sampling. Say we over-sample by a factor $\lambda>1$.
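The Shannon expansion above is easy to test numerically. In the sketch below, the test signal $\operatorname{sinc}^2$ (bandlimited to $[-1,1]$) and the choice $\Omega=1$ are assumptions for illustration:

```python
import numpy as np

def shannon_reconstruct(f, t, Omega, N):
    # Truncated Shannon sum: f(t) ~ sum_{|n| <= N} f(n/(2 Omega)) sinc(2 Omega t - n).
    n = np.arange(-N, N + 1)
    return np.sum(f(n / (2.0 * Omega)) * np.sinc(2.0 * Omega * t - n))

f = lambda t: np.sinc(t) ** 2   # triangular spectrum on [-1, 1], so Omega = 1 works
t0 = 0.3
approx = shannon_reconstruct(f, t0, Omega=1.0, N=2000)
print(approx, f(t0))
```

For a well-decaying test signal like this the truncation error is small; the instability shows up when individual samples carry errors, as discussed next.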
Then, we can write
$$f(t)=\frac{1}{2\lambda\Omega}\sum_{n\in\mathbb{Z}} f\!\Big(\frac{n}{2\lambda\Omega}\Big)\, g\!\Big(t-\frac{n}{2\lambda\Omega}\Big).$$
Hence we need samples at $0,\ \pm\tfrac{1}{2\lambda\Omega},\ \pm\tfrac{2}{2\lambda\Omega}$, etc. What is the advantage? Sampling more often than necessary buys us stability, because we now have a choice for $g$.
If we choose $g$ so that its Fourier transform $\hat g$ is infinitely differentiable, equal to one on $[-\Omega,\Omega]$, and vanishing outside $[-\lambda\Omega,\lambda\Omega]$, as sketched in the accompanying figure, we can obtain
$$|g(t)|\le C_k\,(1+|t|)^{-k}\qquad\text{for every }k,$$
and therefore $g$ decays very fast. In other words, a sample's influence is felt only locally. Note, however, that over-sampling generates basis functions that are redundant (linearly dependent), unlike the integer translates of the $\operatorname{sinc}$ function.
If we restrict our reconstruction to $t$ in the interval $[-T,T]$, we will only need samples from a slightly larger interval $[-T',T']$, for some $T'>T$ (see the accompanying figure), because the distant samples will have little effect on the reconstruction in $[-T,T]$.
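To illustrate this locality, the sketch below corrupts a single sample far from the point of interest and measures the damage it causes there, under both representations. The raised-cosine kernel (a closed-form stand-in for a $C^\infty$ cutoff, which would localize even better), the grids, and the distances are illustrative assumptions:

```python
import numpy as np

def g(t):
    # Raised-cosine kernel: \hat g = 1 on [-1, 1], \hat g = 0 outside [-2, 2];
    # it decays like 1/|t|^3.
    T, beta = 1.0 / 3.0, 1.0 / 3.0
    x = np.asarray(t, dtype=float) / T
    denom = 1.0 - (2.0 * beta * x) ** 2
    near = np.abs(denom) < 1e-12
    h = np.sinc(x) * np.cos(np.pi * beta * x) / np.where(near, 1.0, denom)
    limit = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    return np.where(near, limit, h) / T

# Corrupt ONE sample lying about 20 time units away from the point of
# interest t0 by a unit error, and measure the resulting error at t0.
t0 = 0.3
# Shannon representation (Omega = 1, grid n/2): the corrupted sample at
# n = 40 contributes |sinc(2 t0 - n)| to the error at t0.
err_sinc = abs(np.sinc(2.0 * t0 - 40))
# Oversampled representation (grid n/4): the corrupted sample at n = 80
# contributes (1/4) |g(t0 - 20)| instead.
err_smooth = 0.25 * abs(float(g(t0 - 20.0)))
print(err_sinc, err_smooth)  # the smooth kernel confines the damage locally
```

The distant error leaks into $t_0$ at size roughly $1/|t|$ under the sinc representation, but only at size roughly $1/|t|^3$ under the smooth-kernel representation, which is why samples outside $[-T',T']$ can safely be dropped.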