This module explains the autocorrelation function and its properties, and provides examples to help you step through some of the more complicated statistical analysis.
Before diving into a more complex statistical analysis of random signals and processes, let us quickly review the idea of correlation. Recall that the correlation of two signals or variables is the expected value of the product of those two variables. Since our focus will be to discover more about a random process, a collection of random signals, imagine dealing with two samples of a random process, where each sample is taken at a different point in time. Also recall that the key property of these random processes is that they are functions of time; imagine them as a collection of signals. The expected value of the product of these two variables (or samples) will now depend on how quickly they change with respect to time. For example, if the two variables are taken from almost the same time period, then we should expect them to have a high correlation. We will now look at a correlation function that relates a pair of random variables from the same process to the time separation between them, where the argument to this correlation function will be the time difference. For the correlation of signals from two different random processes, see the cross-correlation function.
Autocorrelation function
The first of these correlation functions we will discuss is the autocorrelation, where each of the random variables we will deal with comes from the same random process.
Autocorrelation
the expected value of the product of a random variable or
signal realization with a time-shifted version of itself
With a simple calculation and analysis of the autocorrelation function, we can discover a few important characteristics about our random process. These include:
How quickly our random signal or process changes with respect to time
Whether our process has a periodic component and what the expected frequency might be
As was mentioned above, the autocorrelation function is simply the expected value of a product. Assume we have a pair of random variables from the same process, \(X(t_1)\) and \(X(t_2)\); then the autocorrelation is often written as

\[ R_{xx}(t_1, t_2) = E\left[X(t_1)\, X(t_2)\right] \]
The above equation is valid for both stationary and nonstationary random processes. For stationary processes, we can generalize this expression a little further. Given a wide-sense stationary process, it can be proven that the expected values from our random process will be independent of the origin of our time function. Therefore, we can say that our autocorrelation function will depend on the time difference and not on some absolute time. For this discussion, we will let \(\tau = t_2 - t_1\), and thus we generalize our autocorrelation expression as

\[ R_{xx}(t, t+\tau) = R_{xx}(\tau) = E\left[X(t)\, X(t+\tau)\right] \]
for the continuous-time case. In most DSP courses we will be more interested in dealing with real signal sequences, and thus we will want to look at the discrete-time case of the autocorrelation function. The formula below will prove to be more common and useful than the continuous-time version:

\[ R_{xx}[n, n+m] = E\left[x[n]\, x[n+m]\right] \]

And again we can generalize the notation for our autocorrelation function as

\[ R_{xx}[m] = E\left[x[n]\, x[n+m]\right] \]
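To make the discrete-time definition concrete, the following sketch estimates \(E[x[n]\,x[n+m]]\) by averaging over many realizations of a toy random process. The process, sample counts, and lag below are all hypothetical choices for illustration, not part of the original module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random process: x[n] = A * cos(0.2*pi*n), where A is a random
# amplitude drawn once per realization (a hypothetical example process).
n_real, n_samp = 5000, 64
A = rng.normal(size=(n_real, 1))      # E[A] = 0, E[A^2] = 1
n = np.arange(n_samp)
X = A * np.cos(0.2 * np.pi * n)       # one realization per row

# R_xx[n, n+m] = E[ x[n] x[n+m] ], estimated by an ensemble average.
n0, m = 10, 4
R_est = np.mean(X[:, n0] * X[:, n0 + m])

# For this process E[A^2] = 1, so the exact value is:
R_true = np.cos(0.2 * np.pi * n0) * np.cos(0.2 * np.pi * (n0 + m))
print(R_est, R_true)
```

Note that for this particular process the result depends on the starting time \(n_0\) as well as the lag \(m\); only for a stationary process does the autocorrelation collapse to a function of the time difference alone, as discussed above.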
Properties of autocorrelation
Below we will look at several properties of the autocorrelation function that hold for stationary random processes.

Autocorrelation is an even function of \(\tau\):
\[ R_{xx}(\tau) = R_{xx}(-\tau) \]
The mean-square value can be found by evaluating the autocorrelation where \(\tau = 0\), which gives us
\[ R_{xx}(0) = E\left[X^2(t)\right] \]
The autocorrelation function will have its largest value when \(\tau = 0\):
\[ \left|R_{xx}(\tau)\right| \leq R_{xx}(0) \]
This value can appear again, for example in a periodic function at the values of the equivalent periodic points, but will never be exceeded.
If we take the autocorrelation of a periodic function, then \(R_{xx}(\tau)\) will also be periodic with the same frequency.
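These properties can be checked numerically on a long record of a stationary process. The sketch below uses a first-order autoregressive process (a hypothetical example, not from the original module) and a simple time-average estimate of \(R_{xx}[m]\):

```python
import numpy as np

rng = np.random.default_rng(1)

# One long realization of a stationary AR(1) process
# x[n] = 0.9 x[n-1] + w[n]  (hypothetical example process).
N = 100_000
w = rng.normal(size=N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.9 * x[n - 1] + w[n]

def R(m):
    """Time-average estimate of R_xx[m] for lag m >= 0."""
    return np.mean(x[:N - m] * x[m:]) if m > 0 else np.mean(x * x)

vals = [R(m) for m in range(50)]

# R[0] is the mean-square value of the signal, and no |R[m]| exceeds it.
print(vals[0], max(abs(v) for v in vals))
```

For this process the theoretical mean-square value is \(1/(1-0.9^2) \approx 5.26\), and the estimates decay roughly as \(0.9^m\) from the peak at lag zero, consistent with the properties above.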
Estimating the autocorrelation with time-averaging
Sometimes the whole random process is not available to us. In these cases, we would still like to be able to find out some of the characteristics of the stationary random process, even if we just have part of one sample function. In order to do this we can estimate the autocorrelation from a given interval, \(0\) to \(T\) seconds, of the sample function:

\[ \check{R}_{xx}(\tau) = \frac{1}{T} \int_0^T x(t)\, x(t+\tau)\, dt \]

However, a lot of times we will not have sufficient information to build a complete continuous-time function of one of our random signals for the above analysis. If this is the case, we can treat the information we do know about the function as a discrete signal and use the discrete-time formula for estimating the autocorrelation:

\[ \check{R}_{xx}[m] = \frac{1}{N - m} \sum_{n=0}^{N-m-1} x[n]\, x[n+m] \]
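A minimal sketch of this discrete-time estimator follows; the test signal, lag range, and normalization by \(N - m\) are illustrative assumptions rather than part of the original module.

```python
import numpy as np

def autocorr_estimate(x, max_lag):
    """Time-average estimate of R_xx[m] from one finite sample function:
    R_hat[m] = 1/(N - m) * sum_{n=0}^{N-m-1} x[n] x[n+m].
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    return np.array([np.dot(x[:N - m], x[m:]) / (N - m)
                     for m in range(max_lag + 1)])

# Usage: a noisy sinusoid with a period of 20 samples (hypothetical signal).
rng = np.random.default_rng(2)
n = np.arange(4000)
x = np.cos(2 * np.pi * n / 20) + 0.5 * rng.normal(size=n.size)
R_hat = autocorr_estimate(x, 40)
print(R_hat[0], R_hat[20])
```

The estimate peaks again near lag 20, recovering the periodic component of the signal, one of the characteristics the autocorrelation was said to reveal.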
Examples
Below we will look at a variety of examples that use the autocorrelation function. We will begin with a simple example dealing with Gaussian white noise (GWN) and a few basic statistical properties that will prove very useful in these and future calculations.
We will let \(W(t)\) represent our GWN. For this problem, it is important to remember the following fact about the mean of a GWN function:

\[ E\left[W(t)\right] = 0 \]

Along with being zero-mean, recall that GWN is always independent: samples taken at distinct times are independent of one another. With these two facts, we are now ready to do the short calculations required to find the autocorrelation:

\[ R_{WW}(t_1, t_2) = E\left[W(t_1)\, W(t_2)\right] \]

Since the samples of \(W(t)\) at distinct times are independent, we can take the product of the individual expected values of both functions.
Now, looking at the above equation we see that we can break it up further into two conditions: one when \(t_1\) and \(t_2\) are equal and one when they are not equal. When they are equal we can combine the expected values. We are left with the following piecewise function to solve:

\[ R_{WW}(t_1, t_2) = \begin{cases} E\left[W(t_1)\right] E\left[W(t_2)\right] & t_1 \neq t_2 \\ E\left[W^2(t)\right] & t_1 = t_2 \end{cases} \]

We can now solve the two parts of the above equation. The first part is easy to solve, as we have already stated that the expected value of \(W(t)\) will be zero. For the second part, you should recall from statistics that the expected value of the square of a zero-mean function is equal to its variance. Thus we get the following results for the autocorrelation:

\[ R_{WW}(t_1, t_2) = \begin{cases} 0 & t_1 \neq t_2 \\ \sigma^2 & t_1 = t_2 \end{cases} \]

Or in a more concise way, we can represent the results as

\[ R_{WW}(t_1, t_2) = \sigma^2\, \delta(t_1 - t_2) \]
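This result can be checked numerically with discrete Gaussian white noise, where the delta becomes a Kronecker delta. The variance and sample counts below are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Many realizations of discrete Gaussian white noise with variance
# sigma^2 = 4 (hypothetical parameters).
sigma = 2.0
W = sigma * rng.normal(size=(20_000, 32))

# Ensemble-average estimate of R_WW[m] = E[ W[n] W[n+m] ] at a fixed n.
n0 = 5
R = {m: np.mean(W[:, n0] * W[:, n0 + m]) for m in range(6)}
# Expect R[0] close to sigma^2 and R[m] close to 0 for m != 0.
print(R)
```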