Definition 1 A random variable $x$ is defined by a distribution function
$$F_x(\alpha) = \Pr(x \le \alpha).$$
The density function is given by
$$f_x(\alpha) = \frac{dF_x(\alpha)}{d\alpha}.$$
Definition 2 The expectation of a function $g$ over the random variable $x$ is
$$E[g(x)] = \int_{-\infty}^{\infty} g(\alpha)\, f_x(\alpha)\, d\alpha.$$
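As a quick numerical illustration of this definition, the sketch below (assuming, as an example, a standard Gaussian density $f_x$ and $g(\alpha) = \alpha^2$, so that $E[g(x)] = 1$) compares a direct numerical approximation of $\int g(\alpha) f_x(\alpha)\,d\alpha$ with a Monte Carlo sample average.

```python
import numpy as np

# Assumed example: x ~ N(0, 1) and g(alpha) = alpha^2, so E[g(x)] = 1.
f_x = lambda a: np.exp(-a**2 / 2) / np.sqrt(2 * np.pi)  # standard Gaussian density
g = lambda a: a**2

# Riemann-sum approximation of E[g(x)] = integral of g(alpha) f_x(alpha) d(alpha).
alpha = np.linspace(-10, 10, 100001)
E_integral = np.sum(g(alpha) * f_x(alpha)) * (alpha[1] - alpha[0])

# Monte Carlo sample average over draws of x.
rng = np.random.default_rng(0)
E_monte_carlo = np.mean(g(rng.standard_normal(1_000_000)))

print(E_integral, E_monte_carlo)  # both approximately 1
```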
Definition 3 Pairs of random variables $(x, y)$ are defined by the joint distribution function
$$F_{x,y}(\alpha, \beta) = \Pr(x \le \alpha,\ y \le \beta).$$
The joint density function is given by
$$f_{x,y}(\alpha, \beta) = \frac{\partial^2 F_{x,y}(\alpha, \beta)}{\partial \alpha\, \partial \beta}.$$
The expectation of a function $g(x, y)$ is given by
$$E[g(x,y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(\alpha, \beta)\, f_{x,y}(\alpha, \beta)\, d\alpha\, d\beta.$$
Definition 4 Let $\{x_1, x_2, \ldots\}$ be a collection of zero-mean ($E[x_i] = 0$) random variables. The space of all random variables that are linear combinations of those random variables is a Hilbert space with inner product
$$\langle x, y \rangle = E[xy].$$
We can easily check that this is a valid inner product: it is symmetric, since $\langle x, y \rangle = E[xy] = E[yx] = \langle y, x \rangle$; it is linear in its first argument, since $\langle ax + by, z \rangle = E[(ax + by)z] = aE[xz] + bE[yz] = a\langle x, z \rangle + b\langle y, z \rangle$; and it is positive definite, since $\langle x, x \rangle = E[x^2] \ge 0$, with equality only if $x = 0$ with probability one.
Note in particular that orthogonality, i.e., $\langle x, y \rangle = E[xy] = 0$, implies $E[xy] = E[x]E[y]$ (since the random variables are zero-mean), i.e., $x$ and $y$ are uncorrelated random variables. Additionally, the induced norm $\|x\| = \sqrt{\langle x, x \rangle} = \sqrt{E[x^2]}$ is the standard deviation of the zero-mean random variable $x$.
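The following minimal sketch illustrates these quantities numerically, assuming as an example two zero-mean jointly Gaussian random variables with correlation coefficient $\rho$; the sample mean of $xy$ estimates the inner product $E[xy]$, and the sample root-mean-square of $x$ estimates the induced norm (the standard deviation).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
rho = 0.6  # assumed example correlation; rho = 0 would give orthogonal (uncorrelated) x, y

# Construct zero-mean x and y with unit variances and E[xy] = rho.
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
x = z1
y = rho * z1 + np.sqrt(1 - rho**2) * z2

inner_xy = np.mean(x * y)        # estimate of <x, y> = E[xy], approximately rho
norm_x = np.sqrt(np.mean(x**2))  # estimate of ||x|| = sqrt(E[x^2]), approximately 1

print(inner_xy, norm_x)
```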
One can define random vectors $x$, whose entries are random variables:
$$x = \begin{bmatrix} x_1 & x_2 & \cdots & x_N \end{bmatrix}^T.$$
For these, the following inner product is an extension of that given above:
$$\langle x, y \rangle = E[y^T x] = \sum_{i=1}^{N} E[x_i y_i].$$
The induced norm is
$$\|x\| = \sqrt{\langle x, x \rangle} = \sqrt{E[x^T x]} = \sqrt{\sum_{i=1}^{N} E[x_i^2]},$$
the square root of the expected squared Euclidean norm of the vector $x$.
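As a sketch of the vector case, one can estimate $\langle x, y \rangle = E[y^T x]$ and $\|x\|$ from realizations of two assumed example random vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 3, 1_000_000  # assumed vector length and number of realizations

# Assumed example: x has i.i.d. standard normal entries, y = C x + w with white w.
C = rng.standard_normal((N, N))
X = rng.standard_normal((K, N))             # rows are realizations of x
Y = X @ C.T + rng.standard_normal((K, N))   # rows are realizations of y

inner_xy = np.mean(np.sum(X * Y, axis=1))        # estimate of <x, y> = E[y^T x] = trace(C)
norm_x = np.sqrt(np.mean(np.sum(X**2, axis=1)))  # estimate of ||x|| = sqrt(E[x^T x]) = sqrt(N)

print(inner_xy, np.trace(C), norm_x, np.sqrt(N))
```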
In an MMSE estimation problem, we consider the model $y = Ax + n$, where $x$ and $y$ are two random vectors and $n$ is usually additive white Gaussian noise ($x$ is $N \times 1$, $n$ is $M \times 1$, $A$ is $M \times N$, and $y$ is $M \times 1$). Due to this noise model, we want an estimate $\hat{x}$ of $x$ that minimizes the mean squared error $\|x - \hat{x}\|^2$, where the norm is the induced norm for random vectors defined above; such an estimate has highest likelihood under an additive white Gaussian noise model. For computational simplicity, we often want to restrict the estimator to be linear, i.e.,
$$\hat{x} = By,$$
where $b_i^T$ denotes the $i^{\text{th}}$ row of the estimation matrix $B$ and $\hat{x}_i = b_i^T y$. We use the definition of the norm to simplify the equation:
$$\|x - By\|^2 = E\left[(x - By)^T (x - By)\right] = \sum_{i=1}^{N} E\left[(x_i - b_i^T y)^2\right] = \sum_{i=1}^{N} \|x_i - b_i^T y\|^2.$$
Since the terms involved in the sum are nonnegative and each depends on a different row $b_i$, this minimization can be posed in terms of individual minimizations: for $i = 1, \ldots, N$, we solve
$$\min_{b_i} \|x_i - b_i^T y\|^2,$$
where the norm is the induced norm for the Hilbert space of random variables. Note at this point that the set of random variables $b_i^T y = \sum_{j=1}^{M} b_{i,j} y_j$ obtained over the choices of $b_i$ can be written as $\operatorname{span}(\{y_1, y_2, \ldots, y_M\})$. Thus, the optimal $b_i$ is given by the coefficients of the closest point in $\operatorname{span}(\{y_1, y_2, \ldots, y_M\})$ to the random variable $x_i$ according to the induced norm for the Hilbert space of random variables. Therefore, we solve for $b_i$ using results from the projection theorem with the corresponding inner product. Recall that given a basis $\{e_1, \ldots, e_M\}$ for the subspace of interest, we obtain the equation $Ga = c$, where $c_j = \langle x_i, e_j \rangle$ and $G$ is the Gramian matrix with entries $G_{j,k} = \langle e_k, e_j \rangle$. More specifically, we have
$$\begin{bmatrix} \langle y_1, y_1 \rangle & \langle y_2, y_1 \rangle & \cdots & \langle y_M, y_1 \rangle \\ \langle y_1, y_2 \rangle & \langle y_2, y_2 \rangle & \cdots & \langle y_M, y_2 \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle y_1, y_M \rangle & \langle y_2, y_M \rangle & \cdots & \langle y_M, y_M \rangle \end{bmatrix} \begin{bmatrix} b_{i,1} \\ b_{i,2} \\ \vdots \\ b_{i,M} \end{bmatrix} = \begin{bmatrix} \langle x_i, y_1 \rangle \\ \langle x_i, y_2 \rangle \\ \vdots \\ \langle x_i, y_M \rangle \end{bmatrix}.$$
Thus, one can solve for $b_i = G^{-1} c$. In the Hilbert space of random variables, we have
$$G_{j,k} = \langle y_k, y_j \rangle = E[y_j y_k], \qquad c_j = \langle x_i, y_j \rangle = E[x_i y_j],$$
so that $G = E[yy^T] = R_y$ and $c = E[x_i y] = r_{x_i y}$.
Here $R_y = E[yy^T]$ is the correlation matrix of the random vector $y$ and $r_{x_i y} = E[x_i y]$ is the cross-correlation vector of the random variable $x_i$ and the vector $y$. Thus, we have $R_y b_i = r_{x_i y}$, and so $b_i = R_y^{-1} r_{x_i y}$. Concatenating all the rows $b_i^T = r_{x_i y}^T R_y^{-1}$ of $B$ together, we get $B = R_{xy} R_y^{-1}$, where $R_{xy} = E[xy^T]$ is the cross-correlation matrix for the random vectors $x$ and $y$. We therefore obtain the optimal linear estimator $\hat{x} = By = R_{xy} R_y^{-1} y$.
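The closed form $B = R_{xy} R_y^{-1}$ translates directly into a few lines of code. The sketch below is a minimal illustration under an assumed example model $y = Ax + n$ with zero-mean $x$ and white noise of variance $\sigma^2$, for which $R_y = A R_x A^T + \sigma^2 I$ and $R_{xy} = R_x A^T$; it then checks the estimator's empirical mean squared error on realizations of the model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, sigma2 = 4, 6, 0.1  # assumed example dimensions and noise variance

# Assumed second-order statistics: a random model matrix A and covariance R_x for x.
A = rng.standard_normal((M, N))
L = rng.standard_normal((N, N))
R_x = L @ L.T / N  # covariance (= correlation, since x is zero-mean)

# Correlations implied by the model y = A x + n with white noise of variance sigma2.
R_y = A @ R_x @ A.T + sigma2 * np.eye(M)
R_xy = R_x @ A.T

# Optimal linear (MMSE) estimator x_hat = B y.
B = R_xy @ np.linalg.inv(R_y)

# Empirical check of E[||x - x_hat||^2] on realizations of the model.
K = 100_000
X = rng.multivariate_normal(np.zeros(N), R_x, size=K)        # rows are realizations of x
Y = X @ A.T + np.sqrt(sigma2) * rng.standard_normal((K, M))  # rows are realizations of y
X_hat = Y @ B.T
print(np.mean(np.sum((X - X_hat) ** 2, axis=1)))             # empirical mean squared error
```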
At first, there may be some confusion about the difference between least squares and minimum mean-square error estimation. To summarize: least squares treats the observation $y$ as deterministic data and minimizes the squared error $\|y - Ax\|_2^2$ for that particular observation, requiring no statistical model; MMSE estimation treats $x$ and $y$ as random vectors and minimizes the expected squared error $E[\|x - \hat{x}\|_2^2]$ over their joint distribution, which requires knowledge of the second-order statistics $R_y$ and $R_{xy}$.
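The contrast can also be seen in code. In the sketch below (same assumed example model $y = Ax + n$ as above), the least squares estimate is computed from $A$ and the observed $y$ alone, while the linear MMSE estimate uses the correlations $R_y$ and $R_{xy}$; the MMSE estimator achieves a lower average error because it exploits the statistics of $x$ and $n$.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, sigma2 = 4, 6, 0.5  # assumed example sizes and noise variance

A = rng.standard_normal((M, N))
R_x = np.eye(N)                           # assume x has identity covariance
R_y = A @ R_x @ A.T + sigma2 * np.eye(M)
R_xy = R_x @ A.T
B = R_xy @ np.linalg.inv(R_y)             # linear MMSE estimator

K = 50_000
X = rng.standard_normal((K, N))           # realizations of x
Y = X @ A.T + np.sqrt(sigma2) * rng.standard_normal((K, M))

X_ls = np.linalg.lstsq(A, Y.T, rcond=None)[0].T  # least squares: no statistical model used
X_mmse = Y @ B.T                                 # MMSE: uses R_y and R_xy

print("LS MSE:  ", np.mean(np.sum((X - X_ls) ** 2, axis=1)))
print("MMSE MSE:", np.mean(np.sum((X - X_mmse) ** 2, axis=1)))
```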