
The mean-squared error for any estimate of a nonrandom parameter has a lower bound, the Cramér-Rao bound (Cramér, 1946, pp. 474–477), which defines the ultimate accuracy of any estimation procedure. This lower bound, as shown later, is intimately related to the maximum likelihood estimator.

We seek a "bound" on the mean-squared error matrix $\mathbf{M}$, defined to be
$$ \mathbf{M} = E\!\left[ \bigl(\hat{\boldsymbol{\theta}}(\mathbf{r}) - \boldsymbol{\theta}\bigr)\bigl(\hat{\boldsymbol{\theta}}(\mathbf{r}) - \boldsymbol{\theta}\bigr)^{T} \right]. $$
A matrix is "lower bounded" by a second matrix if the difference between the two is a non-negative definite matrix. Define the column matrix $\mathbf{x}$ to be
$$ \mathbf{x} = \begin{bmatrix} \hat{\boldsymbol{\theta}}(\mathbf{r}) - \boldsymbol{\theta} - \mathbf{b}(\boldsymbol{\theta}) \\ \nabla_{\boldsymbol{\theta}} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) \end{bmatrix}, $$
where $\mathbf{b}(\boldsymbol{\theta}) = E[\hat{\boldsymbol{\theta}}(\mathbf{r})] - \boldsymbol{\theta}$ denotes the column matrix of estimator biases. To derive the Cramér-Rao bound, evaluate $E[\mathbf{x}\mathbf{x}^{T}]$:
$$ E[\mathbf{x}\mathbf{x}^{T}] = \begin{bmatrix} \mathbf{M} - \mathbf{b}\mathbf{b}^{T} & \mathbf{I} + \nabla_{\boldsymbol{\theta}}\mathbf{b} \\ (\mathbf{I} + \nabla_{\boldsymbol{\theta}}\mathbf{b})^{T} & \mathbf{F} \end{bmatrix}, $$
where $\nabla_{\boldsymbol{\theta}}\mathbf{b}$ represents the matrix of partial derivatives of the bias, $[\nabla_{\boldsymbol{\theta}}\mathbf{b}]_{ij} = \partial b_j/\partial\theta_i$, and the matrix $\mathbf{F}$ is the Fisher information matrix

$$ \mathbf{F} = E\!\left[ \nabla_{\boldsymbol{\theta}} \ln p(\mathbf{r}\mid\boldsymbol{\theta})\, \bigl(\nabla_{\boldsymbol{\theta}} \ln p(\mathbf{r}\mid\boldsymbol{\theta})\bigr)^{T} \right]. $$
Note that this matrix can alternatively be expressed as
$$ \mathbf{F} = -E\!\left[ \nabla_{\boldsymbol{\theta}}\nabla_{\boldsymbol{\theta}}^{T} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) \right]. $$
The notation $\nabla\nabla^{T}$ means the matrix of all second partials of the quantity it operates on (the gradient of the gradient); this matrix is known as the Hessian. Demonstrating the equivalence of these two forms for the Fisher information is quite easy. Because $\int p(\mathbf{r}\mid\boldsymbol{\theta})\, d\mathbf{r} = 1$ for all choices of the parameter vector, the gradient of this expression with respect to $\boldsymbol{\theta}$ equals zero. Furthermore, $\nabla_{\boldsymbol{\theta}} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) = \nabla_{\boldsymbol{\theta}} p(\mathbf{r}\mid\boldsymbol{\theta}) / p(\mathbf{r}\mid\boldsymbol{\theta})$. Combining these results yields
$$ \int \bigl( \nabla_{\boldsymbol{\theta}} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) \bigr)\, p(\mathbf{r}\mid\boldsymbol{\theta})\, d\mathbf{r} = \mathbf{0}. $$
Evaluating the gradient of this quantity (using the chain rule) also yields zero:
$$ \int \left[ \bigl( \nabla_{\boldsymbol{\theta}}\nabla_{\boldsymbol{\theta}}^{T} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) \bigr)\, p(\mathbf{r}\mid\boldsymbol{\theta}) + \bigl( \nabla_{\boldsymbol{\theta}} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) \bigr) \bigl( \nabla_{\boldsymbol{\theta}} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) \bigr)^{T} p(\mathbf{r}\mid\boldsymbol{\theta}) \right] d\mathbf{r} = \mathbf{0}, $$
or
$$ E\!\left[ \bigl( \nabla_{\boldsymbol{\theta}} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) \bigr) \bigl( \nabla_{\boldsymbol{\theta}} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) \bigr)^{T} \right] = -E\!\left[ \nabla_{\boldsymbol{\theta}}\nabla_{\boldsymbol{\theta}}^{T} \ln p(\mathbf{r}\mid\boldsymbol{\theta}) \right]. $$
Calculating the expected value of the Hessian is somewhat easier than finding the expected value of the outer product of the gradient with itself. In the scalar case, we have
$$ E\!\left[ \left( \frac{\partial \ln p(r\mid\theta)}{\partial\theta} \right)^{2} \right] = -E\!\left[ \frac{\partial^{2} \ln p(r\mid\theta)}{\partial\theta^{2}} \right]. $$
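The scalar identity equating the two forms of the Fisher information is easy to check numerically. The sketch below is an added illustration (not part of the original text; the Gaussian model and sample size are my own choices): for a single observation $r \sim \mathcal{N}(\theta, \sigma^2)$, the score is $(r-\theta)/\sigma^2$ and the second partial of $\ln p$ is the constant $-1/\sigma^2$, so both forms should give $F = 1/\sigma^2$.

```python
import numpy as np

# Hypothetical illustration: scalar Gaussian model r ~ N(theta, sigma^2).
# Score:   d/d(theta)   ln p(r|theta) = (r - theta) / sigma^2
# Hessian: d^2/d(theta)^2 ln p(r|theta) = -1 / sigma^2  (deterministic here)
rng = np.random.default_rng(0)
theta, sigma = 2.0, 1.5
r = rng.normal(theta, sigma, size=500_000)

score = (r - theta) / sigma**2
outer_form = np.mean(score**2)     # Monte Carlo estimate of E[(d ln p/d theta)^2]
hessian_form = 1.0 / sigma**2      # exact value of -E[d^2 ln p/d theta^2]

print(outer_form, hessian_form)    # both approximate F = 1/sigma^2
```

With half a million samples the Monte Carlo average of the squared score agrees with the (exact) negative expected Hessian to about three decimal places.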

Returning to the derivation, the matrix $E[\mathbf{x}\mathbf{x}^{T}]$ is non-negative definite because it is a correlation matrix. Thus, for any column matrix $\boldsymbol{\alpha}$, the quadratic form $\boldsymbol{\alpha}^{T} E[\mathbf{x}\mathbf{x}^{T}] \boldsymbol{\alpha}$ is non-negative. Choose a form for $\boldsymbol{\alpha}$ that simplifies the quadratic form. A convenient choice is
$$ \boldsymbol{\alpha} = \begin{bmatrix} \boldsymbol{\beta} \\ -\mathbf{F}^{-1} (\mathbf{I} + \nabla_{\boldsymbol{\theta}}\mathbf{b})^{T} \boldsymbol{\beta} \end{bmatrix}, $$
where $\boldsymbol{\beta}$ is an arbitrary column matrix. The quadratic form becomes in this case
$$ \boldsymbol{\alpha}^{T} E[\mathbf{x}\mathbf{x}^{T}] \boldsymbol{\alpha} = \boldsymbol{\beta}^{T} \left[ \mathbf{M} - \mathbf{b}\mathbf{b}^{T} - (\mathbf{I} + \nabla_{\boldsymbol{\theta}}\mathbf{b})\, \mathbf{F}^{-1} (\mathbf{I} + \nabla_{\boldsymbol{\theta}}\mathbf{b})^{T} \right] \boldsymbol{\beta}. $$
As this quadratic form must be non-negative, the matrix expression enclosed in brackets must be non-negative definite. We thus obtain the well-known Cramér-Rao bound on the mean-squared error matrix.

$$ \mathbf{M} \geq \mathbf{b}\mathbf{b}^{T} + (\mathbf{I} + \nabla_{\boldsymbol{\theta}}\mathbf{b})\, \mathbf{F}^{-1} (\mathbf{I} + \nabla_{\boldsymbol{\theta}}\mathbf{b})^{T} $$

This form for the Cramér-Rao bound does not mean that each term in the matrix of squared errors is greater than the corresponding term in the bounding matrix. As stated earlier, this expression means that the difference between these matrices is non-negative definite. For a matrix to be non-negative definite, each term on the main diagonal must be non-negative. The elements of the main diagonal of $\mathbf{M}$ are the mean-squared errors of the estimates of the individual parameters. Thus, for each parameter, the mean-squared estimation error can be no smaller than
$$ E\!\left[ \bigl( \hat{\theta}_i(\mathbf{r}) - \theta_i \bigr)^{2} \right] \geq b_i^{2}(\boldsymbol{\theta}) + \left[ (\mathbf{I} + \nabla_{\boldsymbol{\theta}}\mathbf{b})\, \mathbf{F}^{-1} (\mathbf{I} + \nabla_{\boldsymbol{\theta}}\mathbf{b})^{T} \right]_{ii}. $$
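The biased form of the diagonal bound can be checked with a concrete example of my own (the shrinkage estimator and parameter values below are assumptions, not from the original text). For the mean of $N$ i.i.d. $\mathcal{N}(\theta,\sigma^2)$ samples, consider $\hat\theta = a\bar r$. Its bias is $b(\theta) = (a-1)\theta$, so $db/d\theta = a-1$; with $F = N/\sigma^2$, the scalar bound reads $b^2 + (1 + db/d\theta)^2/F = (a-1)^2\theta^2 + a^2\sigma^2/N$, which this estimator's mean-squared error attains exactly.

```python
import numpy as np

# Hypothetical example: shrinkage estimator theta_hat = a * sample_mean
# for the mean of N(theta, sigma^2), estimated from N samples per trial.
rng = np.random.default_rng(1)
theta, sigma, N, a = 1.0, 1.0, 10, 0.8
trials = 200_000

r = rng.normal(theta, sigma, size=(trials, N))
theta_hat = a * r.mean(axis=1)

emp_mse = np.mean((theta_hat - theta)**2)

# Biased Cramer-Rao bound: b^2 + (1 + db/dtheta)^2 / F, with F = N/sigma^2
F = N / sigma**2
bound = ((a - 1) * theta)**2 + a**2 / F

print(emp_mse, bound)   # empirical MSE matches the bound (both near 0.104)
```

The agreement is not a coincidence: for this linear-Gaussian estimator, variance plus squared bias equals the bound term-for-term, so the bound is met with equality.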

This bound simplifies greatly if the estimator is unbiased ($\mathbf{b} = \mathbf{0}$). In this case, the Cramér-Rao bound becomes
$$ E\!\left[ \bigl( \hat{\theta}_i(\mathbf{r}) - \theta_i \bigr)^{2} \right] \geq \left[ \mathbf{F}^{-1} \right]_{ii}. $$
Thus, the mean-squared error for each parameter in a multiple-parameter, unbiased-estimator problem can be no smaller than the corresponding diagonal term in the inverse of the Fisher information matrix. In such problems, the estimate's error characteristics for any parameter become intertwined with those of the other parameters in a complicated way. Any estimator satisfying the Cramér-Rao bound with equality is said to be efficient.
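As a numerical sanity check on the unbiased bound (again a sketch of my own, with an assumed Gaussian model and parameter values), the sample mean of $N$ i.i.d. $\mathcal{N}(\theta,\sigma^2)$ observations is unbiased with Fisher information $F = N/\sigma^2$, and its empirical mean-squared error lands right at $1/F = \sigma^2/N$, making it an efficient estimator.

```python
import numpy as np

# Hypothetical efficiency check: sample mean of N Gaussian observations.
rng = np.random.default_rng(2)
theta, sigma, N = 1.0, 2.0, 25
trials = 200_000

r = rng.normal(theta, sigma, size=(trials, N))
theta_hat = r.mean(axis=1)            # unbiased estimator of theta

emp_mse = np.mean((theta_hat - theta)**2)
crb = sigma**2 / N                    # 1/F for this model

print(emp_mse, crb)                   # both close to 0.16
```

No unbiased estimator of the Gaussian mean can do better than this, which is why the maximum likelihood estimator (here, the sample mean) is the natural benchmark.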





Source: OpenStax, Statistical signal processing. OpenStax CNX. Dec 05, 2011. Download for free at http://cnx.org/content/col11382/1.1
