In decision making problems, we know the value of the observation $X$, but do not know the value of $\theta$. Therefore, it is appealing to consider the conditional density or pmf $p(x|\theta)$ as a function of the unknown value $\theta$, with $x$ fixed at its observed value. The resulting function is called the likelihood function. As the name suggests, values of $\theta$ where the likelihood function is largest are intuitively reasonable indicators of the true value of the unknown quantity, which we will denote by $\theta^*$. The rationale for this is that these values would produce conditional densities or pmfs that place high probability on the observation $X$.
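To make this concrete, here is a minimal numerical sketch, assuming a hypothetical binomial coin-flip model in which $\theta$ is the unknown heads probability and the observation is a fixed count of heads (the numbers are chosen purely for illustration):

```python
from math import comb
import numpy as np

# Hypothetical binomial example: n coin flips with unknown heads probability
# theta; the observation is k heads (both values assumed for illustration).
n, k = 20, 14

thetas = np.linspace(0.01, 0.99, 99)
# Likelihood of theta given the fixed observation k:
#   L(theta) = C(n, k) * theta^k * (1 - theta)^(n - k)
likelihood = comb(n, k) * thetas**k * (1.0 - thetas)**(n - k)

# Values of theta where the likelihood is largest place the most probability
# on the observed data; the peak here is near k / n = 0.7.
print("grid value of theta maximizing the likelihood:", thetas[np.argmax(likelihood)])
```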
The Maximum Likelihood Estimator (MLE) is defined to be the value of $\theta$ that maximizes the likelihood function; i.e., in the continuous case

$$\hat{\theta}(X) \;=\; \arg\max_{\theta}\, p(X|\theta),$$
with an analogous definition for the discrete case obtained by replacing the conditional density with the conditional pmf. The decision rule $\hat{\theta}$ is called an “estimator,” which is common terminology in decision problems involving a continuous parameter. Note that maximizing the likelihood function is equivalent to minimizing the negative log-likelihood function $-\log p(X|\theta)$ (since the logarithm is a monotonic transformation). Now let $\theta^*$ denote the true value of $\theta$. Then we can view the negative log-likelihood as a loss function

$$\ell(\theta^*, \theta) \;=\; -\log p(X|\theta),$$
where the dependence on $\theta^*$, which appears explicitly on the left, is embodied on the right-hand side in the observation $X$, since $X$ is distributed according to $p(\cdot|\theta^*)$. An interesting special case of the MLE results when the conditional density is Gaussian, in which case the negative log-likelihood corresponds to a squared error loss function.
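To spell out this special case, suppose (as a simple assumed setting) that $X$ is a scalar Gaussian observation with mean $\theta$ and known variance $\sigma^2$. Then

$$p(x|\theta) \;=\; \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(x-\theta)^2}{2\sigma^2}\right)
\qquad\Longrightarrow\qquad
-\log p(x|\theta) \;=\; \frac{(x-\theta)^2}{2\sigma^2} + \tfrac{1}{2}\log\!\left(2\pi\sigma^2\right),$$

so, up to an additive constant and a positive scale factor that do not depend on $\theta$, the negative log-likelihood is the squared error loss $(x-\theta)^2$.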
Now let us consider the expectation of this loss, with respect to the conditional distribution $p(x|\theta^*)$:

$$E_{\theta^*}\!\left[-\log p(X|\theta)\right] \;=\; -\int p(x|\theta^*)\,\log p(x|\theta)\,dx .$$
The true value $\theta^*$ minimizes the expected negative log-likelihood (or, equivalently, maximizes the expected log-likelihood). To see this, compare the expected log-likelihood of $\theta^*$ with that of any other value $\theta$:

$$E_{\theta^*}\!\left[\log p(X|\theta^*)\right] - E_{\theta^*}\!\left[\log p(X|\theta)\right] \;=\; \int p(x|\theta^*)\,\log\frac{p(x|\theta^*)}{p(x|\theta)}\,dx .$$
The quantity on the right-hand side is called the Kullback-Leibler (KL) divergence between the conditional density function $p(\cdot|\theta^*)$ and $p(\cdot|\theta)$, denoted $KL(p_{\theta^*}, p_{\theta})$. The KL divergence is non-negative, and zero if and only if the two densities are equal [link]. So, we see that the KL divergence acts as a sort of risk function in the context of Maximum Likelihood Estimation.
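As a numerical sanity check of this risk interpretation, the sketch below (a Gaussian model with an assumed true mean and known variance) estimates the expected negative log-likelihood on a grid of candidate values and verifies that the excess over its value at the true parameter, which is exactly the KL divergence, is nonnegative and minimized at $\theta^*$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed Gaussian model: X ~ N(theta_star, sigma^2) with known sigma.
theta_star, sigma = 2.0, 1.0
x = rng.normal(theta_star, sigma, size=200_000)   # draws from p(. | theta_star)

def neg_log_lik(x, theta, sigma):
    """Negative log of the Gaussian density p(x | theta)."""
    return 0.5 * np.log(2 * np.pi * sigma**2) + (x - theta)**2 / (2 * sigma**2)

thetas = np.linspace(0.0, 4.0, 81)
# Monte Carlo estimates of the expected negative log-likelihood E[-log p(X|theta)].
expected_nll = np.array([neg_log_lik(x, t, sigma).mean() for t in thetas])

# The KL divergence KL(p_theta_star, p_theta) is the excess expected negative
# log-likelihood relative to its value at the true parameter theta_star.
kl = expected_nll - neg_log_lik(x, theta_star, sigma).mean()

print("minimizer on the grid        :", thetas[np.argmin(expected_nll)])  # ~ theta_star
print("all KL estimates nonnegative :", np.all(kl >= -1e-3))              # up to MC error
```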
The MLE is based on finding the value of $\theta$ that maximizes the likelihood function. Intuitively, if the maximum point is very distinct, say a well-isolated peak in the likelihood function, then it will be easier to distinguish the MLE from alternative decisions. Consider the case in which $\theta$ is a scalar quantity. The “peakiness” of the log-likelihood function can be gauged by examining its curvature, $-\frac{\partial^2}{\partial\theta^2}\log p(X|\theta)$, at the point of maximum likelihood. The higher the curvature, the more peaked is the behavior of the likelihood function at the maximum point. Of course, we hope that the MLE will be a good predictor (decision) for the unknown true value $\theta^*$. So, rather than looking at the curvature of the log-likelihood function at the maximum likelihood point, a more appropriate measure of how easy it will be to distinguish $\theta^*$ from the alternatives is the expected curvature of the log-likelihood function evaluated at the value $\theta^*$. The expectation is taken over all possible observations $X$ with respect to the conditional density $p(x|\theta^*)$. This quantity, denoted

$$I(\theta^*) \;=\; E_{\theta^*}\!\left[-\frac{\partial^2}{\partial\theta^2}\log p(X|\theta)\,\Big|_{\theta=\theta^*}\right],$$

is called the Fisher Information (FI). In fact, the FI provides us with an important performance bound known as the Cramer-Rao Lower Bound (CRLB).
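For illustration, here is a hedged sketch assuming a Bernoulli coin-flip model (with $\theta^*$ and the sample size chosen arbitrarily). It estimates the Fisher Information as the expected curvature of the log-likelihood and checks, by simulation, that the variance of the MLE is close to the corresponding Cramer-Rao lower bound $1/(n\,I(\theta^*))$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed Bernoulli model: X_1, ..., X_n i.i.d. with P(X = 1) = theta.
# For a single flip, -d^2/dtheta^2 log p(X|theta) = X/theta^2 + (1-X)/(1-theta)^2,
# whose expectation under theta is the Fisher Information I(theta) = 1/(theta*(1-theta)).
theta_star, n, trials = 0.3, 100, 20_000

samples = rng.binomial(1, theta_star, size=(trials, n))

# Monte Carlo estimate of the expected curvature (Fisher Information per sample).
curvature = samples / theta_star**2 + (1 - samples) / (1 - theta_star)**2
print("estimated I(theta*):", curvature.mean())            # ~ 1/(0.3*0.7) = 4.76

# Cramer-Rao lower bound on the variance of an unbiased estimator from n flips.
crlb = theta_star * (1 - theta_star) / n                    # = 1/(n*I(theta*))
mle = samples.mean(axis=1)                                  # the MLE is the sample mean
print("CRLB               :", crlb)
print("variance of the MLE:", mle.var())                    # should be close to the CRLB
```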