Armed with the tools of matrix derivatives, let us now proceed to find in closed form the value of $\theta$ that minimizes $J(\theta)$. We begin by re-writing $J$ in matrix-vectorial notation.
Given a training set, define the design matrix $X$ to be the $m$-by-$n$ matrix (actually $m$-by-$(n+1)$, if we include the intercept term) that contains the training examples' input values in its rows:
$$X = \begin{bmatrix} (x^{(1)})^T \\ (x^{(2)})^T \\ \vdots \\ (x^{(m)})^T \end{bmatrix}.$$
Also, let $\vec{y}$ be the $m$-dimensional vector containing all the target values from the training set:
$$\vec{y} = \begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)} \end{bmatrix}.$$
Now, since $h_\theta(x^{(i)}) = (x^{(i)})^T\theta$, we can easily verify that
$$X\theta - \vec{y} = \begin{bmatrix} (x^{(1)})^T\theta \\ \vdots \\ (x^{(m)})^T\theta \end{bmatrix} - \begin{bmatrix} y^{(1)} \\ \vdots \\ y^{(m)} \end{bmatrix} = \begin{bmatrix} h_\theta(x^{(1)}) - y^{(1)} \\ \vdots \\ h_\theta(x^{(m)}) - y^{(m)} \end{bmatrix}.$$
Thus, using the fact that for a vector $z$ we have $z^Tz = \sum_i z_i^2$:
$$\frac{1}{2}(X\theta - \vec{y})^T(X\theta - \vec{y}) = \frac{1}{2}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 = J(\theta).$$
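As a quick numerical check of this identity, here is a minimal sketch (not part of the original notes; the data and $\theta$ below are arbitrary, made-up values):

```python
import numpy as np

# Toy data: m = 4 training examples, with an intercept column plus 2 features.
X = np.array([[1.0, 2.0, 3.0],
              [1.0, 0.5, 1.0],
              [1.0, 1.5, 2.0],
              [1.0, 3.0, 0.5]])      # design matrix (rows are (x^(i))^T)
y = np.array([4.0, 1.0, 2.5, 3.0])   # target vector
theta = np.array([0.1, 0.2, 0.3])    # an arbitrary parameter vector

# J(theta) written as an explicit sum of squared residuals ...
J_sum = 0.5 * sum((X[i] @ theta - y[i]) ** 2 for i in range(X.shape[0]))

# ... and in matrix-vector form, (1/2)(X theta - y)^T (X theta - y).
r = X @ theta - y
J_matrix = 0.5 * r @ r

print(J_sum, J_matrix)   # both print the same value
```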
Finally, to minimize $J$, let's find its derivatives with respect to $\theta$. Combining the second and third equation in [link], we find that
$$\nabla_{A^T}\operatorname{tr}\,ABA^TC = B^TA^TC^T + BA^TC.$$
Hence,
$$\begin{aligned}
\nabla_\theta J(\theta) &= \nabla_\theta\,\frac{1}{2}(X\theta - \vec{y})^T(X\theta - \vec{y}) \\
&= \frac{1}{2}\nabla_\theta\left(\theta^TX^TX\theta - \theta^TX^T\vec{y} - \vec{y}^TX\theta + \vec{y}^T\vec{y}\right) \\
&= \frac{1}{2}\nabla_\theta\operatorname{tr}\left(\theta^TX^TX\theta - \theta^TX^T\vec{y} - \vec{y}^TX\theta + \vec{y}^T\vec{y}\right) \\
&= \frac{1}{2}\nabla_\theta\left(\operatorname{tr}\,\theta^TX^TX\theta - 2\operatorname{tr}\,\vec{y}^TX\theta\right) \\
&= \frac{1}{2}\left(X^TX\theta + X^TX\theta - 2X^T\vec{y}\right) \\
&= X^TX\theta - X^T\vec{y}.
\end{aligned}$$
In the third step, we used the fact that the trace of a real number is just the real number; the fourth step used the fact that $\operatorname{tr}\,A = \operatorname{tr}\,A^T$; and the fifth step used Equation [link] with $A^T = \theta$, $B = B^T = X^TX$, and $C = I$, and Equation [link]. To minimize $J$, we set its derivatives to zero, and so obtain the normal equations:
$$X^TX\theta = X^T\vec{y}.$$
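The closed-form gradient $X^TX\theta - X^T\vec{y}$ can be checked against a finite-difference approximation. The sketch below uses synthetic data purely for illustration:

```python
import numpy as np

def J(theta, X, y):
    """Least-squares cost J(theta) = (1/2) * ||X theta - y||^2."""
    r = X @ theta - y
    return 0.5 * r @ r

# Synthetic data (arbitrary values, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = rng.normal(size=5)
theta = rng.normal(size=3)

# Closed-form gradient from the derivation above: X^T X theta - X^T y.
grad_closed_form = X.T @ X @ theta - X.T @ y

# Central-difference approximation of the same gradient.
eps = 1e-6
grad_numeric = np.array([
    (J(theta + eps * e, X, y) - J(theta - eps * e, X, y)) / (2 * eps)
    for e in np.eye(3)
])

print(np.allclose(grad_closed_form, grad_numeric, atol=1e-4))  # True
```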
Thus, the value of $\theta$ that minimizes $J(\theta)$ is given in closed form by the equation
$$\theta = (X^TX)^{-1}X^T\vec{y}.$$
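As a small sanity check of the closed-form solution, here is a sketch on synthetic data (none of the data or variable names below come from the notes); in practice one solves the linear system rather than forming the inverse explicitly:

```python
import numpy as np

# Synthetic regression problem (arbitrary data for illustration).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])  # intercept column + 2 features
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + 0.1 * rng.normal(size=20)                # noisy targets

# Normal equations: solve X^T X theta = X^T y.
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against a standard least-squares routine.
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(theta_hat, theta_lstsq))  # True
```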
When faced with a regression problem, why might linear regression, and specifically why might the least-squares cost function $J$, be a reasonable choice? In this section, we will give a set of probabilistic assumptions under which least-squares regression is derived as a very natural algorithm.
Let us assume that the target variables and the inputs are related via the equation
$$y^{(i)} = \theta^Tx^{(i)} + \epsilon^{(i)},$$
where $\epsilon^{(i)}$ is an error term that captures either unmodeled effects (such as if there are some features very pertinent to predicting housing price that we'd left out of the regression), or random noise. Let us further assume that the $\epsilon^{(i)}$ are distributed IID (independently and identically distributed) according to a Gaussian distribution (also called a Normal distribution) with mean zero and some variance $\sigma^2$. We can write this assumption as “$\epsilon^{(i)} \sim \mathcal{N}(0, \sigma^2)$.” I.e., the density of $\epsilon^{(i)}$ is given by
$$p(\epsilon^{(i)}) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(\epsilon^{(i)})^2}{2\sigma^2}\right).$$
This implies that
$$p(y^{(i)}\mid x^{(i)};\theta) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(y^{(i)} - \theta^Tx^{(i)})^2}{2\sigma^2}\right).$$
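To make the shape of this conditional density concrete, here is a small sketch (the numbers are arbitrary and only illustrative):

```python
import numpy as np

def p_y_given_x(y_i, x_i, theta, sigma):
    """Gaussian density of y^(i) given x^(i), parameterized by theta."""
    mu = theta @ x_i   # mean is theta^T x^(i)
    return np.exp(-(y_i - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

# Example with made-up numbers: the density is largest when y_i is near theta^T x_i.
theta = np.array([1.0, 2.0])
x_i = np.array([1.0, 0.5])
print(p_y_given_x(2.0, x_i, theta, sigma=1.0))  # y_i equals theta^T x_i = 2.0, peak density
print(p_y_given_x(4.0, x_i, theta, sigma=1.0))  # y_i far from the mean, much smaller density
```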
The notation “$p(y^{(i)}\mid x^{(i)};\theta)$” indicates that this is the distribution of $y^{(i)}$ given $x^{(i)}$ and parameterized by $\theta$. Note that we should not condition on $\theta$ (“$p(y^{(i)}\mid x^{(i)},\theta)$”), since $\theta$ is not a random variable. We can also write the distribution of $y^{(i)}$ as $y^{(i)}\mid x^{(i)};\theta \sim \mathcal{N}(\theta^Tx^{(i)}, \sigma^2)$.
Given $X$ (the design matrix, which contains all the $x^{(i)}$'s) and $\theta$, what is the distribution of the $y^{(i)}$'s? The probability of the data is given by $p(\vec{y}\mid X;\theta)$. This quantity is typically viewed as a function of $\vec{y}$ (and perhaps $X$), for a fixed value of $\theta$. When we wish to explicitly view this as a function of $\theta$, we will instead call it the likelihood function:
$$L(\theta) = L(\theta; X, \vec{y}) = p(\vec{y}\mid X;\theta).$$
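Because the $\epsilon^{(i)}$ are assumed IID, the likelihood factors into a product of the per-example densities above, so its logarithm is a sum. The sketch below (synthetic data, illustrative only) checks numerically that this log-likelihood differs from $-J(\theta)/\sigma^2$ only by a constant in $\theta$, which is why maximizing the likelihood leads back to least squares:

```python
import numpy as np

def log_likelihood(theta, X, y, sigma):
    """Log of L(theta) = p(y | X; theta) under the IID Gaussian noise model."""
    m = X.shape[0]
    residuals = y - X @ theta
    return (-m * np.log(np.sqrt(2 * np.pi) * sigma)
            - np.sum(residuals ** 2) / (2 * sigma ** 2))

def J(theta, X, y):
    """Least-squares cost."""
    r = X @ theta - y
    return 0.5 * r @ r

# Synthetic data, for illustration only.
rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))
y = rng.normal(size=10)
sigma = 1.5

# For any two parameter vectors, the difference in log-likelihood equals the
# (negated, scaled) difference in J, so the theta maximizing one minimizes the other.
theta_a, theta_b = rng.normal(size=3), rng.normal(size=3)
lhs = log_likelihood(theta_a, X, y, sigma) - log_likelihood(theta_b, X, y, sigma)
rhs = -(J(theta_a, X, y) - J(theta_b, X, y)) / sigma ** 2
print(np.allclose(lhs, rhs))  # True
```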