The third of these assumptions might seem the least well justified of the above, and it might be better thought of as a "design choice" in our recipe for designing GLMs, rather than as an assumption per se. These three assumptions/design choices will allow us to derive a very elegant class of learning algorithms, namely GLMs, that have many desirable properties such as ease of learning. Furthermore, the resulting models are often very effective for modelling different types of distributions over $y$; for example, we will shortly show that both logistic regression and ordinary least squares can be derived as GLMs.
To show that ordinary least squares is a special case of the GLM family of models, consider the setting where the target variable $y$ (also called the response variable in GLM terminology) is continuous, and we model the conditional distribution of $y$ given $x$ as a Gaussian $\mathcal{N}(\mu, \sigma^2)$. (Here, $\mu$ may depend on $x$.) So, we let the $\text{ExponentialFamily}(\eta)$ distribution above be the Gaussian distribution. As we saw previously, in the formulation of the Gaussian as an exponential family distribution, we had $\mu = \eta$. So, we have
$$h_\theta(x) = E[y \mid x; \theta] = \mu = \eta = \theta^T x.$$
The first equality follows from Assumption 2, above; the second equality follows from the fact that $y \mid x; \theta \sim \mathcal{N}(\mu, \sigma^2)$, and so its expected value is given by $\mu$; the third equality follows from Assumption 1 (and our earlier derivation showing that $\mu = \eta$ in the formulation of the Gaussian as an exponential family distribution); and the last equality follows from Assumption 3.
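To make the Gaussian case concrete, here is a minimal NumPy sketch of the resulting hypothesis. It is an illustration, not part of the original notes; the names `h_gaussian`, `theta`, and `x` are chosen for this example.

```python
import numpy as np

def h_gaussian(theta, x):
    # Gaussian GLM hypothesis: h_theta(x) = E[y | x; theta] = mu = eta = theta^T x,
    # i.e., the ordinary-least-squares (linear regression) hypothesis.
    return theta @ x

# Example: with theta = [1, 2] and x = [3, 4], the prediction is 1*3 + 2*4 = 11.
theta = np.array([1.0, 2.0])
x = np.array([3.0, 4.0])
print(h_gaussian(theta, x))  # 11.0
```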
We now consider logistic regression. Here we are interested in binary classification, so $y \in \{0, 1\}$. Given that $y$ is binary-valued, it therefore seems natural to choose the Bernoulli family of distributions to model the conditional distribution of $y$ given $x$. In our formulation of the Bernoulli distribution as an exponential family distribution, we had $\phi = 1/(1 + e^{-\eta})$. Furthermore, note that if $y \mid x; \theta \sim \text{Bernoulli}(\phi)$, then $E[y \mid x; \theta] = \phi$. So, following a similar derivation as the one for ordinary least squares, we get:
$$h_\theta(x) = E[y \mid x; \theta] = \phi = \frac{1}{1 + e^{-\eta}} = \frac{1}{1 + e^{-\theta^T x}}.$$
So, this gives us hypothesis functions of the form $h_\theta(x) = 1/(1 + e^{-\theta^T x})$. If you were previously wondering how we came up with the form of the logistic function $1/(1 + e^{-z})$, this gives one answer: once we assume that $y$ conditioned on $x$ is Bernoulli, it arises as a consequence of the definition of GLMs and exponential family distributions.
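The same chain of equalities can be read directly as code. Below is a sketch of the Bernoulli-GLM hypothesis, paralleling the Gaussian example above; again, the function and variable names are illustrative.

```python
import numpy as np

def h_logistic(theta, x):
    # Bernoulli GLM hypothesis:
    # h_theta(x) = E[y | x; theta] = phi = 1 / (1 + e^{-eta}) = 1 / (1 + e^{-theta^T x})
    eta = theta @ x                   # natural parameter, eta = theta^T x (Assumption 3)
    return 1.0 / (1.0 + np.exp(-eta))

# Example: eta = 1*0.5 + (-2)*0.25 = 0, so the prediction is sigmoid(0) = 0.5.
theta = np.array([1.0, -2.0])
x = np.array([0.5, 0.25])
print(h_logistic(theta, x))  # P(y = 1 | x; theta) = 0.5
```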
To introduce a little more terminology, the function $g$ giving the distribution's mean as a function of the natural parameter ($g(\eta) = E[T(y); \eta]$) is called the canonical response function. Its inverse, $g^{-1}$, is called the canonical link function. Thus, the canonical response function for the Gaussian family is just the identity function; and the canonical response function for the Bernoulli is the logistic function. Many texts use $g$ to denote the link function, and $g^{-1}$ to denote the response function; but the notation we're using here, inherited from the early machine learning literature, will be more consistent with the notation used in the rest of the class.
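As a small check on this terminology, the sketch below writes out the canonical response functions for the two families above together with their inverses, the canonical link functions, and verifies that the link undoes the response. The function names are illustrative, not a standard library API.

```python
import numpy as np

def response_gaussian(eta):
    return eta                            # identity: mu = eta

def link_gaussian(mu):
    return mu                             # inverse of the identity is the identity

def response_bernoulli(eta):
    return 1.0 / (1.0 + np.exp(-eta))     # logistic: phi = 1 / (1 + e^{-eta})

def link_bernoulli(phi):
    return np.log(phi / (1.0 - phi))      # logit, the inverse of the logistic

# Round trip: g^{-1}(g(eta)) = eta for the Bernoulli family.
eta = 1.5
print(np.isclose(link_bernoulli(response_bernoulli(eta)), eta))  # True
```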