Let's look at one more example of a GLM. Consider a classification problem in which the response variable $y$ can take on any one of $k$ values, so $y \in \{1, 2, \ldots, k\}$. For example, rather than classifying email into the two classes spam or not-spam (which would have been a binary classification problem), we might want to classify it into three classes, such as spam, personal mail, and work-related mail. The response variable is still discrete, but can now take on more than two values. We will thus model it as distributed according to a multinomial distribution.
Let's derive a GLM for modelling this type of multinomial data. To do so, we will begin by expressing the multinomial as an exponential family distribution.
To parameterize a multinomial over $k$ possible outcomes, one could use $k$ parameters $\phi_1, \ldots, \phi_k$ specifying the probability of each of the outcomes. However, these parameters would be redundant, or more formally, they would not be independent (since knowing any $k-1$ of the $\phi_i$'s uniquely determines the last one, as they must satisfy $\sum_{i=1}^{k} \phi_i = 1$). So, we will instead parameterize the multinomial with only $k-1$ parameters, $\phi_1, \ldots, \phi_{k-1}$, where $\phi_i = p(y = i; \phi)$, and $p(y = k; \phi) = 1 - \sum_{i=1}^{k-1} \phi_i$. For notational convenience, we will also let $\phi_k = 1 - \sum_{i=1}^{k-1} \phi_i$, but we should keep in mind that this is not a parameter, and that it is fully specified by $\phi_1, \ldots, \phi_{k-1}$.
To express the multinomial as an exponential family distribution, we will define $T(y) \in \mathbb{R}^{k-1}$ as follows:

$$T(1) = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad T(2) = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix},\quad \ldots,\quad T(k-1) = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix},\quad T(k) = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$
Unlike our previous examples, here we do not have $T(y) = y$; also, $T(y)$ is now a $(k-1)$-dimensional vector, rather than a real number. We will write $(T(y))_i$ to denote the $i$-th element of the vector $T(y)$.
We introduce one more very useful piece of notation. An indicator function $1\{\cdot\}$ takes on a value of 1 if its argument is true, and 0 otherwise ($1\{\text{True}\} = 1$, $1\{\text{False}\} = 0$). For example, $1\{2 = 3\} = 0$, and $1\{3 = 5 - 2\} = 1$. So, we can also write the relationship between $T(y)$ and $y$ as $(T(y))_i = 1\{y = i\}$. (Before you continue reading, please make sure you understand why this is true!) Further, we have that $\mathrm{E}[(T(y))_i] = P(y = i) = \phi_i$.
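To make the encoding concrete, here is a minimal Python sketch (our own illustration, not part of the notes; the function name `T` and the use of NumPy are our choices) that builds $T(y)$ for 1-indexed labels:

```python
import numpy as np

def T(y, k):
    """Map a label y in {1, ..., k} to the (k-1)-dimensional vector T(y).

    (T(y))_i = 1{y == i}; the k-th class maps to the all-zeros vector.
    """
    t = np.zeros(k - 1)
    if y < k:
        t[y - 1] = 1.0  # labels are 1-indexed, array entries 0-indexed
    return t

# With k = 4 classes: T(2) = [0, 1, 0] and T(4) = [0, 0, 0].
print(T(2, 4))  # [0. 1. 0.]
print(T(4, 4))  # [0. 0. 0.]
```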
We are now ready to show that the multinomial is a member of the exponential family. We have:

$$\begin{aligned}
p(y;\phi) &= \phi_1^{1\{y=1\}} \phi_2^{1\{y=2\}} \cdots \phi_k^{1\{y=k\}} \\
&= \phi_1^{1\{y=1\}} \phi_2^{1\{y=2\}} \cdots \phi_k^{1 - \sum_{i=1}^{k-1} 1\{y=i\}} \\
&= \phi_1^{(T(y))_1} \phi_2^{(T(y))_2} \cdots \phi_k^{1 - \sum_{i=1}^{k-1} (T(y))_i} \\
&= \exp\!\Big( (T(y))_1 \log \phi_1 + (T(y))_2 \log \phi_2 + \cdots + \Big(1 - \sum_{i=1}^{k-1} (T(y))_i\Big) \log \phi_k \Big) \\
&= \exp\!\Big( (T(y))_1 \log \frac{\phi_1}{\phi_k} + (T(y))_2 \log \frac{\phi_2}{\phi_k} + \cdots + (T(y))_{k-1} \log \frac{\phi_{k-1}}{\phi_k} + \log \phi_k \Big) \\
&= b(y) \exp\!\big( \eta^T T(y) - a(\eta) \big)
\end{aligned}$$
where

$$\eta = \begin{bmatrix} \log(\phi_1/\phi_k) \\ \log(\phi_2/\phi_k) \\ \vdots \\ \log(\phi_{k-1}/\phi_k) \end{bmatrix}, \qquad a(\eta) = -\log \phi_k, \qquad b(y) = 1.$$
This completes our formulation of the multinomial as an exponential family distribution.
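As an illustrative sanity check (our own addition, assuming NumPy), the snippet below verifies numerically that the exponential-family form $b(y)\exp(\eta^T T(y) - a(\eta))$, with $\eta$, $a(\eta)$, and $b(y)$ as defined above, reproduces the multinomial probabilities $\phi_i$:

```python
import numpy as np

k = 4
phi = np.array([0.1, 0.2, 0.3, 0.4])    # sums to 1; phi[i-1] = p(y = i)
eta = np.log(phi[:-1] / phi[-1])          # eta_i = log(phi_i / phi_k)
a_eta = -np.log(phi[-1])                  # a(eta) = -log(phi_k)

def T(y):
    t = np.zeros(k - 1)
    if y < k:
        t[y - 1] = 1.0
    return t

for y in range(1, k + 1):
    p = 1.0 * np.exp(eta @ T(y) - a_eta)  # b(y) = 1
    assert np.isclose(p, phi[y - 1])      # matches the multinomial probability
```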
The link function is given (for $i = 1, \ldots, k$) by

$$\eta_i = \log \frac{\phi_i}{\phi_k}.$$
For convenience, we have also defined $\eta_k = \log(\phi_k/\phi_k) = 0$. To invert the link function and derive the response function, we therefore have that

$$\begin{aligned}
e^{\eta_i} &= \frac{\phi_i}{\phi_k} \\
\phi_k e^{\eta_i} &= \phi_i \\
\phi_k \sum_{i=1}^{k} e^{\eta_i} &= \sum_{i=1}^{k} \phi_i = 1
\end{aligned}$$
This implies that $\phi_k = 1 / \sum_{i=1}^{k} e^{\eta_i}$, which can be substituted back into the equation $\phi_i = \phi_k e^{\eta_i}$ above to give the response function

$$\phi_i = \frac{e^{\eta_i}}{\sum_{j=1}^{k} e^{\eta_j}}.$$
This function mapping from the $\eta$'s to the $\phi$'s is called the softmax function.
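Here is a minimal Python sketch of the softmax function (our own illustration; the max-subtraction trick for numerical stability is a standard implementation detail, not part of the derivation):

```python
import numpy as np

def softmax(eta):
    """Numerically stable softmax: phi_i = exp(eta_i) / sum_j exp(eta_j)."""
    z = eta - np.max(eta)   # shifting by a constant leaves the ratios unchanged
    e = np.exp(z)
    return e / e.sum()

# Inverting the link function recovers the original class probabilities.
phi = np.array([0.1, 0.2, 0.3, 0.4])
eta = np.log(phi / phi[-1])   # includes eta_k = log(phi_k/phi_k) = 0
print(softmax(eta))           # [0.1 0.2 0.3 0.4]
```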
To complete our model, we use Assumption 3, given earlier, that the $\eta_i$'s are linearly related to the $x$'s. So, we have $\eta_i = \theta_i^T x$ (for $i = 1, \ldots, k-1$), where $\theta_1, \ldots, \theta_{k-1} \in \mathbb{R}^{n+1}$ are the parameters of our model. For notational convenience, we can also define $\theta_k = 0$, so that $\eta_k = \theta_k^T x = 0$, as given previously. Hence, our model assumes that the conditional distribution of $y$ given $x$ is given by

$$p(y = i \mid x; \theta) = \phi_i = \frac{e^{\eta_i}}{\sum_{j=1}^{k} e^{\eta_j}} = \frac{e^{\theta_i^T x}}{\sum_{j=1}^{k} e^{\theta_j^T x}}$$
This model, which applies to classification problems where $y \in \{1, \ldots, k\}$, is called softmax regression. It is a generalization of logistic regression.
Our hypothesis will output

$$h_\theta(x) = \mathrm{E}[T(y) \mid x; \theta] = \begin{bmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_{k-1} \end{bmatrix} = \begin{bmatrix} \dfrac{\exp(\theta_1^T x)}{\sum_{j=1}^{k} \exp(\theta_j^T x)} \\ \vdots \\ \dfrac{\exp(\theta_{k-1}^T x)}{\sum_{j=1}^{k} \exp(\theta_j^T x)} \end{bmatrix}.$$
In other words, our hypothesis will output the estimated probability $p(y = i \mid x; \theta)$, for every value of $i = 1, \ldots, k$. (Even though $h_\theta(x)$ as defined above is only $(k-1)$-dimensional, clearly $p(y = k \mid x; \theta)$ can be obtained as $1 - \sum_{i=1}^{k-1} \phi_i$.)
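The hypothesis can be computed directly from the $\theta_i$'s. Below is a small sketch (our own, assuming NumPy); for convenience it returns the full $k$-vector of probabilities rather than the $(k-1)$-dimensional $h_\theta(x)$ above, using the convention $\theta_k = 0$:

```python
import numpy as np

def h(theta, x):
    """Class probabilities p(y = i | x; theta) for i = 1, ..., k.

    theta: (k, n+1) array whose rows are theta_1, ..., theta_k
    (with theta_k fixed at the zero vector); x: (n+1,) input
    with x[0] = 1 as the intercept term.
    """
    eta = theta @ x
    e = np.exp(eta - np.max(eta))   # stable softmax
    return e / e.sum()

# Hypothetical numbers: k = 3 classes, n = 2 features (plus intercept).
theta = np.array([[0.5, 1.0, -1.0],
                  [0.2, -0.3, 0.8],
                  [0.0, 0.0, 0.0]])  # theta_k = 0 by convention
x = np.array([1.0, 2.0, 0.5])
print(h(theta, x))                   # entries sum to 1
```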
Lastly, let's discuss parameter fitting. Similar to our original derivation of ordinary least squares and logistic regression, if we have a training set of $m$ examples $\{(x^{(i)}, y^{(i)}); i = 1, \ldots, m\}$ and would like to learn the parameters $\theta_i$ of this model, we would begin by writing down the log-likelihood

$$\begin{aligned}
\ell(\theta) &= \sum_{i=1}^{m} \log p(y^{(i)} \mid x^{(i)}; \theta) \\
&= \sum_{i=1}^{m} \log \prod_{l=1}^{k} \left( \frac{e^{\theta_l^T x^{(i)}}}{\sum_{j=1}^{k} e^{\theta_j^T x^{(i)}}} \right)^{1\{y^{(i)} = l\}}
\end{aligned}$$
To obtain the second line above, we used the definition for $p(y \mid x; \theta)$ given earlier. We can now obtain the maximum likelihood estimate of the parameters by maximizing $\ell(\theta)$ in terms of $\theta$, using a method such as gradient ascent or Newton's method.
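For concreteness, here is a sketch (our own, assuming NumPy, labels coded $1, \ldots, k$, and an intercept entry $x_0 = 1$ in each row of `X`) of the log-likelihood and one batch gradient-ascent step; the gradient $\nabla_{\theta_l} \ell = \sum_i \big(1\{y^{(i)} = l\} - \phi_l^{(i)}\big) x^{(i)}$ follows from differentiating $\ell(\theta)$, a step the notes leave to the chosen optimization method:

```python
import numpy as np

def log_likelihood(theta, X, y):
    """l(theta) = sum_i log p(y_i | x_i; theta); X: (m, n+1), y in {1,...,k}."""
    eta = X @ theta.T                               # (m, k) matrix of theta_l^T x_i
    eta = eta - eta.max(axis=1, keepdims=True)      # stabilize the log-sum-exp
    log_phi = eta - np.log(np.exp(eta).sum(axis=1, keepdims=True))
    return log_phi[np.arange(len(y)), y - 1].sum()

def gradient_ascent_step(theta, X, y, lr=0.1):
    """One batch update: grad_{theta_l} l = sum_i (1{y_i = l} - phi_l) x_i."""
    eta = X @ theta.T
    phi = np.exp(eta - eta.max(axis=1, keepdims=True))
    phi /= phi.sum(axis=1, keepdims=True)           # (m, k) class probabilities
    onehot = np.eye(theta.shape[0])[y - 1]          # (m, k) one-hot labels
    grad = (onehot - phi).T @ X                     # (k, n+1)
    grad[-1] = 0.0                                  # keep theta_k fixed at 0
    return theta + lr * grad

# Hypothetical toy run: m = 5 examples, n = 2 features, k = 3 classes.
rng = np.random.default_rng(0)
X = np.hstack([np.ones((5, 1)), rng.normal(size=(5, 2))])  # intercept column
y = np.array([1, 2, 3, 1, 2])
theta = np.zeros((3, 3))
for _ in range(100):
    theta = gradient_ascent_step(theta, X, y)
print(log_likelihood(theta, X, y))   # increases over the iterations
```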