Our project involves the examination of four sets of images from each subject. The program is equipped to detect four emotions: happy, angry, sad, and surprised. It should be able to examine the four images, distinguish between them, and correctly classify each image with its corresponding emotion to a satisfactory degree. Although initial implementations rely on user-defined cropping of the relevant facial features, we will also need to design a way to automate this process accurately.
Naturally, we want two images of the same person to differ only in emotion, not in lighting, position, intensity, etc. Thus, all pictures were taken in the same environment with the same digital camera, framing only the subject's face as he or she looked straight ahead. The images were then grayscaled and reduced to a size of 250 by 333 pixels. This took care of normalization, since all images were then uniform.
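The grayscaling and resizing step described above can be sketched with a short Python routine using the Pillow imaging library. This is an illustrative implementation, not the project's original code; the function name and the choice of library are our own.

```python
from PIL import Image

def normalize_image(path, size=(250, 333)):
    """Load an image, convert it to grayscale, and resize it to the
    project's uniform 250-by-333-pixel format (width x height)."""
    img = Image.open(path).convert("L")  # "L" mode = 8-bit grayscale
    return img.resize(size)
```

Applying this to every photograph guarantees that any two images fed to the classifier share the same dimensions and color depth, so remaining differences reflect the face itself.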
A second, more important issue was which portions of the face to examine. The brain tends to look at several regions: the eyes, the mouth, the cheeks, and the forehead. However, the difference between emotions is very subtle for all of these regions except the mouth, which tends to be the most expressive. Thus, we decided to focus exclusively on the mouth for our project and attempt to gain accurate results using only that portion of the face.
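Since the initial implementation uses a user-defined crop of the mouth, the extraction step amounts to cutting a fixed rectangle out of each normalized image. The sketch below assumes Pillow and a normalized 250-by-333 image; the coordinates of the mouth box are hypothetical placeholders that a user would choose per subject.

```python
from PIL import Image

def crop_mouth(img, box=(75, 230, 175, 300)):
    """Extract the mouth region from a normalized grayscale face image.

    `box` is (left, upper, right, lower) in pixels; the default values
    here are illustrative only, not the project's actual coordinates."""
    return img.crop(box)
```

Because all images are normalized first, a single well-chosen box works across a subject's four expressions; automating this step would mean locating the box programmatically rather than by hand.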