Image processing of the iris region is computationally expensive. In addition, the area of interest in the image is a 'donut' shape, and grabbing pixels in this region requires repeated rectangular-to-polar conversions. To make things easier, the iris region is first unwrapped into a rectangular region using simple trigonometry. This allows the iris decoding algorithm to address pixels in simple (row, column) format.
Although the pupil and iris circles may appear to be concentric, they rarely are. In fact, the pupil and iris regions each have their own bounding-circle radius and center location. This means that the unwrapped region between the pupil and iris bounding circles does not map perfectly to a rectangle. This is easily taken care of with a little trigonometry.
There is also the matter of the pupil, which dilates and contracts to control the amount of light entering the eye. Between any two images of the same person's eye, the pupil will likely have a different radius. When the pupil radius changes, the iris stretches with it like a rubber sheet. Luckily, this stretching is almost linear and can be compensated back to a standard dimension before further processing.
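A common way to express this compensation is the "rubber sheet" model used in the iris-recognition literature (the notation below is illustrative, not taken from this module). Each point in the iris annulus is given a normalized radial coordinate r between 0 and 1 that runs from the pupil boundary to the iris boundary along the same angle θ:

x(r, θ) = (1 − r) · xp(θ) + r · xi(θ)
y(r, θ) = (1 − r) · yp(θ) + r · yi(θ)

where (xp(θ), yp(θ)) and (xi(θ), yi(θ)) are the pupil and iris boundary points at angle θ. Sampling the image on a fixed grid of (r, θ) values then produces an unwrapped image of the same dimensions no matter how dilated the pupil is.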
In Figure 1, points Cp and Ci are the detected centers of the pupil and iris, respectively. We extend a wedge of angle dθ, starting at angle θ, from both Cp and Ci, with radii Rp and Ri, respectively. The intersection points of these wedges with the pupil and iris circles form a skewed wedge polygon with corners P1 through P4 (marked in Figure 1). The skewed wedge is subdivided radially into blocks, and the image pixel values in each block are averaged to form a pixel (j,k) in the unwrapped iris image, where j indexes the angle and k indexes the radius.
For this project, the standard dimensions of the extracted iris rectangle are 128 rows and 8 columns (see Figure 4). This corresponds to N=128 wedges, each of angle 2π/128 radians (2.8125°), with each wedge divided radially into 8 sections. Figure 1 marks the important points used in this construction; points Pa through Pd are interpolated along line segments P1-P3 and P2-P4.
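As a concrete illustration, the sketch below unwraps the iris region along these lines. The function and variable names are our own, and the block averaging over the skewed wedge polygons is approximated by averaging a small grid of sample points per block, so this should be read as an outline of the technique rather than the project's exact implementation.

```python
import numpy as np

def unwrap_iris(image, cp, rp, ci, ri, n_angles=128, n_radii=8, oversample=4):
    """Unwrap the iris annulus into an n_angles x n_radii rectangle.

    cp, ci : (x, y) centers of the pupil and iris circles
    rp, ri : their radii
    The two circles need not be concentric: each radial line runs from
    the pupil boundary to the iris boundary at the same angle, and the
    linear interpolation between them plays the role of the "rubber
    sheet" compensation described above.
    """
    h, w = image.shape[:2]
    unwrapped = np.zeros((n_angles, n_radii), dtype=np.float64)

    for j in range(n_angles):                 # angle index
        for k in range(n_radii):              # radius index
            samples = []
            # Average a small grid of sample points inside block (j, k),
            # approximating the block averaging over the skewed wedge.
            for a in range(oversample):
                for b in range(oversample):
                    theta = 2 * np.pi * (j + (a + 0.5) / oversample) / n_angles
                    t = (k + (b + 0.5) / oversample) / n_radii
                    # Inner point: on the pupil circle at angle theta
                    x_in = cp[0] + rp * np.cos(theta)
                    y_in = cp[1] + rp * np.sin(theta)
                    # Outer point: on the iris circle at angle theta
                    x_out = ci[0] + ri * np.cos(theta)
                    y_out = ci[1] + ri * np.sin(theta)
                    # Linear interpolation between the two boundaries
                    px = (1 - t) * x_in + t * x_out
                    py = (1 - t) * y_in + t * y_out
                    xi_px, yi_px = int(round(px)), int(round(py))
                    if 0 <= xi_px < w and 0 <= yi_px < h:
                        samples.append(float(image[yi_px, xi_px]))
            if samples:
                unwrapped[j, k] = np.mean(samples)
    return unwrapped
```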
Subfigure 2.2 demonstrates a high-resolution unwrapping. Note the large eyelid regions at the top and bottom of the image. These are the areas inside the iris circle that are covered by an eyelid. These regions contain no useful data and need to be discarded. One way to do this is to detect the unneeded regions of the image and record the positions of the pixels they contain. Then, when the iris pattern is decoded and compared to another image, only regions that are marked "useful" in both images are considered, as sketched below.
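One way this bookkeeping is commonly realized is with a per-image validity mask alongside the encoded iris pattern, so that the comparison only counts positions usable in both images. The sketch below assumes the iris data has already been reduced to a boolean code and a mask of equal shape; the names and the bit-level encoding are illustrative assumptions, not details taken from this module.

```python
import numpy as np

def masked_hamming_distance(code_a, mask_a, code_b, mask_b):
    """Fraction of disagreeing bits, counting only bits usable in both images.

    code_a, code_b : boolean arrays holding the encoded iris patterns
    mask_a, mask_b : boolean arrays, True where the bit comes from
                     usable (non-eyelid) iris texture
    """
    usable = mask_a & mask_b
    n = np.count_nonzero(usable)
    if n == 0:
        return 1.0  # no usable overlap: treat as a complete mismatch
    disagreements = np.count_nonzero((code_a ^ code_b) & usable)
    return disagreements / n
```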
A less robust method of ignoring the eyelid regions is to extract only the inner 60% of the region between the pupil and iris boundaries. This assumes that any eyelid intruding into this inner region will be detected before unwrapping and the image discarded. While simpler to implement, this method has the drawback that less iris data is available for comparison.
Notice that subfigures 2.2 and 2.3 appear better contrasted than subfigure 2.1. These images have been equalized in contrast to maximize the range of luminance values in the iris image, which makes it numerically easier to encode the iris data. The equalization is done by computing a luminance histogram of the image and stretching its upper and lower boundaries to span the entire range of luminance values, 0-255. Figure 3 demonstrates an example of this process.
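A minimal sketch of this histogram stretching, assuming an 8-bit grayscale input; the small percentile clip used to ignore outlier pixels is our assumption rather than a detail from the module.

```python
import numpy as np

def stretch_contrast(image, low_pct=1.0, high_pct=99.0):
    """Stretch the luminance histogram to span the full 0-255 range."""
    img = image.astype(np.float64)
    # Find the (clipped) lower and upper boundaries of the histogram.
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:
        return image.copy()          # flat image: nothing to stretch
    stretched = (img - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)
```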