The Scale-Invariant Feature Transform (SIFT) is a computer vision algorithm for detecting and describing points of interest in an image.
According to David Lowe's paper "Distinctive Image Features from Scale-Invariant Keypoints" (2004), there are four stages to SIFT:

1. Scale-space extrema detection
2. Keypoint localization
3. Orientation assignment
4. Keypoint descriptor
Our project seeks to achieve scale, rotation, and translation invariance. However, due to time constraints, we did not implement stage 4 of SIFT, the keypoint descriptor.
Apply Gaussian filters of different scales to the image; at different scales the Gaussian filters have different variances. Because of the inherent smoothing property of Gaussian filters, this "smooths" out the image, removing its finer details: at each scale, details of the image that are insignificant compared to the standard deviation of the applied Gaussian filter are removed. The Gaussians are generated using the following formula:

G(x, y, sigma) = (1 / (2 * pi * sigma^2)) * exp(-(x^2 + y^2) / (2 * sigma^2))
Then the image, represented as an array of pixel intensities, is convolved with the Gaussian.
L(x, y, sigma) = G(x, y, sigma) * I(x, y)

L(x, y, sigma) is the value of the resulting image at location (x, y) under the Gaussian filter with standard deviation sigma, and I stands for the original image.
We applied Gaussians with scales 0, 1, and 2 to the image. At scale 0 we essentially preserve the original image; at scales 1 and 2 we "smooth out" the image to an increasing extent. We have 3 octaves of resulting images; each octave consists of images obtained by repeatedly applying the Gaussian filter of the same scale to the original image. After each octave, the image is downsampled by a factor of two.
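The octave construction above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function and parameter names are hypothetical, and it uses SciPy's `gaussian_filter` for the Gaussian convolution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_octaves(image, num_octaves=3, sigmas=(1.0, 2.0)):
    """Build octaves of progressively blurred images.

    Each octave keeps the (downsampled) base image -- the "scale 0"
    level -- plus one Gaussian-blurred copy per sigma.  After each
    octave the base image is downsampled by a factor of two.
    Hypothetical parameter names; the report does not give its code.
    """
    octaves = []
    base = image.astype(float)
    for _ in range(num_octaves):
        levels = [base]  # scale 0: the unblurred base image
        for s in sigmas:
            levels.append(gaussian_filter(base, sigma=s))
        octaves.append(levels)
        base = base[::2, ::2]  # downsample by a factor of two
    return octaves
```

Each call to `gaussian_filter` performs the convolution L = G * I described above for one value of sigma.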
Now we have the image smoothed to different extents, with varying amounts of fine detail preserved in the resulting images. Within each octave, we use the Difference of Gaussians, which simply subtracts neighboring images from each other. The Difference of Gaussians has been proven to be a close approximation of the scale-normalized Laplacian of Gaussian, which has been shown to "produce the most stable image features compared to a range of other possible image functions, such as the gradient, Hessian, or Harris corner function". Moreover, the Difference of Gaussians is efficient to compute, since it only requires subtracting images.
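Since the Difference of Gaussians is just an image subtraction, it is a one-liner once the blurred images exist. A minimal sketch (hypothetical names, assuming SciPy's `gaussian_filter` for the blurring):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigmas=(1.0, 2.0, 4.0)):
    """Blur the image at each sigma, then subtract neighboring
    blurred images within the octave to approximate the
    scale-normalized Laplacian of Gaussian."""
    blurred = [gaussian_filter(image.astype(float), sigma=s) for s in sigmas]
    return [blurred[i + 1] - blurred[i] for i in range(len(blurred) - 1)]
```

N blurred images yield N - 1 difference images; on a perfectly uniform image every difference is zero, since blurring changes nothing.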
Then, for each pixel in a resulting image, we compare it to its eight neighboring pixels in the same image and the nine neighboring pixels in each of the images at the adjacent scales, 26 neighbors in total. The pixel is selected if it is greater than or less than all of its neighbors. The result is a candidate keypoint.
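The neighbor comparison above can be sketched as a check against all 26 neighbors: 8 in the pixel's own difference image and 9 in each adjacent one. This is an illustrative sketch with hypothetical names, where `dog_stack` is a list of same-sized Difference of Gaussians images ordered by scale:

```python
import numpy as np

def is_extremum(dog_stack, s, y, x):
    """Return True if pixel (y, x) in DoG level s is strictly greater
    than, or strictly less than, all 26 of its neighbors: 8 in its own
    level and 9 in each of the two adjacent levels."""
    value = dog_stack[s][y, x]
    below = dog_stack[s - 1][y - 1:y + 2, x - 1:x + 2].ravel()
    same  = dog_stack[s][y - 1:y + 2, x - 1:x + 2].ravel()
    above = dog_stack[s + 1][y - 1:y + 2, x - 1:x + 2].ravel()
    neighbors = np.concatenate([below, same, above])
    # drop the center pixel itself (index 4 within the middle patch)
    neighbors = np.delete(neighbors, 9 + 4)
    return bool(np.all(value > neighbors) or np.all(value < neighbors))
```

Pixels on the image border or in the first and last scale levels lack a full set of 26 neighbors and would be skipped in practice.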
To calculate the magnitude and orientation of each keypoint, we look at all of its neighboring pixels in the image processed at the same scale.
m(x, y) = sqrt( (L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2 )
theta(x, y) = atan2( L(x, y+1) - L(x, y-1), L(x+1, y) - L(x-1, y) )

m(x, y) stands for the gradient magnitude of the point and theta(x, y) stands for the orientation of the point.
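The gradient magnitude and orientation at a keypoint can be computed directly from pixel differences in the smoothed image L, as a small sketch (hypothetical function name; `L` is a 2-D array indexed as `L[y, x]`):

```python
import numpy as np

def gradient_mag_orientation(L, y, x):
    """Gradient magnitude m and orientation theta at pixel (x, y),
    from central differences of the smoothed image L."""
    dx = L[y, x + 1] - L[y, x - 1]
    dy = L[y + 1, x] - L[y - 1, x]
    m = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.arctan2(dy, dx)
    return m, theta
```

On an image that increases linearly along x, the gradient points along x: the central difference gives dx = 2 per unit spacing and dy = 0, so m = 2 and theta = 0.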