While CUDA is an excellent language for maximizing performance, using it may be infeasible due to inexperience with C, time constraints, or the lack of an NVIDIA GPU. To address these constraints, we also implemented our algorithms in Python using the Python OpenCV library. We chose Python because it is very easy for inexperienced programmers to use, and we chose OpenCV because it offers implementations of many common image processing algorithms. Unfortunately, using OpenCV limits us to the library's implementation, which does not currently support GPU parallelization in Python without using C wrappers. For each of the algorithms in this paper, we present code snippets that provide a sample CPU-based Python implementation alongside a GPU-based parallel CUDA implementation.
GPU parallelization is a powerful tool for accelerating image processing tasks. The motivation for our CUDA motion detection and facial recognition algorithms stems from their diverse applications and the ease of coding a GPU implementation, which is due in large part to the simplicity of the CUDA APIs and OpenCV libraries. For example, motion detection is applied in real-time traffic analysis, vision-based security, and consumer technology (e.g., motion-triggered home automation solutions). Further, since our computer vision algorithm calculates the location of motion in a frame based on detected edges, our approach is both more accurate and supplemented with more specific data. Facial recognition is applied in real-time security, smart photographic analysis, and military applications, including target-based missile guidance systems. We therefore saw an opportunity to use CUDA to explore the speedup possible by using a GPU to accelerate these common tasks in computer vision.
Edge detection is the process of determining the locations of boundaries that separate objects, or sections of objects, in an image. This edge data simplifies the features of an image to only its basic outlines, which makes it a convenient input for many other problems in computer vision, including motion detection and tracking, object classification, three-dimensional reconstruction, and others. The edge detection algorithm we present is composed of three stages: (1) smoothing the image with a blur filter to suppress noise, (2) filtering the smoothed image with a gradient kernel to estimate the gradient magnitude at each pixel, and (3) thresholding the gradient magnitude to identify edge pixels.
Edges correspond to the points on the image that exhibit a high gradient magnitude. The edge data from our algorithm is then used as input to our motion detection algorithm.
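As a concrete illustration of this pipeline, the following is a minimal CPU-based Python/OpenCV sketch of the three stages. The specific choices here, a Gaussian blur, Sobel gradient kernels, and a fixed threshold value, are illustrative assumptions rather than the exact filters and parameters of our implementation.

import cv2
import numpy as np

def detect_edges(image_path, threshold=100.0):
    """Three-stage edge detection sketch: blur, gradient, threshold (assumed filters)."""
    # Load the frame as a single-channel grayscale image.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Stage 1: smooth the image to suppress noise before differentiation
    # (a 5x5 Gaussian kernel is an assumed choice).
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)

    # Stage 2: estimate horizontal and vertical gradients with Sobel kernels
    # and combine them into a per-pixel gradient magnitude.
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)

    # Stage 3: keep only the pixels whose gradient magnitude exceeds the
    # threshold; these high-gradient points are the detected edges.
    edges = (magnitude > threshold).astype(np.uint8) * 255
    return edges

The binary edge map returned by this sketch is the same kind of edge data that our motion detection algorithm consumes as input.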
The filtering necessary in steps (1) and (2) of our algorithm above involves the convolution of a two-dimensional image with a two-dimensional kernel filter of constant width and height. A naïve implementation of two-dimensional convolution would require a double sum across the kernel and image for each individual pixel:

$$(I * K)(x, y) = \sum_{i} \sum_{j} K(i, j)\, I(x - i,\, y - j)$$
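To make the cost of this direct approach concrete, the following NumPy sketch evaluates the double sum explicitly for every output pixel; the function name, the zero padding at the borders, and the kernel flip (which distinguishes true convolution from correlation) are our own illustrative choices.

import numpy as np

def naive_convolve_2d(image, kernel):
    """Direct 2D convolution: a double sum over the kernel for every output pixel."""
    img_h, img_w = image.shape
    k_h, k_w = kernel.shape
    pad_h, pad_w = k_h // 2, k_w // 2

    # Flip the kernel so the double sum implements convolution rather than correlation.
    flipped = kernel[::-1, ::-1]

    # Zero-pad the image so the kernel can be centered on border pixels
    # (an assumed choice of boundary handling).
    padded = np.zeros((img_h + 2 * pad_h, img_w + 2 * pad_w), dtype=np.float64)
    padded[pad_h:pad_h + img_h, pad_w:pad_w + img_w] = image

    output = np.zeros((img_h, img_w), dtype=np.float64)
    for y in range(img_h):
        for x in range(img_w):
            # Double sum across the kernel for this single output pixel.
            acc = 0.0
            for i in range(k_h):
                for j in range(k_w):
                    acc += flipped[i, j] * padded[y + i, x + j]
            output[y, x] = acc
    return output

For an N x N image and a K x K kernel this performs on the order of N²K² multiply-accumulate operations in serial; because each output pixel's sum is independent of every other pixel's, the computation is a natural candidate for GPU parallelization.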