To facilitate on-the-fly sound analysis with demanding DSP algorithms, we implemented a parallel, three-stage pipelined sound engine that allows sound events to be time-correlated with in-game events. This pipelined implementation allows chunks of the song to be analyzed at the same time as new chunks are loaded from the input device and already-analyzed chunks are played. This gives the analyzer ample time to perform its work, as long as analyzing a chunk takes less time than that chunk takes to play. Because the three stages (the Loader, the Analyzer, and the Player) each run in their own thread, the system can take advantage of multiprocessor hardware such as the multi-core processors found in most of today's computers.
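The sketch below illustrates one way the three threads could be wired together. It is not the project's actual code: the class name, chunk size, and the use of SynchronousQueue as the hand-off point are assumptions, standing in for the adapter with a loadBuffer method described in the next paragraph.

```java
import java.util.concurrent.SynchronousQueue;

public class PipelineSketch {
    public static void main(String[] args) {
        // Hand-off points between stages; the engine wraps these behind
        // adapters exposing a loadBuffer method, discussed below.
        SynchronousQueue<float[]> loadToAnalyze = new SynchronousQueue<>();
        SynchronousQueue<float[]> analyzeToPlay = new SynchronousQueue<>();

        // Loader stage: reads the next chunk of samples from the input device.
        Thread loader = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    float[] chunk = new float[44100];   // placeholder: read one chunk
                    loadToAnalyze.put(chunk);           // blocks until the analyzer is ready
                }
            } catch (InterruptedException e) { /* pipeline shutting down */ }
        }, "loader");

        // Analyzer stage: runs the DSP on one chunk while other chunks load and play.
        Thread analyzer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    float[] chunk = loadToAnalyze.take();
                    // ... beat/feature analysis would run here ...
                    analyzeToPlay.put(chunk);
                }
            } catch (InterruptedException e) { /* pipeline shutting down */ }
        }, "analyzer");

        // Player stage: plays analyzed chunks; its pace throttles the whole pipeline.
        Thread player = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    float[] chunk = analyzeToPlay.take();
                    // ... playback of the analyzed chunk would happen here ...
                }
            } catch (InterruptedException e) { /* pipeline shutting down */ }
        }, "player");

        loader.start();
        analyzer.start();
        player.start();
    }
}
```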
Each stage communicates with the next via an adapter with a loadBuffer method. This method blocks the current stage of the pipeline until the next stage is ready to receive the buffer. In this way, the pipeline fills up as much as possible while the player stage is processing its buffer, and the player indirectly controls the speed of the whole pipeline through this loading method. Great care was taken to minimize the time each buffer transfer takes. Rather than copying the buffer from one stage to the next, each stage simply passes control of its buffer to the next stage and allocates an entirely new buffer for itself. This allocation is much faster than copying the data, although a poor allocator can use more memory than necessary. Finally, whenever a stage blocks waiting for the next stage to finish processing or for new data to arrive, it relinquishes the processor, allowing the other stages to run at full speed.
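One way such an adapter's blocking loadBuffer could look is sketched below. This is an assumption rather than the project's actual class: a zero-capacity SynchronousQueue makes loadBuffer block the upstream stage until the downstream stage takes the buffer, and only the reference changes hands, so no sample data is copied.

```java
import java.util.concurrent.SynchronousQueue;

public class BufferAdapter {
    private final SynchronousQueue<float[]> slot = new SynchronousQueue<>();

    // Called by the upstream stage; blocks until the downstream stage is ready.
    public void loadBuffer(float[] buffer) throws InterruptedException {
        slot.put(buffer);   // transfer ownership of the buffer, no copy
    }

    // Called by the downstream stage; blocks until a buffer has been offered.
    public float[] takeBuffer() throws InterruptedException {
        return slot.take();
    }

    // Tiny demo of the ownership hand-off: after loadBuffer returns, the
    // upstream stage allocates a brand-new buffer instead of copying the
    // old one back, trading a little allocation for a very cheap transfer.
    public static void main(String[] args) throws InterruptedException {
        BufferAdapter adapter = new BufferAdapter();

        Thread downstream = new Thread(() -> {
            try {
                float[] received = adapter.takeBuffer();
                System.out.println("received buffer of length " + received.length);
            } catch (InterruptedException ignored) { }
        });
        downstream.start();

        float[] chunk = new float[4096];  // placeholder chunk size
        adapter.loadBuffer(chunk);        // blocks until the downstream thread takes it
        chunk = new float[4096];          // allocate a fresh buffer for the next chunk
        downstream.join();
    }
}
```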