Musical Signal Processing with LabVIEW
Additive synthesis
-
Additive Synthesis Concepts --
Additive synthesis creates complex sounds by adding together individual sinusoidal signals called partials. A partial's frequency and amplitude are each time-varying functions, so a partial is a more flexible version of the harmonic associated with a Fourier series decomposition of a periodic waveform. Learn about partials, how to model the timbre of natural instruments, various sources of control information for partials, and how to make a sinusoidal oscillator with an instantaneous frequency that varies with time.
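As a rough sketch of the idea (in Python rather than LabVIEW, with function names and trajectories invented for illustration), an oscillator with time-varying instantaneous frequency can accumulate phase sample by sample, and additive synthesis simply sums several such partials:

```python
import math

def oscillator(freqs, amps, fs=44100.0):
    # Phase accumulation keeps the waveform continuous even as the
    # instantaneous frequency changes from sample to sample.
    phase, out = 0.0, []
    for f, a in zip(freqs, amps):
        out.append(a * math.sin(phase))
        phase += 2.0 * math.pi * f / fs
    return out

def additive(partials, fs=44100.0):
    # Each partial is a (frequency trajectory, amplitude trajectory) pair;
    # the output is the sample-by-sample sum of all partials.
    voices = [oscillator(f, a, fs) for f, a in partials]
    return [sum(samples) for samples in zip(*voices)]
```

Because the phase is accumulated rather than computed as `2*pi*f*t`, a sweep of the frequency trajectory produces a smooth glissando with no discontinuities.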
-
Additive Synthesis Techniques --
Learn how to synthesize audio waveforms by designing the frequency and amplitude trajectories of partials. LabVIEW programming techniques for additive synthesis are also introduced in two examples.
-
Mini-Project: Risset Bell Synthesis --
Use additive synthesis to emulate the sound of a bell using a technique described by Jean-Claude Risset, an early pioneer in computer music.
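A minimal Python sketch of the bell-like approach: several inharmonic partials, each with its own exponential decay. The ratios, amplitudes, and decay rates below are illustrative placeholders, not Risset's published partial table; the essential idea is that inharmonic frequency ratios plus independent decays produce a metallic, bell-like timbre.

```python
import math

def bell_tone(f0=440.0, dur=2.0, fs=8000.0):
    # (frequency ratio, relative amplitude, decay rate in 1/s) per partial.
    # Placeholder values chosen for illustration only.
    partials = [(0.56, 1.0, 1.0), (0.92, 0.8, 1.3), (1.19, 0.6, 1.6),
                (1.71, 0.5, 2.0), (2.74, 0.3, 2.6), (3.76, 0.2, 3.2)]
    n = int(dur * fs)
    out = [0.0] * n
    for ratio, amp, decay in partials:
        w = 2.0 * math.pi * ratio * f0 / fs
        for i in range(n):
            out[i] += amp * math.exp(-decay * i / fs) * math.sin(w * i)
    return out
```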
-
Mini-Project: Spectrogram Art --
Create an oscillator whose output tracks a specified amplitude and frequency trajectory, and then define multiple frequency/amplitude trajectories that can be combined to create complex sounds.
Learn how to design the sound so that its spectrogram makes a recognizable picture.
Subtractive synthesis
-
Subtractive Synthesis Concepts --
Subtractive synthesis describes a wide range of synthesis techniques that apply a filter (usually time-varying) to a wideband excitation source such as noise or a pulse train. The filter shapes the wideband spectrum into the desired spectrum. This excitation/filter technique models many types of physical instruments and the human voice well. Excitation sources and time-varying digital filters are introduced in this module.
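A minimal Python sketch of the excitation/filter idea (function name and parameter values are invented for illustration): white noise driven through a one-pole lowpass filter whose cutoff frequency sweeps over time.

```python
import math
import random

def swept_lowpass_noise(n=8000, fs=8000.0):
    random.seed(0)
    x = [random.uniform(-1.0, 1.0) for _ in range(n)]   # wideband excitation
    y_prev, out = 0.0, []
    for i, xn in enumerate(x):
        fc = 200.0 + 2000.0 * i / n         # sweep cutoff from 200 to 2200 Hz
        a = math.exp(-2.0 * math.pi * fc / fs)  # one-pole filter coefficient
        y_prev = (1.0 - a) * xn + a * y_prev    # y[n] = (1-a)x[n] + a*y[n-1]
        out.append(y_prev)
    return out
```

The sweep of the cutoff is what makes the result sound "synthesized": the time-varying filter, not the excitation, carries the musical gesture.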
-
Interactive Time-Varying Digital Filter in LabVIEW --
A time-varying digital filter can easily be implemented in LabVIEW, and this module demonstrates the complete process necessary to develop a digital filter that operates in real-time and responds to parameter changes from the front panel controls. An audio demonstration of the finished result includes discussion of practical issues such as eliminating click noise in the output signal.
-
Band-Limited Pulse Generator --
Subtractive synthesis techniques often require a wideband excitation source such as a pulse train to drive a time-varying digital filter. Traditional rectangular pulses have theoretically infinite bandwidth, and therefore always introduce aliasing noise into the input signal. A band-limited pulse (BLP) source is free of aliasing problems, and is more suitable for subtractive synthesis algorithms. The mathematics of the band-limited pulse is presented, and a LabVIEW VI is developed to implement the BLP source. An audio demonstration is included.
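The mathematics reduces to a sum of cosine harmonics truncated below the Nyquist frequency. A Python sketch (the function name is an illustrative choice, not from the module):

```python
import math

def band_limited_pulse(f0, fs, n):
    # Include every harmonic of f0 up to fs/2, so the pulse train is
    # band-limited by construction and cannot alias.
    K = int((fs / 2.0) // f0)       # number of harmonics below Nyquist
    out = []
    for i in range(n):
        t = i / fs
        s = sum(math.cos(2.0 * math.pi * k * f0 * t) for k in range(1, K + 1))
        out.append(s / K)           # normalize so the pulse peak is 1.0
    return out
```

At every multiple of the period 1/f0 all K cosines align at value 1, producing the pulse; between pulses they largely cancel.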
-
Formant (Vowel) Synthesis --
Speech and singing contain a mixture of voiced and unvoiced sounds (such as the sibilant "s"). The spectrum of a voiced sound contains characteristic resonant peaks called formants, caused by the frequency shaping of the vocal tract. In this module, a formant synthesizer is developed and implemented in LabVIEW. The filter is implemented as a set of parallel two-pole resonators (bandpass filters) that filter a band-limited pulse source.
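A Python sketch of the parallel-resonator structure (the formant frequencies and bandwidths below are illustrative values for an "ah"-like vowel, not taken from the module):

```python
import math

def resonator(x, fc, bw, fs):
    # Two-pole bandpass: poles at radius r and angles +/- 2*pi*fc/fs.
    r = math.exp(-math.pi * bw / fs)
    a1 = -2.0 * r * math.cos(2.0 * math.pi * fc / fs)
    a2 = r * r
    b0 = 1.0 - r                    # rough gain normalization
    y1 = y2 = 0.0
    out = []
    for xn in x:
        y = b0 * xn - a1 * y1 - a2 * y2
        y2, y1 = y1, y
        out.append(y)
    return out

def vowel_filter(x, fs, formants=((700.0, 90.0), (1100.0, 110.0), (2500.0, 140.0))):
    # Parallel bank: each (center frequency, bandwidth) resonator shapes
    # one formant; the branch outputs are summed.
    banks = [resonator(x, fc, bw, fs) for fc, bw in formants]
    return [sum(s) for s in zip(*banks)]
```

In the full synthesizer the input `x` would be a band-limited pulse train at the desired voice pitch rather than an arbitrary signal.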
-
Linear Prediction and Cross Synthesis --
Linear predictive coding (LPC) models a speech signal as a time-varying filter driven by an excitation signal. The time-varying filter coefficients model the vocal tract spectral envelope. "Cross synthesis" is an interesting special effect in which a musical instrument signal drives the digital filter (or vocal tract model), producing the sound of a "singing instrument." The theory and implementation of linear prediction are presented in this module.
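The core computation can be sketched in Python: estimate the autocorrelation of a signal frame, then solve for the predictor coefficients with the Levinson-Durbin recursion (a standard method for this problem, though the module itself may present the derivation differently):

```python
def autocorrelation(x, p):
    # r[k] for lags 0..p of one analysis frame.
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(p + 1)]

def levinson_durbin(r):
    # Solve the normal equations for predictor coefficients a[1..p],
    # where x[n] is approximated by -sum(a[k] * x[n-k]); returns the
    # coefficient vector (a[0] = 1) and the residual prediction error.
    p = len(r) - 1
    a = [1.0] + [0.0] * p
    err = r[0]
    for i in range(1, p + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                     # reflection coefficient
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
    return a, err
```

For cross synthesis, the coefficients estimated from each speech frame define an all-pole filter that is then driven by the instrument signal instead of the LPC residual.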
-
Mini-Project: Linear Prediction and Cross Synthesis --
Linear prediction is a method used to estimate a time-varying filter, often as a model of a vocal tract. Musical applications of linear prediction substitute various signals as excitation sources for the time-varying filter. This mini-project guides you to develop the basic technique for computing and applying a time-varying filter in LabVIEW. After experimenting with different excitation sources and linear prediction model parameters, you will develop a VI to cross-synthesize a speech signal and a musical signal.
-
Karplus-Strong Plucked String Algorithm --
The Karplus-Strong plucked string algorithm produces remarkably realistic tones with modest computational effort. The algorithm requires a delay line and lowpass filter arranged in a closed loop, which can be implemented as a single digital filter. The filter is driven by a burst of white noise to initiate the sound of the plucked string. Learn about the Karplus-Strong algorithm and how to implement it as a LabVIEW "virtual musical instrument" (VMI) to be played from a MIDI file using "MIDI JamSession."
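The loop described above can be sketched in a few lines of Python (parameter choices are illustrative):

```python
import random

def karplus_strong(freq, dur, fs=8000):
    # The loop delay (buffer length) sets the pitch: about fs/freq samples.
    n_delay = int(fs / freq)
    random.seed(0)
    buf = [random.uniform(-1.0, 1.0) for _ in range(n_delay)]  # noise burst
    mean = sum(buf) / n_delay
    buf = [v - mean for v in buf]   # remove DC so the tone fully decays
    out = []
    for _ in range(int(dur * fs)):
        first = buf.pop(0)
        out.append(first)
        # Two-point averaging is the lowpass filter in the feedback loop:
        # high frequencies decay faster, as on a real plucked string.
        buf.append(0.5 * (first + buf[0]))
    return out
```

Each trip around the loop re-filters the circulating noise, so the bright attack mellows into a nearly sinusoidal decay at the string's pitch.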
-
Karplus-Strong Plucked String Algorithm with Improved Pitch Accuracy --
The basic Karplus-Strong plucked string algorithm must be modified with a continuously adjustable loop delay to produce an arbitrary pitch with high accuracy. An all-pass filter provides a continuously adjustable fractional delay, and is an ideal device to insert into the closed loop. The delay characteristics of both the lowpass and all-pass filters are explored, and the modified digital filter coefficients are derived. The filter is then implemented as a LabVIEW "virtual musical instrument" (VMI) to be played from a MIDI file using "MIDI JamSession."
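A first-order all-pass section with coefficient C = (1 - d)/(1 + d) approximates a fractional delay of d samples at low frequencies while keeping unity gain at all frequencies. A Python sketch of the standalone section (the coefficient formula is the standard low-frequency approximation; the function name is an illustrative choice):

```python
def allpass_fractional_delay(x, d):
    # y[n] = C*x[n] + x[n-1] - C*y[n-1], with C chosen so the phase delay
    # is approximately d samples (0 < d < 1) at low frequencies.
    C = (1.0 - d) / (1.0 + d)
    x1 = y1 = 0.0
    out = []
    for xn in x:
        y = C * xn + x1 - C * y1
        out.append(y)
        x1, y1 = xn, y
    return out
```

Inserted into the Karplus-Strong loop, this section tops up the integer delay line so the total loop delay, and hence the pitch, can be tuned continuously.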
Sound spatialization
-
Reverberation --
Reverberation is a property of concert halls that greatly adds to the enjoyment of a musical performance. Sound waves propagate directly from the stage to the listener, and also reflect from the floor, walls, ceiling, and back wall of the stage to create myriad copies of the direct sound that are time-delayed and reduced in intensity. In this module, learn about the concept of reverberation in more detail and ways to emulate reverberation using a digital filter structure known as a comb filter.
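A feedback comb filter captures this pattern of delayed, attenuated copies. A minimal Python sketch:

```python
def comb_filter(x, delay, g):
    # y[n] = x[n] + g * y[n - delay]: each echo returns `delay` samples
    # later, attenuated by g (|g| < 1 gives a decaying reverberant tail).
    buf = [0.0] * delay    # circular buffer of the last `delay` outputs
    out = []
    for i, xn in enumerate(x):
        y = xn + g * buf[i % delay]
        buf[i % delay] = y
        out.append(y)
    return out
```

An impulse into this filter produces echoes at multiples of the delay, each g times quieter than the last, which is the simplest model of a room's decaying reflections.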
-
Schroeder Reverberator --
The Schroeder reverberator uses parallel comb filters followed by cascaded all-pass filters to produce an impulse response that closely resembles a physical reverberant environment. Learn how to implement the Schroeder reverberator block diagram as a digital filter in LabVIEW, and apply the filter to an audio .wav file.
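A compact Python sketch of the structure (the delay lengths and gains below are illustrative, not Schroeder's published values):

```python
def comb(x, m, g):
    # Feedback comb: y[n] = x[n] + g * y[n - m].
    buf = [0.0] * m
    out = []
    for i, xn in enumerate(x):
        y = xn + g * buf[i % m]
        buf[i % m] = y
        out.append(y)
    return out

def allpass(x, m, g):
    # Schroeder all-pass section: y[n] = -g*x[n] + x[n-m] + g*y[n-m].
    xbuf, ybuf = [0.0] * m, [0.0] * m
    out = []
    for i, xn in enumerate(x):
        y = -g * xn + xbuf[i % m] + g * ybuf[i % m]
        xbuf[i % m], ybuf[i % m] = xn, y
        out.append(y)
    return out

def schroeder_reverb(x):
    # Parallel combs (differing delays avoid coincident echoes),
    # then series all-pass sections to increase echo density.
    wet = [0.0] * len(x)
    for m, g in [(1051, 0.70), (1123, 0.72), (1289, 0.74), (1307, 0.76)]:
        for i, v in enumerate(comb(x, m, g)):
            wet[i] += v
    for m, g in [(225, 0.7), (556, 0.7)]:
        wet = allpass(wet, m, g)
    return wet
```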
-
Localization Cues --
Learn about two localization cues called interaural intensity difference (IID) and interaural timing difference (ITD), and learn how to create a LabVIEW implementation that places a virtual sound source in a stereo sound field.
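A minimal Python sketch of both cues (the gain curve and the 0.7 ms maximum delay are simplistic placeholder models, not the module's formulas): the ear farther from the source receives a delayed (ITD) and attenuated (IID) copy.

```python
import math

def spatialize(x, azimuth_deg, fs=44100):
    # azimuth: -90 (hard left) .. 0 (center) .. +90 (hard right).
    az = math.radians(azimuth_deg)
    itd = int(abs(math.sin(az)) * 0.0007 * fs)   # ~0.7 ms max interaural delay
    far_gain = 0.5 + 0.5 * math.cos(az)          # crude intensity difference
    delayed = ([0.0] * itd + x[:len(x) - itd]) if itd else list(x)
    far = [far_gain * v for v in delayed]        # far-ear channel
    near = list(x)                               # near-ear channel
    return (far, near) if azimuth_deg >= 0 else (near, far)   # (left, right)
```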
Source:
OpenStax, Musical Signal Processing with LabVIEW (all modules). OpenStax CNX, Jan 05, 2010. Download for free at http://cnx.org/content/col10507/1.3