Show that the simple binary coding is
inefficient.
Find an unequal-length codebook for this sequence
that satisfies the Source Coding Theorem. Does your code achieve the
entropy limit?
How much more efficient is this code than the
simple binary code?
Source compression
Consider the following 5-letter source.
Letter   Probability
  a         0.4
  b         0.2
  c         0.15
  d         0.15
  e         0.1
Find this source's entropy.
Show that the simple binary coding is
inefficient.
Find the Huffman code for this source.
What is its average code length?
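As a check on the entropy and average-length parts above, a short Python sketch (probabilities hard-coded from the table; the heap-based tree construction is one standard way to build a Huffman code) computes both quantities:

```python
import heapq
from math import log2

# Probabilities from the five-letter source table.
probs = {"a": 0.4, "b": 0.2, "c": 0.15, "d": 0.15, "e": 0.1}

# Entropy in bits/letter: H = -sum of p*log2(p).
H = -sum(p * log2(p) for p in probs.values())

# Build a Huffman tree with a min-heap; each entry is
# (probability, list of symbols under that node).
heap = [(p, [s]) for s, p in probs.items()]
heapq.heapify(heap)
lengths = {s: 0 for s in probs}
while len(heap) > 1:
    p1, g1 = heapq.heappop(heap)
    p2, g2 = heapq.heappop(heap)
    for s in g1 + g2:      # every symbol under the merged node
        lengths[s] += 1    # sinks one level deeper, gaining one code bit
    heapq.heappush(heap, (p1 + p2, g1 + g2))

avg_len = sum(probs[s] * lengths[s] for s in probs)
print(f"entropy = {H:.3f} bits/letter")
print(f"Huffman average length = {avg_len:.2f} bits/letter (simple binary needs 3)")
```

Any valid Huffman tie-breaking yields the same optimal average length, so the comparison against the 3-bit simple binary code does not depend on how equal probabilities are ordered.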
Speech compression
When we sample a signal, such as speech, we quantize the signal's
amplitude to a set of integers. For a b-bit converter, signal
amplitudes are represented by 2^b integers. Although these integers
could be represented by a binary code for digital transmission, we
should consider whether a Huffman coding would be more efficient.
Load into Matlab the segment of speech contained
in
y.mat. Its sampled values lie in the interval (-1, 1). To simulate a
3-bit converter, we use Matlab's round function to create quantized
amplitudes corresponding to the integers [0 1 2 3 4 5 6 7].
y_quant = round(3.5*y + 3.5);
Find the relative frequency of occurrence of quantized amplitude
values. The following Matlab program computes the number of times each
quantized value occurs.
for n = 0:7
    count(n+1) = sum(y_quant == n);
end
Find the entropy of this source.
Find the Huffman code for this source. How would you characterize
this source code in words?
How many fewer bits would be used in transmitting this speech segment
with your Huffman code in comparison to simple binary coding?
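The whole pipeline can be rehearsed in Python. The speech file y.mat is not reproduced here, so the sketch below quantizes a synthetic stand-in signal in (-1, 1); only the quantization rule and the counting loop mirror the Matlab steps above:

```python
import numpy as np

# y.mat is not reproduced here; a synthetic signal in (-1, 1) stands in.
rng = np.random.default_rng(0)
y = 0.99 * np.sin(2 * np.pi * np.linspace(0, 5, 1000)) * rng.uniform(0.2, 1.0, 1000)

# 3-bit quantizer: map (-1, 1) onto the integers 0..7, as in the Matlab line.
y_quant = np.round(3.5 * y + 3.5).astype(int)

# Count occurrences of each quantized amplitude, as in the Matlab loop.
count = np.array([(y_quant == n).sum() for n in range(8)])
p = count / count.sum()

# Entropy in bits/sample, skipping empty bins (0*log 0 is taken as 0).
nz = p[p > 0]
H = -(nz * np.log2(nz)).sum()
print(f"H = {H:.3f} bits/sample; simple binary coding uses 3 bits/sample")
```

With the real y.mat loaded in place of the synthetic signal, the same counts feed the entropy and Huffman-code parts of the problem directly.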
Digital communication
In a digital cellular system, a signal bandlimited to 5 kHz is sampled
with a two-bit A/D converter at its Nyquist frequency. The sample
values are found to have the relative frequencies shown.
Sample Value   Probability
     0            0.15
     1            0.35
     2            0.3
     3            0.2
We send the bit stream consisting of Huffman-coded samples using one
of the two depicted signal sets.
What is the data rate of the compressed source?
Which choice of signal set maximizes the
communication system's performance?
With no error-correcting coding, what signal-to-noise ratio would be
needed for your chosen signal set to guarantee that the bit error
probability will not exceed
?
If the receiver moves twice as far from the transmitter (relative to
the distance at which the error rate was obtained), how does the
performance change?
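For the data-rate part, a Python check (sampling rate taken as twice the 5 kHz bandwidth per the Nyquist condition; probabilities from the table) estimates the compressed rate from the Huffman average code length:

```python
import heapq
from math import log2

probs = {0: 0.15, 1: 0.35, 2: 0.3, 3: 0.2}   # sample-value probabilities
fs = 2 * 5000                                 # Nyquist sampling rate in Hz

# Entropy in bits/sample.
H = -sum(p * log2(p) for p in probs.values())

# Huffman code lengths via a min-heap of (probability, symbols-under-node).
heap = [(p, [s]) for s, p in probs.items()]
heapq.heapify(heap)
depth = {s: 0 for s in probs}
while len(heap) > 1:
    p1, g1 = heapq.heappop(heap)
    p2, g2 = heapq.heappop(heap)
    for s in g1 + g2:
        depth[s] += 1
    heapq.heappush(heap, (p1 + p2, g1 + g2))

avg_len = sum(probs[s] * depth[s] for s in probs)
rate = fs * avg_len   # bits/second after compression
print(f"H = {H:.3f} bits/sample, average length = {avg_len:.2f} bits, rate = {rate:.0f} bps")
```

Note the average code length, not the entropy, sets the actual transmitted bit rate; the entropy only bounds it from below.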
Signal compression
Letters drawn from a four-symbol alphabet have the
indicated probabilities.
Letter   Probability
  a        1/3
  b        1/3
  c        1/4
  d        1/12
What is the average number of bits necessary to
represent this alphabet?
Using a simple binary code for this alphabet, a two-bit block of data
bits naturally emerges. Find an error correcting code for two-bit data
blocks that corrects all single-bit errors.
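One way to sanity-check a candidate answer: a linear code corrects all single-bit errors exactly when its minimum Hamming distance is at least 3. The generator matrix below is an assumed (5,2) example, not the unique answer; the sketch verifies the distance condition over all codeword pairs:

```python
from itertools import product

# Assumed example generator for a (5,2) code: two data bits, three parity bits.
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]

def encode(d):
    """Multiply the data row-vector d by G over GF(2)."""
    return tuple(sum(d[i] * G[i][j] for i in range(2)) % 2 for j in range(5))

codewords = [encode(d) for d in product([0, 1], repeat=2)]

# Minimum distance between distinct codewords; d_min >= 3 corrects 1 error.
d_min = min(sum(a != b for a, b in zip(c1, c2))
            for c1 in codewords for c2 in codewords if c1 != c2)
print(f"codewords = {codewords}, d_min = {d_min}")
```

The same loop works for any candidate generator matrix, so it doubles as a checker for the modified code asked for in the next part.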
How would you modify your code so that the probability of the letter
being confused with the letter
is minimized? If this can be done, what is your new code; if not,
demonstrate that this goal cannot be achieved.
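For the first part of this problem, a quick Python check (probabilities taken from the table) computes the entropy, the lower bound on the average number of bits per letter guaranteed by the Source Coding Theorem:

```python
from fractions import Fraction
from math import log2

# Probabilities from the four-symbol alphabet table.
probs = [Fraction(1, 3), Fraction(1, 3), Fraction(1, 4), Fraction(1, 12)]
assert sum(probs) == 1   # sanity check: the table is a valid distribution

# Entropy in bits/letter.
H = -sum(float(p) * log2(float(p)) for p in probs)
print(f"H = {H:.3f} bits/letter")
```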