Scalar quantization is a process that maps all inputs within a specified range to a common value, and maps inputs in a different range to a different common value. In effect, scalar quantization digitizes an analog signal. Two parameters determine a quantization: a partition and a codebook.
A quantization partition defines several contiguous, nonoverlapping ranges of values within the set of real numbers. To specify a partition in the MATLAB® environment, list the distinct endpoints of the different ranges in a vector.
For example, if the partition separates the real number line into the four sets
{x: x ≤ 0}
{x: 0< x ≤ 1}
{x: 1 < x ≤ 3}
{x: 3 < x}
then you can represent the partition as the three-element vector
partition = [0,1,3];
The length of the partition vector is one less than the number of partition intervals.
A codebook tells the quantizer which common value to assign to inputs that fall into each range of the partition. Represent a codebook as a vector whose length is the same as the number of partition intervals. For example, the vector
codebook = [-1, 0.5, 2, 3];
is one possible codebook for the partition [0,1,3].
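Because each partition interval needs its own codebook entry, a quick length check can catch mismatches before you quantize. This check is a small added sketch, not part of the original example.
% A codebook has exactly one more element than its partition vector.
partition = [0,1,3];
codebook = [-1, 0.5, 2, 3];
isequal(length(codebook),length(partition)+1) % returns logical 1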
The quantiz function also returns a vector that tells which interval each input is in. For example, the output below says that the input entries lie within the intervals labeled 0, 6, and 5, respectively. Here, the 0th interval consists of real numbers less than or equal to 3; the 6th interval consists of real numbers greater than 8 but less than or equal to 9; and the 5th interval consists of real numbers greater than 7 but less than or equal to 8.
partition = [3,4,5,6,7,8,9];
index = quantiz([2 9 8],partition)
The output is
index =
     0     6     5
If you continue this example by defining a codebook vector such as
codebook = [3,3,4,5,6,7,8,9];
then the equation below relates the vector index to the quantized signal quants.
quants = codebook(index+1);
This formula for quants is exactly what the quantiz function uses if you instead phrase the example more concisely, as below.
partition = [3,4,5,6,7,8,9];
codebook = [3,3,4,5,6,7,8,9];
[index,quants] = quantiz([2 9 8],partition,codebook);
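To confirm that both phrasings agree, you can compare the two results directly (a small added check, using the variables from the code above).
isequal(quants,codebook(index+1)) % returns logical 1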
Quantization distorts a signal. You can reduce distortion by choosing appropriate partition and codebook parameters. However, testing and selecting parameters for large signal sets with a fine quantization scheme can be tedious. One way to produce partition and codebook parameters easily is to optimize them according to a set of so-called training data.
Note
The training data you use should be typical of the kinds of signals you will actually be quantizing.
The lloyds function optimizes the partition and codebook according to the Lloyd algorithm. The code below optimizes the partition and codebook for one period of a sinusoidal signal, starting from a rough initial guess. Then it uses these parameters to quantize the original signal using the initial guess parameters as well as the optimized parameters. The output shows that the mean square distortion after quantizing is much less for the optimized parameters. The quantiz function automatically computes the mean square distortion and returns it as the third output parameter.
% Start with the setup from the 2nd example in "Quantizing a Signal."
t = [0:.1:2*pi];
sig = sin(t);
partition = [-1:.2:1];
codebook = [-1.2:.2:1];
% Now optimize, using codebook as an initial guess.
[partition2,codebook2] = lloyds(sig,codebook);
[index,quants,distor] = quantiz(sig,partition,codebook);
[index2,quant2,distor2] = quantiz(sig,partition2,codebook2);
% Compare mean square distortions from initial and optimized parameters.
[distor, distor2]
The output is
ans =
    0.0148    0.0024
The quantization in the section Quantize a Signal requires no a priori knowledge about the transmitted signal. In practice, you can often make educated guesses about the present signal based on past signal transmissions. Using such educated guesses to help quantize a signal is known as predictive quantization. The most common predictive quantization method is differential pulse code modulation (DPCM).
The functions dpcmenco, dpcmdeco, and dpcmopt can help you implement a DPCM predictive quantizer with a linear predictor.
To determine an encoder for such a quantizer, you must supply not only a partition and codebook as described in Represent Partitions and Represent Codebooks, but also a predictor. The predictor is a function that the DPCM encoder uses to produce the educated guess at each step. A linear predictor has the form
y(k) = p(1)x(k-1) + p(2)x(k-2) + ... + p(m-1)x(k-m+1) + p(m)x(k-m)
where x is the original signal, y(k) attempts to predict the value of x(k), and p is an m-tuple of real numbers. Instead of quantizing x itself, the DPCM encoder quantizes the predictive error, x-y. The integer m above is called the predictive order. The special case when m = 1 is called delta modulation.
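To make the predictive error concrete, the sketch below computes x-y by hand for the delta modulation case, m = 1 with p(1) = 1. It applies the predictor to the raw signal purely for illustration; the dpcmenco encoder actually forms its predictions from quantized reconstructed values.
% Delta modulation: predict each sample by the previous one.
x = [0.2 0.5 0.9 0.4]; % example signal (assumed values)
y = [0 x(1:end-1)]; % y(k) = x(k-1), taking y(1) = 0
e = x - y % predictive error that the quantizer sees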
If the guess for the kth value of the signal x, based on earlier values of x, is
y(k) = p(1)x(k-1) + p(2)x(k-2) + ... + p(m-1)x(k-m+1) + p(m)x(k-m)
then the corresponding predictor vector for toolbox functions is
predictor = [0, p(1), p(2), p(3), ..., p(m-1), p(m)]
Note
The initial zero in the predictor vector makes sense if you view the vector as the polynomial transfer function of a finite impulse response (FIR) filter.
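One way to see the FIR interpretation is with the filter function (an illustrative sketch, not toolbox code): the predictor vector supplies the numerator coefficients of the transfer function p(1)z^(-1) + ... + p(m)z^(-m). As above, this applies the predictor to the raw signal, whereas the DPCM encoder predicts from reconstructed values.
% The predictor vector acts as FIR filter coefficients.
predictor = [0 1]; % transfer function 0 + 1*z^(-1)
x = [1 2 3 4];
y = filter(predictor,1,x) % returns [0 1 2 3], i.e., y(k) = x(k-1)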
A simple special case of DPCM quantizes the difference between the signal's current value and its value at the previous step. Thus the predictor is just y(k) = x(k-1). The code below implements this scheme. It encodes a sawtooth signal, decodes it, and plots both the original and decoded signals. The solid line is the original signal, while the dashed line is the recovered signal. The example also computes the mean square error between the original and decoded signals.
predictor = [0 1]; % y(k)=x(k-1)
partition = [-1:.1:.9];
codebook = [-1:.1:1];
t = [0:pi/50:2*pi];
x = sawtooth(3*t); % Original signal
% Quantize x using DPCM.
encodedx = dpcmenco(x,codebook,partition,predictor);
% Try to recover x from the modulated signal.
decodedx = dpcmdeco(encodedx,codebook,predictor);
plot(t,x,t,decodedx,'--')
legend('Original signal','Decoded signal','Location','NorthOutside');
distor = sum((x-decodedx).^2)/length(x) % Mean square error
The output is
distor = 0.0327
The section Optimize Quantization Parameters describes how to use training data with the lloyds function to help find quantization parameters that will minimize signal distortion. This section describes similar procedures for using the dpcmopt function in conjunction with the two functions dpcmenco and dpcmdeco, which first appear in the previous section.
Note
The training data you use with dpcmopt should be typical of the kinds of signals you will actually be quantizing with dpcmenco.
This example is similar to the one in the last section. However, where the last example created predictor, partition, and codebook in a straightforward but haphazard way, this example uses the same codebook (now called initcodebook) as an initial guess for a new optimized codebook parameter. This example also uses the predictive order, 1, as the desired order of the new optimized predictor. The dpcmopt function creates these optimized parameters, using the sawtooth signal x as training data. The example goes on to quantize the training data itself; in theory, the optimized parameters are suitable for quantizing other data that is similar to x. Notice that the mean square distortion here is much less than the distortion in the previous example.
t = [0:pi/50:2*pi];
x = sawtooth(3*t); % Original signal
initcodebook = [-1:.1:1]; % Initial guess at codebook
% Optimize parameters, using initial codebook and order 1.
[predictor,codebook,partition] = dpcmopt(x,1,initcodebook);
% Quantize x using DPCM.
encodedx = dpcmenco(x,codebook,partition,predictor);
% Try to recover x from the modulated signal.
decodedx = dpcmdeco(encodedx,codebook,predictor);
distor = sum((x-decodedx).^2)/length(x) % Mean square error
The output is
distor = 0.0063
In certain applications, such as speech processing, it is common to use a logarithm computation, called a compressor, before quantizing. The inverse operation of a compressor is called an expander. The combination of a compressor and expander is called a compander.
The compand function supports two kinds of companders: µ-law and A-law companders. Its reference page lists both compressor laws.
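As a minimal illustration of the compressor/expander pair (an added sketch; the full quantization comparison follows below), a µ-law round trip recovers the original signal up to numerical error. The A-law variant works the same way with the 'A/compressor' and 'A/expander' method strings.
Mu = 255; % Standard mu-law parameter
sig = 0:.1:4; % Example signal (assumed values)
V = max(sig);
compressed = compand(sig,Mu,V,'mu/compressor');
restored = compand(compressed,Mu,V,'mu/expander');
max(abs(restored-sig)) % Near zero: the expander inverts the compressor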
The code below quantizes an exponential signal in two ways and compares the resulting mean square distortions. First, it uses the quantiz function with a partition consisting of length-one intervals. In the second trial, compand implements a µ-law compressor, quantiz quantizes the compressed data, and compand expands the quantized data. The output shows that the distortion is smaller for the second scheme. This is because equal-length intervals are well suited to the logarithm of sig, but not well suited to sig. The figure shows how the compander changes sig.
Mu = 255; % Parameter for mu-law compander
sig = -4:.1:4;
sig = exp(sig); % Exponential signal to quantize
V = max(sig);
% 1. Quantize using equal-length intervals and no compander.
[index,quants,distor] = quantiz(sig,0:floor(V),0:ceil(V));
% 2. Use same partition and codebook, but compress
% before quantizing and expand afterwards.
compsig = compand(sig,Mu,V,'mu/compressor');
[index,quants] = quantiz(compsig,0:floor(V),0:ceil(V));
newsig = compand(quants,Mu,max(quants),'mu/expander');
distor2 = sum((newsig-sig).^2)/length(sig);
[distor, distor2] % Display both mean square distortions.
plot(sig); % Plot original signal.
hold on;
plot(compsig,'r--'); % Plot companded signal.
legend('Original','Companded','Location','NorthWest')
The output is below.
ans =
    0.5348    0.0397
Huffman coding offers a way to compress data. The average length of a Huffman code depends on the statistical frequency with which the source produces each symbol from its alphabet. A Huffman code dictionary, which associates each data symbol with a codeword, has the property that no codeword in the dictionary is a prefix of any other codeword in the dictionary.
The huffmandict, huffmanenco, and huffmandeco functions support Huffman coding and decoding.
Note
For long sequences from sources having skewed distributions and small alphabets, arithmetic coding compresses better than Huffman coding. To learn how to use arithmetic coding, see Arithmetic Coding.
Huffman coding requires statistical information about the source of the data being encoded. In particular, the p input argument of the huffmandict function lists the probability with which the source produces each symbol in its alphabet.
For example, consider a data source that produces 1s with probability 0.1, 2s with probability 0.1, and 3s with probability 0.8. The main computational step in encoding data from this source using a Huffman code is to create a dictionary that associates each data symbol with a codeword. The example here creates such a dictionary and then shows the codeword vector associated with a particular value from the data source.
This example shows how to create a Huffman code dictionary using the huffmandict function.
Create a vector of data symbols and assign a probability to each.
symbols = [1 2 3];
prob = [0.1 0.1 0.8];
Create a Huffman code dictionary. The most probable data symbol, 3, is associated with a one-digit codeword, while less probable data symbols are associated with two-digit codewords.
dict = huffmandict(symbols,prob)
dict=3×2 cell array
    {[1]}    {1x2 double}
    {[2]}    {1x2 double}
    {[3]}    {[0]}
Display the second row of the dictionary. The output also shows that a Huffman encoder receiving the data symbol 2 substitutes the sequence 1 0.
dict{2,:}
ans = 2
ans = 1×2
1 0
The example performs Huffman encoding and decoding using a source whose alphabet has three symbols. Notice that the huffmanenco and huffmandeco functions use the dictionary created by huffmandict.
Generate a data sequence to encode.
sig = repmat([3 3 1 3 3 3 3 3 2 3],1,50);
Define the set of data symbols and the probability associated with each element.
symbols = [1 2 3];
p = [0.1 0.1 0.8];
Create the Huffman code dictionary.
dict = huffmandict(symbols,p);
Encode and decode the data. Verify that the original data, sig, and the decoded data, dhsig, are identical.
hcode = huffmanenco(sig,dict);
dhsig = huffmandeco(hcode,dict);
isequal(sig,dhsig)
ans = logical
1
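As a follow-up check (not part of the original example), you can compare the empirical bit rate with the expected average codeword length, which huffmandict optionally returns as a second output. For this dictionary the average is 0.1*2 + 0.1*2 + 0.8*1 = 1.2 bits per symbol, and sig matches that rate exactly because its symbol frequencies match p.
[dict,avglen] = huffmandict(symbols,p); % avglen is 1.2 here
length(hcode)/length(sig) % empirical bits per symbol, also 1.2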
Arithmetic coding offers a way to compress data and can be useful for data sources having a small alphabet. The length of an arithmetic code, instead of being fixed relative to the number of symbols being encoded, depends on the statistical frequency with which the source produces each symbol from its alphabet. For long sequences from sources having skewed distributions and small alphabets, arithmetic coding compresses better than Huffman coding.
The arithenco and arithdeco functions support arithmetic coding and decoding.
Arithmetic coding requires statistical information about the source of the data being encoded. In particular, the counts input argument in the arithenco and arithdeco functions lists the frequency with which the source produces each symbol in its alphabet. You can determine the frequencies by studying a set of test data from the source. The set of test data can have any size you choose, as long as each symbol in the alphabet has a nonzero frequency.
For example, before encoding data from a source that produces 10 x's, 10 y's, and 80 z's in a typical 100-symbol set of test data, define
counts = [10 10 80];
Alternatively, if a larger set of test data from the source contains 22 x's, 23 y's, and 185 z's, then define
counts = [22 23 185];
Encode and decode a sequence from a source having three symbols.
Create a sequence vector containing symbols from the set {1,2,3}.
seq = [3 3 1 3 3 3 3 3 2 3];
Set the counts vector to define an encoder that produces 10 ones, 20 twos, and 70 threes from a typical 100-symbol set of test data.
counts = [10 20 70];
Apply the arithmetic encoder and decoder functions.
code = arithenco(seq,counts);
dseq = arithdeco(code,counts,length(seq));
Verify that the decoder output matches the original input sequence.
isequal(seq,dseq)
ans = logical
1
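A similar rough check (an added sketch) compares the number of code bits with the sequence length. For a sequence this short, fixed overhead can dominate; arithmetic coding approaches the source entropy only for long sequences.
length(code)/length(seq) % code bits per source symbol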
The code below shows how the quantiz function uses partition and codebook to map a real vector, samp, to a new vector, quantized, whose entries are either -1, 0.5, 2, or 3.
partition = [0,1,3];
codebook = [-1, 0.5, 2, 3];
samp = [-2.4, -1, -.2, 0, .2, 1, 1.2, 1.9, 2, 2.9, 3, 3.5, 5];
[index,quantized] = quantiz(samp,partition,codebook);
quantized
The output is below.
quantized =
  Columns 1 through 6
   -1.0000   -1.0000   -1.0000   -1.0000    0.5000    0.5000
  Columns 7 through 12
    2.0000    2.0000    2.0000    2.0000    2.0000    3.0000
  Column 13
    3.0000
This example illustrates the nature of scalar quantization more clearly. After quantizing a sampled sine wave, it plots the original and quantized signals. The plot contrasts the x's that make up the sine curve with the dots that make up the quantized signal. The vertical coordinate of each dot is a value in the vector codebook.
t = [0:.1:2*pi]; % Times at which to sample the sine function
sig = sin(t); % Original signal, a sine wave
partition = [-1:.2:1]; % Length 11, to represent 12 intervals
codebook = [-1.2:.2:1]; % Length 12, one entry for each interval
[index,quants] = quantiz(sig,partition,codebook); % Quantize.
plot(t,sig,'x',t,quants,'.')
legend('Original signal','Quantized signal');
axis([-.2 7 -1.2 1.2])