2.1 The Theoretical Basis for Data Communication
Information can be transmitted on wires by varying some
physical property such as voltage or current. By representing the value of this
voltage or current as a single-valued function of time, f(t), we can model the
behavior of the signal and analyze it mathematically. This analysis is the
subject of the following sections.
2.1.1 Fourier Analysis
In the early 19th century, the French mathematician
Jean-Baptiste Fourier proved that any reasonably behaved periodic function,
g(t), with period T can be constructed as the sum of a (possibly
infinite) number of sines and cosines:

g(t) = c/2 + Σ_{n=1}^{∞} a_n sin(2πnft) + Σ_{n=1}^{∞} b_n cos(2πnft)        (2-1)

where f = 1/T is the fundamental frequency, a_n and b_n are the
sine and cosine amplitudes of the nth harmonics (terms), and c is a constant. Such a decomposition is called a Fourier series. From the Fourier series, the function
can be reconstructed; that is, if the period, T,
is known and the amplitudes are given, the original function of time can be
found by performing the sums of Eq.
(2-1).
A data signal that has a finite duration (which all of them do)
can be handled by just imagining that it repeats the entire pattern over and
over forever (i.e., the interval from T to 2T is the same as from 0 to T, etc.).
The a_n amplitudes can be computed for any given
g(t) by
multiplying both sides of Eq. (2-1) by sin(2πkft) and then
integrating from 0 to T. Since

∫_0^T sin(2πkft) sin(2πnft) dt  is 0 for k ≠ n and T/2 for k = n,

only one term of the summation survives: a_n. The b_n summation vanishes completely. Similarly,
by multiplying Eq. (2-1) by cos(2πkft) and integrating between
0 and T, we can derive b_n. By just integrating both sides of the equation as it
stands, we can find c. The results of performing
these operations are as follows:

a_n = (2/T) ∫_0^T g(t) sin(2πnft) dt
b_n = (2/T) ∫_0^T g(t) cos(2πnft) dt
c   = (2/T) ∫_0^T g(t) dt
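
To see these formulas in action, here is a minimal Python sketch (not part of the original text; the test signal and names are illustrative) that recovers known coefficients by numerical integration:

    import numpy as np

    T = 1.0                    # period of g(t)
    f = 1.0 / T                # fundamental frequency
    t = np.linspace(0.0, T, 100_000, endpoint=False)

    # Illustrative test signal with known coefficients: a_1 = 1.0, b_2 = 0.5, c = 0.6
    g = np.sin(2 * np.pi * f * t) + 0.5 * np.cos(4 * np.pi * f * t) + 0.3

    def a_n(n):
        # a_n = (2/T) * integral over one period of g(t) sin(2*pi*n*f*t)
        return (2.0 / T) * np.trapz(g * np.sin(2 * np.pi * n * f * t), t)

    def b_n(n):
        # b_n = (2/T) * integral over one period of g(t) cos(2*pi*n*f*t)
        return (2.0 / T) * np.trapz(g * np.cos(2 * np.pi * n * f * t), t)

    c = (2.0 / T) * np.trapz(g, t)

    print(a_n(1), b_n(2), c)   # approximately 1.0, 0.5, 0.6 (c/2 = 0.3 is the DC level)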
2.1.2 Bandwidth-Limited Signals
To see what all this has to do with data communication, let us
consider a specific example: the transmission of the ASCII character ''b''
encoded in an 8-bit byte. The bit pattern that is to be transmitted is 01100010.
The left-hand part of Fig. 2-1(a) shows
the voltage output by the transmitting computer. The Fourier analysis of this
signal yields the coefficients:

a_n = (1/πn)[cos(πn/4) − cos(3πn/4) + cos(6πn/4) − cos(7πn/4)]
b_n = (1/πn)[sin(3πn/4) − sin(πn/4) + sin(7πn/4) − sin(6πn/4)]
c   = 3/4
Figure 2-1. (a) A binary signal and its root-mean-square Fourier amplitudes. (b)-(e) Successive approximations to the original signal.
The root-mean-square amplitudes, √(a_n² + b_n²), for the
first few terms are shown on the right-hand side of Fig. 2-1(a). These values are of interest because their
squares are proportional to the energy transmitted at the corresponding
frequency.
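
A short sketch (again illustrative, not from the original text) that evaluates the closed forms above and prints the root-mean-square amplitude of each of the first eight harmonics:

    import numpy as np

    def coeffs(n):
        # Closed-form Fourier coefficients of the bit pattern 01100010 (see text)
        a = (np.cos(np.pi*n/4) - np.cos(3*np.pi*n/4)
             + np.cos(6*np.pi*n/4) - np.cos(7*np.pi*n/4)) / (np.pi * n)
        b = (np.sin(3*np.pi*n/4) - np.sin(np.pi*n/4)
             + np.sin(7*np.pi*n/4) - np.sin(6*np.pi*n/4)) / (np.pi * n)
        return a, b

    for n in range(1, 9):
        a, b = coeffs(n)
        rms = np.sqrt(a*a + b*b)   # its square is proportional to the energy at harmonic n
        print(f"harmonic {n}: rms amplitude {rms:.3f}")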
No transmission facility can transmit signals without losing
some power in the process. If all the Fourier components were equally
diminished, the resulting signal would be reduced in amplitude but not distorted
[i.e., it would have the same nice squared-off shape as Fig. 2-1(a)]. Unfortunately, all transmission facilities
diminish different Fourier components by different amounts, thus introducing
distortion. Usually, the amplitudes are transmitted undiminished from 0 up to
some frequency fc [measured in cycles/sec or Hertz (Hz)]
with all frequencies above this cutoff frequency attenuated. The range of
frequencies transmitted without being strongly attenuated is called the bandwidth. In practice, the cutoff is not really
sharp, so often the quoted bandwidth is from 0 to the frequency at which half
the power gets through.
The bandwidth is a physical property of the transmission medium
and usually depends on the construction, thickness, and length of the medium. In
some cases a filter is introduced into the circuit to limit the amount of
bandwidth available to each customer. For example, a telephone wire may have a
bandwidth of 1 MHz for short distances, but telephone companies add a filter
restricting each customer to about 3100 Hz. This bandwidth is adequate for
intelligible speech and improves system-wide efficiency by limiting resource
usage by customers.
Now let us consider how the signal of Fig. 2-1(a) would look if the bandwidth were so low that
only the lowest frequencies were transmitted [i.e., if the function were being
approximated by the first few terms of Eq.
(2-1)]. Figure 2-1(b) shows the
signal that results from a channel that allows only the first harmonic (the
fundamental, f) to pass through. Similarly, Fig. 2-1(c)-(e) show the spectra and reconstructed functions for
higher-bandwidth channels.
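
The successive approximations of Fig. 2-1(b)-(e) can be imitated by simply truncating the Fourier series after k harmonics. A sketch (illustrative; it reuses the closed-form coefficients given earlier):

    import numpy as np

    T, f, c = 1.0, 1.0, 0.75     # unit period; c = 3/4 from the text
    t = np.linspace(0.0, T, 800, endpoint=False)

    def coeffs(n):
        # Closed-form coefficients of the bit pattern 01100010 (see text)
        a = (np.cos(np.pi*n/4) - np.cos(3*np.pi*n/4)
             + np.cos(6*np.pi*n/4) - np.cos(7*np.pi*n/4)) / (np.pi * n)
        b = (np.sin(3*np.pi*n/4) - np.sin(np.pi*n/4)
             + np.sin(7*np.pi*n/4) - np.sin(6*np.pi*n/4)) / (np.pi * n)
        return a, b

    def bandlimited(k):
        # Partial sum of Eq. (2-1): the DC term plus the first k harmonics only
        g = np.full_like(t, c / 2.0)
        for n in range(1, k + 1):
            a, b = coeffs(n)
            g += a * np.sin(2*np.pi*n*f*t) + b * np.cos(2*np.pi*n*f*t)
        return g

    # One, two, four, and eight harmonics, as in Fig. 2-1(b)-(e)
    for k in (1, 2, 4, 8):
        print(k, np.round(bandlimited(k)[::100], 2))   # 8 samples per curve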
Given a bit rate of b bits/sec,
the time required to send 8 bits (for example) 1 bit at a time is 8/b sec, so the frequency of the first harmonic is b/8 Hz. An ordinary telephone line, often called a
voice-grade line, has an
artificially-introduced cutoff frequency just above 3000 Hz. This restriction
means that the number of the highest harmonic passed through is roughly
3000/(b/8) or
24,000/b (the cutoff is not sharp).
For some data rates, the numbers work out as shown in Fig. 2-2. From these numbers, it is clear
that trying to send at 9600 bps over a voice-grade telephone line will transform
Fig. 2-1(a) into something looking like
Fig. 2-1(c), making accurate reception of
the original binary bit stream tricky. It should be obvious that at data rates
much higher than 38.4 kbps, there is no hope at all for binary signals, even if the transmission facility is
completely noiseless. In other words, limiting the bandwidth limits the data
rate, even for perfect channels. However, sophisticated coding schemes that make
use of several voltage levels do exist and can achieve higher data rates. We
will discuss these later in this chapter.
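
The arithmetic behind Fig. 2-2 is just the 24,000/b rule; a few lines of Python (illustrative) reproduce the relevant numbers:

    # Harmonics passed by a roughly 3000-Hz voice-grade line at various bit
    # rates, assuming 8 bits are sent one bit at a time (see text).
    for b in (300, 600, 1200, 2400, 4800, 9600, 19200, 38400):
        char_time_ms = 8 / b * 1000    # time to send 8 bits, in msec
        first_harmonic = b / 8         # fundamental frequency, in Hz
        harmonics = 24000 // b         # harmonics below the 3000-Hz cutoff
        print(f"{b:6} bps: {char_time_ms:6.2f} ms/char, "
              f"first harmonic {first_harmonic:6.1f} Hz, {harmonics} harmonics sent")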
Figure 2-2. Relation between data rate and harmonics.

2.1.3 The Maximum Data Rate of a Channel
As early as 1924, an AT&T engineer, Henry Nyquist, realized
that even a perfect channel has a finite transmission capacity. He derived an
equation expressing the maximum data rate for a finite bandwidth noiseless
channel. In 1948, Claude Shannon carried Nyquist's work further and extended it
to the case of a channel subject to random (that is, thermodynamic) noise
(Shannon, 1948). We will just briefly summarize their now classical results
here.
Nyquist proved that if an arbitrary signal has been run through
a low-pass filter of bandwidth H, the filtered
signal can be completely reconstructed by making only 2H (exact) samples per second. Sampling the line faster
than 2H times per second is pointless because the
higher frequency components that such sampling could recover have already been
filtered out. If the signal consists of V
discrete levels, Nyquist's theorem states:

maximum data rate = 2H log₂ V bits/sec
For example, a noiseless 3-kHz channel cannot transmit binary
(i.e., two-level) signals at a rate exceeding 6000 bps.
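
A quick check of Nyquist's formula (a minimal sketch; the function name is my own):

    import math

    def nyquist_limit(bandwidth_hz, levels):
        # Maximum data rate of a noiseless channel: 2H log2(V) bits/sec
        return 2 * bandwidth_hz * math.log2(levels)

    print(nyquist_limit(3000, 2))   # 6000.0 bps for a binary signal
    print(nyquist_limit(3000, 4))   # 12000.0 bps with four voltage levels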
So far we have considered only noiseless channels. If random
noise is present, the situation deteriorates rapidly. And there is always random
(thermal) noise present due to the motion of the molecules in the system. The
amount of thermal noise present is measured by the ratio of the signal power to
the noise power, called the signal-to-noise
ratio. If we denote the signal power by S
and the noise power by N, the signal-to-noise
ratio is S/N. Usually, the ratio itself is not
quoted; instead, the quantity 10 log₁₀ S/N is given. These units are called decibels (dB). An S/N
ratio of 10 is 10 dB, a ratio of 100 is 20 dB, a ratio of 1000 is 30 dB, and so
on. The manufacturers of stereo amplifiers often characterize the bandwidth
(frequency range) over which their product is linear by giving the 3-dB
frequency on each end. These are the points at which the amplification factor
has been approximately halved (because 10 log₁₀ 0.5 ≈ −3).
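
The decibel conversions used here are simple enough to write out (an illustrative sketch):

    import math

    def to_db(ratio):
        # Express a signal-to-noise ratio in decibels: 10 log10(S/N)
        return 10 * math.log10(ratio)

    def from_db(db):
        # Inverse conversion: S/N = 10^(dB/10)
        return 10 ** (db / 10)

    print(to_db(1000))   # 30.0 dB
    print(from_db(30))   # 1000.0
    print(to_db(0.5))    # about -3.0 dB, the half-power point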
Shannon's major result is that the maximum data rate of a noisy
channel whose bandwidth is H Hz, and whose
signal-to-noise ratio is S/N, is given by

maximum number of bits/sec = H log₂(1 + S/N)
For example, a channel of 3000-Hz bandwidth with a
signal-to-thermal-noise ratio of 30 dB (typical parameters of the analog part of the
telephone system) can never transmit much more than 30,000 bps, no matter how
many or how few signal levels are used and no matter how often or how
infrequently samples are taken. Shannon's result was derived from
information-theory arguments and applies to any channel subject to thermal
noise. Counterexamples should be treated in the same category as perpetual
motion machines. It should be noted that this is only an upper bound and real
systems rarely achieve it.
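
Shannon's bound for the telephone-line example can be verified in a few lines (a minimal sketch; the function name is my own):

    import math

    def shannon_capacity(bandwidth_hz, snr_db):
        # Maximum data rate of a noisy channel: H log2(1 + S/N) bits/sec
        snr = 10 ** (snr_db / 10)      # convert dB back to a plain ratio
        return bandwidth_hz * math.log2(1 + snr)

    print(shannon_capacity(3000, 30))  # about 29,900 bps: "never much more than 30,000"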