
Wednesday, September 28, 2016

ANALOG AND DIGITAL DATA TRANSMISSION



In transmitting data from a source to a destination, one must be concerned with the
nature of the data, the actual physical means used to propagate the data, and what
processing or adjustments may be required along the way to assure that the received
data are intelligible. For all of these considerations, the crucial question is whether
we are dealing with analog or digital entities.
The terms analog and digital correspond, roughly, to continuous and discrete,
respectively. These two terms are used frequently in data communications in at
least three contexts:
Data
Signaling
Transmission
We can define data as entities that convey meaning. Signals are electric or electromagnetic
encoding of data. Signaling is the act of propagating the signal along a
suitable medium. Finally, transmission is the communication of data by the propagation
and processing of signals. In what follows, we try to make these abstract concepts
clear by discussing the terms analog and digital in these three contexts.
Data
The concepts of analog and digital data are simple enough. Analog data take on
continuous values on some interval. For example, voice and video are continuously
varying patterns of intensity. Most data collected by sensors, such as temperature
and pressure, are continuous-valued. Digital data take on discrete values; examples
are text and integers.
The most familiar example of analog data is audio or acoustic data, which, in
the form of sound waves, can be perceived directly by human beings. Figure 2.10
shows the acoustic spectrum for human speech. Frequency components of speech
may be found between 20 Hz and 20 kHz. Although much of the energy in speech
is concentrated at the lower frequencies, tests have shown that frequencies below
600 or 700 Hz add very little to the intelligibility of speech to the human ear. The
dashed line more accurately reflects the intelligibility or emotional content of
speech.
Another common example of analog data is video. Here it is easier to characterize
the data in terms of the viewer (destination) of the TV screen rather than the
original scene (source) that is recorded by the TV camera. To produce a picture on
the screen, an electron beam scans across the surface of the screen from left to right
and top to bottom. For black-and-white television, the amount of illumination produced
(on a scale from black to white) at any point is proportional to the intensity
of the beam as it passes that point. Thus, at any instant in time, the beam takes on
an analog value of intensity to produce the desired brightness at that point on the
screen. Further, as the beam scans, the analog value changes. The video image,
then, can be viewed as a time-varying analog signal.
Figure 2.11a depicts the scanning process. At the end of each scan line, the
beam is swept rapidly back to the left (horizontal retrace). When the beam reaches
the bottom, it is swept rapidly back to the top (vertical retrace). The beam is turned
off (blanked out) during the retrace intervals.
To achieve adequate resolution, the beam produces a total of 483 horizontal
lines at a rate of 30 complete scans of the screen per second. Tests have shown that
this rate will produce a sensation of flicker rather than smooth motion. However,
the flicker is eliminated by a process of interlacing, as depicted in Figure 2.11b. The
electron beam scans across the screen starting at the far left, very near the top. The
beam reaches the bottom at the middle after 241 1/2 lines. At this point, the beam is
quickly repositioned at the top of the screen and, beginning in the middle, produces
an additional 241 1/2 lines interlaced with the original set. Thus, the screen is
refreshed 60 times per second rather than 30, and flicker is avoided. Note that the
total count of lines is 525. Of these, 42 are blanked out during the vertical retrace
interval, leaving 483 actually visible on the screen.
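The line counts quoted above fit together arithmetically. The following short Python sketch (our own illustration; the variable names are not from the text) checks the bookkeeping:

# Interlaced scanning arithmetic for the system described above.
TOTAL_LINES = 525          # lines per complete frame
BLANKED_LINES = 42         # lines lost during vertical retrace
FRAME_RATE = 30            # complete scans (frames) per second

visible_lines = TOTAL_LINES - BLANKED_LINES   # 483 lines actually displayed
lines_per_field = visible_lines / 2           # 241.5 visible lines per interlaced field
field_rate = FRAME_RATE * 2                   # 60 fields per second, which avoids flicker
print(visible_lines, lines_per_field, field_rate)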
A familiar example of digital data is text or character strings. While textual
data are most convenient for human beings, they cannot, in character form, be easily
stored or transmitted by data processing and communications systems. Such systems
are designed for binary data. Thus, a number of codes have been devised by
which characters are represented by a sequence of bits. Perhaps the earliest common
example of this is the Morse code. Today, the most commonly used code in the
United States is the ASCII (American Standard Code for Information Interchange)
(Table 2.1) promulgated by ANSI. ASCII is also widely used outside the United
States. Each character in this code is represented by a unique 7-bit pattern; thus, 128
different characters can be represented. This is a larger number than is necessary,
and some of the patterns represent "control" characters (Table 2.2). Some of these
control characters have to do with controlling the printing of characters on a page.
Others are concerned with communications procedures and will be discussed later.
ASCII-encoded characters are almost always stored and transmitted using 8 bits per
character (a block of 8 bits is referred to as an octet or a byte). The eighth bit is a
parity bit used for error detection. This bit is set such that the total number of binary
1s in each octet is always odd (odd parity) or always even (even parity). Thus, a
transmission error that changes a single bit can be detected.
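As an illustration of the parity mechanism (a sketch of the general idea, not part of the ASCII standard itself), the following Python fragment forms an octet by appending an even-parity bit to a 7-bit character code and shows that flipping any single bit is detectable:

def add_even_parity(ch):
    # Return an 8-bit octet: the 7-bit ASCII code of ch plus an even-parity bit as the eighth bit.
    code = ord(ch) & 0x7F
    parity = bin(code).count("1") % 2     # 1 if the 7-bit code has an odd number of 1s
    return (parity << 7) | code           # the octet now always has an even number of 1s

def parity_ok(octet):
    # True if the octet contains an even number of 1 bits.
    return bin(octet).count("1") % 2 == 0

octet = add_even_parity("A")              # 'A' = 1000001, two 1 bits
assert parity_ok(octet)
assert not parity_ok(octet ^ 0x08)        # a single-bit error changes the parity and is detected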
TABLE 2.2 ASCII control characters
Format control
BS (Backspace): Indicates movement of the printing
mechanism or display cursor backward one
position.
HT (Horizontal Tab): Indicates movement of the
printing mechanism or display cursor forward to
the next preassigned 'tab' or stopping position.
LF (Line Feed): Indicates movement of the printing
mechanism or display cursor to the start of the next
line.
VT (Vertical Tab): Indicates movement of the printing
mechanism or display cursor to the next of a
series of preassigned printing lines.
FF (Form Feed): Indicates movement of the printing
mechanism or display cursor to the starting position
of the next page, form, or screen.
CR (Carriage Return): Indicates movement of the
printing mechanism or display cursor to the starting
position of the same line.
Transmission control
SOH (Start of Heading): Used to indicate the start of
a heading, which may contain address or routing
information.
STX (Start of Text): Used to indicate the start of the
text and so also indicates the end of the heading.
ETX (End of Text): Used to terminate the text that
was started with STX.
EOT (End of Transmission): Indicates the end of a
transmission, which may have included one or more
'texts' with their headings.
ENQ (Enquiry): A request for a response from a
remote station. It may be used as a 'WHO ARE
YOU' request for a station to identify itself.
ACK (Acknowledge): A character transmitted by a
receiving device as an affirmation response to a
sender. It is used as a positive response to polling
messages.
NAK (Negative Acknowledgment): A character
transmitted by a receiving device as a negative
response to a sender. It is used as a negative
response to polling messages.
SYN (Synchronous/Idle): Used by a synchronous
transmission system to achieve synchronization.
When no data are being sent, a synchronous transmission
system may send SYN characters continuously.
ETB (End of Transmission Block): Indicates the end
of a block of data for communication purposes. It
is used for blocking data where the block structure
is not necessarily related to the processing format.
Information separator
FS (File Separator)
GS (Group Separator)
RS (Record Separator)
US (Unit Separator)
Information separators to be used in an optional
manner except that their hierarchy shall be FS
(the most inclusive) to US (the least inclusive).
Miscellaneous
NUL (Null): No character. Used for filling in time or filling space on tape when there are no data.
BEL (Bell): Used when there is need to call human attention. It may control alarm or attention devices.
SO (Shift Out): Indicates that the code combinations that follow shall be interpreted as outside of the standard character set until an SI character is reached.
SI (Shift In): Indicates that the code combinations that follow shall be interpreted according to the standard character set.
DEL (Delete): Used to obliterate unwanted characters, for example, by overwriting.
SP (Space): A nonprinting character used to separate words, or to move the printing mechanism or display cursor forward by one position.
DLE (Data Link Escape): A character that shall change the meaning of one or more contiguously following characters. It can provide supplementary controls or permit the sending of data characters having any bit combination.
DC1, DC2, DC3, DC4 (Device Controls): Characters for the control of ancillary devices or special terminal features.
CAN (Cancel): Indicates that the data that precede it in a message or block should be disregarded (usually because an error has been detected).
EM (End of Medium): Indicates the physical end of a tape or other medium, or the end of the required or used portion of the medium.
SUB (Substitute): Substituted for a character that is found to be erroneous or invalid.
ESC (Escape): A character intended to provide code extension in that it gives a specified number of contiguously following characters an alternate meaning.

Signals
In a communications system, data are propagated from one point to another by means of electric signals. An analog signal is a continuously varying electromagnetic wave that may be propagated over a variety of media, depending on spectrum; examples are wire media, such as twisted pair and coaxial cable, fiber optic cable, and atmosphere or space propagation. A digital signal is a sequence of voltage pulses that may be transmitted over a wire medium; for example, a constant positive voltage level may represent binary 1, and a constant negative voltage level may represent binary 0.
In what follows, we look first at some specific examples of signal types and then discuss the relationship between data and signals.
Examples
Let us return to our three examples of the preceding subsection. For each example, we will describe the signal and estimate its bandwidth.
In the case of acoustic data (voice), the data can be represented directly by an
electromagnetic signal occupying the same spectrum. However, there is a need to
compromise between the fidelity of the sound, as transmitted electrically, and the
cost of transmission, which increases with increasing bandwidth. Although, as mentioned,
the spectrum of speech is approximately 20 Hz to 20 kHz, a much narrower
bandwidth will produce acceptable voice reproduction. The standard spectrum for
a voice signal is 300 to 3400 Hz. This is adequate for voice reproduction, it minimizes
required transmission capacity, and it allows for the use of rather inexpensive
telephone sets. Thus, the telephone transmitter converts the incoming acoustic
voice signal into an electromagnetic signal over the range 300 to 3400 Hz. This signal
is then transmitted through the telephone system to a receiver, which reproduces
an acoustic signal from the incoming electromagnetic signal.
Now, let us look at the video signal, which, interestingly, consists of both analog
and digital components. To produce a video signal, a TV camera, which performs
similar functions to the TV receiver, is used. One component of the camera
is a photosensitive plate, upon which a scene is optically focused. An electron beam
sweeps across the plate from left to right and top to bottom, in the same fashion as
depicted in Figure 2.11 for the receiver. As the beam sweeps, an analog electric signal
is developed proportional to the brightness of the scene at a particular spot.
Now we are in a position to describe the video signal. Figure 2.12a shows three
lines of a video signal; in this diagram, white is represented by a small positive voltage,
and black by a much larger positive voltage. So, for example, line 3 is at a
medium gray level most of the way across with a blacker portion in the middle.
Once the beam has completed a scan from left to right, it must retrace to the left
edge to scan the next line. During this period, the picture should be blanked out (on
both camera and receiver). This is done with a digital "horizontal blanking pulse."
Also, to maintain transmitter-receiver synchronization, a synchronization (sync)
pulse is sent between every line of video signal. This horizontal sync pulse rides on
top of the blanking pulse, creating a staircase-shaped digital signal between adjacent
analog video signals. Finally, when the beam reaches the bottom of the screen,
it must return to the top, with a somewhat longer blanking interval required. This is
shown in Figure 2.12b. The vertical blanking pulse is actually a series of synchronization
and blanking pulses, whose details need not concern us here.
Next, consider the timing of the system. We mentioned that a total of 483 lines
are scanned at a rate of 30 complete scans per second. This is an approximate number
taking into account the time lost during the vertical retrace interval. The actual
U.S. standard is 525 lines, but of these about 42 are lost during vertical retrace.
Finally, we are in a position to estimate the bandwidth required for the video
signal. To do this, we must estimate the upper (maximum) and lower (minimum)
frequency of the band. We use the following reasoning to arrive at the maximum
frequency: The maximum frequency would occur during the horizontal scan if the
scene were alternating between black and white as rapidly as possible. We can estimate
this maximum value by considering the resolution of the video image. In the
vertical dimension, there are 483 lines, so the maximum vertical resolution would be
483. Experiments have shown that the actual subjective resolution is about 70 percent
of that number, or about 338 lines. In the interest of a balanced picture, the
horizontal and vertical resolutions should be about the same. Because the ratio of
width to height of a TV screen is 4:3, the horizontal resolution should be about
4/3 × 338 ≈ 450 lines. As a worst case, a scanning line would be made up of 450
elements alternating black and white. The scan would result in a wave, with each
cycle of the wave consisting of one higher (black) and one lower (white) voltage
level. Thus, there would be 450/2 = 225 cycles of the wave in 52.5 µs, for a maximum
frequency of about 4 MHz. This rough reasoning, in fact, is fairly accurate.
The maximum frequency, then, is 4 MHz. The lower limit will be a dc or zero frequency,
where the dc component corresponds to the average illumination of the
scene (the average value by which the signal exceeds the reference white level).
Thus, the bandwidth of the video signal is approximately 4 MHz - 0 = 4 MHz.
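The estimate can be reproduced numerically. In the Python sketch below, the 52.5 µs active line time and the 0.7 subjective-resolution factor are the figures quoted in the text; everything else follows from them:

visible_lines = 483
vertical_resolution = 0.7 * visible_lines                 # about 338 effective lines
horizontal_resolution = (4 / 3) * vertical_resolution     # about 450 elements per line (4:3 aspect ratio)
cycles_per_line = horizontal_resolution / 2               # one cycle = one black plus one white element
active_line_time = 52.5e-6                                # seconds available to scan one visible line

f_max = cycles_per_line / active_line_time                # about 4.3 MHz
print(round(f_max / 1e6, 1), "MHz")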
The foregoing discussion did not consider color or audio components of the
signal. It turns out that, with these included, the bandwidth remains about 4 MHz.
Finally, the third example described above is the general case of binary digital
data. A commonly used signal for such data uses two constant (dc) voltage levels,
one level for binary 1 and one level for binary 0. (In Lesson 3, we shall see that
this is but one alternative, referred to as NRZ.) Again, we are interested in the
bandwidth of such a signal. This will depend, in any specific case, on the exact shape
of the waveform and on the sequence of 1s and 0s. We can obtain some understanding
by considering Figure 2.9 (compare Figure 2.8). As can be seen, the greater
the bandwidth of the signal, the more faithfully it approximates a digital pulse
stream.
Data and Signals
In the foregoing discussion, we have looked at analog signals used to represent analog
data and digital signals used to represent digital data. Generally, analog data are
a function of time and occupy a limited frequency spectrum; such data can be represented
by an electromagnetic signal occupying the same spectrum. Digital data
can be represented by digital signals, with a different voltage level for each of the
two binary digits.
As Figure 2.13 illustrates, these are not the only possibilities. Digital data can
also be represented by analog signals by use of a modem (modulator/demodulator).
The modem converts a series of binary (two-valued) voltage pulses into an analog
signal by encoding the digital data onto a carrier frequency. The resulting signal
occupies a certain spectrum of frequency centered about the carrier and may be
propagated across a medium suitable for that carrier. The most common modems
represent digital data in the voice spectrum and, hence, allow those data to be propagated
over ordinary voice-grade telephone lines. At the other end of the line, the
modem demodulates the signal to recover the original data.
In an operation very similar to that performed by a modem, analog data can
be represented by digital signals. The device that performs this function for voice
data is a codec (coder-decoder). In essence, the codec takes an analog signal that
directly represents the voice data and approximates that signal by a bit stream. At
the receiving end, the bit stream is used to reconstruct the analog data.
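A minimal sketch of the idea behind a codec, assuming simple uniform quantization rather than the companded encoding used in real telephone codecs: sample the analog waveform at regular intervals and represent each sample by a fixed number of bits.

import math

def encode(samples, bits=8, vmax=1.0):
    # Quantize analog samples in [-vmax, +vmax] to integer code words of 'bits' bits each.
    levels = 2 ** bits
    step = 2 * vmax / levels
    return [min(levels - 1, int((s + vmax) / step)) for s in samples]

def decode(codes, bits=8, vmax=1.0):
    # Reconstruct an approximation of the analog samples from the code words.
    step = 2 * vmax / (2 ** bits)
    return [c * step - vmax + step / 2 for c in codes]

# One millisecond of a 1-kHz tone sampled 8000 times per second (the telephone sampling rate).
t = [n / 8000 for n in range(8)]
analog = [math.sin(2 * math.pi * 1000 * x) for x in t]
reconstructed = decode(encode(analog))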
Thus, Figure 2.13 suggests that data may be encoded into signals in a variety
of ways. We will return to this topic in Lesson 4.
Transmission
A final distinction remains to be made. Both analog and digital signals may be
transmitted on suitable transmission media. The way these signals are treated is a
function of the transmission system. Table 2.3 summarizes the methods of data
transmission. Analog transmission is a means of transmitting analog signals without
regard to their content; the signals may represent analog data (e.g., voice) or digital
data (e.g., binary data that pass through a modem). In either case, the analog signal
will become weaker (attenuated) after a certain distance. To achieve longer distances,
the analog transmission system includes amplifiers that boost the energy in
the signal. Unfortunately, the amplifier also boosts the noise components. With
amplifiers cascaded to achieve long distances, the signal becomes more and more
distorted. For analog data, such as voice, quite a bit of distortion can be tolerated
and the data remain intelligible. However, for digital data, cascaded amplifiers will
introduce errors.
Digital transmission, in contrast, is concerned with the content of the signal. A
digital signal can be transmitted only a limited distance before attenuation endangers
the integrity of the data. To achieve greater distances, repeaters are used. A
repeater receives the digital signal, recovers the pattern of 1s and 0s, and retransmits
a new signal, thereby overcoming the attenuation.
The same technique may be used with an analog signal if it is assumed that the
signal carries digital data. At appropriately spaced points, the transmission system
has repeaters rather than amplifiers. The repeater recovers the digital data from
the analog signal and generates a new, clean analog signal. Thus, noise is not cumulative.
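The difference between amplifying and regenerating can be shown with a toy numerical model (our own construction, not taken from the text). Attenuation and amplifier gain are assumed to cancel exactly, so the analog path simply accumulates the noise of each span, while the digital path makes a 1/0 decision at every repeater and retransmits clean pulses.

import random

random.seed(1)
BITS = [1, 0, 1, 1, 0, 0, 1, 0]
LEVEL = 1.0          # +1 V represents binary 1, -1 V represents binary 0
SPAN_NOISE = 0.3     # standard deviation of the noise added on each cable span

def noisy_span(samples):
    return [s + random.gauss(0, SPAN_NOISE) for s in samples]

def decide(samples):
    # Make a 1/0 decision on each received sample.
    return [1 if s > 0 else 0 for s in samples]

signal = [LEVEL if b else -LEVEL for b in BITS]

# Analog transmission: the noise from every span accumulates.
amplified = signal
for _ in range(5):
    amplified = noisy_span(amplified)

# Digital transmission: each repeater regenerates the pulses, so noise does not accumulate.
regenerated = signal
for _ in range(5):
    regenerated = [LEVEL if s > 0 else -LEVEL for s in noisy_span(regenerated)]

print(BITS, decide(amplified), decide(regenerated))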
The question naturally arises as to which is the preferred method of transmission;
the answer being supplied by the telecommunications industry and its customers
is digital, this despite an enormous investment in analog communications
facilities. Both long-haul telecommunications facilities and intrabuilding services
are gradually being converted to digital transmission and, where possible, digital
signaling techniques. The most important reasons are
Digital technology. The advent of large-scale integration (LSI) and very large-scale
integration (VLSI) technology has caused a continuing drop in the cost
and size of digital circuitry. Analog equipment has not shown a similar drop.
Data integrity. With the use of repeaters rather than amplifiers, the effects of
noise and other signal impairments are not cumulative. It is possible, then, to
transmit data longer distances and over lesser quality lines by digital means
while maintaining the integrity of the data. This is explored in Section 2.3.
Capacity utilization. It has become economical to build transmission links of
very high bandwidth, including satellite channels and connections involving
optical fiber. A high degree of multiplexing is needed to effectively utilize
such capacity, and this is more easily and cheaply achieved with digital (time-division)
rather than analog (frequency-division) techniques. This is explored
in Lesson 7.
Security and privacy. Encryption techniques can be readily applied to digital
data and to analog data that have been digitized.
Integration. By treating both analog and digital data digitally, all signals have
the same form and can be treated similarly. Thus, economies of scale and convenience
can be achieved by integrating voice, video, and digital data.

TRANSMISSION IMPAIRMENTS
With any communications system, it must be recognized that the received signal will
differ from the transmitted signal due to various transmission impairments. For analog
signals, these impairments introduce various random modifications that degrade
the signal quality. For digital signals, bit errors are introduced: A binary 1 is transformed into a binary 0 and vice versa. In this section, we examine the various
impairments and comment on their effect on the information-carrying capacity of a
communication link; the next lesson looks at measures to compensate for these
impairments.
The most significant impairments are
Attenuation and attenuation distortion
Delay distortion
Noise
Attenuation
The strength of a signal falls off with distance over any transmission medium. For
guided media, this reduction in strength, or attenuation, is generally logarithmic and
is thus typically expressed as a constant number of decibels per unit distance. For
unguided media, attenuation is a more complex function of distance and of the
makeup of the atmosphere. Attenuation introduces three considerations for the
transmission engineer. First, a received signal must have sufficient strength so that
the electronic circuitry in the receiver can detect and interpret the signal. Second,
the signal must maintain a level sufficiently higher than noise to be received without
error. Third, attenuation is an increasing function of frequency.
The first and second problems are dealt with by attention to signal strength
and by the use of amplifiers or repeaters. For a point-to-point link, the signal
strength of the transmitter must be strong enough to be received intelligibly, but not
so strong as to overload the circuitry of the transmitter, which would cause a distorted
signal to be generated. Beyond a certain distance, the attenuation is unacceptably
great, and repeaters or amplifiers are used to boost the signal from time
to time. These problems are more complex for multipoint lines where the distance
from transmitter to receiver is variable.
The third problem is particularly noticeable for analog signals. Because the
attenuation varies as a function of frequency, the received signal is distorted, reducing
intelligibility. To overcome this problem, techniques are available for equalizing
attenuation across a band of frequencies. This is commonly done for voice-grade
telephone lines by using loading coils that change the electrical properties of the
line; the result is to smooth out attenuation effects. Another approach is to use
amplifiers that amplify high frequencies more than lower frequencies.
An example is shown in Figure 2.14a, which shows attenuation as a function
of frequency for a typical leased line. In the figure, attenuation is measured relative
to the attenuation at 1000 Hz. Positive values on the y axis represent attenuation
greater than that at 1000 Hz. A 1000-Hz tone of a given power level is applied to
the input, and the power, P1000, is measured at the output. For any other frequency
f, the procedure is repeated and the relative attenuation in decibels is

Nf = -10 log10 (Pf/P1000)
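Assuming the formula above, a pair of measured output powers translates into relative attenuation as in this small Python sketch; the power values are made-up illustrations, not measurements from the figure:

import math

def relative_attenuation_db(p_f, p_1000):
    # Attenuation at frequency f, in dB, relative to the attenuation at 1000 Hz.
    return -10 * math.log10(p_f / p_1000)

# Hypothetical measurement: the output power at 3000 Hz is half that at 1000 Hz.
print(relative_attenuation_db(0.5e-3, 1.0e-3))   # about 3 dB more attenuation than at 1000 Hz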
The solid line in Figure 2.14a shows attenuation without equalization. As can
be seen, frequency components at the upper end of the voice band are attenuated
much more than those at lower frequencies. It should be clear that this will result in
a distortion of the received speech signal. The dashed line shows the effect of equalization.
The flattened response curve improves the quality of voice signals. It also
allows higher data rates to be used for digital data that are passed through a modem.
Attenuation distortion is much less of a problem with digital signals. As we
have seen, the strength of a digital signal falls off rapidly with frequency (Figure
2.6b); most of the content is concentrated near the fundamental frequency, or bit
rate, of the signal.
Delay Distortion
Delay distortion is a phenomenon peculiar to guided transmission media. The distortion
is caused by the fact that the velocity of propagation of a signal through a
guided medium varies with frequency. For a bandlimited signal, the velocity tends
to be highest near the center frequency and lower toward the two edges of the band.
Thus, various frequency components of a signal will arrive at the receiver at different
times.
This effect is referred to as delay distortion, as the received signal is distorted
due to variable delay in its components. Delay distortion is particularly critical for
digital data. Consider that a sequence of bits is being transmitted, using either analog
or digital signals. Because of delay distortion, some of the signal components of
one bit position will spill over into other bit positions, causing intersymbol interference,
which is a major limitation to maximum bit rate over a transmission channel.
Equalizing techniques can also be used for delay distortion. Again using a
leased telephone line as an example, Figure 2.14b shows the effect of equalization
on delay as a function of frequency.
Noise
For any data transmission event, the received signal will consist of the transmitted
signal, modified by the various distortions imposed by the transmission system, plus
additional unwanted signals that are inserted somewhere between transmission and
reception; these undesired signals are referred to as noise, a major limiting
factor in communications system performance.
Noise may be divided into four categories:
Thermal noise
Intermodulation noise
Crosstalk
Impulse noise
Thermal noise is due to thermal agitation of electrons in a conductor. It is
present in all electronic devices and transmission media and is a function of temperature.
Thermal noise is uniformly distributed across the frequency spectrum and
hence is often referred to as white noise; it cannot be eliminated and therefore
places an upper bound on communications system performance. The amount of
thermal noise to be found in a bandwidth of 1 Hz in any device or conductor is

N0 = kT (watts/hertz)

where k is Boltzmann's constant (approximately 1.38 × 10^-23 J/K) and T is the temperature
in kelvins. The noise is assumed to be independent of frequency. Thus, the thermal noise,
in watts, present in a bandwidth of W hertz can be expressed as

N = kTW
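For example, with Boltzmann's constant k and a room temperature of about 290 K, the thermal noise in a 1-MHz bandwidth works out as follows (a standard back-of-the-envelope calculation, not a figure from the text):

import math

k = 1.38e-23       # Boltzmann's constant, joules per kelvin
T = 290            # room temperature, kelvins
W = 1e6            # bandwidth, hertz

N = k * T * W                            # thermal noise power, about 4.0e-15 W
print(N, round(10 * math.log10(N), 1))   # about -144 dBW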
When signals at different frequencies share the same transmission medium,
the result may be intermodulation noise. The effect of intermodulation noise is to
produce signals at a frequency that is the sum or difference of the two original frequencies,
or multiples of those frequencies. For example, the mixing of signals at
frequencies f1 and f2 might produce energy at the frequency f1 + f2. This derived signal
could interfere with an intended signal at the frequency f1 + f2.
Intermodulation noise is produced when there is some nonlinearity in the
transmitter, receiver, or intervening transmission system. Normally, these components
behave as linear systems; that is, the output is equal to the input, times a constant.
In a nonlinear system, the output is a more complex function of the input.
Such nonlinearity can be caused by component malfunction or the use of excessive
signal strength. It is under these circumstances that the sum and difference terms
occur.
Crosstalk has been experienced by anyone who, while using the telephone,
has been able to hear another conversation; it is an unwanted coupling between signal
paths. It can occur by electrical coupling between nearby twisted pair or, rarely,
coax cable lines carrying multiple signals. Crosstalk can also occur when unwanted
signals are picked up by microwave antennas; although highly directional,
microwave energy does spread during propagation. Typically, crosstalk is of the
same order of magnitude (or less) as thermal noise.
All of the types of noise discussed so far have reasonably predictable and reasonably
constant magnitudes; it is thus possible to engineer a transmission system to
cope with them. Impulse noise, however, is noncontinuous, consisting of irregular
pulses or noise spikes of short duration and of relatively high amplitude. It is generated
from a variety of causes, including external electromagnetic disturbances,
such as lightning, and faults and flaws in the communications system.
Impulse noise is generally only a minor annoyance for analog data. For example,
voice transmission may be corrupted by short clicks and crackles with no loss of
intelligibility. However, impulse noise is the primary source of error in digital data
communication. For example, a sharp spike of energy of 0.01-second duration
would not destroy any voice data, but would wash out about 50 bits of data being
transmitted at 4800 bps. Figure 2.15 is an example of the effect on a digital signal.
Here the noise consists of a relatively modest level of thermal noise plus occasional
spikes of impulse noise. The digital data are recovered from the signal by sampling
the received waveform once per bit time. As can be seen, the noise is occasionally
sufficient to change a 1 to a 0 or a 0 to a 1.
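The "about 50 bits" figure quoted above is simply the product of the spike duration and the data rate:

spike_duration = 0.01     # seconds
bit_rate = 4800           # bits per second
print(spike_duration * bit_rate)   # 48 bits, roughly 50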
Channel Capacity
We have seen that there are a variety of impairments that distort or corrupt a signal.
For digital data, the question that then arises is to what extent these impairments
limit the data rate that can be achieved. The rate at which data can be transmitted
over a given communication path, or channel, under given conditions, is
referred to as the channel capacity.
There are four concepts here that we are trying to relate to one another:
Data rate. This is the rate, in bits per second (bps), at which data can be communicated.
Bandwidth. This is the bandwidth of the transmitted signal as constrained by
the transmitter and by the nature of the transmission medium, expressed in
cycles per second, or hertz.
Noise. The average level of noise over the communications path.
Error rate. The rate at which errors occur, where an error is the reception of
a 1 when a 0 was transmitted, or the reception of a 0 when a 1 was transmitted.
The problem we are addressing is this: Communications facilities are expensive,
and, in general, the greater the bandwidth of a facility, the greater the cost.
Furthermore, all transmission channels of any practical interest are of limited bandwidth.
The limitations arise from the physical properties of the transmission
medium or from deliberate limitations at the transmitter on the bandwidth to prevent
interference from other sources. Accordingly, we would like to make as efficient
use as possible of a given bandwidth. For digital data, this means that we
would like to get as high a data rate as possible at a particular limit of error rate for
a given bandwidth. The main constraint on achieving this efficiency is noise.
To begin, let us consider the case of a channel that is noise-free. In this environment,
the limitation on data rate is simply the bandwidth of the signal. A formulation
of this limitation, due to Nyquist, states that if the rate of signal transmission
is 2W, then a signal with frequencies no greater than W is sufficient to carry the
data rate. The converse is also true: Given a bandwidth of W, the highest signal rate
that can be carried is 2W. This limitation is due to the effect of intersymbol interference,
such as is produced by delay distortion. The result is useful in the development
of digital-to-analog encoding schemes and is derived in Lesson 4A.
Note that in the last paragraph, we referred to signal rate. If the signals to be
transmitted are binary (two voltage levels), then the data rate that can be supported
by W Hz is 2W bps. As an example, consider a voice channel being used, via
modem, to transmit digital data. Assume a bandwidth of 3100 Hz. Then the capacity,
C, of the channel is 2W = 6200 bps. However, as we shall see in Lesson 4, signals
with more than two levels can be used; that is, each signal element can represent
more than one bit. For example, if four possible voltage levels are used as
signals, then each signal element can represent two bits. With multilevel signaling,
the Nyquist formulation becomes

C = 2W log2 M

where M is the number of discrete signal or voltage levels. Thus, for M = 8, a value
used with some modems, C becomes 18,600 bps.
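A small Python sketch of the Nyquist formulation, reproducing the two figures quoted above for a 3100-Hz channel:

import math

def nyquist_capacity(bandwidth_hz, levels):
    # Noise-free channel capacity in bps for a given bandwidth and number of signal levels.
    return 2 * bandwidth_hz * math.log2(levels)

print(nyquist_capacity(3100, 2))   # 6200.0 bps with binary signaling
print(nyquist_capacity(3100, 8))   # 18600.0 bps with 8-level signaling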
So, for a given bandwidth, the data rate can be increased by increasing the
number of different signals. However, this places an increased burden on the
receiver: Instead of distinguishing one of two possible signals during each signal
time, it must distinguish one of M possible signals. Noise and other impairments on
the transmission line will limit the practical value of M.
Thus, all other things being equal, doubling the bandwidth doubles the data
rate. Now consider the relationship between data rate, noise, and error rate. This
can be explained intuitively by again considering Figure 2.15. The presence of noise
can corrupt one or more bits. If the data rate is increased, then the bits become
"shorter" so that more bits are affected by a given pattern of noise. Thus, at a given
noise level, the higher the data rate, the higher the error rate.
All of these concepts can be tied together neatly in a formula developed by
the mathematician Claude Shannon. As we have just illustrated, the higher the data
rate, the more damage that unwanted noise can do. For a given level of noise, we
would expect that a greater signal strength would improve the ability to correctly
receive data in the presence of noise. The key parameter involved in this reasoning
is the signal-to-noise ratio (S/N), which is the ratio of the power in a signal to the
power contained in the noise that is present at a particular point in the transmission.
Typically, this ratio is measured at a receiver, as it is at this point that an attempt is
made to process the signal and eliminate the unwanted noise. For convenience, this
ratio is often reported in decibels:

(S/N)dB = 10 log10 (S/N)
This expresses the amount, in decibels, that the intended signal exceeds the noise
level. A high S/N will mean a high-quality signal and a low number of required
intermediate repeaters.
The signal-to-noise ratio is important in the transmission of digital data
because it sets the upper bound on the achievable data rate. Shannon's result is that
the maximum channel capacity, in bits per second, obeys the equation

C = W log2 (1 + S/N)
where C is the capacity of the channel in bits per second and W is the bandwidth of
the channel in hertz. As an example, consider a voice channel being used, via
modem, to transmit digital data. Assume a bandwidth of 3100 Hz. A typical value
of S/N for a voice-grade line is 30 dB, or a ratio of 1000:1. Thus,

C = 3100 log2 (1 + 1000) ≈ 30,900 bps
This represents the theoretical maximum that can be achieved. In
practice, however, only much lower rates are achieved. One reason for this is that
the formula assumes white noise (thermal noise). Impulse noise is not accounted
for, nor are attenuation or delay distortion.
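The same Shannon calculation in Python, converting the 30-dB signal-to-noise figure to a power ratio first:

import math

def shannon_capacity(bandwidth_hz, snr_db):
    # Theoretical error-free capacity in bps for a given bandwidth and S/N expressed in dB.
    snr = 10 ** (snr_db / 10)                # convert decibels to a power ratio
    return bandwidth_hz * math.log2(1 + snr)

print(round(shannon_capacity(3100, 30)))     # roughly 30,900 bps for a voice-grade line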
The capacity indicated by Shannon's formula is referred to as the error-free
capacity. Shannon proved that if the actual information rate on a channel is less
than the error-free capacity, then it is theoretically possible to use a suitable signal
code to achieve error-free transmission through the channel. Shannon's theorem
unfortunately does not suggest a means for finding such codes, but it does provide
a yardstick by which the performance of practical communication schemes may be
measured.
The measure of efficiency of a digital transmission is the ratio C/W, which
is the bps per hertz that is achieved. Figure 2.16 illustrates the theoretical efficiency
of a transmission. It also shows the actual results obtained on a typical voice-grade
line.
Several other observations concerning the above equation may be instructive.
For a given level of noise, it would appear that the data rate could be increased by
increasing either signal strength or bandwidth. However, as the signal strength
increases, so do nonlinearities in the system, leading to an increase in intermodulation
noise. Note also that, because noise is assumed to be white, the wider
the bandwidth, the more noise is admitted to the system. Thus, as W increases,
S/N decreases.
Finally, we mention a parameter related to S/N that is more convenient for
determining digital data rates and error rates. The parameter is the ratio of signal
energy per bit to noise-power density per hertz, Eb/N0. Consider a signal, digital or
analog, that contains binary digital data transmitted at a certain bit rate R. Recalling
that 1 watt = 1 joule/s, the energy per bit in a signal is given by Eb = STb, where
S is the signal power and Tb is the time required to send one bit. The data rate R is
just R = 1/Tb. Thus,

Eb/N0 = (S/R)/N0 = S/(kTR)

The ratio Eb/N0 is important because the bit error rate for digital data is a (decreasing)
function of this ratio. Given a value of Eb/N0 needed to achieve a desired error
rate, the parameters in the preceding formula may be selected. Note that as the bit
rate R increases, the transmitted signal power, relative to noise, must increase to
maintain the required Eb/N0.
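Assuming the noise is purely thermal (so that N0 = kT), the ratio can be evaluated directly from the signal power, bit rate, and temperature. The numbers below are illustrative values, not taken from the text:

import math

def eb_n0_db(signal_power_watts, bit_rate_bps, temperature_k):
    # Eb/N0 in decibels, assuming thermal noise only (N0 = kT).
    k = 1.38e-23                              # Boltzmann's constant, J/K
    eb = signal_power_watts / bit_rate_bps    # energy per bit, Eb = S * Tb = S / R
    n0 = k * temperature_k                    # noise power density, watts per hertz
    return 10 * math.log10(eb / n0)

# Hypothetical link: received signal power of 1e-12 W (-120 dBW), 9600 bps, room temperature.
print(round(eb_n0_db(1e-12, 9600, 290), 1))   # roughly 44 dB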
Let us try to grasp this result intuitively by considering again Figure 2.15. The
signal here is digital, but the reasoning would be the same for an analog signal. In
several instances, the noise is sufficient to alter the value of a bit. Now, if the data
rate were doubled, the bits would be more tightly packed together, and the same
passage of noise might destroy two bits. Thus, for constant signal and noise strength,
an increase in data rate increases the error rate.
