PROTOCOLS AND ARCHITECTURE
We
begin with an exposition of the concept of a communications protocol. It
is
shown that protocols are fundamental to all data communications. Next, we look
at
a way of systematically describing and implementing the communications function
by
viewing the communications task in terms of a column of layers, each of
which
contains protocols; this is the view of the now-famous Open Systems
Interconnection
(OSI)
model.
Although
the OSI model is almost universally accepted as the framework for
discourse
in this area, there is another model, known as the TCP/IP protocol
architecture,
which
has gained commercial dominance at the expense of OSI. Most of the
specific
protocols described in Part Four are part of the TCP/IP suite of protocols.
PROTOCOLS
We
begin with an overview of characteristics of protocols. Following this
overview, we look in more detail at OSI and TCP/IP.
Characteristics
The
concepts of distributed processing and computer networking imply that entities
in
different systems need to communicate. We use the terms entity and system
in a
very
general sense. Examples of entities are user application programs, file
transfer
packages,
data base management systems, electronic mail facilities, and terminals.
Examples
of systems are computers, terminals, and remote sensors. Note that in
some
cases the entity and the system in which it resides are coextensive (e.g.,
terminals).
In
general, an entity is anything capable of sending or receiving information,
and
a system is a physically distinct object that contains one or more entities.
For
two entities to successfully communicate, they must "speak the same
language."
What
is communicated, how it is communicated, and when it is communicated
must
conform to some mutually acceptable set of conventions between the
entities
involved. The set of conventions is referred to as a protocol, which may be
defined
as a set of rules governing the exchange of data between two entities. The
key
elements of a protocol are
Syntax.
Includes
such things as data format, coding, and signal levels.
Semantics.
Includes
control information for coordination and error handling.
Timing.
Includes
speed matching and sequencing.
HDLC
is an example of a protocol. The data to be exchanged must be sent in
frames
of a specific format (syntax). The control field provides a variety of regulatory
functions, such as setting a mode and establishing a connection (semantics).
Provisions
are also included for flow control (timing). Most of Part Four will be
devoted
to discussing other examples of protocols.
Some
important characteristics of a protocol are
Direct/indirect
Monolithic/structured
Symmetric/asymmetric
Standard/nonstandard
Communication
between two entities may be direct or indirect. Figure 15.1
depicts
possible situations. If two systems share a point-to-point link, the entities
in
these
systems may communicate directly; that is, data and control information pass
directly
between entities with no intervening active agent. The same may be said of
a
multipoint configuration, although here the entities must be concerned with the
issue
of access control, making the protocol more complex. If systems connect
through
a switched communication network, a direct protocol is no longer possible.
The
two entities must depend on the functioning of other entities to exchange data.
A
more extreme case is a situation in which two entities do not even share the
same
switched
network, but are indirectly connected through two or more networks. A
set
of such interconnected networks is termed an internet.
A
protocol is either monolithic or structured. It should become clear as Part
Four
proceeds that the task of communication between entities on different systems
is
too complex to be handled as a unit. For example, consider an electronic mail
package
running on two computers connected by a synchronous HDLC link. To be
truly
monolithic, the package would need to include all of the HDLC logic. If the
connection
were over a packet-switched network, the package would still need the
HDLC
logic (or some equivalent) to attach to the network. It would also need logic
for
breaking up mail into packet-sized chunks, logic for requesting a virtual
circuit,
and
so forth. Mail should only be sent when the destination system and entity are
active
and ready to receive; logic is needed for that kind of coordination, and, as we
shall
see, the list goes on. A change in any aspect means that this huge package must
be
modified, with the risk of introducing difficult-to-find bugs.
An
alternative is to use structured design and implementation techniques.
Instead
of a single protocol, there is a set of protocols that exhibit a hierarchical
or
layered
structure. Primitive functions are implemented in lower-level entities that
provide
services to higher-level entities. For example, there could be an HDLC
module
(entity) that is invoked by an electronic mail facility when needed, which is
just
another form of indirection; higher-level entities rely on lower-level entities
to
exchange
data.
When
structured protocol design is used, we refer to the hardware and software
that
implements the communications function as a communications architecture;
the
remainder of this lesson, after this section, is devoted to this concept.
A
protocol may be either symmetric or asymmetric. Most of the protocols that
we
shall study are symmetric; that is, they involve communication between peer
entities.
Asymmetry may be dictated by the logic of an exchange (e.g., a client and
a
server process), or by the desire to keep one of the entities or systems as
simple
as
possible. An example of the latter situation is the normal response mode of
HDLC.
Typically, this involves a computer that polls and selects a number of
terminals.
The
logic on the terminal end is quite straightforward.
Finally,
a protocol may be either standard or nonstandard. A nonstandard
protocol
is one built for a specific communications situation or, at most, a particular
model
of a computer. Thus, if K different kinds of information sources have to
communicate
with L types of information receivers, K × L different protocols are
needed
without standards, and a total of 2 × K × L implementations are required
(Figure
15.2a). If all systems shared a common protocol, only K + L
implementations
would
be needed (Figure 15.2b). The increasing use of distributed processing
and
the decreasing inclination of customers to remain locked into a single vendor
dictate
that all vendors implement protocols that conform to an agreed-upon
standard.
Functions
Before
turning to a discussion of communications architecture and the various levels
of
protocols, let us consider a rather small set of functions that form the basis
of
all
protocols. Not all protocols have all functions, as that would involve a
significant
duplication
of effort. There are, nevertheless, many instances of the same type of
function
being present in protocols at different levels.
This
discussion will, of necessity, be rather abstract; it does, however, provide
an
integrated overview of the characteristics and functions of communications
protocols.
The
concept of protocol is fundamental to all of the remainder of Part Four,
and,
as we proceed, specific examples of all these functions will be seen.
We
can group protocol functions into the following categories:
Segmentation
and reassembly
Encapsulation
Connection
control
Ordered
delivery
Flow
control
Error
control
Addressing
Multiplexing
Transmission
services
Segmentation
and Reassembly
A
protocol is concerned with exchanging streams of data between two entities.
Usually,
the
transfer can be characterized as consisting of a sequence of blocks of data
of
some bounded size. At the application level, we refer to a logical unit of data
transfer
as a message. Now, whether the application entity sends data in messages
or
in a continuous stream, lower-level protocols may need to break the data up
into
blocks
of some smaller bounded size: this process is called segmentation. For
convenience,
we
shall refer to a block of data exchanged between two entities via a protocol
as
a protocol data unit (PDU).
There
are a number of motivations for segmentation, depending on the context.
Among
the typical reasons for segmentation are
The
communications network may only accept blocks of data up to a certain
size.
For example, an ATM network is limited to blocks of 53 octets; Ethernet
imposes
a maximum size of 1526 octets.
Error
control may be more efficient with a smaller PDU size. For example,
fewer
bits need to be retransmitted using smaller blocks with the selective
repeat
technique.
More
equitable access to shared transmission facilities, with shorter delay, can
be
provided. For example, without a maximum block size, one station could
monopolize
a multipoint medium.
A
smaller PDU size may mean that receiving entities can allocate smaller
buffers.
An
entity may require that data transfer comes to some sort of closure from
time
to time, for checkpoint and restart/recovery operations.
There
are several disadvantages to segmentation that argue for making blocks
as
large as possible:
Each
PDU, as we shall see, contains a fixed minimum amount of control information.
Hence,
the smaller the block, the greater the percentage overhead.
PDU
arrival may generate an interrupt that must be serviced. Smaller blocks
result
in more interrupts.
More time is spent processing smaller,
more numerous PDUs.
All
of these factors must be taken into account by the protocol designer in
determining
minimum and maximum PDU size.
The
counterpart of segmentation is reassembly. Eventually, the segmented
data
must be reassembled into messages appropriate to the application level. If
PDUs
arrive out of order, the task is complicated.
The
process of segmentation was illustrated in Figure 1.7.
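As an informal illustration, the following Python sketch carries out the segmentation and reassembly just described. The PDU layout, the 500-octet bound, and the function names are invented for the example and are not part of any standard.

```python
# Illustrative sketch of segmentation and reassembly (the maximum PDU size
# and the PDU layout are assumptions chosen for the example).

MAX_PDU_DATA = 500  # assumed bound imposed by a lower layer, in octets

def segment(message: bytes, max_size: int = MAX_PDU_DATA):
    """Break an application-level message into numbered PDUs."""
    return [
        {"seq": i, "data": message[off:off + max_size]}
        for i, off in enumerate(range(0, len(message), max_size))
    ]

def reassemble(pdus):
    """Rebuild the original message, tolerating out-of-order arrival."""
    ordered = sorted(pdus, key=lambda pdu: pdu["seq"])
    return b"".join(pdu["data"] for pdu in ordered)

msg = b"x" * 1400                      # a 1400-octet message
pdus = segment(msg)                    # -> 3 PDUs of 500, 500, and 400 octets
assert reassemble(reversed(pdus)) == msg
```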
Encapsulation
Each
PDU contains not only data but control information. Indeed, some PDUs consist
solely
of control information and no data. The control information falls into
three
general categories:
Address.
The
address of the sender and/or receiver may be indicated.
Error-detecting
code. Some
sort of frame check sequence is often included for
error
detection.
Protocol
control. Additional
information is included to implement the protocol
functions
listed in the remainder of this section.
The
addition of control information to data is referred to as encapsulation.
Data
are accepted or generated by an entity and encapsulated into a PDU containing
that
data plus control information (See Figures 1.7 and 1.8). An example of this
is
the HDLC frame (Figure 6.10).
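The encapsulation step can also be sketched in code. The fragment below prepends address and control fields and appends an error-detecting code, loosely in the spirit of an HDLC-style frame; the one-octet fields and the use of CRC-32 are simplifications for the example, not HDLC's actual layout.

```python
# Sketch of encapsulation: prepend address and control fields, append an
# error-detecting code. Field widths are illustrative, not HDLC's real format.
import struct
import zlib

def encapsulate(address: int, control: int, data: bytes) -> bytes:
    header = struct.pack("!BB", address, control)        # address + control octets
    fcs = struct.pack("!I", zlib.crc32(header + data))   # frame check sequence
    return header + data + fcs

pdu = encapsulate(address=0x03, control=0x10, data=b"user data")
```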
Connection
Control
An
entity may transmit data to another entity in such a way that each PDU is
treated
independently of all prior PDUs. This process is known as connectionless
data
transfer; an example is the use of the datagram. While this mode is useful, an
equally
important technique is connection-oriented data transfer, of which the virtual
circuit
is an example.
Connection-oriented
data transfer is to be preferred (even required) if stations
anticipate
a lengthy exchange of data and/or certain details of their protocol
must
be worked out dynamically. A logical association, or connection, is established
between
the entities. Three phases occur (Figure 15.3):
Connection
establishment
Data
transfer
Connection
termination
With
more sophisticated protocols, there may also be connection interrupt
and
recovery phases to cope with errors and other sorts of interruptions.
During
the connection establishment phase, two entities agree to exchange
data.
Typically, one station will issue a connection request (in connectionless
fashion!)
to
the other. A central authority may or may not be involved. In simpler
protocols,
the
receiving entity either accepts or rejects the request and, in the former
case,
away they go. In more complex protocols, this phase includes a negotiation
concerning
the syntax, semantics, and timing of the protocol. Both entities must, of
course,
be using the same protocol. But the protocol may allow certain optional
features,
and
these must be agreed upon by means of negotiation. For example, the
protocol
may specify a PDU size of up to 8000 octets; one station may wish to
restrict
this to 1000 octets.
Following
connection establishment, the data transfer phase is entered; here,
both
data and control information (e.g., flow control, error control) are exchanged.
The
figure shows a situation in which all of the data flows in one direction, with
acknowledgments
returned in the other direction. More typically, data and
acknowledgments
flow in both directions. Finally, one side or the other wishes to
terminate
the connection and does so by sending a termination request. Alternatively,
a
central authority might forcibly terminate a connection.
The
key characteristic of connection-oriented data transfer is that sequencing
is
used. Each side sequentially numbers the PDUs that it sends to the other side.
Because
each side remembers that it is engaged in a logical connection, it can keep
track
of both outgoing numbers, which it generates, and incoming numbers, which
are
generated by the other side. Indeed, one can essentially define a
connection-oriented
data
transfer as one in which both sides number PDUs and keep track of
the
incoming and outgoing numbers. Sequencing supports three main functions:
ordered
delivery, flow control, and error control.
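The three phases and the mutual numbering of PDUs can be pictured with a small sketch. The class below is a toy connection endpoint; the state names and methods are invented for the illustration and do not model any particular protocol.

```python
# Minimal sketch of a connection-oriented endpoint: establishment, data
# transfer with sequence numbering in both directions, and termination.

class ConnectionEndpoint:
    def __init__(self):
        self.state = "CLOSED"
        self.next_send_seq = 0      # numbers this side generates
        self.next_expected_seq = 0  # numbers the other side generates

    def establish(self):
        assert self.state == "CLOSED"
        self.state = "OPEN"         # connection-request/accept exchange omitted

    def send(self, data: bytes) -> dict:
        assert self.state == "OPEN"
        pdu = {"seq": self.next_send_seq, "data": data}
        self.next_send_seq += 1
        return pdu

    def receive(self, pdu: dict) -> bytes:
        assert self.state == "OPEN"
        assert pdu["seq"] == self.next_expected_seq, "out-of-order PDU"
        self.next_expected_seq += 1
        return pdu["data"]

    def terminate(self):
        self.state = "CLOSED"       # termination request/acknowledgment omitted
```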
Ordered
Delivery
If
two communicating entities are in different hosts connected by a network, there
is
a risk that PDUs will not arrive in the order in which they were sent, because
they
may
traverse different paths through the network. In connection-oriented protocols,
it
is generally required that PDU order be maintained. For example, if a file is
transferred
between two systems, we would like to be assured that the records of
the
received file are in the same order as those of the transmitted file, and not
shuffled.
If
each PDU is given a unique number, and numbers are assigned sequentially,
then
it is a logically simple task for the receiving entity to reorder received PDUs
on
the basis of sequence number. A problem with this scheme is that, with a finite
sequence
number field, sequence numbers repeat (modulo some maximum number).
Evidently,
the maximum sequence number must be greater than the maximum
number
of PDUs that could be outstanding at any time. In fact, the maximum number
may
need to be twice the maximum number of PDUs that could be outstanding
(e.g.,
selective-repeat ARQ; see Lesson 6).
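A short sketch makes the reordering task concrete. The receiver below buffers out-of-order PDUs and releases them in sequence; the modulus of 8 is an assumption chosen for the example, and a real protocol would also limit the number of outstanding PDUs relative to that modulus, as discussed above.

```python
# Sketch of a receive-side reordering buffer with a modulo sequence space.

MODULUS = 8   # assumed size of the sequence number space

class Reorderer:
    def __init__(self):
        self.expected = 0        # next in-order sequence number
        self.buffer = {}         # out-of-order PDUs awaiting delivery

    def accept(self, seq: int, data: bytes):
        """Store a PDU and return any data now deliverable in order."""
        self.buffer[seq] = data
        delivered = []
        while self.expected in self.buffer:
            delivered.append(self.buffer.pop(self.expected))
            self.expected = (self.expected + 1) % MODULUS
        return delivered

r = Reorderer()
assert r.accept(1, b"B") == []            # held: 0 has not yet arrived
assert r.accept(0, b"A") == [b"A", b"B"]  # both delivered in order
```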
Flow
Control
Flow
control was introduced in Lesson 6. In essence, flow control is a function
performed
by
a receiving entity to limit the amount or rate of data that is sent by a
transmitting
entity.
The
simplest form of flow control is a stop-and-wait procedure, in which each
PDU
must be acknowledged before the next can be sent. More efficient protocols
involve
some form of credit provided to the transmitter, which is the amount of data
that
can be sent without an acknowledgment. The sliding-window technique is an
example
of this mechanism.
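The transmitter's view of credit-based flow control can be sketched as follows; the credit of three PDUs is an arbitrary illustrative value, and the class models only the bookkeeping, not the exchange of acknowledgments itself.

```python
# Sketch of credit-based flow control as seen by the transmitter: the receiver
# grants a credit of so many PDUs, and the sender may not exceed it.

class CreditSender:
    def __init__(self, credit: int):
        self.credit = credit        # PDUs that may be sent without an ACK
        self.in_flight = 0

    def can_send(self) -> bool:
        return self.in_flight < self.credit

    def send(self, data: bytes):
        if not self.can_send():
            raise RuntimeError("window closed: wait for acknowledgment")
        self.in_flight += 1
        # ... hand the PDU to the lower layer here ...

    def on_ack(self, acked: int, new_credit: int):
        self.in_flight -= acked     # acknowledged PDUs leave the window
        self.credit = new_credit    # the receiver may also adjust the credit

s = CreditSender(credit=3)
for _ in range(3):
    s.send(b"pdu")
assert not s.can_send()             # must wait until the receiver acknowledges
```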
Flow
control is a good example of a function that must be implemented in several
protocols.
Consider again Figure 1.6. The network will need to exercise flow
control
over station 1's network services module via the network access protocol, in
order
to enforce network traffic control. At the same time, station 2's network
services
module
has only limited buffer space and needs to exercise flow control over
station
1's network services module via the process-to-process protocol. Finally,
even
though station 2's network service module can control its data flow, station
2's
application
may be vulnerable to overflow. For example, the application could be
hung
up waiting for disk access. Thus, flow control is also needed over the
application-oriented protocol.
Error
Control
Another
previously introduced function is error control. Techniques are needed to
guard
against loss or damage of data and control information. Most techniques
involve
error detection, based on a frame check sequence, and PDU retransmission.
Retransmission
is often activated by a timer. If a sending entity fails to receive an
acknowledgment
to a PDU within a specified period of time, it will retransmit.
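Timer-driven retransmission can be sketched as follows. The two-second timeout and the structure of the retransmit list are assumptions made for the example; real protocols typically derive the timeout from measured round-trip delay.

```python
# Sketch of timer-driven retransmission: keep a copy of each unacknowledged
# PDU with the time it was last sent, and resend anything older than the timeout.
import time

TIMEOUT = 2.0   # seconds; an arbitrary illustrative value

class Retransmitter:
    def __init__(self, send_fn):
        self.send_fn = send_fn          # callable that actually transmits a PDU
        self.unacked = {}               # seq -> (pdu, time last sent)

    def send(self, seq: int, pdu: bytes):
        self.send_fn(pdu)
        self.unacked[seq] = (pdu, time.monotonic())

    def on_ack(self, seq: int):
        self.unacked.pop(seq, None)     # acknowledged: stop tracking it

    def check_timers(self):
        now = time.monotonic()
        for seq, (pdu, sent_at) in list(self.unacked.items()):
            if now - sent_at > TIMEOUT:
                self.send_fn(pdu)       # retransmit and restart the timer
                self.unacked[seq] = (pdu, now)
```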
As
with flow control, error control is a function that must be performed at
various
levels
of protocol. Consider again Figure 1.6. The network access protocol
should
include error control to assure that data are successfully exchanged between
station
and network. However, a packet of data may be lost inside the network, and
the
process-to-process protocol should be able to recover from this loss.
Addressing
The
concept of addressing in a communications architecture is a complex one and
covers
a number of issues. At least four separate issues need to be discussed:
Addressing level
Addressing scope
Connection identifiers
Addressing mode
During
the discussion, we illustrate the concepts using Figure 15.4, which
shows
a configuration using the TCP/IP protocol architecture. The concepts are
essentially
the same for the OSI architecture or any other communications architecture.
Addressing
level refers
to the level in the communications architecture at
which
an entity is named. Typically, a unique address is associated with each end
system
(e.g., host or terminal) and each intermediate system (e.g., router) in a
configuration.
Such
an address is, in general, a network-level address. In the case of the
TCP/IP
architecture, this is referred to as an IP address, or simply an internet
address.
In the case of the OSI architecture, this is referred to as a network service
access
point (NSAP). The network-level address is used to route a PDU through a
network
or networks to a system indicated by a network-level address in the PDU.
Once
data arrives at a destination system, it must be routed to some process
or
application in the system. Typically, a system will support multiple
applications,
and
an application may support multiple users. Each application and, perhaps, each
concurrent
user of an application, is assigned a unique identifier, referred to as a
port
in the TCP/IP architecture, and as a service access point (SAP) in the OSI
architecture.
For example, a host system might support both an electronic mail
application
and a file transfer application. At minimum, each application would
have
a port number or SAP that is unique within that system. Further, the file
transfer
application
might support multiple simultaneous transfers, in which case, each
transfer
is dynamically assigned a unique port number or SAP.
Figure
15.4
illustrates
two levels of addressing within a system; this is typically
the
case for the TCP/IP architecture. However, there can be addressing at each
level
of
an architecture. For example, a unique SAP can be assigned to each level of the
OSI
architecture.
Another
issue that relates to the address of an end system or intermediate system
is
addressing scope. The internet address or NSAP address referred to above
is
a
global address. The key characteristics of a global address are
Global
nonambiguity. A
global address identifies a unique system. Synonyms
are
permitted. That is, a system may have more than one global address.
Global
applicability. It
is possible at any global address to identify any other
global
address, in any system, by means of the global address of the other
system.
Because
a global address is unique and globally applicable, it enables an internet
to
route data from any system attached to any network to any other system
attached
to any other network.
Figure
15.4 illustrates that another level of addressing may be required. Each
subnetwork
must maintain a unique address for each device interface on the subnetwork.
Examples
are a MAC address on an IEEE 802 network and an X.25 DTE
address,
both of which enable the subnetwork to route data units (e.g., MAC
frames,
X.25 packets) through the subnetwork and deliver them to the intended
attached
system. We can refer to such an address as a subnetwork attachment-point
address.
The
issue of addressing scope is generally only relevant for network-level
addresses.
A port or SAP above the network level is unique within a given system
but
need not be globally unique. For example, in Figure 15.4, there can be a port 1
in
system A and a port 1 in system B. The full designation of these two ports
could
be
expressed as A.1 and B.1, which are unique designations.
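The two levels of addressing can be represented very simply in code; the sketch below is only an illustration, using host names in place of real network-level addresses.

```python
# Sketch of two-level addressing: a globally unique network-level address
# plus a port that is unique only within that system, so that "A.1" and
# "B.1" are distinct full designations.
from typing import NamedTuple

class FullAddress(NamedTuple):
    host: str   # network-level (global) address, e.g. an internet address
    port: int   # identifier unique only within the host

a1 = FullAddress(host="A", port=1)
b1 = FullAddress(host="B", port=1)
assert a1 != b1                                           # same port, different systems
print(f"{a1.host}.{a1.port}", f"{b1.host}.{b1.port}")     # A.1  B.1
```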
The
concept of connection identifiers comes into play when we consider
connection-oriented
data transfer (e.g., virtual circuit) rather than connectionless data
transfer
(e.g., datagram). For connectionless data transfer, a global name is used
with
each data transmission. For connection-oriented transfer, it is sometimes
desirable
to
use only a connection name during the data transfer phase. The scenario is
this:
Entity 1 on system A requests a connection to entity 2 on system B, perhaps
using
the global address B.2. When B.2 accepts the connection, a connection name
(usually
a number) is provided and is used by both entities for future transmissions.
The
use of a connection name has several advantages:
Reduced
overhead. Connection
names are generally shorter than global
names.
For example, in the X.25 protocol (discussed in Lesson 9) used over
packet-switched
networks, connection-request packets contain both source
and
destination address fields, each with a system-defined length that may be
a
number of octets. After a virtual circuit is established, data packets contain
just
a 12-bit virtual circuit number.
Routing.
In
setting up a connection, a
fixed
route may be defined. The connection
name
serves to identify the route to intermediate systems, such as
packet-switching
nodes, for handling future PDUs.
Multiplexing.
We
address this function in more general terms below. Here we
note
that an entity may wish to enjoy more than one connection simultaneously.
Thus,
incoming PDUs must be identified by connection name.
Use
of state information. Once a connection is established, the end systems
can
maintain state information relating to the connection; this enables such
functions
as flow control and error control using sequence numbers.
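The scenario described above, in which global names are used only to set up the connection and a short connection number selects the per-connection state afterward, can be sketched as follows; the table layout and field names are assumptions made for the example.

```python
# Sketch of connection identifiers: after setup, a short connection number
# selects the per-connection state (sequence numbers, flow-control credit, ...).
import itertools

class ConnectionTable:
    def __init__(self):
        self._next_id = itertools.count(1)
        self._table = {}                      # connection id -> state

    def open(self, local_addr: str, remote_addr: str) -> int:
        conn_id = next(self._next_id)         # short name used from now on
        self._table[conn_id] = {
            "local": local_addr, "remote": remote_addr,
            "send_seq": 0, "recv_seq": 0, "credit": 4,
        }
        return conn_id

    def lookup(self, conn_id: int) -> dict:
        return self._table[conn_id]           # state for an incoming PDU

conns = ConnectionTable()
cid = conns.open("A.1", "B.2")                # connection set up with global names
state = conns.lookup(cid)                     # data transfer then uses only cid
```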
Figure
15.4
shows
several examples of connections. The logical connection
between
router J
and
host B is at the network level. For example, if network 2 is a
packet-switching
network using X.25,
then
this logical connection would be a virtual
circuit.
At a higher level, many transport-level protocols, such as TCP, support logical
connections
between users of the transport service. Thus, TCP can maintain a
connection
between two ports on different systems.
Another
addressing concept is that of addressing mode. Most commonly, an
address
refers to a single system or port; in this case, it is referred to as an
individual
or
unicast address. It is also possible for an address to refer to more than one
entity
or port. Such an address identifies multiple simultaneous recipients for data.
For
example, a user might wish to send a memo to a number of individuals. The
network
control
center may wish to notify all users that the network is going down. An
address
for multiple recipients may be broadcast (intended for all entities within a
domain) or multicast (intended for a specific subset of entities). Table 15.1 illustrates
the
possibilities.
Multiplexing
Related
to the concept of addressing is that of multiplexing. One form of multiplexing
is
supported by means of multiple connections into a single system. For
example,
with X.25,
there
can be multiple virtual circuits terminating in a single end
system;
we can say that these virtual circuits are multiplexed over the single physical
interface
between the end system and the network. Multiplexing can also be
accomplished
via port names, which also permit multiple simultaneous connections.
For
example, there can be multiple TCP connections terminating in a given system,
each
connection supporting a different pair of ports.
Multiplexing
is used in another context as well, namely the mapping of connections
from
one level to another. Consider again Figure 15.4. Network A might
provide
a virtual circuit service. For each process-to-process connection established
at
the network services level, a virtual circuit could be created at the network
access
level.
This is a one-to-one relationship, but need not be so. Multiplexing can be used
in
one of two directions (Figure 15.5). Upward multiplexing occurs when
multiple
higher-level
connections are multiplexed on, or share, a single lower-level connection
in order to make more efficient use of the lower-level service or to provide
several
higher-level
connections in an environment where only a single lower-level
connection
exists. Figure 15.5
shows
an example of upward multiplexing. Downward
multiplexing,
or splitting, means that a single higher-level connection is built
on
top of multiple lower-level connections, the traffic on the higher connection
being
divided among the various lower connections. This technique may be used to
provide
reliability, performance, or efficiency.
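Upward multiplexing can be sketched in a few lines: several higher-level connections share one lower-level connection, and a connection identifier carried in each PDU is used to demultiplex on arrival. The one-octet framing below is invented purely for illustration.

```python
# Sketch of upward multiplexing over a single lower-level connection.

class LowerConnection:
    """Stands in for one lower-level connection (e.g. a single virtual circuit)."""
    def __init__(self):
        self.delivered = []
    def send(self, pdu: bytes):
        self.delivered.append(pdu)

class UpwardMultiplexer:
    def __init__(self, lower: LowerConnection):
        self.lower = lower
        self.upper = {}                       # upper connection id -> receive callback

    def attach(self, conn_id: int, on_data):
        self.upper[conn_id] = on_data

    def send(self, conn_id: int, data: bytes):
        # prefix the upper-connection id so the far end can demultiplex
        self.lower.send(conn_id.to_bytes(1, "big") + data)

    def on_lower_data(self, pdu: bytes):
        conn_id, data = pdu[0], pdu[1:]
        self.upper[conn_id](data)             # hand to the right upper connection

lower = LowerConnection()
mux = UpwardMultiplexer(lower)
mux.attach(1, lambda d: print("connection 1 received", d))
mux.send(1, b"hello")
mux.on_lower_data(lower.delivered[0])         # demultiplexed back to connection 1
```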
Transmission
Services
A
protocol may provide a variety of additional services to the entities that use
it.
We
mention here three common examples:
Priority.
Certain
messages, such as control messages, may need to get through
to
the destination entity with minimum delay. An example would be a
close-connection
request.
Thus, priority could be assigned on a per-message basis.
Additionally,
priority could be assigned on a per-connection basis.
Grade
of service. Certain
classes of data may require a minimum throughput
or
a maximum delay threshold.
Security.
Security
mechanisms, restricting access, may be invoked.
All
of these services depend on the underlying transmission system and on any
intervening
lower-level entities. If it is possible for these services to be provided
from
below, the protocol can be used by the two entities to exercise such services.
OSI
As
discussed in Lesson 1, standards are needed to promote interoperability among
vendor
equipment and to encourage economies of scale. Because of the complexity
of
the communications task, no single standard will suffice. Rather, the functions
should
be broken down into more manageable parts and organized as a communications
architecture,
which would then form the framework for standardization.
This
line of reasoning led ISO in 1977 to establish a subcommittee to develop
such
an architecture. The result was the Open Systems Interconnection (OSI)
reference
model.
Although the essential elements of the model were put into place
quickly,
the final ISO standard, ISO 7498, was not published until 1984. A technically
compatible version was issued by CCITT (now ITU-T) as X.200.
The
Model
A
widely accepted structuring technique, and the one chosen by ISO, is layering.
The
communications functions are partitioned into a hierarchical set of layers.
Each
layer
performs a related subset of the functions required to communicate with
another
system, relying on the next-lower layer to perform more primitive functions,
and
to conceal the details of those functions, as it provides services to the
next-higher
layer. Ideally, the layers should be defined so that changes in one layer
do
not require changes in the other layers. Thus, we have decomposed one problem
into
a number of more manageable subproblems.
The
task of ISO was to define a set of layers and to delineate the services
performed
by
each layer. The partitioning should group functions logically, and should
have
enough layers to make each one manageably small, but should not have so
many
layers that the processing overhead imposed by their collection is burdensome.
The
principles that guided the design effort are summarized in Table 15.2.
The
resulting reference model has seven layers, which are listed with a brief defin
ition
in Figure 1.10. Table 15.3 provides ISO's justification for the selection of
these
layers.
Figure
15.6 illustrates the OSI architecture. Each system contains the seven
layers.
Communication is between applications in the two computers, labeled application
X
and application Y in the figure. If application X wishes to send a message
to
application Y, it invokes the application layer (layer 7). Layer 7 establishes
a peer
relationship
with layer 7 of the target computer, using a layer-7 protocol (application
protocol).
This protocol requires services from layer 6, so the two layer-6 entities
use
a protocol of their own, and so on down to the physical layer, which actually
transmits
bits over a transmission medium.
Note
that there is no direct communication between peer layers except at the
physical
layer. That is, above the physical layer, each protocol entity sends data
down
to the next-lower layer to get the data across to its peer entity. Even at the
physical
layer, the OSI model does not stipulate that two systems be directly connected.
For
example, a
packet-switched
or circuit-switched network may be used to
provide
the communication link.
Figure
15.6 also highlights the use of protocol data units (PDUs) within the
OSI
architecture. First, consider the most common way in which protocols are
realized.
When
application X has a message to send to application Y, it transfers those
data
to an application entity in the application layer. A header is appended to the
data
that contains the required information for the peer-layer-7 protocol
(encapsulation).
The
original data, plus the header, are now passed as a unit to layer 6. The
presentation
entity treats the whole unit as data and appends its own header (a second
encapsulation).
This process continues down through layer 2, which generally
adds
both a header and a trailer (e.g., HDLC). This layer-2 unit, called a frame,
is
then
passed by the physical layer onto the transmission medium. When the frame is
received
by the target system, the reverse process occurs. As the data ascend, each
layer
strips off the outermost header, acts on the protocol information contained
therein,
and passes the remainder up to the next layer.
At
each stage of the process, a layer may fragment the data unit it receives
from
the next-higher layer into several parts, in order to accommodate its own
requirements.
These data units must then be reassembled by the corresponding
peer
layer before being passed up.
Standardization Within the OSI Framework
The
principal motivation for the development of the OSI model was to provide a
framework
for standardization. Within the model, one or more protocol standards
can
be developed at each layer. The model defines, in general terms, the functions
to
be performed at that layer and facilitates the standards-making process in two
ways:
Because
the functions of each layer are well-defined, standards can be developed
independently
and simultaneously for each layer, thereby speeding up
the
standards-making process.
Because
the boundaries between layers are well defined, changes in standards
in
one layer need not affect already existing software in another layer; this
makes
it easier to introduce new standards.
Figure
15.7
illustrates
the use of the OSI model as such a framework. The
overall
communications function is decomposed into seven distinct layers, using the
principles
outlined in Table 15.2. These principles essentially amount to the use of
modular
design. That is, the overall function is broken up into a number of modules,
making
the interfaces between modules as simple as possible. In addition, the
design
principle of information-hiding is used: Lower layers are concerned with
greater
levels of detail; upper layers are independent of these details. Within each
layer,
there exist both the service provided to the next higher layer and the protocol
to
the peer layer in other systems.
Figure
15.8 shows more specifically the nature of the standardization required
at
each layer. Three elements are key:
Protocol
specification. Two
entities at the same layer in different systems
cooperate
and interact by means of a protocol. Because two different open
systems
are involved, the protocol must be specified precisely; this includes
the
format of the protocol data units exchanged, the semantics of all fields,
and
the allowable sequence of PDUs.
Service
definition. In
addition to the protocol or protocols that operate at a
given
layer, standards are needed for the services that each layer provides to
the
next-higher layer. Typically, the definition of services is equivalent to a
functional
description that defines what services are provided, but not how the
services
are to be provided.
Addressing.
Each
layer provides services to entities at the next-higher layer.
These
entities are referenced by means of a service access point (SAP). Thus,
a
network service access point (NSAP) indicates a transport entity that is a
user
of the network service.
The
need to provide a precise protocol specification for open systems is
self-evident.
The
other two items listed above warrant further comment. With respect
to
service definitions, the motivation for providing only a functional definition
is as
follows.
First, the interaction between two adjacent layers takes place within the
confines
of a single open system and is not the concern of any other open system.
Thus,
as long as peer layers in different systems provide the same services to their
next-higher
layers, the details of how the services are provided may differ from one
system
to another without loss of interoperability. Second, it will usually be the
case
that
adjacent layers are implemented on the same processor. In that case, we would
like
to leave the system programmer free to exploit the hardware and operating
system
to
provide an interface that is as efficient as possible.
Service
Primitives and Parameters
The
services between adjacent layers in the OSI architecture are expressed in terms
of
primitives and parameters. A primitive specifies the function to
be performed,
and
a parameter is used to pass data and control information. The actual form of a
primitive
is implementation-dependent; an example is a procedure call.
Four
types of primitives are used in standards to define the interaction
between
adjacent layers in the architecture (X.210). These are defined in Table
15.4.
The layout of Figure 15.9a suggests the time ordering of these events. For
example,
consider the transfer of data from an (N) entity to a peer
(N) entity in
another
system. The following steps occur:
1. The source (N) entity invokes its (N-1)
entity with a DATA.request primitive.
Associated
with the primitive are the needed parameters, such as the data to
be
transmitted and the destination address.
2.
The
source (N-1) entity prepares an (N-1) PDU to be sent to its peer (N-1)
entity.
3.
The
destination (N-1) entity delivers the data to the appropriate destination
(N)
entity via a
DATA.indication,
which includes the data and source address
as
parameters.
4.
If
an acknowledgment is called for, the destination (N) entity issues a
DATA.response
to its (N-1) entity.
5.
The
(N-1) entity conveys the acknowledgment in an (N-1) PDU.
6.
The
acknowledgment is delivered to the (N) entity as a DATA.confirm.
This
sequence of events is referred to as a confirmed service, as the initiator
receives
confirmation that the requested service has had the desired effect at the
other
end. If only request and indication primitives are involved (corresponding to
steps
1 through 3), then the service dialogue is a nonconfirmed service; the
initiator
receives
no confirmation that the requested action has taken place (Figure 15.9b).
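The confirmed-service sequence of steps 1 through 6 can be rendered as a toy program. The classes below model only the (N-1) service boundary; their names and methods are invented for the illustration and are not taken from X.210.

```python
# Sketch of the confirmed service: request and indication carry the data
# across; response and confirm carry the acknowledgment back.

class NMinus1Entity:
    """The (N-1) service provider on one system."""
    def __init__(self, name):
        self.name = name
        self.peer = None            # the (N-1) entity in the other system
        self.user = None            # the (N) entity using this service

    # primitives issued by the local (N) entity
    def data_request(self, data, dest):
        # dest would select the peer system; here there is only one peer
        self.peer.deliver_indication(data, source=self.name)

    def data_response(self, ack):
        self.peer.deliver_confirm(ack)

    # events arriving from the peer (N-1) entity
    def deliver_indication(self, data, source):
        self.user.data_indication(data, source)

    def deliver_confirm(self, ack):
        self.user.data_confirm(ack)

class NEntity:
    """A simple (N) entity that acknowledges everything it receives."""
    def __init__(self, provider):
        self.provider = provider
        provider.user = self

    def data_indication(self, data, source):
        print(f"indication: {data!r} from {source}")
        self.provider.data_response(ack="ok")     # step 4: DATA.response

    def data_confirm(self, ack):
        print(f"confirm: {ack}")                  # step 6: DATA.confirm

a, b = NMinus1Entity("A"), NMinus1Entity("B")
a.peer, b.peer = b, a
NEntity(a), NEntity(b)
a.data_request(b"hello", dest="B")                # steps 1-3, then 4-6
```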
The
OSI
Layers
In
this section, we discuss briefly each of the layers and, where appropriate,
give
examples
of standards for protocols at those layers.
Physical
Layer
The
physical layer covers the physical interface between devices and the rules by
which
bits are passed from one to another. The physical layer has four important
characteristics:
Mechanical.
Relates
to the physical properties of the interface to a transmission
medium.
Typically, the specification is of a pluggable connector that
joins
one or more signal conductors, called circuits.
Electrical.
Relates
to the representation of bits (e.g., in terms of voltage levels)
and
the data transmission rate of bits.
Functional.
Specifies
the functions performed by individual circuits of the
physical
interface between a system and the transmission medium.
Procedural.
Specifies
the sequence of events by which bit streams are
exchanged
across the physical medium.
We
have already covered physical layer protocols in some detail in Section
5.3.
Examples of standards at this layer are EIA-232-E, as well as portions of ISDN
and
LAN standards.
Data
Link Layer
Whereas
the physical layer provides only a raw bit-stream service, the data link
layer
attempts to make the physical link reliable while providing the means to
activate,
maintain,
and deactivate the link. The principal service provided by the data
link
layer to higher layers is that of error detection and control. Thus, with a
fully
functional
data-link-layer protocol, the next higher layer may assume error-free
transmission
over the link. However, if communication is between two systems that
are
not directly connected, the connection will comprise a number of data links in
tandem,
each functioning independently. Thus, the higher layers are not relieved of
any
error control responsibility.
Lesson
6
was
devoted to data link protocols; examples of standards at this
layer
are HDLC, LAPB, LLC, and LAPD.
Network
Layer
The
network layer provides for the transfer of information between end systems
across
some sort of communications network. It relieves higher layers of the need
to
know anything about the underlying data transmission and switching technologies
used
to connect systems. At this layer, the computer system engages in a dialogue
with
the network to specify the destination address and to request certain network
facilities,
such as priority.
There
is a spectrum of possibilities for intervening communications facilities
to
be managed by the network layer. At one extreme, there is a direct
point-to-point
link
between stations. In this case, there may be no need for a network layer
because
the data link layer can perform the necessary function of managing the
link.
Next,
the systems could be connected across a single network, such as a
circuit-switching
or packet-switching network. As an example, the packet level of
the
X.25 standard is a network layer standard for this situation. Figure 15.10
shows
how
the presence of a network is accommodated by the OSI architecture. The lower
three
layers are concerned with attaching to and communicating with the network.
The
packets created by the end system pass through one or more network nodes
that
act as relays between the two end systems. The network nodes implement layers
1-3
of the architecture. In the figure, two end systems are connected through a
single
network node. Layer 3 in the node performs a switching and routing function.
Within
the node, there are two data link layers and two physical layers, corresponding
to
the links to the two end systems. Each data link (and physical) layer
operates
independently to provide service to the network layer over its respective
link.
The upper four layers are end-to-end protocols between the attached end
systems.
At
the other extreme, two end systems might wish to communicate but are not
even
connected to the same network. Rather, they are connected to networks that,
directly
or indirectly, are connected to each other. This case requires the use of
some
sort of internetworking technique; we explore this approach in Lesson 16.
Transport
Layer
The
transport layer provides a mechanism for the exchange of data between end
systems.
The connection-oriented transport service ensures that data are delivered
error-free,
in sequence, with no losses or duplications. The transport layer may also
be
concerned with optimizing the use of network services and with providing a
requested
quality of service to session entities. For example, the session entity may
specify
acceptable error rates, maximum delay, priority, and security.
The
size and complexity of a transport protocol depend on how reliable or
unreliable
the underlying network and network layer services are. Accordingly, ISO
has
developed a family of five transport protocol standards, each oriented toward a
different
underlying service. In the TCP/IP protocol suite, there are two common
transport-layer
protocols: the connection-oriented TCP (transmission control protocol)
and
the connectionless UDP (user datagram protocol).
Session
Layer
The
lowest four layers of the OSI model provide the means for the reliable
exchange
of data and provide an expedited data service. For many applications, this
basic
service is insufficient. For example, a remote terminal access application
might
require
a half-duplex dialogue. A transaction-processing application might require
checkpoints
in the data-transfer stream to permit backup and recovery. A message-processing
application
might require the ability to interrupt a dialogue in order to
prepare
a new portion of a message and later to resume the dialogue where it was
left
off.
All
these capabilities could be embedded in specific applications at layer 7.
However,
because these types of dialogue-structuring tools have widespread
applicability,
it
makes sense to organize them into a separate layer: the session layer.
The
session layer provides the mechanism for controlling the dialogue
between
applications in end systems. In many cases, there will be little or no need
for
session-layer services, but for some applications, such services are used. The
key
services
provided by the session layer include
Dialogue
discipline. This
can be two-way simultaneous (full duplex) or two-way
alternate
(half duplex).
Grouping.
The
flow of data can be marked to define groups of data. For
example,
if a retail store is transmitting sales data to a regional office, the data
can
be marked to indicate the end of the sales data for each department; this
would
signal the host computer to finalize running totals for that department
and
start new running counts for the next department.
Recovery.
The
session layer can provide a checkpointing mechanism, so that
if
a failure of some sort occurs between checkpoints, the session entity can
retransmit
all data since the last checkpoint.
ISO
has issued a standard for the session layer that includes, as options, services
such
as those just described.
Presentation
Layer
The
presentation layer defines the format of the data to be exchanged between
applications
and offers application programs a set of data transformation services.
The
presentation layer also defines the syntax used between application entities
and
provides
for the selection and subsequent modification of the representation used.
Examples
of specific services that may be performed at this layer include data
compression
and
encryption.
Application
Layer
The
application layer provides a means for application programs to access the
OSI
environment. This layer contains management functions and generally useful
mechanisms
that support distributed applications. In addition, general-purpose
applications
such as file transfer, electronic mail, and terminal access to remote
computers
are considered to reside at this layer.
TCP/IP PROTOCOL SUITE
For
many years, the technical literature on protocol architectures was dominated by
discussions
related to OSI and to the development of protocols and services at each
layer.
Throughout the 1980s, the belief was widespread that OSI would come to
dominate
commercially, both over architectures such as IBM's SNA, as well as over
competing
multivendor schemes such as TCP/IP; this promise was never realized. In
the
1990s, TCP/IP has become firmly established as the dominant commercial
architecture
and
as the protocol suite upon which the bulk of new protocol development
is
to be done.
There
are a number of reasons for the success of the TCP/IP protocols over
OSI:
1. TCP/IP protocols were specified, and enjoyed extensive use, prior to ISO
standardization
of alternative protocols. Thus, organizations in the 1980s with
an
immediate need were faced with the choice of waiting for the always-promised,
never-delivered complete OSI package or using the up-and-running, plug-and-play
TCP/IP suite. Once the obvious choice of TCP/IP was made,
the
cost and technical risks of migrating from an installed base inhibited OSI
acceptance.
2.
The
TCPIIP protocols were initially developed as a U.S. military research
effort
funded by the Department of Defense (DOD). Although DOD, like the
rest
of the U.S. government, was committed to international standards, DOD
had
immediate operational needs that could not be met during the 1980s and
early
1990s by off-the-shelf OSI-based products. Accordingly, DOD mandated
the
use of TCP/IP protocols for virtually all software purchases. Because
DOD
is the largest consumer of software products in the world, this
policy
created an enormous market, encouraging vendors to develop TCP/IP-based
products.
3.
The
Internet is built on the foundation of the TCP/IP suite. The dramatic
growth
of the Internet, and especially the World Wide Web, has cemented the
victory
of TCP/IP over OSI.
The
TCP/IP Approach
The
TCP/IP protocol suite recognizes that the task of communications is too complex
and
too diverse to be accomplished by a single unit. Accordingly, the task is
broken
up into modules or entities that may communicate with peer entities in
another
system. One entity within a system provides services to other entities and,
in
turn, uses the services of other entities. Good software-design practice
dictates
that
these entities be arranged in a modular and hierarchical fashion.
The
OSI model is based on this system of communication, but takes it one step
further,
recognizing that, in many respects, protocols at the same level of the hierarchy
have
certain features in common. This thinking yields the concept of rows or
layers,
as well as the attempt to describe in an abstract fashion what features are
held
in common by the protocols within a given row.
As
an explanatory tool, a layered model has significant value and, indeed, the
OSI
model is used for precisely that purpose in many lessons on data communications
and
telecommunications. The objection sometimes raised by the designers of
the
TCP/IP protocol suite and its protocols is that the OSI model is prescriptive
rather
than descriptive. It dictates that protocols within a given layer perform
certain
functions,
which may not be always desirable. It is possible to define more than
one
protocol at a given layer, and the functionality of those protocols may not be
the
same or even similar. Rather, what is common about a set of protocols at the
same
layer is that they share the same set of support protocols at the next lower
layer.
Furthermore,
there is the implication in the OSI model that, because interfaces
between
layers are well-defined, a new protocol can be substituted for an old
one
at a given layer with no impact on adjacent layers (see principle 6, Table
15.2);
this
is not always desirable or even possible. For example, a LAN lends itself
easily
to
multicast and broadcast addressing at the link level. If the IEEE 802 link
level
were
inserted below a network protocol entity that did not support multicasting and
broadcasting,
that service would be denied to upper layers of the hierarchy. To get
around
some of these problems, OSI proponents talk of null layers and sublayers.
It
sometimes seems that these artifacts save the model at the expense of good
protocol
design.
In
the TCP/IP model, as we shall see, the strict use of all layers is not
mandated.
For
example, there are application-level protocols that operate directly on
top
of IP.
TCP/IP Protocol Architecture
The
TCP/IP protocol suite was introduced in Lesson 1. As we pointed out, there is
no
official TCP/IP protocol model. However, it is useful to characterize the protocol
protocol
suite
as involving five layers. To summarize from Lesson 1, these layers are
Application
layer. Provides
communication between processes or applications
on
separate hosts.
Host-to-host,
or transport layer. Provides
end-to-end data transfer service.
This
layer may include reliability mechanisms. It hides the details of the
underlying
network or networks from the application layer.
Internet
layer. Concerned
with routing data from source to destination host
through
one or more networks connected by routers.
Network
access layer. Concerned
with the logical interface between an end
system
and a subnetwork.
Physical
layer. Defines
characteristics of the transmission medium, signaling
rate,
and signal encoding scheme.
Operation of TCP and IP
Figure
15.4 indicates how the TCP/IP protocols are configured for communications.
To
make clear that the total communications facility may consist of multiple
networks,
the
constituent networks are usually referred to as subnetworks. Some sort
of
network access protocol, such as token ring, is used to connect a computer to a
subnetwork.
This protocol enables the host to send data across the subnetwork to
another
host or, in the case of a host on another subnetwork, to a router. IP is
implemented
in
all of the end systems and the routers, acting as a relay to move a block
of
data from one host, through one or more routers, to another host. TCP is
implemented
only
in the end systems; it keeps track of the blocks of data to assure that
all
are delivered reliably to the appropriate application.
For
successful communication, every entity in the overall system must have a
unique
address. Actually, two levels of addressing are needed. Each host on a
subnetwork
must
have a unique global internet address; this allows the data to be delivered
to
the proper host. Each process within a host must have an address that is
unique
within the host; this allows the host-to-host protocol (TCP) to deliver data
to
the proper process. These latter addresses are known as ports.
Let
us trace a simple operation. Suppose that a process, associated with port
1
at host A, wishes to send a message to another process, associated with port 2
at
host
B. The process at A hands the message down to TCP with instructions to send
it
to host B, port 2. TCP hands the message down to IP with instructions to send
it
to
host B. Note that IP need not be told the identity of the destination port. All
that
it
needs to know is that the data are intended for host B. Next, IP hands the
message
down
to the network access layer (e.g., Ethernet logic) with instructions to
send
it to router J (the first hop on the way to B).
To
control this operation, control information as well as user data must be
transmitted,
as suggested in Figure 15.11. Let us say that the sending
process generates
a
block of data and passes this to TCP. TCP may break this block into smaller
pieces
to make it more manageable. To each of these pieces, TCP appends control
information
known as the TCP header, thereby forming a TCP segment. The control
information
is to be used by the peer TCP protocol entity at host B. Examples
of
items that are included in this header are
Destination
port. When
the TCP entity at B receives the segment, it must
know
to whom the data are to be delivered.
Sequence
number. TCP
numbers the segments that it sends to a particular
destination
port sequentially, so that if they arrive out of order, the TCP entity
at
B can reorder them.
Checksum.
The
sending TCP includes a code that is a function of the contents
of
the remainder of the segment. The receiving TCP performs the same calculation
and
compares the result with the incoming code. A discrepancy
results
if there has been some error in transmission.
Next,
TCP hands each segment over to IP, with instructions to transmit it to
B.
These segments must be transmitted across one or more subnetworks and
relayed
through one or more intermediate routers. This operation, too, requires the
use
of control information. Thus, IP appends a header of control information to
each
segment to form an IP datagram. An example of an item stored in the IP
header
is the destination host address (in this example, B).
Finally,
each IP datagram is presented to the network access layer for transmission
across
the first subnetwork in its journey to the destination. The network
access
layer appends its own header, creating a packet, or frame. The packet is
transmitted
across the subnetwork to router J. The packet header contains the
information
that the subnetwork needs to transfer the data across the subnetwork.
Examples
of items that may be contained in this header include
Destination
subnetwork address. The
subnetwork must know to which
attached
device the packet is to be delivered.
Facilities
requests. The
network access protocol might request the use of certain
subnetwork
facilities, such as priority.
At
router J, the packet header is stripped off and the IP header examined. On
the
basis of the destination-address information in the IP header, the IP module in
the
router directs the datagram out across subnetwork 2 to B; to do this, the
datagram
is
again augmented with a network access header.
When
the data are received at B, the reverse process occurs. At each layer, the
corresponding
header is removed, and the remainder is passed on to the next higher
layer
until the original user data are delivered to the destination process.
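The nesting of headers just described can be shown with a short sketch. The headers below are simplified dictionaries standing in for the real TCP, IP, and frame formats; the field names and the toy checksum are illustrative only.

```python
# Sketch of the encapsulation described above: user data gains a TCP header,
# then an IP header, then a network-access (frame) header on the way down.

def tcp_encapsulate(data: bytes, dst_port: int, seq: int) -> dict:
    return {"tcp": {"dst_port": dst_port, "seq": seq,
                    "checksum": sum(data) & 0xFFFF},      # toy checksum only
            "payload": data}

def ip_encapsulate(segment: dict, dst_host: str) -> dict:
    return {"ip": {"dst_host": dst_host}, "payload": segment}

def frame_encapsulate(datagram: dict, subnet_dest: str) -> dict:
    return {"frame": {"subnet_dest": subnet_dest}, "payload": datagram}

# process at host A, port 1, sending to host B, port 2, via router J
segment  = tcp_encapsulate(b"message", dst_port=2, seq=0)
datagram = ip_encapsulate(segment, dst_host="B")
frame    = frame_encapsulate(datagram, subnet_dest="router J")

# at the destination the headers are stripped in the reverse order
assert frame["payload"]["payload"]["payload"] == b"message"
```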
Protocol
Interfaces
Each
layer in the TCP/IP protocol suite interacts with its immediate adjacent
layers.
At
the source, the process layer makes use of the services of the host-to-host
layer
and provides data down to that layer. A similar relationship exists at the
interface
of
the host-to-host and internet layers and at the interface of the internet and
network
access layers. At the destination, each layer delivers data up to the
next-higher
layer.
This
use of each individual layer is not required by the architecture. As Figure
15.12
suggests, it is possible to develop applications that directly invoke the
services
of
any one of the layers. Most applications require a reliable end-to-end protocol
and
thus make use of TCP; some special-purpose applications, however, do not
need
such services, for example, the simple network management protocol (SNMP)
that
uses an alternative host-to-host protocol known as the user datagram
protocol
(UDP);
others may make use of IP directly. Applications that do not involve
internetworking
and
that do not need TCP have been developed to invoke the network
access
layer directly.
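This choice of transport is visible at the socket interface: an application asks for the connection-oriented TCP service or the connectionless UDP service. The sketch below uses Python's standard socket module on the local loopback interface; the port numbers are arbitrary placeholders.

```python
# SOCK_DGRAM gives UDP (connectionless, as SNMP uses it); SOCK_STREAM gives TCP.
import socket

# UDP: each send is an independent datagram
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"request", ("127.0.0.1", 9999))
data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

# TCP: a connection is established before any data flows
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 10000))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 10000))
conn, _ = server.accept()
client.sendall(b"request over a connection")
print(conn.recv(1024))
conn.close(); client.close(); server.close()
receiver.close(); sender.close()
```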
The
Applications
Figure
15.12 shows the position of some of the key protocols commonly implemented
as
part of the TCP/IP protocol suite. Most of these protocols are discussed
in
the remainder of Part Four. In this section, we briefly highlight three
protocols
that
have traditionally been considered mandatory elements of TCP/IP, and which
were
designated as military standards, along with TCP and IP, by DOD.
The
simple mail transfer protocol (SMTP) provides a basic electronic mail
facility.
It provides a mechanism for transferring messages among separate hosts.
Features
of SMTP include mailing lists, return receipts, and forwarding. The SMTP
protocol
does not specify the way in which messages are to be created; some local
editing
or native electronic mail facility is required. Once a message is created,
SMTP
accepts the message and makes use of TCP to send it to an SMTP module on
another
host. The target SMTP module will make use of a local electronic mail
package
to store the incoming message in a user's mailbox.
The
file transfer protocol (FTP) is used to send files from one system to
another
under user command. Both text and binary files are accommodated, and
the
protocol provides features for controlling user access. When a user requests a
file
transfer, FTP sets up a TCP connection to the target system for the exchange of
control
messages; these allow user ID and password to be transmitted and allow the
user
to specify the file and file actions desired. Once a file transfer is approved,
a
second
TCP connection is set up for the data transfer. The file is transferred over
the
data connection, without the overhead of any headers or control information at
the
application level. When the transfer is complete, the control connection is
used
to
signal the completion and to accept new file transfer commands.
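The two-connection structure is hidden by most FTP client libraries. As a rough illustration using Python's standard ftplib module, the login and the RETR command travel on the control connection, while the library opens the separate data connection for the file contents; the host name, credentials, and file name below are placeholders.

```python
# FTP through Python's standard ftplib: commands and replies use the control
# connection (TCP port 21); file contents travel over a separate data connection.
from ftplib import FTP

ftp = FTP("ftp.example.com")                  # placeholder host; opens the control connection
ftp.login(user="anonymous", passwd="guest@example.com")

with open("report.txt", "wb") as out:
    # RETR goes over the control connection; the data connection carries the file
    ftp.retrbinary("RETR report.txt", out.write)

ftp.quit()                                    # completion signaled on the control connection
```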
TELNET
provides a remote log-on capability, which enables a user at a terminal
or
personal computer to log on to a remote computer and function as if
directly
connected to that computer. The protocol was designed to work with simple
scroll-mode
terminals. TELNET is actually implemented in two modules. User
TELNET
interacts with the terminal I/O module to communicate with a local terminal;
it
converts the characteristics of real terminals to the network standard and
vice
versa. Server TELNET interacts with an application, acting as a surrogate
terminal
handler
so that remote terminals appear as local to the application. Terminal
traffic between User and Server TELNET
is carried on a TCP connection.