Networking Basics

Computer networking has become an integral part of business today. Individuals, professionals and academics have also learned to rely on computer networks for capabilities such as electronic mail and access to remote databases for research and communication purposes. Networking has thus become an increasingly pervasive, worldwide reality because it is fast, efficient, reliable and effective. Just how all this information is transmitted, stored, categorized and accessed remains a mystery to the average computer user.

This tutorial will explain the basics of some of the most popular technologies used in
networking, and will include the following:

 Types of Networks - including LANs, WANs and WLANs
 The Internet and Beyond - the Internet and its contributions to intranets and extranets
 Types of LAN Technology - including Ethernet, Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, ATM, PoE and Token Ring
 Networking and Ethernet Basics - including standard code, media, topologies, collisions and CSMA/CD
 Ethernet Products - including transceivers, network interface cards, hubs and repeaters

Types of Networks

In describing the basics of networking technology, it will be helpful to explain the different
types of networks in use.

Local Area Networks (LANs)

A network is any collection of independent computers that exchange information with each
other over a shared communication medium. Local Area Networks or LANs are usually
confined to a limited geographic area, such as a single building or a college campus. LANs can
be small, linking as few as three computers, but can often link hundreds of computers used by
thousands of people. The development of standard networking protocols and media has
resulted in worldwide proliferation of LANs throughout business and educational organizations.

Wide Area Networks (WANs)

Often elements of a network are widely separated physically. Wide area networking combines
multiple LANs that are geographically separate. This is accomplished by connecting the several
LANs with dedicated leased lines such as a T1 or a T3, by dial-up phone lines (both
synchronous and asynchronous), by satellite links and by data packet carrier services. WANs
can be as simple as a modem and a remote access server for employees to dial into, or as
complex as hundreds of branch offices globally linked. Special routing protocols and
filters minimize the expense of sending data over vast distances.
Wireless Local Area Networks (WLANs)

Wireless LANs, or WLANs, use radio frequency (RF) technology to transmit and receive data
over the air. This minimizes the need for wired connections. WLANs give users mobility as they
allow connection to a local area network without having to be physically connected by a cable.
This freedom means users can access shared resources without looking for a place to plug in
cables, provided that their terminals are mobile and within the designated network coverage
area. With mobility, WLANs give flexibility and increased productivity, appealing to both
entrepreneurs and to home users. WLANs may also enable network administrators to connect
devices that may be physically difficult to reach with a cable.

The Institute of Electrical and Electronics Engineers (IEEE) developed the 802.11 specification
for wireless LAN technology. 802.11 specifies the over-the-air interface between a wireless client
and a base station, or between two wireless clients. The 802.11 WLAN standards also include
security protocols that were developed to provide the same level of security as that of a wired
LAN.
The first of these protocols is Wired Equivalent Privacy (WEP). WEP provides security by
encrypting data sent over radio waves from end point to end point.

The second WLAN security protocol is Wi-Fi Protected Access (WPA). WPA was developed as an upgrade to the security features of WEP. It works with existing WEP-enabled products but provides two key improvements: improved data encryption through the Temporal Key Integrity Protocol (TKIP), which scrambles the keys using a hashing algorithm and adds an integrity check to ensure that keys have not been tampered with, and user authentication through the Extensible Authentication Protocol (EAP).

Wireless Protocols

Specification             Data Rate                                        Modulation Scheme                                 Security
802.11                    1 or 2 Mbps in the 2.4 GHz band                  FHSS, DSSS                                        WEP and WPA
802.11a                   54 Mbps in the 5 GHz band                        OFDM                                              WEP and WPA
802.11b/High Rate/Wi-Fi   11 Mbps (with a fallback to 5.5, 2 and 1 Mbps)   DSSS with CCK                                     WEP and WPA
                          in the 2.4 GHz band
802.11g/Wi-Fi             54 Mbps in the 2.4 GHz band                      OFDM above 20 Mbps, DSSS with CCK below 20 Mbps   WEP and WPA

The Internet and Beyond

More than just a technology, the Internet has become a way of life for many people, and it has
spurred a revolution of sorts for both public and private sharing of information. The most
popular source of information about almost anything, the Internet is used daily by technical
and non-technical users alike.
The Internet: The Largest Network of All

With the meteoric rise in demand for connectivity, the Internet has become a major
communications highway for millions of users. It is a decentralized system of linked networks
that are worldwide in scope. It facilitates data communication services such as remote log-in,
file transfer, electronic mail, the World Wide Web and newsgroups. It consists of independent
hosts of computers that can designate which Internet services to use and which of their local
services to make available to the global community.

Initially restricted to military and academic institutions, the Internet now operates on a three-
level hierarchy composed of backbone networks, mid-level networks and stub networks. It is a
full-fledged conduit for any and all forms of information and commerce. Internet websites now
provide personal, educational, political and economic resources to virtually any point on the
planet.

Intranet: A Secure Internet-like Network for Organizations

With advancements in browser-based software for the Internet, many private organizations
have implemented intranets. An intranet is a private network utilizing Internet-type tools, but
available only within that organization. For large organizations, an intranet provides easy
access to corporate information for designated employees.

Extranet: A Secure Means for Sharing Information with Partners

While an intranet is used to disseminate confidential information within a corporation, an extranet is commonly used by companies to share data in a secure fashion with their business partners. Internet-type tools are used by content providers to update the extranet. Encryption and user authentication means are provided to protect the information, and to ensure that only designated people with the proper access privileges are allowed to view it.

Types of LAN Technology

Ethernet

Ethernet is the most popular physical layer LAN technology in use today. It defines the
number of conductors that are required for a connection, the performance thresholds that can
be expected, and provides the framework for data transmission. A standard Ethernet network
can transmit data at a rate up to 10 Megabits per second (10 Mbps). Other LAN types include
Token Ring, Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, Fiber Distributed Data
Interface (FDDI), Asynchronous Transfer Mode (ATM) and LocalTalk.

Ethernet is popular because it strikes a good balance between speed, cost and ease of
installation. These benefits, combined with wide acceptance in the computer marketplace and
the ability to support virtually all popular network protocols, make Ethernet an ideal
networking technology for most computer users today.
The Institute of Electrical and Electronics Engineers developed an Ethernet standard known as
IEEE Standard 802.3. This standard defines rules for configuring an Ethernet network and also
specifies how the elements in an Ethernet network interact with one another. By adhering to
the IEEE standard, network equipment and network protocols can communicate efficiently.

Fast Ethernet

The Fast Ethernet standard (IEEE 802.3u) has been established for Ethernet networks that
need higher transmission speeds. This standard raises the Ethernet speed limit from 10 Mbps
to 100 Mbps with only minimal changes to the existing cable structure. Fast Ethernet provides
faster throughput for video, multimedia, graphics and Internet applications, along with
stronger error detection and correction.

There are three types of Fast Ethernet: 100BASE-TX for use with level 5 UTP cable; 100BASE-
FX for use with fiber-optic cable; and 100BASE-T4 which utilizes an extra two wires for use
with level 3 UTP cable. The 100BASE-TX standard has become the most popular due to its
close compatibility with the 10BASE-T Ethernet standard.

Network managers who want to incorporate Fast Ethernet into an existing configuration must make several decisions: how many users in each site on the network need the higher throughput, which segments of the backbone need to be reconfigured specifically for 100BASE-T, and what hardware is necessary to connect the 100BASE-T segments with existing 10BASE-T segments. Gigabit Ethernet offers a migration path beyond Fast Ethernet so that the next generation of networks will support even higher data transfer speeds.

Gigabit Ethernet

Gigabit Ethernet was developed to meet the need for faster communication networks with
applications such as multimedia and Voice over IP (VoIP). Also known as "gigabit-Ethernet-
over-copper" or 1000Base-T, GigE is a version of Ethernet that runs at speeds 10 times faster
than 100Base-T. It is defined in the IEEE 802.3 standard and is currently used as an
enterprise backbone. Existing Ethernet LANs with 10 and 100 Mbps cards can feed into a
Gigabit Ethernet backbone to interconnect high performance switches, routers and servers.

From the data link layer of the OSI model upward, the look and implementation of Gigabit
Ethernet is identical to that of Ethernet. The most important differences between Gigabit
Ethernet and Fast Ethernet include the additional support of full duplex operation in the MAC
layer and the data rates.

10 Gigabit Ethernet
10 Gigabit Ethernet is the fastest and most recent of the Ethernet standards. IEEE 802.3ae
defines a version of Ethernet with a nominal data rate of 10 Gbit/s, making it 10 times faster
than Gigabit Ethernet.

Unlike other Ethernet systems, 10 Gigabit Ethernet is based entirely on the use of optical fiber
connections. This developing standard is moving away from a LAN design that broadcasts to
all nodes, toward a system which includes some elements of wide area routing. As it is still
very new, which of the standards will gain commercial acceptance has yet to be determined.

Asynchronous Transfer Mode (ATM)

ATM is a cell-based fast-packet communication technique that can support data-transfer rates
from sub-T1 speeds to 10 Gbps. ATM achieves its high speeds in part by transmitting data in
fixed-size cells and dispensing with error-correction protocols. It relies on the inherent
integrity of digital lines to ensure data integrity.

ATM can be integrated into an existing network as needed without having to update the entire network. Its fixed-length cell-relay operation offers more predictable performance than variable-length frames. ATM networks are extremely versatile: an ATM network can connect points in a building, or across the country, and still be treated as a single network.

Power over Ethernet (PoE)

PoE is a solution in which an electrical current is run to networking hardware over the Ethernet
Category 5 cable or higher. This solution does not require an extra AC power cord at the
product location. This minimizes the amount of cable needed as well as eliminates the
difficulties and cost of installing extra outlets.

LAN Technology Specifications

Name                  IEEE Standard   Data Rate   Media Type           Maximum Distance
Ethernet              802.3           10 Mbps     10Base-T             100 meters
Fast Ethernet/        802.3u          100 Mbps    100Base-TX           100 meters
100Base-T                                         100Base-FX           2000 meters
Gigabit Ethernet/     802.3z          1000 Mbps   1000Base-T           100 meters
GigE                                              1000Base-SX          275/550 meters
                                                  1000Base-LX          550/5000 meters
10 Gigabit Ethernet   802.3ae         10 Gbps     10GBase-SR           300 meters
                                                  10GBase-LX4          300 m MMF / 10 km SMF
                                                  10GBase-LR/ER        10 km / 40 km
                                                  10GBase-SW/LW/EW     300 m / 10 km / 40 km

Token Ring
Token Ring is another form of network configuration. It differs from Ethernet in that all
messages are transferred in one direction along the ring at all times. Token Ring networks
sequentially pass a “token” to each connected device. When the token arrives at a particular
computer (or device), the recipient is allowed to transmit data onto the network. Since only
one device may be transmitting at any given time, no data collisions occur. Access to the
network is guaranteed, and time-sensitive applications can be supported. However, these
benefits come at a price. Component costs are usually higher, and the networks themselves
are considered to be more complex and difficult to implement. Various PC vendors have been
proponents of Token Ring networks.
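
To make the token-passing idea concrete, the Python sketch below simulates a small ring of stations passing a token; only the station holding the token transmits, so frames never collide. It is purely illustrative, and the station names and frames are made up.

```python
# Illustrative sketch (not a real Token Ring implementation): a toy simulation
# showing why token passing avoids collisions -- only the station holding the
# token may transmit, and the token then moves to the next station in the ring.

from collections import deque

def simulate_token_ring(stations, pending_frames, rounds=2):
    """stations: list of station names in ring order.
    pending_frames: dict mapping station -> list of frames it wants to send."""
    ring = deque(stations)
    for _ in range(rounds):
        for _ in range(len(ring)):
            holder = ring[0]                 # station currently holding the token
            if pending_frames.get(holder):
                frame = pending_frames[holder].pop(0)
                print(f"{holder} holds the token and transmits: {frame}")
            else:
                print(f"{holder} holds the token but has nothing to send")
            ring.rotate(-1)                  # pass the token to the next station

if __name__ == "__main__":
    simulate_token_ring(
        ["A", "B", "C", "D"],
        {"A": ["frame-1"], "C": ["frame-2", "frame-3"]},
    )
```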

Networking and Ethernet Basics

Protocols

After a physical connection has been established, network protocols define the standards that
allow computers to communicate. A protocol establishes the rules and encoding specifications
for sending data. This defines how computers identify one another on a network, the form that
the data should take in transit, and how this information is processed once it reaches its final
destination. Protocols also define procedures for determining the type of error checking that
will be used, the data compression method, if one is needed, how the sending device will
indicate that it has finished sending a message, how the receiving device will indicate that it
has received a message, and the handling of lost or damaged transmissions or "packets".

The main types of network protocols in use today are: TCP/IP (for UNIX, Windows NT,
Windows 95 and other platforms); IPX (for Novell NetWare); DECnet (for networking Digital
Equipment Corp. computers); AppleTalk (for Macintosh computers), and NetBIOS/NetBEUI (for
LAN Manager and Windows NT networks).

Although each network protocol is different, they all share the same physical cabling. This
common method of accessing the physical network allows multiple protocols to peacefully
coexist over the network media, and allows the builder of a network to use common hardware
for a variety of protocols. This concept is known as "protocol independence," which means that
devices which are compatible at the physical and data link layers allow the user to run many
different protocols over the same medium.

The Open System Interconnection Model

The Open System Interconnection (OSI) model specifies how dissimilar computing devices
such as Network Interface Cards (NICs), bridges and routers exchange data over a network by
offering a networking framework for implementing protocols in seven layers. Beginning at the
application layer, control is passed from one layer to the next. The following describes the
seven layers as defined by the OSI model, shown in the order they occur whenever a user
transmits information.
Layer 7: Application
This layer supports application and end-user processes. Within this layer,
communication partners are identified, quality of service is considered, and
user authentication and privacy are addressed. File transfers, email, Telnet
and FTP applications are all provided within this layer.

Layer 6: Presentation (Syntax)
Within this layer, information is translated back and forth between application
and network formats. This translation transforms the information into a form
that both the application layer and the network can recognize, regardless of
encryption or formatting differences.

Layer 5: Session
Within this layer, connections between applications are made, managed and
terminated as needed to allow for data exchanges between applications at
each end of a dialogue.

Layer 4: Transport
Complete data transfer is ensured as information is transferred transparently
between systems in this layer. The transport layer also assures appropriate
flow control and end-to-end error recovery.

Layer 3: Network
Using switching and routing technologies, this layer is responsible for creating
virtual circuits to transmit information from node to node. Other functions
include routing, forwarding, addressing, internetworking, error and congestion
control, and packet sequencing.

Layer 2: Data Link
Within this layer, information in data packets is encoded into bits and decoded
back from them. This layer handles flow control, frame synchronization and
errors from the physical layer, utilizing transmission protocol knowledge and
management. It consists of two sublayers: the Media Access Control (MAC) layer,
which controls the way networked computers gain access to the data and transmit
it, and the Logical Link Control (LLC) layer, which controls frame
synchronization, flow control and error checking.

Layer 1: Physical
This layer enables hardware to send and receive data over a carrier such as
cabling, a card or other physical means. It conveys the bitstream through the
network at the electrical and mechanical level. Fast Ethernet, RS232, and ATM
are all protocols with physical layer components.

This order is then reversed as information is received, so that the physical layer is the
first and application layer is the final layer that information passes through.
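
To make the layering concrete, here is a minimal Python sketch of that downward and upward pass: each layer wraps the data in its own header on the way down and strips it on the way up. The header contents are invented placeholders; this illustrates the encapsulation idea only, not any real protocol headers.

```python
# A minimal sketch of OSI-style encapsulation: each layer adds its own header on
# the way down the stack and removes it on the way back up. The header contents
# here are invented placeholders for illustration only.

LAYERS = ["application", "presentation", "session", "transport",
          "network", "data link", "physical"]

def send(payload: str) -> str:
    """Wrap the payload with one (fake) header per layer, top to bottom."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload          # what would be put on the wire

def receive(wire_data: str) -> str:
    """Strip the headers in reverse order, bottom to top."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert wire_data.startswith(prefix), f"missing {layer} header"
        wire_data = wire_data[len(prefix):]
    return wire_data        # original payload delivered to the application

if __name__ == "__main__":
    frame = send("hello")
    print(frame)             # [physical][data link]...[application]hello
    print(receive(frame))    # hello
```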

Standard Ethernet Code

In order to understand standard Ethernet code, one must understand what each digit means.
Following is a guide:
Guide to Ethernet Coding

10 at the beginning means the network operates at 10Mbps.

BASE means the type of signaling used is baseband.

2 or 5 at the end indicates the approximate maximum segment length in hundreds of meters.

T at the end stands for twisted-pair cable.

X at the end stands for full duplex-capable cable.

FL at the end stands for fiber optic cable.

For example: 100BASE-TX indicates a Fast Ethernet connection (100 Mbps) that uses a
twisted pair cable capable of full-duplex transmissions.
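
The guide above can be expressed as a small lookup. The Python sketch below is a rough, non-exhaustive parser for the designations used in this tutorial; the wording of each description is ours, not part of any standard.

```python
# A rough sketch of the naming guide above, applied in code. It only covers the
# designations mentioned in this tutorial and is not an exhaustive parser.

import re

SUFFIXES = {
    "2":  "thin coax, maximum segment roughly 200 meters",
    "5":  "thick coax, maximum segment 500 meters",
    "T":  "twisted-pair cable",
    "TX": "twisted-pair cable, full-duplex capable",
    "FX": "fiber-optic cable (Fast Ethernet)",
    "FL": "fiber-optic cable",
}

def describe(designation: str) -> str:
    m = re.fullmatch(r"(\d+)BASE-?(\w+)", designation.upper())
    if not m:
        return f"{designation}: not recognized by this simple guide"
    speed, suffix = m.groups()
    medium = SUFFIXES.get(suffix, "unknown medium")
    return f"{designation}: {speed} Mbps, baseband signaling, {medium}"

if __name__ == "__main__":
    for name in ["10BASE-T", "10BASE2", "100BASE-TX", "10BASE-FL"]:
        print(describe(name))
```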

Media

An important part of designing and installing an Ethernet is selecting the appropriate Ethernet
medium. There are four major types of media in use today: Thickwire for 10BASE5 networks;
thin coax for 10BASE2 networks; unshielded twisted pair (UTP) for 10BASE-T networks; and
fiber optic for 10BASE-FL or Fiber-Optic Inter-Repeater Link (FOIRL) networks. This wide
variety of media reflects the evolution of Ethernet and also points to the technology's
flexibility. Thickwire was one of the first cabling systems used in Ethernet, but it was
expensive and difficult to use. This evolved to thin coax, which is easier to work with and less
expensive. It is important to note that each type of Ethernet (Fast Ethernet, Gigabit Ethernet,
10 Gigabit Ethernet) has its own preferred media types.

The most popular wiring schemes are 10BASE-T and 100BASE-TX, which use unshielded
twisted pair (UTP) cable. This is similar to telephone cable and comes in a variety of grades,
with each higher grade offering better performance. Level 5 cable is the highest, most
expensive grade, offering support for transmission rates of up to 100 Mbps. Level 4 and level
3 cable are less expensive, but cannot support the same data throughput speeds; level 4 cable
can support speeds of up to 20 Mbps; level 3 up to 16 Mbps. The 100BASE-T4 standard allows
for support of 100 Mbps Ethernet over level 3 cables, but at the expense of adding another
pair of wires (4 pair instead of the 2 pair used for 10BASE-T). For most users, this is an
awkward scheme and therefore 100BASE-T4 has seen little popularity. Level 2 and level 1
cables are not used in the design of 10BASE-T networks.

For specialized applications, fiber-optic, or 10BASE-FL, Ethernet segments are popular. Fiber-
optic cable is more expensive, but it is invaluable in situations where electronic emissions and
environmental hazards are a concern. Fiber-optic cable is often used in inter-building
applications to insulate networking equipment from electrical damage caused by lightning.
Because it does not conduct electricity, fiber-optic cable can also be useful in areas where
heavy electromagnetic interference is present, such as on a factory floor. The Ethernet
standard allows for fiber-optic cable segments up to two kilometers long, making fiber-optic
Ethernet perfect for connecting nodes and buildings that are
otherwise not reachable with copper media.

Cable Grade Capabilities

Name     Cable Makeup                                                    Frequency Support   Data Rate         Network Compatibility
Cat-5    4 twisted pairs of copper wire, terminated by RJ45 connectors   100 MHz             Up to 1000 Mbps   ATM, Token Ring, 1000Base-T, 100Base-TX, 10Base-T
Cat-5e   4 twisted pairs of copper wire, terminated by RJ45 connectors   100 MHz             Up to 1000 Mbps   10Base-T, 100Base-TX, 1000Base-T
Cat-6    4 twisted pairs of copper wire, terminated by RJ45 connectors   250 MHz             1000 Mbps         10Base-T, 100Base-TX, 1000Base-T

Topologies

Network topology is the geometric arrangement of nodes and cable links in a LAN. Two general configurations are used, bus and star. These two topologies define how nodes are connected to one another in a communication network. A node is an active device connected to the network, such as a computer or a printer. A node can also be a piece of networking equipment such as a hub, switch or a router.

A bus topology consists of nodes linked together in a series with each node connected to a
long cable or bus. Many nodes can tap into the bus and begin communication with all other
nodes on that cable segment. A break anywhere in the cable will usually cause the entire
segment to be inoperable until the break is repaired. Examples of bus topology include
10BASE2 and 10BASE5.

General Topology Configurations

10BASE-T Ethernet and Fast Ethernet use a star topology where access is controlled by a central computer. Generally a computer is located at one end of the segment, and the other end is terminated in a central location with a hub or a switch. Because UTP is often run in conjunction with telephone cabling, this central location can be a telephone closet or other area where it is convenient to connect the UTP segment to a backbone. The primary advantage of this type of network is reliability: if one of these point-to-point segments has a break, it will only affect the two nodes on that link. Other computer users on the network continue to operate as if that segment were non-existent.

Collisions
Ethernet is a shared medium, so there are rules for sending packets of data to avoid conflicts
and to protect data integrity. Nodes determine when the network is available for sending
packets. It is possible that two or more nodes at different locations will attempt to send data
at the same time. When this happens, a packet collision occurs.

Minimizing collisions is a crucial element in the design and operation of networks. Increased
collisions are often the result of too many users on the network. This leads to competition for
network bandwidth and can slow the performance of the network from the user's point of
view. Segmenting the network is one way of reducing an overcrowded network, i.e., by
dividing it into different pieces logically joined together with a bridge or switch.

CSMA/CD

In order to manage collisions Ethernet uses a protocol called Carrier Sense Multiple
Access/Collision Detection (CSMA/CD). CSMA/CD is a type of contention protocol that defines
how to respond when a collision is detected, or when two devices attempt to transmit
packets simultaneously. Ethernet allows each device to send messages at any time without
having to wait for network permission; thus, there is a high possibility that devices may try to
send messages at the same time.

After detecting a collision, each device that was transmitting a packet delays a random
amount of time before re-transmitting the packet. If another collision occurs, the device waits
twice as long before trying to re-transmit.
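
In standard Ethernet this doubling takes the form of truncated binary exponential backoff: after each successive collision the device picks a random delay from a window of slot times that doubles in size (up to a cap), and it gives up after 16 attempts. The Python sketch below is a simplified model of that behavior, not an implementation of real Ethernet hardware.

```python
# A simplified model of CSMA/CD retransmission backoff. Real Ethernet hardware
# uses truncated binary exponential backoff measured in 512-bit slot times; this
# sketch only illustrates how the random waiting window doubles after each
# successive collision.

import random

SLOT_TIME_US = 51.2      # slot time for 10 Mbps Ethernet, in microseconds
MAX_ATTEMPTS = 16        # transmission is abandoned after 16 attempts
BACKOFF_LIMIT = 10       # the doubling window is capped at 2**10 slots

def backoff_delay(collision_count: int) -> float:
    """Random delay (in microseconds) after the Nth collision on a frame."""
    k = min(collision_count, BACKOFF_LIMIT)
    slots = random.randint(0, 2 ** k - 1)   # pick a random slot in the window
    return slots * SLOT_TIME_US

def transmit(frame, send_attempt):
    """send_attempt(frame) -> True on success, False if a collision occurred."""
    for collision_count in range(1, MAX_ATTEMPTS + 1):
        if send_attempt(frame):
            return True
        delay = backoff_delay(collision_count)
        print(f"collision #{collision_count}: waiting {delay:.1f} us before retry")
    return False   # too many collisions; give up and report an error
```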

Ethernet Products

The standards and technology just discussed will help define the specific products that
network managers use to build Ethernet networks. The following presents the key products
needed to build an Ethernet LAN.

Transceivers

Transceivers are also referred to as Medium Access Units (MAUs). They are used to connect
nodes to the various Ethernet media. Most computers and network interface cards contain a
built-in 10BASE-T or 10BASE2 transceiver which allows them to be connected directly to
Ethernet without the need for an external transceiver.

Many Ethernet devices provide an attachment unit interface (AUI) connector to allow the user
to connect to any type of medium via an external transceiver. The AUI connector consists of a
15-pin D-shell type connector, female on the computer side, male on the transceiver side.
For Fast Ethernet networks, a new interface called the MII (Media Independent Interface) was
developed to offer a flexible way to support 100 Mbps connections. The MII is a popular way
to connect 100BASE-FX links to copper-based Fast Ethernet devices.

Network Interface Cards

Network Interface Cards, commonly referred to as NICs, are used to connect a PC to a network. The NIC provides a physical connection between the networking cable and the computer's internal bus. Different computers have different bus architectures: PCI bus slots are most commonly found on 486/Pentium PCs, and ISA expansion slots are commonly found on 386 and older PCs. NICs come in three basic varieties: 8-bit, 16-bit and 32-bit. The larger the number of bits that can be transferred to the NIC, the faster the NIC can transfer data to the network cable. Most NICs are designed for a particular type of network, protocol and medium, though some can serve multiple networks.

Many NIC adapters comply with plug-and-play specifications. On these systems, NICs are
automatically configured without user intervention, while on non-plug-and-play systems,
configuration is done manually through a set-up program and/or DIP switches.

Cards are available to support almost all networking standards. Fast Ethernet NICs are often 10/100 capable, and will automatically set themselves to the appropriate speed. Gigabit Ethernet NICs are often 10/100/1000 capable, auto-negotiating to match the connected Ethernet speed. Full duplex networking is another option, where a dedicated connection to a switch allows a NIC to operate at twice the speed.

Hubs/Repeaters

Hubs/repeaters are used to connect two or more Ethernet segments of any type of
medium. In larger designs, signal quality begins to deteriorate as segments exceed their
maximum length. Hubs provide the signal amplification required to allow a segment to be
extended a greater distance. A hub repeats any incoming signal to all ports.

Ethernet hubs are necessary in star topologies such as 10BASE-T. A multi-port twisted pair
hub allows several point-to-point segments to be joined into one network. One end of the
point-to-point link is attached to the hub and the other is attached to the computer. If the hub
is attached to a backbone, then all computers at the end of the twisted pair segments can
communicate with all the hosts on the backbone. The number and type of hubs in any one
collision domain is limited by the Ethernet rules. These repeater rules are discussed in more
detail later.

A very important fact to note about hubs is that they only allow users to share Ethernet. A
network of hubs/repeaters is termed a "shared Ethernet," meaning that all members of the
network are contending for transmission of data onto a single network (collision domain). A
hub/repeater propagates all electrical signals including the invalid ones. Therefore, if a
collision or electrical interference occurs on one segment, repeaters make it appear on all
others as well. This means that individual members of a shared network will only get a
percentage of the available network bandwidth.

Basically, the number and type of hubs in any one collision domain for 10Mbps Ethernet
is limited by the following rules:

Network Type Max Nodes Per Segment Max Distance Per Segment

10BASE-T 2 100m

10BASE-FL 2 2000m

Network Switching

Switches can be a valuable asset to networking. Overall, they can increase the capacity and
speed of your network. However, switching should not be seen as a cure-all for network
issues. Before incorporating network switching, you must first ask yourself two important
questions: First, how can you tell if your network will benefit from switching? Second, how do
you add switches to your network design to provide the most benefit?

This tutorial is written to answer these questions. Along the way, we'll describe how switches
work, and how they can both harm and benefit your networking strategy. We’ll also discuss
different network types, so you can profile your network and gauge the potential benefit of
network switching for your environment.

What is a Switch?

Switches occupy the same place in the network as hubs. Unlike hubs, switches examine each
packet and process it accordingly rather than simply repeating the signal to all ports. Switches
map the Ethernet addresses of the nodes residing on each network segment and then allow
only the necessary traffic to pass through the switch. When a packet is received by the switch,
the switch examines the destination and source hardware addresses and compares them to a
table of network segments and addresses. If the segments are the same, the packet is
dropped or "filtered"; if the segments are different, then the packet is "forwarded" to the
proper segment. Additionally, switches prevent bad or misaligned packets from spreading by
not forwarding them.

Filtering packets and regenerating forwarded packets enables switching technology to split a
network into separate collision domains. The regeneration of packets allows for greater
distances and more nodes to be used in the total network design, and dramatically lowers the
overall collision rates. In switched networks, each segment is an independent collision domain.
This also allows for parallelism, meaning up to one-half of the computers connected to a
switch can send data at the same time. In shared networks all nodes reside in a single shared
collision domain.

Easy to install, most switches are self learning. They determine the Ethernet addresses in use
on each segment, building a table as packets are passed through the switch. This "plug and
play" element makes switches an attractive alternative to hubs.

Switches can connect different network types (such as Ethernet and Fast Ethernet) or
networks of the same type. Many switches today offer high-speed links, like Fast Ethernet,
which can be used to link the switches together or to give added bandwidth to important
servers that get a lot of traffic. A network composed of a number of switches linked together
via these fast uplinks is called a "collapsed backbone" network.

Dedicating ports on switches to individual nodes is another way to speed access for critical
computers. Servers and power users can take advantage of a full segment for one node, so
some networks connect high traffic nodes to a dedicated switch port.

Full duplex is another method to increase bandwidth to dedicated workstations or servers. To


use full duplex, both network interface cards used in the server or workstation and the switch
must support full duplex operation. Full duplex doubles the potential bandwidth on that link.

Network Congestion

As more users are added to a shared network, or as applications requiring more data are added, performance deteriorates. This is because all users on a shared network are competitors for the Ethernet bus. A moderately loaded 10 Mbps Ethernet network is able to sustain utilization of 35 percent and throughput in the neighborhood of 2.5 Mbps after accounting for packet overhead, inter-packet gaps and collisions. A moderately loaded Fast Ethernet or Gigabit Ethernet network shares 25 Mbps or 250 Mbps of real data throughput in the same circumstances. With shared Ethernet and Fast Ethernet, the likelihood of collisions increases as more nodes and/or more traffic are added to the shared collision domain.

Ethernet itself is a shared medium, so there are rules for sending packets to avoid conflicts and
protect data integrity. Nodes on an Ethernet network send packets when they determine the
network is not in use. It is possible that two nodes at different locations could try to send data
at the same time. When both PCs are transferring a packet to the network at the same time, a
collision will result. Both packets are retransmitted, adding to the traffic problem. Minimizing
collisions is a crucial element in the design and operation of networks. Increased collisions are
often the result of too many users or too much traffic on the network, which results in a great
deal of contention for network bandwidth. This can slow the performance of the network from
the user’s point of view. Segmenting, where a network is divided into different pieces joined
together logically with switches or routers, reduces congestion in an overcrowded network by
eliminating the shared collision domain.

Collision rates measure the percentage of packets that are collisions. Some collisions are
inevitable, with collision rates of less than 10 percent being common in well-running networks.
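
As a rough illustration, the Python sketch below computes utilization and collision rate from interface counters and applies the rules of thumb used in this tutorial (roughly 35 percent utilization and 10 percent collisions as warning levels). The counter values shown are hypothetical; real numbers would come from your own monitoring tools.

```python
# A back-of-the-envelope health check using the rules of thumb cited in this
# tutorial. The counter values are hypothetical examples.

def utilization_percent(bits_sent: int, interval_seconds: float,
                        link_speed_bps: int = 10_000_000) -> float:
    """Traffic carried as a percentage of the link's theoretical capacity."""
    return 100.0 * bits_sent / (link_speed_bps * interval_seconds)

def collision_rate_percent(collisions: int, packets_sent: int) -> float:
    """Collisions as a percentage of packets transmitted."""
    return 100.0 * collisions / packets_sent if packets_sent else 0.0

if __name__ == "__main__":
    util = utilization_percent(bits_sent=2_100_000_000, interval_seconds=600)
    coll = collision_rate_percent(collisions=1_800, packets_sent=14_000)
    print(f"utilization: {util:.1f}%   collision rate: {coll:.1f}%")
    if util > 35 or coll > 10:
        print("segment is a good candidate for switching or further segmentation")
```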

The Factors Affecting Network Efficiency

 Amount of traffic
 Number of nodes
 Size of packets

 Network diameter

Measuring Network Efficiency

 Average to peak load deviation
 Collision rate
 Utilization rate

Utilization rate is another widely accessible statistic about the health of a network. This statistic is available in Novell's console monitor and the Windows NT Performance Monitor, as well as in optional LAN analysis software. Utilization in an average network above 35 percent indicates potential problems. This 35 percent utilization is near optimum, but some networks experience higher or lower utilization optimums due to factors such as packet size and peak load deviation.

A switch is said to work at "wire speed" if it has enough processing power to handle full Ethernet speed at minimum packet sizes. Most switches on the market are well ahead of network traffic capabilities, supporting the full "wire speed" of Ethernet (14,880 pps, packets per second) and Fast Ethernet (148,800 pps).
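
The wire-speed figures come from dividing the link rate by the time a minimum-size frame occupies on the wire: a 64-byte frame plus an 8-byte preamble and a 12-byte inter-frame gap, or 84 bytes in total. The short Python calculation below shows the arithmetic.

```python
# Where the "wire speed" packets-per-second figures come from: divide the link
# rate by the number of bit times a minimum-size frame occupies on the wire
# (64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 84 bytes).

MIN_FRAME = 64        # bytes, minimum Ethernet frame
PREAMBLE = 8          # bytes, preamble and start-of-frame delimiter
INTERFRAME_GAP = 12   # bytes' worth of idle time between frames

def wire_speed_pps(link_bps: int) -> float:
    bits_per_packet = (MIN_FRAME + PREAMBLE + INTERFRAME_GAP) * 8
    return link_bps / bits_per_packet

if __name__ == "__main__":
    print(f"Ethernet:      {int(wire_speed_pps(10_000_000)):,} pps")    # 14,880
    print(f"Fast Ethernet: {int(wire_speed_pps(100_000_000)):,} pps")   # 148,809 (~148,800)
```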

Routers

Routers work in a manner similar to switches and bridges in that they filter out network traffic. Rather than doing so by packet (MAC) addresses, they filter by network protocol address. Routers were born out of the necessity for dividing networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses can pass between segments. Routers recalculate the checksum and rewrite the MAC header of every packet. The price paid for this type of intelligent forwarding and filtering is usually measured in terms of latency, or the delay that a packet experiences inside the router. Such filtering takes more time than that exercised in a switch or bridge, which only looks at the Ethernet address. In more complex networks, however, overall efficiency can be improved. An additional benefit of routers is their automatic filtering of broadcasts, but overall they are more complicated to set up.
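
The subnet decision a router (or a sending host) makes can be illustrated with Python's standard ipaddress module: traffic for an address on the local subnet is delivered directly, while traffic for any other subnet is handed to a router. The addresses below are made-up examples.

```python
# A minimal illustration of the subnet decision described above. The subnet,
# gateway and destination addresses are made up for the example.

import ipaddress

LOCAL_SUBNET = ipaddress.ip_network("192.168.10.0/24")
DEFAULT_GATEWAY = ipaddress.ip_address("192.168.10.1")

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    if dest in LOCAL_SUBNET:
        return f"{dest} is on the local subnet -- deliver directly"
    return f"{dest} is on another subnet -- send to router {DEFAULT_GATEWAY}"

if __name__ == "__main__":
    print(next_hop("192.168.10.42"))   # same subnet, no router needed
    print(next_hop("192.168.20.7"))    # different subnet, routed via the gateway
```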

Switch Benefits

 Isolates traffic, relieving congestion


 Separates collision domains, reducing collisions

 Segments the network, restarting the distance and repeater rules

Switch Costs

 Price: currently 3 to 5 times the price of a hub


 Packet processing time is longer than in a hub

 Monitoring the network is more complicated

General Benefits of Network Switching

Switches replace hubs in networking designs, and they are more expensive. So why is the desktop switching market doubling every year with huge numbers sold? The price of switches is declining precipitously, while hubs are a mature technology with small price declines. This means there is far less difference between switch costs and hub costs than there used to be, and the gap continues to narrow.

Since switches are self learning, they are as easy to install as a hub. Just plug them in and go.
And they operate on the same hardware layer as a hub, so there are no protocol issues.

There are two reasons for including switches in network designs. First, a switch breaks one network into many small networks, so the distance and repeater limitations are restarted. Second, this same segmentation isolates traffic and reduces collisions, relieving network congestion. It is very easy to identify the need for distance and repeater extension, and to understand this benefit of network switching. But the second benefit, relieving network congestion, is harder to identify, and it is harder to understand the degree to which switches will help performance. Since all switches add small latency delays to packet processing, deploying switches unnecessarily can actually slow down network performance. The next section therefore covers the factors that affect the impact of adding switching to congested networks.

Network Switching

The benefits of switching vary from network to network. Adding a switch for the first time has
different implications than increasing the number of switched ports already installed.
Understanding traffic patterns is very important to network switching - the goal being to
eliminate (or filter) as much traffic as possible. A switch installed in a location where it
forwards almost all the traffic it receives will help much less than one that filters most of the
traffic.

Networks that are not congested can actually be negatively impacted by adding switches. Packet processing delays, switch buffer limitations, and the retransmissions that can result sometimes slow performance compared with the hub-based alternative. If your network is not congested, don't replace hubs with switches. How can you tell if performance problems are the result of network congestion? Measure utilization factors and collision rates.

Good Candidates for Performance Boosts from Switching

 Utilization more than 35%
 Collision rates more than 10%

Utilization load is the amount of total traffic as a percent of the theoretical maximum for the network type: 10 Mbps in Ethernet, 100 Mbps in Fast Ethernet. The collision rate is the number of packets with collisions as a percentage of total packets.

Network response times (the user-visible part of network performance) suffer as the load on the network increases, and under heavy loads small increases in user traffic often result in significant decreases in performance. This is similar to automobile freeway dynamics, in that increasing loads result in increasing throughput up to a point, after which further increases in demand result in rapid deterioration of true throughput. In Ethernet, collisions increase as the network is loaded, and this causes retransmissions and increases in load which cause even more collisions. The resulting network overload slows traffic considerably.

Using network utilities found on most server operating systems, network managers can determine utilization and collision rates. Both peak and average statistics should be considered.

Replacing a Central Hub with a Switch

This switching opportunity is typified by a fully shared network, where many users are
connected in a cascading hub architecture. The two main impacts of switching will be faster
network connection to the server(s) and the isolation of non-relevant traffic from each
segment. As the network bottleneck is eliminated performance grows until a new system
bottleneck is encountered - such as maximum server performance.

Adding Switches to a Backbone Switched Network

Congestion on a switched network can usually be relieved by adding more switched ports, and
increasing the speed of these ports. Segments experiencing congestion are identified by their
utilization and collision rates, and the solution is either further segmentation or faster
connections. Both Fast Ethernet and Ethernet switch ports are added further down the tree
structure of the network to increase performance.

Designing for Maximum Benefit

Changes in network design tend to be evolutionary rather than revolutionary; rarely is a network manager able to design a network completely from scratch. Usually, changes are made slowly with an eye toward preserving as much of the usable capital investment as possible while replacing obsolete or outdated technology with new equipment.

Fast Ethernet is very easy to add to most networks. A switch or bridge allows Fast Ethernet to
connect to existing Ethernet infrastructures to bring speed to critical links. The faster
technology is used to connect switches to each other, and to switched or shared servers to
ensure the avoidance of bottlenecks.

Many client/server networks suffer from too many clients trying to access the same server, which creates a bottleneck where the server attaches to the LAN. Fast Ethernet, in combination with switched Ethernet, creates a cost-effective solution for avoiding slow client/server networks by allowing the server to be placed on a fast port.
Distributed processing also benefits from Fast Ethernet and switching. Segmentation of
the network via switches brings big performance boosts to distributed traffic networks,
and the switches are commonly connected via a Fast Ethernet backbone.

Good Candidates for Performance Boosts from Switching

 Important to know network demand per node
 Try to group users with the nodes they communicate with most often on the same segment
 Look for departmental traffic patterns
 Avoid switch bottlenecks with fast uplinks
 Move users between segments in an iterative process until all nodes see less than 35% utilization

(Figure: client/server traffic and distributed traffic patterns)

Advanced Switching Technology Issues

There are some technology issues with switching that do not affect 95% of all networks. Major
switch vendors and the trade publications are promoting new competitive technologies, so
some of these concepts are discussed here.

Managed or Unmanaged

Management provides benefits in many networks. Large networks with mission critical
applications are managed with many sophisticated tools, using SNMP to monitor the health of
devices on the network. Networks using SNMP or RMON (an extension to SNMP that provides
much more data while using less network bandwidth to do so) will either manage every
device, or just the more critical areas. VLANs are another benefit to management in a switch.
A VLAN allows the network to group nodes into logical LANs that behave as one network,
regardless of physical connections. The main benefit is managing broadcast and multicast
traffic. An unmanaged switch will pass broadcast and multicast packets through to all ports. If
the network has logical groupings that are different from physical groupings, then a VLAN-based
switch may be the best bet for traffic optimization.
Another benefit of management in switches is the Spanning Tree Algorithm. Spanning Tree allows the network manager to design in redundant links, with switches attached in loops. Without it, such loops would defeat the self-learning aspect of switches, since traffic from one node would appear to originate on different ports. Spanning Tree is a protocol that allows the switches to coordinate with each other so that traffic is only carried on one of the redundant links (unless there is a failure, in which case the backup link is automatically activated). Network managers with switches deployed in critical applications may want to have redundant links; in this case management is necessary. But for the rest of the networks, an unmanaged switch would do quite well, and is much less expensive.

Store-and-Forward vs. Cut-Through

LAN switches come in two basic architectures, cut-through and store-and-forward. Cut-through switches only examine the destination address before forwarding the packet on to its destination segment. A store-and-forward switch, on the other hand, accepts and analyzes the entire packet before forwarding it to its destination. It takes more time to examine the entire packet, but it allows the switch to catch certain packet errors and collisions and keep bad packets from propagating through the network.

Today, the speed of store-and-forward switches has caught up with cut-through switches to
the point where the difference between the two is minimal. Also, there are a large number of
hybrid switches available that mix both cut-through and store-and-forward architectures.

Blocking vs. Non-Blocking Switches

If you take a switch's specifications and add up all the ports at their theoretical maximum speed, you have the theoretical sum total of the switch's throughput. If the switching bus or switching components cannot handle this theoretical total of all ports, the switch is considered a "blocking switch". There is debate over whether all switches should be designed non-blocking, but the added costs of doing so are only reasonable on switches designed to work in the largest network backbones. For almost all applications, a blocking switch that has an acceptable and reasonable throughput level will work just fine.

Consider an eight-port 10/100 switch. Since each port can theoretically handle 200 Mbps (full duplex), there is a theoretical need for 1600 Mbps, or 1.6 Gbps. But in the real world each port will not exceed 50% utilization, so an 800 Mbps switching bus is adequate. Weighing total throughput against real-world port demand confirms that the switch can handle the loads of your network.
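
The same arithmetic, written out as a small Python sketch; the port count, expected utilization and backplane capacity are example values, not measurements of any particular switch.

```python
# The arithmetic from the paragraph above, as a small sketch. Port count,
# expected utilization and backplane capacity are example values.

def aggregate_demand_mbps(ports: int, port_speed_mbps: int,
                          full_duplex: bool = True) -> int:
    """Theoretical worst-case demand if every port ran flat out."""
    per_port = port_speed_mbps * (2 if full_duplex else 1)
    return ports * per_port

if __name__ == "__main__":
    theoretical = aggregate_demand_mbps(ports=8, port_speed_mbps=100)   # 1600 Mbps
    expected = theoretical * 0.5            # assume ports average <= 50% utilization
    backplane = 800                         # Mbps the switching bus can actually move
    print(f"theoretical demand: {theoretical} Mbps, expected load: {expected:.0f} Mbps")
    print("non-blocking" if backplane >= theoretical else
          f"blocking switch, but adequate for expected load: {backplane >= expected}")
```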

Switch Buffer Limitations

As packets are processed in the switch, they are held in buffers. If the destination segment is
congested, the switch holds on to the packet as it waits for bandwidth to become available on
the crowded segment. Buffers that are full present a problem. So some analysis of the buffer
sizes and strategies for handling overflows is of interest for the technically inclined network
designer.

In real-world networks, crowded segments cause many problems, so networks should be designed to eliminate crowded, congested segments; for most users, buffer strategy is therefore not a major factor in switch selection. There are two strategies for handling full buffers. One is "backpressure flow control", which pushes back on the source nodes of packets that find a full buffer. This compares to the alternative strategy of simply dropping the packet and relying on the integrity features in networks to retransmit automatically. One solution spreads the problem from one segment to other segments, propagating the problem. The other causes retransmissions, and the resulting increase in load is not optimal. Neither strategy solves the problem, so switch vendors use large buffers and advise network managers to design switched network topologies to eliminate the source of the problem: congested segments.

Layer 3 Switching

A hybrid device is the latest improvement in internetworking technology. Combining the packet handling of routers and the speed of switching, these multilayer switches operate on both layer 2 and layer 3 of the OSI network model. The performance of this class of switch is aimed at the core of large enterprise networks. Sometimes called routing switches or IP switches, multilayer switches look for common traffic flows and switch these flows on the hardware layer for speed. For traffic outside the normal flows, the multilayer switch uses routing functions. This keeps the higher-overhead routing functions only where they are needed, and strives for the best handling strategy for each network packet.

Many vendors are working on high-end multilayer switches, and the technology is definitely a "work in progress". As networking technology evolves, multilayer switches are likely to replace routers in most large networks.

Sharing Devices
A Look at Device Server Technology

Device networking starts with a device server, which allows almost any device with serial connectivity to connect to Ethernet networks quickly and cost-effectively. These products include all of the elements needed for device networking and, because of their scalability, do not require a server or gateway.

This tutorial provides an introduction to the functionality of a variety of device servers. It will
cover print servers, terminal servers and console servers, as well as embedded and external
device servers. For each of these categories, there will also be a review of specific Lantronix
offerings.

An Introduction to Device Servers

A device server is characterized by a minimal operating architecture that requires no per-seat network operating system license, and client access that is independent of any operating system or proprietary protocol. In addition, the device server is a "closed box," delivering extreme ease of installation and minimal maintenance, and it can be managed by the client remotely via a web browser.

By virtue of their independent operating system, protocol independence, small size and
flexibility, device servers are able to meet the demands of virtually any network-enabling
application. The demand for device servers is rapidly increasing because organizations need to
leverage their networking infrastructure investment across all of their resources. Many
currently installed devices lack network ports or require dedicated serial connections for
management -- device servers allow those devices to become connected to the network.

Device servers are currently used in a wide variety of environments in which machinery,
instruments, sensors and other discrete devices generate data that was previously inaccessible
through enterprise networks. They are also used for security systems, point-of-sale
applications, network management and many other applications where network access to a
device is required.
As device servers become more widely adopted and implemented into specialized applications,
we can expect to see variations in size, mounting capabilities and enclosures. Device servers
are also available as embedded devices, capable of providing instant networking support for
developers of future products where connectivity will be required.

Print servers, terminal servers, remote access servers and network time servers are examples
of device servers which are specialized for particular functions. Each of these types of servers
has unique configuration attributes in hardware or software that help them to perform best in
their particular arena.

External Device Servers

External device servers are stand-alone serial-to-wireless (802.11b) or serial-to-Ethernet device servers that can put just about any device with serial connectivity on the network in a matter of minutes so it can be managed remotely.
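
Many serial-to-Ethernet device servers expose the attached serial port as a raw TCP socket, so a management application can talk to the device with ordinary socket code. The Python sketch below illustrates the idea; the host name, TCP port number and command string are placeholders, so check your own product's documentation for the actual values.

```python
# A hedged sketch of talking to a serial device through a device server. The
# host name, port number and command below are placeholders -- consult your
# product's documentation for the real values and protocol framing.

import socket

DEVICE_SERVER_HOST = "device-server.example.com"   # placeholder address
RAW_SERIAL_TCP_PORT = 10001                        # placeholder TCP port

def query_serial_device(command: bytes, timeout: float = 5.0) -> bytes:
    """Send one command to the serial device and return whatever it replies."""
    with socket.create_connection((DEVICE_SERVER_HOST, RAW_SERIAL_TCP_PORT),
                                  timeout=timeout) as sock:
        sock.sendall(command)
        return sock.recv(4096)    # single read; real protocols may need framing

if __name__ == "__main__":
    reply = query_serial_device(b"STATUS?\r\n")    # hypothetical command
    print(reply.decode(errors="replace"))
```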

External Device Servers from Lantronix

Lantronix external device servers provide the ability to remotely control, monitor, diagnose
and troubleshoot equipment over a network or the Internet. By opting for a powerful external
device with full network and web capabilities, companies are able to preserve their present
equipment investments.

Lantronix offers a full line of external device servers: Ethernet or wireless, advanced
encryption for maximum security, and device servers designed for commercial or heavy-duty
industrial applications.

Wireless:
Providing a whole new level of flexibility and mobility, these devices allow
users to connect devices that are inaccessible via cabling. Users can also add
intelligence to their businesses by putting mobile devices, such as medical
instruments or warehouse equipment, on networks.

Security:
Ideal for protecting data such as business transactions, customer information,
financial records, etc., these devices provide enhanced security for networked
devices.

Commercial:
These devices enable users to network-enable their existing equipment (such
as POS devices, AV equipment, medical instruments, etc.) simply and cost-
effectively, without the need for special software.

Industrial:
For heavy-duty factory applications, Lantronix offers a full complement of
industrial-strength external device servers designed for use with
manufacturing, assembly and factory automation equipment. All models
support Modbus industrial protocols.

Embedded Device Servers

Embedded device servers integrate all the required hardware and software into a single
embedded device. They use a device’s serial port to web-enable or network-enable products
quickly and easily without the complexities of extensive hardware and software integration.
Embedded device servers are typically plug-and-play solutions that operate independently of a
PC and usually include a wireless or Ethernet connection, operating system, an embedded web
server, a full TCP/IP protocol stack, and some sort of encryption for secure communications.

Embedded Device Servers from Lantronix

Lantronix recognizes that design engineers are looking for a simple, cost-effective and reliable
way to seamlessly embed network connectivity into their products. In a fraction of the time it
would take to develop a custom solution, Lantronix embedded device servers provide a
variety of proven, fully integrated products. OEMs can add full Ethernet and/or wireless
connectivity to their products so they can be managed over a network or the Internet.

Module:
These devices allow users to network-enable just about any electronic device
with Ethernet and/or wireless connectivity.

Board-Level:
Users can integrate networking capabilities onto the circuit boards of
equipment like factory machinery, security systems and medical devices.

Single-Chip Solutions:
These powerful, system-on-chip solutions help users address networking
issues early in the design cycle to support the most popular embedded
networking technologies.

Terminal Servers

Terminal servers are used to enable terminals to transmit data to and from host computers
across LANs, without requiring each terminal to have its own direct connection. And while the
terminal server's existence is still justified by convenience and cost considerations, its inherent
intelligence provides many more advantages. Among these is enhanced remote monitoring
and control. Terminal servers that support protocols like SNMP make networks easier to
manage.
Devices that are attached to a network through a server can be shared between terminals and
hosts at both the local site and throughout the network. A single terminal may be connected
to several hosts at the same time (in multiple concurrent sessions), and can switch between
them. Terminal servers are also used to network devices that have only serial outputs. A
connection between serial ports on different servers is opened, allowing data to move between
the two devices.

Given its natural translation ability, a multi-protocol server can perform conversions between
the protocols it knows such as LAT and TCP/IP. While server bandwidth is not adequate for
large file transfers, it can easily handle host-to-host inquiry/response applications, electronic
mailbox checking, etc. In addition, it is far more economical than the alternatives -- acquiring
expensive host software and special-purpose converters. Multiport device and print servers
give users greater flexibility in configuring and managing their networks.

Whether it is moving printers and other peripherals from one network to another, expanding
the dimensions of interoperability or preparing for growth, terminal servers can fulfill these
requirements without major rewiring. Today, terminal servers offer a full range of
functionality, ranging from 8 to 32 ports, giving users the power to connect terminals,
modems, servers and virtually any serial device for remote access over IP networks.

Print Servers

Print servers enable printers to be shared by other users on the network. Supporting parallel and/or serial interfaces, a print server accepts print jobs from any person on the network using supported protocols and manages those jobs on each appropriate printer.
The earliest print servers were external devices, which supported printing via parallel or
serial ports on the device. Typically, only one or two protocols were supported. The latest
generations of print servers support multiple protocols, have multiple parallel and serial
connection options and, in some cases, are small enough to fit directly on the parallel port of
the printer itself. Some printers have embedded or internal print servers. This design has an
integral communication benefit between printer and print server, but lacks flexibility if the
printer has physical problems.

Print servers generally do not contain a large amount of memory; instead, print jobs are
simply held in a queue. When the desired printer becomes available, the print server allows
the host to transmit the data to the appropriate printer port on the server. The print server
then queues and prints each job in the order in which print requests are received, regardless
of the protocol used or the size of the job.
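
The queuing behaviour just described can be sketched in a few lines of Python. This is only a
toy model of a first-come, first-served job queue, not a real print server implementation; the
PrintJob fields and the sample jobs are invented for illustration.

from collections import deque
from dataclasses import dataclass

@dataclass
class PrintJob:
    owner: str
    data: bytes                              # the raw job, whatever its protocol or size

class SimplePrintQueue:
    """Toy model of the first-come, first-served queue described above."""

    def __init__(self) -> None:
        self._jobs: deque[PrintJob] = deque()

    def submit(self, job: PrintJob) -> None:
        self._jobs.append(job)               # jobs are accepted in arrival order

    def printer_ready(self) -> PrintJob | None:
        """Called when the printer becomes available; hands over the next job."""
        return self._jobs.popleft() if self._jobs else None

queue = SimplePrintQueue()
queue.submit(PrintJob("alice", b"%PDF-1.4 ..."))
queue.submit(PrintJob("bob", b"plain text job"))
while (job := queue.printer_ready()) is not None:
    print(f"Printing job from {job.owner} ({len(job.data)} bytes)")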

Device Server Technology in the Data Center

The IT/data center is considered the pulse of any modern business. Remote management
enables users to monitor and manage global networks, systems and IT equipment from
anywhere and at any time. Device servers play a major role in allowing for the remote
capabilities and flexibility required for businesses to maximize personnel resources and
technology ROI.

Console Servers

Console servers provide the flexibility of both standard and emergency remote access via
attachment to the network or to a modem. Remote console management serves as a valuable
tool to help maximize system uptime and minimize system operating costs.

Secure console servers provide familiar tools to leverage the console or emergency
management port built into most serial devices, including servers, switches, routers, telecom
equipment - anything in a rack - even if the network is down. They also supply complete in-
band and out-of-band local and remote management for the data center with tools such as
telnet and SSH that help manage the performance and availability of critical business
information systems.
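
As a rough illustration of scripted console access over SSH, the sketch below uses the
third-party paramiko library (an assumption; it is not mentioned in this tutorial). The
hostname, credentials and the "show port status" command are placeholders for whatever your
console server actually accepts.

import paramiko                              # third-party library: pip install paramiko

def run_console_command(host: str, user: str, password: str, command: str) -> str:
    """Open an SSH session to a console server, run one command, return its output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # demo only
    client.connect(host, username=user, password=password)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    output = run_console_command("consoleserver.example.com", "admin",
                                 "changeme", "show port status")
    print(output)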

Console Management Solutions from Lantronix

Lantronix provides complete in-band and out-of-band local and remote management solutions
for the data center. SecureLinx™ secure console management products give IT managers
unsurpassed ability to securely and remotely manage serial devices, including servers,
switches, routers, telecom equipment - anything in a rack - even if the network is down.

Conclusion
The ability to manage virtually any electronic device over a network or the Internet is
changing the way the world works and does business. With the ability to remotely manage,
monitor, diagnose and control equipment, a new level of functionality is added to networking
— providing business with increased intelligence and efficiency. Lantronix leads the way in
developing new network intelligence and has been a tireless pioneer in machine-to-machine
(M2M) communication technology.

We hope this introduction to networking has been helpful and informative. This tutorial was
meant to be an overview and not a comprehensive guide that explains everything there is to
know about planning, installing, administering and troubleshooting a network. There are many
Internet websites, books and magazines available that explain all aspects of computer
networks, from LANs to WANs, network hardware to running cable. To learn about these
subjects in greater detail, check your local bookstore, software retailer or newsstand for more
information.

Networking Hardware
In the past, all of the articles that I have written for this Web site have been intended for
use by administrators with at least some level of experience. Recently though, there have
been requests for articles targeted toward those who are just getting started with
networking and that have absolutely no experience at all. This article will be the first in a
series targeted toward novices. In this article series, I will start with the absolute basics,
and work toward building a functional network. In this article I will begin by discussing
some of the various networking components and what they do.

Network Adapters
The first piece of hardware that I want to discuss is a network adapter. There are many
different names for network adapters, including network cards, Network Interface Cards, and
NICs. These are all generic terms for the same piece of hardware. A network card’s job is
to physically attach a computer to a network, so that the computer can participate in
network communications.

The first thing that you need to know about network cards is that the network card has to
match the network medium. The network medium refers to the type of cabling that is
being used on the network. Wireless networks are a science all their own, and I will talk
about them in a separate article.

At one time making sure that a network card matched the network medium was a really
big deal, because there were a large number of competing standards in existence. For
example, before you built a network and started buying network cards and cabling, you
had to decide if you were going to use Ethernet, coaxial Ethernet, Token Ring, ARCnet, or
one of the other networking standards of the time. Each networking technology had its
strengths and weaknesses, and it was important to figure out which one was the most
appropriate for your organization.

Today, most of the networking technologies that I mentioned above are quickly
becoming extinct. Pretty much the only type of wired network used by small and medium
sized businesses is Ethernet. You can see an example of an Ethernet network card, shown
in Figure A.

Figure A: This is what an Ethernet card looks like

Modern Ethernet networks use twisted pair cabling containing eight wires. These wires
are arranged in a special order, and an RJ-45 connector is crimped onto the end of the
cable. An RJ-45 connector looks like the connector on the end of a phone cord, but it’s bigger.
Phone cords use RJ-11 connectors as opposed to the RJ-45 connectors used by Ethernet
cable. You can see an example of an Ethernet cable with an RJ-45 connector, shown in
Figure B.
Figure B: This is an Ethernet cable with an RJ-45 connector installed

Hubs and Switches


As you can see, computers use network cards to send and receive data. The data is
transmitted over Ethernet cables. However, you normally can’t just run an Ethernet cable
between two PCs and call it a network.

In this day and age of high speed Internet access being almost universally available, you
tend to hear the term broadband thrown around a lot. Broadband signaling carries multiple
channels over a shared medium at the same time. Ethernet, in contrast, uses baseband
signaling, and twisted pair Ethernet dedicates separate wire pairs to sending and to
receiving data. What this means is that if one PC is sending data across a particular pair of
wires within the Ethernet cable, then the PC that is receiving the data needs that pair
connected to its receiving pins.

You can actually network two PCs together in this way by creating what is known as
a crossover cable. A crossover cable is simply a network cable that has the sending and
receiving wires reversed at one end, so that two PCs can be linked directly together.

The problem with using a crossover cable to build a network is that the network is
limited to exactly two PCs. Rather than using crossover cables,
most networks use normal Ethernet cables that do not have the sending and receiving
wires reversed at one end.

Of course the sending and receiving wires have to be reversed at some point in order for
communications to succeed. This is the job of a hub or a switch. Hubs are starting to
become extinct, but I want to talk about them anyway because it will make it easier to
explain switches later on.
There are different types of hubs, but generally speaking a hub is nothing more than a
box with a bunch of RJ-45 ports. Each computer on a network would be connected to a
hub via an Ethernet cable. You can see a picture of a hub, shown in Figure C.

Figure C: A hub is a device that acts as a central connection point for computers on a
network

A hub has two different jobs. Its first job is to provide a central point of connection for all
of the computers on the network. Every computer plugs into the hub (multiple hubs can
be daisy chained together if necessary in order to accommodate more computers).

The hub’s other job is to arrange the ports in such a way that if a PC transmits data, the
data is sent over the other computers’ receive wires.

Right now you might be wondering how data gets to the correct destination if more than
two PCs are connected to a hub. The secret lies in the network card. Each Ethernet card is
programmed at the factory with a unique Media Access Control (MAC) address. When a
computer transmits data across an Ethernet network whose PCs are connected to a hub,
the data is actually sent to every computer on the network. As
each computer receives the data, it compares the destination address to its own MAC
address. If the addresses match then the computer knows that it is the intended recipient,
otherwise it ignores the data.
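
That accept-or-ignore decision can be modelled with a short Python sketch. The MAC addresses
below are made up for illustration; real network cards perform this comparison in hardware.

MY_MAC = "00:1a:2b:3c:4d:5e"                 # made-up address of "this" network card
BROADCAST = "ff:ff:ff:ff:ff:ff"              # frames addressed to every station

def should_accept(destination_mac: str) -> bool:
    """Keep the frame only if it is addressed to us or to all stations."""
    return destination_mac.lower() in (MY_MAC, BROADCAST)

for destination in ("00:1A:2B:3C:4D:5E", "00:99:88:77:66:55", "ff:ff:ff:ff:ff:ff"):
    action = "accepted" if should_accept(destination) else "ignored"
    print(f"Frame addressed to {destination} was {action}")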

As you can see, when computers are connected via a hub, every packet gets sent to every
computer on the network. The problem is that any computer can send a transmission at
any given time. Have you ever been on a conference call and accidentally started to talk
at the same time as someone else? This is the same thing that happens on this type of
network.
When a PC needs to transmit data, it checks to make sure that no other computers are
sending data at the moment. If the line is clear, it transmits the necessary data. If another
computer tries to communicate at the same time though, then the packets of data that are
traveling across the wire collide and are destroyed (this is why this type of network is
sometimes referred to as a collision domain). Both PCs then have to wait for a random
amount of time and attempt to retransmit the packet that was destroyed.
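
The retry behaviour can be sketched as follows. This is a loose, illustrative model of random
backoff, inspired by Ethernet's binary exponential backoff rather than a faithful implementation
of the CSMA/CD standard; the 30% collision probability and the send_frame callback are invented
for the example.

import random
import time

SLOT_TIME = 0.0000512          # roughly the 10 Mbps Ethernet slot time (51.2 microseconds)

def backoff_delay(attempt: int) -> float:
    """Pick a random delay after the given number of consecutive collisions."""
    max_slots = 2 ** min(attempt, 10)        # the waiting window doubles each time
    return random.randrange(max_slots) * SLOT_TIME

def transmit_with_backoff(send_frame, max_attempts: int = 16) -> bool:
    for attempt in range(1, max_attempts + 1):
        if send_frame():                     # True means the frame got through
            return True
        time.sleep(backoff_delay(attempt))   # collision: wait a random time, retry
    return False                             # give up after too many collisions

# Simulate a busy segment where each attempt has a 30% chance of colliding.
delivered = transmit_with_backoff(lambda: random.random() > 0.3)
print("Frame delivered" if delivered else "Frame dropped after repeated collisions")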

As the number of PCs on a collision domain increases, so does the number of collisions.
As the number of collisions increases, network efficiency decreases. This is why
switches have almost completely replaced hubs.

A switch, such as the one shown in Figure D, performs all of the same basic tasks as a
hub. The difference is that when a PC on the network needs to communicate with another
PC, the switch uses a set of internal logic circuits to establish a dedicated, logical path
between the two PCs. What this means is that the two PCs are free to communicate with
each other, without having to worry about collisions.

Figure D: A switch looks a lot like a hub, but performs very differently

Switches greatly improve a network’s efficiency. Yes, they eliminate collisions, but there
is more to it than that. Because of the way that switches work, they can establish parallel
communications paths. For example, just because computer A is communicating with
computer B, there is no reason why computer C can’t simultaneously communicate with
computer D. In a collision domain, these types of parallel communications would be
impossible because they would result in collisions.
DNS Servers
In the last part of this article series, I talked about how all of the computers on a network
segment share a common IP address range. I also explained that when a computer needs
to access information from a computer on another network or network segment, it’s a
router’s job to move the necessary packets of data from the local network to another
network (such as the Internet).

If you read that article, you probably noticed that in one of my examples, I made a
reference to the IP address that’s associated with my Web site. To be able to access a
Web site, your Web browser has to know the Web site’s IP address. Only then can it give
that address to the router, which in turn routes the outbound request packets to the
appropriate destination. Even though every Web site has an IP address, you probably visit
Web sites every day without ever having to know an IP address. In this article, I will
show you why this is possible.

I have already explained that IP addresses are similar to street addresses. The network
portion of the address defines which network segment the computer exists on, and the
computer portion of the address designates a specific computer on that network. Knowing
an IP address is a requirement for TCP/IP based communications between two
computers.
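
If it helps, the split between the network portion and the computer (host) portion of an
address can be demonstrated with Python's standard ipaddress module; the addresses and the
/24 mask below are examples only.

import ipaddress

# Example only: host 192.168.1.25 with a 255.255.255.0 subnet mask (/24).
interface = ipaddress.ip_interface("192.168.1.25/24")

print("Network portion :", interface.network)     # 192.168.1.0/24
print("Host address    :", interface.ip)          # 192.168.1.25
print("Same segment as 192.168.1.77?",
      ipaddress.ip_address("192.168.1.77") in interface.network)   # True
print("Same segment as 10.0.0.5?",
      ipaddress.ip_address("10.0.0.5") in interface.network)       # False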

When you open a Web browser and enter the name of a Web site (which is known as the
site’s domain name, or URL, which stands for Uniform Resource Locator), the Web browser goes
straight to the Web site without you ever having to enter an IP address. With that in mind,
consider my comparison of IP addresses to postal addresses. You can’t just write
someone’s name on an envelope, drop the envelope in the mail, and expect it to be
delivered. The post office can’t deliver the letter unless it has an address. The same basic
concept applies to visiting Web sites. Your computer cannot communicate with a Web
site unless it knows the site’s IP address.

So if your computer needs to know a Web site’s IP address before it can access the site,
and you aren’t entering the IP address, where does the IP address come from? Translating
domain names into IP addresses is the job of a DNS server.

In the two articles leading up to this one, I talked about several aspects of a computer’s
TCP/IP configuration, such as the IP address, subnet mask, and default gateway. If you
look at Figure A, you will notice that there is one more configuration option that has been
filled in: the Preferred DNS server.
Figure A: The Preferred DNS Server is defined as a part of a computer’s TCP/IP
configuration

As you can see in the figure, the preferred DNS server is defined as a part of a
computer’s TCP/IP configuration. What this means is that the computer will always
know the IP address of a DNS server. This is important because a computer cannot
communicate with another computer using the TCP/IP protocol unless an IP address is
known.

With that in mind, let’s take a look at what happens when you attempt to visit a Web site.
The process begins when you open a Web browser and enter a URL. When you do, the
Web browser knows that it cannot locate the Web site based on the URL alone. It
therefore retrieves the DNS server’s IP address from the computer’s TCP/IP
configuration and passes the URL on to the DNS server. The DNS server then looks up
the URL in a table that lists the site’s IP address. The DNS server then returns the
IP address to the Web browser, and the browser is then able to communicate with the
requested Web site.
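
In practice, a program normally hands this whole job to the operating system's resolver. The
short Python sketch below does exactly that; www.example.com is simply an example hostname.

import socket

hostname = "www.example.com"                      # example hostname
ip_address = socket.gethostbyname(hostname)       # the OS resolver does the DNS work
print(f"{hostname} resolves to {ip_address}")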

Actually, that explanation is a little bit oversimplified. DNS name resolution can only
work in the way that I just described if the DNS server contains a record that corresponds
to the site that’s being requested. If you were to visit a random Web site, there is a really
good chance that your DNS server does not contain a record for the site. The reason for
this is because the Internet is so big. There are millions of Web sites, and new sites are
created every day. There is no way that a single DNS server could possibly keep up with
all of those sites and service requests from everyone who is connected to the Internet.

Let’s pretend for a moment that it was possible for a single DNS server to store records
for every Web site in existence. Even if the server’s capacity were not an issue, the server
would be overwhelmed by the sheer volume of name resolution requests that it would
receive from people using the Internet. A centralized DNS server would also be a very
popular target for attacks.

Instead, DNS servers are distributed so that a single DNS server does not have to provide
name resolutions for the entire Internet. There is an organization named the Internet
Corporation for Assigned Names and Numbers, or ICANN for short, that is responsible
for all of the registered domain names on the Internet. Because managing all of those
domain names is such a huge job, ICANN delegates portions of the domain naming
responsibility to various other firms. For example, Network Solutions is responsible for
all of the .com domain names. Even so, Network Solutions does not maintain a list of the
IP addresses associated with all of the .com domains. In most cases, Network Solutions’
DNS servers contain records that point to the DNS server that is considered to be
authoritative for each domain.

To see how all this works, imagine that you wanted to visit the
http://www.brienposey.com/ website. When you enter the request into your Web browser,
your Web browser forwards the URL to the DNS server specified by your computer’s
TCP/IP configuration. More than likely, your DNS server is not going to know the IP
address of this website. Therefore, it will send the request to one of the Internet’s root DNS
servers (the top of the naming hierarchy that ICANN coordinates). The root DNS server
wouldn’t know the IP address for the website that you are trying to visit. It would, however,
know the IP address of the DNS server that is responsible for domain names ending in
.COM. It would return this address to your Web browser, which in turn would submit the
request to the specified DNS server.

The top level DNS server for domains ending in .COM would not know the IP address of
the requested Web site either, but it would know the IP address of a DNS server that is
authoritative for the brienposey.com domain. It would send this address back to the
machine that made the request. The Web browser would then send the DNS query to the
DNS server that is authoritative for the requested domain. That DNS server would then
return the website’s IP address, thus allowing the machine to communicate with the
requested website.
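
The chain of referrals can be mimicked with a toy Python model. The server tables and the
203.0.113.10 answer below are entirely made up for illustration; real resolvers speak the DNS
protocol to root, top-level and authoritative servers rather than reading from dictionaries.

# Made-up referral tables standing in for real DNS servers.
ROOT_SERVER = {"com.": "tld-server"}
TLD_SERVER = {"brienposey.com.": "authoritative-server"}
AUTHORITATIVE_SERVER = {"www.brienposey.com.": "203.0.113.10"}

def resolve(name: str) -> str:
    # Step 1: the root server does not know the answer, but refers us onward.
    print("Root server referral for .com ->", ROOT_SERVER["com."])
    # Step 2: the .com server refers us to the domain's own authoritative server.
    domain = ".".join(name.split(".")[-3:])          # e.g. "brienposey.com."
    print("TLD server referral for", domain, "->", TLD_SERVER[domain])
    # Step 3: the authoritative server returns the actual address record.
    return AUTHORITATIVE_SERVER[name]

print("Answer:", resolve("www.brienposey.com."))     # -> 203.0.113.10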

As you can see, there are a lot of steps that must be completed in order for a computer to
find the IP address of a website. To help reduce the number of DNS queries that must be
made, the results of DNS queries are usually cached for either a few hours or a few days,
depending on how the machine is configured. Caching IP addresses greatly improves
performance and minimizes the amount of bandwidth consumed by DNS
queries. Imagine how inefficient Web browsing would be if your computer had to perform a
full set of DNS queries every time you visited a new page.
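
A simplified sketch of such a cache follows, assuming nothing more than an in-memory
dictionary with per-entry expiry times; the hostname and the 203.0.113.20 address are
placeholders.

import time

class DnsCache:
    """Toy cache: remembers answers until their time-to-live (TTL) expires."""

    def __init__(self) -> None:
        self._entries: dict[str, tuple[str, float]] = {}    # name -> (ip, expiry time)

    def get(self, name: str) -> str | None:
        entry = self._entries.get(name)
        if entry is None:
            return None
        ip, expires_at = entry
        if time.time() >= expires_at:                        # entry has expired
            del self._entries[name]
            return None
        return ip

    def put(self, name: str, ip: str, ttl_seconds: float) -> None:
        self._entries[name] = (ip, time.time() + ttl_seconds)

cache = DnsCache()
cache.put("www.example.com", "203.0.113.20", ttl_seconds=3600)   # placeholder answer
print(cache.get("www.example.com"))    # cache hit: no new DNS queries are needed
print(cache.get("www.other.org"))      # cache miss (None): a full lookup would follow
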
Workstations and Servers
So far in this article series, I have talked a lot about networking hardware and about the
TCP/IP protocol. The networking hardware is used to establish a physical connection
between devices, while the TCP/IP protocol is essentially the language that the various
devices use to communicate with each other. In this article, I will continue the discussion
by talking a little bit about the computers that are connected to a network.

Even if you are new to networking, you have no doubt heard terms such as server and
workstation. These terms are generally used to refer to a computer’s role on the network
rather than the computer’s hardware. For example, just because a computer is acting as a
server, it doesn’t necessarily mean that it has to be running server hardware. It is possible
to install a server operating system onto a PC, and have that PC act as a network server.
Of course in most real life networks, servers are running specialized hardware to help
them to be able to handle the heavy workload that servers are typically subjected to.

What might make the concept of network servers a little bit more confusing is that
technically speaking a server is any computer that hosts resources over a network. This
means that even a computer that’s running Windows XP could be considered to be a
server if it is configured to share some kind of resource, such as files or a printer.

Computers on a network typically fall into one of three roles: a computer is usually
considered to be either a workstation (sometimes referred to as a client), a server, or a peer.

Workstations are computers that use network resources, but that do not host resources of
their own. For example, a computer that is running Windows XP would be considered a
workstation so long as it is connected to a network and is not sharing files or printers.

Servers are computers that are dedicated to the task of hosting network resources.
Typically, nobody is going to be sitting down at a server to do their work. Windows
servers (that is, computers running Windows Server 2003, Windows 2000 Server, or
Windows NT Server) have a user interface that is very similar to what you would find on
a Windows workstation. It is possible that someone with an appropriate set of
permissions could sit down at the server and run Microsoft Office or some other
application. Even so, such behavior is strongly discouraged because it undermines the
server’s security, decreases the server’s performance, and has the potential to affect the
server’s stability.

The last type of computer that is commonly found on a network is a peer. A peer machine
is a computer that acts as both a workstation and a server. Such machines typically run
workstation operating systems (such as Windows XP), but are used to both access and
host network resources.
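
These roles can be illustrated with a small, self-contained Python sketch in which one thread
plays the server (hosting a "resource") and the main thread plays the workstation that
consumes it. The port number and the resource name are arbitrary examples.

import socket
import threading

HOST, PORT = "127.0.0.1", 50007          # loopback address and an arbitrary port

def serve_once(listener: socket.socket) -> None:
    """The 'server' role: wait for one client and hand it the hosted resource."""
    conn, _addr = listener.accept()
    with conn:
        conn.sendall(b"shared resource: quarterly_report.txt\n")

if __name__ == "__main__":
    # The listening socket is created first so the client cannot race the server.
    with socket.create_server((HOST, PORT)) as listener:
        threading.Thread(target=serve_once, args=(listener,), daemon=True).start()
        # The 'workstation' role: connect to the server and consume the resource.
        with socket.create_connection((HOST, PORT)) as conn:
            print("Workstation received:", conn.recv(1024).decode().strip())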

In the past, peers were found primarily on very small networks. The idea was that if a
small company lacks the resources to purchase true servers, then the workstations could
be configured to perform double duty. For example, each user could make their own files
accessible to every other user on the network. If a user happens to have a printer attached
to their PC, they can also share the printer so that others on the network can print to it.

Peer networks have been traditionally discouraged in larger companies because of their
inherent lack of security, and because they cannot be centrally managed. That’s why peer
networks are primarily found in extremely small companies or in homes with multiple
PCs. Windows Vista (the successor to Windows XP) is attempting to change that.
Windows Vista will allow users on traditional client/server networks to form peer groups
whose members can share resources amongst themselves in a secure manner, without
breaking their connection to network servers. This new feature is
being marketed as a collaboration tool.

Earlier I mentioned that peer networks are discouraged in favor of client/server networks
because they lack security and centralized manageability. However, just because a
network is made up of workstations and servers, it doesn’t necessarily guarantee security
and centralized management. Remember, a server is only a machine that is dedicated to
the task of hosting resources over a network. Having said that, there are countless
varieties of servers and some types of servers are dedicated to providing security and
manageability.

For example, Windows servers fall into two primary categories: member servers and
domain controllers. There is really nothing special about a member server. A member
server is simply a computer that is connected to a network, and is running a Windows
Server operating system. A member server might be used as a file repository (known as a
file server), or to host one or more network printers (known as a print server). Member
servers are also frequently used to host network applications. For example, Microsoft
offers a product called Exchange Server 2003 that, when installed on a member server,
allows that member server to function as a mail server. The point is that a member server
can be used for just about anything.

Domain controllers are much more specialized. A domain controller’s job is to provide
security and manageability to the network. I am assuming that you’re probably familiar
with the idea of logging on to a network by entering a username and password. On a
Windows network, it is the domain controller that is responsible for keeping track of
usernames and passwords.

The person who is responsible for managing the network is known as the network
administrator. Whenever a user needs to gain access to resources on a Windows network,
the administrator uses a utility provided by a domain controller to create a user account
and password for the new user. When the new user (or any user for that matter) attempts
to log onto the network, the user’s credentials (their username and password) are
transmitted to the domain controller. The domain controller validates the user’s
credentials by comparing them against the copy stored in the domain controller’s
database. Assuming that the password that the user entered matches the password that the
domain controller has on file, the user is granted access to the network. This process is
called authentication.
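
As a rough sketch of the idea of checking submitted credentials against a stored copy, the
Python below uses salted password hashes. This is only an illustration; real domain
controllers use protocols such as Kerberos or NTLM, and the username and password here are
examples.

import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    """Derive a password hash; storing hashes avoids keeping passwords in the clear."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# The "domain controller's database": username -> (salt, stored password hash).
_salt = os.urandom(16)
ACCOUNTS = {"Brien": (_salt, hash_password("correct horse battery", _salt))}

def authenticate(username: str, password: str) -> bool:
    """Compare the submitted credentials against the stored copy."""
    record = ACCOUNTS.get(username)
    if record is None:
        return False
    salt, stored_hash = record
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(stored_hash, hash_password(password, salt))

print(authenticate("Brien", "correct horse battery"))   # True  -> access granted
print(authenticate("Brien", "wrong password"))          # False -> access denied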

On a Windows network, only the domain controllers perform authentication services. Of
course users will probably need to access resources stored on member servers. This is not
a problem because resources on member servers are protected by a set of permissions that
are related to the security information stored on domain controllers.

For example, suppose that my user name was Brien. I enter my username and password,
which is sent to a domain controller for authentication. When the domain controller
authenticates me, it has not actually given me access to any resources. Instead, it
validates that I am who I claim to be. When I go to access resources on a member
server, my computer presents a special access token to the member server that basically
says that I have been authenticated by a domain controller. The member server does not
trust me, but it does trust the domain controller. Therefore, since the domain controller
has validated my identity, the member server accepts that I am who I claim to be and
gives me access to any resources for which I have permission.
