
Communications and Networking

Most computer-to-computer connections occur through a serial port, a parallel port, or a network adapter. In this chapter, you explore ways to connect your PC to other computers. Such connections enable you to transfer and share files, send electronic mail, access software on other computers, and generally make two or more computers behave as a team.

Using Communications Ports and Devices

The basic communications ports in any PC system are the serial and parallel ports. The serial ports are used primarily for devices that must communicate bidirectionally with the system. Such devices include modems, mice, scanners, digitizers, and any other devices that "talk to" and receive information from the PC.

Several companies also manufacture communications programs that perform high-speed transfers between PC systems using serial or parallel ports. Several products are currently on the market that make nontraditional use of the parallel port. You can purchase network adapters, floppy disk drives, CD-ROM drives, or tape backup units that attach to the parallel port, for example.

Serial Ports

The asynchronous serial interface is the primary system-to-system communications port. Asynchronous means that no synchronization or clocking signal is present, so characters may be sent with any arbitrary time spacing.

Each character sent over a serial connection is framed by a standard start and stop signal. A single 0 bit, called the start bit, precedes each character to tell the receiving system that the next 8 bits constitute a byte of data. One or two stop bits follow the character to signal that the character has been sent. At the receiving end of the communication, characters are recognized by the start and stop signals instead of by the timing of their arrival. The asynchronous interface is character-oriented and has about a 20 percent overhead for the extra information needed to identify each character.
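A short sketch makes the framing and the overhead figure concrete. It assumes the common 8-N-1 format (8 data bits, no parity, 1 stop bit); the function names are only for illustration:

#include <stdio.h>

/* Frame one byte as it would appear on an asynchronous serial line:
   one start bit (0), eight data bits sent least-significant first,
   and one stop bit (1). */
static void frame_byte(unsigned char c, int bits[10])
{
    int i;

    bits[0] = 0;                     /* start bit */
    for (i = 0; i < 8; i++)
        bits[i + 1] = (c >> i) & 1;  /* data bits */
    bits[9] = 1;                     /* stop bit */
}

int main(void)
{
    int bits[10], i;

    frame_byte('A', bits);           /* 'A' = 41h */
    printf("Line bits for 'A': ");
    for (i = 0; i < 10; i++)
        printf("%d", bits[i]);
    printf("\n");

    /* 10 line bits carry 8 data bits, so 2 of every 10 bits are framing */
    printf("Framing overhead: %d percent\n", (10 - 8) * 100 / 10);
    return 0;
}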

Serial refers to data sent over a single wire, with each bit lining up in a series as the bits are sent. This type of communication is used over the phone system, because this system provides one wire for data in each direction. Figure 11.1 shows the standard 9-pin AT-style serial port, and Figure 11.2 shows the 25-pin version.

FIG. 11.1  AT-style 9-pin serial port connector.

FIG. 11.2  Standard 25-pin serial port connector.

Serial ports may connect to a variety of devices such as modems, plotters, printers, other computers, bar code readers, scales, and device control circuits. Basically, anything that needs a two-way connection to the PC uses the industry-standard Recommended Standard number 232 revision C (RS-232C) serial port. This interface enables data transfer between otherwise incompatible devices. Tables 11.1, 11.2, and 11.3 show the pinouts of the 9-pin (AT-style), 25-pin, and 9-pin-to-25-pin serial connectors.

Table 11.1  9-Pin (AT) Serial Port Connector

Pin Signal Description I/O
1 CD Carrier Detect In
2 RD Receive Data In
3 TD Transmit Data Out
4 DTR Data Terminal Ready Out
5 SG Signal Ground --
6 DSR Data Set Ready In
7 RTS Request To Send Out
8 CTS Clear To Send In
9 RI Ring Indicator In

Table 11.2  25-Pin (PC, XT, and PS/2) Serial Port Connector

Pin Signal Description I/O
1 -- Chassis Ground --
2 TD Transmit Data Out
3 RD Receive Data In
4 RTS Request To Send Out
5 CTS Clear To Send In
6 DSR Data Set Ready In
7 SG Signal Ground --
8 CD Carrier Detect In
9 -- +Transmit Current Loop Return Out
11 -- -Transmit Current Loop Data Out
18 -- +Receive Current Loop Data In
20 DTR Data Terminal Ready Out
22 RI Ring Indicator In
25 -- -Receive Current Loop Return In

Table 11.3  9-Pin to 25-Pin Serial Cable Adapter Connections

9-Pin 25-Pin Signal Description
1 8 CD Carrier Detect
2 3 RD Receive Data
3 2 TD Transmit Data
4 20 DTR Data Terminal Ready
5 7 SG Signal Ground
6 6 DSR Data Set Ready
7 4 RTS Request To Send
8 5 CTS Clear To Send
9 22 RI Ring Indicator


NOTE: Macintosh systems use a similar serial interface defined as RS-422. Most external modems can interface with either RS-232 or RS-422, but it is safest to make sure that the external modem you get for your PC is designed for a PC, not a Macintosh.

The heart of any serial port is the Universal Asynchronous Receiver/Transmitter (UART) chip. This chip completely controls the process of breaking the native parallel data within the PC into serial format, and later converting serial data back into the parallel format.

There are several types of UART chips on the market. The original PC and XT used the 8250 UART. The PC/AT and later systems use the 16450 UART. The only difference between these chips is that the 16450 is better suited to high-speed communications; otherwise, both chips appear identical to most software.

The 16550 UART was the first serial chip used in the PS/2 line. This chip could function as the earlier 16450 and 8250 chips, but it also included a 16-byte buffer that aided in faster communications. This is sometimes referred to as a FIFO (First In/First Out) buffer. Unfortunately, the 16550 also had a few bugs, particularly in the buffer area. These bugs were corrected with the release of the 16550A UART, which is used in newer high-performance serial ports.


TIP: The 16550A UART chip is pin-for-pin compatible with the 16450 UART. If your 16450 UART is socketed, installing a 16550A in the socket is a cheap and easy way to improve serial performance.

Because the 16550A is a faster, more reliable chip than its predecessors, it is best to look for serial ports that use it. If you are in doubt about which chip you have in your system, you can use the Microsoft MSD program (provided with MS-DOS 6.x and Windows) to determine the type of UART you have.

Another way to tell whether you have a 16550 UART in Windows 95 is to right-click My Computer, and then click Properties. This brings up the System Properties dialog box. Choose the Device Manager tab, Ports, and then the communications port that you want to check. Choose the Port Settings tab and then click the Advanced button. This brings up the Advanced Port Settings box. If you have a 16550 UART, there will be a check mark in the Use FIFO buffers option.

The original designer of these UARTs is National Semiconductor (NS). So many other manufacturers have produced clones of these UARTs that you probably don't have an actual NS-brand part in your system. Even so, the part you have will be compatible with one of the NS parts, ideally the 16550A. In other words, you should check that whatever UART chip you do have does indeed feature the 16-byte FIFO buffer found in the NS 16550A part.
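If you would rather check for the FIFO yourself than trust a label, the classic approach is to exercise the registers directly: try to enable the FIFO through the FIFO Control Register and see what the Interrupt Identification Register reports back, then fall back to the scratch-register test to separate the 8250 family from the 16450. The sketch below is only that--a sketch; it assumes a 16-bit DOS compiler with Borland-style inportb()/outportb() in dos.h and the standard register offsets:

#include <stdio.h>
#include <dos.h>   /* inportb()/outportb() -- Borland-style; Microsoft C uses inp()/outp() */

/* Identify the UART family by trying to enable the FIFO through the FIFO
   Control Register (base+2) and reading back the FIFO status bits in the
   Interrupt Identification Register, then using the scratch register
   (base+7) to separate the 8250 family from the 16450. */
static const char *identify_uart(unsigned base)
{
    unsigned char iir;

    outportb(base + 2, 0xE7);                /* request FIFO enable and clear */
    iir = (unsigned char)inportb(base + 2);
    outportb(base + 2, 0x00);                /* leave the FIFO switched off again */

    if ((iir & 0xC0) == 0xC0)
        return "16550A (working FIFO)";
    if (iir & 0xC0)
        return "16550 (FIFO present but unreliable)";

    outportb(base + 7, 0x2A);                /* scratch register exists on the 16450 */
    if ((unsigned char)inportb(base + 7) == 0x2A)
        return "16450";
    return "8250 series";
}

int main(void)
{
    printf("COM1 (3F8h): %s\n", identify_uart(0x3F8));
    return 0;
}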

Some manufacturers also started making integrated chips which take on the functions of multiple chips. Boca Research, for instance, produced serial and parallel cards with little more than one Integrated Circuit (IC) on them. Most of these integrated chips function as a 16550 would; however, you should make sure that they have 16550 compatibility.

Table 11.4 lists the standard UART chips used in IBM and compatible systems.

Table 11.4  UART Chips in PC or AT Systems

Chip Description
8250 IBM used this original chip in the PC serial port card. The chip has several bugs, none of which are serious. The PC and XT ROM BIOS are written to anticipate at least one of the bugs. This chip was replaced by the 8250B.
8250A Do not use the second version of the 8250 in any system. This upgraded chip fixes several bugs in the 8250, including one in the interrupt enable register, but because the PC and XT ROM BIOS expect the bug, this chip does not work properly with those systems. The 8250A should work in an AT system that does not expect the bug, but does not work adequately at 9600 bps.
8250B The last version of the 8250 fixes bugs from the previous two versions. The interrupt enable bug in the original 8250, expected by the PC and XT ROM BIOS software, has been put back into this chip, making the 8250B the most desirable chip for any non-AT serial port application. The 8250B chip may work in an AT under DOS, but does not run properly at 9600 bps.
16450 IBM selected the higher-speed version of the 8250 for the AT. Because this chip has fixed the interrupt enable bug mentioned earlier, the 16450 does not operate properly in many PC or XT systems, because they expect this bug to be present. OS/2 requires this chip as a minimum, or the serial ports do not function properly. It also adds a scratch-pad register as the highest register. The 16450 is used primarily in AT systems because of its increase in throughput over the 8250B.
16550 This newer UART improves on the 16450. This chip cannot be used in a FIFO buffering mode because of problems with the design, but it does enable a programmer to use multiple DMA channels and thus increase throughput on an AT or higher class computer system. It's recommended to replace the 16550 UART with the 16550A.
16550A This chip is a faster 16450 with a built-in 16-character Transmit and Receive FIFO buffer that works. It also allows multiple DMA channel access. You should install this chip in your AT system serial port cards if you do any serious communications at 9600 bps or higher. If your communications program makes use of the FIFO, it can greatly increase communications speed and eliminate lost characters and data at the higher speeds.

High-Speed Serial Ports

Some modem manufacturers have gone a step further in improving serial data transfer by introducing Enhanced Serial Ports (ESP) or Super High Speed Serial Ports. These ports enable a 28,800 bps modem to communicate with the computer at data rates up to 921,600 bps. The extra speed comes from increasing the buffer size. These ports are usually based on a 16550AF UART or an emulation of it, with dual 1,024-byte buffers and on-board data flow control, and can provide great benefit when both your computer and the computer on the other end of the connection are equipped with them; an ESP on only one end yields no benefit.

As the need for additional serial devices continued to increase, users began to need more than the two COM ports that were standard in PCs. As a result, multiport serial cards were created. These cards generally have 2 to 32 ports on them, and often provide higher baud rates than can be achieved on a standard serial port.

Most of the multiport serial cards use standard 16550A UARTs with a processor (typically an 80x86 based processor) and some memory. These cards can improve performance slightly because the processor is dedicated to handling serial information. However, it's not always the best method for high-performance applications.

Some of the better multiport serial cards have moved away from discrete 16550A UARTs in favor of a single integrated circuit. These cards have the advantage of higher sustainable throughput without data loss. One such card is the Rocketport by Comtrol. It comes in ISA and PCI versions with up to 32 ports, each capable of a sustained 230.4Kbps.

Various manufacturers have made versions of the 16550A; National Semiconductor was the first. Its full part number for the 40-pin DIP is NS16550AN or NS16550AFN. Make sure that the part you get is the 16550A, and not the older 16550.

Serial Port Configuration

Each time a character is received by a serial port, it has to get the attention of the computer by raising an Interrupt Request Line (IRQ). 8-bit ISA bus systems have 8 of these lines, and systems with a 16-bit ISA bus (or a newer bus system) have 16 lines. The 8259 interrupt controller chip usually handles these requests for attention. In a standard configuration, COM1 uses IRQ4, and COM2 uses IRQ3.

When a serial port is installed in a system, it must be configured to use specific I/O addresses (called ports), and interrupts (called IRQs for Interrupt ReQuest). The best plan is to follow the existing standards for how these devices should be set up. For configuring serial ports, you should use the addresses and interrupts indicated in Table 11.5.

Table 11.5  Standard Serial I/O Port Addresses and Interrupts

System COMx Port IRQ
All COM1 3F8h IRQ4
All COM2 2F8h IRQ3
ISA bus COM3 3E8h IRQ4 *
ISA bus COM4 2E8h IRQ3 *

* Note that although many serial ports can be set up to share IRQ 3 and 4 with COM1 and COM2, it is not recommended. The best recommendation is setting COM3 to IRQ 5. If ports above COM3 are required, it is recommended that you use a multiport serial board.

You should ensure that if you are adding more than the standard COM1 and COM2 serial ports, they use unique and nonconflicting interrupts. If you purchase a serial port adapter card and intend to use it to supply ports beyond the standard COM1 and COM2, be sure that it can use interrupts other than IRQ3 and IRQ4.

Another problem is that IBM never built BIOS support for COM3 and COM4 into its original ISA bus systems. Therefore, the MODE command in DOS cannot work with serial ports above COM2 because DOS gets its I/O information from the BIOS, which finds out what is installed in your system and where during the POST. The POST in these older systems checks only for the first two installed ports. PS/2 systems have an improved BIOS that checks for as many as eight serial ports, although DOS is limited to handling only four of them.

To get around this problem, most communications software and some serial peripherals (such as mice) support higher COM ports by addressing them directly, rather than making DOS function calls. The communications program Procomm, for example, supports the additional ports even if your BIOS or DOS does not. Of course, if your system or software does not support these extra ports or you need to redirect data using the MODE command, trouble arises.

Windows 95 added support for up to 128 serial ports. This allows for the use of multiport boards in the system. Multiport boards give your system the ability to collect or share data with multiple devices while using only one slot and one interrupt.

A couple of utilities enable you to add your COM port information to the BIOS data area, making the ports accessible to DOS. A program called Port Finder is one of the best. Port Finder activates the extra ports by giving the BIOS the addresses and provides utilities for swapping the addresses among the different ports. Address swapping enables programs that don't support COM3 and COM4 to access them. Software that already addresses these additional ports directly usually is unaffected.
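Utilities such as Port Finder work with the table of serial port addresses that the BIOS keeps in the BIOS Data Area during the POST. The following real-mode sketch simply reads that table; it assumes a 16-bit DOS compiler (Borland-style) that provides far pointers and MK_FP() in dos.h:

#include <stdio.h>
#include <dos.h>   /* MK_FP() and far pointers -- 16-bit Borland-style compiler assumed */

/* During the POST, the BIOS stores the base I/O address of each serial port
   it finds in the BIOS Data Area at 0040:0000, one 16-bit word per COM port
   (zero when no port was detected).  Utilities such as Port Finder work by
   filling in and swapping entries in this same table. */
int main(void)
{
    unsigned int far *com_table = (unsigned int far *)MK_FP(0x0040, 0x0000);
    int i;

    for (i = 0; i < 4; i++) {
        if (com_table[i])
            printf("COM%d recorded at %03Xh\n", i + 1, com_table[i]);
        else
            printf("COM%d not recorded by the BIOS\n", i + 1);
    }
    return 0;
}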


CAUTION: Sharing interrupts between COM ports or any other devices may work sometimes and not others. It is recommended that you never share interrupts. Doing so can cost you hours of frustration tracking down drivers, patches, and updates to make it work successfully--if it's even possible in your system.

Modem Standards

Bell Labs and the CCITT have set standards for modem protocols. CCITT is an acronym for a French term that translates into English as the Consultative Committee on International Telephone and Telegraph. The organization was renamed the International Telecommunications Union (ITU) in the early 1990s, but the protocols developed under the old name are often referred to as such. Newly developed protocols are referred to as ITU-T standards. A protocol is a method by which two different entities agree to communicate. Most newer modems conform to the CCITT standards.

The ITU is an international body of technical experts responsible for developing data communications standards for the world. The group falls under the organizational umbrella of the United Nations, and its members include representatives from major modem manufacturers, common carriers (such as AT&T), and governmental bodies. The ITU establishes communications standards and protocols in many areas, so one modem often adheres to several different standards, depending on its various features and capabilities. Modem standards can be grouped into the following three areas:

  • Modulation standards

Bell 103
Bell 212A
CCITT V.21
CCITT V.22bis
CCITT V.29
CCITT V.32
CCITT V.32bis
CCITT V.34

  • Error-correction standards

  • CCITT V.42

  • Data-compression standards

  • V.42bis

Other standards have been developed by different companies (not Bell Labs or the ITU). These are sometimes called proprietary standards, even though most of these companies publish the full specifications of their protocols so that other manufacturers can develop modems to work with them. The following list shows some of the proprietary standards that have become fairly popular:

  • Modulation

HST
PEP
DIS

  • Error correction

MNP 1-4
Hayes V-series

  • Data compression

MNP 5
CSP


56K Modems
Two competing factions emerged in the development of so-called 56K modems. US Robotics developed a "standard" which it calls X2. Rockwell and others proposed a K56Flex "standard". In 1998 the V.90 standard was released, replacing both X2 and K56Flex. See the section "56K Modems" later in this chapter for more information.

Almost all newer modems claim to be Hayes-compatible, a phrase which has come to be as meaningless as IBM-compatible when referring to PCs. It does not refer to any communication protocol, but instead to the commands required to operate the modem. Because almost every modem uses the Hayes command set, this compatibility is a given and should not really affect your purchasing decisions about modems.

Not all modems that function at the same speed have the same functionality. Many modem manufacturers produce modems that have different feature sets at different price points. The more expensive modem usually supports such features as distinctive ring support and caller ID. When purchasing a modem, be sure that it supports all the features that you need.

The basic modem commands don't vary from manufacturer to manufacturer as much as they once did. Some modems, most notably those from US Robotics, allow you to query the command set by simply sending AT$ to the modem.

The best sources of modem commands are the manuals that came with the modem.

Baud Versus Bits Per Second (bps)

Baud rate and bit rate often are confused in discussions about modems. Baud rate is the number of times per second that the signal between two devices changes. If a signal between two modems can change frequency or phase at a rate of 300 times per second, for example, that device is said to communicate at 300 baud.

Sometimes a single modulation change is used to carry a single bit. In that case, 300 baud also equals 300 bits per second (bps). If the modem could signal two bit values for each signal change, the bps rate would be twice the baud rate, or 600 bps at 300 baud. Most modems transmit several bits per baud, so that the actual baud rate is much slower than the bps rate. In fact, people usually use the term baud incorrectly. We normally are not interested in the raw baud rate, but in the bps rate, which is the true gauge of communications speed.
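The relationship is simple multiplication. The short sketch below plugs in the baud and bits-per-symbol figures quoted for several modulation standards later in this chapter; the only thing it adds is the arithmetic:

#include <stdio.h>

/* bps = baud (signal changes per second) x bits carried per signal change */
static long bps(long baud, int bits_per_symbol)
{
    return baud * bits_per_symbol;
}

int main(void)
{
    printf("Bell 103: %ld baud x 1 bit  = %ld bps\n", 300L, bps(300L, 1));
    printf("V.22bis:  %ld baud x 4 bits = %ld bps\n", 600L, bps(600L, 4));
    printf("V.32:     %ld baud x 4 bits = %ld bps\n", 2400L, bps(2400L, 4));
    printf("V.32bis:  %ld baud x 6 bits = %ld bps\n", 2400L, bps(2400L, 6));
    return 0;
}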

Modulation Standards

Modems start with modulation, which is the electronic signaling method used by the modem (from modulator to demodulator). Modems must use the same modulation method to understand each other. Each data rate uses a different modulation method, and sometimes more than one method exists for a particular rate.

The three most popular modulation methods are:

  • Frequency-Shift Keying (FSK). A form of frequency modulation, otherwise known as FM (Frequency Modulation). By causing and monitoring frequency changes in a signal sent over the phone line, two modems can send information.

  • Phase-Shift Keying (PSK). A form of phase modulation, in which the timing of the carrier signal wave is altered and the frequency stays the same.

  • Quadrature-Amplitude Modulation (QAM). A modulation technique that combines phase changes with signal-amplitude variations, resulting in a signal that can carry more information than the other methods.

Bell 103

Bell 103 is a U.S. and Canadian 300 bps modulation standard. It uses FSK modulation at 300 baud to transmit 1 bit per baud. Most higher-speed modems will still support this protocol, even though it is obsolete.

Bell 212A

Bell 212A is the U.S. and Canadian 1200 bps modulation standard. It uses Differential Phase-Shift Keying (DPSK) at 600 baud to transmit 2 bits per baud.

V.21

V.21 is an international data-transmission standard for 300 bps communications, similar to Bell 103. Because of some differences in the frequencies used, Bell 103 modems are not compatible with V.21 modems. This standard is used primarily outside the United States.

V.22

V.22 is an international 1200 bps data-transmission standard. This standard is similar to the Bell 212A standard, but is incompatible in some areas, especially in answering a call. This standard was used primarily outside the United States.

V.22bis

V.22bis is a data-transmission standard for 2400 bps communications. Bis is derived from the Latin meaning second, indicating that this data transmission is an improvement to or follows V.22. This data transmission is an international standard for 2,400 bps and is used inside and outside the United States. V.22bis uses QAM at 600 baud and transmits 4 bits per baud to achieve 2,400 bps.

V.23

V.23 is a split data-transmission standard, operating at 1,200 bps in one direction and 75 bps in the reverse direction. Therefore, the modem is only pseudo-full-duplex, meaning that it can transmit data in both directions simultaneously, but not at the maximum data rate. This standard was developed to lower the cost of 1200 bps modem technology, which was expensive in the early 1980s. This standard was used primarily in Europe.

V.29

V.29 is a data-transmission standard at 9,600 bps, which defines a half duplex (one-way) modulation technique. This standard generally is used in Group III facsimile (fax) transmissions, and only rarely in modems. Because V.29 is a half-duplex method, it is substantially easier to implement this high-speed standard than to implement a high-speed full-duplex standard. As a modem standard, V.29 has not been fully defined, so V.29 modems of different brands seldom can communicate with each other. This does not affect fax machines, which have a fully defined standard.

V.32

V.32 is a full-duplex (two-way) data transmission standard that runs at 9,600 bps. It is a full modem standard, and also includes forward error-correcting and negotiation standards. V.32 uses TCQAM (Trellis-Coded Quadrature Amplitude Modulation) at 2,400 baud to transmit 4 bits per baud, resulting in the 9,600 bps transmission speed. The trellis coding is a special forward error-correction technique that creates an additional bit for each packet of 4. This extra check bit is used to allow on-the-fly error correction to take place at the other end. It also greatly increases the resistance of V.32 to noise on the line.

In the past, V.32 has been expensive to implement because the technology it requires is complex. Because a one-way, 9600 bps stream uses almost the entire bandwidth of the phone line, V.32 modems implement echo cancellation, meaning that they cancel out the overlapping signal that their own modems transmit and just listen to the other modem's signal. This procedure is complicated and was at one time costly. Advances in lower-cost chipsets then made these modems inexpensive, and they were the de facto 9,600 bps standard for some time.

V.32bis

V.32bis is a 14,400 bps extension to V.32. This protocol uses TCQAM modulation at 2,400 baud to transmit 6 bits per baud, for an effective rate of 14,400 bps. The trellis coding makes the connection more reliable. This protocol is also a full-duplex modulation protocol, with a fallback to V.32 if the phone line is impaired. It is the communications standard for dialup lines because of its excellent performance and resistance to noise. The V.32bis-type modem is recommended.

V.32fast

V.32fast, or V.FC (Fast Class) as it is also called, was a new standard being proposed to the CCITT. V.32fast is an extension to V.32 and V.32bis, but offers a transmission speed of 28,800 bps. It has been superseded by V.34.

V.34

V.34 has superseded all the other 28.8Kbps standards. It has been proven as the most reliable standard of communication at 28.8Kbps. A later annex to the V.34 standard also defines optional higher speeds of 31.2 and 33.6Kbps, which most of the newer V.34 modems are capable of. Many existing V.34 modems designed using sophisticated Digital Signal Processors (DSPs) can be upgraded to support the 33.6Kbps speeds by merely installing a software upgrade in the modem. This is accomplished by downloading the Modem ROM upgrade from the manufacturer, and then running a program they supply to "flash" the modem's ROM with the new code.

Error-Correction Protocols

Error correction refers to the capability of some modems to identify errors during a transmission, and to automatically resend data that appears to have been damaged in transit. For error correction to work, both modems must adhere to the same correction standard. Fortunately, most modem manufacturers adhere to the same error-correction standards.

MNP 1-4

This is a proprietary standard, developed by Microcom, that provides basic error correction. The Microcom Networking Protocol (MNP) is covered in more detail in the "Proprietary Standards" section later in this chapter.

V.42

V.42 is an error-correction protocol, with fallback to MNP 4. Because the V.42 standard includes MNP compatibility through Class 4, all MNP 4-compatible modems can establish error-controlled connections with V.42 modems. This standard uses a protocol called LAPM (Link Access Procedure for Modems). LAPM, like MNP, copes with phone-line impairments by automatically retransmitting data corrupted during transmission, assuring that only error-free data passes between the modems. V.42 is considered to be better than MNP 4 because it offers about a 20 percent higher transfer rate due to its more intelligent algorithms.
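The detection half of this process rests on a frame check sequence: the sender appends a CRC to each frame, and the receiver recomputes it and requests a retransmission on any mismatch. The sketch below computes one common variant of the 16-bit CCITT CRC (polynomial 1021h); the exact bit ordering used by LAPM/HDLC framing differs, so treat this purely as an illustration of the idea:

#include <stdio.h>

/* Bit-by-bit CRC-16 using the CCITT polynomial x^16 + x^12 + x^5 + 1
   (1021h).  HDLC-derived protocols such as LAPM append a frame check
   sequence of this general kind so the receiver can spot corrupted
   frames and ask for a resend. */
static unsigned short crc16_ccitt(const unsigned char *data, int len)
{
    unsigned short crc = 0xFFFF;
    int i, bit;

    for (i = 0; i < len; i++) {
        crc ^= (unsigned short)((unsigned short)data[i] << 8);
        for (bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (unsigned short)((crc << 1) ^ 0x1021)
                                 : (unsigned short)(crc << 1);
    }
    return crc;
}

int main(void)
{
    const unsigned char frame[] = "Hello, modem";

    printf("Frame check sequence: %04X\n",
           crc16_ccitt(frame, (int)(sizeof(frame) - 1)));
    return 0;
}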

Data-Compression Standards

Data compression refers to a built-in capability in some modems to compress the data they're sending, thus saving time and money for long-distance modem users. Depending on the type of files that are sent, data can be compressed to one-fourth its original size, effectively quadrupling the speed of the modem. For example, a 14,400 bps modem with compression can yield transmission rates of up to 57,600 bps, and a 28,800 bps modem can yield up to 115,200 bps.

MNP 5

Microcom continued the development of its MNP protocols to include a compression protocol named MNP 5. This protocol is discussed more fully in the "Proprietary Standards" section later in this chapter.

V.42bis

V.42bis is a CCITT data-compression standard similar to MNP Class 5 but providing about 35 percent better compression. V.42bis is not actually compatible with MNP Class 5, but nearly all V.42bis modems include the MNP 5 data-compression capability as well. This protocol can sometimes quadruple throughput, depending on the compression technique used.

This fact has led to some mildly false advertising; for example, a 2400 bps V.42bis modem might advertise "9600 bps throughput" by including V.42bis as well, but this would be possible in only extremely optimistic cases, such as in sending text files that are very loosely packed. In the same manner, many 9600 bps V.42bis makers advertised "up to 38.4K bps throughput" by virtue of the compression. Just make sure that you see the truth behind such claims.

V.42bis is superior to MNP 5 because it analyzes the data first, and then determines whether compression would be useful. V.42bis only compresses data that needs compression. Files found on bulletin board systems often are compressed already (using PKZIP or a similar program). Further attempts at compressing already compressed data can increase the size of the file and slow things down. MNP 5 always attempts to compress the data, which slows down throughput on previously compressed files. V.42bis, however, compresses only data that will benefit from the compression.

To negotiate a standard connection using V.42bis, V.42 also must be present. Therefore, a modem with V.42bis data compression is assumed to include V.42 error correction.
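The "analyze first, compress only when it helps" idea can be illustrated with a small sketch that estimates how many bits of information each byte of a buffer actually carries and skips compression when the data already looks dense. This is only an analogy with an arbitrary threshold--the real V.42bis algorithm (BTLZ) uses an adaptive dictionary, not an entropy estimate:

#include <stdio.h>
#include <string.h>
#include <math.h>

/* Estimate the information content of a buffer in bits per byte.  Data that
   is already near 8 bits per byte (a ZIP archive, for example) will not
   shrink further and is better sent as-is. */
static double bits_per_byte(const unsigned char *buf, size_t len)
{
    unsigned long count[256] = {0};
    double h = 0.0;
    size_t i;

    for (i = 0; i < len; i++)
        count[buf[i]]++;

    for (i = 0; i < 256; i++) {
        if (count[i]) {
            double p = (double)count[i] / (double)len;
            h -= p * log(p) / log(2.0);
        }
    }
    return h;
}

int main(void)
{
    const char *text = "AAAAABBBCC AAAAABBBCC AAAAABBBCC";
    double h = bits_per_byte((const unsigned char *)text, strlen(text));

    if (h < 6.0)   /* arbitrary cutoff for this illustration */
        printf("%.2f bits/byte: worth compressing\n", h);
    else
        printf("%.2f bits/byte: already dense, send as-is\n", h);
    return 0;
}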

Proprietary Standards

In addition to the industry-standard protocols for modulation, error correction, and data compression that generally are set forth or approved by the ITU-T, several protocols in these areas were invented by various companies and included in their products without any official endorsement by any standards body. Some of these protocols have been quite popular at times and became pseudo-standards of their own.

The most successful proprietary protocols are the MNP (Microcom Networking Protocols) that were developed by Microcom. These error-correction and data-compression protocols are supported widely by other modem manufacturers as well.

Another company that has been successful in establishing proprietary protocols as limited standards is US Robotics, with its HST (High-Speed Technology) modulation protocols. Because of an aggressive marketing campaign with BBS operators, it captured a large portion of the market with its products in the 1980s.

This section examines these and other proprietary modem protocols.

HST

The HST is a 14,400 bps and 9,600 bps modified half-duplex proprietary modulation protocol used by US Robotics. HST modems run at 9,600 bps or 14,400 bps in one direction, and 300 or 450 bps in the other direction. This is an ideal protocol for interactive sessions. Because echo-cancellation circuitry is not required, costs are lower. US Robotics also marketed modems that used the standard protocols as well as their proprietary standard. These dual standard modems incorporated both V.32bis and HST protocols, giving you the best of the standard and proprietary worlds and enabling you to connect to virtually any other system at the maximum communications rate. They were at one time among the best modems available.

DIS

The DIS is a 9,600 bps proprietary modulation protocol by CompuCom, which uses Dynamic Impedance Stabilization (DIS), with claimed superiority in noise rejection over V.32. Implementation appeared to be very inexpensive, but like HST, only one company made modems with the DIS standard. Because of the lower costs of V.32 and V.32bis, this proprietary standard became obsolete.

MNP

MNP offers end-to-end error correction, meaning that the modems are capable of detecting transmission errors and requesting retransmission of corrupted data. Some levels of MNP also provide data compression. As MNP evolved, different classes of the standard were defined, describing the extent to which a given MNP implementation supports the protocol. Most implementations support Classes 1 through 5. Higher classes usually are unique to modems manufactured by Microcom, Inc., because they are proprietary. MNP generally is used for its error-correction capabilities, but MNP Classes 4 and 5 also provide performance increases, with Class 5 offering real-time data compression. The lower classes of MNP usually are not important to you as a modem user, but they are included in the following list for the sake of completeness:

  • MNP Class 1 (block mode) uses asynchronous, byte-oriented, half-duplex (one-way) transmission. This method provides about 70 percent efficiency and error correction only, so it's rarely used.

  • MNP Class 2 (stream mode) uses asynchronous, byte-oriented, full-duplex (two-way) transmission. This class also provides error correction only. Because of protocol overhead (the time it takes to establish the protocol and operate it), throughput at Class 2 is only about 84 percent of that for a connection without MNP, delivering about 202 cps (characters per second) at 2,400 bps (240 cps is the theoretical maximum). Class 2 is used rarely.

  • MNP Class 3 incorporates Class 2 and is more efficient. It uses a synchronous, bit-oriented, full-duplex method. The improved procedure yields throughput about 108 percent of that of a modem without MNP, delivering about 254 cps at 2,400 bps.

  • MNP Class 4 is a performance-enhancement class that uses Adaptive Packet Assembly and Optimized Data Phase techniques. Class 4 improves throughput and performance by about 5 percent, although actual increases depend on the type of call and connection, and can be as high as 25 to 50 percent.

  • MNP Class 5 is a data-compression protocol that uses a real-time adaptive algorithm. It can increase throughput up to 50 percent, but the actual performance of Class 5 depends on the type of data being sent. Raw text files allow the highest increase, although program files cannot be compressed as much and the increase is smaller. On precompressed data (files already compressed with ARC, PKZIP, and so on), MNP 5 decreases performance, and therefore is often disabled on BBS systems.

V-Series

The Hayes V-series is a proprietary error-correction protocol by Hayes that was used in some of its modems. Since the advent of lower-cost V.32 and V.32bis modems (even from Hayes), the V-series has all but become extinct. These modems used a modified V.29 protocol, which is sometimes called a ping-pong protocol because it has one high-speed channel and one low-speed channel that alternate back and forth.

CSP

The CSP (CompuCom Speed Protocol) is an error-correction and data-compression protocol available on CompuCom DIS modems.

FAXModem Standards

Facsimile technology is a science unto itself, although it has many similarities to data communications. These similarities have led to the combination of data and faxes into the same modem. Over the years, the CCITT has set international standards for fax transmission. This has led to the grouping of faxes into one of four groups. Each group (I through IV) uses different technology and standards for transmitting and receiving faxes. Groups I and II are relatively slow and provide results that are unacceptable by the newer standards. Group III is the standard in use by virtually all regular fax machines, including those combined with modems. Whereas Groups I through III are analog in nature (similar to modems), Group IV is digital and designed for use with ISDN or other digital networks.

Group III Fax

There are two general subdivisions within the Group III fax standard--Class 1 and Class 2. Many times you will hear about a FAXModem supporting Group III, Class 1 fax communications. This simply indicates which protocols the board is able to send and receive. If your FAXModem does this, it can communicate with most of the other fax machines in the world. In FAXModems, the Class 1 specification is implemented by an additional group of modem commands that the modem translates and acts upon. Earlier you learned about the V.29 modulation standard. As stated in that section, this standard is used for Group III fax transmissions.

Modem Recommendations

Normally it's recommended that you purchase an internal modem if your computer has space for it; for troubleshooting purposes, however, an external modem is preferable because its status LEDs let you watch what the modem is doing. External modems are also recommended if you use obsolete or non-standard operating systems. Some internal modems work only with Windows 95 or higher operating systems (the so-called Winmodems), or only in PCs with Pentium MMX or newer processors. Internal modems are also more prone to resource conflicts (for example, you can encounter a memory conflict with your VGA adapter), and they usually need an extra IRQ. On the other hand, internal modems usually include a high-speed UART on the modem card, thus eliminating the need to upgrade any older, slower UARTs you may have in your PC. If you use an external modem, be sure that the port you attach it to has an appropriate UART.

Integrated Services Digital Network (ISDN)

ISDN modems make the break from the old technology of analog data transfer to the newer digital data transfer. Digital technology allows you to send voice, data, images, and faxes simultaneously over the same pair of wires at up to 128Kbps. ISDN requires different telephone wiring and service from the telephone company. You will also have to purchase an ISDN modem.


CAUTION: When purchasing an ISDN modem, you will almost always want to purchase an internal version. An ISDN modem with compression can easily exceed a serial port's ability to reliably send and receive data. Consider that even a moderate 2:1 compression ratio exceeds the 230.4Kbps maximum speed that most high-speed COM ports support.

ISDN modems have three separate channels. Two of the channels are called B channels; these are the data-carrying channels and are 64Kbps each. The third channel is the D channel, which is 16Kbps. The slower D channel is used for routing and handling information.

To be technically precise, ISDN devices are not "modems". Modems modulate digital signals so they can be transmitted over an analog phone line and then demodulate the signal back to digital form for the computer. ISDN runs over an entirely digital telephone network, so there is no need for the modulation and demodulation processes. The most common type of ISDN device for a PC is called a terminal adapter. ISDN can be implemented as either a serial device or as a network interface. Using a network type interface eliminates the bottleneck at the computer's serial port. This type of ISDN terminal adapter may be the preferred solution for reasons of performance, even if you have only one computer and don't need the other services provided by a network.

56K Modems

56K modems represent a special category of analog modem communications. They allow downstream communications--those going from the host to the client--at up to 56Kbits/sec. That is nearly double the 28.8Kbps rate and roughly two-thirds faster than the 33.6Kbps standard that preceded it.

To understand how this additional speed was achieved, you need to understand a few basic principles of modem technology. In a traditional modem connection, information is converted from digital form to analog so that it can travel over the Public Switched Telephone Network (PSTN), and is finally converted back to a digital signal at the receiving end.

This conversion from digital to analog and back causes some speed loss. Even though the phone line is capable of carrying about 56K of information, the effective maximum speed after these conversions is about 33.6Kbits. Claude Shannon formulated a law (Shannon's Law) relating the maximum data rate of an analog channel to its bandwidth and signal-to-noise ratio; for a typical analog phone circuit, that limit works out to roughly 33.6K.
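Shannon's Law expresses the limit in terms of the line's bandwidth and signal-to-noise ratio. As a rough illustration (the bandwidth and noise figures below are assumptions, not measurements of any particular line), the arithmetic lands close to the familiar ceiling:

#include <stdio.h>
#include <math.h>

/* Shannon's Law: capacity = bandwidth * log2(1 + S/N), with the
   signal-to-noise ratio expressed as a power ratio, not in decibels. */
int main(void)
{
    double bandwidth_hz = 3100.0;   /* assumed usable voice-band width */
    double snr_db = 35.0;           /* assumed line quality */
    double snr = pow(10.0, snr_db / 10.0);
    double capacity = bandwidth_hz * log(1.0 + snr) / log(2.0);

    /* With these assumptions the limit comes out near 36,000 bps, which is
       why purely analog modem connections topped out around 33.6Kbps. */
    printf("Theoretical capacity: about %.0f bps\n", capacity);
    return 0;
}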

However, Shannon's Law assumes that the telephone network is entirely analog. That is no longer the case in most telephone networks. Most circuits are digital until they reach the CO (Central Office) to which your phone line is connected. The CO converts the digital signal into an analog signal before sending it to your home.

Because the phone system is largely digital, you can--in some cases--eliminate the digital-to-analog conversion on the host's side of the connection.

If the host modem is connected digitally, the 33.6K restriction described by Shannon's Law no longer applies to the downstream direction, and data can be transmitted at the full 56K capacity of the phone line from the host to you. The other direction, from your computer to the host, still operates at the 33.6K speed.

There are some very specific requirements to make 56K modems work. They are:

  • There can be only one digital-to-analog conversion in the network. This means that the connections between your CO and the CO which services the host must be all digital.

  • The host must be connected digitally. This means that one end of the connection must be attached to the PSTN through a digital line rather than an ordinary analog local loop.

  • Both modems must support the 56K technology. This means that both modems must support the same 56K technology (X2, K56Flex, or V.90).

Three different standards have been developed for 56K modems. US Robotics created a standard called X2, while Rockwell and others proposed a standard called K56Flex. These standards were not compatible, and the result was a standards battle. In 1998, the ITU (International Telecommunication Union), formerly called the CCITT, approved the V.90 standard to replace both X2 and K56Flex.

The V.90 protocol is a small improvement over X2, and a little more of an improvement over K56Flex. It can handle poor line conditions a bit better, and it maintains more stability once the connection is made. V.90 can adjust the speed of the connection to the quality of the line. When the connection is made, it tries to get a feel for the line quality over a period of time so it can adjust to the optimum speed.

Parallel Ports

A parallel port has eight lines for sending all the bits that comprise 1 byte of data simultaneously. This interface is fast and has traditionally been used for printers. Programs that transfer data between systems have also long offered the parallel port as an option, because even a standard port can move data at least 4 bits at a time, rather than 1 bit at a time as a serial interface does.

In the following section, we'll look at how these programs transfer data between parallel ports. The only problem with parallel ports is that their cables cannot be extended for any great length without amplifying the signal; otherwise, errors occur in the data. Table 11.6 shows the pinout for a standard PC parallel port.

Table 11.6  25-Pin PC-Compatible Parallel Port Connector

Pin Description I/O
1 -Strobe Out
2 Data 0 Out
3 Data 1 Out
4 Data 2 Out
5 Data 3 Out
6 Data 4 Out
7 Data 5 Out
8 Data 6 Out
9 Data 7 Out
10 -Acknowledge In
11 Busy In
12 Paper End In
13 Select In
14 -Auto Feed Out
15 -Error In
16 -Initialize Printer Out
17 -Select Input Out
18 Data 0 Ground --
19 Data 1 Ground --
20 Data 2 Ground --
21 Data 3 Ground --
22 Data 4 Ground --
23 Data 5 Ground --
24 Data 6 Ground --
25 Data 7 Ground --
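To see how the pins in Table 11.6 are used in practice, the following sketch performs the classic polled handshake a driver or the BIOS uses for printing: wait for the printer to drop Busy, put the byte on Data 0-7, and pulse -Strobe through the control register. It assumes a Borland-style DOS compiler for inportb()/outportb() and LPT1 at 378h; a real driver would add timeouts and check the error lines:

#include <stdio.h>
#include <dos.h>   /* inportb()/outportb() -- Borland-style DOS compiler assumed */

/* Send one byte over the pins listed in Table 11.6: wait until the printer
   drops Busy, place the byte on Data 0-7, then pulse -Strobe through the
   control register.  Busy (status bit 7) and Strobe (control bit 0) are
   both inverted by the port hardware. */
static void lpt_send_byte(unsigned base, unsigned char c)
{
    unsigned char ctrl;

    while (!(inportb(base + 1) & 0x80))  /* status bit 7 = 1 means "not busy" */
        ;                                /* a real driver would time out here */

    outportb(base, c);                   /* data latch drives pins 2-9 */

    ctrl = (unsigned char)inportb(base + 2);
    outportb(base + 2, ctrl | 0x01);     /* assert -Strobe (pin 1) */
    outportb(base + 2, ctrl & ~0x01);    /* release it; back-to-back ISA writes
                                            are slow enough for the strobe width */
}

int main(void)
{
    const char *msg = "Hello from the parallel port\r\n";

    while (*msg)
        lpt_send_byte(0x378, (unsigned char)*msg++);  /* LPT1 assumed at 378h */
    return 0;
}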

Over the years, several types of parallel ports have evolved. Some of them are IBM-specific, while others can be found in any PC-compatible system. Here are the primary types of parallel ports found in PC systems:

  • Unidirectional (4-bit)

  • Bidirectional (8-bit) Type 1

  • Bidirectional (8-bit DMA) Type 3 (IBM specific)

  • Enhanced Parallel Port (EPP)

  • Enhanced Capabilities Port (ECP)

The following sections discuss each of these types of parallel ports.

Unidirectional (4-bit)

The original IBM PC did not have different types of parallel ports available. The only port available was the parallel port used to send information from the computer to a device, such as a printer. This is not to say that bidirectional parallel ports were not available; indeed, they were common in other computers on the market and in hobbyist computers at the time. The unidirectional nature of the original PC parallel port is consistent with its primary use--that is, of sending data to a printer. There were times, however, when it was desirable to have a bidirectional port--for example, when you need feedback from a printer, which is common with PostScript printers. This could not be done with the original unidirectional ports.

Although it was never intended to be used for input, a clever scheme was devised where four of the signal lines could be used as a 4-bit input connection. Thus these ports can do 8-bit byte output and 4-bit (nibble) input. Systems built after 1993 are likely to have more capable parallel ports, such as 8-bit, EPP, or ECP. Four-bit ports are capable of effective transfer rates of about 40-60K/sec with typical devices and can be pushed to upwards of 140K/sec with certain design tricks.
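To make the nibble trick concrete, here is a minimal sketch of how software reads four incoming bits through the status register at base+1. It assumes a Borland-style DOS compiler for inportb(), LPT1 at the usual 378h, and a LapLink-style cable (see Table 11.8 later in this chapter) driving those status lines from the other machine's data outputs:

#include <stdio.h>
#include <dos.h>   /* inportb() -- Borland-style DOS compiler assumed */

/* On a standard unidirectional port, input is improvised by reading the
   status register at base+1.  Bits 3-6 reflect the -Error, Select,
   Paper End, and -Acknowledge pins, which a LapLink-style cable wires to
   four of the other machine's data lines; Busy (bit 7, inverted by the
   hardware) is left over to use as a handshake line. */
static unsigned char read_nibble(unsigned base)
{
    return (unsigned char)((inportb(base + 1) >> 3) & 0x0F);
}

int main(void)
{
    printf("Nibble currently on the LPT1 status lines: %X\n",
           (unsigned)read_nibble(0x378));
    return 0;
}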

Bidirectional (8-bit) Type 1

With the introduction of the PS/2 in 1987, IBM introduced the bidirectional parallel port. These are commonly found in PC-compatible systems, and may be designated "PS/2 type," "bidirectional," or "extended" parallel port. This port design opened the way for true communications between the computer and the peripheral across the parallel port. This was done by defining a few of the previously unused pins in the parallel connector, and defining a status bit to indicate in which direction information was traveling across the channel. In IBM documentation, this original PS/2 port became known as a Type 1 parallel port.

Other vendors also introduced third-party ports that were compatible with the Type 1 port. These ports can usually be configured in both standard and bidirectional modes, and unless you specifically configure the port for bidirectional use, it will function just like the original unidirectional port. This configuration is normally done with the CMOS SETUP or configuration program that accompanies your system. Most systems built since 1991 have this capability, although many do not enable it as a default setting.

These ports can do both 8-bit input and output using the standard eight data lines, and are considerably faster than the 4-bit ports when used with external devices. 8-bit ports are capable of speeds ranging from 80-300K/sec, depending on the speed of the attached device, the quality of the driver software, and the port's electrical characteristics. These ports are also largely not supported by software because they were almost universally installed in PS/2 machines and not standard PC-compatible machines.

Bidirectional (8-bit DMA) Type 3

With the introduction of the PS/2 Models 57, 90, and 95, IBM introduced the Type 3 parallel port. This was a special bidirectional port that featured greater throughput through the use of DMA techniques. This port was specifically used in IBM systems only, and was not found in other PC compatibles. You may be wondering why IBM skipped from Type 1 to Type 3. In reality, they did not. There is a Type 2 parallel port, and it served as a predecessor to the Type 3. It is only slightly less capable, but was never used widely in any IBM systems. The Type 3 bidirectional parallel port also never gained enough industry acceptance to obtain good driver or software support.

Enhanced Parallel Port (EPP)

EPP is a newer specification sometimes referred to as the Fast Mode parallel port. The EPP was developed by Intel, Xircom, and Zenith Data Systems and announced in October 1991. The first products to offer EPP were ZDS laptops, Xircom Pocket LAN Adapters, and the Intel 82360 SL I/O chip. EPP operates almost at ISA bus speed, and offers a 10-fold increase in the raw throughput capability over a conventional parallel port. EPP is especially designed for parallel port peripherals such as LAN adapters, disk drives, and tape backups. EPP has been included in the IEEE 1284 Parallel Port standard. Transfer rates of 1 to 2M/sec are possible with EPP.

Since the original Intel 82360 SL I/O chip in 1992, other major chip vendors (such as National Semiconductor, SMC, Western Digital, and VLSI) have also produced I/O chipsets offering some form of EPP capability. One problem is that the procedure for enabling EPP across the various chips differs widely from vendor to vendor, and many vendors offer more than one I/O chip.

EPP version 1.7 (March 1992) identifies the first popular version of the hardware specification. With minor changes, this has since been abandoned and folded into the IEEE 1284 standard. Some technical reference materials have erroneously made reference to "EPP specification version 1.9," causing confusion about the EPP standard. Note that version 1.9 does not exist, and any EPP specification after the original version 1.7 is technically a part of the IEEE 1284 specification. Unfortunately, this has resulted in two somewhat incompatible standards for EPP parallel ports: the original EPP Standards Committee version 1.7 standard, and the IEEE 1284 Committee standard. The two standards are sufficiently similar that new peripherals may be designed in such a way as to support both standards, but existing EPP 1.7 peripherals may not operate with IEEE 1284 ports.

EPP ports were more common with IBM machines than with other hardware manufacturers who seemed to stay away from the printer port issue until the Enhanced Capabilities Port (ECP) was introduced by Microsoft and Hewlett-Packard (HP). However, because the EPP port is defined in the IEEE 1284 standard, it has gained software and driver support, including support in Windows NT.

Enhanced Capabilities Port (ECP)

Another type of high-speed parallel port called the ECP (Enhanced Capabilities Port) was jointly developed by Microsoft and Hewlett-Packard and formally announced in 1992. Like EPP, ECP offers improved performance for the parallel port and requires special hardware logic. Since the original announcement, ECP has been included in IEEE 1284 just like EPP.

Unlike EPP, ECP is not tailored to support portable PCs' parallel port peripherals; its purpose is to support an inexpensive attachment to a very high-performance printer. Further, ECP mode requires the use of a DMA channel, which EPP did not define, and which can cause troublesome conflicts with other devices that use DMA. Most PCs with newer "super I/O" chips are able to support either EPP or ECP mode. In most cases, the ECP port can be switched into EPP or standard unidirectional mode through the BIOS setup. However, it's recommended that the port be left in ECP mode for the best throughput.

IEEE 1284

The IEEE 1284 standard, "Standard Signaling Method for a Bidirectional Parallel Peripheral Interface for Personal Computers," was approved for final release in March 1994. This standard defines the physical characteristics of the parallel port, including data transfer modes and physical and electrical specifications. IEEE 1284 defines the electrical signaling behavior external to the PC for a multimodal parallel port that may support nibble (4-bit), byte (8-bit), EPP, and ECP modes of operation. Not all modes are required by the IEEE 1284 specification, and the standard makes some provision for additional modes.

The IEEE 1284 specification is targeted at standardizing the behavior between a PC and an attached device, most specifically attached printers, although the specification is of interest to vendors of parallel port peripherals (disks, LAN adapters, and so on).

IEEE 1284 is a hardware and line control-only standard and does not define how software should talk to the port. An offshoot of the original IEEE 1284 standard has been created to define the software interface. The IEEE 1284.3 committee was formed to develop a standard for software used with IEEE 1284-compliant hardware. This standard, designed to address the disparity among providers of parallel port chips, contains a specification for supporting EPP mode via the PC's system BIOS.

IEEE 1284 allows for much higher throughput in a connection between a computer and a printer, or between two computers. As a result, the cable is no longer the simple printer cable of the past: an IEEE 1284 printer cable uses twisted-pair construction, the same technology that allows Category 5 cabling to carry speeds up to 100Mbps.

IEEE 1284 also defines a new connector, which most people aren't familiar with. A type A connector in the IEEE 1284 standard is defined as a DB25 connector. A type B connector is defined as a Centronics 36 connector. The new connector, referred to as type C, is a high-density connector. The three connectors are shown in Figure 11.3.

FIG. 11.3  The three different IEEE 1284 connectors.

Upgrading to EPP/ECP Ports

If you have an older system that does not include an EPP/ECP port and you would like to upgrade, there are several expansion boards with the correct Super I/O chips that implement these features. Many newer printers have to be connected to a bidirectional printer port. Other printers can be configured to work with unidirectional printer ports, but some advanced functions (like paper/ink status) will be disabled.

Parallel-Port Configuration

Parallel-port configuration is not as complicated as it is for serial ports. Even the original IBM PC has BIOS support for three LPT ports, and DOS has always had this support as well. Table 11.7 shows the standard I/O address and interrupt settings for parallel port use.

Table 11.7  Parallel Interface I/O Port Addresses and Interrupts

System Standard LPTx Alternative LPTx I/O Port IRQ
8/16-bit ISA LPT1 -- 3BCh IRQ7
8/16-bit ISA LPT1 LPT2 378h IRQ5
8/16-bit ISA LPT2 LPT3 278h None

Because the BIOS and DOS always have provided three definitions for parallel ports, problems with older systems are infrequent. Problems can arise, however, from the lack of available interrupt-driven ports for ISA bus systems. Normally, an interrupt-driven port is not absolutely required for printing operations; in fact, many programs do not use the interrupt-driven capability. Many programs do use the interrupt, however, such as network print programs and other types of background or spooler-type printer programs.

Also, high-speed laser-printer utility programs often use the interrupt capabilities to allow for background printing. If you use these types of applications on a port that is not interrupt driven, you see the printing slow to a crawl, if it works at all. The only solution is to use an interrupt-driven port. Windows 95 supports up to 128 parallel ports.

To configure parallel ports in ISA bus systems, you probably will have to set jumpers and switches. Because each board is different, you always should consult the OEM manual for that particular card if you need to know how the card should be configured.

Parallel Port Devices

The original IBM PC designers envisioned that the parallel port would be used only for communicating with a printer. Over the years, the number of devices that can be used with a parallel port has increased tremendously. You can find everything from tape backup units to LAN adapters to CD-ROMs that connect through your parallel port. Some modem manufacturers have modems that connect to the parallel port instead of the serial port for faster data transfer.

Perhaps one of the most common uses for bidirectional parallel ports is to transfer data between your system and another, such as a laptop computer. If both systems use an EPP/ECP port, you can actually communicate at rates of up to 2M/sec.

Connecting two computers with standard unidirectional parallel ports requires a special cable. Most programs sell or provide these cables with their software. However, if you need to make one for yourself, Table 11.8 provides the wiring diagram you need.

Table 11.8  Parallel Port Interlink/Lap Link Cable Wiring

25-Pin Signal Description Signal Description 25-Pin
Pin 2 Data 0 <--> -Error Pin 15
Pin 3 Data 1 <--> Select Pin 13
Pin 4 Data 2 <--> Paper End Pin 12
Pin 5 Data 3 <--> -Acknowledge Pin 10
Pin 6 Data 4 <--> Busy Pin 11
Pin 15 -Error <--> Data 0 Pin 2
Pin 13 Select <--> Data 1 Pin 3
Pin 12 Paper End <--> Data 2 Pin 4
Pin 10 -Acknowledge <--> Data 3 Pin 5
Pin 11 Busy <--> Data 4 Pin 6
Pin 25 Ground <--> Ground Pin 25


TIP: Even though cables are most often provided with data transfer programs, notebook users may want to look for an adapter that makes the appropriate changes to a standard parallel cable. This can make traveling lighter by eliminating the need for additional cables. Most of the time, these adapters attach to the Centronics end of the cable and provide a standard DB25 connection on the other end. They're sold under a variety of names; however, LapLink adapter or LapLink converter are the most common.

While the wiring configuration and premade interlink cables given in Table 11.8 will work for connecting two machines with ECP/EPP ports, they won't be able to take advantage of the advanced transfer rates of these ports. Special cables are needed to communicate between ECP/EPP ports. Parallel Technologies is a company that sells ECP/EPP cables for connecting to other ECP/EPP computers, and also sells a universal cable for connecting any two parallel ports together to use the highest speed.

Testing Serial Ports

You can perform several tests on serial and parallel ports. The two most common types of tests involve software only, or both hardware and software. The software-only tests are done with diagnostic programs such as Microsoft's MSD, while the hardware and software tests involve using a wrap plug to perform loopback testing.

Microsoft Diagnostics (MSD)

MSD is a diagnostic program supplied with MS-DOS 6.x, Microsoft Windows, or Windows 95. Early versions of the program also were shipped with some Microsoft applications such as Microsoft Word for DOS.

To use MSD, switch to the directory in which it is located. This is not necessary, of course, if the directory containing the program is in your search path--which is often the case with the DOS 6.x or Windows-provided versions of MSD. Then, simply type MSD at the DOS prompt and press Enter. Soon you see the MSD screen.

Choose the Serial Ports option. MSD reports the type of serial chip in your system, as well as which ports are available. If any of the ports are in use (by a mouse, for example), that information is provided as well.

MSD is helpful in at least determining whether your serial ports are responding. If MSD cannot detect a port, it simply does not list it. This sort of "look-and-see" test is the first action you should take to determine why a port is not responding.

Windows 95 also shows whether or not your ports are functioning. To check your ports, right-click My Computer and choose Properties. Choose the Device Manager tab. On the Device Manager screen, if a device is not working properly there will be an exclamation point in a yellow circle next to the device on the list. You can also double-click Ports (COM & LPT), and then double-click the desired port to see whether Windows 95 says that the port is functioning or not. In many cases, it tells you what is conflicting with that specific port.

Advanced Diagnostics Using Loopback Testing

One of the most useful tests is the loopback test, which can be used to verify the correct operation of the serial port, as well as any attached cables. Loopback tests are either internal (digital) or external (analog). Internal tests can be run simply by unplugging any cables from the port and executing the test via a diagnostics program. The external loopback test is more effective. This test requires that a special loopback connector or wrap plug be attached to the port in question. When the test is run, the port sends data out to the loopback plug, which simply routes the data back into the port's receive pins, so the port is transmitting and receiving at the same time. A loopback or wrap plug is nothing more than a cable doubled back on itself. Most diagnostics programs that run this type of test include the loopback plug; if not, these plugs can be purchased easily or even built. (A simple software loopback check you can improvise yourself appears after the wiring lists below.)

The wiring that is needed to construct your own loopback or wrap plugs is as follows:

  • IBM 25-Pin Serial (Female DB25S) Loopback Connector (Wrap Plug). Connect these pins:

1 to 7
2 to 3
4 to 5 to 8
6 to 11 to 20 to 22
15 to 17 to 23
18 to 25

  • IBM 9-Pin Serial (Female DB9S) Loopback Connector (Wrap Plug). Connect these pins:

1 to 7 to 8
2 to 3
4 to 6 to 9
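
If you would rather exercise the port with software you write yourself, the following sketch shows the idea of an external loopback test in a few lines of Python. It assumes a wrap plug (wired as above) is attached to the port and that the third-party pyserial package is installed; neither is part of the diagnostic products mentioned earlier, and the port name will differ from system to system.

import serial   # third-party pyserial package (assumed installed)

# Open the port under test; adjust the name (COM1, COM2, /dev/ttyS0, ...) as needed.
port = serial.Serial("COM1", baudrate=9600, timeout=2)

pattern = bytes(range(256))            # every possible byte value
port.write(pattern)                    # transmit pins -> wrap plug -> receive pins
echoed = port.read(len(pattern))
port.close()

if echoed == pattern:
    print("Loopback OK: all 256 byte values came back intact.")
else:
    print("Loopback FAILED: only %d of %d bytes returned." % (len(echoed), len(pattern)))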

If you need to test serial ports further, refer to Chapter 20 - Software and Hardware Diagnostic Tools, and Chapter 21 - Operating Systems Software and Troubleshooting, which describe third-party testing software and operating system-based testing procedures, respectively.

Testing Parallel Ports

Testing parallel ports is, in most cases, simpler than testing serial ports. The procedures are effectively the same as those used for serial ports, except that when you run the diagnostics software, you select the parallel port options rather than the serial port options.

Not only are the software tests similar, but the hardware tests require the proper plugs for the loopback tests on the parallel port. To create an IBM 25-Pin Parallel (Male DB25P) Loopback Connector (Wrap Plug), connect these pins:

1 to 13
2 to 15
10 to 16
11 to 17

If you want to test the parallel ports in a system, especially to determine what type they are, you can use a utility called Parallel. This parallel port information utility examines your system's parallel ports and reports the port type, I/O address, IRQ level, BIOS name, and an assortment of informative notes and warnings in a compact, easy-to-read display. The output may be redirected to a file for technical support purposes. Parallel uses sophisticated techniques for port and IRQ detection and is aware of a broad range of quirky port features.

Serial and Parallel Port Replacements

Two high-speed serial-bus architectures for desktop and portable PCs are now available: the Universal Serial Bus (USB) and IEEE 1394. These are high-speed communications ports that far outstrip the capabilities of the standard serial and parallel ports, and they may be used as an alternative to SCSI for high-speed peripheral connections. In addition to performance, these new ports offer I/O device consolidation, meaning all types of external peripherals can connect to them.

An important trend in high-performance peripheral bus design is to use a serial architecture, where one bit is sent at a time down a wire. Parallel architecture uses 8, 16, or more wires to send bits simultaneously. At the same clock speed, the parallel bus is faster; however, it is much easier to increase the clock speed of a serial connection than a parallel one.

Parallel connections suffer from several problems, the biggest being signal skew and jitter. Skew and jitter are the reasons that high-speed parallel buses like SCSI are limited to short distances of three meters or less. The problem is that although the 8 or 16 bits of data leave the transmitter at the same time, propagation delays cause some bits to arrive at the receiver before the others. The longer the cable, the longer the time between the arrival of the first and last bits at the other end. This signal skew, as it is called, prevents you from using a high transfer rate, a longer cable, or both. Jitter is the tendency of a signal to oscillate above and below its target voltage for a short period after each transition.

With a serial bus, the data is sent one bit at a time. Because there is no worry about when each bit will arrive, the clocking rate can be increased dramatically.

With a high clock rate, parallel signals tend to interfere with each other. Serial again has an advantage in that, with only one or two signal wires, crosstalk and interference between the wires in the cable are negligible.

Parallel cables are also expensive. Besides the many wires needed to carry the bits in parallel, the cable must be specially constructed to prevent crosstalk and interference between adjacent data lines. This is one reason external SCSI cables are so expensive. Serial cables, on the other hand, are very inexpensive: they have very few wires, and the shielding requirements are far simpler, even at very high speeds.

These reasons, along with the need for Plug and Play external peripheral interfaces and the desire to eliminate physical port crowding on portable computers, are why these new high-performance serial buses were developed. Both USB and IEEE 1394 are available on desktop and portable PCs.

USB (Universal Serial Bus)

In 1995, the Universal Serial Bus (USB) was designed as a convenient method of connecting a variety of peripherals to a system. Intel has been the primary proponent of USB, and most of its PC chipsets, starting with the Triton II (82430HX and VX), include USB support as standard. Six other companies co-developed USB with Intel: Compaq, Digital, IBM, Microsoft, NEC, and Northern Telecom. Together these companies have established the USB Implementers Forum to develop, support, and promote the USB architecture.

The bus supports up to 127 devices and uses a tiered star topology built on expansion hubs that can reside in the PC, in any USB peripheral, or in stand-alone hub boxes. Devices connect either directly to a port on the PC or through a USB hub, which has a number of USB sockets and itself plugs into the PC or another hub. Up to seven peripherals can be attached to each hub, any of which can be another hub to which up to seven more peripherals can be connected, and so on. Each cable between devices is limited to a length of 5 meters (3 meters when an unshielded cable is used). Figure 11.4 shows the shielded USB cable, while Figure 11.5 shows the two types of USB connectors.

FIG. 11.4  The shielded USB cable.

FIG. 11.5  USB connector Type A and Type B.
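
The arithmetic behind the 127-device limit and the seven-port hubs described above is easy to picture if you treat the tiered star as a tree of hubs. The sketch below is purely an illustration of that arithmetic: with seven ports per hub, the number of available attachment points grows so quickly that the bus-wide limit of 127 device addresses, rather than the hub fan-out, is what caps a real installation.

# Fan-out of a tiered star built from 7-port hubs, starting at the root hub in the PC.
PORTS_PER_HUB = 7
MAX_DEVICES = 127                      # bus-wide limit, including the hubs themselves

hubs_on_tier = 1                       # the root hub
ports_so_far = 0
for tier in range(1, 4):
    new_ports = hubs_on_tier * PORTS_PER_HUB
    ports_so_far += new_ports
    print("tier %d adds %3d ports (%4d total)" % (tier, new_ports, ports_so_far))
    hubs_on_tier = new_ports           # worst case: every new port holds another hub

print("...but the bus still allows only %d devices in total." % MAX_DEVICES)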

USB 1.1 has a maximum data transfer rate of 12Mbit/sec (1.5M/sec). USB 2.0 (released in 2000) has a maximum data transfer rate of 480Mbit/sec (60M/sec).

USB also conforms to Intel's Plug and Play (PnP) specification, including hot plugging, which means that devices can be plugged in dynamically without powering down or rebooting the system. Simply plug in the device, and the USB controller in the PC detects it and automatically allocates the required resources and drivers. Microsoft has developed USB drivers and includes them in later releases of Windows 95 and in subsequent versions of Windows. USB support is also required in the BIOS, and it is included on newer systems with USB ports built in.

Aftermarket USB boards can be installed to add USB to an existing system. Most boards have an on-board ROM that allows USB peripherals to function under DOS, while the built-in drivers handle USB under Windows.

USB peripherals include virtually all external devices, like monitors, modems, joysticks, keyboards, scanners, webcams, printers and pointing devices. One interesting feature of USB is that small attached devices can be powered by the USB bus itself.


NOTE: For more information about the USB specification, refer to Chapter 5 - Bus Slots and I/O Cards.

FireWire (IEEE 1394)

FireWire (IEEE 1394) is a high-speed local serial bus, published by the IEEE Standards Board in late 1995. This bus was derived from the "FireWire" bus originally developed by Apple and Texas Instruments, and is also a part of the newer Serial SCSI standard.

IEEE 1394 is fully Plug and Play, including the ability for hot plugging (insertion and removal of components without powering down). IEEE 1394 uses a daisy-chained and branched topology and allows up to 63 nodes, with a chain of up to 16 devices on each node. Buses can also be bridged, so more than 64,000 nodes can be connected!

FireWire uses a simple six-wire cable with two differential pairs of clock and data lines plus two power lines. Individual FireWire cables can run as long as 4.5 meters, and data can be sent through up to 16 hops for a total maximum distance of 72 meters. Hops occur when devices are daisy-chained together. Figure 11.6 shows the FireWire cable, while Figure 11.7 shows the FireWire connector.

FIG. 11.6  The FireWire cable.

FIG. 11.7  The FireWire connector.

The types of devices connected to the PC via IEEE 1394 include practically anything that might otherwise use SCSI. This includes all forms of disk drives, such as hard disk, optical, floppy, CD-ROM, and DVD (Digital Versatile Disc) drives, as well as digital cameras, tape drives, and many other high-speed peripherals with built-in IEEE 1394 interfaces.


NOTE: For more information about the IEEE 1394 specification, refer to Chapter 5 - Bus Slots and I/O Cards.

Understanding the Components of a LAN

A local area network (LAN) enables you to share files, applications, printers, disk space, modems, faxes, and CD-ROM drives; use client/server software products; send electronic mail; and otherwise make a collection of computers work as a team.

There are many ways to construct a LAN. The simplest is two computers connected via their serial or parallel ports. Many users connect their laptop to their desktop computer in this way for access to a printer or to transfer files. This type of connection is usually called a direct cable connection, in which one computer is designated as the host computer, the machine with the resources you want to access. The guest computer is the one that wants to use the resources of the host. You can use special software to connect two computers in this manner, but some operating systems, such as DOS and Windows 95, have direct cable connection support built in. Although the term network is not often used for this sort of arrangement, it does satisfy the definition.

Peer-to-peer networks have become more popular as the software became more reliable and personal computers became more powerful. Peer-to-peer means computer to computer. In a peer-to-peer network, any computer can access any other computer to which it is connected and has been granted access rights. Essentially, every computer functions as both a server and a client. Peer-to-peer networks can be as small as two computers, or as large as hundreds of units, and they may or may not use a LAN card or network interface card (NIC). For more than two stations, or when higher data transfer speeds are desired, NICs should be used.

Peer-to-peer networks are more common in small offices or within a department in a larger organization. The advantage of a peer-to-peer network is that you don't have to dedicate a computer to be a file server. Most peer-to-peer networks allow you to share practically any device attached to any computer. The potential disadvantages to a peer-to-peer network are that there is typically less security and less control.

Windows 95 has peer-to-peer networking built in. With Windows 95, setting up a peer-to-peer LAN can be accomplished in two ways. The first method is to install the Dial-Up Networking modules. Dial-Up Networking requires a Windows 95-compatible server, such as the Windows 95 dial-up server in the Plus! package, or Windows NT. Dial-Up Networking allows the remote system (the one dialing in) to access the server and any peripherals attached to the server to which the remote user has been given rights. These peripherals can be CD-ROM drives, tape drives, removable media drives, hard drives, and even another network. Dial-Up Networking can use several transport protocols. IPX/SPX are the network transport protocols used in NetWare and other networks. NetBEUI is the NetBIOS (Network Basic Input Output System) Extended User Interface; it is the native protocol of Microsoft Windows networks. Most other networks (such as UNIX systems and the Internet) use the TCP/IP protocol.

The other method of peer-to-peer networking is much like that with which we all became familiar in Windows for Workgroups, but it is much easier to set up in Windows 95. With the new PnP technology incorporated into the operating system, most NICs are automatically detected. Supported NIC manufacturers include 3COM, Digital Equipment Corporation, IBM, Intel, Madge, Novell, Proteon, Racal, SMC, and Thomas-Conrad. Once the NIC is detected, Windows 95 asks for a computer name and a workgroup name. Once this is accomplished, your Windows 95 network workstation is ready to go.

A LAN is a combination of computers, LAN cables (usually), network adapter cards, network operating system software, and LAN application software. (You sometimes see network operating system abbreviated as NOS.) On a LAN, each personal computer is called a workstation, except for one or more computers designated as network servers. Each workstation and server contains a network adapter card. LAN cables connect all the workstations and servers, except in less frequent cases when infrared, radio, or microwaves are used.

A network in which the workstations connect only to servers (as opposed to each other, as in a peer-to-peer) is called a client/server network. In addition to its local operating system (for example DOS or one of the Windows operating systems), each workstation runs network software (client software) that enables the workstation to communicate with the servers. Windows 95 itself contains the client software necessary to connect to Novell NetWare, IBM OS/2 LAN Server, and Windows NT networks. In turn, the servers run network software (server software) that communicates with the workstations and serves up files and other services to those workstations. LAN-aware application software runs at each workstation, communicating with the server when it needs to read and write files. Figure 11.8 illustrates the components that make up a LAN.

FIG. 11.8  The components of a LAN.

Workstations

A LAN is made up of computers. You usually find two kinds of computers on a LAN: the workstations, usually operated by people at their individual desks, and the servers, usually located in a secured area, such as a separate room or closet. A workstation is used only by the person sitting in front of it, whereas a server allows many people to share its resources. Workstations often have good-quality video adapters and monitors, as well as high-quality keyboards, but these are characteristics that make them easy to use; they are not required to make the LAN work. A workstation also usually has a relatively small, slow, inexpensive hard disk.

Many existing networks operate very well with older machines, however. Some sites even continue to use diskless workstations--that is, computers that do not have a disk drive of their own. Such workstations rely completely on the LAN for their file access. A diskless workstation requires a NIC with an autoboot PROM. This type of ROM causes the workstation to boot from files stored on a network server.

The advantages of this type of workstation are lower hardware cost and greater security, because there are no local drives with which to copy files to or from the server. The primary disadvantage is that newer operating systems do not run efficiently from a network drive. The sheer number of program files opened and closed, as well as the need for frequent swapping of memory to hard disk space, makes the practice prohibitive.

File Servers

All the workstations on a peer-to-peer LAN can function as file servers, in that any drive on any peer workstation can be shared with (or served to) other users.

On a client/server network, however, a file server is a computer that serves all the workstations, primarily by storing and retrieving data from files shared on its disks. Most servers have inexpensive monitors and keyboards, because people do not use the file server console as heavily as that of a workstation. The server normally operates unattended, however, and almost always has one or more fast, expensive, large hard disks.

Servers must be high-quality, heavy-duty machines because, in serving the whole network, they do many times the work of an ordinary workstation computer. In particular, the file server's hard disk(s) need to be durable and reliable, and geared to the task of serving multiple users simultaneously. For this reason, SCSI hard drives are usually preferred over IDE drives (see Chapter 15 - Hard Disk Interfaces, for more information on IDE versus SCSI).

You will most often see a computer wholly dedicated to the task of being a server. Sometimes, on smaller LANs, the server doubles as a workstation, depending on the network operating system being used. Serving an entire network is a big job that does not leave much spare horsepower to handle workstation duties, however, and if a user locks up the workstation that also serves as the file server, your network also locks up.

Under a heavy load, if there are 20 workstations and one server, each workstation can use only 1/20th of the server's resources. In practice, though, most workstations are idle most of the time--at least from a network disk-file-access point of view. As long as no other workstation is using the server, your workstation can use 100 percent of the server's resources.

Evaluating File Server Hardware

A typical file server consists of a personal computer that you dedicate to the task of sharing disk space, files, and possibly printers. On a larger network, you may use a personal computer especially built for file server work (a superserver), but the basic components are the same as those of a desktop PC. No matter what sort of computer you choose as a server, it communicates with the workstations through the LAN.

A file server does many times the work of an ordinary workstation. You may type on the server's keyboard only a couple of times a day, and you may glance at its monitor only infrequently. The server's CPU and hard disk drives, however, take the brunt of responding to the file-service requests of all the workstations on the LAN.

If you consider your LAN an important part of your investment in your business, you will want to get the highest quality computer you can afford for the file server. The hard disk drives should be large and fast, although in some cases the highest capacity drive available is not necessarily the best choice. When you consider that the server will be processing the file requests of many users simultaneously, it can be more efficient to have, for example, nine 2G SCSI hard drives rather than one 18G drive. That way, the requests can be spread across several different units, rather than queued up waiting for one device.

Performance is important, of course, but the most crucial consideration in purchasing a server is that the CPU, the motherboard on which the CPU is mounted, and the hard disk drives should be rugged and reliable. Do not skimp on these components.

Downtime (periods when the server is not operating) can be expensive because people cannot access their shared files to get their work done. Higher-quality components will keep the LAN running without failure for longer periods of time.

It is very important that you configure your server properly. Be sure that you have enough slots for all your present adapters and any future adapters that you can anticipate. It is also very important that you follow the RAM and hard drive sizing guidelines for your network operating system.

In the same vein, you will want to set up a regular maintenance schedule for your file server. Over the course of a few weeks, the fans within the computer move great volumes of air through the machine to keep it cool. The air may contain dust and dirt, which accumulates inside the computer (or in the filter elements). You should clean out the dust in the server or the filter elements on a regular basis. Chapter 3 - Inspection of the System, discusses how to clean out the "dust bunnies" without harming your system. Many larger network sites house their servers in rooms or closets designed to maintain low dust and static levels as well as constant temperatures.

You do not replace components in the server as part of your regular preventive maintenance, but you will want to know whether a part is beginning to fail. You may want to acquire diagnostic software or hardware products to periodically check the health of your file server.

The electricity the file server gets from the wall outlet may, from time to time, vary considerably in voltage (resulting in sags and spikes). To make your file server as reliable as possible, you should install an uninterruptible power supply (UPS) between the electric power source and your server. The UPS not only provides electricity in case of a power failure, but also conditions the line to protect the server from voltage fluctuations.

In general, you want to do whatever you can to make your network reliable, including placing the server away from public access areas.

Evaluating the File Server Hard Disks

The hard disk drives are the most important components of a file server. The hard disks are where the people who use the LAN store their files. To a large extent, the reliability, access speed, and capacity of a server's hard disks determine whether people will be happy with the LAN and will be able to use it productively. The most common bottleneck in the average LAN is disk I/O time at the file server. And the most common complaint voiced by people on the average LAN is that the file server has run out of free disk space. Make sure that your file server's disk drives and hard disk controller are high-performance components, and that you always have plenty of free disk space on your server's drives.

Evaluating the File Server CPU

The file server CPU tells the hard disk drives what to store and retrieve. The CPU is the next most important file server component after the hard disks. Unless your LAN will have only a few users and will never grow, a file server with a fast CPU is a wise investment.

The CPU chip in a computer executes the instructions given to it by the software you run. If you run an application, that application runs more quickly if the CPU is fast. Likewise, if you run a network operating system, that NOS runs more quickly if the CPU is fast. Some NOSes absolutely require certain types of CPU chips. NetWare version 2, for example, requires at least a 286 CPU. NetWare versions 3 and 4 require at least a 386. IBM LAN Server version 2 and Microsoft LAN Manager version 2 require that OS/2 1.3 be running on the server computer; OS/2 1.3 requires an 80286 or later CPU. LAN Server 3.0 requires that the file server use OS/2 2.x, which runs only on 386 or later CPUs. Microsoft Windows NT Advanced Server 3.51 requires a 386DX25 or later CPU and 16M of RAM. These are, of course, the absolute minimum CPU requirements. Exceeding them is a practice that is highly recommended, for any of these products.

Evaluating Server RAM

The network operating system loads into the computer's RAM, just like any other application. You need to have enough RAM in the computer for the NOS to load and run properly. On a peer-to-peer LAN, the recommended amount of RAM would be whatever it takes to run your applications, whereas on a client/server LAN, you might install a lot more RAM in your file server. Windows 95 in a peer-to-peer environment should have a minimum of 16M of RAM. Windows NT should have more. The proper amount of RAM for a server-based LAN operating system like NetWare is calculated using a formula that accounts for the software you will be running and the capacity and configuration of your disk drives. Be sure to follow the operating system manufacturer's RAM recommendations carefully, or severe performance problems may result.

You can realize significant performance gains in a NetWare server with a faster CPU and extra RAM because of a process called file caching. If the server has sufficient memory installed, it can "remember" those portions of the hard disk that it has accessed previously. When the next user asks for the same file represented by those portions of the hard disk, the server can send it to the next user without having to actually access the hard disk. Because the file server is able to avoid waiting for the hard disk to rotate into position, the server can do its job more quickly. The NOS merely needs to look in the computer's RAM for the file data that a workstation has requested. Thus, you can be assured that any extra memory installed in your server will be put to beneficial use.
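
File caching is easy to picture as a lookup table in RAM keyed by disk location. The following sketch is a deliberately simplified model of the idea just described, not NetWare's actual caching algorithm: the first request for a block goes to the disk, and later requests for the same block are answered from memory.

# A toy read cache: block number -> block contents held in server RAM.
class FileCache:
    def __init__(self, read_from_disk):
        self.read_from_disk = read_from_disk   # slow path: the actual disk access
        self.blocks = {}                       # fast path: blocks already in RAM

    def read(self, block_number):
        if block_number not in self.blocks:    # cache miss: wait for the disk
            self.blocks[block_number] = self.read_from_disk(block_number)
        return self.blocks[block_number]       # cache hit: no disk rotation to wait for

# Two workstations request the same block; only the first request touches the disk.
disk_accesses = []
def slow_disk_read(block):
    disk_accesses.append(block)
    return b"contents of block %d" % block

cache = FileCache(slow_disk_read)
cache.read(42)                                 # first user: read from disk
cache.read(42)                                 # second user: served from RAM
print("The disk was accessed %d time(s)." % len(disk_accesses))   # prints 1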

Note that the NOS's caching of file data is distinct from (and in addition to) any caching that might occur due to the hard disk or hard disk controller card having on-board memory.

Evaluating the Network Adapter Card

The server's network adapter card is its link to all the workstations on the LAN. All file requests and other network communications enter and leave the server through the network adapter. Figure 11.9 shows a network adapter you might install in a file server. As you can imagine, the network adapter in a server is a very busy component.

FIG. 11.9  The file server's network adapter sends and receives messages to and from all the workstations on the LAN.

All the network adapters on the LAN use Ethernet, Token Ring, ARCnet, or some other low-level protocol. Within each of these protocols, however, some network adapters perform better than others. A network adapter may be faster at processing messages because it has a large amount of on-board memory (RAM), because it contains its own microprocessor, or because it uses a faster bus slot and thus can transfer more data to and from the CPU at one time.

Evaluating the Server's Power Supply

In a file server, the power supply is an important but often overlooked item. Power supply failures and malfunctions can cause problems elsewhere in the computer that are difficult to diagnose. Your file server may display a message indicating that a RAM chip has failed, and then stop; the cause of the problem may indeed be a failed RAM chip, or the problem may be in the power supply.

The fan(s) in the power supply sometimes stop working or become obstructed with dust and dirt. The computer overheats and fails completely or acts strangely. Cleaning the fan(s) should be a part of the regular maintenance of your file server.

Evaluating the Keyboard, Monitor, and Mouse

The keyboard, monitor, and mouse (if any) are usually not significant components on a file server computer, because they receive far less use than their workstation counterparts. Often you can use lower-quality, less-expensive components here. A typical file server runs unattended and may go for days or weeks without interaction from you. You can power off the monitor for these long periods.


CAUTION: Tuck the server keyboard away so that falling objects (pencils or coffee mugs, for example) do not harm your network's file server.

If your network server has a tape drive, be sure it is easily accessible. When the backup of the server is complete, be sure to remove the tape and store it in a safe place.

Network Interface Cards (NICs)

A network interface card, or NIC, fits into a slot in each workstation and file server. Most newer computers ship with network interface hardware embedded on the motherboard, but you may prefer to select your own. Your workstation sends requests through the network adapter to the server. The workstation then receives responses through the network adapter when the server delivers all or a portion of a file to that workstation. The sending and receiving of these requests and responses is the LAN equivalent of reading and writing files on your PC's local hard disk. If you're like most people, you probably think of reading and writing files in terms of loading or saving your work.

A typical LAN consists of only a single data channel connecting its various computers. This is called a baseband network. As a result of this, only two network adapters can communicate with each other at the same time. If one person's workstation is currently accessing the file server (processing the requests and responses that deliver a file to the workstation), then other users' workstations must wait their turn. Fortunately, such delays are usually not noticeable. The LAN gives the appearance of many workstations accessing the file server simultaneously.

Older Ethernet adapters have a single BNC connector (for Thinnet), a D-shaped 15-pin connector called a DB15 or AUI connector (for Thicknet), a connector that looks like a large telephone jack called an RJ45 (10BaseT), or sometimes a combination of all three. Newer Ethernet adapters have only a single RJ45 connector for 10BaseT, 100BaseT and 1000BaseT. Token Ring adapters can have a D-shaped 9-pin connector called a DB9, an RJ45 connector, or a combination of those connectors. Figure 11.10 shows a high-performance Token Ring adapter with both kinds of connectors.

FIG. 11.10  The Thomas-Conrad 16/4 Token Ring adapter (with a DB9 connector and a RJ45 connector).

Cards with two or more connectors enable you to choose from a wider variety of LAN cables. A Token Ring card with two connectors, for example, enables you to use shielded twisted pair (STP) or unshielded twisted pair (UTP) cable. You cannot use both connectors at the same time, however, except on special adapters designed specifically for this purpose. Normally, you select which connector is in use with jumpers or DIP switches on the network adapter. Newer cards, however, can detect the connector in use and the correct network speed automatically.


Shielded versus Unshielded Twisted Pair
When cabling was being developed for use with computers, it was first thought that shielding the cable from external interference was the best way to reduce interference and allow for greater transmission speeds. However, it was discovered that twisting the pairs of wires is a more effective way to prevent interference from disrupting transmissions. As a result, earlier cabling scenarios relied on shielded cables rather than the unshielded cables used later.

Shielded cables also have some special grounding concerns, because one--and only one--end of a shielded cable should be connected to an earth ground; problems arose when people inadvertently created ground loops by connecting both ends, or caused the shield to act as an antenna because it wasn't grounded at all. A ground loop occurs when two different grounds are tied together. This is a bad situation because each ground can have a slightly different potential, resulting in a circuit with a very low voltage but a potentially very high circulating current. This causes undue stress on electrical components and can be a fire hazard.

The LAN adapter card in your PC receives all the traffic going by on the network cable and accepts only the messages destined for your workstation (on a ring network, it also passes the rest along to the next machine). The adapter hands these messages over to your workstation when the workstation is ready to attend to them. When the workstation wants to send a request to a server, the adapter card waits for the appropriate time (according to the network type) and inserts your message into the data stream. The workstation is also notified as to whether the message arrived intact, and it resends the message if it was garbled.

Network adapters vary widely in price, but what do you get for your money? Primarily, speed. The faster adapters can push data onto the cable faster, which means that the file server receives a request more quickly and sends back a response more quickly.


Data-Transfer Speeds on a LAN
Electrical engineers and technical people measure the speed of a network in megabits per second (Mbps). Because a byte of information consists of 8 bits, you can divide the Mbps rating by 8 to find out how many millions of characters (bytes) per second the network can handle theoretically.

In practice, a LAN is slower than its rated speed. In fact, a LAN is no faster than its slowest component. If you were to transfer data from one workstation's hard disk to the file server, the elapsed time would include not only the transmission time but also the workstation hard disk retrieval time, the workstation processing time, and the file server's hard disk and server CPU processing times. The transfer rate of your hard disk, which in this case is probably the slowest component involved in the copying of the data to the server, governs the rate at which data flows to the file server. Other people's requests interleave with your requests on the LAN, and the total transfer time may be longer because the other people are using the LAN at the same time you are.
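
As a quick illustration of the arithmetic above, the following lines convert a few common rated speeds into theoretical bytes per second; for the reasons just described, real-world throughput is always lower.

# Theoretical throughput: megabits per second divided by 8 bits per byte.
def mbps_to_million_bytes(mbps):
    return mbps / 8.0

for rated in (2.5, 4, 10, 16, 100):    # ARCnet, Token Ring, Ethernet, Token Ring, Fast Ethernet
    print("%5.1f Mbps = %6.3f million bytes/sec (theoretical)"
          % (rated, mbps_to_million_bytes(rated)))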

ARCnet Adapters

ARCnet is one of the oldest types of LAN hardware. It originally was a proprietary scheme of the Datapoint Corporation. By the newer standards, ARCnet is very slow, but it is forgiving of minor errors in installation. It is known for solid reliability, and ARCnet cable/adapter problems are easy to diagnose. ARCnet operates something like Token Ring, but at the slower rate of 2.5Mbps. The section "Token Ring Adapters" later in this chapter explains the basic principles on which ARCnet and Token Ring work.

Ethernet Adapters

The most widely used type of network adapter is Ethernet. Ethernet-based LANs allow you to interconnect a wide variety of equipment, including UNIX workstations, Apple computers, IBM PCs, and IBM clones. Ethernet exists in three basic varieties (Thicknet, Thinnet, and UTP), depending on the type of cabling you use. Thicknet cables can span a greater distance (500 meters), but they are much more expensive. Thinnet can span 185 meters. Like Thicknet, Thinnet operates at 10Mbps. The UTP standards can operate at 10Mbps, 100Mbps (Fast Ethernet), and 1000Mbps (Gigabit Ethernet). Fiber-optic standards were developed to span greater distances and for use in environments with a great deal of interference.

Between data transfers (requests and responses to and from the file server), Ethernet LANs remain quiet. After a workstation sends a request across the LAN cable, the cable falls silent again. What happens when two or more workstations (and/or file servers) attempt to use the LAN at the same time?

Suppose that one of the workstations wants to request something from the file server, just as the server is sending a response to another workstation. A collision occurs. (Remember that only two computers can communicate through the cable at a given moment.) Both computers--the file server and the workstation--back off and try again. Ethernet network adapters use an algorithm called Carrier Sense, Multiple Access with Collision Detection (CSMA/CD) to deal with collisions, causing each computer to back off for a random amount of time. This method effectively enables one computer to go first. A certain number of collisions are therefore normal and expected on an Ethernet network, but with higher amounts of traffic, the frequency of collisions rises higher and higher, and response times become worse and worse. A saturated Ethernet network actually can spend more time recovering from collisions than it does sending data. IBM and Texas Instruments, recognizing Ethernet's traffic limitations, designed the Token-Ring network to solve the problem.
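
The random backoff is easy to mimic in a few lines. The sketch below is only a cartoon of the behavior just described, with made-up time units; real Ethernet adapters follow precise slot-time and backoff rules rather than a simple random draw.

import random

# Two stations have just collided; each backs off for a random time and retries.
def resolve_collision(stations=("workstation", "file server")):
    delays = {name: random.uniform(0.0, 1.0) for name in stations}
    winner = min(delays, key=delays.get)       # whoever picked the shorter wait goes first
    return delays, winner

delays, winner = resolve_collision()
for name, delay in sorted(delays.items(), key=lambda item: item[1]):
    print("%s retries after %.3f time units" % (name, delay))
print("This time, the %s transmits first." % winner)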

Token Ring Adapters

Except for fiber-optic and some of the newer high-speed technologies, Token Ring is the most expensive type of LAN. Token Ring can use STP or UTP cable. Token Ring's cost is justified, however, when workstations generate a great deal of traffic, because under normal conditions collisions are all but eliminated. For several years, Token Ring was the best choice for large LANs in large corporations. Because of the faster pace of development (and the increasing speed) of the Ethernet standards, however, most Token Ring networks have since been migrated to Ethernet. Token Ring can operate at 4, 16, or 100Mbps.

Workstations on a Token Ring LAN continuously pass an electronic token among themselves. The token is just a short message indicating that the workstation or server possessing it is allowed to transmit. If a workstation has nothing to send, it passes the token to the next downstream workstation as soon as it receives it. Only when a workstation holds the token can it transmit data onto the LAN; after it transmits, it passes the token down the line again. If the LAN is busy and you want your workstation to send a message to another workstation or server, you must wait patiently for the token to come around. Only then can your workstation send its message. The message circulates through all the workstations and file servers on the LAN and eventually winds its way back to you, the sender. The sender then generates a new token, releasing control of the network to the next workstation. As the message circulates around the ring, the workstation or server that is the designated recipient recognizes that the message is addressed to it and begins processing that message, but it still passes the message on to the next workstation.

Token Ring is not as wasteful of LAN resources as this description makes it sound. An unclaimed token takes almost no time at all to circulate through a LAN, even with 100 or 200 workstations. It is also possible to assign priorities to certain workstations and file servers so that they get more frequent access to the LAN. And, of course, the token-passing scheme is much more tolerant of high traffic levels on the LAN than the collision-prone Ethernet.
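
A few lines of code make the token-passing rule concrete. The sketch below is a simplification of the scheme described above (it ignores the circulation of the data frame back to its sender): the token simply moves from station to station, and only the current holder may transmit.

# Stations on the ring, in downstream order, with the frames each one wants to send.
stations = {"A": ["print job"], "B": [], "C": ["file request"], "D": []}
order = list(stations)

holder = 0
for _ in range(2 * len(order)):                # let the token circulate the ring twice
    name = order[holder]
    if stations[name]:                         # only the token holder may transmit
        frame = stations[name].pop(0)
        print("%s holds the token and sends its %s" % (name, frame))
    else:
        print("%s has nothing to send and passes the token along" % name)
    holder = (holder + 1) % len(order)         # pass the token to the downstream neighbor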


Early Token Release
On a momentarily idle Token Ring LAN, workstations circulate a token. The LAN becomes busy (carries information) when a workstation receives a token and turns it into a data frame targeted at another computer on the network. After receipt by the target node, the data frame continues circulating around the LAN until it is returned to its source node. The source node turns the data frame back into a token that circulates until a downstream node needs it. So far, so good--these are just standard Token Ring concepts.

When a workstation sends a file request to a server, it consists of only a few bytes, far fewer than the transmission that actually returns the file to the workstation. If the request packet must go into and out of many workstations to circulate the ring, and if the data frame is small, latency occurs. Latency is the unproductive delay that occurs while the source node waits for its upstream neighbor to return its data frame. During the latency period, the source node appends idle characters onto the LAN following the data frame until the frame circulates the entire LAN and arrives back at the source node. The typical latency period of a 4Mbps ring will result in the transmission of about 50 to 100 idle characters. On a 16Mbps ring, latency may reach 400 or more bytes worth of LAN time.

Early Token Release, available only on 16Mbps networks, is a feature that allows the originating workstation to transmit a new token immediately after sending its data frame. Downstream nodes pass along the data frame and then receive an opportunity to transmit data themselves--the new token. If you were to perform a protocol analysis of a network using Early Token Release, you would see tokens and other data frames immediately following the file request, instead of a long trail of idle characters.

Sometimes a station fumbles and "drops" the token. LAN stations monitor each other and use a complex procedure called beaconing to detect the location of the problem and regenerate a lost token. Token Ring is quite a bit more complicated than Ethernet, and the hardware is correspondingly more expensive.

ARCnet and Token Ring are not compatible with one another, but ARCnet uses a similar token-passing scheme to control workstation and server access to the LAN.

Adapter Functions

As mentioned earlier in this section, network adapters generally are either collision-sensing or token-passing. A network adapter's design ties it to one of the low-level protocols--Ethernet, Token Ring, FDDI, ARCnet, or some other protocol.

Collision-sensing and token-passing adapters contain sufficient on-board logic to know when it is permissible to send a frame and to recognize frames intended for them. Together with the adapter support software, both types of cards perform seven major steps during the process of sending or receiving a frame. When sending data out from the card, the steps are performed in the order presented in the following list; when receiving data, the steps are reversed. Here are the steps:

1. Data transfer. Data is transferred from PC memory (RAM) to the adapter card or from the adapter card to PC memory via DMA, shared memory, or programmed I/O.

2. Buffering. While being processed by the network adapter card, data is held in a buffer. The buffer gives the card access to an entire frame at once, and the buffer enables the card to manage the difference between the data rate of the network and the rate at which the PC can process data.

3. Frame formation. The network adapter has to break up the data into manageable chunks (or, on reception, reassemble it). On an Ethernet network, these chunks are about 1,500 bytes. Token Ring networks generally use a frame size of about 4K. The adapter prefixes the data packet with a frame header and appends a frame trailer to it. The header and trailer are the Physical layer's envelope. At this point, a complete, ready-for-transmission frame exists. (Inbound, on reception, the adapter removes the header and trailer at this stage.)

4. Cable access. In a CSMA/CD network such as Ethernet, the network adapter ensures that the line is quiet before sending its data (or retransmits its data if a collision occurs). In a token-passing network, the adapter waits until it gets a token it can claim. (These steps are not significant to receiving a message, of course.)

5. Parallel/serial conversion. The bytes of data in the buffer are sent or received through the cables in serial fashion, with one bit following the next. The adapter card does this conversion in the split second before transmission (or after reception).

6. Encoding/decoding. The electrical signals that represent the data being sent or received are formed. Ethernet adapters use a technique called Manchester encoding, while Token Ring adapters use a slightly different scheme called Differential Manchester. These techniques have the advantage of incorporating timing information into the data through the use of bit periods. Instead of representing a 0 as the absence of electricity and a 1 as its presence, the 0s and 1s are represented by changes in polarity that occur at precise points within each bit period. (A short sketch of Manchester encoding appears after this list.)

7. Sending/receiving impulses. The electrically encoded impulses making up the data (frame) are amplified and sent through the wire. (On reception, the impulses are handed up to the decoding step.)
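
To make step 6 more concrete, here is a minimal sketch of Manchester encoding, using the convention Ethernet follows (a 1 is a low-to-high transition at mid-bit, a 0 is a high-to-low transition). Each data bit becomes two half-bit signal levels, so every bit period contains a transition from which the receiver can recover both the data and the timing.

# Manchester encoding: each bit becomes two half-bit levels with a mid-bit transition.
def manchester_encode(bits):
    levels = []
    for bit in bits:
        levels += [0, 1] if bit else [1, 0]    # 1 -> low-to-high, 0 -> high-to-low
    return levels

def manchester_decode(levels):
    return [levels[i + 1] for i in range(0, len(levels), 2)]   # second half carries the bit

data = [1, 0, 1, 1, 0, 0, 1, 0]
encoded = manchester_encode(data)
assert manchester_decode(encoded) == data
print(encoded)   # [0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]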

Of course, the execution of all of these steps takes only a fraction of a second. While you were reading about these steps, thousands of frames could have been sent across the LAN.

Network adapter cards and the support software recognize and handle errors, which occur when electrical interference, collisions (in CSMA/CD networks), or malfunctioning equipment cause some portion of a frame to be corrupted. Errors generally are detected through the use of a cyclic redundancy check (CRC) data item in the frame. The CRC is checked by the receiver; if its own calculated CRC doesn't match the value of the CRC in the frame, the receiver tells the sender about the error and requests retransmission of the frame in error.
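
The following sketch ties together two of the ideas above: the data is split into frame-sized chunks (step 3), each chunk gets a header and a CRC trailer, and the receiver recomputes the CRC to decide whether to ask for a retransmission. The header layout here is invented purely for illustration; real Ethernet and Token Ring frames are laid out quite differently.

import struct
import zlib

MAX_DATA_PER_FRAME = 1500                    # roughly the Ethernet chunk size mentioned above

def build_frames(src, dst, payload):
    """Split payload into frames of [header][data][CRC trailer] (illustrative layout only)."""
    frames = []
    for offset in range(0, len(payload), MAX_DATA_PER_FRAME):
        chunk = payload[offset:offset + MAX_DATA_PER_FRAME]
        header = struct.pack("!6s6sH", src, dst, len(chunk))   # sender, destination, length
        crc = zlib.crc32(header + chunk) & 0xFFFFFFFF
        frames.append(header + chunk + struct.pack("!I", crc))
    return frames

def receive_frame(frame):
    """Recompute the CRC; a mismatch means the frame was garbled and must be resent."""
    body, received_crc = frame[:-4], struct.unpack("!I", frame[-4:])[0]
    if zlib.crc32(body) & 0xFFFFFFFF != received_crc:
        return None                                            # ask the sender to retransmit
    src, dst, length = struct.unpack("!6s6sH", body[:14])
    return body[14:14 + length]

frames = build_frames(b"STN-01", b"SERVER", b"A" * 4000)       # 4,000 bytes become 3 frames
print(len(frames), "frames, all intact:",
      all(receive_frame(f) is not None for f in frames))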

The different types of network adapters vary not only in access method and protocol, but also in the following elements:

  • Transmission speed

  • Amount of on-board memory for buffering frames and data

  • Bus design

  • Bus speed

  • Compatibility with various CPU chipsets

  • DMA usage

  • IRQ and I/O port addressing

  • Intelligence

  • Connector design

LAN Cables

Generally speaking, the cabling systems described in the next few sections use one of three distinct cable types: twisted pair cable, either shielded or unshielded (also known as STP and UTP); coaxial cable, either thin or thick; and fiber-optic cable.

The kind of cable you use depends mostly on the kind of network layout you select, the conditions at the network site, and of course your budget.

Using Twisted Pair Cable

Twisted pair cable is just what its name implies: insulated wires within a protective casing, twisted around each other a specified number of times per foot. Twisting the wires reduces the effect of electromagnetic interference on the signals being transmitted. Shielded twisted pair (STP) adds a conductive shield around the cluster of wires, increasing its noise immunity; unshielded twisted pair (UTP) omits the shield and is the most commonly used network cable. Figure 11.11 shows unshielded twisted pair cable; Figure 11.12 illustrates shielded twisted pair cable.

FIG. 11.11  An unshielded twisted pair cable.

FIG. 11.12  A shielded twisted pair cable.

Using Coaxial Cable

Coaxial cable is similar to the cable that connects television sets and audio equipment. Thin and thick, of course, refer to the diameter of the coaxial cable. Thick Ethernet (Thicknet) cable is as thick as your thumb, while Thin Ethernet (Thinnet) cable is slightly narrower than your little finger. The thick cable has a greater degree of noise immunity, is more difficult to damage, and requires a vampire tap (a connector with teeth that pierce the tough outer insulation) and a drop cable to connect to a workstation. Although thin cable carries the signal over shorter distances than the thick cable, Thinnet uses a simple BNC (Bayonet-Neill-Concelman) connector (a bayonet-locking connector for thin coaxial cables), is lower in cost, and was at one time the standard for office coaxial cable. Thinnet is wired directly to the back of each computer on the network and generally installs much more easily than Thicknet, but it is more prone to signal interference and physical connection problems. Both cables are now obsolete, because they can operate only at 10Mbps, while the newer network standards use higher speeds.

Figure 11.13 shows an Ethernet BNC coaxial T-connector, and Figure 11.14 illustrates the design of coaxial cable.

FIG. 11.13  An Ethernet coaxial cable T-connector.

FIG. 11.14 Coaxial cable.

Using Fiber-Optic Cable

Fiber-optic cable uses pulses of light rather than electricity to carry information. It is therefore completely resistant to the electromagnetic interference that limits the length of copper cables. Attenuation (the weakening of a signal as it traverses the cable) is also less of a problem, allowing fiber to send data over huge distances at high speeds. It is, however, more expensive and more difficult to work with; splicing the cable and installing connectors is a job for specialists.


TIP: Fiber-optic cable is sometimes needed to connect buildings together in a campus network environment for two very important reasons. One is that fiber can travel over many kilometers, whereas copper-based technologies are significantly more restricted. The other reason is because fiber, by its design, eliminates the problems with differing ground sources.

Fiber-optic cable is simple in design but unforgiving of bad connections. Fiber cable usually consists of a core of glass thread, with a diameter measured in microns, surrounded by a solid glass cladding. This, in turn, is covered by a protective sheath. The first fiber-optic cables were made of glass, but plastic fibers also have been developed. The light source for fiber-optic cable is a light-emitting diode (LED) for normal distances, or a laser for longer distances. Information usually is encoded by varying the intensity of the light. A detector at the other end of the cable converts the received signal back into electrical impulses. Two types of fiber cable exist: single mode and multimode. Single mode has a smaller diameter, is more expensive, and can carry signals over a greater distance.

Fiber-optic cables come with several different types of connectors, such as ST, SC, or MT-RJ. The ST connector is the one most commonly used with fiber-optic cables. Figure 11.15 illustrates the ST fiber-optic connectors.

FIG. 11.15  The ST fiber-optic connectors.

Network Topologies

Each workstation on the network is connected with cable (or some other medium) to the other workstations and to one or more servers. Sometimes a single piece of cable winds from station to station, visiting all the servers and workstations along the way. This cabling arrangement is called a bus topology, as shown in Figure 11.16. (A topology is simply a description of the way the workstations and servers are physically connected.) The potential disadvantage of this type of wiring is that a cabling problem at any one workstation can disrupt the network connections of the other stations on the bus.

FIG. 11.16  The linear bus topology, attaching all network devices to a common cable.

Sometimes separate cables run from a central wiring nexus, often called a hub or a concentrator, to each workstation. Figure 11.17 shows this arrangement, called a star topology. Sometimes the cables branch out repeatedly from a root location, forming the star-wired tree shown in Figure 11.18. Bus cabling schemes use the least amount of cable but are the hardest to diagnose or bypass when problems occur.

FIG. 11.17  The star topology, connecting the LAN's computers and devices with cables that radiate outward, usually from a file server.

FIG. 11.18  The star-wired tree topology, linking the LAN's computers and devices to one or more central hubs, or access units.

The other topology often listed in discussions of this type is a ring, in which each workstation is connected to the next, and the last workstation is connected to the first again (essentially a bus topology with the two ends connected). Data travels around a Token-Ring network in this fashion, for example. However, the ring is not physically evident in the cabling layout for the network. In fact, the ring exists only within the hub (called a Multistation Access Unit (MAU) on a Token-Ring network). Signals generated from one workstation travel back to the hub, are sent out to the next workstation, and then back to the hub again. The data is then passed to each workstation in turn until it arrives back at the computer that originated it, where it is removed from the network. Therefore, although the wiring topology is a star, the data path is theoretically a ring. This is called a logical ring.

If you have to run cables (of any type) through walls and ceilings, the cable installation can be the most expensive part of setting up a LAN. At every branching point, special fittings connect the intersecting wires. Sometimes you also need various additional components along the way, such as hubs, repeaters, or access units.

Because of that, a few companies have developed LANs that do not require cables at all. Wireless LANs use infrared or radio waves to carry network signals from computer to computer, but their speed and reliability are not as high as those of wired network systems.

Planning the cabling layout, cutting the cable, attaching connectors, and installing the cables and fittings are jobs usually best left to experienced workers. If the fittings are not perfect, you may get electronic echoes on the network, which cause transmission errors.

Building codes often require you to use fire-resistant plenum cable, especially where cables run through air-handling spaces. Plenum cable is more fire-resistant and gives off less toxic smoke than ordinary cable. A professional cable installer should be familiar with the building codes in your area. You would be very upset if you installed ordinary cable yourself and were later told by the building inspector to rip out the cable and start over again with the proper kind.

Selecting the Proper Cable

As the demands of network users for ever increasing amounts of bandwidth continue, and new networking systems are developed to accommodate them, it soon becomes necessary to examine the capabilities of the most fundamental part of the network infrastructure: the cable itself.

The cable used for networks has traditionally been the same as that used for business telephone wiring. This is known as Category 3 (CAT.3) UTP, or voice grade UTP cable, measured according to a scale that quantifies the cable's data transmission capabilities. The cable itself is 24 AWG (American Wire Gauge, a standard for measuring the diameter of a wire), copper tinned, with solid conductors, 100-105 ohm characteristic impedance, and a minimum of two twists per foot. Category 3 cable is adequate for networks running at up to 16Mbps.

Newer, faster network types require greater performance levels, however. Fast Ethernet technologies that run at 100Mbps using the same number of wires as standard Ethernet need a greater resistance to signal crosstalk and attenuation, and so the use of Category 5 (CAT.5) UTP cabling is essential.

In a token-passing network, the cables from the workstations (or from the wall faceplates) connect centrally to a MAU. The MAU keeps track of which workstations on the LAN are neighbors and which neighbor is upstream or downstream. It is an easy job; the MAU usually does not even need to be plugged into an electrical power outlet. The exceptions to this need for external power are MAUs that support longer cable distances, or the use of UTP cable in high-speed LANs. The externally powered MAU helps the signal along by regenerating it.

An IBM MAU has eight ports for connecting one to eight Token-Ring devices. Each connection is made with a genderless data connector (as specified in the IBM cabling system). The MAU has two additional ports, labeled RI (Ring-In) and RO (Ring-Out), that daisy-chain several MAUs together when you have more than eight workstations on the LAN.

It takes several seconds to open the adapter connection on a Token-Ring LAN (something you may have noticed). During this time, the MAU and your Token-Ring adapter card perform a small diagnostic check, after which the MAU establishes you as a new neighbor on the ring. After being established as an active workstation, your computer is linked on both sides to your upstream and downstream neighbors (as defined by your position on the MAU). In its turn, your Token-Ring adapter card accepts the token or frame, regenerates its electrical signals, and gives the token or frame a swift kick to send it through the MAU in the direction of your downstream neighbor.

In an Ethernet network, the number of connections (taps) and their intervening distances are the network's limiting factors. Repeaters regenerate the signal every 500 meters or so. If repeaters were not used, standing waves (additive signal reflections) would distort the signal and cause errors. Because collision detection is highly dependent on timing, only five 500-meter segments and four repeaters can be placed in series before the propagation delay becomes longer than the maximum allowed period for the detection of a collision. Otherwise, the workstations farthest from the sender would be unable to determine whether a collision had occurred.

The people who design computer systems love to find ways to circumvent limitations. Manufacturers of Ethernet products have made it possible to create Ethernet networks in star, branch, and tree designs that overcome the basic limitations already mentioned. You can have thousands of workstations on a complex Ethernet network.

LANs are local because the network adapters and other hardware components cannot send LAN messages more than a few hundred feet. Table 11.9 reveals the distance limitations of different kinds of LAN cable. In addition to the limitations shown in the table, keep in mind that you cannot connect more than 30 computers on a single Thinnet Ethernet segment, more than 100 computers on a Thicknet Ethernet segment, more than 72 computers on a Token-Ring network wired with UTP cable, or more than 260 computers on one wired with STP cable.

Table 11.9 Network Distance Limitations

Network Adapter   Cable Type            Maximum                   Minimum
Ethernet          Thin                  607 ft. (185 meters)      20 in. (0.5 meters)
Ethernet          Thick (drop cable)    164 ft. (50 meters)       8 ft. (2.5 meters)
Ethernet          Thick (backbone)      1,640 ft. (500 meters)    8 ft. (2.5 meters)
Ethernet          UTP                   328 ft. (100 meters)      8 ft. (2.5 meters)
Token Ring        STP                   328 ft. (100 meters)      8 ft. (2.5 meters)
Token Ring        UTP                   148 ft. (45 meters)       8 ft. (2.5 meters)
ARCnet            Passive hub           393 ft. (120 meters)      Depends on cable
ARCnet            Active hub            1,988 ft. (606 meters)    Depends on cable
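
If you want to sanity-check a planned cable run against these limits, the figures in Table 11.9 are easy to capture in a short script. The following Python sketch is purely illustrative; the metric values come from Table 11.9, and the function name and data structure are hypothetical conveniences, not part of any networking standard.

    # Maximum segment lengths from Table 11.9, in meters (illustrative only).
    MAX_SEGMENT_METERS = {
        ("Ethernet", "Thin"): 185,
        ("Ethernet", "Thick (drop cable)"): 50,
        ("Ethernet", "Thick (backbone)"): 500,
        ("Ethernet", "UTP"): 100,
        ("Token Ring", "STP"): 100,
        ("Token Ring", "UTP"): 45,
        ("ARCnet", "Passive hub"): 120,
        ("ARCnet", "Active hub"): 606,
    }

    def run_is_legal(network, cable, length_meters):
        """Return True if a proposed cable run fits within the Table 11.9 maximum."""
        return length_meters <= MAX_SEGMENT_METERS[(network, cable)]

    # Example: a 120-meter UTP Ethernet run exceeds the 100-meter limit.
    print(run_is_legal("Ethernet", "UTP", 120))   # False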

Examining Protocols, Frames, and Communications

The network adapter sends and receives messages among the LAN computers, and the network cable carries the messages. It is the less tangible elements, however--the layers of networking protocols in each computer--that turn the individual machines into a local area network.

At the lowest level, networked computers communicate with one another by using message packets, often called frames. These frames, so called because they surround and encapsulate the actual information to be transmitted, are the foundation on which all LAN activity is based. The network adapter, along with its support software, sends and receives these frames. Each computer on the LAN is identified by a unique address to which frames can be sent.

Frames are sent over the network for many different purposes, including the following:

  • Opening a communications session with another adapter

  • Sending data (perhaps a record from a file) to a PC

  • Acknowledging the receipt of a data frame

  • Broadcasting a message to all other adapters

  • Closing a communications session

Figure 11.19 shows what a typical frame looks like. Different network implementations define frames in very different, highly specific ways, but the following data items are common to all implementations:

  • The sender's unique network address

  • The destination's unique network address

  • An identification of the contents of the frame

  • A data record or message

  • A checksum or CRC for error-detection purposes

These items are used to perform fundamental tasks that underlie every network transmission: to take the needed information, send it to the proper destination, and ensure that it is received successfully.

FIG. 11.19  The basic layout of a frame.
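
To make those common frame elements concrete, here is a minimal Python sketch of a generic frame. The field names and the use of a CRC-32 checksum are illustrative assumptions; they do not reproduce the exact layout of any particular network type.

    import zlib
    from dataclasses import dataclass

    @dataclass
    class Frame:
        source: bytes        # the sender's unique network address
        destination: bytes   # the destination's unique network address
        frame_type: int      # an identification of the frame's contents
        payload: bytes       # a data record or message

        def checksum(self):
            # A CRC over the addresses, type, and payload, for error detection.
            header = self.source + self.destination + self.frame_type.to_bytes(2, "big")
            return zlib.crc32(header + self.payload)

    # A hypothetical data frame addressed from one adapter to another.
    frame = Frame(b"\x00\x11\x22\x33\x44\x55", b"\x66\x77\x88\x99\xaa\xbb", 1, b"record data")
    print(hex(frame.checksum()))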

Using Frames that Contain Other Frames

The layering of networking protocols within a single frame is a powerful concept that makes network communication possible. The lowest layer knows how to tell the network adapter to send a message, but that layer is ignorant of file servers and file redirection. The highest layer understands file servers and redirection but knows nothing about Ethernet or Token Ring. Together, though, the layers give you the full functionality of a local area network. Frames always are layered (see Figure 11.20).

FIG. 11.20  Frame layers.

When a higher-level file redirection protocol gives a message to a midlevel protocol (NetBIOS, for example), it asks that the message be sent to another PC on the network (probably a file server). The midlevel protocol then puts an envelope around the message packet and hands it to the lowest-level protocol, implemented as the network support software and the network adapter card. This lowest layer in turn wraps the (NetBIOS) envelope in an envelope of its own and sends it out across the network. On receipt, the network support software on the receiving computer removes the outer envelope and hands the result upward to the next higher-level protocol. The midlevel protocol running on the receiver's computer removes its envelope and gives the message--now an exact copy of the sender's message--to the receiving computer's highest-level protocol.
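
The envelope-within-an-envelope idea can be sketched in a few lines of Python. The header strings below are placeholders rather than real protocol headers; the point is only that each layer wraps what it is handed and the receiver unwraps the envelopes in reverse order.

    def wrap(envelope, contents):
        # Each layer adds its own envelope around whatever it is handed.
        return envelope + contents

    # Sending side: redirector message -> midlevel (NetBIOS-style) envelope -> adapter-level envelope.
    message = b"open file ACCOUNTS.DAT"
    midlevel_packet = wrap(b"[MID]", message)
    on_the_wire = wrap(b"[LOW]", midlevel_packet)

    # Receiving side: each layer strips its own envelope and passes the rest upward.
    received = on_the_wire.removeprefix(b"[LOW]").removeprefix(b"[MID]")
    assert received == message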

The primary reason for splitting the networking functionality into layers in this manner is that the different hardware and software components of the network are manufactured by different companies. If a single vendor produced every product used on your network, from applications to operating systems to network adapters to cabling, it could arrange the communications however it wanted and still be assured of the interoperability of the different parts.

This is not the case, however. Different vendors may split the LAN communications functions in slightly different ways, but they all have to rely on a common diagram of the overall process to ensure that their products will successfully interact with all of the others used on a typical LAN. One such diagram is called the OSI Reference Model.

Using the OSI Reference Model

The International Organization for Standardization (cryptically abbreviated as the ISO) has published a document called the Open System Interconnection (OSI) model. Most vendors of LAN products endorse the OSI standard, but few, if any, implement it fully. The OSI model divides LAN communications into seven layers. Most NOS vendors use three or four layers of protocols, overlapping various OSI layers to span the same distance.

The OSI model describes how communications between two computers should occur. It calls for seven layers and specifies that each layer be insulated from the others by a well-defined interface. Figure 11.21 shows the seven layers. Various development projects over the years have attempted to create a networking system that is fully compliant with the OSI architecture, but no practical product has emerged. The OSI model remains a popular reference tool, however, and is a ubiquitous part of the education of any networking professional.

FIG. 11.21  The OSI model.

Descriptions of the seven layers follow (a brief summary sketch appears after the list):

  • Physical. This part of the OSI model specifies the physical and electrical characteristics of the connections that make up the network (twisted pair cables, fiber-optic cables, coaxial cables, connectors, repeaters, and so on). You can think of this layer as the hardware layer. Although the rest of the layers may be implemented as chip-level functions rather than as actual software, the other layers are software in relation to this first layer.

  • Data Link. At this stage of processing, the electrical impulses enter or leave the network cable. The network's electrical representation of your data (bit patterns, encoding methods, and tokens) is known to this layer and only to this layer. It is at this point that most errors are detected and corrected (by requesting retransmissions of corrupted packets). Because of its complexity, the Data Link layer often is subdivided into a Media Access Control (MAC) layer and a Logical Link Control (LLC) layer. The MAC layer deals with network access (token-passing or collision-sensing) and network control. The LLC layer, operating at a higher level than the MAC layer, is concerned with sending and receiving the user data messages. Ethernet and Token Ring are Data Link Layer protocols.

  • Network. This layer switches and routes the packets as necessary to get them to their destinations. This layer is responsible for addressing and delivering message packets. While the Data Link layer is conscious only of the immediately adjacent computers on the network, the Network layer is responsible for the entire route of a packet, from source to destination. IPX and IP are examples of Network layer protocols.

  • Transport. When more than one packet is in process at any time, such as when a large file must be split into multiple packets for transmission, the Transport layer controls the sequencing of the message components and regulates inbound traffic flow. If a duplicate packet arrives, this layer recognizes it as a duplicate and discards it. SPX and TCP are Transport layer protocols.

  • Session. The functions in this layer enable applications running at two workstations to coordinate their communications into a single session (which you can think of in terms of a highly structured dialog). The Session layer supports the creation of the session, the management of the packets sent back and forth during the session, and the termination of the session.

  • Presentation. When IBM, Apple, DEC, NeXT, and Burroughs computers want to talk to one another, obviously a certain amount of translation and byte reordering needs to be done. The Presentation layer converts data into (or from) a machine's native internal numeric format.

  • Application. This is the layer of the OSI model seen by an application program. A message to be sent across the network enters the OSI model at this point, travels downward toward Layer 1 (the Physical layer), zips across to the other workstation, and then travels back up the layers until the message reaches the application on the other computer through its own Application layer.
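
The brief sketch promised above simply lists the seven layers from top to bottom, together with the example protocols mentioned in the descriptions; it is a summary aid, not an exhaustive mapping.

    # The seven OSI layers (7 = top) with examples drawn from the descriptions above.
    OSI_LAYERS = {
        7: ("Application",  "entry and exit point for application messages"),
        6: ("Presentation", "data format and byte-order translation"),
        5: ("Session",      "session setup, management, and termination"),
        4: ("Transport",    "sequencing and flow control; SPX, TCP"),
        3: ("Network",      "addressing and routing; IPX, IP"),
        2: ("Data Link",    "MAC and LLC sublayers; Ethernet, Token Ring"),
        1: ("Physical",     "cables, connectors, repeaters"),
    }

    # A message travels down the stack on the sender and back up on the receiver.
    for number in sorted(OSI_LAYERS, reverse=True):
        name, role = OSI_LAYERS[number]
        print(f"Layer {number}: {name} -- {role}")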

One of the factors that makes the NOS of each vendor proprietary (as opposed to having an open architecture) is the vendor's degree and method of noncompliance with the OSI model. Sufficient protocol standardization has been implemented to allow all Ethernet products (for example) to function interchangeably, but these standards do not directly comply with the OSI model document.

Using Low-Level Protocols

The MAC method for most LANs (part of the Data Link layer functionality discussed above) works in one of two basic ways: collision-sensing or token-passing. Ethernet is an example of a collision-sensing network; Token Ring is an example of a token-passing network.

The Institute of Electrical and Electronic Engineers (IEEE) has defined and documented a set of standards for the physical characteristics of both collision-sensing and token-passing networks. These standards are known as IEEE 802.3 (Ethernet) and IEEE 802.5 (Token Ring). Be aware, though, that the colloquial names Ethernet and Token Ring actually refer to earlier versions of these protocols, upon which the IEEE standards were based. There are minor differences between the frame definitions for true Ethernet and true IEEE 802.3. In terms of the standards, IBM's 16Mbps Token-Ring adapter card is an 802.5 Token-Ring extension. You learn the definitions and layout of Ethernet and Token Ring frames in the sections "Using Ethernet" and "Using Token Ring" later in this chapter.

Some LANs don't conform to IEEE 802.3 or IEEE 802.5, of course. The most well-known of these is ARCnet (from such vendors as Datapoint Corporation, Standard Microsystems, and Thomas-Conrad). Other types of LANs include StarLan (from AT&T), VistaLan (from Allen-Bradley), LANtastic (from Artisoft), Omninet (from Corvus), PC Net (from IBM), and ProNet (from Proteon). All of these architectures are obsolete.

Fiber Distributed Data Interface (FDDI) is a newer physical-layer LAN standard. FDDI uses fiber-optic cable and a token-passing scheme similar to IEEE 802.5 to transmit data frames at 100Mbps.

Evaluating High-Speed Networking Technologies

If you have fast workstations and a fast file server, you will want a fast network as well. Even the speeds of regular office networks may be too slow if your applications are data-intensive. The explosive growth of multimedia, groupware, and other technologies that require enormous amounts of data has forced network administrators to consider the need for high-speed network connections to individual desktop workstations.

High-speed networking has primarily been limited to backbone connections between servers because of its additional expense. Several new technologies, however, are designed to deliver data at high speeds to standard user workstations as well. Real-time data feeds from financial services, videoconferencing, video editing, and high-color graphics processing are just some of the tasks that would benefit greatly from an increase in network transmission speed.

Using the Fiber Distributed Data Interface

FDDI is a much newer protocol than Ethernet or Token Ring. Designed by the X3T9.5 Task Group of ANSI (the American National Standards Institute), FDDI passes tokens and data frames around a ring of optical fiber at a standard rate of 100Mbps. FDDI was designed to be as much like the IEEE 802.5 Token Ring standard as possible, above the Physical layer. Differences occur only where necessary to support the faster speeds and longer transmission distances of FDDI.

If FDDI were to use the same bit-encoding scheme used by Token Ring, every bit would require two optical signals: a pulse of light and then a pause of darkness. This means that FDDI would need to send 200 million signals per second to have a 100Mbps transmission rate. Instead, the scheme used by FDDI--called NRZI 4B/5B--encodes 4 bits of data into 5 bits for transmission so that fewer signals are needed to send a byte of information. The 5-bit codes (symbols) were chosen carefully to ensure that network timing requirements are met. The 4B/5B scheme, at a 100Mbps transmission rate, actually causes 125 million signals per second to occur (this is 125 megabaud). Also, because each carefully selected light pattern symbol represents 4 bits (a half byte, or nibble), FDDI hardware can operate at the nibble and byte level rather than at the bit level, making it easier to achieve the high data rate.
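
The arithmetic behind the 125-megabaud figure, and the nibble-at-a-time encoding, can be shown in a brief Python sketch. The symbol table below is the commonly published set of 4B/5B data symbols; treat it as an illustration rather than a definitive copy of the FDDI specification.

    # Commonly published 4B/5B data symbols: each 4-bit nibble becomes a 5-bit code.
    FOUR_B_FIVE_B = [
        "11110", "01001", "10100", "10101", "01010", "01011", "01110", "01111",
        "10010", "10011", "10110", "10111", "11010", "11011", "11100", "11101",
    ]

    def encode_byte(value):
        # A byte is split into two nibbles, each sent as one 5-bit symbol (10 line bits total).
        high, low = value >> 4, value & 0x0F
        return FOUR_B_FIVE_B[high] + FOUR_B_FIVE_B[low]

    print(encode_byte(0xA7))            # 10 line bits carry 8 data bits
    print(100_000_000 * 5 // 4)         # 125,000,000 signals per second (125 megabaud)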

There are two major differences in the way the token is managed by FDDI and by IEEE 802.5 Token Ring. In traditional Token Ring, a new token is circulated only after a sending workstation gets back the frame that it sent. In FDDI, a new token is circulated immediately by the sending workstation after it finishes transmitting a frame, a technique that has since been adapted for use in Token-Ring networks and called Early Token Release. FDDI classifies attached workstations as asynchronous (workstations that are not rigid about the time periods that occur between network accesses) and synchronous (workstations having very stringent requirements regarding the timing between transmissions). FDDI uses a complex algorithm to allocate network access to the two classes of devices.

Although it provides superior performance, FDDI's acceptance as a desktop network has been hampered by its extremely high installation and maintenance costs (see "Using Fiber-Optic Cable" earlier in this chapter).

Using 100Mbps Ethernet

One of the largest barriers to the implementation of high-speed networking has been the need for a complete replacement of the networking infrastructure. Most companies cannot afford the down time needed to rewire the entire network, replace all the hubs and NICs, and then configure everything to operate properly. As a result of this, some of the 100Mbps technologies are designed to make the upgrade process easier in several ways. First, they can often use the network cable that is already in place, and second, they are compatible enough with the existing installation to allow a gradual changeover to the new technology, workstation by workstation. Obviously, these factors also serve to minimize the expense associated with such an upgrade.

The two systems that take this approach are 100BaseT, first developed by the Grand Junction Corp., and 100VG AnyLAN, advocated by Hewlett-Packard and AT&T. Both of these systems run at 100Mbps over standard UTP cable, but that is where the similarities end. In fact, of the two, only 100BaseT can truly be called an Ethernet network. 100BaseT uses the same CSMA/CD media access protocol and the same frame layout defined in the IEEE 802.3 standard, and it has been ratified as an extension to that standard, called 802.3u.

To accommodate existing cable installations, the 802.3u document defines three different cabling standards, as shown in Table 11.10.

Table 11.10  100BaseT Cabling Standards

Standard    Cable Type                                Segment Length
100BaseTX   Category 5 (2 pairs)                      100 meters
100BaseT4   Category 3, 4, or 5 (4 pairs)             100 meters
100BaseFX   62.5 micron multimode fiber (2 strands)   400 meters

Sites with Category 3 cable already installed can therefore use 100BaseT4 without rewiring, as long as all four pairs in a typical run are available. Category 3 cable can also be used for Token-Ring networks at 4Mbps and 16Mbps; 100Mbps Token Ring requires Category 5 cable.


NOTE: Despite the apparent wastefulness, in most cases it is not recommended that data and voice traffic be mixed within the same cable, even if sufficient wire pairs are available. Digital phone traffic could possibly coexist, but normal analog voice lines will definitely inhibit the performance of the data network.

100BaseT also requires the installation of new hubs and new NICs, but because the frame type used by the new system is identical to that of the old, this replacement can be done gradually, to spread the labor and expense over a protracted period of time. You could replace one hub with a 100BaseT model, and then switch workstations over to it, one at a time, as the users' needs and the networking staff's time permit. You can also use 10/100Mbps NICs to make the changeover even easier.

100VG (voice grade) AnyLAN also runs at 100Mbps and is specifically designed to use existing Category 3 UTP cabling. Like 100BaseT4, it requires four pairs of cable strands to carry its communications. There are no separate Category 5 or fiber-optic options in the standard. Beyond the cabling, 100VG AnyLAN is quite different from 100BaseT and indeed from Ethernet.

While 10BaseT and 100BaseT networks both reserve one pair of wires for collision detection, 100VG AnyLAN is able to transmit over all four pairs simultaneously, a technique called quartet signaling. A different signal encoding scheme, called 5B/6B NRZ, is also used, sending 2.5 times as many bits per cycle as an Ethernet network's Manchester encoding scheme. Multiply that by the four pairs of wires (compared to 10BaseT's single transmit pair), and you have a tenfold increase in transmission speed.
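
The tenfold figure is simple arithmetic, shown here as a small worked example; the numbers are the ones quoted above, not measurements.

    # 10BaseT baseline: one transmit pair, Manchester encoding.
    # 100VG AnyLAN: four pairs, 5B/6B NRZ carrying 2.5 times as many bits per cycle.
    encoding_gain = 2.5
    pair_gain = 4 / 1
    print(encoding_gain * pair_gain)    # 10.0 -- a tenfold increase over 10BaseT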

The fourth pair is made available for transmission because there is no need for collision detection on a 100VG AnyLAN network. Instead of the CSMA/CD media access system that defines an Ethernet network, 100VG AnyLAN uses a brand new technique called demand priority. Individual network computers have to request and be granted permission to transmit by the hub before they can send their data.

100VG AnyLAN also uses the 802.3 frame type, so its traffic can coexist on a LAN with regular Ethernet. As with 100BaseT, combination 10/100 NICs can be used, and the installation can be migrated gradually to the new technology.

Support for 100VG AnyLAN has almost completely disappeared from the market due to the cost of the adapters and the popularity of 10/100Mbps Ethernet adapters.

Using ATM

Asynchronous Transfer Mode (ATM) is another newer high-speed technology. ATM defines a Physical layer protocol in which a standard-size 53-byte packet (called a cell) is used to transmit voice, data, and real-time video over the same cable simultaneously. The cells contain identification information that allows high-speed ATM switches (wiring hubs) to separate the data types and ensure that the cells are reassembled in the right order. The basic ATM standard runs at 155Mbps, but some implementations can go as high as 622Mbps.
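
The fixed cell size makes segmentation easy to picture. The sketch below assumes the commonly cited split of a 53-byte cell into a 5-byte header and a 48-byte payload (a breakdown not spelled out above), and the header content is a placeholder rather than a real ATM header.

    CELL_SIZE = 53
    HEADER_SIZE = 5                          # assumed split: 5-byte header + 48-byte payload
    PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE

    def segment(data, circuit_id):
        """Chop a message into fixed-size cells; the final payload is padded if short."""
        cells = []
        for offset in range(0, len(data), PAYLOAD_SIZE):
            payload = data[offset:offset + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
            header = circuit_id.to_bytes(2, "big") + bytes(HEADER_SIZE - 2)  # placeholder header
            cells.append(header + payload)
        return cells

    cells = segment(b"x" * 1000, circuit_id=42)
    print(len(cells), len(cells[0]))         # 21 cells, each 53 bytes long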

ATM is a radically different concept, and there are no convenient upgrade paths as there are with the 100Mbps standards described earlier. For this reason, ATM is being used primarily for WAN links.

TCP/IP and the Internet

TCP/IP stands for Transmission Control Protocol/Internet Protocol. It is the colloquial name given to the suite of networking protocols used by the Internet, as well as by most UNIX operating systems. TCP is primarily the Transport layer protocol in the suite, and IP defines the Network layer protocol that transmits blocks of data to the host.

TCP/IP is an extensive collection of Internet protocol applications and transport protocols, and includes the File Transfer Protocol (FTP), Terminal Emulation (Telnet), and the Simple Mail Transfer Protocol (SMTP). TCP/IP was originally developed by the U.S. Department of Defense in the 1970s as a platform- and hardware-independent medium for communication over what was to become known as the Internet. A good example of this independence is the capability of DOS, Windows, or Windows 95 workstations to access information and transfer files on the Internet, which is a mixed-platform environment. The primary advantages of TCP/IP are:

  • Platform Independence. TCP/IP is not designed for use in any single hardware or software environment. It can be, and has been, used on networks of all types.

  • Absolute Addressing. TCP/IP provides a means of uniquely identifying every machine on the Internet.

  • Open Standards. The TCP/IP specifications are publicly available to users and developers alike. Suggestions for changes to the standard can be submitted by anyone.

  • Application Protocols. TCP/IP allows dissimilar environments to communicate. High-level protocols like FTP and Telnet have become ubiquitous in TCP/IP environments on all platforms.

Although TCP/IP was at first the protocol of choice mainly on UNIX networks, the explosive growth of the Internet has brought the protocols onto all kinds of LANs as well. Most network administrators have found that they can adapt their current NOSes to use TCP/IP and thus lessen the network traffic problems that can be caused by running several different sets of protocols on the same network.
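
TCP/IP's platform independence shows up at the programming level as well: essentially the same socket calls work on any operating system that provides a TCP/IP stack. The short Python sketch below opens a TCP connection and sends a few bytes; the host name and port number are placeholders, not a real service.

    import socket

    # Connect to a hypothetical TCP service; "example.com" and port 7 are placeholders.
    with socket.create_connection(("example.com", 7), timeout=10) as conn:
        conn.sendall(b"hello over TCP/IP\r\n")   # TCP handles sequencing and retransmission
        reply = conn.recv(1024)                  # IP routes the packets; TCP delivers the byte stream
        print(reply)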

Connecting to the Internet

You can connect a computer to the Internet through virtually any of the access ports discussed in this chapter thus far. Individual computers can use modems to connect to an Internet Service Provider (ISP), or a network connection can be established through which all of the users on the LAN gain access. Depending on your organization's degree of Internet involvement, any one of the following access options can be selected.

Asynchronous Modem Connections

Individual computers can use normal asynchronous modems attached to a serial port to connect to the Internet through the services of an ISP. ISPs provide dial-in capabilities using either the Point-to-Point Protocol (PPP) or the Serial Line Internet Protocol (SLIP). Both of these protocols are part of the TCP/IP suite and are provided by virtually all of the third-party TCP/IP stacks available for DOS and Windows 3.1. Windows 95 and Windows NT include support for both protocols as part of the operating system. Whichever protocol you use must be supported by the TCP/IP stack on the remote computer as well as by the system to which you are connecting. Your service provider will be able to tell you which protocols are supported by the host system.

SLIP

SLIP is an extremely simple protocol that provides a mechanism for the packets generated by IP (called datagrams) to be transmitted over a serial connection. It sends each datagram sequentially, separating them with a single byte known as the SLIP END character to signify the end of a packet. SLIP provides no means of error correction or data compression and was eventually superseded by PPP.
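
A SLIP framer takes only a few lines of code. The sketch below follows the commonly documented SLIP conventions: an END byte (0xC0) terminates each datagram, and an escape mechanism handles END or ESC bytes that happen to appear inside the data. These byte values come from the published SLIP description rather than from the text above, so treat them as assumptions.

    # Commonly documented SLIP byte values (assumed; see the published SLIP description).
    END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

    def slip_frame(datagram: bytes) -> bytes:
        """Escape any END/ESC bytes inside an IP datagram and terminate it with END."""
        out = bytearray()
        for byte in datagram:
            if byte == END:
                out += bytes([ESC, ESC_END])
            elif byte == ESC:
                out += bytes([ESC, ESC_ESC])
            else:
                out.append(byte)
        out.append(END)                  # a single END byte marks the end of the packet
        return bytes(out)

    print(slip_frame(b"\x45\x00\xc0\xff").hex())   # 4500dbdcffc0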

PPP

PPP improves the reliability of serial TCP/IP communications with a three-layer protocol that provides the means for implementing the error correction and compression that SLIP lacks. Most TCP/IP stacks provide PPP support. When given a choice, you should always select PPP over SLIP; it provides superior throughput and reliability.

ISDN Connections

A popular option for Internet connectivity is the ISDN connection. Providing speeds of 128Kbps (when both B channels are combined), it is more than twice as fast as a 56Kbps modem connection. ISDN can be used to provide Internet access to a network or to an individual computer. The basics of ISDN communications are covered in the "Integrated Services Digital Network" section earlier in this chapter. For basic e-mail connectivity and modest use, an ISDN connection could support 10 to 20 users on a network nicely. Giving users a taste of the Internet often leads to a substantial habit, however, and you may find that World Wide Web browsing and FTP transfers cause you to quickly outgrow an ISDN link.

Broadband Home Connections

A newer Internet connectivity method is the broadband home connection. Your ISP can supply a special modem (called a cable modem) that connects to the cable television network to provide access to the Internet. Internet access over the public power grid is another emerging technique. Broadband home connections usually supply very high speeds (3Mbps or faster).

T-1 Connections

For networks that must support a large number of Internet users, and especially for organizations that will be hosting their own Internet services, a T-1 connection to your service provider may be a wise investment. A T-1 is a digital connection running at 1.544Mbps, more than 10 times faster than an ISDN link. A T-1 may be split (a fractional T-1), depending on how it is to be used. It can be divided into 24 individual 64Kbps channels or left as a single high-capacity pipeline. Some service providers allow you to lease any portion of a T-1 connection that you want (in 64Kbps increments).
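
The relationship between the 24 channels and the overall line rate is a quick calculation. The 8Kbps framing overhead in the sketch below is a standard T-1 detail not mentioned above, so treat it as an added assumption.

    channels = 24
    channel_rate = 64_000                     # bits per second per channel
    framing_overhead = 8_000                  # assumed standard T-1 framing overhead, in bps
    payload = channels * channel_rate         # 1,536,000 bps of usable capacity
    line_rate = payload + framing_overhead    # 1,544,000 bps -- the 1.544Mbps T-1 rate
    print(payload, line_rate, line_rate / 128_000)   # roughly 12 times an ISDN connection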

T-3 Connections

Equivalent in throughput to approximately 30 T-1 lines, a T-3 connection runs at 45Mbps, and is suitable for use by very large networks, university campuses, and the like.
