Linux IP Stacks Commentary Web Edition

Background And Basic Concepts

net•work•ing (net´´wûr´king) n. A supportive system of sharing information and services among individuals and groups having a common interest.

Table of Contents

Introduction

The Dark Ages

Rivals, Fallen And Otherwise

Ghu Said “Let Linux Be, And All Was Light”

Network Programming Precepts

Design Questions

Bits On A Wire

It’s Just A Jump To The Left, And A Step To The Right

Bitfields, Or Logical Operations?

Start At The Big End...No, At The Little End....

“Co-op-er-a-tion,” Say The Muppets


Introduction

Computer networking as we know it today, and the telecommunications protocols implemented in the programming language used by the Linux kernel, are the lineal descendants of two entities whose first incarnations date back to the early years of the eighth decade of the twentieth century: the original Internet, and the Unix operating system with its C programming language.

Internet-based networking is a supportive system, a system originally designed to share information even in the face of cataclysmic events, up to and including limited thermonuclear warfare. Its matrix, the Internet, is a direct descendant of that unique product of the Cold War, the 1970s-era ARPAnet, the spawn of the United States Department of Defense Advanced Research Projects Agency.

That Internet has grown up — and after 50 years, has grown out — from its first incarnation as a single 100-computer network. It became what is often referred to as a “co-operative anarchy”: thousands of separately-owned and -administered networks that interconnect millions of individual computers.

At almost the same time that the ARPAnet was trying its fledgling wings, the Unix operating system was born. It started life as a tiny “skunk-works” project at a certain branch of The Phone Company (TPC, popularized in the contemporaneous novel and film The President’s Analyst) — specifically, at that company’s research arm, Bell Labs.

This small-group effort was originally undertaken to design and implement a writing workbench tool for a group of company lawyers. The purpose of the Unix system, and of the B programming language that was developed along with it, was to put to use some cast-off DEC PDP-7 computer hardware, without requiring legions of programmers...or shiploads of paperwork.

Many of self-styled “Bell Lab Rat” Ken Thompson’s innovations — a simple file system, a simple but powerful permissions system, and the concept of pipes — were developed with the innocent aim of speeding up the development of those writing tools by making applications easier to integrate. “Divide and conquer” had been the dominant programming philosophy for almost a decade prior to the start of this project, but Unix was one of the first operating systems to provide an efficient way to implement this philosophy at the applications level.

These two projects — the ARPAnet and Unix — shared one central element that was largely responsible for their success. Specifically, they were both deeply rooted in the concept of using simple building blocks to create a larger, more complex whole.

And what Unix did for programming with its “pop-bead” approach to systems, TCP/IP did for telecommunications: using small, well-defined, easy-to-understand layers of abstraction and a large number of small custom protocols that could be combined to form complex, reliable, and robust data links. Given these strong similarities, in retrospect, it was inevitable (dare we say kismet) that Unix programmers would be drawn to TCP/IP networking.

The ARPAnet was originally built around connecting “big iron” hardware, because that’s what developers had on hand: IBM mainframes, DEC supercomputers, and Burroughs and GE timesharing complexes. The actual network node computers were the Interface Message Processors, or IMPs, which were Honeywell DDP-516 minicomputers with custom line and host interfaces. But by the late 1970s, Unix hardware had become much cheaper, and network implementers found that the necessary hardware interfaces were much easier to design and build for Unix minicomputers than they were for non-Unix mainframes.

In short, Unix was at that time within the grasp of almost all university research centers. Best of all, the Unix operating system was modular enough that network device drivers were easy to write, debug, and install. The C programming language (which succeeded, and greatly improved on, the original B language, which was designed around the PDP-7 instruction set) eased the chore of writing protocol drivers that actually worked. When you consider that, at the time, these drivers were usually ground out in assembly language, it’s easy to see why minicomputers running Unix soon became wildly popular platforms for experiments in networking.

Bell Labs was not alone. A number of Unix computers were available at the University of California at Berkeley (UCB), which also had a well-qualified pool of undergraduates and graduates who were eager to tinker with the systems. Thanks to that swarm of brainpower, Berkeley was able to take Bell Labs’ Unix and introduce a “rival” Unix-based operating system, known as the Berkeley System Distribution (or simply BSD Unix).

Because U. C. Berkeley was also a major participant in the ARPAnet project, the Network Control Protocol (NCP) concept of sockets became part of the networking software that the university incorporated into BSD Unix. (AT&T tried to introduce a competing concept, called streams, into its own version of Unix, known as System V, but it never really caught on.)

In short, the availability of Unix, of which Linux is a distant but faithful relation, and of “cheap” hardware brought the concepts of computer networking out of obscurity and into the hands of students and hobbyists, who grabbed the ball and ran with it, full-tilt boogie.

The Dark Ages

In the ancient days of networks, which for our purposes were the 1960s, the means by which computers talked to each other were developed and monopolized by the members of a tiny technological priestly caste. Shielded by esoteric language, graduate-level academic courses, obtuse standards published by the pound by the International Telecommunication Union (an agency of the United Nations, which status may account, at least in part, for the impenetrability of its documents), and enough state diagrams to gift-wrap the Great Wall of China, the priests and acolytes of networking’s inner sanctum hid their work from the common herd of users. The users, often without understanding exactly what they were doing, simply memorized the mantras that made the systems work for them.

Of course, from time to time, real information filtered out. The IBM Corporation actually gave away some (but by no means all) of the details of its computer-to-computer communications schemes — but only to client companies who could afford to hire in-house gurus to implement the maddeningly (and unnecessarily) complex recipes.

Obfuscation in the service of proprietary interests wasn’t limited to the private sector. Many details of the ARPAnet — which was nominally a public project — were buried in inches-thick reports that were hard to find and, for the uninitiated, harder to read. Technical articles were sprinkled liberally across a broad spectrum of journals, some of which were so obscure that only the members of the techno-priesthood even knew of their existence, let alone had copies of them.

During the early days of TCP/IP, and right up through the early 1990s, hard information about networking was scarce, even for members of the inner manufacturing and academic circles. Then, the explosive growth of the Internet led increasing numbers of novices...er, young people, into jobs that exposed them to the inner workings of networking software, and thus to the networking traps and pitfalls that await unwary protocol designers.

But even with the advent of just-born microcomputers in 1977, the Apple-based “VisiCalc”-type programs of 1978, or even the IBM PC (that 1981 devil-spawn, responsible for a veritable tsunami of bad programmers, worse programs, and overdone demo-dollies at the biannual COMDEX shows), networking at large remained a dark mystery. A few brave souls tried to create TCP/IP stacks, but their efforts were hobbled by a shortage of RAM and, even more so, by a shortage of information and of other network stacks to test against.

Speaking of information deficits: In 1985, the present authors bought a copy of “hot off the press” networking specifications. The price of the three-volume DDN Protocol Handbook was $100 — equivalent, in today’s money, to $625. This paper-bound product of the Defense Communications Agency consisted of selected RFCs and other explanatory text, and featured truly primitive line printer output that would have been an embarrassment 10 years earlier. But, given what information was and wasn’t available, the Handbook was pure gold.

Rivals, Fallen And Otherwise

Its military-industrial pedigree aside, part of the reason why TCP/IP was so obscure was that it wasn’t the only way to link computers into networks. Some potential TCP/IP users were lured away from it by other methods that didn’t require the use of leased telephone lines, the way the ARPAnet and the original Internet did.

One of these rival methods, which was developed at Bell Labs early in the life of Unix, was the uucp (Unix-to-Unix copy) program. Not content with simply copying files from one system to another, uucp also provided links over which electronic mail protocols could be launched. As a result, Unix-based systems all over the United States, and in many other parts of the world, soon became linked by an informal network of dial-up connections that used the public telephone network: first at 1200 bits per second, and then, as the technology advanced, at data rates of up to 18 kilobits per second. (The authors still have their Telebit modems.)

The original network news packages also used these links, in their case to form a distributed bulletin board. This application lives on today, in the form of the Usenet news system. But the original uucp network gradually withered and (for all practical purposes) died, as the network of Telebit Trailblazer modems, which from 1985 to 1995 knit Unix systems together, was gradually replaced by ever-cheaper leased telephone lines.

Not to be outdone, in 1984 the microcomputer community formed its own network of computers, called FidoNet, which operated over the public telephone network. The FidoNet system, which was the brainchild of skateboard fanatic Tom Jennings, implemented electronic-mail functions and also distributed its BBS functionality. At this writing in 2022, the FidoNet is still in operation, carrying information between the Internet and FidoNet’s loyal but no doubt dwindling devotees. Woof.

Commercial Unix systems, such as Xenix and SCO Unix, which were designed to run on microcomputers, did include TCP/IP networking. The cost of direct Internet connections during the late 1980s was so high that most people used dial-up modems instead, accessing the Internet through so-called shell accounts running on minicomputer Unix systems, and then using simple protocols such as uucp, Xmodem and variants, or Kermit to exchange data between the Unix box and their own microcomputers.

Ghu Said “Let Linux Be, And All Was Light”

The rather dismal situation described in the preceding section changed radically in the mid-1990s, with the confluence of half a dozen seemingly unrelated trends and events:

The Linux OS brought, and brings, to the world a real-life, standards-compliant OS whose source code is available to anyone — anyone who wants it. Better yet, Linux distributions include a real-life, standards-compliant TCP/IP networking system, whose source code is likewise available to anyone who wants it. (Like us.) When wedded with the standard networking tools that are available for Unix-style systems, Linux stands proudly head to head with any other OS available today. Indeed, Linux is hands-down the most popular operating system running on supercomputers around the world.

This book describes TCP/IP’s implementation in the C programming language. Other implementations have been written — in FORTRAN, ALGOL, Pascal, PL/I, and any number of assembler languages, as well as in the hardware-description languages fed to so-called “silicon compilers,” which generate application-specific integrated circuits (ASICs) that implement the protocol suite directly in hardware. Special languages have also been created that build TCP/IP stacks, either because no traditional compiler was available for the designated hardware, or simply because the implementer could do it. (With the introduction of the Rust language to the Linux kernel in the 2020s, expect some parts of the TCP/IP stack implementation to be moved to that language.)

Some implementations of TCP/IP have been around for 50 years, evolving to meet a changing world, while others are still in the teething stage.

Network Programming Precepts

Design Questions

Telecommunications is all about talking with other computers. TCP/IP telecommunications is all about making talking with many different kinds of other computers happen: new ones and old ones, “back-words” and “fore-words,” from palm-tops to building-busters, and everything in between. Even smart watches.

Talking with other computers also means agreeing on the answers to certain basic questions, such as these:

Which bit of each character goes onto the wire first?

In what order are the characters of a message transmitted?

How are the individual bits within a header named, set, and tested?

In what order are the bytes of a multi-byte integer stored and transmitted?

Who, if anyone, is in charge of the conversation?

These prove to be tough questions to answer, although the world has come to some agreement. The sections that follow take up each of them in turn.

Bits On A Wire

The first task any telecommunications system must perform is to decide which bits get sent first. Here, TCP/IP doesn’t issue any edicts; instead, it leaves the answer to be reached by the drivers and the hardware. However, according to telecomm custom, the transmission of bits in a given data unit (usually the character) starts with the low-order bit.

The reason why the low-order bit is sent first is, well, historical. Early teletype communications systems used a parity bit to determine whether a character that was being sent had been transferred correctly. When the low-order bit was sent first, followed sequentially by the remaining bits, the parity bit could be calculated on the fly by the sender, and transmitted after the last bit in a character (that is, in a data unit) had been transmitted; the receiver can follow along and compare its calculated result with what it sees.
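To make that trick concrete, here is a minimal C sketch of on-the-fly parity generation. The helper name send_with_parity, and the comment standing in for the actual act of putting a bit on the wire, are our own inventions for illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* Compute the even-parity bit for one character the way an old
     * teletype could: update a running XOR as each bit is shifted
     * out, low-order bit first. */
    static unsigned send_with_parity(uint8_t ch)
    {
        unsigned parity = 0;

        for (int bit = 0; bit < 8; bit++) {
            unsigned b = (ch >> bit) & 1;   /* low-order bit goes first */
            /* ...put b on the wire here... */
            parity ^= b;                    /* running parity so far */
        }
        return parity;   /* transmitted after the last data bit */
    }

    int main(void)
    {
        printf("parity bit for 'A' (0x41): %u\n", send_with_parity('A'));
        return 0;
    }

Because XOR is associative, the running value after the last data bit is exactly the even-parity bit the receiver expects; neither side ever needs to buffer the whole character.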

Alas, not all transmission systems are serial in nature. The IBM PC parallel port and the standard SCSI bus are two examples of communications devices that can transfer bits of data in parallel. Here again, the determination of which bit is sent on what wire is a matter of convention. The interpretation of those bits is up to the hardware and the software driver.

It’s Just A Jump To The Left, And A Step To The Right

The customary way of sending characters, in order from left to right, was determined not by teletypes but by early stock-market tickers, in the years immediately following the U. S. Civil War. These were mechanical marvels that required several wires in order to work properly; but they did the job of transmitting stock quotations from the central stock exchanges to wherever the tickers were located. To make them useful to human readers, the tickers were set up so that, as information was transmitted, characters were printed on the tape in the natural left-to-right order. This way, the stock quotations could be read and understood as soon as the tape came out of the ticker. This “natural-order” approach also simplified the task of encoding the quotations at the sending end. Lastly, early stock tickers were run by the Western Union company, whose telegraph operators wouldn’t hear of keying data any other way but from left to right.

As “printing telegraphs” were introduced into newspaper companies, the left-to-right order was maintained, because that method was the one that the typists who were keying the information were accustomed to using. This habit persisted throughout the duration of the teletype era, up through the mid-1980s.

In TCP/IP, information is transmitted in characters (8-bit octets, if you prefer the formal international Standards language) in left-to-right order. Accordingly, in many of the diagrams that illustrate packet formats, both in this book and elsewhere in the literature, the customary octet transmission order is from left to right, top row to bottom row.

Characters stored in a computer’s memory are usually arranged in order, from low to high memory addresses. This way, a packet can be built in memory and then handed directly to the device driver and the hardware. Indeed, the C programming language virtually guarantees that its I/O model will behave this way.

Bitfields, Or Logical Operations?

As you’ll see in the definitions of IP and TCP packet headers, the individual data bits in the headers have meaning above and beyond the meaning of the characters that they constitute. For example, in byte 13 of the TCP packet header (counting from zero), the rightmost bit of the byte (2^0) is interpreted as the FIN bit. Each implementation must interpret the same bit in the same way.

The C programming language defines the concept of bitfields as a way to associate a symbolic name with a single bit or group of bits in a word. Unfortunately, in the original language as defined by Kernighan and Ritchie, the actual interpretation of bitfields was left up to each individual compiler. The ANSI C standard perpetuates this ambiguity. Unfortunately, such ambiguity is simply unacceptable in telecommunications, in which every single implementation of a communications protocol must agree with every other implementation on the questions of which bit means what.

One popular way to avoid the problem is to use the logical AND and OR operators to test and manipulate individual bits. This workaround imposes a slightly heavier burden on the programmer; moreover, when a computer doesn’t include explicit bit-manipulation instructions as part of its instruction set, this workaround can eat up machine cycles unnecessarily. When the logical operators are used purely to manipulate characters, this approach is essentially painless. However, when integer quantities need to be manipulated, the operations are complicated by byte-ordering problems, as described in the next section.
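By way of illustration, here is a small user-space sketch of the mask-and-AND style. It borrows the flag values that Linux defines in include/uapi/linux/ip.h, but the helper functions themselves are our own:

    #include <stdint.h>
    #include <arpa/inet.h>   /* ntohs, htons */

    #define IP_DF 0x4000     /* don't-fragment flag, as in include/uapi/linux/ip.h */
    #define IP_MF 0x2000     /* more-fragments flag, ditto */

    /* frag_off is the 16-bit flags-plus-fragment-offset field exactly
     * as it sits in the packet, i.e. in network byte order. */
    static int dont_fragment(uint16_t frag_off)
    {
        return (ntohs(frag_off) & IP_DF) != 0;
    }

    static uint16_t set_more_fragments(uint16_t frag_off)
    {
        return htons(ntohs(frag_off) | IP_MF);
    }

Note how the byte-ordering problem sneaks in even here: the masks are defined as host-order constants, so the field must be converted before the AND or OR is applied.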

Another way to circumvent the bit-definition problem is to experiment with the compiler and determine how it allocates bits in a bitfield. You can then define the individual bit or group of bits as part of a structure, and treat the resulting bitfield element like any other integer variable.

The Linux TCP/IP implementation uses a blend of all of these methods. The TCP packet-header code uses bitfields (with conditional compilation, to ensure that the bits are defined in the “correct” order) to define the TCP flag bits, while the IP header code uses logical calculations to set and test the don’t-fragment and more-fragments bits.
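For the curious, here is a condensed rendering of those declarations, adapted from struct tcphdr in the kernel’s include/uapi/linux/tcp.h. The struct name here is ours, and note that the modern header declares eight flag bits: the original six plus the two Explicit Congestion Notification bits, ece and cwr:

    #include <linux/types.h>      /* __u16 */
    #include <asm/byteorder.h>    /* __LITTLE_ENDIAN_BITFIELD et al. */

    /* The same bits are declared twice, in opposite orders; the
     * preprocessor selects the declaration that matches how this
     * compiler allocates bitfields. */
    struct tcp_flag_bits {
    #if defined(__LITTLE_ENDIAN_BITFIELD)
        __u16   res1:4,
                doff:4,
                fin:1, syn:1, rst:1, psh:1,
                ack:1, urg:1, ece:1, cwr:1;
    #elif defined(__BIG_ENDIAN_BITFIELD)
        __u16   doff:4,
                res1:4,
                cwr:1, ece:1, urg:1, ack:1,
                psh:1, rst:1, syn:1, fin:1;
    #else
    #error  "Adjust your <asm/byteorder.h> defines"
    #endif
    };

Whichever declaration the preprocessor selects, the fin bit lands in the same place on the wire: the low-order bit of the flags byte.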

Start At The Big End...No, At The Little End....

As previously noted, the contents of buffers are transmitted one character at a time, starting with the characters that live at low memory addresses and ending with the characters that live at high memory addresses. This procedure works well with some machines but not with others, because of the way each specific computer stores integer values. Obviously, this is a nontrivial problem.

TCP/IP packets contain many fields that store integer values. Many of these values are 16-bit values, while others are 32-bit values. (IPv6 introduces 128-bit values!) In other words, in computers that use 8-bit characters, these integer values can be represented by groups of two or four characters, respectively, of information. Or 16 characters, for those extra-big values.

When information is stored in integer memory locations in Motorola 68000-based computers, the most significant 8 bits of the integer data are placed in the lowest memory address. This arrangement is known as foreword or big-endian orientation. In contrast, however, in the Intel 80x86 and Pentium family of computers, the least significant 8 bits of the integer data are stored in the lowest memory address. Hence, the monikers backword and little-endian.

This disagreement about how integer values should be stored has been going on for as long as binary computers have existed. Motorola and Intel are relative latecomers to this particular religious war. True, for most computer work, the integer-value storage method doesn’t make any difference. It only becomes a headache when you need to move data electronically between computers that disagree about byte order.

By definition, TCP/IP uses the big-endian scheme to store numbers in buffers; in the literature, this fixed order is known as network order. Unfortunately, computers perform calculations in their own way, in a native mode called host order. Consequently, programmers must keep track of whether a particular integer value is being stored in network order or in host order.

To create portable code that takes this machine-specific behavior into consideration, the poor programmers have to use the C library functions htonl and htons to convert a 32-bit or 16-bit value from host to network order, and have to use the inverse functions ntohl and ntohs to convert values from network to host order. In big-endian systems, such as the Motorola 68000, these functions do nothing, whereas in Intel 80x86 and Pentium systems, they swap the bytes as indicated.
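A minimal user-space example of the round trip (the port and sequence numbers are made up for illustration):

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>   /* htons, htonl, ntohs, ntohl */

    int main(void)
    {
        uint16_t port = htons(80);            /* host order -> network order */
        uint32_t seq  = htonl(0x12345678UL);  /* ditto, for a 32-bit value */

        /* Converting back recovers the original values on any machine,
         * big-endian or little-endian. */
        printf("port: %u  sequence: 0x%08lx\n",
               (unsigned)ntohs(port), (unsigned long)ntohl(seq));
        return 0;
    }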

The alternative, which consists of breaking down the integer values into character chunks, is considerably more expensive in terms of the number of CPU cycles required. The use of the conversion functions is a reasonable trade-off, because hand-tooled assembler code can be used to make the implemented functions run very, very fast.

The other aspect of handling integer values relates to the fact that some machines work with integer values only when those values are aligned on some “word” boundary. The IBM System/360 family of computers was famous for this quirk, but it was far from alone in this regard. Any attempt to manipulate word values at arbitrary boundaries on these machines would cause machine exceptions, which meant that the protocol routines had to catch the exception and program around the problem, or else simply not function properly. Fortunately, with more-modern computers, the use of such unaligned integer quantities is handled by the hardware, and just slows down the processing a little.
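In portable C, the usual defense is to let memcpy() do the worrying. The sketch below is in the spirit of the kernel’s get_unaligned_be32() helper, though the function here is our own:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* ntohl */

    /* Fetch a 32-bit, network-order value that may sit at any byte
     * offset within a packet buffer. On strict-alignment machines the
     * compiler emits a safe byte-by-byte sequence; on forgiving
     * hardware it collapses to a single (possibly slower) load. */
    static uint32_t get_be32(const unsigned char *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof v);
        return ntohl(v);
    }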

Even so, a shortsighted implementation of TCP/IP on one networked machine could conceivably cause a malfunction in another machine on that network. Therefore, TCP/IP implementations must be very careful to align integer quantities on word boundaries in the TCP and IP option fields.

“Co-op-er-a-tion,” Say The Muppets

The authors would like to conclude this tour with one final philosophical observation. Network programming is different from virtually every other kind of programming. Some programmers make the mistake of assuming that a master-slave relationship exists between two ends of a communications connection, just as a master-slave relationship exists between a function and the functions it calls. Indeed, some Paleolithic communications systems (such as those built by IBM for its big-business customers) involved the equivalent of a feudal lord and a ring of serfs, with each lowly endpoint doing the bidding of the Master Control Program that lived up the hill in the manor...er, mainframe.

It’s easy to see why some business types tried so hard to erect, in the cyberworld, a duplicate of the real world’s corporate ladder. (Remember, these are the same people who transmuted the relatively simple concept of “data processing” to “management information” and then to “information technology,” in much the same way that the straightforward term “insane asylum” was metastasized into “state mental hospital” and then into the dazzlingly euphemistic “correctional medical facility.”)

“Divide and conquer” is a success formula in the day-to-day take-no-prisoners business world, or in strategic warfare, but it doesn’t work for diplomats. Consider two heads of state in two completely different cultures (say, for instance, the U. S. and China) who are trying to reach an agreement. The will to understand is there, but the ideological chasm between the two sides is wide and brimming with alligators. Negotiators in good faith on both sides need to work together to bridge the difficulties.

Similarly, in a successful communications environment, two programs running on two different machines need to cooperate, instead of competing for control. The programs aren’t each other’s enemy; instead, the ravine full of alligators...er, the communications channel, is the common enemy of both of them. When one program tries to dominate the other, chaos ensues. Clarity and chaos don’t mix. And clarity, as always, is the goal.

Programs — and people — of good will could do much worse than take to heart Dr. Jon Postel’s suggestion: “Be conservative in what you do; be liberal in what you accept from others.”





Comments, suggestions, and error reports are welcome.
Send them to: ipstacks (at) satchell (dot) net
Copyright © 2022 Stephen Satchell, Reno NV USA