uses. The original vision
for the ARPANET was a resource-sharing research network, and users in
one part of the network could employ a computer somewhere on the
network to do their computation. By 1985, the network was not only a
technology used for research and development, it was supporting forms of
collaboration that the original designers had never anticipated.35 Email was
the surprising "killer app,"36 an application of such great value that its use
alone could be a reason for having the network.37
Network redundancy proved to be magic. "We had the realization that
if there's an overload in one place, traffic will move around it," Baran
recalled years later.38 In other words, the network routes around problems.
Redundancy provided even more. Increased size created increased reliability; indeed, redundancy increased exponentially with network size. No
initial call setup work meant there was high efficiency at any bandwidth,
holding time, or scale. And distributed routing could be deployed on any
network topology (as long as all endpoints are connected to the network).39
That last point meant that anyone could deploy an Internet over an existing network. This was bad news for the telephone companies, whose role
was reduced to providing the physical infrastructure, and not much else.
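The observation that "the network routes around problems" can be sketched with a toy example. The graph, node names, and use of breadth-first search below are illustrative assumptions, not the actual ARPANET routing algorithm; the point is only that a redundant topology still yields a path when a node drops out.

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search for a minimum-hop path from src to dst."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route at all

# A small redundant topology: two independent routes from A to D.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

print(shortest_path(links, "A", "D"))     # ['A', 'B', 'D']

# Simulate an overloaded or failed node B: traffic moves around it.
degraded = {n: [m for m in ns if m != "B"] for n, ns in links.items() if n != "B"}
print(shortest_path(degraded, "A", "D"))  # ['A', 'C', 'D']
```

With node B removed, the same search simply finds the alternate route through C, which is the redundancy property Baran described.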
2.3 Creating Other IP-Based Networks
ARPANET's success spurred the development of other networks,40 some
with quite different properties. The U.S. Department of Energy (DoE) set
up MFENet for collaboration in magnetic fusion energy; DoE's high-energy
physicists created one of their own: HEPNet. Academics started CSNET to
enable academic collaboration. Other networks were developed with other
purposes in mind. Two Duke University graduate students set up USENET,
a forum for electronic discussion groups, while the Because It's There Net (BITNET) was started for universities to do email, file transfer, and so on.
(CSNET, USENET, and BITNET are all "store-and-forward" networks, in
which data are stored at each transit machine until it receives an
acknowledgment that the data have arrived at the next device in the
network. This is unlike the Internet's best-effort protocols.)41
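The store-and-forward behavior just described can be sketched in a few lines. The route and hop names below are hypothetical; the sketch simply shows that each machine keeps its copy of the data until the next device acknowledges receipt, so at the end only the destination holds the message.

```python
def store_and_forward(message, route):
    """Sketch: each node holds the message until the next hop acknowledges it."""
    stored = {route[0]: message}        # message originates at the first node
    for here, nxt in zip(route, route[1:]):
        stored[nxt] = stored[here]      # transmit: next hop stores a copy...
        ack = True                      # ...and (we assume) acknowledges it,
        if ack:
            del stored[here]            # only then does this node discard its copy
    return stored

# Hypothetical three-hop route; only the final device retains the data.
print(store_and_forward("hello", ["duke", "relay", "unc"]))
# {'unc': 'hello'}
```

A best-effort protocol, by contrast, would transmit and move on without waiting for any such per-hop acknowledgment.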
In the mid-1980s the value of the ARPANET was clear, and it was
time to bring the work to a new level. The U.S. National Science Foundation (NSF) had already connected the nation's supercomputing centers
over what now seems a very slow network: 56,000 bits per second. Now
the NSF went a major step further: NSF built a high-speed national
network connecting six supercomputing centers42 to the ARPANET. (Supercomputers are machines that are exceptionally fast at the time they are introduced;
they are often used for large scientific or military computations.) That was
not all.
NSF decided to build a general-purpose research network,43 and various
existing regional and local academic networks connected to the system.
The effort was an immediate success: in the first year the system needed
an upgrade to cope with the traffic. While it took the ARPANET ten
years to reach a thousand computers, NSFNET went from two thousand computers at its start to more than two million in 1993.44 NSFNET had one
rule: traffic could be only for the purposes of research and education.
That limitation spurred private growth, including the development of
companies like AOL, which provided a limited, "walled garden" approach
to the Internet for the public. The growth inside and outside NSFNET
demonstrated that commercialization was needed;45 this occurred in
the mid-1990s. This commercialized network (basically the same one
we use today) is larger and faster than NSFNET, but relies on the same
protocols.46
2.4 The Network Stack
To understand the insecurities of the network, it is necessary to delve a bit
more deeply into the way communication occurs over the network. This
section and related sections in the next chapter on security risks are more
technical than other