Draft: Not for Quotation

[Note: This is part of a larger work in progress]

An Introduction to the "governance" of the Internet

By A. Michael Froomkin

Copyright (c) 1995. All rights reserved. Permission granted to view and print one copy for private, non-commercial use.


All of these activities take place in the shadow of national regulation. At least until recently, U.S. and other governments provided much of the start-up funding for the Internet and its predecessors which gave these governments influence. National regulation has only limited effect today, however, because the Internet has become a trans-national phenomenon, beyond the power of any single nation to control.

I. The Internet: an orderly anarchy

(1) What is the Internet?

The Internet is not a thing; it is the interconnection of many things -- the (potential) interconnection between any of millions of computers located around the world.<1> Each of these computers is independently managed by persons who have chosen to adhere to common communications standards, particularly a fundamental standard known as TCP/IP,<2> which makes it practical for any computer adhering to the standard to share data with any of the others even if the two computers are far apart and have no direct line of communication. There is no single program one uses to gain access to the Internet; instead there is a plethora of programs that adhere to the "Internet protocols". A computer connected to the Internet may have any number of users, all of whom may have any of widely varying levels of access to other computers in the network.<3>

The TCP/IP standard makes the Internet possible. Its most important feature is that it defines a packet switching network, a method by which data can be broken up into standardized packets which are then routed to their destination via an indeterminate number of intermediaries.<4> Thus, two parties do not need to be in direct contact to communicate. Under TCP/IP, as each intermediary receives data intended for a party further away, the data is forwarded along whatever route seems most convenient at the nanosecond the data arrives.<5> Neither sender nor receiver need know or care about the route that their data will take, and there is no particular reason to expect that data will follow the same route twice.<6> Indeed, it is likely that multiple packets originating from a single long data stream will use more than one route to reach a far destination where they will be reassembled. This decentralized, anarchic method of sending information appealed to the Internet's early sponsor, the U.S. Defense Department, which was intrigued by a communications network that could continue to function even if a major catastrophe (such as a nuclear war) destroyed a large fraction of the system.<7> This built-in resilience to even major obstacles to communication is the primary reason that any effort to censor the Internet is likely to fail. As John Gilmore put it, "the Net interprets censorship as damage and routes around it."<8>
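The packet-switching idea described above can be sketched in a few lines of code. The following is a deliberately simplified illustration, not real TCP/IP: a message is split into numbered packets, each packet independently takes whichever route is available, and the receiver reassembles the message by sequence number regardless of arrival order. The route names and function names are invented for the example.

```python
import random

def split_into_packets(message, size=8):
    """Break a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def route(packet, routes):
    """Each packet independently takes whatever route is convenient."""
    return random.choice(routes), packet

def reassemble(packets):
    """Order packets by sequence number, regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

message = "the Net routes around damage"
routes = ["via-boston", "via-chicago", "via-london"]

# Send each packet over an independently chosen route...
sent = [route(p, routes) for p in split_into_packets(message)]
random.shuffle(sent)  # ...and let them arrive in any order.

received = reassemble([pkt for _, pkt in sent])
assert received == message  # neither end cared which routes were used
```

Because reassembly depends only on the sequence numbers, the loss of any single intermediary simply means the remaining packets travel by other routes -- which is the resilience property the Defense Department found attractive.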

The prevalence of TCP/IP enables the functions that have come to be identified with the Internet, notably: electronic mail (e-mail), USENET, the World-Wide Web (Web), a common file transfer protocol (FTP), Gopher, the Wide Area Information Server (WAIS), Internet Relay Chat (IRC), Multiple User Dungeons/Domains (MUDs) and Mud Object Oriented (MOOs), to name only the most commonly used functions. A user who has access to one of these functions does not necessarily have access to others, because the user's level of access is determined by the type of computer available, the capacity of the Internet connection, the cost of access, the software available, and the policy of the person or organization operating that computer. In some places, national governments impose additional constraints on Internet connectivity. Today it is still possible for a government to restrict access to the Internet; once a person is connected, however, it is currently beyond the power of any government to limit what is accessible via the Internet.

A. Rapid Growth

In 1983, there were perhaps 200 computers on the precursor of the modern Internet, the ARPANET. As of January 1993, there were more than 1.3 million computers with a regular connection to the system. In 1995, there are perhaps 5 million Internet hosts,<9> with a large fraction located outside the United States.<10> There were only a handful of networks in 1983;<11> a 1993 estimate suggested that the Internet connected to approximately 50,000 networks and 30 million users. Access doubles approximately every year.<12> Current estimates run as high as 40 million users.<13> At this rate of growth, the Internet cannot help but reach far into the general population of industrialized countries.

In the earliest days of the Internet and its predecessors, almost everyone who used the network was either involved in building it or worked for an organization that had a role in its creation. As the network grew, it reached larger and larger circles of people -- first computer scientists, then academics and graduate students in major universities, and then others as well.

The two most successful Internet applications have been e-mail and the World-Wide Web (Web). More than 25 billion e-mail messages will be sent in 1995.<14> The Web is an Internet client-server, hypertext, distributed information retrieval system.<15> Within two years of its introduction to the public in 1991, the share of Web traffic traversing the NSFNET Internet backbone reached 75 gigabytes per month or one percent of the total. By July 1994, it was one terabyte per month,<16> and a recent estimate puts it at 17.6 percent of the total Internet traffic.<17> Web browsing programs such as Mosaic, Cello, or Netscape run on the user's computer and allow the user to navigate the Web in hypertext. Hypertext links inserted by document authors refer the reader to other documents by their Uniform Resource Locators (URLs). A mouse click on a link refers the user to remote documents, images, sounds or even movies that have been made Internet-accessible by their owners.<18>
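The structure of a Uniform Resource Locator is simple enough to show directly. The sketch below uses Python's standard urllib.parse module to split a hypothetical address (the host and path here are invented for illustration) into the parts a browser uses: the protocol to speak, the host to contact, and the document to request.

```python
from urllib.parse import urlparse

# A hypothetical URL of the kind a hypertext link might contain.
url = "http://www.example.edu/papers/governance.html"
parts = urlparse(url)

print(parts.scheme)  # the protocol the browser should use, e.g. "http"
print(parts.netloc)  # the host holding the document, "www.example.edu"
print(parts.path)    # the document itself, "/papers/governance.html"
```

A click on a link simply hands an address like this to the browser, which opens a connection to the named host and asks for the named document -- whether it lives next door or on another continent.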

B. Interconnection of Other Networks

Today's Internet is an amalgam of many networks. In addition to the ARPANET, two other early networking projects, BITNET<19> and CSNET<20>, were influential. In 1987 BITNET and CSNET merged to form the Corporation for Research and Educational Networking (CREN).<21> These networks have been joined by increasing numbers of commercial and nonprofit information service providers who also have chosen to connect to the Internet, including Dow Jones, Telebase, Dialog, CARL, the National Library of Medicine, and RLIN.<22>

The relationship between the Internet and commercial consumer information providers such as America OnLine (AOL), CompuServe and Prodigy continues to evolve. At their inception these services provided no Internet connectivity. They then began to offer limited gateways for the exchange of electronic mail. Now they are expanding their gateways to allow their users to gain access to the Web, and sometimes to other Internet services as well. These services are growing rapidly.<23> The extent to which they intend to allow persons outside their service to have Web or FTP access to information generated by subscribers is less clear, although it seems reasonable to expect market pressures to encourage this development before long.

The increasing internationalization and commercialization<24> of the Internet, as well as the simple increase in number of users, creates new constituencies. What had once been the preserve of computer science and electrical engineering departments, of computer nerds, now raises legal, social and commercial policy questions ranging from pricing to privacy.<25> The traditional policies and mores of the Internet risk becoming victims of their success. Fortunately, these policies and mores have considerable resilience due in large part to the way that they have evolved.

(2) A Short Social and Institutional History of the Internet Protocols

As the Internet rests on the use of particular protocols, the history of the Internet begins with the story of how those protocols came into being. The negotiated consent procedure used to decide on those early protocols became the template for subsequent decisionmaking on technical standards; the procedure has also had some effect on the means by which social norms are created in virtual communities.

A. The pre-historic period: 1968-1972

In 1968, the Advanced Research Projects Agency (ARPA)<26> expressed interest in a packet-switching network. That summer, the four existing ARPA computer science contractors (UCLA, SRI, UCSB and The University of Utah), met to discuss what this network might look like.<27> Although they came up with more questions than answers, "we did come to one conclusion: We ought to meet again. Over the next several months, we managed to parlay that idea into a series of exchange meetings at each of our sites, thereby setting the most important precedent in protocol design" -- that of a nomadic, collegial, open and consensus-based process.<28>

Most of the attendees at the early "Network Working Group" meetings were graduate students at UCLA and the University of Utah. In the words of a founder-member, "we expected that a professional crew would show up eventually to take over the problems we were dealing with."<29> Instead, the initial group of six people found themselves in charge by default.<30> By March 1969, the de facto design team concluded that it would have to begin documenting its work, if only to protect its members from recriminations later.<31> The Network Working Group issued its first documentation in April 1969.<32> Steven Crocker, the author of the first standards document, entitled it a "Request for Comments" (RFC), because "we were just graduate students at the time and so had no authority. So we had to find a way to document what we were doing without acting like we were imposing anything on anyone."<33> Crocker later recalled that,

I remember having great fear that we would offend whomever the official protocol designers were, and I spent a sleepless night composing humble words for our notes. The basic ground rules were that anyone could say anything and that nothing was official. And to emphasize the point, I labeled the notes "Request for Comments."<34>

Originally, RFCs were working documents to be used within a small community.<35> They included everything from careful papers to notes of a discussion held between two people.<36> "If a protocol seemed interesting, someone implemented it and if the implementation was useful, it was copied to similar systems on the net."<37>

As the network protocol and hardware began to function, and the number of linked machines on what came to be known as the ARPANET grew to double-digits,<38> attendance at Network Working Group meetings grew to as many as 100 per session. With the increasing number of participants, RFCs proliferated. RFC 1 issued in April 1969; RFC 50 issued one year later, in April, 1970; RFC 100 issued ten months after that, in February, 1971; the rate of production then roughly tripled, with RFC 400 issued in October 1972.<39>

B. Formalization Begins: 1972-1986

The informal Network Working Group formally became the International Network Working Group (INWG) at the October 1972 International Conference on Computer Communications in Washington D.C. Vinton Cerf, one of the original graduate students, organized and chaired INWG for the first four years.<40> Meanwhile, work on the TCP protocol continued. Concurrent implementations occurred at two sites in the U.S. and at University College London.<41>

In 1973, the U.S. Defense Advanced Research Projects Agency (DARPA) initiated the "Internetting project".<42> In July 1975, the ARPANET was transferred by DARPA to the Defense Communications Agency (now the Defense Information Systems Agency) as an operational network.<43> In 1979, Cerf, who had become the DARPA program manager for the Internet project, established the Internet Configuration Control Board (ICCB) as an informal committee to advise DARPA, and to guide the technical evolution of the protocol suite.<44> In recognition of the important role played by the government in funding and managing the ARPANET, the government had ex officio seats on the ICCB, with Cerf holding the DARPA seat. The ICCB set the basic structure for everything that was to follow.<45> Although the ICCB had a U.S. focus, dictated in part by the funding of the ARPANET, this was partly offset by the parallel foundation of the International Collaboration Board (ICB) which served European and Canadian interests, notably issues arising from the Atlantic Packet Satellite Network (SATNET), which included Norway, the United Kingdom, Italy and Germany. The ICB also addressed requirements of transatlantic and NATO use of the Internet protocols.<46>

In 1983, Cerf's successor at DARPA reorganized the ICCB into several task forces and dubbed the reorganized group the Internet Activities Board (IAB).<47> Like its predecessors, the IAB was a small, self-selected, self-perpetuating body. Also like its predecessors, the IAB was a very open organization from its inception. It publicized its meetings, published its minutes, welcomed standardization proposals from anyone with the time and energy to make them, created informal task forces open to all comers that developed new draft standards, and subjected all proposed standards to public comment before adopting them. But the IAB itself remained something of a small, private, highly meritocratic, club which could do what it liked, subject to the very real constraint of peer review and the need to conform at least loosely to its federal paymaster's needs. Indeed, a representative from DARPA had one IAB seat ex officio.

C. Further Formalization: 1986-1992

The public's role in Internet standard setting was formalized in 1986, when the IAB created the Internet Engineering Task Force (IETF). Although the IAB also created several other task forces at the same time,<48> the IETF was special as it was asked to take over "the general responsibility for making the Internet work and for the resolution of all short- and mid-range protocol and architectural issues required to make the Internet function effectively."<49> In so doing, the IAB divested itself of the main part of the standard-creation work, and relegated itself to making the final decisions in a supervisory, appellate and management role. The IETF became the main forum in which the technical standards were proposed, tested, and debated.<50>

In 1989, the IAB, which until that time oversaw many "task forces," changed its structure to leave only a (somewhat changed) IETF and the IRTF.<51> Meanwhile, the ARPANET formally died in 1989, but no one noticed, as its functions were already being performed by the Internet.<52>

As the Internet became increasingly large and international, the IAB explored the possibility of associating with an existing standards body, but was unable to come to an arrangement it found satisfactory.<53> Matters were further complicated by a dispute with the International Standards Organization (ISO), the voluntary, nontreaty, umbrella body for international standards in computers and telecommunications. ISO refused to accept TCP/IP as an ISO standard, preferring a competing system called Open Systems Interconnection.

D. Institutionalization: 1992-

The Internet standard-setting process has evolved since a few graduate students anxiously wrote up their notes in 1969. As described in this section, the current system permits unlimited grassroots participation, through the IETF, in the detail work of creating and debating standards. This relatively large, open process is balanced by allowing a specialist group, the IAB, to retain a veto power over proposed standards, as well as considerable control over the terms of reference given to the nearly ad hoc working groups that create them. Where once the IAB was entirely self-selected and unaccountable other than through peer review and the marketplace, now the IAB has some quasi-democratic legitimacy since its members are selected by the IETF and the Internet Society, both bodies that anyone can join for a small fee.

(i) The Internet Society (ISOC)

The Internet Society (ISOC) was founded in January 1992 as an independent, international "professional society that is concerned with the growth and evolution of the worldwide Internet, with the way in which the Internet is and can be used, and with the social, political, and technical issues which arise as a result."<54> Vinton Cerf became the first president of the ISOC, established as a non-profit organization incorporated in Washington, D.C.<55>

The ISOC Board of Trustees must approve all nominations to the IAB.

(ii) The Internet Engineering Task Force (IETF)

The IETF plays a crucial role in the standard-setting process.<56> In particular, the IETF does most of the actual day-to-day work of standard writing, and randomly selected members of the IETF control the nominations for the IAB, the body that oversees the process.<57> The IETF has no general membership; instead, it is made up primarily of volunteers, many of whom attend the IETF's thrice-yearly meetings. Meetings are open to all; similarly, anyone can join the e-mail mailing list in which potential standards are discussed.<58> Although the IETF plays a role in the selection of the IAB, it is not part of or subject to the IAB or the ISOC. Indeed, it is not entirely clear to the membership who if anyone "owns" the IETF, or for that matter who is liable if it is sued.<59>

The IETF's meetings attract upward of 700 attendees, with many others participating in the standards process entirely by e-mail.<60> Approximately one-third of the attendees at each meeting are attending for the first time.<61> Everyone who attends a meeting has an equal right to participate. The flavor of IETF meetings can perhaps best be explained by quoting the "Dress Code" for attendees:

Since attendees must wear their name tags, they must also wear shirts or blouses. Pants or skirts are also highly recommended. Seriously though, many newcomers are often embarrassed when they show up Monday morning in suits, to discover that everybody else is wearing t-shirts, jeans (shorts, if weather permits) and sandals. There are those in the IETF who refuse to wear anything other than suits. Fortunately, they are well known (for other reasons) so they are forgiven this particular idiosyncrasy. The general rule is "dress for the weather" (unless you plan to work so hard that you won't go outside, in which case, "dress for comfort" is the rule!).<62>

The IETF is divided into several areas of specialization, with an area chairman who supervises work in that area (and has a virtual veto on the creation of new projects in that area). Each area is further subdivided into several working groups. A working group is a group of people who work under a charter to achieve a goal such as the creation of an informational document or the creation of a protocol specification. Working groups ordinarily have a finite lifetime and are expected to disband once they achieve their goal.<63> As in the IETF, there is no official membership for a working group. In practice, a working group member is anyone on the working group's e-mail list.<64>

The IETF has two bureaucratic elements that ensure its continued existence and cohesion.

1. The IETF Secretariat

The IETF Secretariat organizes the thrice-yearly meetings and provides institutional continuity between meetings, e.g. by maintaining a World-Wide Web site. The Secretariat is administered by the Corporation for National Research Initiatives, with funding from U.S. government agencies and the Internet Society. The Secretariat also maintains the on-line Internet Repository, a set of IETF documents.<65>

2. The Internet Engineering Steering Group (IESG)

The Internet Engineering Steering Group (IESG) effectively controls the progression of standards towards final acceptance. In particular, the IESG controls whether a proposed standard can be promoted to draft standard status and then to final approval. IESG custom requires that the proposed standard have been implemented in at least two functioning products to be promoted to Draft Standard status.<66> Appeals from IESG decisions go to the IAB.<67> The IAB chooses the IESG from nominations made by a nominating committee drawn from the IETF. The IETF chair is also the chair of the IESG. The IAB can overrule decisions made by the IESG.<68>

(iii)The Internet Architecture Board (IAB)

In 1992, the Internet Activities Board (as it then was) failed to approve a recommendation from the IESG<69> regarding the IP protocol's management of the rapidly increasing number of computer addresses on the Internet. The IESG's recommendations were based on the prior work of an IETF task force, but the IAB felt that the solution failed to address long-term issues. The IAB thus returned the proposal with questions for further discussion. This set off a firestorm of criticism of the IAB as unrepresentative and undemocratic and threw into question the legitimacy of the IAB.<70>

Ironically, at about the same time as the IAB was making its controversial decision, the IAB agreed to become a subsidiary of the ISOC with responsibility for "...oversight of the architecture of the worldwide multi-protocol Internet" including continued standards and publication efforts.<71> During INET92 in Kobe, Japan, the ISOC Trustees approved a new charter for the IAB to reflect the proposed relationship.<72> As part of the move, the IAB changed its name to the Internet Architecture Board to reflect the final separation of the IAB from direct participation in the operational activities of the Internet.<73> The IAB thus became a technical advisory group of the ISOC responsible for providing oversight of the architecture of the Internet and its protocols, although the membership of the IAB remained unchanged.

Nevertheless, controversy over the IAB's decision and powers continued. In particular, members of the Internet community objected to the self-perpetuating nature of the IAB, the potentially unlimited terms granted to members, and its vague and general powers over Internet Standards.<74> As a result of these complaints, Internet Society President Vinton Cerf called for the formation of a working group to examine the procedures governing the IAB. Cerf asked Steve Crocker, the author of the first RFC,<75> to chair a working group charged with making curative proposals.<76> As the committee itself recognized, it was easy to say that the IAB should represent the consensus choice of the participants in the IETF, but that body's very openness made the idea of polling the membership problematic:

[T]here was a strong feeling in the community that the IAB and IESG members should be selected with the consensus of the community. A natural mechanism for doing this is through formal voting. However, a formal voting process requires formal delineation of who's enfranchised. One of the strengths of the IETF is there isn't any formal membership requirement, nor is there a tradition of decision through votes. Decisions are generally reached by consensus with mediation by leaders when necessary.<77>

Traditional voting works badly as a means of making representative decisions when the membership is in constant flux and open to all.

The working group returned with consensus recommendations that worked a radical change in the selection of the IAB. Instead of the previous, somewhat clubby, system, the committee recommended opening up the process. These recommendations were adopted, and as a result the IAB's charter was revised again. There are now 13 voting members, composed of the IETF chair and 12 full members who each serve for a term of two years. There are no term limits. Six IAB members are selected each year by the ISOC Board of Trustees from a list of nominees prepared by a nominating committee of the IETF.<78>

The IETF nomination process relies on volunteerism and randomness to achieve a fair result.<79> The nomination committee is formed annually and consists of a non-voting chair designated by the ISOC and seven members picked at random from a pool of volunteers. Any person who took part in two IETF meetings in the last two years may volunteer for this pool.<80> The nominating committee must put forward at least one name for each opening, but may forward more than one name per opening if it wishes. The ISOC Trustees then select the new members from among the nominees.<81>

The IAB has a recursive relationship with the IETF. IETF members nominate the IAB; in turn the IAB appoints a new IETF chair and all other IESG candidates from a list provided by the IETF nominating committee.<82> The IAB is also the body that hears appeals from decisions of the IESG regarding Internet standards, and the body with ultimate control over the RFC creation process.

E. The Internet Standard Creation Process Today

The Requests for Comments series remains the heart of the Internet Standards process.<83> (It bears emphasis that not all RFCs are Internet Standards, but all Internet Standards are RFCs.)

New Internet Standards begin with a Proposed Standard, which can originate from anyone. The IESG then writes a charter for an IETF working group and assigns a working group chair. The working group works via e-mail and at IETF meetings.

Once approved by the IETF and the IESG as a worthy subject for discussion, a proposed standard is then discussed in the IETF working group. IETF working groups reach decisions by negotiated consent -- the working group is expected to reach a "rough consensus" about decisions. This means something less than full unanimity, but more than a simple majority of those present; indeed voting is discouraged in favor of finding a way to agree on what was the dominant opinion of those attending the meeting. Working groups have no formal membership and hence no voting. Parties who believe that their voices were unjustly ignored must either live with that result or appeal it to IETF leaders, the IESG, and ultimately to the IAB.<84> Once the working group has a "rough consensus," it submits its work to the IESG for public review and ultimate approval as a Draft Standard.<85>

A Proposed Standard can usually become a Draft Standard when there exist "at least two independent implementations which have interoperated to test all functions, and the specification has been a Proposed Standard for at least six months."<86> The (informal) requirement that there be at least two products using the standard is important. It means that no completely proprietary standard can ever become an Internet Standard. If a company wishes to retain full control over a patented technology, for example, it is free to do so, but its product cannot become an Internet Standard, although it can be "incorporated by reference" into a standard if the proprietors agree to give licenses to the technology on "specified, reasonable, non-discriminatory terms."<87>

When a Draft Standard has demonstrated its worth in the field and at least four more months have passed it can become a full Internet Standard and be published as an RFC.<88>
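The maturity ladder just described -- Proposed Standard, then Draft Standard, then full Internet Standard -- can be sketched as a pair of promotion checks. This is only an illustration of the conditions quoted in the text; the function names and argument shapes are invented for the example, and the real process of course also involves human review at each step.

```python
def can_become_draft_standard(months_as_proposed, interoperating_implementations):
    """Promotion to Draft Standard: at least two independent,
    interoperating implementations, and at least six months as a
    Proposed Standard."""
    return interoperating_implementations >= 2 and months_as_proposed >= 6

def can_become_internet_standard(months_as_draft, demonstrated_in_field):
    """Promotion to full Internet Standard: at least four further
    months, plus demonstrated worth in the field."""
    return demonstrated_in_field and months_as_draft >= 4

# One implementation is never enough, however long the wait...
assert not can_become_draft_standard(months_as_proposed=12,
                                     interoperating_implementations=1)
# ...and two implementations still must wait out the six months.
assert not can_become_draft_standard(months_as_proposed=5,
                                     interoperating_implementations=2)
assert can_become_draft_standard(months_as_proposed=6,
                                 interoperating_implementations=2)
assert can_become_internet_standard(months_as_draft=4,
                                    demonstrated_in_field=True)
```

The two-implementation gate is what gives the process its anti-proprietary bite: no matter how polished a single vendor's product, the specification cannot advance until an independent second implementation interoperates with it.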

F. Coping with Growth: Can the Internet Standards Process Continue?

As the IETF grows, the Internet Standard process is showing signs of strain. The increased attendance at IETF meetings means that informal relationships among participants are more difficult because personal and professional familiarity declines. Decisions regarding standards now have important financial consequences for would-be providers of Internet hardware and software, and tempers can flare when tens of millions of dollars are at stake.<89> Furthermore, "[w]hile more people are participating, the number of senior, experienced contributors has not risen proportionately. ... Without [their] guidance, working groups run the serious risk of hav[ing] good consensus about a bad design."<90>

IETF veterans are concerned about these problems. They have reacted to them by treating them as if they were an Internet Standards problem. As with any technical problem, they are discussing it, writing papers about it, and trying to design systems that are participatory and fault-tolerant. Changes are proposed in incremental steps, and everyone understands that if a design fails in the field it will have to be replaced.<91>

(3) Force of Internet Standards

A. Conform or Else

From 1973 until the late 1980s, the U.S. government provided a significant share of the funding for the Internet through the National Science Foundation, the Defense Department, the Department of Energy and other departments. This gave the U.S. government considerable control over the network, although this control shrank as the network became larger, more international, and increasingly capable of carrying large amounts of traffic over private (non-governmental) connections.

From the early 1970s, software applications were being developed to take advantage of the emerging TCP/IP protocols. In order to test these applications against the moving target presented by the ever-evolving TCP/IP, the architects of the new network engaged in a series of "Bake Offs". The Bake Off principle was simple: all applications competing in a Bake Off would "shoot kamikaze packets at each other" and see if the recipient crashed.<92> Winner/survivors would be hailed as conforming to the standard and fit for use.

The initial TCP suite was completed in 1978. In 1982, the ARPANET's administrators decided to convert the network to TCP/IP. The network administrators were concerned that users would not take the trouble to change from the previous protocol, called Network Control Protocol (NCP). In order to signal that the move was imminent, the network managers programmed the computers that formed the "backbone"<93> of the ARPANET to reject all data transmitted under the old protocols for a day. Even this failed to convince some users, so the ARPANET administrators repeated the demonstration for a two day period. On January 1, 1983, the NCP was turned off permanently, forcing all network users to use TCP/IP or lose their connectivity.<94>

A similar, if less dramatic, fate awaits those who stray from the Internet Protocols today. Unless they persuade others to switch also, there will simply be no one on the other end of the line. It is of course possible to build a parallel network that is not part of the Internet. The French Minitel system, for example, has almost no Internet connectivity; whether the French now perceive this as an advantage is open to doubt. A better example of a parallel network is the "Intelink," created by the U.S. intelligence community to serve as a secure alternative network for communications too important to be entrusted to the Internet.<95> Similarly, on-line services such as AOL and CompuServe began as autonomous networks; the competitive trend, however, seems to be to link these services to the Internet as quickly as possible.

B. Debate Over Whether an RFC is "Law"

RFC 1602 describes the Internet Standards Process. Unlike some of the RFCs produced in accordance with its dictates, RFC 1602 is not an "Internet Standard" but merely purports to be an "informational" document.<96> Indeed, the IETF/IAB/ISOC standard-making structure is designed with technical standards in mind; there is no agreed procedure for defining formal social standards with the same formality as the technical ones. Nevertheless, RFC 1602 is a critical document. It did more than codify prior practices; it changed them.<97> Unlike most RFCs, which are authored by one or two people, RFC 1602 has institutional authors, the IAB and the IESG themselves. Thus any would-be standard that was not prepared in accordance with its dictates could expect to face difficulties winning the IAB and IESG approval that is required to become an Internet Standard.

Recently, the intellectual property provisions of RFC 1602 have caused controversy. The controversy caused members of the IETF and the IAB to reconsider the status of an "informational" document that seems to have as much force as a technical standard, and to debate whether "informational" RFCs are "law".

RFC 1602 states that proprietary specifications may be incorporated within the Internet standards process, so long as the owner makes the specification available on-line and grants the ISOC a no-cost license to use the technology or works in its standards work. The owner must also promise that "upon adoption and during maintenance of an Internet Standard, any party will be able to obtain the right to implement and use the technology or works under specified, reasonable, non-discriminatory terms."<98> Worst of all from the point of view of a corporation considering the donation of a technology, RFC 1602 requires that the donor warrant to the ISOC that the donated technology "does not violate the rights of others."<99>

Understandably, even corporations willing to promise to give the required licenses have been hesitant to provide a warranty against all patent infringement claims, especially those that may be unknown at the time of the donation. Some corporations have also been unwilling to promise to provide licenses on "specified, reasonable, non-discriminatory terms" and have sought to limit their promise to "reasonable" terms. As drafted, however, RFC 1602 creates no mechanisms for exceptions or alterations to its onerous intellectual property provisions. Indeed, it is unclear who, if anyone, would be authorized to negotiate with a potential donor as to its representations, since the donation is to the ISOC, but the IETF first does the work of defining the draft standard -- which cannot happen until the legalities have been sorted out.

Inability to decide these questions paralyzed at least one IETF working group for months. The group's inability to complete its work culminated in an appeal to the IAB, which debated the issue in April 1995. The IAB's minutes suggest that the group was divided as to the legal and moral status of RFC 1602. Some called it a rule, others a documentation of current practice. One attendee argued that the IETF "is supreme and can change things whenever it wants. What is the status of RFC 1602--is it the law?"<100> The IAB concluded that "If RFC 1602 is the law, it is broken." The IAB also noted that,

Another cause for confusion is that RFC 1602 is "only" informational. (It is hard to imagine that RFC 1602 should appear on the standards track, since it describes processes, not protocols.) Perhaps we need a "Process" series of documents that includes an extended last call process like a standard but not the various levels, the requirement for interoperating implementations, etc. The series might also include a (standard) escape route for quickly changing the process when it is found to be broken.<101>

The IAB thus issued a public statement a few days later that, "RFC 1602, which defines how the IETF/IESG should operate, has a procedure for adopting external technologies which is unimplementable. ... The IAB supports ... revisions to RFC 1602 which would: (a) designate an owner of the intellectual property rights process; (b) specify a procedure to negotiate variations."<102> Of course, the only body that can initiate revisions to RFC 1602 is the IETF.

(4) De Facto Internet Standards Formed Outside the RFC Process

TCP/IP and the other Internet Standards do not define what software or hardware should look like nor, save in the most general terms, what those tools should do. The standards set out in the RFCs only define how those various tools should communicate with each other. And, of course, the Internet Standards are silent on what users should do with their tools. Informal, but powerful, standards have evolved parallel to the more formal standards process and have created norms regarding which tools are preferred and what is acceptable use of certain Internet applications. The following sections give three examples of the varying ways in which these standards take shape.

A. Market Induced Standardization

Market pressures create de facto standards. This is not just an Internet phenomenon, as anyone who owned a BetaMax can testify.<103> The computer industry, and the Internet in particular, are ruthlessly competitive markets. Information propagates quickly on the Internet; if someone learns of a superior product, or of a defect in a new product, this information can be shared with interested parties via newsgroups and mailing lists. Furthermore, if the product is freeware<104> or shareware<105> the Internet itself can be used to distribute it.

Products judged to be superior can quickly establish themselves as a standard, as the experience of programs such as Pretty Good Privacy (PGP)<106>, PKZip,<107> and Netscape Navigator demonstrate. Netscape Navigator, a product of the Netscape Communications Corporation, provides a particularly good example because it caught on so quickly. Netscape Navigator is a graphical browser for the World-Wide Web, that is, a program that lets users read "pages" created by others and stored on their computers. The company gave away millions of copies of version 1.0 to anyone who wanted it.<108> Netscape Communications Corporation makes its profits by offering support to users who want more help than the documentation provides, and especially by selling server versions of the software to companies that want to establish Web pages of their own, i.e. to become writers of Web material. And it has started charging corporate users for version 1.1 of its product. Netscape's products have several features that make its servers attractive to corporate customers. A communication between a Netscape server and Netscape Navigator can easily be encrypted for security, which is important if credit card or other sensitive information is being transmitted; communications originating from other servers lack this capability. Similarly, Netscape offers several enhancements to the basic Web protocols that allow more interesting graphics to be displayed if both parties are using its products.

Once a product becomes a de facto standard, it is tremendously difficult to dislodge.<109> Most experts agree that IBM's OS/2 Warp is vastly superior to Microsoft's Windows;<110> but OS/2 Warp has a much smaller market share.<111> As a result, far more consumer and business PC software is written for Windows than for OS/2, further entrenching Windows's domination of the marketplace.

B. Delegation: the USENET Example

USENET, known to many of its users as "news" or "netnews", is a distributed bulletin board system. A bulletin board system is a means of communication by which users leave messages for anyone else to read. Subsequent readers can reply, leading to dialog, discourse, or cacophony as the case may be. (Imagine an infinitely long chalkboard, nearly indelible chalk, a prominent location, and a sign that says, "any comments?") A computer bulletin board often resides on one computer; users wishing to participate in its discussions must establish contact directly with that machine. A distributed bulletin board system is a bulletin board that is replicated on many computers.

Technically, Usenet is a separate network from the Internet: not all Internet capable computers subscribe to Usenet, and not all Usenet hosts are on the Internet, although the overlap is large and growing. Usenet has been called "the largest decentralized information utility in existence."<112> Usenet is divided into thousands of "newsgroups," each effectively its own specialized bulletin board. Newsgroups are organized in a topical and alphabetical hierarchy, each with an independent name such as misc.legal, rec.pets.cats, or comp.risks.<113> Topics include social, cultural, and hobbyist information as well as newsgroups devoted to the sciences and arts. As one might expect, there is a newsgroup for practically every software and hardware product in widespread use, and indeed a newsgroup for almost every social, sexual, political, and recreational practice.

Usenet operates on a very different technical principle from the Web. On the Web, information resides on one host computer and users instruct their clients to retrieve a copy when they wish to view it. On Usenet, messages "posted" by a user are directed to an existing newsgroup. The message is then circulated to every participating machine that carries that newsgroup,<114> to be held there for the convenience of local users. Messages do not necessarily travel quickly; delays of a day or more for a message to travel a long distance via many intermediaries are infrequent but not unheard of. Since the full Usenet feed now exceeds seven megabytes a day,<115> carrying it imposes significant storage costs on host machines, particularly since it is common practice to archive groups for a week to a month in order to accommodate users who do not check news every day. Many sites therefore do not carry every group.

The administration of each site (computer or LAN) that participates in Usenet is free to carry as few or as many groups as it wishes. Since a newsgroup concerned with any but the most parochial topic needs wide propagation in order to function effectively, the smooth functioning of Usenet depends on the ability of the operators to come to some agreement as to which groups to carry, and what those groups should be called. The coordination problem is particularly acute because the various system administrators have essentially no contact with each other, and usually feel that they have better things to do with their time than to worry about whether economics qualifies to be under the "sci" hierarchy or should be relegated to "misc" or "soc.religion". Sites that have unlimited disk space can solve the problem by carrying everything; ordinary sites must choose, yet often lack the knowledge or inclination to do so.

Usenet propagation used to be particularly reliant on the good offices of the "backbone," known to its detractors as the "backbone cabal". Before communications costs dropped and before it became easy to transmit Usenet news through an Internet link, the large majority of Usenet traffic propagated widely due to the efforts of a small number of sites that were willing and able to pay the long-distance telephone charges required to ensure that news was transmitted swiftly across the country and across the globe. With propagation came power, and these sites, notably UUNet, were able to shape Usenet policies.<116> Although UUNet no longer has nearly as much control over Usenet propagation, the policies -- and the reliance they have engendered in most site administrators -- live on.

Like the Internet Standards that they resemble, Usenet newsgroup creation procedures have become formalized in a set of Guidelines.<117> The Guidelines require that the proponent of a new group in the major hierarchies draft a proposed charter, submit it for discussion in news.announce.newgroups and any other appropriate newsgroups, and amend it as needed given the comments received.<118> When consensus has been achieved that the charter is ready, a Call for Votes is issued. Voting has three aims: (1) To demonstrate that there are enough interested persons to justify creating the group; (2) to ensure that the group does not duplicate other existing groups (if it did, the participants in those groups would presumably vote against the new group so as not to dilute their old one); (3) to ensure that the proposed name for the newsgroup fairly represents its contents and is properly located in the hierarchy.<119>

Votes are collected and counted by a neutral party, drawn from the "Usenet Volunteer Votetakers".<120> The voting period varies from 21 to 31 days, and anyone with Usenet access is welcome to vote either for or against the proposed newsgroup. Any newsgroup that gets 100 more valid yes votes than no votes and also receives at least a 2/3 majority in favor of creation is declared passed.
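The two-pronged voting rule described above can be expressed as a simple check. The sketch below is illustrative only; the actual tally is made by the volunteer votetakers, and the function name is invented for this example:

```python
def newsgroup_vote_passes(yes: int, no: int) -> bool:
    """Apply the two Usenet newsgroup-creation thresholds:
    (1) at least 100 more valid yes votes than no votes, and
    (2) yes votes must be at least 2/3 of all votes cast."""
    total = yes + no
    return total > 0 and (yes - no) >= 100 and yes * 3 >= total * 2

# 300 yes / 150 no passes (margin of 150, and 300/450 is exactly 2/3);
# 150 yes / 100 no fails the 100-vote margin;
# 400 yes / 250 no has the margin but falls short of the 2/3 majority.
```

Both prongs must be satisfied; a hotly contested proposal can collect hundreds of yes votes and still fail on the supermajority requirement.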

New Usenet groups are created by sending out a "control" message, which is an ordinary message with a predetermined form; groups are deleted in the same fashion, although legitimate decisions to delete a group are rare. Many users of Usenet have the technical capability to send a "control" message, and the obstacles that might prevent the rest from doing so are usually easy to evade. As a result, spurious messages are not uncommon and most sites that carry Usenet therefore do not act immediately on control messages. Instead they send them to the system administrator who must then decide whether to honor them. As noted above, however, system administrators frequently do not want to be bothered with having to figure out what is a legitimate group that is likely to be carried by other systems around the world, and what is a practical joke. In practice, therefore, many system operators program their systems to accept control messages only from one source: tale@uunet.uu.net, which is the Usenet control address for David C. Lawrence. Lawrence has been active in Usenet since it was founded, and is an employee of UUNet, which remains a major Usenet news disseminator. He also created and guides the "Usenet Volunteer Votetakers."
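The delegation described above amounts to a one-entry trust list in each site's news-processing software. The sketch below models that logic; the function name and return strings are invented for this illustration, and only the trusted address comes from the text:

```python
# Sites that trust a single source for automatic newsgroup creation
# and deletion in effect apply a filter like this one.
TRUSTED_SENDERS = {"tale@uunet.uu.net"}  # the de facto sole authority

def handle_control(sender: str, action: str) -> str:
    """Act on a control message automatically only when it comes from
    a trusted source; otherwise refer it to the system administrator."""
    if sender in TRUSTED_SENDERS:
        return "execute " + action
    return "queue for administrator: " + action

r1 = handle_control("tale@uunet.uu.net", "newgroup rec.pets.cats")
r2 = handle_control("prankster@forged.example", "rmgroup misc.legal")
```

The filter does not prevent forgery of the trusted address; it simply raises the bar enough that casual pranks fail, which in practice has been sufficient.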

Many Usenet administrators rely on "Tale" to administer the newsgroup creation process and to send out control messages only for those groups that have complied with the guidelines. While there remain other ways of creating a group, notably the "alt" hierarchy, the alternatives propagate less widely than groups in the major Usenet hierarchies that Tale administers. As a result, a large number, perhaps a majority, of sites have effectively delegated administration of the newsgroup creation process to one person.

C. Peer Pressure

Some de facto Internet standards are set by a combination of peer pressure and socialization. The main channel by which these norms are communicated, other than one-to-one education, is electronic documents, often entitled "FAQs" for (answers to) Frequently Asked Questions. These documents, prepared by self-selected volunteers, attempt to distill some Internet wisdom, or Internet norms, for newcomers.

Two types of documents deserve mention. While most FAQs are identified with a particular topic, newsgroup or mailing list (e.g. the alt.fan.dan-quayle FAQ, or the net.loons FAQ), there are also meta-FAQs about how to behave on the Internet. Many of these attempt to set out the basic rules of conduct that have come to be called "netiquette." Basic rules of netiquette include asking permission before forwarding a private message, respecting copyrights, reading the FAQ before asking silly questions, and remembering not to send messages in all capital letters except for emphasis.

A FAQ is typically the work of one person, or a small committee, who self-selects. More recently there have been attempts to codify and propagate basic ideas of netiquette in a more organized way, through the RFC process and under the auspices of the various Internet institutions such as the ISOC. For example, an IETF working group has produced a draft RFC describing "a minimum set of guidelines for Network Etiquette (Netiquette)."<121> The draft RFC contains guidelines for e-mail, Internet talk,<122> mailing lists, Usenet and other Internet applications. The basic suggestions are not controversial. For example, the e-mail section directs that one should "[n]ever send chain letters on the Internet", the Internet talk section cautions "that talk is an interruption to the other person", and the mailing list and Usenet sections note that "a large audience will see your posts. That may include your present or your next boss."<123>

D. Mass Revenge/Vigilante Justice: The Spam Problem and its (Partial) Solution

In Internet parlance, "to spam" is to e-mail or post the same message over and over in different fora, and "a spam" is the e-mail or Usenet<124> posts that result from spamming.<125> Spamming can be carried out in any number of ways. A person<126> can send mail to many mailing lists, directly to users whose names have been culled from mailing lists or newsgroups, or by individually posting the message to many (even all) newsgroups. The newsgroup method is probably the most common because it is the simplest to implement.

Usenet-compatible news reading software allows the user to indicate the group(s) to which a message (called a "post" or "posting") should be directed. A message can easily be "cross-posted" to more than one group. This in itself is not a spam. Often, cross-posting is appropriate since a topic may legitimately be of interest to more than one group. A legal question about the Clipper Chip, for example, legitimately belongs in talk.politics.crypto, misc.legal, alt.privacy.clipper and maybe misc.legal.moderated as well. Cross-posting a message ensures that responses to a message will be cross-posted as well, allowing cross-fertilization among different newsgroups on topics of mutual interest. Cross-posting has another, crucial property: it ensures that the recipient need only download and read the message once.

Only one copy of a cross-posted message is stored on the host machine. Its headers identify it as a message that belongs in multiple newsgroups. Most newsreading software allows users to track which messages they have read across newsgroups. If the user reads a cross-posted message and then encounters it again in another group, the software will automatically mark the message as already read in the second group, saving the user the annoyance of viewing it many times.
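The read-tracking behavior described above can be modeled in a few lines. This is a simplified sketch, not an actual newsreader; the Message-ID is invented for the example, though the header names follow the standard Usenet article format:

```python
# One cross-posted article: a single Message-ID, several newsgroups.
article = {
    "Message-ID": "<example.1@host.example>",  # hypothetical ID
    "Newsgroups": "misc.legal,talk.politics.crypto,alt.privacy.clipper",
}

seen = set()  # Message-IDs the user has already read

def present(article):
    """Show the article only if it has not been read in another group;
    mark it as read so subsequent groups skip it."""
    msg_id = article["Message-ID"]
    if msg_id in seen:
        return False
    seen.add(msg_id)
    return True

shown_in = [g for g in article["Newsgroups"].split(",") if present(article)]
# The reader sees the article once, in the first group it appears in.
```

Because spam is posted as many separate articles, each with its own Message-ID, this mechanism offers the reader no protection against it.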

A spam is typically not cross-posted. Instead the identical text is individually posted to every newsgroup with no consideration of the message's relevance to that group's purposes. This has several disagreeable effects. First, each host computer is tricked into storing multiple copies of the same message instead of one message with a header identifying the cross-post. If the message is long, and storage space is tight, this creates a burden on the system. Second, users have no way of automatically flagging the message as read after their first encounter with it. Depending on the nature of the user's Internet access, this may require the user to manually delete the message every time it appears on the screen. Third, users who pay for access by the byte or the minute will be forced to pay for every incidence of the message; similarly, users who download their news from a remote computer will have to wait longer while the multiple copies are each downloaded. If users access their news by telephone, this may cost them additional telephone charges. Whether there is a financial cost or not, many people find it very annoying -- the electronic equivalent of mass junk mail.<127>

(i) Great Spams

Probably the two greatest spams in the recent history of Usenet are the Canter & Siegel advertising spam and the strange career of the "zuma-bot" named "Serdar Argic". Between them they exemplify current spamming technology and brazenness.

"Serdar Argic" is the name attached to innumerable Usenet posts claiming that the Armenians committed genocide against the Turks. (Most historians believe it was the other way around.) Some Usenet detective work linked the Serdar name with two different people, first Hasan B. Mutlu, an employee of AT&T Bell Laboratories, and then Ahmet Cosar, a former student at the University of Minnesota.<128> Serdar Argic's posts are concentrated in a small number of newsgroups, e.g. soc.culture.turkish, soc.culture.russia,<129> but have been so numerous at times as to drown out almost everything else. In addition to large numbers of semi-coherent rants on the subject of Armenian genocide appended as replies to Usenet posts on matters pertaining vaguely to Armenia, Turkey and other nearby states, Serdar Argic appears to have used a computer program that searched all articles in certain groups for the word "Turkey" and appended anti-Armenian replies. Thus, the .signature line "On the first day after Christmas my truelove served to me...Leftover Turkey!" received a long rambling reply about Turkish suffering at the hands of the Armenians.<130> This incident earned him/it the sobriquet the "zuma-bot" since the postings appeared to originate at a computer with the (probably forged) address of "zuma".

Serdar Argic may be a case of automation or obsession, but the spam created by lawyers Laurence A. Canter and Martha S. Siegel is a simple tale of greed. Canter and Siegel have been described as "gothically clumsy" spammers, "so intent on making a buck" and so unrepentant when criticized that an entire Usenet newsgroup (news.admin.net-abuse) was created to discuss their antics.<131> On April 12, 1994, Canter ran a program from his computer in Arizona that individually posted copies of a message entitled "Green Card Lottery 1994 May be the Last One!! Sign up now!!" to approximately 6,000 newsgroups. The message advertised the Canter & Siegel law firm's services in helping would-be immigrants to the U.S. complete the application forms for the green card lottery being held by the U.S. Immigration and Naturalization Service.<132> (In fact, applicants could have, if they wished, obtained the forms directly from INS and applied for free, but the message did not explain this.)

Confronted with widespread condemnation by other Usenet users, Canter and Siegel not only were unrepentant but wrote a book, How to Make a Fortune on the Information Superhighway,<133> advocating spamming as the road to riches. The book claims that their spam garnered them 1004 clients, who paid between $95 and $145 each, for a total revenue of more than $100,000. The response from the Usenet community was vitriolic.

(ii) Responses to Spam

Canter and Siegel received 20,000 e-mails complaining about their original spam, and "reams" of junk faxes.<134> The traffic, including "mailbombs" (long, irrelevant messages designed to clog the recipient's mailbox) became so great that the computer system providing C&S's Internet access crashed more than 15 times, leading the company that owned the system to terminate C&S's account as an act of self-preservation. C&S then switched to a second company, Netcom, which closed their account as a precaution when Canter gave a TV interview in which he promised to spam again.<135>

The striking thing about the e-mail response to spammers such as Canter and Siegel is that it is widespread and spontaneous. So far as one can tell, spammers who find their mailboxes clogged are not the victims of one or two angry people who have set up mailbombing programs. Rather, the spammers are victims of mass action: thousands of people who notice the spam independently "reply" to the message either by quoting it in full, or by attaching a long irrelevant reply (most news and mail software makes it the work of a keystroke to reply to sender quoting the sender's text). There does not appear to be any overt coordination among the counter-spammers; they seem to be using a feature of the software in mass petulance, which amounts to mass self-defense.

Another tactic that has been used with some effectiveness is for many offended recipients of a spam to complain to the operator of the system that provides the spammer's Internet access. Being seen to endorse spam is bad for business; having to reply to dozens of angry letters is annoying; having users who get so much hate mail that they crash your system is disastrous. E-mail complaint campaigns have driven some service providers to suspend a spammer's account, or to require the spammer to apologize and promise never to do it again as a condition of keeping the account.

The most significant anti-spam developments, however, are the creation of news.admin.net-abuse and the rise of Cancelmoose (TM), an Internet vigilante. News.admin.net-abuse is a moderated newsgroup; that means that a human being screens every post before it is forwarded to the group as a whole. This screening keeps down the volume, and keeps up the quality (sometimes called the "signal/noise ratio"). The newsgroup is the place in which spam sightings are reported, debates are held as to whether a given set of messages is large enough to qualify as spam, and the ethics and mechanics of message cancellation are discussed. According to the news.admin.net-abuse FAQ, the group has reached a rough consensus that any Usenet message reposted (as opposed to cross-posted) more than twenty times qualifies as spam.<136> News.admin.net-abuse is also the place that Cancelmoose reports his/her/its activities and requests comments and guidance.
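The twenty-repost consensus reduces to a simple counting rule. The sketch below is illustrative only; the function name and the structure of the input are invented for this example, and only the threshold value comes from the FAQ described above:

```python
from collections import Counter

SPAM_THRESHOLD = 20  # rough consensus reported in the group's FAQ

def find_spam(posts):
    """posts: list of (newsgroup, body) pairs, one entry per separately
    posted copy. Bodies reposted more than the threshold number of
    times qualify as spam; cross-posts appear only once and so do not."""
    counts = Counter(body for _, body in posts)
    return {body for body, n in counts.items() if n > SPAM_THRESHOLD}

posts = [("group.%d" % i, "BUY NOW!!") for i in range(25)]       # a spam
posts += [("group.%d" % i, "on-topic reply") for i in range(3)]  # not spam
spam_bodies = find_spam(posts)
```

Note that the rule counts separate postings, not readership; a message cross-posted to twenty-five groups is stored (and counted) once and escapes the definition entirely.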

Cancelmoose [tm] is a vigilante who monitors Usenet for spams and sends out cancellation messages to erase the spams.<137> A cancellation message instructs all the computers in Usenet to delete a message. In theory, only the original sender of a message should be able to cancel a message (e.g. a message sent in error); in practice cancellation messages can be forged. Because of this possibility, some Usenet hosts do not honor cancellation messages, but most do. After cancelling a spam, the 'Moose posts an anonymous message to news.admin.net-abuse announcing the action and inviting discussion of the reasonableness of the action. So far, to quote the news.admin.net-abuse FAQ, the Moose "has behaved altogether admirably -- fair, even-handed, and quick to respond to comments and criticism, all without self-aggrandizement or martyrdom."<138> As a result, "Cancelmoose [tm] appears to have near-unanimous support" from the group's readership.<139> Part of the 'Moose's popularity may be that the anti-spam campaign appears completely selfless: "Nobody knows who Cancelmoose[tm] really is, and there aren't really even any good rumors"<140> other than the perennial legend, perhaps deriving from the name, that the 'Moose is Norwegian.
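A cancellation message is itself an ordinary Usenet article whose Control: header names the message to be deleted. The sketch below assembles such a message; the addresses and the function name are hypothetical, though the header layout follows the standard Usenet article format:

```python
def make_cancel(target_id, sender, newsgroups):
    """Assemble the text of a cancel message for the article whose
    Message-ID is target_id; hosts that honor cancels will then
    delete that article from their spools."""
    return "\n".join([
        "From: " + sender,
        "Newsgroups: " + newsgroups,
        "Subject: cmsg cancel " + target_id,
        "Control: cancel " + target_id,
        "",
        "This article was cancelled as spam.",
    ])

msg = make_cancel("<spam.1@forged.example>", "moose@example.org",
                  "news.admin.net-abuse")
```

Because nothing in the format verifies that the sender of the cancel is the author of the target article, forgery is as easy as filling in someone else's address -- which is precisely why some hosts refuse to honor cancels at all.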

In contrast to the relatively successful vigilante justice meted out to mass spammers, attempts to stop the zuma-bot have usually failed. There are two reasons for this. First, zuma posts to only a limited number of groups; the posts, while repetitive, are not carbon copies of each other. Perhaps because the zuma-bot's activities are confined to fewer than twenty newsgroups,<141> no one has set up an automated canceling device to respond to his/its posts. Second, "Zuma" gets its Usenet feed from "anatolia.org" which is registered to Ahmet Cosar, and anatolia.org gets its Usenet feed from UUNet, which considers itself a common carrier bound to serve all customers. There is little that can be done to stop the zuma-bot, or anyone else with similar Internet access, although some people have from time to time sent out messages cancelling posts by Serdar Argic in certain newsgroups.<142> Since Serdar Argic appears to control its own Internet host, it is less vulnerable to tactics other than message cancellation than are other types of users. Spam by a system operator remains rare, perhaps because most operators of Internet hosts either have better things to do, or are the type of computer users who are already socialized into the Internet traditions.

E. Coping With Growth: Will "Newbies" Undermine Existing De Facto Standards?

Socialization, or education, in the Internet traditions is an important part of keeping the informal standards alive. The pressure on IETF procedures from the influx of new members is nothing compared to the strain on Internet norms caused by the massive inflow of new persons unschooled in the informal Internet norms:

In the past, the population of people using the Internet had "grown up" with the Internet, were technically minded, and understood the nature of the transport and the protocols. Today, the community of Internet users includes people who are new to the environment. These "Newbies" are unfamiliar with the culture and don't need to know about transport and protocols.<143>

The Internet community is aware of the problem -- almost hysterical about it at times.<144> It is responding to the influx with a massive education effort, producing FAQs, books, web pages, and of course, flaming away at violations of netiquette. Whether these efforts will suffice to preserve the Internet's norms is unclear; if they do not, something valuable will have been lost.

<1>See generally I. Trotter Hardy, Government control and Regulations of Networks, paper presented at Symposium on The Emerging Law of Computer Networks, Austin, TX, May 19, 1995.

<2>TCP/IP is the fundamental communication standard on which the Internet has relied: "TCP" stands for Transmission Control Protocol, while "IP" stands for "Internet Protocol." See generally, Information Sciences Institute, University of Southern California, Internet Protocol (Network Working Group, Request for Comments No. 791, Sept. 1981), available online URL http://ds.internic.net/rfc/rfc791.txt (IP specification); Information Sciences Institute, University of Southern California, Transmission Control Protocol (Network Working Group, Request for Comments No. 793, Sept. 1981), available online URL http://ds.internic.net/rfc/rfc793.txt (original specification for TCP); Gary C. Kessler & Steven D. Shepard, A Primer On Internet and TCP/IP Tools (Network Working Group, Request for Comments No. 1739, Dec. 1994), available online URL http://ds.internic.net/rfc/rfc1739.txt (describing major TCP/IP-based applications); Vinton Cerf & R. Kahn, A Protocol for Packet Network Intercommunication, in COM-22 IEEE Trans. on Communications 637 (May 1974) (original paper proposing packet switching network).

<3>See generally David H. Crocker, To Be "On" the Internet (Network Working Group, Request for Comments No. 1775, March 1995), available online URL http://ds.internic.net/rfc/rfc1775.txt (describing varying levels of user access to the Internet).

<4>See Bruce Sterling, Short History of the Internet (Feb. 1993), available online URL gopher://gopher.isoc.org:70/00/Internet/history/short.history.of.internet.

<5>It is as if rather than telephoning friends you were to tape record your message, cut it up into equal pieces, and hand the pieces to people heading in the general direction of the intended recipient. Each time people carrying tape meet anyone going in the right direction, they can hand over as many pieces of tape as the recipient can comfortably carry. Eventually the message would get where it needs to go.

<6>More importantly from a technical standpoint, the computers in the network can all communicate without knowing anything about the network technology carrying their messages.


<7>The particular route that the packet took would be unimportant. Only final results would count. Basically, the packet would be tossed like a hot potato from node to node to node, more or less in the direction of its destination, until it ended up in the proper place. If big pieces of the network had been blown away, that simply wouldn't matter; the packets would still stay airborne, lateralled wildly across the field by whatever nodes happened to survive. This rather haphazard delivery system might be "inefficient" in the usual sense (especially compared to, say, the telephone system) -- but it would be extremely rugged.

Sterling, supra note 4.

<8>Redefining Community, Info. Wk., Nov. 29, 1993, at 28 (quoting Gilmore).

<9>Gary H. Anthes, Summit Addresses Growth Security Issues for Internet, Computerworld, Apr. 24, 1995 at 67 (quoting Vinton Cerf estimate).

<10>Compare Louise Kehoe, Surge of Business Interest, The Financial Times, March 1, 1995 at XVIII (about 3.3 million outside US) with Database, U.S. News & World Rep., Feb. 27, 1995, at 14 (1.4 million outside US).

<11>Aboba, supra note 22.

<12>David H. Crocker, Making Standards the IETF Way, 1 StandardView 1, -- (1993), available online URL .

The following table gives some recent statistics:

                                       Replied      Network Class
  Date  |      Hosts   Domains        to Ping*     A     B      C
  ------|--------------------------------------------------------
  Jul 95|  6,642,000   120,000      1,149,000     91  5390  56057
  Jan 95|  4,852,000    71,000        970,000     91  4979  34340
  Oct 94|  3,864,000    56,000      1,024,000     93  4831  32098
  Jul 94|  3,212,000    46,000        707,000     89  4493  20628
  Apr 94|      -N/A-
  Jan 94|  2,217,000    30,000        576,000     74  4043  16422
  Oct 93|  2,056,000    28,000          -N/A-     69  3849  12615
  Jul 93|  1,776,000    26,000        464,000     67  3728   9972
  Apr 93|  1,486,000    22,000        421,000     58  3409   6255
  Jan 93|  1,313,000    21,000          -N/A-     54  3206   4998

[* estimated by pinging 1% of all hosts]

Network Wizards, Internet Domain Survey, July 1995, available online URL http://www.nw.com/zone/WWW/report.html.
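The host counts in the survey imply a remarkably steady growth rate. As a back-of-the-envelope check (an illustration added here, not part of the survey itself), the endpoint figures work out to roughly 90% annual growth -- the number of hosts doubling about every year:

```python
import math

# Endpoint host counts from the Network Wizards survey rows above:
# Jan 93 and Jul 95, two and a half years apart.
jan_93, jul_95 = 1_313_000, 6_642_000
years = 2.5

# Compound annual growth rate and the implied doubling time.
annual_rate = (jul_95 / jan_93) ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + annual_rate)

print(f"annual growth: {annual_rate:.0%}")          # roughly 91%
print(f"doubling time: {doubling_time:.1f} years")  # roughly 1.1 years
```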

<13>Kehoe, supra note 10, at XVIII.

<14>Nina Burns, E-mail beyond the LAN, PC Mag., April 25, 1995, at 102.

<15>See "World-Wide Web" in The Free On-Line Dictionary of Computing available online URL http://wombat.doc.ic.ac.uk/?World-Wide+Web. The Web was invented in the CERN High-Energy Physics laboratories in Geneva, Switzerland.

<16>Id. A terabyte is a million megabytes.

<17>Anthony-Michael Rutkowski, Executive Director, Internet Society, Bottom-Up Information Infrastructure and the Internet, (Feb. 27, 1995) available online URL http://www.isoc.org/speeches/ [to come].

<18>See "World-Wide Web", supra note 15. A Web browser can retrieve data via FTP, Gopher, Telnet or news, as well as via the http protocol used to transfer hypertext documents. Id.

<19>BITNET used the IBM RSCS protocol. It relied on direct leased-line connections between participating sites, most of which were in U.S. universities. Today, BITNET and its parallel networks in other parts of the world (e.g., EARN in Europe) have several thousand participating sites. In recent years, BITNET has established a backbone which uses the TCP/IP protocols, thus interconnecting BITNET with the Internet. See Vinton Cerf, A Brief History of the Internet and Related Networks, available online URL gopher://gopher.isoc.org:70/00/internet/history/_A Brief History of the Internet and Related Networks_ by V. Cerf.

<20>CSNET was initially funded by the National Science Foundation (NSF) to provide networking for university, industry and government computer science research groups. At its peak, CSNET had approximately 200 participating sites and international connections to approximately fifteen countries. CSNET ceased in 1991. Cerf, supra note 19.

<21>Cerf, supra note 19.

<22>Bernard Aboba, How the Internet Came to Be, in The Online User's Encyclopedia __ (1993), available online URL gopher://gopher.isoc.org:70/00/internet/history/how.internet.came.to.be.

<23>Subscriber growth is estimated at 25% or more per year. During the first three months of 1995, US-based PC online services added more than 1 million subscribers. Burrington Testimony, supra note ?, at 6.

<24>Network Wizards, Host Distribution by Top-Level Domain Name, available online URL http://www.nw.com/zone/WWW/dist-byname.html, lists Internet hosts sorted by "domain names" -- either two-letter country codes or three-letter codes that specify the type of institution -- and gives some idea of the distribution of host machines. 1.3 million of the 4.8 million hosts have "com" suffixes, representing commercial sites.

<25>See Aboba, supra note 22 (quoting Cerf).

<26>ARPA became DARPA, the Defense Advanced Research Projects Agency, in 1972.

<27>J. Reynolds & J. Postel, The Request for Comments Reference Guide 2 (Network Working Group, Request for Comments No. 1000, Aug. 1987), available online URL http://ds.internic.net/rfc/rfc1000.txt [hereinafter RFC 1000].

<28>RFC 1000, supra note 27, at 2.

<29>RFC 1000, supra note 27, at 2.

<30>RFC 1000, supra note 27, at 2-3 ("while BBN [the ARPA contractor] didn't take over the protocol design process, we kept expecting that an official protocol design team would announce itself.").

<31>RFC 1000, supra note 27, at 2-3.

<32>See, e.g., Steve Crocker, Documentation Conventions (Network Working Group, Request for Comments No. 3, April, 1969), available online URL http://ds.internic.net/rfc/ rfc3.txt ("The Network Working Group seems to consist of Steve Carr of Utah, Jeff Rulifson and Bill Duvall at SRI, and Steve Crocker and Gerard Deloche at UCLA. Membership is not closed."). RFCs 1 and 2, which dealt with "host software," are so obsolete as to have been taken off line.

<33>Aboba, supra note 22 (interview with Vinton Cerf, who participated in the early development of the Internet protocols while a graduate student at UCLA).

<34>RFC 1000, supra note 27, at 3 (quoting Stephen D. Crocker, first and to date only RFC editor).

<35>Crocker, supra note 12, at _.

<36>[examples to come]

<37>Crocker, supra note 12, at _.

<38>In 1971 there were 15 computers linked to the network; by 1972 there were 37. Sterling, supra note 4.

<39>There are now (May 1995) more than 1800 RFCs, and hundreds more are continually in various stages of preparation. The contemporary RFC procedure is described in more detail, infra TAN --. Not all RFCs are standards, however. Some are informational, others are "experimental" or "historic". See Christian Huitema et al., Not All RFCs are Standards 1-2 (Network Working Group, Request for Comments No. 1796, April 1995), available online URL http://ds.internic.net/rfc/rfc1796.txt.

<40>Aboba, supra note 22.

<41>See Aboba, supra note 22.

<42>See Cerf, supra note 19.

<43>Aboba, supra note 22.

<44>Vinton Cerf, The Internet Activities Board 1-2 (Network Working Group, Request for Comments No. 1120, Sept. 1989), available online URL http://ds.internic.net/rfc/rfc1120.txt [hereinafter RFC 1120]. Dr. David C. Clark of the Lab for Computer Science at Massachusetts Institute of Technology was named the chairman of this committee. Id.

<45>See Crocker, supra note 12, at _ ("Initially consisting of eight members, this is essentially the management structure that is in place today.").

<46>See Crocker, supra note 12, at --.

<47>RFC 1120, supra note 44, at 2.

<48>One other important task force created in 1986 deserves mention: the IAB created the Internet Research Task Force (IRTF) to consider long-term research problems in the Internet.

<49>See Vinton Cerf, The Internet Activities Board 5 (Network Working Group, Request for Comments No. 1160, May 1990), available online URL http://ds.internic.net/rfc/rfc1160.txt. Technically this function was delegated to the Internet Engineering Steering Group (IESG), which managed the IETF.

<50>For more details see Internet Activities Board & Internet Engineering Steering Group, Internet Standards Process--Revision 2 (Network Working Group, Request for Comments No. 1602, March 1994), available online URL http://ds.internic.net/rfc/rfc1602.txt [hereinafter RFC 1602].

<51>See supra note 48. Indeed, the IAB endured several reorganizations from 1983 to 1994, when the current charter was drafted.

<52>Sterling, supra note 4.

<53>Crocker, supra note 12, at -.

<54>RFC 1602, supra note 50, at 7.

<55>Internet Society, What Is the Internet Society, available online URL http://www.isoc.org/what-is-isoc.html.

<56>RFC 1602, supra note 50, at 6-7.

<57>See infra text at note 80.

<58>See IETF Secretariat et al., The Tao of IETF 3 (Network Working Group, Request for Comments No. 1718, Nov. 1994), available online URL http://ds.internic.net/rfc/ rfc1718.txt [hereinafter IETF Tao].

<59>Paul Mockapetris, POISED '95 BOF, available online URL gopher://ds.internic.net/00/ietf/95apr/poised95-minutes-95apr.txt (minutes of POISED meeting held at April 1995 IETF meeting).

<60>Crocker, supra note 12.

<61>IETF Tao, supra note 58, at 1.

<62>IETF Tao, supra note 58, at 7.

<63>IETF Tao, supra note 58, at 5.

<64>IETF Tao, supra note 58, at 5.

<65>Crocker, supra note 12, at -.

<66>Internet Architecture Board, Internet Official Protocol Standards 3 (Network Working Group, Request for Comments No. 1780, March 1995), available online URL http://ds.internic.net/rfc/rfc1780.txt [hereinafter RFC 1780]; IETF Tao, supra note 58, at 1.

<67>RFC 1602, supra note 50, at 7.

<68>RFC 1780, supra note 66, at 3; IETF Tao, supra note 58, at 1.

<69>See supra note 49.

<70>See Steve Crocker, The Process of Organization of Internet Standards Working Group (POISED) 2-4 (Network Working Group, Request for Comments No. 1640, June 1994), available online URL http://ds.internic.net/rfc/rfc1640.txt [hereinafter RFC 1640].

<71>For a description of the IAB prior to this voluntary merger, see supra, TAN 47.

<72>IETF Tao, supra note 58, at 4. The 1992 IAB Charter appears as Internet Architecture Board, Charter of the Internet Architecture Board (IAB) (Network Working Group, Request for Comments No. 1358, Aug. 1992), available online URL http://ds.internic.net/rfc/rfc1358.txt. The current charter is Internet Architecture Board, Charter of the Internet Architecture Board (IAB) (Network Working Group, Request for Comments No. 1601, March 1994), available online URL http://ds.internic.net/rfc/rfc1601.txt [hereinafter RFC 1601].

<73>Crocker, supra note 12, at -.

<74>See RFC 1640, supra note 70, at 1-4.

<75>See supra text accompanying note 33.

<76>The charge to the working group asked it to consider: (1) procedures for making appointments to the IAB; (2) procedures for resolving disagreements among the IETF, IESG, and IAB in matters pertaining to the Internet Standards; and (3) methods for ensuring that for any particular Internet Standard, procedures have been followed satisfactorily by all parties so that everyone with an interest has had a fair opportunity to be heard. RFC 1640, supra note 70, at 9.

<77> RFC 1640, supra note 70, at 4.

<78>RFC 1601, supra note 72, at 1. The IETF chair must be approved by a vote of the 12 full IAB members then sitting. Id.

<79>On the use of randomness to achieve democratic results, cf. Akhil R. Amar, Note, Choosing Representatives by Lottery Voting, 93 Yale L.J. 1283 (1984).

<80>RFC 1601, supra note 72, at 1-2. The nomination committee also includes four non-voting liaison members, one designated by each of the Board of Trustees of the Internet Society, the IAB, the IESG, and the IRSG. Id. at 2.

<81>IETF Tao, supra note 58, at 4.

<82>RFC 1601, supra note 72, at 2.

<83>See IETF Tao, supra note 58, at 16. Not every RFC contains a standard.

<84>The 'rough consensus' procedure guarantees moments of divisiveness, since parties that lose various debates will occasionally feel that they were not given a fair opportunity to express their views or that the consensus of the working group was not accurately read. All such expressions of concern are taken very seriously by the IETF management. More than most, this is a system that operates on an underlying sense of the good will and integrity of its participants. Often, claims of "undue" process will cause a brief delay in the standard-track progression of a specification, while a review is conducted. While frustrating to those who did the work of technical development, these delays usually measure a small number of weeks and are vital to ensuring that the process which developed the specification was fair.

Crocker, supra note 12, at -.

<85>Crocker, supra note 12, at -.

<86>Crocker, supra note 12, at -.

<87>See infra text accompanying notes 98, 99.

<88>Crocker, supra note 12, at -.

<89>See Crocker, supra note 12, at --.

<90>Crocker, supra note 12, at --.

<91>"In general, the IETF is applying its own technical design philosophy to its own operation. So far, the technique seems to be working. With luck, it will demonstrate the same analytic likelihood of failure, with the same experiential fact of continued success." Crocker, supra note 12, at --.

<92>Aboba, supra note 22. Recently, FTP Software has reinstituted Bake Offs to ensure interoperability among modern vendor products. Id. The Bake Off protocol, complete with instructions to test for "rubber baby buffer bumpers in both directions," appears as J. Postel, TCP and IP Bake Off (Network Working Group, Request for Comments No. 1025, Sept. 1987), available online URL http://ds.internic.net/rfc/rfc1025.txt.

<93>The backbone consists of the sites that carry traffic along the top level of the network. G. Malkin, Internet Users' Glossary (Network Working Group, Request for Comments No. 1392, Jan. 1993), available online URL http://ds.internic.net/rfc/rfc1392.txt.

<94>Aboba, supra note 22.

<95>See William F. Powers, Cloak and Dagger Internet Lets Spies Whisper in Binary Code, Wash. Post, Dec. 28, 1994, at A4 (noting that the intelligence community created the "Intelink" because the "very public, very uncontrollable global mesh of computer networks [(the Internet)] was too risky a place to do business").

<96>See RFC 1602, supra note 50, at 1.

<97>See RFC 1602, supra note 50, at 1 (noting that it obsoletes an earlier RFC).

<98>RFC 1602, supra note 50, at 25, 28-29.

<99>RFC 1602, supra note 50, at § 5.4.1.

<100>Minutes of IAB Meeting, Apr. 2, 1995.

<101>E-mail from Abel Weinrib, AWeinrib@ibeam.jf.intel.com, Subject: Minutes for April 2 IAB Meeting.

<102>IAB Open Meeting (reported by Abel Weinrib), available online URL gopher://ds.internic.net/00/ietf/95apr/iab-minutes-95apr.txt.

<103>For an economic analysis of the costs and benefits of standards, see Stanley M. Besen & Joseph Farrell, Choosing How to Compete: Strategies and Tactics in Standardization, J. Econ. Perspectives, Spring 1994, at 117, 117-18 (asserting that firms manipulate standards for competitive advantage); Michael L. Katz & Carl Shapiro, Systems Competition and Network Effects, J. Econ. Perspectives, Spring 1994, at 93, 93-95 (warning that pervasive standards lead to inefficient market outcomes in "systems markets" characterized by products that require other conforming products to function). But see S.J. Liebowitz & Stephen E. Margolis, Network Externality: An Uncommon Tragedy, J. Econ. Perspectives, Spring 1994, at 133, 133-35 (arguing that the negative effects of standards identified by Katz and Shapiro are infrequent, if they exist at all).

<104>Freeware is distributed by the author on a no-cost basis.

<105>Shareware is distributed by the author on a try-before-you-buy basis. Anyone possessing an unregistered copy is encouraged to copy it and share it with other potential users. Users who like the product are asked to "register" it by sending in a small fee. Shareware works entirely on the honor system.

<106>PGP, currently in version 2.6.2 for shareware and 2.7 for commercial users (ViaCrypt), is a public-key encryption program. Because the program relies on a "web of trust" approach to key management, see generally A. Michael Froomkin, The Metaphor is the Key: Cryptography, the Clipper Chip, and the Constitution, 143 U. Pa. L. Rev. 709, 893-94 (1995), the widespread use of PGP has spawned a small cottage industry of "key servers" -- computers that act like telephone books, listing public keys, the username (or pseudonym) attached to them, and any attestations of reliability that may be appended by other users.

<107>PKZip, currently in version 2.04g, is a file compression program in wide use on DOS-based personal computers. Files produced by the program commonly have the form FILENAME.ZIP.

<108>See Steve Alexander, The Great Netscape, Star Tribune, May 3, 1995, at D1 (stating 6 million copies given away); Peter H. Lewis, Netscape Knows Fame And Aspires to Fortune, N.Y. Times, Mar. 1, 1995, at D1 (stating 3 million copies downloaded from Netscape's server since December 1994). Innumerable additional copies may have been made from those copies.

<109>See, e.g., Lewis, supra note 108 (quoting software engineer as saying, "Once someone starts writing code for one platform, it's hard to leave it. ... If we write a 2,000-page Web site for Netscape this spring, and Microsoft comes out with a superior system in October, we'd have to redo all our work to take advantage of it. Believe me, that's a lot of work.").

<110>See, e.g., Ben Rothke, OS/2's Big Problem: IBM, InformationWeek, Feb. 13, 1995, at 79.

<111>See, e.g., Infocorp Reports On Mac, Windows Duel, Newsbytes News Network (April 18, 1995) (reporting OS/2 market share holding steady at 2%).

<112>"Usenet" in The Free On-Line Dictionary of Computing available online URL http://wombat.doc.ic.ac.uk/?USENET. "Network News Transfer Protocol is a protocol used to transfer news articles between a news server and a news reader. The uucp protocol is sometimes used to transfer articles between servers, though this is probably less common now that most backbone sites are on the Internet." Id.

<113>"Usenet mainstream hierarchies are established by a process that requires the approval of a majority of Usenet members. Most sites that receive a NETNEWS feed receive all of these hierarchies, which include:

   comp     Computers
   misc     Miscellaneous
   news     Network news
   rec      Recreation
   sci      Science
   soc      Social issues
   talk     Various discussion lists

"The alternative hierarchies include lists that may be set up at any site that has the server software and disk space. These lists are not formally part of Usenet and, therefore, may not be received by all sites getting NETNEWS. The alternative hierarchies include:

   alt      Alternate miscellaneous discussion lists
   bionet   Biology, medicine, and life sciences
   bit      BITNET discussion lists
   biz      Various business-related discussion lists
   ddn      Defense Data Network
   gnu      GNU lists
   ieee     IEEE information
   info     Various Internet and other networking information
   k12      K-12 education
   u3b      AT&T 3B computers
   vmsnet   Digital's VMS operating system"

Gary C. Kessler & Steven D. Shepard, A Primer On Internet and TCP/IP Tools 34 (Network Working Group, Request for Comments No. 1739, Dec. 1994) available online URL http://ds.internic.net/rfc/rfc1739.txt.

<114>The technical details appear in M. Horton & R. Adams, Standard for Interchange of USENET Messages (Network Working Group, Request for Comments No. 1036, Dec. 1987), available online URL http://ds.internic.net/rfc/rfc1036.txt.

<115>[Get current figures from net.admin]

<116>See Edward Vielmetti & Mark Moraes, What is Usenet? A second opinion (Oct. 26, 1994), available online URL [to come].

<117>The full description of the procedures is Greg Woods et al., Guidelines for Usenet Group Creation, available online URL http://www.amdahl.com/usenet/creating-newsgroups/part1 [hereinafter Guidelines].

<118>See Guidelines, supra note 117. The "alt." groups work differently.

<119>There are numerous naming conventions designed to make it easier to find a group on a given topic. For example, discussions of religious issues are found under soc.religion.{name-of-religion}, not as a top-level entry under soc. Thus, for example, soc.religion.scientology, not soc.scientology or misc.scientology.

<120>The institutionalization of the "Usenet Volunteer Votetakers" is a fairly recent phenomenon. The task of finding neutral votetakers used to be left to the proponents of the new group until convincing accusations of vote fraud were leveled at a person who apparently served as his own votetaker under a pseudonym.

<121>Sally Hambridge, IETF RUN Working Group, Internet-Draft, Netiquette Guidelines (undated, "expires November 24, 1995").

<122>Internet talk is a utility that allows direct real-time terminal-to-terminal communication.

<123>Hambridge, supra note 121, 123.

<124>For a discussion of how Usenet works, see supra text accompanying note 112.

<125>The term comes from the Monty Python song in which the word "spam" is repeated over and over and over and over and over.... See news.admin.net-abuse FAQ, available online URL http://www-sc.ucssc.indiana.edu/~scotty/acena.html at § 2.2 [hereinafter Net.abuse FAQ].

<126>Spam can also be created by a computer program. See infra text accompanying note 130 (discussing "zuma-bot").

<127>Most newsreading software allows the user to define a "killfile" instructing the software to ignore all messages from a particular source. It is possible to killfile the author of a spam, thus ignoring multiple posts. However, by the time one realizes one has encountered a spam, it is too late to avoid the costs of downloading it; the damage is already done. In addition, technically unsophisticated users, especially new ones, are often unaware of this feature of their software.
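The killfile mechanism can be sketched in a few lines of Python (a toy illustration; real newsreaders match on more headers than "From", and the addresses here are invented for this example):

```python
# A killfile is simply a set of senders whose articles the reader discards.
KILLFILE = {"spammer@example.com"}

articles = [
    {"From": "spammer@example.com", "Subject": "MAKE MONEY FAST"},
    {"From": "friend@example.edu", "Subject": "Re: IETF meeting"},
]

def visible(articles, killfile):
    """Return only the articles whose sender is not killfiled."""
    return [a for a in articles if a["From"] not in killfile]

for article in visible(articles, KILLFILE):
    print(article["Subject"])

# Note that both articles were already downloaded before filtering,
# which is why a killfile cannot recoup the transfer cost of a spam.
```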

<128>See alt.fan.serdar-argic FAQ, available online URL gopher://dixie.aiss.uiuc.edu:6969/00/urban.legends/net.legends/alt.fan.serdar-argic.faq at §§ 2.2, 2.4 [hereinafter Serdar FAQ].

<129>A list of his 16 favorite groups appears id. at § 3.

<130>See Serdar FAQ, supra note 128, at § 2.6.

<131>See Net.abuse FAQ, supra note 125, at § 2.4.

<132>Charles Arthur, A Spammer in the Networks, New Scientist (Nov. 19, 1994).

<133>Laurence A. Canter & Martha S. Siegel, How to Make a Fortune on the Information Superhighway (1994).

<134>Arthur, supra note 132.

<135>Arthur, supra note 132.

<136>Net.abuse FAQ, supra note 125, at § 3.1.

<138>Net.abuse FAQ, supra note 125, at § 2.9.



<141>See supra text accompanying note 136 (noting consensus that more than 20 copies are required to qualify as spam).

<142>See Serdar FAQ, supra note 128, at § 7. Cancellation usually requires a forged posting purporting to be from the original sender.

<143>Hambridge, supra note 121, 123.

<144>The fear seems to center on the "invasion" by AOL users, who are considered particularly clueless and numerous.