
Standard Development as Techno-social Ordering
The Case of the Next Generation of the Internet Protocol
  Management and Network Technology. Proceedings from COST A3 Workshop in Trondheim 22-24, 1995. WZB Discussion Paper FS II 96-104, Wissenschaftszentrum Berlin, pp. 35-57, 1996

Sabine Helmers, Ute Hoffmann, Jeanette Hofmann, 5/96

1  Reconfiguring the Foundations of the Internet
2  The Internet Protocol: Technical Foundation of a New Public Space
3  Development and Implementation of Internet Protocol Standards:
A Process-Related Perspective
3.1  The Adoption of TCP/IP
3.2  The Internet Way of Standards Setting
3.3  The IPng Effort
4  IPng and the Future: Envisioned Usages of Cyberspace
4.1  The Address System - Addressing Social Practices
4.2  The Routing System - Spatial Organization of the Electronic World
4.3  Principles of Data Transmission: Security and Social Control
5  The Challenge of Success: The Fragility of the Internet's Self-Governing Structure


  The network communication protocol of the Internet embodies conceptions of principles by which social exchange in this net ought to be organized. The IP Next Generation effort provides a strategic research site to look into the Internet's changing techno-social order. This paper seeks to portray elements of this order by the way the protocol standard is created as well as by its substantial goals.

The developers of the TCP/IP protocol suite in the early days of inter-networking could not foresee the "dilemma of scale" that the Internet address system faces today. In the early 1990s efforts were started to explore ways to resolve the address limitations while at the same time providing functionality beyond that of the current Internet Protocol (IPv4). In the fall of 1995 a set of core specifications of a new Internet Protocol (IPv6) was made a "Proposed Standard".

IPv6 (or IPng) is not only designed to meet the scaling requirements imposed by the explosive growth of the Internet and to provide support for multimedia traffic. Beyond that, IPng can be understood as an attempt to define and standardize major elements of a technical framework for a social order of the future virtual world. Among other things, this concerns the questions of who and what will have access to the electronic world, the topographic organization of the virtual space, and the distinction between its private and public spheres and objects.

1 Reconfiguring the Foundations of the Internet
  Talking about standards development for electronic data networks in terms of techno-social ordering is anything but self-evident. The premise of this paper is that electronic exchange in open networks (as opposed to dedicated networks) forms a new socio-cultural realm marked by specific kinds and meanings of interaction. The existence of cyberspace implies that the very act of transmitting data connects people to the electronic realm and thereby may affect both the interacting people and the data they exchange.

The patterns of "order" which can be found in the social interactions as well as in the technical structure of the Internet are part of the cultural tradition of the net. The technology is a cultural expression of the "net society". At the same time, the net society's interactions are shaped by its already existing technical protocols and conventions. Thus, the developing order of the Internet consists of technical and social elements, both of which can be described in terms of social practices. An inquiry into the particularities of the Internet Protocol (IP) may help to make this plain for the case of the Internet.

Because of the technically constituted nature of the electronic realm, standards and communication protocols offer a valuable source for exploring the social organization of the Internet. Unlike in other habitats, all communication in the electronic space is necessarily technically mediated. No action is possible without some sort of technology such as, for example, a computer, a modem and a telephone. On the assumption of a mutual shaping of technology and its social environment, the communication standards and protocols to be investigated are themselves also socially constituted: "The Internet is a virtual structure, imagined by its designers, and implemented entirely in software. Thus, the designers are free to choose packet formats and sizes, addresses, delivery techniques, and so on" (Comer 1991, p. 62). That is, the communication protocols cannot be regarded as mere neutral elements keeping a network running more or less effectively. They embody, as it were, conceptions of the basic principles by which social exchange in the Internet ought to be organized.

This paper gives an outline description of the Internet's changing techno-social order regarding the way protocol standards are created as well as the content of the new Internet Protocol. We start by describing some characteristics of network communication and IP in general. In a process-related perspective we then briefly give an account of how the current version, IPv4, came into being and, in contrast, of how today's Internet standards setting works, including a history of the IPng effort. The final part addresses the future of the Internet as it is anticipated by IPng developers.

2 The Internet Protocol: Technical Foundation of a New Public Space
  IP is part of the TCP/IP version 4 protocol suite (Transmission Control Protocol/Internet Protocol), which today forms the technical heart of the Internet. IP can be regarded as a sort of "common language" which enables large varieties of different networks and computer systems to communicate with each other. Compatibility or interoperability is achieved by defining a set of rules with respect to

  • the format of data to be transferred
  • the routing system which leads data to their destination
  • general organizational principles of data transmission
The present version of the Internet Protocol, IPv4, generates a particular type of communicative space which clearly reflects the special conditions of the emergence of the Internet. To begin with, IP organizes the exchange of data in a way that differs in important respects from other common transmission procedures, such as that underlying the telephone system. The telephone system normally reserves one line exclusively for one connection between two participants ("point-to-point-connection" or "circuit-switched transfer mode"). By contrast, IP provides that the data exchanged are split into small packages and transmitted through all available lines in chronological order, regardless of the concrete senders, receivers, and content ("point-to-many-connections" or "packet-switched transfer mode").
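The packet principle described above can be sketched in a few lines of Python. This is a toy illustration, not the actual IPv4 packet format (real IP headers carry many more fields); it only shows why self-describing packets can travel over different lines independently and still be reassembled at the destination:

```python
def packetize(message, src, dst, size=8):
    """Split a message into small packets of at most `size` payload characters.

    Each packet carries its own source, destination and sequence number,
    so it can be routed on its own, independently of its siblings.
    """
    return [
        {"src": src, "dst": dst, "seq": i, "payload": message[i:i + size]}
        for i in range(0, len(message), size)
    ]

def reassemble(packets):
    """Restore the message from packets that may arrive in any order."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))
```

Because every packet is self-describing, the network can interleave packets from many senders on the same lines, which is exactly what distinguishes the packet-switched from the circuit-switched mode.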

Both communication protocols enable specific forms of social exchange. The point-to-many-connection provided by IP allows an undetermined group of people to interact concurrently. In this sense, a new public space is created by a communication protocol. The possibility of collective, public exchange has led to various institutional inventions in cyberspace known as "network services". Among them are IRC (Internet Relay Chat; to be understood as virtual channels, some of them devoted to special groups or special topics) and MUDs (Multi-User Domains; virtual localities consisting of rooms or landscapes inviting users to visit, to settle and to set up their own places). Together with other network services based on IP, such as email, file transfer and netnews, the Internet world offers flexible forms of interactive exchange that would be difficult or even impossible to implement in the telephone-centered telecom world.

Conversely, the circuit-switched architecture of the telephone system allows forms of data transfer and interaction that are hard to realize within the limits of IP - at least in its current version. This is true, for example, for sound or moving pictures, which consist of time-dependent data that need to be transferred without interruption or delays. Yet, the packet-switched mode of IP makes intermissions in the data flow rather likely. To be sure, services like the Internet-Phone or radio broadcasting do already exist but the quality of their performance depends very much on the capacity of and the general traffic on the connection in use. A stable flow of data is best guaranteed by dedicated end-to-end connections as offered by the telephone architecture.

Another striking difference between IP and the telephone transmission system pertains to the conception of spatial distance and time. The architecture and charging rationality of the telephone network favors local exchanges and tends to constrain long-distance communication (at least for those with an average income). By contrast, communication on the Internet can always take place at the expense of local calls. The packet-switched architecture of the Internet transmits data units by hopping from one server or gateway to the next without ever creating a long-distance connection. Guided by numerous "routing tables", the data packets seek their way by using available lines for fractions of seconds only. Thus, the architecture and charging logic of the Internet facilitates an extension of the geographical reach of social interaction (at least for those who are able to communicate in English).
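The role of the routing tables mentioned above can be illustrated with a minimal longest-prefix-match lookup, written here with Python's standard ipaddress module; the prefixes and gateway names are invented for the example:

```python
import ipaddress

# A toy routing table: (destination prefix, next hop). Real routers hold
# many thousands of such entries and update them continuously.
ROUTING_TABLE = [
    (ipaddress.ip_network("10.1.0.0/16"), "gateway-A"),
    (ipaddress.ip_network("10.1.2.0/24"), "gateway-B"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gateway"),
]

def next_hop(destination):
    """Forward a packet one hop: pick the most specific matching prefix."""
    dst = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTING_TABLE if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# 10.1.2.7 falls inside both 10.1.0.0/16 and 10.1.2.0/24; the longer
# prefix wins, so this packet is handed to gateway-B for its next hop.
```

Each router along the way repeats this lookup, which is why a packet crosses the net hop by hop without any end-to-end circuit ever being set up.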

A further, fairly obvious difference between the two types of networks refers to the physical devices involved. In order to communicate via circuit switched networks, the users need only a telephone. Access to the Internet, however, requires a computer to be connected to a stationary host computer.

As long as computers constitute the only possible network nodes, the electronic world enables mainly practices that can be transformed into digital data and which are thus detached from physical bodies and localities. Correspondingly, cyberspace excludes most social activities and things not amenable to dissection into digital data. This limits not only the means and contents of exchange, but evidently also the type of people participating in electronic interaction.

Another obvious limitation of the current IP version concerns its addressing system. Contrary to common assumptions, Internet addresses specify neither a concrete physical machine nor an individual person. As a consequence, Internet addresses cannot move. Users who change locations need to get a new account (that is, a new number that specifies the new position of the same computer within a different network) on the next network. Services like "telnet" (remote login) notwithstanding, the Internet clearly offers far less mobility at present than the telephone system: virtual counterparts to mobile phones which provide ubiquitous connectivity for everyone are not in sight.
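A small sketch using Python's standard ipaddress module makes this point concrete (the addresses are documentation examples, not real hosts): an IPv4 address encodes the network a machine is attached to, so the "same" machine moved to a different network necessarily carries a different address.

```python
import ipaddress

def position(addr_with_prefix):
    """Return (network, host number) for an address written as a.b.c.d/len.

    The network part pins the address to one place in the net's topology;
    only the host number is local to the machine.
    """
    iface = ipaddress.ip_interface(addr_with_prefix)
    host = int(iface.ip) - int(iface.network.network_address)
    return str(iface.network), host

# The same host number on a different network yields a different address,
# which is why the address cannot travel with the user.
old_location = position("192.0.2.42/24")     # network 192.0.2.0/24, host 42
new_location = position("198.51.100.42/24")  # network 198.51.100.0/24, host 42
```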

Until the early '90s, the IP-related restrictions were not perceived as crucial problems. The design of the Internet's communication protocols was tailored to the needs and interests of its original user community (see below). However, the number and type of users on the Internet has been changing significantly over the past few years. The ongoing expansion of the user community poses a challenge to the Internet communication architecture in several respects. The version of IP currently in use (IPv4) still reflects a phase during which only a small academic community, mostly of American origin, had access to the Internet and used the net for non-commercial purposes. In the face of the exploding number of new users, neither the available address capacity of IP nor the present routing system and transmission procedures are sufficient any longer. According to the view of the Internet community, a reform of IP that meets the growing number and changing demands of users has therefore become a matter of survival.

3 Development and Implementation of Internet Protocol Standards:
A Process-Related Perspective
  Internet protocols embody the politics of the Net in a twofold way: in the technical "content" of IPv6 and in the process by which it is made a standard. Therefore, beneath the question of what the next generation of IP should look like lies an equally important one: who sets the technical direction and decides when protocols become standard? The following section will address this question. We start by taking a glance at how the present architecture of the Internet came into being.

3.1 The Adoption of TCP/IP
  The current Internet protocol, TCP/IP version 4, was introduced in 1982. Its predecessor, the Network Control Protocol (NCP), was used in the Arpanet from 1969 to 1982. The simple fact that each of these protocols was used for more than ten years indicates the necessity of a reliable and stable "common language". All of the networking participants have to agree on its usability.

TCP/IP developer Vinton Cerf, born in 1943, has been called the "father of the Internet" (Hafner 1994a). He is a member of the circle of celebrities of the pioneering days (see e.g. Hafner 1994b) and is regarded as legitimized to tell stories on "How the Internet Came to Be" (Cerf 1993). Much of this "father" attribution is related to his contribution to the development of the Transmission Control Protocol in the early days of the Net, while he was a graduate student. Cerf's work on TCP/IP was a joint effort together with Robert Kahn, who also became a Net celebrity: "Dr. Cerf recalls that the first work on that "common language" came from 'literally sketching the idea out on the back of an envelope' in a San Francisco hotel in 1973 while he and Dr. Kahn attended a computer conference" (Hafner 1994a).

This 'back of an envelope' story fits into the collection of pioneer stories that people nowadays like to listen to. The basic technical component of a global network that today connects millions of people and forms an important part of scientific, business, political, administrative and leisure life was born on an envelope in San Francisco. This lovely San Francisco story is a much nicer component within the legend of origin than other parts of Internet history, which are intertwined with the cold war situation and the involvement of the Department of Defense.

In an official report to DARPA, the Defense Advanced Research Projects Agency of the US Department of Defense, Cerf describes the TCP/IP development work done at Stanford University, the company Bolt Beranek and Newman, and University College London during 1973-1976, guided and financed by DARPA (Cerf 1980). This report provides brief details of the TCP development and its several versions. Cerf stressed that it took several years and that "... a great many people and groups were involved in or influenced" TCP - but he also did not fail to mention that the initial impetus resulted from the work of Kahn and himself (1980: 2). The same involvement of people and groups was true for the following period (see also Comer 1991, pp 6-8). Final versions of TCP and IP were published in early 1980 by DARPA and had then to undergo "procedures for standardization within the Department of Defense and Intelligence Community", writes Cerf (1980: 2).

In 1980 TCP/IPv4 was accepted as final by DARPA. It took two more years to actually introduce it as the new common protocol. However, in a way that would hardly be thinkable given today's situation of the Internet, it was back then possible to introduce the new communication protocol by an act of despotism:

"In 1982 it was decided that all the systems on the ARPANET would convert from NCP to TCP/IP. A clever enforcement mechanism was used to encourage this. We used a Link Level Protocol on the ARPANET; NCP packets used one set of one channel numbers and TCP/IP packets used another set. So it was possible to have the ARPANET turn off NCP by rejecting packets sent on those specific channel numbers. This was used to convince people that we were serious in moving from NCP to TCP/IP. In the middle of 1982, we turned off the ability of the network to transmit NCP for one day. This caused a lot of hubbub unless you happened to be running TCP/IP. It wasn't completely convincing that we were serious, so toward the middle of fall we turned off NCP for two days; then on January 1, 1983, it was turned off permanently..." (Cerf 1993)

For a number of reasons, the introduction of TCP/IP in 1982 was a much simpler endeavour than the introduction of the next generation of IP currently under development. Some of these reasons are briefly mentioned below:

i) Size of the Net

The Net was so much smaller back in its early days, and fewer individuals and groups had to agree on the common network protocol. A few numbers illustrate the enormous numerical difference in networking participants between the days of the introduction of TCP/IP and the situation today. In June 1982 there were 235 hosts connected to the ARPANET. In January 1996 the number of Internet hosts had reached more than 9.4 million (http://www.nw.com/zone/host-count-history/). Whereas in the late '70s/early '80s it was possible for all of the networking participants to be involved in the process of discussing the new protocol and to decide on its value, a similar endeavour would be very inefficient if not practically impossible today.

ii) Actor world

Also, in 1982 there was one organization with superior influence and decisive power: DARPA held the role of the leading actor in TCP/IP development.

Unlike today, where a multiple and heterogeneous network of actors strives for the development and introduction of IPng, in the seventies and eighties there was a 'design plan' given by a single organisation: DARPA gave the orders, formulated design criteria and functional demands, paid for the development and judged any achieved progress in transmission technology. DARPA had the final say in every step of development. Besides its decisions, however, there was a long-lasting process of discussion on the work of the actual TCP/IP developers, which took place among the network participants.

iii) Network Functionality

The Net was used by a mostly academic research community for a comparatively narrow set of purposes. Important services were file transfer, remote login and e-mail. Other forms of electronic exchange such as financial transactions that need higher security standards were nearly unthought of.

iv) The UNIX Connection

Today, both operating systems and network architectures employed by sub-networks of the Internet vary much more than in the 1970s and 1980s. In the early years, the UNIX operating system was the common tool for most of the Internet community (Hauben 1993). Many computers hooked to the Net were running the same UNIX operating system (BSD Unix, Berkeley Software Distribution). DARPA paid for the implementation of TCP/IP in this Unix system, and in September 1983 TCP/IP came along with the popular 4.2 BSD, making the relationship between the Internet and Unix even stronger. Thus DARPA was able to reach most of the American universities' computer science departments, which could get the software at low cost, according to Douglas Comer (1991: 7):

"The new protocol software came at a particularly significant time because many departments were just acquiring second or third computers and connecting them together with local area networks. The departments needed communication protocols and no others were generally available."

v) Net Culture

  Last but not least: the by now legendary 'spirit of community' and the mutual understanding among the Internet pioneers may have had a significant positive impact on common agreements and the rather quick adoption and use of the new protocol. To what extent this community spirit has been instrumental in a positive way remains to be considered by the pioneers themselves. In an interview, Vinton Cerf says that the Internet used to be a kind of neat, private thing within a little community (Cisler 1994).

3.2 The Internet Way of Standards Setting
  Organizational and Management Structure

After the TCP/IP protocols were formally adopted for use in the Arpanet, an organizational and management structure concerning standards activities emerged. The newly founded Internet Engineering Task Force (IETF) formed the core element of this structure.[1]

The actual development and standardization activities take place in IETF working groups pertinent to a number of functional areas. Anyone who is interested may participate. The internal management of the working groups is provided by area directors. The working groups do their business primarily on-line and additionally at IETF meetings. The Internet Engineering Steering Group (IESG) handles the operational management of the Internet standards process. It assigns working group chairs and area directors, charters working groups' efforts and approves their results. When the specifications of a working group formally enter the "standards track", the working group goes quiescent.

An Internet standard is supposed to progress through three stages (see Fig. 1). Each distinct version of a specification, as defined by the Internet Engineering Task Force and its steering group, is published as part of the "Request for Comments" (RFC) document series maintained by the Internet Architecture Board.[2] The first RFC was published in 1969 when the first four ARPANET nodes were installed. The name very accurately reflects their intention: the series came into being explicitly as working notes.[3] A document in this series may concern any topic related to computer communication, and may be anything from a poem (e.g. RFC 968) or a meeting report to the specification of a standard.

Fig. 1: The Internet "Standards Track" (RFC 1920)

         |                                               ^
         V    0                                          |    4
   +-----------+                                   +===========+
   |   enter   |-->----------------+-------------->|experiment |
   +-----------+                   |               +=====+=====+
                                   |                     |
                                   V    1                |
                             +-----------+               V
                             | proposed  |-------------->+
                        +--->+-----+-----+               |
                        |          |                     |
                        |          V    2                |
                        +<---+-----+-----+               V
                             | draft std |-------------->+
                        +--->+-----+-----+               |
                        |          |                     |
                        |          V    3                |
                        +<---+=====+=====+               V
                             | standard  |-------------->+
                             +=====+=====+               |
                                                          V    5
                                                    +-----------+
                                                    | historic  |
                                                    +-----------+
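Read as a state machine, the standards track of Fig. 1 can be modeled in a few lines of Python; the transition set below is taken from the figure itself, with the left-hand recycle arrows rendered as self-transitions:

```python
# Maturity levels of the Internet standards track (Fig. 1, RFC 1920).
TRANSITIONS = {
    "enter": {"proposed", "experimental"},
    "proposed": {"proposed", "draft standard", "historic"},
    "draft standard": {"draft standard", "standard", "historic"},
    "standard": {"historic"},
    "experimental": {"historic"},
    "historic": set(),  # terminal state
}

def advance(state, target):
    """Move a specification one step along the track, rejecting illegal moves."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state!r} -> {target!r}")
    return target

# A typical career of a specification:
state = "enter"
for step in ("proposed", "draft standard", "standard"):
    state = advance(state, step)
```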
Their accidental and informal origin notwithstanding, RFCs have become one of the core institutions of the Internet. Within the Internet Community there is credibility associated with memos carrying the RFC label. Much of the written information about the Net, including its history, can be found in the RFC series.

Critical Success Factors

Dissatisfaction with traditional standardization fora in telecommunications and the success of the Internet and its technology have focused interest on the Internet as a management model. "Standards process development by the IETF", some analysts conclude, "stands as an important paradigm for evolving standards and technology together in an open, efficient, and highly effective manner" (Branscomb & Kahin, 1995, 25).

Some characteristics of Internet standards setting rank as critical success factors in particular (e.g., Crocker, 1993; Lehr, 1995). Open development has been considered the most salient point. Everyone contributing to the standardization process by attending a meeting or making comments on-line is considered a member of an IETF working group. The selection of technical topics is also an open process. If a topic lacks an adequate constituency, it is not pursued. Strong user involvement in the development process has been stated as a second key factor. With users being directly involved in the choice of a standard, it is argued, they will adopt an efficient solution. Low participation cost through the ability to communicate electronically has been cited as another advantage of the Internet. Experimentation prior to the formal adoption of a standard is perceived to produce specifications that guarantee interoperability. Aggressive timing rules ("move it or lose it") are evidence that the IETF process is designed to accelerate standards development. Thus, it favors protocol designs that solve "simple problems in as simple a manner as possible" (RFC 1336, p 25). This approach is in stark contrast to the desire for functional completeness and the philosophy of including as many features as possible in a standard, as is the case in the work of traditional standards organizations like ISO (International Standards Organization).

Perhaps the most important explanation for the IETF's success to date is the long-term involvement of a dedicated community of computer scientists who see the Internet as a collaborative enterprise in which they act as designers, developers, and users. What analysts highlight as key factors in explaining the success of the Internet may well turn out to be constitutive elements of the overall culture of this community - the culture that underlies the development and adoption of consensus-based protocol standards.

Facing Organizational Change

The success of the Internet provides a fundamental challenge to the Internet-style standards process. It has been doubted whether the IETF process will be able to scale well in a commercialized environment with a multitude of stakeholders and large investments involved (see, e.g., Branscomb & Kahin, 1995, 7).

Over the past years, the network community has diversified significantly. Participation in IETF meetings keeps growing, and many of the new participants have heterogeneous backgrounds. The first IETF meeting in 1986 saw 15 attendees. Today, IETF meetings attract hundreds of participants and spectators. Key decision makers can less and less be expected to have long working relationships with one another.

Lehr (1995, 136) therefore expects the IETF to become more like traditional standards development organizations. Such a change would probably affect the structure of Internet standard setting bodies as well as their specific procedures, time frames and solutions. Obvious steps toward a conventional standard procedure would include increased formalisms and the introduction of voting requirements replacing the consensus rules.

The growth of the Internet and the unstable mix of stakeholders pose a challenge to the process of standards development and adoption in general. The migration to a new Internet protocol, however, is perhaps the most important change in the history of the Net. It was no mere chance that the effort to replace IPv4 by a new version brought the IETF into direct conflict with its governing body and eventually resulted in procedural changes in the standards process (Abate 1995). In the following section of this paper we give an outline of the IPng effort.

3.3 The IPng Effort
  IPv6 represents the eventual synthesis of many different proposals and working groups (see Bradner/Mankin 1996, 193ff). It represents over four years of effort focused on developing an IP Next Generation (see Fig. 2). In January 1991, the Internet Architecture Board (IAB) met with members of the Internet Engineering Steering Group (IESG) to discuss critical issues and future directions of the IP architecture in a growing and ever more diverse Internet. An IAB "Architecture Summit" followed in June 1991.[4] In November 1991, the Internet Engineering Task Force (IETF) formed the Routing and Address (ROAD) group. The group was to explore ways to resolve the perceived address limitations and routing problems of the Internet. In March 1992 the ROAD group offered several recommendations ranging from immediate to long term, including having the IETF call for proposals to stimulate new approaches.

By February 1992 the Internet community had developed four separate proposals.[5] In June 1992, the Internet Architecture Board issued a recommendation for a new Internet Protocol, referred to as IPv7.[6] The IAB recommended CLNP ("Connection-Less Network Protocol") as a basis for further development. This meant, in fact, that the Internet would migrate to an addressing technology created by ISO (International Standards Organisation) and ITU (International Telecommunications Union) to solve the address problem. The bureaucratic ISO, however, had always been the antithesis of what the IETF stood for. Consequently, the IETF rebelled against the IAB recommendation and, at the same time, questioned the IAB's right to act as arbiter of Internet standards. Applying the IETF's technical design philosophy and standards process to its own operation, a new Working Group was formed to discuss how IAB and IETF should reform their decision-making rules.[7] If technology is politics, in the Internet the reverse is also true: the evolution of the rules of the Internet's self-governance is handled in the same way as the Internet's protocol standards.

In the aftermath of the 1992 infighting, the IPng effort took root that led to the present IPv6. An IETF call for IPng proposals went out in July. A number of Working Groups were formed in response. By December 1992 three more proposals followed.[8] In July 1993 an IPng Decision Process "Birds of a Feather" meeting (BOF) was held with the intention to re-focus attention on a decision between the different candidates for IPng. In September 1993 the IETF announced the building of an "area" to investigate the various proposals and recommend how to proceed (http://www.ietf.cnri.reston.va.us/html.charters/ipngwg-charter.html).[9]

The formation of the IPng Area was an attempt to concentrate the resources of the IETF behind a single effort. In spring 1994 the IPng Area directors evaluated three IPng proposals (cf. RFC 1752). In July 1994 the revised proposal of the SIPP Working Group was recommended by the IPng Area directors to be adopted as the basis for IPng. While much of the base protocol came from the SIPP effort, ideas from other proposals were also included.

The recommendation by the IPng Area directors was a significant move towards closure. IPng was accepted by the Internet Engineering Steering Group (IESG) and made a Proposed Standard. The Internet Assigned Numbers Authority (IANA) assigned version number 6 to IPng. The protocol itself was to be called IPv6. A Request for Comments (RFC 1752) was published describing the review process, the recommendation and the features of IPv6 in outline.

A new Working Group was formed to produce the specifications. On September 18, 1995, the IESG approved a core set of IPv6 specifications as Proposed Standards (RFC 1883).[10] At this point the broad lines of the next IP had taken shape. We now address some of the fundamental design choices embodied by the present IPv6.

4 IPng and the Future: Envisioned Usages of Cyberspace
  The development of the next generation of the Internet protocol (IPng) provides a good example of how envisioned conceptions of future techno-social exchange on the Internet become ingrained in its very foundations. On the one hand, the new version of IP has to anticipate the wishes and requirements of the future users of the Internet in order to maintain its attractiveness and superiority over other networks. On the other hand, the next generation of IP is required to be compatible with the current version already in use by millions of people. These competing requirements with regard to the design of IP have been expressed in one of the RFCs accompanying the development process:

"We believe that it is impossible to expect the community to make significant, non-backward compatible changes to the IP Layer more often than once every 10-15 years. In order to be conservative, we strongly urge protocol developers to consider what the Internet will look like in 20 years and design their protocols to fit that vision" (RFC 1726, p. 4). Obviously, there is quite a strong tension between the various necessities anticipated for the future and a tendency toward 'conservative' solutions that have proven functional in the past. At present, it is still too early to risk any bets on the ultimate outcome of the IPng effort. It can safely be said, however, that with IPv6 the Internet community opted for incremental rather than radical change. The new Internet Protocol will be an improvement and extension of the current version. Regardless of the limitations of the next generation of IP, the forthcoming changes will affect almost every aspect of data exchange within the reach of IP.

IPng began as a project aimed at solving the problem of limited addressing capacity.[11] In the meantime, a much broader range of functional areas has become involved: the address system, the routing system, and the means of data control and security. All three fields of innovation are likely to have clear impacts on future usages of the Internet and, hence, on the social organization of open data networks hooked up to the Internet. Some of the expected changes are discussed below.

4.1 The Address System - Addressing Social Practices
  An enlargement of the address system has become necessary because of the rapidly growing user community on the Internet. At the same time, the development of a new address system also faces the questions of who and what (i.e., which objects) should have access to data networks, or which kinds of social practices should be addressed by data networks. As already mentioned above, only activities carried out by means of (stationary) computers have been linked to the electronic world thus far. The trend toward the digitization of analogue technologies which operate independently from one another calls for a redefinition of the accepted "actors" and practices in data networks.

IPng is designed to enable a future growth of human and non-human actors on the Internet by expanding the address space in the IP header - the size of the "address label" that directs each data packet to its destination - from the current 32 bits to 128 bits. According to the designers of IPng, the 128-bit address space provides between 1,564 and 3,911,873,538,269,506,102 addresses per square meter on earth (Hinden 1994, p. 10).[12] This means, at least in principle, that there are enough IP addresses available for any kind of digital device (e.g. the legendary remote-controlled toaster or other everyday devices like televisions). Correspondingly, more and more technologies can be remotely controlled by means of data networks. Additionally, the address space of IPng is expected to provide new possibilities for a topographic and organizational ordering of network actors (see below).
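To give a sense of the magnitude involved, the arithmetic behind such figures can be sketched in a few lines. The figure used below for the earth's surface area is our own assumption, not one taken from the IPng documents; Hinden's far lower bound of 1,564 addresses per square meter reflects the losses incurred by hierarchical address allocation rather than the raw size of the address space.

```python
# Back-of-the-envelope arithmetic for the 128-bit address space.
# The earth's surface area (~5.1e14 m^2) is an assumed figure for
# this sketch, not one taken from the IPng documents.

TOTAL_ADDRESSES = 2 ** 128                  # flat 128-bit address space
EARTH_SURFACE_M2 = 510_000_000_000_000      # roughly 5.1e14 square meters

per_m2 = TOTAL_ADDRESSES // EARTH_SURFACE_M2
print(f"theoretical maximum: {per_m2:.3e} addresses per square meter")
```

The theoretical maximum of roughly 10^23 addresses per square meter shrinks drastically once addresses are assigned hierarchically, which is why Hinden quotes such a wide range.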

An integration of non-computer devices and related activities into open data networks will generate more non-text-based, heterogeneous ways of expression such as, for example, voice- and touch-based forms of digital interaction. However, it seems quite likely that these new types of activities will be accompanied by stronger demands for control and security, for surveillance and authentication. The more activities are enabled by the Internet communication protocols, the stronger the constraints imposed on these very activities will be. Consider, for example, the transfer of money, where private information on bank accounts, credit cards and solvency is involved. In order to guarantee a secure exchange of personal data, a dense system of control and observation is required that potentially encompasses all users of the Internet. Since the security of data exchange can only be achieved if the "data traces" of any violation and any suspect can be traced, the introduction of tools and institutional bodies able to "monitor" social life on the Internet is not unlikely.[13] (This is not to say, though, that such systems of control would ever be successful.)

The extension of possible participants and usages brought about by IPng will cause a somewhat paradoxical shift within the social order of the electronic world. The inclusion of more social activities will bring about severe restrictions affecting the conditions that made these very activities possible. To mention only two basic aspects, this concerns the decentralized and permeable organization of the Internet and a tradition of open exchange of information.[14]

4.2 The Routing System - Spatial Organization of the Electronic World
  The planned reform of the routing system points to the topographic and organizational dimension of order in electronic space. To date, data units travel through networks by hopping from one gateway to the next quite irrespective of their final destination. Whereas this procedure keeps the necessary information about the topographic architecture of cyberspace - and thereby the routing tables - small, remarkable geographical detours are produced nonetheless.[15] As long as most users were located in the US, these detours did not cause a major problem. The geographical expansion of the Internet, together with the increase of traffic and the privatization of the infrastructure, has generated demand for a more effective routing system.
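The hop-by-hop principle can be illustrated with a toy model in which each gateway consults only its own next-hop table and knows nothing about the rest of the path. All names and routes below are invented for illustration; real routing tables are vastly larger and built by routing protocols rather than by hand.

```python
# Toy model of hop-by-hop forwarding: each gateway holds only a
# next-hop table and never sees the full path. Names and the
# deliberately indirect route are invented for illustration.

tables = {
    "berlin":   {"cologne": "new_york"},   # a geographical detour
    "new_york": {"cologne": "cologne"},
    "cologne":  {},
}

def forward(src: str, dst: str) -> list:
    """Follow next-hop entries from src until dst is reached."""
    path = [src]
    while path[-1] != dst:
        path.append(tables[path[-1]][dst])
    return path

print(forward("berlin", "cologne"))  # -> ['berlin', 'new_york', 'cologne']
```

Each gateway's decision is locally reasonable, yet the resulting end-to-end path takes exactly the kind of Berlin-New York-Cologne detour described in note 15.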

Within the framework of the expanded address space, new types of addresses will also be introduced. So-called anycast and cluster addresses (groups of nodes sharing a common address prefix) specify regional or organizational destinations instead of individual nodes (unicast addresses). Cluster addresses are designed to improve the source node's control over the travel routes. Such "source selected policies" are expected to shorten travel paths. Moreover, they will enable network administrators to select specific providers to carry the traffic and thereby increase the sysop's control over the virtual traffic infrastructure (Hinden 1994).
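The idea behind a cluster address can be sketched with a simple prefix test. The example below uses Python's present-day ipaddress module and invented documentation addresses; it merely illustrates the principle that a group of nodes is identified by a shared prefix rather than by their full 128-bit addresses.

```python
# Sketch of prefix-based ("cluster") addressing: a group of nodes is
# identified by a shared address prefix. The addresses and the /40
# prefix are invented for illustration.
import ipaddress

provider = ipaddress.ip_network("2001:db8:ab00::/40")  # hypothetical provider prefix

node_a = ipaddress.ip_address("2001:db8:ab42::1")
node_b = ipaddress.ip_address("2001:db8:ff00::1")

print(node_a in provider)   # shares the 40-bit prefix -> True
print(node_b in provider)   # different prefix -> False
```

A router (or a source node implementing a "source selected policy") need only compare the leading bits of an address against such a prefix to decide where a whole group of destinations is reached.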

IP-related means to increase and decentralize control over travel routes can be regarded as a significant step towards a denser spatial ordering of the electronic terrain. Source selected policies will foster competition among providers. Correspondingly, customers will be able - albeit to varying extents - to choose among different providers who offer different services under different conditions. As a consequence, the infrastructure of electronic space may become more similar to that of a deregulated telecom world which clearly favors large customers. Address prefixes within IPng may turn out to be one way to come closer to such a model.

Because of its immaterial character, almost any kind of geographical and organizational order within cyberspace seems conceivable. Neither national borders nor continents constitute obvious factors determining the organization of cyberspace. In any case, each type of ordering will correspond with specific social configurations of inclusion and exclusion, expressed in manifold divisions between, for example, private and public or commercial and free forms of exchange. Access to services and information and the scope of unregulated expression and exchange are areas already well known for being easily affected (and hurt) by any kind of universal order.

4.3 Principles of Data Transmission: Security and Social Control
  The third part of IPng concerns the conditions under which interaction in electronic space is supposed to take place in the future. Above all, this pertains to the security of data flow, which is regarded as essential if the commercial use of open data networks is to broaden. The more diverse the users and usages of open networks become, the more important the means and methods for creating reliable boundaries between those users and usages appear to be - boundaries that help establish realms of privacy that can be fully controlled by those authorized to do so. Common security measures are "firewalls" intended to protect electronic areas against unauthorized access; means of "authentication" to identify actors; means of tracking the "data traces" of users; and encryption codes to protect proprietary information. As is already evident, attempts to increase the degree of security and social control in open data networks such as the Internet are hampered by the almost unlimited possibilities to manufacture, simulate, or even fake the identities of both people and objects (Haraway 1991; Stone 1992; Stone 1995). The immaterial and border-crossing character of the electronic realm therefore poses a challenge to long-established, government-based procedures of public control. What kind of democratic body could legitimately deal with problems arising from the specific conditions of the electronic space? And on what legal basis could it do so? IPng-related efforts to increase the security of data transmission on the Internet can be interpreted as an attempt to establish a technical infrastructure which provides the means to solve at least some of these problems without requiring recourse to legal enforcement.

Another important feature of IPng concerns the type of data transferred. Under the current version of IP, all types of data are treated equally. The enlarged IPng header permits the labelling of data packets according to desired delivery procedures. The source node is to indicate the kind of data to be transferred and to assign it a specific priority level. This flow label permits a distinction between "real-time" (time-dependent) data and so-called "flow-controlled" data (email, FTP or netnews) which, unlike real-time data, elicits automatic feedback from the receiving host.
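The layout of the relevant header fields can be sketched as follows. RFC 1883 begins the IPv6 header with a 4-bit version field, a 4-bit priority field and a 24-bit flow label; the priority and label values used below are arbitrary examples.

```python
# Packing the first 32-bit word of the IPv6 header (RFC 1883 layout):
# 4-bit version, 4-bit priority, 24-bit flow label.
import struct

def pack_first_word(priority: int, flow_label: int) -> bytes:
    """Return version 6, priority and flow label as 4 network-order bytes."""
    assert 0 <= priority < 2 ** 4
    assert 0 <= flow_label < 2 ** 24
    word = (6 << 28) | (priority << 24) | flow_label
    return struct.pack("!I", word)

# Arbitrary example values: priority 4, flow label 0x00abcd.
print(pack_first_word(4, 0x00ABCD).hex())  # -> "6400abcd"
```

Under RFC 1883, priority values 0 through 7 were set aside for traffic whose source provides congestion control, and 8 through 15 for "real-time" traffic that does not back off - mirroring the distinction between flow-controlled and real-time data drawn above.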

On the one hand, the introduction of flow labels improves the conditions of data transmission and thereby enriches the repertoire of social interaction on the Internet. The distinction between different types of packets aims to facilitate the transfer of real-time data such as voice, for example. On the other hand, flow labels may foster a tendency toward institutionalizing hierarchies and categories regarding social exchanges on the Internet. Thus, new and better means of identifying and transporting the data to be exchanged may also entail new and better forms of social control. This is to say that the increase in forms of exchange brought forth by the next generation of IP will come at the price of restrictions on interaction in cyberspace. New types of data exchange such as financial transactions will raise the need for monitoring these very exchanges.

From a social scientific point of view, the ongoing changes to the Internet can be described in terms of their inclusive and exclusive effects on social interaction in cyberspace. Beyond its decentralized and open character, the Internet consists of both enabling and constraining elements. Some of them, such as the transmission rules of IP, are technically constituted; others, such as the standards setting procedures, are organizational by nature. As a whole, these elements form what we refer to as the social order of the Internet. IPng will change the specific composition of these elements - whether to its advantage or disadvantage remains to be seen.

5 The Challenge of Success: The Fragility of the Internet's Self-Governing Structure
  Contrary to the alleged chaos and anarchy in cyberspace, communication on the Internet has brought forth specific structures and conventions, skills and goals that together form a tentative socio-technical order which is itself permanently subject to change. The Internet standards development process served as an example to demonstrate not only the existence but also the comparative effectiveness of the institutions, conventions and rules governing the Internet. While these rules and conventions express the culture of the Internet community, this culture is at the same time shaped by changing practical and institutional circumstances. Again, the standards setting procedure of the Internet serves as an example for this point. The remarkable openness of the Internet bodies to everyone who wants to participate in the work on new standards arguably reflects a period during which the Internet community was rather small and homogeneous. Correspondingly, the steady increase in participants raises doubts about the future adequacy of this convention.

Furthermore, the case of IPng shows that the order of the Internet consists of social as well as technical conventions. Both types of rules emerge interdependently. While the whole idea of a public electronic space is essentially based on technical protocols which support distributed communication, the design of such protocols reflects their envisioned users and usages. Some of these visions concerning future interaction on the Internet have been discussed in this paper.

The ongoing growth of the Internet's user community will bring about significant changes and challenges for the existing communication culture. This is especially true for its decentralized, self-regulated structure. Lately, more and more governments have begun to take legal action against the organizational order of cyberspace. At issue are various forms of legal infringement, whose definition differs from country to country. While the success of these legal endeavors is doubtful, the damage to a communication culture based on free exchange seems more or less predictable.

Against this background, the reform of IP can be regarded as an attempt to keep pace with the growth of the Internet and its accompanying cultural changes in a self-determined way. It seems not unlikely, however, that the technical and social conventions generated within the framework of IPng will eventually undermine the very conditions which made the development of IPng possible in the way described above.

  Abate, Tom (1995) Internet Infighting. Upside 10/95 (http://www.upside.com/resource/print/9510/)

Bradner, Scott O. & Allison Mankin, eds. (1996) IPng. Internet Protocol Next Generation. Reading, MA, Addison-Wesley.

Cerf, Vinton G. (1980) Final Report of the Stanford University TCP Project. (1.4.1980). http://www.cis.ohio-state.edu/htbin/ien/ien151.html

Cerf, Vinton (1993) How The Internet Came to Be (as told to Bernard Aboba) (gopher://gopher.isoc.org:70/00/internet/history/how.internet.came.to.be)

Cisler, Steve (1994) (Interview with Vinton Cerf.) Wired 12/94: 153-154.

Comer, Douglas E. (1991) Internetworking with TCP/IP. Vol. I: Principles, Protocols, and Architecture, Prentice-Hall, Englewood Cliffs.

Crocker, D. (1993) Making Standards the IETF Way, StandardView, Vol. 1, No. 1, 1993 (http://info.isoc.org:80/papers/standards/crocker-on-standards.html)

Hafner, Katie (1994a) For 'Father of the Internet,' New Goals, Same Energy. New York Times 25.9.1994.

Hafner, Katie (1994b) The Creators. Wired 12/94: 152-153.

Hardy, Henry Edward (1993) The History of the Net. Master's Thesis, School of Communications, Grand Valley State University, Allendale. ftp://umcc.umich.edu, /pub/users/seraphim/doc/nethist8.txt

Hauben, Ronda (1993) The Role of Unix in the Development of the Net and in the Automation of Telephone Support Operations, in The Netizens and the Wonderful World of the Internet, M. Hauben & R. Hauben (eds.) (http://www.informatik.uni-leipzig.de/fachschaft/Medien/netbook/ch.5_evol_unix.hml)

Hinden, Robert M. (1994) IP Next Generation Overview, Internet Draft, http:// playground.sun.com/pub/ipng/html/doc/ipng-overview.txt.

Huitema, Christian (1995) Routing in the Internet, Prentice-Hall, Englewood Cliffs.

Lehr, William (1995) Compatibility Standards and Interoperability: Lessons from the Internet, in: Standards Policy for Information Infrastructure, B. Kahin & J. Abbate (eds.), MIT Press, Cambridge, MA, 121-147.

Kahin, Brian & Janet Abbate (eds.) (1995) Standards Policy for Information Infrastructure, MIT Press, Cambridge, MA.

[RFC 1000] J. Reynolds & J. Postel, The Request For Comments Reference Guide, August 1987.

[RFC 1150] G. Malkin & J. Reynolds, (1990) F.Y.I. on F.Y.I. Introduction to the F.Y.I. Notes, March 1990.

[RFC 1311] Internet Activities Board. J. Postel, Editor, Introduction to the STD Notes, March 1992.

[RFC 1336] Malkin, G., Who's Who in the Internet. Biographies of IAB, IESG and IRSG Members, May 1992 [FYI: 9]

[RFC 1380] P. Gross & P. Almquist, IESG Deliberations on Routing and Addressing. November 1992.

[RFC 1475] Ullmann, R., TP/IX: The Next Internet, June 1993

[RFC 1550] S. Bradner & A. Mankin, IP: Next generation (IPng) White Paper Solicitation, December 1993 (Category: Informational)

[RFC 1602] Internet Architecture Board and Internet Engineering Steering Group, The Internet Standards Process -- Revision 2, March 1994.

[RFC 1603] E. Huizer & D. Crocker, IETF Working Group Guidelines and Procedures, March 1994

[RFC 1726] Partridge, C. & F. Kastenholz, Technical Criteria for Choosing IP The Next Generation (IPng), December 1994.

[RFC 1752] S. Bradner & A. Mankin, The Recommendation for the IP Next Generation Protocol. January 1995. (Category: Standards Track)

[RFC 1883] Deering, S. & R. Hinden, Internet Protocol, Version 6 (IPv6) Specification, December 1995

Weinberg, Steve G., "Addressing the Future of the Net", Wired May 1995, 141/44.

  [1] The IETF was formed in 1986 as an offshoot of the Internet Activities Board (IAB). The IAB evolved from a DARPA-specific research group into an autonomous organization responsible for overall architectural considerations in the Internet. In 1992 the IAB was placed under the then newly formed Internet Society, changing its name to Internet Architecture Board (http://www.iab.org/iab/). For a detailed description of the Internet standardization process cf. Alvestrand in this volume.

[2] RFCs are available at no cost and on-line at ftp://ds.internic.net. The "Internet Official Protocol Standards" (currently RFC 1920) documents the status of each RFC on the Internet standards track, as well as the status of RFCs of other types. The RFCs documenting "full" Internet standards form the 'STD' subseries which was introduced with RFC 1311 in the Spring 1992.

[3] A short description of the origins of RFCs is contained in RFC 1000.

[4] See http://info.internet.isi.edu:80/IAB/IABmins.910108.Arch for a summary of the meeting.

[5] These were "CNAT", "IP Encaps", "Nimrod", and "Simple CLNP". These efforts represented different views of the issues involved and sought to optimize different aspects of the possible solutions (for details see Huitema 1995, 314-315).

[6] Rob Ullmann, who authored the TP/IX-proposal, addresses the question why the new protocol was to be called version 7. "(...) there wasn't anything magic about the number 7, I made it up. Version 4 is the familiar current version of IP. Version 5 is the experimental ST (Stream) protocol. ST-II, a newer version of ST, uses the same version number, something I was not aware of until recently; I suspected it might have been allocated 6. Besides, I liked 7. Apparently (..) the IAB followed much the same logic, and may have had the idea planted by the mention of version 7 in the "Toasternet Part II" memo." (RFC 1475, p. 6)

[7] See RFC 1602 and RFC 1603 for the results.

[8] These were "The P Internet Protocol" (PIP), "The Simple Internet Protocol" (SIP) and "TP/IX". "Simple CLNP" evolved into "TCP and UDP with Bigger Addresses" (TUBA) and "IP Encaps" evolved into "IP Address Encapsulation" (IPAE).

[9] By the fall of 1993 IPAE had merged with SIP while still maintaining the name SIP. This group later merged with PIP and the resulting working group called themselves "Simple Internet Protocol Plus" (SIPP). At about the same time the TP/IX Working Group changed its name to "Common Architecture for the Internet" (CATNIP). For a review of the different proposals see Bradner & Mankin 1996, 195-202.

[10] In addition to this core set of IPv6 protocols, a number of extensions have been published recently as Internet drafts (http://web.nexor.co.uk/public/internet-drafts/data/ipngwg.html).

[11] IP Next Generation Home Page: http://playground.sun.com/pub/ipng/html/ipng-main.html

[12] The actual number of nodes available depends on the number of hierarchy levels employed.

[13] The public struggle between the US government, major software companies, and the Internet community over future encryption policies, which revolves basically around the question of whether or not the government is entitled to "read" all encrypted data on digital networks (known as the Clipper Chip affair), is quite telling in this respect.

[14] A current example of the tendency toward more restrictions can be found in the ongoing conflict over encryption policies between various national governments, companies and the Internet community. The governments of France and the US, among others, claim the right to control information transferred on the Internet and thereby try to impose national law on the non-national electronic space. An enforcement of national borders, however, would mean a significant restriction on an otherwise expanding social exchange on the Internet.

[15] For example, it would not be unusual for a data unit sent from Berlin to Cologne to travel a route via New York. Likewise, an email sent from a Berlin based sender to a Berlin-based addressee might travel via Dortmund. For the users themselves these detours are quite unimportant, as the transmission speed is usually too high for such cumbersome routing mechanisms to be noticed. Indeed, the high speed of data transport in the electronic world diminishes geographical distances to such an extent that remoteness tends to lose its significance.



Projektgruppe "Kulturraum Internet". c/o Wissenschaftszentrum Berlin für Sozialforschung (WZB)
Reichpietschufer 50, 10785 Berlin. Telefon: (030) 254 91 - 207; Fax: (030) 254 91 - 209;
http://duplox.wzb.eu