Governing Technologies and Techniques of Government: Politics on the Net
 

 

  Contents
1  "Meta matters": Developing the research methods
2  Problems with scaling on the Net
2.1  The Internet's constitution: The Internet Protocol
2.2  Regulating the Net: The Internet Engineering Task Force
3  Governing technologies: The old and the new generation
3.1  The Interim Solution: CIDR
3.2  IP, the Next Generation: Good architecture between reform and revolution
3.2.1  Address format I: Semantics
3.2.2  Address format II: Address length
3.2.3  Address format III: Principles of space administration
4  "The Internet way of doing things" - Net techniques of government
4.1  "Fiddling while the Internet is drowning" - Goodbye rough consensus
4.2  IPv6: A new model for the Internet
5  "IPv4ever"?
5.1  Network Address Translators - Self-help on the Net
5.2  Data flows
5.3  "IP Written in stone?"
6  "So long, and thanks for all the packets": Conclusion

 

 
"Fixing society to make folks to be rational is ok as an interim solution but it is just plain easier to fix technology in the long run." (Antonov, IPv6 haters, 12.1.96)

When people talk about the political significance of the Internet they usually mean its influence on existing forms of political organisation and participation: How do the parties present themselves on the Net? Who uses this service? Can the Net be expected to yield new, more direct forms of political participation? These are the questions posed from the political science perspective (see, e.g., Gellner & von Korff 1998; Hagen 1996; Wagner & Kubicek 1996; London 1994). It is not the nature of the medium itself that is considered interesting, but the effects it has on the political landscape. But what about the Internet's constitutional framework? Do the development and use of the Net themselves give rise to an independent political dimension?

At first we simply asserted that the Internet had an internal political dimension rather than actually justifying this claim systematically (see Helmers, Hoffman & Hofmann 1996, p. 36ff.). The thrust of the argument was that in a technically constituted space, in which all action takes the form of data flow, the common concern of this action, the res publica so to speak, is also technically constituted. Following this reasoning, political significance can be attributed to all those events and objects that affect the constitution of the Net and the conditions under which its users operate.

From an empirical point of view, the political quality of the Net has both a normative and a procedural form. The Internet is based on the idea of good network architecture and on a set of rules which are intended to ensure the good quality of its development. This goodness is meant quite in the Aristotelian sense: the manner in which the early Internet community "felt" its way to the principles of a good digital data network is comparable to the quest for the rules of a just and good life (Cerf, cited in Salus 1995, p. 29). The notion of what constitutes a good data network corresponds to the practical goal of "global connectivity", and the pursuit of this goal is what gave the Internet its current make-up. In summary, "Connectivity is the basic stuff from which the Internet is made" (Mahdavi & Paxson 1997). Accordingly, those rules or technologies that serve the project of global connectivity are good and just rules.

Making a central political motto out of a maxim such as global connectivity may at first seem trivial. Wasn't this aim reached long ago by the telephone? When compared with the telephone system, the Internet in a sense radicalises the idea of being connected to a network. It equips the physical cables and appliances we are familiar with from the world of telephones with a new philosophy of use or, more precisely, with other software-based transmission technologies whose core elements are found in the family of protocols called TCP/IP (Transmission Control Protocol/Internet Protocol).

Perhaps the most important difference between the idea of connectivity in the "telco world" and "IP land" is the type and the scope of the conditions under which the users and the applications of the networks must operate. The difference between the telephone - which for a long time reserved connectivity for voice transmission - and the Internet is that the latter aims to support all forms of communication and service that can be digitalised.

Unlike the proprietary technologies of the telephone world, which are protected under property law, Internet Standards such as TCP/IP are open and may thus be used by anyone without any limitations. The consequences for the establishment of a good architecture are more far-reaching than one might think. Open standards mean a renunciation not only of patent rights but also of all control over use and, above all, over further development. Everyone is free to develop new transmission procedures or services for the Internet. Neither the Internet community1, at present the most important authority with regard to standardisation on the Net, nor other organisations can effectively control the further development of the Internet. The goal of global connectivity is thus being pursued on the Internet under conditions and with aims and consequences that differ from those obtaining in centralised networks administered under governmental control and standardised by organisations such as the ISO (International Organization for Standardization) or the ITU (International Telecommunication Union) (see Werle & Leib 1997). Lacking any means of sanction, governmental power on the Internet relies almost exclusively on the general recognition of its rules and products.

The political dimension of "Internet Governance" (Baer 1996; Gillet & Kapor 1997; Gould 1997) can be seen in the pursuit of a good architectural order, which is understood as a common good that serves global communication. This order is not, incidentally, the only option, but has been developed in explicit competition with other concepts of communication networks, above all with those of "POTS" (Plain Old Telephone Service). We characterise the results of these efforts, the specific architecture of the Internet, as governing technologies.

The fact that the properties of Internet architecture can be described in terms of ancient political categories does not mean that those carrying out the project wish their activities to be understood as political. On the contrary, the Internet community insists on a concept of technology which is strictly removed from politics. According to Net engineers, the differences between the architecture of the telephone network and that of the Internet derive from the fact that the former was developed from a political and the latter purely from a technical point of view (see Hofmann 1998a).

However, contrary to the self-image of those involved, we perceive a further dimension of political power in the institutions, values and strategies of inclusion and exclusion that characterise the development of the Net; we call this dimension techniques of government.

The interplay of governing technologies and techniques of government serves as a conceptual means of access to the practices of political ordering on the Internet. We decided to examine the question of the practices and objects of governance on the Internet by means of a case study on the development of the next generation of the Internet Protocol (IP), known as IPng, the most important of the Internet Standards. How does the Internet organise its own continued existence, and what choices are seen to be available? What interested us here were both the technical aims and conflicts and the specific procedures involved in making and implementing decisions. The emphasis in this text will be more on governing technologies than on techniques of government. We will provide an exemplary outline of the architectural dilemmas of the Internet in order then to investigate the question of how matters that apparently cannot be decided on technical grounds are in fact decided in the IETF.

The choice of IP for the case study seemed justified by its fundamental importance on the Internet. After all, IP contains the set of canonical rules that allow computers to contact each other and to exchange data in the first place, and thus to join autonomous networks together into the Internet.

    IP defines
  • the addressing system (the order of the net's address space),
  • the characteristic format in which data is transmitted (packet switching),
  • the type of data transmission ("connectionless", i.e. hopping from one router to the next) and
  • the reliability of data transmission (best-effort service, no guarantees).2
IP is regarded as the mother tongue of the data-communication space. It contains the minimum binding rules for the exchange of data and thus also the key to the achievement of global connectivity (RFC 1958). A few months before this study began, the Internet community agreed on a model for IPng and called it IPv6. Our case study reconstructs the prehistory of the lengthy selection process and looks at the process of specification and implementation of IPv6 up to the stage of readiness for application. What insights into the constitution of political authority on the Internet can be gained from the development of IPv6? Our answers to this question are influenced by the ethnographically oriented research method and the types of sources used.
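To give a concrete, if deliberately simplified, impression of what these minimum binding rules look like in practice, the following sketch - our illustration, not part of the sources analysed here - decodes the fixed 20-byte header that, under IPv4, precedes every data packet. The field layout follows RFC 791; the sample addresses and field values are invented for the example.

    import socket
    import struct

    def parse_ipv4_header(packet: bytes) -> dict:
        """Unpack the fixed part of an IPv4 header (20 bytes, RFC 791)."""
        (ver_ihl, tos, total_len, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
        return {
            "version": ver_ihl >> 4,              # 4 for IPv4
            "header_length": (ver_ihl & 0x0F) * 4,
            "total_length": total_len,            # header plus payload, in bytes
            "ttl": ttl,                           # decremented at every router "hop"
            "protocol": proto,                    # 6 = TCP, 17 = UDP, ...
            "source": socket.inet_ntoa(src),      # 32-bit sender address
            "destination": socket.inet_ntoa(dst), # 32-bit addressee
        }

    # A hand-made example datagram header (checksum left at 0 for brevity)
    sample = struct.pack("!BBHHHBBH4s4s", (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
                         socket.inet_aton("192.0.2.1"), socket.inet_aton("198.51.100.7"))
    print(parse_ipv4_header(sample))

Everything beyond this header - what the payload means, whether it arrives at all and in what order - lies outside IP's "best-effort" promise and is left to the layers above it.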
1 "Meta matters": Developing the research methods
 
"If no-one has any objection, I will setup a daemon to automatically post random postings from the big-internet [mailing list, d. A.] archive until we all travel sufficiently far back in time that we can prevent the spice girls from meeting."
(Chiappa, IETF list, 15.7.98)

Studies of political power based on ethnographic methods focus on actors, resources and strategies that normally have rather marginal significance in political science. On the Internet these would include, for example, the customs, rituals, sacred values and almost religious convictions that come into play in struggles for the sovereignty to define matters of public interest. The advantage of cultural research approaches to the Net is their ability to question the prerequisites and conditions of existence of social associations. These usually seem self-evident and in no real need of clarification in empirical political science, whose thinking is shaped by societies organised on the model of the nation state. Internet governance cannot fall back on the usual repertoire of organisational resources found in states (see Reidenberg 1997; Willke 1997). The Internet has no constitution, no government and no parliament; it is not even subject to a legal system. Binding sets of rules such as the Internet Protocol derive acceptance, respect and recognition exclusively from the agreement of their users, or of their network administrators. Paradoxically, it is the architecture of the Internet that is responsible for this decentralisation of decision-making powers. The more successful the project of global connectivity is, the more uncontrollable will be its effects on the architecture of the Net.

The aims, strategies and conflicts associated with the further development of the Internet's architecture were examined with the aid of four different types of source material:

  1. Document analysis: Internet Drafts and Requests for Comments (RFCs), the IETF's two publication series, allow very fruitful document analysis. Together they give a good overall picture of the current state of the entire standardisation procedure in the community, and are more up to date than any other source of documents. RFCs are also occasionally used to communicate other announcements, and some have even included poems (see RFC 1121) or April Fool jokes (see Footnote 19). In the eighteen months since the beginning of 1997 the IETF has published over 3,000 Internet Drafts and almost 400 RFCs. Specifications of IPv6 are currently described in 29 Internet Drafts and 16 RFCs.3

  2. IETF Conferences: The working conferences of the IETF, which take place three times a year, were another source of data. The five-day meetings of between 2,000 and 2,500 engineers provide an opportunity to study the practical working methods, the form of internal organisation, the typical customs and, not least, the relationship between the electronic form of communication and that of the real world. During the conferences, participants hitherto known to us only from their written contributions on the Net became recognisable faces and personalities. For one thing, we could now match names and opinions to physical appearances; for another, individuals now had specific roles (father figure, functionary or outsider), reputations and statuses. These are fundamental steering influences in the development of the Net which we could not have deduced from observation on the Net alone, but which proved useful in acquiring a better understanding of structures and of the course of electronic debates.
    A good example of the way authority and status are expressed in the community is the organisation of discussions at the meetings of the working groups. It is not the order of the queue behind the microphone that decides who gets to speak next, but rather the status of the speaker, or of his intended contribution. Status is both claimed - e.g. in the decisive manner in which the microphone is approached - and readily conceded: those of higher rank are allowed to skip the queue.

  3. Interviews with experts: Participation at the conferences of the IETF also allowed us to conduct interviews. On the one hand, the interviews were intended to supplement the history of the development of IPv6 - as reconstructed from electronic archives - with the actual memories of those involved (also see Hofmann 1998b). On the other, the questions were aimed at clarifying problem areas - such as the question of alternatives to IPv6 or the consequences of private address space for the network architecture - which were unrelated or somewhat tangential to the discussions of the working groups.
    Despite its influential role in the regulation of the Internet, the IETF has not even begun to establish coherent development strategies. Debates that go beyond concrete standardisation programmes at best turn out to be (usually short-lived) incidental contributions to the relatively strictly regulated flow of communication in the working groups. An authority of the community described this phenomenon as follows: "There is a lack of a comprehensive ´business plan´ within the IETF. There is no consensus or discussion within the entire IETF about the direction of an area, for example".4

  4. Mailing Lists: Mailing lists in fact proved to be the most important source of information for the case study. This distributed communication service represents the most important development centre for Internet technology. Each of the approximately 100 working groups in the IETF maintains its own mailing list, which in principle is open to anyone interested. The IPng mailing list is read by around 1,400 subscribers at present; the number of active members is, however, far smaller, amounting to barely 5% of the mainly "lurking" list members. Since the establishment of the list exactly four years ago in mid-1994, around 6,000 contributions have been made.

    We have described reading mailing lists as a qualitatively new kind of "window" on the research area of technology development on the Net (Hofmann 1998b). What is special about mailing lists in terms of research strategy is that one can closely observe the reasoning of the engineers without being addressed oneself. This research source opens up access - as otherwise perhaps only archive research can - to the area of investigation without even indirectly leading to undesired interventions. Although they are as present as the active participants on the list, the spectators are not heeded by them. This lack of attention to the spectators is evidenced by the fact that there are no attempts to translate or de-contextualise. The engineers communicate with each other on the mailing lists in their own particular written language, characterised by a strict economy of words and letters (countless acronyms!) and a specific linguistic wit.

    Mailing lists are sources with many voices. They permit subscribers to follow and be present at conflicts about the architecture of the Net. In other words, one is participating in an ongoing discussion in which the various protagonists are articulating positions on one's own research topic. In 1997, the IPng mailing list, the official development workshop of the new Internet Protocol, even fell victim to a kind of counter-attack: Critics of IPv6 and those interested in this opposition founded the "IPv6-haters list".5
    To be sure, mailing lists are not a replacement for other sources such as interviews, but, on account of the special conditions of observation that exist on the Net, they do allow a kind of "being there" which cannot be attained by traditional investigative methods. One of the "generic" insights into governmental power on the Net gained from this source is the breadth of the spectrum of diverging but equally plausible positions on Internet architecture which are represented in the community. The observation of the collective and public development of technology as a kind of "live show" allowed us to realise that there are actually different development options associated with the project of global connectivity. In the name of good architecture engineers battle over which course of action comes closest to achieving the goal and over which compromises are acceptable. The problems involved and how they are assessed and processed will be reconstructed (very) roughly in the following on the basis of the developmental history of a technology which will soon perhaps be a governing technology. Extracts from interviews (anonymous) and contributions to mailing lists6 are included for illustration and increased plausibility.

2 Problems with scaling on the Net
 
2.1 The Internet's constitution: The Internet Protocol
 
"Scalability is not only a technical term these days. It is quite worrysome to see how many dependencies there are on the network, and what it takes to make solutions viable, acceptable, implementable and manageable, including in a global and also quite politicized arena."
(Braun, big I, 3.7.92, 0531)

The more the Internet expands, the more precarious becomes any form of coherent coping with the crises caused by its own growth. Around 1990 the Net seemed to come under pressure in two important areas. On the one hand, the numerical addressing capacity began to exhibit bottlenecks. The address space determines the number of possible nodes and users who can receive a unique identity on the Internet - and thus also the maximum size that the Net can reach. A similar kind of problem was encountered in the area of routing. The number of routes on the Net grew more rapidly than the capacity of computers to compute them.7 The routers' capacity for storage and computation ultimately determines the number of accessible objects on the Net, i.e. nodes or sites to which data routes can be traced.

The project of global connectivity had come up against two complicated and also ominous problems of scale. The situation at the beginning of the 1990s appeared ominous in so far as the Internet was heading for collapse; it was complicated because the difficulty was not the absolute number of available addresses and computable data routes between them - IPv4's 32-bit address field can still identify approximately 4 billion host computers in 16.7 million networks (RFC 1752) - but an aspect to which the engineers had not paid much attention up till then: the organisation of the address space. The procedure for allocating Net addresses then in use proved to be too generous and even wasteful, as well as being extremely unsystematic in view of the new size of the Internet (see Ford, Rekhter & Braun 1993, p. 14). Many Net addresses remain unused today because networks and sites only use a fraction of the address volume they received in the past. Estimates suggest that approximately a tenth of the addressing capacity of IPv4 is actually in use.8 The problem of the "routing explosion" is also only indirectly related to the growth of the Internet. The address space lacked a systematic order which would have allowed the growth in the number of data routes to be decoupled from the growth in the number of sites. The key notion became "aggregatable address allocation".
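The orders of magnitude involved can be reconstructed with a little arithmetic (our rough sketch; the "one tenth in use" figure is the estimate quoted above, not a computed value):

    # Back-of-the-envelope figures for the 32-bit IPv4 address space
    addresses = 2 ** 32    # roughly 4.29 billion possible host addresses
    networks = 2 ** 24     # ~16.7 million, the network figure cited from RFC 1752

    print(f"{addresses:,} addresses on up to {networks:,} networks")
    print(f"of which an estimated tenth is in use: about {addresses // 10:,} addresses")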

According to the recollections of our interviewees, the structure of the address space did not play a role in the development of IPv4. As long as the Internet remained small and only had one backbone, i.e. only one transit route for all data packets, the efficiency of the addressing procedure was of as little interest as its "routing friendliness". Addresses principally served the purpose of identification and not of localisation of objects on the Net. Up to 1993/94 addresses were issued sequentially, i.e. in the order in which they were applied for, without taking the geographical or topological position of the networks into consideration.9 From the routers' perspective the address space of IP is therefore a flat object. Because there was no subdivided structure according to which Net addresses could be organised, they all had to be "advertised". And the more the number of networks seeking a connection to the Internet increased, the longer became the tables in which routers stored Net addresses and the routes to them.

Higher addressing efficiency, as was proposed by some members of the Internet community, could indeed have remedied the problem of the scarcity of addresses in the short term, but it also threatened to make the routing bottleneck even worse. Conversely, while the introduction of a hierarchical addressing system could be expected to improve the routing situation, the already low use-efficiency of address allocation would have been lowered even further (see Huitema 1996).

These inter-related problems of scaling provided the context for the development of a variety of contradictory solutions which perforce led on to two central questions about government on the Net. One concerned the architecture of the Internet: Are the current scaling problems an indication that the project of global connectivity requires a fundamental reform of the organisation of the Net, or does it only require an improved implementation of its principles? The other concerned the distribution of power in the IETF: Who is responsible for making a decision on this issue?

2.2 Regulating the Net: The Internet Engineering Task Force
  In 1998, the Internet Engineering Task Force is still the most important regulatory authority on the Internet. Many of the de facto standards which together constitute the Internet come from its working groups.10 However, the existence of the "non-governmental governance of the Internet" (Baer 1996) is largely unknown outside the world of the Net. This is especially true of Europe, where only a few companies and research institutes have so far shown any interest in becoming involved. That can partly be explained by the unusual image of the IETF, as well as by its form of organisation, which differs from that of official, international standardisation committees. The Internet community possesses neither legal status nor formal rules of membership. Everyone who subscribes to the mailing lists of the IETF and who participates in its meetings can and ought to consider himself a member. The IETF is open to anyone who is interested, provided they have the necessary technical competence and practical engineering skills. The exclusionary effect of this prerequisite should not be underestimated. The IETF has traditionally understood itself as an elite in the technical development of communication networks. Gestures of superiority and a dim view of other standardisation committees are matched by unmistakable impatience with incompetence in its own ranks.

As in the early days, when the Internet community was still a small and mainly academic grouping, the rule is still "one man, one vote". Everyone speaks only for himself and not for the organisation he represents in the IETF: "all participants in the IETF are there as INDIVIDUALS, not as companies or representatives thereof. (Whether this is a quaint fiction is an interesting discussion given enough beer, but it is certainly the historical basis of the organization (...)." (O'Dell, 18. 12. 97, POISED)

Majority decisions and contentious votes are frowned upon. The members continue discussing an issue until broad consensus ("rough consensus") emerges.

The impression that the IETF is a grassroots community is strengthened when you see the clothing worn by its members. T-shirts, shorts, sandals and an aversion to suits and ties play such an important role in the Internet community's self-image that novices are explicitly made aware of the existence of the "dress code" (see RFC 1718). The rules regarding both the formation of a consensus and the dress code are manifestations of the programmatic attitude of the technical community to the Internet. From the perspective of the engineers, the Internet reflects a tradition of technical development in which those who make decisions are also those who do the practical work, i.e. those who write program code. Suits and ties, on the other hand, symbolise a division of labour according to which decisions are made by management and marketing. The technicians believe that the Internet should be governed by technicians. The primacy of standards development is expressed in the community's form of organisation:

"One of the things that no one seems to get is that voluntary standards groups are bottom-up organizations. The program of work is set by who shows up and what they want to work on. If people quit attending they have no raison d'etre. Voluntary standards are just that: ´use them if you want.´ (...) Voluntary standards have weight in the market place because vendors and users decide to implement and buy products using those standards and *choose* to attend meetings of those organizations." (Day, IPv6-haters, 16. 2. 96; 2. 3. 96, 0468, 0540)
The idea is that governance on the Internet should be based solely on the recognition of procedures and products. According to the technicians, the quality of a standard should determine its success.

What was once a relatively small community has grown into an organisation of several thousand engineers working mainly in the Internet sector. While in its early years the community met in university rooms, today the largest existing conference hotels have to be booked several years in advance if there is to be enough space for the almost one hundred working groups to meet.11 Companies such as Cisco, IBM, Microsoft and Sun, whose future products are directly dependent on Internet Standards, send up to 40 of their staff to the meetings of the IETF.12

The continuing expansion of the Internet community is an expression of the growing regard in which Net technology and indirectly also its standards committees are held. Companies have discovered the IETF as a "vehicle" (Day) for coordination in a new market: "It used to be that the phone companies went to ITU and the computer companies to ISO. Now the computer companies go to the IETF and IEEE. (...) Someday the same fate will befall the IETF when they are viewed as a bunch of old farts." (Day, IPv6-haters, 2. 3. 96, 0540).

From the point of view of the companies, the Internet community is attractive not least because the IETF's own institutional weight has little influence on standards development. Despite its size and economic significance, the Internet community is trying hard to hold onto its informal organisation and its practical overall objective of producing efficient technology.

However, increasing problems of scale are also emerging at the organisational level. One indication is the growing size of the working groups, where the proportion of active participants decreases as the number of passive observers increases. Popular working groups now have to hold their meetings in the large ballrooms of American hotels. Under such circumstances contributions to discussions become more like formal presentations and it is hardly possible to establish a serious working atmosphere.

As in the area of standards development, there is also a pragmatic solution to this expansion problem. The former working conferences of the IETF are undergoing a change of function. The large working groups especially are increasingly becoming venues for presentations which serve to inform a wide audience of silent observers about the current state of affairs and unresolved problems. The actual development work is shifting from the working groups to so-called "design teams" (see Bradner 1998). These are small, non-public working groups of around three people which are either coopted by the "chair" of a working group or which constitute themselves. The formation of design teams can be understood as an attempt to replicate the working conditions of the early days - by excluding the public - when all the participants could fit around one table. Typically design teams come up with first drafts (in the form of Internet Drafts), which are then discussed in the working groups. The chairman of the IPng working group comments, somewhat laconically:

"...almost *all* protocols considered by the IETF are the work of a small number of people and, where that number is greater than 1, the result of closed design meetings/ mailing lists/phone calls/whatever. IETF working groups seem to have two roles: - adding value by identifying bugs or issues that the designers overlooked and would wish to know about - subtracting value by forcing compromise and featuritis into a design against the best judgement of the designers." (Deering, IPv6-haters, 3. 2. 97, 0783)
Even if the community appears to be making its rise to the status of an influential standardisation body only hesitantly and reluctantly, there are increasing signs of a growing formalisation of its decision-making structures. Customs and rules ("process issues", see RFC 1396) formerly passed on by word of mouth are now more frequently subject to the process of standardisation that was hitherto reserved for technical conventions: a working group is founded, a mailing list is set up, a chair and a document editor are nominated, and a charter, including "milestones", is formulated (RFC 2026; Bradner 1998). The tendency to codify traditional forms of organisation in the IETF has been noticeably strengthened by the conflicts about the next generation of IP. The Internet's problem of scaling in fact turned into a structural crisis for its community.
3 Governing technologies: The old and the new generation
 
3.1 The Interim Solution: CIDR
 
Sometimes in life "merely surviving" is a major accomplishment. As superchicken said: "You knew the job was dangerous when you took it. Aaaack!"
(Fleischman, big I, 16.5.94, 0810)

At the end of 1991 a group was formed within the IETF which took the name of ROAD (ROuting and ADdressing) and whose job was to examine the possible solutions to both problems. In contrast with customary procedures, this working group did not meet in public. Six months later the group presented its recommendations along with a timetable for action (for details see RFC 1380).

The extension of the addressing capacity as part of a new Internet Protocol was suggested as a long-term measure. The ROAD group felt that several working groups should be founded in order to explore the various ideas circulating in the community about the question of "bigger Internet addresses". The acute need for some sort of action was to be satisfied by an interim solution until the new protocol was ready. Somewhat less drastic than interfering with the architecture of the Net, this interim solution had to fulfil the task of slowing down the speed of both the consumption of addresses and the growth of routing tables. The solution was available a year later and is still in use: Classless Inter-Domain Routing, CIDR for short.

As the name suggests, CIDR is an addressing system which gives precedence to the localisation function of addresses. In order to simplify the computation of routes and to permanently uncouple the increase in the number of routes from that of the networks on the Internet, a hierarchy, labelled as "provider-based", was inserted into the address space: all numerical Net addresses distributed after 1994 begin with a so-called prefix - similar to a dialling code - which identifies the Internet Service Provider. This prefix - which the provider shares with all its customers - enables the addresses at the farthest ends of the Net to be grouped under the hierarchical (provider) level immediately above them (RFC 1519).
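The aggregation effect behind this "dialling code" logic can be illustrated with a small sketch (our example; the provider block and customer prefixes are invented): routers elsewhere on the Net only need to know the provider's aggregate prefix, not the individual customer networks contained in it.

    import ipaddress

    # Hypothetical provider block and customer networks carved out of it
    provider_block = ipaddress.ip_network("198.51.100.0/22")
    customers = [ipaddress.ip_network(n) for n in
                 ("198.51.100.0/24", "198.51.101.0/24", "198.51.102.0/25")]

    # Every customer prefix lies inside the provider's block ...
    assert all(c.subnet_of(provider_block) for c in customers)

    # ... so the rest of the Net needs a single routing-table entry (the /22),
    # whereas a flat address space would require one entry per customer network.
    print("aggregate route:", provider_block)
    print("without CIDR   :", len(customers), "separate routes")

The same mechanism is what ties a customer's addresses to its provider: leaving the aggregate block means renumbering, the dependency discussed below.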

An interim solution was also found for the bottleneck in addressing. Thanks to CIDR, the address blocks can be adjusted more flexibly to suit the size of the networks concerned.13 This job has been entrusted to the providers - which has led to considerable tension between customers and suppliers, or the trustees of the precious address space.

CIDR has prevented the Internet from collapsing, but the price has been new ownership and power relations on the Net. Internet addresses issued according to the allocation principles of CIDR do not pass into the possession of the sites, but in fact belong to the providers, who had previously only passed them on.

In order that a lasting aggregation effect can be achieved in the area of addressing, Net addresses must be returned to the provider and replaced by new ones as soon as topological changes occur. According to the logic of "provider-based" addressing, however, topological changes always arise when a network (or indeed its provider) changes provider. The more computers a site involves, the higher the costs brought about by changes in the topological framework.

Looked at from the perspective of just and good architecture, CIDR is regarded as a mortal sin. The dictate of "renumbering" networks shifts the costs of topological change in the Internet unilaterally to the lower levels of the Net and thus not only hinders competition between the providers but also institutionalises dependencies between the hierarchical levels of the Net (see RFC 1518; RFC 2008; Ford, Rekhter & Braun 1993). On the other hand, CIDR is still helping to prolong the lifetime of IPv4 and is thus providing breathing space (how much is uncertain) in which to rethink the architecture question.

3.2 IP, the Next Generation: Good architecture between reform and revolution
 
"A beautiful idea has much greater chance of being a correct idea than an ugly one."
(Roger Penrose)

"Must IPng embody a new internet architecture or can it simply be a re-engineering of IPv4?" (Deering, big I, 15.5.94, 0789). This was the question oaf principle to which no clear answer had been found even in 1994, four years after the beginning of the debate about the future of IP.

Those proposing a fundamentally new protocol made reference to the invention of the principle of packet switching. They concluded from the history of the Internet that only another radical new step could continue its tradition with some prospect of success. The installed base of IPv4 was so large, they argued, that a voluntary "migration" to a new protocol could only be expected if it offered truly conspicuous improvements over its predecessor. This position could be summed up as: achieving global connectivity through a courageous break with the technical conventions prevailing on the Internet:

"When I was at MIT, I got in a protocol war (my first - wheeee :-) between CHAOS and TCP/IP. TCP/IP lost. Not for any *technical* reason, but because of installed base. With editors, if you have a 10% market share, you're OK. With communication protocols, you're dead with 10%: people want to use them to communicate, and who wants to use something that doesn´t let you talk to 90% of the world? From which I deduce that you need a massive capability edge to overcome installed base. (...) In a market competition, IPv4 has the massive edge in installed base. IPv6 is a dead duck." (Chiappa, IPv6-haters, 18.2.96, 0497)
The supporters of a less drastic solution based on IPv4 also used the history of the Internet to support their view. The long life of IPv4, despite the unforeseeable number of users and applications, had to be seen, they argued, as proof of its flexibility and robustness, i.e. of its architectural quality. Moreover, they said, precisely the large installed base was an argument for restricting changes to a minimum. And, as it was no longer possible to decree a change of generation of IP, there was every reason to support a version which maintained compatibility and could thus gradually establish itself alongside IPv4.
"I do not believe the Internet needs a new internet-layer architecture. The current architecture has proven remarkably flexible and robust, due to its minimal requirements from lower layers, its minimal service ´promises´ to upper layers, and its simple, stateless, datagram routing and forwarding model. As a result, it has survived the introduction of many new lower-layer technologies and many new applications." (Deering, big-I, 15.5.94, 0789)
IP is considered a simple artefact. Making hardly any demands on the underlying hardware and offering only minimal services to the layers above it, IP has so far proved compatible with technical innovations.14 This simplicity constitutes the specific quality of the architecture; it is the reason behind the universal applicability of IP. Was the technical simplicity of IP now an argument for making as little fuss as possible about its further development?

The opponents of a radical solution were themselves divided into two irreconcilably opposed camps. This dispute was about whether the next generation of the Internet Protocol had to be developed by the community itself or whether an existing standard could be used which had already been implemented and which had the necessary addressing capacity: CLNP (Connectionless Network Protocol), a derivative of IPv4. The supporters of CLNP argued that, in the interest of truly global connectivity, it would only be sensible to agree on a single worldwide standard for the "Internetwork layer", and to agree on one which was known to work.

"One of the reasons I was personally trying to get CLNP accepted - I wasn´t the only person to believe this - but I was convinced that IP itself is inherently simple. It has to be. What is really interesting and what we should have been spending a lot of time on is not mucking around with the header format for IP, but thinking about the way the routing protocols (...) needed to be made. (...)Those technical advances that had been made in the IETF would have been different in very minor ways, whether in CLNP or IPv6. (...) The Internet protocol itself is not the important thing, what is important is all the stuff that supports its operation." (L.C.)

"The main point of using CLNP is that it allows us to make use of the installed CLNP base, including existing specifications, implementations, deployment plans, deployed routers, etc. Probably the most important part of this is that CLNP is currently available in products. Using CLNP also means that we don´t have to sit down and argue about the design of a new protocol, and that we will be sticking with a basic communications paradigm that we understand and know works (given that CLNP design is based on IP)." (Ross Callon, big-I, 22.6.92, 203)

So what was the problem with CLNP, when it had even been developed in the spirit of IP's "paradigm" and organised the exchange of data in a relatively simple and undemanding manner? The resistance to CLNP was rooted less in technical than in political or even ideological convictions, as was recognised within the IETF in retrospect. Unfortunately, CLNP had been developed by the competition, by the International Organization for Standardization (ISO). And although the OSI network model (Open Systems Interconnection), of which CLNP is a component, had had the political support of the participating countries, it was the open Internet Standards that had become established and, judging by the installed base, had clearly won the battle against OSI. Was the Internet community now to fall back on the technology of the defeated opponent for pragmatic reasons? And what would be the consequences of using an official, protected standard for both the open architecture of the Internet and its regulation? Negative, even devastating, technical and economic effects were forecast:
"... the political repercussions could well end the Internet as we now know it. (...) It means that once the network can fully route CLNP traffic there will no longer be any reason for the TCP/IP stack to exist. (...) It is also a kiss-of-death for all the emerging internet technology companies. The OSI marketing droids will have a field-day with this. Gee, I hope this is all wrong." (O´Dell, big-I, 3.7.92, 0487) The key in the dispute about CLNP was the question of "change control". The IETF in principle claims sovereignty over all technologies out of which Internet standards are developed. This is seen to be the only means of ensuring that the development of the Internet does not become dependent on particular economic or political interests (see RFC 2026). The risk of endangering the technical autonomy of the Internet by choosing CLNP seemed unacceptable to the community.15

The conflicts between the traditionalists and the innovators extended to individual details of IPng. The format of numerical addresses was the subject of the most heated arguments. The convention concerning the notation of sender and addressee is one of the important ordering elements in the data space. It supplies the nodes with an identity, describes the type of topological relationship they have with each other, and it also intervenes in the relationship between providers and networks. Structural decisions regarding both the administration and the organisation of space - the effects of which can reach right into the economy of the Net, as a few examples will illustrate - converge in the question of the address format.

3.2.1 Address format I: Semantics
  Addresses have two functions. They name or identify an object and they indicate where it is to be found. What is characteristic about Internet addresses is that no distinction is drawn between the identification function and the localisation function. The name contains the information about the location and vice versa.16 However, the Internet address shares one other feature with the traditional telephone number system: it is not the individual apparatus that is addressed, but the connection between the apparatus and the network (Comer 1991; Kuri 1996). Both of these features have many consequences for the operation and the use of the Internet, for example because they limit the mobility and flexibility of Internet addresses.

One part of the community proposed a different semantics for Internet addresses as an alternative to CIDR's architecturally "ugly" address format. "Endpoint identifiers" like those used in CLNP were suggested for identifying the network nodes themselves in a way that would be independent of local position and thus transportable and protected against renumbering. The argument against this idea is that the uncoupling of the two functions of addresses opens up numerous possibilities for forging and hijacking data packets. The assumption that the name or address space fulfils both functions is one of the axioms of the philosophy of Internet architecture, and changes at this level are thus regarded as a huge risk to the stability of the Net (see Crawford et al. 1998).

3.2.2 Address format II: Address length
  Unlike the telephone number, the length of the IPv4 address is fixed.17 The advantages and disadvantages of fixed address lengths had already been assessed in different ways while IPv4 was being developed. The argument in favour of variable address lengths is that they would do away once and for all with the danger of addresses ever running out. An addressing system that expands from the bottom up would supply the objects at the periphery of the Net with short addresses and generate ever longer addresses as it advances up the hierarchy. If need be, the size of the address space could be extended by inserting new levels into the hierarchy (see Chiappa 1996). The argument against variable addresses, however, is that they require more computing power to calculate routes and thus further stretch the already tight resources of the routers.

Even the most sceptical members of the IETF agreed that IPv4's 32-bit address space would become too small in the long term, even given a more skilful allocation policy for the remaining addresses. How large a future address field should reasonably be was, however, hotly disputed among the supporters of a fixed address length. Again, the notion of good architecture allowed opposing positions to appear plausible. On the one hand, a good architecture demanded the smallest possible address field, at most double the size of that in IPv4, in order to minimise both the extra digital baggage that every data packet carries as a "header" and the consumption of bandwidth. For not only do long addresses discriminate against low-delay applications on less efficient connections, but they also make data traffic more expensive. Critics of a generously proportioned address space thus continue to warn that commercial users could reject a new Internet Protocol simply because of the expense (see Baker, IPng 12.11.97, 4787). On the other hand, global connectivity demands reserves of address capacity, because future services whose address needs cannot yet be predicted might otherwise be unintentionally excluded. The possibility of "auto-configuring" host addresses was another argument cited in favour of a large, 128-bit address space.18 The continuing lack of consensus concerning the optimal length of addresses was treated humorously in an "April Fool's RFC".19
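The auto-configuration argument can be made concrete with a sketch (again our illustration, not taken from the IPng documents): with 128 bits there is room to embed a globally unique hardware identifier in the lower half of the address, so that a host can form its own address from a router-announced prefix without manual assignment. The prefix and MAC address below are invented; the derivation follows the modified EUI-64 scheme used for Ethernet interfaces.

    import ipaddress

    def eui64_interface_id(mac: str) -> int:
        """Derive a 64-bit interface identifier from a 48-bit MAC address."""
        octets = [int(x, 16) for x in mac.split(":")]
        octets[0] ^= 0x02                               # flip the universal/local bit
        eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert FF FE in the middle
        return int.from_bytes(bytes(eui64), "big")

    def autoconfigured_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
        """Combine a 64-bit network prefix with the interface identifier."""
        network = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(int(network.network_address) | eui64_interface_id(mac))

    print(autoconfigured_address("2001:db8:1:2::/64", "00:1a:2b:3c:4d:5e"))
    # -> 2001:db8:1:2:21a:2bff:fe3c:4d5e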

3.2.3 Address format III: Principles of space administration
  CIDR introduced a first hierarchical level to the originally flat address space between the Internet Service Providers and the Internet Service Subscribers. The effectiveness of this measure is founded on the aforementioned expropriation of the subscribers. Sites no longer possess their addresses; they merely "borrow" them from their providers.

The alternative to this model - which, of course, has been discussed at great length - consists of a geographical addressing plan similar to what has hitherto been used in the telco world (see Andeen & King 1997). With "metro-based routing", aggregation effects are achieved by means of regional structuring of the address field. As the hierarchical order is organised around geographical places and not organisations, the provider can be changed without having to change the Net address.20

Under the geographical addressing model the providers are charged with the connectivity costs that CIDR imposes on the users or sites. In order to guarantee the flow of data within and between the regions, so-called information exchange points are required, whose accessibility must in turn be ensured by routers between them. Both information exchange points and routers would have the status of virtually public property for which, however, as Hoffman & Claffy (1997, p. 302) put it, there is as yet no "satisfactory commercial model" (also see Chinoy & Salo 1997). But no-one can compel providers to submit to a topological order whose commercial prospects are uncertain: "The Internet has no mechanism for enforcing topology control, and it's probably 'politically' infeasible to create such a network" (Chiappa, IPv6-haters, 16.1.96, 0206). The new addressing system's prospects of success thus depend on more than the technical quality of the standard. And, as long as no stable commercial model for controlling the flow of data on the Internet has established itself, the standardisation of the address format can at best ensure that its formulation does not exclude any future variant of topological order.

4 4 "The Internet way of doing things" - Net techniques of government
  The various proposals for the design of the address format represent the range of options that, from the point of view of the various factions in the community, are compatible with the traditional architectural principles of the Internet. The different ideas do not in fact seem any less plausible than the objections raised against them. The fact that relatively uncontroversial principles of network design are interpreted differently is all part of standards development. What is particular about the IETF is the way in which it copes with such apparently irresolvable problems:
"It´s a little hard to say why it works, but part of the thing that made the IETF work in the first place was that if people couldn´t agree on between two or three proposals, you´d just send them off in two or three working groups and let the folks do whatever they wanted to do, (...) and you´d just end up with alternatives (...). I mean, frankly, democracy does not necessarily produce the best results, if you have to vote up front as to what´s the best approach (...) A much better approach is to just allow the technical proposals to get done in detail and figure out which one works. You know, deploy them on at least experimental basis. People may find they may learn things; they may learn things that apply to the other proposals (...)." (R.C.)
The community believes that the value of technical ideas should not be decided by vote but by empirical proof of feasibility or, in the language of the engineers, by running code. Running code is software that proves itself to be functional in test runs. For technical drafts to be recognised as Internet Standards, several "genetically" independent implementations demonstrating the interoperability of programs are required (RFC 2026).

Running code represents a consensus-building procedure steeped in legend, and at the same time is what the IETF probably sees as the most important distinction between itself and other standardisation bodies:

"In the IETF world we produce running code that has documents that describe it. A lot of other standards organizations produce documents and that´s the end of it." (M.D.)

"Probably, the most fundamental difference is that in the ISO community, the highest goal is global consensus. (...) In the Internet Community, the highest goal was interoperability and getting something to work. Something that worked was its own advertisement." (L.C.)

As a "hard-nosed notion of correctness", running code symbolises the ideal of a purely technical debate, which results in sensible, robust and - above all - useful standards. This is the basis for its almost mythical status in the Internet community.

It is therefore no coincidence that the tradition of empirical testing is repeatedly presented as the better alternative to the "democratic approach" to standards development. Politics might rule decision-making in official standardisation bodies, but the "technical reality" of what is possible rules in the IETF (R.C.). The IETF's key slogan, coined in 1992 by one of the "fathers" of the Net, passionately pinpoints the difference between these two types of government:

"We reject presidents, kings and voting, we believe in rough consensus and running code." (Dave Clark)
Presidents, kings and elections represent types of government that are unpopular in the IETF because in political regimes political will wields more power than technical reason (see Hofmann 1998a). Rough consensus, on the other hand, is believed to be reasonably immune to the corruption associated with political power. "The Internet way of doing things" is thus taken as a kind of guarantee for the quality of standards development, and many IETF members believe that the decision about the next generation of IP should also be made in accordance with this set of rules.
"What I think the IAB should have done, was [to] follow the IETF tradition of allowing alternative proposals to go forward to the point where you could actually tell what proposals made sense technically and what didn´t, with the basis that technical reality should be the ruling factor and not what some group of people think." (R.C.)
4.1 "Fiddling while the Internet is drowning" - Goodbye rough consensus
 
"Well, do we love to throw rotten tomatoes and clink with old medals!"
(Antonov, IPv6 haters, 22.1.96, 0306)

Against the recommendations of the ROAD group, which had advocated a systematic analysis of the various approaches toward IPng, in the early summer of 1992 the IAB made its own decision - thus not only consciously overriding the differences of opinion in the Internet community, but also violating the community's rules on consensus. The justification given was the existential threat to the global connectivity project:
"The problems of too few IP addresses and too many Internet routes are real and immediate, and represent a clear and present danger to the future successful growth of the worldwide Internet. The IAB was therefore unable to agree with the IESG recommendation to pursue an additional six-month program of further analysis before deciding on a plan for dealing with the ROAD problems. (...)
However, we believe that the normal IETF process of 'let a thousand [proposals] bloom', in which the 'right choice' emerges gradually and naturally from a dialectic of deployment and experimentation, would in this case expose the community to too great a risk that the Internet will drown in its own explosive success before the process had run its course. The IAB does not take this step lightly, nor without regard for the Internet traditions that are unavoidably offended by it." (Chapin, big I, 1.7.92, 0450)
Rough consensus and running code, which especially in combination had represented the "road to truth", suddenly began to pose a risk to each other. According to the IAB, the "dialectic" development process, which used both experimentation and application, was no longer able to keep up with its own success. As another member of the IAB put it: while the community was good at implementing technical projects it believed in, it was not good at coping with situations requiring a decision, where a choice had to be made between a number of proposals. According to this member, none of the bodies in the IETF had experience of decision-making procedures of this kind: "There is simply no process in place for those necessary activities." (Braun, big I, 3.7.92, 0524)

The IAB had decided in favour of CLNP - not because it believed that this protocol was technically superior, but because it thought CLNP would provide a quick solution. The choice of CLNP led to a wave of protest of hitherto unknown proportions within the IETF. Not only the decision itself, but also the way in which it was reached and the institutional structures that had allowed such violations of the community's bottom-up tradition, came in for criticism (see, e.g., Rose, big-I, 7. 7. 92, 0631).

Against the background of the growing threat to the Internet's existence, the problem of defining a good architecture began to overlap with the problem of who was to define it. The architecture crisis became a crisis about the regulation of the Net: "One dimension was technical: What is the best course for evolving the IP protocol? (...) The other dimension was political: Who makes decisions within the Internet community? Who chooses who makes these decisions?" (RFC 1396). Regardless of whether this distinction between technology (the protocol) and politics (the process) is valid or not, the IETF found itself unavoidably confronted - albeit against its will - with the problem of power in the development of the Internet.

The protest being voiced on the relevant mailing lists against the IAB announcement proved to be so immense that the vote for CLNP was first weakened and shortly afterwards - at the next IETF meeting - fully retracted. The result of the failed attempt to quickly define a model was that the hunt was reopened and even extended beyond the limits of the IETF. A "Call for White Papers" was issued with the aim of achieving "the broadest possible understanding of the requirements for a data networking protocol with the broadest possible application." (RFC 1550)

At the same time a new working group was founded, which was unique in the history of the Internet community: POISED (The Process for Organization of Internet Standards Working Group; see RFC 1396). POISED was given the task of investigating the decision-making structures and rules of recruitment operating in the IETF. The POISED mailing list became a place where the community reflected loquaciously on its own constitution: "An estimated 20 MB of messages filled up disks all over the world between August and mid-November 1992" (RFC 1396). POISED resulted in a redistribution and formalisation of decision-making powers in the IETF. The influence of the Internet Architecture Board (IAB) was reduced in favour of a body said to be in closer contact with technical developments, the Internet Engineering Steering Group (IESG), which is made up of the IETF´s "Area Directors" (on the organisational structure of the IETF see RFC 2028). At the same time, formal nomination procedures were introduced for appointments to the community´s "official positions" (see RFC 1603 and RFC 2027).21

Kobe, the Japanese city where the IAB announced its vote in favour of CLNP, has become the catchword for a traumatic moment in the history of the IETF. It was not so much the scale of the battle that was traumatic as its assorted causes. The history of the development of IPng - starting with the diagnosis of imminent collapse and ending with the decision in favour of IPv6 - is today associated with politicking. Politicking describes a type of behaviour that infiltrates the hegemonic technical debate with non-technical considerations. In a sense the technical debate becomes the servant of other interests - with all the imaginable negative consequences for the quality of the products under discussion.22

The belief in a "technically excellent solution" (Huitema 1995), which is practically the natural result of the rough consensus approach, might be the reason why so much attention had been given until then to the rules regarding standards development and so little to those regarding selection procedures. The ultimate decision-making process regarding IPng consisted of two steps. First the technical criteria for selection were announced publicly, and the individual proposals were then evaluated on this basis. (RFC 1752)

4.2 IPv6: A new model for the Internet
In July 1994 the directors of the IPng Area recommended SIPP (Simple Internet Protocol Plus), which, alongside CLNP, was one of three drafts that had still been in the running at the conclusion of the selection process. The IESG followed this recommendation and called SIPP "IPv6". SIPP was the pragmatic solution. It was the draft most similar to IPv4 and was thus defined as an "engineering task". The authors of SIPP intended to maintain almost all the characteristic features of its predecessor, including the "datagram" as the regular unit of data, and the "best effort service", which delegates control over data flow to the next-highest layer in the network. The address field in the original version was only slightly different to the CIDR format (RFC 1884), and the fixed address length was also maintained.

The most important modification of the address field was its expansion to 128 bits. This seemed acceptable because the size of the data-packet header would only be doubled, although the address itself would become four times longer. In addition, a further reduction of the address field by means of a compression procedure was planned. Auto-configuration, security and authentication measures, a new address type called "anycast address" and the possibility of adding additional headers ("extension headers") to the data packet´s header were among the other rather moderate innovations included in SIPP (details in RFC 1752).
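A back-of-the-envelope check of this arithmetic can be sketched in a few lines of Python (the field widths follow the IPv6 base header as described by Deering & Hinden 1998 and the minimum IPv4 header without options; the variable names are ours and purely illustrative):

      # Compare the fixed IPv6 header with the minimum IPv4 header.
      # All field widths are given in bits.
      IPV4_HEADER_BITS = 20 * 8            # minimum IPv4 header: 20 bytes

      IPV6_HEADER_FIELDS = {
          "Version": 4,
          "Traffic Class": 8,
          "Flow Label": 20,
          "Payload Length": 16,
          "Next Header": 8,
          "Hop Limit": 8,
          "Source Address": 128,
          "Destination Address": 128,
      }

      ipv6_header_bits = sum(IPV6_HEADER_FIELDS.values())   # 320 bits = 40 bytes
      print(ipv6_header_bits // 8)                          # 40
      print(ipv6_header_bits / IPV4_HEADER_BITS)            # 2.0: the header doubles
      print(128 / 32)                                       # 4.0: the address quadruples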

The subsequent years, which were really supposed to be used for specifying and testing IPv6, brought a number of unforeseen and fundamental changes to the draft. The first four Internet Drafts specifying IPv6 thus only achieved the level of "Draft Standard" in August 1998 - a year later than planned (see RFC 2300; RFC 2400). Impetus for the reformulation of IPv6 came, for example, from the Routing Area, which deals with procedures for reserving bandwidth and for the transmission of time-critical services. For data packets to enjoy such types of special treatment in the future, their headers had to be equipped with special fields to label data types ("Traffic Class") and data flows ("Flow Label") (see Deering & Hinden 1998). These features make it possible to formulate service demands that the routers would process, assuming they are some day able to do so.

In 1997 - the year in which the core specifications of IPv6 were to become the "Draft Standard" - significant changes to the format of the address field were again demanded. This was probably the last attempt (for the moment) to separate the identification and localisation functions of the IP address, in order, on the one hand, to remove the undesirable side-effects of the current addressing model and, on the other, to introduce more hierarchies in the address space (see O´Dell 1997; Crawford et al. 1998; Hofmann 1998b).

Although the initiative was unsuccessful, it did lead to a new partitioning of the address field. This is worth looking at in detail because it clearly illustrates both the fact that, and the means by which, the design of IP intervenes in the administration of Net space.

The address field is divided into individual segments, which represent the hierarchical levels planned for in the address space. Its pyramid-like structure corresponds in turn to the desired ranking order between service providers and users on the Net (see RFC 2374):

The aggregatable global unicast address format is as follows:

      | 3|  13 | 8 |   24   |   16   |          64 bits               |
      +--+-----+---+--------+--------+--------------------------------+
      |FP| TLA |RES|  NLA   |  SLA   |         Interface ID           |
      |  | ID  |   |  ID    |  ID    |                                |
      +--+-----+---+--------+--------+--------------------------------+

      <--Public Topology--->   Site
                             <-------->
                              Topology
                                        <------Interface Identifier----->

      Where

      FP           Format Prefix (001)
      TLA ID       Top-Level Aggregation Identifier
      RES          Reserved for future use
      NLA ID       Next-Level Aggregation Identifier
      SLA ID       Site-Level Aggregation Identifier
      INTERFACE ID Interface Identifier

The first field (3 bits) reveals the type of address in question, and functions as a type of reading guide for all following bits.23 The remaining 125 bits are divided into a public and a private topology - a delimitation of the data space that does not exist in IPv4. According to the philosophy of IPv6, the "transit domain" of the data, i.e. those levels of the network hierarchy used only to transmit data flows, is public. All locations without transit traffic, by contrast, are private.

The private sphere, in other words the interior of a site, is codified in two fields in the IP address: the identifier or name, which defines the interface between the computer and the network, and the site-level aggregator, a field used to subdivide large local networks.
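How these field widths carve up the 128 bits can be illustrated with a small Python sketch (the widths are those of the RFC 2374 diagram above; the function and its names are ours, not part of any standard):

      # Split a 128-bit address, given as an integer, into the fields of
      # the aggregatable global unicast format (widths as in RFC 2374).
      FIELDS = [
          ("FP", 3),             # Format Prefix (001)
          ("TLA ID", 13),        # Top-Level Aggregation Identifier
          ("RES", 8),            # Reserved for future use
          ("NLA ID", 24),        # Next-Level Aggregation Identifier
          ("SLA ID", 16),        # Site-Level Aggregation Identifier
          ("Interface ID", 64),
      ]

      def split_address(address: int) -> dict:
          """Read the address from the most significant bit downwards."""
          result, shift = {}, 128
          for name, width in FIELDS:
              shift -= width
              result[name] = (address >> shift) & ((1 << width) - 1)
          return result

      # The 13 bits of the TLA ID admit at most 2**13 = 8192 values -
      # the limit on top-level providers discussed below.
      print(2 ** 13)             # 8192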

The fields used to describe the public topology also structure the relationship between the providers. Two levels of the address field are allocated to this task, the lower of which is equipped with no less than 24 bits in order to represent the hierarchy between the providers in a way "that maps well to the current ISP industry, in which smaller ISPs subscribe to higher level ISPs" (King 1998). While the lower-level field in the public topology more or less corresponds to the current constellation among the providers, the field at the top of the address pyramid is consciously aimed at imposing a specific order on Internet providers. The 13-bit address space limits the number of organisations that can inhabit the "default-free region" to a maximum of 8,192 (see Löffler, Sand & Wessendorf 1998). From the point of view of data-flow control, this means that the routers at the highest level of the hierarchy in the communication space need only compute connections between a maximum 8,192 objects in the network. The size of the top-level address field is thus a means to control the extent of topological complexity in the Internet. These 13 bits do, however, represent "significant constraints on operations, business models and address space allocation policies" from the point of view of service providers (Karrenberg, IPng, 4996, 1. 12. 97), and this was one of the more important reasons why the IETF sought information on whether such serious interventions into the economy of the Internet were even legal:

"I asked the lawyers that do work for the IESG what restrictions in flexibility we (the IETF) have in the area of defining rules and technology that restricts ISP practices. I was told that the only time we can be restrictive is when there is no other technically reasonable option ..." (Bradner, IPng, 4997, 1. 12. 97)
And thus one may ask whether the 13-bit option really is the only way - technically speaking - to organise the highest level of the address space:
"if anyone expects a magic formula which says ´13´ and not something else, you won't get it. (...) would 14 work? - certainly. Like everything else, 13 is an engineering compromise - chosen to balance one set of considerations against a bunch of others, and after ruminating over it a long time, the consensus was 13 was the best choice." (O'Dell, IPng 5000, 2. 12. 97)
Design decisions such as the partitioning of the address field exemplify the fact that spatial politics are pursued in the interpretation and allocation of individual bits in the headers of data packets. In the name of good architecture, which is supposed to guarantee global connectivity without scaling problems, the performance capacity of the routers, for example, is being offset against the business interests of the providers. The result is a draft version of the future order of the Internet. Whether IPv6 is ever implemented also depends on those who will have to bow to this order - the providers and users.

The development of IPv6 seems to be nearly finished. Not only have the most important components of IPv6 reached the penultimate step in the standardisation process, but the large number of implementations and the increasing testing going on in the "6bone" testbed are also signs that completion is imminent.24 At the same time, speculation is growing about the intentions of the manufacturers, because only standards for which products are developed and which are supported by existing products can compete with the current installation basis.

5 "IPv4ever"?
 
"ipv6: the service provider general has determined that ipv6 cannot do any harm to asthmatics, pregnant women and men, dislexics, or ipv4."
(Crowcroft, IPv6 haters, 12.1.96)

In the summer of 1998, shortly before the completion of such extensive development work, nobody in the IETF is able to say whether IPv6 will actually ever replace IPv4 as the new Internet Standard. It even looks like the number of those who are distancing themselves from IPv6 and forecasting its downfall is growing.

Desirable as a larger address space might be, its realisation has become less urgent, and IPv6 thus has no guarantee of succeeding. Moreover, because the prospects of IPv6 becoming the standard are generally thought to be doubtful, almost all new products for the Internetwork layers are being developed not only for IPv6 but also for IPv4. Encryption and authentication tools, for example, which are used by IPv6 to increase the security of data traffic, have long also been available for IPv4. Yet another reason for providers and network operators, at least, to migrate to the next generation of IP has thus become obsolete.

Not only has the once existential urgency of IPv6 diminished over the years, but its reputation in the IETF is also disintegrating. Speaking for many, the chairman of the IETF recently expressed his opinion on the subject of IPv6 on the IPng mailing list:

"But what we thought at one time we might be forced to deploy as early as 1993 looks like (according to Frank Solenski´s Address Usage statistics the last time I saw them) it might not be needed for as much as another decade. And in that time - well, maybe your crystal ball is clearer than mine, but my crystal ball doesn´t preclude somebody having a better idea than IP6 as presently formulated. If we do indeed have a better idea in the meantime, I said, we would deploy that better idea." (Baker , IPng, 12. 11. 97, 4787; vgl. auch die Reaktionen in IPng 4775 und insb. IPng 4788).
An addressing system that, according to popular opinion, was heading straight for collapse at the beginning of the 1990s has now suddenly been granted another ten years of life. And the Draft Standard, which may not have achieved rough consensus in the IETF, but did - according to majority opinion - come closest to the internally defined technical criteria, has now become a second-best solution.

If we remember that the Internet community took a number of years to reach agreement on a draft for IPng and a working group then laboured for another approximately four years on the development and implementation of IPv6, we need to ask what brought about this change of opinion in the IETF and so thoroughly undermined the status of the former number-one development project. Several explanations circulate in the IETF, and they sound like a distant echo of the different positions formed at the beginning of the 1990s, when the problems of scaling first emerged, on what constitutes good network architecture.

5.1 Network Address Translators: Self-help on the Net   The first warnings were made in the early 1990s: if the IETF did not quickly deal with the shortage of addresses on the Internet, decentralised solutions would be invented, which would make the establishment of a new, globally standardised address space more difficult, if not altogether impossible. Shortly afterwards the decentralised solution was on the market: so-called NAT boxes, or Network Address Translators, which are able to turn a single Internet address into a new address space of unlimited size. Behind a globally legible IP address an address space of any size desired is created, which contains addresses that are not globally legible and are thus only valid inside the site in question. NAT boxes allow large organisations, in particular, to meet their needs for additional addressing capacity, which either cannot be acquired from providers at all or only at a high cost. Because the new local address spaces are usually aggregated under only a few IP addresses, they not only remove some of the burden on the network's limited addressing capacity, but from the point of view of routing they also appear to be "topologically correct":
"NAT leads to IPv4ever (...) because I see NAT deployed and working on extending the lifetime of IPv4 addresses both in terms of sheer quantity and in terms of making the allocation [of addresses] more hierarchical and better aligned to topology." (Doran, diff-serv-arch, 10. 3. 98, 00338)
Looked at from the perspective of global connectivity, the decentralised "NAT box" solution is considered a "kludge" - an ugly temporary solution that makes the Internet less transparent and more difficult to administer the more widespread it becomes. NAT boxes not only promote the possibly irreversible collapse of a global address space, but they also - along with popular "firewalls" - violate some of the architectural axioms of the Internet.25
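The mechanism at issue can be reduced to a few lines of code (a deliberately crude model with one public address and a port-mapping table; all names and addresses are invented for illustration, and real NAT implementations keep far more state):

      # Toy model of a NAT box: many hosts with only locally valid
      # addresses share a single globally routable IPv4 address.
      PUBLIC_ADDRESS = "192.0.2.1"      # the site's one global address

      class Nat:
          def __init__(self):
              self.table = {}           # (private address, port) -> public port
              self.next_port = 40000

          def outbound(self, private_address, private_port):
              """Rewrite the sender of an outgoing packet."""
              key = (private_address, private_port)
              if key not in self.table:
                  self.table[key] = self.next_port
                  self.next_port += 1
              return PUBLIC_ADDRESS, self.table[key]

          def inbound(self, public_port):
              """Map a reply arriving at the public address back inside."""
              for (address, port), mapped in self.table.items():
                  if mapped == public_port:
                      return address, port
              return None               # no table entry: the packet is dropped

      nat = Nat()
      print(nat.outbound("10.0.0.17", 3456))    # ('192.0.2.1', 40000)
      print(nat.inbound(40000))                 # ('10.0.0.17', 3456)

The sketch also makes visible why NAT sits uneasily with the architectural axioms mentioned in note 25: beyond the box, the original sender address is simply no longer visible.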

The spread of NAT boxes, firewalls and similar solutions to Internet problems illustrates an insight accepted in the community long ago, namely that the IETF is gradually losing its influence over the evolution of the Net. Its authority is dwindling at the same pace that the strength of unity - which proceeds from the vision of the Internet as a collectively defined good - is decreasing. Both the project of a good network architecture geared towards the public good and the power to regulate the Net, based in both normative and practical terms on the architecture, are facing competition from individual interests that are gaining in weight both at the development and the user level. IPv6, which was understood as an Internet-wide solution to problems of scaling, is now only one of many development options.

While it was possible to prescribe the introduction of IPv4, the predecessor to IPv6 and still the Internet Standard, in "big bang" form (see Helmers, Hoffmann and Hofmann 1997), the future of IPv6 depends on whether its implementation appeals to enough actors on the Net: "The deployment of IPv6 will totally depend on the users. We can design the most beautiful protocol on the world, if the users don´t buy the software or see the merit of switching over to IPv6, technically speaking, it won´t work. I think everybody in the IETF realizes that, that we are working on a protocol that might not be recognized by the market." (E.H.) The longer users have to wait for IPv6 to mature to application stage, the more its prospects for success seem to be diminishing. Another problem is the pace at which the Internet is developing.

5.2 Data flows   The design of IPv6 is based on the assumption that data packets will continue to be the norm and that other modes of transfer will remain exceptions that require identification. The classical packet transmission procedure treats all data in the same way. All types of application have to share the available transmission capacity with all other data packets. Even related and homogeneous quantities of data with the same source and target addresses, for example, are sent through the Net as individual packets, each carrying full sender and receiver addresses. What can be a strength in the event of malfunctioning connections (lost data can easily be sent again) can become a weakness when large volumes of data need to be sent as quickly as possible.

A significant increase in transmission speed can be achieved when data packets with the same destination are grouped together in data flows. Data flows are a hybrid phenomenon. They consist of data packets that have the same receiver address or at least share part of the same route (e.g. the connection between the exchange points for transatlantic traffic). Data packets in a flow are labelled with a "tag" that tells the router or switch that they belong to a "flow", so that all the packets can be waved through without each destination address having to be read individually. The effect of this "tag switching" or "IP switching" is similar to that achieved by reserved lines on the telephone network and, accordingly, high economic expectations are associated with this transmission procedure (see Sietmann 1998).
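The principle behind such tags can be sketched as follows (a toy model in Python; the names and the two-path structure are our own simplification, not the design of any particular product):

      # Toy model of flow-based forwarding: once a flow label has been
      # bound to an outgoing line, further packets carrying that label
      # are waved through without another look at the destination.
      flow_table = {}                   # flow label -> outgoing line

      def forward(packet, routing_lookup):
          label = packet.get("flow_label")
          if label is not None and label in flow_table:
              return flow_table[label]                     # fast path: switch on the tag
          line = routing_lookup(packet["destination"])     # slow path: per-packet lookup
          if label is not None:
              flow_table[label] = line                     # remember the flow
          return line

      # A stand-in for a full routing table.
      lookup = lambda destination: "transatlantic-line" if destination.startswith("198.") else "local-line"
      print(forward({"destination": "198.51.100.7", "flow_label": 42}, lookup))   # slow path
      print(forward({"destination": "198.51.100.8", "flow_label": 42}, lookup))   # fast path, tag only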

Routing procedures of this kind have been on offer as commercial Internet products for some time, and the competing manufacturers are trying to agree on a common standard in an IETF working group (see Gillhuber 1997; Schmidt 1997; Callon et al. 1998). If data flows ever achieve the status of a generic transmission dimension on the Net and even replace data packets as the paradigm, then IPv6, which treats data flows only as an exceptional case, would have been leap-frogged in the development of the routing procedure of the future.

Looking back, it becomes clear that the real problem facing IPv6 is not so much the predicted addressing crisis as the high speed at which the environment in which IP is embedded is changing. "Yesterday´s technology for tomorrow´s customers" was an acerbic comment made on the IPv6-haters list (Yakov, 4.12.97). Such mockers remain unmoved by the quandary that while tomorrow´s technology is difficult to define today and even more difficult to standardise, today´s technology may no longer be able to meet the demands of global connectivity by the time the community has agreed on a standard via rough consensus and running code. The expansion of the open community and the growing competition, capital power and impatience of the interests represented on the Net are accompanied by a formalisation and slowing down of the standardisation process in all areas, while product development within the enterprises involved continues. Thus, the ambivalent status of IPv6 within the community can also be traced back to the growing gap between the pace of standard-setting in the IETF and the speed of product development on the market.

5.3 "IP Written in Stone?"  
"(...) but this is what the IETF seems to excel at, taking your pet dream, your clear and unfettered vision, and seeing it get all cloudy as everyone pees into it (...) "
(Knowles, IPv6 haters, 10.12.97)

IPv6´s critics believe the uncertain future of the protocol proves that Simple Internet Protocol (SIPP) was the wrong model from the start, that the authors of SIPP were the wrong people and that the IETF is now the wrong place for new network technology to be developed.

In the interests of a stable Internet, a majority of the IETF had favoured the traditional alternative of simply updating IPv4 and had entrusted its realisation to a designer of a "conservative design nature", who explicitly shies away from the risk of "promising new architectural features" (Deering, big I, 15.5.94, 0789). One of the probably unforeseen consequences of this move was that a problem that had previously been at the centre of attention now lost its attraction. SIPP was considered so "simple" and "well understood" that those interested in breaking new technical ground turned their attention to other challenges:

"The people you would immediately recognize as the best and the bravest minds of the Internet were not going to IPng and IPv6 meetings. (...) They were going to working groups that were looking at the kinds of technical issues that were really unsolved ..." (L.C.)

"IPv6 is the ´b-ark´ of the IETF." (N.C.)

The early days of the Internet had an aura of singularity and willingness to cut loose - not only from the "design hegemony" of the telephone network, which had been unchallenged for decades, but also from internal architectural shrines. Recently describing this attitude on behalf of the Internet community, IAB chairman Carpenter wrote that no procedure was a protected species: "Principles that seem sacred today will be deprecated tomorrow. The principle of constant change is perhaps the only principle of the Internet that should survive indefinitely." (RFC 1958) Should IPv6 be taken as evidence that the Internet community is losing its detachment from its achievements and thus one of the preconditions for proper observation of the "principle of constant change"? Are technically outdated conventions being imposed in the interests of global connectivity? In short, is the Internet facing a problem of scale not only as regards its governing technologies but also with respect to its techniques of government?

Critical voices in the Internet community claim that the IETF has got religion. TCP/IP, seen originally by its developers as no more than a gradually developing experiment that was never more than "a few steps ahead of our success disaster", is said to have turned into a set of holy axioms that has become so deeply embedded in the foundations of the Net and in its community´s thinking that it is now almost untouchable and insurmountable:

"Oh no (...) 'I feel a RANT COMING ON!!!'

(begin RANT)
along the way of becoming the sanctified protocols, we managed to breathe way too much of our own gas. one of the great strengths of IP is that you can glue together all kinds of feckless, misbegotten networks fragments and make something that has operational connectivity. somehow we decided, though, that networks *should* be built like that (...) that pervasive religious position is just plain wrong. we have transformed the liability of having only a hammer into the design principle that everything except nails is the work of the devil. that's just patent, braindead bullshit!!" (O'Dell, IPv6 haters, 12.7.98)

Pragmatic interim solutions (O´Dell´s RANT takes the Internet´s hop-by-hop routing technique as an example), which only came about because better procedures were not available at the time, turned into technical virtues, with nobody remembering the circumstances of their emergence. This cheerful sanctification of what in fact were improvisations is traced back to the growth and cultural transformation of the IETF. The approach to existing technical and social conventions is said to show the difference between the academic world, from which the first and founding generation of the Internet community was recruited, and the industrial sector, which dominates the next generation in the IETF. On this view, the large majority of the current community sees the Internet as a fact, as a manifest technical construction, which should be further developed, but not fundamentally questioned: "Now you have people coming along who weren´t part of that thought process, who think these things are essentially God-given and are the scientific laws for building networks when in fact they are not." (J.D.)

The ideal of an Internet in continual metamorphosis seems to be increasingly giving way to the reality of a permanent construction site - one that can constantly be patched up and repaired, it is true, but no part of which can be torn down. And IPv6 is seen by critics only as a flagrant example of the IETF´s inability to revolutionise its own thinking. The pioneering spirit of the past is being supplanted by an "unimaginative" politicised bureaucracy, which is making the Internet community more similar to its opponents, ISO and ITU.

And thus, while the IETF - according to the righteous and incorruptible - is increasingly distancing itself from the ideal of a radically open technical debate, the voices asking whether the IETF is still the right place for developing a new Internet protocol or even good network architecture are getting louder:

"I don't think that the IETF is really the right place to start this effort. (...) Look back at how IP happened; a small number of people felt their way towards what was later recognized as fatesharing, as a fundemental advance over the original basic packet model of the ARPANet and Baran's work. (...) I just think the organizational dynamics [of working groups in the IETF, d. A.] are alle wrong. You'll never get the kind of large group you'd wind up with in the IETF which is a basically unimaginative organization incapable of taking a real step forward, viz IPv6) to make that big step forward, and come to agreement on it." (Chiappa, IPv6 haters, 12.1.96, 0140)
"I am actually surprised that the Big Players haven't formed a closed group to propose new standards. you can keep the useful side of the IETF process (open, free standards) while punting the downside (meetings open to clueless people who have every right to speak.) (Knowles, Ipv6 haters, 13.7.98)
"And how do you know they haven´t?" was the prompt reply. (O´Dell, IPv6-haters, 13. 7. 98) The irony of the story is probably the fact that neither the small, hand-picked design team nor the "Big Players" can guarantee the development of a transmission protocol which is as successful as IPv4. On the one hand, the consensus among IPv6´s opponents probably does not go beyond rejecting it. In the IETF, at least, which represents a broad spectrum of the communications technology sector, there is still no agreement on the features that would characterise the good architecture of the future. On the other hand, even a small group of network architects free of bureaucratic constraints would be unable to avoid thinking about the conceptual consequences of IPv4´s broad installation basis. How can hundreds of thousands of Internet sites be persuaded to switch to a new protocol at the same instant just because the global connectivity of the Internet would otherwise founder on Babylonian multilingualism?

If we bear in mind the size, the complexity and the level of decentralisation the Net has attained, we have to ask whether the uncertain future of IPv6 really is the result of bad decisions or whether it simply illustrates the fact that the Internet is becoming increasingly difficult to regulate. Seen from this perspective, the growing complacency of the IETF would only reflect the unwieldiness of the vehicle it is trying to steer, and the criticism of the body´s procedures and decisions would be no more than pining for the old days, when the Internet still only had marginal significance and the community´s playing fields seemed to have no boundaries at all.

Like the Internet itself, the IETF has also entered the normal world, which entails faction fighting between reformers, conservatives and revolutionaries, and perhaps also the creation of legends and the exaltation of gods and sacred cows. Is IP on the path to beatification?

The current assumption in the IETF is that the transition phase from IPv4 to IPv6 will last at least ten years, if it has a limit at all (RFC 1933; King et al. 1998). The diffusion of a new transmission standard would probably not significantly affect IPv4´s durability, because its open architecture could even cope with competing mother tongues, provided compatibility is maintained. Thus, to sum up: "The Internet will ´get big´ whether or not IPv6 makes orbit." (O´Dell, IPng 6000, 7. 7. 1998)

6 "So long, and thanks for all the packets"   The development of IPv6 was used as a case study for investigating the political dimension of the Net. Thus, it is time to ask which insights IPv6 has revealed about the regulation of the Net and how the perspective of this study differs from other case studies.

A similar, at first glance much more spectacular, example could have been provided by the controversy that started a year ago about the future administration of names and addresses on the Internet (see Recke 1997). The conflict over the long-overdue reorganisation of the name spaces had become so intense towards the end of 1997 that the regulation of the Net finally became an issue at the level of international politics. Several international organisations expressed an interest in supervising the Internet (see Cook Report 1998).

The struggle for supremacy over the Internet´s name and address spaces demonstrates the growing sources of friction between the decentrally organised Internet, which had largely been left to its own devices until recently, and its relatively highly regulated social environment. The Internet has become the scene of a battle for political, economic and moral spheres of influence, which only marginally concerns the issue at the centre of the conflict over the future of the Internet protocol: agreement on the features of a good network architecture.

Unlike the question of name and address administration, the development of IPv6 rarely ever crossed the threshold into the general public spotlight. The conflict about IPv6 remained an issue that interested only the Internet community and the associated enterprises. Spared dramatic interventions from the outside world, the conceptual and practical reform of the architecture was thus carried out largely in accordance with the traditional rituals and principles of the community (which were being reformed at the same time).

Looked at from the research perspective, IPv6 gave us an opportunity to observe social and technical order on the Internet from "within", a perspective that would be impossible in this form if, as in the case of address and name space administration, these orders were to lose influence or be removed entirely.

The case study about IPv6 gave us access to a melange of architectural, organisational and symbolic principles that are specific to the Internet world. Clothing and communication rituals, and technical and social values and conventions together provided a framework of reference from which we could discern political contours while reconstructing the history of IPv6. The fundamental architectural axioms and ideas which together comprise the Internet constitution we call governing technologies. From these we distinguish the techniques of government, i.e. the rules of procedure that are expected to make the wholly impossible permanently possible: the continuous further development and transformation of the Internet without changing its identity.

The reform of a governing technology on the Net may generate dynamics that are more similar in some ways to the usual patterns of conflict and consensus-building found in social reform projects than one might have expected. While global connectivity can be formulated as a collective project, it is a project that can never be fully realised collectively. One reason is that the implementation of a good architecture - no matter how incorruptible "rough consensus and running code" may be - comes up against ambiguous problems that can be interpreted in different ways and thus require solutions that involve compromise.

Even if the Internet's techniques of government are also seen consciously as an alternative to democratic decision-making procedures, the content of the problems that are solved one way (technically) or the other (politically) is similar.

The development of IPv6 not only illustrates the different approaches to the common goal, but also how little influence the IETF now wields over the development of the Net and likewise over the future of good architecture. Whether the Internet community´s loss of authority will cause problems for the further development of the Net remains to be seen. Experience with decentralised regulation of global affairs is so rare that little can be said about its success.

As long as IPv4, or a related protocol like IPv6, remains the governing technology on the Internet, IP land will probably continue to resist conventional forms of top-down regulation. Even a brand new network protocol developed outside the IETF will at best be able to coexist with IP, but not replace it.

1 The term "Internet community", "community" and "IETF" are used synonymously here - in accordance with the usage of the IETF. This is not, however, an entirely unproblematic decision because there is no longer just one but rather many communities on the Net. The fact that the IETF still sees itself as the community has to do with its own tbut it also reveals a certain claim to authority over ther groupings on the Net.

2 On the rationality of these principles see Saltzer, Reed & Clark 1984; Cerf & Kahn 1974.

3 http://www.ietf.org/html.charters/ipngwg-charter.html

4 One obvious explanation for this lack is the composition of the IETF: the majority of the engineers see their contribution to the Internet as the solution of clearly delineated, "well-defined" problems. Not only the "Charter" of the working groups, but also the discussion culture on the mailing lists, reflects this attitude. Without disciplined restriction of the discussion to the agenda perhaps very few of the IETF´s working groups would ever achieve their goals. Both of these prerequisites are missing when it comes to the task of developing a "business plan" for the Internet.

5 The founders of the list issue the following invitation: "Come see Noel and Sean display the emotionally immature, abusive, mean-spirited, vindictive and dark sides of their shabby smug childish vain characters as they viciously, cynically, cruelly and spitefully indulge their emotion-laden bias, belligerence, prejudice and uncontrollable hostility in an unparalled display of unprofessional and unfair attacks on the technical quality of a protocol design which their jealously and resentment does not allow them to admire..."

6 The mailing lists cited here are "big-Internet" (abbreviated as big-I), now inactive, on which the future of network and routing architecture was discussed across the IETF until the decision process had gone so far that the precision work on IPv6 was handed over to the working group´s list IPng. Other mailing lists cited are the official IPng list, the IPv6-haters list, "diff-serv" (differentiated services), POISED (The Process for Organization of Internet Standards) and the IETF list.

7 The routers used at the beginning of the 1990s could compute approximately 16,000 routes. Forecasts were made in 1992 that this number would be reached between twelve and eighteen months later (RFC 1380). Routers at the level of the big backbones currently compute up to 40,000 routes (see King et al. 1998).

8 This is a considerably poorer rate of use than that attained by the telephone networks; see, for example, Huitema 1996.

9 To clarify, imagine if the address space of the international telephone network were not structured either by region or by service provider. The telephone exchanges would then need data banks the size of all the valid telephone numbers in the world in order to set up connections between one number and another.

10 Given the complexity of the Net, the community is agreed that the IETF can look only after parts of its technical development. However, probably nobody is able to say which parts these are, or why some standardisation projects "migrate" into the IETF and others out of it (see Eastlake 3rd 1.6.98, POISED).

11 One of the rituals preceding the IETF conferences is the flood of complaints that the conference hotel is already booked out. Many blame this on the rampant "goers" who, unlike the "doers", travel from conference to conference instead of writing code.

12 This figure is based on the list of participants at the 39th conference of the IETF, which took place in Munich in the summer of 1997. We are obliged to Volker Leib for making this list available to us in tabular form.

13 CIDR (pronounced as in apple cider) is actually the second repartitioning of IPv4´s address space. Originally the 32-bit long address space consisted of two parts. The first 8 bits identified the network, while the following 24 bits addressed the individual hosts. So-called "subnetting" was developed in 1981 after it became apparent that the Internet would involve more than one hundred networks in the foreseeable future. Three different size classes (recognisable from the number of bits which are available to identify the hosts) multiplied the addressing capacity of IPv4 (see RFC 790). As most organisations wanted a medium-sized address block for their sites, the first bottleneck emerged in the area of so-called "Class B addresses". A study carried out at the beginning of the 1990s showed that 50% of the Class B addresses, which are capable of identifying no less than 65,536 host computers, actually identified fewer than 50 (see Ford, Rekhter & Braun 1993).
CIDR, the "classless" addressing procedure, also called "supernetting" (RFC 1338), does away with the rigid bit boundaries between the size classes and leaves it to the providers to adjust the remaining addressing capacity of the Internet more precisely to the needs of the networks.

14 For more on the layer model, which begins with the bare cable, ends with the concrete application and in between piles up the various operative functions of the Net, see, for example, Comer 1991 and Tanenbaum 1996.

15 How realistic this threat actually was is still a matter of dispute. Some members of the IETF believe that the supporters of CLNP had planned to hand the Internet over to the ISO. Others are convinced that the ISO had given the IETF the "change control" over CLNP.

16 In the telephone system there is a clearer distinction - the first digits of a telephone number indicate the topological position and the last the identity of a user. Renumbering in this system thus always only affects parts of the whole number (on the "behaviour" of IPv4 addresses see RFC 2101).

17 The address always remains the same length, regardless of the topological distance between the communicating units. If the telephone network were based on fixed address lengths, the international dialing code would have to be used even for local calls.

18 Auto-configuration means that the computers receive their numerical identity automatically. Thus, as soon as a computer is linked to the Net it "propagates" a "link-dependent interface identifier", typically the number of its network card, in order to then be assigned a prefix of the site by the nearest server or router. These two components would then "automatically" give rise to a complete Internet address. The format of "autoconfiguration-friendly" addresses would be necessarily larger than 64 bits because the number of the network card alone comes to at least 48 bits (see Thomson & Narten 1998).
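Schematically, the concatenation described here looks like this (a sketch with invented example values; real autoconfiguration derives the 64-bit interface identifier from the card number in a more involved way):

      # Stateless autoconfiguration, schematically: a 64-bit site prefix
      # announced by the nearest router is combined with a link-dependent
      # interface identifier (here a 48-bit card number padded to 64 bits).
      site_prefix = 0x3FFE0B000C180001               # example 64-bit prefix
      interface_id = 0x000000A0244213FB              # example 48-bit card number, zero-padded

      address = (site_prefix << 64) | interface_id   # the complete 128-bit address
      print(hex(address))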

19 "Declaring that the address is the message, the IPng WG has selected a packet length format which includes 1696 bytes of address space. (...) Observing that it's not what you know but who you know, the IPng focussed on choosing an addressing scheme that makes it possible to talk to everyone while dispensing with the irrelevant overhead of actually having to say anything." (RFC 1776)

20 According to the logic of geographical addressing, it is changes of place which cause topological changes that affect the address. The networks of multinational organisations would require a different address for each location - just as is the case with telephone numbers.

21 POISED had barely completed its task when a new working group called POISSON was founded. This group is responsible, for example, for elaborating a "Code of Conduct" for members of the IETF (see O´Dell 1998) and for revising the catalogue of rules and principles for working groups previously drawn up by POISED. "The Internet Way of Doing Things" has presumably now become a permanent topic on the IETF agenda.

22 Technically ugly or bad solutions, whether developed inside or outside the community, are usually blamed on politicking. The most infamous example is OSI (Piscitello & Chapin 1993; Rose 1990; Hafner & Lyon 1996), and cartoons about the political background to its design are even sported on T-shirts (see Salus 1995, 122; Ferguson, IPv6 haters, 16.2.96, 047).

23 IPv6 has three types of address: unicast addresses identify a single network node (host computer or router); multicast addresses identify a set of nodes; and the new anycast addresses also identify a set of nodes, but only contact the nearest node (see RFC 2373).

24 Details of IPv6 implementations can be found at http://playground.sun.com/pub/ipng/html/ipng-main.html; information about the 6bone at http://www-6bone.lbl.gov/6bone/.

25 These include, for example, the "end-to-end" principle, which leaves the supervision of data flow to the applications at the periphery of the network (RFC 1958; Hain 1998). The Net´s transport and control mechanisms are also based on the assumption of unchanging receiver and sender addresses.

 


Copyright 1994-1998 Projektgruppe "Kulturraum Internet". c/o Wissenschaftszentrum Berlin für Sozialforschung (WZB)
Reichpietschufer 50, 10785 Berlin. Telefon: (030) 254 91 - 207; Fax: (030) 254 91 - 209;
http://duplox.wz-berlin.de.