
(How) Can Software Agents Become Good Net Citizens?
  CMC Magazine, Vol. 3, No. 2, Feb. 97

Sabine Helmers, Ute Hoffmann, Jillian-Beth Stamos-Kaschke

  jump-off point
1  (How) Can Software Agents Become Good Net Citizens?
1.1  Software Agents
1.2  News Agents
1.3  Intelligent Agents
1.4  Bots
1.5  Chatterbots
1.6  Spiders
2  Bots and Humans Together
3  Keeping Spiders in Check


  (Copy of the original text, source references omitted)
1 (How) Can Software Agents Become Good Net Citizens?
  In virtual worlds, human-generated information transactions are not unique. Also residing in these worlds are actors known as software agents, such as bots, news agents, and spiders. These actors can communicate with both humans and machines even though they are not made of flesh and bone. These programs-turned-actors carry out certain jobs, such as searching the WWW, arranging meetings, or compiling music recommendations, more or less anonymously, and act on behalf of a single user or an organization. Because of their communicative role, and because they reside on the Internet, they too can be considered "Netizens" for the following reasons:

  • A perfunctory aspect of "Netizenship" is that communication occurs online. Because online communication relies on the transaction of data, it is possible for non-human agents to send and receive messages just as humans do. Human and non-human users share the same habitat. As the famous New Yorker cartoon states, "On the Internet, nobody knows you're a dog." Such anonymity is possible because transactions in virtual space are transactions of data that leave behind the physical, "real" bodies of people as they type on their keyboards. As the information transfers to the Internet, it enters a virtual space where physical appearance and corporeal facts such as age or skin color become irrelevant. Hence, you are what you type.

  • Software agents, like human users, are both autonomous and social net agents. They are autonomous in that they often make decisions without prompting from their creators. They are social net agents in that they can react to their environment and communicate with both humans and non-humans.

  • Non-human agents can facilitate and enhance the growth of the online community or "electronic commons," which according to Hauben is what distinguishes a true Netizen from a non-Netizen. Consider those bots who serve the MUD community: they can help human users find their way around the MUD, since they carry the topology of the MUD universe--which can easily grow to encompass several hundred rooms--in their "heads".

Because of the actors' electronic nature, different issues arise in adhering to Netizen-like behavior in accordance with the traditional rules of Netiquette of the "Ye Olde Internet Inn." Nonetheless, there are several basic rules that apply:

  1. Never disturb the flow of information!

  2. Help yourself--this is an expression of decentralized organization.

  3. Every user has the right to say anything and to ignore anything.

These principles provide a valuable framework for searching for ways of dealing with some of the more mundane issues and concerns about agent behavior. But even so, these behavioral rules must interface with technical issues that may hinder these actors' ability to perform their duties effectively. Consider that the more tasks are delegated to agents, the greater the risk of useful helpers becoming unpleasant pests as they try to perform them. These supposedly helpful servants can severely impair general network traffic when too much data is exchanged, as is the case with programs that reproduce in a virus-like manner. Thus, agents can inadvertently become a nuisance to other users.

The shared life and work of human and non-human actors in the world of networks has opened up a new policy domain in cyberspace. It remains to be seen whether or not the Internet community will be able to devise self-generated, sustainable solutions to the problems stemming from misbehaving agents and from agent misuse and abuse.

1.1 Software Agents
  Providing a universally accepted definition for software agents--meant as a collective term encompassing "bots", "spiders" etc.--is nearly impossible. No terminological consensus has emerged so far.
"Carl Hewitt recently remarked that the question what is an agent? is embarrassing for the agent-based computing community in just the same way that the question what is intelligence? is embarrassing for the mainstream AI community. The problem is that although the term is widely used by many people working in closely related areas, it defies attempts to produce a single universally accepted definition. This need not necessarily be a problem: after all, if many people are successfully developing interesting and useful applications, then it hardly matters that they do not agree on potentially trivial terminological details. However, there is also the danger that unless the issue is discussed, `agent' might become a `noise' term, subject to both abuse and misuse, to the potential confusion of the research community." (Michael Wooldridge & Nick Jennings, "Intelligent Agents: Theory and Practice," in: Knowledge Engineering Review, Volume 10 No 2, June 1995.
Software agents, including news agents and intelligent agents, are just a few of the actors to be found on the WWW. News agents are essentially filtering software: they pick out desired information from the plethora of what is offered. Filtering software can just as well be used to block out information, which is how the SafeSurf Web explorer works. Only pages which have been registered in a database with SafeSurf are displayed on the screen. This reader allows parents to selectively control and restrict their children's Internet access to contents which have been indexed as suitable for children.
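The two modes of filtering described above--picking out wanted items by keyword, or displaying only approved pages--can be sketched in a few lines. The articles and the approved list below are invented for illustration; this is not SafeSurf's actual mechanism:

```python
# Two modes of filtering software: selecting items that match desired
# keywords, and displaying only pages on an approved list (a whitelist).
# All data below is invented for illustration.

articles = [
    "new mud opens its gates",
    "stock report for tuesday",
    "chatterbot wins turing contest",
]

def keyword_filter(items, keywords):
    """Keep only items mentioning at least one desired keyword."""
    return [item for item in items if any(k in item for k in keywords)]

approved = {"/kids/games.html", "/kids/stories.html"}

def whitelist_filter(page):
    """Display a page only if it appears on the approved list."""
    return page in approved

print(keyword_filter(articles, ["mud", "bot"]))
print(whitelist_filter("/kids/games.html"))    # True: page is registered
print(whitelist_filter("/adult/casino.html"))  # False: page is not registered
```

The same comparison logic serves both purposes; only the default differs: a keyword filter admits nothing unless it matches, a whitelist admits nothing unless it is registered.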
1.2 News Agents
  News agents such as the Stanford Information Filtering Tool can perform other filtering activities. This software agent simultaneously sorts through thousands of newsgroups for articles matching the keywords of a large number of users and emails each client the results. The multi-user agent Firefly (formerly known as HOMR or Ringo), developed at the Massachusetts Institute of Technology's Media Lab, compiles personalized music recommendations from information given by other users with similar tastes in music.
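The recommendation technique behind multi-user agents like Firefly is commonly described as collaborative filtering: users who rated items similarly in the past are trusted to predict each other's tastes. A minimal sketch of that idea follows, with invented names and ratings and no claim to match Firefly's actual algorithm:

```python
# A rough sketch of user-based collaborative filtering: recommend to a
# user the items liked by other users with similar rating histories.
# All names and ratings (on a 0..1 scale) are invented for illustration.

def similarity(a, b):
    """Agreement between two users over the items both have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    # Simple measure: 1 minus the mean absolute rating difference.
    return 1.0 - sum(abs(a[i] - b[i]) for i in shared) / len(shared)

def recommend(user, everyone):
    """Score items the user has not rated, weighted by like-minded users."""
    me = everyone[user]
    scores, weights = {}, {}
    for other, ratings in everyone.items():
        if other == user:
            continue
        w = similarity(me, ratings)
        for item, rating in ratings.items():
            if item in me:
                continue
            scores[item] = scores.get(item, 0.0) + w * rating
            weights[item] = weights.get(item, 0.0) + w
    return sorted(((scores[i] / weights[i], i) for i in scores if weights[i] > 0),
                  reverse=True)

ratings = {
    "ann":  {"jazz": 0.9, "punk": 0.1, "folk": 0.8},
    "ben":  {"jazz": 0.8, "punk": 0.2},
    "cleo": {"jazz": 0.1, "punk": 0.9, "folk": 0.2},
}
print(recommend("ben", ratings))  # "folk" ranks high: ben's tastes resemble ann's
```

The key design choice is that no item content is analyzed at all; the agent relies purely on the overlap between users' opinions, which is what allowed systems like Ringo to recommend music without understanding it.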
1.3 Intelligent Agents
  So-called "intelligent agents", by-products of AI research, are being developed as personal assistants operating in the same computer environment as the user. In contrast to other programs, which invariably react to the same keywords or actions in the same manner, these agents are meant to act in a way geared towards one particular person and, in a manner of speaking, independently and intelligently. Their developers envision them "learning" to sort through electronic mail according to the respective user's preferences, to point out news and offers that could interest the user, or to come up with suggestions for shopping. A number of experimental prototypes emerged from intelligent agent research projects, such as BargainFinder, a consumer agent that assists with comparison shopping.

1.4 Bots
  Enter the world of bots. From software agents to WWW navigational aids, the world of bots is an area of the virtual space that deserves attention. Alongside the larger question of this new policy domain within cyberspace lies the question of how to define these actors, which are so prevalent on the Web.

1.5 Chatterbots
  Among the many varieties of bots living in the virtual world, chatterbots present themselves as people, although they are actually characters controlled by a program. Many such program-controlled agents pretending to be human users can be found on Internet Relay Chat (IRC) channels, the text-based party lines of the Internet, as it were. IRC bots are seldom talkative, though, and usually perform administrative tasks. They are mostly harmless, but have a bad reputation nevertheless. IRC bots consist of a script which defines what event the program waits for after logging in, in order to then react in a certain manner. IRC bots are often trained to report when a certain user logs in. They are sometimes used to take on an unwanted person's identity in order to prevent them from logging on to IRC.
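The wait-for-event, then-react structure of such a script can be sketched as follows. The events here are fed in by hand for illustration; a real IRC bot would read them from a server socket:

```python
# A minimal sketch of the event/reaction structure of an IRC bot: the
# script defines which event it waits for (here, a watched user joining
# the channel) and how it reacts. Events are simulated for illustration.

class WatchBot:
    def __init__(self, watched_nicks):
        self.watched = set(watched_nicks)
        self.log = []

    def handle(self, event, nick):
        """React to one incoming event; return the bot's response, if any."""
        if event == "JOIN" and nick in self.watched:
            msg = f"Notice: {nick} has joined the channel."
            self.log.append(msg)
            return msg
        return None  # ignore everything the script was not told to watch

bot = WatchBot(["alice"])
bot.handle("JOIN", "bob")            # ignored: bob is not on the watch list
reply = bot.handle("JOIN", "alice")  # triggers the bot's report
print(reply)
```

Everything the article attributes to IRC bots, including abusive uses like squatting on a nickname, follows this same pattern; only the watched event and the scripted reaction change.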
1.6 Spiders

In contrast to stationary robots like chatterbots, which work as information machines or watchdogs, there are also agents whose activities are not restricted to their place of origin. Spiders are an example of this type of agent. Examples include WWW navigational aids ("Webcrawler", "Lycos", "Infoseek" or "Alta Vista") that enable keyword-supported searches on certain subjects from among the 60+ million WWW pages. These services consist of a database assembled by Web robots which comb the WWW document by document, link by link.
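The crawl-and-index loop such robots perform can be sketched as follows. A tiny in-memory "web" stands in for real HTTP fetches, so the example runs without a network connection:

```python
# A sketch of how a Web robot builds a keyword index: start from one page,
# follow every link it finds, and record which words occur on which pages.
# The pages and links below are invented for illustration.

web = {
    "/home": {"links": ["/bots", "/muds"], "text": "agents and bots"},
    "/bots": {"links": ["/home"],          "text": "chatterbots on irc"},
    "/muds": {"links": [],                 "text": "bots in muds"},
}

def crawl(start):
    index = {}                 # word -> set of pages containing it
    seen, frontier = set(), [start]
    while frontier:
        page = frontier.pop()
        if page in seen or page not in web:
            continue           # skip pages already visited or missing
        seen.add(page)
        for word in web[page]["text"].split():
            index.setdefault(word, set()).add(page)
        frontier.extend(web[page]["links"])  # follow links, document by document
    return index

index = crawl("/home")
print(sorted(index["bots"]))   # pages on which the keyword "bots" occurs
```

The `seen` set is what keeps the robot from looping forever through mutually linked pages; a keyword search service then only has to look the query up in the finished index rather than touch the network at all.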

2 Bots and Humans Together

Human Netizens now share the Internet with more and more non-human Net inhabitants "living" among them. For human and non-human Net inhabitants to coexist and interact peacefully, the Net relies on voluntary self-regulation and appeals to people's sense of responsibility. So far, this seems to be sufficient. The social and technical development of the Internet rests on trust in the assumption that the Internet successfully regulates itself. Only very few questions are dealt with under the observation and coordination of central regulatory boards, such as the Internet Assigned Numbers Authority (IANA), which is in charge of Internet address and domain name administration. Above such basic-level network administration and technology development (e.g., the Internet Protocol for data transmission), there are no commonly accepted authorities established on the Internet.

But nowadays, more and more people on the Net are organizing and engaging in various kinds of "neighborhood watch" for goals or problems they care about: protecting the traditional freedom of the Net (e.g., the Electronic Frontier Foundation); or, on the contrary, trying to constrain the traditional free ways of interacting for the sake of non-traditional Internet user groups, such as children, and to make the Internet a safe place for all (e.g., Cyber Angels); or bringing proper law and order to the Net (e.g., the Internet Law and Policy Forum). Considering the negative potential that the growing number of agents of various kinds might harbor--apart from their supposed helpfulness--it might be worth keeping an eye on the non-human actors on the Net as well. If the rules of Netiquette apply to all Netizens and there is a form of social control over the behavior of human actors on the Net, then why should non-human actors not be included?

If non-human actors turn out to be problematic in any way (e.g., by consuming excessive bandwidth or annoying other Netizens with email flooding), then perhaps even organized forms of a specific neighborhood watch over fellow non-human Netizens' behavior will become necessary. A drawback of this form of "agent watch" is that one can only respond to negative outcomes once they occur, and thus react to software agents which have already been developed and brought online. We have to trust in the developers' sense of responsibility. And once the software has been developed, a chance to verify what it will actually do exists only if the source code is publicly available. The source code of commercial software (which is used more frequently than public domain software, for which source code is publicly available) is kept as secret as the recipes of soft drink makers, because if you tell how to make a product, it can be copied by the consumers themselves or by other firms.

There is no way to prevent negative effects beforehand, during the development process. All that Netizens can do is see what happens once they have bought the software and used it online. And if a supposedly helpful servant of yours secretly acts as someone else's servant and gives out secrets--your online behavior, for example--because the second, secret master wants your data for marketing research or for political control, as in totalitarian countries, you will have to wait until you first notice that your harmless servant is serving another individual's or group's purposes. If problems like this occur on a larger scale, it would be sensible to appoint an independent control board for the examination of software agents, much as consumer boards do with other products or the Food and Drug Administration does with formulas and medical devices, and examine products which are already available or about to enter the market. The test results could then be distributed throughout the Internet. Agent look-up might be facilitated by agent registries. Registries could be organized either by domain, i.e., according to the location of the host agencies in which the bots reside, or according to the four general bot habitats: the Web, Usenet, IRC, and MUDs.

3 Keeping Spiders in Check

A growing number of spiders conduct exhaustive searches, traversing the Web's hypertext structure and retrieving information from the remote sites they visit, thus slowing response time for everyone. Besides placing demands on the network, a Web bot also places extra demand on servers. The strain placed on the network and hosts is increased even further by a badly behaved Web bot. In November 1995, for example, a search robot hit Thomas Boutell's World Birthday Web and began clicking repeatedly on the hundreds of pages located at the site, pages that were in turn linked to thousands more off-site pages. Boutell's provider eventually had to lock out an entire network of Internet addresses in Norway, from where the robot had been launched. Martijn Koster's Guidelines for Robot Writers, written in 1993, were meant as a means of addressing the increased load placed on Web servers by spiders. They call for programmers to design their robots to act more responsibly. The robot exclusion standard, which originated from the WWW Robots Mailing List, offers Web site administrators and content providers a facility to limit what a robot does. People who don't approve of robots can prevent their sites from being visited.

The "Guidelines" and the exclusion standard are supposed to have been the first wide-ranging attempt at web robot ethics, reflecting a consensus among spider authors that was operating reasonably well for quite some time. For the exclusion standard to work, a robot must be programmed to look for a "robot.txt" file that would tell it exactly what it could and could not do on a particular site. The standard, however, is not enforced. According to a recent study few site masters take advantage of this mechanism. Such findings highlight that a robot more often than not must use its own judgement to make decisions regarding his behavior.

Following this line of thinking, researchers at the University of Washington conceived of an Internet Softbot. The Softbot, a research prototype that had a number of fielded descendants but was never fielded itself, was intended to use, on a person's behalf, the same software tools and utilities available to human computer users. Provided with only an incomplete model of its environment, the Softbot's behavior would be guided by a collection of "softbotic" laws alluding to the "Laws of Robotics" envisioned by science fiction writer Isaac Asimov in the 1950s.

The Guidelines for Robot Writers, the exclusion standard, and the "softbotic" laws all represent approaches that aim to construct robots with a built-in capacity to follow the rules of Netiquette. Sometimes this approach does not work. Usenet cancelbots are a case in point. Netiquette usually allows cancels only by the original author of a message. In cases of excessive multiple-posting, when hundreds of newsgroups are flooded with the same message, many Usenet users feel that spam cancelling is justified. Thus, cancelbots may appear a useful instrument in administering Usenet. However, several drawbacks have surfaced. Cancelbots "gone mad" have frequently become a source of spam themselves instead of helping to get rid of it. Spam cancelers and spammers alike have started to (mis)use cancelbots as weapons against each other. Nor are cancelbots immune to abuse, as can be seen in attempts to censor speech, e.g., by the Church of Scientology. Thus, cancelbots, which were meant to be a technical solution to net abuse, threaten to become a plague.
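One measure Usenet spam cancelers adopted for deciding when multiple-posting becomes "excessive" is the Breidbart Index: the sum, over all copies of a message, of the square root of the number of newsgroups each copy was crossposted to. A sketch follows; the message data is invented, and the threshold of 20 is the commonly cited convention, used here for illustration:

```python
# A sketch of how a spam canceler might decide that a message is
# excessively multi-posted, modeled on the Breidbart Index.
# The message data and the exact threshold are illustrative.
import math

def breidbart_index(copies):
    """copies: one crosspost count per posted copy of the same message."""
    return sum(math.sqrt(n) for n in copies)

def is_cancelable(copies, threshold=20):
    """Flag a message whose Breidbart Index reaches the cancel threshold."""
    return breidbart_index(copies) >= threshold

print(is_cancelable([1] * 25))  # 25 separate single-group posts: index 25.0
print(is_cancelable([4]))       # one post crossposted to 4 groups: index 2.0
```

The square root is the interesting design choice: one message crossposted to many groups costs far less than the same message posted separately to each group, which matches Usenet's view that crossposting is the lesser offence.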


(1) Cheong, F. (1995). Internet agents: spiders, wanderers, brokers, and bots. In Bots and Other Internet Beasties. Ed. Joseph Williams. Indianapolis: Sams.net.

(2) Exchanges between human MUD users and chatter bots are reported by Turkle, S. (1996). Life on the screen. Identity in the age of the Internet. London: Weidenfeld & Nicholson, pp. 77-101. Turkle argues that by talking to bots our language seduces us to accept, indeed to exaggerate, the "naturalness" of software agents.

(3) Leonard, A. (1996, April). Bots are hot! Wired. http://www.hotwired.com/wired/4.04/features/netbots.html

(4) Dreilinger, D. (1996). Internet search engines, spiders and meta-search-engines. In Bots and Other Internet Beasties. Ed. Joseph Williams. Indianapolis: Sams.net, pp. 237-256.

(5) "These Laws are. 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or second Law." (Asimov, I. (1968). The rest of robots. London: Granada, p. 69.)



Projektgruppe "Kulturraum Internet". c/o Wissenschaftszentrum Berlin für Sozialforschung (WZB)
Reichpietschufer 50, 10785 Berlin. Telefon: (030) 254 91 - 207; Fax: (030) 254 91 - 209;
; http://duplox.wzb.eu.