
Leonard Foner: What’s an Agent, Anyway?

[Readings] (03.19.09, 4:07 pm)

The opening poses agents as a trend in software design intended to lend computer applications a human face. This appeared early in Macintosh file-finding programs, as well as in a variety of other places. Foner’s goal is to outline what “true” agents are: to identify how they are made up and what they have the potential to do.

The agent Foner spends most of his time examining is Julia, which was developed by Michael Loren (“Fuzzy”) Mauldin. Julia is a MUD chatterbot: she acts like any other player of a MUD and can talk and interact with other players.

The interesting thing with Julia is that because MUDs are textual online worlds, players interact with each other entirely through textual commands. Julia is essentially in the same position as any other player, with a character through which she interacts in the world. As a result, other players interact with Julia just as though she were another player. The interface of the MUD creates an ambiguity between players and agents, because there is no clear or immediate way of distinguishing one from the other.
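Because the MUD exposes nothing but a line-based text protocol, a bot can occupy the same position as a human player simply by speaking that protocol over a socket. A minimal sketch of the idea in Python (the host, port, login command, and trigger phrase are all invented for illustration, not Julia’s actual setup):

    import socket

    HOST, PORT = "mud.example.org", 4000  # hypothetical MUD server

    def run_bot():
        # Connect exactly as a human player's client would.
        with socket.create_connection((HOST, PORT)) as sock:
            f = sock.makefile("rw", encoding="utf-8", newline="\n")
            f.write("connect julia password\n")  # hypothetical login command
            f.flush()
            for line in f:
                # The bot sees the same text stream any player sees,
                # and answers with the same commands a player would type.
                if line.strip().endswith('says, "hello"'):
                    f.write("say Hello there!\n")
                    f.flush()

    if __name__ == "__main__":
        run_bot()

From the server’s point of view there is nothing to distinguish this connection from a human one, which is exactly the ambiguity Foner describes.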

Julia is often used by other players as a helpful guide in the online world, like a knowledgeable friend who is always around and can always spare the time to give help, directions, or advice. Much of Julia’s function is giving help to others, and she can answer many questions about the world that are not easily answered any other way.

At this level, it is possible to compare Julia to a documentation system, except that instead of requiring the player to wade through extensive documentation, Julia gives immediate, direct responses. The MUD environment is also constantly changing, so an agent who can explore the space like any other player is a potentially very useful resource. Her encyclopedic knowledge, though, is part of what betrays the robotic nature beneath her ordinarily human behaviors.
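Answering such questions does not require deep understanding; simple pattern matching against a knowledge base gets surprisingly far. A minimal sketch of that style of responder (the patterns and answers are invented for illustration; Mauldin’s actual implementation was far more extensive):

    import re

    # Hypothetical knowledge base: regex pattern -> canned answer.
    FAQ = [
        (re.compile(r"where.*library", re.I), "The library is north of the town square."),
        (re.compile(r"how do i (quit|leave)", re.I), "Type QUIT to leave."),
    ]

    def answer(question: str) -> str:
        """Return the first matching canned answer, or a graceful fallback."""
        for pattern, reply in FAQ:
            if pattern.search(question):
                return reply
        # Degrade gracefully: admit ignorance rather than fail loudly.
        return "I don't know, sorry."

    print(answer("Where is the library?"))         # The library is north of the town square.
    print(answer("What is the meaning of life?"))  # I don't know, sorry.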

As for her human-like qualities, Julia exhibits several subtle and very particular variations in her behavior in the world. For instance, she waits a second or two before moving from one room to another, she varies her responses, and she usually gives somewhat coy responses when asked whether she is really human or really female. Foner explains that these human-like characteristics make her functional behavior even more useful for other players. Foner gives an anecdote in which another player, herself a programmer who knew that Julia is a bot, remarked on how she missed Julia when she was offline. This is an interesting emotional reaction to something that the speaker knew was artificial. However, it is hardly unusual. People anthropomorphize things that are not human, often things that are not even animate, and develop attachments to them.
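Two of these touches, the short delay and the varied phrasing, are easy to approximate in code. A sketch (the timing values and replies are my own assumptions, not taken from Julia’s source):

    import random
    import time

    MOVE_DELAY = (1.0, 2.0)  # assumed: pause 1-2 seconds before moving

    COY_REPLIES = [  # varied, slightly evasive answers to "are you human?"
        "I am as human as you are.",
        "Why do you ask?",
        "Does it matter?",
    ]

    def move_to(room: str) -> None:
        # Pause briefly so the move does not look machine-instantaneous.
        time.sleep(random.uniform(*MOVE_DELAY))
        print(f"Julia walks into the {room}.")

    def are_you_human() -> str:
        # Pick one of several phrasings so the answer varies between askings.
        return random.choice(COY_REPLIES)

    move_to("kitchen")
    print(are_you_human())

Small randomized delays and canned variation are cheap, but as Foner notes, they do a disproportionate amount of the work in making the agent read as a fellow player.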

I would argue that an interesting reason for some of this success is the way in which she is adapted to and situated in the MUD. She is not embodied, but then again, no in-MUD character is really embodied. She has the same sort of virtual body that everyone else does.

Toward the end of the paper, Foner gives a series of bullets that characterize agents. These definitions describe agents as primarily functional things that exist within some computational format and carry out tasks on behalf of users. It is important to note that this is relevant from the perspective of developing agents as software tools, but for the purposes of simulations and of games (such as The Sims), Foner’s definition breaks down somewhat. The characteristics are as follows:

  • Autonomy: The agent performs actions on its own, and takes initiative.
  • Personalizability: The agent learns about different users and adapts itself to them.
  • Discourse: The agent talks back; communication is two-way, unlike with other tools.
  • Risk and trust: The user can delegate a task to the agent and trust that the agent will do the task correctly. The risk of the agent failing must be balanced with the user’s trust.
  • Domain: The degree of specialization and risk is dependent on the domain being explored.
  • Graceful degradation: Failure at a task or improper understanding of the task should degrade gracefully, revealing that there might be a problem without, for instance, producing an error message.
  • Cooperation: The relationship between the user and agent is cooperative, and conversational, as opposed to commanding.
  • Anthropomorphism: Foner argues that agents are often anthropomorphized, but that they do not need to be. Similarly, many anthropomorphized programs (such as Eliza) are not agents.
  • Expectations: The agent should be able to respond reasonably to most users’ expectations.
Reading Info:
Author/Editor: Foner, Lenny
Title: What's an Agent, Anyway?
Type: article
Tags: digital media, art, social simulation, specials
Lookup: Google Scholar
