icosilune

Archive: March 19th, 2009

Why Goffman?

[Research] (03.19.09, 6:51 pm)

A question that has recently been asked of me is “Why Goffman?” in terms of adaptation of Pride and Prejudice. Why apply a Canadian sociologist who was writing from the 1950s to the 80s is relevant for application to the game adaptations of the novels of Jane Austen. This was a question that I have thought a great deal about, but I did not have an answer at the time, and was interested in devising some sort of thorough response.

AI has borrowed from psychology from a long time to develop models of characters, particularly in games and other interactive experiences. Psychology involves an individualistic and internal view of the individual, which is generally well supported by the largely introverted theories of AI. Interactions have proven to be problematic, and for this, AI has adopted several models of intention and interpretation. The applications of these to individual planning-based models have had limited success (This is a risky claim that I ought to put more investigation into supporting), and solutions to managing interactions have seen to shift the level of planning up a level. Instead of having individual characters execute planning, a drama manager will plan the interactions that the characters will have, moving the characters about like puppets. Do not get me wrong, I like puppets, but it might be worthwhile to look at some other potentially useful models.

The field of sociology has developed models of human interaction for a long time, perhaps the most notable and applicable to simulation are the “symbolic interactionists” who originated with George Herbert Mead and Herbert Blumer. Erving Goffman never identified himself as belonging to this school, but he was influenced by Mead and was heavily influential in this school of thought. Goffman’s most influential essay “The Presentation of Self in Everyday Life” sees every social interaction as a form of performance, suggesting that we should look at social action in terms of presentation and performance. This perspective is surprisingly consistent with the interaction between a user and AI controlled characters. As a basis, Goffman can be distinguished as an adoption of a sociological model rather than a psychological one. The sociological model places emphasis on the society in which interaction takes place. In a sociological perspective, the bulk of the rules and the models will dwell between the characters, rather than in them.

The simulation of character is important in the adaptation of fiction because so much of the content and appeal of novels comes from the characters. What is more, many authors, Jane Austen among them, are notable for creating evocative worlds which are populated not only by specific characters, but types of characters and types of situations. These literary worlds require attention beyond the psychology of the individual characters, to look at the scope of how those characters fit into a whole. The field of psychology alone cannot adapt to the scope of content needed in literary worlds, a theory is needed to account for the entirety of the fictional world and its social context. I propose the use of simulation to communicate these worlds because simulation is the only way to show the complexity and richness of a world made of social codes.

Amid sociologists, it is Goffman specifically who suggests approaches for looking at social interactions and social worlds systematically. He is most notable for his theory of interaction as performance, but his work on frames and keying, interaction rituals, and forms of talk each are intensely applicable to the simulation of social worlds. His contributions provide a suite of models for everyday interactions. These models are meant to be applied to everyday life, meaning for Goffman non-dramatic and contemporary interactions. However, the theories are still dramatic in nature, and I believe they are still appliccable to any system of interaction. Most of Goffman’s actual claims are small, minute even, and extremely simple: when one interacts, one performs a role; understanding of interactions is dependent on context and given by deliberate cues; interaction obeys a ritual form; and so on. If Goffman’s points are to be this small, it is a wonder that he should have stretched them out to fill so many books. The rest of his work surrounds the application and analysis of these points, exploring them in situations that range from the most mundane to the most extreme and absurd. The core of these arguments lies not in the examples and analysis Goffman gives of them, but in their application to understanding the world, and the potential to analyze new situations according to his dramaturgical method. In otherwords, Goffman is not an implementation, but a platform.

Early in my work I realized that it in order to develop a general approach for simulating fictional worlds, it would be necessary to simulate the author’s model of the world, but to do that some additional foundation must be built that is ideologically neutral, atop which the rules of the world may be placed. In almost all cases, the author’s model of a fictional world will encompass how people should behave toward one another, what things matter to them, what they want, what types of characters there are, and so on. However, before rules can be developed to answer these questions, more fundamental concerns must be addressed: How do characters intended actions become realized in the world? How do characters recognize values and how can they be expressed? How do characters demonstrate or understand each others’ emotions, words, or actions? These kinds of questions are generally omitted in fiction (except in cases of misunderstandings, which are not infrequent), and they are almost always omitted in everyday life. This is because we are social beings and can understand each others actions, and the actions of fictional characters with relative ease.

It is definitely arguable that it is not correct to treat Goffman as an ideologically neutral platform on which simulation of characters may be placed, but one would be hard stretched to find a platform that was less biased. It is arbitrary to use Goffman’s rules, but I believe that his show much more promise than those that lie in psychology or AI.

Leonard Foner: What’s an Agent, Anyway?

[Readings] (03.19.09, 4:07 pm)

Opening poses agents as a trend in software design, to lend computer applications a human face. This was seen early in Macintosh file finding programs, as well as in a variety of other places. Foner’s goal is to outline what “true” agents are, to identify how they are made up and what they have the potential to do.

The agent Foner spends most of his time examining is Julia, which was developed by Michael Loren (“Fuzzy“) Mauldin. Julia is a MUD chatterbot, which acts like any other player of a MUD and can talk and interact with other players.

The interesting thing with Julia is that because MUDs are textual online worlds, players interact with each other at a level through textual commands. Julia is essentially in the same position as any other player, having a character to interact in this world. As a result, other players interact with Julia just as though she were another player. The interface of the MUD creates an ambiguity between players and agents, because there is no clear or immediate way of distinguishing one from the other.

Julia is often used by other players as a helpful guide in the online world, like a knowledgeable friend who is always around and can always spare the time to give help, directions, or advice. Much of Julia’s function is giving help to others, and she can answer many questions about the world, that are not easily answered any other way.

At this level, it is possible to compare Julia to a documentation system, but instead of being faced with extensive documentation, Julia can give immediate and quick responses. The MUD environment is also constantly changing, so an agent who can explore the space like any other player is a potentially very useful resource. Her encyclopedic knowledge is part of what makes her ordinarily human behaviors give way to her robotic nature.

For her human-like qualities, Julia contains several subtle and very particular variations in her behavior in the world. For instance, she moves waits a second or two before moving from one room to another, she varies her responses, and she usually has somewhat coy responses when asked whether she is really human or really female. Foner explains that these human like characteristics make her functional behavior even more useful for other players. Foner gives an anecdote where another player, herself a programmer who knew that Julia is a bot, remarked on how she missed Julia when whe was offline. This is an interesting emotional reaction to something that the speaker knew was artificial. However, it is hardly unusual. People anthropomorphize things that are not human, often that are not even animate and develop attachments to them.

I would argue that an interesting reason for some of this success is the way in which she is adapted to and situated in the MUD. She is not emobided, but then again, no in-MUD character is really embodied. She has the same sort of virtual body that everyone else does.

Toward the end of the paper, Foner gives a series of bullets that characterize agents. These definitions describe agents as primarily functional things, that exist within some computational format, and are there to carry out tasks on the behalf of users. It is important to note that this is relevant from the perspective of developing agents as software tools, but for the purposes of simulations and of games (such as The Sims), Foner’s definition breaks down somewhat. The characteristics are as follows:

  • Autonomy: The agent performs actions on its own, and takes initiative.
  • Personizability: The agent adapts and learns to different users, adapting itself to them.
  • Discourse: The agent talks back and communication is two way, unlike other tools.
  • Risk and trust: The user can delegate a task to the agent and trust that the agent will do the task correctly. The risk of the agent failing must be balanced with the user’s trust.
  • Domain: The degree of specialization and risk is dependent on the domain being explored.
    Graceful degradation: Failure at a task or improper understanding of the task should exhibit graceful degradation, revealing that there might be a problem without, for instance, producing an error message.
  • Cooperation: The relationship between the user and agent is cooperative, and conversational, as opposed to commanding.
  • Anthropomorphism: Foner argues that agents are often anthropomorphized, but that they do not need to be. Similarly, many anthropomorphizied programs (such as Eliza) are not agents.
  • Expectations: The agent should be able to respond reasonably to most users’ expectations.
Reading Info:
Author/EditorFoner, Lenny
TitleWhat's an Agent, Anyway?
Typearticle
Context
Tagsdigital media, art, social simulation, specials
LookupGoogle Scholar