icosilune

Category: ‘Projects’

Fluid Hydrodynamics

[Projects,Toys] (07.22.10, 5:04 pm)

A while ago I had a brilliant idea of doing a fluid simulation to get interesting material effects that could potentially be used in Painter. I did some research and discovered a paper on Particle-based Viscoelastic Fluid Simulation. The implementation described was pretty much exactly what I needed, so I set forth to make a library to handle effects.
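For anyone curious what the core of that paper looks like in practice, here is a minimal sketch of its double density relaxation step in plain Java. The constants and class names are illustrative only, not the actual library's API; the real implementation wraps this in a full integrator, with springs for the viscoelastic part.

```java
import java.util.List;

class Particle {
    double x, y;   // position; the full algorithm also tracks velocity
}

class FluidSolver {
    static final double H = 16.0;            // interaction radius (illustrative)
    static final double REST_DENSITY = 10.0; // target density
    static final double K = 0.04;            // stiffness
    static final double K_NEAR = 0.1;        // near-stiffness, prevents clustering

    // One double-density-relaxation pass, roughly following the paper.
    static void relax(List<Particle> particles, double dt) {
        for (Particle p : particles) {
            double density = 0, nearDensity = 0;
            // Pass 1: accumulate density and near-density from neighbors.
            for (Particle n : particles) {
                if (n == p) continue;
                double dx = n.x - p.x, dy = n.y - p.y;
                double r = Math.sqrt(dx * dx + dy * dy);
                if (r >= H || r == 0) continue;
                double q = 1 - r / H;
                density += q * q;
                nearDensity += q * q * q;
            }
            double pressure = K * (density - REST_DENSITY);
            double nearPressure = K_NEAR * nearDensity;
            // Pass 2: push neighbors apart along their separation axes; the
            // particle itself takes the opposite of the summed displacement.
            double sx = 0, sy = 0;
            for (Particle n : particles) {
                if (n == p) continue;
                double dx = n.x - p.x, dy = n.y - p.y;
                double r = Math.sqrt(dx * dx + dy * dy);
                if (r >= H || r == 0) continue;
                double q = 1 - r / H;
                double d = dt * dt * (pressure * q + nearPressure * q * q) / 2;
                n.x += d * dx / r;  n.y += d * dy / r;
                sx -= d * dx / r;   sy -= d * dy / r;
            }
            p.x += sx;  p.y += sy;
        }
    }
}
```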

And of course, it’s also useful to have a nice shiny demo.

(An embedded Java applet demo appears here; it requires Java from www.java.com.)

Spy Games

[Projects,Toys] (05.31.10, 5:16 pm)

This is a project for my Game AI course in Spring 2010. The project was a collaboration between Ken Hartsook and myself. The AI system used for the NPCs was inspired by Cutumisu and Szafron 2009, “An Architecture for Game Behavior AI: Behavior Multi-Queues”. The primary goal of the project was to develop a game in which social interaction is a primary game mechanic.
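As a rough sketch of the multi-queue idea (the class and method names below are my own illustration, not the paper's or our actual code): each NPC keeps separate queues of proactive and responsive behaviors, and a social event preempts whatever the NPC was doing.

```java
import java.util.ArrayDeque;
import java.util.Deque;

interface Behavior {
    boolean step();   // run one tick; return true when finished
}

class Npc {
    private final Deque<Behavior> proactive = new ArrayDeque<>();
    private final Deque<Behavior> responsive = new ArrayDeque<>();

    void addProactive(Behavior b) { proactive.addLast(b); }

    // A social event (e.g., the player clicking on this NPC) enqueues a
    // responsive behavior, which takes precedence on the next tick.
    void interruptWith(Behavior b) { responsive.addFirst(b); }

    void tick() {
        Deque<Behavior> queue = !responsive.isEmpty() ? responsive : proactive;
        Behavior current = queue.peekFirst();
        if (current != null && current.step()) {
            queue.removeFirst();   // behavior finished; the queue resumes
        }
    }
}
```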

To play the game, click on the other characters and choose options to engage with them socially.

(An embedded Java applet demo appears here; it requires Java from www.java.com.)

Painter online

[Projects] (01.16.10, 10:05 am)

I’ve created a more or less permanent section for Painter on the website. That will get the latest updates to the Painter program and include more documentation. Painter still does not have a UI, but that is in the pipeline and should appear relatively soon.

Painter!

[Experiments,Projects,Toys] (11.22.09, 12:50 pm)

At long last I have a demo of Painter that does something interesting. Click on it below to have it start.

(An embedded Java applet demo appears here.)

Miscellaneous independent projects

[Projects] (11.13.09, 9:20 am)

I’ve got this strange disposition that I have cultivated where my self-satisfaction has a lot to do with whether I am doing something productive. I have to keep busy so that I’m focused and positive, but if I spend too much time on work stuff it feels crushing. On the other hand, if I relax and laze about, I’ll be fine for a little while, but will gradually become very agitated. This presents a problem when I already have a lot of work to do, because I’m doing work, but it’s not my own. The answer to this is independent projects. Normally I’ve got one of these going while I’m engaged with other work, but right now I have a lot.

I’m not sure how it came about, but here’s what I have:

1) I’m working on Painter again. A little while ago I was having a conversation about the project. I introduced it as a spiritual successor to Genetic Image, where instead of generated expressions, Painter uses generated programs. After having this conversation I realized that I had already done all of the hard work for it: the infrastructure to define the programs, statements, expressions, and so on, but I had stopped when it got to the actual drawing. For some reason, that seemed like the hard part at the time.

Admittedly, there do not seem to be any open libraries for Java that give Photoshop-esque drawing capabilities, but that’s kind of a silly thing to hope for anyway. However, there are good libraries for producing straightforward visual effects, especially out of Graphics2D. Normally these are directed towards cutting-edge UIs, but I’m sure I can use those tools effectively for painting.
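To make that concrete, here is a minimal sketch of the kind of Graphics2D effect I have in mind: a soft, semi-transparent brush dab composited onto a canvas. The parameters are illustrative, not Painter's actual brush code.

```java
import java.awt.*;
import java.awt.geom.Point2D;
import java.awt.image.BufferedImage;

class SoftBrush {
    static void dab(BufferedImage canvas, float cx, float cy, float radius, Color tint) {
        Graphics2D g = canvas.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON);
        // Fade the brush color from opaque at the center to transparent at the rim.
        Color edge = new Color(tint.getRed(), tint.getGreen(), tint.getBlue(), 0);
        RadialGradientPaint paint = new RadialGradientPaint(
                new Point2D.Float(cx, cy), radius,
                new float[] {0f, 1f}, new Color[] {tint, edge});
        g.setPaint(paint);
        // Blend softly instead of overwriting what is already on the canvas.
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.6f));
        g.fillOval(Math.round(cx - radius), Math.round(cy - radius),
                   Math.round(2 * radius), Math.round(2 * radius));
        g.dispose();
    }
}
```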

2) POV-Ray. I love POV-Ray. In the years since I first discovered it there have been more and more raytracers and renderers, but POV still holds a special place in my heart. However, one thing that has always bugged me about it is that you don’t have a lot of control over how objects reflect light. Actually, that’s not true, but you only have a few ways in which to do it. In traditional renderers, there are several types of illumination: ambient, diffuse, specular, and pure reflection. When programs start using radiosity, particularly via Monte Carlo integration, it becomes hard to restrict illumination to the four types given. The light reflected off an object viewed from a particular angle is really something in between traditional diffuse and traditional reflection.

Anyway, I decided to compensate for this by modifying the POV-Ray source to include glossy reflection. It looks decent so far. There are actually a few other ways to get this effect, but mine has some special means of variation that allows quite a bit of customizability. When it’s further along I’ll post a few comparison shots.
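For the curious, the idea behind the patch, sketched in Java rather than the actual POV-Ray C source (the names here are hypothetical): instead of tracing one mirror ray, trace several rays jittered around the mirror direction and average the results. The uniform per-axis jitter below is just one of several possible distributions.

```java
import java.util.Random;

class Glossy {
    interface Tracer {                                    // stand-in for the renderer's trace()
        double[] trace(double[] origin, double[] dir);    // returns an RGB triple
    }

    static final Random RNG = new Random();

    // Perturb the mirror direction by up to 'roughness' per axis, then renormalize.
    static double[] jitter(double[] r, double roughness) {
        double x = r[0] + roughness * (2 * RNG.nextDouble() - 1);
        double y = r[1] + roughness * (2 * RNG.nextDouble() - 1);
        double z = r[2] + roughness * (2 * RNG.nextDouble() - 1);
        double len = Math.sqrt(x * x + y * y + z * z);
        return new double[] { x / len, y / len, z / len };
    }

    // Average several jittered reflection rays instead of tracing one mirror ray.
    static double[] reflect(Tracer tracer, double[] hit, double[] mirrorDir,
                            double roughness, int samples) {
        double[] rgb = new double[3];
        for (int i = 0; i < samples; i++) {
            double[] c = tracer.trace(hit, jitter(mirrorDir, roughness));
            rgb[0] += c[0]; rgb[1] += c[1]; rgb[2] += c[2];
        }
        for (int i = 0; i < 3; i++) rgb[i] /= samples;
        return rgb;
    }
}
```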

3) Gameboard: a new project, a program to supplement roleplaying that attempts to simulate the tabletop experience while gaming over the internet. This project is hardly unique, but most other programs that aim to bring the tabletop online run into a few flaws: they are closely bound to a particular system, cutting out house rules; they prioritize 3D graphics over ease of navigation and use; they restrict users to internal assets and prevent them from being creative with their own. Effectively, they prevent users from doing things with the game board that players can do with a tabletop. I haven’t studied the other products that closely, but I know that a few of these are definite issues. My goal is to create a system that captures those elements of the tabletop experience that are integral to holding players’ attention during games.

Gameboard has a lot going for it conceptually, but I’m also in a tricky design situation, where I’m trying to figure out how tiles and layers should be represented in data structures. It’s not an impossible decision, but it’s one of those that is either made correctly, saving a lot of time in the future, or made incorrectly and revisited a bunch.
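Just to make the design question concrete, one possible shape for it (names hypothetical, not Gameboard's actual classes): each layer is a sparse map from grid coordinates to tiles, and the board is an ordered stack of layers drawn bottom to top.

```java
import java.util.*;

class Tile {
    final String assetId;                 // user-supplied image, not a built-in asset
    Tile(String assetId) { this.assetId = assetId; }
}

class Layer {
    final String name;
    // Sparse storage: only occupied cells are stored, so huge boards stay cheap.
    private final Map<Long, Tile> cells = new HashMap<>();
    Layer(String name) { this.name = name; }

    private static long key(int col, int row) {
        return ((long) col << 32) | (row & 0xffffffffL);
    }
    void put(int col, int row, Tile t) { cells.put(key(col, row), t); }
    Tile get(int col, int row)         { return cells.get(key(col, row)); }
}

class Board {
    // Ordered bottom-to-top; the topmost occupied cell wins when rendering.
    final List<Layer> layers = new ArrayList<>();

    Tile topTileAt(int col, int row) {
        for (int i = layers.size() - 1; i >= 0; i--) {
            Tile t = layers.get(i).get(col, row);
            if (t != null) return t;
        }
        return null;
    }
}
```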

4) Finally, I’m working on some new GeneticImage renderings. I’ll post some pictures later…

Working on GeneticImage

[Genetic Image,Projects] (09.11.09, 9:34 am)

I have started working on GeneticImage again. A long time ago, I started a secondary project, called Painter, which was an extension of the expression-generation of GeneticImage into a full-fledged code generation thing. Painter is very interesting, and has oodles of potential, but it’s not something I am capable of working on in full right now. On the other hand, I’ve been using GeneticImage off and on, creating fun new things with it from time to time.

I’m creating a Kenai project around it, so it will be interesting to see where that goes and how it works out.

My interest in working on the project again is primarily to improve it for how I use it, rather than general usability, which is something of a conflict of interests. Hopefully, it should be fun for people to peruse.

Project: A Short Guide to Writing About Games

[General,Projects] (02.02.09, 10:19 pm)

I met with one of LCC’s faculty the other day, the extremely wise and knowledgeable Karen Head, who is an expert on all things Jane Austen. She was kind enough to lend me a book, Timothy Corrigan’s A Short Guide to Writing About Film, which is useful in analyzing film adaptation and thinking about the language of film reviewing and criticism. I spent some time reading through it, and suddenly realized, we need one of these for GAMES. Think about it: a guide to thinking about and writing about games critically, aimed at a general audience. To my knowledge there are no sources about this addressed to general audiences.

There are game reviews, which discuss the space of playing games, but there are few general reviews that discuss games in a way that might lead toward their conception as aesthetic artifacts. I know there are discussions and essays to this effect, but, as everyone in game studies is quick to tell you, we are in want of a language for talking about these things. We have works which aim to discuss design with a critical vocabulary, and these efforts are to be commended, but few that think about games from the perspective of the consumers. Game reviews have yet to reach the maturity of film reviews, but I think that this maturity can still be achieved. What is missing is a guide to writing about games that focuses on them as works, as artifacts which convey messages and meaning. Such an approach would examine mechanics and gameplay as compositional elements.

I want to write this, but it will take a bit of time.

ArtScripter

[General,Projects] (01.25.09, 2:22 am)

This is an idea for a playful project that uses scripting and meta programming. The application consists of two windows (or panels, whatever), one is for rendering, and the other is for editing. The renderer will initially be blank, but gradually will display some sort of animated image. The editor window will have some control features, but will primarily consist of a simple editor window, where the user can enter in code and then try to load it into the renderer.

The primary mechanic for working is not to create and replace, but to add new drawing scripts on top of each other. Over time, the drawing will become lively and complex. The idea is to enable the user to do things that are more feasible and straightforward in code than in some graphical or symbolic system.
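A minimal sketch of the stacking mechanic, assuming a hypothetical DrawScript interface compiled from the editor's code: scripts are kept in an ordered list, and every frame the renderer replays the whole stack, so new scripts draw on top of older ones.

```java
import java.awt.Graphics2D;
import java.util.ArrayList;
import java.util.List;

interface DrawScript {
    void draw(Graphics2D g, long frame);   // compiled from the editor's code
}

class Renderer {
    private final List<DrawScript> stack = new ArrayList<>();
    private long frame = 0;

    // Called when the user loads code from the editor: add, never replace.
    void push(DrawScript script) { stack.add(script); }

    void renderFrame(Graphics2D g) {
        frame++;
        for (DrawScript s : stack) {
            s.draw(g, frame);   // older scripts first, newest on top
        }
    }
}
```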

One of the challenges is that, in order for it to work well, the editor needs to be good. Ideally it should be possible to embed some real editor, which would be difficult, but it can be done. The interface for creating the drawing scripts would need to be clear, too. Whatever code is necessary for drawing should be something that will be familiar to the user, or easy to pick up immediately.

It looks like an interesting idea. It should be straightforward to pull off in my copious free time. Right?

Cognition Paper

[General,Projects,Research] (01.22.09, 12:41 am)

Introduction

The purpose of my research is to simulate fictional worlds. The challenge to be explored in this paper is the conflicted role of AI within this investigation. I argue that existing approaches to AI are insufficient to tackle this greater problem, and that an approach to AI that addresses a social and cultural context is necessary beyond one that addresses individual agents. To understand why, we must look closely into the topic of simulating fictional worlds.

Specifically, my goal is to adapt fictional worlds into games. This grand project is rife with complexity and challenges. I cannot hope to give a complete exposition of this problem; rather, my aim is to provide a method or approach for looking at adaptation. The essence of adaptation consists of several steps, but my goal within this paper is to illustrate the challenges pertaining to developing the model of the fictional world, and thinking of it computationally.

It is not my intention to discuss the actual domain in much of the paper, but it is worth mentioning for reasons of contextualization. The narrative to be adapted is the novel Pride and Prejudice, published by Jane Austen in 1813. While it would seem an odd and perhaps counterintuitive target for study, it is a rich domain and on close investigation is unusually well suited to the adaptation problem. The reasons for this are twofold. The first is that Austen has a community and tradition of adaptation: her novels have been frequently adapted into film, and have spawned other literary adaptations and continuations. The second is that Pride and Prejudice has a surprisingly game-like story world. The world has the values of love, money, and social status. Characters interact socially at well defined social situations. They take part in cultural rituals of various scales: from a small-scale ritual such as a card game, to a moderate-scale ritual such as a social visit or a ball, to broad-scale rituals such as courtship.

Let us explore the conceptual steps to looking at the picture of adaptation. First, we understand that fiction defines a world, not just an individual story. A world is the stage on which the plot of the story is enacted. The world itself is defined by a model: the fiction includes some details and excludes others, and lays out a scope of possibility for what can plausibly occur within that world. Building and interpreting the model is a creative act, and is by no means straightforward. Much like translation itself, interpretation is necessary, but subjective. Accepting that we can understand a story world in terms of a model, we can form a computational representation of that model.

The idea that fiction is foremost a world and secondarily a story ties into the work of narratologists David Herman and Marie-Laure Ryan [Herman 2004, Ryan 2006]. The actual narrative defines a sequence of events through the story world. As such, the resulting story is just one of many possible stories that could occur in the world. Furthermore, the writing shapes the nature and properties of the world through the language used. The last step, wherein the events of the story world are rendered into written language, is the step that receives the least attention in this paper.

That a story world can be understood in terms of a model is essentially a structuralist claim, and it requires opening up the idea of what a model means. The word model implies a symbolic formulation of objects and rules. For the story world to be understood as a model, it must first be interpreted. The complexity and consequences of this interpretation are quite deep, and are explored in this paper. Once defined, though, a model can represent the possibilities of the story world with some coherence. When used to analyze story worlds, models are generally understood vaguely, without explicit formalization.

My main concern for this paper is with the last step: Once we have a model of the story world, how can that be transformed into something that may be simulated computationally? Characters act according to the model of the story world, but exactly how they act and what they do requires some additional work. The field of AI seeks to provide a computational solution to the intelligence and behavior of characters. However, the traditional use of AI relies on assumptions which are inappropriate for the adaptation of fiction. We shall see many challenges posed by traditional AI. These challenges shall be matched with contrasting perspectives that reformulate the constraints of AI in a way that makes them usable for the adaptation of story worlds. This reformulation does not reject the use of symbolic AI, but changes the target of representation from the individual to the broader cultural system.

Cognition and Representation

Artificial intelligence has a detailed and intricate history shaped by many individuals and many different philosophical biases. AI is not a single ideology; it is a tradition shaped by many ideologies. There does exist within the discipline a strong current of particular ideas, which I shall call traditional AI. This is also known as “Good Old Fashioned AI”, or GOFAI, or just symbolic AI. GOFAI is a movement and perspective on computation and cognition that derives from the work of Newell and Simon. The heart of this is the physical symbol system hypothesis: “A physical symbol system has the necessary and sufficient means for general intelligent action.” [Newell 1976]

A physical symbol system is essentially a formal model. The physical symbol system uses rules to operate on existing symbols and transform them into new ones. Such systems are symbolic abstractions of the Turing machine. Formal models are powerful tools, and will be employed to great effect later in this paper. However, the relationship between formal model and intelligent action is far more problematic than the physical symbol system hypothesis might first suggest.

The ostensible goal of AI is to provide a computational solution for intelligence. Exactly what this means ranges widely between AI applications. For expert systems, intelligence means an encyclopedic knowledge of a domain. For planners, it means the ability to formulate a plan for a successful course of actions within a particular problem domain or environment. AI applications tend to provide solutions for problems that could ordinarily be solved by a human individual. The intelligence as described by AI is significantly different from applied human intelligence.

AI still serves a valuable function, but to understand its value, we must examine the practice of AI, and what it aims to achieve.

While AI problem solving is intelligent, in the sense that it finds a solution for a complicated task, it is not the same as a human solving the problem. This distinction can be seen in several lights. One perspective on human problem solving is to view a human engaging with a problem as a whole system. The human is not one isolated mind, but an individual in a situation making use of situational affordances. Examples range from a mathematician proving a theorem to an airplane pilot adjusting the speed of a plane [Hutchins 1995, 1998]. Even mathematicians, whose work is largely cerebral, make use of situational aids, such as paper or a chalkboard, and use culturally established conventions for conducting proofs. AI cannot make use of these affordances, and an AI application is not substitutable for the human doing the problem solving. Instead, the AI represents the entire system, creating representations of the affordances used in memory. This is a distinction between representation and embodiment.

The practice of AI cannot hope to represent all of the interwoven parts demanded by an open approach to cognitive science. The vast sensory apparatus of human experience, replete with cultural and embodied meanings, would be nearly impossible to transform into a computational system. Even from a technical perspective, the human brain is far more distributed than today’s computers. Instead, AI is limited to representing very abstract elements of human cognition. Human activity that operates at a clear symbolic level can be performed computationally without great difficulty. Such activities include rote calculation, theorem proving, navigation, path planning, and so on. These activities are embodied, but they may be interpreted and described at a symbolic level, and this is the level on which AI can operate.

However, this process of symbolic transformation is not a straightforward task. Even for a relatively simple application such as a path planner, how the path is represented symbolically can make a major impact on how the algorithm can be understood in context. For example, the algorithm may assume that the planning agent already knows the environment in which it must plan, or it may require the agent to discover the environment dynamically. The agent may read in the data for the environment absolutely, or it may be limited in terms of what it can perceive (it may not be able to see around an obstacle, for instance). The environment itself may be represented as discrete or continuous space. These differences involve significant changes in the representations and symbols used within the model. All of these inform the discourse around the AI system.
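To make the contrast concrete, here is a sketch of two hypothetical Java interfaces that embody the two representational choices; the same search algorithm reads very differently when written against each.

```java
// A planner written against World sees the whole discrete grid up front:
// global, absolute knowledge of the environment.
interface World {
    boolean blocked(int x, int y);
    int width();
    int height();
}

// A planner written against Sensors must discover the map as the agent
// moves: only cells within line of sight can be queried, and anything
// behind an obstacle is simply unknown until the agent gets closer.
interface Sensors {
    enum Cell { FREE, BLOCKED, UNKNOWN }
    Cell look(int dx, int dy);   // query relative to the agent's position
}
```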

Representation connects a model to the world, either the physical world, or the world of human meaning. For the purposes of building a simulation of characters, representation is a major element. For providing characters that are believable, that can be simulated to act like characters within actual fiction, intelligence is absolutely necessary. However, the role of intelligence is somewhat unusual. AI applications are built around problem solving, so in order to simulate characters with AI, the task must be posed as some sort of problem. There is a tradition of AI applications which simulate characters and fictional worlds; I’ll review those here and connect them to the larger agenda of problem solving.

AI projects that simulate characters tend to fall under two categories: simulations and story generators. The latter category comprises software that generates stories in the form of text readable by people. One of the earliest and most influential story generation projects was Tale-Spin, which set characters in an imaginary cartoon-like forest world, where animals would interact with each other and try to satisfy goals [Meehan 1976]. Tale-Spin is notable for instances where its stories would fall into infinite cycles. The architecture used a system of planning that focused exclusively on characters, and has been criticized for not accounting for the plans of the story author.

Both Lebowitz’s Universe [Lebowitz 1984] and Turner’s Minstrel [Turner 1994] story generation systems were influenced by Meehan, but took alternative approaches to story construction. Universe relied on the author’s narrative goals, which would frequently include constructing situations that run against the interests of the characters themselves. The system was applied to generate soap-opera stories while maintaining a consistent level of complexity and interest. I would argue that Universe is not precisely a story generation system, but rather a plot generation system. Minstrel aims at constructing stories creatively, using creativity based on analogical reasoning and self-evaluation. The model of Minstrel is centered on the process of the author.

In addition to story generators, games have played a strong role in the representation of characters. Games fall under the category of simulations, and represent characters by having the characters engage with the user interactively. The strongest example of characters in games is The Sims, which is a notable title for many reasons. The Sims uses a model of behavior that is not based on planning, but rather on needs and motivations inspired by Maslow’s hierarchy of needs. The Sims allows players to observe and command the virtual characters (called sims), who interact with objects and each other in a compelling (if not realistic) manner.
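A toy version of this style of behavior model, with illustrative names and numbers rather than anything from The Sims itself: each action advertises how much it relieves each need, and the character simply takes the action that best relieves its most pressing needs.

```java
import java.util.*;

enum Need { HUNGER, ENERGY, SOCIAL }

class Action {
    final String name;
    final Map<Need, Double> relief;   // how much this action restores each need
    Action(String name, Map<Need, Double> relief) {
        this.name = name;
        this.relief = relief;
    }
}

class Sim {
    // 0.0 = desperate, 1.0 = fully sated.
    final Map<Need, Double> needs = new EnumMap<>(Need.class);

    Action choose(List<Action> available) {
        Action best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Action a : available) {
            double score = 0;
            for (Map.Entry<Need, Double> e : a.relief.entrySet()) {
                // Relief is worth more when the corresponding need is low.
                score += e.getValue() * (1.0 - needs.getOrDefault(e.getKey(), 1.0));
            }
            if (score > bestScore) { bestScore = score; best = a; }
        }
        return best;
    }
}
```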

A final example of simulated characters is the AI project Facade, by Mateas and Stern [Mateas 2002]. Facade is an interactive drama, where the player represents a character who visits the house of two longtime friends, Grace and Trip, whose marriage disintegrates over the course of the evening, with the player stuck in the middle. Facade is built to adhere to the principles of Aristotelian drama, where the tension is meant to fluctuate according to a defined dramatic arc. The simulation is organized by a drama manager, which introduces events to alter the dramatic flow of the experience.

Each of these projects makes use of an explicit model of the story world that they represent. The types of models direct the expressive and representative capability of the resulting system.

For characters to be represented computationally, they must be understood at a symbolic level. This is the heart of the transformation of the story world’s model into computational form. The decisions that must be faced in constructing symbolic representations inform the discourse of the simulation. Because this is a process of adaptation, that discourse of the simulation must be woven into the discourse of the narrative itself. Because of the nature of the domain, an approach that follows in the tradition of GOFAI would be very inappropriate. Story worlds convey more than minds, plans, and psychology of individual characters. A simulation must convey the system of interaction between the characters, and the essence of the world of which they are a part.

Adaptation and Models

The process of translation not only extends a work of literature into a new language, but also extends the work in time. Texts have a life within a culture: they emerge, grow, bloom, and eventually dwindle. Translation is a means to breathe new life into a text, extending it in time, and into a new cultural world [Bassnett 2002]. This extension comes with a set of challenges. A text is necessarily connected tightly to the cultural system whence it originates. The translator is responsible for not only preserving the identity of the text being transferred, but also weaving it into the new culture. The focus of my work is not translation, but rather its sibling, adaptation.

Adaptation carries with it all the burdens of translation, but brings the source text not only into a new time, and culture, but into a new medium. This extra dimension brings in added complexity, but it also changes the perspective on the problem of translation. Media speak with their own languages, with conventions established by genres and existing works. Even within a single format, there are different forms: The conventions of the epic poem are different from the conventions of the novel, or the short story.

A productive way of seeing fiction is not as a static or formal artifact, but rather as a world. When characters make choices within a fictional text, their futures are already written. But for those choices to be meaningful to readers, the futures must be imagined as dynamic. Similarly, works define a space of meanings and possibilities. Story worlds are not reflections of what is foretold within a written narrative, but they represent what has the potential to occur at each moment within the world where that narrative takes place.

At a distance, the problem of adaptation operates on some sort of space of equivalence. The individual adapted works are very different, but adaptations and translations must have something in common with the source material. That something is some common intrinsic structure belonging to a work that must be preserved across language and media for a translation or adaptation to be successful. This structure must illustrate the types of characters, relationships, events, plot points, and so on. This intrinsic essence is what I call the work’s model.

A model conveys the essence and meaning of a text. It also conveys the values and most important elements. The model should be imagined as the bricks and mortar which build the space in which the narrative plays out. Johnson-Laird [Johnson-Laird 1983] explains that models are a tool for cognition and are a functional understanding of the world. Narratologist David Herman bridges the space between narrative and cognitive science. Narrative defines a world, and readers understand a story by understanding the underlying model. “This amounts to claiming, rather unspectacularly, that people try to understand a narrative by figuring out what particular interpretation of characters, circumstances, actions, and events informs the design of the story.” [Herman 2004] The actual narrative, then, is but one trace through a live simulation of this model.

Forming Models

A problem in building models is what to include versus exclude. The construction of a model is a significant and meaningful act. When one constructs a model of some system or domain, some information is included, and designated as important, and other information is disregarded, and designated as unimportant. This simple and initial step is an important and fundamental case of meaning making. Some branches of cognitive science [Johnson-Laird 1983, Gentner 1983] argue that the construction of models is a basic unit of cognition. By virtue of this isolation, models impose values on the phenomena that they represent.

Models that make use of classifications also impose values. Bowker and Star discuss the political ramifications of classification and sorting, explaining that classifications create ethical claims on the material that they describe [Bowker and Star 1999]. The imposition of classifications says a great deal about what one values in those systems.

A simple example is to look at models of fiction as answers to questions about some particular narrative. Consider a classic film such as Citizen Kane. What is this film about? The immediate answers to this question will lay the foundation for a model that describes the work. “It’s about lost childhood,” “It’s about the ambiguity and elusiveness of truth,” “It’s a fictional biography,” “It’s about William Hearst.” Each of these answers begins to describe the essential elements of the film. When describing what is important, the weight and emphasis given to different characteristics shapes the bias of the model as a whole.

Realistically, none of these can claim a hold on absolute truth. Each of them expresses a way of looking at the material from a certain perspective. As is the case in critical thinking, the reader must take a position on the text, and that is the first step toward building a model. Whatever interest or agenda the reader brings to a text will color the reader’s perspective. Models are not right or wrong, but they may be more or less useful depending on the perspective of the interpreter. Scientific theory is a domain where there are many models, often contradictory ones. Newtonian physics is inconsistent with Einsteinian physics, but neither is right or wrong; they are instead more or less useful or applicable to a given problem.

We must be aware of the models we use in approaching a text for adaptation. It is easy to approach a text with a position already established, and this may color the resulting interpretation. It is not possible to come to a text with a blank slate, or if we could, then we would not be able to understand the text at all, because it would not connect to anything in our experience. It is instead necessary to be aware of the models that already exist in mind when interpreting a work.

Both the process of interpretation and the process of creation involve building models. When approaching a text or a procedural artifact, the reader will bring their own models and perspective to the work. There is a cognitive interplay between both the world of the creator and the world of the reader around a created artifact. However, the model belonging to the work may come to take on a life of its own, revealing elements missing from the models of both the reader and the author.

An example is BioChemFX, a detailed and elaborate simulation of deadly chemicals being released in an urban area. The simulation is designed to help develop safety procedures and train rescue workers to respond to the threat while saving the most lives. Embedded in the software is a model of how the gas spreads, but missing is any clear instruction on whom to rescue. When a toxic chemical outbreak occurs, the treatment of it raises questions regarding the value of human life. Bogost explains that this is a matter of inclusion and exclusion, combining rules with subjective ideals [Bogost 2006].

In addition to identifying content, models also expose relationships and procedures. These give the model a predictive power. By understanding the procedural laws of the model, one can predict what might occur to real phenomena based on the rules of the model. This principle is the foundation of the scientific method, but the method can be used beyond prediction of events in the real world.

Models so far have been used as general and loosely imagined structures. But when the matter of computation is introduced, they must be made concrete. Computational models are formal systems, or physical symbol systems, the same things at the heart of Newell’s approach to AI and cognitive science.

A formal model is a symbolic construction, borrowing the notion of symbol both from traditional AI and from the semiotic tradition. Signs in semiotics are inherently arbitrary and meaningless, only taking on meaning when coupled with a reference. Similarly, a model on its own has no meaning. A model only gains meaning when its symbols signify some external idea known to somebody. To be useful and understood, models must be creatively interpreted. In order for the model to signify a system that is meaningful, it must represent material outside of the system itself. A model encodes relationships and functions, but representations are needed to tie the model to human systems of meaning. A model without representation is nothing at all; it must have representation in order to exist. The two stand in opposition to each other, yet each is necessary to sustain the other. One might argue that a model defined logically has no representation, but formally it must use icons defined in some metalanguage. Even abstract logical formulations must be written in an alphabet.

The dimension of representation ties so deeply into human embodiment that it is impossible to escape. When someone wishes to develop an AI system which can understand itself, the developer is faced with the issue of infinite regress. An example is the Cyc project’s attempt to understand phenomenological concepts [Adam 1998]. A wholly internal system of knowledge is thus useless. This is not to say that propositions, rules, or symbols are useless, but rather they must represent something to somebody.

Similarly, an argument may be made that no representation can ever mean anything without engaging with other representations. A human might be able to supply an interpretation of some isolated observation, but in order for an artifact to convey anything, it must have some meaningful composition of representations, that is, an arrangement which conveys relationships. In order for something to be an artifact with communicative potential, it must have a model.

The field of games is surprisingly rich from the perspective of the relationship between models and representations. Game scholar Jesper Juul argues that games are a combination of real rules and fictional representations [Juul 2005]. Digital artifacts have the power of simulation: they can take a procedurally encoded model and simulate it in time. Michael Mateas argues that digital artifacts can be used for the purposes of artistic intention by presenting a procedural system [Mateas 2003]. According to Mateas, an artistic work that uses AI (Expressive AI) is composed of a computational machine and a rhetorical machine. The two of these share vocabulary and a model.

Mateas also argues that there are two parts to the computational machine. There is one system which defines the rules and makeup of the work, which is what I call the model, but there is a second system which contains the running system, which I call the simulation. The model and the simulation are separate. The model is authored by the creator of the artifact, and once the simulation is started, the model may not be adjusted. Instead, the simulation is dynamic and interactive, receptive to engagement with a user. Both contain representative material. The model has the greatest expressive affordances for the author, and the simulation has expressive affordances for the user.
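A sketch of that split in code, using my terminology rather than Mateas's own (the class names are mine): the model is authored and immutable once a run begins, while the simulation holds the mutable, interactive state.

```java
import java.util.*;

final class StoryWorldModel {                   // authored, fixed at run time
    final Set<String> roles;
    final Map<String, List<String>> situations; // situation -> permitted activities
    StoryWorldModel(Set<String> roles, Map<String, List<String>> situations) {
        this.roles = Collections.unmodifiableSet(roles);
        this.situations = Collections.unmodifiableMap(situations);
    }
}

class Simulation {                              // dynamic, receptive to the user
    private final StoryWorldModel model;
    private final Map<String, String> characterSituations = new HashMap<>();
    Simulation(StoryWorldModel model) { this.model = model; }

    void enter(String character, String situation) {
        characterSituations.put(character, situation);   // mutable run-time state
    }

    List<String> activitiesFor(String character) {
        String situation = characterSituations.getOrDefault(character, "idle");
        return model.situations.getOrDefault(situation, List.of());
    }
}
```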

Developing a model for an artifact, whether a new one or an adaptation, is a rich and deep activity, even if much of it is done automatically. There is a dilemma of inclusion and exclusion, of imposing values and ethical decisions, and no model will ever exhibit absolute truth. These are not restrictions; they are the very power of models, and simulation, as a system for realizing and formalizing models, is the medium with the greatest potential for expression using them. Models are ambiguous and flexible, interdependent with representation, and intrinsically playful. These are all factors in the construction of models, but they may also be seen as the mechanism for their judgment and analysis.

Traditions of AI

The domain of artificial intelligence can be used to build a computational simulation of fictional characters, but traditional AI lacks the perspective to represent models of story worlds. The central difficulty has to do with the fact that traditional AI is preoccupied with the mind, and can understand things outside of the mind only with difficulty. Things outside of the mind, such as the entire story world, are highly important for fictional adaptation. Even when the world is understood as a model, there are ambiguities that must be explored and addressed. Traditional AI tends to have a totalizing perspective, omitting much that should require attention.

The flaws of traditional AI all stem from the central issue of perspective. The flaws reside in AI’s handling of embodiment, cognitive extension, situation, rationality, cultural values, performance, and emotion. This section attempts to run through the flaws, illustrate what is lacking and why a change is necessary, and touch on any literature that discusses the conflict. Traditional AI limits its understanding of intelligence and cognition. These flaws obstruct the larger goal of adaptation, and the solutions aim to broaden AI’s perspective and epistemology. It is my goal in this section to build a new approach to AI, one step at a time, that can be used to better tackle the problem of simulating fictional worlds.

Traditional AI is flawed, but it can be refined. It is not my intention to throw out the physical symbol system entirely, but rather to change its focus. Because the adaptation process makes use of formal models, a symbolic understanding of character is important. That understanding must not be fixated on the individual psychology of the character; rather, the scope of representation must surround the cultural and social system of which the character is a part. Instead of fixating on goals and planning, the modified approach to AI should focus on goals and situated activity.

1. Embodiment

AI represents cognition without a body. The fact that AI lacks embodiment is no surprise; the only domain of AI that can arguably make use of embodiment is robotics. However, the lack of embodiment in traditional AI is indicative of a stronger ideology: that mind and body can be effectively and unproblematically separated. Traditional AI thus has a bias toward abstract thought that is independent of physical being. Because of its origins in symbolic systems, which are themselves mathematical constructs, traditional AI lacks the perspective imposed by embodiment. Free from the constraints of physical grounding, symbol systems can reason in the abstract. Intelligence without embodiment conveys a view from nowhere [Adam 1998].

The separation between mind and body should not be seen as unproblematic. Abstract reasoning without perspective is especially dangerous. There is a difference between information and knowledge, and the central difference is that knowledge requires a knower, who has a position and a viewpoint. When photography emerged, it was seen as something that could represent the world with absolute accuracy, lacking the personal expression of painting. Instead, early photographers soon discovered that their new medium had artistic affordances of its own in spite of its ostensible accuracy. Having a viewpoint is itself meaningful. Representing the world without a viewpoint is at best meaningless, and at worst misleading.

Many AI projects have embraced this lack of perspective, most notably the Cyc project. The ideology behind this has its roots in Cartesian thinking. Descartes believed that the mind is wholly separable from the body, and even from the physical world. To him, the mind is equivalent to the soul, existing eternally in spite of the finite nature of the world and human existence. The Cyc project aims to create an encyclopedia of commonsense knowledge, all framed using symbolic propositions; it seeks to encode human knowledge, but without the perspective of a knower [Adam 1998].

Embodiment is an unusual argument to bring into a discussion of procedural adaptation of fiction, but it has two important consequences. First, the fictional model is a view from somewhere, that of the author. The simulated world must acknowledge that it has a particular viewpoint. It is not meant to be a disembodied picture that describes reality; it describes someone’s perspective on reality, or at least, someone’s constructed world. Fictional models therefore have a perspective complete with bias and emphasis on some elements of the world above others.

Perspective complicates representation of the model of a fictional world. Without embodiment, symbolic formulations have no perspective. Models without embodiment present information in the abstract, in a form that lacks the capacity for criticism. Because information is represented objectively, it stands without grounding, and communicates itself as absolute. Representations of fictional worlds are biased, and should be understood as such. An astute reader does not believe every word written on a page is gospel truth, but rather understands the meanings accounting for the perspectives of the author and the characters. Underneath this issue is the matter of literacy. Literacy is the ability of a reader to interpret a work in context, although this is by no means a straightforward skill. A work is literate when it situates itself within the context of other meanings. For a fictional adaptation to be embodied, it must be literate.

Second, the characters themselves, who are to be simulated, are embodied within the fictional world. They are fictional, but in a good story, they must have some physical presence. I do not mean this in the sense of simulating physical bodies; it would be a mistake to interpret embodiment as merely physical simulation. Moreover, physical movement and articulation are not generally important in fiction. However, the characters will have their own perspectives on the world; they will not necessarily have the complete picture, but they will be bound to the reality in which they live.

Characters’ interactions with each other are embodied, and many meaningful interactions in fictional worlds have bodily manifestations. A good example is a character’s gaze. Such a thing is very significant in fiction and in human interaction, and can be understood symbolically, but it is expressed bodily. It is argued that all symbolic meaning can be traced back to embodiment [Lakoff 1980, 1999]. To accommodate embodiment of characters, we do not need to reject symbolic representations, but understand that those symbolic representations are anchored within the body. It also suggests that embodied action should be a starting point for understanding symbolic representations, and not the other way around.

2. Individual vs. Extension

Beyond the body, a flaw of AI is to examine the individual in isolation, as a single atomic agent, without connection to the surrounding world in which the individual must reside. The application of fictional adaptation requires adaptation of much more than individual agents acting independently. What is missing is the social and cultural context of the agents, and our approach to AI must consider agents within this broader picture.

The idea of looking beyond the individual or the mind falls within the broader scope of cognitive extension, which claims that cognition is not limited to the self, but extends outward. Cognitive extension pushes the understanding of cognition out of the mind and into the body, into tools, instruments, and artifacts, and into the social and cultural landscape. If AI is a computational approach to cognition, it must account for the extendability of cognition.

Cognitive extension blurs so many boundaries that it makes cognitive science a muddled mess. Its position on tools and artifacts is sound, supported by many philosophical positions [Freud 2005, Heidegger 1977, Norman 2002]. Tools are considered extensions or prosthetics of mankind, but this extension is not unproblematic. The use of the tool shapes the human using it as much as it acts as a prosthetic. One way of looking at this is that the tool comes with its own model of the world, and through using it, the human comes to adopt or incorporate that model. In this light, instrumentality is a continuation of embodiment, but may be understood in a symbolic manner.

We can imagine fictional adaptation as incorporating this element. Objects used by characters, props, must have symbolic models that integrate with the model of the story world. In using objects, characters’ ability to interpret and interact with the world is changed. This begins to shift the emphasis in our use of AI from abstract cognition to activity.

The logic behind cognitive extension is that because cognition makes use of the affordances of the body and of tools, so too must it make use of cultural and social practice. Agents act in concert with each other, and within a framework of meaning defined by the cultural setting of which they are a part. Where cognitive science has been closely tied to psychology, cognitive extension connects psychology to sociology and anthropology. Integrating the two reveals that mind and culture are interdependent [Shore 1998]. What this will mean for AI will be discussed in subsequent sections.

3. Situation

Planning is a predominant theme in AI. This makes sense, because AI was developed to solve problems. But the kinds of problems and problem solving were very particular. To develop the General Problem Solver, Newell and Simon studied how a very specific demographic approached very specific problems, within a very specific situation. The demographic was exclusively white male engineers and scientists, specifically Carnegie Mellon students and graduate students [Adam 1998]. The problems were generally scientific or mathematical in nature [Newell 1963], and the situation was that of a laboratory or academic environment. While this form of research was in line with the ideal of absolute knowledge without perspective, it is worth critiquing here because of its neglect of situation.

When the AI methods that derived from the GPS research have been exposed to other situations, they have frequently come off as peculiar or stilted. These are the so-called “expert systems,” which are usually best used as tools by field experts who can apply their own knowledge to the system’s results. Problem solvers cannot be used independently without some sort of interpreter to operate the program. The reason for that can be understood by acknowledging the context under which AI emerged as a product. Problem solving is only one part of human cognition, and we do much more than planning in our daily lives. Problem solving is also extremely dependent on problem formulation. When a problem is ill-posed or not fully understood, it cannot be solved. The way we solve problems depends on how we view the problems in the first place.

It is perhaps a philosophical challenge to the rhetoric of problem solving, but it seems that in order for a problem to exist, it must exist within a situation. The problem must have some sort of context, and be for somebody who has an investment in its solution. A mathematical formula is not a problem until it is important that it be solved. The space in which problems are posed and meaningful is a situation. This concept is important to fictional adaptation because simulated characters may have goals and plans, but those do not mean anything until they are contextualized within the space the character inhabits.

Situation is one way in which traditional AI does not account for the broader picture. Situation directly informs and affects cognition. Instead of using the same logic and formal rules anywhere and everywhere, situated cognition says that we think differently under different circumstances. Meaning itself is dependent on those very circumstances. For fictional adaptation, characters must be faced with a variety of situations. Situations might range from simple events such as a conversation, to broader ones such as breakfast or a social visit, to grand ones such as courtship, which could be the context of the entire narrative.

Situation does not mean that symbolic AI must be rejected. Instead, situation must be accounted for. Because meaning and activity are dependent on situation, situation again ties into the system of models. A situation describes a model for activity. The model of the fictional world must describe the potential situations that may arise, and inform the possible activities that can be conducted by agents under those circumstances. This framing is symbolic, but again changes the focus of AI from planned atomic actions to situated activities.

Situation informs the identity of characters. When a character has a role within a narrative, that role serves as a context which situates the character’s actions. Identity thus is a composite of situations that apply to the character [Clancey 2006]. This approach also complicates the rhetoric of planning. Planning agents tend to have hierarchical plans stored in memory. With this viewpoint, plans would be integrated with the agent’s situation.

Planning is important to simulated characters; they necessarily form plans within their worlds and attempt to see them through, but plans in fiction are inevitably thrown askew. The problem-solving method proposed by planning uses tree-based search, which means that when a subplan fails, an AI will usually back up and try the next alternative. In fictional worlds, this sort of reaction is often not the case, or even possible. Instead, an agent’s goals or intentions define a situation that the agent experiences. Instead of planning according to all potential actions, situated activity directs the agent from the bottom up, according to the agent’s context.

Setting aside planning in favor of situated activity exposes something else very important for fictional characters: demeanor and conduct. Frequently what is important about character interactions is not what the characters do explicitly, but how they do it. These factors are tightly related to the situations in which the characters are engaged.

4. Rationality

Traditional AI has a preoccupation with rationality. Rationality is certainly worthwhile and important in the domain of problem solving and critical thought, but it is a small part of the picture when compared to representation and fictional characters.

For an agent to be rational, according to Newell [Newell 1976], means that the agent acts in a way that it believes will help achieve its goals. If we understand this as the basis for rationality, it offers a more complex perspective than might initially be imagined. This perspective of rationality carries with it an inherent acknowledgment of bias, belief, and goals. An observer may consider an agent’s goal of self-destruction undesirable, but the agent’s plan to jump off a cliff is perfectly rational given its goals.

Rationality is still an ambiguous issue even with this in mind. For the purposes of fictional adaptation, fiction is rife with characters who behave decidedly irrationally, acting at times in spite of their goals. This suggests that rationality and goals alone are not sufficient to account for the complexities of fictional characters.

Rationality relates to planning and a clear understanding of goals. Story characters often have goals, but frequently act in spite of those goals. Thinking in terms of rationality makes discussion of character revolve around goals and desires. A rational character must be defined in terms of goals. Literary characters may have goals, but they are not defined in terms of them. Furthermore, a character’s goals may often conflict, and these conflicts reflect issues of the values in the story world.

Frequently in agent-based programming, where agents have conflicting goals or objectives, an evaluation function is used to determine the optimal objective. Even in situations where none of the options may be desirable, they can be framed in terms of better or worse. The process of this evaluation is at the heart of rationality, and derives from the origin of the word itself: “ratio” means measurement, thus a rational action is one which measures all of its alternatives. This approach hardly seems desirable, but the problem must be addressed somehow; an agent must act eventually. However, to apply an evaluation function conceals the significance of that choice. In addition to a weighted comparison, the agent must also express the dilemma of choice, and that must be reflected in character. The arithmetic simplicity of evaluation functions must not conceal the importance of how the agent has made its decision. Fictional characters are necessarily irrational. This does not mean that they act flagrantly against their interests (although sometimes they may), but it means that their faculty for measurement is biased, incomplete, or spurious. This complexity is essential to the representation of character.
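A toy evaluation function can make the point concrete. The sketch below (all names illustrative) surfaces the dilemma alongside the winning option by reporting how close the runner-up was, so a character can express hesitation rather than hide the choice behind a bare argmax.

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

class Chooser {
    static class Decision<T> {
        final T choice;
        final double margin;   // a small margin signals a genuine dilemma
        Decision(T choice, double margin) {
            this.choice = choice;
            this.margin = margin;
        }
    }

    // Standard argmax, except the gap to the second-best option is preserved
    // so the character model can react to a close call. With a single option
    // the margin is effectively infinite: no dilemma at all.
    static <T> Decision<T> decide(List<T> options, ToDoubleFunction<T> evaluate) {
        T best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        double secondScore = Double.NEGATIVE_INFINITY;
        for (T option : options) {
            double s = evaluate.applyAsDouble(option);
            if (s > bestScore) {
                secondScore = bestScore;
                bestScore = s;
                best = option;
            } else if (s > secondScore) {
                secondScore = s;
            }
        }
        return new Decision<>(best, bestScore - secondScore);
    }
}
```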

5. Cultural and Social Symbols and Values

The work of symbolic systems depends on a careful relationship between model and representation. Both the model and the representation of fictional simulation must tie into the system of meaning given by the fictional work. This system of meaning is cultural: it is derived from the cultural world of the author, and of the culture of the world within the work itself. Traditional AI looks at only individual cognition, so it is our agenda here to find a way to open it up to cultural meaning.

The agenda of looking at cultural meaning is to continue pushing the subject material of AI outside of the mind. Not only must AI extend to the body and the extensions of the body, it must extend into the meaning systems that are socially and culturally instituted. In the trend of cognitive extension, we must look to culture because cultural practice may be considered an extension of man. Tools are prosthetics which extend our interaction and interpretation of the world, but cultural practices serve this function as well.

As an example, consider the discipline of statics as studied in engineering. This discipline studies structures in static equilibrium. It is a cultural practice because it is used by a specific population (engineers) in the context of other applications, such as the construction or analysis of structures. Statics views the world as decomposed into bodies, which exert forces and moments upon each other. This is a viewpoint, a lens, by which the world can be interpreted and understood meaningfully with respect to the system of statics.

Another example is a dance at a ball in the context of Pride and Prejudice, our fictional adaptation target. An observer in this situation interprets the actions performed by the participants in a very careful light. Meaning is made through the use of subtle cues and simple actions, such as gaze, a man asking a girl to dance, and the girl’s subsequent acceptance or refusal. This is a cultural system with a very precise set of meanings, and the situation can be used to interpret events in accordance with that system of meaning.

The matter of cultural meaning is thus an issue of representation. There is a relationship between the physical world and the model, which must be acknowledged by the AI in the adaptation. All parts of the model ultimately tie into a cultural system of meaning. Since the focus of AI on individual agents is flawed, we see here that a stronger accommodation of the fictional world’s cultural system must be examined.

6. Performance

Performance has two meanings in the scope of computation. The first is in the sense of efficiency: a system performs well when it completes a task in optimal time, or does the task especially well if there is some criterion for judgment. The other sense of performance comes from the idea of performativity. In the project of fictional adaptation, the criterion of performance seems straightforward: the system performs well if it conveys a believable and compelling adaptation of the target narrative.

While they are quite different in application, both these understandings of performance share a common thread: performance indicates that there is an audience. The system is doing something for somebody, and can do so well or poorly. Even in the case of efficiency, performance is at stake because time or efficiency is valuable to somebody. In the case of fictional adaptation, performance occurs at two levels. First, the system must perform for the user and be believable. Second, the characters must perform roles in their interaction with each other.

Performance focuses on interaction between the AI system and the user. This idea has been espoused in reference to computation in general [Laurel 1993]. A focus on performance is common in games and interactive drama projects. Joe Bates and others [Bates 1994, Mateas 2002] have argued that AI for artificial characters should focus on believability rather than realism. The idea of realism, much like rationality, has been an issue with traditional AI. Under the dogma of pure symbolic reasoning, realism can be seen as an issue of depth, rather than context. This chain of reasoning would justify things such as the Cyc project. Believability rejects the necessity of realism. Instead, it emphasizes the communicativity of representation, rather than the sophistication of the model.

An important subject that is affected by performance is interaction: not just interaction between the user and the system, but between characters within the system itself. The traditional AI approach to interaction claims that the thoughts and minds of agents must be modeled explicitly, and that communication and interaction should work at a low level of intentions and goals [Cohen 1990]. This literature describes the minds of agents as enormous sets of logical propositions, representing things that, for the agent, are true. The literature is referred to as “Beliefs, Desires, and Intentions” (BDI), and it contains descriptions of how to model an agent’s understanding of knowledge. In the full spirit of “realism,” BDI literature describes how to represent immensely complex logical propositions for relatively simple interactions between agents.

Consider an example from this literature. With the Wednesday advertising supplement in hand, a supermarket patron approaches the butcher and asks “where are the chuck steaks you advertised for 88 cents per pound?” to which the butcher replies, “How many do you want?”

This delightful snippet is analyzed by logically formulating a complex network of propositional relationships that describe the beliefs, goals, and intentions of the butcher. This formulation is problematic. On one hand, it is necessary to represent, to some degree, the knowledge and intentions of agents, and doing so is perfectly compatible with the epistemology of traditional AI. On the other hand, this symbolic formulation is arguably not realistic. Furthermore, this formulation of intentions breaks down when considered developmentally [Vygotsky 1978, Tomasello 2001].

Sociologist Erving Goffman has proposed a model of interaction based on performance [Goffman 1999]. The key is interaction itself: agents interact with each other in addition to the user. Characters make use of roles, and perform those roles. Goffman’s approach is influenced by the school of symbolic interactionism, which claims that interaction is ultimately symbolic in nature. Unlike BDI, symbolic interaction focuses on believability. Instead of a character being a set of abstracted goals and intentions, the character has roles which abstract out the types of interactions that the character may engage in. Much like interaction with artifacts, roles are models of interaction.

In the example with the patron and butcher, the symbolic interaction approach would reject the complex interplay of beliefs and intentions. These would be replaced with a pair of roles, and perhaps a social script which the agents would enact. The interaction between the two is symbolic in nature because it is meaningful within the context of the shopping activity. For fictional adaptation, this type of model is vastly superior to the mess presented by BDI.
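A minimal sketch of what this might look like in code, assuming a pair of roles and a fixed social script; every name here (the roles, the script steps, the classes) is hypothetical, invented for illustration rather than drawn from any cited system:

    // Hypothetical sketch: the patron/butcher exchange carried by roles and
    // a short social script, rather than a network of logical propositions.
    import java.util.List;

    enum Role { PATRON, BUTCHER }

    // A step pairs the role expected to act with a symbolic action; the
    // actions are meaningful only within the shopping activity itself.
    record Step(Role actor, String action) {}

    public class ShoppingScript {
        static final List<Step> SCRIPT = List.of(
            new Step(Role.PATRON, "approach-counter"),
            new Step(Role.PATRON, "request-advertised-item"),
            new Step(Role.BUTCHER, "clarify-quantity"),
            new Step(Role.PATRON, "state-quantity"),
            new Step(Role.BUTCHER, "fulfill-request"));

        public static void main(String[] args) {
            for (Step s : SCRIPT)
                System.out.println(s.actor() + ": " + s.action());
        }
    }

The point is not the particular steps but the shape of the model: the agents enact positions within a shared activity, and the interpretation of each action comes from the activity, not from propositions inside the agents’ heads.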

7 Emotion

The relation between thought and emotion tends to be heavily contested in the study of cognition. Rationalism in particular excludes emotion. AI has a difficult time accommodating emotion, especially with regard to the relationship between emotion and behavior: emotion is disconnected from the rhetoric of planning.

Emotion is very important in the lives and behavior of humans, and is furthermore enormously important in fiction. Studies have shown that emotion has an effect on decision making [Oatley 1996]. Such a finding might sound like the rational mind being interfered with by emotions, but the argument is the opposite: emotions are the basis for decision making and even for rationality. This argument hearkens back to Aristotle, who claimed that emotional appeal was an integral element of rhetoric.

Humans are sensitive to emotions and can easily perceive the emotions of others. This cycle of emotional response occurs in reading fiction, and is used to develop an understanding of the fictional world. In reading, the mind conducts a mental simulation of the characters’ emotions, and this helps develop a kind of emotional intelligence [Oatley 2008]. Within fiction, characters are subject to many emotional forces, which must clearly be accounted for in order to perform fictional adaptation.

AI oriented around planning does not have the tools to represent or model emotion. Emotion needs to be represented in the model and behavior of agents. Agents need to experience emotions and those emotions must have effects on their behavior. To model this, we must provide an explanation for how emotions arise in agents, how they are expressed, and what their varieties are. A useful discussion of emotion comes from Ortony, Clore, and Collins who view emotion as valenced reactions [Ortony et al. 1990]. Emotions are experienced in reaction to events, others, and objects, and depending on how the world is perceived, this can affect the emotional response of the agents. This model does not discuss expression of emotions in great detail, but provides a compelling model that differentiates between emotion and mental state.
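As a sketch of how such valenced reactions might be represented, under a deliberately crude reading of Ortony, Clore, and Collins (the class, fields, and labels below are my own simplification, not their model):

    // Hypothetical sketch of an OCC-style appraisal: an emotion is a valenced
    // reaction, and the same valence reads differently depending on whether
    // the reaction is to an event, to another agent, or to an object.
    enum Target { EVENT, AGENT, OBJECT }

    public class Appraisal {
        final Target target;
        final double valence;   // negative = displeased, positive = pleased
        final double intensity; // how strongly the reaction is felt

        Appraisal(Target target, double valence, double intensity) {
            this.target = target;
            this.valence = valence;
            this.intensity = intensity;
        }

        // The emotion label is read off the target and the sign of the valence.
        String label() {
            switch (target) {
                case EVENT: return valence >= 0 ? "joy" : "distress";
                case AGENT: return valence >= 0 ? "admiration" : "reproach";
                default:    return valence >= 0 ? "liking" : "disliking";
            }
        }

        public static void main(String[] args) {
            // A negative reaction to another agent reads as reproach.
            System.out.println(new Appraisal(Target.AGENT, -0.8, 0.6).label());
        }
    }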

To accommodate emotions within AI would require that agents experience reactions to things in the world, which would in turn affect the conduct of the agent. Emotion and rationality are connected, and rationality is principally the characteristic of carefully measuring the value of actions and outcomes. With this in mind, the effects of emotions can be represented as skewing the weights of the decision making process. An action that would be poorly valued by a cheerful agent might seem much more desirable, even “rational,” to an angry one. Emotion is represented not only in decisions, but also in the manner in which agents conduct ordinary activities.
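A sketch of that skewing, with invented moods, actions, and numbers; the only claim the code carries is that mood rescales the utilities an agent compares, so the “rational” choice changes with emotional state:

    // Hypothetical sketch: emotion as a skew on decision weights. An action's
    // base utility is scaled by how well it fits the agent's current mood.
    import java.util.Comparator;
    import java.util.Map;

    public class MoodyChooser {
        // How strongly each mood favors each action (invented values).
        static final Map<String, Map<String, Double>> MOOD_BIAS = Map.of(
            "cheerful", Map.of("dance", 1.5, "quarrel", 0.2),
            "angry",    Map.of("dance", 0.3, "quarrel", 1.8));

        // Pick the action whose mood-weighted utility is highest.
        static String choose(String mood, Map<String, Double> baseUtility) {
            return baseUtility.entrySet().stream()
                .max(Comparator.comparingDouble((Map.Entry<String, Double> e) ->
                    e.getValue() * MOOD_BIAS.get(mood).getOrDefault(e.getKey(), 1.0)))
                .get().getKey();
        }

        public static void main(String[] args) {
            Map<String, Double> base = Map.of("dance", 1.0, "quarrel", 0.6);
            System.out.println(choose("cheerful", base)); // dance
            System.out.println(choose("angry", base));    // quarrel
        }
    }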

Changing Perspective

The points in the sections above describe what I consider to be the flaws of traditional AI, but taken together, they can be seen as a reorientation of the project. In critiquing the approaches of traditional AI, several trends have emerged. It is not necessary to reject the symbolic nature of AI, but we have seen that changes must be made to its target of study if it is to successfully address the problem of fictional adaptation. The resulting changes are that:

  1. The model of the story world is the unit of analysis.
  2. The story world is a cultural system with contextual meanings and values. It is impossible to assign values from outside of the system.
  3. Situation and activity are the basis for all behavior that occurs within the system. Plans are at most secondary to situated behavior.

I suggest that we shift the focus of study. The problem is the adaptation of fictional worlds. By virtue of looking at worlds, the focus should be on the environment and the relationships between characters. Instead of thinking of the intelligence and goals of individuals, we must think about the logic of the world itself. The world is culturally oriented and contains an index of situations, roles, activities, and actions that characters may engage with. At any given point, a character will be in the middle of some situation or another, taking on a particular role with which to take part in that situation. The role constrains the character’s range of actions. Actions have symbolic values with respect to the story world, and affect its state. Characters may operate according to plans, but foremost they will operate according to their identities, and express what makes them who they are.
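The shape such a world model might take is easier to see in code. This is a rough, hypothetical sketch; the situation, roles, and actions are invented stand-ins for whatever an adaptation would actually index:

    // Hypothetical sketch: the world, not the agent, is the unit of analysis.
    // The world indexes situations; each situation offers roles; the role a
    // character occupies constrains the actions available to it.
    import java.util.List;
    import java.util.Map;

    record Action(String name, double symbolicValue) {}
    record Situation(String name, Map<String, List<Action>> actionsByRole) {}

    public class StoryWorld {
        static final Situation BALL = new Situation("ball", Map.of(
            "gentleman", List.of(new Action("ask-to-dance", 1.0),
                                 new Action("slight", -2.0)),
            "lady",      List.of(new Action("accept", 1.0),
                                 new Action("refuse", -1.0))));

        // A character's range of action is a function of situation and role.
        static List<Action> options(Situation s, String role) {
            return s.actionsByRole().getOrDefault(role, List.of());
        }

        public static void main(String[] args) {
            System.out.println(options(BALL, "lady"));
        }
    }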

To continue with this, it is necessary to understand the values of the world, and its mechanics. The values describe, generally, what is important in the world, and what quantities the characters might keep track of. The mechanics describe what can possibly happen in the world at all. The mechanics of a given situation depend on the values of the parameters characters have in that situation, and these parameters are in turn affected by the execution of the mechanics.

Worlds have values in several respects. There are values in the moral sense: what concepts, events, relationships, and qualities are legitimate or meaningful in the space between the characters. There are also values in the sense of parameters that might be attributed to characters, as specific statistical quantities. Discussing this latter type involves two steps: first, defining what the important parameters are, what they are called, and how they are articulated; and second, determining what quantities the actual characters have, and how these quantities change with respect to the characters’ actions or activities.
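A small sketch of the second step, with hypothetical parameter names and deltas; deciding which parameters matter (the first step) is exactly the interpretive work described here, so the code simply takes them as given:

    // Hypothetical sketch: named parameters attributed to a character, and
    // quantities shifted as a side effect of the actions the character takes.
    import java.util.HashMap;
    import java.util.Map;

    public class CharacterValues {
        final Map<String, Double> params =
            new HashMap<>(Map.of("reputation", 0.5, "esteem", 0.0));

        // Performing an action applies deltas to the parameters it touches.
        void apply(Map<String, Double> deltas) {
            deltas.forEach((k, d) -> params.merge(k, d, Double::sum));
        }

        public static void main(String[] args) {
            CharacterValues c = new CharacterValues();
            c.apply(Map.of("esteem", -0.3)); // e.g. slighting someone at a ball
            System.out.println(c.params);    // esteem drops to -0.3
        }
    }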

It is not my intent to provide a final answer to the inquiry of determining the values of a story world. As with any interpretation of a fictional world, the determination of these values is not only very subjective but also very open ended. A fictional story world might appear to operate around one principle, but on further study the story is revealed to be about deeper meanings. There is a relationship between these strata of meanings that is especially important to consider on the subject of adaptation. I want to suggest an approach: looking at meanings, thinking of them mechanically, and then using that as a starting point for the adaptation process.

In the space of political games, the goal is frequently to untangle a set of relationships in some everyday system, and to uncover what values lie underneath.

The mechanics of a world are the scope of what can happen. In the context of fiction, this scope is necessarily limited by many factors, but centrally the range of actions a character might perform depends on their appropriateness and on the nature of the character itself. Narratives operate according to causal rules: one aspect of character may provoke one meaningful act, which may cause another. Characters operate according to some form of social code, which dictates what their range of choices is in any given situation. Occasionally there may not be many options, and what is important is not the action itself but the conduct, the means of expressing the action. These differences are subtle but significant.

I think that in many fictional worlds, where the story is oriented around character rather than action, the world is primarily socially oriented. Social mechanics take the form of rituals, which work as a kind of performance of everyday life. Rituals should be understood in terms of duration. There are short rituals, for events that pass quickly, such as a conversation, a game of cards, or a dance. There are moderate-length rituals, which might take the course of several hours in the narrative world, or several pages in the text, such as a social visit, a date, or a work day. And there are extended rituals, which may last a long time, even over the course of an entire story, such as courtship or coming to maturity. There is a hierarchical nature to these rituals, and with increasing spans of time there are also increasing degrees of freedom and flexibility. Short rituals are tight and restrictive, but larger rituals have more room for transgression.

Characters entering into rituals must adopt some kind of role for their position within the ritual. A social visit has roles for the host and the guest; a conversation has the fluctuating roles of speaker and listener. Roles come with implicit expressions of power dynamics, creating a currency of interaction. How a character abides by its role is significant, and will affect the values and parameters associated with that character.
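The nesting of rituals and their roles suggests a straightforwardly recursive structure. A hypothetical sketch, with ritual names, spans, and roles invented in the spirit of the examples above:

    // Hypothetical sketch: rituals as nested performances of everyday life.
    // Longer rituals contain shorter ones and grant more room for
    // transgression; each ritual carries the roles a character may enter.
    import java.util.List;

    enum Span { SHORT, MODERATE, EXTENDED }

    record Ritual(String name, Span span, List<String> roles, List<Ritual> parts) {}

    public class Rituals {
        static final Ritual COURTSHIP = new Ritual(
            "courtship", Span.EXTENDED, List.of("suitor", "courted"),
            List.of(new Ritual(
                "social-visit", Span.MODERATE, List.of("host", "guest"),
                List.of(
                    new Ritual("conversation", Span.SHORT,
                        List.of("speaker", "listener"), List.of()),
                    new Ritual("game-of-cards", Span.SHORT,
                        List.of("player"), List.of())))));

        public static void main(String[] args) {
            System.out.println(COURTSHIP.name() + " contains "
                + COURTSHIP.parts().get(0).parts().size() + " short rituals");
        }
    }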

This approach of role, character, value, and mechanics is very different from the models of goals and planning established by traditional AI. However, it is not inconsistent with the symbolic perspective of AI. Instead, it recasts the symbols, anchoring them within perspectives and within the activity of the agents. A number of contemporary cognitive scientists have aimed to shift the focus and object of the study of cognition, and a prevalent trend is to shift that focus from representing thought to representing activity [Lave 1988, Nardi 1995, Cole and Derry 2005, Hutchins 1995]. Activity is important for studying cognition, and it is also important for representing the content of a modeled world: it represents the scope of what characters can do. Activity is the mechanics of the fictional world. If we understand what characters can do, and what effects those actions have, then it is arguable that we understand the cultural world in which those characters reside.

Bibliography

Adam, Alison. Artificial Knowing: Gender and the Thinking Machine. 1st ed. Routledge, 1998.

Agre, Philip E. Computation and Human Experience. Cambridge University Press, 1997.

Austen, Jane. Pride and Prejudice. Penguin Books, 2002.

Bassnett, Susan. Translation Studies. Routledge, 2002.

Bates, J. The Role of Emotion in Believable Agents. Carnegie Mellon University, Department of Computer Science, 1994.

Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. The MIT Press, 2007.

Bogost, Ian. Unit Operations: An Approach to Videogame Criticism. The MIT Press, 2006.

Bowker, Geoffrey C., and Susan Leigh Star. Sorting Things Out: Classification and Its Consequences. 1st ed. The MIT Press, 1999.

Brooks, R. A. Intelligence Without Reason. Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 1991.

Clancey, W. J. “Situated action: A neuropsychological interpretation response to Vera and Simon.” Cognitive Science 17, no. 1 (1993): 87-116.

Clancey, W. J., M. Sierhuis, B. Damer, and B. Brodsky. “Cognitive Modeling of Social Behaviors.” Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation (2006).

Clark, Andy. Being There: Putting Brain, Body, and World Together Again. The MIT Press, 1998.

Cohen, Philip R., Jerry Morgan, and Martha E. Pollack. Intentions in Communication. The MIT Press, 1990.

Cole, M., and J. Derry. “We have met technology and it is us.” Intelligence and technology: The impact of tools on the nature and development of human abilities (2005): 209–227.

Dreyfus, H. L. “The Current Relevance of Merleau-Ponty’s Phenomenology of Embodiment.” The Electronic Journal of Analytic Philosophy 4 (1996): 1-16.

Dreyfus, H. L. What Computers Can’t Do. New York: Harper & Row, 1972.

Eco, Umberto. The Open Work. Harvard University Press, 1989.

Freud, Sigmund, and James Strachey. Civilization and Its Discontents, 2005.

Geertz, Clifford. The Interpretation Of Cultures. Basic Books, 1977.

Gentner, Dedre, and Albert L. Stevens. Mental Models, 1983.

Goffman, Erving. Frame Analysis: An Essay on the Organization of Experience. Northeastern, 1986.

Goffman, Erving. Interaction Ritual: Essays on Face-to-Face Behavior. 1st ed. Pantheon, 1982.

Goffman, Erving. The Presentation of Self in Everyday Life. Peter Smith Pub Inc, 1999.

Heidegger, M. “The Question Concerning Technology.” The Question Concerning Technology and Other Essays (1977): 3-35.

Herman, David. Story Logic: Problems and Possibilities of Narrative. University of Nebraska Press, 2004.

Hollan, J., E. Hutchins, and D. Kirsh. “Distributed cognition: toward a new foundation for human-computer interaction research.” ACM Transactions on Computer-Human Interaction (TOCHI) 7, no. 2 (2000): 174-196.

Holland, Dorothy, William, Jr. Lachicotte, Debra Skinner, and Carole Cain. Identity and Agency in Cultural Worlds. Harvard University Press, 2001.

Huizinga, Johan. Homo Ludens. 1st ed. Beacon Press, 1971.

Hutchins, E. Cognition in the wild. MIT Pr., 1995.

Hutchins, E. “How a cockpit remembers its speeds.” Cognitive Science 19, no. 3 (1995): 265-288.

Hutchins, E. “Learning to Navigate.” In Understanding Practice: Perspectives on Activity and Context, edited by S. Chaiklin and J. Lave. Cambridge: Cambridge University Press, 1998.

Johnson-Laird, Philip. Mental Models. Harvard University Press, 1983.

Juul, Jesper. Half-Real: Video Games between Real Rules and Fictional Worlds. The MIT Press, 2005.

Lakoff, George, and Mark Johnson. Metaphors We Live By. University Of Chicago Press, 1980.

Lakoff, George, and Mark Johnson. Philosophy in the Flesh : The Embodied Mind and Its Challenge to Western Thought. Basic Books, 1999.

Laurel, Brenda. Computers as Theatre. Addison-Wesley Professional, 1993.

Lave, Jean. Cognition in Practice: Mind, Mathematics and Culture in Everyday Life. Cambridge University Press, 1988.

Lebowitz, M. “Creating Characters in a Story-telling Universe.” Poetics (Amsterdam) 13, no. 3 (1984): 171-194.

Mateas, M. “Interactive Drama, Art and Artificial Intelligence.” University of California, 2002.

Mateas, Michael. “Semiotic Considerations in an Artificial Intelligence-Based Art Practice.” Dichtung Digital, no. 3 (2003).

McCall, George J. & J. L. Simmons. Identities and Interactions: An Examination of Human Associations in Everyday Life. Free Press, 1966.

McKenzie, Jon. Perform or Else: From Discipline to Performance. 1st ed. Routledge, 2001.

McLuhan, Marshall. Understanding Media: The Extensions of Man. The MIT Press, 1994.

Meehan, James Richard. “The Metanovel: Writing Stories by Computer.” Yale University, 1976.

Mitchell, W. J. T. On Narrative. 1st ed. University of Chicago Press Journals, 1981.

Mueller, Erik T. Commonsense Reasoning. 1st ed. Morgan Kaufmann, 2006.

Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. The MIT Press, 1998.

Nardi, B. A. “Beyond Bandwidth: Dimensions of Connection in Interpersonal Communication.” Computer Supported Cooperative Work (CSCW) 14, no. 2 (2005): 91-130.

Nardi, Bonnie A. Context and Consciousness: Activity Theory and Human-Computer Interaction. The MIT Press, 1995.

Newell, A. “Physical symbol systems.” Cognitive Science 4, no. 2 (1980): 135-183.

Newell, A., J. C. Shaw, and H. A. Simon. “Report on a General Problem-Solving Program.” Readings in Mathematical Psychology (1963): 41.

Newell, A., and H. A. Simon. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Norman, Donald A. The Design of Everyday Things. Basic Books, 2002.

Oakhill, Jane, and Alan Garnham, eds. Mental Models in Cognitive Science: Essays in Honour of Phil Johnson-Laird, 1996.

Oatley, K. G. “Emotions, rationality and informal reasoning.” Mental Models in Cognitive Science: Essays in Honour of Phil Johnson-Laird (1996).

Oatley, K. “The science of fiction.” New Scientist 198, no. 2662 (2008): 42-43.

Oatley, K., and P. N. Johnson-Laird. “Towards a Cognitive Theory of Emotions.” Cognition & Emotion 1, no. 1 (1987): 29-50.

Ortony, Andrew, Gerald L. Clore, and Allan Collins. The Cognitive Structure of Emotions. Cambridge University Press, 1990.

Ryan, Marie-Laure. Avatars of Story. 1st ed. Univ of Minnesota Press, 2006.

Schechner, R. Performance Theory. 2nd ed. Routledge, 2003.

Shore, Bradd. Culture in Mind: Cognition, Culture, and the Problem of Meaning. Oxford University Press, USA, 1998.

Suchman, Lucy A. Plans and Situated Actions: The Problem of Human-Machine Communication. 2nd ed. Cambridge University Press, 1987.

Sun, Ron. Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation. 1st ed. Cambridge University Press, 2008.

Sutton-Smith, Brian. The Ambiguity of Play. 1st ed. Harvard University Press, 2001.

Tomasello, Michael. The Cultural Origins of Human Cognition. Harvard University Press, 2001.

Vera, A. H., and H. A. Simon. “Situated action: Reply to reviewers.” Cognitive Science 17, no. 1 (1993): 77-86.

Vygotsky, L. S. Mind in Society: Development of Higher Psychological Processes. 14th ed. Harvard University Press, 1978.

Vygotsky, Lev S. Thought and Language. Revised ed. The MIT Press, 1986.

Watt, Ian. The Rise Of The Novel: Studies In Defoe, Richardson And Fielding. Kessinger Publishing, LLC, 2007.

Weizenbaum, Joseph. Computer Power and Human Reason. Penguin Books Ltd, 1984.

Wiener, Norbert. Cybernetics: or the Control and Communication in the Animal and the Machine. 2nd ed. The MIT Press, 1965.

Wilson, R. A., and A. Clark. “Situated Cognition: Letting Nature Take its Course” (2006).

Wiltshire, John. Recreating Jane Austen. Cambridge University Press, 2001.

Cognition, Practice, and Mathematical Oddities

[General,Projects] (11.05.08, 1:22 am)

One of my classes is cross-listed with an undergraduate course: Nancy Nersessian’s Cognition and Culture. One fun advantage of having a course with undergraduates is that they make a lot of interesting and occasionally profoundly brilliant observations. Not to say that we graduate students are incapable of insight, but we tend to be very bogged down by our own research objectives.

We have been discussing Jean Lave’s book, Cognition in Practice, and came to a segment where Lave discusses how mathematics is a cultural artifact, even though we view it as universal and supremely valuable. An example of this is that we “beam” the Pythagorean theorem into space, in the hope that, were the signal ever to be discovered by extraterrestrials, it would help communication, because mathematics is a universal language that transcends humanity. I didn’t find a source on this beaming precisely, but it seems like the sort of thing that people might do. Coming from a mathematical background, and moving into the complex and tricky field of cognitive science and cultural studies, I had very torn reactions to this conflict, and only realized how to articulate that reaction after the discussion ended. So, I present it here.

Mathematicians are extremely strange people. I don’t really identify as a mathematician anymore, but I still consider myself close to the culture, so I say this pridefully. The conclusions of mathematics are universal, and they are fundamental, but, and this is where things get difficult, these universal conclusions rely on premises. These premises are necessarily situational, and depend on other cultural factors. Furthermore, the practice of mathematics is also culturally relevant, and lots of mathematicians disagree, not on conclusions (a proof is a proof, after all), but on the relevance, importance, usefulness, and elegance of different practices of math. All of these terms are subjective, and while there are many common impressions of what elegance means, it is far from universal.

Generally, issues regarding the practice of math apply to topics more sophisticated than the Pythagorean theorem. The Pythagorean theorem has to be universal because of its simplicity, elegance, and ubiquity in almost all kinds of math that we use conventionally, right? Those aliens must use that kind of math too, right? Well, mostly. Even in this case the situation is ambiguous, and that ambiguity arises from the premises under which the Pythagorean theorem is valid, namely Euclidean geometry. If you are dealing with some other domain of planar geometry (most notably, spherical or hyperbolic geometry), then the Pythagorean theorem breaks down. It has analogues (which are quite elegant, I might say), but the existence of these alternative geometries, and the ways in which the theorem is modified, illustrate that our idyllic Euclidean world is not quite as simple or as complete as it first seemed. Space itself is non-Euclidean, according to both relativity and quantum mechanics. So perhaps the Pythagorean theorem may actually have something to do with our experience as humans on Earth, and may not be quite so transcendent after all.
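For the curious, those analogues are easy to state. For a right triangle with legs $a$, $b$ and hypotenuse $c$, on a sphere of radius $R$ and in a hyperbolic plane of curvature $-1/k^2$ respectively:

    \[
    \cos(c/R) = \cos(a/R)\cos(b/R), \qquad \cosh(c/k) = \cosh(a/k)\cosh(b/k).
    \]

Expanding $\cos(x/R) \approx 1 - x^2/2R^2$ and letting $R \to \infty$ recovers the familiar $c^2 = a^2 + b^2$: the Euclidean theorem is the flat-space special case, not the universal rule.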

For real transcendence, we need Gödel’s Incompleteness Theorem. That’ll do it for sure.
