Philip Agre: Computation and Human Experience

[Readings] (12.15.08, 11:27 pm)

Philip Agre is a rare breed. He is a strong advocate of embodiment and situated action, and is also an AI practitioner. Agre was enormously influential on Michael Mateas, among others. Agre is interested in developing an approach to AI that is critically aware, as well as reformulating conventional AI applications in a manner that might be described as situated or embodied. His view on deictic entities has enormous potential for application to an activity-centric approach to simulated characters.

Agre gives a good overview of the task at hand. His interest is in a change of approach within AI, grounded in both philosophical and technical arguments. He advocates the central idea of a critical technical practice. This idea has been very influential on Mateas and Sengers, in particular.

Introduction

Agre’s goal is to shift from cognition to activity. AI has long been mentalistic, claiming to be a model of cognition and pure abstract thought. However, AI in practice has tended to work best when applied to specific practices and activities (note Hutchins). Theorem proving is not cognition in general, but it is a specific activity which can be computationally formulated and modeled. A focus on activity tends toward modeling “situated, embodied agents.” (p. 4) The term “agent” applies to robots, insects, cats, and people, and draws strength from its ambiguity. Situation implies that an agent’s actions make sense within the context of its particular situation. For an agent to be embodied, it must simply have a body: “It is in the world, among the world’s materials, and with other agents.”

There is a review and comparison of the planning-based approach common to AI. Ordinary life has a sort of routine, and planning tends to make use of that routine. Agre posits that routines are emergent. I would argue, though, that in social situations, aspects of routine may have been emergent, but they often become institutionalized. There are three questions that operate around activity, and each of these is important to consider for activity-centered applications (p. 7); a toy sketch contrasting the two views as control loops follows the list:

  1. Why does activity appear to be organized?
    Planning view: activity is organized because of plans.
    Alternative: orderliness is emergent. Form of activity is influenced by representations, but not determined by them.
  2. How do people engage in activity?
    Planning view: Activity is planned and contingency is marginal.
    Alternative: Activity is improvised, contingency is central. People continually redecide what to do.
  3. How does the world influence activity?
    Planning view: The world is fundamentally hostile. Rational action requires attempts to anticipate difficulties and everyday life requires constant problem solving.
    Alternative: The world is fundamentally benign. Environment and culture provide support for cognition. Life is a fabric of familiar activity.
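
The contrast in question 2 is essentially architectural, so it can be rendered as two control loops. A minimal sketch, assuming a toy one-dimensional world of my own invention (none of this code is from the book):

```python
# Illustrative contrast (my gloss, not Agre's code): the planning view
# commits to a plan from an initial snapshot and executes it open-loop;
# the improvisational alternative redecides at every step from the
# current situation.

def planning_agent(position, goal):
    # Planning view: compute the entire plan up front, then execute it.
    # Contingency is marginal; a perturbation mid-run would go unnoticed.
    plan = ["right"] * (goal - position)
    for step in plan:
        position += 1 if step == "right" else -1
    return position

def improvising_agent(position, goal):
    # Alternative view: continually redecide what to do next from the
    # situation as it currently stands; contingency is central.
    while position != goal:
        position += 1 if position < goal else -1
    return position

# Both reach the goal in a benign, static world; they come apart only
# when the world changes underneath the agent.
assert planning_agent(0, 5) == 5
assert improvising_agent(0, 5) == 5
```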

AI work is itself a practice with its own values: getting computational systems to work. Systems that do not work lack value. This lends itself to the idea that only things that can be built can be believed. On the other hand, building is also a way of knowing. Agre argues that emphasis should be shifted from the technical product to the practice. The use of models and ideas is torn between science and engineering, both practices which led to the development of AI. Science aims to explain with models, whereas engineering seeks to build. AI straddles these two drives. Similarly, works of art exist on that same border: they are both explanations (expressions) and constructed products.

Metaphor in practice

There are two parts to Agre’s thesis: (1) AI is organized around the metaphor of internality and externality. The mind has an inside and an outside, with discrete inputs and outputs at the interface. (2) A better starting point uses a metaphor of interaction and a focus on activity. The important point is to be critically aware of the metaphors used in discourse about the practice.

There is an important nugget here: the technical criticism concerns a lack of self-reflection. A model is an interpretation and a language; it is a way of seeing things that is inherently metaphorical in nature. There is a double vocabulary, one for discussing the subject of the model, the other for discussing the model itself and its interaction with the software. This duality of discourse is reminiscent of Mateas. “Scientific inquiries based on technical modeling should be guided by a proper understanding of the nature of models. A model is, before anything else, an interpretation of the phenomena it represents. Between the model and the putative reality is a research community engaged in a discursive operation, namely, glossing some concrete circumstances in a vocabulary that can be assimilated to certain bits of mathematics. The discourses within which this process takes place are not transparent pictures of reality; nor are they simply approximations of reality. On the contrary, such discourses have elaborate structures and are thoroughly metaphorical in nature. These discourses are not simply ways of speaking; they also help organize mediated ways of seeing.” (p. 46)

Machinery and dynamics

The two metaphorical theories operational in AI research are mentalism and interactionism. Newell and Simon’s GPS seems interaction-oriented at the outset, but shifts focus onto abstract representation very quickly. GPS aims for a disembodied conception of the world. It equates objects with their representations, which is a dangerous phenomenological pitfall.

Agre proposes discussing computation in terms of machinery and dynamics. His goal is to discourage focus on machinery and instead emphasize dynamics. However, as presented, machinery is a useful metaphor. It implies situation within an environment: a machine has a physical presence, which steps away from raw functionalism. The term machinery is also very Deleuzian, which may be either positive or negative. Dynamics instead centers on interaction. Agre encourages us to stop fixating on computational machinery and to invent new dynamic effects rather than new devices.

Abstraction and implementation

Abstraction and implementation form an interesting dyad: functional definition versus physical construction. What is the relationship of these to representation? It is important to discern the levels of abstraction at work when constructing artifacts. Newell, Simon, and Shaw constructed GPS, which is not a model of cognition but a model of problem solving in a particular domain. There is a generative element in theory, common to the work of both Newell and Simon as well as Chomsky. Soar bills itself as a “unified theory of cognition,” but is still primarily concerned with abstraction. Minsky is a researcher who seems to take this into account: he claimed to present a constellation of mini-theories, which are in turn heavily situated. This approach emphasizes implementation rather than abstraction.

Dependency maintenance

There is a review of critical technical practice. Agre’s aim is not to break with the traditions of AI or start over, but to become critically aware of representation and computational machinery. The subject matter in Agre’s examples folds back to everyday activities. The goal is to see if the planning paradigm tells “reasonable stories” about everyday activity. Effectively, this means seeing whether the planning view of cognition reasonably accounts for everyday interactions. This involves a comparison of activity and routine.

  1. Different individuals use different routines for the same task.
  2. Routines emerge from repetition and practice.
  3. Novel activity is not routine.
  4. Routines are upset by circumstances.

Planning and improvisation

Hayes-Roth and Hayes-Roth: real planning is opportunistic and incremental. A relevant question about real-world planning is this: “How is it that human activity can take account of the boundless variety of large and small contingencies that affect our everyday undertakings while still exhibiting an overall orderliness and coherence and remaining generally routine? In other words, how can flexible adaptation to specific situations be reconciled with the routine organization of activity?”

Running arguments

The situation- and activity-oriented framework described as an alternative to planning is “Running Arguments,” or RA. The cycle of RA proceeds through the following steps (p. 175); a minimal sketch of the loop appears after the list:

  1. The world simulation updates itself.
  2. The periphery computes new values for the central system’s inputs, which represent perceptual information.
  3. The periphery compares the new input values with the old ones. Any newly IN input is declared a premise; any newly OUT input is declared no longer a premise.
  4. The central system propagates dependency values and runs rules. Both the dependency system and rule system continue to run until they have both settled.
  5. The periphery inspects the values of the central system’s outputs, which represent motor commands.
  6. The periphery and world simulation together arrive at a set of proprioceptive propositions (judgments about the success or failure of the agent’s primitive actions) and a set of motor effects (the immediate physical consequences of the agent’s actions).
  7. The world simulation updates itself again, and so on ad infinitum.
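
As a reading aid, here is a minimal runnable sketch of this cycle. The World, Periphery, and CentralSystem classes are hypothetical stand-ins of my own; Agre’s actual central system is a dependency network plus a rule system, which the IN/OUT premise handling below only gestures at.

```python
"""A minimal sketch of the Running Arguments cycle (p. 175).

All class and method names here are illustrative assumptions,
not Agre's implementation.
"""

class World:
    def __init__(self):
        self.time = 0
    def update(self):
        self.time += 1          # step 1: the world simulation updates itself

class Periphery:
    def sense(self, world):
        # Step 2: compute new IN/OUT values for the central system's inputs.
        return {"obstacle-ahead": "IN" if world.time % 3 == 0 else "OUT"}
    def act(self, world, commands):
        # Step 6: proprioceptive judgments plus immediate motor effects.
        feedback = {cmd: "succeeded" for cmd in commands}
        effects = list(commands)
        return feedback, effects

class CentralSystem:
    def __init__(self):
        self.premises = set()
    def assert_premise(self, name):
        self.premises.add(name)
    def retract_premise(self, name):
        self.premises.discard(name)
    def settle(self):
        # Step 4: propagate dependency values and run rules until both
        # have settled. (A real dependency system would do truth
        # maintenance here; this stub settles trivially.)
        pass
    def read_outputs(self):
        # Step 5: outputs represent motor commands.
        return ["swerve"] if "obstacle-ahead" in self.premises else ["advance"]

def ra_cycle(world, periphery, central, steps=6):
    old = {}
    for _ in range(steps):                               # step 7: ad infinitum
        world.update()                                   # step 1
        new = periphery.sense(world)                     # step 2
        for name, value in new.items():                  # step 3
            if value == "IN" and old.get(name) != "IN":
                central.assert_premise(name)             # newly IN: premise
            elif value == "OUT" and old.get(name) == "IN":
                central.retract_premise(name)            # newly OUT: retracted
        central.settle()                                 # step 4
        commands = central.read_outputs()                # step 5
        feedback, effects = periphery.act(world, commands)  # step 6
        old = new

ra_cycle(World(), Periphery(), CentralSystem())
```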

There is a set of principles regarding RA, discussed on p. 179:

  1. It is best to know what you’re doing. Executing plans derives from the symbols of the plan, not from an understanding of the situation. I think this suggests a model that makes use of the situation within the symbols themselves. Flexibility relies on an understanding of the situation and its consequences.
  2. You’re continually redeciding what to do. Much of the relevant input from the world is constantly changing, rather than remaining fixed.
  3. All activity is mostly routine. Activity is most frequently something which has been done before.

Representation and indexicality

Actions are about the world. This is an important idea! Aboutness ties into intentionality. At the same time, Agre argues that “world models are the epitome of mentalism. On its face, the idea seems implausible: a model of the whole world inside your head. The technical difficulties that arise are obvious enough at an intuitive level.” (p. 225) Knowledge representation is either mentalistic or platonic, and both reinforce the planning approach. Mentalistic models imply the existence of a world model in some memory state somewhere. Platonic models refer to nouns and items in purest abstract, appealing to universal qualities. Models of this nature resemble things like the Cyc project.

For a variety of reasons, I am very defensive of models, but world models touch on an extremely problematic area, specifically knowledge representation. An agent or character’s general knowledge, as well as its knowledge about other characters and the world, is extremely difficult to model coherently. Existing theories, especially those around mental models, tend to use propositional models, which do not seem appropriate. Agre’s alternative is indexical representation, which is relevant for situational circumstances but does not seem appropriate for larger-scale activity. Agre’s alternative approach derives from phenomenology, especially Heidegger and Merleau-Ponty.

Deictic representation

Deictic representation is a different kind of world representation. The examples given are immediately situational: “the-door-I-am-opening, the-stop-light-I-am-approaching, the-envelope-I-am-opening, and the-page-I-am-turning.” (p. 243) These are deictic entities, which are indexical and functional. They are also immediately given a perspective, as all of them contain some reference to the agent itself. I would argue that these deictic entities absolutely tie into models, but those models are situational ones. There is considerable challenge to the idea of deictic entities as symbols (note Vera & Simon), but their use is very different from that of conventional symbols.

The most important distinction between deictic entities and symbols is their situated and non-objective nature. “A deictic ontology, then, is not objective, because entities are constituted in relation to an agent’s habitual forms of activity. But neither is it subjective. These forms of activity are not arbitrary; they are organized by a culture and fit together with a cultural way of organizing the material world.” (p. 244) This phrasing anchors deictic representations in a cultural basis. Given this context, it would make sense for deictic representations to be variables in defining activities in a simulated world populated by agents.
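
To make the contrast with conventional symbols concrete, here is a speculative sketch (my construction, not Agre’s) of deictic entities as indexical-functional registrations: each entity is keyed by its role in the agent’s ongoing activity rather than by an objective identifier.

```python
# Speculative sketch: deictic entities as activity-relative registrations.
# The class and method names are my own illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DeicticEntity:
    role: str          # functional role, e.g. "the-door-I-am-opening"
    referent: object   # whatever currently occupies that role for the agent

class Agent:
    def __init__(self):
        self.registrations = {}   # active deictic entities, keyed by role

    def register(self, role, thing):
        # Bind whatever object currently fills the role in ongoing activity.
        self.registrations[role] = DeicticEntity(role, thing)

    def deregister(self, role):
        # When the activity ends, the entity ceases to exist for the agent.
        self.registrations.pop(role, None)

# Two agents opening different doors register distinct referents under the
# same culturally shared role: the representation is relative to each
# agent's activity, not entries in a single objective world model.
a, b = Agent(), Agent()
a.register("the-door-I-am-opening", "front door")
b.register("the-door-I-am-opening", "office door")
```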

Conclusion

Mentalism ties into psychic unity (note Shore). “Perhaps the most unfortunate cultural consequence of mentalism is its tendency to collapse all of the disparate inside phenomena into a single ‘mind’ vaguely coextensive with the brain.” (p. 307) Mentalism also lends itself to objective accounts of reasoning, which proclaim a kind of universality for the goal-oriented thought characteristic of Western philosophy.

Reading Info:
Author/Editor: Agre, Philip
Title: Computation and Human Experience
Type: collection
Context:
Tags: ai, specials, embodiment
Lookup: Google Scholar, Google Books, Amazon