icosilune

Archive: August, 2008

Here they come

[General] (08.29.08, 3:20 pm)

For the… few that read this research journal of mine with any regularity, I just want to give you a warning.

I am going to start moving my reading annotations in full from their old location to the new bibliography, which is implemented on top of WordPress itself. This will involve about 40 or 50 new posts all of a sudden. This isn’t new material exactly; it’s been around for quite a while, but oddly it does seem to generate, at the very least, a sense of change.

So, that’s it for the warning. Here they come!

Cognition and Multi-Agent Interaction

[Readings] (08.22.08, 10:28 pm)

Ron Sun: Prolegomena to Integrating Cognitive Modeling and Social Simulation

The goals described here overlap a great deal with my research as far as models and AI go. Sun provides an extensive discussion of literature in both cognitive modeling and social science. The important difference is that the work described here is focused on scientific applications, whereas my work is in expressive applications, where simulation is used to explain or express ideas rather than emulate reality. Cognition defined here is pretty wide, encompassing: “thinking, reasoning, planning, problem solving, learning, skills, perception, motor control, as well as motivation and emotion.”

Some bullet points expressing the chief questions asked by this intersection of studies:

  • How do we extend computational cognitive modeling to multi-agent interaction (i.e., to social simulation)?
  • What should a proper cognitive model for addressing multi-agent interaction be like?
  • What are essential cognitive features that should be taken into consideration in computational simulation models of multi-agent interaction?
  • What additional representations (for example, “motive,” “obligation,” or “norm”) are needed in cognitive modeling of multi-agent interaction?
  • What are appropriate characteristics of cognitive architectures for modeling both individual cognitive agents and multi-agent interaction?

On methodology and methods of simulation: Simulation develops and tests theories. An open question: is there a strong method of simulation to test or understand models? Sources to explore are Axelrod, 1997 and Moss 1999. Regarding the connection of simulation to cognitive science: Sun 2001.

Another reason for looking at social simulation, specifically, is that cognition is a sociocultural process. Note Lave 1988 and Hutchins 1995. This connects the idea of cognition with larger cultural and social meanings. “Only recently, cognitive science, as a whole, has come to grips with the fact that cognition is, at least in part, a sociocultural process. To ignore sociocultural processes is to ignore a major underlying determinant of individual cognition.” (p. 13) Sun mentions later that cognition emerged to satisfy needs and deal with environments, so cognition is necessarily situated and embodied (as opposed to abstract and symbolic). The environment is a part of thinking, and by extension, so must be other agents.

Discussing motivation, thinking, and existent structures, (a sort of intertwined cognitive triad), Sun explains: “The ways in which these three major factors interact can evidently be highly complex. It may therefore be argued on the basis of complexity, that the dynamics of their interaction is best understood by ways of modeling and simulation. One may even claim that the dynamics of their interaction can be understood only by modeling and simulation, as some researchers would. In this endeavor, computational modeling and simulation are the most important means currently available for understanding the processes and their underlying structure…” (p. 14)

There is a small note connecting models and theories: specifically referenced is van Fraassen, 2002, and a position called “constructive empiricism”. The position is that every model is a theory. This idea pulls back to the interesting relationship between models, theory, and practice.

The introduction is primarily concerned with connecting cognition of individuals to the larger scope of social science. Individual thinking at one level is necessary to witness coherence at a higher one. This means that if we wish to understand social science from a coherent perspective, we must look at individual agents and understand how they behave locally, rather than looking at society-wide graphs. This connects to the idea of policy in simulation of expressive systems: comparing a system controlled by a drama manager versus one that is character based.

Taatgen, Lebiere, and Anderson: Modeling Paradigms in ACT-R

This chapter discusses the ACT-R cognitive architecture. ACT-R seems to be designed for low-level modeling. The architecture seems to be heavily informed by the biological structure of the brain, using layers to handle different cognitive tasks. The system also employs an activation model for memory, which echoes the connectionist model of neural networks. However, oddly, the first example given of how the architecture performs is a control problem, a “sugar factory”. This is an extremely disembodied and disconnected abstract problem. It strongly resembles the sort of feedback cycles described by Wiener’s cybernetics. In later examples, the focus of the architecture is learning when to apply different clearly defined rules.

Wray and Jones: Considering Soar as an Agent Architecture

This section documents Soar as an architecture and as a general theory of intelligence. Right away, the authors begin making the claim that Soar can be used as a holistic and complete model of how everyone thinks, falling well within Alison Adam’s feminist criticism of AI paradigms. Supposedly most or many applications of Soar are intended to be models of specific domains, rather than cognition in toto.

The main features of Soar are thinking cycles, problem solving, and operators. One criticism of the architecture as described here could be the manner in which the problem space is highly disconnected from the actual context surrounding the problem itself.

Clancey, Sierhuis, Damer, and Brodsky: Cognitive Modeling of Social Behaviors

This essay discusses several aspects of social behavior. The paper starts by using a number of terms that should be familiar in the context of social science or sociology: roles, procedures, norms, etc. The paper is also concerned with the idea of collective cognition, which shifts the focus of investigation from goals towards behavioral patterns. This idea is strongly connected to activity theory. This references Lave and social extensions to cognition.

This study specifically looks at a (real world) NASA simulation of a Mars landing team, using extensive footage of the participants enacting the simulation in the FMARS station on Devon Island in the Canadian Arctic.

A key part of this study is the use of the Brahms model, which formalizes field observations for use in developing simulations. The approach used by this study includes a set of clear steps:

  • Understanding activities as patterns of what people do, when, and where, using what tools or representations;
  • Representing activities in a cognitive model using a subsumption architecture (i.e., conceptualization of activities occurs simultaneously on multiple levels);
  • Understanding that conceptualization of activities is tantamount to conceptualization of identity, “what I’m doing now,” which is the missing link between psychological and social theory (Clancey, 1997, 1999, Wenger 1998).
  • Simulating collective behavior in a multi-agent simulation with an explicit “geographic model” of places and facilities, using the Brahms tools.

The Brahms model is intended for real-life analysis of human behavior, but it is formal enough to extend into the simulated domain. The model is intended as a system for understanding group dynamics in the workplace. Most analytic models are descriptive, that is, they cannot be used for generation or simulation, so it is notable that the Brahms model does not fall into this category. The focus is on practice and observing what people do, though this encompasses emotion, attitude, and personality. Most shockingly, the Brahms model takes the approach that activity is the same as identity. Being is the same as doing in this case, which resonates deeply with Goffman’s theory of social performance.

The process of simulating the already recorded events is especially tricky, as the simulation must account for many uncontrollable human variations. There is the idea of simulation fidelity, which is the capacity of the computer simulation of the model to accurately recapture the behavior of the participants without doing anything weird, such as having them all stand up simultaneously at the end of a meeting. What arises again, though, is that the FMARS habitat is a simulation as well, and its participants are all performers. So what we have here is an electronic simulation attempting to simulate a performed human simulation. If we bring a theorist like Baudrillard into play, he would probably say that there is no way to actually capture real social behavior or activity, since it is all simulated anyway. However, there is still a gap between the human simulation and the virtual, and this is a gap that can be narrowed.

The way to more closely simulate the humans is to understand that social behavior is a necessary component of individual behavior. Additionally: knowledge is also hard to model. Finally: roles are improvised and are blurry. There are some interesting formal descriptions of the behavior rules. The interaction with the environment works via a number of perception functions and stored variables. The behavior is stateful, but according to cases.
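The description of perception functions, stored variables, and case-based stateful behavior can be sketched roughly as follows. This is a minimal illustration in Python, not Brahms syntax; all of the names, cases, and numbers are hypothetical.

```python
# A minimal sketch of a case-based, stateful behavior rule in the spirit of
# the description above. All names and thresholds are hypothetical.

class Agent:
    def __init__(self, name):
        self.name = name
        # Stored variables: the agent's persistent state.
        self.state = {"hunger": 0.0, "location": "workroom"}

    def perceive(self, world):
        # Perception functions read the (simulated) environment into state.
        self.state["meeting_in_progress"] = world.get("meeting", False)
        self.state["hunger"] = min(1.0, self.state["hunger"] + 0.1)

    def act(self):
        # Behavior is selected by cases over stored state, not a goal stack.
        if self.state["hunger"] > 0.8 and not self.state["meeting_in_progress"]:
            self.state["location"] = "galley"
            self.state["hunger"] = 0.0
            return "eat"
        if self.state["meeting_in_progress"]:
            self.state["location"] = "wardroom"
            return "attend_meeting"
        return "work"

agent = Agent("KQ")
agent.perceive({"meeting": True})
print(agent.act())  # prints "attend_meeting"
```

The point of the sketch is that nothing here is a goal: the agent simply reacts to its perceived situation, case by case, while carrying state forward.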

Another interesting thing in the study is the emphasis on biological needs. This makes sense for NASA, but it does not really apply to narrative in the cases that I am working on. It does however lend a certain natural credibility to the simulation, emphasizes the embodied nature of the subjects, and it echoes the decision to have biological needs expressed in The Sims. This has some interesting consequences, though: “The inclusion of biological motives in explaining human behavior provides an interesting problem for cognitive modeling. For example, consider KQ warming her drink in the microwave and then standing by the side of the table. There might be many explanations for this behavior: Her drink may be cold; she might be cold; her back may hurt; she may be bored with the meeting; someone at the table who hasn’t had a shower in a week may smell, etc. One doesn’t know her goals, aside from, perhaps, warming her drink. Even this may be a kind of convenient cover for accomplishing her ‘real intention.'”

There is a strong critique of rational frameworks present here. Simulation is generally concerned with advancing state, and not necessarily determining intention, although certain behaviors may strongly hint at intentionality. The cognitive model described by the Brahms framework (as well as Soar and every other AI framework; also note Cavazza) involves a top-down model of behavior. These presuppose goal- and structure-driven models, which may not be appropriate. Top-down models cannot accommodate human flexibility and ambiguity. This suggests that a situational and context-driven model is key to representing human behavior.

A final word on the modeling philosophy: Modeling “a day in the life” is a starting point, but on its own it is a pastiche (!), much like The Sims.

A connection to Newell’s perspective on cognitive modeling: Newell says that interaction is organized into isolated and discrete bands, which pulls back to rational goal-driven behaviors. This does not account for social norms and personal habits, which are essential to understanding social behavior.

Gratch, Mao, Marsella: Modeling Social Emotions and Social Attributions

The focus of this paper is on emotions with social elements, stemming from human interactions. These involve not only causality, but also intentionality and free will. The essence of this idea is developing a theory of social intelligence. The social interaction described here resonates with Geertz and Goffman. The goal in this paper is to develop a framework for modeling emotions.

There are several specific points to the cognitive model developed here, building from cited sources (Minsky, Oatley, and Johnson-Laird):

  1. How emotion motivates action
  2. How emotion distorts perception and inference
  3. How emotion communicates information about mental state

A tool used in this paper is appraisal theory, which explains that emotion arises from two sources: appraisal and coping. Appraisal itself is the process by which knowledge is understood and reacted to, and coping is the response to events, sometimes leading to change. A key author in developing the model here is Lazarus, 1991. Both the coping and the appraisal processes are complicated and feed back into each other significantly, so the study uses Soar to develop a model of the complex cycle between the two forces.

Appraisal is arranged into variables, and these are described:

  • Perspective: from whose perspective the event is judged
  • Desirability: what is the utility of the event if it comes to pass, from the perspective taken (i.e., does it causally advance or inhibit a state of some utility)
  • Likelihood: how probable is the outcome of the event
  • Causal attribution: who deserves credit or blame
  • Temporal status: is this past, present, or future
  • Controllability: can the outcome be altered by actions under control of the agent whose perspective is taken
  • Changeability: can the outcome be altered by some other causal agent

Beyond that, there are several types of coping strategies:

  • Action: select an action for execution
  • Planning: form an intention to perform some act
  • Seek instrumental support: ask someone in control of an outcome for help
  • Procrastination: wait for an external event to change the current circumstances
  • Positive reinterpretation: increase utility of positive side-effect of an act with a negative outcome
  • Resignation: drop a threatened intention
  • Denial: lower the probability of a pending undesirable outcome
  • Mental disengagement: lower utility of desired state
  • Shift blame: shift responsibility for an action toward some other agent
  • Seek/suppress information: form a positive or negative intention to monitor some pending or unknown state

This collection of coping strategies is really great, especially as it pushes beyond the classical AI scope of planning and action alone. It exposes a great deal of underlying potential and variety in modeling emotional behaviors. This also raises the question of how this sort of coarse structure might be defined against the fine granularity of simulation. Note that there is a great deal of importance placed on interpretation.

The decision/emotional cycle represented in the EMA application is as follows:

  1. Construct and maintain a causal interpretation of ongoing beliefs, desires, plans and intentions.
  2. Generate multiple appraisal frames that characterize the state in terms of appraisal variables.
  3. Map individual appraisal frames into individual instances.
  4. Aggregate instances and identify current emotional state.
  5. Propose and adopt a coping strategy in response to the current emotional state.
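The five-step cycle above might be sketched roughly like this. The appraisal derivation, the intensity measure, and the coping rules are all illustrative assumptions, not the EMA implementation.

```python
# A sketch of the five-step cycle: appraise, aggregate, then cope.
# All rules here (e.g. resignation when uncontrollable) are assumptions.

def appraise(beliefs):
    # Steps 1-2: derive appraisal frames from the causal interpretation.
    return [{"desirability": b["utility"], "likelihood": b["prob"],
             "controllable": b["controllable"]} for b in beliefs]

def aggregate(frames):
    # Steps 3-4: map frames to instances and pick the most intense one.
    return max(frames, key=lambda f: abs(f["desirability"]) * f["likelihood"])

def cope(frame):
    # Step 5: choose a strategy in response to the dominant appraisal.
    if frame["desirability"] >= 0:
        return "action"        # pursue the desirable outcome directly
    if frame["controllable"]:
        return "planning"      # form an intention to avert the threat
    return "resignation"       # drop the threatened intention

beliefs = [{"utility": -0.9, "prob": 0.8, "controllable": False},
           {"utility": 0.3, "prob": 0.5, "controllable": True}]
print(cope(aggregate(appraise(beliefs))))  # prints "resignation"
```

Even in this toy form, the shape of the cycle is visible: emotion is computed from appraisals of state, and coping feeds back into what the agent does next.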

Note that the current implementation emphasizes task oriented goals. This relates to the general criticism of Soar and planning-based AI paradigms. The authors mention that the selection of tasks does not account for social norms and standards, and propose a model of dis-utility to associate with breaking these, but it still involves a utility based model, which does not seem to be a satisfying solution.

Attribution theory connects to the idea of intention and responsibility, which might better handle credit and blame. The theories on this fall under Shaver and Weiner. However, embedded into the implementation model described is the ever present figure of authority. Attribution is of great importance in a chain of command, but exact attribution is never really used in real circumstances. Every action in this model contains not only the performer, but also the individual who coerced or ordered the action. This model is appropriate in a military simulation, but carries an undesired value in other circumstances.

The authors give a very significant and powerful logical model of attribution theory, based on a set of primitive logical functions, axioms, and rules for discerning attribution. These all hold from a rational perspective, which again makes sense in a serious application where the logic is meant to solve problems and discern information, but not in an expressive application. The entire layout forms a powerful attribution framework, but with the idea of increasing complexity and partial or faulty information, the idea breaks down for other social simulations.
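A toy version of a rule-based attribution judgment in this spirit, where blame follows intention, causation, and coercion, might look as follows. The predicates and the rule itself are my own illustration, not the authors’ axioms.

```python
# A toy attribution rule: blame follows intention, causation, and coercion.
# The predicates and the rule are illustrative, not the paper's axioms.

def blame(event):
    """Return the agent who deserves blame for a negative outcome, or None."""
    performer = event["performer"]
    coercer = event.get("coerced_by")
    # If the performer was ordered to act, responsibility shifts up the chain.
    if coercer is not None:
        return coercer
    # Otherwise blame the performer only if the act was intended and caused harm.
    if event["intended"] and event["caused_harm"]:
        return performer
    return None  # accidental outcomes attract no blame under this rule

order = {"performer": "sergeant", "coerced_by": "lieutenant",
         "intended": True, "caused_harm": True}
print(blame(order))  # prints "lieutenant"
```

Note how even this tiny rule set encodes the chain-of-command assumption criticized above: every action record carries a slot for the agent who ordered it.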

Reading Info:
Author/Editor: Sun, Ron
Title: Cognition and Multi-Agent Interaction: From Cognitive Modelling to Social Simulation
Type: collection
Context: This is about cognitive modeling and simulation, and reviews some technology that has been used in current work. This is relevant for directing work, but also for seeing where embedded value systems permeate current AI and cognition research.
Tags: specials, ai, simulation, social simulation
Lookup: Google Scholar, Google Books, Amazon

Daniel Mackay: The Fantasy Role-Playing Game

[Readings] (08.22.08, 3:34 pm)

Overview:

Mackay’s ultimate goal in this work is to develop a legitimate framework for interpreting role playing games as a performative art form. His analysis covers the cultural, formal, social and finally aesthetic structures of RPGs. Mackay is interested in the artificial world and the networks of meaning established by the performance of the games. Performance is the key element that makes role-playing an aesthetic subject.

Mackay’s arguments in the later stage of the book situate the idea of role-playing in history as a natural consequence of the desire to recreate a sense of lived otherworldliness which has been suppressed since the age of Enlightenment. Role-playing leads to the development of deep and personal imaginary worlds, which become artistic objects or artifacts when recalled in memory.

This book ties together the diverging threads of performance, simulated worlds, and the expressive power of participation and interpretation.

Notes:

The introduction frames the goal and direction of the study: analyze role-playing games as an art made up of both role-playing and game. The ultimate value of the game comes as performance, which has its own aesthetic background. The “Performance of the role-playing game brings the game into existence, and it is therefore of the foremost importance.” This study is not a poetics, which would describe how to create games, but an aesthetics, which defines a system of analysis of games. The analytic structure for games is taken from Eric Zimmerman and Frank Lantz. A game is split into three dimensions: formal, social, and cultural. Mackay adds a fourth dimension, the aesthetic dimension.

The word “narrative” has been used to describe the over-arching story of the game. This makes sense, as the narrative is the history of the game, and the use of the term is supported by the fact that it is verbal telling that is used to drive the game. This construction operates against the sense of archetypical dramatic narratives: narrative here is something that is told (enacted) and then re-told (described).

Cultural Structure

Mackay opens his discussion of the cultural structure of role-playing games by examining their history and origin in war games, which arose in the early 1800s with “Kriegspiel”. Most traditional games, i.e., card or board games, are zero-sum games. War games diverged from this model with a nonzero-sum approach, which led to interesting dynamics using cooperation and subterfuge.

There is a very explicit connection between RPGs and literature. RPGs that emerged out of the wargaming tradition were heavily anchored in settings defined by pulp literature. Mackay writes an equation that sums this up: “Fantasy Literature + Wargames = Role-Playing Games”. This connects to the idea of models, but we do not yet see a connection of the world to game mechanics. We also primarily see fantasy, sci-fi, and pulp literature used here, as opposed to settings that are more “highbrow”. Literature forms the world and the cultural frame of meaning around the game itself. This point connects very strongly to Jenkins (and possibly Michel de Certeau), and encourages the idea that fans are co-opting these cultural artifacts.

The emergence of D&D in the 60s and 70s is also rooted in American culture, exposing a sort of backward-looking nostalgia for a pre-technological era, and a setting without the moral ambiguity present in the political climate. The desire for a clear distinction between good and evil was not satisfied by the “mirages of communism versus free world, cowboy versus Indian, and good guy versus bad guy that permeated the political rhetoric and cultural climate of the 1950s.”

The influence of role-playing games and culture is the most evident in computer games and especially online environments from MUDs to MMOGs. The importance specifically relates to world setting and the interpretation and cultural meaning thereof. Mackay pulls Baudrillard into this connection using the idea of the “semiosphere”, an atmosphere of signs. RPGs regurgitate cultural myths, narratives, and world settings.

The aesthetic of fantasy is the depth of detail and setting. The fiction can be so detailed that it can be imagined. This idea is not about realism, but the impression of reality, and it connects back to immersion. Player engrossment is through the character; the player co-constructs the fantasy through his or her own imagination. Electronic games cannot do this because they establish a role that opposes the player. Human imagination is stifled when presented with observable detail. There must be something about writing and fiction especially that enables this.

Formal Structure:

An interesting connection is made here to amusement parks and “themed entertainment”. The player as spectator model is consistent with how I personally run games, but it is not universal. A connection is made to the aspect of simulation, specifically through Barthes and Baudrillard: The idea is that the logic internal to the game comes to have a life of its own, and detaches from both the real world and its origins. That is, a game world may have originated there, but it no longer lives in sourcebooks, it comes to have a mythology and life detached from physical anchors. Uri Rapp wrote explicitly on this in “Simulation and Imagination, Mimesis as Play” in 1984.

Mackay brings up the interesting example of Everway, which is about “Visionary Role-playing”. This has an abstract conceptual ambiguity that is diametrically opposed to D&D’s rigorous attention to mathematics. It represents a contrast with Cartesian and non-Cartesian thinking. In Everway, aesthetics are incorporated into the formal structure.

Connecting Schechner on performance: Rules guide a performance through constraint, creating safety and security. Note that this is entirely consistent with conversation with Miashara earlier. The RPG narrative is created by performance. This is interesting to compare with other game studies, the relationship between performance and play. The difference between Schechner and the RPG model has to do with the code that defines the performance: “The role-playing game exhibits a narrative, but this narrative does not exist until the actual performance. It exists during every role-playing game episode, either as a memory or as an actual written transcription by the players or game master. It includes all the events that take place in character, nonplayed character backstories, and the preplayed world history. It never exists as a code independent of any and all transmitters, like Schechner’s definition for drama suggests.” (p. 50)

There is some discussion connecting Goffman and framing to the levels at work in games. This describes the various principles and rules and forces that are at work in guiding the game experience. Drama is described as a force that operates on the game at a meta-level. This is explicitly stated in Everway. While players may be aware of the dramatic force at work, the player characters are not. This enforces the notion that drama is simulated like any other rule. This makes an interesting connection to drama managers, which operate on a meta-level in a very similar way.

Social Structure:

Performance and experience exist in all frames simultaneously. The character/player exists in all these levels as well, and identity blurs as the levels meet. Mackay is using Schechner to critique the borders of frames as defined by Goffman, Fine, and Gregory Bateson. Schechner argues that ritual takes place on a level that transcends the frames of interaction.

This section is also called “The Structural Foundation of the Role-Player’s Subjectivity”, which echoes Bogost’s description of simulation, as the gap between the rule-based representation of a world and the player’s subjectivity. The player’s subjectivity in this case also represents the agency of the player to co-construct the game world. Drawing on Barthes, Mackay argues that role-playing games function by exposing the construction of meaning. He muses that the religious right has reacted strongly against role-playing games because they represent a world where people give meaning to things and “try to render intelligible the process behind creation.” (p. 68) The creation of meaning is driven by “blanks” as described by Wolfgang Iser.

Game worlds and game culture take on the idea of speculative or fantastic recreation. Fantastic recreation is what drives the “global village” of the Epcot Center. This connects with Bakhtin’s idea of the desire for an alternate or unofficial culture, which also sounds connected to utopian desire.

The relationship between culture and gaming: constructed characters are reflections/echoes of existing culture, like Deleuzian assemblages. An interesting concept mentioned here is the notion of the “decontextualized tropes” or “fictive blocks,” which are tiny bits of culture that can exist without context. Fictive blocks are essentially instances of meaning in a sound-bite culture. To explain how these are used, Mackay references Arnold Van Gennep (1908), who describes stages of separation, liminality, and reincorporation, which are used in rites of passage. The three-stage process applies here to fictive blocks in cultural artifacts. A sound bite or image or idea might be taken from a fictional work, then isolated and disconnected from its context, and later reincorporated into some other creative material. This idea connects again very strongly to de Certeau and Jenkins.

Mackay brings up Foucault to describe power relations in role-playing. The space of the role-playing game is an interesting target for studying power play, especially given the absolute power of the game master. However, this idea can also be applied in an interesting way to power in electronic games. In electronic games, the player has no real power, but is also not surveilled; given a certain autonomy, players have massive freedom. The level of personal relation in RPGs allows for odd features that relate to discipline. In a role-playing game, the player is certainly compelled to behave with a certain level of discipline, especially in terms of keeping in character and observing social standards. In electronic games, both the online and offline varieties, players have no such compulsion, and will behave very rudely, inconsistently, and incoherently. The strongest example of this is when players attempt to push the limits of a game and break it. There is a strong cultural tradition of this, but it is something that will not be tolerated in role-playing sessions, even when the game mechanics allow for abuse of in-game power. Why this is the case is a deep and complex question.

Aesthetic Structure:

The aesthetic structure is a necessary component of role-playing games. Discussing the engrossing and enchanting power of other types of games, Mackay writes: “The role-playing game performance shares these structures with other activities. However, it also participates in a fourth structure, an aesthetic commonly attributed to art: a cathartic structure that encourages identification with its content and that persists after the performance has disappeared. This structure is at once a social process, a cultural process, and a formal process, but it is also something more. It is the creation of an aesthetic object that results from the collective interpretive process of the role-playing game performance.” (p. 122) This also exposes some of the lack in electronic games: players have control over the social, cultural, and formal levels of experience, but are not able to contribute to the aesthetic structure.

Connecting art and theatricality: According to Michael Fried, art is opposed to theatricality. Modernist (literalist) art takes the extreme position of reducing art to pure objects. For instance: a painting is just paint on canvas. Fried describes Tony Smith’s car ride on the New Jersey Turnpike before its completion. The idea here is that kinetic, immersive, explosively imaginative experiences work towards the aesthetic of the role-playing game narrative. The aesthetic is the residue left behind in memory after the experience has passed.

In describing historical reenactment, Mackay connects once more the world of literary fiction and wargaming. “I see this moment, when the increasing aestheticization of the war gaming narrative finally culminated in the development of role-playing game performance form, as a reaction to the poverty of the imagination that emptied the architecture of everyday life of any meaning and the scarcity of vision that burdens contemporary philosophy and literature. The imaginative faculty is a built-in function of the human organism–the equivalent to pulses of the heart or respiration of the lungs. If a people do not find that faculty fulfilled in the world they have been handed, they will build their own.” (p. 153)

Reading Info:
Author/Editor: Mackay, Daniel
Title: The Fantasy Roleplaying Game
Type: book
Context: Mackay analyzes the role-playing game in cultural, formal, social, and aesthetic levels. Various parts of his analysis connect strongly to electronic games.
Tags: specials, roleplaying, performance, games
Lookup: Google Scholar, Google Books, Amazon

Allen Newell: Physical Symbol Systems

[Readings] (08.21.08, 11:16 am)

Symbol systems are the most important development in recent work in cognitive science, linguistics, and psychology. An interesting note: “Thus it is a hypothesis that these symbols are in fact the same symbols that we humans have and use everyday of our lives. Stated another way, the hypothesis is that humans are instances of physical symbol systems, and, by virtue of this, mind enters into the physical universe.” (p. 136) This treatment is important because it establishes the symbol as the medium by which the mind interacts with the physical world. Secondly, it seems to echo the notion of the Jungian symbol. Though this is probably not what Newell intends to connect, the Jungian symbol exists at a cultural and semantic level, and could be used to extend symbol manipulation to be more of a trans-cognitive phenomenon.

Symbol systems have their roots in mathematical logic, as well as in previous work in philosophy, linguistics, etc. Newell mentions Whitehead specifically on the importance of symbols. The work on symbols exists in parallel between cognition and computer science. Newell is arguing simultaneously for the importance of symbol processing in computation, and also in human thought. These two are intrinsically linked in his argument, making AI a natural conclusion. This link extends from a broad cultural history of likening humans to machines in thought and function, stemming from Cartesian dualism. The human-created machine, through mathematics, is something that reaches toward the platonic ideal of pure disembodied meaning. The proclamation of AI is a natural conclusion from this thinking, where human thought belongs in this world of perfected formalism. That Newell should conclude that physical symbol systems are “simply evident” is a continuation of this mode of thought.

Formal definition of a symbol system “SS”: memory, operators, control, input, and output. Memory is a list of symbol structures, or expressions. An expression is a list of symbols, with a type and roles associated with the symbols. Newell writes an expression formally: (Type: T, R1:S1, R2:S2, … Rn:Sn). The number and the roles depend on the type, and the symbols may be repeated. An operator takes some symbols as an input and produces some symbols as an output. The symbol system has several operators, which seem to relate to classical computer IO and memory operations: assign, copy, write, read, execute, exit if, continue if, quote, behave externally, input. No example is given of this system in operation, so we cannot easily see how these properties will cause the system to behave.
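Since no worked example appears in the text, here is a minimal sketch of how the formalism might be rendered in code. The class and method names are mine (hypothetical), and only a few of Newell's operators are shown:

```python
# A minimal sketch of Newell's SS formalism. Class and method names are
# mine (hypothetical); only a few of the operators are implemented.

class Expression:
    """An expression (Type: T, R1:S1, ..., Rn:Sn): a type plus role->symbol pairs."""
    def __init__(self, type_, **roles):
        self.type = type_
        self.roles = roles  # role name -> symbol

class SymbolSystem:
    def __init__(self):
        self.memory = {}  # symbol -> the expression it designates

    def assign(self, symbol, expression):
        """assign: make a symbol designate an expression."""
        self.memory[symbol] = expression

    def read(self, symbol, role):
        """read: retrieve the symbol filling a role of a designated expression."""
        return self.memory[symbol].roles[role]

    def copy(self, src, dst):
        """copy: make a new symbol designate a duplicate of the structure."""
        e = self.memory[src]
        self.memory[dst] = Expression(e.type, **dict(e.roles))

ss = SymbolSystem()
ss.assign("E1", Expression("pair", first="A", second="B"))
ss.copy("E1", "E2")
print(ss.read("E2", "first"))  # -> A
```

Even this toy version makes the point that operators consume and produce symbols, and that memory is nothing but designations; what it cannot show is the behavior Newell leaves unillustrated.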

The symbol system described seems to be fairly “garden-variety”, but has the property of universality. This seems to be more than computational universality, relating also to interaction and responsiveness with input and environments. Newell compares this to Wiener’s cybernetics, where systems used feedback to appear purposive. Newell points out that SS is limited in its ability to behave in the world and to input symbols. The most significant limitation that he describes is the limitation of computation, and the existence of non-computable functions.

Newell continues to express concern over this, describing computational universality and relating the capacity of symbol systems to the Church-Turing thesis. It seems, though, that this is getting beyond the problem of cognition. Cognition is about how humans think, and, by virtue of being physical ourselves, we are limited by the laws of computation. The concern over completeness seems unfounded to me. The extreme generality of Turing’s minimal functions seems to imply that almost any symbol system is bound to be universal.

Further embracing the idea of universal machines, Newell forms a definition of symbol systems: “Symbol systems are the same as universal machines.” (p. 154) This argument goes in the direction that symbol systems and universal machines are equivalent, or that they can simulate each other.

Applying this principle: the purpose of symbols is in their ability to signify or stand for something, which Newell describes as the process of designation. Newell mentions several other words: reference, denotation, naming, standing for, aboutness, or even symbolization or meaning. “The variation in these terms, in either their common or philosophic usage, is not critical for us.” I find this casual rejection very fascinating, because in the relation between cognition and the various forms of symbols, especially in terms of metaphor (and in Peircean semiotics: symbols, icons, and indices), these variations of meaning are extremely important. The designation intended by Newell is a mechanism used in the means by which one universal machine might represent and simulate another.

The other capacity is interpretation, the ability to derive symbols from given input. Later, discussing assignment, Newell mentions some examples of symbols that might be used to designate things. Symbols processed by machines must be totally and fundamentally arbitrary, even though the words and symbols used by humans in various contexts encode a great deal of information into the symbol itself. One particular example is the word “unhappy” which references the symbol “happy”, even though associated meanings may be more than mere opposites. Also mentioned are labeling conventions in geometry. These sorts of conventions are exactly the type of cognitive extensions that are encouraged by others.

The physical symbol system hypothesis: “The necessary and sufficient condition for a physical system to exhibit intelligent action is that it be a physical symbol system.” (p. 170) General intelligent action “means the same scope of intelligence seen in human action: that in real situations behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some physical limits.” This does hold and is backed up logically, but fails to describe further the context of the ends of the system or the demands of the environment. These details are things that must be carefully constructed and supplied. Furthermore, the hypothesis specifically addresses the idea of rationality, which is a loaded and biased term. The hypothesis focuses on rationality in preference to a more general “phenomena of mind”. This is described as a preference, but it is also a severe limitation, as humans are not necessarily rational.

Representation and knowledge also get a special treatment. Representation is the quality by which symbols might map from aspects of one object to aspects of another. The idea is that the symbol system has an image of the object in its symbolic structure. This is a useful concept, but is subject to a great deal of value judgements and concerns in terms of formulating the structure of representation. The relation to knowledge is framed in an equation: “Representation = Knowledge + Access”.

Reading Info:
Author/Editor: Newell, Allen
Title: Physical Symbol Systems
Type: article
Context: Newell is one of the establishing voices in AI, and helped to pioneer traditional symbolic AI.
Tags: ai
Lookup: Google Scholar

Wilson & Clark: How to Situate Cognition

[Readings] (08.20.08, 10:18 pm)

Understanding the idea of “situated cognition” by comparative terms. Concludes that the definitive characteristic is *cognitive extension*. Embodied cognition is a continuation of this idea.

History of cognition: Individualistic cognition, mind alone, wedged between perception and action.
Approach also suggests that cognition takes place on the features of symbols, as opposed to features of the individuals themselves.
Significant works in the individualistic thread include Fodor, Pylyshyn, and Newell and Simon. A pure example is the Cyc project, which (as we know from Alison Adam) promotes a sort of “view from nowhere”, presenting an assumed identity.

Putnam and Burge challenged the individualistic perspective, leading to the perspective of taxonomic externalism, and these have been extended to more radical theories of externalism. Externalism pushes the idea that cognition extends beyond the individual thinker, even past the flesh and into the environment. Once externalism is introduced, and cognition is pushed outside of the brain (or abstract symbol system) into the body, it becomes a slippery slope to determine where the edge of cognition stops. This naturally leads to a diversity of conflicting theories.

Extended computation (or wide computationalism, a synonym) is not a severe departure from computation, but simply an extension thereof. These are ideas that look at computation as taking place spread across an environment, as opposed to inside the skull of the thinker. Computationalism is not incompatible with situated or extended cognition; rather, those extend from it.

Examples of extensions are bodily extensions, technology, and prosthetics. Symbolic thinkers have also used the analogy of the prosthetic, but emphasized prostheses of the mind, e.g. Vannevar Bush. This idea has been continued by Don Norman with the notion of affordance. This is described in Wilson and Clark as “cognitive augmentation”. Part of the issue with this is how augmentation extends, but also restricts. Augmentation fits with a planning-oriented perception of the world, but stumbles when faced with the idea of expression or limitations (a hammer encourages you to think in terms of nails).

Social structures are mentioned. A crucial example is writing (in terms of cognitive supplement).

An example given is the “task specific device” or TSD. This is something that exists either in the environment or in the actor, and is used to enable certain types of action. Like the case with prosthetics, it promotes an instrumental and intentional model of behavior. Related to TSDs are “transient extended cognitive systems” or TECSs, which seem to be ways of approaching cognition on a per-context basis. The idea of the TECS is similar to the tool oriented approach, but seems to be much more flexible and free-form.

Looking at the boundary between cognition and non-cognition. An argument against extended cognition is the “Dogma of Intrinsic Unsuitability”, which states “Certain kinds of encoding or processing are intrinsically unsuitable to act as parts of the material/computational substrate of any genuinely cognitive state or process”. At odds with Intrinsic Unsuitability is the “Tenet of Computational Promiscuity”, which is the property of computation to spread out across many parts of mind and body.

Another challenge to extended cognition is embedded cognition, which claims that cognition is embedded in things external to the body. One idea in this is memory. Other ideas concern the confusion associated with examining changing thinker-plus-tool combinations as cognitive subjects. The authors dispute this because embedding implies a hierarchy and order which is missing in application.

Reading Info:
Author/Editor: Clark, Andy and Wilson, Robert
Title: How to Situate Cognition: Letting Nature Take its Course
Type: article
Context: Gives a background and argument towards extended cognition. This notion is very useful for rationalizing contextualized situational behavior in AI-based agents, especially relating to the believability of The Sims, etc.
Tags: ai, embodiment
Lookup: Google Scholar

Michael Mateas visit

[General,Talks] (08.20.08, 7:27 pm)

Michael Mateas visited today, and gave a presentation about his Expressive Intelligence Studio at UC Santa Cruz. The project is about automated game design, which is interesting, since that was one of the original goals of my MS research, before I turned it into a space generation thing. Basically, this idea is something that would support formal game studies by exposing and finding new ways to put together mechanics. It also encourages thought about design at a meta level, reasoning about types of mechanics and how they can be put together. The existing work doesn’t do much yet, but it looks like it might yield some interesting results.

The work involves four layers of work:

  1. Game mechanics: State and state evolution. The actual mechanisms by which state is represented and can advance are part of a larger meta-model.
  2. Concrete representation: Audio, visual elements that represent the mechanics to the player.
  3. Thematic content: Real world references, common sense associations. This makes the game meaningful outside of a purely symbolic context.
  4. Control mappings: User interaction and verbs.
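As a rough illustration, the four layers could be captured as a single data structure describing one game. The field names and the example content here are my own hypothetical labels, not the EIS representation:

```python
from dataclasses import dataclass

# A rough illustration of the four-layer decomposition as data. Field
# names and example values are hypothetical, not the EIS representation.

@dataclass
class GameDesign:
    mechanics: dict       # layer 1: state and state-evolution rules
    representation: dict  # layer 2: audio/visual elements bound to mechanics
    theme: dict           # layer 3: real-world concepts bound to mechanics
    controls: dict        # layer 4: player verbs mapped onto mechanics

chase_game = GameDesign(
    mechanics={"state": ["player_pos", "quarry_pos"], "rules": ["move", "catch"]},
    representation={"quarry": "rabbit_sprite.png"},
    theme={"chaser": "fox", "quarry": "rabbit"},  # something worth chasing
    controls={"arrow_keys": "move"},
)
```

The point of the decomposition is that each layer can be reasoned about (or generated) separately, while consistency constraints tie them together.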

The starting point for the EIS lab was to look at the thematic content, which is arguably the hardest part of the problem. This bit has to make meaningful associations between game mechanics and the underlying concepts. For instance, if the game is about chasing, the player can either chase or be chased, and whatever is being chased must be something that someone would have reason to chase. The associations here were defined via Open Mind, ConceptNet, or WordNet. I forget exactly which combination of these was used. The goal was to enforce consistency in game mechanics with the thematic concepts. The result of this was unfortunately rather messy and somewhat absurd in a lot of cases, due to conceptual slippage.

The more interesting area of work I found was in the reasoning about the mechanics themselves. This was done via the event calculus, which has been described very effectively by Erik Mueller. The event calculus can reason about events and states, and can be used under the hood to restrict the types of states that can be reached by a given set of game mechanics. Essentially, the calculus can be used to define a suite of invariants, almost like unit tests, and test these on a given set of mechanics, allowing a designer or an automated tool to modify the mechanics quickly and find out whether the invariants are met.
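The invariant-checking idea can be sketched without any event calculus machinery: enumerate the states reachable under a set of mechanics, then assert properties over them, like unit tests. Everything below (the rules, the track, the helper names) is a hypothetical toy, not Mateas's system:

```python
# A toy sketch of invariant checking over game mechanics (not the EIS
# system; rules and names are hypothetical). Mechanics are modeled as
# state-transition rules; invariants are predicates that must hold in
# every reachable state.

def reachable_states(initial, rules, limit=10000):
    """Breadth-first enumeration of all states reachable under the rules."""
    seen, frontier = {initial}, [initial]
    while frontier and len(seen) < limit:
        state = frontier.pop(0)
        for rule in rules:
            for nxt in rule(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# Example mechanics: a chase on a circular track of 5 cells.
TRACK = 5

def move_player(state):
    p, q = state
    return [((p + d) % TRACK, q) for d in (-1, 1)]

def move_quarry(state):
    p, q = state
    return [(p, (q + d) % TRACK) for d in (-1, 1)]

states = reachable_states((0, 2), [move_player, move_quarry])
# Invariant, checked like a unit test: both positions stay on the track.
assert all(0 <= p < TRACK and 0 <= q < TRACK for p, q in states)
```

Changing a mechanic (say, letting the player jump two cells) and re-running the assertions gives the quick modify-and-check loop described above; the event calculus does this symbolically rather than by brute enumeration.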

Food for further investigation.

Keith Oatley: The Science of Fiction

[Readings] (08.19.08, 1:51 pm)

This article describes a study done by Oatley and some others on the cognitive effects of reading fiction. The study finds that fiction specifically enhances the ability of readers to empathize and understand emotions. The suggested reason why this occurs is because in reading fiction, the reader simulates the characters mentally, and thus builds a better model of human emotions.

The article does not address more specific qualities, such as how the reader simulates and how knowledge is gained from this. Some open questions I might have are whether the reader is absorbing the protagonist’s emotions as the correct ones, or if the reader is vicariously experiencing the situations and merely correlating his or her own emotions with those of the protagonist. I would lean towards the latter, but the question is open.

The study specifically finds that there is a distinction between this empathy when the story is rendered as a documentary versus fiction. This suggests that there is something special about fiction that enables a certain kind of empathetic processing. Another open question is what is so special about fiction? A possible answer is that fiction frames a situation as a safe cognitive playground where the reader can choose how to experience certain roles. A documentary misses this because it frames the situation as factual, thus restricting the reader’s freedom to “experience as”.

Oatley explains: “In our daily lives we use mental models to work out the possible outcomes of actions we take as we pursue our goals. Fiction is written in a way that encourages us to identify with at least some of the characters, so when we read a story, we suspend our own goals and insert those of a protagonist into our planning processors.”

The idea presented here is directly in line with the notion of simulation, developing an imaginary frame and executing it. This idea continues:

“This is why I liken fiction to a simulation that runs on the software of our minds. And it is a particularly useful simulation because negotiating the social world effectively is extremely tricky, requiring us to weigh up myriad interacting instances of cause and effect. Just as computer simulations can help us get to grips with complex problems such as flying a plane or forecasting the weather, so novels, stories and dramas can help us understand the complexities of social life.”

Interesting things can be extended from this: Roleplaying and games especially. Roleplaying has been demonstrated to have uses in therapy, and it has been suggested in several places that it helps the players develop themselves emotionally (a conclusion I can vouch for based on personal experience). However, both of these have the capacity to be non-developmental, discouraging critical and emotional reasoning. This conflict resembles the conflict framed between Turkle’s view of games and computers as evocative, versus other critiques of games and geek culture as reactionary and exploitative.

That aside, the study still finds significant positive power within fiction, and connects it to the ideas of modeling and simulation.

Reading Info:
Author/Editor: Oatley, Keith
Title: The Science of Fiction
Type: article
Context:
Journal: New Scientist
Extra: Keith Oatley's homepage (http://hdap.oise.utoronto.ca/oatley/)
Source: source
Tags: narrative, fiction, specials, simulation
Lookup: Google Scholar

Epstein and Axtell: Growing Artificial Societies

[Readings] (08.19.08, 12:54 pm)

Overview

This book documents some experiments in “artificial societies”. The primary idea behind the book is being able to *grow* social behavior. The mission is broad and its goal is to model ideas from social science, and see them simulated and played out over time. While the approach that is used subsequently borrows from mathematics and artificial life, the chief ideas originate from social science.

The research behind the book is funded by the 2050 Project (run by the Santa Fe Institute, the World Resources Institute, and the Brookings Institution), whose mission is to investigate a sustainable global system.

All of the conclusions in the book derive from running variations on a very small, simple simulation. The premise is that there are agents that can move about in a world, that can see a certain distance, and whose goal is to acquire food and not die. Food is dispersed at first around two locations. The means by which food is regrown changes depending on the parameters of the individual simulation. Later on, complicating elements are introduced, such as reproduction, other food types, trade, warfare, social networks, disease spreading, and debt and lending. With the complexity of the emergent behavior, it is easy to forget that the simulation is of a discrete grid that is only 50 by 50 squares.
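The basic movement rule is simple enough to sketch in a few lines. The helper names and constants below are mine, and the book's full rule differs in details (e.g. ties among equal-sugar sites are broken by distance, then randomly):

```python
# A minimal sketch of the basic Sugarscape-style agent rule. Helper names
# and constants are mine (hypothetical); the book's full rule differs in
# details such as tie-breaking among equally sugared sites.

SIZE = 50  # the world is a 50x50 grid (wrapping at the edges)

def visible_sites(x, y, vision):
    """Sites along the principal lattice directions only -- no diagonals."""
    sites = []
    for d in range(1, vision + 1):
        sites += [((x + d) % SIZE, y), ((x - d) % SIZE, y),
                  (x, (y + d) % SIZE), (x, (y - d) % SIZE)]
    return sites

def step(agent, sugar):
    """Hop to the visible site with the most sugar and collect all of it."""
    x, y = agent["pos"]
    candidates = visible_sites(x, y, agent["vision"]) + [(x, y)]
    best = max(candidates, key=lambda site: sugar[site])
    agent["pos"] = best
    agent["wealth"] += sugar[best]       # all sugar on the square at once
    sugar[best] = 0
    agent["wealth"] -= agent["metabolism"]
    return agent["wealth"] > 0           # False -> the agent starves
```

Notably, each of the design questions one might raise about this model corresponds to changing only a line or two here: restricting `visible_sites` to a radius or a facing, capping the hop distance, or collecting only part of `sugar[best]`.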

While a great deal of interesting conclusions can be derived through these simulations, and the rules described are very simple and understandable, I find that not enough time is spent critiquing the underlying foundation of the simulation. The authors mention that vision works in the principal lattice directions, and that diagonal vision is not allowed, to bound the agents’ “rationality”. What happens when vision within a radius is allowed? What happens if agents have a facing and can only view within a field around that facing? Agents can move (hop) to a place that they see; what if they could only move one square at a time? Agents are also capable of collecting all the sugar on a square after their move. What if agents cannot collect all of it at once? What if they have the option of collecting different amounts? What if multiple agents can inhabit the same square? What if the world is continuous instead of discrete? If any one of these factors changes, what long term implications does that have for the subsequent rules described in the text?

All of these questions leave me wondering if the particular sets of rules that the authors described were carefully chosen for the dynamics that they produce. This is likely not the case, but the reasoning behind the low-level elements of the simulations is still not addressed (although it has been made open), meaning that it can still be informed by bias. Because the simulations are evocative, associations can be made between conclusions or elements of this model and the other systems that are evoked.

Notes:

The authors’ first concern is to approach social science in a radically different way than it is traditionally explored. The viewpoint taken is to explore bottom-up emergent patterns in social systems (as discovered by the simulations) instead of the traditional top-down means of looking at social science. Traditional social science tends to look at the world through a lens looking for one type of information or another. The authors aim for a new kind of social science, wherein macroscopic theories may be tested through generative simulations.

One of the more troubling flaws in the text is the equation of food or “sugar” to wealth. The flaw is troubling because it raises questions as to what, exactly, sugar or wealth is supposed to mean within the context of this simulation. If sugar is food, then agents are moving about to obtain food, creating something of a nomadic-scavenger idea. But if food is also wealth, then it seems like it is something to be hoarded and accumulated. Rarely ever do nomadic people hoard wealth. Wealth is necessary for survival, but in most societies, it is not something found while moving about. Wealth tends to exist to support societies that have grown to a size to no longer permit barter, and currency is currency because it is not used for anything else. It is thus troubling when sugar is called wealth and economical terms are used about its distribution.

Later on, when pollution is discussed, the effect of one model of pollution results in an exodus of agents leaving one area to join agents in another area, which cannot support the new population. The lesson learned from this is that “environmental degradation can have serious security implications”. This is a trite revelation. In this case especially, so much is encoded into the model that it is far from realistic. The consequences of the model are nonetheless interpretable and evocative. Interpretation is associative, meaning that we connect real world ideas, rules, and consequences with the effects seen in the model, but whose real world analogue may be very different.

An example of especially arbitrary constants found in the text is a discussion of reproduction, which gives constants describing the lifespan of agents and how and when they can reproduce. The only difference between “male” and “female” agents in this simulation is that the “male” agents can have children on average for 10 turns longer than the “females”. There is no difference between how the males or females raise the children, and there is no gestation period for females. This distinction is the only one present, so why have it at all?

In a manner similar to Wolfram (although predating his book by almost ten years), the authors conclude that simple rules can generate complex behavior, and that changing the rules changes the fundamental ecology of the simulation. This finding is certain, as is the result that long term predictability is difficult. However, the applicability of these findings to other social systems outside of the 50×50 grid is less than compelling. This can be used for illustrating the fallacies in oversimplified models, but that seems to run counter to the authors’ intent.

Reading Info:
Author/Editor: Epstein, Joshua and Axtell, Robert
Title: Growing Artificial Societies: Social Science from the Bottom Up
Type: book
Context: Discusses a simulation-based approach to social science. The approach is flawed because of its failure to consider the consequences of the simulated model.
Tags: specials, ai, simulation, social simulation
Lookup: Google Scholar, Google Books, Amazon

Michel Foucault: Discipline and Punish

[Readings] (08.17.08, 4:23 pm)

Overview

Foucault covers the subjects of torture, punishment, discipline, and surveillance in this important book. It tends to work as a history, originating in the 1750s and covering the matter of punishment until the 1850s or so. Foucault was writing in the 1970s, when the matter of public surveillance was becoming an issue in England, and possibly elsewhere, so this may have been an influence on his approach. The historical change from 1750 to 1850 is the disappearance of torture and the transition of the object of punishment and discipline from the corporeal body to the soul. Alongside this is the emergence of a technology of discipline and power, which constructed a self-observing, self-disciplining society.

The most relevant bit in this is the progressive disembodiment of punishment, and the idea of the carceral society, which to some may be a utopian society: one totally contained within its frame of ideology. This is very reminiscent of the enclosing capacities of simulation. A simulation limits everything it represents to that which is definable by its model, and a carceral society willfully enforces its ideology through discipline and surveillance. Resistance to a discourse is still a part of that discourse.

Notes

The Body of the Condemned

Pain and spectacle: over time, these disappear in punishment. The reform moves the body out of bounds as the receptor of punishment. Instead, the body becomes something to be constrained, obliged, and prohibited. As evidence of the change of the reform, Mably writes that punishment should strike the soul, not the body. (p. 11)

The aim of the book is to understand soul, judgment, and power. Power occurs in a political economy over the body. The image of the economy is very prevalent throughout Foucault; its emergence is reflective of the period of mercantilism that came before the reform movement. Economy implies a regular system of exchanges, and an uneven distribution of capital. (p. 23)

Common thought of the time: The body is the prison of the soul, which is very reflective of the prevalent dualism of this period of time. Despite the arguable humanity of the new prison system, revolts occur within the modern system, protesting the situation of the prisoners. The new system is reflective of a new technology of power. (p. 31)

The Spectacle of the Scaffold

Torture is a means of inscribing, by pain, the truth of a crime on a criminal. It is by nature spectacular. The issue of truth of a crime becomes significant later. Torture also serves as a ritual, a symbolic means of formalizing the law in the minds of people as a cultural practice. (p. 35)

The power relationship between the condemned and the sovereign: In a society with a sovereign, the state is equated to the body of the king. The criminal is one who attempts to assert an unauthorized power, which is thus a bodily assault on the authority of the king. The punishment deprives the criminal of power, and visibly enforces the power of the sovereign. In the spectacle of torture, the spectators are witnesses and consumers of the event. (p. 54)

Generalized Punishment

Punishment is an expression of the universal will of the state. The reform movement attempts to challenge the use of punishment as vengeance, pushing for punishment without torture. (p. 74)

There is an economy of punishment that reflects an economy of power. At the center of the reform is an attempt to undermine the centrality of the power of the monarch, around whom was spread a “bad economy of power”. The reformers are attempting to establish a right to punish without the authority of the sovereign. (p. 79)

Illegality also forms an economy, and was widely employed as a social practice. This derives from a general non-observance or abeyance of the law. The illegality is a necessary component within the society, but forms an odd paradox when compared to the criminal. Those who practiced illegality with violence or hurt the general population were scandalized, but general illegality (particularly theft) was widely accepted. Around this practice formed a network of glorification and blame. Thus there was a level of obligation and social custom that operated in spite of the law. (p. 83)

Illegality of property was generally exercised by the lower classes in rampant theft. There was also an illegality of rights practiced by the merchant classes. This represents the change in the economy of illegality associated with the rise of capitalism. The rampant illegality essentially resembles social tactics without a strategy to hold it at bay. Punishment reform is a strategy for a new social system. (p. 87)

The new economy of punishment is based on the concept of the social contract. Criminality in that sense is inherently paradoxical: the criminal is both an enemy of and a member of society. This change is an enormous shift from the authority of the sovereign in torture. Thus the criminal is a traitor to the state. (p. 90)

The change in punishment was reflected by an intense level of calculation and determination of the principles of just and correct punishment. (p. 94) The result of this is a new calculated economy of power disguised as mercy. But, the object of power is no longer the body, but the mind or character. (p. 101) Foucault cites Servan on the next page: “A stupid despot may constrain his slaves with iron chains; but a true politician binds them even more strongly by the chain of their own ideas; it is at the stable point of reason that he secures the end of the chain; this link is all the stronger in that we do not know what it is made of and we believe it to be our own work; …” (p. 102-103) ref (Servan, 35)

The Gentle Way in Punishment

Forces, attractions, and values: follows ideas of compulsion; attraction and repulsion stem from a Newtonian metaphor. Also heavy in this theme of punishment is the idea that the state is a natural phenomenon, and crime is distinctly unnatural. This flies directly in the face of illegality as a common cultural practice. Essentially, it is an ideological strategy to combat an emergent tactic. (p. 104)

In regards to the vision of a “just society”: Law attempts to counter the historical tradition of the affairs of criminals of old being celebrated in culture and tales. A glorification of outlaws and lawbreakers is very prevalent in many cultures. The vision of the just society aims to replace that with a reverence for the austerity of the law, and have a distinct openness in the city. Everywhere within the just city is the influence and inscription of the law. And education is meant to describe and glorify the law as well. This vision paves the way for the carceral society to come. (p. 113)

Docile Bodies

In regards to discipline, Foucault looks at the soldier, who is constructed as a product of molding via discipline. This, again, treats the body as an object of operation; it deconstructs the body into various independent components. The aim is to shape each force of the body: maximize those forces that yield utility, and minimize other forces so that the body might be obedient. (p. 137)

Discipline is the methodical mastery over little things. Its aim is to spread central control to every minutia of the body of a subject, while needing to expend a minimal effort to control these bodies. This echoes again the idea of a technology of power: to distribute and maximize optimally. Discipline is in a sense, the antithesis of emergence. Also, discipline resembles the way that people interact with machines and computers, through working with them, they make humans further like machines. These can be connected through Marx and Weizenbaum.

The Means of Correct Training

A precursor to panopticism: surveillance is a requirement for discipline. The purpose of discipline is to train, but for what? (p. 173)

Discipline is a normalizing process: It punishes and rewards for established social formations, attempts to make even that which is uneven. This is reminiscent of role gratification, performance of a role is met with rewards and gratification, but failure is met with lack of support. Role learning is a disciplinary process. Examination is described here as a ritualized interaction, and involves a presentation of self. A component of discipline is being subject to examination and gaze. (p. 184)

Panopticism

The chaos of the plague is met with a focused ordering of life: sectioning, visibility, and isolation. The physical corporeality of bodies is mixed with the ideas of sickness and evil. (p. 197)

In the Panopticon: there is a dissociation of the visibility dyad, an automatization and disindividualization of power. Power exists, but it exists in the minds of subjects, without necessarily a physical presence to enforce that power. Cells are transformed into stages, where actors are compelled to be constant performers. This allows for an individuality of the prisoners, though. (p. 203) It enables a laboratory of power, whereby the authority may conduct experiments and tests (developing technology and improving the efficiency of power) on the distributed system of the panopticon.

Society has changed from that of a spectacle to that of the Panopticon. Life is no longer like an amphitheatre, but we are still performers, watching each other. (Consistent with Goffman?) Panopticism is a power technology to improve the efficiency of power. (p. 217)

The object of justice transforms away from the physical body, and away from the contractual one, towards a new thing, a “disciplinary body”. This body can be deconstructed into its component parts, and each may be operated on and molded, “corrected”, independently. (p. 227)

Panoptic society has surveillance, and does not allow individuals access to the information being stored about them. When these are exposed (e.g., wiretapping), popular reaction opposes the system and there is outrage. The problem is that, with the dissociation of the gaze, the individual has no ability to understand how he is being seen. More than knowledge is necessary to topple the system, though. The individual is reduced to pieces and surveilled, but has no independent power in understanding how he is dissected, and no understanding of what is found there. Thus, to successfully resist the panoptic society, one must have full self-knowledge, because that cannot be taken away.

Complete and Austere Institutions

With isolation, the matters of the self and conscience come into play. Prison coerces order and social rules by replacing society at some levels. Those who designed prisons aimed to have the prison serve as a reduced society (a sub-simulation) where the minimal elements of society were still present, but prisoners would be isolated or prohibited from interacting with each other regularly. (p. 239) Prison life is hardly reflective of the outside, though. What happens to self and performance when the subjects are in total isolation? Society hinges on performance and interaction; what happens when one or both are deprived?

The prison substitutes the delinquent for the offender. This grants the criminal an individuality, a total knowability, and a potential for reform. Much of the philosophy justifying prisons is rooted in the correctability of criminals and the justification of the law. This shift also demands a total knowledge of the subject. (p. 251)

Illegalities and Delinquency

The prison produces and encourages delinquency. It encourages a loyalty amongst prisoners, and promotes the idea of warders as unauthorized to correct, train, or provide guidance. The focus here is the failure of the prison to perform a corrective function; the reason for this failure involves the cultural foundation of illegality. (p. 267)

The Carceral

Foucault opens the final chapter by discussing a colony, which becomes the example of a contained carceral society. The role of instructors (not educators) in direct development is to impose morals and encourage subjects to be docile and capable. There is a direct reference to Plato’s Republic: children in the colony were taught music and gymnastics. The colony also has a circularity: instructors are subjects as well. This leads to a closedness of the social model. (p. 294)

More on the enclosed and contained nature of the carceral: like a closed simulation, the carceral society must contain every projection of things within its model (it must be mathematically complete). What of the simulation outlaw? The utopia encodes the law into society, so within its simulation the outlaw is an impossibility, fundamentally and intrinsically unexplainable. (p. 301)
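From my simulation perspective, the closed-world character of the carceral can be sketched in code. This is a hypothetical toy model (the states and the transition function are my own invention, not Foucault’s): because the model’s state space is itself the law, an “outlaw” state is not merely forbidden but unrepresentable.

```python
from enum import Enum

# A toy closed simulation: the enumerated state space *is* the law.
# Any state outside the enumeration cannot even be expressed.
class CitizenState(Enum):
    WORKING = "working"
    RESTING = "resting"
    INSTRUCTED = "instructed"

def step(state: CitizenState) -> CitizenState:
    """Advance the simulation one tick; every transition is lawful by construction."""
    transitions = {
        CitizenState.WORKING: CitizenState.RESTING,
        CitizenState.RESTING: CitizenState.INSTRUCTED,
        CitizenState.INSTRUCTED: CitizenState.WORKING,
    }
    return transitions[state]

# The "outlaw" is an impossibility here: there is no CitizenState for it,
# and step() can only ever return members of the enumeration.
state = CitizenState.WORKING
for _ in range(3):
    state = step(state)
print(state)  # the cycle returns to CitizenState.WORKING
```

The point of the sketch is only that completeness and law coincide: the model cannot explain (or even name) what it does not encode.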

The carceral relates very closely to simulation, even in the Baudrillardian sense: simulacra enclose and define the carceral society via its isolation of law and ideas. But it is not really law, it is ideology. An open question is: who is behind it? It may be that no one is, and the order of the society falls towards infinite regress. But, in reality, laws are made, and simulations are defined. (p. 308) Foucault’s history is sort of anti-narrative. So, while power exists, Foucault is reluctant to name individuals or events behind the application of power. It makes for a disconcerting approach, leaving the reader wondering why or how the state of affairs is the way that it is. An extreme approach is to claim that power is totally self-generating, and indeed, in the carceral society, it is, but there still must be agents behind any change.

Reading Info:
Author/Editor: Foucault, Michel
Title: Discipline and Punish
Type: book
Context:
Tags: media theory, dms, embodiment
Lookup: Google Scholar, Google Books, Amazon

Gilles Deleuze and Felix Guattari: A Thousand Plateaus

[Readings] (08.17.08, 4:20 pm)

Understanding Deleuze

Claire Colebrook writes an overview of Deleuze’s philosophy. Deleuze is in the tradition of practical, “lively” philosophy. What does it mean for a philosophy to be practical? Colebrook compares Foucault, Freud, and Marx as all practical. Marx’s philosophy is intended to be connected to the world and to directly change our understanding of it.

Other, more linguistic philosophers (eg Wittgenstein) aim to understand language, and to use common language, so that we will realize that the things we say are nonsense. Namely, they pose that theory is in a sense fundamentally disconnected from reality. The comparison raises the ambiguous question of what it means for a philosophy to be practical.

Colebrook poses that Deleuze is a positive thinker: he saw desire as a positive, constructive force that enables meaning. She seems to lay out a quadrant of philosophers:

Power, negative (Marx): Ideas produce power relations.
Power, positive (Foucault): Theories and actions are modes of power. Concepts are instances of power. The master and slave are conceptually codependent and produce power through each other’s existence.
Desire, negative (Freud): Desire is something that occurs outside of a norm. Desire is something that detracts from a person and must be fought against to restore normality.
Desire, positive (Deleuze): Existence and identity are created through desire. Desire enables identities and relationships.

Overview

It is important to note that A Thousand Plateaus is the second part of Deleuze and Guattari’s “Capitalism and Schizophrenia” pair, the first part being Anti-Oedipus. As such, many of the concepts used here are in fact first defined in the first volume.

On Models

Deleuze on models: models are prescriptive. He claims that Western thought is built on radical (single-root) model systems, and that subscribing to a model limits our world view and limits, for the subscriber, what is possible.

I would say that the solution in Western *science*, or any other constructive movement, is to define NEW models, in abundance. This is something that is heavily studied in linguistics, development, etc. Still, new concept/system development does not account for the limitations of ingrained models; in development and education, there is some investigation of concept reformation and development.

What about meta-models? Deleuze attempts to get underneath models (instead of above them). Meta-models, as might be imagined mathematically, look to define new structures that can subsume and encompass others. When models are used in math, science, and programming (models meaning, generally, varied approaches to representation within a system or framework), they are used in varying applications. Many times scientists, mathematicians, and programmers try to force more things into one system than it can account for, but this is generally recognized as a poor idea.

Frequently, models are defined to address specific problems, and are intended to be used within a specific domain or from a specific perspective. A change of domain or perspective may call for a change in the model being used. Examples arise in looking at human behavior, where sociology, anthropology, linguistics, or statistical methods might each be used to explain various aspects of human behavior.

On computer code and rhizomes: computer systems are “tree”-like in that they all can be translated (in the Chomskian sense) to equivalent computer instructions. They are all founded on some basic underlying models. So, while they may enable interpretation, representation, and thought in very different ways, they are still executed through the same Turing machine. They do enable different means of cognition, but they must be grounded in some fundamental principles.

When applied to programming and simulation, the situation gets trickier. Computer languages, simulations, and representations are all very capable and abstracted. However, programs must all eventually be reduced to machine code and rendered on some form of hardware. What this reveals is that all things simulatable by a computer (or by a formal simulation that satisfies some programmability requirement) can be reduced to one single, ultimate language. This implies that this simulation root underlies all models expressible within a computer.
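To make the point concrete, here is a minimal sketch of my own (not from the text): two superficially different representations of the same computation, both of which are compiled down to the same underlying instruction language and executed by the same virtual machine.

```python
import dis

# Two different "representations" of the sum 0 + 1 + ... + (n-1):
def sum_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_formula(n):
    return n * (n - 1) // 2

# Both functions are reduced to CPython bytecode and run on the same
# virtual machine, even though their surface-level representations
# (iterative process vs. closed-form arithmetic) differ entirely.
dis.dis(sum_formula)  # shows the instruction "root" beneath the representation
print(sum_loop(10), sum_formula(10))  # both compute the same value
```

The equivalence holds only at the execution level; as noted below, the representative levels of the two versions do not translate into each other so easily.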

However, it might be said that while all simulations share a root of simulatability, they may share roots with other conceptual models and domains, and thus be rhizomes. So, while the execution level of a simulation might be universally translatable, the other levels may not be translated so easily, especially when the representative level is strongly metaphorically coupled to the simulation. A simulation whose execution is tightly bound to its representation is a rhizomic structure, whereas a simulation whose execution and representation are disjoint may be pulled up easily, like a tree.

On Territorialization

The concept of deterritorialization is coupled with a reterritorialization. To Deleuze and Guattari, individual things have a territory, but when their systems touch upon one another, their respective territories are upset and then reformed. The example given is a wasp touching an orchid: the orchid is upset and disrupted by the contact, and correspondingly, the wasp becomes a part of the orchid’s reproductive system.

The challenge with this model is that it treats the wasp and the orchid as totally independent systems until they contact one another. Systems are rarely ever totally independent, and do rely on each other. Frequently this reliance occurs via well-defined channels, such as the wasp’s fertilization of the orchid, but the notion that systems are structured in connection with each other seems radically opposed to the Deleuzian view. Further, one may scratch the idea of systems being independent altogether, and understand that any perceived territory of a system is merely a construct or illusion. If we look back far enough, every system can be seen to be composed of multitudes of subsystems. The plant itself is composed of billions of cells which each impinge on each other as part of the plant’s growth. Blossoming in an orchid is a disruption of the plant’s ordinary sympodial pattern. It bears noting that sympodial growth is a form of rhizome. Go figure.

Principles of the Rhizome:

  1. connection
  2. heterogeneity
  3. multiplicity
  4. asignifying rupture: independence of models
  5. cartography
  6. decalcomania

Classical linguistics: language is built on binary differences; furthermore, the differences do not *mean* anything. That is, they are arbitrary. Language becomes interesting in its inability to communicate. Are D&G trying to deny the function of representation in knowledge?

Classical representation romanticises the idea of pure meanings, and the idea that before language things were better. Representation aims to point things back to these pure ideas, and thus emphasizes, and is dependent on, the notion of lack. Classical representation is thus a constant reminder of the lack of pure meanings. But… doesn’t representation project from one system of meanings to another? Why does there have to exist a system of pure meanings? What if I reject the notion of such a thing?

Reading Info:
Author/Editor: Deleuze, Gilles and Guattari, Felix
Title: A Thousand Plateaus
Type: book
Context:
Tags: dms, media theory, philosophy
Lookup: Google Scholar, Google Books, Amazon