AI, narrative, and adaptations

[Research] (04.13.09, 11:22 pm)

I discovered recently that Marc Cavazza (with David Pizzi and Fred Charles) of the University of Teesside has been working on a Madame Bovary game. I feel silly for noticing this only now, as the papers were published in 2007, but better late than never. The research project is frightfully relevant to my work, as it is looking at a procedural adaptation of (part of) a story world. While the authors retain the conventions of AI planning, they acknowledge that the way the AI decisions are made needs to be revised. They opt for a character-based system, one that is driven by emotions. The authors reject a general schema of “basic emotions,” which has been criticized by Ortony, Clore, and Collins, and instead adopt a model given by Flaubert himself. The work on Madame Bovary benefits especially here because the author himself described his own model of the characters’ emotions.

This is something I would like to know more about, and it has a lot of potential to contribute to a theory of game adaptations. There are a few things about it that seem very puzzling, namely the focus on planning and the use of Unreal Tournament (though, to be fair, the Teesside AI group has access to the internals of UT, so they are able to accomplish much more than I was ever able to at Georgia Tech), but the most important detail is their use of perspective. Within the project, the player takes on the role not of Emma Bovary but of Rodolphe Boulanger, who seduces Emma and eventually abandons her amid her romantic hopes of elopement. Because my approach to models involves not just a system but also a perspective, it is important to note that a game from the perspective of Emma Bovary will convey an extremely different model than a game from any other perspective. In the examples discussed in the papers describing the project, the player engages with Emma not through communication but through influence. The player can say things that carry positive or negative emotions, and Emma recognizes these as either romantic and encouraging or disparaging and thwarting. This is deeper than many models of emotional interaction (on par with Facade’s “Let’s talk about Trip,” for example), but it is still a little distant. The player can imagine Emma’s romantic visions but is not really in a position to take a role in them. The player starts with Emma already infatuated with Rodolphe, so none of the romantic indulgence is necessary; since the project is still a technology demo, though, that hardly seems desirable or necessary anyway. The interesting thing is that one might argue that Emma’s engagement with the outside world, including her romantic disillusionment, happens entirely through her fantasies, so “real” interaction is never going to be possible, and in that sense the adaptation might be very solid. I am not sure what “real” interaction would be, though…
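To make that “utterance as influence” idea concrete, here is a minimal sketch of how it could look, with the player’s lines reduced to an emotional valence that pushes Emma’s feelings toward or away from her romantic fantasy. All of the names and numbers here are hypothetical illustrations, not the actual Teesside implementation.

```python
# A toy sketch of utterance-as-influence: the player does not converse with
# Emma so much as push her feelings about the elopement up or down.
# Names and values are hypothetical, not taken from the Teesside system.

from dataclasses import dataclass


@dataclass
class EmmaState:
    hope: float = 0.7      # investment in the romantic fantasy
    despair: float = 0.1   # accumulated disillusionment

    def receive(self, utterance_valence: float) -> None:
        """Interpret a player utterance purely by its emotional valence.

        Positive valence reads as romantic and encouraging, negative as
        disparaging and thwarting; the words matter less than the push
        they give her fantasy.
        """
        if utterance_valence >= 0:
            self.hope = min(1.0, self.hope + 0.1 * utterance_valence)
        else:
            self.despair = min(1.0, self.despair - 0.1 * utterance_valence)
            self.hope = max(0.0, self.hope + 0.05 * utterance_valence)


emma = EmmaState()
emma.receive(+1.0)   # "We could leave for Italy tomorrow."
emma.receive(-1.0)   # "Your husband would never survive the scandal."
print(emma)
```

The point of the reduction is exactly what makes the interaction feel distant: the player shapes Emma’s state but never participates in the fantasy itself.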

What confuses me the most, I think, is the role of planning within the system. Planning has a character consider a number of potential actions against a future goal and attempt to pick the best action for meeting that goal. I am wondering what the goals are, and what constitutes a “best” action. If the goal is to achieve her elopement, or to escape her boredom, or simply to express her state of malaise, I think this could work, but it seems awkward. The idea of elopement is definitely a prospect, but the other actions seem less planning-oriented and more like situated responses. I would think that the system of actions is more about behavior within a context or situation than about proactively bringing new situations about. The essence of tragedy is having things happen to the actor, or having the expected consequences of one’s actions deviate from the actual consequences. Planning depends on consequences working according to one’s expectations; if a system relies on high-level authoring to create situations that force plans to fail, then the system is working against the planning paradigm.
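For contrast, here is a deliberately naive sketch of what character-level planning amounts to: score candidate actions by the world state the character expects them to produce, and pick one whose expected outcome satisfies the goal. Everything here (actions, states, the goal) is invented for illustration and is not drawn from the Teesside planner; the point is only where the assumption of reliable consequences lives.

```python
# A naive goal-driven planner: pick the action whose expected outcome
# satisfies the goal. Tragedy breaks the assumption baked into
# `expected_outcomes`: that actions actually lead where the character
# thinks they do. Illustrative only.

from typing import Callable

# Each action maps to the world state Emma *expects* it to produce.
expected_outcomes = {
    "write to Rodolphe":    {"eloped": False, "hopeful": True},
    "prepare for flight":   {"eloped": True,  "hopeful": True},
    "confront her boredom": {"eloped": False, "hopeful": False},
}


def plan(goal: Callable[[dict], bool]) -> str:
    """Return an action whose expected outcome satisfies the goal,
    falling back to the first action if none does."""
    for action, outcome in expected_outcomes.items():
        if goal(outcome):
            return action
    return next(iter(expected_outcomes))


# Goal: achieve the elopement.
chosen = plan(lambda state: state["eloped"])
print(chosen)  # "prepare for flight" -- but in the novel the expected
               # consequence never arrives; Rodolphe abandons her instead.
```

If the drama only works when the table of expected outcomes is wrong, then the authored failure is doing the narrative work, not the planner.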

One of my problems with challenging AI planning, though, is that I don’t have a good system to replace it with. I am doing a lot of reading now, so we’ll see how this shapes up.
