
Genetic Programming #1

[Experiments, Projects, Research] (10.12.06, 11:36 pm)

Recently I have been developing an agent-controller system that uses Genetic Programming. The underlying logic is a lot like that in Genetic Image, but I actually represent a lot of structures from real-life programming (such as statements, statement lists, and functions or methods), which is rather unusual. Most Koza-style GP appears to focus primarily on tree-based evaluation that starts at a single node. Additionally, that style of GP is stateless: the programs don’t usually alter their state as they go along. Then again, I haven’t been exposed to that much “real” GP, so I may be rather off.
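
As a rough illustration of the contrast (a hypothetical sketch; none of these class or method names come from my actual code), a Koza-style individual is a single expression tree evaluated from the root, while the structured representation carries statements, statement lists, and named functions that execute against mutable agent state:

    // Hypothetical sketch of the two representations; names are illustrative only.
    import java.util.ArrayList;
    import java.util.List;

    // Koza-style: one expression tree, evaluated from a single root, no state.
    interface Expr {
        double eval(double[] inputs);
    }

    // Structured style: statements execute against mutable agent state.
    interface Statement {
        void execute(AgentState state);
    }

    // A statement list is itself a statement, so blocks can nest.
    class StatementList implements Statement {
        final List<Statement> body = new ArrayList<>();
        public void execute(AgentState state) {
            for (Statement s : body) {
                s.execute(state);
            }
        }
    }

    // A named function/method wraps a statement list and can be called by name.
    class Function {
        final String name;
        final StatementList body = new StatementList();
        Function(String name) { this.name = name; }
        void call(AgentState state) { body.execute(state); }
    }

    // Minimal mutable state that statements read and write.
    class AgentState {
        double[] variables = new double[8];
    }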

Either way, the system described here uses a fairly sophisticated model, which is both very useful and very difficult to train. Attached below is a Java applet that runs my current program. The program is a very simple bitozoa-style simulation with a bunch of entities looking around for food.

I have run into a variety of challenges, though, and I have included some text from my notebook below:

Movement:
The current system of general movement is not working out tremendously well. Originally, the system of movement and direction planning was intended to be a baseline for agent behavior. If they could “get it”, then they could move on to other, more complicated situations. There are a couple of problems, though:

Information overload:
When they perceive too many entities, they do not use this information to set internal parameters, but call commands directly instead. It might be possible to cut the commands from the input functions; however, this would still require entities to set their internal variables to reflect sensory data. Furthermore, they would need to use that sensory data in their other functions to act intelligently. That is too many steps for the entities to be expected to discover via random mutations.
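
To make the distinction concrete, here is a hypothetical sketch (not the actual applet code) of the difference between calling commands directly from an input function and merely recording the percept for other functions to use:

    // Hypothetical sketch; none of these names come from the actual applet.
    class Entity {
        double targetBearing;   // internal variable the entity could set
        double hunger;          // another internal parameter

        // What the evolved programs tend to do: act directly on each percept.
        void onPerceiveDirect(double bearingToFood) {
            turn(bearingToFood);    // movement command called straight from the input function
            moveForward(1.0);
        }

        // What I would like them to discover: record the percept as state...
        void onPerceiveIndirect(double bearingToFood) {
            targetBearing = bearingToFood;
        }

        // ...and consult that state later, from a separate function.
        void decide() {
            if (hunger > 0.5) {
                turn(targetBearing);
                moveForward(1.0);
            }
        }

        void turn(double bearing) { /* rotate toward bearing */ }
        void moveForward(double speed) { /* advance */ }
    }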

Behavior choices:
The current system is very emergence-centric. While that is not a bad thing, entities are adapting to patterns but not actually making decisions. This is partly the world configuration: the world itself is not that complicated. But even when equipped with very direct inputs and outputs (input: rotation of the nearest food; outputs: movement speed and rotation), entities fail to map one to the other in a satisfactory manner. This may be solvable by using training, but that is undesirable. Ideally, the entities should be making direct and complex decisions.
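
For reference, the mapping I had hoped they would stumble onto is almost trivial to write by hand (again, a hypothetical sketch rather than code from the applet):

    // Hypothetical hand-coded version of the mapping the entities fail to evolve.
    class DirectController {
        // Input: rotation (angle) to the nearest food, relative to the current heading, in radians.
        // Outputs: movement speed and rotation.
        double[] control(double rotationToNearestFood) {
            double rotation = rotationToNearestFood;           // turn toward the food
            double speed = Math.cos(rotationToNearestFood);    // slow down when facing away
            if (speed < 0) {
                speed = 0;   // never back away from the target
            }
            return new double[] { speed, rotation };
        }
    }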

Feedback problems:
I should still not be trying to tackle a direct feedback problem. Those can be solved via neural nets, and they do not exercise the complex reasoning that can (supposedly) be achieved via GP.

Questions:

  • What information should the entities be able to perceive?
  • What choices should the entities be making?
  • What problems should be solvable?
  • What worlds should be used?

Possible solutions:

Entities should be able to perceive relevant information. Entities should *not* need to perform elaborate transformations to turn that information into something that makes sense for them. But this raises the obvious question of what information counts as relevant.

Entities should be able to make high-level choices. Current choices are in the domain of “move to the food”, “move forward”, “loop about”. These choices are not tremendously complex; they have a great deal of nuance, but the entities could potentially be doing something more interesting.
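
One way to frame this (a hypothetical sketch; the enum values only mirror the behaviors named above) is to expose those behaviors as high-level actions, so the evolved part of the program only has to choose among them:

    // Hypothetical sketch: current behaviors exposed as selectable high-level actions.
    enum Action {
        MOVE_TO_FOOD,
        MOVE_FORWARD,
        LOOP_ABOUT
    }

    class HighLevelController {
        // The evolved decision function would only pick an action from perceived state,
        // rather than assemble the low-level movement itself.
        Action choose(double distanceToFood, double energy) {
            if (distanceToFood < 10.0 && energy < 0.5) {
                return Action.MOVE_TO_FOOD;
            }
            if (energy > 0.8) {
                return Action.LOOP_ABOUT;
            }
            return Action.MOVE_FORWARD;
        }
    }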

I want complex behavior to happen, but complex and interesting are subjective adjectives and difficult to define. More explicitly, I want stable patterns to emerge, where entities are successful and stable in the sense that they do not undergo radical behavior changes. The matter of navigating towards food is a start, but there should be more types of objects in the world with which entities may interact, and more sophisticated use of those objects should result in greater entity success (or health, or lifespan, or number of progeny).
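
One way to make “success” concrete (the weights and fields here are purely illustrative assumptions, not values from the simulation) is a composite score over those measures:

    // Hypothetical composite success measure; weights are arbitrary placeholders.
    class SuccessMeasure {
        double score(double health, int lifespan, int progeny) {
            // Reward staying healthy, surviving, and reproducing; progeny weighted highest.
            return 1.0 * health + 0.1 * lifespan + 5.0 * progeny;
        }
    }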

The nature of these objects, and of the world itself, is the remaining issue. These entities exist in a purely abstract environment, so the world can be as simple or absurd as necessary. But given the problems with movement, having objects in the environment that affect an entity’s health or reproductive faculties is not the best option. If the navigational logic were improved, that might change, though.

Because I want to take this in the direction of giving entities different affordances or rules for interacting with the world, and then observing the types of stable systems that result, there should be more capability for the entities to affect the world itself, possibly by creating objects. Who knows, we’ll see.

1 Comment »

  1. Great work!
    I love it!

    Comment by Tom Cohen — March 14, 2007 @ 6:51 am
