Category: ‘Projects’

More progress on Painter

[General,Genetic Image,Projects,Toys] (04.19.08, 12:18 am)

Because I don’t know when to quit on these things (or, possibly, because working on some things helps me relax from working on others), I made some nice progress on Painter, and rather than showing images, I figured I might embed an applet. It is very simple and contains only some primitive graphical methods, but it is nonetheless quite neat and has its own sort of style.

The applet will think when you click on it, and if it thinks for too long (10 seconds) it will realize that it is confused and allow you to click again. If you are interested in this project, it is available via svn on the Painter site. The documentation on the site is abhorrent, I know.

EDIT: Due to strange technical issues, I had to take the applet down, as it was causing issues with browsers. I truly regret having to do so, but I just haven’t been able to fix it yet.

It Lives!

[Experiments,Genetic Image,Projects] (04.16.08, 8:07 pm)

In my copious spare time, I’ve been working on Painter. Today I was able to get it to produce some sort of image. Amazing! Painter is essentially a metaprogramming project, and it generates its own programming control structures. Generally what I’m trying to do with it is put in some sort of automatic Processing. Eventually it will be able to do a lot more. But, we all must start small.
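As a toy illustration of what “generating its own programming control structures” can mean, here is a minimal sketch (hypothetical code, not Painter’s actual representation) that randomly nests loops, conditionals, and sequences around a primitive drawing command:

```java
import java.util.Random;

public class ProgramGen {
    // Recursively generate a random program as text. The command names
    // (repeat, coin, draw) are made up for illustration only.
    static String gen(Random rnd, int depth) {
        if (depth == 0) return "draw();";
        switch (rnd.nextInt(3)) {
            case 0:  // a loop construct
                return "repeat(" + (2 + rnd.nextInt(4)) + ") { " + gen(rnd, depth - 1) + " }";
            case 1:  // a conditional construct
                return "if (coin()) { " + gen(rnd, depth - 1) + " }";
            default: // a sequence of two sub-programs
                return gen(rnd, depth - 1) + " " + gen(rnd, depth - 1);
        }
    }
}
```

Every generated program bottoms out in the primitive command, so even this tiny generator yields arbitrarily nested but always well-formed control structure.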

Painter progress

[Art,General,Projects] (04.06.08, 11:48 am)

So, I’ve been continuing to work on Painter, in spite of everything else that is urgently awaiting my attention. The project is coming along, especially the module for the custom function language, which works as a NetBeans plugin. Very cool.

Anyway, I am at the point of building an actual procedural painting program for Painter to interact with in building its work, and I have been looking around for source material to base the code on. Maybe this is just my gradual corruption as a budding humanities scholar, but I have suddenly been feeling this strange compulsion to base the things I do or create in the context of existing work, rather than creating it all from scratch. It’s very odd. Something clearly must be wrong with me.

I have spent a little while digging through the source for The GIMP. I still fondly appreciate C/C++ as my “native” programming language, but reading through it reminds me why I migrated to Java. Unfortunately, there are very few good Java painting and graphics applications out there. In fact, I haven’t seen one yet. I want something on the level of GIMP or Photoshop, or, ideally, Corel Painter. The hunt continues.


[General,Genetic Image,Projects] (12.12.07, 10:52 pm)

I am re-writing GeneticImage.

GeneticImage was enormously successful as a project, but it is time to move on, pushing it in new and exciting and convoluted directions. GeneticImage is turning into a new project, Painter. Instead of having the evaluative model that GeneticImage has (wherein every point is evaluated with functions), Painter has a procedural, process oriented approach. Painter will actually draw to the canvas, doing brush strokes, and employing interesting procedural mechanics to make images. Painter is online as a Google code project: http://code.google.com/p/painter/
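The two rendering models can be contrasted in a minimal sketch (hypothetical names and methods, not either project’s actual API): an evaluative renderer asks an evolved function for the color of every pixel, while a procedural renderer runs a process that lays strokes onto a canvas.

```java
import java.util.function.BiFunction;

public class RenderModels {
    // GeneticImage-style: evaluate a function at every point (x, y in [0,1)).
    static int[][] evaluative(int w, int h, BiFunction<Double, Double, Integer> f) {
        int[][] img = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                img[y][x] = f.apply((double) x / w, (double) y / h);
        return img;
    }

    // Painter-style: a procedure marks the canvas with a brush stroke.
    static void stroke(int[][] canvas, int x0, int y0, int x1, int y1, int color) {
        int steps = Math.max(Math.abs(x1 - x0), Math.abs(y1 - y0)) + 1;
        for (int i = 0; i < steps; i++) {
            int x = x0 + (x1 - x0) * i / Math.max(steps - 1, 1);
            int y = y0 + (y1 - y0) * i / Math.max(steps - 1, 1);
            if (y >= 0 && y < canvas.length && x >= 0 && x < canvas[0].length)
                canvas[y][x] = color;
        }
    }
}
```

The key difference: the evaluative model is a pure function of position, while the procedural model accumulates state on the canvas, so stroke order and history matter.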

Painter is still far from being available or released, but I figured I’d make a post to say that I was working on something new.

Genetic Programming #1

[Experiments,Projects,Research] (10.12.06, 11:36 pm)

Recently I have been developing an agent-controller system that uses Genetic Programming. The underlying logic is similar to that in Genetic Image, but I actually represent many structures from real-life programming (such as statements, statement lists, and functions or methods), which is rather unusual. Most Koza-style GP appears to focus primarily on tree-based evaluation that starts at a single node. Additionally, that style of GP is stateless: the programs don’t usually alter any state as they go along. Then again, I haven’t been exposed to that much “real” GP, so I may be rather off.
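The statement-oriented representation can be sketched roughly like this (hypothetical classes, not the project’s actual ones): a program is a list of statements executing against mutable state, in contrast to a single-rooted, stateless Koza-style expression tree.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GpSketch {
    // A statement mutates shared state rather than returning a value.
    interface Statement { void exec(Map<String, Double> state); }

    // Assignment: var = src * scale (variables default to 0).
    static Statement assign(String var, String src, double scale) {
        return s -> s.put(var, s.getOrDefault(src, 0.0) * scale);
    }

    // A statement list is itself a statement, giving the block structure
    // that crossover could splice at the statement level.
    static Statement block(List<Statement> body) {
        return s -> body.forEach(st -> st.exec(s));
    }

    // Run a tiny two-statement program: a = in * 3; out = a * 0.5.
    static double demo() {
        Map<String, Double> state = new HashMap<>();
        state.put("in", 2.0);
        block(List.of(assign("a", "in", 3.0), assign("out", "a", 0.5))).exec(state);
        return state.get("out");
    }
}
```

Because later statements read state written by earlier ones, program order carries meaning, which is exactly what a pure expression tree lacks.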

Either way, the system described here uses a fairly sophisticated model, which is both very useful and very difficult to train. Attached below is a Java applet that runs my current program. The program is a very simple bitozoa-style simulation with a bunch of entities looking around for food.

I have run into a variety of challenges, though, and I have included some text from my notebook below:

The current system of general movement is not working out tremendously well. Originally, the system of movement and direction planning was intended to be a baseline for agent behavior: if the agents could “get it”, they could move on to other, more complicated situations. There are a couple of problems, though:

Information overload:
When they perceive too many entities, they do not use this information to set internal parameters, but instead call commands directly. It would be possible to cut the commands from the input functions; however, this would still require entities to set their internal variables to reflect sensory data, and then to use that data in their other functions in order to act intelligibly. That is too many steps for the entities to be expected to discover via random mutation.

Behavior choices:
The current system is very emergence-centric. That is not a bad thing in itself, but entities are adapting to patterns rather than actually making decisions. This is partly due to the world configuration; the world itself is not that complicated. But even when equipped with very direct inputs and outputs (input: rotation of the nearest food; outputs: movement speed and rotation), entities fail to map one to the other in a satisfactory manner. This may be solvable by using training, but that is undesirable. Ideally, the entities should be making direct and complex decisions.
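For reference, the direct mapping the entities are expected to discover can be written by hand in a few lines. This is a hypothetical controller sketch, not the applet’s code: steer toward the nearest food, and move at full speed only when roughly aligned with it.

```java
public class Steering {
    // angleToFood: rotation to the nearest food, in radians, relative
    // to the entity's heading. Returns {speed, turnRate}.
    static double[] steer(double angleToFood) {
        // Turn toward the food, with turn rate clamped to [-1, 1].
        double turnRate = Math.max(-1.0, Math.min(1.0, angleToFood));
        // Full speed when facing the food, stop when facing away.
        double speed = Math.max(0.0, Math.cos(angleToFood));
        return new double[] { speed, turnRate };
    }
}
```

That this mapping is two clamped arithmetic expressions, yet the evolved programs fail to find it, is a fair measure of how hard the search space is.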

Feedback problems:
We should not be trying to tackle a direct feedback problem here. Those can be solved via neural nets, and they do not exercise the complex reasoning that can (supposedly) be achieved via GP.


Open questions:

  • what information should the entities be able to perceive?
  • what choices should the entities be making?
  • what problems should be solvable?
  • what worlds should be used?

Possible solutions:

Entities should be able to perceive relevant information, and they should *not* need to perform elaborate transformations to turn that information into something that makes sense for them. But this raises the question of what information actually is relevant.

Entities should be able to make high-level choices. Current choices are in the domain of “move to the food”, “move forward”, “loop about”. These choices are not tremendously complex; they have a great deal of nuance, but the entities could potentially be doing something more interesting.

I want complex behavior to happen, but “complex” and “interesting” are subjective adjectives and difficult to define. More explicitly, I want stable patterns to emerge, where entities are successful and stable with respect to radical behavior changes. Navigating toward food is a start, but there should be more types of objects in the world with which entities may interact, and more sophisticated use of those objects should result in greater entity success (measured in health, lifespan, or number of progeny).

The nature of these objects, and of the world itself, is the remaining issue. These entities exist in a purely abstract environment, so the world can be as simple or absurd as necessary. But given the current problems with movement, having objects in the environment that affect an entity’s health or reproductive faculties is not the best option. If the navigational logic were improved, that might change, though.

Because I want to take this in the direction of giving entities different affordances or rules for interacting with the world, and then observing the types of stable systems that result, there should be more capability for the entities to affect the world itself, possibly by creating objects. Who knows, we’ll see.
