Agent Flocking Using Genetic Algorithms
 
 

Timothy Mossbarger
 
 
 

Abstract

Flocking is one of the most interesting yet simple biological behaviors studied to date in the field of robotics. Countless papers on the subject have been released over the years, all with varying methodology. That such a simple behavior has attracted such broad interest is not altogether surprising in light of the actual importance of the task. Flocking addresses almost exactly the varied needs of multiagent simulation, which include multiagent communication, interaction, and collective behavior. Yet it also provides a problem so straightforward that it allows actual implementation and scrutiny of that implementation.

Introduction

This paper discusses the use of a combination of genetic algorithms and neural networks for the task of flocking. The appeal of such an approach stems from the quest in robotics to deliver a standardized solution method for a general problem. More generally, this paper tries to determine the feasibility of using such an approach to solve problems defined through a fitness algorithm. The interest in developing such an approach, as opposed to more traditional methods of agent design, arises from the possibility of obtaining a result which, though correct, defies standard design principles. Examination of this result would then shed light on the nature of the original problem.

Approach and Methodology

The simulation was done using a Braitenberg simulation environment called Vehicles, written by Chris Gerken. Vehicles provides support for large multiagent simulations of agents utilizing small neural networks. These networks follow the form of those found in Valentino Braitenberg's original book, Vehicles. A simple Braitenberg vehicle consists of a small neural network linking simple receptors to two "wheels", which propel the vehicle. The simulation environment is a Java-based program capable of supporting any network-based simulation, which includes genetic algorithmic approaches. Selection of the package was based largely on the expandability of the system to support large populations; its scalability was excellent. While the package originally had no functionality for genetic algorithms, relatively little code was required to implement the system described below.

Vehicles implements its sensors such that a sensor returns the inverse of the distance to the stimulus. The wheels as a pair have four states representing four possible actions for a given time step: stand still, turn right, turn left, or move forward. The hidden neurons each fire when their input value exceeds their threshold, and then reset their inputs to zero. It is also important to note that all of the vehicle networks are random at the beginning of the simulation.
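To make this neuron model concrete, the following minimal Java sketch captures the behavior just described. The names (ThresholdNeuron, sense, receive, step) are illustrative stand-ins, not the actual Vehicles API.

class ThresholdNeuron {
    double threshold;     // evolved firing threshold
    double accumulated;   // sum of weighted inputs received this step

    // A sensor reading is the inverse of the distance to the stimulus.
    static double sense(double distanceToStimulus) {
        return 1.0 / distanceToStimulus;
    }

    // Accumulate a weighted input from an upstream sensor or node.
    void receive(double value, double connectionWeight) {
        accumulated += value * connectionWeight;
    }

    // Fire when the accumulated input exceeds the threshold, then
    // reset the inputs to zero, as the hidden neurons do here.
    boolean step() {
        if (accumulated > threshold) {
            accumulated = 0;
            return true;
        }
        return false;
    }
}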

The basic form of the simulation implemented a genetic fitness function to critique a set of varied neural networks. The genetic fitness function consisted of a best-performer propagation and worst-performer destruction technique, but also incorporated randomness into propagation to escape the possibility of stagnating in a local optimum. The intent of applying such a genetic algorithm to neural networks was to achieve an understanding of the process involved. Neural networks are interesting logical structures, and the incorporation of a genetic algorithm shed some light on their possible operation.

Implementation

The genetic algorithm used separated vehicles into three groups: the best, middle, and worst performers. The best performers were moved on to the next round without modification. This ensured that the vehicles would not regress from one simulation run to the next. The middle performers were moved to the next round of simulation, but were modified by averaging their networks with the networks of the best performers. This was intended to aid convergence toward a fitness maximum. Finally, the bottom performers were scrapped entirely, and in their place a new network consisting of 70% best performer and 30% random network was added. This ensures a continual search through the state space, which, as mentioned earlier, serves to escape local optima.
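In outline, the generational update works as in the sketch below; sortByFitness, blendInto, and replaceWithRandomNetwork are hypothetical stand-ins, and the actual routine, computegenome, appears under Relevant Code at the end of this paper.

// Hedged sketch of one generational update over the population.
void nextGeneration(Vehicle[] population) {
    sortByFitness(population);           // best performers first
    int third = population.length / 3;
    // Top third: carried forward unchanged, so fitness never regresses.
    // Middle third: network averaged 50/50 with a top performer to
    // pull the population toward the current fitness maximum.
    for (int i = third; i < 2 * third; i++) {
        blendInto(population[i], population[i - third], 0.5);
    }
    // Bottom third: discarded and replaced by a blend of 70% top
    // performer and 30% fresh random network, keeping enough random
    // search alive to escape local optima.
    for (int i = 2 * third; i < population.length; i++) {
        population[i] = replaceWithRandomNetwork(population[i]);
        blendInto(population[i], population[i - 2 * third], 0.7);
    }
}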

The fitness algorithm used (see attached figure) was fairly straightforward. It first retrieved the average midpoint of the closest vehicles. In addition to these neighbors, it also added in a virtual vehicle which was always at a fixed distance and position relative to the current robot. The thought in this case was to implement a directional vector which would influence the vehicle to move in the direction of the vector; the cumulative global effect is to evolve robots which all flock in a particular direction. This was implemented because a vehicle would otherwise circle some fixed point with zero net change in location. This directional vector corresponds to D in the diagram. Once a midpoint was located, a vector was formed from the vehicle's location to the location of the midpoint. This corresponds to vector F in the diagram. Another vector, S, was also formed, consisting of a fixed length exactly in the opposite direction of F. The purpose of this vector is to enforce a set separation between the vehicle and the midpoint. Thus, for small F, the value of S would negate if not invert the direction of F. This was done with the intent of preventing all of the vehicles from congregating at the same place. When these vectors are combined, the result represents the vehicle's fitness for a particular time step.
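The combination can be sketched as plain vector arithmetic. The method and parameter names below are illustrative; the actual routine, fitFunction, appears under Relevant Code.

// Hedged sketch of the per-step fitness: combine F (pull toward the
// neighbor midpoint), S (fixed-length separation opposing F), and D
// (constant directional bias), then reward motion aligned with the
// combined direction.
static double stepFitness(double px, double py,     // vehicle position
                          double mx, double my,     // neighbor midpoint
                          double dx, double dy,     // directional bias D
                          double separation,        // length of S
                          double vx, double vy) {   // vehicle velocity
    double fx = mx - px, fy = my - py;              // vector F
    double fLen = Math.hypot(fx, fy);
    if (fLen == 0) return -0.5;                     // degenerate: on the midpoint
    double sx = -separation * fx / fLen;            // vector S opposes F
    double sy = -separation * fy / fLen;
    double gx = fx + sx + dx, gy = fy + sy + dy;    // combined goal direction
    double gLen = Math.hypot(gx, gy);
    double vLen = Math.hypot(vx, vy);
    if (gLen == 0 || vLen == 0) return -0.5;        // standing still is penalized
    // Cosine of the angle between velocity and goal direction, scaled
    // by speed: full credit for moving straight toward the goal.
    double cos = (gx * vx + gy * vy) / (gLen * vLen);
    return cos * vLen;
}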

The vehicle network (see attached figure) was general enough that I thought it could accomplish its task. While I didn't go about creating it with any formal methods to ensure that it could, the task requires so little internal representation that I was confident it would work correctly. Each node has a firing weight, and each connection has a transmitted weight as well. Thus, the genetic algorithm has control over both the firing threshold of a neuron and the strength of what it transmits.
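A minimal sketch of this evolvable parameterization follows, with illustrative field names rather than the actual Vehicles classes.

// Each node carries a firing threshold; each connection carries a
// transmitted weight. The genetic algorithm operates on both arrays.
class NetworkGenome {
    double[] firingThresholds;      // one per node
    double[][] connectionWeights;   // [from][to]

    // Average this genome toward another, as done when breeding the
    // middle performers with the best performers.
    void blendWith(NetworkGenome other, double otherFraction) {
        for (int i = 0; i < firingThresholds.length; i++) {
            firingThresholds[i] += otherFraction
                * (other.firingThresholds[i] - firingThresholds[i]);
            for (int j = 0; j < connectionWeights[i].length; j++) {
                connectionWeights[i][j] += otherFraction
                    * (other.connectionWeights[i][j] - connectionWeights[i][j]);
            }
        }
    }
}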
 
 

Results

While the results were not terrible, they fell short of what I had hoped to accomplish. In most simulations, later generations displayed no more interesting or useful behaviors than much earlier generations. Part of the reason, I think, was that the fitness criterion itself was not good enough. A more exact, information-oriented approach might have worked better. For instance, it would have been useful to reward vehicles for turning in the correct direction. Further feedback could have been given to the fitness function by comparing each vehicle's behavior with the behavior of the group as a whole.

Strengths and Weaknesses

The propagation of vehicles from one generation to the next seemed to work correctly. Each generation, compared to its predecessor, seemed to contain the information it should have, while excluding what was unnecessary. I was concerned, however, that the technique used could have been modified to yield faster and more robust convergence, perhaps by weighting the middle performers less heavily toward the high performers' networks.

The fitness criterion could have incorporated much more information, which I am certain would have given the simulation a far better indication of which networks were performing closest to my desires. As it was, the system would almost always oscillate near the maximum value of the fitness function, a trend which I'm sure represents a deficiency in the fitness function. A complete revamping of this would probably be the best course of action.

The network itself raises the question of whether it contained enough structure to produce a correctly operating controller. While that is debatable, I think that for a reactive network, simply linking the receptors fully to the motor nodes would have produced good results. Due to a lack of theory as to the exact neural structures necessary to produce certain output, this segment of the design was definitely a black art, and requires a lot of trial and error.

Related Work

The most closely related work found was a paper by Adrian Thompson titled "Evolving Electronic Robot Controllers that Exploit Hardware Resources". It is the only paper found which also used a combination of genetic algorithms and neural networks. In the arena of flocking, however, there is a huge number of papers implementing totally different approaches. Likewise, I also drew a great deal from several papers which deal with genetic algorithms for other tasks.

Summary

Using a combination of genetic algorithms and neural networks has provided an interesting examination of those methods. There are so many possible factors which can play detrimental roles in the process of evolution that it is difficult to set down any specific, useful rules of approach. This methodology is still a black art. It is also exceedingly difficult to implement, and for general tasks is probably not an efficient use of resources. However, as more people become aware of the difficulties and produce simulations which work, their efforts might yet yield some standardization applicable across the varied tasks attempted.
 
 

References

Adrian Thompson, "Evolving Electronic Robot Controllers that Exploit Hardware Resources", University of Sussex, 1995.

Geoffrey F. Miller, Dave Cliff, "Protean Behavior in Dynamic Games: Arguments for the Co-evolution of Pursuit-Evasion Tactics", University of Sussex, 1994.

Dean A. Pomerleau, "Rapidly Adapting Artificial Neural Networks for Autonomous Navigation", Carnegie Mellon University, 1992.

Craig W. Reynolds, "Flocks, Herds, and Schools: A Distributed Behavioral Model", Symbolics Graphics Division, 1987.

John R. Koza, "Evolution of Subsumption Using Genetic Programming", Stanford University, 1992.

Relevant Code

import java.awt.Color;
import java.util.Enumeration;
import java.util.Random;
import java.util.Vector;

// Both methods below belong to the simulator class: fitFunction scores a
// vehicle each time step, and computegenome applies the genetic algorithm
// between simulation runs.

public void fitFunction(Vehicle p) {
 Vector closest;
 Vector locations = new Vector();
 SimulationParticipant temp;
 double fit = 0;
 double ang1;
 double ang2;
 double dist = 15;   // fixed separation length for vector S
 double distx = 0;
 double disty = 0;
 Location l;
 //Location weightedMovement = new Location(p.getX()+2, p.getY()+2);
 // Gather the locations of the five closest vehicles.
 closest = this.getClosestV((SimulationParticipant) p, 5);
 for (Enumeration e = closest.elements(); e.hasMoreElements();) { // "enum" is reserved in Java; renamed to e
  temp = (Vehicle) e.nextElement();
  locations.addElement(temp.getLocation());
 }
 //locations.addElement(weightedMovement);
 l = this.getMidpoint(locations);
 // Give a small reward for turning at all this time step.
 if (p.getOldDirection() != p.getDirection()) fit = 1.1;
 if (fit == 0) {
  // Vector from the previous position to the neighbor midpoint (F).
  Location old = new Location(p.getXold() - l.getX(), p.getYold() - l.getY());
  ang2 = Math.asin(old.getY() / old.getVectorLength());
  distx = dist * Math.sin(ang2);
  disty = dist * Math.cos(ang2);
  // Subtract a fixed-length offset (S) to enforce separation.
  Location fitvector = new Location(old.getX() - distx, old.getY() - disty);
  if (fitvector.getVectorLength() != 0) {
   ang2 = Math.asin(fitvector.getY() / fitvector.getVectorLength());
   Location current = new Location(p.getX() - l.getX() - old.getX(),
                                   (p.getY() - l.getY()) - old.getY());
   if (current.getVectorLength() != 0) {
    // Score the alignment of actual motion with the desired direction,
    // scaled by speed.
    ang1 = Math.asin(current.getY() / current.getVectorLength());
    ang1 -= ang2;
    fit = (Math.cos(ang1)) * p.getSpeed();
    //if (fit < 0) fit = 0;
   }
   else {
    fit = 0;
   }
  }
  else {
   fit = 0;
  }
 }
 // Standing still is penalized; accumulated fitness is clamped below.
 if (fit == 0) fit = -0.5;
 if (p.getFitness() < -900) p.setFitness(-900);
 p.updateFitness(fit);
}

public void computegenome() {
 int size = (this.getParticipants().size()) - 1;
 SimulationParticipant sims[] = new SimulationParticipant[size];
 int temp = 0;
 int third = size / 3;
 Vehicle p;
 Random rand = new Random();
 // Build an array ordered from fittest to least fit by repeatedly
 // extracting the fittest participant and marking it consumed.
 for (temp = 0; temp < size; temp++) {
  sims[temp] = this.getFitestV();
  if (sims[temp] == null) debugWindow.addText("Crashed\n");
  if (temp == 0) debugWindow.addText("Fitest: " + sims[temp].getFitness() + "\n");
  sims[temp].setFitness(-1100);
 }
 // Reset all participants' fitness values for the next round.
 for (temp = 0; temp < size; temp++) {
  sims[temp].setFitness(0.0);
 }
 // Kill the worst third and replace each with a new vehicle bred
 // 70% from a top performer, 30% from a fresh random network.
 for (temp = (third * 2); temp < size; temp++) {
  sims[temp].displayBroken(this.getGraphics());
  this.removeParticipant(sims[temp]);
  p = Vehicle.evolution1(this.getRandomX(), this.getRandomY(),
      this.getRandomDirection(), Color.blue, Color.blue, rand.nextLong());
  this.addParticipant(p);
  p.breed((Vehicle) sims[temp - (third * 2)], 0.7);
 }
 // Breed the middle third 50/50 with the top performers.
 for (temp = third; temp < size - third; temp++) {
  ((Vehicle) sims[temp]).breed((Vehicle) sims[temp - third], 0.5);
  sims[temp].displayBroken(this.getGraphics());
  sims[temp].setX(this.getRandomX());
  sims[temp].setY(this.getRandomY());
  sims[temp].setDirection(this.getRandomDirection()); // likely intent; original mistakenly called setY here
 }
 // Randomize the best performers' locations, leaving networks intact.
 for (temp = 0; temp < third; temp++) {
  sims[temp].displayBroken(this.getGraphics());
  sims[temp].setX(this.getRandomX());
  sims[temp].setY(this.getRandomY());
  sims[temp].setDirection(this.getRandomDirection()); // likely intent; original mistakenly called setY here
 }

 numsteps = 0;
}