ScriptBots is an open-source evolutionary artificial life simulation of predator-prey dynamics, originally written by Andrej Karpathy.

Note: I stopped developing this project in ~2011, but I am leaving this page here as a record of what it was about.
Julian Hershey maintains a still-active fork of the project under the name Evagents.

The simulation can currently be found:
- as a Visual C++ project in the Downloads section [deprecated, very old version]
- on Github:
  - Andrej Karpathy's original code (now uses CMake, but mostly intended for Ubuntu development)
  - David Coleman's fork with a number of changes, aimed at running the simulation distributed on a supercomputer
  - Jim Allanson's conversion of the entire project to Javascript, published on Github here; it runs in any browser with canvas support
  - a much simpler, bare-bones Javascript/Canvas simulation, which also exists here for educational purposes
  - another independent but same-spirited implementation, made by Tyler here
If you fork and develop something interesting, I will add it here...

Below are some pictures, videos, and notes about how the simulation works. These are by now slightly outdated, but I am leaving them here because they still convey the basic idea.


Version 3 description


Version 4 release

Bots can specialize in processing plant food or meat, and can therefore become herbivores or carnivores.

Bots can hunt each other. They can communicate by flashing colors and also shouting at each other.

Bots can choose to share food with each other, which allows for altruism.

Both sexual and asexual reproduction are implemented.

When a bot dies, food is distributed evenly to bots around the event. This leads to the emergence of both scavengers and hunting packs.
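The distribute-on-death rule above can be sketched as follows. This is a hypothetical illustration, not the actual ScriptBots code: the `Bot` struct, function name, and constants are all assumptions.

```cpp
#include <cmath>
#include <vector>

// Illustrative bot: position plus a health/food reserve.
struct Bot { float x, y, health; };

// When a bot dies at (dx, dy), every bot within `radius` of the event
// receives an equal share of the released food.
void distributeOnDeath(std::vector<Bot>& bots, float dx, float dy,
                       float foodValue, float radius) {
    std::vector<int> nearby;
    for (int i = 0; i < (int)bots.size(); ++i) {
        float ddx = bots[i].x - dx, ddy = bots[i].y - dy;
        if (std::sqrt(ddx * ddx + ddy * ddy) < radius) nearby.push_back(i);
    }
    if (nearby.empty()) return;              // no one close enough to feed
    float share = foodValue / nearby.size(); // split evenly among neighbors
    for (int i : nearby) bots[i].health += share;
}
```

Because the share shrinks as more bots crowd the corpse, there is a payoff both for lone scavengers arriving first and for packs that can reliably produce corpses.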

Every bot has 20 sensors, 9 actuators, a set of internal variables, and a "brain".


Every agent has 20 sensors that range from 0->1:

  • 3 eyes (2 in front, 1 in back), each with Red, Green, Blue, and Proximity sensors
  • sound sensor (responds to the number of other agents around this agent)
  • smell sensor (responds to the amount of movement around this agent)
  • food sensor (how much food is at the feet of this agent?)
  • health sensor (how healthy is this agent?)
  • 2 clock sensors (fluctuate in activity over time at different frequencies)
  • hearing sensor (used to listen to other agents' shouting)
  • blood sensor (a bot can judge the health of the bot ahead of it)
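One way to picture the 20-dimensional sensor vector is as a fixed layout of named slots. The enum below is a hypothetical sketch (names and ordering are my own, not the real ScriptBots layout); the clock-sensor helper assumes a simple sinusoid, which matches the description of activity fluctuating at different frequencies.

```cpp
#include <array>
#include <cmath>

// Hypothetical layout of the 20-sensor input vector: 3 eyes x 4 channels,
// then the 8 remaining scalar senses. All values are clamped to [0, 1].
enum SensorIndex {
    EYE0_R = 0, EYE0_G, EYE0_B, EYE0_PROX,  // front eye 1
    EYE1_R,     EYE1_G, EYE1_B, EYE1_PROX,  // front eye 2
    EYE2_R,     EYE2_G, EYE2_B, EYE2_PROX,  // back eye
    SOUND, SMELL, FOOD, HEALTH,
    CLOCK1, CLOCK2, HEARING, BLOOD,
    NUM_SENSORS                             // == 20
};

using SensorVector = std::array<float, NUM_SENSORS>;

// Assumed clock sensor: oscillates in [0, 1] with a bot-specific period,
// so the two clocks can run at different frequencies.
float clockSensor(int tick, float period) {
    return 0.5f * (1.0f + std::sin(tick / period));
}
```

With a layout like this, the brain only ever sees a flat vector of floats; which physical sense each slot corresponds to is fixed by convention.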


Every agent has 9 actuators ranging from 0->1 that change over time based on brain activity:

  • wheel 1 and wheel 2 speeds
  • amount of Red, Green, Blue to emit
  • spike length (used to kill other bots)
  • boost (used to travel much faster, but at the cost of a lot of food)
  • sound multiplier: bots can be very noisy, or very silent and sneaky, through this
  • give actuator: when turned on, this bot is willing to share its health with others
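The two wheel actuators suggest a differential-drive movement model: the mean of the two wheel speeds pushes the bot forward, and their difference turns it. The sketch below is a minimal illustration under that assumption; the struct, function, and constants are invented for this example, not taken from the actual code.

```cpp
#include <cmath>

struct Pose { float x, y, angle; };

// Differential-drive update: wheel outputs in [0, 1] are mapped to wheel
// speeds; equal speeds drive straight, unequal speeds turn the bot.
// maxSpeed and botRadius are illustrative constants.
Pose step(Pose p, float wheel1, float wheel2) {
    const float maxSpeed = 1.0f, botRadius = 10.0f;
    float v1 = wheel1 * maxSpeed, v2 = wheel2 * maxSpeed;
    p.angle += (v2 - v1) / (2.0f * botRadius); // turn from speed difference
    float fwd = 0.5f * (v1 + v2);              // forward from mean speed
    p.x += fwd * std::cos(p.angle);
    p.y += fwd * std::sin(p.angle);
    return p;
}
```

A boost actuator would simply scale `maxSpeed` up for a tick while charging the bot extra food, which is why boosting is fast but expensive.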


  • Every agent also has N internal variables ranging from 0->1 that can be thought of as neurons
  • The brain is essentially a recurrent neural network, though not exactly (the precise formalism doesn't matter much)
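A recurrent brain in this spirit can be sketched as follows: each internal neuron reads from the sensors and from every neuron's previous-tick value, squashing the weighted sum into (0, 1). This is a minimal illustrative model, not the actual ScriptBots brain, whose wiring differs.

```cpp
#include <cmath>
#include <vector>

// Minimal recurrent network sketch. state[i] is the i-th internal
// variable in (0, 1); it feeds back into the next tick, which is what
// makes the network recurrent.
struct Brain {
    int n;                                  // number of internal neurons
    std::vector<std::vector<float>> w;      // w[i][j]: weight from input j to neuron i
    std::vector<float> state;               // current neuron activations

    void tick(const std::vector<float>& sensors) {
        std::vector<float> input = sensors; // inputs = sensors ++ previous state
        input.insert(input.end(), state.begin(), state.end());
        std::vector<float> next(n);
        for (int i = 0; i < n; ++i) {
            float sum = 0.0f;
            for (size_t j = 0; j < input.size(); ++j)
                sum += w[i][j] * input[j];
            next[i] = 1.0f / (1.0f + std::exp(-sum)); // sigmoid squash to (0, 1)
        }
        state = next;                       // recurrence: visible next tick
    }
};
```

The actuators would then be read off some designated subset of `state`, and evolution operates by mutating the weight matrix `w` when bots reproduce.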

Feel free to join our Google Group below and discuss the simulation, ask questions, or post some of your own forks and improvements!

Discussion Forum