
ScriptBots is an open-source evolutionary artificial life simulation of predator-prey dynamics, originally written by Andrej Karpathy.

The simulation can currently be found:
- as a Visual C++ project in the Downloads section [deprecated, very old version]
- on Github:
  - Andrej Karpathy's code (the original code; now uses CMake, but mostly intended for Ubuntu development)
  - David Coleman's fork, with a number of changes aimed at running the simulation distributed on a supercomputer
  - Julian Hershey's fork: several independent game changes and save/load functionality
  - NEW and AWESOME: Jim Allanson converted the entire project to Javascript and published it on Github here. It runs in any browser with canvas support.
If you fork and develop something interesting, I will add it here...

A much simpler, bare-bones simulation in Javascript with Canvas also exists here, for educational purposes.
Another independent but same-spirited implementation was made by Tyler here.

----------------------------
Below are some pictures, videos, and notes about how the simulation works. They are by now slightly outdated, but I leave them here because they still show the basic idea.


 

Version 3 description (video)

Version 4 release (video)


Bots can specialize in processing plant food or meat, and can therefore become herbivores or carnivores.

Bots can hunt each other. They can communicate by flashing colors and by shouting at each other.

Bots can choose to share food with each other (which allows for altruism).

Both sexual and asexual reproduction are implemented.
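
Neither mode is spelled out above, so here is a minimal C++ sketch of how the two could work, assuming the heritable part of a bot is simply a flat vector of brain weights. The names (Genome, reproduceAsexual, reproduceSexual) and the mutation scheme are illustrative assumptions, not the actual ScriptBots code.

#include <cstddef>
#include <random>
#include <vector>

using Genome = std::vector<float>;   // flat list of brain weights (assumed representation)

static std::mt19937 rng{42};

// Asexual reproduction: copy the parent and perturb a few weights.
Genome reproduceAsexual(const Genome& parent, float mutationRate = 0.05f,
                        float mutationSize = 0.1f) {
    std::normal_distribution<float> noise(0.0f, mutationSize);
    std::uniform_real_distribution<float> coin(0.0f, 1.0f);
    Genome child = parent;
    for (float& w : child)
        if (coin(rng) < mutationRate) w += noise(rng);
    return child;
}

// Sexual reproduction: take each weight from one of the two parents
// (uniform crossover), then mutate the result as above.
// Assumes both parents have genomes of the same length.
Genome reproduceSexual(const Genome& a, const Genome& b) {
    std::uniform_int_distribution<int> pick(0, 1);
    Genome child(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        child[i] = (pick(rng) == 0) ? a[i] : b[i];
    return reproduceAsexual(child);
}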

When a bot dies, food is distributed evenly to the bots around the event. This leads to the emergence of both scavengers and hunting packs.
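
A rough sketch of that rule follows, under the assumption that a death event splits a fixed amount of food evenly among all living bots within some radius; the type, function, and constant names here are made up for illustration.

#include <algorithm>
#include <cmath>
#include <vector>

struct Bot {
    float x = 0.0f, y = 0.0f;
    float health = 1.0f;
    bool alive = true;
};

// Split a fixed amount of food evenly among all living bots near a corpse.
void distributeMeat(std::vector<Bot>& bots, const Bot& corpse,
                    float radius = 100.0f, float totalMeat = 1.0f) {
    std::vector<Bot*> nearby;
    for (Bot& b : bots) {
        if (!b.alive) continue;
        float dx = b.x - corpse.x, dy = b.y - corpse.y;
        if (std::sqrt(dx * dx + dy * dy) < radius) nearby.push_back(&b);
    }
    if (nearby.empty()) return;

    // Being close when something dies is what makes scavenging and
    // pack hunting pay off.
    float share = totalMeat / static_cast<float>(nearby.size());
    for (Bot* b : nearby)
        b->health = std::min(2.0f, b->health + share);  // cap at an assumed max health of 2
}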


Every bot has 20 sensors, 9 actuators, a set of internal variables, and a "brain".
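
As a data structure, that layout might look roughly like the sketch below. The sizes (20 inputs, 9 outputs) come from the text; the field names and the fixed hidden-state size are assumptions.

#include <array>

constexpr int NUM_SENSORS   = 20;  // inputs, each in [0, 1]
constexpr int NUM_ACTUATORS = 9;   // outputs, each in [0, 1]
constexpr int NUM_HIDDEN    = 30;  // "internal variables" / neurons (size assumed)

struct Agent {
    std::array<float, NUM_SENSORS>   in{};      // written by the senses every tick
    std::array<float, NUM_ACTUATORS> out{};     // read by the physics every tick
    std::array<float, NUM_HIDDEN>    hidden{};  // persistent internal state
    // ... plus position, health, color, spike length, and so on
};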

INPUT DETAILS

Every agent has 20 sensors that range from 0->1 (one possible index layout is sketched right after the list):

  • 3 eyes (2 in front, 1 in back), each with Red, Green, Blue, and Proximity sensors
  • sound sensor (responds to the number of other agents around this agent)
  • smell sensor (responds to the amount of movement around this agent)
  • food sensor (how much food is at the feet of this agent?)
  • health sensor (how healthy is this agent?)
  • 2 clock sensors (fluctuate in activity over time at different frequencies)
  • hearing sensor (used to listen to other agents' shouting)
  • blood sensor (a bot can judge the health of the bot ahead of it)
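
One way to lay those 20 inputs out as array indices is sketched below. The ordering is an assumption made for illustration; only the sensor types and the total of 20 come from the list above.

enum SensorIndex {
    // 3 eyes x (R, G, B, proximity) = 12 inputs
    EYE0_R, EYE0_G, EYE0_B, EYE0_PROX,   // front eye 1
    EYE1_R, EYE1_G, EYE1_B, EYE1_PROX,   // front eye 2
    EYE2_R, EYE2_G, EYE2_B, EYE2_PROX,   // rear eye
    SOUND,        // how many agents are nearby
    SMELL,        // how much movement is nearby
    FOOD,         // food on the patch under the agent
    HEALTH,       // own health
    CLOCK1,       // oscillates at one frequency
    CLOCK2,       // oscillates at another frequency
    HEARING,      // other agents' shouting
    BLOOD,        // health of the bot directly ahead
    SENSOR_COUNT  // == 20
};
static_assert(SENSOR_COUNT == 20, "sensor count should match the description");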

OUTPUT DETAILS

Every agent has 9 actuators, ranging from 0->1, that change over time based on brain activity (an index layout and a wheel-drive sketch follow the list):

  • wheel 1 and wheel 2 speed
  • amount of Red, Green, Blue to emit
  • spike length (used to kill other bots)
  • boost (used to travel much faster, but at the cost of a lot of food)
  • sound multiplier: bots can be very noisy, or very silent and sneaky, through this
  • give actuator: when turned on, this bot is willing to share its health with others
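
The 9 outputs can be laid out the same way, together with a guess at how the two wheel actuators could move the bot as a differential drive (equal wheel speeds go straight, unequal speeds turn). The index order, wheel separation, and speed scaling are all assumptions for illustration.

#include <cmath>

enum ActuatorIndex {
    WHEEL_LEFT, WHEEL_RIGHT,   // wheel speeds
    RED, GREEN, BLUE,          // color to display
    SPIKE,                     // spike length
    BOOST,                     // speed boost (costs extra food)
    SOUND_MUL,                 // how loud the agent is
    GIVE,                      // willingness to share health
    ACTUATOR_COUNT             // == 9
};

struct Pose { float x, y, angle; };

// Move one tick given the two wheel actuator values in [0, 1].
Pose step(Pose p, float wheelLeft, float wheelRight,
          float botRadius = 10.0f, float maxSpeed = 2.0f) {
    float vl = wheelLeft  * maxSpeed;
    float vr = wheelRight * maxSpeed;
    p.angle += (vr - vl) / (2.0f * botRadius);  // turning rate from the speed difference
    float v = 0.5f * (vl + vr);                 // forward speed from the average
    p.x += v * std::cos(p.angle);
    p.y += v * std::sin(p.angle);
    return p;
}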

BRAIN DETAILS

  • Every agent also has N internal variables ranging from 0->1 that can be thought of as neurons
  • The brain is essentially a recurrent neural network. But not really (though it doesn't really matter). A rough sketch of the general idea is below.
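
For concreteness, here is a minimal sketch of a fully recurrent update in that spirit: every neuron reads a weighted sum of all sensors and all previous activations, squashes it with a sigmoid, and the last few neurons drive the actuators. The weight layout, activation function, and sizes are assumptions; the real ScriptBots brain differs in the details.

#include <array>
#include <cmath>

constexpr int NIN  = 20;  // sensors
constexpr int NOUT = 9;   // actuators
constexpr int NHID = 30;  // internal neurons (size assumed)

struct Brain {
    // w[i] holds neuron i's incoming weights over [sensors | neurons].
    std::array<std::array<float, NIN + NHID>, NHID> w{};
    std::array<float, NHID> act{};  // persistent activations in [0, 1]

    void tick(const std::array<float, NIN>& sensors,
              std::array<float, NOUT>& actuators) {
        std::array<float, NHID> next{};
        for (int i = 0; i < NHID; ++i) {
            float sum = 0.0f;
            for (int j = 0; j < NIN; ++j)  sum += w[i][j] * sensors[j];
            for (int j = 0; j < NHID; ++j) sum += w[i][NIN + j] * act[j];
            next[i] = 1.0f / (1.0f + std::exp(-sum));  // sigmoid keeps values in [0, 1]
        }
        act = next;
        // By convention in this sketch, the last NOUT neurons drive the actuators.
        for (int k = 0; k < NOUT; ++k) actuators[k] = act[NHID - NOUT + k];
    }
};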

Feel free to join our Google Group below and discuss the simulation, ask questions, or post some of your own forks and improvements!

Discussion Forum