Original post date: January 9, 2026
Last modified: January 28, 2026
Self-Evolving Cellular Automata
I made a Game of Life-style simulation that showcases how relatively complex mechanisms that compete with each other can arise from a few simple rules.
The simulation
The slider controls where randomness is applied. Changing it has the biggest effect at the start of the simulation. Over longer runs, areas with randomness tend to develop more static mechanisms. You’ll notice that new mechanisms mainly form where randomness is introduced, then quickly spread into the area with no randomness. Try resetting the simulation; each time, the behaviour will be slightly different.
As the mechanisms spread, they automatically start "competing" with each other: the mechanisms that gather energy most effectively occupy the cells and persist, replacing the less effective ones.
Background
The Python script I used to test the ideas outlined on this page is hosted in a GitHub repository, along with a number of extras. You get a lot more complexity by running it with multiple parallel channels instead of the single channel I use here, and visually interesting behaviour emerges at lower spread powers; it's all in the repository. I also included a very minimal version of the script, which is more closely aligned with the version running on this page.
Theory
We start with a grid of cells that update in discrete steps.
All cells are computed in parallel, but we'll look at the progression from the perspective of a single focal cell (index 1). Each cell interacts with its 8 immediate neighbors, forming a group of 9, indexed as in the image to the left.
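As a concrete sketch (in the spirit of the repository's Python script, not its actual code), the 9 group energies for every cell can be gathered at once using shifted copies of the grid. The member ordering and the wrap-around boundary here are my own illustrative choices:

```python
import numpy as np

def group_energies(E):
    """Stack each cell's 3x3 group energies into a (9, H, W) array.

    Member ordering (row-major over offsets) and toroidal wrap-around
    are illustrative assumptions, not the page's exact conventions.
    """
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    # Rolling by -offset aligns each group member's energy with the focal cell.
    return np.stack([np.roll(E, (-dy, -dx), axis=(0, 1)) for dy, dx in offsets])
```

With this layout, index 4 (offset (0, 0)) is the focal cell itself, and each of the other 8 slices holds the energy of one neighbor, aligned cell-by-cell with the focal grid.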
At each step the focal cell has a non-negative energy scalar $E \geq 0$, a transfer matrix $W \in \mathbb{R}^{9 \times 9}$, and a bias vector $\mathbf{b} \in \mathbb{R}^9$. Both $W$ and $\mathbf{b}$ flow with the energy and get updated by incoming energy.
To compute how energy spreads, stack the 9 group energies into $\mathbf{e} \in \mathbb{R}^9$. We use an affine transformation followed by a nonlinearity to produce a non-negative spread:

$$\mathbf{s} = \mathrm{ReLU}(W\mathbf{e} + \mathbf{b}) + \epsilon$$

We use the rectified linear unit, $\mathrm{ReLU}(x) = \max(0, x)$, so the spread is non-negative without constraining $W$. The small $\epsilon > 0$ prevents the all-zero case. The focal cell then distributes its energy in proportion to $\mathbf{s}$: group member $i$ receives the fraction $s_i / \sum_j s_j$.
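For a single cell, this spread step might look like the following sketch (the function name, the exact value of $\epsilon$, and the normalization into fractions are my assumptions; the repository has the real version):

```python
import numpy as np

EPS = 1e-6  # the small epsilon preventing an all-zero spread

def spread_fractions(W, b, e):
    """Fraction of the focal cell's energy sent to each of the 9 group
    members: ReLU(We + b) + eps, normalized to sum to 1."""
    s = np.maximum(W @ e + b, 0.0) + EPS
    return s / s.sum()
```

Because of the ReLU plus $\epsilon$, the fractions are strictly positive and the normalization is always well defined.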
This is done in parallel for all cells, accounting for walls or blocked cells, and summed to get the energy grid at the next step.
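Combining the per-cell fractions over the whole grid, one parallel energy step can be sketched like this (walls are omitted; I use wrap-around boundaries instead, under which total energy is conserved):

```python
import numpy as np

def energy_step(E, frac):
    """One parallel update of the energy grid.

    E: (H, W) energies; frac: (9, H, W) spread fractions per cell
    (each column frac[:, y, x] sums to 1). Scatters every cell's
    outgoing energy to its 9 group members and sums the arrivals.
    """
    out = frac * E  # (9, H, W): energy each cell sends to group member k
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    nxt = np.zeros_like(E)
    for k, (dy, dx) in enumerate(offsets):
        nxt += np.roll(out[k], (dy, dx), axis=(0, 1))  # deliver to the neighbor
    return nxt
```

Since every cell's fractions sum to 1, energy is only moved, never created, which is what makes occupying cells a zero-sum competition.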
The mechanism moves with the energy: the transfer-matrix update is proportional to incoming energy flow. Let $f_i$ denote the energy that arrives at the focal cell from group member $i$. We form a weighted average of the neighboring matrices:

$$W' = \frac{\sum_{i=1}^{9} f_i^2\, W_i}{\sum_{i=1}^{9} f_i^2 + \epsilon}$$

We square the energies when weighting, which emphasizes directions with higher energy flow. The bias vector is updated in the same way. The same $\epsilon$ keeps the denominator non-zero.
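The energy-squared weighted average can be sketched as follows for one cell (the shapes and the einsum are my own framing of the update above):

```python
import numpy as np

EPS = 1e-6  # keeps the denominator non-zero

def mix_mechanisms(Ws, f):
    """Blend the 9 group members' transfer matrices into the focal
    cell's new matrix, weighting each by its squared incoming energy.

    Ws: (9, 9, 9) group matrices; f: (9,) incoming energies.
    """
    w = f ** 2  # squaring emphasizes directions with higher flow
    return np.einsum('i,ijk->jk', w, Ws) / (w.sum() + EPS)
```

When essentially all incoming energy comes from one member, the focal cell effectively copies that member's matrix, which is how mechanisms travel with the energy.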
Finally, we inject randomness when energy is low. The idea is that empty or low-energy cells should mutate their mechanisms, while high-energy cells preserve them:

$$W_{\text{new}} = \alpha\, W' + (1 - \alpha)\, \sigma N, \qquad \alpha = \frac{E'}{E' + \epsilon}$$

where $E'$ is the cell's energy at the next step and $N$ is a matrix of standard normal noise. Here $\sigma$ sets the overall noise scale. When $E'$ is small, $\alpha$ is near zero and randomness dominates; as energy grows, the incoming mechanisms take over. The same blend is applied to $\mathbf{b}$.
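A sketch of the blend (the gate $\alpha = E / (E + \epsilon)$ is one plausible choice consistent with the description; the page's exact gate may differ):

```python
import numpy as np

EPS = 1e-6

def inject_noise(W_mixed, energy, sigma, rng):
    """Blend the incoming mechanism with Gaussian noise.

    Near-zero energy -> mostly noise (mutation); high energy -> the
    mixed mechanism is preserved. The gate form is an assumption.
    """
    alpha = energy / (energy + EPS)
    noise = sigma * rng.standard_normal(W_mixed.shape)
    return alpha * W_mixed + (1.0 - alpha) * noise
```

A completely empty cell keeps re-rolling a random mechanism until energy arrives, at which point the arriving mechanism takes over and is preserved.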
To initialize the system, we sample $W$ and $\mathbf{b}$ at random and place a chunk of energy in the middle. In the live simulation above, randomness is injected in a controllable region, and mechanisms spread and compete as energy flows.
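Initialization can be sketched as follows (the grid shape and the amount of seed energy are arbitrary illustrative values):

```python
import numpy as np

def init_state(height, width, rng):
    """Random transfer matrices and biases everywhere, one chunk of
    energy in the middle of an otherwise empty grid."""
    Ws = rng.standard_normal((height, width, 9, 9))
    bs = rng.standard_normal((height, width, 9))
    E = np.zeros((height, width))
    E[height // 2, width // 2] = 100.0  # the energy "chunk" (arbitrary size)
    return E, Ws, bs
```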