- Class 1 automata settle into a homogeneous state, independently of the initial conditions
- Class 2 automata rapidly settle into predictable patterns, either static or oscillating
- Class 3 automata show expanding growth of apparently random activity from small initial patterns
- Class 4 automata behave unpredictably, with some patterns growing, others not, and patterns interacting in complex ways.
One thing he noticed was that, in his list of elementary automata, class 4 automata would often appear between class 3 and class 2, as if lying on the boundary between chaos and order.
Although thousands of Life-like automata exist, the great majority of them fall into the first three classes, and only a handful appear to lie on this elusive boundary where interesting behaviour emerges.
With non-totalistic CA, we were able to manipulate automata at a finer level of granularity, creating many variations of very similar automata, so we could "tame" automata by tipping them towards either chaotic or orderly behaviour.
If we had a better way to fine-tune automata, we could produce much more class-4 behaviour. By using probabilistic automata, we are essentially interpolating between a variety of rules, but instead of a discrete set we have an entire continuous range of parameters to explore.
An implementation could choose, for each cell, a totalistic rule from a set with some fixed probability. Or we could extend Conway's notation with probabilities:
B2(.02)3(.90)/S2(.90)3
Here, an off cell with two neighbours has a 2% probability of becoming an on cell, and 90% if it has three neighbours instead, whereas a live cell has a 90% probability of surviving to the next generation if it has two neighbours, or 100% if it has three.
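For concreteness, here is one way that notation could be parsed into (count, probability) pairs; the function name, the regex, and the convention that a count without a parenthesised probability defaults to 100% are my own sketch, not an existing implementation:

```python
import re

def parse_prule(rule_str):
    """Parse an extended rule string like "B2(.02)3(.90)/S2(.90)3" into
    a pair (B, S), where B and S are tuples of (count, probability) pairs.
    A neighbour count with no parenthesised probability defaults to 1.0."""
    birth_part, survival_part = rule_str.split("/")
    def parse_part(part):
        # Skip the leading "B"/"S", then match each digit with an
        # optional "(probability)" suffix.
        return tuple((int(n), float(p) if p else 1.0)
                     for n, p in re.findall(r"(\d)(?:\(([.\d]+)\))?", part[1:]))
    return parse_part(birth_part), parse_part(survival_part)

parse_prule("B2(.02)3(.90)/S2(.90)3")
# → (((2, 0.02), (3, 0.9)), ((2, 0.9), (3, 1.0)))
```

The output format deliberately matches the (n, p) pairs described for pstep2 below, with the implicit 100% for S3 made explicit.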
EDIT:
I just wrote a couple of Python functions implementing probabilistic CA using the NumPy library:
Code: Select all
import numpy as np

def pstep(X, *rule_sheet):
    """Receives as arguments pairs (rule, probability), where rule is a pair
    describing a totalistic automaton, as in ((3,), (2, 3)) for CGoL."""
    # Count the eight neighbours of every cell on a toroidal grid.
    nbrs_count = sum(np.roll(np.roll(X, i, 0), j, 1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1) if (i != 0 or j != 0))
    # Accumulate, per cell, the total probability of being on next generation.
    result = np.zeros(X.shape)
    for rule, probability in rule_sheet:
        for birth_count in rule[0]:
            result += probability * ((1 - X) & (nbrs_count == birth_count))
        for survival_count in rule[1]:
            result += probability * (X & (nbrs_count == survival_count))
    # Sample the next generation from those probabilities.
    return (result > np.random.random_sample(result.shape)).astype(int)
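As a quick sanity check (the grid size and blinker test are my own, and the function is repeated so the snippet runs standalone): with a single rule at probability 1.0, pstep reduces to plain deterministic CGoL, so a blinker should oscillate with period 2:

```python
import numpy as np

def pstep(X, *rule_sheet):
    # Repeated from the post above so this demo runs standalone.
    nbrs_count = sum(np.roll(np.roll(X, i, 0), j, 1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1) if (i != 0 or j != 0))
    result = np.zeros(X.shape)
    for rule, probability in rule_sheet:
        for birth_count in rule[0]:
            result += probability * ((1 - X) & (nbrs_count == birth_count))
        for survival_count in rule[1]:
            result += probability * (X & (nbrs_count == survival_count))
    return (result > np.random.random_sample(result.shape)).astype(int)

X = np.zeros((8, 8), dtype=int)
X[3, 2:5] = 1                              # horizontal blinker
Y = pstep(X, (((3,), (2, 3)), 1.0))        # CGoL with probability 1.0
# Y now holds the blinker standing vertically: cells (2,3), (3,3), (4,3);
# a second step brings it back to X.
```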
Code: Select all
def pstep2(X, p_rule):
    """Receives as argument a pair (B, S), where B and S are tuples of pairs
    (n, p) giving the probability p of birth or survival with n neighbours."""
    # Count the eight neighbours of every cell on a toroidal grid.
    nbrs_count = sum(np.roll(np.roll(X, i, 0), j, 1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1) if (i != 0 or j != 0))
    # Per-cell probability of being on next generation.
    result = (
        sum(probability * ((1 - X) & (nbrs_count == birth_count))
            for birth_count, probability in p_rule[0]) +
        sum(probability * (X & (nbrs_count == survival_count))
            for survival_count, probability in p_rule[1])
    )
    # Sample the next generation from those probabilities.
    return (result > np.random.random_sample(result.shape)).astype(int)
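And a usage sketch for pstep2 (the soup size and seed are my own choices, and the function is repeated so the snippet runs standalone), encoding the B2(.02)3(.90)/S2(.90)3 rule from above:

```python
import numpy as np

def pstep2(X, p_rule):
    # Repeated from the post above so this demo runs standalone.
    nbrs_count = sum(np.roll(np.roll(X, i, 0), j, 1)
                     for i in (-1, 0, 1) for j in (-1, 0, 1) if (i != 0 or j != 0))
    result = (sum(probability * ((1 - X) & (nbrs_count == birth_count))
                  for birth_count, probability in p_rule[0]) +
              sum(probability * (X & (nbrs_count == survival_count))
                  for survival_count, probability in p_rule[1]))
    return (result > np.random.random_sample(result.shape)).astype(int)

# B2(.02)3(.90)/S2(.90)3, with the implicit 100% for S3 written explicitly.
p_rule = (((2, 0.02), (3, 0.90)), ((2, 0.90), (3, 1.00)))

np.random.seed(0)                                    # reproducible soup
X = (np.random.random_sample((64, 64)) < 0.5).astype(int)
Y = pstep2(X, p_rule)        # Y is again a 0/1 grid of the same shape
```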