KittyTac wrote: What about 100kx100k?
Divide Hooloovoo's estimates by 100, as a first guess -- one tick every four seconds. But to distribute the effort, no matter what the size is, you'd need unrealistically fast communication between the computers in the distributed network.
Let's say each computer gets a 4096-row strip of the full pattern, and furthermore adjacent strips overlap by 2048 rows, so each cell in the full pattern is represented on two CPUs.
Since information in Life propagates at most one cell per tick, after each strip is calculated 1024 ticks into the future, only the central 2048xN part of the strip is still guaranteed correct -- and that entire central band will have to be communicated over the network. The computer handling the adjacent strip above will need to know about the upper 1024xN half of that 2048xN section, and the computer handling the adjacent strip below will need the other 1024xN chunk.
So in aggregate, if I'm thinking about this right, we're communicating over the network an amount of data equal to the size of the entire pattern, every 1024 ticks. If we make the strips taller we can wait proportionally longer before doing that communication -- but by doing that we're also making the simulation less distributable.
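Here's a scaled-down sketch of the scheme, just to check the "central rows stay valid" reasoning. All the sizes are toy numbers I've picked for illustration (8-row strips, 4-row overlap, 2-tick bursts standing in for 4096/2048/1024), and the naive `step` function is mine, not anybody's real implementation:

```python
# Toy version of the overlapping-strip scheme: because Life information
# travels at most one cell per tick, after TICKS generations only the
# rows at least TICKS away from a strip's artificial edges can be
# trusted -- exactly the band each computer would ship to its neighbors.
import random

def step(grid):
    """One Life generation on a list-of-lists grid (dead boundary)."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n = sum(grid[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx)
                    and 0 <= y + dy < h and 0 <= x + dx < w)
            nxt[y][x] = 1 if n == 3 or (n == 2 and grid[y][x]) else 0
    return nxt

random.seed(1)
H, W, STRIP, OVERLAP, TICKS = 24, 16, 8, 4, 2   # toy sizes, not 100k
grid = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]

# Reference: run the whole pattern on one "computer".
ref = grid
for _ in range(TICKS):
    ref = step(ref)

# Distributed version: strips start every STRIP - OVERLAP rows, so
# adjacent strips share OVERLAP rows, and each runs with no neighbors.
for top in range(0, H - STRIP + 1, STRIP - OVERLAP):
    strip = [row[:] for row in grid[top:top + STRIP]]
    for _ in range(TICKS):
        strip = step(strip)
    # Only rows >= TICKS from the strip's borrowed edges are valid.
    for i in range(TICKS, STRIP - TICKS):
        assert strip[i] == ref[top + i], "central rows should match"
print("central rows of every strip agree with the full simulation")
```

The assertions pass because the stale edge region grows by exactly one row per tick, which is also why the exchange has to happen at least every 1024 ticks in the full-size version.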
Ultimately the obvious way to avoid this over-communication problem is to make the strip as tall as the entire pattern. In other words, it probably makes more sense to limit this kind of distributed simulation to multiple cores all sharing the same memory -- that way there's no huge amount of data to communicate across the low-bandwidth Internet.
-- This is all basically talking about a distributed QuickLife-type simulation. There are some more possible shortcuts available for distributed HashLife, but here again, it's a difficult problem even just to synchronize multiple cores so that they can all access and modify the same hashtable without stepping on each other's toes.
With separate computers on a distributed network communicating via the Internet, it seems as if each computer would really need a complete copy of the hashtable for the entire pattern. If each computer keeps only a piece of the hashtable, you won't get HashLife's famous exponential speedup -- every lookup would need a round trip across the network.
But if a single computer can store the entire hashtable, it's not clear what work is left that can be usefully distributed -- it's probably faster to have one computer do the work, and dodge all the communication and synchronization issues... It's always possible that there's something clever I'm not thinking of, though!
There are certainly special-case patterns that could be successfully partitioned to run well on a distributed network, but it seems to me those are just the obvious highly organized cases where different parts of the pattern don't interact with each other very much.