Idea for generating a large-scale census
Before Geocities closed, I saved a copy of Andrzej Okrasinski's Life Screensaver, which generates statistics from random soups. Its key feature is that separate censuses can be produced for different densities over the same area, and these can then be combined into one final census.
My idea is that if several people run a copy of the screensaver (keeping track of who is already using which density) and the results files are combined every so often, a very large set of statistics could be generated, and a possibility for natural constructions of rare objects to be found.
The only problem I can see is that the allowed densities are limited to whole numbers between 10 and 50, meaning at most about 40 people could generate data at once (unless the 'random seed' number affects that; I'm not sure how it works).
http://soakerhoo.110mb.com/lifess15.zip : Here is a backup of the installer for the screensaver.
http://soakerhoo.110mb.com/lf020n.pdf : Here is the census originally on the site, generated by the software.
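If the results files boil down to per-object counts (an assumption on my part; I haven't checked the actual file format), combining censuses from several users is just a matter of summing counts per object type. A minimal sketch:

```python
from collections import Counter

def merge_censuses(*censuses):
    """Sum per-object counts from several partial censuses.

    Each census is a dict mapping object name -> count, e.g. parsed
    from one user's results file. The format here is hypothetical;
    the saver's real results files may differ.
    """
    total = Counter()
    for census in censuses:
        total.update(census)
    return dict(total)

# Two hypothetical partial censuses run at different densities
density_20 = {"block": 3120, "blinker": 2475, "glider": 410}
density_30 = {"block": 2980, "blinker": 2610, "loaf": 590}
combined = merge_censuses(density_20, density_30)
# Shared objects are summed; objects seen by only one user keep their count
```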
Re: Idea for generating a large-scale census
Hi All,
The initial seed indeed helps to avoid overlap between the areas (within the same edge and density) searched by the saver on different machines.
But a user must be careful not to run into the initial seed of other users; as far as I remember, the program does not check for this.
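Since the program does not check for seed collisions itself, a group of users could coordinate by assigning each person a disjoint slice of the seed space up front. A sketch of the idea (the range arithmetic is mine, not anything the saver does):

```python
def seed_range(user_index, num_users, seed_bits=32):
    """Split the full seed space into disjoint, near-equal ranges,
    one per user, so no two users ever start from the same seed.
    Returns a half-open interval [lo, hi)."""
    space = 1 << seed_bits
    lo = user_index * space // num_users
    hi = (user_index + 1) * space // num_users
    return lo, hi

# With 8 users over a 32-bit seed space, the ranges tile the space exactly
ranges = [seed_range(i, 8) for i in range(8)]
```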
It is nice to hear that someone uses this software.
In the results I posted on Geocities (the site has since closed), I covered 20x20 patterns with even densities in the range 18..46.
Computers keep getting faster, so if I updated the saver I would implement 64-bit seeds instead of 32-bit.
That would practically solve the problem of overlapping, at least until quantum computers arrive.
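The reason 64-bit seeds practically eliminate accidental overlap is the birthday problem: with k randomly chosen seeds from a space of 2^b, the chance that any two coincide is roughly 1 - exp(-k(k-1)/2^(b+1)). A quick back-of-the-envelope check:

```python
import math

def collision_probability(num_seeds, seed_bits):
    """Approximate birthday-problem probability that any two of
    num_seeds independently chosen random seeds coincide."""
    space = float(2 ** seed_bits)
    # P(no collision) is approximately exp(-k*(k-1) / (2*N))
    return 1.0 - math.exp(-num_seeds * (num_seeds - 1) / (2.0 * space))

# A million randomly seeded runs: a collision is near-certain at 32 bits,
# but vanishingly unlikely at 64 bits.
p32 = collision_probability(10**6, 32)
p64 = collision_probability(10**6, 64)
```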
Regards,
Andrzej Okrasiński
Re: Idea for generating a large-scale census
ohennel wrote: My results posted on geocities (site closed) I covered 20x20 patterns with even densities in range 18..46.
Are your results still being updated anywhere now that Geocities has closed? The version I have saved lists 8747 object types; is there a newer one?
Also, on the subject of an updated version of the screen saver, would it be difficult/possible to allow the saver to operate using different rules (eg. 2x2 or B3/S2456)?
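Supporting other rules would at minimum mean parsing rulestrings like B3/S23 or B3/S2456 into birth and survival sets. That small piece is easy to sketch (this is my own illustration, not the saver's code):

```python
def parse_rulestring(rule):
    """Parse a B.../S... rulestring into (birth, survival) sets of
    neighbour counts, e.g. "B3/S23" -> ({3}, {2, 3})."""
    birth_part, survival_part = rule.upper().split("/")
    birth = {int(d) for d in birth_part.lstrip("B")}
    survival = {int(d) for d in survival_part.lstrip("S")}
    return birth, survival

b, s = parse_rulestring("B3/S2456")
```

A cell is then born when its live-neighbour count is in the birth set, and survives when the count is in the survival set; the harder part, as noted later in the thread, is all the standard-rule-specific census logic around it.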
tysonlee12
Re: Idea for generating a large-scale census
Historically, the one big piece of famous hardware for this is the Connection Machine. There's a great article about Richard Feynman's involvement here:
Re: Idea for generating a large-scale census
I stopped generating results long ago (life turbulence)...
But I am going to continue.
Plans
I just completed making sources compatible with 64-bit Visual C++ 2022, and implemented 64-bit seeding.
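For 64-bit seeding, one common building block is a splitmix64-style generator; the sketch below is only an illustration of that well-known mixer (the saver's actual implementation may be entirely different):

```python
MASK64 = (1 << 64) - 1  # keep all arithmetic modulo 2**64

def splitmix64(state):
    """One step of the splitmix64 generator: advance the state by a
    fixed odd constant, then scramble it with a bijective finalizer.
    Returns (output, next_state)."""
    state = (state + 0x9E3779B97F4A7C15) & MASK64
    z = state
    z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK64
    return z ^ (z >> 31), state

out, st = splitmix64(0)
```

Because both the state update and the finalizer are bijections on 64-bit integers, distinct starting seeds can never produce colliding state sequences.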
Next will be, I think, counting 288- and 384-period puffers in the census (currently such soups fall in the "Game failed" basket).
I also think of resolving cis- and trans- queen bee shuttles (Achim's census resolves them, so it will be interesting to compare).
If time and wife allow, this will all be done promptly.
After that I think of a version that employs more than a single CPU. Using modern C++, this should not be too difficult either.
As for implementing rules other than the standard one: there is too much strange standard-rule-specific code (it is there to cope with problems I found while developing the census code), so this is not my priority.
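The multi-CPU idea above could look roughly like this: partition the soup seeds across workers and merge their partial counts at the end. This sketch uses a toy stand-in for the real soup search, and threads rather than processes so it stays self-contained (real CPU-bound census work would want separate processes or native threads in C++):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def census_chunk(seeds):
    """Toy stand-in for the real soup search: pretend each seed's soup
    settles into one object whose type depends on the seed."""
    names = ["block", "blinker", "glider"]
    counts = Counter()
    for seed in seeds:
        counts[names[seed % 3]] += 1
    return counts

def parallel_census(all_seeds, workers=4):
    """Strided split of the seed list across workers; partial counts
    are merged exactly as in a distributed census."""
    chunks = [all_seeds[i::workers] for i in range(workers)]
    total = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(census_chunk, chunks):
            total.update(partial)
    return total
```

Because each worker only ever touches its own disjoint seed slice, no soup is censused twice, mirroring the per-density / per-seed-range coordination discussed earlier in the thread.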
Re: Plans
ohennel wrote: ↑January 22nd, 2023, 9:42 am
I just completed making sources compatible with 64-bit Visual C++ 2022, and implemented 64-bit seeding.
Next will be, I think, counting 288- and 384-period puffers in the census (currently the soup falls in "Game failed" basket).
I think of resolving cis- and trans- queen bee shuttles (Achim's census resolves them, so it will be interesting to compare).

Have you looked at the https://catagolue.hatsya.com/statistics page, which is the largest distributed census to date? The main census (B3/S23, asymmetric 16x16 soups, CPU-only) has censused 3.8 * 10^15 objects so far. It's also the longest continuously running census, celebrating its 8th birthday next month.
There have been 3401331 trans and 3338327 cis queen bee shuttles observed so far -- i.e. 50.467% trans and 49.533% cis, with a standard error uncertainty of 0.019% -- so we can be sure that trans queen bee shuttles are slightly more common in 16x16 random soups.
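Those percentages can be reproduced directly from the quoted counts: for a binomial proportion p over n observations, the standard error is sqrt(p(1-p)/n).

```python
import math

# Queen bee shuttle counts quoted above
trans, cis = 3401331, 3338327
n = trans + cis

p_trans = trans / n                                  # about 0.50467
stderr = math.sqrt(p_trans * (1 - p_trans) / n)      # about 0.00019

# The trans excess over 50% is roughly 24 standard errors,
# which is why the trans > cis conclusion is so solid.
excess_in_sigmas = (p_trans - 0.5) / stderr
```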
Sample soups are recorded for rare objects. One of the rarest objects that we've seen is the c/7 Loafer:
Code: Select all
x = 16, y = 16, rule = B3/S23
4b2o3bo2bo$4ob2obo3b2o$4bob2ob6o$bobo3bob2ob4o$5obob4obo$3o6bo4bo$5o3b
o5bo$o5b7o2bo$b3ob4ob4obo$ob2obo2b3obo$2bo5b3o3bo$2b6o3b3o$o2bob2o2bob
obo$3b2obob2obob2o$2bo2b2obobob2o$o2bob6o!

Code: Select all
x = 16, y = 16, rule = B3/S23
7o3b2ob2o$ob3o3b2o2bo$obo4b2o2b2ob2o$2o3b2ob2ob2obo$6o2bobobo2bo$6obo
5bo$bo2bobo3bo2b3o$ob2ob2obobobobo$3b2o2b3obob2o$o3bo2bobo3b2o$5bo2bo
3bob2o$3b3o5bo$3obobo2b2o2b2o$7obo3b3o$3o2bo3b3o$b2o2bo4bob3o!

Code: Select all
x = 16, y = 16, rule = B3/S23
3o2b2obob2ob3o$2obob3o4bobo$bo2bo2bobob3obo$2bo2b2o3bo2bo$2bo5bobo3b2o
$o4b2o3b3obo$3b2o2bo2bobo2bo$2b4obo2bob2o$2ob2o2b2o5b2o$ob4obo4b3o$o3b
4o2b3o$b10o2b3o$2o3bob3obob3o$b2ob6o3bobo$obo5b4obo$3obobob2o5bo!

Code: Select all
x = 16, y = 16, rule = B3/S23
2ob3ob3ob3o$o2bo3b4obobo$obob3ob2o2bo$2obo2bob3ob3o$b5obob4ob2o$o2bo7b
2obo$ob3o3b4o$b3ob2ob3o2bo$5obo3bob2obo$4ob2ob2o2bo$4b2o2bob2ob3o$6o6b
3o$o3bo2bo2b5o$obob4obo4bo$o2bo2bo2bo2b3o$2obobobobobo3bo!

Code: Select all
x = 16, y = 16, rule = B3/S23
2obo3b2o3bob2o$4bo5bo4bo$3bo2b2ob2o2bo$obobo7bo$b2o2bobob5obo$b2ob2ob
2o3bobo$2bobob5obo2bo$b2o2bob3obob2o$b5o4b2ob3o$ob2o3bob2obo2bo$o3bob
2o2b3o$3bobobo3b4o$bob2obobob2o3bo$o2b2ob3o4b2o$2o7bobo$3o6bo2b2obo!

What do you do with ill crystallographers? Take them to the mono-clinic!