apgsearch v1.0
Re: apgsearch v1.0
Good news: the latest version of HTTPS Everywhere has an entry for Catagolue (by way of the Google App Engine). HTTPS version here.
Princess of Science, Parcly Taxel
Code: Select all
x = 31, y = 5, rule = B2-a/S12
3bo23bo$2obo4bo13bo4bob2o$3bo4bo13bo4bo$2bo4bobo11bobo4bo$2bo25bo!
- Alexey_Nigin
- Posts: 326
- Joined: August 4th, 2014, 12:33 pm
- Location: Ann Arbor, MI
- Contact:
Re: apgsearch v1.0
What will happen if anyone decides to investigate B8/S012345678?
There are 10 types of people in the world: those who understand binary and those who don't.
Re: apgsearch v1.0
Alexey_Nigin wrote:What will happen if anyone decides to investigate B8/S012345678?
Good point... I think it would cause quite a lot of junk traffic...
- gameoflifeboy
- Posts: 474
- Joined: January 15th, 2015, 2:08 am
Re: apgsearch v1.0
Alexey_Nigin wrote:What will happen if anyone decides to investigate B8/S012345678?
I once tried searching B/S0123. I searched 3881100 soups and got 912554 different objects. The progress file was 40237 KB, and the haul never actually seemed to be uploaded to Catagolue.
A similar rule would be LongLife. I once censused LongLife and found 16641 different objects after searching just 17600 soups (this was pre-Catagolue).
B8/S012345678 would be even worse, because every single soup would likely produce a different object. However, like my B/S0123 census, the haul might be too big to upload (is there a size limit?).
Re: apgsearch v1.0
gameoflifeboy wrote:However, like my B/S0123 census, the haul might be too big to upload (is there a size limit?).
There's a universal limit of one megabyte for Datastore entities, and each haul is uploaded as an entity, so hauls larger than one megabyte would just throw an unhandled exception (a much more desirable outcome than filling the Catagolue with lots of uninteresting junk!).
My logs don't appear to contain any size-limit exceptions, however. They do contain some DeadlineExceededExceptions from 2015-03-28, and another from 2015-03-18; it's possible that trying to upload a really large haul would take longer than the 60-second request time limit.
Anyway, your huge file would even exceed the 32-megabyte limit on requests sent to and from the server, so you should have received a 413. However, my logs don't contain any 413s whatsoever, so I guess apgsearch just choked before it could even attempt to send all of that data to Catagolue.
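As an illustration of the limits just described, a client could guard against oversized uploads before attempting them. This is purely a hypothetical sketch (the function name and payload handling are assumptions, not apgsearch's actual code); only the two limits themselves — 1 MB per Datastore entity and 32 MB per request — come from the post above.

```python
# Hypothetical pre-upload guard against the limits described above.
# Neither the function name nor the payload format comes from apgsearch;
# only the two limits (1 MB Datastore entity, 32 MB request) are real.

DATASTORE_ENTITY_LIMIT = 1 * 1024 * 1024   # 1 MiB per Datastore entity
REQUEST_SIZE_LIMIT = 32 * 1024 * 1024      # 32 MiB per App Engine request

def can_upload_haul(haul_text: str) -> bool:
    """Return True only if the serialised haul fits both limits."""
    size = len(haul_text.encode("utf-8"))
    return size <= DATASTORE_ENTITY_LIMIT and size <= REQUEST_SIZE_LIMIT

# A 40237 KB haul, as in the B/S0123 census described earlier, exceeds
# even the 32 MiB request limit, so the check fails:
print(can_upload_haul("x" * (40237 * 1024)))  # False
```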
What do you do with ill crystallographers? Take them to the mono-clinic!
- praosylen
- Posts: 2449
- Joined: September 13th, 2014, 5:36 pm
- Location: Pembina University, Home of the Gliders
- Contact:
Re: apgsearch v1.0
Any idea why apgsearch did not successfully separate the pseudo still life that results from this soup?
Code: Select all
x = 16, y = 16, rule = B3/S23
3obo2b3ob2o2bo$5bo2b6o$o2bobo4b2o3bo$o3bob4o2bo$4bo3bobo2bo$o2bob2o2b
2o2bobo$o4b3obobobobo$bo2b2o7b3o$o3bo3b2ob2ob2o$2bobo5b4obo$ob3o3bobob
o$b3ob5o2b4o$6bobo3b2obo$2b2o2b3obo2bo$5b2ob3obobo$b2o7bo2b2o!
former username: A for Awesome
praosylen#5847 (Discord)
The only decision I made was made
of flowers, to jump universes to one of springtime in
a land of former winter, where no invisible walls stood,
or could stand for more than a few hours at most...
Re: apgsearch v1.0
A for awesome wrote:Any idea why apgsearch did not successfully separate the pseudo still life that results from this soup?
Code: Select all
x = 16, y = 16, rule = B3/S23
3obo2b3ob2o2bo$5bo2b6o$o2bobo4b2o3bo$o3bob4o2bo$4bo3bobo2bo$o2bob2o2b
2o2bobo$o4b3obobobobo$bo2b2o7b3o$o3bo3b2ob2ob2o$2bobo5b4obo$ob3o3bobob
o$b3ob5o2b4o$6bobo3b2obo$2b2o2b3obo2bo$5b2ob3obobo$b2o7bo2b2o!
Congratulations! You found a bug in the search program!
Princess of Science, Parcly Taxel
Code: Select all
x = 31, y = 5, rule = B2-a/S12
3bo23bo$2obo4bo13bo4bob2o$3bo4bo13bo4bo$2bo4bobo11bobo4bo$2bo25bo!
- Extrementhusiast
- Posts: 1966
- Joined: June 16th, 2009, 11:24 pm
- Location: USA
Re: apgsearch v1.0
Is it possible to label the soups by symmetry (and rule, if needed)? Color-coding them gets confusing for me, especially when the soups aren't completely sorted. I was thinking of something like this:
Code: Select all
b3s23/C1: • • • • • • • • • • • • • • • • • • • •
b3s23/C2_1: • • • • • • • • • •
b3s23/C2_2: • • • • •
b3s23/C2_4: • • • • • • • • • • • • • • •
(and so on)
I Like My Heisenburps! (and others)
Re: apgsearch v1.0
A for awesome wrote:Any idea why apgsearch did not successfully separate the pseudo still life that results from this soup:
Oh, it erroneously assumes that if there is an empty space adjacent to three 'A' cells and at least one 'B' cell, then 'A' has a dependency on 'B'.
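As a hypothetical reconstruction (not apgsearch's actual source), the heuristic described above can be sketched as follows. One way the inference can go wrong: if the empty cell in fact touches four or more 'A' cells, no birth would occur there in B3/S23 even with 'B' removed entirely, so the inferred dependency is spurious.

```python
# Hypothetical sketch of the merging heuristic described above (not
# apgsearch's actual source): a dead cell adjacent to at least three cells
# of cluster A and at least one cell of cluster B is taken as evidence
# that A depends on B to suppress a birth there.

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def assumes_dependency(dead_cell, cluster_a, cluster_b):
    """The (erroneous) test: does this dead cell make A look dependent on B?"""
    near_a = len(neighbours(dead_cell) & cluster_a)
    near_b = len(neighbours(dead_cell) & cluster_b)
    return near_a >= 3 and near_b >= 1

# The spurious case: the dead cell (1, 1) touches FOUR A-cells, so no
# birth occurs there in B3/S23 regardless of B, yet the heuristic still
# links the two clusters.
cluster_a = {(0, 0), (1, 0), (2, 0), (0, 1)}  # four A-cells around (1, 1)
cluster_b = {(1, 2)}                          # one nearby B-cell
print(assumes_dependency((1, 1), cluster_a, cluster_b))  # True
```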
What do you do with ill crystallographers? Take them to the mono-clinic!
Re: apgsearch v1.0
I feel like some modifications could be made to the scoring system so that scores better correlate with the excitement. Common patterns that evolve into more than one "exciting" object don't add much - most of the 38-pt soups are from the same 2-pi reaction that ends up with 2 spark coils, and a large number of the other 38 pointers are from the smiley reaction I posted in useless discoveries that makes an 18-bit and a 22-bit symmetric SL.
One could have a set of known pairs of patterns (for B3S23 at least..) that, if both are present, score an amount less than the sum of the individual scores. This would lower the scores of soups that create both objects independently but by the numbers game, it might still be a good thing.
Additionally, still lifes that are named often get scores of 0, but unnamed SLs get a score based on their bit count - in the range of 13-bitters to about 19-bitters, I feel this is a bit generous. Compare xs15_09v0ccz321 (hook-join-table-with-block) to xs17_2ege1t6zx11 (looks like it should be called paperclip with tail), for example - the former has been seen 7k times and scores 0, the latter 15k times and scores 17. The score caps also seem a bit odd. Once the Catagolue has reached some threshold, do you think the scoring system could be updated to actually use relative abundance? Perhaps the client-side program could flag a small subset of the soups as "potentially high-scoring" without stopping computation, and a server-side program could actually score them relative to a fixed backup of the Catagolue?
All that said, making a scoring system that relies on too much external info is awkward and inefficient, and making one that is speedy and nonintrusive is almost always going to have some cases where it is unfit. In all, I would not mind scores being eliminated altogether..
Edit: Also, despite the awesomeness of the MWSS on MWSS 1, it definitely shouldn't be worth 50 points. Maybe 35.
Edit2: I am, as of tonight, running 21 instances of apgsearch. 16 are on 4 high-performance 2.9GHz computers in my school's library. 3 are on my own 2GHz laptop, saving me 1 core, and 2 are on a shared/abused computer in the dorm's lounge, probably only making about 150 soups/sec each. Totals about 5700/sec.
Physics: sophistication from simplicity.
Re: apgsearch v1.0
I'm intending to make a future release of apgsearch download the scores from Catagolue (calculated from relative abundance via the usual formula).
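The "usual formula" isn't spelled out in this thread, so the following is purely an assumption for illustration, not Catagolue's actual scoring code: one natural relative-abundance score is the (rounded) base-2 logarithm of rarity, which gives common objects scores near zero and rare ones high scores.

```python
import math

# Hypothetical relative-abundance score (an assumption for illustration;
# the thread does not state Catagolue's actual formula): rarer objects
# score higher, on a logarithmic scale, with very common objects near zero.

def abundance_score(occurrences: int, total_soups: int) -> int:
    """Score an object by how rare it is among all soups searched."""
    if occurrences <= 0:
        raise ValueError("object was never seen")
    return max(0, round(math.log2(total_soups / occurrences)))

# With 20e9 soups searched, an object seen 15,000 times would score 20
# under this formula, while one seen a billion times would score 4.
print(abundance_score(15_000, 20_000_000_000))         # 20
print(abundance_score(1_000_000_000, 20_000_000_000))  # 4
```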
it definitely shouldn't be worth 50 points. Maybe 35.
I agree about the disproportionate MWSS-on-MWSS score. The scoring system in apgsearch was based on the assumption that any non-standard spaceship would be super-exciting (since Okrasinski's search found no such objects, most probably due to an inadequacy in the census program). It is clear a posteriori that two-*WSS flotillae also occur with a reasonable frequency, so their scores should be modified accordingly (especially the MWSS-on-MWSS, which is much more common than the fumarole, so probably deserves a score of about 32).
I am, as of tonight, running 21 instances of apgsearch.
I'm currently running 50 instances of apgsearch (19 dual-core and 3 quad-core machines). However, I don't have the freedom of being able to have them running 24/7 uninterrupted, so you'll probably overtake me. I think the time-averaged throughput of the Catagolue currently equates to about 20 CPUs running continually, which is why my slice of the pie chart increased by an entire percentage point overnight. Still a long way from catching up with Michael Simkin's slice, and even further from the disjoint union of Dave Greene's two slices.
Tom Rokicki could probably overtake everyone, but at the moment his computing power is being applied to the Minsky Stock Index (by Gosper, Bickford, and Ziegler-Hunts^2). Andrew Trevorrow mainly submitted symmetrical soups, rather than b3s23/C1, and others are concentrating on alternate rules.
3 are on my own 2GHz laptop, saving me 1 core
Saving one core isn't really helpful, from what I gather; Golly is very submissive and relinquishes resources as soon as your computer needs to do anything else. There's no noticeable performance impact whilst web browsing, for example, despite having four instances quietly churning away in the background.
In all, I would not mind scores being eliminated altogether..
Possibly, although I found Dave's natural instance of Elkies' p5 by looking at the unvisited browser links to high-scoring soups. Admittedly someone else could have found it even more easily by checking the /statistics page. I'm contemplating using the Twitter API (my friends used it at a Hackathon last summer) so that Catagolue can tweet new discoveries. At the moment, though, my current work on Catagolue is implementing the verification process to prevent malicious attempts to vandalise b3s23/C1.
The scoring system was implemented ab initio in an attempt to make a viable cryptocurrency, although it seems simpler and more robust to just have the 'target' be a specific oscillator, which changes depending on the total amount of estimated hashpower. It's unlikely that Lifecoin will be sufficiently popular that (for a concrete example) fumaroles become too common to be used as the proof-of-work for generating blocks*.
* As in blockchain blocks, not copies of xs4_33. A fumarolechain would be awesome, though...
It's hard to believe that all of this stemmed from the 'CAcoin' suggested in a forum thread 10 months ago:
http://conwaylife.com/forums/viewtopic. ... 92&start=0
What do you do with ill crystallographers? Take them to the mono-clinic!
Re: apgsearch v1.0
calcyman wrote:I'm contemplating using the Twitter API (my friends used it at a Hackathon last summer) so that Catagolue can tweet new discoveries.
I don't use Twitter, but if that gets implemented, I would definitely make an account.
I think the comments here are mixed... (I think it would be the same with zzs and xp2_2a54.)
https://catagolue.appspot.com/object/PATHOLOGICAL/b3s23
Re: apgsearch v1.0
Is there any concern of catagolue choking from too many hauls being committed? I just added 56 3.1 GHz instances to run at roughly 1M soups between uploads (found out how to get it working on another set of underused school computers!)
Physics: sophistication from simplicity.
Re: apgsearch v1.0
Is there any concern of catagolue choking from too many hauls being committed? I just added 56 3.1 GHz instances to run at roughly 1M soups between uploads (found out how to get it working on another set of underused school computers!)
I've just doubled the frequency of the cron task (which periodically commits hauls) from every 30 minutes to every 15 minutes, so hauls can now be dequeued at a rate of 400 per hour (and enqueued however quickly you like, since Google creates extra instances of the server to cope with increased load). Impressively, the last 100 hauls span a period of about 64 minutes, compared with the usual period of 30 hours.
I can't imagine it will take long for you to jump into the lead on the pie chart. Are those 56 instances *in addition* to the 21 you described earlier? If so... wow.
EDIT: According to /statistics, there are now 8001 different object types observed in b3s23/C1.
What do you do with ill crystallographers? Take them to the mono-clinic!
Re: apgsearch v1.0
Yes, a computer lab here that goes unused in the daytime has a wall of excellent quad-core machines. I probably lost a few of the initial 21 from people rebooting to switch OS, but I imagine I still have over 70 instances in total running.
Also, the average haul was about 3.5M last I checked, while my hauls are averaging 1M, so the 64 minutes figure is slightly less impressive than at first glance.
Physics: sophistication from simplicity.
Re: apgsearch v1.0
I've got another feature request: please add a census for pseudo-still-lifes, pseudo-oscillators and pseudo-spaceships (such as LWSS on LWSS). They don't have to be part of the general census, but it would be very nice if soups producing them were indexed on their object pages.
Ivan Fomichev
Re: apgsearch v1.0
And we've reached the 20B soup marker! Less than 24 hours ago we were around 18.5B. I will do what I can to keep my army of instances running, and since it seems they go largely unhindered, I'll up the time between commits to around 8M, which should make the haul list less overrun with bwbigmacs.
EDIT: Okay, I have now 100 known and undisturbed instances running at between 8M and 10.4M soups per upload. All are making over 300 soups/second, giving roughly 2.6B soups per day. I also have a number of sporadic instances running on unattended computers. We could probably double the current census within a week.
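The throughput figures above check out arithmetically; a quick sanity check using only the numbers reported in this post:

```python
# Sanity-check of the reported throughput: 100 instances at 300+ soups/sec.
instances = 100
soups_per_second_each = 300
seconds_per_day = 86_400

soups_per_day = instances * soups_per_second_each * seconds_per_day
print(soups_per_day)  # 2592000000, i.e. roughly 2.6B soups per day

# At that rate, matching the existing ~20B-soup census takes about 7.7
# days, consistent with "double the current census within a week" once
# the sporadic extra instances are counted too.
print(20_000_000_000 / soups_per_day)  # ~7.7 days
```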
Physics: sophistication from simplicity.
Re: apgsearch v1.0
And we've reached the 20B soup marker! Less than 24 hours ago we were around 18.5B. I will do what I can to keep my army of instances running, and since it seems they go largely unhindered, I'll up the time between commits to around 8M, which should make the haul list less overrun with bwbigmacs.
Thanks!
Okay, I have now 100 known and undisturbed instances running at between 8M and 10.4M soups per upload.
So let me guess: you've defined a bijection f from your set of 25 quad-core computers to the set {0, 1, ..., 24}, and then set four instances on each computer M to each commit (80 + f(M)) * 10^5 soups between uploads?
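The staggering scheme guessed at here can be written out directly; a minimal sketch (using the formula exactly as quoted, with f taken to be the identity for simplicity):

```python
# The staggering scheme described above: computer M in {0, ..., 24} commits
# (80 + f(M)) * 10^5 soups between uploads, so no two machines' four
# instances upload on the same schedule.

def commit_size(machine_index: int) -> int:
    """Soups per upload for machine M, taking f to be the identity."""
    return (80 + machine_index) * 10**5

sizes = [commit_size(m) for m in range(25)]
print(min(sizes), max(sizes))  # 8000000 10400000, i.e. 8M to 10.4M
```

Each machine getting a distinct commit size is what spreads the uploads out: 25 machines, 25 different intervals, spanning exactly the 8M-10.4M range reported.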
I don't know how many instances of mine (out of the original 36) are still running, but the room containing the machines is currently being repainted so it's unlikely they will be disturbed any time soon.
All are making over 300 soups/second, giving roughly 2.6B soups per day.
...which is about 60 * 10^9 objects per day. This is a clear order of magnitude larger than the average baseline load on Catagolue. So at the moment Lifecoin is definitely _not_ a viable cryptocurrency, since a single person has well over half of the computing power!
We could probably double the current census within a week.
Exciting! I wonder how many more surprises we'll encounter this month (Elkies' p5 being the first example).
What do you do with ill crystallographers? Take them to the mono-clinic!
Re: apgsearch v1.0
calcyman wrote:So let me guess: you've defined a bijection f from your set of 25 quad-core computers to the set {0, 1, ..., 24}, and then set four instances on each computer M to each commit (80 + f(M)) * 10^5 soups between uploads?
Exactly.
calcyman wrote:This is a clear order of magnitude larger than the average baseline load on Catagolue. So at the moment Lifecoin is definitely _not_ a viable cryptocurrency, since a single person has well over half of the computing power!
With the current intersection between level of interest and available computing power, yes. People sometimes use the array here to mine BTC, but because interest in BTC is so much higher, that contribution is barely a drop in the bucket. The equalizer would thus be to increase interest in apgsearch (and Catagolue's ability to handle it), and Lifecoin's viability would return.
Physics: sophistication from simplicity.
Re: apgsearch v1.0
@calcyman How about keeping track of feature requests in UserVoice or a similar tool?
Ivan Fomichev
- gameoflifeboy
- Posts: 474
- Joined: January 15th, 2015, 2:08 am
Re: apgsearch v1.0
WE DID IT GUYS!!! As of April 15, 2015, Catagolue has the largest asymmetric life census, breaking the record held by Okrasinski's screensaver census for almost ten years!
I'm so glad that everyone helped make this the biggest census of Life objects. And if anything happens to it, I have a backup I update every few days here: http://mathandnumberystuff.tumblr.com/catagolue-backup.
- gameoflifeboy
- Posts: 474
- Joined: January 15th, 2015, 2:08 am
Re: apgsearch v1.0
We're now approaching 10000 different objects. Unfortunately I am about to go to bed and won't be there to see the 10000th object, but thanks to the Twitter bot, I will know what it is.
Re: apgsearch v1.0
Hit 10K distinct objects! And over 30B soups. We should be making roughly 15B soups per week now, which is pretty exciting.
Physics: sophistication from simplicity.
Re: apgsearch v1.0
I tried B345678/S678, a Day & Night rule. And I was surprised, as masses of p2 oscillators came out and caused lag on my computer...
Call me "Dannyu NDos" in Forum. Call me "Park Shinhwan"(박신환) in Wiki.