apgsearch v5.0

For general discussion about Conway's Game of Life.
wildmyron
Posts: 1542
Joined: August 9th, 2013, 12:45 am
Location: Western Australia

Re: apgsearch v5.0

Post by wildmyron » October 8th, 2020, 10:21 am

John Goodman wrote:
October 8th, 2020, 8:50 am
Dylan Chen wrote:
August 7th, 2020, 8:07 pm

The current version of WSL does not support full GPU applications. It is said that WSL2 will be released with the later Windows 10 2004 update. In WSL2 you can fully utilize the GPU with the Linux kernel, including CUDA. Good news for GPU search and AI training users.
I finally updated to WSL 2 and tried again, but I still get the segmentation fault. But thanks anyway.
The current WSL2 version doesn't have CUDA support yet, though it does have GPU hardware support. There are builds with CUDA support available through the Windows Insider Program; currently they require a custom NVIDIA driver. See
https://docs.nvidia.com/cuda/wsl-user-guide/index.html for details and installation instructions.

I strongly recommend to anyone with personal data on their hard disk that isn't regularly backed up: do NOT use Windows Insider builds. There have been numerous serious data-loss bugs in some of these builds recently.
The 5S project (Smallest Spaceships Supporting Specific Speeds) is now maintained by AforAmpere. The latest collection is hosted on GitHub and contains well over 1,000,000 spaceships.

Semi-active here - recovering from a severe case of LWTDS.

John Goodman
Posts: 32
Joined: December 13th, 2019, 10:00 am

Re: apgsearch v5.0

Post by John Goodman » October 8th, 2020, 12:27 pm

wildmyron wrote:
October 8th, 2020, 10:21 am
The current WSL2 version doesn't have CUDA support yet, though it does have GPU hardware support. There are builds with CUDA support available through the Windows Insider Program; currently they require a custom NVIDIA driver. See
https://docs.nvidia.com/cuda/wsl-user-guide/index.html for details and installation instructions.

I strongly recommend to anyone with personal data on their hard disk that isn't regularly backed up: do NOT use Windows Insider builds. There have been numerous serious data-loss bugs in some of these builds recently.
I appreciate the information and the heads up.

wwei23

Re: apgsearch v5.0

Post by wwei23 » October 11th, 2020, 9:31 am

Try Cygwin.

wildmyron
Posts: 1542
Joined: August 9th, 2013, 12:45 am
Location: Western Australia

Re: apgsearch v5.0

Post by wildmyron » October 11th, 2020, 9:56 am

wwei23 wrote:
October 11th, 2020, 9:31 am
Try Cygwin.
To do what exactly?

In case you aren't aware, on Windows the only compiler supported by NVIDIA's CUDA compiler is cl.exe from Visual Studio. It's not possible to do a Windows build for G1 with gcc because of that lack of support, and it's not possible to build lifelib with cl.exe because of the incompatible assembly intrinsics.
The 5S project (Smallest Spaceships Supporting Specific Speeds) is now maintained by AforAmpere. The latest collection is hosted on GitHub and contains well over 1,000,000 spaceships.

Semi-active here - recovering from a severe case of LWTDS.

wwei23

Re: apgsearch v5.0

Post by wwei23 » October 11th, 2020, 9:59 am

wildmyron wrote:
October 11th, 2020, 9:56 am
wwei23 wrote:
October 11th, 2020, 9:31 am
Try Cygwin.
To do what exactly?
Build apgmera? It should be possible to use C1 instead of G1.

wildmyron
Posts: 1542
Joined: August 9th, 2013, 12:45 am
Location: Western Australia

Re: apgsearch v5.0

Post by wildmyron » October 11th, 2020, 10:16 am

wwei23 wrote:
October 11th, 2020, 9:59 am
wildmyron wrote:
October 11th, 2020, 9:56 am
wwei23 wrote:
October 11th, 2020, 9:31 am
Try Cygwin.
To do what exactly?
Build apgmera? It should be possible to use C1 instead of G1.
Right. Building apgmera for C1 is not the issue here - that works fine with Cygwin or WSL and, as can be seen on Catagolue, works well for John Goodman. Building for G1 on Windows is the issue under discussion here.
The 5S project (Smallest Spaceships Supporting Specific Speeds) is now maintained by AforAmpere. The latest collection is hosted on GitHub and contains well over 1,000,000 spaceships.

Semi-active here - recovering from a severe case of LWTDS.

Dylan Chen
Posts: 114
Joined: March 27th, 2020, 8:07 am
Contact:

Re: apgsearch v5.0

Post by Dylan Chen » December 16th, 2020, 10:09 pm

Dylan Chen wrote:
August 7th, 2020, 10:31 pm
For those who may be interested in cloud-based apgsearch but aren't familiar with the Linux command line, I'd like to share my notes on how to run apgsearch on a cloud Linux server. I hope they are helpful to you.

Apart from monthly/annual rental instances, I have recently tried spot-priced hourly instances. They have turned out to be more cost-efficient.
As I posted on Discord:
spot-priced 8-vCPU instance
speed: 47000 soups/s
price: $0.023/h

24h objects: 88 billion
24h cost: $0.55

average cost per new object discovery:
11 h & $0.255
(Poisson distribution)
And if you can manage more instances simultaneously, a cluster of 1-core instances can improve on this further. I ran 70 1-core instances for half a week and got hundreds of new discoveries, including an xp5, a linear growth, and an xp30.

Here is a bash script which compiles and runs the search. My cloud provider's console can send it to every instance and monitor the running output.

Code: Select all

#!/bin/bash
sudo apt-get update
sudo apt-get -y upgrade

# build tools and utilities (the -y flag avoids prompts on unattended instances)
sudo apt-get install -y make
sudo apt-get install -y gcc
sudo apt-get install -y g++
sudo apt-get install -y git
sudo apt-get install -y screen

git clone https://gitlab.com/apgoucher/apgmera.git
cd apgmera

./recompile.sh --profile
To have each instance submit its haul with a unique suffix:

Code: Select all

#!/bin/bash
cd apgmera
i=$(hostname -i)                 # get the instance's internal IP
num=$(( ${i##*.} + 70000000 ))   # soups per haul, suffixed with the last octet of the IP
./apgluxe -k Helloworld -v 0 -n $num

It is recommended to divide your cloud machines into different clusters, under different names, so you can compare their performance and remove the uneconomical ones.
Tools should not be the limit.
Whether your obstacle is a script, stdin, or Linux computing resources,
check the New rules thread for help.

User avatar
calcyman
Moderator
Posts: 2932
Joined: June 1st, 2009, 4:32 pm

Re: apgsearch v5.0

Post by calcyman » December 18th, 2020, 2:58 pm

I'm going to look at creating some 'unit tests' for apgsearch: soups that test the correctness of the software in the presence of obscure edge cases.

Here's one such case: an emitted NE glider collides into an eastward LWSS:

https://catagolue.hatsya.com/hashsoup/G ... 7777/b3s23

The current version of apgsearch erroneously removes the glider, thinking that it has escaped, which causes the LWSS-glider collision to never occur, and that changes the subsequent evolution such that a trans-table-on-table emerges in the ash.

The reason for this problem is that apgsearch's linear separability check for escaping gliders is currently quite weak, but it should be relatively straightforward to strengthen it.

EDIT: Fixed in commit https://gitlab.com/apgoucher/lifelib/-/ ... 8607525911
What do you do with ill crystallographers? Take them to the mono-clinic!

User avatar
PkmnQ
Posts: 1137
Joined: September 24th, 2018, 6:35 am
Location: Server antipode

Re: apgsearch v5.0

Post by PkmnQ » March 28th, 2021, 7:27 am

How do I configure it so it uses a rule other than the standard Life?

User avatar
dvgrn
Moderator
Posts: 10612
Joined: May 17th, 2009, 11:00 pm
Location: Madison, WI
Contact:

Re: apgsearch v5.0

Post by dvgrn » March 28th, 2021, 7:43 am

PkmnQ wrote:
March 28th, 2021, 7:27 am
How do I configure it so it uses a rule other than the standard Life?
Configure apgsearch 5.0? Have you read the instructions?
The options may include, for example:
...
--rule b36s245 Run the custom rule B36/S245

User avatar
PkmnQ
Posts: 1137
Joined: September 24th, 2018, 6:35 am
Location: Server antipode

Re: apgsearch v5.0

Post by PkmnQ » March 28th, 2021, 7:47 am

dvgrn wrote:
March 28th, 2021, 7:43 am
PkmnQ wrote:
March 28th, 2021, 7:27 am
How do I configure it so it uses a rule other than the standard Life?
Configure apgsearch 5.0? Have you read the instructions?
I did not. I forgot that it was on GitLab and had a README.md, because I had issues with make and g++ (which are now fixed).

User avatar
ihatecorderships
Posts: 309
Joined: April 11th, 2021, 12:54 pm
Location: Falls Church, VA

Re: apgsearch v5.0

Post by ihatecorderships » April 20th, 2021, 1:41 pm

How can you access your user page and the hauls you submit? I'm using the precompiled apgluxe for Windows, if that's needed.
-- Kalan Warusa
Don't drink and drive, think and derive.

Dylan Chen
Posts: 114
Joined: March 27th, 2020, 8:07 am
Contact:

Re: apgsearch v5.0

Post by Dylan Chen » April 20th, 2021, 6:44 pm

ihatecorderships wrote:
April 20th, 2021, 1:41 pm
How can you access your user page and the hauls you submit?
https://catagolue.hatsya.com/haul/b3s23/C1
lists the latest 100 hauls on the C1 board.

https://catagolue.hatsya.com/user/Anonymous
is the user page for 'Anonymous'; feel free to replace 'Anonymous' in the URL with your display username.
Tools should not be the limit.
Whether your obstacle is a script, stdin, or Linux computing resources,
check the New rules thread for help.

User avatar
cgoler2
Posts: 224
Joined: March 10th, 2021, 2:32 pm
Location: Living in a half-bakery

Re: apgsearch v5.0

Post by cgoler2 » April 20th, 2021, 7:04 pm

How do I get apgsearch to run on Cygwin or Ubuntu? There is an error whenever I try to run it.
Edit: The error is

Code: Select all

$ ./recompile.sh --rule b3/s2
rm -f *.o */*.o *.op */*.op *.gdca */*.gcda *.profraw *.profdata apgluxe
echo Clean done
Clean done
Not a git repository; skipping updates...
Symmetry unspecified; assuming C1.
Configuring rule b3/s2; symmetry C1
Using /usr/bin/python3 to configure lifelib...
Traceback (most recent call last):
  File "mkparams.py", line 6, in <module>
    from lifelib.genera import rule_property, genus_list
ModuleNotFoundError: No module named 'lifelib.genera'

hotdogPi
Posts: 1587
Joined: August 12th, 2020, 8:22 pm

Re: apgsearch v5.0

Post by hotdogPi » May 3rd, 2021, 10:49 am

This seems to be a way to speed up C1 while not losing any results (G1 excludes large still lifes):

This will only be done if the population remains constant. If it isn't, evaluate the normal way.

When separating objects, first separate them only by connectedness. Then run each object for one generation. If they all remain stable, you just need to identify each object without worrying about disconnected parts that are part of the same object. If at least one object is not stable, run the normal separation algorithm.

The following objects are recognized immediately and are excluded from running one generation each:
* Blinker
* Toad
* Banana spark (1/2 toad)
* Both phases of the glider

The most common case where this shortcut will be started and then aborted is either the aircraft carrier or two beacons in different phases (not sure which is more common).
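
Here's a minimal Python sketch of the shortcut described above (just an illustration of the idea, not apgsearch's actual separation code; the is_whitelisted hook standing in for the blinker/toad/banana-spark/glider special cases is a hypothetical placeholder):

Code: Select all

# Sketch of the proposed shortcut (illustration only, not apgsearch's code).
# Assumes the ash is a set of (x, y) live cells and the population has
# already been confirmed constant.
from itertools import product

NEIGHBOURS = [d for d in product((-1, 0, 1), repeat=2) if d != (0, 0)]

def step(cells):
    """One generation of B3/S23 on a set of live cells."""
    counts = {}
    for (x, y) in cells:
        for (dx, dy) in NEIGHBOURS:
            counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

def components(cells):
    """Split the ash into 8-connected components."""
    remaining, comps = set(cells), []
    while remaining:
        seed = remaining.pop()
        comp, frontier = {seed}, [seed]
        while frontier:
            x, y = frontier.pop()
            for (dx, dy) in NEIGHBOURS:
                nb = (x + dx, y + dy)
                if nb in remaining:
                    remaining.remove(nb)
                    comp.add(nb)
                    frontier.append(nb)
        comps.append(comp)
    return comps

def try_shortcut(ash, is_whitelisted):
    """Return one object per connected component, or None to fall back to
    the normal separation algorithm.  is_whitelisted() is a hypothetical
    hook that recognises blinkers, toads, banana sparks and both glider
    phases immediately, so they skip the one-generation test."""
    objects = []
    for comp in components(ash):
        if is_whitelisted(comp):
            objects.append(comp)
        elif step(comp) == comp:      # component is a still life in isolation
            objects.append(comp)
        else:                         # e.g. one half of an aircraft carrier
            return None               # abort: run the normal algorithm
    return objects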
User:HotdogPi/My discoveries

Periods discovered: 5-16,⑱,⑳G,㉑G,㉒㉔㉕,㉗-㉛,㉜SG,㉞㉟㊱㊳㊵㊷㊹㊺㊽㊿,54G,55G,56,57G,60,62-66,68,70,73,74S,75,76S,80,84,88,90,96
100,02S,06,08,10,12,14G,16,17G,20,26G,28,38,47,48,54,56,72,74,80,92,96S
217,486,576

S: SKOP
G: gun

User avatar
cgoler2
Posts: 224
Joined: March 10th, 2021, 2:32 pm
Location: Living in a half-bakery

Re: apgsearch v5.0

Post by cgoler2 » May 6th, 2021, 2:53 pm

cgoler2 wrote:
April 20th, 2021, 7:04 pm
How do I get apgsearch to run on Cygwin or Ubuntu? There is an error whenever I try to run it.
Edit: The error is

Code: Select all

$ ./recompile.sh --rule b3/s2
rm -f *.o */*.o *.op */*.op *.gdca */*.gcda *.profraw *.profdata apgluxe
echo Clean done
Clean done
Not a git repository; skipping updates...
Symmetry unspecified; assuming C1.
Configuring rule b3/s2; symmetry C1
Using /usr/bin/python3 to configure lifelib...
Traceback (most recent call last):
  File "mkparams.py", line 6, in <module>
    from lifelib.genera import rule_property, genus_list
ModuleNotFoundError: No module named 'lifelib.genera'

User avatar
calcyman
Moderator
Posts: 2932
Joined: June 1st, 2009, 4:32 pm

Re: apgsearch v5.0

Post by calcyman » May 7th, 2021, 6:20 am

cgoler2 wrote:
May 6th, 2021, 2:53 pm
cgoler2 wrote:
April 20th, 2021, 7:04 pm
How do I get apgsearch to run on Cygwin or Ubuntu? There is an error whenever I try to run it.
Edit: The error is

Code: Select all

$ ./recompile.sh --rule b3/s2
rm -f *.o */*.o *.op */*.op *.gdca */*.gcda *.profraw *.profdata apgluxe
echo Clean done
Clean done
Not a git repository; skipping updates...
Symmetry unspecified; assuming C1.
Configuring rule b3/s2; symmetry C1
Using /usr/bin/python3 to configure lifelib...
Traceback (most recent call last):
  File "mkparams.py", line 6, in <module>
    from lifelib.genera import rule_property, genus_list
ModuleNotFoundError: No module named 'lifelib.genera'
The problem here is that you haven't downloaded the repository properly. It uses submodules, so you need to install git and use:

Code: Select all

git clone https://gitlab.com/apgoucher/apgmera.git
rather than trying to download the repository as a ZIP file or whatever.
What do you do with ill crystallographers? Take them to the mono-clinic!

User avatar
cgoler2
Posts: 224
Joined: March 10th, 2021, 2:32 pm
Location: Living in a half-bakery

Re: apgsearch v5.0

Post by cgoler2 » May 7th, 2021, 7:51 am

There seems to be an error when cloning anything from GitLab.

User avatar
dvgrn
Moderator
Posts: 10612
Joined: May 17th, 2009, 11:00 pm
Location: Madison, WI
Contact:

Re: apgsearch v5.0

Post by dvgrn » May 7th, 2021, 8:00 am

cgoler2 wrote:
May 7th, 2021, 7:51 am
There seems to be an error cloning anything on Gitlab.
If you don't say anything about what the error is, there's no hope of anyone being able to help you solve it. Quote the text of the error that you're seeing.

EDIT:
dvgrn wrote: Quote the text of the error that you're seeing.
(and make a step-by-step list walking through the exact steps you took when you received that error).

User avatar
cgoler2
Posts: 224
Joined: March 10th, 2021, 2:32 pm
Location: Living in a half-bakery

Re: apgsearch v5.0

Post by cgoler2 » May 7th, 2021, 8:06 am

It said that the domain is wrong.

etmoonshade
Posts: 11
Joined: May 8th, 2021, 11:48 pm

Re: apgsearch v5.0

Post by etmoonshade » May 9th, 2021, 12:03 am

Two questions (and a lot of subquestions :V ):
First, is there any value in feeding apgsearch more memory to increase efficiency anywhere? (and if so, would it be a possible command line switch?)

Second, is there a point (other than the 1MB max size of a haul) where it's better to go for a smaller haul? I'm currently doing 100M/40min or so, and based on the listed size of the data, I could probably multiply that by 10 or 15 and stay under that 1MB limit. The question is, are there downsides to doing that which I'm not considering? The only thing I can think of is loss of a large haul if I have a power outage longer than my UPS can handle, which isn't out of the question.

Conversely, are there major bonuses to going for the significantly larger haul? I've read things in the past about there being limits and costs for I/O related to this, but nothing that looks recent (other than the "if you can go larger, do it" thing in the tutorial.)

User avatar
LaundryPizza03
Posts: 2297
Joined: December 15th, 2017, 12:05 am
Location: Unidentified location "https://en.wikipedia.org/wiki/Texas"

Re: apgsearch v5.0

Post by LaundryPizza03 » May 9th, 2021, 12:40 am

testitemqlstudop wrote:
May 19th, 2020, 1:44 am
LaundryPizza03 wrote:
May 18th, 2020, 11:30 pm
How can I automatically search large batches of rules using apgsearch? In particular, how can I auto-skip non-apgsearchable rules in a batch (whether due to explosiveness or a tendency to form large clumps), or automatically determine apgsearchability a priori?
Fundamentally impossible to do any faster than 5 rules every 30~60 seconds, because lifelib needs recompiling every 5 rules (it can compile to run 5 rules at once as a maximum).
How do I compile apgsearch for searching multiple rules?

Code: Select all

x = 4, y = 3, rule = B3-q4z5y/S234k5j
2b2o$b2o$2o!
LaundryPizza03 at Wikipedia

User avatar
calcyman
Moderator
Posts: 2932
Joined: June 1st, 2009, 4:32 pm

Re: apgsearch v5.0

Post by calcyman » May 9th, 2021, 6:09 am

etmoonshade wrote:
May 9th, 2021, 12:03 am
Two questions (and a lot of subquestions :V ):
First, is there any value in feeding apgsearch more memory to increase efficiency anywhere? (and if so, would it be a possible command line switch?)
Not when running CPU-only, no. When running on a GPU it benefits from having extra GPU memory (because it can run more soups in parallel and better saturate the hardware), but there's no need to manually intervene; it examines the amount of free memory on the device and allocates accordingly.
Second, is there a point (other than the 1MB max size of a haul) where it's better to go for a smaller haul? I'm currently doing 100M/40min or so, and based on the listed size of the data, I could probably multiply that by 10 or 15 and stay under that 1MB limit.
The space occupied by a haul is proportional to the number of distinct objects, which is sublinear in the number of soups. Empirically, the power-law exponent is roughly log(2)/log(5), meaning that searching 5x more soups gives 2x larger data size. The largest haul (submitted by Pavgran) is 40G soups and occupied 757 kilobytes:

https://catagolue.appspot.com/haul/b3s2 ... 8278896bd8
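As a rough illustration of that scaling, here is a back-of-the-envelope sketch anchored to Pavgran's haul above (purely an estimate; actual sizes depend on the rule and symmetry):

Code: Select all

from math import log

# Haul size grows roughly as soups^(log 2 / log 5), i.e. 5x the soups
# gives about 2x the data.  Anchored to the 40G-soup, 757 kB haul above.
ALPHA = log(2) / log(5)          # ~0.43
REF_SOUPS, REF_KB = 40e9, 757.0

def estimated_haul_kb(soups):
    return REF_KB * (soups / REF_SOUPS) ** ALPHA

for soups in (1e8, 1e9, 1e10, 4e10):
    print(f"{soups:.0e} soups -> ~{estimated_haul_kb(soups):.0f} kB")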
The question is, are there downsides to doing that which I'm not considering? The only thing I can think of is loss of a large haul if I have a power outage longer than my UPS can handle, which isn't out of the question.
There are various things that have caused haul loss in the past: power outages, hardware failures (someone had a computer with all of the hallmarks of faulty RAM, yet insisted on trying to run huge hauls), unexpected restarts (e.g. automatic updates), and terminating apgsearch midway through a large haul (sometimes it gets into a state where it isn't possible to cleanly interrupt it such that it uploads the partial haul to Catagolue). Also, peer verification occurs at the granularity of entire hauls, so if your haul has been verified by an incompatible version of apgsearch, then this can cause the haul to be rejected. (This isn't the same as permanent loss, because I occasionally trawl through rejected hauls to see whether they're genuinely bad or just the result of minor differences between apgsearch versions.)
Conversely, are there major bonuses to going for the significantly larger haul? I've read things in the past about there being limits and costs for I/O related to this, but nothing that looks recent (other than the "if you can go larger, do it" thing in the tutorial.)
Not really. That advice in the tutorial was written because there were cases where people would run small hauls (uploaded every few seconds), typically in rules that produce many many distinct objects (e.g. the Day&Night cellular automaton), and this would exhaust Catagolue's server quota and cause downtime until the next daily billing reset (around 08:00 UTC).

Better advice would be 'if you're submitting hauls at most once every 30 minutes, then there's no need to increase your haul size any further'.
What do you do with ill crystallographers? Take them to the mono-clinic!

etmoonshade
Posts: 11
Joined: May 8th, 2021, 11:48 pm

Re: apgsearch v5.0

Post by etmoonshade » May 9th, 2021, 12:42 pm

calcyman wrote:
May 9th, 2021, 6:09 am
Better advice would be 'if you're submitting hauls at most once every 30 minutes, then there's no need to increase your haul size any further'.
Thanks for the insight on this. I assume your worry is still overall I/O, so if I submit 4 concurrent hauls per 2 hours, that's mostly the same as 1 haul per 30 minutes as far as you're concerned? Other than the larger total data size than if I'd done a single large haul on a single machine, of course.

An idea, while I'm thinking of it: A service that can run somewhere and "bundle" hauls together for transmission. The service receives hauls over the network, and then queues them until you've got about 1MB (or another configurable amount) worth of data, then fires it off for validation/etc. It could even do some pre-processing on the hauls and merge them together - I'm not sure how validation works though, and if that would affect it.

My current setup is that I've got a home server with 64 mostly-idle cores, so I have plenty of room to spin up virtual machines. Since I start them relatively close together, they're all likely to submit at the same time - I want to do as much work as possible, but I don't want to "break Catagolue," as I've so often read. ;)

(note that on my system, 1 VM doing 62 threads is actually 20k soups/second slower than 4 VMs doing 15 threads - I could probably get even more efficiency if I go to 8 VMs at 7 or 8 threads each. I'd swear I saw some graphs somewhere...)

I'm probably going to bump each one to doing 150M/haul anyway though, just to make sure I'm within that "average" of 30+ minutes per haul.

Also, pardon the minor mess of "smaller" hauls (50M, lol) I'm making. It's not once every couple of seconds, but I keep getting more and more ideas on how to make this more efficient with what I have on hand, so I'm having to kill active searches - and I don't want to just lose the work. I'm using test mode while I'm messing with things, mostly. Honest. :)

Edit:
Now that I've got everything set up correctly, I've found the following based on the reported averages:
62 cores * 1 virtual machine, 62 cores total: ~40k soups/second (s/s)
15 * 4, 60c: ~55k s/s (I think - didn't keep this data specifically)
7 * 8, 56c: ~75k s/s

So with my specific setup (an Epyc 7302 server at 32c/64t, running Hyper-V) I get some pretty significant performance boosts by using fewer cores per virtual machine vs. more, AND I'm not even using as many cores. I think that going too much lower would be silly, but it's interesting nonetheless.

Dylan Chen
Posts: 114
Joined: March 27th, 2020, 8:07 am
Contact:

Re: apgsearch v5.0

Post by Dylan Chen » May 9th, 2021, 8:09 pm

etmoonshade wrote:
May 9th, 2021, 12:42 pm
62 cores * 1 virtual machine, 62 cores total: ~40k soups/second (s/s)
15 * 4, 60c: ~55k s/s (I think - didn't keep this data specifically)
7 * 8, 56c: ~75k s/s
So with my specific setup (an Epyc 7302 server at 32c/64t, running Hyper-V)
For your reference, apgsearch running under WSL on my 4600H (6c12t) can reach 8k soups/s on a single thread, and around 60k in total multi-threaded.
Some tiny 1c1G servers I rent from the cloud (with AVX-512) can reach 9k+. You can check https://catagolue.hatsya.com/haul/b3s23 ... oud_Debian
where each different number suffix stands for a 1c1G cloud server.

The haul size shouldn't be a problem; it does not grow linearly with the soup count. And the stability of Catagolue is fairly sound - it withstood an 'attack' by 70 tiny servers submitting hauls of 0.11 billion soups each (haul size 53 kB).
The interval I set for C1 is usually 2h+, and for G1 0.5h.
Tools should not be the limit.
Whether your obstacle is a script, stdin, or Linux computing resources,
check the New rules thread for help.

Post Reply