Kazyan wrote:
gmc_nxtman wrote: Sorry if this sounds a bit silly, but... why would it need to do that?
It's possible to falsify data if that's not done.
Indeed. This is also useful for testing major changes to apgsearch to ensure that they produce accurate data.
For example, I'm writing a C++ version of apgsearch (b3s23/C1-only and highly optimised), which so far appears to be almost twice as fast as the Python version. It doesn't need a GUI installed on the machine and can easily be executed from a bash script, making it ideal for machines designed for heavy computation. (In particular, Tom Rokicki has several such machines.)
gmc_nxtman wrote: Also, how is it possible to just have massive amounts of hauls going at once? Why is biggiemac filling 80% of the pie chart? I have 6 instances of the search running 99% of the time...
He had 25 quad-core computers, each core of which was running an instance of apgsearch, so 100 instances in total. These ran full-time for about 7 weeks, for a grand total of 120,000 CPU-hours of work.
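For the curious, the arithmetic behind that total works out as follows (a quick back-of-the-envelope sketch; the 7-week figure is approximate, which is why the quoted total is rounded to 120,000):

```python
# Sanity-check of the CPU-hour total quoted above.
computers = 25          # quad-core machines
cores_per_computer = 4  # one apgsearch instance per core
weeks = 7               # approximate full-time duration

instances = computers * cores_per_computer   # 100 instances in total
hours_per_instance = weeks * 7 * 24          # 7 weeks = 1176 hours
cpu_hours = instances * hours_per_instance

print(instances, cpu_hours)  # 100 117600 -- consistent with "about 120,000"
```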