I am thinking about reducing the size of the implicit "progress graph".
When the frontline is huge, there are many permutations in which the clusters could be processed.
The "transposition tables" ... filtering of the beam prevents the search from growing exponentially ... and the defragmentation at the end chooses a nice order of the gliders.
It seems to me that adding the following asymmetry to the search would speed up the process in this case:
Split the rounds into categories (probably 2 would suffice): one category (not to be used too often) would use the unrestricted search as it is now, while the other would restrict the considered clusters to lanes in the "local neighbourhood" of the last glider lane.
So the search would tend to look at partially defragmented solutions, but that is not the goal, just a side effect. The main gain is that the restricted phases would run much faster than the unrestricted ones. I am not sure about the phase-switching strategy, but I expect there would be no two consecutive unrestricted phases. I am also not sure about the strategy for interrupting the restricted phases, but it would be a bad policy to run different restrictions in the same round depending on the position ... Especially when the beam is not much wider than the number of cores, it would lead to waiting for the unrestricted core ... . It would probably require adding a "no move" option to the beam in restricted phases.
Hmm ... maybe if a no-move appears in the unified beam ... it's time to run the unrestricted version in the next round.
(I would still restrict the restricted phases to clusters near the frontline ...)
Hmm, probably not the best idea ... max_branching_factor already prevents clusters far from the frontline from being considered, and narrowing the search to the neighbourhood of the last glider lane would probably waste a lot of time on clusters near the glider lane but far from the frontline (unless somehow filtered).
Oh, actually it could work well ... there is minidx ... filtering to clusters in the neighbourhood of the frontline.
According to the first experiments, it performs rather well ... The main advantage is in positions where the entire frontline is close to a local optimum (no incentive to continue ... the pessimism is not big enough) and where, once an improvement is found on some lane, the wide search still struggles elsewhere while the progress is near that lane ... so concentrating on the lane neighbourhood skips the regions where there is little probability of success.
It seems I have debugged the parallel loading of the beam ... the code contains the restricted search I was talking about. Debug prints are included to help with thinking about optimising the decision making ... it should probably include the clocks as well.
pslmake:
Code: Select all
#include <unistd.h>
#include <map>
#include <thread>
#include <atomic>
#include "recipe_store.hpp"
#include "object_cache.hpp"
#include "lifelib/cpads/include/cpads/random/prng.hpp"
#define BIG_SAVE_FREQ 10
namespace psl {
template<int N>
std::vector<apg::bitworld> load_file(std::string filename) {
std::vector<apg::bitworld> vbw;
apg::lifetree<uint32_t, N> lt(100);
apg::pattern pat = apg::pattern(&lt, filename);
for (int i = 0; i < N; i++) { vbw.push_back(pat.flatlayer(i)); }
return vbw;
}
/**
* This extracts a 16-bit bitmask of valid signatures from either
* a string representation of an integer or from a file containing
* gliders with those signatures.
*/
int extract_signatures(std::string signatures, int lnum) {
bool is_integer = true;
for (size_t i = 0; i < signatures.length(); i++) {
char c = signatures[i];
if ((c < '0') || (c > '9')) {
is_integer = false;
}
}
if (is_integer) {
return std::stoll(signatures);
}
apg::bitworld bw = load_file<4>(signatures)[lnum];
apg::lifetree<uint32_t, 1> lt(100);
std::vector<apg::bitworld> vbw(1, bw);
apg::pattern x(&lt, vbw, "b3s23");
// extract gliders:
x -= x[8];
sgsalvo y = dematerialise_gliders(x);
int res = 0;
for (auto& g : y) {
res |= (1 << g.signature());
}
return res;
}
int run_tests() {
apg::lifetree<uint32_t, 1> lt(1000);
apg::pattern block(&lt, "oo$oo!", "b3s23");
hh::PRNG prng(1, 2, 3);
sgsalvo x;
for (int i = 0; i < 50; i++) {
x.push_back(SlowGlider<int32_t, 3>::construct(prng.generate(), prng.generate()));
}
auto y = dematerialise_gliders(materialise_gliders(block, x));
int errors = 0;
for (int i = 0; i < 50; i++) {
if (x[i] != y[i]) {
std::cerr << "Error: " << x[i].contents << " != " << y[i].contents << std::endl;
errors += 1;
}
}
apg::classifier cfier(&lt, "b3s23");
ObjectCache oc(cfier);
apg::pattern blinker(&lt, "ooo!", "b3s23");
auto so = oc.pattern_to_so(blinker, false);
if (so.bbox[2] != 3) { std::cerr << "Error: bbox[2]=" << so.bbox[2] << " is wrong." << std::endl; errors += 1; }
if (so.bbox[3] != 3) { std::cerr << "Error: bbox[3]=" << so.bbox[3] << " is wrong." << std::endl; errors += 1; }
if (so.rows[0] != 0) { std::cerr << "Error: rows[0]=" << so.rows[0] << " is wrong." << std::endl; errors += 1; }
if (so.rows[1] != 7) { std::cerr << "Error: rows[1]=" << so.rows[1] << " is wrong." << std::endl; errors += 1; }
auto rv = so.rowvecs();
auto so2 = oc.pattern_to_so(blinker[1], false);
auto rv2 = so2.rowvecs();
if (rv.second != 9) { std::cerr << "Error: rv.second=" << rv.second << " is wrong." << std::endl; errors += 1; }
if (rv2.second != 6) { std::cerr << "Error: rv2.second=" << rv2.second << " is wrong." << std::endl; errors += 1; }
if (rv.first != rv2.first) { std::cerr << "Error: key mismatch." << std::endl; errors += 1; }
return (errors > 0);
}
struct PSLThread {
apg::lifetree<uint32_t, 1> lt;
apg::classifier cfier;
ObjectCache oc;
apg::pattern barrier;
apg::pattern clearway;
apg::pattern origin;
std::vector<apg::pattern> cached_gliders;
int valid_signatures;
std::set<int> forbidden_lanes;
int max_dimer_explore;
int max_dimer_explore_after_progress;
int max_branching_without_progress;
int max_branching_per_dimer;
int max_branching_per_block;
double pessimism;
double min_improve;
std::vector<apg::pattern> bespokes;
PSLThread() : lt(100),
cfier(&lt, "b3s23"),
oc(cfier),
barrier(&lt, "", "b3s23"),
clearway(&lt, "", "b3s23"),
origin(&lt, "", "b3s23"),
valid_signatures(65535),
max_dimer_explore(1200),
max_dimer_explore_after_progress(100),
max_branching_without_progress(250),
max_branching_per_dimer(100),
max_branching_per_block(60),
pessimism(1.1),
min_improve(0.0) {
apg::pattern eglider(&lt, "3o$o$bo!", "b3s23");
for (int i = 0; i < 8; i++) {
cached_gliders.push_back(eglider[i]);
}
}
void populate_bespokes(const BespokeRecipeStore &brs) {
apg::pattern res(&lt, "", "b3s23");
int i = 0;
std::map<uint32_t, std::vector<apg::pattern>> ordered_bespokes;
for (auto it = brs.recipes.begin(); it != brs.recipes.end(); ++it) {
auto x = oc.rows_to_pattern(it->first, 0, 0, 32, 32);
ordered_bespokes[x.totalPopulation()].push_back(x);
}
for (auto it = ordered_bespokes.rbegin(); it != ordered_bespokes.rend(); ++it) {
for (auto x : it->second) {
bespokes.push_back(x);
res += x.shift(64 * (i & 15), 64 * (i >> 4));
i += 1;
}
}
std::cerr << "#C " << bespokes.size() << " bespoke targets loaded:" << std::endl;
res.write_rle(std::cerr);
}
apg::pattern make_glider(const SlowGlider<int32_t, 3> &g, int64_t y) {
return cached_gliders[g.getPhase()].shift(g.getLane() + y, y);
}
apg::pattern b2p(const apg::bitworld &bw) {
std::vector<apg::bitworld> vbw(1, bw);
apg::pattern pat(&lt, vbw, "b3s23");
return pat;
}
ProblemState parse_pattern(const std::vector<apg::bitworld> &vbw) {
apg::pattern layer0 = b2p(vbw[0]);
apg::pattern inf = layer0 & layer0[8];
apg::pattern glider_pattern = layer0 - inf;
auto gvec = dematerialise_gliders(glider_pattern);
std::reverse(gvec.begin(), gvec.end());
apg::pattern env = to_env(inf);
if (vbw.size() >= 2) {
clearway = b2p(vbw[1]) - env;
for (size_t i = 2; i < vbw.size(); i++) {
clearway -= b2p(vbw[i]);
}
}
ProblemState ps;
ps.slow_gliders = gvec;
ps.objects = oc.objclusters(inf, bespokes);
barrier = inf;
ps.compute_hash();
ps.estimate_cost(pessimism);
return ps;
}
ProblemState parse_progress(const std::vector<apg::bitworld> &vbw) {
//std::cerr << "A";
apg::pattern layer0 = b2p(vbw[0]);
//std::cerr << "B";
apg::pattern inf = layer0 & layer0[8];
//std::cerr << "C";
apg::pattern glider_pattern = layer0 - inf;
//std::cerr << "D";
auto gvec = dematerialise_gliders(glider_pattern);
//std::cerr << "E";
std::reverse(gvec.begin(), gvec.end());
//std::cerr << "F";
ProblemState ps;
//std::cerr << "G";
ps.slow_gliders = gvec;
//std::cerr << "H";
ps.objects = oc.objclusters(inf, bespokes);
//std::cerr << "I";
ps.compute_hash();
//std::cerr << "J";
ps.estimate_cost(pessimism);
//std::cerr << "K";
return ps;
}
void set_barrier(const apg::bitworld &bw) { barrier = b2p(bw); }
void set_clearway(const apg::bitworld &bw) { clearway = b2p(bw); }
void set_origin(const apg::bitworld &bw) {
origin = b2p(bw);
origin &= origin[8];
}
apg::pattern best_candidate(const ProblemState &ps) {
apg::pattern dst = barrier;
for (uint64_t i = 0; i < ps.objects.size(); i++) {
dst += oc.so_to_pattern(ps.objects[i]);
}
sgsalvo gliders;
for (auto it = ps.slow_gliders.rbegin(); it != ps.slow_gliders.rend(); ++it) {
gliders.push_back(*it);
}
dst += materialise_gliders(dst, gliders);
return dst;
}
void save_solution(UniqueEnsurer &ue, const ProblemState &ps) {
apg::pattern res = best_candidate(ps);
ue.save_solution(res, ps.slow_gliders.size());
}
void generate_successors(const ProblemState &ps, BeamSearchContainer &bsc, UniqueEnsurer &ue, RecipeStore &rs, bool wide_search) {
bool has_origin = false;
int64_t origin_bbox[4] = {0};
if (origin.nonempty()) {
has_origin = true;
origin.getrect(origin_bbox);
}
int lastLane=0; // required only for !wide_search and initialised there
if (!wide_search) {
ProblemState ps_nochange = ps;
ps_nochange.lastdimmer_bb_to_prevlane_dist=-1; //denoting no change
bsc.try_insert(ps_nochange);
lastLane = ps.slow_gliders[ps.slow_gliders.size()-1].getLane();
}
uint64_t minidx = ps.get_minidx(480);
apg::pattern dst = barrier;
for (uint64_t i = minidx; i < ps.objects.size(); i++) {
dst += oc.so_to_pattern(ps.objects[i]);
}
if (dst != dst[8]) {
std::cerr << "Critical error: dst unstable!!!" << std::endl;
barrier.write_rle(std::cerr);
dst.write_rle(std::cerr);
for (auto& x : ps.objects) {
std::cerr << x.repr() << std::endl;
}
}
// create halo for convolution purposes:
apg::pattern halo(&lt, "ooo$ooo$ooo!", "b3s23"); halo = halo.shift(-1, -1);
apg::pattern halo2(&lt, "b3o$5o$5o$5o$b3o!", "b3s23"); halo2 = halo2.shift(-2, -2);
halo2 = halo2.convolve(halo2).convolve(halo2); // radius 6
halo2 = halo2.convolve(halo2); // radius 12
apg::pattern och2 = origin.convolve(halo2);
// decrementing counters for state-level and dimer-level branching:
int branching_remaining = max_branching_without_progress;
if (ps.objects.size() <= 4) { branching_remaining = 1200 / ps.objects.size(); }
bool made_progress = false;
bool built_bespoke = false;
for (const auto &edge : ps.sorted_dimers) {
int lastdimmer_bb_to_prevlane_dist;
SmallObject dimer = ps.objects[edge.first] + ps.objects[edge.second];
lastdimmer_bb_to_prevlane_dist = dimer.bbox[0]-dimer.bbox[1]-dimer.bbox[3]-lastLane;
if (lastdimmer_bb_to_prevlane_dist < 0) {
lastdimmer_bb_to_prevlane_dist = lastLane-dimer.bbox[0]+dimer.bbox[1]-dimer.bbox[2];
}
if (lastdimmer_bb_to_prevlane_dist < 0) {
lastdimmer_bb_to_prevlane_dist = 0;
}
if (!wide_search && (lastdimmer_bb_to_prevlane_dist > 20)) { // the constant should be optimised according to the statistics
continue;
}
int dimer_explore_remaining = max_dimer_explore;
if (made_progress) {
dimer_explore_remaining = (ps.objects.size() <= 4) ? (1200 / ps.objects.size()) : max_dimer_explore_after_progress;
}
ProblemState ps_minus_dimer = ps;
ps_minus_dimer.lastdimmer_bb_to_prevlane_dist = lastdimmer_bb_to_prevlane_dist;
ps_minus_dimer.erase({edge.first, edge.second});
// The thing that we're trying to construct:
SmallObject dso = ps.objects[edge.first];
if (edge.first != edge.second) {
dso = dso + ps.objects[edge.second];
if (oc.so_to_pattern(dso) != oc.so_to_pattern(ps.objects[edge.first]) + oc.so_to_pattern(ps.objects[edge.second])) {
std::cerr << "Non-additivity error!" << std::endl;
}
}
apg::pattern dp = oc.so_to_pattern(dso);
// Nearby objects, etc:
apg::pattern background = dst - dp;
apg::pattern nogo = background.convolve(halo) + clearway;
bool is_isolated = ((background + och2) & dp.convolve(halo2)).empty();
bool is_isolated_block = (dso.flags & FLAG_BLOCK) && is_isolated;
int dimer_branching_remaining = 0;
if (ps.objects.size() <= 4) {
dimer_branching_remaining = 1200 / ps.objects.size();
} else if (is_isolated_block) {
dimer_branching_remaining = max_branching_per_block;
} else {
dimer_branching_remaining = max_branching_per_dimer;
}
auto f = [&](const std::vector<SmallObject> &objs, const sgsalvo &gliders) {
if ((dimer_explore_remaining <= 0) || (dimer_branching_remaining <= 0) || (branching_remaining <= 0)) {
// We've already reached the maximum branching factor:
return;
}
for (const auto& g : gliders) {
if (((valid_signatures >> g.signature()) & 1) == 0) {
// ruled out due to monochromatic (or similar) constraint:
return;
}
}
uint64_t phash = ps_minus_dimer.objects_hash;
for (const auto &x : objs) { phash += x.positionalHash; }
uint32_t pcost = ps_minus_dimer.slow_gliders.size() + gliders.size();
if (ue.leq_cost(phash, pcost)) {
// already accomplished with lower cost:
return;
}
apg::pattern newobjs(&lt, "", "b3s23");
for (const auto &x : objs) { newobjs += oc.so_to_pattern(x); }
if ((newobjs & nogo).nonempty()) {
// invalid location:
return;
}
dimer_explore_remaining -= 1;
apg::pattern src = background + newobjs;
if (src != src[8]) {
// unstable:
return;
}
int64_t bbox[4] = {0}; src.getrect(bbox);
int64_t y = bbox[1] + bbox[3] + 128; y -= (y & 63);
int epochs = (bbox[3] * 4 + 2048) >> 8;
for (const auto &g : gliders) {
src += make_glider(g, y);
for (int i = 0; i < epochs; i++) {
apg::pattern src2 = src[256];
if (src2 == src) { goto stable; }
if ((src2 & background) != background) { return; }
src = src2;
}
return;
stable:
if ((src & clearway).nonempty()) { return; }
}
if (src != dst) {
// incorrect reaction:
return;
}
// compute new problem state:
ProblemState ps_succ = ps_minus_dimer;
for (const auto &x : objs) { ps_succ.append(x); }
for (auto it = gliders.rbegin(); it != gliders.rend(); ++it) {
ps_succ.slow_gliders.push_back(*it);
}
// ps_succ.compute_hash();
if (phash != ps_succ.objects_hash) {
std::cerr << "Hash inconsistency error!" << std::endl;
}
bool is_solution = (ps_succ.objects.size() == 1) && (ps_succ.objects[0].flags & FLAG_BLOCK);
if (has_origin) {
is_solution = is_solution && (ps_succ.objects[0].bbox[0] == origin_bbox[0]) && (ps_succ.objects[0].bbox[1] == origin_bbox[1]);
}
if (ue.leq_cost_update(phash, pcost, is_solution)) {
// was overtaken by a different thread:
return;
}
if (is_solution) {
save_solution(ue, ps_succ);
}
// now do the expensive spanning tree computation:
double newcost = ps_succ.estimate_cost(pessimism, has_origin, origin_bbox[0], origin_bbox[1]);
if (is_isolated && (ps_succ.remaining_cost() >= ps.remaining_cost())) { return; }
if (!made_progress) { dimer_branching_remaining -= 1; }
branching_remaining -= 1;
if (dso.flags & FLAG_BESPOKE) { made_progress = true; built_bespoke = true; }
if (newcost < ps.total_cost - min_improve) { made_progress = true; }
bsc.try_insert(ps_succ);
}; // lambda function f the recipe application
// search over all recipes:
rs.search(dso, f, is_isolated, valid_signatures);
if (branching_remaining <= 0) { break; }
}
std::cerr << (built_bespoke ? "B" : made_progress ? "P" : (branching_remaining <= 0) ? "Q" : "X") << std::flush;
}
};
void run_worker(std::atomic<uint64_t> *ctr, uint64_t n_tasks, UniqueEnsurer *ue, RecipeStore *rs, PSLThread *thread, BeamSearchContainer *bsc, const ProblemState *problems, bool wide_search) {
for (;;) {
// dequeue subtask:
uint64_t idx = (uint64_t) ((*ctr)++);
if (idx >= n_tasks) { break; }
thread->generate_successors(problems[idx], *bsc, *ue, *rs, wide_search);
}
}
bool run1iter(int beamwidth, UniqueEnsurer &ue, RecipeStore &rs, std::vector<PSLThread> &pslv, std::vector<ProblemState> &problems, std::string progressbase, int generation, bool wide_search) {
uint64_t n_tasks = problems.size();
std::atomic<uint64_t> ctr{0ull};
int parallelism = pslv.size();
std::cerr << "Processing " << n_tasks << " tasks on " << parallelism << " threads..." << std::endl;
std::vector<BeamSearchContainer> bscv(parallelism);
std::vector<std::thread> workers;
for (int i = 0; i < parallelism; i++) {
bscv[i].maxsize = beamwidth; bscv[i].max_gliders = ue.max_gliders;
workers.emplace_back(run_worker, &ctr, n_tasks, &ue, &rs, &(pslv[i]), &(bscv[i]), &(problems[0]), wide_search);
}
for (auto&& w : workers) { w.join(); }
BeamSearchContainer bsc; bsc.maxsize = beamwidth; bsc.max_gliders = ue.max_gliders;
for (int i = 0; i < parallelism; i++) { bsc += bscv[i]; }
std::cerr << "\n" << ue.size() << " nodes visited; ";
double start_best_eval=problems[0].total_cost;
problems.clear();
int beamindex = 0;
bool first = true;
bool saveit = (generation % BIG_SAVE_FREQ) == 0;
for (auto it = bsc.pmpq.begin(); it != bsc.pmpq.end(); ++it) {
apg::pattern res = pslv[0].best_candidate(bsc.contents[it->second]);
if (saveit || first) {
first = false;
std::ofstream out(progressbase+std::to_string(generation)+"_"+std::to_string(++beamindex)+".mc");
res.write_macrocell(out);
out.close();
}
problems.push_back(bsc.contents[it->second]);
}
double end_worst_beam_eval=problems[problems.size()-1].total_cost;
if (bsc.pmpq.empty()) {
std::cerr << "search complete." << std::endl;
} else {
auto it = bsc.pmpq.begin();
std::cerr << "progress = " << (100.0 * bsc.contents[it->second].slow_gliders.size() / it->first) << "%; ";
std::cerr << "start best eval = " << start_best_eval << "; end worst beam eval = " << end_worst_beam_eval << "; to wide search diff = " << (end_worst_beam_eval - start_best_eval) ;
std::cerr << "; lastdimmer_bb_to_prevlane_dist = " << bsc.contents[it->second].lastdimmer_bb_to_prevlane_dist;
std::cerr << "; best new cost estimate = " << it->first << std::endl;
}
if (ue.max_gliders==999999999) {//solution not found yet
beamindex=beamwidth;
if (saveit) {
for(int g=2;g<=generation;g++) {
if (((generation+1-g) % BIG_SAVE_FREQ)==0) {
int threshold = 1 + (beamwidth / g);
//std::cerr << "removal beamindex " << beamindex << " threshold " << threshold << std::endl;
while (beamindex > threshold) {
std::string delfilename = (progressbase+std::to_string(generation+1-g)+"_"+std::to_string(beamindex--)+".mc");
//std::cerr << delfilename << std::endl;
std::remove(delfilename.c_str());
};
if (beamindex==1) {
break;
}
}
}
}
}
return end_worst_beam_eval >= start_best_eval;
// for (auto& ps : problems) { std::cerr << "size = " << ps.objects.size() << std::endl; }
// problems.swap(bsc.contents);
}
void fed_worker(std::atomic<uint64_t> *ctr, uint64_t n_tasks, PSLThread *thread, BeamSearchContainer *bsc, std::string *file_names) {
for (;;) {
// dequeue subtask:
uint64_t idx = (uint64_t) ((*ctr)++);
if (idx >= n_tasks) { break; }
//std::cerr << file_names[idx] << std::endl;
//std::vector<apg::bitworld> vbw = load_file<1>(file_names[idx]);
std::vector<apg::bitworld> vbw;
apg::lifetree<uint32_t, 1> lt(100);
apg::pattern pat = apg::pattern(&lt, file_names[idx]);
//int64_t bbox[4] = {0, 0, 0, 0}; pat.getrect(bbox);
//std::cerr << "(" << bbox[0] << "," << bbox[1] << "," << bbox[2] << "," << bbox[3] << ")";
vbw.push_back(pat.flatlayer(0));
ProblemState ps = thread -> parse_progress(vbw);
bsc->try_insert(ps);
//std::cerr << idx;
}
}
bool test_generationfile(std::string progressbase, int generation) {
std::string filename=progressbase+std::to_string(generation)+"_1.mc";
std::ifstream f(filename.c_str());
return f.good();
}
int problems_fed(std::string progressbase, std::vector<ProblemState> &problems, std::vector<PSLThread> &pslv, int beamwidth, int maxprogress) {
// hope the pessimism prevents small generations from surviving (unless near the solution)
// rather slow, especially in a massively parallel environment ... should be parallelised
std::cerr << "Loading progress..." << std::flush;
int generation=1;
const ProblemState ps = problems[0];
while (test_generationfile(progressbase,generation)) {
std::cerr << generation << " " << std::flush;
generation=2*generation;
}
int gen_anchor_t=generation,gen_anchor_b=generation/2;
while (gen_anchor_t-gen_anchor_b>1) {
generation = (gen_anchor_t+gen_anchor_b)/2;
std::cerr << generation << " " << std::flush;
if (test_generationfile(progressbase,generation)) {
gen_anchor_b=generation;
} else {
gen_anchor_t=generation;
}
}
generation = gen_anchor_b;
if ((maxprogress>0) && (generation > maxprogress)) {
generation = maxprogress;
std::cerr << generation << " " << std::flush;
}
gen_anchor_b=generation-BIG_SAVE_FREQ;
if (gen_anchor_b<0) {
gen_anchor_b=0;
}
std::vector<std::string> file_names(0);
while (gen_anchor_b<generation) {
std::cerr << ++gen_anchor_b << " " << std::flush;
int beamindex=0;
while (true) {
std::string filename=progressbase+std::to_string(gen_anchor_b)+"_"+std::to_string(++beamindex)+".mc";
std::ifstream f(filename.c_str());
if (!f.good()) {
break;
}
file_names.push_back(filename);
}
}
uint64_t n_tasks = file_names.size();
std::atomic<uint64_t> ctr{0ull};
int parallelism = pslv.size();
std::cerr << "Loading " << n_tasks << " states on " << parallelism << " threads..." << std::endl;
std::vector<BeamSearchContainer> bscv(parallelism);
std::vector<std::thread> workers;
for (int i = 0; i < parallelism; i++) {
bscv[i].maxsize = beamwidth;
workers.emplace_back(fed_worker, &ctr, n_tasks, &(pslv[i]), &(bscv[i]), &(file_names[0]));
}
std::cerr << "Waiting for workers" << std::endl;
BeamSearchContainer bsc; bsc.maxsize = beamwidth;
for (auto&& w : workers) { w.join(); }
std::cerr << "Workers finished" << std::endl;
for (int i = 0; i < parallelism; i++) { bsc += bscv[i]; }
if (generation > 0) {
problems.clear();
for (auto it = bsc.pmpq.begin(); it != bsc.pmpq.end(); ++it) {
problems.push_back(bsc.contents[it->second]);
}
}
return generation;
}
void run_main( int beamwidth,
int parallelism,
std::string dbfile,
std::string infile,
std::string outfile,
std::string progress,
int valid_signatures) {
bool is_mono = ((valid_signatures & 0x00ff) == 0) || ((valid_signatures & 0xff00) == 0);
int max_gliders=999999999;
int maxprogress=-1;
if (progress=="keep") {
max_gliders--;
} else {
std::string inidig="123456789";
if (inidig.find(progress.substr(0,1))!=std::string::npos){
maxprogress = std::stoi(progress);
max_gliders--;
}
}
UniqueEnsurer ue(max_gliders, outfile);
RecipeStore rs(dbfile);
rs.read_table();
std::vector<PSLThread> pslv(parallelism);
std::vector<ProblemState> problems(1);
std::cerr << "Loading bespoke database..." << std::flush;
rs.brs.load("data/bespoke/bespokes.dat");
std::cerr << " done!" << std::endl;
// Obtain bespoke objects:
pslv[0].populate_bespokes(rs.brs);
int generation;
std::string progress_base;
int pos=outfile.rfind(".");
progress_base=outfile.substr(0,pos)+"_";
std::cerr << "Loading problem..." << std::flush;
{
std::vector<apg::bitworld> vbw = load_file<4>(infile);
problems[0] = pslv[0].parse_pattern(vbw);
auto clearway = pslv[0].clearway.flatlayer(0);
auto barrier = pslv[0].barrier.flatlayer(0);
for (int i = 0; i < parallelism; i++) {
pslv[i].set_clearway(clearway);
pslv[i].set_barrier(barrier);
pslv[i].set_origin(vbw[3]);
pslv[i].valid_signatures = valid_signatures;
if (is_mono) { pslv[i].pessimism += 0.2; }
}
auto origin = pslv[0].origin;
bool has_origin = false;
int64_t origin_bbox[4] = {0};
if (origin.nonempty()) {
has_origin = true;
origin.getrect(origin_bbox);
std::cerr << "Origin: " << origin_bbox[0] << "," << origin_bbox[1] << std::endl;
} else {
std::cerr << "No origin! " << std::endl;
}
generation = problems_fed(progress_base,problems,pslv,beamwidth,maxprogress);
if (generation==0) {
problems[0].estimate_cost(pslv[0].pessimism, has_origin, origin_bbox[0], origin_bbox[1]);
}
}
std::cerr << " done!" << std::endl;
bool wide_search=true;
while (!(problems.empty())) {
std::cerr << " (" << generation << ")" ;
wide_search = run1iter(beamwidth, ue, rs, pslv, problems, progress_base, ++generation, wide_search);
}
}
} // namespace psl
int main(int argc, char* argv[]) {
bool incorrectly_called = (argc != 8);
for (int i = 1; i < argc; i++) {
if (argv[i][0] == '-') { incorrectly_called = true; }
}
if (incorrectly_called) {
std::string program_name = argv[0];
program_name += ".py";
std::cerr << "Warning: you called " << argv[0] << " when you probably meant " << program_name << std::endl;
execvp(program_name.c_str(), argv);
std::cerr << "Error: cannot run Python script." << std::endl;
return 1;
}
int x = psl::run_tests();
if (x) { return x; }
int signatures = psl::extract_signatures(argv[4], 3);
if (signatures == 0) { signatures = psl::extract_signatures(argv[7], 0); }
std::string dbfile = argv[3];
if (((signatures & 0xff)==0) || ((signatures & 0xff00)==0)) {
if (dbfile.substr(0,8) == "data/std") {
dbfile = "data/mono"+ dbfile.substr(8,dbfile.length()-3);
std::cerr << "Dbfile changed from " << argv[3] << " to " << dbfile << " to better correspond to the signature" << std::endl;
}
}
std::cerr << "\033[33;1mRunning pslmake with valid signatures " << signatures << "\033[0m" << std::endl;
psl::run_main(
std::stoll(argv[1]),
std::stoll(argv[2]),
dbfile, argv[4], argv[5], argv[6],
signatures
);
return 0;
}
structures:
Code: Select all
#pragma once
#include <stdint.h>
#include <type_traits>
#include <vector>
#include <set>
#include <unordered_map>
#include <map>
#include <sstream>
#include <string>
#include "lifelib/cpads/include/cpads/mxor.hpp"
#include "unique_ensurer.hpp"
#include "lifelib/spantree.h"
#include "lifelib/avxlife/uli.h"
namespace psl {
constexpr static uint64_t FLAG_BESPOKE = 1;
constexpr static uint64_t FLAG_BLOCK = 2;
constexpr static uint64_t FLAG_DIMER = 4;
/**
* A structure representing a single glider in a slow salvo.
*/
template<typename T, int PhaseBits>
struct SlowGlider {
T contents;
static_assert(std::is_integral<T>::value && std::is_signed<T>::value,
"T must be a signed integral type.");
constexpr static uint64_t PhaseMask = (1ull << PhaseBits) - 1;
static SlowGlider<T, PhaseBits> construct(int64_t lane, uint64_t phase) {
T res = ((T) ((lane << PhaseBits) + (phase & PhaseMask)));
return SlowGlider<T, PhaseBits>{res};
}
SlowGlider<T, PhaseBits> shift(int64_t dx) const {
T res = contents + ((T) (dx << PhaseBits));
return SlowGlider<T, PhaseBits>{res};
}
int64_t getLane() const {
return ((int64_t) (contents >> PhaseBits));
}
uint64_t getPhase() const {
return ((uint64_t) contents) & PhaseMask;
}
using ThisType = SlowGlider<T, PhaseBits>;
INHERIT_COMPARATORS_FROM(ThisType, contents, _HI_)
int signature() const {
return (contents & 15);
}
};
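// Illustrative worked example (not part of the original code): lane and
// phase are packed into one signed integer, e.g. with PhaseBits = 3:
//   construct(5, 3)  -> contents = (5 << 3) + (3 & 7) = 43
//   getLane()  = 43 >> 3 = 5
//   getPhase() = 43 & 7  = 3
// Negative lanes work too, since the arithmetic right shift in getLane()
// preserves the sign: construct(-2, 6) -> contents = -10, getLane() = -2.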
typedef std::vector<SlowGlider<int32_t, 3>> sgsalvo;
template<typename T>
sgsalvo translate_salvo(const T& gliders, int64_t dx, int64_t dy, int64_t dt) {
sgsalvo res;
for (const auto& g : gliders) {
int64_t lane = g.getLane();
uint64_t phase = g.getPhase();
lane += (dx - dy);
phase += (dt - 4*dy);
res.push_back(SlowGlider<int32_t, 3>::construct(lane, phase));
}
return res;
}
std::vector<sgsalvo> translate_salvo2(const std::vector<sgsalvo>& gliders, int64_t dx, int64_t dy, int64_t dt) {
std::vector<sgsalvo> res;
for (const auto& g : gliders) {
res.push_back(translate_salvo(g, dx, dy, dt));
}
return res;
}
_HI_ uint64_t hash_rows(const uint32_t *rows) {
uint64_t state = 0;
for (int i = 0; i < 32; i++) {
state = state * 6364136223846793005ull + hh::mix_circulant(rows[i]);
}
// powerful mixing function (PractRand cannot distinguish the output
// of two rounds of hh::fibmix() applied to a simple counter):
return hh::fibmix(hh::fibmix(state));
}
struct SmallObject {
// Bounding box of the envelope of the pattern:
int64_t bbox[4]; // 32 bytes
uint64_t positionalHash;
uint64_t flags;
double estimatedCost;
int32_t period;
int32_t popx8;
uint32_t rows[32]; // 128 bytes
std::string repr() const {
std::ostringstream ss;
ss << "# bbox = [" << bbox[0] << ", " << bbox[1] << ", " << bbox[2] << ", " << bbox[3] << "];" << std::endl;
ss << "# flags = " << flags << "; cost = " << estimatedCost << "; period = " << period << "; pop = " << (0.125 * popx8) << ";" << std::endl;
for (int i = 0; i < bbox[3]; i++) {
for (int j = 0; j < bbox[2]; j++) {
ss << ".*"[(rows[i] >> j) & 1];
}
ss << std::endl;
}
return ss.str();
}
std::vector<uint32_t> rowvec(int j) const {
uint32_t d[32] = {0};
// translate:
for (size_t i = 2; i < 30; i++) { d[i] = rows[i-2] << 2; }
if (j & 1) { b3s23::iterate_var_32_28_onegen(d); }
std::vector<uint32_t> key(32);
for (int y = 0; y < 28; y++) {
uint32_t row = (d[y + 2] >> 2) & 0xfffffff;
while (row) {
int x = hh::ctz32(row);
row ^= (1 << x);
if (j & 2) {
key[x] |= (1 << y);
} else {
key[y] |= (1 << x);
}
}
}
return key;
}
std::pair<std::vector<uint32_t>, uint32_t> rowvecs() const {
std::vector<uint32_t> res = rowvec(0);
uint32_t mask = 1;
for (int j = 1; j < 4; j++) {
if (j & 1 & period) { continue; }
auto v = rowvec(j);
if (v < res) {
mask = (1 << j);
res = v;
} else if (v == res) {
mask |= (1 << j);
}
}
return std::pair<std::vector<uint32_t>, uint32_t>(res, mask);
}
/**
* Get the diagonal position of the centre, measured in quarter-diagonals.
*/
int64_t qd() const {
return (bbox[0] + bbox[1]) * 2 + (bbox[2] + bbox[3]);
}
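// Example (illustrative): a 2x2 block at bbox = {3, 4, 2, 2} has centre
// (4, 5), so qd() = (3 + 4) * 2 + (2 + 2) = 18 quarter-diagonals, which
// equals (x_centre + y_centre) * 2 = (4 + 5) * 2 = 18.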
uint64_t hash() const {
uint64_t state = hash_rows(rows);
for (int i = 0; i < 4; i++) {
state = hh::fibmix(state + ((uint64_t) bbox[i]));
}
return state;
}
/**
* Comparison operator for the purpose of sorting by diagonal position.
*/
bool operator<(const SmallObject &other) const {
auto qd1 = qd();
auto qd2 = other.qd();
if (qd1 < qd2) { return true; }
if (qd2 < qd1) { return false; }
return bbox[0] < other.bbox[0];
}
void shift_append(int dx, int dy, const SmallObject &other) {
if ((dx >= 32) || (dy >= 32)) { return; }
for (int i = dy; i < 32; i++) {
rows[i] |= (other.rows[i - dy] << dx);
}
}
SmallObject operator+(const SmallObject &other) const {
SmallObject res;
memset(&res, 0, sizeof(res));
res.bbox[0] = hh::min(bbox[0], other.bbox[0]);
res.bbox[1] = hh::min(bbox[1], other.bbox[1]);
res.bbox[2] = hh::max(bbox[0] + bbox[2], other.bbox[0] + other.bbox[2]) - res.bbox[0];
res.bbox[3] = hh::max(bbox[1] + bbox[3], other.bbox[1] + other.bbox[3]) - res.bbox[1];
res.shift_append(bbox[0] - res.bbox[0], bbox[1] - res.bbox[1], *this);
res.shift_append(other.bbox[0] - res.bbox[0], other.bbox[1] - res.bbox[1], other);
res.popx8 = popx8 + other.popx8;
res.positionalHash = res.hash();
res.flags = FLAG_DIMER;
return res;
}
};
static_assert(sizeof(SmallObject) == 192, "SmallObject should occupy 192 bytes");
struct ProblemState {
// irredundant fields:
std::vector<SmallObject> objects;
sgsalvo slow_gliders;
// computed fields:
uint64_t objects_hash;
std::vector<apg::edge64> sorted_dimers;
double total_cost; // from objects + spanning tree length + slow_gliders.size()
int lastdimmer_bb_to_prevlane_dist; // statistic to take into account for the "concentration optimisation" ... not needed for the search
bool operator<(const ProblemState &other) const {
return total_cost < other.total_cost;
}
double remaining_cost() const {
return total_cost - slow_gliders.size();
}
uint64_t compute_hash() {
objects_hash = 0;
for (const auto &x : objects) {
objects_hash += x.positionalHash;
}
return objects_hash;
}
void append(const SmallObject &so) {
objects.push_back(so);
objects_hash += so.positionalHash;
}
void erase(const std::vector<uint64_t> &idxs) {
std::vector<uint64_t> sorted_idxs = idxs;
std::sort(sorted_idxs.begin(), sorted_idxs.end());
sorted_idxs.erase(std::unique(sorted_idxs.begin(), sorted_idxs.end()), sorted_idxs.end());
while (sorted_idxs.size()) {
uint64_t idx = sorted_idxs.back();
sorted_idxs.pop_back();
objects_hash -= objects[idx].positionalHash;
objects.erase(objects.begin() + idx);
}
}
uint64_t get_minidx(int bandwidth) const {
uint64_t minidx = objects.size() - 1;
int64_t maxqd = objects[minidx].qd();
while ((minidx > 0) && (objects[minidx-1].qd() >= maxqd - bandwidth)) {
minidx -= 1;
}
return minidx;
}
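// Worked example (illustrative, not from the original code): with objects
// sorted so that their qd() values are {10, 300, 480, 500} and
// bandwidth = 480, maxqd = 500 and the scan keeps indices with
// qd() >= 20, so minidx lands on index 1 (qd = 300), excluding only the
// object far behind the frontline.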
// prepares sorted_dimers for the following search, as well as the graph creation for the spanning-tree size calculation;
// if this becomes the bottleneck, we could just filter edges which were already considered
double estimate_cost(double pessimism, bool has_origin = false, int64_t ox = 0, int64_t oy = 0) {
// contribution from existing gliders:
total_cost = slow_gliders.size();
sorted_dimers.clear();
if (objects.empty()) { return total_cost; }
std::sort(objects.begin(), objects.end());
int64_t sqdist = 0;
std::vector<apg::coords64> ccentres;
for (const auto &x : objects) {
// compute centre of bounding box in half-coordinates:
int64_t mx2 = x.bbox[0] * 2 + x.bbox[2];
int64_t my2 = x.bbox[1] * 2 + x.bbox[3];
if (x.flags & FLAG_BESPOKE) {
// weird trick to encourage blocks near bespoke objects to
// be moved northwest so that they don't get in the way:
mx2 -= 50; my2 -= 50; // 25fd behind where it actually is
}
int64_t dx = ox - x.bbox[0];
int64_t dy = oy - x.bbox[1];
sqdist = hh::max(sqdist, dx * dx + dy * dy);
ccentres.emplace_back(mx2, my2);
// contribution from objects:
total_cost += x.estimatedCost * pessimism;
}
if (has_origin) { total_cost += 0.2 * std::sqrt(sqdist); }
auto all_edges = apg::spanning_graph(ccentres);
// correct for using half-coordinates:
double stlength = 0.5 * apg::spanning_graph_to_tree(ccentres, 0, &std::sqrt, all_edges);
// cost contribution from spanning tree:
total_cost += 0.15 * stlength * pessimism;
uint64_t minidx = get_minidx(200);
std::vector<std::pair<int64_t, apg::edge64>> dimers;
for (uint64_t i = minidx; i < objects.size(); i++) {
int64_t pen = -2 * objects[i].qd();
if (objects[i].flags & FLAG_BESPOKE) {
// bespoke
pen -= 480;
} else if (objects[i].flags & FLAG_BLOCK) {
// block
pen += 480;
} else {
// monomer
pen += 80;
}
dimers.emplace_back(pen, apg::edge64(i, i));
}
for (const apg::edge64 &edge : all_edges) {
if ((edge.first < minidx) || (edge.second < minidx)) { continue; }
int64_t x1 = ccentres[edge.first].first;
int64_t y1 = ccentres[edge.first].second;
int64_t x2 = ccentres[edge.second].first;
int64_t y2 = ccentres[edge.second].second;
if ((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2) > 10000) { continue; }
if ((objects[edge.first].flags & FLAG_BESPOKE) || (objects[edge.second].flags & FLAG_BESPOKE)) { continue; }
SmallObject dimer = objects[edge.first] + objects[edge.second];
if ((dimer.bbox[2] > 28) || (dimer.bbox[3] > 28)) { continue; }
dimers.emplace_back(0 - (x1 + y1 + x2 + y2), edge);
}
std::sort(dimers.begin(), dimers.end());
for (const auto &x : dimers) {
sorted_dimers.push_back(x.second);
}
return total_cost;
}
};
struct BeamSearchContainer {
uint64_t maxsize;
std::vector<ProblemState> contents;
std::set<std::pair<double, uint64_t>> pmpq; // poor man's priority queue
std::unordered_map<uint64_t, uint64_t> hash_to_idx;
int max_gliders=999999999;
void try_insert(const ProblemState& ps) {
double optimism = 0.66;
if (ps.remaining_cost() * optimism + ps.slow_gliders.size() > max_gliders) { return; } // very small chance to beat the best solution
auto it = hash_to_idx.find(ps.objects_hash);
if (it != hash_to_idx.end()) {
uint64_t idx = it->second;
if (ps.slow_gliders.size() < contents[idx].slow_gliders.size()) {
pmpq.erase(std::pair<double, uint64_t>{contents[idx].total_cost, idx});
contents[idx] = ps;
pmpq.insert(std::pair<double, uint64_t>{contents[idx].total_cost, idx});
}
return;
}
uint64_t idx = contents.size();
if (idx >= maxsize) {
auto opair = *(pmpq.rbegin());
if (ps.total_cost >= opair.first) { return; }
idx = opair.second;
pmpq.erase(opair);
hash_to_idx.erase(contents[idx].objects_hash);
contents[idx] = ps;
} else {
contents.push_back(ps);
}
pmpq.insert(std::pair<double, uint64_t>{contents[idx].total_cost, idx});
hash_to_idx[ps.objects_hash] = idx;
}
void operator+=(const BeamSearchContainer &other) {
for (const auto &x : other.contents) {
try_insert(x);
}
}
};
struct BasicSalvo {
uint16_t src; // encodes a 4x4 pattern
int8_t dx;
int8_t dy;
SlowGlider<int8_t, 1> gliders[12];
uint64_t size() const {
for (uint64_t i = 0; i < 12; i++) {
if (gliders[i].contents == ((int8_t) 0x80)) {
return i;
}
}
return 12;
}
uint32_t qcost() const {
uint32_t res = 12;
switch (src) {
case 0x0033: res = 0; break;
case 0x0696: case 0x2552: res = 5; break;
case 0x6996: res = 8; break;
case 0x6952: case 0x69a4: case 0x4a96: case 0x2596: case 0x0252: res = 9; break;
case 0x0352: case 0x0652: case 0x0253: case 0x0256: res = 10; break;
case 0x0653: case 0x0356: res = 11; break;
}
return res;
}
bool operator<(const BasicSalvo &other) const {
auto this_size = size();
auto other_size = other.size();
auto this_cost = qcost() + 4 * this_size;
auto other_cost = other.qcost() + 4 * other_size;
if (this_cost < other_cost) { return true; }
if (other_cost < this_cost) { return false; }
if (this_size < other_size) { return true; }
if (other_size < this_size) { return false; }
if (src < other.src) { return true; }
if (other.src < src) { return false; }
for (uint64_t i = 0; i < this_size; i++) {
if (gliders[i] < other.gliders[i]) { return true; }
if (other.gliders[i] < gliders[i]) { return false; }
}
return false;
}
bool operator==(const BasicSalvo &other) const {
uint64_t res[4] = {0};
memcpy(res, this, 16);
memcpy(res + 2, &(other), 16);
return ((res[0] == res[2]) && (res[1] == res[3]));
}
BasicSalvo transform(int j) const {
BasicSalvo bs = (*this);
uint8_t xordiffs[4] = {0x00, 0x01, 0xfe, 0xff};
for (int i = 0; i < 12; i++) {
if (gliders[i].contents == ((int8_t) 0x80)) { break; }
bs.gliders[i].contents ^= xordiffs[j];
}
if (j & 2) {
// swap x and y displacements:
int8_t dz = bs.dx; bs.dx = bs.dy; bs.dy = dz;
// transpose the source object bitarray:
bs.src = hh::transpose4(bs.src);
}
return bs;
}
sgsalvo to_salvo(int64_t dx, int64_t dy) const {
uint64_t n = size();
std::vector<SlowGlider<int8_t, 1>> g(n);
for (uint64_t i = 0; i < n; i++) { g[i] = gliders[i]; }
return translate_salvo(g, dx, dy, 0);
}
};
static_assert(sizeof(BasicSalvo) == 16, "BasicSalvo should be 16 bytes");
} // namespace psl
Part of output:
Code: Select all
Loading problem...No origin!
Loading progress...1 2 4 8 16 32 64 128 256 512 1024 1536 1792 1664 1728 1696 1712 1704 1708 1710 1711 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 Loading 73 states on 64 threads...
Waiting for workers
Workers finished
done!
(1710)Processing 64 tasks on 64 threads...
PPPPPPPPPPPPPPPPPPPPPPPPPPPQQPPPPPPPPPPPPPPPPPPPPPPPPPPPPQQQQQQQ
15933 nodes visited; progress = 3.27912%; start best eval = 310971; end worst beam eval = 310970; to wide search diff = -0.914124; lastdimmer_bb_to_prevlane_dist = 1575; best new cost estimate = 310968
(1711)Processing 64 tasks on 64 threads...
XXXXXXXXXXXXXXXPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
28030 nodes visited; progress = 3.2814%; start best eval = 310968; end worst beam eval = 310966; to wide search diff = -1.85885; lastdimmer_bb_to_prevlane_dist = 2; best new cost estimate = 310965
(1712)Processing 64 tasks on 64 threads...
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
28030 nodes visited; progress = 3.2814%; start best eval = 310965; end worst beam eval = 310966; to wide search diff = 1.37722; lastdimmer_bb_to_prevlane_dist = -1; best new cost estimate = 310965
(1713)Processing 64 tasks on 64 threads...
QQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQ
44606 nodes visited; progress = 3.28301%; start best eval = 310965; end worst beam eval = 310966; to wide search diff = 1.04134; lastdimmer_bb_to_prevlane_dist = 2625; best new cost estimate = 310965
(1714)Processing 64 tasks on 64 threads...
QQQQQPPPPQQPQPPPQQQPQQQPPPQPQQQPPPPPQQPQPQPQQQQQQPQQQQPPQPQQQQQQ
60831 nodes visited; progress = 3.28527%; start best eval = 310965; end worst beam eval = 310965; to wide search diff = -0.454461; lastdimmer_bb_to_prevlane_dist = 3179; best new cost estimate = 310964
(1715)Processing 64 tasks on 64 threads...
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
61447 nodes visited; progress = 3.28527%; start best eval = 310964; end worst beam eval = 310965; to wide search diff = 0.924389; lastdimmer_bb_to_prevlane_dist = -1; best new cost estimate = 310964
(1716)Processing 64 tasks on 64 threads...
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
78023 nodes visited; progress = 3.28625%; start best eval = 310964; end worst beam eval = 310963; to wide search diff = -0.62461; lastdimmer_bb_to_prevlane_dist = 3618; best new cost estimate = 310962
(1717)Processing 64 tasks on 64 threads...
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
91399 nodes visited; progress = 3.28819%; start best eval = 310962; end worst beam eval = 310962; to wide search diff = -0.362752; lastdimmer_bb_to_prevlane_dist = 2; best new cost estimate = 310961
(1718)Processing 64 tasks on 64 threads...
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
91399 nodes visited; progress = 3.28819%; start best eval = 310961; end worst beam eval = 310962; to wide search diff = 0.681868; lastdimmer_bb_to_prevlane_dist = -1; best new cost estimate = 310961
(1719)Processing 64 tasks on 64 threads...
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
108817 nodes visited; progress = 3.29079%; start best eval = 310961; end worst beam eval = 310960; to wide search diff = -1.27493; lastdimmer_bb_to_prevlane_dist = 3386; best new cost estimate = 310959
(1720)Processing 64 tasks on 64 threads...
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
108817 nodes visited; progress = 3.29079%; start best eval = 310959; end worst beam eval = 310960; to wide search diff = 0.58979; lastdimmer_bb_to_prevlane_dist = -1; best new cost estimate = 310959
(1721)Processing 64 tasks on 64 threads...
QQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQ
125393 nodes visited; progress = 3.29239%; start best eval = 310959; end worst beam eval = 310960; to wide search diff = 0.733259; lastdimmer_bb_to_prevlane_dist = 2625; best new cost estimate = 310960
(1722)Processing 64 tasks on 64 threads...
lastdimmer_bb_to_prevlane_dist = -1 means the best state in the beam remained from the previous generation. This does not suggest the modification is worth it.
After adding timestamps ... it could actually be helpful:
Code: Select all
(1750)Processing 64 tasks on 64 threads...
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
16016 nodes visited; Wed Jul 3 19:59:52 2024
progress = 3.35421%; start best eval = 310896; end worst beam eval = 310895; to wide search diff = -0.556073; lastdimmer_bb_to_prevlane_dist = 3092; best new cost estimate = 310893
(1751)Processing 64 tasks on 64 threads...
XXXXXXXXXXXXXXXXXXXPPPPPXPPXXXXXXXXPXXXXXPXPPXXXXXXXXXPXPXXXPPXP
28770 nodes visited; Wed Jul 3 20:03:03 2024
progress = 3.35487%; start best eval = 310893; end worst beam eval = 310893; to wide search diff = -0.363397; lastdimmer_bb_to_prevlane_dist = 0; best new cost estimate = 310891
(1752)Processing 64 tasks on 64 threads...
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXPPPPPPP
36013 nodes visited; Wed Jul 3 20:06:34 2024
progress = 3.35876%; start best eval = 310891; end worst beam eval = 310891; to wide search diff = 0.289207; lastdimmer_bb_to_prevlane_dist = 10; best new cost estimate = 310889
(1753)Processing 64 tasks on 64 threads...
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
52812 nodes visited; Wed Jul 3 20:11:36 2024
progress = 3.35944%; start best eval = 310889; end worst beam eval = 310887; to wide search diff = -1.68072; lastdimmer_bb_to_prevlane_dist = 3617; best new cost estimate = 310885
(1754)Processing 64 tasks on 64 threads...
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXPXXXPXXXXXPXPPPXXXXXXXXXXXXXX
61493 nodes visited; Wed Jul 3 20:14:20 2024
progress = 3.35944%; start best eval = 310885; end worst beam eval = 310887; to wide search diff = 1.89222; lastdimmer_bb_to_prevlane_dist = -1; best new cost estimate = 310885
(1755)Processing 64 tasks on 64 threads...
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
79164 nodes visited; Wed Jul 3 20:19:25 2024
progress = 3.36269%; start best eval = 310885; end worst beam eval = 310884; to wide search diff = -0.947207; lastdimmer_bb_to_prevlane_dist = 3092; best new cost estimate = 310883
(1756)Processing 64 tasks on 64 threads...
XXXXXXXXXXXXXXXXXXXXXXXPPPPPPPPPPPPPPPPPPXXXXXXXXXXXXXPPPPPPPPPP
87476 nodes visited; Wed Jul 3 20:22:09 2024
progress = 3.36527%; start best eval = 310883; end worst beam eval = 310883; to wide search diff = 0.237742; lastdimmer_bb_to_prevlane_dist = 2; best new cost estimate = 310881
(1757)Processing 64 tasks on 64 threads...
PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP
105226 nodes visited; Wed Jul 3 20:27:05 2024
progress = 3.36724%; start best eval = 310881; end worst beam eval = 310879; to wide search diff = -1.8366; lastdimmer_bb_to_prevlane_dist = 3021; best new cost estimate = 310878
(1758)Processing 64 tasks on 64 threads...