Referring to calcyman in a discussion on the Discord.
CAcoin
- testitemqlstudop
- Posts: 1367
- Joined: July 21st, 2016, 11:45 am
- Location: in catagolue
Re: CAcoin
The security of Bitcoin doesn't require having a continuous block difficulty.
The reason for having a continuous block difficulty is to make each block take comparable amounts of time to mine. But even with continuous block difficulty, the block mining time is not constant, but rather a random variable with an exponential distribution:
https://en.wikipedia.org/wiki/Exponential_distribution
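This is easy to see in simulation (a sketch — the hashrate and per-attempt success probability below are made-up numbers, not real network parameters): each hash attempt succeeds independently with a tiny probability, so the wait is memoryless and the per-block mining time is exponentially distributed even at a fixed difficulty.

```python
import random

def simulate_block_times(n_blocks, attempts_per_second, p_success):
    """Time to mine each block when every hash attempt independently
    succeeds with probability p_success (memoryless => ~exponential)."""
    rate = attempts_per_second * p_success  # expected blocks per second
    return [random.expovariate(rate) for _ in range(n_blocks)]

random.seed(0)
# Hypothetical miner: 1000 attempts/s, tuned for a mean of ~600 s per block.
times = simulate_block_times(100_000, attempts_per_second=1000.0,
                             p_success=1 / 600_000)
mean = sum(times) / len(times)
```

The sample mean converges to the 600-second target, but individual block times vary wildly (an exponential's standard deviation equals its mean), which is the point: constant difficulty already does not give constant block times.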
As such, it's sufficient to be able to approximate the frequency of an object. We already know this information accurately for common objects (using a fixed snapshot of the b3s23/C1 census), and (crucially!) we only need to know the frequency of an object (other than 'it's very rare') when the block difficulty target begins to approach that.
Let's give a worked example:
- If someone discovers a loafer or an xq19 in a soup and the difficulty target is 'fumarole or rarer', this will obviously surpass the difficulty target. (In b3s23/C1, we've seen 4002 fumaroles, 1 loafer, and no xq19, so the probability that a loafer is more common than a fumarole is about 2^-3990.) It doesn't matter that we don't have accurate frequency information for the loafer and xq19; all that's important is that we can reliably decide whether an object is above or below the difficulty target.
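The quoted order of magnitude can be checked with a standard Gamma-posterior comparison (a sketch; it assumes equal soup exposure for both objects and flat priors on the two Poisson rates, which the post doesn't spell out — under those assumptions it comes out near 2^-3992, within a few powers of two of the quoted 2^-3990):

```python
import math

def log2_prob_rate_exceeds(k_a, k_b):
    """log2 of P(rate_A > rate_B) when object A was seen k_a times and
    object B was seen k_b times under equal exposure, with flat priors
    on both Poisson rates (so Gamma(k+1) posteriors).

    Uses the Beta-binomial identity:
        P = sum_{k=0}^{k_a} C(n, k) / 2^n,  where n = k_a + k_b + 1.
    """
    n = k_a + k_b + 1
    numerator = sum(math.comb(n, k) for k in range(k_a + 1))
    return math.log2(numerator) - n

# 1 loafer vs 4002 fumaroles observed in b3s23/C1:
log2_p = log2_prob_rate_exceeds(1, 4002)
```

The exact exponent shifts slightly with the choice of prior, but nothing plausible moves it off the scale of "astronomically unlikely".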
Associate with each partial census a predicate phi, with the guarantee: phi(x) is true ==> every time x appears naturally in a soup, it will be included in the partial census. Here are some examples:
- For b3s23/C1, phi(x) is identically true.
- For b3s23/G1, phi(x) is 'x is not a still life, nor p2/p3/p6 oscillator, nor one of the four standard spaceships'.
- For an initial segment of the blockchain where the difficulty target never surpasses 'fumarole is rarer', phi(x) is 'x is a fumarole or rarer'.
Then two censuses A and B (with predicates phi_A and phi_B) can be merged as follows:
- For each census, discard objects x for which phi(x) is false;
- Define the weights W_A and W_B to be the total count (in the respective censuses) of all objects x where phi_A(x) and phi_B(x) are both true;
- Sum the two censuses together;
- Multiply the count of each object x in the combined census A+B by the scaling factor (W_A + W_B) / (phi_A(x) * W_A + phi_B(x) * W_B), treating each phi value as 1 when true and 0 when false, to give a new census C;
- Define phi_C(x) to be the logical disjunction (boolean OR) of phi_A(x) and phi_B(x).
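The merge procedure above can be sketched directly (a minimal sketch: censuses are plain dicts from object name to count, and the phi predicates are ordinary boolean functions — the function names are illustrative, not part of any actual CAcoin implementation):

```python
def merge_censuses(census_a, phi_a, census_b, phi_b):
    """Combine two partial censuses, each with a predicate phi saying
    which objects that census reliably records."""
    # Step 1: discard objects the census would not reliably have counted.
    a = {x: n for x, n in census_a.items() if phi_a(x)}
    b = {x: n for x, n in census_b.items() if phi_b(x)}

    # Step 2: weights = total count of objects where BOTH phis are true.
    w_a = sum(n for x, n in a.items() if phi_b(x))
    w_b = sum(n for x, n in b.items() if phi_a(x))

    # Steps 3-4: sum the censuses and rescale; booleans act as 0/1.
    merged = {}
    for x in set(a) | set(b):
        raw = a.get(x, 0) + b.get(x, 0)
        denom = phi_a(x) * w_a + phi_b(x) * w_b
        merged[x] = raw * (w_a + w_b) / denom

    # Step 5: the combined census reliably records x if either input did.
    phi_c = lambda x: phi_a(x) or phi_b(x)
    return merged, phi_c

# Toy example: census A records everything; census B only records 'rare'.
census_a = {'block': 90, 'glider': 9, 'rare': 1}
census_b = {'rare': 2}
merged, phi_c = merge_censuses(census_a, lambda x: True,
                               census_b, lambda x: x == 'rare')
```

In the toy example the rescaling keeps relative frequencies consistent: 'rare' remains 1/90th as common as 'block' in the merged census, even though census B contributed counts only for 'rare'.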
So we can generate an initial implied census from the textcensus of b3s23/C1 and b3s23/G1 (fixed at a certain date), and then update it at predetermined times (e.g. 'every 10000 blocks' or, even better, 'when the difficulty target first surpasses 10^n') by incorporating the partial census consisting of the blockchain blocks that haven't already been incorporated into the implied census.
What do you do with ill crystallographers? Take them to the mono-clinic!
Re: CAcoin
I just want to ask one thing: when will we get it to work, and how will it work? (I can't follow the discussions.)
Re: CAcoin
I have a new idea for the coin verification system. After each block is uploaded, all of the interesting soups are put into a limbo state. After an interesting soup gets uploaded, it gets verified and a combination of the soup and its resulting state gets hashed, which determines whether or not it becomes a block worth a certain amount of coins. Each interesting soup only has a small chance of becoming a block, but soups that fail are put on a different database to be archived.
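The hash-then-threshold part of that idea can be sketched like this (all names and the success probability are hypothetical — this illustrates the mechanism, not an agreed protocol): hash the soup together with its stabilised contents, and promote the soup to a block only if the hash falls below a fixed target.

```python
import hashlib

BLOCK_CHANCE = 1 / 256  # hypothetical per-soup success probability

def soup_becomes_block(soup_seed: str, final_objects: str) -> bool:
    """Hash the soup and its stabilised contents; the soup is promoted
    to a block only if the hash lands in the lowest BLOCK_CHANCE
    fraction of the 256-bit output space."""
    data = (soup_seed + '|' + final_objects).encode()
    value = int.from_bytes(hashlib.sha256(data).digest(), 'big')
    return value < int((2 ** 256) * BLOCK_CHANCE)

# Soups that fail the check would go to the separate archive database.
```

Because both the seed and the resulting objects feed the hash, a miner cannot know in advance which interesting soups will become blocks, but anyone can re-verify the decision deterministically.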