
Comments (20)

zawy12 commented on August 18, 2024

I found the error. In Zcash's terminology: the N=17 "averaging window" is given only 25% weight. The other 75% of the weight goes to the target time of the averaging window (a constant). This is why it has an effective averaging window almost 4x longer than N=17.

Another problem is that the code says that in order to prevent "timewarp" attacks, it uses the median timestamp of the previous 11 blocks (which is the 6th block in the past if the timestamps are sequential). They do not use the timestamps in the N=17 window, but shift them 6 blocks behind the difficulties. In other words they are using:

# Zcash and Hush difficulty algorithm (exact when timestamps are sequential)
next_D = AVG(D) * 150 / ( 0.75*150 + 0.25*AVG(ST) )
D = previous 1 to 17 difficulties
ST = previous 6th to 22nd solvetimes

The 16% upper and 32% lower limits are applied to the denominator and are never reached due to the 75% dilution of the "N=17" measurement.
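
For concreteness, here is a minimal C++ sketch (the function name and types are mine, not Zcash's) of the tempered average above:

#include <cstdint>
#include <numeric>
#include <vector>

int64_t NextDifficultyTempered(const std::vector<int64_t>& D,  // previous 17 difficulties
                               const std::vector<int64_t>& ST) // previous 17 solvetimes, delayed ~6 blocks by the median
{
    const int64_t T = 150;  // 2.5-minute block target, in seconds
    const int64_t N = static_cast<int64_t>(D.size());
    int64_t avgD  = std::accumulate(D.begin(),  D.end(),  int64_t{0}) / N;
    int64_t avgST = std::accumulate(ST.begin(), ST.end(), int64_t{0}) / N;
    // 75% of the denominator is the constant T and only 25% is measured,
    // which is what stretches the effective window to ~4x N.
    return avgD * T * 4 / (3 * T + avgST);
}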

The delay caused by seeking a median is not needed. The comments say it is needed to prevent timewarp attacks, but the classic Bitcoin timewarp attack is not possible when using rolling averages. Timestamps can be manipulated to lower difficulty, but unless the attacker has >50% of the hashrate and is manipulating most timestamps, just using the correct timestamp instead of looking 6 blocks in the past will cause only 1 block to have the wrong difficulty, and it is corrected as soon as the next correct timestamp arrives. The size of the error the wrong timestamp causes on that 1 block is limited by Gandalf's limits.

The median does not prevent the ≈50% problem, because after 11 blocks (or fewer if the attacker has >50%) the attacker "owns" the median timestamp, giving the same steady-state result as when the "median delay" is not used. With medians it takes longer to see the bad actor, but it also takes longer for the effect to go away. After N blocks, the bad timestamps pass out of the window, which has the reverse effect (if ≈50% and not too much >50%). In other words, after N blocks of 50% large forward-stamped times (which drive difficulty down) have come in the front of the window, it would start seeing the 50% correct times (with large negative solvetimes) pass out the rear of the window, counteracting any new forward-stamped times coming in the front.

The N=63 and 6-blocks-too-slow errors reduce profits for dedicated miners and probably cause extra sell pressure. If all the "ill-gotten" gains (which are 10% of daily volume) are selling pressure, if 1/2 the Cryptopia volume is from back-and-forth day or week traders, and if "permanent sell pressure" is 1/2 of the permanent buy/sell activity, then 40% of the "permanent sell pressure" is from the slow difficulty. Another effect: if a better difficulty algorithm raises prices, it is a positive feedback that encourages more permanent mining, making a non-perfect difficulty algorithm a little tougher to cheat with on-off mining.

Here are links to the code. I hope it can be reduced to a one-line for loop.
Constants
Core calculation
Median
Limits on D change

The 1st chart below shows that the simplified equation above for the bird's nest of code is correct. The black line hidden under the orange line is actual HUSH difficulties. The second chart shows how the median is delaying the difficulty response by 6 blocks. Actually it looks like 5 behind instead of 6 for some reason.

[chart: zcash]
[chart: zcash_error]

The above are the reasons I would like to see my Gandalf difficulty algorithm implemented. Less aggressive versions of it have been used in Sumokoin and Karbowanec and maybe 1 or 2 others.

Do not be tempted to use the simplification below that others have used. It is used in the Zcash algorithm, which may have prevented people from discovering that the median is not needed. Do not throw out or prevent negative solvetimes; they are the corrections to bad timestamps.

next_D = sum(D's) * T / [ max(timestamp) - min(timestamp) ]  # don't do this!

Superficially it looks like what I want because the N's seem like they can cancel:

next_D = avg(D) * T / avg(solvetimes) = sum(D)/N * T / [ sum(solvetimes)/N ]

But

max(timestamp) - min(timestamp) != sum(solvetimes)

when timestamps are out of order. The first form does not allow negative solvetimes to cancel forward-stamped times. Example: 5 blocks each actually solved in 1 time unit, with 1 bad timestamp "10" trying to lower difficulty. Timestamps: 0, 1, 2, 3, 10, 5, giving apparent solvetimes 1, 1, 1, 7, -5. The first denominator gives the wrong average solvetime, (10-0)/5 = 2, while the second gives the correct (1+1+1+7-5)/5 = 1.
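
A standalone sketch of that example (the code and names are mine, just to make the arithmetic checkable):

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> ts = {0, 1, 2, 3, 10, 5};  // timestamps; "10" is the bad one
    int n = static_cast<int>(ts.size()) - 1;    // 5 solvetimes: 1, 1, 1, 7, -5

    // Wrong: max-min never sees the negative solvetime that corrects "10".
    int spanMaxMin = *std::max_element(ts.begin(), ts.end())
                   - *std::min_element(ts.begin(), ts.end());

    // Right: summing signed solvetimes telescopes to last-first, so the
    // -5 cancels the +7 as soon as an honest timestamp arrives.
    int spanSum = 0;
    for (size_t i = 1; i < ts.size(); ++i) spanSum += ts[i] - ts[i - 1];

    std::printf("avg solvetime via max-min: %d\n", spanMaxMin / n); // 2 (wrong)
    std::printf("avg solvetime via sum:     %d\n", spanSum / n);    // 1 (correct)
    return 0;
}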

leto commented on August 18, 2024

In general, a big 👍 to our network rewarding "stable" miners and making jumping on and off the network less attractive, however we can do that.

zawy12 commented on August 18, 2024

Here's the algorithm I'm recommending to everyone.

http://zawy1.blogspot.com/2017/11/best-difficulty-algorithms.html

zawy12 commented on August 18, 2024

We've developed a new algorithm that's the best of the best and allows integer math on the target.

zawy12/difficulty-algorithms#17

zawy12 commented on August 18, 2024

The closest I can simulate Zcash's algo is an N=63 averaging window (I minimized the sum of squared differences between mine and theirs). The 16% rise and 32% fall limits do not seem to ever be reached. If they were, there would be smooth lines during periods of sudden rise and fall in hashrate.
[chart: zcash_63]

But it is possible to get a lot closer to theirs with N=17 if I severely restrict how fast D can change, to 1.009 and 0.991 of the previous D. This would seem to strongly indicate their problem is the rise and fall limits being 15x and 30x more restrictive than they think. But again, theirs should show smoother lines if that were the case. The asymmetry in their limits should cause other problems if the limits were being reached: if (rise limit) < 1/(fall limit), as in theirs, then average difficulty should come out too low, but it does not. That indicates the limits are NOT being reached.

[chart: zcash_rise_fall]

Here's an N=17 averaging window again on this scale. The two big peaks > 2*Diff are hash attacks (N + Stdev*SQRT(N) = 2*N => Stdev > 4 was reached, out of only 1200/17 = 70 samples). Similarly, my math indicates about 75% of the smaller peaks may also have been attacks (10% were > 1.5*Diff when 2.5% were expected, because N + 1.96*SQRT(N) = 1.5*N).

So the N=17 looks ugly with wide variation, but it's mostly responding to actual hashrate changes.

[chart: zcash_17]

I am mentioning Zcash only because I started looking at it this morning after you showed me they should be the same algorithm, but I'm showing all the above with only HUSH in mind.

zawy12 commented on August 18, 2024

@JamesKegel I just use a LibreCalc spreadsheet. I obtained the data while running Zcash and HUSH nodes and using a perl script that outputs the solvetimes and difficulties to a text file. The pseudocode for the algorithm that looks most like Zcash (which no one should ever use because it invites attacks, very possibly currently driving the HUSH price down) is:

# Approximated Zcash (and HUSH) difficulty algorithm
N = 17;
rise_limit = 1.009;  # never do this, needs fixing
fall_limit = 0.991;  # never do this, needs fixing
T = 2.5*60;
next_D = avg(last N Ds) * T / avg(last N solvetimes);
next_D = rise_limit*previous_D  if next_D/previous_D > rise_limit;
next_D = fall_limit*previous_D  if next_D/previous_D < fall_limit;
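
Here is the same approximation as a minimal C++ sketch (the function name is mine; the limits are only what reproduces the observed behavior, so never use them):

#include <algorithm>
#include <numeric>
#include <vector>

double NextDifficultyApprox(const std::vector<double>& D,  // last N=17 difficulties
                            const std::vector<double>& ST, // last N=17 solvetimes
                            double previousD)
{
    const double T = 2.5 * 60;       // 150 s target
    const double riseLimit = 1.009;  // never do this, needs fixing
    const double fallLimit = 0.991;  // never do this, needs fixing
    double avgD  = std::accumulate(D.begin(),  D.end(),  0.0) / D.size();
    double avgST = std::accumulate(ST.begin(), ST.end(), 0.0) / ST.size();
    double nextD = avgD * T / avgST;
    // Clamp the per-block change to the (far too tight) observed limits.
    return std::clamp(nextD, fallLimit * previousD, riseLimit * previousD);
}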

I use an Excel spreadsheet for testing difficulty algorithms and for generating simulated solvetimes under different hash attack scenarios. If you tell me exactly what you're doing, I might have a spreadsheet for it. I use Excel most of the time because LibreCalc crashes too much and is slow under heavy number crunching.

zawy12 commented on August 18, 2024

PS: if you're making a POW where difficulty needs to decrease a lot (the target needs to increase a lot), then setting a guess for the target that is in the correct range for N blocks is a good idea if getting the chain stuck is a possibility. The guess would be in the DA and of the form:

if (height > fork_height && height < fork_height + N) {
    next_target = target_guess;
}

Also, I think it would be neat to rescale the difficulty so that it is 1/1000 of the hashrate, so that everyone can immediately see the difficulty algorithm's current estimate of the hashrate. The official hashrate is based on 144 blocks or something like that; it is more stable but does not change as fast. To make difficulty scale to the hashrate, change powLimit, where it is defined, to:

powLimit = 2^256 / 1000 / solvetime

In hex this is:
00006FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

In ZEC the powLimit is:

powLimit = uint256S("0007ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");

Hush looks like the exact same number but it isn't because the 3 leading zeros in HUSH's line below were inserted for no apparent reason by some original dev who needs to be flogged. It's 67 hex places instead of the 64 (32 bytes) for a hash.

consensus.powLimit = uint256S("0007fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff");

The above numbers mean ZEC and HUSH difficulties are multiples of hashrate like this:

D = HR * 150 * powLimit / 2^256
D = HR * 150 / 2^(leading binary zeros)

HUSH has 1 leading binary zero when the 3 excess hex zeros are removed, and ZEC has 13.
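
To see what those leading binary zeros mean for the difficulty-to-hashrate multiple (my reading of the relation above, as a standalone sketch):

#include <cmath>
#include <cstdio>

int main() {
    const double T = 150.0;  // seconds per block
    // D = HR * T / 2^(leading binary zeros of powLimit)
    std::printf("ZEC:  D = HR * %.4f (13 leading binary zeros)\n", T / std::pow(2.0, 13));
    std::printf("HUSH: D = HR * %.1f  (1 leading binary zero)\n", T / std::pow(2.0, 1));
    return 0;
}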

madbuda commented on August 18, 2024

https://github.com/MyHush/hush/blob/master/src/chainparams.cpp#L47

The current averaging window is 17 blocks, which is the same as Zcash. I think we experience more swings since we have a smaller network hashrate.

zawy12 commented on August 18, 2024

I'm mentioning only Zcash in this, but I assume it's the same algorithm as HUSH. The only difference is that HUSH is experiencing more obvious attacks, if not more of them. Any wider swings HUSH is seeing should be due to its total hashrate being small compared to Zcash, so the big miners affect it more.

I just checked Zcash, and they also seem to be using what is effectively an N=50 algorithm. Zcash and HUSH difficulties look pretty darn smooth, which shows they are not responding very quickly to hashrate changes. Either there is a mistake in their code that has been carried over to HUSH, or they are doing something strange with medians that's causing their algorithm to respond very slowly. As a result, they too are still getting hash attacks, at least 1 per day. Below is an example attack they suffered in the past few days. Notice how the response to the attack would have been much better if they had been using a simple N=17 average.

[chart: zcash_not_17]

The following is the response of a diff algo with N=8.

[chart: zcash_vs_8]

QuinnMallorry commented on August 18, 2024

@zawy12 How are these charts being generated? They're great, and I'd like to use them to test different N values with a project. Could you shed any light on how you're generating them?

zawy12 commented on August 18, 2024

Hush is showing more rhythmic peaks than Zcash. This indicates some miner is waiting for the high difficulty from his last attack to wear off, then immediately beginning again. During his off time he's probably switching over to Zcash or another small coin that uses Zcash's POW and difficulty algo. This alone is a reason not to use Zcash's difficulty algo: it makes things easy for attackers if everyone has the same diff algo.

This chart was done the same as the last Zcash one above. The 9 spikes above diff = 300M are at the same confidence interval as the 3 spikes above 8M on Zcash.

[chart: hush_vs_17]

Assuming 10 attacks a day getting 20 blocks each, times 12.5 coins per block, with half of those coins likely to be sold soon, it could be a selling pressure of 1250 coins per day.

madbuda commented on August 18, 2024

@zawy12 I really think the overall effect is related to overall hashrate / difficulty. High hashrate = higher diff = less susceptible (well, less noticeable).

I have seen some conversation lately where solo miners with massive hashing power are moving from coin to coin as difficulty drops. Sadly, I have even heard of a fellow pool owner doing the same thing using NiceHash.

I do agree, we should come up with a better way of handling diff adjustments, but I feel we need to address spikes in hashrate differently than growth

zawy12 commented on August 18, 2024

@madbuda What overall effect?

zawy12 commented on August 18, 2024

This is background information for any future HIP that may seek to change the difficulty algorithm.

I'm having trouble proving N=8 is harder to hack than Zcash's N=17 (aka "N=63"). I can simulate attacks against Zcash's algo and get 20 blocks per attack, 10 times per day. This appears to be what they are doing; I was not able to do better. An attacker can throw the difficulty into an oscillation that allows attacks more often with a lower starting difficulty, so longer attacks are not as good, despite the difficulty not rising much even after 30 blocks.

Similarly with N=8, I can attack for 7 blocks when D goes low, again throwing it into an oscillation, getting blocks at low D and leaving others a higher difficulty. The situation is not necessarily better if they do it 25 times a day instead of 10.

So I can get 20x10 = 200 blocks at low D per day with "N=63" and 7x25 = 175 with N=8. With N=8 the starting D can be lower, so the net profit attainable is the same. Notice in a chart below that N=8 leaves constant miners with higher difficulty than the "N=63" attacks do, but those 3x difficulties only last a few blocks. In both cases about 33% of the day's blocks are obtained at 1/2 the correct difficulty. This means 66% of the blocks are at a difficulty that averages 15% above the correct difficulty. Average solvetimes in both cases remain accurate.

In these examples, the attacker has only 1/2 the hashrate, so he gets only 100 of the 200 blocks mentioned. Larger attackers do not change any of this except that they get close to the full 200 blocks. For N=8 and 2x attacks (1x network hashrate) they get only 3.5 blocks per attack.

Concluding that N=8 is not better assumes they know how to do this and that there is no lost time in switching, so N=8 could be a lot better if they lose 2 blocks in switching their 2.5 MH/s (Hush attackers) to and from other coins. Another potential benefit of N=8 is that they need to wait for a low period to begin the attack, but with "N=63" they can begin almost any time, so it's easier to switch around coins. They need to leave a coin at the same time it becomes easier on another coin, so there could be an added cost in waiting.

I had been saying the attackers were "3x" the base hashrate (I meant 2x above base), but the charts did not look exactly the same. So I made a 2x attack (1x over base) trigger on different values of D, and the statistics and chart looked more correct. That's the last chart below.

Actually, advertising the hack invites big miners to compete against each other, cancelling all of their profit and all of the harm if about 5 of them are constantly trying to do it. A similar thing can be said for "N=63", so this point is moot in terms of deciding which is better.

In the charts below, every grey area that is not under the blue is when the difficulty is set too low. The taller the exposed grey area, the more the D is too low. The exposed grey areas above 1 are attacker profit.

Switching from Zcash to N=8 (N=4 is possible) hinges on their losses in switching to and from the coin.

There are no tricks with the algorithms you can do to "fix" this. Straight averages are the best.

The Zcash algorithm is incredibly slow:

[chart: h6]

Zawy v1b N=8 (Gandalf) seems awesomely fast and great. Notice that most of the grey area (an attacker's profit) is covered up pretty well.

[chart: zawy_n8]

...and the Zcash/HUSH algo doesn't even seem to know the attacker exists in the next chart. This is a "dumb" attack that does not trigger on low D, and it ends up oscillating to its own disadvantage. Notice it starts hashing on peaks instead of valleys. See the last chart for a smart attack that begins as the difficulty is dropping.

[chart: zcash]

The 5x swings in the chart below for N=8 (green) occur 2 or 3 times per day and look worse than the Zcash algorithm (blue), but they are not really a bad thing. These wide swings are unavoidable if a fast response is desired. The chart is for a constant hashrate, so the 5x swings mean it will take an average of 5x the desired solvetime for 2 to 4 blocks in those peaks. 34% of those blocks will take > 10x.

[chart: zcash_8]

It is still possible to hack Zawy v1b N=8 if you can switch on and off without losing more than 1 block per switch. All that grey area is not good.

[chart: zawy_hack]

But they are currently doing the same thing on Hush. The following is a simulation that gives the same measures as HUSH's actual data.

[chart: zcash_hack]

zawy12 commented on August 18, 2024

Looking at the statistics again, I can only prove 3.5 attacks per day in the last 6 days at 2x to 3x hash power above the base level, not 9 or 10 at 1x above base. They're probably getting about 60 blocks per day, still at my old estimate of 10%. Since I can hack my own difficulty algorithm so that it can be cheated as much as longer averaging periods, I'm not going to push a change to N=8 unless it can be shown attackers lose 2 or more blocks in switching. It's possible to go to a 6-block averaging window, but 2 blocks per hour would reach 1/2 the correct difficulty by accident, giving attackers plenty of choice times to begin an attack and get the first block "free".

As I've shown above, it's hard to prove any type of averaging is better than even this terribly slow "N=63". But it is so hard to believe that such an unresponsive algorithm is a good thing that I still recommend a change. I've just realized we can change constants without rewriting code. Karbowanec (a CryptoNote coin) has been happy with a real N=17 in Zawy v1 (simple average) since their hard fork last November. Somewhere on bitcointalk is a chart of the difficulty before and after Zawy v1 with N=17 was implemented. What I'm doing below is basically Zawy v1b with N=12. It could be done with N=4 to N=50.

Simulation again shows 200 blocks per day (~1/3) can be obtained with a smart trigger as above when using N=12, as it was with N=8 and "N=63". Since attackers are not maximizing what's possible, they may abandon what they're doing completely with a smaller window.

All my comments are optional reading. To summarize, by changing these constants I am changing the algorithm:

# Hush / Zcash
# ST = solvetime
next_D = AVG(past 17 D's) * 150 / ( 0.75*150 + 0.25*AVG(past 17 ST's, delayed 5 blocks) )

# Zawy recommended change, done by changing Zcash's constants.
# The 1.05 keeps the average solvetime on target.
# Change the "0" and "1.05" if you want a slower response like Zcash.
next_D = AVG(past 12 D's) * 150 / ( 0*0.75*150 + 1.05*AVG(past 12 ST's) )
# approx +/- 25% limit on rise and fall per block.
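
For anyone who wants it end to end, a minimal C++ sketch of the recommended change (my names, untested against the consensus code):

#include <cstdint>
#include <numeric>
#include <vector>

// N=12 simple average, no tempering, no MTP delay. A real implementation
// must handle avgST <= 0 caused by bad timestamps; the rise/fall limits
// in the summary below bound the damage.
int64_t NextDifficultyZawy(const std::vector<int64_t>& D,  // last 12 difficulties
                           const std::vector<int64_t>& ST) // last 12 solvetimes, no delay
{
    const int64_t T = 150;
    const int64_t N = static_cast<int64_t>(D.size());
    int64_t avgD  = std::accumulate(D.begin(),  D.end(),  int64_t{0}) / N;
    int64_t avgST = std::accumulate(ST.begin(), ST.end(), int64_t{0}) / N;
    // 105/100 is the integer form of the 1.05 solvetime adjustment.
    return avgD * T * 100 / (105 * avgST);
}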

Summary of the 3 places to change constants, without all my text below:

  1. Eliminate Median Time Past. Use
     enum { nMedianTimeSpan=1 };
     instead of
     enum { nMedianTimeSpan=11 };

  2. Shorten the window size and use symmetrical allowed rise and fall:
     consensus.nPowAveragingWindow = 12;
     consensus.nPowMaxAdjustDown = 31;
     consensus.nPowMaxAdjustUp = 23.7;

  3. Remove the extreme 75% "tempering" or dilution of the average. Use:
     nActualTimespan = 1.05*nActualTimespan;
     instead of
     nActualTimespan = params.AveragingWindowTimespan() + (nActualTimespan - params.AveragingWindowTimespan())/4;

Links to code again:
Constants
Core calculation
Median Past Timestamp (MPT)
Limits on D change

# The following change does away with delaying the timestamp averaging.
# Timestamp error protection is provided by the rise and fall limits.
# The 11 comes from the BTC timewarp problem,
# which is not applicable to rolling averages.
# If a bad timestamp comes in, a good timestamp in the next block erases the negative effect.
# If X bad timestamps come in a row, then X good ones erase their effect.

# old:
enum { nMedianTimeSpan=11 };
# new:
enum { nMedianTimeSpan=1 };  # I assume nothing in the code will make this fail

[ edit: the above change should probably not be made unless another change is made to the code. See next comment. ]

# The Zcash limits are never activated, and if they were, they could potentially cause an
# asymmetry problem.  These new limits will be important due to the other changes.
# They will provide protection against 4 negative timestamps in a row, which even in
# Zcash's crazy first days I do not believe occurred.  Even 10 negative timestamps in a row
# will cause a problem for only a few blocks.

# old limits
# min allowed diff (aka POW) decrease per block = 1/(1+0.32) 
# max allowed diff (aka POW) increase per block =1/(1-0.16)
# consensus.nPowAveragingWindow = 17;
# consensus.nPowMaxAdjustDown = 32; // 32% adjustment down
# consensus.nPowMaxAdjustUp = 16; // 16% adjustment up

# new limits
# If my numbers are not used, this still needs to be made symmetrical:
# up = 1 - 1/(1+down)
# The previous asymmetry goes back to Digishield, whose devs never realized
# their oscillations were caused by the asymmetry.
# They made it asymmetrical because they wanted a faster return to
# normal hashrate after attacks.
# Digishield v3 and/or Zcash "fixed" the oscillations by diluting the N=17
# so much that the limits were never reached.
# The source of my limits is:
# down = (5x attack max typical)^(-2/N) = 0.765 via Zawy v1b
# but here "down" = (1/0.765 - 1)x100 = 31
# Zawy v1b up = 1/0.765 = 1.31, but here: up = (1 - 1/(1+0.31))x100 = 23.7

consensus.nPowAveragingWindow = 12; 
consensus.nPowMaxAdjustDown = 31; 
consensus.nPowMaxAdjustUp = 23.7;
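
A quick standalone check of that derivation (nothing here is from the Zcash code):

#include <cmath>
#include <cstdio>

int main() {
    double down = std::pow(5.0, -2.0 / 12.0);           // 0.765 via Zawy v1b
    double downPct = (1.0 / down - 1.0) * 100.0;        // ~31
    double upPct = (1.0 - 1.0 / (1.0 + 0.31)) * 100.0;  // ~23.7, from up = 1 - 1/(1+down)
    std::printf("down%% = %.0f, up%% = %.1f\n", downPct, upPct);
    return 0;
}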

# old
# this is the line that dilutes the Zcash N=17 to an "N=63"
nActualTimespan = params.AveragingWindowTimespan() + (nActualTimespan - params.AveragingWindowTimespan())/4;

# new
# this is the big change that will make it more responsive and unstable
nActualTimespan = 1.05*nActualTimespan; 
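
Since nActualTimespan is an int64_t in the Zcash code, the 1.05 factor would presumably have to be written in integer math, e.g. (my suggestion, not tested):

nActualTimespan = nActualTimespan * 105 / 100;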

# Zcash has a timestamp limit somewhere of something like +/- 3600 from the previous timestamp.
# +/- 1200 would be more reasonable, but the POW limits above prevent the timestamp limit from
# ever becoming a factor.

zawy12 commented on August 18, 2024

The Zcash (and Digishield) code uses
int64_t nActualTimespan = nLastBlockTime - nFirstBlockTime;
which is used to get an average solvetime (after effectively dividing by N). However, when there are negative solvetimes due to some timestamps coming before previous timestamps, the negative solvetimes are not accounted for by this method, and the average comes out incorrect. My changes above require the average to be calculated correctly, because the first correct timestamp erases the effect of previous bad timestamps. I've posted examples of the error in other threads. The error might be substantial if it's not corrected.

Zcash's "incorrect" average does not cause an incorrect final average in the Zcash code because the MTP code (see note) prevents Zcash's false average from being incorrect: it never sees a negative timestamp. But the use of the MTP code comes at the cost of delaying the average 5 blocks.

So, I can't change the 11 to 1 in the above code for the MTP unless I also fix the average.

note: Median Time Past - see the link to "Median" in the previous comment. This code goes back to Bitcoin trying to stop timewarp attacks, which do not occur with rolling averages. Although MTP appears to handle negative timestamps well, the correct average is better because it sets only 1 block to the wrong difficulty, quickly corrects, and thereby does not cause a 5-block delay.

zawy12 commented on August 18, 2024

Here's an algorithm to directly penalize "hash attackers" and reward constant-on miners. I'm posting it on the Karbowanec coin page because they will probably test it, if not implement it. In my testing it did not need any modification (the average solvetime came out correct).

seredat/karbowanec@231db52#commitcomment-24037066

leto commented on August 18, 2024

@zawy12 can you give guidance on the exact parameters/code to use so we can implement your latest LWMA-3 in Hush?

zawy12 commented on August 18, 2024

I really don't have an LWMA-2 or 3 "ready to go" for BTC/Zcash clones. There was a Zcash clone working on LWMA-2, but I can't remember which one. Here's the code for LWMA for Zcash/BTC clones:

zawy12/difficulty-algorithms#3 (comment)
