
Comments (37)

evgenykuzyakov avatar evgenykuzyakov commented on July 26, 2024 1

The current solution doesn't address the congestion issue completely. It just makes the gas price at prepay time consistent with the gas price at burn time. It prevents a shard from staying in the delayed queue for too long, since the max total input equals the total possible output.
It's still possible to spam a shard with new transactions and prevent legitimate transactions from being selected. It's a lottery right now.

So we need a more capitalistic solution that allows people to pay more to get priority, instead of everyone paying the same and not being able to affect the order.

from neps.

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

Unfortunately, there is no incentive for users not to set the prepaid gas too high, since we reimburse all unused gas. This means people can fill blocks with transactions that have 300 Tgas attached but burn only 5 Tgas.

I don't think that is the case. When we process receipts, we look at burnt gas to determine the limit. This means that if there are a lot of transactions that have 300 Tgas attached but only use 5 Tgas, we will be able to process all the receipts in one block. I agree that it is a problem if each of the transactions actually burns 300 Tgas, but in that case I don't think what is proposed here helps either.
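
This point can be sketched as a toy simulation, assuming an illustrative 600 Tgas per-block limit (the names and constants here are made up for the example, not nearcore's actual types): when block capacity is accounted by burnt gas rather than attached gas, many "300 Tgas attached, 5 Tgas burnt" receipts still fit in one block.

```rust
const TGAS: u64 = 1_000_000_000_000;
const BLOCK_GAS_LIMIT: u64 = 600 * TGAS; // assumed per-block limit

struct Receipt {
    #[allow(dead_code)]
    attached_gas: u64, // prepaid; the unused portion is refunded
    burnt_gas: u64,    // what execution actually consumes
}

/// Count how many receipts fit in one block when capacity is
/// accounted by burnt gas rather than attached gas.
fn receipts_per_block(receipts: &[Receipt]) -> usize {
    let mut used: u64 = 0;
    let mut count = 0;
    for r in receipts {
        if used + r.burnt_gas > BLOCK_GAS_LIMIT {
            break;
        }
        used += r.burnt_gas;
        count += 1;
    }
    count
}

fn main() {
    // 200 receipts, each attaching 300 Tgas but burning only 5 Tgas.
    let light: Vec<Receipt> = (0..200)
        .map(|_| Receipt { attached_gas: 300 * TGAS, burnt_gas: 5 * TGAS })
        .collect();
    // 600 Tgas / 5 Tgas burnt => 120 of them fit into a single block.
    println!("{}", receipts_per_block(&light));
}
```

If instead the block were filled by attached gas, only two of those receipts would fit, which is the underutilization the later comments discuss.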

MaksymZavershynskyi avatar MaksymZavershynskyi commented on July 26, 2024

Unfortunately, there is no incentive for users not to set the prepaid gas too high, since we reimburse all unused gas. This means people can fill blocks with transactions that have 300 Tgas attached but burn only 5 Tgas.

I don't think that is the case. When we process receipts, we look at burnt gas to determine the limit. This means that if there are a lot of transactions that have 300 Tgas attached but only use 5 Tgas, we will be able to process all the receipts in one block. I agree that it is a problem if each of the transactions actually burns 300 Tgas, but in that case I don't think what is proposed here helps either.

My argument that you quoted explains why users don't have an incentive to care about prepaid gas. The counter-argument, "This means that if there are a lot of transactions that have 300 Tgas attached but only use 5 Tgas, we will be able to process all the receipts in one block", does not show that users will care about prepaid gas.

Here is an example: imagine we've been running the network for 2 months; users do not care about prepaid gas because it is refunded, and as a bonus all transactions are processed without delay. Then a malicious user comes in and exploits the fact that they can stall the system for 2 minutes.

I.e. what I am arguing is: "we need to restore the invariant" => "we need to fill blocks using prepaid gas" => "prepaid gas needs to be close to used gas or else blocks will be underutilized" => "users need to care about prepaid gas" => "we need to burn all prepaid gas". What you are trying to disprove is "users don't care about the gas" => "the blocks are congested", which is not what I am saying.

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

I don't think I understand. How is burning all prepaid gas different from a transaction that uses all prepaid gas in the current setting?

MaksymZavershynskyi avatar MaksymZavershynskyi commented on July 26, 2024

I don't think I understand. How is burning all prepaid gas different from a transaction that uses all prepaid gas in the current setting?

The fact that we have occasional transactions that utilize all of their prepaid gas does not solve the congestion issue, while burning all prepaid gas does.

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

I think I understand your argument now. Let's say that we burn all prepaid gas. Now if an attacker wants to execute the same attack they have to saturate every block with transactions that attach a lot of gas. But the overall cost is the same as the attack in the current system because we charge receipts at the gas price of the block in which they are processed. Is your argument that it is much harder for an attacker to continuously saturate blocks with transactions?

MaksymZavershynskyi avatar MaksymZavershynskyi commented on July 26, 2024

I think I understand your argument now. Let's say that we burn all prepaid gas. Now if an attacker wants to execute the same attack they have to saturate every block with transactions that attach a lot of gas. But the overall cost is the same as the attack in the current system because we charge receipts at the gas price of the block in which they are processed. Is your argument that it is much harder for an attacker to continuously saturate blocks with transactions?

Close, but not exactly. The argument is that there won't be a difference between the attacker and a regular user -- if a user is willing to saturate the block and pay for it, it does not matter whether they saturate it with useful computation or useless computation, or underutilize the block but still pay for the unused CPU.

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

they underutilize the block but still pay for the unused CPU.

It seems that you are suggesting that in the current system the attacker can abuse it without paying for unused CPU. I don't see why that is the case. As I mentioned before, the number of receipts we process depends on their burnt gas, not prepaid gas. So I don't see how the attacker gets away without paying for unused CPU.

MaksymZavershynskyi avatar MaksymZavershynskyi commented on July 26, 2024

It seems that you are suggesting that in the current system the attacker can abuse it without paying for unused CPU.

In a system where blocks are filled based on prepaid gas rather than fees, the attacker can abuse it without paying for CPU; therefore, if we fill blocks based on prepaid gas, we also need to burn all the prepaid gas.

We need to fill blocks based on prepaid gas rather than fees to ensure the invariant described in the issue; if the invariant is broken, no heuristic can prevent the abuse.

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

In a system where blocks are filled based on prepaid gas rather than fees, the attacker can abuse it without paying for CPU; therefore, if we fill blocks based on prepaid gas, we also need to burn all the prepaid gas.

But that is not the case. When we process receipts, the block is filled based on burnt gas, not prepaid gas. So we can process a lot of receipts with 300 Tgas prepaid but only 5 Tgas burnt.

evgenykuzyakov avatar evgenykuzyakov commented on July 26, 2024

This has to go to a NEP.

evgenykuzyakov avatar evgenykuzyakov commented on July 26, 2024

@vgrichina @kcole16 @mikedotexe @potatodepaulo for DevX.

@ilblackdragon @SkidanovAlex for comment on whether this is an acceptable approach.

evgenykuzyakov avatar evgenykuzyakov commented on July 26, 2024

While I think it's the best option so far, I'm not quite sure how to address its downsides for contract development. Since our fees right now are 3 times higher than reality, all contracts are incentivized to attach more gas to be on the safe side.

It is also going to affect limited-allowance access keys, because they will not produce receipts.

evgenykuzyakov avatar evgenykuzyakov commented on July 26, 2024

Another big issue is compilation cost. If we don't resolve it with a magical solution, then every call (including cross-contract promises and callbacks) has to attach a ton of gas for a potential cold-cache compilation hit.

@bowenwang1996 suggested ignoring compilation cost and pre-compiling contracts on deploy. We can increase the deploy cost to pay for this. Then, during sync, your node is responsible for compiling all contracts within its shard. Pre-compiled contracts can be stored on disk (not in the trie).

More details for compilation cost fix: #97 (comment)

SkidanovAlex avatar SkidanovAlex commented on July 26, 2024

@evgenykuzyakov pre-compiled contracts don't have to be in memory, right? If not, I like the approach.

mikedotexe avatar mikedotexe commented on July 26, 2024

I can't say that I'm tracking all of this, but I think I get the gist.
I don't see this being a huge issue for DevX, honestly. I think we would add an important line item to our Go to Mainnet Checklist to make sure partners take gas estimation seriously.

As far as I am aware, we have not delivered a demonstration app showing gas estimation (the line item "Basic Tool for Ballparking" on this doc is slated for the end of the month), but we have delivered docs. We'll definitely want to change this section of that page if we go forward with this plan.

The way I see it, we would instruct partners (or "heavily suggest" that they) create simulation tests for the most common transactions in their project, gathering the gas costs. During the simulation tests they would add something like
println!("Log(s) gas burnt {:?}", execution_outcome.gas_burnt); in a place similar to this line, then use that value to determine how much gas they should be adding per call. Right now I don't think anyone knows how much gas to add.

Besides that, we would also want to sweep the example repositories (including the near-sdk-rs examples directory) and change the large amounts of gas set as constants. (Or at the very least add a comment that no one should simply use this max value, as it's costly in the long term.)
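
For illustration, the measure-then-pad approach could look like this minimal helper (the function name and the 20% margin are invented for the example, not an SDK API):

```rust
const TGAS: u64 = 1_000_000_000_000;

/// Attach the gas measured in a simulation test plus `margin_percent`
/// headroom, instead of a blanket 300 Tgas constant.
fn gas_to_attach(measured_burnt: u64, margin_percent: u64) -> u64 {
    measured_burnt + measured_burnt * margin_percent / 100
}

fn main() {
    // e.g. a simulation test reported 5 Tgas burnt for this call
    let measured = 5 * TGAS;
    let attach = gas_to_attach(measured, 20);
    // 6 Tgas attached instead of a 300 Tgas constant
    println!("{}", attach / TGAS);
}
```

The margin would be tuned per call site: calls whose cost fluctuates with contract state need more headroom than fixed-cost ones.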

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

@SkidanovAlex yes they will be on disk. See our discussion here #97

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

@mikedotexe I actually think the opposite. If we commit to this change, it means that if you attach less gas than what is needed, your contract call will fail and you still have to pay for it. It is difficult to estimate the cost precisely because it can depend on the contract state, which may be constantly changing. It becomes really bad when the contract owner redeploys the contract and changes the logic inside. So you almost always have to attach more gas than needed to err on the side of caution, and therefore waste some gas.

evgenykuzyakov avatar evgenykuzyakov commented on July 26, 2024

Overall, with free compilation (moved to deploy), the cost of a function call will decrease, so you would mostly attach gas only for compute and storage reads/writes. We'll also subtract the base cost and re-run the param estimator.
There are going to be fewer refunds, so the overall cost will be even lower.

The solution will work perfectly fine for a single shard, but we'll need to consider gas price auctions or a max gas price across shards for a sharded system.

So I would vote for doing this with the full cache of compiled contracts. I think these 2 solutions fully address the congestion and compilation issues in the short term for a single shard. But we'll need to reconsider gas pricing for a multi-sharded version, due to attacks on a single shard without a global gas price increase.

vgrichina avatar vgrichina commented on July 26, 2024

I think our DevX is already pretty bad around gas, and there is no way to reliably estimate how much gas to attach besides measuring and overshooting by 1-2 orders of magnitude.

Making it even worse (by burning prepaid gas) doesn't look like a good solution from a DevX POV at all. Especially when combined with other fun stuff (like having to send some NEAR with the call as well when using a fungible token contract, etc.).

Is there any way we can make it radically simpler? E.g. maybe attach some fee in NEAR tokens instead of gas?

P.S. I'm not sure why having tokens locked for a while isn't already a pretty big deterrent against attaching too much gas.

Before we decreased the default gas in near-api-js, I effectively had to lock 2 NEAR for every function call. Isn't that already a big enough deterrent not to set the gas limit too high?

cc @mikedotexe @chadoh @kcole16

vgrichina avatar vgrichina commented on July 26, 2024

@nearmax I read your post more attentively, and I think I understand why even locking 2 NEAR is not a big enough deterrent.

However, this assumes that there are no other incentives not to spam like that. What will happen if we sample transactions based on the attached prepaid gas (lower gas – higher chance of being included in the block)?

vgrichina avatar vgrichina commented on July 26, 2024

@evgenykuzyakov what do you think about sampling transactions for inclusion with priority given to lower-gas ones? We could use the transaction hash for deterministic randomness. This should encourage specifying lower gas, since it gets lower latency.
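
A minimal sketch of what such sampling might look like (entirely hypothetical; the hash scramble and the weighting scheme are illustrative choices, not a protocol proposal): derive a deterministic "lottery ticket" from the transaction hash, scale it by attached gas, and include the lowest tickets first, so lower-gas transactions tend to win.

```rust
#[derive(Clone)]
struct Tx {
    hash: u64,         // stand-in for the real transaction hash
    attached_gas: u64, // prepaid gas
}

/// Deterministic pseudo-random value in [0, 1) derived from the hash
/// (a splitmix64-style scramble, used here only for illustration).
fn ticket_noise(hash: u64) -> f64 {
    let mut z = hash.wrapping_add(0x9E3779B97F4A7C15);
    z = (z ^ (z >> 30)).wrapping_mul(0xBF58476D1CE4E5B9);
    z = (z ^ (z >> 27)).wrapping_mul(0x94D049BB133111EB);
    (z ^ (z >> 31)) as f64 / u64::MAX as f64
}

/// Order candidates so lower attached gas tends to sort first,
/// deterministically for a given set of hashes.
fn select_order(mut txs: Vec<Tx>) -> Vec<Tx> {
    txs.sort_by(|a, b| {
        let ka = ticket_noise(a.hash) * a.attached_gas as f64;
        let kb = ticket_noise(b.hash) * b.attached_gas as f64;
        ka.partial_cmp(&kb).unwrap()
    });
    txs
}

fn main() {
    let txs = vec![
        Tx { hash: 1, attached_gas: 300 },
        Tx { hash: 2, attached_gas: 5 },
        Tx { hash: 3, attached_gas: 300 },
    ];
    // Every validator computes the same order from the same hashes.
    let a: Vec<u64> = select_order(txs.clone()).iter().map(|t| t.attached_gas).collect();
    let b: Vec<u64> = select_order(txs).iter().map(|t| t.attached_gas).collect();
    println!("deterministic: {}", a == b);
}
```

Bowen's starvation objection below applies directly to this scheme: a transaction that legitimately needs a lot of gas always draws a large ticket and can be crowded out indefinitely.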

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

@vgrichina that is not great, because people who legitimately need to run transactions that cost more gas will starve.

MaksymZavershynskyi avatar MaksymZavershynskyi commented on July 26, 2024

@vgrichina Our top goal is to make sure our protocol works and is not abusable. We cannot have a node that has convenient DevX but is abusable. For example, we cannot argue that our consensus is slow and replace it with a heuristic that works in most but not all cases and has better DevX.

When you propose a heuristic, make sure it preserves the invariant; if it does not, the node most likely remains abusable.

@evgenykuzyakov

So we need a more capitalistic solution that allows people to pay more to get priority, instead of everyone paying the same and not being able to affect the order.

We can add this feature incrementally. First, we need to make sure people cannot grab part of the network's capacity at a cost lower than that of actually using the network capacity.

vgrichina avatar vgrichina commented on July 26, 2024

@nearmax I'm assuming the invariant you mention is:

In a given time interval T it is possible to submit transactions with attached gas that can potentially result in more CPU computation than can be processed in a time interval of the same length T.

I don't see how we break it when we use attached gas to estimate how many transactions can be processed. It seems we have the opposite problem – we can have far less CPU computation than expected given the attached gas, i.e. everything will still go ok, but with a suboptimal number of transactions per block.

This IMO is nowhere near being a blocker for Phase 1. It is effectively a performance degradation issue, of which we have plenty (e.g. heavy RPC load doesn't seem to be handled well either).

We can add this feature incrementally.
First, we need to make sure people cannot grab part of the network's capacity at a cost lower than that of actually using the network capacity.

I think it's exactly the opposite. Being able to pay for priority is more important, as congestion will happen (with real load spikes) no matter how hard we try to protect against all possible attacks on throughput (especially the ones only profitable for validators).

MaksymZavershynskyi avatar MaksymZavershynskyi commented on July 26, 2024

I don't see how we break it when we use attached gas to estimate how many transactions can be processed. It seems we have the opposite problem – we can have far less CPU computation than expected given the attached gas, i.e. everything will still go ok, but with a suboptimal number of transactions per block.

@vgrichina, @bowenwang1996 to clarify: we are already directly incentivizing the system to work at only 50% capacity, by having gas price inflation when blocks are more than 50% full. So our current system is already designed to work at partial capacity.
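
That 50%-target adjustment can be sketched in integer arithmetic (the 1% per-block rate, basis-point encoding, and rounding here are assumed illustrative values, not the protocol's exact constants): the price drifts up when blocks are more than half full and down when they are less than half full.

```rust
/// Next gas price given block fullness. `adj_rate_bp` is the maximum
/// per-block adjustment in basis points (100 bp = 1%, assumed here).
/// Implements price * (1 + adj_rate * (gas_used/gas_limit - 1/2)).
fn next_gas_price(price: u128, gas_used: u128, gas_limit: u128, adj_rate_bp: u128) -> u128 {
    // 2*gas_used - gas_limit ranges over [-gas_limit, +gas_limit],
    // so the adjustment ranges over [-adj_rate/2, +adj_rate/2].
    let delta = 2 * gas_used as i128 - gas_limit as i128;
    let adj = price as i128 * adj_rate_bp as i128 * delta
        / (2 * gas_limit as i128 * 10_000);
    (price as i128 + adj) as u128
}

fn main() {
    // Half full: unchanged. Full: +0.5%. Empty: -0.5%.
    println!(
        "{} {} {}",
        next_gas_price(10_000, 500, 1_000, 100),
        next_gas_price(10_000, 1_000, 1_000, 100),
        next_gas_price(10_000, 0, 1_000, 100)
    );
}
```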

I don't see how we break it when we use attached gas to estimate how many transactions can be processed.

We currently do not fill blocks based on how much gas is attached.

This IMO is nowhere near being a blocker for Phase 1.

It is not a Phase 1 blocker; we agree on that.

It is effectively a performance degradation issue, of which we have plenty (e.g. heavy RPC load doesn't seem to be handled well either).

It is not performance degradation, it is an explicit vulnerability that allows anyone to create 1 hour of downtime per day for our system at a very low cost.

I think it's exactly the opposite. Being able to pay for priority is more important, as congestion will happen (with real load spikes) no matter how hard we try to protect against all possible attacks on throughput (especially the ones only profitable for validators).

It is a valid opinion that gas pricing can be more important, but burning all prepaid gas is also extremely important, because, as I explained above, not doing so leaves our system extremely vulnerable.

mikedotexe avatar mikedotexe commented on July 26, 2024

This is going to be rough for partners like Flux that can't accurately determine gas costs.
I suggest, if possible, that we try to allow for some kind of dry-run or estimation system before we institute this change.

vgrichina avatar vgrichina commented on July 26, 2024

@mikedotexe why Flux specifically cannot determine gas cost?

Note that it should be pretty reasonable to always burn, say, 30 Tgas even if some tx only takes 5 Tgas.
As long as we keep the gas cost sufficiently low (i.e. < 1 cent per tx), it should be ok to just estimate the order of magnitude.

mikedotexe avatar mikedotexe commented on July 26, 2024

I propose we capture the surplus gas, turn it into Ⓝ, and send it to a community fund account rather than burning it. That community fund can, down the road, be allocated via governance votes to better the ecosystem.

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

@mikedotexe I suspect that validators will not like it since it means they get less reward.

mikedotexe avatar mikedotexe commented on July 26, 2024

@mikedotexe I suspect that validators will not like it since it means they get less reward.

This makes me think that I misunderstand this whole issue. The "burn all prepaid gas" part in the title of this issue leads me to believe the validators would not have gotten this reward; it would be burned. I'm suggesting that whatever was going to be burned should instead be invested in public goods. Do I fundamentally misunderstand this?

bowenwang1996 avatar bowenwang1996 commented on July 26, 2024

@mikedotexe I don't think we agreed on whether the surplus is burned without contributing to the validator reward. But even if that is the case, validators will still prefer burning it, since that decreases the total supply.

MaksymZavershynskyi avatar MaksymZavershynskyi commented on July 26, 2024

Independently of whether we burn it all or send it to the community fund, it is going to affect contract DevX the same way.

mikedotexe avatar mikedotexe commented on July 26, 2024

I think the best suggestion I can give them is to use simulation tests to determine how much each call will be, and for those that may fluctuate (Flux has these) determine some amount (a percentage?) of padding on top.
If there are other ways to help them estimate, please let me know.

MaksymZavershynskyi avatar MaksymZavershynskyi commented on July 26, 2024

The problem is that until we enable these fees: near/nearcore#3279 (comment), their estimation is going to be wrong; however, they will very likely overestimate it significantly. So I suggest they not spend too much time on a sophisticated estimator and just grab some meaningful upper bound.

mikedotexe avatar mikedotexe commented on July 26, 2024

The problem is that until we enable these fees: nearprotocol/nearcore#3279 (comment), their estimation is going to be wrong; however, they will very likely overestimate it significantly. So I suggest they not spend too much time on a sophisticated estimator and just grab some meaningful upper bound.

Thanks, that's very helpful.

norwnd avatar norwnd commented on July 26, 2024

I wonder whether this is still relevant...? Anyway, it's an interesting discussion; leaving my 2 cents here.

The source-and-sink analogy is interesting, but as @evgenykuzyakov pointed out above, it's not the whole story.

Analysing the congestion example:

Suppose there is a contract that burns 300 Tgas during its execution, and suppose I create 200 transactions that call this contract and submit them asynchronously from multiple accounts so that they end up in the same block. All 200 transactions are going to be admitted into a single block, because converting a single function-call transaction to a receipt only costs ~2.5 Tgas. Unfortunately, only 2 such function calls can be processed per block, which means that for the next 100 blocks the shard will be doing nothing but processing delayed receipts and will not be processing new transactions, resulting in almost 2 min of downtime for the clients that are using our blockchain.

The cost of a single such attack is 60 NEAR, but the attacker can then repeat it after the delayed receipts are processed.
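
The quoted numbers are consistent under some stated assumptions: a 600 Tgas per-block limit, roughly 1-second blocks, and a gas price of 10^9 yoctoNEAR per gas unit (all illustrative; the real protocol constants may differ).

```rust
const TGAS: u64 = 1_000_000_000_000;
const YOCTO_PER_NEAR: u128 = 1_000_000_000_000_000_000_000_000;

/// Blocks needed to drain the attack, and its total cost in NEAR,
/// under the assumed (illustrative) constants.
fn attack_stats(txs: u64, burn_per_tx: u64, block_limit: u64, price_yocto_per_gas: u128) -> (u64, u128) {
    let calls_per_block = block_limit / burn_per_tx; // 600 / 300 = 2
    let blocks_to_drain = txs / calls_per_block;     // 200 / 2 = 100
    let total_gas = txs as u128 * burn_per_tx as u128;
    let cost_near = total_gas * price_yocto_per_gas / YOCTO_PER_NEAR;
    (blocks_to_drain, cost_near)
}

fn main() {
    // 200 calls burning 300 Tgas each, at 1 GyoctoNEAR per gas unit.
    let (blocks, cost) = attack_stats(200, 300 * TGAS, 600 * TGAS, 1_000_000_000);
    // ~1 s per block => about 100 s (~2 min) of congestion.
    println!("{} blocks, {} NEAR", blocks, cost);
}
```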

there are 2 different downsides (to how this is currently, seemingly, handled):

  • the transactions being executed come from a single (perhaps malicious) actor, temporarily denying "democratic" access to the NEAR network
  • the shard charges too low a fee (during such congested conditions)

denying "democratic" access

Since we cannot know whether transactions are coming from the same actor (and whether or not they are malicious), there seems to be no way to solve this other than users attaching a gas price to each transaction (so those who need access get it, and others wait in line - which is fine as long as the NEAR network makes enough on transaction fees; it sells block space, after all).

shard charges too low

60 NEAR is a surprisingly low number for pulling off such an attack, which means either that shards should be more "beefy" or that transaction execution should be charged at a higher rate, or both.

So it looks like Ethereum with EIP-1559 nailed it?

Edit: one thing the Ethereum fee model lacks is that it doesn't differentiate between block-space consumers and penalizes ALL of them equally. Ideally, when lots of NFTs get minted or somebody is rushing to liquidate someone in DeFi, that shouldn't bump my fees for sending stablecoins around (because they are totally unrelated to the congested activity).
