
fips's Introduction

Filecoin Improvement Protocol

The Filecoin Improvement Protocol contains the set of fundamental governing principles for the Filecoin Network. It outlines the vision for Filecoin and the principles, processes, and parties involved in making decisions that affect the future of the network. It also describes how improvements to these rules can be proposed and ratified.

The Filecoin Vision

Filecoin is a peer-to-peer network that stores files, with built-in economic incentives to ensure that files are stored reliably over time. Its mission is to create a decentralized, efficient and robust foundation for humanity’s information. To advance that mission, Filecoin has created a decentralized storage network that lets anyone in the world store or retrieve files.

In Filecoin, users pay to store their files on storage miners. Storage miners are computers responsible for storing files and proving they have stored the files correctly over time. Anyone who wants to store their files or get paid for storing other users’ files can join Filecoin. Available storage and pricing are not controlled by any single entity. Instead, Filecoin facilitates open markets for storing and retrieving files that anyone can participate in, thereby providing storage to billions of people who are currently locked out of the web.

Filecoin Design Principles

The design of Filecoin is intended to follow a set of principles. The community will help define these principles in the coming months.

Filecoin Improvement Principles

When making decisions about how to improve Filecoin, we will follow a set of principles. The community will help define these principles in the coming months.

Making changes to the Filecoin network

Filecoin Improvement Proposals (FIPs) are the primary mechanism by which the Filecoin community can submit, discuss, and approve changes relevant to the Filecoin network. These discussions and decisions should be guided by the governance and design principles above.

FIPs are classified into three categories:

Technical FIPs, or Filecoin Technical Proposals (FTPs), are designed to gather community feedback on technical Filecoin issues. These include changes to the Filecoin protocol, changes to block or transaction validity rules, and proposed application standards or conventions. They are reviewed by the Filecoin community and the technical steering committee, and are normally followed by a PR to the Filecoin Specification repository to update the protocol's spec.

Organizational FIPs, or Filecoin Organization Proposals (FOPs), allow the Filecoin community to propose, discuss, and achieve consensus on Filecoin governance. This includes procedures, guidelines, decision-making processes, and changes to FIP processes.

Recovery FIPs, or Filecoin Recovery Proposals (FRPs), are intended to provide the Filecoin community with a forum to raise, discuss, and achieve consensus on fault recovery and chain rewrites, under a very limited, clearly-defined set of criteria (e.g., in the case of protocol bugs destroying network value). The community will help define this process as needed in the coming months.

A decentralized, global network

Filecoin is still in its infancy, but it has the potential to play a central role in the storage and distribution of humanity’s information. To help the network grow and evolve, it is critical for the community to collectively be engaged in proposing, discussing, and implementing changes that improve the network and its operations.

This improvement protocol helps achieve that objective for all members of the Filecoin community (developers, miners, clients, token holders, ecosystem partners, and more).

FIPs

| FIP # | Title | Type | Author | Status |
|---|---|---|---|---|
| 0001 | FIP Purpose and Guidelines | FIP | @Whyrusleeping | Active |
| 0002 | Free Faults on Newly Faulted Sectors of a Missed WindowPoSt | FIP | @anorth, @davidad, @miyazono, @irenegia, @lucaniz, @nicola, @zixuanzh | Final |
| 0003 | Filecoin Plus Principles | FIP | @feerst, @jbenet, @jnthnvctr, @tim-murmuration, @mzargham, @zixuanzh | Active |
| 0004 | Liquidity Improvement for Storage Miners | FIP | @davidad, @jbenet, @zenground0, @zixuanzh, @danlessa | Final |
| 0005 | Remove ineffective reward vesting | FIP | @anorth, @Zenground | Final |
| 0006 | No repay debt requirement for DeclareFaultsRecovered | FIP | @nicola, @irenegia | Deferred |
| 0007 | h/amt-v3 | FIP | @rvagg, @Stebalien, @anorth, @Zenground0 | Final |
| 0008 | Add miner batched sector pre-commit method | FIP | @anorth, @ZenGround0, @nicola | Final |
| 0009 | Exempt Window PoSts from BaseFee burn | FIP | @Stebalien, @momack2, @magik6k, @zixuanzh | Final |
| 0010 | Off-Chain Window PoSt Verification | FIP | @Stebalien, @anorth | Final |
| 0011 | Remove reward auction from reporting consensus faults | FIP | @Kubuxu | Final |
| 0012 | DataCap Top up for FIL+ Client Addresses | FIP | @dshoy, @jnthnvctr, @zx | Final |
| 0013 | Add ProveCommitSectorAggregated method to reduce on-chain congestion | FIP | @ninitrava, @nicola | Final |
| 0014 | Allow V1 proof sectors to be extended up to a maximum of 540 days | FIP | @deltazxm, @neogeweb3 | Final |
| 0015 | Revert FIP-0009 (Exempt Window PoSts from BaseFee burn) | FIP | @jennijuju, @arajasek | Final |
| 0016 | Pack arbitrary data in CC sectors | FIP | donghengzhao (@1475) | Deferred |
| 0017 | Three-messages lightweight sector updates | FIP | @nicola, @lucaniz, @irenegia | Deferred |
| 0018 | New miner terminology proposal | FIP | @Stefaan-V | Final |
| 0019 | Snap Deals | FIP | @Kubuxu, @lucaniz, @nicola, @rosariogennaro, @irenegia | Final |
| 0020 | Add return value to WithdrawBalance | FIP | @Stefaan-V | Final |
| 0021 | Correct quality calculation on expiration | FIP | @Steven004, @Zenground0 | Final |
| 0022 | Bad deals don't fail PublishStorageDeals | FIP | @Zenground0 | Final |
| 0023 | Break ties between tipsets of equal weights | FIP | @sa8, @arajasek | Final |
| 0024 | BatchBalancer & BatchDiscount post-Hyperdrive adjustment | FIP | @zx, @jbenet, @zenground0, @momack2 | Final |
| 0025 | Handle expired deals in ProveCommit | FIP | @ZenGround0 | Deferred |
| 0026 | Extend sector fault cutoff period from 2 weeks to 6 weeks | FIP | @IPFSUnion | Final |
| 0027 | Change type of DealProposal Label field from a (Golang) String to a Union | FIP | @laudiacay, @Stebalien, @arajasek | Final |
| 0028 | Remove DataCap and verified client status from client address | FIP | @jennijuju, @dkkapur | Final |
| 0029 | Beneficiary address for storage providers | FIP | @steven004 | Final |
| 0030 | Introducing the Filecoin Virtual Machine (FVM) | FIP | @raulk, @stebalien | Final |
| 0031 | Atomic switch to non-programmable FVM | FIP | @raulk, @stebalien | Final |
| 0032 | Gas model adjustment for non-programmable FVM | FIP | @raulk, @stebalien | Final |
| 0033 | Explicit premium for FIL+ verified deals | FIP | @anorth | Deferred |
| 0034 | Fix pre-commit deposit independent of sector content | FIP | @anorth, @Kubuxu | Final |
| 0035 | Support actors as built-in storage market clients | FIP | @anorth | Withdrawn |
| 0036 | Introducing a Sector Duration Multiple for Longer Term Sector Commitment | FIP | @AxCortesCubero, @jbenet, @misilva73, @momack2, @tmellan, @vkalghatgi, @zixuanzh | Rejected |
| 0037 | Gas model adjustment for user programmability | FIP | @raulk, @stebalien | Draft |
| 0038 | Indexer Protocol for Filecoin Content Discovery | FRC | @willscott, @gammazero, @honghaoq | Draft |
| 0039 | Filecoin Message Replay Protection | FIP | @q9f | Draft |
| 0040 | Boost - Filecoin Storage Deals Market Protocol | FRC | @dirkmc, @nonsense, @jacobheun, @brendalee | Draft |
| 0041 | Forward Compatibility for PreCommit and ReplicaUpdate | FIP | @Kubuxu | Final |
| 0042 | Calling Convention for Hashed Method Name | FRC | @Kubuxu, @anorth | Draft |
| 0044 | Standard Authentication Method for Actors | FIP | @arajasek, @anorth | Final |
| 0045 | De-couple verified registry from markets | FIP | @anorth, @zenground0 | Final |
| 0046 | Fungible token standard | FRC | @anorth, @jsuresh, @alexytsu | Draft |
| 0047 | Proof Expiration & PoRep Security Policy | FIP | @Kubuxu, @irenegia, @anorth | Superseded |
| 0048 | f4 Address Class | FIP | @stebalien, @mriise, @raulk | Final |
| 0049 | Actor Events | FIP | @stebalien, @raulk | Final |
| 0050 | API Between User-Programmed Actors and Built-In Actors | FIP | @anorth, @arajasek | Final |
| 0051 | Synchronous Consistent Block Broadcast for EC Security | FRC | Guy Goren, Alfonso de la Rocha | Draft |
| 0052 | Increase max sector commitment to 3.5 years | FIP | @anorth | Final |
| 0053 | Non-Fungible Token Standard | FRC | @alexytsu, @abright, @anorth | Draft |
| 0054 | Filecoin EVM Runtime (FEVM) | FIP | @raulk, @stebalien | Final |
| 0055 | Supporting Ethereum Accounts, Addresses, and Transactions | FIP | @raulk, @stebalien | Final |
| 0056 | Sector Duration Multiplier | FIP | @AxCortesCubero, @jbenet, @misilva73, @momack2, @tmellan, @vkalghatgi, @zixuanzh | Rejected |
| 0057 | Update Gas Charging Schedule and System Limits for FEVM | FIP | @raulk, @stebalien, @aakoshh, @kubuxu | Final |
| 0058 | Verifiable Data Aggregation | FRC | Jakub Sztandera (@Kubuxu), Nicola Greco (@nicola), Peter Rabbitson (@ribasushi) | Draft |
| 0059 | Synthetic PoRep | FIP | @Kubuxu, @Luca, @Rosario Gennaro, @Nicola, @Irene | Final |
| 0060 | Set market deal maintenance interval to 30 days | FIP | Jakub Sztandera (@Kubuxu), @Zenground0, Alex North (@anorth) | Final |
| 0061 | WindowPoSt Grindability Fix | FIP | @cryptonemo, @Kubuxu, @DrPeterVanNostrand, @Nicola, @porcuquine, @vmx, @arajasek | Final |
| 0062 | Fallback Method Handler for the Multisig Actor | FIP | Dimitris Vyzovitis (@vyzo), Raúl Kripalani (@raulk) | Final |
| 0063 | Switching to new Drand mainnet network | FIP | @yiannisbot, @CluEleSsUK, @AnomalRoil, @nikkolasg, @willscott | Final |
| 0065 | Ignore built-in market locked balance in circulating supply calculation | FIP | @anorth | Accepted |
| 0066 | Piece Retrieval Gateway | FRC | @willscott, @dirkmc | Draft |
| 0067 | PoRep Security Policy & Replacement Sealing Enforcement | FIP | @Kubuxu, @anorth, @irenegia, @lucaniz | Accepted |
| 0068 | Deal-Making Between SPs and FVM Smart Contracts | FRC | @aashidham, @raulk, @skottie86, @jennijuju, @nonsense, @shrenujbansal | Draft |
| 0069 | Piece Multihash and v2 Piece CID | FRC | @aschmahmann, @ribasushi | Draft |
| 0070 | Allow SPs to move partitions between deadlines | FIP | Steven Li (@steven004), Alan Xu (@zhiqiangxu), Mike Li (@hunjixin), Alex North (@anorth), Nicola (@nicola) | Rejected |
| 0071 | Deterministic State Access (IPLD Reachability) | FIP | @stebalien | Final |
| 0072 | Improved event syscall API | FIP | @fridrik01, @Stebalien | Final |
| 0073 | Remove beneficiary from the self_destruct syscall | FIP | @Stebalien | Final |
| 0074 | Remove cron-based automatic deal settlement | FIP | @anorth, @alexytsu | Final |
| 0075 | Improvements to the FVM randomness syscalls | FIP | @arajasek, @Stebalien | Final |
| 0076 | Direct data onboarding | FIP | @anorth, @zenground0 | Final |
| 0077 | Add Cost Opportunity For New Miner Creation | FIP | Zac (@remakeZK), Mike Li (@hunjixin) | Draft |
| 0078 | Remove Restrictions on the Minting of Datacap | FIP | Fatman13 (@Fatman13), flyworker (@flyworker), stuberman (@stuberman), Eliovp (@Eliovp), dcasem (@dcasem), and The-Wayvy (@The-Wayvy) | Draft |
| 0079 | Add BLS Aggregate Signatures to FVM | FIP | Jake (@drpetervannostrand) | Accepted |
| 0080 | Phasing Out Fil+ and Restoring Deal Quality Multiplier to 1x | FIP | @Fatman13, @ArthurWang1255, @stuberman, @Eliovp, @dcasem, @The-Wayvy | Draft |
| 0081 | Introduce lower bound for sector initial pledge | FIP | @anorth, @vkalghatgi | Draft |
| 0082 | Add support for aggregated replica update proofs | FIP | nemo (@cryptonemo), Jake (@drpetervannostrand), @anorth | Accepted |
| 0083 | Add built-in Actor events in the Verified Registry, Miner and Market Actors | FIP | Aarsh (@aarshkshah1992) | Final |
| 0084 | Remove Storage Miner Actor Method ProveCommitSectors | FIP | Jennifer Wang (@jennijuju) | Accepted |
| 0085 | Convert f090 Mining Reserve actor to a keyless account actor | FIP | Jennifer Wang (@jennijuju), Jon Victor (@jnthnvctr) | Accepted |
| 0086 | Fast Finality in Filecoin (F3) | FIP | @stebalien, @masih, @mb1896, @hmoniz, @anorth, @matejpavlovic, @arajasek, @ranchalp, @jsoares, @Kubuxu, @vukolic, @jennijuju | Draft |
| 0087 | FVM-Enabled Deal Aggregation | FRC | @aashidham, @honghao, @raulk, @nonsense | Draft |
| 0089 | A Finality Calculator for Filecoin | FRC | @guy-goren, @jsoares | Draft |
| 0090 | Non-Interactive PoRep | FIP | luca (@lucaniz), kuba (@Kubuxu), nicola (@nicola), nemo (@cryptonemo), volker (@vmx), irene (@irenegia) | Accepted |
| 0091 | Add support for Homestead and EIP-155 Ethereum Transactions ("legacy" Ethereum Transactions) | FIP | Aarsh (@aarshkshah1992) | Accepted |
| 0092 | Non-Interactive PoRep | FIP | luca (@lucaniz), kuba (@Kubuxu), nicola (@nicola), nemo (@cryptonemo), volker (@vmx), irene (@irenegia), Alex North (@anorth), orjan (@Phi-rjan) | Draft |


fips's Issues

FIP proposal: alleviate window post penalty in a given period of time

Problem:

On May 24, a Chinese Vice Premier called for a crackdown on cryptocurrency mining. Follow-up instructions to IDCs may come from the government very soon, so there is a risk that IDCs cut off infrastructure supplies for hosted Filecoin nodes in China within a matter of days. This would be a massive, permanent hit both for the nodes hosted in China and for the Filecoin network as a whole. As a prepared emergency remediation for that worst-case scenario, we propose an interim period during which those nodes can migrate overseas and their power can be allowed to come back afterward.

Proposal:

Define a start height and an end height. Between them, the ongoing missed-WindowPoSt penalty is reduced to far less than 5 BR, and the 14-day deadline for terminating sectors with missed PoSts is suspended; once the end height passes, penalties and sector terminations resume. This gives miners some time (presumably much longer than 14 days, considering the logistical complexity) to physically migrate their nodes to locations where mining is allowed for the long term.
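
A minimal sketch of the proposed mechanism, with hypothetical names and a placeholder reduction factor (the proposal only says "far less" than the normal rate):

```go
package main

import "fmt"

// penaltyAt sketches the proposed grace window: between startHeight and
// endHeight the missed-WindowPoSt penalty is sharply reduced (placeholder
// factor below) and, separately, the 14-day termination clock would be
// paused; outside the window the normal penalty applies.
func penaltyAt(epoch, startHeight, endHeight, normalPenalty int64) int64 {
	if epoch >= startHeight && epoch < endHeight {
		return normalPenalty / 100 // "far less" than the usual rate; exact value TBD
	}
	return normalPenalty
}

func main() {
	fmt.Println(penaltyAt(500, 100, 1000, 5000))  // inside the window: 50
	fmt.Println(penaltyAt(1500, 100, 1000, 5000)) // after endHeight: 5000
}
```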

Free Faults on Newly Faulted Sectors of a Missed WindowPoSt

FIP-0002 Free Faults on Newly Faulted Sectors of a Missed WindowPoSt

Read: FIP Draft

Abstract

Given the state of the network, honest miners are sometimes disproportionately penalized for operational failures by the SectorFaultDetectionFee (SP), especially when missing a WindowPoSt. This proposal reduces fees on detected faults, without sacrificing much of the security or the incentive to provide reliable storage.
Specifically:

  1. Detected faults (missing a WindowPoSt deadline) on 100% healthy partitions incur no fee;
  2. In partitions where some sectors are either already faulty or have declared recovery but a WindowPoSt is missed, a SectorFaultFee (FF) is incurred on those sectors;
  3. Faulty sectors detected by missing a WindowPoSt incur an FF per proving period starting on the first proving deadline after they are detected faulty. Skipped Faults thus still require an FF starting on the first proving deadline.
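
A toy sketch of these rules, with illustrative names and a flat placeholder fee rather than the actual specs-actors calculation:

```go
package main

import "fmt"

// ffPerPeriod is a placeholder per-sector SectorFaultFee (FF) per proving
// period; the real FF is derived from expected block rewards, not a constant.
const ffPerPeriod = 7

// feeAtMissedDeadline computes the fee charged when a WindowPoSt is missed:
// newly detected faults incur no fee (rule 1), while sectors that were
// already faulty or had declared recovery incur an FF (rule 2). Rule 3, the
// ongoing FF per proving period until recovery, is charged later in cron.
func feeAtMissedDeadline(newlyFaulty, previouslyFaulty, declaredRecovered int) int {
	return (previouslyFaulty + declaredRecovered) * ffPerPeriod
}

func main() {
	fmt.Println(feeAtMissedDeadline(10, 0, 0)) // 100% healthy partition: 0
	fmt.Println(feeAtMissedDeadline(10, 3, 2)) // 5 unhealthy sectors: 35
}
```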

Liquidity Improvement for Storage Miners

FIP-000X Liquidity Improvement for Storage Miners

Read: FIP Draft

Abstract

This proposal allows a substantial fraction of block rewards to be immediately available for withdrawal to allow for greater liquidity and decision freedom for storage miners. The majority of mining rewards continue to vest over 180 days as collateral to reduce initial pledge, to align long-term incentives, and to guarantee storage reliability.

a new FIP: make sync faster by using a fast lotus daemon node in the same network, or a trusted high-quality node

This is a Technical FIP, or Filecoin Technical Proposal (FTP).
As discussed in the thread https://filecoinproject.slack.com/archives/C0179RNEMU4/p1600622389349900, many people suffer from the lotus sync problem.

Background:
Two servers run the lotus daemon on the same network. One is very fast in sync; the other is always slow.
We wish to speed up the slow one by manually adding reward/score to the trusted fast daemon.

Similar to:

INFO	pubsub	[email protected]/gossipsub.go:1499	peer 16Uiu2HAmRH6MMzsPXNRWzQvFyQgV8CVe3e6hsffjNBm5FavWDhJJ 
didn’t follow up in 1 IWANT requests; adding penalty

Exposing an API for the peer score would be better.

Fast sync will make the Filecoin network stronger.

Off-chain Window PoSt verification

Problem

Window PoSt messages are necessary for ongoing maintenance of storage power. Verifying the submitted proofs is expensive and when the gas base fee rises (due to congestion) these messages become expensive. In bad cases, for small miners with mostly-empty partitions, this cost can exceed their expected reward from maintaining power. We need to ensure that these messages are cheap for miners, even when specifying a very high gas fee cap.

Note the framing of the problem here assumes congestion. There is a separate and just-as-pressing problem of reducing congestion. But even if we make great improvements there, congestion will happen sometimes in the future for different reasons (E.g. massive popularity! Deals! DeFi!) and we'll need Window PoSt to remain cheap and effective.

This is an alternative to the Fast Track for Window PoSt and related proposals.

Proposed solution

Don't verify most Window PoSt proofs "on chain". Assume they are valid, and implement a challenge mechanism for external parties to dispute an invalid proof, verifying on-chain only at that point.

Outline

  • Change the SubmitWindowPoSt method to store the proof bytes (or a hash) in per-deadline chain state instead of verifying the proof. The method still checks that the proof is expected, records skipped faults, etc. Optimistically update state assuming the proof is valid, including marking recovered sectors and gaining power for new/recovered sectors.
  • During end-of-deadline cron, snapshot the partition state relevant for proof verification (the active/faulty sector sets and proofs themselves)
  • The proofs and snapshots remain in chain state until the next occurrence of the deadline (~24h)
  • At any point in that challenge window, a challenger can force the on-chain validation of a proof. If the proof fails validation, all of the sectors in the partitions included in that proof are marked faulty, penalised, and the miner loses power for them.

With this mechanism, the vast majority of Window PoSt proofs would never be validated on chain. The network cost of maintaining storage would be effectively constant, rather than linear in storage as it is today (at sufficient scale it would revert to linear for the work of submitting the proofs, but over that timeframe we expect to develop other aggregation techniques).
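
A rough sketch of this optimistic-accept-then-dispute flow; all names are hypothetical and the real actor state would differ:

```go
package main

import "fmt"

// deadlineSnapshot sketches the per-deadline state kept under this
// proposal: the submitted proof (or its hash) plus the partition state it
// was checked against, retained for ~24h so anyone can dispute it.
type deadlineSnapshot struct {
	proof      []byte
	partitions []uint64
	disputable bool // true until the deadline's next occurrence
}

// verifyPoSt stands in for the expensive on-chain proof verification
// syscall that this proposal avoids in the common case.
func verifyPoSt(proof []byte) bool { return len(proof) > 0 } // placeholder

// dispute forces on-chain validation inside the challenge window. If the
// proof fails, all sectors in the covered partitions are marked faulty,
// penalised, and their power removed (elided here).
func (s *deadlineSnapshot) dispute() (successful bool) {
	if !s.disputable || verifyPoSt(s.proof) {
		return false // window closed, or the proof was valid after all
	}
	// ...mark sectors in s.partitions faulty, apply penalties, drop power...
	return true
}

func main() {
	bad := &deadlineSnapshot{proof: nil, partitions: []uint64{0}, disputable: true}
	fmt.Println(bad.dispute()) // true: the empty proof fails validation
}
```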

Discussion

This issue is intended for discussion. There are a number of details to work out before drafting a FIP.

Impact

The proof verification syscall accounts for 88% of SubmitWindowPoSt's gas cost, and loading the thousands of SectorOnChainInfo records that form the public inputs accounts for the bulk of the remainder. Both of these are avoided for valid proofs, with the cost paid by a challenger rather than the miner. So the gas cost reduction will be in the range of 10-20x (maybe more for full partitions).

Rather than every node on the network validating every proof, we instead need at least one node to verify every proof.

Note that a similar technique could be applied to ProveCommit proof verifications too, but that's a bit trickier and out of scope for this proposal.

Incentives

This proposal does not include a reward for a successful challenge, though one could be added.

  • If there were a reward, it would still not be rational for a party to verify proofs with the aim of winning the reward unless that party were also a significant block producer. Any public submission of a challenge message would be censored and stolen by a block producer, who would take the reward for themselves.
    • But parties may rationally verify proofs for other reasons
  • Block producers are incentivised to detect fraudulent proofs in order to reduce the power share of other miners
    • But not proofs from below-consensus-power-threshold miners
  • Off-chain verification with no reward lends itself to miner cooperation, dividing the verification work for mutual efficiency and benefit
    • For any individual miner, the expected gain in power share might not justify the cost of verifying all proofs, but would justify doing some of the work given assurance that others would do disjoint parts of it.
    • For below-consensus-threshold miners, a cooperative of deal clients could collaborate to check all their proofs (the likely cost of this is ~ operation of a single consumer machine).
  • It may be prudent to introduce a reward covering the network transaction (gas) fee for a successful challenge, lest high gas prices exceed the expected value of increased power share
  • Protocol Labs, at least, may be considered sufficiently incentivised to run hardware to verify all proofs long into the future, regardless of any direct reward. This proposal assumes the continued existence of some organisations or cooperatives with similar broad network security incentive (the Filecoin Foundation? Institutional investors?).
  • An honest block producer is always incentivised to include a successful challenge message, if one is transmitted to them, since it will increase their power share.

Verification mechanics

Off-chain verification lends itself to continued technical improvements without the need for protocol upgrades.

  • We can immediately apply batch verification techniques to verify 10s of Window PoSt proofs together, from different miners, for a ~5x speedup, on the assumption that most proofs are valid.
  • Future improvements in verification speed can be used immediately without the need to adjust gas costs
  • Verification systems can store sector information (sealed CIDs) in any appropriate database, not limited to hash-linked structures.

Risks

Some risks to consider and satisfy ourselves about.

  • The obvious attack on this mechanism would be a cabal of large miners who refuse to include challenges of each other's storage.
    • A first consideration suggests that such a cabal would have to have at least 1/3 of power in order to be effective (at which point other attacks are possible), but this needs analysis.
  • A large miner could spam bad proofs to try to overwhelm the chain with the subsequent verification cost.
    • We'd have to pick penalties/limits to avoid this.
    • Each partition can only be used once in this way before a valid proof is needed to restore its power.
    • At the moment, we have the chain bandwidth to handle challenging every proof, and if we greatly reduce congestion, headroom here will increase by >= 10x
    • Future proof aggregation techniques may support submitting a challenge for multiple proofs with sub-linear verification cost.
  • How does this affect chain weight? At the moment, it's very difficult to increase chain weight by excluding a message but this makes it easier.

Implementation notes

The miner state update required for a successful challenge will be very similar to the state change currently implemented during the deadline cron when a "missed PoSt" is detected. Snapshotting the partition state is expected to be cheap since we can content-address state that already exists in the state tree.

Since submitting a proof will no longer involve loading all the sector information, we can probably increase the number of partitions that may be batched into a single submission (currently, and somewhat arbitrarily, set to 4).

Open questions

A bunch of details to be worked out include:

  • What is the expected reward for a block producer of detecting a fraudulent proof?
  • What is the appropriate penalty rate for bad proofs?
  • Do we need to reward gas?

FIP proposal: A more secure signature method for worker & PoSt accounts

Simple Summary

In the Filecoin network, miners use the worker and PoSt accounts to sign messages, and the private keys of the worker and PoSt wallets have to be stored on the lotus-miner server or another signing server for a long time. If the server is attacked by hackers and a private key is stolen, the miner will suffer a great loss of assets.

Change Motivation

We propose a different signing scheme for the worker wallet and PoSt wallet that classifies messages in order to protect the miner's assets.

Specification

In Filecoin, there are two main work accounts: the worker account and the PoSt account.

Worker account: miners use it to submit the PreCommitSector and ProveCommitSector messages when sealing sectors. During these processes, miners need to provide the sector pledge and pay gas fees (part of the gas fee is paid to the miner that packed the messages, and the rest is burned by transferring it to f099). Usually, these transactions are a kind of contract transfer.

PoSt account: miners use it to submit the SubmitWindowedPoSt messages. During these processes, miners also need to pay gas fees (part of the gas fee is paid to the miner that packed the messages, and the rest is burned by transferring it to f099). Usually, these transactions are a kind of non-contract transfer.

So we can classify the transfer messages as follows.

  • Working transfers: the transaction behavior of a miner in the process of sealing sectors, submitting PoSts, and so on, including providing the sector pledge and paying gas fees.

  • Financial transfers: transactions that are not working transfers, such as miner A transacting with miner B, miner A transacting with an individual account, or miner A transferring to f099.

Therefore, we can define by smart contract that when the worker and PoSt accounts make working transfers, which are contract transfers, they need only a single signature. However, when they make financial transfers, which are non-contract transfers, they should adopt multi-signature.
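
A toy sketch of the proposed classification rule (hypothetical; a real implementation would inspect the message's method and recipient):

```go
package main

import "fmt"

// requiredSignatures sketches the proposed rule: working (contract)
// transfers from the worker/PoSt accounts need only a single signature,
// while financial (non-contract) transfers must go through multisig.
func requiredSignatures(isWorkingTransfer bool) int {
	if isWorkingTransfer {
		return 1 // e.g. PreCommitSector, ProveCommitSector, SubmitWindowedPoSt
	}
	return 2 // e.g. a 2-of-3 multisig for plain value transfers
}

func main() {
	fmt.Println(requiredSignatures(true))  // sealing/PoSt message: 1
	fmt.Println(requiredSignatures(false)) // transfer to another account: 2
}
```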

In this way, we can effectively protect the miner's assets from loss when the private key of the worker/PoSt account is stolen.

Reduce congestion via Batching ProveCommits

Problem

ProveCommits and PreCommits are creating network congestion, leading to a high base fee and thus high costs for SubmitWindowPoSt and PublishStorageDeals.

Proposed solution

Processing multiple ProveCommits at the same time can drastically reduce the gas used per sector, leading to a much lower gas usage from ProveCommits. We propose to add a new method: "ProveCommitBatched".

There are two parts that can be amortized:

  • State operations: several state reads and writes can be batched - done once per ProveCommitBatched instead of once per sector (similar improvements done in #25 )
  • Batch verification: we currently batch verify 10 SNARKs in a single ProveCommit, in this proposal we propose to batch verify all the proofs in a ProveCommitBatched message.

From now on, I will call "Batching saving factor" the factor of gas saved by doing this.

This change should be done in conjunction with #25

Outline

  • Implement ProveCommitBatched where we allow for submitting from 1 to MaxBatchedProofs proofs, we take advantage of batching of state operations and verification.
  • Disable ProveCommit.

With this mechanism, miners will prefer to batch multiple proofs together since it would substantially reduce their costs.
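
One possible shape for the new message, with hypothetical field names (the final design may differ):

```go
package main

import "fmt"

// ProveCommitBatchedParams sketches (with hypothetical field names) the
// parameters for a single message proving many sectors at once: state
// reads/writes happen once per message, and all SNARKs in Proofs can be
// verified together as one batch.
type ProveCommitBatchedParams struct {
	SectorNumbers []uint64 // between 1 and MaxBatchedProofs sectors
	Proofs        [][]byte // one PoRep proof (10 SNARKs today) per sector
}

func main() {
	p := ProveCommitBatchedParams{SectorNumbers: []uint64{1, 2}, Proofs: [][]byte{{}, {}}}
	fmt.Println(len(p.SectorNumbers), "sectors in one message")
}
```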

Discussion

This issue is intended for discussion. There are a number of details to work out before drafting a FIP.

Batch verification parameters

Benchmarks

The following table describes the proof size and the batch verification times.

| #proofs | #SNARKs | size | verification time | efficiency* | savings** |
|---|---|---|---|---|---|
| 1 | 10 | 1,920 bytes | 3ms | 1 | 1x |
| 10 | 100 | 19,200 bytes | 10ms | 3.3 | 3x |
| 20 | 200 | 38,400 bytes | 15ms | 5 | 4x |
| 30 | 300 | 57,600 bytes | 23ms | 7.66 | 3.91x |
| 50 | 500 | 96,000 bytes | 40ms | 13.3 | 3.75x |
| 100 | 1000 | 192,000 bytes | 53ms | 17.76 | 5.6x |

  • efficiency*: how many single ProveCommit VerifySeal calls fit into one ProveCommitBatched's VerifySeal timing (e.g. 53ms / 3ms ≈ 17.76 for 100 proofs).
  • savings**: how much gas we save by doing ProveCommitBatched over individual ProveCommits (#proofs / efficiency, e.g. 100 / 17.76 ≈ 5.6x).

Tradeoffs

Here are some back-of-the-envelope calculations to understand the advantages and disadvantages of this proposal:

  • Aggregating more ProveCommits has an advantage in verification time: e.g. a 100-ProveCommitBatched would cost as much as 17 ProveCommits, which is about a 5.6x cost reduction.
  • Proof size is not reduced, meaning that:
    • Practical tx size limitation: a 100-ProveCommitBatched will still be 192kB large, and it may not be practical to post large transactions like this
    • Gas is paid per tx size (about 26k per SNARK)

Risks

  • Miner throughput is much higher than the batch saving factor, leaving gas fees still high. In other words, even if gas used is now, say, 5x less for ProveCommits, miners could try to onboard 5x more proofs, so congestion may remain.
  • Small miners may not be able to take advantage of large batches due to incorrect timing of PoReps

Implementation details (TODO)

More work needs to go into this, but preliminary:

  • Implementation of ProveCommitBatched in actors
  • Implementation of "batching" of provecommits in lotus that takes advantages of large batches, without risking the miners' PreCommitDeposit (e.g. storage miner waits to fill the batch, but the ProveCommit deadline has passed).
  • Increase the max tx size ByteArrayMaxLen

Open questions

  • What is the largest number of proofs that we can aggregate and still have an OK tx size?
  • What is the range of possible optimizations in state operations and how much are we expecting to get?

Add return value to WithdrawBalance

Background
Both miner and market actors have WithdrawBalance methods.

  • WithdrawBalance in the Miner actor is for the owner to withdraw the vested block rewards, which is the only way the owner can get FIL from a miner actor;
  • WithdrawBalance in the Market actor is for clients or providers to withdraw a specified amount from the balance held in escrow.

However, each method is actually an attempt to withdraw a specified amount: it can succeed even when the available balance is less than the requested amount. That means the actual amount withdrawn may be equal to or less than the amount specified in the method parameters, and the method always returns nil. As a result, there is no way to tell how much was actually withdrawn by checking the chain status and message info, e.g. from a CLI or explorer.

Proposal
Simply add a return value to the method to indicate the actual withdrawn amount.

This will improve the visibility and traceability of FIL flow. This is important especially for miners who need to have a very clear balance sheet and financial report.
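
A sketch of the existing clamping behaviour plus the proposed return value; names and units are illustrative:

```go
package main

import (
	"fmt"
	"math/big"
)

// WithdrawBalanceReturn is a hypothetical return value for the miner and
// market actors' WithdrawBalance methods, making the actual amount
// withdrawn (which may be less than requested) visible on chain.
type WithdrawBalanceReturn struct {
	AmountWithdrawn *big.Int // attoFIL actually transferred
}

// withdraw sketches the clamping behaviour: the caller asks for
// `requested`, but only `available` can be released.
func withdraw(requested, available *big.Int) WithdrawBalanceReturn {
	amount := new(big.Int).Set(requested)
	if amount.Cmp(available) > 0 {
		amount.Set(available)
	}
	return WithdrawBalanceReturn{AmountWithdrawn: amount}
}

func main() {
	fmt.Println(withdraw(big.NewInt(100), big.NewInt(40)).AmountWithdrawn) // 40
}
```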

FIP Workflow Improvement

The current process requires a pull request to be opened before the FIP number is assigned, which forces the author to guess the next number the editor will assign, with possible conflicts (e.g. #13), or to use a temporary filename (#9). This is going to get messier as the volume of proposals increases.

I propose that FIP-1 be amended to require a GitHub issue to be opened first, containing the substance of what would become the WIP version of the FIP. The FIP Editor reviews and assigns a FIP number. Then the author proceeds to open a PR for the FIP to put it in WIP status.

Add a VM to Filecoin (EVM, WASM, SES, LLVM, etc)

Full smart contract capabilities will come to Filecoin; they have been in the plans since the beginning. Many people ask about this, so I'm starting this issue to track the conversation. We should submit proper FIPs to add the capabilities.

  • Choice of VM was unclear. The choice of VM is an important question. Originally we wanted to use the EVM. Then, 2 or 3 years ago, it was not clear whether the EVM would become the main standard or whether other VMs (e.g. WASM- or JS-based VMs) would overtake it.
  • Many-VM approach. The approach I have recommended for the last few years is to enable support for the most important VMs in Filecoin (like a hypervisor), starting with the EVM.
  • Start with the EVM. It is very clear now that the EVM has become THE smart contracts standard. There are other very exciting contenders, but I think we should start by adding the EVM. This will connect well with all the Dapps that already use IPFS and Filecoin, as well as the vast majority of NFTs and Oracles.
  • JS and WASM later. Keep an eye on WASM- and JS-based VMs. Systems like Agoric are fleshing out their capabilities and could be very compelling smart contract systems for Filecoin.
  • Formally verified VMs. Also look out for formally verified VMs. There are a lot of compelling benefits in systems that lean towards formal verification, like Tezos with Michelson, and more.

Filecoin HAMT v3 Improvements

I'm opening this issue as the official discussion thread for the HAMT improvement FIP I'm currently drafting. There are several popular outstanding breaking changes, many of which are already implemented in the Go Filecoin HAMT, that would improve the protocol in terms of performance, simplicity, and safety. Since each change is small on its own, I am bundling them all into one FIP to reduce overhead. However, each change can be considered separately, and if there is a strong reason to exclude one of the four, I plan to do so while the FIP is in the draft stage.

  1. HAMT writes already-persisted cache nodes to disk and clears read caches unnecessarily. Issue @austinabell. Golang Fix @Stebalien. Breaks Puts/Gets for HAMT operations, which breaks gas pricing of operations. Does not require migration.
  2. HAMT pointer serialization wastes 3 bytes. Issue @rvagg. Golang Fix @rvagg. Breaks serialization of HAMT pointers and will require migration of all state tree HAMTs.
  3. HAMT node bitfield is not simple and makes canonical block form validation difficult. Issue @rvagg. Golang Fix @rvagg. Breaks serialization of HAMT bitfields/nodes and will require migration of all state tree HAMTs
  4. HAMT Set does not provide any indication of what value, or whether any value, existed for the key in question. This functionality is motivated concretely by safety checks in the miner actor. Issue @anorth. No implementation is up yet, but one appealing proposal is to add an interface method SetIfAbsent which writes the key only if it is not already present, returning a boolean indicating set/no set (see the sketch below).
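
A toy illustration of the SetIfAbsent semantics proposed in item 4, using a plain map in place of the real HAMT:

```go
package main

import "fmt"

// toyHAMT stands in for the real HAMT to illustrate the proposed
// SetIfAbsent semantics: write only if no value exists for the key,
// and report whether the write happened.
type toyHAMT struct{ m map[string][]byte }

func (h *toyHAMT) SetIfAbsent(key string, value []byte) bool {
	if _, found := h.m[key]; found {
		return false // key already present; no write
	}
	h.m[key] = value
	return true
}

func main() {
	h := &toyHAMT{m: map[string][]byte{}}
	fmt.Println(h.SetIfAbsent("k", []byte("v"))) // true: value written
	fmt.Println(h.SetIfAbsent("k", []byte("w"))) // false: already set
}
```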

Meta note: bundling changes into one FIP like this is an experiment that I don't believe has been tried before. Feel free to critique the bundling as well as the HAMT issues in this thread if you see problems.

Remove vesting from PreCommitSector and ConfirmSectorProofsValid

Background
Vesting is processed via State.UnlockVestedFunds, which is called from PreCommitSector, ConfirmSectorProofsValid, WithdrawBalance, and the deadline cron. The original motivation for computing vesting on sector commitment is so that any vested funds could be used for pre-commit deposit or pledge.

The vesting table is quantized to 12-hour increments. Since vesting is processed in the deadline cron every 30 minutes, the calls during sector commitment almost always achieve nothing. The most they could achieve is to accelerate a release by 30 minutes.

Processing reward vesting is quite expensive, as it loads and stores a sizeable array for the vesting table. For high-scale miners committing many sectors per deadline (or even per epoch), this represents a sizeable portion of total gas cost and chain bandwidth consumption.

Proposal
Remove vesting from PreCommitSector and ConfirmSectorProofsValid, leaving it to the deadline cron. If necessary, the miner's owner can trigger vesting via WithdrawBalance (possibly with an argument of zero).

This will reduce the gas consumption of these methods substantially, freeing up chain bandwidth for other messages, reducing gas costs, and/or improving validation speed.

First proposed in filecoin-project/specs-actors#1258

Explicit FIL+ subsidy

This proposal is an idea of mine to reduce coupling between the storage power mechanism and the deal market. Coupling between these two limits design freedom around problems like supporting exponential growth in storage capacity or making deals much more scalable. This proposal is written independently of proposals like Neutron (#119). An analogous idea would apply to supersectors in order to remove linear-in-deals costs from storage maintenance. This proposal could be made either prior to Neutron (simplifying it) or subsequently.


Background

Filecoin network storage growth will eventually be limited by the chain computation costs associated with maintaining storage if a linear amount of state is accessed or retained per sector (where sectors are a fixed size). The Neutron proposal (#119) resolves this for committed-capacity sectors, but sectors with deals still have per-sector (and per-deal) metadata. Per-sector on-chain data includes the sector->deal mapping and per-sector heterogeneous weight, pledge and penalty values. These are all required because verified deals alter a sector’s power as a means of adjusting reward distribution.

Maintaining the facility for heterogeneous sectors constrains freedom for more scalable designs, even though most sectors don’t have deals. The premise that most sectors don’t have deals is not something to rely on, though. In the long term we aim for many more deals and a much greater proportion of committed storage to be in active use.

Goals

This proposal aims to reduce deal market limitations on the scale and efficiency of onboarding and maintaining exponentially larger amounts of capacity.

  • Remove per-sector on-chain information from state
  • Normalise sectors as far as possible, enabling summarised/aggregate accounting with sub-linear state
  • Decouple network security from reward distribution policy

These goals are to be sought within existing product and crypto-economic constraints around sound storage and economic security.

Design ideas

The current storage power mechanism assigns a different power value to equal-size sectors based on the size and duration of Filecoin Plus (FIL+) verified deals. This is a means of incentivising useful storage, delegating the definition of “useful” to an off-chain notary. The incentive for storing verified deals is the increased block reward that may be earned from the increased power of those sectors, despite constant physical storage and infrastructure costs. In essence, the network subsidises useful storage by taxing the other power.

Storage power has two different roles, currently coupled. One is to secure the network through raising the economic cost of some party maintaining a significant fraction of the total power, and the other is to determine the distribution of rewards. The block reward is split between rewarding security and subsidising useful storage. This coupling theoretically means the verified deal power boost reduces the hardware cost to attack the network by a factor of 10, if colluding notaries would bless all of a malicious miner’s storage (this is an impractical attack today, though).

This subsidy could be made much more direct, reducing complexity and coupling between the storage market and individual sectors, and clearly separating security from reward distribution policy.

Uniform sector power

Every sector has uniform power corresponding to its raw committed storage size, regardless of the presence of deals. This removes the concepts of sector quality and quality-adjusted power, and makes all bytes equal in the eyes of the power table. This would remove the DealWeight and VerifiedDealWeight fields from SectorOnChainInfo. Network power and committed storage are now the same concept and need not be tracked separately by the power actor.

Uniform power also means that all similarly-sized sectors committed at the same epoch would require the same initial pledge. Similarly the expected day reward and storage pledge (parameters to a possible future termination fee calculation) depend only on activation epoch. The complicated values maintained to track penalties for replaced CC sectors become unnecessary. Historical pledge/reward values may be maintained just once for the network by the power and reward actors. We currently store each of these numbers in chain state some ~500 times per epoch (@ 50PiB/day growth).

With uniform sector power, the power of groups of sectors may be trivially calculated by multiplication. Window PoSt deadline and partition metadata no longer need to maintain values for live, unproven, faulty and recovering power, but the sector number bitfields remain. Processing or recovering from faults does not require loading each sector’s specific values. The complexity of code and scope for error in these various derived data is much reduced.

This ability to aggregate by multiplying becomes instrumental to Neutron-like ideas around hierarchical storage accounting with deals. Without this, supporting partial faults requires on-chain metadata about the individual sectors (with deals) that comprise a supersector, restoring a linear-in-power network cost.

Note that this proposal does leave the per-sector deal IDs on chain. After this proposal and Neutron, this would be the only piece of per-sector data retained.

A miner’s chance of winning the election at each epoch is proportional to their share of the raw byte power committed to the network and non-faulty. Winning the consensus reward remains a lottery.

Market actor tracks deal space-time

The market actor maintains:

  • a current total of active verified deal space (optionally unverified, too);
  • a short list of “reward periods”, each aggregating a period of, say, 24 hours, and comprising:
    • a table of verified deal space-time totals provided by each miner during the period;
    • a record of the total deal subsidy earned during the period (see below)

As payments are periodically processed for each deal (currently every 100 epochs), the market actor adds the deal’s size multiplied by the update period epochs to the provider’s space-time tally for the current reward period.

After a reward period completes, the ratio between verified deal space-time of each provider gives their share of the deal subsidy to be claimable. A miner can claim a corresponding share of the total deal subsidy earned at any point up until the reward period expires (e.g. 60 days). Upon claiming a deal subsidy, the reward funds are inserted into the miner’s vesting schedule, alongside block rewards.
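
A sketch of this per-period accounting, with hypothetical names and units:

```go
package main

import "fmt"

// rewardPeriod sketches one ~24h accounting window: verified deal
// space-time per provider, plus the subsidy accrued during the period.
type rewardPeriod struct {
	spaceTime    map[string]int64 // provider -> bytes * epochs of verified deals
	totalSubsidy int64            // subsidy accrued to this period (illustrative units)
}

// recordSettlement is called when a deal payment is processed (currently
// every 100 epochs): the deal's size times the elapsed epochs is added to
// the provider's tally for the current period.
func (p *rewardPeriod) recordSettlement(provider string, dealSize, epochs int64) {
	p.spaceTime[provider] += dealSize * epochs
}

// claim returns a provider's share of the period's subsidy: its fraction
// of the period's total verified deal space-time.
func (p *rewardPeriod) claim(provider string) int64 {
	var total int64
	for _, st := range p.spaceTime {
		total += st
	}
	if total == 0 {
		return 0
	}
	return p.totalSubsidy * p.spaceTime[provider] / total
}

func main() {
	p := &rewardPeriod{spaceTime: map[string]int64{}, totalSubsidy: 900}
	p.recordSettlement("f01000", 32, 100)
	p.recordSettlement("f01001", 64, 100)
	fmt.Println(p.claim("f01000"), p.claim("f01001")) // 300 600
}
```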

FIL+ subsidy paid to market actor

At every epoch, the reward actor queries the power actor for the total network power (= storage) and the market actor for the total verified deal space. The total block reward to be paid is then split into the power reward and the verified deal subsidy according to the ratio of storage to 9*deal space. The block reward is paid as usual, and the deal subsidy is sent to the market actor.
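
Under one reading of that ratio, the per-epoch split would look like the following sketch (not the actual reward actor code); the 9x weighting mirrors today's 10x quality multiplier for verified deals (1x base plus 9x boost):

```go
package main

import "fmt"

// splitReward divides the epoch's total block reward between the power
// reward and the verified deal subsidy in the ratio storage : 9*dealSpace.
func splitReward(total, storage, verifiedDealSpace float64) (powerReward, dealSubsidy float64) {
	weighted := 9 * verifiedDealSpace
	dealSubsidy = total * weighted / (storage + weighted)
	return total - dealSubsidy, dealSubsidy
}

func main() {
	// E.g. 10 EiB of network storage, of which 0.5 EiB is verified deal space:
	power, subsidy := splitReward(100, 10, 0.5)
	fmt.Printf("power reward %.1f, deal subsidy %.1f\n", power, subsidy) // 69.0, 31.0
}
```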

Discussion

The primary motivation for this proposal is to remove per-sector account-keeping metadata in order to unlock exponential scaling of storage. It changes reward behaviour in a couple of ways that we must verify as being beneficial.

  • Earning the verified deal subsidy doesn’t depend on winning blocks. This is a deviation from the current protocol that requires a miner to win a block (with a Winning PoSt) in order to claim rewards. This is more reliable for smaller parties that might win only occasional blocks, and might suffer from less robust blockchain node infrastructure and connectivity.
  • The market actor does not track temporary faults in sectors: a provider is eligible for the full client payment if the corresponding sector faults, so long as it is recovered soon thereafter. The power table does track temporary faults, suspending rewards until a sector is recovered (necessary for economic arguments about security). This change moves the deal subsidy from being subject to suspension during faults to the same more tolerant treatment of deals. Verified deals no longer subvert the economic security arguments, but it’s not obvious that this makes the more tolerant treatment acceptable.
    If not, we might need to communicate faults to the market actor, making them a bit more expensive and exposing the market actor to a concept of sectors it’s currently mostly abstracted from.
  • We probably need a separate pledge for the deal subsidy, so that collateral remains proportional to future earnings. This pledge would be about (verified) deals, rather than sectors, because this proposal separates those reward streams. This pledge would likely be owned by the market actor and forfeit when a deal defaults.
  • A potentially tricky situation to analyze is what happens when the power reward is significantly less than the deal subsidy. Does it create any unwanted incentives for the block producing miner to deviate from the protocol (assuming they don’t have many deals)?

Discussion: Rebalance Gas Charges 1

Due to optimizations and evolving software, I propose changing the gas charged for the following operations:

| Call Name | Old Gas | New Gas | Change |
|---|---|---|---|
| OnVerifyPostBase | 374296768 * | 117680921 | -68% |
| OnVerifyPostPerSector | 42819 * | 43780 | +2% |
| OnIpldGet | 75242 | 114617 | +52% |
| OnIpldPutBase | 84070 | 353640 | +320% |
| StorageGasMultiplier | 1000 | 1300 | +30% |

* OnVerifyPost currently has an arbitrary 2x discount applied; the Old Gas numbers include this discount, the New Gas numbers do not.

The above changes can be divided into four categories:

  • Reductions in the cost of verifying proofs due to improvements to proof-verification times.
  • Increase in the cost of accessing storage due to the increased size of the state tree, and thus the index size of the storage, making it slower. On the Lotus side, we are working on improving this; if we see results, the cost of accessing storage might decrease in the next round of gas rebalancing.
  • Increase in the flat cost of saving objects: we missed the cost of flushing newly created objects from memory to disk when pricing OnIpldPut, and the increase is due to the inclusion of that cost in OnIpldPut.
  • Increase in per-byte storage cost. The size of the state tree, and the storage required for running nodes, grow at a very fast pace. We hope that the increase to storage cost will incentivise optimizations like compacting the AllocatedSectors bitfield in miners or using ID addresses in messages.

Discussion about extending the `maximum sector lifetime`

Problem

The maximum sector lifetime is 540 days for now, and there is no way to extend a deal's lifetime.
It is a little early to discuss the implementation, but a design for it is needed.

Solutions

  • A: Change nothing. The client needs to resend the data and the miner needs to re-seal the sector. It is a waste of resources, and the user experience is not good.
  • B: Support the feature of extending a deal's lifetime someday, and plan it in the roadmap.
  • C: Wait and see. No need to design it now.
  • D: Hope to hear your ideas/thoughts.

Idea: extend Filecoin Plus/Verified clients to storage providers...

This is a very long-term idea, and shouldn't be considered until after FIL+ (formerly verified clients) is deployed and working smoothly (see FIP-3 for more on that). Also, if this has been proposed elsewhere, just close the issue; I just figured I'd jot down the idea quickly.

Anyway, here's the thought:
It could be interesting to (one day) extend the Filecoin Plus program to also include verification of miners, instead of just clients (basically, enabling the protocol to incorporate and compute on just a little bit of trust on the other side of the storage market as well). In this model, a verified miner would be audited occasionally (e.g. to show that they have some sort of accountable legal entity, and have the right kinds of redundancies and data-loss prevention). The incentive to do this would be that they don’t have to submit proofs (allowing them to save energy and decrease their operating costs). As I see it, one big reason for PoSts is that the network is permissionless, so all accountability has to be derisked with collateral and computability.

It's a nontrivial growth in the scope of the project, but it seems like a possible way to increase scalability, as there'd be less data on the chain if there are fewer proofs.

Scalable storage onboarding and maintenance (project Neutron)

Some people from the Filecoin team have been working on the next iteration of scalable storage growth and capacity for the Filecoin network. The recent Hyperdrive network upgrade unlocked a big multiple of capacity, but we expect mining demand to rise over time to meet this and again be limited by blockchain throughput. In the next iteration of improvements we aim to solve this problem for the long term, enabling exponential network growth. This effort is known as project Neutron (after the density of neutron stars).

We're still fleshing out many details ahead of a full FIP, but I'm filing this issue to show where we're headed and as a reference for other efforts. We'll publish more extensive design documents once we're more confident in the approach.

@nicola @Kubuxu @nikkolasg


Background

The Filecoin network’s capacity to onboard new storage and to maintain proofs of committed storage are limited by blockchain transaction processing throughput. The recent Hyperdrive network upgrade raised onboarding capacity to about 500-1000 PiB/day, but we expect this capacity to become saturated.

As onboarding rates increase, the fixed amount of network throughput consumed by maintaining the proofs of already-committed storage will increase, eventually toward a significant cost for the network.

Problem detail

Validation of the Filecoin blockchain is subject to a fixed amount of computational work per epoch (including state access), enforced as the block gas limit. Many parts of the application logic for onboarding and maintaining storage incur a constant computational and/or state cost per sector. This results in blockchain validation costs that are linear in the rate of growth of storage, and in the total amount of storage committed.

Linearities exist in:

  • Pre-committing and proving new sectors (both state access and proof verification)
  • Proving Window PoSt of all storage daily (state access, proof validation off-chain)
  • Detecting and accounting for faults and recoveries
  • Cron processing checking for missed proofs, expiring sectors, and other housekeeping

We wish to remove or reduce all such linear costs from the blockchain validation process in order to remove limitations on rate of growth, now and in the long future when power and growth are significantly (exponentially) higher. SNARKPack goes a long way toward addressing the linear cost of PoRep and PoSt proof verification. However, there remain linear costs associated with providing the public inputs for each sector’s proof.

Goals

Our goal is to enable arbitrary amounts of storage to be committed and maintained by miners within a fixed network transaction throughput.

This means redesigning storage onboarding and maintenance state and processes to remove linear per-sector costs, or dramatically reduce constants below practical needs. We want to do this while maintaining security, micro- and macro-economic attractiveness, discoverable and verifiable information about deals, and reasonable miner operational requirements.

This effort is seeking a solution that is in reach for implementation in the next 3-6 months (which means relying on PoRep and ZK proof technologies that already exist today), and that is good enough that we won’t have to re-solve the problem within a few years.

Of course there exist other, orthogonal approaches to the general problem of scaling, but these are generally longer and harder propositions (e.g. sharding, layer 2 state).

Out of scope

This proposal does not attempt to solve exponential growth in deals, except by making it no harder to solve that problem later. We think this sequencing is reasonable because (a) deals are in practice rare at present, and (b) off-chain aggregation into whole-sector-size deals mitigates costs in the near term. We expect exponential deal growth to be a challenge to address in 2022.

Key ideas

The premise behind this proposal is that we cannot store or access a fixed-size piece of state for each 32 or 64 GiB sector of storage, either while onboarding or maintaining storage. Specifically, we cannot store or access a replica commitment (CommR) per sector, nor mutate per-partition state when accounting Window PoSt. CommR in aggregate today accounts for over half of the state tree at a single epoch, and Window PoSt partition state manipulation dominates the cost of maintenance.

The key design idea is to maintain largely the same data and processes we have today, but applied to an arbitrary number of sectors as a unit. The proposal will redesign state, proofs and algorithms to enable a miner to commit to and maintain units of storage larger than one sector, with cost that is logarithmic or better in the amount of storage. Thus, with a fixed chain processing capacity, the unit of accounting and proof can increase in size over time to support unbounded storage growth and capacity. We will assume that miners will increase their unit of commitment if blockchain transaction throughput is near capacity.

Reduce congestion via Aggregating ProveCommits via Inner Product Pairing

Problem

(same as #49 )

ProveCommits and PreCommits are creating network congestion, leading to a high base fee and thus high costs for SubmitWindowPoSt and PublishStorageDeals.

(however, differently from #49)

ProveCommits messages and gas used scale linearly with network growth.

Proposed solution

(similar to #49)

Processing multiple ProveCommits at the same time can drastically reduce the gas used per sector, leading to a much lower gas usage from ProveCommits. We propose to add a new method: "ProveCommitAggregated".

(differently from #49)

The ProveCommitAggregated method allows for gas used in the network to scale sub-linearly with the growth of the network. A miner can submit a single short proof for many ProveCommits and the gas used for verification is sublinear in the ProveCommits aggregated. In other words, miners could be making a single ProveCommitAggregated transaction per day, instead of one per sector or one per batch of sectors.

There are ~6M individual SNARKs published on chain per day; the largest miner publishes ~600k of them. With this solution, at least in theory, we would need a single SNARK per miner, i.e. ~700 per day. If we assume that miners only batch every 1,000 proofs on average, the SNARKs published per day would be ~6,000.

There are two parts that can be amortized:

  • State operations: several state reads and writes can be batched - done once per ProveCommitBatched instead of once per sector (similar improvements done in #25 )
  • Aggregation: we currently batch verify 10 SNARKs in a single ProveCommit; in this proposal, the miner aggregates a large number of ProveCommits in a single ProveCommitAggregated message.

Context on Groth16 aggregation

The Protocol Labs team in collaboration with external researchers and engineers has improved the performances of the rust implementation of the IPP protocol (see ripp). The inner product pairing protocol for aggregating groth16 proofs has been described in this paper.

In high level, the idea is the following: given X Groth16 proofs, we can generate a single proof that these were correctly aggregated in a single proof.

Our preliminary result show that a prover can aggregate up to ~65,000 SNARKs (6,500 ProveCommits) in a size of ~78kB and with verification time of ~150ms.

Note that this type of aggregation sits on top of the existing SNARKs we already produce. In other words, there is no need for a new trusted setup.

Comparison with #49 (Batching ProveCommits)

The size of an aggregated proof grows logarithmically in the number of proofs aggregated, unlike batching, where the proof size scales linearly.

In other words, proposal #49 (ProveCommitBatched) limits the number of proofs that can be combined, while ProveCommitAggregatedIPP does not. This opens up the possibility for miners to submit a single daily proof of new storage being added.

Outline

  • Implement and audit the IPP over Groth16
  • Implement ProveCommitAggregatedIPP, which allows miners to submit a single proof for multiple PoReps (a possible parameter shape is sketched below)
  • Disable ProveCommit
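
For concreteness, here is one possible parameter shape for the new method, written in the style of the existing actor structs. This is an assumption for discussion, not a settled design:

type ProveCommitAggregateParams struct {
	SectorNumbers  bitfield.BitField // sectors whose PoReps are covered by the aggregate
	AggregateProof []byte            // one IPP proof aggregating the sectors' Groth16 proofs
}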

With this mechanism, miners should always prefer to aggregate multiple proofs together, since doing so would substantially reduce their costs.

Discussion

This issue is intended for discussion. There are a number of details to work out before drafting a FIP.

Aggregation parameters

This is a test that aggregates 65,536 SNARKs (~6,500 PoReps) with a proof size of 78kB and a verification time of 197ms:

Proof aggregation finished in 41685ms
65536 proofs, each having 328 public inputs...
Verification aggregated finished in 197ms (Proof Size: 78336 bytes)

Open questions:

  • What is the range of possible optimizations in state operations and how much are we expecting to get?
  • What is the expected saving for such a change?

Note that there is a separate line of investigation proposing ProveCommitAggregated with Halo instead of IPP; however, IPP appears likely to be production-ready sooner than Halo.

Support non-deal data in sectors (off-chain deals part 1)

Problem

Currently, Filecoin sectors can only store deal data referenced by on-chain deals. CC sectors and non-deal areas of sectors must store null (\0) bytes to be verifiable on chain. This limitation comes from how the UnsealedCID (CommD) is computed in the miner actor.

Unfortunately, publishing deals is a very expensive on-chain operation, especially for smaller pieces, where this cost can be the main part of the storage fee. Off-chain deal support and alternative markets (e.g. a more scalable storage market on ETH 2.0 with data stored on Filecoin) seem to be good solutions to this problem, at least for non-FIL+ deals.

This proposal is only the first, but a major, part of what needs to be done to support off-chain deals. It's likely that some off-chain deal protocols will require additional actor methods (for example, a method to check whether a sector was terminated early, for settling payment channels).

Current state

Currently, when precommitting sectors, a number of DealIDs can be specified with SectorPreCommitInfo:

// Information provided by a miner when pre-committing a sector.
type SectorPreCommitInfo struct {
	SealProof       abi.RegisteredSealProof
	SectorNumber    abi.SectorNumber
	SealedCID       cid.Cid `checked:"true"`
	SealRandEpoch   abi.ChainEpoch
	DealIDs         []abi.DealID    // ****
	Expiration      abi.ChainEpoch
	ReplaceCapacity bool
	ReplaceSectorDeadline  uint64
	ReplaceSectorPartition uint64
	ReplaceSectorNumber    abi.SectorNumber
}

This information is then stored in the miner actor's state and used to compute the UnsealedCID when verifying the PoRep, by turning the specified DealIDs into a []abi.PieceInfo and calling the ComputeUnsealedSectorCID syscall through the storage market actor.

type abi.PieceInfo struct {
	Size     abi.PaddedPieceSize
	PieceCID cid.Cid
}
ComputeUnsealedSectorCID(reg abi.RegisteredSealProof, pieces []abi.PieceInfo) (cid.Cid, error)

Proposed solution

We can change, or create a new version of, the SectorPreCommitInfo miner actor struct, changing the DealIDs array to a new array that allows specifying non-deal PieceCIDs:

+type SectorPieceInfo struct{
+	PieceCID  *cid.Cid `checked:"true"` // CommP
+	PieceSize abi.PaddedPieceSize
+
+	DealID abi.DealID
+}

// Information provided by a miner when pre-committing a sector.
type SectorPreCommitInfo struct {
	SealProof       abi.RegisteredSealProof
	SectorNumber    abi.SectorNumber
	SealedCID       cid.Cid `checked:"true"` // CommR
	SealRandEpoch   abi.ChainEpoch
-	DealIDs         []abi.DealID
+	Pieces          []SectorPieceInfo

	Expiration      abi.ChainEpoch
	ReplaceCapacity bool // Whether to replace a "committed capacity" no-deal sector (requires non-empty DealIDs)
	// The committed capacity sector to replace, and its deadline/partition location
	ReplaceSectorDeadline  uint64
	ReplaceSectorPartition uint64
	ReplaceSectorNumber    abi.SectorNumber
}

When computing UnsealedCID for PoRep verification, []SectorPieceInfo would be turned into []abi.PieceInfo for the ComputeUnsealedSectorCID actor syscall as follows:

  • If PieceCID is null, assert that PieceSize==0, then get abi.PieceInfo for the referenced DealID from the market actor
  • If PieceCID is not null, assert that DealID==0, check the multicodec/multihash the same way it's checked in the storage market actor, then create abi.PieceInfo using specified PieceCID/PieceSize
  • Process all entries in sequence, keeping order when creating []abi.PieceInfo to allow interleaving non-deal and deal data

(It's possible to save one byte per entry by collapsing PieceSize/DealID into a single uint64 value (DealIdOrPieceSize), interpreted based on whether PieceCID is null.)
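
A minimal sketch of that conversion, where getDealPieceInfo (a market-actor lookup) and checkPieceCID (the market actor's multicodec/multihash checks) are hypothetical helpers:

func sectorPiecesToPieceInfos(pieces []SectorPieceInfo) ([]abi.PieceInfo, error) {
	out := make([]abi.PieceInfo, 0, len(pieces))
	for _, p := range pieces {
		if p.PieceCID == nil {
			// Deal-backed entry: PieceSize must be zero; resolve via the market actor.
			if p.PieceSize != 0 {
				return nil, xerrors.New("PieceSize must be 0 when PieceCID is null")
			}
			pi, err := getDealPieceInfo(p.DealID)
			if err != nil {
				return nil, err
			}
			out = append(out, pi)
		} else {
			// Non-deal entry: DealID must be zero; validate the CID format.
			if p.DealID != 0 {
				return nil, xerrors.New("DealID must be 0 when PieceCID is set")
			}
			if err := checkPieceCID(*p.PieceCID); err != nil {
				return nil, err
			}
			out = append(out, abi.PieceInfo{Size: p.PieceSize, PieceCID: *p.PieceCID})
		}
	}
	// Entries are processed in order, so deal and non-deal data may interleave.
	return out, nil
}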

Discussion

State migration

Depending on implementation details, this proposal may involve a relatively major state migration. We should look into ways of limiting that.

Added overhead for deal data

Publishing storage deals on-chain already has multiple kilobytes of read/write overhead. Each abi.DealID entry is 5B (1B CBOR header, 4B of integer data). Depending on implementation, a deal-bearing SectorPieceInfo entry will be 7 or 8 bytes, which is a negligible difference.

Related FIPs

  • FIP-0008 (Add miner batched sector pre-commit method) proposes the creation of a new precommit method. It could also be used to introduce changes to SectorPreCommitInfo without breaking the old method.
  • FIP-0007 (h/amt-v3) requires a migration of all HAMT data. Since this proposal also likely requires migrating at least the miner PreCommittedSectors HAMTs, it may be a good idea to bundle those migrations into a single state upgrade.

ChangeOwnerAddress support MultiSig Address

We all know the owner address holds the key to a miner. Even though the old owner key can be deleted after changing the owner address, whenever we withdraw FIL, or must keep the owner key in Lotus for some other reason, it remains dangerous. Allowing the owner address to be a multisig would mitigate this risk.

FIP proposal: Allow storage providers to set *negative* prices for verified storage deals

The mission of Filecoin is to create a decentralized, efficient, and robust foundation for humanity’s information. As such, one of the core goals of Filecoin is to create strong incentives for capacity committed to the network to be utilized in storage deals for valuable data. To achieve that, Filecoin clients and storage providers should be able to propose and accept negatively priced deals, i.e. deals where the storage provider pays the client to store their Filecoin+ data.

Right now, demand from storage providers for verified storage deals (made by clients with FIL+ DataCap) is very high, since verified deals confer a 10x multiplier on block rewards proportional to the verified deal size (which helps storage providers quickly achieve and maximize profitability). However, access to these verified deals is constrained by the number of new storage clients onboarding data onto the network. To compete for this high-value resource, miners should be able to offer not just FREE storage and retrieval for verified deals, but NEGATIVELY PRICED storage/retrieval deals.

Negatively priced deals (that share the 10x multiplier returns from FIL+ data with the storage client) will help attract new high-value clients to the Filecoin market, thereby increasing the network's utility and the overall number of FIL+ deals available. As @jbenet mentioned in his EthCC talk, this is a super exciting opportunity that is uniquely available in Filecoin. Instead of having to build these auction/incentive systems outside the deal-making structures of Filecoin, we should simply encode the ability for storage providers to set a negative ask price.

Implement a tie breaker for forks of equal weight

Problem

The tie-breaker described in the EC section of the spec has never been implemented. Although not critical, it would help miners converge more easily towards a common chain: when two forks of equal weight occur, all miners that have received both forks would mine on the same one. Without a tie-breaker, they mine on either fork (effectively at random) and may extend the forks for longer.

Proposed solution

For two tipsets of the same weight, we choose the one with the smallest election proof. If two tipsets of equal weight have the same minimum election proof, the miner compares the next-smallest election proof in each tipset (and selects the tipset with the smaller one). This continues until one tipset is selected.
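
A minimal sketch of this rule in Go, assuming each block's ElectionProof is available as raw bytes (uses "bytes" and "sort" from the standard library):

// tieBreak deterministically orders two equal-weight tipsets by their
// election proofs. Returns <0 if a should win, >0 if b should win.
func tieBreak(a, b [][]byte) int {
	sorted := func(ps [][]byte) [][]byte {
		out := append([][]byte{}, ps...)
		sort.Slice(out, func(i, j int) bool { return bytes.Compare(out[i], out[j]) < 0 })
		return out
	}
	sa, sb := sorted(a), sorted(b)
	for i := 0; i < len(sa) && i < len(sb); i++ {
		if c := bytes.Compare(sa[i], sb[i]); c != 0 {
			return c // tipset holding the smaller proof at the first difference wins
		}
	}
	// All compared proofs equal: prefer the tipset with more blocks
	// (an assumption -- the draft FIP may specify this case differently).
	return len(sb) - len(sa)
}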

Other solutions

Any solution that is both deterministic and non-biasable (i.e., an adversary cannot make their chain more likely to be selected by the chain selection algorithm in the case of forks of equal weight) may work.
The ElectionProof is unbiasable and hence a good candidate for the tie-breaker. Any deterministic rule other than the minimum would work just as well.

Draft FIP here: https://docs.google.com/document/d/1VHm_0rh9XCcjJsrnpbLtbFpLNRmOgkEoNTNlt0NHiOQ/edit?usp=sharing

Pausable mining

As the holiday season approaches, I would like to propose a cool new feature for Filecoin: Pausable Mining.

Why do we need it:

  • Reduce stress on miners, allowing them more time with loved ones instead of staring at a PC watching WindowPoSt and typing mpool replace.
  • Reduce traffic, and therefore reduce the base fee.
  • Keep small miners alive a little longer until the base fee cools down.
  • Allow upgrading systems, relocating miners, and recovering from hardware failures without time pressure.

So how does it work:

A miner sends a message to the chain announcing its break time. A pause is a fixed period (1-2 weeks, or a number of epochs). During the pause, the miner still needs to participate in the network, but less strictly (one WindowPoSt every 72 hours, with penalties only after the second missed WindowPoSt). A paused miner cannot mine blocks, accept deals, or commit CC sectors; however, retrieval deals are still allowed.

FIP Proposal: Ability to extend life of a sector with deals from lotus client

Is your feature request related to a problem? Please describe.
Currently it is not possible to extend the life of a sector that has deals in it from the client side.

Describe the solution you'd like
Extend the expiration of a sector containing deals by the number of weeks needed.
On the client side, it would be great to be able to mark a deal as active and request a reseal with a new expiration date.
Resealing it as a new sector at the same miner would be fine: the data gets copied into a new sector and sealed again, with new collateral and the rest of the process. This is probably the easiest way to do it, as everything is available at the miner.

Describe alternatives you've considered
I considered copying the data from the old sector back to my client and requesting a new deal.


FIP Proposal: Almgren-Chriss additive formula for base fee update

There is a simple price manipulation strategy exploiting round trips to attack EIP-1559, which I have discussed extensively in the Ethereum community but which is relevant to Filecoin as well. You can read more about this here; also see the ideas near the end of this thread for more details. The minimal remedy is to change the multiplicative update formula into an additive one. Please let me know if anyone is interested, and I can provide more links and resources.
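
For concreteness, a minimal sketch of the two update rules with illustrative parameters: the multiplicative rule scales its step with the fee level itself, while the additive rule takes steps independent of it, which is the remedy proposed here. The 1/8 learning rate matches the current rule; eta is a free parameter of the additive scheme.

func nextBaseFeeMultiplicative(baseFee, gasUsed, target float64) float64 {
	return baseFee * (1 + (gasUsed-target)/target/8) // step scales with the fee itself
}

func nextBaseFeeAdditive(baseFee, gasUsed, target, eta float64) float64 {
	return baseFee + eta*(gasUsed-target)/target // fixed-size step, independent of fee level
}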

FIP 0011 Bug

Persisting a slack conversation here

Hey @Kubuxu, I think FIP0011 has a mistake. It claims that the current consensus fault penalty is 5 * BlockReward here, but it's actually just BlockReward from here, as this function both multiplies by 5 and divides by 5.
If I understand the motivation behind FIP0011, I think this means the consensus fault reporting reward should become BlockReward / 20, the current maximum possible. Let me know if you're in agreement on all this. If so, would you mind amending the FIP to fix this?

FYI @placer14: our resolving this blocks your implementation PR.

Add a batched PreCommitSectors method

The miner PreCommitSector method only supports committing a single sector at a time. It's one of the two highest-frequency methods observed on the chain at present (the other being ProveCommitSector). High-growth miners commit sectors at rates exceeding one per epoch. It's also a relatively expensive method, with multiple internal sends, and loads and stores of state including:

  • the Storage Power actor's power total (read)
  • the Storage Market actor's deal proposal AMT (read)
  • the Reward actor's totals (read)
  • the AllocatedSectors bitfield (read and modify)
  • the PrecommittedSectors HAMT (read and modify)
  • the Sectors AMT (read)
  • the PreCommittedSectorsExpiry AMT (read and modify)
  • the Storage Power actor's pledge total (read and modify)

A PreCommitSectorsBatch method has the potential to amortize some of these costs across multiple sectors. If miner operators implemented a relatively short batch aggregation period (a few epochs), the number of invocations could be reduced significantly, and some of the state manipulations above reduced in proportion.
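
One plausible shape for the batched method's parameters, mirroring the existing per-sector struct (an assumption, not a settled design); the point is that the shared state above is loaded and stored once per message rather than once per sector:

type PreCommitSectorBatchParams struct {
	Sectors []SectorPreCommitInfo // one entry per sector being pre-committed
}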

FIP Proposal: A proposal of Proof-of-Quality

Everyone: for real and effective data verification on Filecoin, our team has proposed a solution that can effectively solve the data verification problem and help the Filecoin ecosystem's applications. Link: filecoin-project/community#167

A proposal of Proof-of-Quality

by Extend Labs

Abstract: To promote the storage of meaningful data, Extend Labs suggests the Filecoin community develop a method of Proof-of-Quality (PoQ) and reward storage miners who pass the proof. The basic process of the proposed PoQ is as follows. In the training phase, first, the storage miners publish features of their data under differential privacy; second, the model miners train the PoQ model on the published features using distributed machine learning. In the testing phase, the storage miners load the latest PoQ model, submit the proof of quality, and receive the verified clients reward.

1. Introduction


On the existing Filecoin system, most of the stored data is randomly generated rather than meaningful, which makes it difficult for the community to realize its ambition. How to promote meaningful data storage and applications has become a key issue for the Filecoin community. To this end, the Filecoin community has launched the “verified clients” reward.

The key to the success of the “verified clients” reward is building an efficient, reliable, and adaptable way to verify the quality of data. In this context, Extend Labs suggests the community develop “Proof-of-Quality” (PoQ). PoQ is a proof method for verifying the quality of the content of the data stored by miners, and it is also the only way for storage miners to obtain the “verified clients” reward.

The main challenges of PoQ include: 1) meaningful data storage imposes higher requirements on data privacy, which means PoQ methods usually cannot access the data directly; 2) the pattern of meaningful data changes dynamically, which means PoQ methods should be able to adjust alongside community development; 3) due to the huge attraction of the “verified clients” reward, storage miners have a strong incentive to forge meaningful data, which is a core issue that must be considered in PoQ design.

Therefore, an effective PoQ should ensure data privacy, adjust with the development of the community, and prevent malicious attacks by storage miners. Based on the above considerations, we propose a PoQ scheme built on federated computing, leveraging existing privacy-preserving computation and distributed machine learning.

2. A Proof-of-Quality Solution based on Federated Learning


The proposed PoQ method has two stages: PoQ model training and PoQ model testing. In the training phase, federated learning is used to train the PoQ model, and miners participating in model training receive model training rewards; in the testing phase, each storage miner generates a PoQ certificate for its stored data and obtains the “verified clients” reward.

2.1 Training of PoQ model

To train the PoQ model efficiently, we employ differential privacy to publish the features of the data and use distributed machine learning methods to train the PoQ model, under the federated computing framework [1]. The framework is shown in Figure 1. It includes three components:

  • Differential privacy data publishing module: in this module, the features of the data are calculated, desensitized, and finally published under differential privacy. Depending on actual needs, the data features can be histograms of the data’s n-gram segments. A possible solution for this module is the "differentially private histogram publication" method [2], a widely used histogram publication method with guarantees.

  • The local model training module: in this module, a specific part of the PoQ model is trained locally, on a batch of samples or a subspace of the features of the differentially-private published data. The locally trained models are reported to the global model training miners.

  • The global model training module: in this module, we obtain the global PoQ model by merging and tuning the local models. The split strategy between local and global models, and the merging strategy for the global model, can follow "Scaling distributed machine learning with the parameter server", a widely used method in distributed machine learning [3].

In practice, the differential privacy data publishing module is deployed and run by each storage miner, the local model training module is deployed on the storage miner or the model training miner, and the global model training module is run by several (3, 5, or more) randomly selected model training miners to avoid Sybil attacks.

[Figure 1: The federated PoQ model training framework]
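
Purely as illustration (the proposal does not fix a merge strategy), a toy sketch of the global merge step, averaging the weight vectors reported by local model-training miners in parameter-server style:

// mergeLocalModels averages equally-sized local weight vectors into a
// global model; real systems weight by sample counts and split by layer.
func mergeLocalModels(locals [][]float64) []float64 {
	if len(locals) == 0 {
		return nil
	}
	global := make([]float64, len(locals[0]))
	for _, w := range locals {
		for i, v := range w {
			global[i] += v
		}
	}
	for i := range global {
		global[i] /= float64(len(locals))
	}
	return global
}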

2.2 Generation of PoQ

The generation of PoQ is shown in Figure 2. As in the training process, this phase also guarantees data privacy. It has two modules:

  • Differential privacy data publishing module: this module is the same as in the training phase.

  • PoQ generation module: in this module, the storage miner first obtains the latest PoQ model from the model training miners; second, it estimates the PoQ on the features published under differential privacy; finally, it submits the PoQ score to get the “verified clients” reward.

[Figure 2: The PoQ generation process]

2.3 Training Samples Collection

Training samples with accurate labels are key to the success of PoQ. Possible ways to collect training samples include:

  • In the initial stage, samples can be collected according to rules for meaningful data. The rules are based on community consensus.

  • In the update stage, samples can be collected according to meaningful actions on the data, such as retrievals, updates, and so on.

2.4 Rewards

Model training reward: to encourage miners to participate in PoQ model training, PoQ model training rewards are given to miner nodes participating in local and global model training. These rewards are related to the amount of data involved in the training.

Verified clients reward: storage miners that pass the PoQ get the verified clients reward. The score returned by PoQ can feed into quality-adjusted power.

3. Conclusion


In summary, we propose to build a PoQ method based on the latest advances in privacy-preserving computation and machine learning. The proposal features:

  • Using both the rules provided by the community and the meaningful behaviors performed by users as the criteria for meaningful data.

  • Devising a PoQ model training method with a privacy guarantee, characterized by using differential privacy to publish the features of the data, and training the model with distributed, large-scale machine learning methods.

  • Using the trained PoQ model to verify the quality of the data: storage miners use the latest PoQ model to verify the quality of their data and get the verified clients reward.

References

  1. Yang Q, Liu Y, Chen T, et al. Federated machine learning: Concept and applications[J]. ACM Transactions on Intelligent Systems and Technology (TIST), 2019, 10(2): 1-19.

  2. Xu J, Zhang Z, Xiao X, et al. Differentially private histogram publication[J]. The VLDB Journal, 2013, 22(6): 797-822.

  3. Li M, Andersen D G, Park J W, et al. Scaling distributed machine learning with the parameter server[C]//OSDI. 2014: 583-598.

FIP template

fip: <to be assigned>
title: <FIP title>
author: <a list of the author's or authors' name(s) and/or username(s), or name(s) and email(s), e.g. (use with the parentheses or triangular brackets): FirstName LastName (@GitHubUsername), FirstName LastName <[email protected]>, FirstName (@GitHubUsername) and GitHubUsername (@GitHubUsername)>
discussions-to: <URL>
status: Draft
type: <Standards Track | Informational | Meta>
category (*only required for Standards Track): <Core, Networking, Interface, or FRC>
created: <date created on, in ISO 8601 (yyyy-mm-dd) format>
requires (*optional): <FIP number(s)>
replaces (*optional): <FIP number(s)>

This is the suggested template for new FIPs.

Note that an FIP number will be assigned by an editor. When opening a pull request to submit your FIP, please use an abbreviated title in the filename, fip-draft_title_abbrev.md.

The title should be 44 characters or less.

Simple Summary

"If you can't explain it simply, you don't understand it well enough." Provide a simplified and layman-accessible explanation of the FIP.

Abstract

A short (~200 word) description of the technical issue being addressed.

Motivation

The motivation is critical for FIPs that want to change the Filecoin protocol. It should clearly explain why the existing protocol specification is inadequate to address the problem that the FIP solves. FIP submissions without sufficient motivation may be rejected outright.

Specification

The technical specification should describe the syntax and semantics of any new feature. The specification should be detailed enough to allow competing, interoperable implementations for any of the current Filecoin platforms (lotus, go-filecoin, Forest, Fuhon).

Rationale

The rationale fleshes out the specification by describing what motivated the design and why particular design decisions were made. It should describe alternate designs that were considered and related work, e.g. how the feature is supported in other languages. The rationale may also provide evidence of consensus within the community, and should discuss important objections or concerns raised during discussion.

Backwards Compatibility

All FIPs that introduce backwards incompatibilities must include a section describing these incompatibilities and their severity. The FIP must explain how the author proposes to deal with these incompatibilities. FIP submissions without a sufficient backwards compatibility treatise may be rejected outright.

Test Cases

Test cases for an implementation are mandatory for FIPs that affect consensus. Other FIPs can choose to include links to test cases if applicable.

Implementation

The implementations must be completed before any FIP is given the status "Final", but they need not be completed before the FIP is accepted. While there is merit to the approach of reaching consensus on the specification and rationale before writing code, the principle of "rough consensus and running code" is still useful when it comes to resolving many discussions of API details.

Security Considerations

All FIPs must contain a section that discusses the security implications/considerations relevant to the proposed change. Include information that might be important for security discussions, surface risks, and be usable throughout the life cycle of the proposal: e.g., security-relevant design decisions, concerns, important discussions, implementation-specific guidance and pitfalls, and an outline of threats and risks and how they are being addressed. FIP submissions missing the "Security Considerations" section will be rejected. An FIP cannot proceed to status "Final" without a Security Considerations discussion deemed sufficient by the reviewers.

References

This template was derived heavily from Ethereum’s EIP template.

Copyright

Copyright and related rights waived via CC0.

FIP Proposal: Upgrade regular deal to verified deal

Simple Summary

Allow a miner to upgrade a deal to a verified deal after the piece data has been sealed and the sector has landed on chain.

Abstract

Today, if a client/miner wants to take advantage of the Filecoin Plus program, the process is: the client sends a DataCap request to a notary, with supplemental information provided; the notary approves the request and assigns DataCap to the client; the client then starts the deal process by sending proposal(s) with the VerifiedDeal field set to true; and the miner goes through the rest of the deal process. As part of Filecoin Plus, miners enjoy higher quality-adjusted power and clients enjoy lower storage prices. A de facto binding has thus been imposed between the Filecoin Plus onboarding process and the deal-seal process, by making successful onboarding a precondition of deal-sealing.

This proposal aims to break this de facto dependency without changing or compromising the Filecoin Plus client onboarding process. It simply introduces an additional deal-upgrade message that can be submitted to the chain to either (1) turn a previous regular deal into a verified deal, or (2) supplement deal information for a CC sector that sealed piece data with meaningful content.

Change Motivation

1. Given the benefits of Filecoin Plus to both clients and miners, clients might tend to apply for DataCap before storing their data on the chain. The DataCap approval process normally takes days or even weeks, and it's possible that eventually only partial or even no DataCap is granted; it can therefore be a barrier or a delaying factor for clients landing their data on the network. Decoupling Filecoin Plus from deal proposals will encourage clients to store their data in Filecoin first and apply for DataCap in parallel or later.

2. Although network storage power is over 6 EiB, the majority of it is Committed Capacity with zero data in it. Miners have no extra economic motivation to use the existing "CC upgrade" to turn these into sectors with deals. With a "post verification" mechanism in place, deals can be expected to increase sharply, and it could become economically efficient to leverage existing committed capacity with zero pieces via CC upgrade, increasing the share of piece data relative to overall network storage power. (For reference, as of Jun 4, of 768,062 precommit messages in total, only 2 were "ReplaceCapacity".)

3. For CC sectors with non-zero data pieces (assuming the FIP "Pack arbitrary data in CC sectors and be able to upgrade them to verified data later" is implemented), no re-seal will be needed to upgrade the CC sector to a "deal sector".

Design Rationale

Two scenarios need to be considered in the design:
1. A new "upgrade deal" proposal needs to be defined with the verified field set to true, and a new "UpgradeStorageDeal" method needs to be added to the Market Actor, to link the existing original deal to the sector already committed to the network and to update the quality-adjusted power.
2. With the other FIP, "Pack arbitrary data in CC sectors and be able to upgrade them to verified data later", implemented, a complementary proposal needs to be defined for the CC sector, and a "ComplementStorageDeal" method needs to be added to the Market Actor to link the complementary deal to the existing CC sector.

Backwards Compatibility

There will be no impact on existing sectors/deals committed to the chain.

Security Considerations

The seal process remains as it is; this proposal provides no way to bring sectors onto the network faster.
The DataCap approval process is unchanged (or could even be strengthened, with the notary retrieving the piece data before granting DataCap), so there is no shortcut for clients/miners to gain quality-adjusted power in an inauthentic way.

Incentive Considerations

1. Encourages clients to bring more business data to the network without waiting for the DataCap approval process
2. Encourages miners to use CC upgrades more often
3. For "CC sectors with data pieces", no re-seal is needed for a CC upgrade

Consider reverting FIP-0009

Optimistic WindowPoSt largely obviates the need for FIP-0009, so we can consider reverting it. We could also wait until FIP-0013 drives down costs even further.

Discussion about "Fast-track for WindowPoSt"

Problem

A WindowPoSt message is a “unique” kind of message because it has an expiration (i.e., it has to go on chain before a given epoch in order to be valid and effective). Because of this, we need to guarantee that enough WindowPoSt messages are selected and included by block producers, even when there is message congestion.

Proposed Solution

Allocate dedicated block space for WindowPoSt messages.

Specification

We introduce a message classification system:

  • Type A messages: WindowPoSt submission;
  • Type B messages: any message that is not in type A;

Each message type now has its own gas pricing mechanism. Each mechanism works like the existing Dynamic Network Fee mechanism, but they are independent. More precisely, we introduce the variables BlockGasLimit_A, BlockGasTarget_A, BlockGasLimit_B, and BlockGasTarget_B:

  • BlockGasLimit_A = TypeAShare*BlockGasLimit,
  • BlockGasTarget_A = TypeAShare*BlockGasTarget,
    • TypeAShare is a number between 0 and 1 , to be decided
  • BlockGasLimit_B = TypeBShare*BlockGasLimit,
  • BlockGasTarget_B = TypeBShare*BlockGasTarget,
    • TypeBShare = 1 -TypeAShare

Each block now has two base fees: baseFee_A and baseFee_B. Both are computed using the formula “NewBaseFee = OldBaseFee + delta*OldBaseFee”, where delta depends on the difference between the block's actual gas usage (i.e., the sum of each included message’s gas limit) and the BlockGasTarget. The differences between the two are:

  • In the computation of the delta for baseFee_B (resp. baseFee_A), we use BlockGasTarget_B/2 (resp. BlockGasTarget_A/2) and consider msg.GasLimit taken from messages of type B (resp. type A) only;
  • Currently the delta is a value in the range [-⅛, ⅛]; we can keep this range for both or use two different ranges.
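
A sketch of the two-lane update in Go, following the formula above (this is illustrative: it ignores integer arithmetic and the BlockGasTarget/2 detail, and TypeAShare is the protocol parameter still to be decided):

type laneFees struct {
	baseFeeA float64 // lane for type A (WindowPoSt) messages
	baseFeeB float64 // lane for type B (all other) messages
}

// step applies NewBaseFee = OldBaseFee + delta*OldBaseFee, with delta
// proportional to the deviation from target and clamped to [-1/8, 1/8].
func step(old, gasUsed, target float64) float64 {
	delta := (gasUsed - target) / target / 8
	if delta > 0.125 {
		delta = 0.125
	} else if delta < -0.125 {
		delta = -0.125
	}
	return old + delta*old
}

func (f *laneFees) update(gasUsedA, gasUsedB, blockGasTarget, typeAShare float64) {
	f.baseFeeA = step(f.baseFeeA, gasUsedA, typeAShare*blockGasTarget)
	f.baseFeeB = step(f.baseFeeB, gasUsedB, (1-typeAShare)*blockGasTarget)
}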

Discussion

One problem @Kubuxu pointed out with this proposal: if we reserve too little for WindowPoSt, the WindowPoSt partitions will run the corresponding base fee up very high; on the other hand, reserving too much decreases chain throughput and creates problems for the other messages (especially PreCommit/ProveCommit).

Other Solutions

  • A discounted base fee for WindowPoSt messages (to be analyzed/discussed)

FIP Discussion: Reduce PoRep verification time by reducing graph parents generation cost

Facts:

  • According to preliminary calculations, about half of the time spent verifying a PoRep proof goes into generating the correct indices of the parents of the challenged nodes (henceforth, challenged parents)
  • Challenged parents are currently independent per proof, and they depend on the randomness at the height of PreCommit.

Potential proposals that could evolve into a FIP:

  • Proposal 1: Add cache to verification:
    • Full nodes must keep in cache the full graph in order to generate parents efficiently by reading memory (~1GiB)
    • Pro:
      • No major change, only gas cost adjustments and requirement of graph cache
      • It will improve today's proofs (no requirement on SnarkPack)
    • Cons:
      • Large memory/storage requirement for full nodes (note that they already store a very large state, so this is likely not too bad after all)
  • Proposal 2: Generate parents once for aggregated precommits (SnarkPack only)
    • Given a ProveCommitAggregate covering all sectors precommitted at the same epoch, only generate the parents once (since they share the same randomness). This makes it rational for miners to reduce their gas cost by PreCommitAggregate-ing and then ProveCommitAggregate-ing their proofs
    • Pros:
      • No mem requirement as in 1
    • Cons:
      • Miners need to time things right to minimize costs
      • Only works with SnarkPack
      • Miners may PreCommit across epochs, so we may not realize the full 50% gas reduction
  • Proposal 3: Same randomness for all precommits in a window
    • Instead of randomness being assigned per pre-commit, only generate randomness every X blocks; everything precommitted before X uses the randomness at X. This means that all sectors precommitted in the same window share the same randomness, so parents only need to be calculated once every X blocks for all miners.
    • Pros:
      • No mem requirement as in 1
      • Can realize 50% gas reduction
      • Should work also without SnarkPack

There may be other solutions that we did not consider. Let us know your thoughts on the three proposals.

Once this gets more solidified we can turn it into an FIP proposal.

Gain more pay more

All participants today are suffering from Filecoin's congestion issues. To make things worse, the top miners, who hold a larger share of the network's power, produce more traffic to keep their lead. By analogy with the real world: you pay more in congestion fees if you own more vehicles. So if we simply attach a trade-off to the act of adding power, we might turn things around.

  1. Active miners should pay the average gas price plus a premium based on their share of network power, i.e., if a miner has a 5% power share of the whole network, any message from this miner costs average gas * 1.05.
  2. The additional part of the gas, considered a congestion fee, is burned immediately.
  3. A congestion window can be set, and miners pay more if they produce many messages during this window.

Discussion: Consolidate sectors into one proving deadline until deadline is full

Problem
If a miner finishes sealing a sector during a challenge window, the sector gets assigned to the next deadline. This leads to sectors spread across multiple deadlines, even when no deadline is filled with 2349 sectors.

Example:

deadline  partitions  sectors (faults)  proven partitions
0         1           686 (0)           0
1         1           13 (0)            0
2         1           9 (0)             0

This leads to unnecessary messages on the chain, and also unnecessary fees paid by miners, especially with WindowPoSt fees as high as they are at the time of writing.

Proposed Solution
In an earlier discussion on Slack, nickle proposed two solutions:

  1. Have a function that consolidates sectors
  2. Change sector allocation so sectors do not leak into other deadlines, plus a one-time sector consolidation.

Discussion
Nickle noted that solution 1 might be messy from a security standpoint.

Introduce FIP to exempt Window PoSts from BaseFee burn

This is a discussion issue for #51 (Rendered).

Summary: Exempt direct SubmitWindowedPoSt messages that execute successfully from base-fee burn (i.e., don't burn baseFee*gasUsed).

If a miner sends a direct on-chain message to a miner actor's SubmitWindowedPoSt method, and the message executes successfully, immediately "refund" all the burned gas, instead of just refunding the overestimation. However, overestimation penalties, gas premium, etc. still apply (see the sketch below).
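
A sketch of the rule, where Message and isDirectWindowPoSt (a check that the message is a direct call to a miner actor's SubmitWindowedPoSt) are hypothetical stand-ins for the implementation's actual types; the real change would live in the VM's gas accounting:

// baseFeeBurn returns the amount to burn for a message, applying the
// proposed exemption for successful direct Window PoSt submissions.
func baseFeeBurn(msg Message, gasUsed, baseFee int64, executedOK bool) int64 {
	if executedOK && isDirectWindowPoSt(msg) {
		return 0 // burn refunded; gas premium and overestimation penalties still apply
	}
	return baseFee * gasUsed // normal base-fee burn
}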

Motivation:

This proposal is a short-term stopgap to reduce the impact of a rising base fee on continuously proving existing storage. In principle, BaseFee*GasUsage is burned to compensate the network for the resources consumed by messages and to ensure incentive alignment. However, SubmitWindowedPoSt is a required message for continued mining operations in Filecoin.

  • Long-term solutions for reducing the cost of Window PoSt include #42.
  • Long-term solutions for reducing chain congestion include #49, #50.

Unfortunately, such long-term solutions are complex and cannot be properly implemented and tested on the required timescale (before the holidays). This proposal is a short-term mitigation until those solutions are ready.

FIP Proposal: Pack arbitrary data in CC sectors

Simple Summary

For CC sectors, allow miners to choose whether to seal empty data or to pack real data pieces into the sector, with either choice verifiable accordingly.

Abstract

Currently, in the sealing process, miners pack empty pieces (all zeros) into CC sectors and, after the PoRep finishes, create precommit messages with empty DealID information. When verifying commit messages, the verifier gets the DealIDs from the corresponding precommit info; if the DealIDs are empty, the verifier recovers the unsealed CID assuming all-zero data, which limits CC sectors to sealing only zeroed data.

Change Motivation

We propose a design that allows miners to pack real data into CC sectors, adding the unsealed CID to the precommit message and keeping it on chain. This introduces the possibility of CC upgrades without re-sealing, and of turning the pieces into verified data afterwards (separate FIPs).

Specification

  1. A CLI to associate sector IDs with data files in the local FS. (lotus change)
  2. Change AddPiece for CC sectors to read the mapping between sector IDs and data files and process it accordingly if present; if there is no mapping, keep the AddPiece process as-is. (lotus and filecoin-ffi change)
  3. Add a data field “unsealed ID” to the precommit info and populate its value. (lotus/specs-actors change)
  4. Add an optional data field “unsealed ID” to the SectorOnChainInfo struct (for later CC-upgrade verification between the unsealed ID and piece CIDs) and to SealVerifyStuff. (specs-actors change)
  5. In getVerifyInfo, if the “unsealed ID” of SealVerifyStuff is not empty and the DealIDs are empty, use it for commD; otherwise keep the logic as-is (see the sketch after this list). (specs-actors change)
  6. In ConfirmSectorProofsValid, populate the “unsealed ID” in the additional field of SectorOnChainInfo, for future verification. (specs-actors change)

Design Rationale

Extra on-chain storage for the “unsealed ID” in SectorOnChainInfo will be required, as this field enables post-verification between data pieces and the unsealed ID.

Backwards Compatibility

Full backwards compatibility is expected.

Security Considerations

N/A

Incentive Considerations

No impact on existing incentives, but this gives miners extra flexibility to pack real data into CC sectors. With the unsealed ID available on chain, cheaper CC upgrades and a new process for assigning DataCap become possible in follow-up FIPs.

Implementation

WIP

Copyright

Copyright and related rights waived via CC0.

Support account termination method for non-singleton actors

Problem

Currently, Filecoin does not support the full life cycle of an account from creation to termination: once an account is created, it lives in the network with a state forever, whether or not its creator has abandoned it. This leads to the following issues:

  1. All obsolete accounts, including signable ones and miners, still occupy network resources, which makes costs unnecessarily higher
  2. When a user changes their account, there is always some FIL that cannot be transferred, which cannot be ignored, especially when the gas base fee is high. This also makes it very hard to use an HD wallet with frequently changing addresses for privacy.
  3. Heavy actors (e.g. miners) can be created almost for free. They can be easily and cheaply spammed, consuming a lot of network resources and raising security concerns.

Proposed Solution

Add a termination method to non-singleton actors for transferring the available balance and terminating the account.
The termination method has one parameter, a beneficiary, which receives all available balance (see the sketch below).

Introduce collateral for heavy actors, e.g. miners; the collateral is returned on termination, providing a stronger security guard.
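
One possible shape for the method, with names as assumptions for discussion:

// Terminate sends the actor's full available balance to Beneficiary,
// returns any creation collateral, and then deletes the actor's state.
type TerminateAccountParams struct {
	Beneficiary addr.Address // receives all available balance
}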

Outline

This can be done step by step. For example:

  1. Support only signable accounts at first
  2. Support miner creation collateral and miner termination (only by the owner, under certain conditions)
  3. Support payment channels if needed

Discussion

This issue is intended for discussion.

Survey before the submission:

This proposal was first published in a WeChat article, where we ran a survey and got the following vote data:
https://mp.weixin.qq.com/s/9GSvuuPb73TVIoA_foXlTQ

Do you support adding account termination method in Filecoin:

  1. Yes, I support it mainly because I will not lose money when I drop an address - 47% (22/46)
  2. Yes, I support it mainly because it will save network resources - 34% (16/46)
  3. No, it does not make much sense - 17% (8/46)

Overall, 81% of voters support this proposal.

Risks

  • An account might be terminated by accident. This does not really matter: the address state can be re-created, and it simply behaves as a new address even though we removed the state of that particular address
  • An account might be associated with another one, e.g. as a miner's owner or worker. Terminating an owner or worker may lead to two consequences:
  1. One cannot withdraw, or a miner cannot mine; here, we may consider that the account user takes full responsibility to check before terminating
  2. The program might panic because the corresponding state no longer exists; this should be carefully handled
    Alternatively, we may not allow an account to be terminated while it is associated with another one, e.g. as an owner, miner, controller, multisig signer, etc.

Other Solutions

We could do Bitcoin-like transfers, where three addresses are included in a send message: From, To, and Change. But this makes send messages heavier, and in most cases the change address will be set to the From address. It also does not solve the issue that obsolete addresses still occupy network resources.

Enable DataCap Top up for Filecoin Plus Client Addresses

Problem

What exists today

  • Clients can request a DataCap allocation to be made to a specific address by providing information (off-chain) to a Notary
  • A Notary can make an allocation to that address to give the Client DataCap
    • For Clients, DataCap allocations are treated as a “one-time” gift card per address
    • This means that if an address has previously received DataCap, it is ineligible to receive future DataCap allocations (as long as it retains at least the minimum deal size's worth of DataCap)
    • DataCap allocations are non-transferrable

The above creates an issue for the average client: if you receive an allocation of DataCap and end up with an amount that is less than enough to make your next deal (e.g. 4GiB) but more than the minimum deal size, you will be unable to receive further allocations to your address. This can create unnecessary friction for users, who might then be required to create a new address in order to continue making deals.

Proposed Solution

Client addresses should be able to receive additional DataCap allocations to a given address.

Cases to consider:

  • On-chain address, never received DataCap
    • No change from current mechanism - Client should be able to request DataCap to this address.
  • On-chain address, received DataCap previously
    • Client should be able to request DataCap to this address again
    • DataCap should be treated as an addition to the existing balance (new balance = existing balance + new allocation)

It is worth also discussing how a client applying for DataCap from multiple Notaries may manifest - many of these solutions focus on process and tooling for Notaries, as what may appear as a race condition could also take the form of legitimate requests:

  • A client who needs 5TiB total applies to three Notaries (5TiB each), hoping to get approved by at least one of them quickly.
    • In the first case, it is expected that the Notaries would (in the course of due diligence) ask if the applicant had other pending requests.
    • When making the approval, the Notary should check the specific address (either in the public repo, block explorer, or natively in plus.fil.org) to verify when the last allocation was made.
      • In plus.fil.org - the aim is to have this appear in the same modal as a warning denoting when the last allocation was made, the size of the allocation, and the sender.
    • When in doubt, the Notary can also stagger their allocation (because addresses can receive multiple allocations) to ensure a Client is not receiving more than is due.
      • While a process is not established here, there are several possible ways that the Notary avoids the race condition:
        • Announcing their intended action to the intended address to the other Notaries
        • Staggering their allocation such that the modal warning will pop up for others.
    • In the event that multiple Notaries simultaneously allocate the full amount (Client receives 10TiB+ of DataCap) - the Client has received more than they intended.
      • We anticipate this will be an unlikely event given:
        • Notaries are geographically dispersed (and only a few per region), so the likelihood of race conditions is quite low
        • In the event a Client receives DataCap, the Notaries individually would be following their allocation plans to ensure the Client is trustworthy.
        • Given the rigor of these plans, it is highly likely that someone who was worthy of receiving 5TiB would credibly be able to have applied twice in a row - the risk factor is relatively low.
    • A client who needs 15TiB total applies to three Notaries (5TiB each), hoping to get approved by each of them.
      • In this case, it is expected that all Notaries would all separately do their due diligence to arrive at a decision.
      • In the case of large scale allocations (e.g. PiB scale), it may require a higher degree of coordination across Notaries - this is something that would require more process to be defined.

Discussion

Potential Impact

  • The previous reasoning for treating allocations as a “one-time” gift card was to mitigate the potential for abuse.
    • The original reasoning was that by requiring Clients to request a new address each time they’d be limited in their allocation and ability to spend.
    • The additional friction was intended to slow down the speed of allocation and limit damages in the event of abuse or uneducated use of DataCap (allocating all to the same miner).
      • Given there was less known about DataCap allocations and how it would manifest, the approach was to bias towards increasing friction to reduce risk.
    • However, this has a few practical implications:
      • Malicious actors who want to acquire large amounts of DataCap simply need to spin up many addresses (which can be done trivially)
      • Notaries trying to evaluate past Client allocations require the Client to self identify all their previous addresses
      • Legitimate actors (e.g. web3 devs) now need to manage the rotation of addresses for end users to be able to spend their DataCap. Rather than enabling legitimate users the ability to have a “default” address, they need to manage the rotation across an ever-increasing list.
        • Notably when deal renewal is a thing (™) it’s also likely that the client will be using the same address as before - and will want to be able to spend DataCap to renew
  • The above notably does not actually address the underlying issue:
    • Because Notaries can’t necessarily know all the previous allocations a Client received AND because spinning up new addresses is the norm / trivial - the security benefits are minimal.
    • On the flip side, the friction introduced (and required implementation implications for legitimate actors) is quite high
  • An added benefit - by normalizing having Clients have a “default” address they make deals from and receive DataCap to (which might also make them more eligible for higher DataCap allocations) we can build up a mechanism of credit / reputation from the Client.
  • Other benefits:
    • Enabling reputation via longevity of wallet addresses
    • Security of dataCap since a smaller number of wallets need to be secured
    • Fund security (lots of wallets by the same party makes it harder for orgs to track their own funds)
    • Analytics for the same org getting allocations from multiple notaries. (Some of the fraud vectors of creating multiple addresses still exist, but this proposal makes it easier for honest players to differentiate themselves).

Open Questions

  • Would having multiple notaries change the security implications here?
    • I don’t think materially so - as it stands a Client could spin up two separate addresses and make two separate allocation requests to two different Notaries. From the perspective of a malicious actor, they might do this anyways if they’re trying to be undetected.
    • For legitimate clients, it’s possible you apply to multiple Notaries and independently they give you additional DataCap (more than what you require)
      • This can first be addressed with education of Notaries
      • Additionally, the UX in the Notary Dashboard can be modified to show
        • DataCap for the Client address in question
        • Time of last allocation
        • New DataCap
      • Given the small number of Notaries and limited allocations available, the risk factor is quite low
        • Open question might be whether we should also implement a deletion of DataCap (though this opens many questions about who should have this power, what limits to this power should exist, etc).
    • Arguably having a reliable address (where a Notary could inspect previous spend from a Client), you give Notaries an additional metric of previous allocations that is less reliant on self-reporting.
    • Should the history of DataCap be stored on-chain? (presumably)
      • As of today you can trace the acquisition of DataCap (when it was granted) and how it was spent - so presumably with every address you can compute how much DataCap it had remaining at a given point in time.
      • In lotus-shed, you can see for the current point in time what is the DataCap balance
    • Alternative: Should there be a mechanism to link multiple addresses in a public way? Does this transparency help?

No repay debt requirement for DeclareFaultsRecovered


Abstract

Filecoin miners are subject to storage fault and consensus fault fees. These are paid using locked funds (i.e., unvested block rewards) and then the miner’s available balance. If these two are not enough to cover the total fee, the remaining amount is recorded in a variable (FeeDebt) in the miner’s state and the miner is declared “in fee debt”.

For a miner in debt, calls to the PreCommit, DeclareFaultsRecovered, and WithdrawBalance methods fail. Moreover, such a miner cannot mine blocks. The fact that a miner in debt cannot recover faults creates an undesirable loop: a miner with faulted partitions cannot recover and keeps increasing its debt (note that continued faults incur more fees), eventually leading to sector termination if the miner does not repay the debt within 14 days. This is especially bad for small miners with little balance. To avoid this, we propose to allow miners in debt to recover faults.

FIP Proposal: Client can make an off-chain deal and miner can use CC sectors to storage data without resealing

Simple Summary

A client can make an off-chain deal with a miner. The miner will use an existing or new CC sector to store the client's real data. If the client wants its off-chain data on chain, the miner should upgrade the CC sectors to regular sectors with resealing.

Problem

Right now, the total network power is more than 6EiB, but real storage is only about 15PB (https://storage.filecoin.io/). There are a lot of CC sectors with zero deals. Miners should have an easy way to use existing CC sectors to store real data from clients, without more resealing; this would also solve the problem of idle hardware.

The current method is that a pledged sector can be replaced with a new sector containing deals (https://docs.filecoin.io/mine/lotus/sector-pledging/#upgrading-pledged-sectors). However, it requires the deal to be published and a new sector to be sealed. This takes up more of the miners' resources and also occupies more network resources, and clients' storage costs increase because of it.

Proposed Solution

The client can choose whether to publish its own data and deal on chain or not.

  • If the client makes an on-chain deal, the miner can store it in a new sector or upgrade a CC sector via 'mark-for-upgrade'.
  • If the client makes an off-chain deal, there is no PublishStorageDeals message, no sealing, no DealID, etc. The client's data will be stored in one of the miner's CC sectors. At the same time, the miner needs to save the DealCid and PieceCID, and SectorPreCommitInfo also needs to be changed or extended with some data fields. (We also see similar proposals at #57 and #89.)

Storing off-chain data saves gas fees, which are especially expensive when the base fee is high. As with verified data, miners can offer clients a lower storage price than in on-chain mode. Clients can also download files from the miner by file CID.

If needed, the client can later move its off-chain deal on chain. The miner will migrate the data into a regular sector.

In particular, if the client finds that the miner no longer provides service, it can open an arbitration or appeal. If the miner lost the data, the miner must compensate the client for the loss, and its power and pledge will also be slashed. (Perhaps we can discuss this in detail in another FIP.)

Add a maintenance window flag for network upgrades and/or instability

Currently, network upgrades (and incidents like https://filecoin.statuspage.io/incidents/ffhr434cd14c) can temporarily reduce chain quality and, in some cases, cause miners to miss Window PoSts. This is mitigated by the fact that no penalties are paid for missed Window PoSt. Unfortunately, power that wasn't proven is still lost until the next Window PoSt.

Proposal

Introduce a "maintenance window" flag that can be set for a range of epochs. Any deadlines that end within a maintenance window will not be penalized for missed Window PoSts.

Implementation

  1. Expose a MaintenanceWindow() bool function on the runtime. Implementations will return true from this function while a maintenance window is in progress.
  2. When processing an end-of-deadline cron event, if MaintenanceWindow() returns true, assume that all partitions were correctly proven and move on without applying any penalties (see the sketch after this list).
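
A minimal sketch of step 2, assuming a hypothetical Runtime interface that exposes the new function, and processMissedPoSts standing in for the existing penalty path:

type Runtime interface {
	MaintenanceWindow() bool // true while a maintenance window is in progress
}

func handleProvingDeadline(rt Runtime) {
	if rt.MaintenanceWindow() {
		// Maintenance window active: treat every partition as proven and
		// skip fault detection and penalties entirely.
		return
	}
	processMissedPoSts(rt) // normal end-of-deadline handling
}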

Usage

  • During large network upgrades that could potentially reduce chain quality (e.g., ones with time-consuming state upgrades), MaintenanceWindow() bool will return true starting at the upgrade epoch and ending some time after the upgrade epoch (likely 1 hour).
  • In the case of a chain-halt and an emergency network upgrade:
    • The maintenance window would begin at the point where the network resumes.
    • The maintenance window would end some number of hours later when the chain is expected to be stable and caught up.

Sealing Transaction Fee Rate

Problem

Chain congestion due to sector sealing messages -- PreCommit and ProveCommit -- is driving up the BaseFee and making WindowPoSts, deals, and other messages very expensive. Other proposals (#24, #24-b, #42) suggest ways to make WindowPoSt cheaper or give it a dedicated gas control plane, but most of those solutions will take significant time to implement.

It seems that miners for now have settled on a BaseFee expense between 1-5 nFIL (gas expenditure dominated by PreCommit and ProveCommit).

Temporary Solution

The other solutions discussed are the right thing to do. But in the meantime:

We might alleviate the pain by introducing a gas fee rate increase for PreCommit and ProveCommit messages. This means multiplying the gas fee for these messages by a SealingFeeRate -- a factor of, say, 10x or even 20x (see the sketch below). This would cause parties currently sealing massive amounts (in this difficult congestion period) to settle at a BaseFee about 10x cheaper.
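
A hedged sketch of where the multiplier would sit in gas accounting (the method constants and the rate value are placeholders, not settled choices):

const SealingFeeRate = 10 // illustrative; could be 20

func effectiveFee(baseFee, gasUsed int64, method uint64) int64 {
	fee := baseFee * gasUsed
	if method == MethodPreCommitSector || method == MethodProveCommitSector {
		fee *= SealingFeeRate // sealing messages pay the increased rate
	}
	return fee
}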

Scales with BaseFee. Because this is a multiplier on top of the gas costs of those messages, it will scale with the BaseFee and will cause the network to find the miners' price point much faster.

Burn fees shift to miners who are sealing. This is unlikely to reduce burn fees overall; it only moves the proportion paid by most users (esp. for WindowPoSts) to be paid by miners who are growing their storage significantly.

Downside: harms sealing new storage. This harms all sealing, not just CC sectors: new sectors containing deals will also be more expensive to seal. This may be an acceptable price to pay until we get a better solution in.

Network policy lever. This creates a network policy lever that trades off between cheaper capacity onboarding and cheaper everything else.

Implementation Notes

This should be very easy to implement; that is why I'm proposing it. It should be a relatively straightforward change to introduce in gas accounting.

Other Solutions

#24, #24-b, #42 and more

FIP Proposal: ZKCP over payment channels for fair retrievals

Posting this FIP proposal here to open a discussion before pushing a PR. Would love the feedback! 🙏

---
fip:
title: Zero Knowledge Contingent Payment (ZKCP) over Payment Channels for fair retrievals.
author: @adlrocha
discussions-to: #73
status: Draft
type: Technical
category: Networking
created: 2020-02-22
spec-sections:
---

Simple Summary

Abstract

Filecoin retrieval deals currently happen primarily off-chain. Clients send periodic payment vouchers over a payment channel to pay a provider for the retrieval of blocks of data. The provider sends blocks until it requires payment; when it does, it pauses the data transfer until the client sends its next payment. This setup produces a clear imbalance in the exchange: if the provider requests payment before the blocks being paid for are sent, it can abscond with the payment voucher and redeem it without sending any data to the client; conversely, if the provider sends the blocks before the client sends its payment, the client can run off with the blocks without paying for them.

The aim of this FIP is to remove this imbalance in retrieval deals by implementing all the primitives required to run a Zero Knowledge Contingent Payment (ZKCP) protocol over Filecoin payment channels. This opens the door to the coexistence of different retrieval exchange protocols in the retrieval market, and to the fair exchange of data in retrieval deals.

Change Motivation

The imbalance in the current implementation of retrieval deals makes them vulnerable to unfair exchanges, where a client or a provider can take advantage of the other party. Clients can run off with some blocks of data without paying the provider, while providers can request a payment and run off with the payment voucher without sending the corresponding blocks.

A retrieval of data currently consists of the following steps:

  • The client finds a provider of a given piece and sends a `RetrievalDealProposal`. A field defining the type of retrieval exchange to be used could be included in this proposal.
  • All the required operations to get ready for the retrieval are performed: setting up the payment channel, unsealing the corresponding sectors, etc.
  • The provider monitors data transfer as it sends blocks over the protocol, until it requires payment.
  • When the provider requires payment, it pauses the data transfer and sends a request for payment as an intermediate voucher. At this point, the client could run off with the blocks received so far without sending any payment in exchange.
  • The client receives the request for payment.
  • The client creates and stores a payment voucher off-chain.
  • The client responds to the provider with a reference to the payment voucher, sent as an intermediate voucher (i.e., acknowledging receipt of a part of the data and the channel or lane value). At this stage, if the voucher also pays for subsequent blocks not yet sent, the provider could stop the exchange and run off with the payment voucher without sending the pending blocks.
  • The provider validates the voucher sent by the client and saves it to be redeemed on-chain later.
  • The provider resumes sending data and requesting intermediate payments. Each of these additional interactions gives both parties room to mount this attack.

Specification

Add a parameter to the negotiation of the retrieval deal (for instance, in the `RetrievalDealProposal`) to agree on the type of retrieval to be performed: mainly, pay-per-chunk or zkcp (at least for now).

When a retrieval deal of type zkcp is accepted by a provider, the provider performs the following steps before sending the data:
- It generates a symmetric key k, and uses k to encrypt the chunk of blocks to be sent in the retrieval, c = Enc_k(data).
- It computes a hash of the key, y = hash(k), and builds a zero-knowledge proof, v, that Dec_k(c) = data for the key k satisfying hash(k) = y, i.e. that the ciphertext c has indeed been encrypted with the key whose hash is y.

The provider then sends the encrypted data c over the data channel, along with the hash of the key, y, and the proof, v, and requests a payment. In order for the provider to reveal the key, the client needs to send a payment voucher.

The client sends a payment voucher with the field SecretPreimage = y set in the voucher. To redeem the payment, the provider has to reveal the key k, which lets the client recover the data.
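
A minimal sketch of the provider-side steps, using standard-library AES-GCM and SHA-256 as stand-ins for the exact primitives; the zero-knowledge proof generation is elided, and the voucher struct mirrors only the fields discussed here.

```go
package zkcp

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
)

// encryptChunk generates a fresh symmetric key k, encrypts the chunk of
// blocks with AES-GCM (c = Enc_k(data)), and returns the key hash
// y = sha256(k). The ZK proof v over (c, y) is elided here.
func encryptChunk(data []byte) (c, k, y []byte, err error) {
	k = make([]byte, 32)
	if _, err = rand.Read(k); err != nil {
		return nil, nil, nil, err
	}
	block, err := aes.NewCipher(k)
	if err != nil {
		return nil, nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return nil, nil, nil, err
	}
	c = gcm.Seal(nonce, nonce, data, nil) // nonce prepended to ciphertext
	h := sha256.Sum256(k)
	return c, k, h[:], nil
}

// SignedVoucher shows only the fields discussed here (simplified): the
// client sets SecretPreimage = y, so the voucher can only be redeemed by
// presenting k with sha256(k) == y, which in turn reveals k to the client.
type SignedVoucher struct {
	Amount         int64
	SecretPreimage []byte
}
```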

If the accepted retrieval is of type pay-per-chunk, client and provider run the current implementation of the retrieval exchange seamlessly.

The basic implementation of the zkcp retrieval assumes a single payment for all the blocks of data requested. To support an exchange with n partial payments, the provider needs to generate n different keys, encrypt each of the n sets of blocks with a different key, build n proofs, and redeem n vouchers to reveal the keys to the client.

Design Rationale

This proposal adds overheads to the current implementation of retrieval deals, and it may not suit the needs of every retrieval exchange. The rationale behind the proposal is to put in place all the constructions required to run a ZKCP protocol to perform fair-exchanges over Filecoin payment channels. This opens the door to the coexistence of several retrieval protocols so clients and providers can agree on the most suitable one to fulfill their needs in a retrieval deal.

There are already proposals being discussed for alternative implementations of the ZKCP protocol, with optimizations and modifications that circumvent some of its limitations. Having a basic implementation of ZKCP in Filecoin can open the door to the design and implementation of further fair-exchange retrieval protocols built on this basic construction. A few examples of the aforementioned alternative implementations of ZKCP retrievals can be found here:

Backwards Compatibility

This proposal is backward compatible. What's more, one of its goals is to enable the coexistence of different retrieval exchange protocols over the Filecoin retrieval network.

Test Cases

TBD

Security Considerations

The main security considerations for this proposal involve:

  • The zero-knowledge proof construction used for v (proving that c is the encryption of the requested data under the key whose hash is y) must not leak any information about the data or the key.
  • The payment vouchers must only be redeemable through the provider's revelation of the key used to encrypt the data.

Incentive Considerations

Supporting the coexistence of several retrieval exchange protocols, and fair exchanges of data based on a ZKCP protocol, gives flexibility to the retrieval network and retrieval deals, as clients and applications are able to choose the retrieval scheme that best suits their needs.

Moreover, this paves the way for the design and implementation of new retrieval exchange protocols, and further innovations in the retrieval market.

Product Considerations

  • Having the ability to perform fair exchanges of data in retrieval deals removes the need for any kind of reputation system in retrieval deals and off-chain data exchanges/interactions.
  • Making payment channels ZKCP-compatible opens a new space for implementing new, coexisting retrieval deal types. This would enable clients and applications to select the retrieval exchange protocol that best suits their needs (a client may choose a pay-per-chunk exchange for fast retrievals, and a zkcp fair retrieval for costly ones).

Implementation

TBD

Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

Proposal on the extension of the V1 proof sector lifetime

Summary
It is currently not possible to extend the lifetime of V1 proof sectors. This has caused a huge loss to the miners who sealed V1 proof sectors in the early days. Sealed sectors only need to answer WindowPoSt challenges, so no security issues are involved. We therefore propose that the lifetime of V1 proof sectors be extendable.

If the lifetime of these sectors cannot be extended, a lot of sealing computation resources will be wasted sealing these sectors again. If miners have to re-seal sectors, it will take much more collateral and gas, more messages will congest the chain, and other miners will have to pay higher gas fees too.

If the lifetime of V1 proof sectors is capped at 180 days, miners would suffer a terrible loss, and gas fee issues may cause even greater losses.

Improvement Goal
Allow the lifetime of V1 proof sectors to be extended like v8 proof sectors. Please see the detailed improvement plan: https://github.com/filecoin-project/specs-actors/pull/1310
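
For context, a minimal sketch of the kind of policy change involved, assuming "v8" refers to the V1_1 proof type (registered proof ID 8) and that the binding constraint is the per-proof-type maximum sector lifetime; the constants here are illustrative, not the actual parameters of the linked PR.

```go
package policy

type RegisteredSealProof int64
type ChainEpoch int64

const (
	StackedDrg32GiBV1   RegisteredSealProof = 3
	StackedDrg32GiBV1_1 RegisteredSealProof = 8
	EpochsInDay         ChainEpoch          = 2880 // 30s epochs
)

// maxSectorLifetime returns the maximum commitment for a proof type, as
// checked during ExtendSectorExpiration. The proposal would raise the V1
// limit to match V1_1 (illustrative values).
func maxSectorLifetime(proof RegisteredSealProof) ChainEpoch {
	switch proof {
	case StackedDrg32GiBV1, StackedDrg32GiBV1_1:
		return 5 * 365 * EpochsInDay // treat V1 like V1_1
	default:
		return 540 * EpochsInDay
	}
}
```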

Create a table for FIPs by name

On the main page of the FIP repo, it'd be nice to have a table of FIPs listing them by number and title, with a link to each FIP.

FIP Proposal: Alter sector upgrade behaviour to make an upgraded sector's power consistent with a newly sealed sector

Author

Alex Fox - Neonix

Slack: @NeonixAF f019551

Overview

The Filecoin Spec describes the purpose of the sector upgrade mechanism in 2.6.1.1 Sector Lifecycle:

It is reasonable to assume that miners enter the network by adding Committed Capacity sectors, that is, sectors that do not contain user data. Once miners agree storage deals with clients, they upgrade their sectors to Regular Sectors. Alternatively, if they find Verified Clients and agree a storage deal with them, they upgrade their sector accordingly. Depending on whether or not a sector includes a (verified) deal, the miner acquires the corresponding storage power in the network.

(https://spec.filecoin.io/#section-systems.filecoin_mining.sector.lifecycle)

And in 2.6.1.6 Upgrading Sectors:

In order for a Miner to maximize storage power (and profit), they should take advantage of all available storage space immediately, even before they find enough Clients to use this space.

[...]

If the newly ProveCommitted Regular sector contains a Verified Client deal, i.e., a deal with higher Sector Quality, then the miner’s storage power will increase accordingly.

(https://spec.filecoin.io/#section-systems.filecoin_mining.sector.adding_storage)

The intention seems clear: miners should always pledge capacity when they have the resources to do so, and there is no indication that performing a sector upgrade should be discouraged in any circumstance.

Problem

There is a situation where upgrading a sector will disadvantage a miner. When upgrading a sector with a deal having a duration shorter than the lifetime of the old sector, the new sector is given the same Expiration epoch as the sector that it replaces. Due to the calculation of the Sector Quality Multiplier, the new sector has a lower storage power than a sector which had been sealed without upgrading.

For example:

  1. A miner marks a sector for upgrade. The sector's expiration epoch is 1 year from now.
  2. The miner receives a 32GiB verified deal, with a duration of 6 months. The deal is sealed into a sector, and the new sector replaces the sector marked for upgrade. The new sector inherits the old sector's expiration epoch of 1 year from now.
  3. The Sector Quality Multiplier of this new sector is half of what it would have been if the sector had been sealed without upgrading. This results in the upgraded sector having half of the power that a normal sector would have had, but for twice the amount of time.

Even though the upgraded sector has the same total amount of power overall, the miner receives less benefit from this power because half of it has been pushed into the future when the network will presumably have grown and the reward received for that power will be smaller.
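
To make the penalty concrete, here is a simplified worked example of the Sector Quality Multiplier using the specs-actors weightings (base weight 10, verified deal weight 100), with float arithmetic in place of the actual big-integer space-time computation, and assuming the verified deal fills the whole sector:

```go
package main

import "fmt"

// quality computes the Sector Quality Multiplier from space-time fractions,
// using the specs-actors weights: base 10, verified deals 100.
func quality(sectorLifeMonths, verifiedDealMonths float64) float64 {
	base, verified := 10.0, 100.0
	vFrac := verifiedDealMonths / sectorLifeMonths // verified space-time fraction
	return (base*(1-vFrac) + verified*vFrac) / base
}

func main() {
	// Upgraded sector: 6-month verified deal, inherited 1-year expiration.
	fmt.Println(quality(12, 6)) // 5.5
	// Freshly sealed sector: 6-month verified deal, 6-month lifetime.
	fmt.Println(quality(6, 6)) // 10
}
```

The upgraded sector earns a 5.5x multiplier over 12 months instead of 10x over 6 months, matching the roughly halved power described above.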

This could result in a miner being unfairly penalised after having pledged storage space to the network, if they unexpectedly receive many short deals. Or it could create a perverse incentive for a miner not to pledge space to the network if they expect to receive significant short verified deals, as they will make the rational decision to avoid this penalty.

Proposal

The calculation of Sector Quality should be changed for sectors that are upgrading CC sectors.

For the purposes of the Sector Quality calculation, the sector should be treated as if its expiration is the epoch on which the longest deal in the sector expires. As such, the miner receives the same benefit from Verified Deal Weight, at the same time, as they would have had they sealed the sector without upgrading.

After the last deal in the sector expires, it reverts to having the Sector Quality of a CC sector, and its power reduces accordingly.

Additionally, as the sector's power has reverted to that of a CC sector, the whole sector has essentially become a CC sector again, and as such it may make sense to permit upgrading it again, subject to the maximum allowed sector lifetime. A sketch of the proposed rule follows below.
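
A minimal sketch of the proposed rule, reusing quality() from the worked example above; the names and the fully-verified assumption are illustrative, not an actual actors change.

```go
// upgradedSectorQuality evaluates quality over the deal-bounded lifetime
// instead of the inherited expiration, then reverts to CC quality once the
// last deal has expired (all epochs expressed in months for simplicity).
func upgradedSectorQuality(currentEpoch, activation, lastDealEnd, verifiedDealLife float64) float64 {
	if currentEpoch >= lastDealEnd {
		return 1.0 // all deals expired: sector behaves as a CC sector again
	}
	// Treat the sector as if it expired with its longest deal, so the
	// verified weight is not diluted by the inherited expiration.
	return quality(lastDealEnd-activation, verifiedDealLife)
}
```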
