
2023-02-gogopool-mitigation-contest-findings's Introduction

Gogopool Mitigation Contest

Unless otherwise discussed, this repo will be made public after contest completion, sponsor review, judging, and two-week issue mitigation window.

Contributors to this repo: prior to report publication, please review the Agreements & Disclosures issue.


Contest findings are submitted to this repo

Sponsors have three critical tasks in the contest process:

  1. Weigh in on severity.
  2. Respond to issues.
  3. Share your mitigation of findings.

Let's walk through each of these.

High and Medium Risk Issues

Please note: because wardens submit issues without seeing each other's submissions, there will always be findings that are duplicates. For all issues labeled 3 (High Risk) or 2 (Medium Risk), these have been pre-sorted for you so that there is only one primary issue open per unique finding. All duplicates have been labeled duplicate, linked to a primary issue, and closed.

Weigh in on severity

Judges have the ultimate discretion in determining severity of issues, as well as whether/how issues are considered duplicates. However, sponsor input is a significant criterion.

For a detailed breakdown of severity criteria and how to estimate risk, please refer to the judging criteria in our documentation.

If you disagree with a finding's severity, leave the severity label intact and add the label disagree with severity, along with a comment indicating your opinion for the judges to review. You may also add questions for the judge in the comments.

Respond to issues

Label each open/primary High or Medium risk finding as one of these:

  • sponsor confirmed, meaning: "Yes, this is a problem and we intend to fix it."
  • sponsor disputed, meaning either: "We cannot duplicate this issue" or "We disagree that this is an issue at all."
  • sponsor acknowledged, meaning: "Yes, technically the issue is correct, but we are not going to resolve it for xyz reasons."

(Note: please don't use sponsor disputed for a finding if you think it should be considered of lower or higher severity. Instead, use the label disagree with severity and add comments to recommend a different severity level -- and include your reasoning.)

Add any necessary comments explaining your rationale for your evaluation of the issue.

Note that when the repo is public, after all issues are mitigated, wardens will read these comments; they may also be included in your C4 audit report.

QA and Gas Reports

For low and non-critical findings (AKA QA), as well as gas optimizations: wardens are required to submit these as bulk listings of issues and recommendations. They may only submit a single, compiled report in each category:

  • QA reports should include all low severity and non-critical findings, along with a summary statement.
  • Gas reports should include all gas optimization recommendations, along with a summary statement.

For QA and Gas reports, sponsors are not required to weigh in on severity or risk level. We ask that sponsors:

  • Leave a comment for the judge on any reports you consider to be particularly high quality. (These reports will be awarded on a curve.)
  • Add the sponsor disputed label to any reports that you think should be completely disregarded by the judge, i.e. the report contains no valid findings at all.

Once labelling is complete

When you have finished labelling findings, drop the C4 team a note in your private Discord backroom channel and let us know you've completed the sponsor review process. At this point, we will pass the repo over to the judge to review your feedback while you work on mitigations.

Share your mitigation of findings

Note: this section does not need to be completed in order to finalize judging. You can continue work on mitigations while the judge finalizes their decisions and even beyond that. Ultimately we won't publish the final audit report until you give us the OK.

For each finding you have confirmed, you will want to mitigate the issue before the contest report is made public.

As you undertake that process, we request that you take the following steps:

  1. Within a repo in your own GitHub organization, create a pull request for each finding.
  2. Link the PR to the issue that it resolves within your contest findings repo.

This will allow for complete transparency in showing the work of mitigating the issues found in the contest. If the issue in question has duplicates, please link to your PR from the open/primary issue.


2023-02-gogopool-mitigation-contest-findings's Issues

Mitigation Confirmed for MR-Mitigation of H-04: See comments

Since Prelaunch has been removed from the state transitions out of Withdrawable and Error, I have added the following assertion to confirm that the minipool can no longer transition to Prelaunch, further affirming the working logic of the mitigated steps:

	function testCycleMinipool() public {
		uint256 duration = 4 weeks;
		uint256 depositAmt = 1000 ether;
		uint256 avaxAssignmentRequest = 1000 ether;
		uint256 validationAmt = depositAmt + avaxAssignmentRequest;
		// Enough to start but not to re-stake, we will add more later
		uint128 ggpStakeAmt = 100 ether;

		vm.startPrank(nodeOp);
		ggp.approve(address(staking), MAX_AMT);
		staking.stakeGGP(ggpStakeAmt);
		MinipoolManager.Minipool memory mp = createMinipool(depositAmt, avaxAssignmentRequest, duration);
		vm.stopPrank();

		address liqStaker1 = getActorWithTokens("liqStaker1", MAX_AMT, MAX_AMT);
		vm.prank(liqStaker1);
		ggAVAX.depositAVAX{value: MAX_AMT}();

		rialto.processMinipoolStart(mp.nodeID);

		skip(duration / 2);

		// Give rialto the rewards it needs
		uint256 rewards = 10 ether;
		deal(address(rialto), address(rialto).balance + rewards);

		// Fail due to invalid multisig
		vm.expectRevert(MinipoolManager.InvalidMultisigAddress.selector);
		minipoolMgr.recordStakingEndThenMaybeCycle{value: validationAmt + rewards}(mp.nodeID, block.timestamp, rewards);

		// Fail due to under collat
		vm.prank(address(rialto));
		vm.expectRevert(MinipoolManager.InsufficientGGPCollateralization.selector);
		minipoolMgr.recordStakingEndThenMaybeCycle{value: validationAmt + rewards}(mp.nodeID, block.timestamp, rewards);

		// Add a bit more collateral to cover the compounding rewards
		vm.prank(nodeOp);
		staking.stakeGGP(1 ether);

		// Pay out the rewards and cycle
		vm.prank(address(rialto));
		startMeasuringGas("testGas-recordStakingEndAndCycle");
		minipoolMgr.recordStakingEndThenMaybeCycle{value: validationAmt + rewards}(mp.nodeID, block.timestamp, rewards);
		stopMeasuringGas();
		MinipoolManager.Minipool memory mpCompounded = minipoolMgr.getMinipoolByNodeID(mp.nodeID);
+		assertFalse(mpCompounded.status == uint256(MinipoolStatus.Prelaunch));
		assertEq(mpCompounded.status, uint256(MinipoolStatus.Launched));
		assertGt(mpCompounded.avaxNodeOpAmt, mp.avaxNodeOpAmt);
		assertGt(mpCompounded.avaxNodeOpAmt, mp.avaxNodeOpInitialAmt);
		assertGt(mpCompounded.avaxLiquidStakerAmt, mp.avaxLiquidStakerAmt);
		assertEq(staking.getAVAXStake(mp.owner), mpCompounded.avaxNodeOpAmt);
		assertEq(staking.getAVAXAssigned(mp.owner), mpCompounded.avaxLiquidStakerAmt);
		assertEq(staking.getMinipoolCount(mp.owner), 1);
		assertEq(mpCompounded.startTime, 0);
		assertGt(mpCompounded.initialStartTime, 0);
	}

On a side note, the check initialStartTime + duration > block.timestamp would be better off with a threshold added to the right side of the inequality. This would cover edge cases where initialStartTime + duration is only marginally greater than block.timestamp, e.g. by one or a few seconds.

Moreover, if the minimum validation period were to increase in the future from 14 days to, say, 30 days, the overshoot beyond the duration limit would be even more drastic.

Consider adding a settable threshold as suggested above; it could currently keep its default value of 0 and be assigned something like 3 days, or another reasonable and practical value, when need be.
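
As a rough illustration of that idea, here is a minimal self-contained sketch of such a settable threshold. The contract, function names, and wiring are hypothetical, not existing code; in the actual codebase the check lives inside recordStakingEndThenMaybeCycle():

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical sketch only: a guardian-settable cycle threshold (default 0) that the
// recreation check could compare against.
abstract contract CycleThresholdSketch {
	uint256 private cycleDurationThreshold; // e.g. later set to 3 days

	modifier onlyGuardian() virtual;

	function setCycleDurationThreshold(uint256 threshold) external onlyGuardian {
		cycleDurationThreshold = threshold;
	}

	function shouldCycle(uint256 initialStartTime, uint256 duration) public view returns (bool) {
		// Recreate the minipool only if the remaining duration clears the threshold.
		return initialStartTime + duration > block.timestamp + cycleDurationThreshold;
	}
}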

Mitigation Confirmed for Mitigation of M-05 Issue mitigated

C4 issue

M-05: Bypass whenNotPaused modifier

Comments

The original implementation had some places where the whenNotPaused modifier was not used correctly.

Mitigation

PR #22
This PR includes fixes for two issues M-01 and M-05.
The protocol team clarified the places where the whenNotPaused modifier should be used and should not be used.
After the mitigation, I confirm that the problem reported in the original issue does not exist anymore.

Conclusion

LGTM

Mitigation Confirmed for Mitigation of M-14 Issue mitigated

C4 issue

M-14: any duration can be passed by node operator

Comments

Anyone can call createMinipool() with the nodeID, duration, and delegationFee parameters.
The original implementation did not have sanity checks for the duration and delegationFee parameters, and this could lead to various issues.

Mitigation

PR #38
Double-checked the Avalanche documentation about the requirements for duration and delegationFee.
(image: Avalanche documentation on duration and delegationFee requirements)
The mitigation added new sanity checks as below.
(image: added sanity checks)

Tests

There were several unreasonable test cases in the original code base (e.g. 0 duration) and these are fixed now. All passing.

Note

There is another issue found in the mitigation for H-04 and it is slightly related to this one.

Conclusion

LGTM

Agreements & Disclosures

Agreements

If you are a C4 Certified Contributor by commenting or interacting with this repo prior to public release of the contest report, you agree that you have read the Certified Warden docs and agree to be bound by:

To signal your agreement to these terms, add a 👍 emoji to this issue.

Code4rena staff reserves the right to disqualify anyone from this role and similar future opportunities who is unable to participate within the above guidelines.

Disclosures

Sponsors may elect to add team members and contractors to assist in sponsor review and triage. All sponsor representatives added to the repo should comment on this issue to identify themselves.

To ensure contest integrity, the following potential conflicts of interest should also be disclosed with a comment in this issue:

  1. any sponsor staff or sponsor contractors who are also participating as wardens
  2. any wardens hired to assist with sponsor review (and thus presenting sponsor viewpoint on findings)
  3. any wardens who have a relationship with a judge that would typically fall in the category of potential conflict of interest (family, employer, business partner, etc)
  4. any other case where someone might reasonably infer a possible conflict of interest.

After pausing `startRewardsCycle()`, rewards continue to accumulate only to be inflated at a later date

Lines of code

https://github.com/multisig-labs/gogopool/blob/9ad393c825e6ac32f3f6c017b4926806a9383df1/contracts/contract/RewardsPool.sol#L74-L76

Vulnerability details

Impact

Pausing startRewardsCycle() merely puts off calling this function. It does not prevent GGP rewards from accumulating, which under certain situations could lead to over-inflation even though all operators have stopped validating.

When eventually unpaused, newTotalSupply is going to be inflated based on inflationIntervalsElapsed.

File: RewardsPool.sol#L74-L76

		for (uint256 i = 0; i < inflationIntervalsElapsed; i++) {
			newTotalSupply = newTotalSupply.mulWadDown(inflationRate);
		}

Proof of Concept

Here is one specific scenario:

  1. pauseEverything() is invoked by a defender.
  2. 14 days later, no node operators will be validating, because all registered multisigs have also been disabled from completing or recreating node validations.
  3. resumeEverything() is not expected to be called for, say, another 14 days.

GGP rewards supposedly meant for node operations are accumulating for no reason, which defeats the purpose as originally documented in the Litepaper for the third form of rewards, i.e. GGP:

"... This allows a node operator to begin staking with less AVAX than is normally required, earn rewards on their staked AVAX, and earn rewards via the operating fee and GGP network rewards."

Tools Used

Manual inspection

Recommended Mitigation Steps

Consider implementing a function in RewardsPool.sol that will allow Rialto or the Guardian to fast-forward InflationIntervalStartTime to offset the idling period.

    function setInflationIntervalStartTime(uint256 offset) external onlyGuardian {
        uint256 startTime = getInflationIntervalStartTime();
        setUint(keccak256("RewardsPool.InflationIntervalStartTime"), startTime + offset);
    }

Mitigation Confirmed for Mitigation of M-20 Issue mitigated

C4 issue

M-20: TokenggAVAX: maxDeposit and maxMint return wrong value when contract is paused

Comments

The contract TokenggAVAX was implemented according to the ERC-4626 spec, but a few important functions were not overridden, namely maxDeposit and maxMint.
As a result, the pausability of the protocol was not reflected in these functions and wrong values were returned.

Mitigation

PR #33
The relevant functions are now overridden with correct logic for the paused state.
Double-checked the other ERC-4626 spec requirements as well and it looks good.
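
For reference, a minimal self-contained sketch of the kind of override described; the paused flag and names are illustrative assumptions, not the actual PR #33 code:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch only: an ERC-4626-style vault whose maxDeposit/maxMint report 0 while paused,
// so integrators relying on these view functions see that deposits/mints are disabled.
abstract contract PausedLimitsSketch {
	bool public paused;

	function maxDeposit(address) public view virtual returns (uint256) {
		return paused ? 0 : type(uint256).max;
	}

	function maxMint(address) public view virtual returns (uint256) {
		return paused ? 0 : type(uint256).max;
	}
}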

Tests

The newly added relevant tests have been checked and all are passing: testMaxMint(), testMaxDeposit().

Conclusion

LGTM

The node operators are likely to be slashed in an unfair way

Lines of code

https://github.com/multisig-labs/gogopool/blob/4bcef8b1d4e595c9ba41a091b2ebf1b45858f022/contracts/contract/MinipoolManager.sol#L464

Vulnerability details

C4 issue

H-04: Hijacking of node operators minipool causes loss of staked funds

Comments

In the original implementation, the protocol had some unnecessary state transitions and it was possible for node operators to interfere with the recreation process.
The main problem was that recordStakingEnd() and recreateMiniPool() were separate external functions, so the operator could front-run recreateMiniPool() and call withdrawMinipoolFunds().

Mitigation

PR #23
The mitigation added a new function recordStakingEndThenMaybeCycle() and handles recordStakingEnd() and recreateMiniPool() in an atomic way.
With this mitigation, the state flow is now as below and it is impossible for a node operator to interfere with the recreation process.
(image: state transition diagram)
But this mitigation created another minor issue: node operators run the risk of being slashed in an unfair way.

New issue

The node operators are likely to be slashed in an unfair way

Code snippet

https://github.com/multisig-labs/gogopool/blob/4bcef8b1d4e595c9ba41a091b2ebf1b45858f022/contracts/contract/MinipoolManager.sol#L464

Proof of concept

In the previous implementation, I assumed rialtos are smart enough to recreate minipools only when necessary.
But now, recreation is included as an optional step in recordStakingEndThenMaybeCycle(), so as long as the check initialStartTime + duration > block.timestamp at L#464 passes, recreation will be processed.

Now let us consider the timeline. One validation cycle, in the whole sense, contains several steps as below.
(image: validation cycle timeline)

  1. Let us assume it is somehow possible that startTime[1] > endTime[0], i.e., the multisig failed to start the next cycle at exactly the same timestamp as the previous end time. This is quite possible for various reasons, because external processes are involved.
    In this case the timeline will look as below.
    (image: timeline with a delayed cycle start)
    As an extreme example, let us say the node operator created a minipool with a duration of 42 days (with 3 cycles in mind) and it took 12 days to start the second cycle. When recordStakingEndThenMaybeCycle() (finishing the second cycle) is called, two cases are possible.
  • It is possible that initialStartTime + duration <= block.timestamp. In this case, the protocol will not start the next cycle, and the node will have validated for only two cycles, different from the initial plan.
  • If initialStartTime + duration > block.timestamp, the protocol will start the third cycle. But at the end of that cycle, it is likely that the node is not eligible for rewards per the Avalanche validators' voting. (Imagine the node op rented a server for 42 days; then, 42-14*2-12=2 days after the third cycle starts, the node may have stopped working and will not meet the 80% uptime condition.) The node operator will then be punished and their GGP stake will be slashed. This is unfair.
  2. Assume it is 100% guaranteed that startTime[n+1]=endTime[n] for all cycles.
    The timeline will look as below, and the second case of the above scenario still exists if the node operator didn't specify a duration that is an exact multiple of 14 days. (365 days is not!)
    (image: timeline with back-to-back cycles)
    Then the last cycle will end later than initialStartTime + duration and the node op can again be slashed in an unfair way.
    So even assuming the perfect condition, the protocol works in a somewhat unfair way for node operators.

The main reason for this problem is that there technically exist two timelines, and the protocol does not accurately track the actual validation duration for which the node was used.
At the least, the protocol should not start a new cycle if initialStartTime + duration < block.timestamp + 14 days, because the node operator is then likely to get punished at the end of that cycle.

Tools used

Manual review

Recommended additional mitigation

  • If it is 100% guaranteed that startTime[n+1]=endTime[n] for all cycles, I recommend starting a new cycle only if initialStartTime + duration >= block.timestamp + 14 days (see the sketch after this list).
  • If not, I suggest adding a new state variable that will track the actual validation period (the actually utilized period).
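
A minimal sketch of the first recommendation; the library and function names are illustrative, and in the actual codebase the check would live inside recordStakingEndThenMaybeCycle():

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch only: start a new 14-day cycle only when the node op's requested duration
// still covers the whole next cycle, so the node is not slashed for time it never agreed to.
library CycleEligibilitySketch {
	uint256 internal constant CYCLE_LENGTH = 14 days;

	function canStartNewCycle(uint256 initialStartTime, uint256 duration) internal view returns (bool) {
		return initialStartTime + duration >= block.timestamp + CYCLE_LENGTH;
	}
}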

Conclusion

Mitigation error - created another issue for the same edge case.

There is no way to retrieve the rewards from the `MultisigManager` and rewards are locked in the vault.

Lines of code

https://github.com/multisig-labs/gogopool/blob/4bcef8b1d4e595c9ba41a091b2ebf1b45858f022/contracts/contract/RewardsPool.sol#L229

Vulnerability details

C4 issue

M-21: Division by zero error can block RewardsPool#startRewardCycle if all multisig wallet are disabled.

Comments

The protocol provides an external function startRewardsCycle() so that anyone can start a new reward cycle if necessary.
Before mitigation, there was an edge case where this function would revert due to division by zero.
Edge case: no multisigs are enabled (possible when Ocyticus.disableAllMultisigs() or Ocyticus.pauseEverything() is called).

Mitigation

PR #37
If no multisig is enabled, the mitigation sends the rewards to the MultisigManager, which makes sense.
But this created another issue: there is no way to retrieve the rewards back from the MultisigManager.

New issue

There is no way to retrieve the rewards from the MultisigManager and rewards are locked in the vault.

Code snippet

https://github.com/multisig-labs/gogopool/blob/4bcef8b1d4e595c9ba41a091b2ebf1b45858f022/contracts/contract/RewardsPool.sol#L229

Impact

There is no way to retrieve the rewards from the MultisigManager and rewards are locked in the vault.

Proof of Concept

The rewards that accrued in this specific edge case are locked in the MultisigManager.
It is understood that the funds are not lost and the protocol can be upgraded with a new MultisigManager contract that has a proper withdrawal function.
I evaluate the severity of the new issue as Medium because funds are locked in some specific edge cases and are only withdrawable after a contract upgrade.

Tools used

Manual Review

Recommended additional mitigation

Add a new external function in the MultisigManager with the guardianOrSpecificRegisteredContract("Ocyticus", msg.sender) modifier that distributes the pending rewards to the active multisigs.
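
A minimal sketch of such a function; the simplified interfaces and names below are assumptions for illustration and do not match the project's exact Vault and storage signatures:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch only: pay out any GGP parked under MultisigManager to the currently enabled multisigs.
interface IVaultSketch {
	function balanceOfToken(string calldata networkContractName, address token) external view returns (uint256);
	function withdrawToken(address withdrawalAddress, address token, uint256 amount) external;
}

abstract contract MultisigManagerSketch {
	IVaultSketch public vault;
	address public ggp;

	// Assumed to exist elsewhere: returns the currently enabled multisig addresses.
	function getEnabledMultisigs() public view virtual returns (address[] memory);

	// In the real contract this would carry guardianOrSpecificRegisteredContract("Ocyticus", msg.sender).
	function distributePendingRewards() external virtual {
		address[] memory multisigs = getEnabledMultisigs();
		require(multisigs.length > 0, "no enabled multisigs");
		uint256 allotment = vault.balanceOfToken("MultisigManager", ggp);
		uint256 tokensPerMultisig = allotment / multisigs.length;
		for (uint256 i = 0; i < multisigs.length; i++) {
			vault.withdrawToken(multisigs[i], ggp, tokensPerMultisig);
		}
	}
}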

Conclusion

Mitigation error - created another issue for the same edge case.

Mitigation Confirmed for Mitigation of M-01 Issue mitigated

C4 issue

M-01: RewardsPool.sol : It is safe to have the startRewardsCycle with WhenNotPaused modifier

Comments

The original implementation had some places where the whenNotPaused modifier was not used correctly.

Mitigation

PR #22
This PR includes fixes for two issues M-01 and M-05.
The protocol team clarified the places where the whenNotPaused modifier should be used and should not be used.
After the mitigation, I confirm that the problem reported in the original issue does not exist anymore.

Conclusion

LGTM

Mitigation Confirmed for Mitigation of H-05 Issue mitigated

C4 issue

H-05: Inflation of ggAVAX share price by first depositor

Comments

This issue is very well known for ERC-4626 contracts. An attacker can inflate the share price and take profit (or cause a DoS for subsequent depositors).

Mitigation

PR #49
This PR implemented a fix so that the deployer deposits an initial amount (1 ether, per the PR comment).
With this mitigation, the share price inflation attack is no longer possible.
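
As an illustration of the approach, a minimal sketch of a deployment step seeding the vault; the interface and the 1 ether seed value are assumptions based on the PR comment, not the actual PR #49 code:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch only: seed the ERC-4626 vault right after deployment so the first external
// depositor cannot meaningfully manipulate the share price.
interface ITokenggAVAXSketch {
	function depositAVAX() external payable returns (uint256 shares);
}

contract VaultSeederSketch {
	uint256 public constant SEED_AMOUNT = 1 ether; // assumed value from the PR comment

	function seed(ITokenggAVAXSketch ggAVAX) external payable {
		require(msg.value == SEED_AMOUNT, "send exactly the seed amount");
		// The minted shares remain with the protocol, anchoring the initial share price.
		ggAVAX.depositAVAX{value: msg.value}();
	}
}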

Conclusion

LGTM

Transferring the allotAmount reward to MultisigManager leads to the loss of reward when no wallet is enabled in the RewardsPool

Lines of code

https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/RewardsPool.sol#L229
https://github.com/multisig-labs/gogopool/blob/4bcef8b1d4e595c9ba41a091b2ebf1b45858f022/contracts/contract/RewardsPool.sol#L227

Vulnerability details

Impact

Transferring the allotAmount reward to MultisigManager leads to the loss of reward

Proof of Concept

If we refer to the original M-21 finding:

code-423n4/2022-12-gogopool-findings#143

Division by zero error can block RewardsPool#startRewardCycle if all multisig wallet are disabled.

Because of this line:

https://github.com/code-423n4/2022-12-gogopool/blob/aec9928d8bdce8a5a4efe45f54c39d4fc7313731/contracts/contract/RewardsPool.sol#L229

	uint256 tokensPerMultisig = allotment / enabledCount;
	for (uint256 i = 0; i < enabledMultisigs.length; i++) {
		vault.withdrawToken(enabledMultisigs[i], ggp, tokensPerMultisig);
	}

The fix is that when there is no multisig wallet enabled, the allotment is transferred to the MultisigManager:

https://github.com/multisig-labs/gogopool/blob/4bcef8b1d4e595c9ba41a091b2ebf1b45858f022/contracts/contract/RewardsPool.sol#L227

// If there are no enabled multisigs then the tokens will just sit under the multisig manager contract
if (enabledCount == 0) {
	vault.transferToken("MultisigManager", ggp, allotment);
	return;
}

However, if we look into the MultisigManager.sol implementation, there is no way to retrieve the allotment, so the rewards that are meant to be distributed to the enabled multisig wallets are locked.

https://github.com/multisig-labs/gogopool/blob/4bcef8b1d4e595c9ba41a091b2ebf1b45858f022/contracts/contract/MultisigManager.sol#L17

As we can see in MultisigManager.sol, only the functions related to enabling/disabling multisigs are implemented; I do not see, in any other part of the code, an attempt to retrieve the balance held by MultisigManager.sol.

When vault.transferToken() is called:

function transferToken(
	string memory networkContractName,
	ERC20 tokenAddress,
	uint256 amount
) external onlyRegisteredNetworkContract {
	// Valid Amount?
	if (amount == 0) {
		revert InvalidAmount();
	}
	// Make sure the network contract is valid (will revert if not)
	getContractAddress(networkContractName);
	// Get contract keys
	bytes32 contractKeyFrom = keccak256(abi.encodePacked(getContractName(msg.sender), tokenAddress));
	bytes32 contractKeyTo = keccak256(abi.encodePacked(networkContractName, tokenAddress));
	// emit token transfer event
	emit TokenTransfer(contractKeyFrom, contractKeyTo, address(tokenAddress), amount);
	// Verify there are enough funds
	if (tokenBalances[contractKeyFrom] < amount) {
		revert InsufficientContractBalance();
	}
	// Update Balances
	tokenBalances[contractKeyFrom] = tokenBalances[contractKeyFrom] - amount;
	tokenBalances[contractKeyTo] = tokenBalances[contractKeyTo] + amount;
}

The token balance is removed from contractKeyFrom and is never added back:

tokenBalances[contractKeyFrom] = tokenBalances[contractKeyFrom] - amount;

Tools Used

Manual Review

Recommended Mitigation Steps

If enabledCount is 0, just return and do nothing, or make sure there is a method to retrieve the reward amount held in MultisigManager.sol and add it back, to avoid the loss of rewards.
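
For the first option, a sketch of the change to the block quoted above: keep the allotment under RewardsPool so a later cycle, once multisigs are re-enabled, can still distribute it (whether that is acceptable accounting-wise is an assumption to verify):

// If there are no enabled multisigs, keep the allotment under RewardsPool so it can
// be distributed in a later cycle once multisigs are re-enabled
if (enabledCount == 0) {
-	vault.transferToken("MultisigManager", ggp, allotment);
	return;
}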

Mitigation Confirmed for MR-Mitigation of M-02: Issue mitigated

According to ProtocolDAO.sol#L225-L230, upgradeContract() should revert when the contractName is an empty string (i.e. its bytes length is zero), as denoted by the first condition of the if block below:

		if (
			bytes(getString(keccak256(abi.encodePacked("contract.name", existingAddr)))).length == 0 ||
			getAddress(keccak256(abi.encodePacked("contract.address", contractName))) == address(0)
		) {
			revert ExistingContractNotRegistered();
		}

The following test confirms that an empty string will revert as expected, further assuring that the mitigated steps work as intended:

	function testUpgradeContractExistingNotRegistered() public {
		// setup existing contract
		address addr = randAddress();
		string memory name = "TestContract";

		vm.prank(guardian);
		dao.registerContract(name, addr);

		address newAddr = randAddress();

		// attempt upgrade with bad name
		vm.startPrank(guardian);
		vm.expectRevert(ProtocolDAO.ExistingContractNotRegistered.selector);
		dao.upgradeContract("BadName", addr, newAddr);

+		// attempt upgrade with empty string
+		vm.expectRevert(ProtocolDAO.ExistingContractNotRegistered.selector);
+		dao.upgradeContract("", addr, newAddr);


		// attempt upgrade with bad address
		vm.expectRevert(ProtocolDAO.ExistingContractNotRegistered.selector);
		dao.upgradeContract(name, randAddress(), newAddr);
		vm.stopPrank();
	}

Mitigation Confirmed for Mitigation of M-02 Issue mitigated

C4 issue

M-02: Coding logic of the contract upgrading renders upgrading contracts impractical

Comments

The original implementation deleted the addresses associated with the contract name just after setting them.
Because of that, it was practically very difficult to upgrade contracts.

Mitigation

PR #32
The mitigation resolved the issue and I confirm the original issue does not exist anymore.

Test

New tests have been added to confirm upgradability with the same names, and all are passing.

Conclusion

LGTM

Mitigation Confirmed for Mitigation of M-08 Issue mitigated

C4 issue

M-08: Recreated pools receive a wrong AVAX amount due to miscalculated compounded liquid staker amount

Comments

In the previous implementation, the new avaxNodeOpAmt and avaxLiquidStakerAmt were being set to mp.avaxNodeOpAmt + mp.avaxNodeOpRewardAmt.
The problem is that avaxNodeOpRewardAmt is always greater than avaxLiquidStakerRewardAmt.
So, assuming there is enough liquidity in the staking pool, the implementation favored the node operator so that the next cycle could use as much as possible.
But it assumes there is enough liquidity in the staking pool for the new cycle, and when that assumption breaks, unexpected problems arise.

Mitigation

PR #43
Now the protocol uses mp.avaxNodeOpAmt + mp.avaxLiquidStakerRewardAmt for the new avaxNodeOpAmt instead of mp.avaxNodeOpAmt + mp.avaxNodeOpRewardAmt.
This ensures there is always enough AVAX to start the new cycle.
Although some rewards will just accrue in the Vault and not be utilized for the new cycle (decreasing efficiency), I confirm that the mitigation resolved the original issue.

Suggestion

If the protocol cares about efficiency for node operators, we can check TokenggAVAX.amountAvailableForStaking() and pull as much as possible. Not a bug, but a suggestion for improvement.
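
A minimal sketch of the suggestion; TokenggAVAX.amountAvailableForStaking() is referenced from the codebase, while the helper below and its wiring into the recreation logic are assumptions:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Sketch only: when compounding into the next cycle, cap the liquid-staker top-up at
// whatever ggAVAX liquidity is actually available instead of leaving rewards idle.
library CompoundingSketch {
	function nextLiquidStakerAmt(uint256 desiredAmt, uint256 availableForStaking) internal pure returns (uint256) {
		// Pull as much as possible without exceeding TokenggAVAX.amountAvailableForStaking().
		return desiredAmt <= availableForStaking ? desiredAmt : availableForStaking;
	}
}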

Conclusion

LGTM

Mitigation Confirmed for Mitigation of M-13 Issue mitigated

C4 issue

M-13: slashing fails when node operator doesn't have enough staked GGP

Comments

The original implementation was not checking whether the node operator's staked GGP amount was at least the slash amount.
So technically it was possible for slashGGP() to revert for various reasons.
It is notable that this made an attack path feasible (H-06 MinipoolManager: node operator can avoid being slashed).

Mitigation

PR #41
This PR includes mitigation for various issues (H-03, H-06, M-13).
Focusing on issue M-13, it is now checked whether the staker has enough GGP staked.
If the staked GGP is not enough, the whole staked GGP is slashed instead of reverting.

Analysis

H-06 reported two scenarios where the reverts can happen.

  • The first case is when a malicious node op intentionally provides a long duration to trigger a revert on slash.
    This attack path is no longer feasible because there is now a sanity check on the duration parameter.
  • The second case is when the GGP price drops.
    For this case, I wonder if we need to slash AVAX from the node op's AVAXStake equal to the difference slashGGPAmt - staking.getGGPStake(owner).
    I know it might be too strict for node operators, but I am just flagging it.

Conclusion

LGTM

Mitigation Confirmed for MR-Mitigation of M-10: Issue mitigated

Since no tests were added to ensure that rewardsStartTime is reset to zero when staking.getAVAXValidatingHighWater(owner) == 0 && staking.getAVAXAssigned(owner) == 0, I have added the missing assertion to improve test coverage; it is a crucial affirmation of the ultimate purpose of the mitigation, i.e. resetting the rewards start time in cancelMinipool():

	function testCancelMinipool() public {
		uint256 duration = 2 weeks;
		uint256 depositAmt = 1000 ether;
		uint256 avaxAssignmentRequest = 1000 ether;
		uint128 ggpStakeAmt = 200 ether;

		vm.startPrank(nodeOp);
		ggp.approve(address(staking), MAX_AMT);
		staking.stakeGGP(ggpStakeAmt);
		MinipoolManager.Minipool memory mp1 = createMinipool(depositAmt, avaxAssignmentRequest, duration);
		vm.stopPrank();

		//will fail
		vm.expectRevert(MinipoolManager.MinipoolNotFound.selector);
		minipoolMgr.cancelMinipool(address(0));

		//will fail
		vm.expectRevert(MinipoolManager.OnlyOwner.selector);
		minipoolMgr.cancelMinipool(mp1.nodeID);

		//will fail
		int256 minipoolIndex = minipoolMgr.getIndexOf(mp1.nodeID);
		store.setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Error));
		vm.prank(nodeOp);

		//will fail
		vm.expectRevert(MinipoolManager.CancellationTooEarly.selector);
		minipoolMgr.cancelMinipool(mp1.nodeID);

		skip(5 seconds); //cancellation moratorium

		vm.prank(nodeOp);
		vm.expectRevert(MinipoolManager.InvalidStateTransition.selector);
		minipoolMgr.cancelMinipool(mp1.nodeID);
		store.setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Prelaunch));

		vm.startPrank(nodeOp);
		uint256 priorBalance = nodeOp.balance;
		minipoolMgr.cancelMinipool(mp1.nodeID);

		MinipoolManager.Minipool memory mp1Updated = minipoolMgr.getMinipool(minipoolIndex);

		assertEq(mp1Updated.status, uint256(MinipoolStatus.Canceled));
		assertEq(staking.getAVAXStake(mp1Updated.owner), 0);
		assertEq(staking.getAVAXAssigned(mp1Updated.owner), 0);
		assertEq(staking.getAVAXValidating(mp1Updated.owner), 0);
+		assertEq(staking.getRewardsStartTime(mp1Updated.owner), 0);
		assertEq(nodeOp.balance - priorBalance, mp1Updated.avaxNodeOpAmt);
	}

Node operators could tap on the last hour GGP stake

Lines of code

https://github.com/code-423n4/2022-12-gogopool/blob/main/contracts/contract/ClaimNodeOp.sol#L51
https://github.com/code-423n4/2022-12-gogopool/blob/main/contracts/contract/ProtocolDAO.sol#L30
https://github.com/code-423n4/2022-12-gogopool/blob/main/contracts/contract/ClaimNodeOp.sol#L82-L84

Vulnerability details

Impact

The mitigation steps are very intuitive in closing the previous loopholes. However, they will not prevent node operators from taking advantage of last-hour/last-minute openings to tremendously increase their GGP stakes, and hence rewards, via newly created minipools. This could run rampant, particularly if GGP tokens became a commodity in demand and the primary aim of node operators were to earn GGP rewards rather than AVAX rewards as validators.

Proof of Concept

Here is a typical exploit scenario:

  1. Bob first creates a primary minipool for 3 months with 1000 AVAX, say 16 days before the next GGP reward cycle hits, to give himself 2 days of cushion.
    File: ProtocolDAO.sol#L30
		setUint(keccak256("ProtocolDAO.RewardsEligibilityMinSeconds"), 14 days);
  2. Bob's RewardsStartTime is set adequately with block.timestamp, making him eligible for the next round of GGP reward distribution.
    File: ClaimNodeOp.sol#L51
		return (rewardsStartTime != 0 && elapsedSecs >= dao.getRewardsEligibilityMinSeconds());
  3. Bob's getAVAXAssigned(stakerAddr) != 0 for the next 3 months, so RewardsStartTime is never set to 0.
    File: ClaimNodeOp.sol#L82-L84
		if (staking.getAVAXAssigned(stakerAddr) == 0) {
			staking.setRewardsStartTime(stakerAddr, 0);
		}
  4. 24 - 48 hours before Rialto is going to call calculateAndDistributeRewards(), Bob adds 10 new minipools with MinipoolMaxAVAXAssignment, i.e. 10000 AVAX each, for only 2 weeks.
  5. Bob also makes sure the 150% collateralization ratio is fully capitalized on across all minipools.
  6. All things going well, Bob is going to be entitled to a sizable GGP reward split when getEffectiveGGPStaked() gets factored in, since it is stakerAddr specific, i.e. all associated minipoolIndices are lumped under the same owner's address.

At the end of the 14-day validation, all AVAX and GGP associated with the short-term node operations, along with their respective rewards, are withdrawn, waiting for the next round in which steps 4 - 6 are repeated.

Tools Used

Manual inspection

Recommended Mitigation Steps

Consider adding another condition before updating AVAXValidatingHighWater in recordStakingStart() as follows:

+ import {RewardsPool} from "./RewardsPool.sol";

+		RewardsPool rewardsPool = RewardsPool(getContractAddress("RewardsPool"));
+		uint256 startTime = rewardsPool.getRewardsCycleStartTime();
+		uint256 timeElapsed = block.timestamp - startTime;

-		if (staking.getAVAXValidatingHighWater(owner) < staking.getAVAXValidating(owner)) {
+		if (staking.getAVAXValidatingHighWater(owner) < staking.getAVAXValidating(owner) && timeElapsed < 14 days) {
			staking.setAVAXValidatingHighWater(owner, staking.getAVAXValidating(owner));
		}

This will push all the latecomers to the next reward cycle, where AVAXValidatingHighWater gets updated only after the next GGP reward distribution.

Mitigation Confirmed for MR-Mitigation of M-05: Issue mitigated

The mitigation steps have been well implemented. I have added an assertion to the following test that the GGP stake amount before and after Staking.sol is paused remains unchanged, even though an attempt to re-stake has been made. The graceful failure informing the user that they can claim but not restake could be a bit costly though, considering the whenNotPaused modifier has been moved from stakeGGP() to _stakeGGP():

	function testClaimAndRestakePaused() public {
		// start reward cycle
		skip(dao.getRewardsCycleSeconds());
		rewardsPool.startRewardsCycle();

		// deposit ggAVAX
		address nodeOp1 = getActorWithTokens("nodeOp1", MAX_AMT, MAX_AMT);
		vm.startPrank(nodeOp1);
		ggAVAX.depositAVAX{value: 2000 ether}();

		// stake ggp and create minipool
		ggp.approve(address(staking), MAX_AMT);
		staking.stakeGGP(100 ether);
		MinipoolManager.Minipool memory mp = createMinipool(1000 ether, 1000 ether, 2 weeks);
		vm.stopPrank();

		// launch minipool
		rialto.processMinipoolStart(mp.nodeID);

		// distribute rewards
		vm.prank(address(rialto));
		nopClaim.calculateAndDistributeRewards(nodeOp1, 200 ether);

		// pause staking
		vm.prank(address(ocyticus));
		dao.pauseContract("Staking");

		uint256 stakerRewards = staking.getGGPRewards(nodeOp1);
		uint256 previousBalance = ggp.balanceOf(nodeOp1);

		// ensure restaking fails
		vm.startPrank(nodeOp1);
		vm.expectRevert(BaseAbstract.ContractPaused.selector);
		nopClaim.claimAndRestake((stakerRewards / 2)); // claim half of their rewards

		//  claim entire reward amount
		nopClaim.claimAndRestake((stakerRewards)); // claim all rewards

		vm.stopPrank();
		
+		assertEq(staking.getGGPStake(nodeOp1), 100 ether); // still 100 GGP staked
		assertEq(ggp.balanceOf(nodeOp1), previousBalance + stakerRewards);
	}

Mitigation Confirmed for Mitigation of M-12 Issue mitigated

C4 issue

M-12: Cancellation of minipool may skip MinipoolCancelMoratoriumSeconds checking if it was cancelled before

Comments

The original implementation relied on getRewardsStartTime to check the cancel moratorium.
But this was a logic flaw, because rewardsStartTime is supposed to stay unchanged as long as there is at least one minipool.
This logic flaw made it possible to cancel a minipool before the moratorium, and it could be abused by attackers to create/cancel junk minipools, leading to griefing attacks.

Mitigation

PR #40
The mitigation introduced a new state variable creationTime on the minipool level that will track the creation time for every minipool.
And the cancellation moratorium is checked against the minipool creation time.

Analysis

After the mitigation, the state transitions happen as below.
(image: state transition diagram)
It is notable that creationTime is reset to zero in the function resetMiniPool(), which is called when Rialto calls recordStakingEndThenMaybeCycle().
But I believe it is fine because claimAndInitiateStaking() is called right after that.

Conclusion

LGTM

Mitigation Confirmed for Mitigation of H-01 Issue mitigated

C4 issue

H-01: AVAX Assigned High Water is updated incorrectly

Comments

The original implementation used AVAXHighWater to calculate the reward for the node operator.
But AVAXHighWater was not being tracked accurately and the wrong amount of rewards was sent.
The main reason was using a single state variable, AVAXHighWater, to track two things.

Mitigation

PR #25
The mitigation added a new state variable AVAXValidating and uses it to track the current contribution status.
AVAXHighWater is now dedicated to the calculation of rewards and is updated from AVAXValidating (if necessary) on every call to recordStakingStart.
The important state variables and their changes at the various states are shown below.
The mitigation resolved the original issue.
(image: state variables at each minipool state)

Test

New test cases are added for the new state variable and all are passing.

Conclusion

LGTM

The mitigation does not sufficiently address the bug report M-02

Lines of code

https://github.com/multisig-labs/gogopool/blob/4bcef8b1d4e595c9ba41a091b2ebf1b45858f022/contracts/contract/ProtocolDAO.sol#L225

Vulnerability details

Impact

M-02: The mitigation does not sufficiently address the bug report M-02

Proof of Concept

If we look into the M-02 report

code-423n4/2022-12-gogopool-findings#742

The report points out two issues:

Implication 1

The above function upgradeExistingContract registers the upgraded contract first, then unregisters the existing contract. This leads to the requirement that the upgraded contract name must be different from the existing contract name.

Since using the same name after upgrading will run into trouble with current coding logic, a safeguard should be in place to make sure two names are really different. For example, put this statement in the upgradeExistingContract function:

require(newName != existingName, "Name not changed");

where existingName can be obtained using:

string memory existingName = store.getString(keccak256(abi.encodePacked("contract.name", existingAddr)));

There is no such check in the mitigation code that ensures newName ≠ existingName; the mitigation only makes sure the existing contract is registered.

if (
bytes(getString(keccak256(abi.encodePacked("contract.name", existingAddr)))).length == 0 ||
getAddress(keccak256(abi.encodePacked("contract.address", contractName))) == address(0)
) {
revert ExistingContractNotRegistered();
}

Also, the code still registers the new contract and then removes the existing contract, instead of removing the existing contract first and then registering the new one.

setAddress(keccak256(abi.encodePacked("contract.address", contractName)), newAddr);
setString(keccak256(abi.encodePacked("contract.name", newAddr)), contractName);
setBool(keccak256(abi.encodePacked("contract.exists", newAddr)), true);

deleteString(keccak256(abi.encodePacked("contract.name", existingAddr)));
deleteBool(keccak256(abi.encodePacked("contract.exists", existingAddr)));

Also, when removing the existing contract, the line of code below is missing; failing to remove the old contract address leads to corrupted storage.

deleteAddress(keccak256(abi.encodePacked("contract.address", name)));

Implication 2:

If we really want a different name for an upgraded contract, we then get into more serious troubles: We have to upgrade other contracts that reference the upgraded contract since contract names are referenced mostly hardcoded (for security considerations). This may lead to a very complicated issue because contracts are cross-referenced.

For example, contract ClaimNodeOp references contracts RewardsPool, ProtocolDAO and Staking. At the same time, contract ClaimNodeOp is referenced by contracts RewardsPool and Staking. This means that:

  1. If contract ClaimNodeOp was upgraded, which means the contract name ClaimNodeOp was changed;
  2. This requires contracts RewardsPool and Staking to be upgraded (with new names) in order to correctly reference to newly named ClaimNodeOp contract;
  3. This further requires those contracts that reference RewardsPool or Staking to be upgraded in order to correctly reference them;
  4. and this further requires those contracts that reference the above upgraded contracts to be upgraded ...
  5. This may lead to complicated code management issue and expose new vulnerabilites due to possible incomplete code adaptation.
  6. This may render the contracts upgrading impractical.

There is no mitigation implemented in place to ensure that if ClaimNodeOp is upgraded, the contracts that reference it (RewardsPool, Staking, etc.) are updated as well.

Tools Used

Manual Review

Recommended Mitigation Steps

I think the original M-02 report's recommendation is well written:

code-423n4/2022-12-gogopool-findings#742

  1. Upgrading a contract does not have to change contract names, especially in such a complicated system wherein contracts are cross-referenced in a hardcoded way. I would suggest not changing contract names when upgrading contracts.
  2. In the upgradeExistingContract function definition, swap the function call sequence between registerContract() and unregisterContract() so that contract names can remain unchanged after upgrading.

I think removing the existing contract and then registering the new contract is very important, to avoid having to change the name or overwriting the updated contract's registration with address(0):

deleteString(keccak256(abi.encodePacked("contract.name", existingAddr)));
deleteBool(keccak256(abi.encodePacked("contract.exists", existingAddr)));
deleteAddress(keccak256(abi.encodePacked("contract.address", name)));

setAddress(keccak256(abi.encodePacked("contract.address", contractName)), newAddr);
setString(keccak256(abi.encodePacked("contract.name", newAddr)), contractName);
setBool(keccak256(abi.encodePacked("contract.exists", newAddr)), true);

Mitigation Confirmed for Mitigation of M-03: See comments

finishFailedMinipoolByMultisig() is a useful function for the multisig to invoke in case of a human review of an error. The issue can be resolved, keeping the function if need be, by just removing setUint() from the code logic as follows:

File: MinipoolManager.sol#L528-L533

	function finishFailedMinipoolByMultisig(address nodeID) external {
		int256 minipoolIndex = onlyValidMultisig(nodeID);
		requireValidStateTransition(minipoolIndex, MinipoolStatus.Finished);
-		setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Finished));
		emit MinipoolStatusChanged(nodeID, MinipoolStatus.Finished);
	}

Alternatively, the status could be reset to Error:

	function finishFailedMinipoolByMultisig(address nodeID) external {
		int256 minipoolIndex = onlyValidMultisig(nodeID);
		requireValidStateTransition(minipoolIndex, MinipoolStatus.Finished);
-		setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Finished));
+		setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Error));
		emit MinipoolStatusChanged(nodeID, MinipoolStatus.Finished);
	}

Either way, node operators will be able to withdraw their funds from the Error state when the following code line performs its check in withdrawMinipoolFunds():

File: MinipoolManager.sol#L304

		requireValidStateTransition(minipoolIndex, MinipoolStatus.Finished);

Deficiency of slashed GGP amount should be made up from node operator's AVAX

Lines of code

https://github.com/multisig-labs/gogopool/blob/9ad393c825e6ac32f3f6c017b4926806a9383df1/contracts/contract/MinipoolManager.sol#L731-L733

Vulnerability details

Impact

If the staked GGP doesn't cover the slash amount, slashing only what is staked will not be fair to the liquid stakers. Slashing is rare, and the current 14-day validation cycle, whose slash amount is typically 1/26 of the minimum GGP staked, is unlikely to bump into this situation unless there is a nosedive of the GGP price in AVAX. The deficiency should nonetheless be made up from avaxNodeOpAmt should this unfortunate situation happen.

Proof of Concept

File: MinipoolManager.sol#L731-L733

		if (staking.getGGPStake(owner) < slashGGPAmt) {
			slashGGPAmt = staking.getGGPStake(owner);
		}

As can be seen from the code block above, in extreme and unforeseen cases the difference between staking.getGGPStake(owner) and slashGGPAmt can be significant. Liquid stakers ultimately care about being adequately compensated, preferably in AVAX.

Tools Used

Manual inspection

Recommended Mitigation Steps

Consider having the affected if block refactored as follows:

		Staking staking = Staking(getContractAddress("Staking"));
		if (staking.getGGPStake(owner) < slashGGPAmt) {
+			// Compute the shortfall before capping slashGGPAmt at the available GGP stake
+			uint256 diff = slashGGPAmt - staking.getGGPStake(owner);
			slashGGPAmt = staking.getGGPStake(owner);

+			Oracle oracle = Oracle(getContractAddress("Oracle"));
+			(uint256 ggpPriceInAvax, ) = oracle.getGGPPriceInAVAX();
+			uint256 diffInAVAX = diff.mulWadUp(ggpPriceInAvax);
+			staking.decreaseAVAXStake(owner, diffInAVAX);
+			Vault vault = Vault(getContractAddress("Vault"));
+			vault.transferAVAX("ProtocolDAO", diffInAVAX);
		}

Mitigation Confirmed for Mitigation of M-09 Issue mitigated

C4 issue

M-09: State Transition: Minipools can be created using other operator's AVAX deposit via recreateMinipool

Comments

In the previous implementation, there were unnecessary state transitions and it was possible for malicious actors to steal other users' minipools.

Mitigation

PR #23
This PR includes mitigation for multiple issues (H-04) and I will focus on the state transition in this report.

The new state transition after mitigation is as below.
(image: new state transition diagram)

Confirmed that the reported attack path does not exist anymore after the mitigation.

Note

There is another issue found in the mitigation for H-04 and it is related to this one.

Conclusion

LGTM

Mitigation Confirmed for MR-Mitigation of H-05: See comments

An additional measure could be added by implementing slippage protection. This would greatly complement the mitigation steps and further protect users.

For instance, deposit() may have a minSharesOut parameter added and the function logic refactored as follows:

File: ERC4626Upgradeable.sol#L42-L54

-	function deposit(uint256 assets, address receiver) public virtual returns (uint256 shares) {
+	function deposit(uint256 assets, address receiver, uint256 minSharesOut) public virtual returns (uint256 shares) {
		// Check for rounding error since we round down in previewDeposit.
		require((shares = previewDeposit(assets)) != 0, "ZERO_SHARES");

+		require(shares >= minSharesOut, "MINIMUM_SHARES_NOT_MATCHED");

		// Need to transfer before minting or ERC777s could reenter.
		asset.safeTransferFrom(msg.sender, address(this), assets);

		_mint(receiver, shares);

		emit Deposit(msg.sender, receiver, assets, shares);

		afterDeposit(assets, shares);
	}

Similarly, mint() may have a maxAmountIn parameter added and the function logic refactored as follows:

File: ERC4626Upgradeable.sol#L56-L67

-	function mint(uint256 shares, address receiver) public virtual returns (uint256 assets) {
+	function mint(uint256 shares, address receiver, uint256 maxAmountIn) public virtual returns (uint256 assets) {
		assets = previewMint(shares); // No need to check for rounding error, previewMint rounds up.

+		require(assets <= maxAmountIn, "MAXIMUM_AMOUNT_EXCEEDED");

		// Need to transfer before minting or ERC777s could reenter.
		asset.safeTransferFrom(msg.sender, address(this), assets);

		_mint(receiver, shares);

		emit Deposit(msg.sender, receiver, assets, shares);

		afterDeposit(assets, shares);
	}

Mitigation Confirmed for Mitigation of H-06 Issue mitigated

C4 issue

H-06: MinipoolManager: node operator can avoid being slashed

Comments

In the original implementation, there were a few scenarios where malicious node operators could avoid being slashed.

Mitigation

PR #41
This PR includes mitigation for various issues (H-03, H-06, M-13).
Just focusing on the issue H-06, the original issue reported two scenarios.

  • The first case is when a malicious node op intentionally provides a long duration to trigger a revert on slash.
    This attack path is no longer feasible because there is now a sanity check on the duration parameter.
  • The second case is when the GGP price drops.
    It is now checked whether the staker has enough GGP staked, and the whole GGP stake is slashed if it is not enough.

Analysis

  • For the second case of the original issue, I wonder if we need to slash AVAX from the node op's AVAXStake equal to the difference slashGGPAmt - staking.getGGPStake(owner).
    I know it might be too strict for node operators, but I am just flagging it.
  • I checked other possible scenarios where slash() can revert.
    If the multisig calls recordStakingEnd() with a wrong endTime or msg.value, it will still revert, but assuming multisigs are honest it seems good now.

Conclusion

LGTM

Mitigation Confirmed for Mitigation of M-19 Issue mitigated

C4 issue

M-19: MinipoolManager: recordStakingError function does not decrease minipoolCount leading to too high GGP rewards for staker

Comments

The protocol was using a state variable minipoolCount to track the node op's contribution.
This was used in ClaimNodeOp.calculateAndDistributeRewards() to reset the rewardsStartTime.
But in the original implementation, minipoolCount was not decreased when a rialto failed to start staking and called recordStakingError(), and malicious node ops could abuse this to stay eligible for rewards.

Mitigation

PR #42
The variable minipoolCount is removed completely and a state variable avaxAssigned is used instead to track the existence of minipools and eligibility for rewards.
This state tracks the AVAX amount assigned to a node operator, and we need to make sure it reflects the reward-eligible status of node operators.

The new state transition after mitigation is as below. (focused on avaxAssigned change)
(image: state transition diagram, focused on avaxAssigned changes)

Confirmed that avaxAssigned is increased and decreased properly for all state transitions.

Suggestion

Please confirm that it is correct for a minipool with positive avaxAssigned to be eligible for rewards.
After the mitigation, queued pools are in a reward-eligible state.
So a node operator can keep queueing minipools and stay in a reward-eligible state.
This might be fine because, after all, AVAXValidatingHighWater is used to determine the reward amount.
Just flagging it for a second look.

Conclusion

LGTM

Mitigation Confirmed for Mitigation of M-03 Issue mitigated

C4 issue

M-03: NodeOp funds may be trapped by an invalid state transition

Comments

The original implementation had a path where the multisig could recreate minipools from the Error state.
In that case, the node operator's funds were locked in the protocol.

Mitigation

PR #20
The protocol has now removed the unnecessary state transitions completely and the new state flow is as below.
(image: new state transition diagram)

With this new state flow, I confirm that it is always possible for the node operator to withdraw their funds.

Conclusion

LGTM

Mitigation Confirmed for MR-Mitigation of M-09: Issue mitigated

The mitigation steps are working as intended. To further confirm that, I have written a separate test where the minipool duration has not been exceeded, i.e. skip(duration / 2), to ensure the status transitions correctly to Launched in the atomic execution originating from recordStakingEndThenMaybeCycle(). As logically expected, recordStakingEndThenMaybeCycle() ends by invoking claimAndInitiateStaking(), which sets the minipool status to Launched:

File: MinipoolManager.sol#L349

		setUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".status")), uint256(MinipoolStatus.Launched));

	function testCycleMinipoolDurationNotExceeded() public {
		uint256 duration = 4 weeks;
		uint256 depositAmt = 1000 ether;
		uint256 avaxAssignmentRequest = 1000 ether;
		uint256 validationAmt = depositAmt + avaxAssignmentRequest;
		uint128 ggpStakeAmt = 200 ether;

		// stake ggp
		vm.startPrank(nodeOp);
		ggp.approve(address(staking), MAX_AMT);
		staking.stakeGGP(ggpStakeAmt);
		MinipoolManager.Minipool memory mp = createMinipool(depositAmt, avaxAssignmentRequest, duration);
		vm.stopPrank();

		// deposit liquid staker funds
		address liqStaker1 = getActorWithTokens("liqStaker1", MAX_AMT, MAX_AMT);
		vm.prank(liqStaker1);
		ggAVAX.depositAVAX{value: MAX_AMT}();

		// launch minipool
		rialto.processMinipoolStart(mp.nodeID);

		// Give rialto the rewards it needs
		uint256 rewards = 10 ether;
		deal(address(rialto), address(rialto).balance + rewards);

		skip(duration / 2);

		// attempt to cycle while the requested duration has not been exceeded
		vm.startPrank(address(rialto));
		minipoolMgr.recordStakingEndThenMaybeCycle{value: validationAmt + rewards}(mp.nodeID, block.timestamp, rewards);
		vm.stopPrank();

		mp = minipoolMgr.getMinipoolByNodeID(mp.nodeID);
		assertEq(mp.status, uint256(MinipoolStatus.Launched));
	}

Mitigation Confirmed for Mitigation of M-10 Issue mitigated

C4 issue

M-10: Functions cancelMinipool() doesn't reset the value of the RewardsStartTime for user when user's minipoolcount is zero

Comments

The protocol was using a state variable minipoolCount to track the node op's contribution.
This was used in ClaimNodeOp.calculateAndDistributeRewards() to reset the rewardsStartTime.
But in the original implementation, rewardsStartTime was not being reset on cancelMinipool(), and malicious node ops could abuse this to stay eligible for rewards.

Mitigation

PR #51
The variable minipoolCount is removed completely and a state variable avaxAssigned is used instead to track the existence of minipools and eligibility for rewards.
This state tracks the AVAX amount assigned to a node operator, and we need to make sure it reflects the reward-eligible status of node operators.
The mitigation uses this new state to check whether there is a queued pool for the same node operator on cancellation.
Confirmed that this fixes the original issue.

Conclusion

LGTM

Mitigation Confirmed for Mitigation of H-03 Issue mitigated

C4 issue

H-03: node operator is getting slashed for full duration even though rewards are distributed based on a 14 day cycle

Comments

In the original implementation, the slashing logic was wrong and node operators were at risk of being slashed more than is fair.

Mitigation

PR #41
After mitigation, the protocol tracks the startTime and endTime of every validation cycle and slashes based on the actual cycle duration.
The original issue does not exist anymore.

Note

Regarding the new state variables startTime and endTime, I have doubts whether it is 100% guaranteed that startTime[n+1]=endTime[n] for any n-th cycle. I explained the concern in the report for H-04.

Test

New test cases were added for the new state variables and all are passing.

Conclusion

LGTM

Mitigation Confirmed for Mitigation of M-17 Issue mitigated

C4 issue

M-17: NodeOp can get rewards even if there was an error in registering the node as a validator

Comments

The protocol previously had an unnecessary state transition flow, Staking -> Error.
Even under the assumption that Rialto is not malicious, this was an obvious flaw in the state transitions.
It confused me personally as well, because the transitions made it look as though a node could become Error in the middle of a staking cycle, which is not possible according to the Avalanche documentation.
(Original state transition diagram: Imgur image)

Mitigation

PR #28
The unnecessary transition flow is now prohibited by the mitigation.

The new state transitions after the mitigation are shown below, and all of them now look reasonable.
(Post-mitigation state transition diagram: Imgur image)
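
For illustration, a transition guard enforcing this could look roughly like the sketch below (simplified and written from memory, not verbatim from PR #28; the enum mirrors the protocol's MinipoolStatus as I recall it): the key change is that a pool in Staking may now only advance to Withdrawable, so the Staking -> Error edge no longer exists.

	// SPDX-License-Identifier: MIT
	pragma solidity 0.8.17;

	// Simplified sketch of a state-transition guard, not verbatim from the codebase.
	enum MinipoolStatus {
		Prelaunch,
		Launched,
		Staking,
		Withdrawable,
		Finished,
		Canceled,
		Error
	}

	contract TransitionSketch {
		error InvalidStateTransition();

		// After the mitigation, a pool in Staking can only move to Withdrawable;
		// the old Staking -> Error edge is gone.
		function requireValidStateTransition(MinipoolStatus from, MinipoolStatus to) public pure {
			bool ok;
			if (from == MinipoolStatus.Prelaunch) {
				ok = (to == MinipoolStatus.Launched || to == MinipoolStatus.Canceled);
			} else if (from == MinipoolStatus.Launched) {
				ok = (to == MinipoolStatus.Staking || to == MinipoolStatus.Error);
			} else if (from == MinipoolStatus.Staking) {
				ok = (to == MinipoolStatus.Withdrawable); // Error no longer allowed here
			} else if (from == MinipoolStatus.Withdrawable || from == MinipoolStatus.Error) {
				ok = (to == MinipoolStatus.Finished || to == MinipoolStatus.Prelaunch);
			} else {
				// Finished / Canceled pools can only be relaunched as a fresh Prelaunch pool
				ok = (to == MinipoolStatus.Prelaunch);
			}
			if (!ok) revert InvalidStateTransition();
		}
	}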

Conclusion

LGTM

Mitigation Confirmed for MR-Mitigation of M-12: Issue mitigated

The mitigation appears to be working as intended. I further confirmed it by splitting the original skip(5 seconds) into skip(3 seconds) and skip(2 seconds) and checking that cancelMinipool() reverts with a CancellationTooEarly error while the cancellation minimum time has not yet passed. As expected, the revert occurs after skip(3 seconds), and all remaining assertions pass once MinipoolCancelMoratoriumSeconds (currently set to 5 seconds throughout the test suites) has elapsed:

	function testAVAXValidatingHighWaterMarkCancelledMinipool() public {
		uint256 duration = 2 weeks;
		uint256 depositAmt = dao.getMinipoolMinAVAXAssignment();
		uint256 ggpStakeAmt = depositAmt.mulWadDown(dao.getMinCollateralizationRatio());

		// Liq Stakers deposit all their AVAX and get ggAVAX in return
		vm.prank(liqStaker3);
		ggAVAX.depositAVAX{value: (ONE_K * 7)}();

		vm.startPrank(nodeOp1);
		ggp.approve(address(staking), ggpStakeAmt);
		staking.stakeGGP(ggpStakeAmt);
		MinipoolManager.Minipool memory mp1 = createMinipool(depositAmt, depositAmt, duration);
		vm.stopPrank();

		assertEq(staking.getAVAXAssigned(nodeOp1), depositAmt);
		assertEq(staking.getAVAXValidatingHighWater(nodeOp1), 0);
		mp1 = rialto.processMinipoolStart(mp1.nodeID);
		assertEq(staking.getAVAXValidatingHighWater(nodeOp1), depositAmt);
		skip(mp1.duration);
		// avax rewards
		mp1 = rialto.processMinipoolEndWithRewards(mp1.nodeID);
		vm.prank(nodeOp1);
		minipoolMgr.withdrawMinipoolFunds(mp1.nodeID);

		assertTrue(nopClaim.isEligible(nodeOp1), "isEligible");
		assertEq(staking.getAVAXAssigned(nodeOp1), 0);
		assertEq(staking.getAVAXValidatingHighWater(nodeOp1), depositAmt);
		assertGt(staking.getEffectiveGGPStaked(nodeOp1), 0);

		vm.startPrank(nodeOp1);
		MinipoolManager.Minipool memory mp2 = createMinipool(depositAmt, depositAmt, duration);
		assertEq(staking.getAVAXValidatingHighWater(nodeOp1), depositAmt);
-		skip(5 seconds); //cancel min time
-		minipoolMgr.cancelMinipool(mp2.nodeID);
+		skip(3 seconds);
+		vm.expectRevert(MinipoolManager.CancellationTooEarly.selector);
+		minipoolMgr.cancelMinipool(mp2.nodeID); // should revert since not enough time has passed

+		skip(2 seconds); // cancel min time is 5 seconds
+		minipoolMgr.cancelMinipool(mp2.nodeID); // should work since 5 seconds have passed
		vm.stopPrank();

		// We canceled mp2, but we should still have a highwater mark of mp1 and be eligible for rewards
		assertTrue(nopClaim.isEligible(nodeOp1), "isEligible");
		assertEq(staking.getAVAXAssigned(nodeOp1), 0);
		assertEq(staking.getAVAXValidatingHighWater(nodeOp1), depositAmt);
		assertGt(staking.getEffectiveGGPStaked(nodeOp1), 0);
	}

Mitigation Confirmed for MR-Mitigation of H-03: Issue mitigated

The mitigation steps are flawlessly implemented.

I have written an additional test confirming that the recorded cycle duration is 14 days, the standard validation period currently adopted by the protocol, to further affirm the mitigated logic:

   function testCycleDuration() public {
   	uint256 duration = 2 weeks;
   	uint256 depositAmt = 1000 ether;
   	uint256 avaxAssignmentRequest = 1000 ether;
   	uint128 ggpStakeAmt = 200 ether;


   	vm.startPrank(nodeOp);
   	ggp.approve(address(staking), MAX_AMT);
   	staking.stakeGGP(ggpStakeAmt);
   	MinipoolManager.Minipool memory mp1 = createMinipool(depositAmt, avaxAssignmentRequest, duration);
   	vm.stopPrank();

   	//Manually set their GGP stake to 1, to ensure that the GGP slash amount will be more than the GGP staked.
   	int256 stakerIndex = staking.getIndexOf(address(nodeOp));
   	store.subUint(keccak256(abi.encodePacked("staker.item", stakerIndex, ".ggpStaked")), ggpStakeAmt - 1);

   	address liqStaker1 = getActorWithTokens("liqStaker1", MAX_AMT, MAX_AMT);
   	vm.prank(liqStaker1);
   	ggAVAX.depositAVAX{value: MAX_AMT}();

   	rialto.processMinipoolStart(mp1.nodeID);

   	skip(duration);

   	rialto.processMinipoolEndWithoutRewards(mp1.nodeID);

   	// Look up the minipool's storage index so we can read the recorded cycle bounds
   	int256 minipoolIndex = minipoolMgr.getIndexOf(mp1.nodeID);
   	uint256 cycleDuration = store.getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".endTime"))) -
   		store.getUint(keccak256(abi.encodePacked("minipool.item", minipoolIndex, ".startTime")));

   	assertEq(cycleDuration, 14 days);
   }

Nevertheless, as noted in the GoGoPool LitePaper:

"If a node operator has excessively low uptime, stakers are compensated from the GGP insurance put up by the Node Operator. This socializes the risk of being matched with a bad operator, and minimizes any potential losses. Slashed GGP can be sold to token holders at a discounted rate, with AVAX proceeds awarded to Liquid Stakers."

Liquid stakers are generally more concerned about being compensated in AVAX rather than GGP.

GGP is initialized with TotalGGPCirculatingSupply == 18_000_000 ether. However, an exchange is needed to at least allow node operators to swap AVAX or ggAVAX for GGP; without one, no one would be able to acquire GGP to stake before creating a minipool.

Additionally, with an exchange in place, liquid stakers would be able to sell the slashed GGP awarded to them at their own discretion instead of at terms dictated by the Protocol DAO.
