opentensor / subtensor
Bittensor Blockchain Layer
License: The Unlicense
Type of nodes to define in the docker compose file:
We aim to fully integrate the functionality for delegates to set and adjust their take values per subnet, including the ability to increase or decrease these values. This involves updating the do_become_delegate
function, modifying the existing DelegateInfo
struct to include subnet-specific take values, and ensuring that delegates can dynamically adjust their take values for specific subnets.
By providing delegates with the flexibility to manage their take values on a per-subnet basis, we enable them to tailor their participation and rewards strategy according to their preferences and the unique characteristics of each subnet.
- The DelegateInfo struct should be modified to include a new field subnet_takes of type Vec<(Compact<u16>, Compact<u16>)> to store subnet-specific take values.
- The do_become_delegate function should be updated to accept a list of subnet IDs and their corresponding take values, allowing delegates to set initial take values for multiple subnets upon becoming a delegate.
- A migration should add the subnet_takes field to the DelegateInfo struct, ensuring a smooth transition for existing delegates.
- The do_increase_take and do_decrease_take functions should be adjusted to handle subnet-specific take adjustments, ensuring that increases or decreases are performed within the constraints of the system's rules.

Update the DelegateInfo struct in lib.rs to include the subnet_takes field:
#[derive(Decode, Encode, PartialEq, Eq, Clone, Debug)]
pub struct DelegateInfo<T: Config> {
// ...
subnet_takes: Vec<(Compact<u16>, Compact<u16>)>, // New field: Vec of (subnet ID, take value)
}
This adds the subnet_takes field to the DelegateInfo struct.

Update the do_become_delegate function in staking.rs to accept a list of subnet IDs and their corresponding take values:
// Update the do_become_delegate function
pub fn do_become_delegate(
origin: T::RuntimeOrigin,
hotkey: T::AccountId,
subnet_takes: Vec<(u16, u16)>,
) -> dispatch::DispatchResult {
// ...
let mut delegate_info = DelegateInfo {
// ...
subnet_takes: subnet_takes.iter().map(|(id, take)| (Compact(*id), Compact(*take))).collect(),
};
// ...
}
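As a standalone illustration of the bookkeeping described above, the following plain-Rust sketch (not pallet code) shows a per-subnet take stored as a Vec of (subnet ID, take) pairs being looked up and adjusted. The DelegateTakes type, MAX_TAKE constant, and its value are hypothetical stand-ins, not the actual runtime parameters.

```rust
// Standalone sketch of per-subnet take storage and adjustment.
// MAX_TAKE is illustrative (~18% of u16::MAX), not the chain's value.
const MAX_TAKE: u16 = 11_796;

#[derive(Debug, Default)]
struct DelegateTakes {
    subnet_takes: Vec<(u16, u16)>, // (subnet ID, take value)
}

impl DelegateTakes {
    /// Returns the take for a subnet, if one has been set.
    fn take_for_subnet(&self, netuid: u16) -> Option<u16> {
        self.subnet_takes
            .iter()
            .find(|(id, _)| *id == netuid)
            .map(|(_, take)| *take)
    }

    /// Sets or updates the take for a subnet, rejecting values above the cap.
    fn set_take(&mut self, netuid: u16, take: u16) -> Result<(), &'static str> {
        if take > MAX_TAKE {
            return Err("take exceeds maximum");
        }
        match self.subnet_takes.iter_mut().find(|(id, _)| *id == netuid) {
            Some(entry) => entry.1 = take,
            None => self.subnet_takes.push((netuid, take)),
        }
        Ok(())
    }
}
```

The same Vec<(u16, u16)> shape mirrors the proposed subnet_takes field, so a delegate can carry a distinct take per subnet and adjust each independently.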
Adjust the do_increase_take and do_decrease_take functions in staking.rs to handle subnet-specific take increases and decreases.

Currently, the Subtensor layer doesn't include the proxy pallet, which is a key feature for securely delegating operations, such as staking, to a third party.
Add the proxy pallet to the Subtensor layer.
No response
No response
Need a lint (or something) preventing any kind of panicking or direct array indexing in pallets
AC:
We need a safe type that allows sparse matrix operations that will not panic and does not provide panicking operations or methods. Everything should be infallible or return a Result or Option.
needed as part of #300
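A minimal sketch of what such a type could look like: a sparse row whose accessors return Option instead of indexing, so no method can panic. The SparseRow name and its API are illustrative assumptions, not the type the issue ultimately specifies.

```rust
// Sketch of a panic-free sparse row: every accessor returns Option,
// and no slice indexing is used anywhere.
#[derive(Debug, Default, Clone)]
struct SparseRow {
    entries: Vec<(usize, f64)>, // (column index, value), sorted by column
}

impl SparseRow {
    fn get(&self, col: usize) -> Option<f64> {
        self.entries
            .binary_search_by_key(&col, |(c, _)| *c)
            .ok()
            .and_then(|i| self.entries.get(i))
            .map(|(_, v)| *v)
    }

    fn set(&mut self, col: usize, value: f64) {
        match self.entries.binary_search_by_key(&col, |(c, _)| *c) {
            Ok(i) => {
                if let Some(entry) = self.entries.get_mut(i) {
                    entry.1 = value;
                }
            }
            // binary_search guarantees i <= len, so insert cannot panic here.
            Err(i) => self.entries.insert(i, (col, value)),
        }
    }

    /// Dot product of two sparse rows; infallible by construction.
    fn dot(&self, other: &SparseRow) -> f64 {
        self.entries
            .iter()
            .filter_map(|(c, v)| other.get(*c).map(|w| v * w))
            .sum()
    }
}
```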
The nodes currently reject some extrinsics (registration/stake/unstake) for rate limiting purposes, etc.
However, the error message for any case is "Transaction would exhaust the block limits" which is not descriptive/informative.
It would be good to be more informative to clients about why the extrinsic was rejected
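One way to be more informative is to give each rejection reason its own custom code that clients can decode into a message. The enum, codes, and strings below are illustrative assumptions sketching the idea, not the actual error values used by the node.

```rust
// Sketch: map each rejection reason to a distinct custom code instead
// of one opaque "would exhaust the block limits" message.
// All names and codes here are hypothetical.
#[derive(Debug, PartialEq)]
enum RejectReason {
    RegistrationRateLimited,
    StakeRateLimited,
    UnstakeRateLimited,
}

/// Custom code embedded in the transaction-invalid error.
fn custom_code(reason: &RejectReason) -> u8 {
    match reason {
        RejectReason::RegistrationRateLimited => 1,
        RejectReason::StakeRateLimited => 2,
        RejectReason::UnstakeRateLimited => 3,
    }
}

/// Human-readable message a client could display for each code.
fn describe(code: u8) -> &'static str {
    match code {
        1 => "registration rejected: rate limit exceeded for this interval",
        2 => "stake rejected: rate limit exceeded",
        3 => "unstake rejected: rate limit exceeded",
        _ => "transaction rejected for an unknown reason",
    }
}
```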
No response
No response
To recreate, clone subtensor and run scripts/localnet_setup.sh
./scripts/localnet_setup.sh *** Local testnet installation *** Installing substrate support libraries Substrate library script checksum not valid, exiting.
Right now we actually have a lot of //
comments that are intended to be ///
doc comments. This is rampant throughout our pallets. A really good way to fix this would be to enforce #[deny(missing_docs)]
at the crate level for each of our crates and then work through all the compile errors. Prob a big effort but will dramatically improve @rajkaramchedu's effort to improve the subtensor docs, and will require us to document things going forward as we create them.
AC:
- Add #[deny(missing_docs)] to all subtensor crates
- Convert // item-level comments to /// doc comments, ensuring they do not introduce any broken doc links
- cargo doc --workspace passing without any broken doc links

It appears that v1.0.0 does not support websockets, at least not through the --ws-port
flag. This currently breaks the localnet setup script (./scripts/localnet.sh
), and might have impacts upstream, as most of our services connect via websockets.
./scripts/localnet.sh
Node runs
*** Binary compiled
*** Building chainspec...
2024-04-12 17:19:46 Building chain spec
*** Chainspec built and output to file
*** Purging previous state...
*** Previous chainstate purged
*** Starting localnet nodes...
error: unexpected argument '--ws-port' found
tip: a similar argument exists: '--rpc-port'
Usage: node-subtensor --bob --port <PORT> --rpc-port <PORT> <--chain <CHAIN_SPEC>|--dev|--base-path <PATH>|--log <LOG_PATTERN>...|--detailed-log-output|--disable-log-color|--enable-log-reloading|--tracing-targets <TARGETS>|--tracing-receiver <RECEIVER>>
For more information, try '--help'.
error: unexpected argument '--ws-port' found
tip: a similar argument exists: '--rpc-port'
No response
M3 Max, OSX
No response
Hi, I'm currently operating a Bittensor node, and it seems to be stuck at block 2585474.
The logs indicate that the best block remains at 2585476, while the finalized block remains at 2585474. Does anyone have any suggestions on how to resolve this issue?
Additionally, I would greatly appreciate it if someone could provide the p2p address of a node that has successfully passed this block. I'd like to try using it as a peer for my node to see if it helps resolve the problem.
Thank you in advance for any assistance!
--base-path=/chain-data
--rpc-cors=all
--port=20540
--rpc-port=9933
--ws-port=9944
--ws-external
--rpc-external
--node-key=bb86e433fe0f1f662a6fdf93211d21fa1e72537865f2f58ddec8e63d6eab3348
--pruning=archive
--rpc-methods=Unsafe
--in-peers=25
--out-peers=25
--prometheus-external
--chain=/raw_spec.json
--in-peers-light=0
--max-runtime-instances=128
--ws-max-connections=10000
Expect the node to keep syncing the latest block
No response
Ubuntu VERSION="20.04.6 LTS (Focal Fossa)"
No response
Right now it is possible to have code that panics in pallets, extrinsics, etc., which can brick the chain. Ideally we disallow this at the clippy linting level so the CI will not allow such code to be merged. This is a tall order, because there are a bunch of instances currently where we do panic, so these all need to be fixed before this CI change will pass.
AC:
- No more unwrap()s, expect()s, unwrap_err()s, panic!s, unreachable!()s, or unimplemented!()s
- Deny unwrap()s in CI
- Deny expect()s in CI
- Deny unwrap_err()s in CI
- Deny panic!s in CI
- Deny unreachable!()s in CI
- Deny unimplemented!()s in CI

It's pretty annoying that the UI doesn't know at all what errors are if they haven't been hard-coded. "Unknown error" isn't a great user experience, and right now introducing new error types or errors in new situations creates a lot of churn with releases.
Substrate provides the metadata, which includes all info about errors, calls and events.
At first, we must guarantee all of them are well documented. In this issue, all docs for errors are checked.
From this unit test file for the metadata check, the structure is clear for other languages like Python and JS to parse.
https://github.com/opentensor/subtensor/blob/development/runtime/tests/metadata.rs
fixes #375
Currently, staking is affected by the existential deposit because it uses Currency::withdraw (and Currency::deposit for unstaking). The balances pallet has a locking mechanism for this, which does not affect the total balance of the account but effectively locks the currency.
The staking/unstaking logic of working with balances is located in remove_balance_from_coldkey_account and add_balance_to_coldkey_account
Stake full amount less transaction fees.
Result: Staked amount is less by ED (or account is wiped, unsure).
Staked amount can be full balance and account does not get dust-collected.
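A toy model (not the balances pallet) of why locking fixes this: a lock reserves stake without reducing the account's total balance, so the existential deposit is never violated, while a withdraw shrinks the total and risks reaping the account. The Account type and EXISTENTIAL_DEPOSIT value are illustrative assumptions.

```rust
// Toy comparison of lock-based vs withdraw-based staking.
#[derive(Debug)]
struct Account {
    total: u64,
    locked: u64,
}

const EXISTENTIAL_DEPOSIT: u64 = 500; // illustrative value

impl Account {
    /// Stake by locking: total stays unchanged, so the ED is unaffected
    /// and the full balance can be staked.
    fn lock_stake(&mut self, amount: u64) -> Result<(), &'static str> {
        let free = self.total.checked_sub(self.locked).ok_or("underflow")?;
        if amount > free {
            return Err("insufficient free balance");
        }
        self.locked += amount;
        Ok(())
    }

    /// Stake by withdrawing: total drops, and the account would be
    /// reaped if it falls below the existential deposit.
    fn withdraw_stake(&mut self, amount: u64) -> Result<(), &'static str> {
        let remaining = self.total.checked_sub(amount).ok_or("insufficient")?;
        if remaining < EXISTENTIAL_DEPOSIT {
            return Err("would drop below existential deposit");
        }
        self.total = remaining;
        Ok(())
    }
}
```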
No response
Any
No response
Description: We need CI jobs implemented in the subtensor repo for automatic checks for Rust using cargo, specifically cargo check, cargo fmt, and cargo fix.
AC:
- CI job running the cargo check command
- CI job running the cargo fmt command
- CI job running the cargo fix command

Right now part of #275; done, but could be merged separately if needed.
Running a new local subtensor keeps waiting for peers with the mainnet lite node command.
btcli w overview
returns 0 balance, while using the old running subtensor returns the correct balance.
2024-04-03 15:38:42 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.2kiB/s ⬆ 3.1kiB/s
2024-04-03 15:38:47 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 1.9kiB/s ⬆ 2.6kiB/s
2024-04-03 15:38:52 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.1kiB/s ⬆ 2.9kiB/s
2024-04-03 15:38:57 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 1.8kiB/s ⬆ 2.3kiB/s
2024-04-03 15:39:02 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.3kiB/s ⬆ 3.2kiB/s
2024-04-03 15:39:07 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.1kiB/s ⬆ 2.8kiB/s
2024-04-03 15:39:12 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.2kiB/s ⬆ 3.1kiB/s
2024-04-03 15:39:17 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 1.9kiB/s ⬆ 2.6kiB/s
2024-04-03 15:39:22 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.0kiB/s ⬆ 2.7kiB/s
2024-04-03 15:39:27 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 1.9kiB/s ⬆ 2.7kiB/s
2024-04-03 15:39:32 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.1kiB/s ⬆ 2.9kiB/s
2024-04-03 15:39:37 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.0kiB/s ⬆ 2.8kiB/s
2024-04-03 15:39:42 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.0kiB/s ⬆ 2.9kiB/s
2024-04-03 15:39:47 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.3kiB/s ⬆ 3.1kiB/s
2024-04-03 15:39:52 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.2kiB/s ⬆ 3.2kiB/s
2024-04-03 15:39:57 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.1kiB/s ⬆ 2.9kiB/s
2024-04-03 15:40:02 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 2.4kiB/s ⬆ 3.3kiB/s
2024-04-03 15:40:07 ⏩ Warping, Waiting for peers, 0.00 Mib (0 peers), best: #0 (0x2f05…6c03), finalized #0 (0x2f05…6c03), ⬇ 1.9kiB/s ⬆ 2.6kiB/s
git clone https://github.com/opentensor/subtensor.git
sudo ./scripts/run/subtensor.sh -e docker --network mainnet --node-type lite
btcli w overview # should return correct balance
ubuntu 22.02
No response
To ensure continuous reliability between our Subtensor and the Bittensor package, we need to implement a comprehensive GitHub Actions workflow. This workflow will automate the entire testing process, from building the blockchain node using the localnet.sh
script, to installing the Bittensor package from a configurable branch, and finally running the test_subtensor_integration.py
integration test.
The primary objective of this setup is to verify that any changes introduced to the subtensor codebase do not break or introduce regressions in the Bittensor Python code. By parameterizing the Bittensor repository branch, we can test against various development stages and release candidates, ensuring compatibility and robustness across different versions.
- Build the blockchain node using the localnet.sh script.
- The test_subtensor_integration.py integration test should be executed after successful installation of the Bittensor package.
- Add the workflow to the .github/workflows directory.
- Integrate the localnet.sh script into the workflow for building and starting the blockchain nodes.
- Run the test_subtensor_integration.py integration test.
Part of #300, we need to deny the ability to perform potentially-panicking and/or overflowing operations on number types in subtensor
AC:
- #[deny(arithmetic_overflow)]
- clippy::integer_arithmetic
- Avoid f32 / f64 entirely

I generally don't like (production) code to compile with warnings, and I get quite a few of them compiling subtensor with default settings. Would it be an idea to eliminate them? Most of the current warnings seem easy enough to eliminate. Some of them are hard to understand without in-depth knowledge of Rust and Substrate.
I would like to work together on this, e.g. by submitting a PR and iteratively working toward a warning-free build. At some point the equivalent of -Werror
(in C) could be enabled in the default build settings.
First I would like to know if such a PR has any chance of being accepted.
No response
No response
Currently, we don't have tests for the runtime to verify the pallets, their configuration, RPC, and so on.
Follow the solution from upstream; for reference, see https://github.com/polkadot-fellows/runtimes/blob/main/integration-tests/emulated/tests/collectives/collectives-polkadot/src/tests/fellowship_treasury.rs.
No response
No response
Building subtensor leads to a non-functioning binary (with fatal errors such as "runtime requires function imports which are not present on the host: 'env:ext_benchmarking_current_time_version_1'"
) which is solved by adding --features=runtime-benchmarks
as mentioned in the Discord. It would be nice if the README would reflect this requirement.
cargo build --release
"runtime requires function imports which are not present on the host: 'env:ext_benchmarking_current_time_version_1'"
I expect a working binary after performing the build instructions.
No response
Linux Ubuntu
It seems to be solved by adding --features=runtime-benchmarks
to the cargo build
command.
We want to move to a more formal release process characterized by the following points:
- PRs are labeled on-devnet, devnet-pass, on-testnet, and testnet-pass, respectively, and having both the devnet-pass and testnet-pass labels will be required by the CI to merge into main.

All work in #346.
AC:
- devnet / testnet stages (possibly using labels)
- CI-gated merges into main
The current governance mechanism in the Subtensor blockchain needs to be revised to introduce a new group called "SubnetOwners" alongside the existing "Triumvirate" and "Senate" groups. The goal is to establish a checks and balances system where a proposal must be accepted by the other two groups in order to pass.
For instance, if the Triumvirate proposes a change, both the SubnetOwners and Senate must accept it for the proposal to be enacted. Each acceptance group should have a configurable minimum threshold for proposal acceptance.
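The two-of-three acceptance rule above can be sketched as a standalone function: a proposal from one group passes only when each of the other two groups accepts it at or above its configured threshold. The group names come from the issue; the Tally shape and the threshold percentages are illustrative (the 60/50/40 values echo the parameter_types example further down, but are not final).

```rust
// Sketch of the checks-and-balances rule for the three governance groups.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Group {
    Triumvirate,
    Senate,
    SubnetOwners,
}

struct Tally {
    ayes: u32,
    members: u32,
}

impl Tally {
    /// ayes / members >= threshold_percent, in integer arithmetic.
    fn meets(&self, threshold_percent: u32) -> bool {
        self.ayes * 100 >= self.members * threshold_percent
    }
}

/// Illustrative per-group acceptance thresholds.
fn threshold(group: Group) -> u32 {
    match group {
        Group::Triumvirate => 60,
        Group::Senate => 50,
        Group::SubnetOwners => 40,
    }
}

/// A proposal passes when both non-proposing groups accept it.
fn proposal_passes(proposer: Group, tallies: &[(Group, Tally)]) -> bool {
    tallies
        .iter()
        .filter(|(g, _)| *g != proposer)
        .all(|(g, t)| t.meets(threshold(*g)))
}
```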
Create the SubnetOwners struct and associated storage items:
// runtime/src/lib.rs
// ...
pub struct SubnetOwners;
impl SubnetOwners {
fn is_member(account: &AccountId) -> bool {
// Implement logic to check if an account is a member of SubnetOwners
// ...
}
fn members() -> Vec<AccountId> {
// Implement logic to retrieve the list of SubnetOwners members
// ...
}
fn max_members() -> u32 {
// Implement logic to retrieve the maximum number of SubnetOwners members
// ...
}
}
// ...
Modify the propose function to include the new acceptance requirements:
// pallets/collective/src/lib.rs
// ...
#[pallet::call]
impl<T: Config<I>, I: 'static> Pallet<T, I> {
// ...
#[pallet::call_index(2)]
#[pallet::weight(/* ... */)]
pub fn propose(
origin: OriginFor<T>,
proposal: Box<<T as Config<I>>::Proposal>,
#[pallet::compact] length_bound: u32,
duration: BlockNumberFor<T>,
) -> DispatchResultWithPostInfo {
// ...
// Check if the proposer is a member of the Triumvirate
ensure!(T::CanPropose::can_propose(&who), Error::<T, I>::NotMember);
// ...
// Initialize vote trackers for Senate and SubnetOwners
let senate_votes = Votes {
index,
threshold: SenateThreshold::get(),
ayes: sp_std::vec![],
nays: sp_std::vec![],
end,
};
let subnet_owners_votes = Votes {
index,
threshold: SubnetOwnersThreshold::get(),
ayes: sp_std::vec![],
nays: sp_std::vec![],
end,
};
// Store the vote trackers
<SenateVoting<T, I>>::insert(proposal_hash, senate_votes);
<SubnetOwnersVoting<T, I>>::insert(proposal_hash, subnet_owners_votes);
// ...
}
// ...
}
// ...
// runtime/src/lib.rs
// ...
parameter_types! {
pub const TriumvirateThreshold: Permill = Permill::from_percent(60);
pub const SenateThreshold: Permill = Permill::from_percent(50);
pub const SubnetOwnersThreshold: Permill = Permill::from_percent(40);
}
// ...
Update the do_vote function to handle voting from the new SubnetOwners group:
// pallets/collective/src/lib.rs
impl<T: Config<I>, I: 'static> Pallet<T, I> {
// ...
pub fn do_vote(
who: T::AccountId,
proposal: T::Hash,
index: ProposalIndex,
approve: bool,
) -> DispatchResult {
// ...
// Check if the voter is a member of the Senate or SubnetOwners
if Senate::is_member(&who) {
// Update the Senate vote tracker
<SenateVoting<T, I>>::mutate(proposal, |v| {
if let Some(mut votes) = v.take() {
if approve {
votes.ayes.push(who.clone());
} else {
votes.nays.push(who.clone());
}
*v = Some(votes);
}
});
} else if SubnetOwners::is_member(&who) {
// Update the SubnetOwners vote tracker
<SubnetOwnersVoting<T, I>>::mutate(proposal, |v| {
if let Some(mut votes) = v.take() {
if approve {
votes.ayes.push(who.clone());
} else {
votes.nays.push(who.clone());
}
*v = Some(votes);
}
});
} else {
return Err(Error::<T, I>::NotMember.into());
}
// ...
}
// ...
}
let old_pallet = "Triumvirate";
let new_pallet = <Governance as PalletInfoAccess>::name();
// Note: move_pallet takes (old_pallet_name, new_pallet_name).
frame_support::storage::migration::move_pallet(
    old_pallet.as_bytes(),
    new_pallet.as_bytes(),
);
# bittensor/subtensor.py
class subtensor:
    # ...
    def get_subnet_owners_members(self, block: Optional[int] = None) -> Optional[List[str]]:
        subnet_owners_members = self.query_module("SubnetOwnersMembers", "Members", block=block)
        if not hasattr(subnet_owners_members, "serialize"):
            return None
        return subnet_owners_members.serialize()
# bittensor/subtensor.py
class subtensor:
    # ...
    def get_governance_members(self, block: Optional[int] = None) -> Optional[List[Tuple[str, Tuple[Union[GovernanceEnum, str], ...]]]]:
        senate_members = self.get_senate_members(block=block)
        subnet_owners_members = self.get_subnet_owners_members(block=block)
        triumvirate_members = self.get_triumvirate_members(block=block)
        if senate_members is None and subnet_owners_members is None and triumvirate_members is None:
            return None
        governance_members = {}
        for member in senate_members or []:
            governance_members[member] = (GovernanceEnum.Senate,)
        for member in subnet_owners_members or []:
            governance_members.setdefault(member, ())
            governance_members[member] += (GovernanceEnum.SubnetOwner,)
        for member in triumvirate_members or []:
            governance_members.setdefault(member, ())
            governance_members[member] += (GovernanceEnum.Triumvirate,)
        return list(governance_members.items())
# bittensor/subtensor.py
class subtensor:
    # ...
    def vote_subnet_owner(
        self,
        wallet: "bittensor.wallet",
        proposal_hash: str,
        proposal_idx: int,
        vote: bool,
    ) -> bool:
        return vote_subnet_owner_extrinsic(...)
def vote_subnet_owner_extrinsic(
    subtensor: "bittensor.subtensor",
    wallet: "bittensor.wallet",
    proposal_hash: str,
    proposal_idx: int,
    vote: bool,
    wait_for_inclusion: bool = False,
    wait_for_finalization: bool = True,
    prompt: bool = False,
) -> bool:
    r"""Votes ayes or nays on proposals."""
    if prompt:
        # Prompt user for confirmation.
        if not Confirm.ask("Cast a vote of {}?".format(vote)):
            return False
    # Unlock coldkey
    wallet.coldkey
    with bittensor.__console__.status(":satellite: Casting vote.."):
        with subtensor.substrate as substrate:
            # Create extrinsic call
            call = substrate.compose_call(
                call_module="SubtensorModule",
                call_function="subnet_owner_vote",
                call_params={
                    "proposal": proposal_hash,
                    "index": proposal_idx,
                    "approve": vote,
                },
            )
            # Sign using coldkey
            # ...
    bittensor.__console__.print(
        ":white_heavy_check_mark: [green]Vote cast.[/green]"
    )
    return True
# bittensor/subtensor.py
class subtensor:
    # ...
    def vote_governance(
        self,
        wallet: "bittensor.wallet",
        proposal_hash: str,
        proposal_idx: int,
        vote: bool,
        group_choice: Tuple[GovernanceEnum, ...],
    ) -> Tuple[bool, ...]:
        result = []
        for group in group_choice:
            if GovernanceEnum.Senate == group:
                result.append(self.vote_senate(...))
            if GovernanceEnum.Triumvirate == group:
                result.append(self.vote_triumvirate(...))
            if GovernanceEnum.SubnetOwner == group:
                result.append(self.vote_subnet_owner(...))
        return tuple(result)
# bittensor/cli.py
# bittensor/commands/senate.py -> bittensor/commands/governance.py
COMMANDS = {
"governance": {
"name": "governance",
"aliases": ["g", "gov"],
"help": "Commands for managing and viewing governance.",
"commands": {
"list": GovernanceListCommand,
"senate_vote": SenateVoteCommand,
"senate": SenateCommand,
"owner_vote": OwnerVoteCommand,
"proposals": ProposalsCommand,
"register": SenateRegisterCommand, # prev: RootRegisterCommand
},
},
...
}
# bittensor/commands/governance.py
class VoteCommand:
    @staticmethod
    def run(cli: "bittensor.cli"):
        # ...

    @staticmethod
    def _run(cli: "bittensor.cli", subtensor: "bittensor.subtensor"):
        r"""Vote in Bittensor's governance protocol proposals"""
        wallet = bittensor.wallet(config=cli.config)
        # ...
        member_groups = subtensor.get_governance_groups(hotkey, coldkey)
        if len(member_groups) == 0:
            # Abort; not a governance member
            return
        elif len(member_groups) > 1:  # belongs to multiple groups
            # Ask which group(s) to vote as
            group_choice = ask_group_select(member_groups)
        else:  # belongs to only one group
            group_choice = member_groups
        # ...
        subtensor.vote_governance(
            wallet=wallet,
            proposal_hash=proposal_hash,
            proposal_idx=vote_data["index"],
            vote=vote,
            group_choice=group_choice,
        )
        # ...

    @classmethod
    def add_args(cls, parser: argparse.ArgumentParser):
        vote_parser = parser.add_parser(
            "vote", help="""Vote on an active proposal by hash."""
        )
        vote_parser.add_argument(
            "--proposal",
            dest="proposal_hash",
            type=str,
            nargs="?",
            help="""Set the proposal to show votes for.""",
            default="",
        )
        bittensor.wallet.add_args(vote_parser)
        bittensor.subtensor.add_args(vote_parser)
# bittensor/commands/governance.py
class GovernanceMembersCommand:
    # ...

    @staticmethod
    def _run(cli: "bittensor.cli", subtensor: "bittensor.subtensor"):
        r"""View Bittensor's governance protocol members"""
        # ...
        governance_members = subtensor.get_governance_members()
        table = Table(show_footer=False)
        table.title = "[white]Governance Members"
        table.add_column(
            "[overline white]NAME",
            footer_style="overline white",
            style="rgb(50,163,219)",
            no_wrap=True,
        )
        table.add_column(
            "[overline white]ADDRESS",
            footer_style="overline white",
            style="yellow",
            no_wrap=True,
        )
        table.add_column(
            "[overline white]GROUP(S)",
            footer_style="overline white",
            style="yellow",
            no_wrap=True,
        )
        table.show_footer = True
        for ss58_address, groups in governance_members:
            table.add_row(
                (
                    delegate_info[ss58_address].name
                    if ss58_address in delegate_info
                    else ""
                ),
                ss58_address,
                " ".join(str(g) for g in groups),  # list all groups
            )
        table.box = None
        table.pad_edge = False
        table.width = None
        console.print(table)
        # ...

    @classmethod
    def add_args(cls, parser: argparse.ArgumentParser):
        member_parser = parser.add_parser(
            "members", help="""View all the governance members"""
        )
        bittensor.wallet.add_args(member_parser)
        bittensor.subtensor.add_args(member_parser)
Currently, the delegation rewards distribution system in our blockchain project has a limitation where delegates' commission rates (takes) are hardcoded at 18%. This inflexibility prevents delegates from adjusting their commission rates based on market conditions and their individual strategies.
As a result, delegates often resort to off-chain agreements and rebate systems to attract delegators and remain competitive. This creates market inefficiencies and hinders the overall user experience within our ecosystem.
Implementing the ability for delegates to alter their commission rates within the defined range would provide significant value to both delegates and delegators, enabling them to make informed decisions and adapt to evolving market dynamics.
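A minimal sketch of validating such a take change: the requested value must sit within the allowed range, and (a common design for this kind of mechanism) increases can be rate-limited while decreases apply immediately. The function name, constant, and rate-limit behavior are illustrative assumptions, not the actual chain parameters.

```rust
// Illustrative validation of a delegate take change.
// MAX_TAKE (~18% of u16::MAX) is a stand-in value.
const MAX_TAKE: u16 = 11_796;

fn validate_take_change(
    current: u16,
    requested: u16,
    blocks_since_last_change: u64,
    rate_limit_blocks: u64,
) -> Result<u16, &'static str> {
    if requested > MAX_TAKE {
        return Err("take exceeds the allowed maximum");
    }
    // Increases are rate-limited; decreases take effect immediately.
    if requested > current && blocks_since_last_change < rate_limit_blocks {
        return Err("take increase is rate-limited");
    }
    Ok(requested)
}
```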
Status as of 2024-04-19: the code is partially implemented in the stao branch:
Copied from @rajkaramchedu:
Currently, if a subnet owner registers on mainnet, the cost for the subnet is locked, but not recycled. So if the subnet fails, the only people significantly affected are the miner and validator operators, who have recycled TAO to register within the subnet. I feel that failing to launch a subnet and gain emissions from the root network should actually carry a price.
I would like to see the subnet registration either:
No response
No response
After building with the command : cargo build --release
And running the build with : ./target/release/subtensor --dev, I get the error: -bash: ./target/release/subtensor: No such file or directory. When I looked in the /target/release folder, there is no subtensor file, but there is a node-subtensor file.
build with the command cargo build --release
run the build with the command ./target/release/subtensor --dev
The build should be successful
No response
Linux Ubuntu
No response
The goal is to implement a commit-reveal scheme for submitting weights in the Subtensor module. This scheme will require validators to submit a hashed version of their weights along with a signature during the commit phase. After a specified number of blocks (reveal tempo), validators will reveal the actual weights, which will be verified against the commit hash.
- Implement a commit_weights function that allows validators to submit a hashed version of their weights along with a signature during the commit phase.
- Implement a reveal_weights function that allows validators to reveal the actual weights after the specified reveal tempo.

Add the following storage item to store commit hashes, signatures, and block numbers per validator and subnet:
#[pallet::storage]
pub type WeightCommits<T: Config> = StorageDoubleMap<_, Twox64Concat, u16, Twox64Concat, T::AccountId, (T::Hash, T::Signature, T::BlockNumber), ValueQuery>;
commit_weights Function
Implement the commit_weights function for the commit phase:
#[pallet::call]
impl<T: Config> Pallet<T> {
pub fn commit_weights(
origin: T::RuntimeOrigin,
netuid: u16,
commit_hash: T::Hash,
signature: T::Signature,
) -> DispatchResult {
let who = ensure_signed(origin)?;
ensure!(Self::can_commit(netuid, &who), Error::<T>::CommitNotAllowed);
WeightCommits::<T>::insert(netuid, &who, (commit_hash, signature, <frame_system::Pallet<T>>::block_number()));
Ok(())
}
}
reveal_weights Function
Implement the reveal_weights function for the reveal phase:
pub fn reveal_weights(
origin: T::RuntimeOrigin,
netuid: u16,
uids: Vec<u16>,
values: Vec<u16>,
version_key: u64,
) -> DispatchResult {
let who = ensure_signed(origin)?;
WeightCommits::<T>::try_mutate_exists(netuid, &who, |maybe_commit| -> DispatchResult {
let (commit_hash, signature, commit_block) = maybe_commit.take().ok_or(Error::<T>::NoCommitFound)?;
ensure!(Self::is_reveal_block(netuid, commit_block), Error::<T>::InvalidRevealTempo);
let provided_hash = T::Hashing::hash_of(&(uids.clone(), values.clone(), version_key));
ensure!(provided_hash == commit_hash, Error::<T>::InvalidReveal);
ensure!(Self::verify_signature(&who, &commit_hash, &signature), Error::<T>::InvalidSignature);
Self::do_set_weights(
T::Origin::from(origin),
netuid,
uids,
values,
version_key,
)
})
}
Implement helper functions for the commit-reveal process:
impl<T: Config> Pallet<T> {
fn can_commit(netuid: u16, who: &T::AccountId) -> bool {
// Check if commit-reveal is enabled for the subnet
// Check if the validator hasn't committed within the current tempo
// ...
}
fn is_reveal_block(netuid: u16, commit_block: T::BlockNumber) -> bool {
// Check if the current block is within the reveal tempo
// ...
}
fn verify_signature(who: &T::AccountId, commit_hash: &T::Hash, signature: &T::Signature) -> bool {
// Verify the provided signature against the commit hash and validator's public key
// ...
}
}
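The core of the scheme above can be demonstrated standalone: a commit stores a hash over the weight data, and the reveal is accepted only inside the reveal window and only if the recomputed hash matches. This sketch uses std's DefaultHasher in place of the runtime's T::Hashing, and adds a salt parameter (not in the issue's outline) to illustrate protecting the commit from brute-forcing over small weight vectors.

```rust
// Standalone sketch of the commit-reveal flow.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn commit_hash(uids: &[u16], values: &[u16], version_key: u64, salt: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    (uids, values, version_key, salt).hash(&mut hasher);
    hasher.finish()
}

struct Commit {
    hash: u64,
    commit_block: u64,
}

/// Reveal succeeds only inside the reveal window and when the
/// recomputed hash matches the committed one.
fn reveal(
    commit: &Commit,
    current_block: u64,
    reveal_tempo: u64,
    uids: &[u16],
    values: &[u16],
    version_key: u64,
    salt: u64,
) -> Result<(), &'static str> {
    if current_block < commit.commit_block + reveal_tempo {
        return Err("reveal window not yet open");
    }
    if commit_hash(uids, values, version_key, salt) != commit.hash {
        return Err("revealed weights do not match commit");
    }
    Ok(())
}
```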
Implement background tasks for cleaning up expired commits and managing the commit-reveal process.
commit_weights Function
Add to bittensor/subtensor.py:
def commit_weights(
    self,
    wallet: "bittensor.wallet",
    netuid: int,
    commit_hash: str,
    signature: str,
    wait_for_inclusion: bool = False,
    wait_for_finalization: bool = False,
) -> Tuple[bool, Optional[str]]:
    @retry(delay=2, tries=3, backoff=2, max_delay=4)
    def make_substrate_call_with_retry():
        with self.substrate as substrate:
            call = substrate.compose_call(
                call_module="SubtensorModule",
                call_function="commit_weights",
                call_params={
                    "netuid": netuid,
                    "commit_hash": commit_hash,
                    "signature": signature,
                },
            )
            extrinsic = substrate.create_signed_extrinsic(call=call, keypair=wallet.coldkey)
            response = substrate.submit_extrinsic(
                extrinsic,
                wait_for_inclusion=wait_for_inclusion,
                wait_for_finalization=wait_for_finalization,
            )
            if not wait_for_finalization and not wait_for_inclusion:
                return True, None
            response.process_events()
            if response.is_success:
                return True, None
            else:
                return False, response.error_message
    return make_substrate_call_with_retry()
reveal_weights Function
Add to bittensor/subtensor.py:
def reveal_weights(
    self,
    wallet: "bittensor.wallet",
    netuid: int,
    uids: List[int],
    values: List[int],
    version_key: int,
    wait_for_inclusion: bool = False,
    wait_for_finalization: bool = False,
) -> Tuple[bool, Optional[str]]:
    @retry(delay=2, tries=3, backoff=2, max_delay=4)
    def make_substrate_call_with_retry():
        with self.substrate as substrate:
            call = substrate.compose_call(
                call_module="SubtensorModule",
                call_function="reveal_weights",
                call_params={
                    "netuid": netuid,
                    "uids": uids,
                    "values": values,
                    "version_key": version_key,
                },
            )
            extrinsic = substrate.create_signed_extrinsic(call=call, keypair=wallet.coldkey)
            response = substrate.submit_extrinsic(
                extrinsic,
                wait_for_inclusion=wait_for_inclusion,
                wait_for_finalization=wait_for_finalization,
            )
            if not wait_for_finalization and not wait_for_inclusion:
                return True, None
            response.process_events()
            if response.is_success:
                return True, None
            else:
                return False, response.error_message
    return make_substrate_call_with_retry()
Add to bittensor/subtensor.py:
def can_commit(self, netuid: int, who: str) -> bool:
    # Check if commit-reveal is enabled for the subnet
    # Check if the validator hasn't committed within the current tempo
    # ...
    pass

def is_reveal_block(self, netuid: int, commit_block: int) -> bool:
    # Check if the current block is within the reveal tempo
    # ...
    pass

def verify_signature(self, who: str, commit_hash: str, signature: str) -> bool:
    # Verify the provided signature against the commit hash and validator's public key
    # ...
    pass
Implement background tasks for cleaning up expired commits and managing the commit-reveal process.
Provide clear error messages in bittensor/errors.py for the various failure scenarios.
Test the commit_weights and reveal_weights functions in both Rust and Python.

How to call subtensor from Cosmos SDK?
Integrate the IBC pallet, which provides a trust-minimized (light-client based) way to call subtensor from cosmos-sdk and other IBC-enabled chains.
Here is an example: https://github.com/ggxchain/ibc/blob/main/ibc-ggx-cosmos%20ICF%20M3%20deliverabl.md
Offchain solutions.
The Run in Docker section of the README is outdated.
Following the README section Run in Docker (lines 281 to 306 in 1d3cb71), there is no ./scripts/docker_run.sh. The section should explain how to run subtensor in docker; this is presumably done with docker compose up now.
I think this is just an outdated README, but it's misleading.
Currently the StakeAdded/StakeRemoved events do not list the origin that initiated the stake add or removal. This makes it hard to track staking actions from events alone and requires linking back to the extrinsic that triggered the event.
Add the origin's AccountId to these events. The information is already available in the functions emitting the events, so it should be as simple as adding another field to the event tuples.
look at steps to reproduce and expected behavior
The miners selected for deregistration should be sorted by (emission, registration_time), so that if two miners have the same emission, the one that registered earlier is ejected first. It was observed in the wild on sn12 mainnet that this is not the case: in the presence of 120 miners that do nothing, a relatively fresh one was ejected a day after it registered.
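The intended ordering can be expressed as a plain sort key. A minimal sketch with hypothetical names, not the pallet's actual pruning code:

```python
def pick_miner_to_prune(miners):
    """miners: iterable of (uid, emission, registration_block) tuples.

    Selects the miner with the lowest emission, breaking ties in favor of
    the earliest registration block, per the expected behavior above.
    """
    uid, _, _ = min(miners, key=lambda m: (m[1], m[2]))
    return uid
```

Among several zero-emission miners, the one with the smallest registration block wins the tie-break and is pruned first.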
linux
When registering, the cost to register by recycle can change in the middle of registration: if the adjustment interval occurs after the registration reports "The cost to register by recycle is τx.xxxxxxxxx", then when the registrant (or their script) enters "y" in response to "Do you want to continue? [y/n] (n):", they will be charged the new amount, not the amount reported.
I would like the script to store the cost it reported and, when attempting to execute the actual recycle, fail the transaction if the current amount is greater than that reported amount. (I imagine most people would be fine with the transaction going through if the new cost is less than the reported cost.)
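A minimal sketch of that guard, with hypothetical names (the real CLI flow and exception types would differ):

```python
class RegistrationCostIncreased(Exception):
    """Raised when the recycle cost rose after it was reported to the user."""

def check_recycle_cost(reported_cost: float, current_cost: float) -> float:
    """Fail if the on-chain cost exceeds what the user confirmed; otherwise proceed."""
    if current_cost > reported_cost:
        raise RegistrationCostIncreased(
            f"cost rose from {reported_cost} to {current_cost}; aborting"
        )
    # Equal to or cheaper than reported: proceed with the recycle.
    return current_cost
```

The check would run immediately before submitting the recycle extrinsic, comparing the freshly queried cost against the one shown at the prompt.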
The python package currently makes 4 queries to subtensor in order to retrieve the dynamic pool info, which leads to higher latency.
This can be reduced by implementing subtensor APIs that return the information directly:
dynamic_pool_info: returns the pool info for a single pool given the netuid.
dynamic_pool_infos: returns dynamic pool info for all the pools.

I have recently spun up a new server, following my usual process.
The issue is that I am unable to connect my local node to the chain. I cloned the latest repo and ran it in docker, as I do for all of my other servers.
I am unable to register a new miner or connect a miner to the chain.
git clone https://github.com/opentensor/subtensor.git
sudo ./scripts/run/subtensor.sh -e docker --network mainnet --node-type lite
Note: my servers running an older version of subtensor have no issues at all. I am using the version released just after the chain went down.
Ubuntu
We have upgraded to the final 1.0 version of polkadot before the move to the monorepo; now it is time to upgrade to the latest version of polkadot-sdk so we are fully up to date: v1.9.0 as of writing. Currently we are on v1.0.0.
AC:
- cargo check passing
- cargo check --workspace passing
- cargo test compiling
- cargo test --workspace compiling
- cargo test --workspace passing
- cargo test --workspace --features=runtime-benchmarks compiling
- cargo test --workspace --features=runtime-benchmarks passing

By following the README instructions for cargo run, I get the following error:
root@b8032326feec:~/subtensor# cargo run --release -- --dev
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package: /root/subtensor/runtime/Cargo.toml
workspace: /root/subtensor/Cargo.toml
error: `cargo run` could not determine which binary to run. Use the `--bin` option to specify a binary, or the `default-run` manifest key.
available binaries: integration-tests, node-subtensor
So I added the flag --bin node-subtensor and things work:
```
root@b8032326feec:~/subtensor# cargo run --release --bin node-subtensor -- --dev
warning: profiles for the non root package will be ignored, specify profiles at the workspace root:
package: /root/subtensor/runtime/Cargo.toml
workspace: /root/subtensor/Cargo.toml
Updating git repository https://github.com/paritytech/substrate.git
Updating crates.io index
Fetch [==========> ] 46.22%, (114640/474454) resolving deltas
```
### To Reproduce
I followed the main README steps for Linux.
### Expected behavior
Everything should install as expected when following the instructions
### Environment
Linux ubuntu
fixed by #253
My subtensor node doesn't execute native code, apparently due to a version mismatch. The on-chain runtime version is specVersion=143, while the code in github specifies 142 (Line 124 in e607d7e).
Run node-subtensor with --execution native and observe that the native code is not run (e.g. when adding some extra debugging in local pallets).
I expect the github code to reflect the on-chain runtime in every respect, including the version number, and I expect node-subtensor to use the native runtime where possible when --execution native is specified.
Linux Ubuntu
When trying to unstake all hotkeys I sometimes get this error:
SubstrateRequestException: {'code': 1014, 'message': 'Priority is too low: (18446744073709551615 vs 18446744073709551615)', 'data': 'The transaction has too low priority to replace another transaction already in the pool.'}
When that happens I need to restart the entire unstake-all operation from the start. It would be great if the error could be caught and the call retried after a short while.
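One way to handle this is to wrap the single extrinsic submission in a retry that only re-attempts on the "Priority is too low" pool error, instead of restarting the whole unstake-all loop. A hedged sketch using a generic exception check; real code would catch the specific SubstrateRequestException, and the `submit` callable stands in for the actual submission:

```python
import time

def submit_with_retry(submit, retries: int = 3, delay: float = 2.0):
    """Call submit(); on a 'Priority is too low' pool error, wait and retry."""
    for attempt in range(retries):
        try:
            return submit()
        except Exception as e:
            if "Priority is too low" not in str(e) or attempt == retries - 1:
                raise  # unrelated error, or out of retries
            time.sleep(delay)  # give the pooled transaction time to clear
```

Waiting a couple of seconds lets the conflicting transaction already in the pool be included or dropped before the replacement is resubmitted.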
Upgrades subtensor to polkadot 1.0.0.
AC:
- cargo check --workspace in about 3 mins 😎; also added CI checks for clippy and one that asserts that cargo fix has no trivial fixes available
- cargo check passing
- cargo check --workspace passing
- cargo test compiling
- cargo test --workspace compiling
- cargo test --workspace passing
- cargo test --workspace --features=runtime-benchmarks compiling
- cargo test --workspace --features=runtime-benchmarks passing

Subtensor nodes continue to attempt connections to services that have become unavailable, persisting even after those services explicitly refuse connections or stop responding. This issue has been observed over a period of 7 days, during which a service initially returned TCP RST packets for 5 days before completely ceasing to respond for an additional 2 days.
The Subtensor node should cease its attempts to connect to a service that has consistently been unavailable or explicitly refused connections over a reasonable period.
Actual Behavior:
The Subtensor node persists in attempting to connect to the service, sending TCP SYN packets continuously without recognizing the service's unavailability. This behavior persists over an extended period, observed for at least 7 days in this instance.
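The expected behavior amounts to a capped exponential backoff that eventually gives up on a dead peer. The sketch below is purely illustrative; subtensor's peer dialing is handled by the substrate/libp2p networking stack, not by code like this, and all names here are assumptions:

```python
def next_dial_delay(failures: int, base: float = 1.0, cap: float = 3600.0,
                    max_failures: int = 20):
    """Seconds to wait before the next dial attempt, or None to stop dialing."""
    if failures >= max_failures:
        return None  # peer persistently refused/unreachable: stop sending SYNs
    # Double the delay after each failure, but never wait longer than the cap.
    return min(base * (2 ** failures), cap)
```

Under this policy, repeated TCP RSTs would stretch retries out to an hourly cadence and then stop entirely, rather than SYN-ing continuously for days.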
Debian Linux 11, Subtensor v0.0.1, Docker v25.0.4
Dear OpenTensor team,
I'm reaching out as a Coretex developer of the OpenTensor project to discuss a critical improvement in our Substrate deployment practices that I've identified and successfully tested. As part of my ongoing efforts to enhance the security posture of the OpenTensor project, a significant opportunity has been identified to align both our Docker and binary deployment methods with the Principle of Least Privilege (PoLP). This principle is a cornerstone of security and systems administration best practices, advocating for minimal user privileges to perform required tasks, thereby reducing the attack surface and potential impact of a compromise.
Currently, the service within the Docker container is configured to start and run as the root user, and similar privilege concerns apply to our binary deployment process. Furthermore, I also have not witnessed the executable performing a privdrop after initialization, suggesting that the process continues to run as root throughout its life cycle. This setup diverges from best practices by not minimizing the operational privileges of the service, potentially exposing it to unnecessary risks.
Upon further exploration and testing, I discovered that initializing and running the service as a non-privileged user within a Docker container does not adversely affect its operation, granted that the necessary file permissions have been applied before execution. This finding suggests that our service does not require root privileges for its initialization or runtime.
Implementing the Principle of Least Privilege by default in our Dockerfile could significantly mitigate potential security risks. Such risks include the escalation of privileges in the event of a vulnerability being exploited, which could lead to unauthorized access or control over the host machine or other containers.
In light of this, I propose the following changes to our Docker deployment methodology:
I am eager to discuss this further and collaborate on implementing these changes. Your feedback and insights will be invaluable as we strive to make Subtensor safer and more resilient against potential threats.
Best Regards
I am opening this issue to report the latest status of the subtensor node installation steps (./scripts/docker_run.sh).
See above description.
macOS