hyperswarm's Introduction

hyperswarm

A high-level API for finding and connecting to peers who are interested in a "topic."

Installation

npm install hyperswarm

Usage

const Hyperswarm = require('hyperswarm')

const swarm1 = new Hyperswarm()
const swarm2 = new Hyperswarm()

swarm1.on('connection', (conn, info) => {
  // swarm1 will receive server connections
  conn.write('this is a server connection')
  conn.end()
})

swarm2.on('connection', (conn, info) => {
  conn.on('data', data => console.log('client got message:', data.toString()))
})

const topic = Buffer.alloc(32).fill('hello world') // A topic must be 32 bytes
const discovery = swarm1.join(topic, { server: true, client: false })
await discovery.flushed() // Waits for the topic to be fully announced on the DHT

swarm2.join(topic, { server: false, client: true })
await swarm2.flush() // Waits for the swarm to connect to pending peers.

// After this point, both client and server should have connections

Hyperswarm API

const swarm = new Hyperswarm(opts = {})

Construct a new Hyperswarm instance.

opts can include:

  • keyPair: A Noise keypair that will be used to listen/connect on the DHT. Defaults to a new key pair.
  • seed: A unique, 32-byte, random seed that can be used to deterministically generate the key pair.
  • maxPeers: The maximum number of peer connections to allow.
  • firewall: A sync function of the form remotePublicKey => (true|false). If true, the connection will be rejected. Defaults to allowing all connections.
  • dht: A DHT instance. Defaults to a new instance.
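
Below is a minimal sketch of constructing a swarm with these options; the seed is generated on the fly and bannedKey is a placeholder public key used purely for illustration.

const Hyperswarm = require('hyperswarm')
const crypto = require('crypto')

const seed = crypto.randomBytes(32) // a 32-byte seed makes the key pair deterministic
const bannedKey = Buffer.alloc(32)  // placeholder key, illustration only

const swarm = new Hyperswarm({
  seed,
  maxPeers: 64,
  firewall (remotePublicKey) {
    return remotePublicKey.equals(bannedKey) // returning true rejects the connection
  }
})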

swarm.connecting

A number indicating how many connections are currently in progress.

swarm.connections

A set of all active client/server connections.

swarm.peers

A Map containing all connected peers, of the form: (Noise public key hex string) -> PeerInfo object

See the PeerInfo API for more details.
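
For example, a quick sketch of listing the currently connected peers and the topics they are associated with:

for (const [publicKeyHex, peerInfo] of swarm.peers) {
  console.log(publicKeyHex, 'topics:', peerInfo.topics.map(t => t.toString('hex')))
}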

swarm.dht

A hyperdht instance. Useful if you want lower-level control over Hyperswarm's networking.

swarm.on('connection', (socket, peerInfo) => {})

Emitted whenever the swarm connects to a new peer.

socket is an end-to-end (Noise) encrypted Duplex stream

peerInfo is a PeerInfo instance

swarm.on('update', () => {})

Emitted when internal values are changed, useful for user interfaces.

For example: emitted when swarm.connecting or swarm.connections changes.
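
A small sketch of a UI-style handler driven by this event:

swarm.on('update', () => {
  console.log(`connecting: ${swarm.connecting}, connected: ${swarm.connections.size}`)
})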

const discovery = swarm.join(topic, opts = {})

Start discovering and connecting to peers sharing a common topic. As new peers are discovered and connected to, connection events will be emitted from the swarm.

topic must be a 32-byte Buffer.

opts can include:

  • server: Accept server connections for this topic by announcing yourself to the DHT. Defaults to true.
  • client: Actively search for and connect to discovered servers. Defaults to true.

Returns a PeerDiscovery object.

Clients and Servers

In Hyperswarm, there are two ways for peers to join the swarm: client mode and server mode. If you've previously used Hyperswarm v2, these were called "lookup" and "announce", but we now think "client" and "server" are more descriptive.

When you join a topic as a server, the swarm will start accepting incoming connections from clients (peers that have joined the same topic in client mode). Server mode will announce your keypair to the DHT, so that other peers can discover your server. When server connections are emitted, they are not associated with a specific topic -- the server only knows it received an incoming connection.

When you join a topic as a client, the swarm will do a query to discover available servers, and will eagerly connect to them. As with server mode, these connections will be emitted as connection events, but in client mode they will be associated with the topic (info.topics will be set in the connection event).
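
A sketch of telling the two kinds of connection apart inside a connection handler, assuming info.topics stays empty for purely server-side connections:

swarm.on('connection', (conn, info) => {
  if (info.topics.length > 0) {
    // client-mode connection: we know which topic(s) this server was discovered under
    console.log('connected to a server for topic', info.topics[0].toString('hex'))
  } else {
    // server-mode connection: we only know that a peer connected to us
    console.log('accepted a connection from', info.publicKey.toString('hex'))
  }
})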

await swarm.leave(topic)

Stop discovering peers for the given topic.

topic must be a 32-byte Buffer

If a topic was previously joined in server mode, leave will stop announcing the topic on the DHT. If a topic was previously joined in client mode, leave will stop searching for servers announcing the topic.

leave will not close any existing connections.
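
If you do want to drop peers after leaving, one option (a sketch only, and note it ends every connection, not just those discovered through this topic) is to close them explicitly:

await swarm.leave(topic)

for (const conn of swarm.connections) {
  conn.end() // gracefully close each remaining connection
}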

swarm.joinPeer(noisePublicKey)

Establish a direct connection to a known peer.

noisePublicKey must be a 32-byte Buffer

As with the standard join method, joinPeer will ensure that peer connections are reestablished in the event of failures.
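
A minimal sketch of a direct connection between two local swarms. It assumes the target peer is reachable as a server (via swarm.listen() or an earlier server-mode join) and that the instance exposes its key pair as swarm.keyPair:

const a = new Hyperswarm()
const b = new Hyperswarm()

a.on('connection', conn => conn.write('hello, direct peer'))
b.on('connection', conn => conn.on('data', d => console.log(d.toString())))

await a.listen()                // make "a" reachable on the DHT
b.joinPeer(a.keyPair.publicKey) // "b" dials "a" directly by public key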

swarm.leavePeer(noisePublicKey)

Stop attempting direct connections to a known peer.

noisePublicKey must be a 32-byte Buffer

If a direct connection is already established, that connection will not be destroyed by leavePeer.

const discovery = swarm.status(topic)

Get the PeerDiscovery object associated with the topic, if it exists.

await swarm.listen()

Explicitly start listening for incoming connections. This will be called internally after the first join, so it rarely needs to be called manually.

await swarm.flush()

Wait for any pending DHT announces, and for the swarm to connect to any pending peers (peers that have been discovered, but are still in the queue awaiting processing).

Once a flush() has completed, the swarm will have connected to every peer it can discover from the current set of topics it's managing.

flush() is not topic-specific, so it will wait for every pending DHT operation and connection to be processed -- it's quite heavyweight, so it could take a while. In most cases, it's not necessary, as connections are emitted by swarm.on('connection') immediately after they're opened.

PeerDiscovery API

swarm.join returns a PeerDiscovery instance which allows you to both control discovery behavior, and respond to lifecycle changes during discovery.

await discovery.flushed()

Wait until the topic has been fully announced to the DHT. This method is only relevant in server mode. When flushed() has completed, the server will be available to the network.

await discovery.refresh({ client, server })

Update the PeerDiscovery configuration, optionally toggling client and server modes. This will also trigger an immediate re-announce of the topic, when the PeerDiscovery is in server mode.
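
For example, a sketch of switching an existing discovery from announcing to client-only lookups:

const discovery = swarm.join(topic) // client and server both default to true

// later: stop announcing/accepting for this topic, but keep searching for servers
await discovery.refresh({ server: false, client: true })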

await discovery.destroy()

Stop discovering peers for the given topic.

If the topic was previously joined in server mode, destroy will stop announcing the topic on the DHT. If it was previously joined in client mode, destroy will stop searching for servers announcing the topic.

PeerInfo API

swarm.on('connection', ...) emits a PeerInfo instance whenever a new connection is established.

There is a one-to-one relationship between connections and PeerInfo objects -- if a single peer announces multiple topics, those topics will be multiplexed over a single connection.

peerInfo.publicKey

The peer's Noise public key.

peerInfo.topics

An Array of topics that this Peer is associated with -- topics will only be updated when the Peer is in client mode.

peerInfo.prioritized

If true, the swarm will rapidly attempt to reconnect to this peer.

peerInfo.ban(banStatus = false)

Ban or unban the peer. Banning will prevent any future reconnection attempts, but it will not close any existing connections.
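
A sketch of one possible (purely illustrative) policy: ban peers whose connections error out so the swarm stops retrying them.

swarm.on('connection', (conn, peerInfo) => {
  conn.on('error', err => {
    console.log('connection error, banning peer:', err.message)
    peerInfo.ban(true) // prevents future reconnection attempts
  })
})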

License

MIT

hyperswarm's Issues

Can't connect on two topics to same peer

I noticed that in Queue.push, the id is defined as const id = peer.host + ':' + peer.port, which means that a peer can't connect to the same peer on two different topics. Is there any way, similar to discovery-swarm, that we can make the id based on the topic?

Something like:
const id = peer.host + ':' + peer.port + '@' + peer.topic.toString('hex')

UPNP support

I think it'd be useful to integrate UPNP into the stack so that we can leverage it on home networks without needing to think as much about the hole punching.

Does node-nat-upnp do the trick?

Is there any reason UPNP would be a bad idea?

suggestion: make utp-native as an option

As we are behind a company proxy, we can neither download the utp-native binary nor build it when installing.
I'd suggest making utp-native an option when instantiating hyperswarm (for @hyperswarm/network), so we can download utp-native manually and provide it to hyperswarm via an option, such as:

const swarm = hyperswarm({
   utpProvider: our-predownloaded-utp-native
})

Thanks a lot.

Add noduplicate event in deduplication

I want to connect nodes with hyperswarm, and only continue with further usage of the connection when I can be sure that there are no duplicate connections. To accomplish this, some kind of noduplicate event (in addition to duplicate) would be handy. That way one could wait for the event before continuing, while also knowing which connection to use.

Hyperswarm go implementation

Hi everyone

(I am new at this) Sorry if this might not sound like a real issue... I am trying to learn about p2p technology, and hyperswarm is a really interesting p2p project.

Is there plan to build a go based implementation for the hyperswarm?

Does this encrypt connections?

Are connections between peers encrypted by hyperswarm?

If not, does dat put its own layer of encryption on top of these peer streams?

Duplicate messages between connected servers

I'm a little confused, and I apologize if it shows my ignorance/lack of experience.

I am trying to send a message from one connected node server to the other using:

  // you can now use the socket as a stream, eg:
    process.stdin.pipe(socket).pipe(process.stdout)

The problem is the message from one terminal is duplicated on the other.

For example, if I type the following in serverA:

test 123

I get the following in serverB:

test 123
test 123

. . . and vice versa

I can work around this by setting one of the two servers to not announce:

swarm.join(topic, {
  lookup: true, // find & connect to peers
  announce: false // <--------- don't announce, stops duplicates
})

Errors from utp-native

Has anyone seen this error with utp-native?

node[51393]: ../src/node_buffer.cc:220:char *node::Buffer::Data(Local<v8::Value>): Assertion `val->IsArrayBufferView()' failed.                         
1: 0x1011c2f75 node::Abort() (.cold.1) [/usr/local/bin/node] 
2: 0x10009cfe9 node::Abort() [/usr/local/bin/node]                         
3: 0x10009ce51 node::Assert(node::AssertionInfo const&) [/usr/local/bin/node]                                                                          
4: 0x100079ce9 node::Buffer::Data(v8::Local<v8::Value>) [/usr/local/bin/node]                                                                          
5: 0x100075683 napi_get_buffer_info [/usr/local/bin/node]                  
6: 0x1059da6b0 utp_napi_bind(napi_env__*, napi_callback_info__*) [/Users/paolofragomeni/projects/optoolco/x/node_modules/utp-native/prebuilds/darwin-x64/node.napi.node]                                                    
7: 0x100953c36 uv__udp_io [/usr/local/bin/node]                            
8: 0x10095764f uv__io_poll [/usr/local/bin/node]                           
9: 0x1009448f1 uv_run [/usr/local/bin/node]                                
10: 0x1000da345 node::NodeMainInstance::Run() [/usr/local/bin/node]         
11: 0x100072e63 node::Start(int, char**) [/usr/local/bin/node]              
12: 0x7fff6a6ab2e5 start [/usr/lib/system/libdyld.dylib]                    
[1]    51393 abort      INST=1 ./bin/x.js

Protocol documentation?

Hi! I'm looking for docs to implement this in other languages. My understanding is that it's DHT + MDNS, but I'm not sure how straightforward it is. Are there any resources y'all could recommend for making Hyperswarm work outside of Node.js? Thanks! ⭐

Logging / debugging

With discovery-swarm it was quite useful to debug the DHT with the env var DEBUG='*'. This module could also use debug() and allow for troubleshooting network connections. At the moment, my simple example usage of hyperswarm with two peers can't connect to each other.

Add ability to load peers to use as bootstrap nodes

To increase the resilience of using hyperswarm (e.g. networks being forcefully shut off, or booting into a specifically set up mesh network) it would be very useful if peers could somehow load in a list of IPs to use as introducers/bootstrap nodes for the network. Either that or just peers to connect to in general.

Similarly it would be very useful to export a list of the currently connected peers.

This has been discussed previously in

Properly closing the network?

This may be somewhat of a duplicate of #3, but how does one effectively close connections?

const network = require('@hyperswarm/network')
console.log('Event loop lives forever...')
const net = network()
// This line seems to remove handles to connections, but is it safe?
net.discovery.destroy()

Not sure why this isn't documented, but we were having some issues with discovery-swarm when calling destroy (segfaults and the like).

Why Socket Auto Closed After Send Data

Every time I send data using the socket, it closes after sending. How do I keep the connection open?

this is my swarm configuration:

const swarm = hyperswarm({
    ephemeral: false,
})
swarm.join(topic, {
    lookup: true,
    announce: true
})
// i use end to send data
socket.end(someText)

any help is appreciated!

N-API warning

I ran example-s.js in multiserver-dht on branch hyperswarm, with node v8.11.1, and got the error:

(node:11413) Warning: N-API is an experimental feature and could change at any time.
node: symbol lookup error: ./multiserver-dht/node_modules/@hyperswarm/network/node_modules/utp-native/prebuilds/linux-x64/node-napi.node: undefined symbol: napi_fatal_exception

If node v10 is a requirement, then that could be added to engines in package.json.

Connection management questions

I'm making a chat program for fun using hyperswarm. I'm able to get the connected sockets on the connection event but I'm not sure how to do session management. I know tcp connections have a close event, but udp connections don't do they?

  • Is there a common way to know if both kinds of connection have gone away?
  • It seems multiple connections get made to the same peer, is there a way to identify the peer so we can handle additional connections (ignore them, fall back to them, close them, etc) or do we have to do that in the app layer?

This is probably a bad idea, but it's been so fun

★ npx hackerchat
☠️  Your network is not hole-punchable. This will degrade connectivity.
🎉 new friend 0!
🎉 new friend 1!
😀 0: hello?
😀 1: hello?
> hi!
>

Thanks!
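
For reference, one rough way to do session tracking with the current Hyperswarm API (a sketch using the v3 connection/PeerInfo API, not the v2 API this question was written against) is to key live connections by the peer's public key and rely on the stream's close event:

const active = new Map() // public key hex -> connection

swarm.on('connection', (conn, peerInfo) => {
  const id = peerInfo.publicKey.toString('hex')
  active.set(id, conn)
  conn.on('close', () => {
    if (active.get(id) === conn) active.delete(id)
  })
})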

How to close a connection gracefully?

Running the below code in two different terminals, if one instance is stopped (ctrl + c), the other instance just crashes!

const hyperswarm = require('hyperswarm')
const crypto = require('crypto')
const swarm = hyperswarm()
const topic = crypto.createHash('sha256').update('my-hyperswarm-topic').digest()
swarm.join(topic, {lookup: true, announce: true})
swarm.on('connection', (socket, details) => {
  console.log('new connection!', details.peer)
})

The crash happens with below error:

events.js:186
      throw er; // Unhandled 'error' event
      ^

Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:183:27)
Emitted 'error' event on Socket instance at:
    at emitErrorNT (internal/streams/destroy.js:91:8)
    at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
    at processTicksAndRejections (internal/process/task_queues.js:77:11) {     
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read'
}

It is not clear why the app would crash if its peer exits. How to handle this gracefully?

Tried doing socket.end on the other app. But that would keep reconnecting.
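
One way to keep the process alive when a peer disappears (a sketch, not an official recommendation) is to attach error and close handlers to every connection so ECONNRESET is handled instead of being thrown:

swarm.on('connection', (socket, details) => {
  socket.on('error', err => {
    console.log('peer connection error:', err.code) // e.g. ECONNRESET when the peer exits
  })
  socket.on('close', () => {
    console.log('peer connection closed')
  })
})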

Include peer IDs on the DHT?

"Peer IDs" are used solely to deduplicate connections. If they were present on the DHT, we could avoid even attempting to connect to a peer which we've already connected to.

Things to consider:

  • Could choosing peer ID be used to attack somebody? Eg, announce under somebody else's peer ID to effectively DoS them.

Add ability to toggle different layers of hyperswarm on/off

It would be great for many use cases to control whether hyperswarm is on or off, e.g. for battery reasons. This can be extended to toggling different layers on/off:

  • all of hyperswarm
  • the DHT
  • wifi multicast

This is something both Cabal (cabal-club/cabal-cli#113) and Peermaps have been discussing and thinking about.

An example from the issue above of what would be desirable, e.g. from within cabal:

 archive.setNetworking(['mdns','dht']) // options: mdns, tracker, dht. use [] for none.

Pluggable transports

TCP and UTP are great, but it should be easy to add more transport types. This will be really useful for seamlessly adding the ability to route your traffic through mixnets like Tor or I2P.

  • Support an API for adding transports
  • Tor?
  • I2P?
  • libp2p?

What should a transport be able to do?

  • connect(port, host)
  • listen(port)
  • get data to announce on the DHT?

`network.discovery` is null in leave

I just got this trace with calling close on a hyperswarm instance-- it might not be a bug inside hyperswarm, but some of the recent refactoring might be responsible.

/home/andrewosh/Development/megastore-swarm-networking/node_modules/hyperswarm/swarm.js:164
    const domain = network.discovery._domain(key)
                                     ^

TypeError: Cannot read property '_domain' of null
    at Swarm.leave (/home/andrewosh/Development/megastore-swarm-networking/node_modules/hyperswarm/swarm.js:164:38)
    at SwarmNetworker.unseed (/home/andrewosh/Development/megastore-swarm-networking/index.js:124:17)
    at Object.close (/home/andrewosh/Development/random-access-megastore/index.js:193:25)
    at defaultCore.ready.err (/home/andrewosh/Development/random-access-megastore/test/all.js:123:13)
    at /home/andrewosh/Development/random-access-megastore/node_modules/hypercore/index.js:206:15
    at apply (/home/andrewosh/Development/random-access-megastore/node_modules/thunky/index.js:44:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
npm ERR! Test failed.  See above for more details.

Multiplexing support

In Beaker's implementation of PeerSocket, we need each app to operate separately from each other. They need to join/leave swarm topics separately, and use the sockets separately.

In practice, I think this means

  • Beaker needs to reference count the lobby joins and exit swarm topics when all tabs have left a lobby.
  • The messaging protocol needs multiplexing support so that Beaker can parcel out the sub-sockets to apps.

The wire protocol that Beaker is currently constructing for PeerSockets is two layers.

  1. Messaging framing with length-prefixes
  2. Protobuf schemas for individual message types (currently PeerSocketMessage and PeerSocketSessionData)

I think we could handle multiplexing by adding channels to the protobuf schemas, in which case this doesn't need to be implemented in Hyperswarm itself.

Two connection events for single peer?

Trying to understand the behavior of connection by running the sample code from ReadMe. Not sure if this is the intended behavior, but when running the below code (the same code twice from two different terminals), seeing two connection events on each of them (total 4).

const hyperswarm = require('hyperswarm')
const crypto = require('crypto')
const swarm = hyperswarm()
const topic = crypto.createHash('sha256').update('my-hyperswarm-topic').digest()
swarm.join(topic, {lookup: true, announce: true})
swarm.on('connection', (socket, details) => {
  console.log('new connection!', details.peer)
})

The output looks like the following on both terminals:

new connection! {
  port: 1002,
  host: '192.168.0.101',
  local: true,
  referrer: null,
  topic: <Buffer fc f8 54 23 fe 2d 35 c2 a2 5f f5 90 62 7d ed b0 17 aa 2f b7 5a 76 eb 1d 69 ca 74 e6 40 43 db 41>
}
new connection! null

This happens with every connected peer. Two connection events are raised on both sides of the connection (total 4 for each p2p connection).

Is this valid behavior?

Do we need a server instance on both sides of a connection? Sockets are duplex by default, right?

A single client -> server connection between two nodes can be fully duplex, I believe. Do we also need the reverse server <- client connection for every p2p link?

Or is it expected from the app that it will kill/close the duplicate connection? I would like to learn more about the right way to deal with these duplicate connections, since the connection count escalates rapidly as we connect across multiple topics, creating unnecessary resource pressure on the process (which could be better used for other connections, such as database connections or other app-level things), especially on resource-constrained devices.

Browser-based WebRTC support

Hello! Any chances of supporting WebRTC in the browser?

Hyperswarm seems great, and supporting in-browser WebRTC swarming would add a lot of interesting use cases.

I have some experience doing swarm networking in the browser and could volunteer programming a transport implementation if there's interest.

No native build was found, react native

Hello, I want to use hyperswarm in android (I know, that is weird).
I have rn-nodeify to supply node modules, but when I run the app Expo says "no native build was found for platform=android arch=javascript runtime=node abi=undefined uv= libc=glibc".

screenshot

More details:
Node version: v13.12.0
NPM version: 6.14.4
Expo version: 3.17.11
Hyperswarm version: 2.10.1

deprioritise peers from the same ip

this one might be tricky and is not worth a lot of complexity but here goes:

If multiple peers are discovered, we should deprioritise subsequent peers from the same IP. I.e. if 10 peers with IP A are discovered and 1 with IP B, it should try B after trying one of the A peers.

info.backoff()

Add an info.backoff API that deprioritises reconnecting, similar to if the connection had failed in the first place.

If it’s a server connection it should simply noop

P2P overlay networks.

Can DAT be used as a library to create an overlay network of nodes interested in topic foo/1.2.3 and let them exchange arbitrary messages? Number of nodes: ~1M. Message size: < 1 KB.

For example, I want to make a distributed p2p version of a books catalogue. The catalogue has a well-known name, e.g. books/1.2.3. Nodes use this name to discover other nodes connected to this overlay network. Once connected, they download the list of book ids from a peer. If someone wants to add a book, they send its id to a few peers, which add it to their local storage and re-transmit it to other peers. An important property of this system is that all the nodes are equal: there is no admin who has special rights.

API needed for this:

  • join("books/1.2.3") finds a few peers in this overlay network
  • peers("books/1.2.3") returns the list of online peers we're currently connected to
  • send("books/1.2.3", peerId, "howdy") sends a message to the given peer
  • listen("books/1.2.3", callback) gets messages sent by other peers with the send() API

Thoughts?
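
For what it's worth, a rough sketch of how that wished-for API could be approximated with the current Hyperswarm API; the topic string is hashed down to the required 32 bytes and message framing is left to the application:

const Hyperswarm = require('hyperswarm')
const crypto = require('crypto')

const swarm = new Hyperswarm()
const topic = crypto.createHash('sha256').update('books/1.2.3').digest() // join("books/1.2.3")

swarm.join(topic)

// listen(...): every connection is a duplex stream that delivers peer messages
swarm.on('connection', (conn, info) => {
  conn.on('data', data => {
    console.log('from', info.publicKey.toString('hex'), ':', data.toString())
  })
})

// peers(...) / send(...): iterate the open connections and write to them
function broadcast (msg) {
  for (const conn of swarm.connections) conn.write(msg)
}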

`connection` fired even if not looking up

Is it intentional that the swarm will attempt to connect with peers as a client if it's only set to announce and not to lookup? I.e. ondht will emit peers, and the swarm will connect to them as a client even when lookup is set to false.

If this isn't intentional, we could store the lookup option on the topic, and when the peer is emitted we only try to connect if the topic has lookup set to true.

Thanks!

Autoconnect flag

It'd be nice to be able to disable the auto-connection feature for cases where it'd be handy to control that at the application level.

how to get peer.details for current instance?

Is there any way to discover the running instance's ip and port? I would like to store this for future access...for instance in order to ping the address to see if it's connectable or not.

Does this make sense? I'm able to get other peers' ip and port from network().on('connection') but not the running instance's details.

Also it'd be great if you could just "ping" a peer's ip and port to see if they are still alive.

what the hell is 142.93.90.113 in the peer.referrals???!

it's a digitalocean VPS that is connecting to my IP address when I echo out the Peer object from network().on('peers')!!! I've NEVER EVER used digitalocean with hyperswarm. I've used Google Compute Engine before to test hyperswarm but its not running.....

peer: {
  port: 67838,
  host: 'my.ip.address',
  local: false,
  referrer: {
    id: <Buffer .....>,
    port: 33352,
    host: '142.93.90.113'
  }
}

Emit event when a peer you've already connected to is found for another topic

Some peers will be advertising on multiple topics that you both share. Instead of establishing a new connection to that peer per topic, it'd be useful to let the application optionally multiplex existing connections.

For example, if I subscribe to topic foo, and find a peer, then subscribe to topic bar, and find that same peer again, I'd like to be able to ask peer for bar over our foo connection instead of establishing a new tunnel.

This should help for applications like collaborative editors that might have a large amount of hyperdrives or hypercores shared between a few peers.

cc @pvh

Unexpected Deduplication behavior. Is it supposed to work like this?

Working on a PR for adding holepunchto/hypercore#291 to corestore-networker I noticed that on my local network two streams per local instance get opened (4 streams for 2 instances, per instance: one for peer: <IPV4> and one for peer ::ffff:<IPV4>). This seems like a bug but I am not sure.

(Test code):

  const { networker: networker1, store: store1 } = await create()
  const { networker: networker2, store: store2 } = await create()
  const core1a = store1.get()
  await append(core1a, 'a')
  const core2a = store2.get({ key: core1a.key })

  networker1.on('peer-add', () => console.log('nw1-add'))
  networker2.on('peer-add', () => console.log('nw2-add'))

  await networker1.configure(core1a.discoveryKey)
  await networker2.configure(core2a.discoveryKey)

(Of the two outputs nw1-add and nw2-add, it will output one twice and the other once.)

Looking at the deduplication logic, the two streams indeed have the same id, so the second stream on both instances triggers: https://github.com/hyperswarm/hyperswarm/blob/e4c51a407fb9e0e8fdcece461fc4917c2d5b23d2/lib/queue.js#L89-L91

In particular the second line seems strange: cmp < 0. This means that one of the instances always returns true while the other always returns false, as the localId is randomly generated. So on one instance the second stream will be identified as a "duplicate" while on the other it is not.

As a consequence we have two peers added on one instance while we have only one peer added on the second. Does that seem right?

Privacy Preserving in Hyperswarm

In search for info about privacy preserving DHTs, I found these interesting papers:

https://www.gpestana.com/blog/in-pursuit-of-private-dhts/

"Some of the desirable properties of a private and metadata resistant DHT are:

    Anonymity for producers of content: tracking down who was the originator of content stored in the DHT should not be possible.

    Anonymity for consumers of content: nodes that request content from the DHT should not be linked to the requested content by external actors.

    Plausible deniability of the files hosted in the network nodes: when peers query for content in the DHT, they should not be able to identify which peers are storing the content.
"

https://raw.githubusercontent.com/gpestana/p2psec/master/papers/privacy_preserving_dht/privacy_preserving_dht.pdf

https://www.gpestana.com/papers/everyone-is-naked-rev.pdf

where privacy vulnerabilities within a DHT are described.

In the last paragraph of the second paper, a few Privacy Enhancing Technologies (PETs) are outlined and suggested for DHTs, namely onion routing and cryptographic tools: Multi-Party Computation and Zero Knowledge primitives.

What are the mechanisms and tools deployed in Hyperswarm in order to enhance and preserve privacy as much as possible?
Would it be feasible to implement onion routing within Hyperswarm?

swarm.leave(topic) doesn't drop existing peer connections on a topic

I'm attempting to use a single hyperswarm instance for multiple connections on different topics and want to be able to drop and re-start connections in a granular fashion so I can schedule replication.

@KrishnaPG mentioned a similar question in a previous issue, so forgive me if I've missed the obvious answer there:

Can the swarm.join be called repeatedly multiple times to join different topics, or should different swarm object be created once for each topic? Will duplicate connections to same peer be automatically removed in both cases?

Is it possible for me to use hyperswarm in such a manner at the moment? It seems I can create multiple open connections, but I'm unable to drop a connection once it's been opened, which is the behaviour I'd expect after calling swarm.leave(topic). At the moment, when I call swarm.leave(topic) it appears not to drop existing peer connections. Do I need to manually force a disconnect / cancel auto-reconnect with each peer as you mentioned, doing something like...

swarm.on('connection', (socket, details) => {
  details.backoff()
  socket.destroy()
  return
})

my code: https://ledger-git.dyne.org/CoBox/cobox-group-base/src/commit/e6fc3e4ab8668c151cc155d9c43b46485ca3293a/index.js#L79

Thanks!

Hole punching with a mobile connection and carrier grade nat

Looks like the hole punching in Beaker browser (and presumably hyperswarm?) does not support networks using carrier-grade NAT (CGN / CGNAT).

Mobile connections (3G / 4G / LTE / 5G) are commonly put behind carrier grade nats.

I'm opening this issue to track what the project's intentions are in supporting hole punching for devices on these types of networks, so that other people searching can stumble on this.

xor-distance error ("Inputs should have the same length")

I ran example-s.js in multiserver-dht on branch hyperswarm, with node v10.1.0, and got the error:

./multiserver-dht/node_modules/xor-distance/index.js:4
  if (a.length !== b.length) throw new Error('Inputs should have the same length')
                                   ^

Error: Inputs should have the same length
    at dist (./multiserver-dht/node_modules/xor-distance/index.js:4:36)
    at QueryTable.addUnverified (./multiserver-dht/node_modules/dht-rpc/lib/query-table.js:15:21)
    at QueryStream._onresponse (./multiserver-dht/node_modules/dht-rpc/lib/query-stream.js:80:20)
    at IO._finish (./multiserver-dht/node_modules/dht-rpc/lib/io.js:131:9)
    at IO._onmessage (./multiserver-dht/node_modules/dht-rpc/lib/io.js:103:14)
    at UTP.emit (events.js:182:13)
    at UTP._onmessage (./multiserver-dht/node_modules/@hyperswarm/network/node_modules/utp-native/index.js:191:8)

Topic when client === false?

Is there (or could there be) a way to extract the topic from either socket or details when details.client === false in on('connection')?

Using Sockets in Hyperswarm

I'm a little new to using Hyperswarm and moreover the entire space of communicating via Sockets.

I'm trying to build a simple chat application between two peers on the same machine. Super basic but just so I can try out Hyperswarm on a basic level.

I have two files that are both running the same code:

swarm.on("connection", (socket, details) => {
    console.log("new connection!", details);
    socket.write("send-message", { msg: "Client 1 has Joined" });
    socket.on("send-message", data => {
        console.log(data.msg);
        AskForUserInput(socket);
    });
});

function AskForUserInput(socket) {
    const rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout
    });
    rl.question("What do you want to say: ", (answer) => {
        socket.write("send-message", { msg: "Client 2: " + answer });
        rl.close();
    });
}

Running the program in two terminals - I can see that they have both connected to each other, but I don't see any messages being transmitted. Is there something wrong with my approach?

Basic P2p tunneling (p2t2p) feature request.

It would be nice to have some degree of privacy possible in tunneling connections, so that a second anonymous rendezvous DHT is established and peer <-> tunnel peer <-> peer sharing is possible using anonymous connections. Basically this would comprise end-to-end RSA/DSA encryption handshakes for AES through a 1-hop encrypted tunnel peer. It may also include a routing sharing ratio. This would have to use a secondary DHT layer or object where users would generate extended signed pubkeys for single connections. The handshake should hide the expected public key/cert from the proxy peer; then smaller chunks, say 1 KB with a hash tree for, say, 9 KB chunks, could be routed through multiple peers simultaneously using short-lived tunnels (e.g. a minute). The self-signing cert could be regenerated every hour or 24 hours etc. for signing keys/certs for rendezvous. Or any other ideas about tradeoffs?

This would be especially useful for punching holes using p2t2p negotiated over Tor, keeping torrenting off of Tor and I2P while anonymizing DHT functions through hidden services. This would prevent easy unmasking of the data being sent per user and thereby the usual attacks used by corporations to track filesharing. Not necessarily highly secure against a determined enough attacker, but enough to establish network neutrality and prevent censorship.

Also obfsproxy support would be nice. Possibly another feature.

electron-rebuild: failed to rebuild node_modules/@hyperswarm/dht/node_modules/sodium-native

In my electron-react-typescript project I added hyperswarm:

yarn add hyperswarm
yarn add v1.22.5
[1/4] Resolving packages...
[2/4] Fetching packages...
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
warning "ipfs-core > [email protected]" has unmet peer dependency "abort-controller@*".
warning "react-bootstrap > @restart/[email protected]" has incorrect peer dependency "react@^16.8.0".
warning " > [email protected]" has incorrect peer dependency "react@^16.3.0".
warning " > [email protected]" has incorrect peer dependency "react-dom@^16.3.0".
warning "webpack-dev-server > [email protected]" has incorrect peer dependency "webpack@^4.0.0".
warning "[email protected]" is missing a bundled dependency "node-pre-gyp". This should be reported to the package maintainer.
[4/4] Building fresh packages...
success Saved lockfile.
success Saved 14 new dependencies.
info Direct dependencies
└─ [email protected]
info All dependencies
├─ @hyperswarm/[email protected]
├─ @hyperswarm/[email protected]
├─ @hyperswarm/[email protected]
├─ @hyperswarm/[email protected]
├─ [email protected]
├─ [email protected]
├─ [email protected]
├─ [email protected]
├─ [email protected]
├─ [email protected]
├─ [email protected]
├─ [email protected]
├─ [email protected]
└─ [email protected]
Done in 9.31s.

When electron-rebuild-ing :

(base) marco@pc01:~/webMatters/electronMatters/GGC$ $(npm bin)/electron-rebuild
⠧ Building module: sodium-native, Completed: 0gyp info find Python using Python version 3.7.4 found at "/home/marco 
/anaconda3/bin/python3"
⠇ Building module: sodium-native, Completed: 0gyp http GET https://www.electronjs.org/headers/v11.0.3/node-v11.0.3-
headers.tar.gz
⠇ Building module: sodium-native, Completed: 0gyp http 200 https://www.electronjs.org/headers/v11.0.3/node-v11.0.3-
headers.tar.gz
⠙ Building module: sodium-native, Completed: 0gyp http GET https://www.electronjs.org/headers/v11.0.3/SHASUMS256.txt
⠙ Building module: sodium-native, Completed: 0gyp http 200 https://www.electronjs.org/headers/v11.0.3/SHASUMS256.txt
gyp info spawn /home/marco/anaconda3/bin/python3
gyp info spawn args [
gyp info spawn args   '/home/marco/webMatters/electronMatters/GGC/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args   'binding.gyp',
gyp info spawn args   '-f',
gyp info spawn args   'make',
gyp info spawn args   '-I',
gyp info spawn args   '/home/marco/webMatters/electronMatters/GGC/node_modules/@hyperswarm/dht/node_modules
/sodium-native/build/config.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/home/marco/webMatters/electronMatters/GGC/node_modules/node-gyp/addon.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/home/marco/webMatters/electronMatters/GGC/node_modules/@hyperswarm/dht/node_modules
/sodium-native/11.0.3/include/node/common.gypi',
gyp info spawn args   '-Dlibrary=shared_library',
gyp info spawn args   '-Dvisibility=default',
gyp info spawn args   '-Dnode_root_dir=/home/marco/webMatters/electronMatters/GGC/node_modules/@hyperswarm
/dht/node_modules/sodium-native/11.0.3',
gyp info spawn args   '-Dnode_gyp_dir=/home/marco/webMatters/electronMatters/GGC/node_modules/node-gyp',
gyp info spawn args   '-Dnode_lib_file=/home/marco/webMatters/electronMatters/GGC/node_modules/@hyperswarm 
/dht/node_modules/sodium-native/11.0.3/<(target_arch)/node.lib',
gyp info spawn args   '-Dmodule_root_dir=/home/marco/webMatters/electronMatters/GGC/node_modules/@hyperswarm
/dht/node_modules/sodium-native',
gyp info spawn args   '-Dnode_engine=v8',
gyp info spawn args   '--depth=.',
gyp info spawn args   '--no-parallel',
gyp info spawn args   '--generator-output',
gyp info spawn args   'build',
gyp info spawn args   '-Goutput_dir=.'
gyp info spawn args ]
⠸ Building module: sodium-native, Completed: 0gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
make: Entering directory '/home/marco/webMatters/electronMatters/GGC/node_modules/@hyperswarm/dht/node_modules
/sodium-native/build'
  CC(target) Release/obj.target/sodium/binding.o
In file included from ../binding.c:5:
../libsodium/src/libsodium/include/sodium.h:5:10: fatal error: sodium/version.h: No such file or directory
    5 | #include "sodium/version.h"
     |            ^~~~~~~~~~~~~~~~~~
compilation terminated.
sodium.target.mk:123: recipe for target 'Release/obj.target/sodium/binding.o' failed
make: *** [Release/obj.target/sodium/binding.o] Error 1
make: Leaving directory '/home/marco/webMatters/electronMatters/GGC/node_modules/@hyperswarm/dht/node_modules
/sodium-native/build'
✖ Rebuild Failed

An unhandled error occurred inside electron-rebuild
node-gyp failed to rebuild '/home/marco/webMatters/electronMatters/GGC/node_modules/@hyperswarm/dht/node_modules   
/sodium-native'.
Error: `make` failed with exit code: 2

Error: node-gyp failed to rebuild '/home/marco/webMatters/electronMatters/GGC/node_modules/@hyperswarm  
/dht/node_modules/sodium-native'.
Error: `make` failed with exit code: 2

    at ModuleRebuilder.rebuildNodeGypModule (/home/marco/webMatters/electronMatters/GGC/node_modules/electron-rebuild
/lib/src/module-rebuilder.js:193:19)
    at processTicksAndRejections (internal/process/task_queues.js:93:5)
    at async Rebuilder.rebuildModuleAt (/home/marco/webMatters/electronMatters/GGC/node_modules/electron-rebuild/lib/src
/rebuild.js:190:9)
    at async Rebuilder.rebuild (/home/marco/webMatters/electronMatters/GGC/node_modules/electron-rebuild/lib/src
/rebuild.js:152:17)
    at async /home/marco/webMatters/electronMatters/GGC/node_modules/electron-rebuild/lib/src/cli.js:146:9

Other info:

node version: v.14.5.0
"devDependencies": {
  "@types/react": "^17.0.0",
  "@types/react-dom": "^17.0.0",
  "electron: "11.0.2"
 },
"dependencies": {
  "react": "^17.0.1",
   "react-dom": "^17.0.1"
 }

How to solve the problem?

No peer connections if joining after swarm initialization

testing on the same machine, and 2 machines over LAN

two nodes connect as expected

// pc a
const swarm = hyperswarm()
swarm.join(hash, {lookup:true, announce: true})
swarm.on("connection", () => { console.log("yuppie") })
// pc b
const swarm = hyperswarm()
swarm.join(hash, {lookup:true, announce: true})
swarm.on("connection", () => { console.log("yuppie") })

two nodes do not connect

// pc a
const swarm = hyperswarm()
setTimeout(() => swarm.join(hash, {lookup:true, announce: true}), 5000)
swarm.on("connection", () => { console.log("never happens") })
// pc b
const swarm = hyperswarm()
setTimeout(() => swarm.join(hash, {lookup:true, announce: true}), 10000)
swarm.on("connection", () => { console.log("never happens") })

workaround (the nodes connect)

// pc a
const swarm = hyperswarm()
swarm.network.bind()
setTimeout(() => swarm.join(hash, {lookup:true, announce: true}), 5000)
swarm.on("connection", () => { console.log("yuppie") })
// pc b
const swarm = hyperswarm()
swarm.network.bind()
setTimeout(() => swarm.join(hash, {lookup:true, announce: true}), 10000)
swarm.on("connection", () => { console.log("yuppie") })

Maybe I am missing something in the docs?
Maybe it has something to do with how the discovery cycle works?

difference with discovery-swarm

While searching for DHT-related projects, I came across these two similar projects, with @mafintosh being in the contributors' list for both.

On the surface they both seem to be doing the same thing, so I'm confused as to which one should be used:

discovery-swarm seems to be based on bittorrent-dht at the bottom, while Hyperswarm seems to be based on dht-rpc.

I would really appreciate some clarity on whether these projects are different, and/or which one we should prefer for future-proofing.

Typescript support / docs

I really like this module, I love that it is way simpler to use and understand than many other solutions out there. But I wish it would work better with typescript and all the other tooling out there.
That's why I started working on 'porting' Hyperswarm to TypeScript on my fork. It's not yet complete, and I'm probably not the best TypeScript dev in existence, but it's a starting point that I use in my project.
Since I think other users of the module might benefit from the things I've been doing, i.e. typings, docs, etc., I thought I'd ask what the wisest thing to do would be.
Are you open to accepting a .d.ts file PR? Or should my fork be kept separate?
I've generated a bit of WIP docs from the code as well, maybe that can be put to use somehow.
I'm here to help make this module more accessible if that's possible!
Cheers!
