
hive.go's Introduction

A utility library for the GoShimmer and Hornet node software


About • Prerequisites • Installation • Getting started • Supporting the project • Joining the discussion


About

Hive.go is a shared library that is used in the GoShimmer, Hornet and IOTA Core node software. This library contains shared:

  • Data structures
  • Utility methods
  • Abstractions

This is beta software, so there may be performance and stability issues. Please report any issues in our issue tracker.

Prerequisites

To use the library, you need to have at least version 1.13 of Go installed on your device.

To check if you have Go installed, run the following command:

go version

If Go is installed, you should see the version that's installed.

Installation

To install Hive.go and its dependencies, you can use one of the following options:

  • If you use Go modules, just import the packages that you want to use

    import (
        "github.com/iotaledger/hive.go/logger"
        "github.com/iotaledger/hive.go/node"
    )
  • To download the library from GitHub, use the go get command

    go get github.com/iotaledger/hive.go

Getting started

After you've installed the library, you can use it in your project.

For example, to create a new logger instance:

import "github.com/iotaledger/hive.go/logger"

log := logger.NewLogger("myNewLoggerName")

Activating deadlock detection

To replace the mutexes in the syncutils package with Go deadlock, use the deadlock build flag when compiling your program.
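
For example, assuming your program's main package is in the current directory, the build tag can be passed to the Go toolchain like this (and likewise to go test for running tests with deadlock detection enabled):

go build -tags deadlock .
go test -tags deadlock ./...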

Supporting the project

If this library has been useful to you and you feel like contributing, consider submitting a bug report, feature request or a pull request.

See our contributing guidelines for more information.

Joining the discussion

If you want to get involved in the community, need help with getting set up, have any issues related to the library or just want to discuss IOTA, Distributed Ledger Technology (DLT), and IoT with other people, feel free to join our Discord.

hive.go's People

Contributors

acha-bill, alexsporn, apenzk, capossele, daria305, dependabot[bot], dessaya, dr-electron, georgysavva, grtlr, hlxid, hmoog, howjmay, iotmod, jakescahill, jkrvivian, jonastheis, jorgemmsilva, juliusan, karimodm, legacycode, lmoe, luca-moser, lzpap, muxxer, oliviasaa, philippgackstatter, piotrm50, rajivshah3, wollac


hive.go's Issues

Data not persistent after restart in object storage

Bug description

When shutting down the object storage (from GoShimmer), something might not get properly persisted in the database. Upon restart, the node has a different number of messages in its storage than before the restart.

Steps To reproduce the bug

  1. Run GoShimmer from the feat/sync_revamp_inv branch, possibly together with Prometheus and Grafana. Set metrics.local.db in the config file to true.
  2. Run the node for a couple of minutes. Observe the exported tangle_solid_message_count metric (or the values printed to the console from here).
  3. Stop the node (gracefully).
  4. Start the node again.
  5. The exported tangle_solid_message_count metric shows a different number than before the restart.

Expected behaviour

(Solid) message count in the object storage stays the same after restart.

Actual behaviour

(Solid) message count in the object storage varies randomly, depending on the machine you are running on. It might even be 0, or a greater value than before.

Note

According to @luca-moser , the root of the problem might be in the object storage, specifically, the batchwriter.

kvstore MapDB implementation behaves differently from Badger implementation

Each call to WithRealm creates a separate data set (map) for the realm, while in the Badger implementation a realm is a prefixed view of the same DB. The MapDB behaviour is incorrect.

func (db *MapDB) WithRealm(realm kvstore.Realm) kvstore.KVStore {

Failing test:

package mapdb_test

import (
	"testing"

	"github.com/iotaledger/hive.go/kvstore"
	"github.com/iotaledger/hive.go/kvstore/mapdb"
	"github.com/stretchr/testify/assert"
)

// two views of the same realm must see each other's writes
func TestWithRealmSharesData(t *testing.T) {
	db := mapdb.NewMapDB()

	realm := kvstore.Realm("really?")
	r1 := db.WithRealm(realm)

	key := []byte("key")
	value := []byte("value")
	err := r1.Set(key, value)
	assert.NoError(t, err)

	r2 := db.WithRealm(realm)

	musthave, err := r2.Has(key)
	assert.NoError(t, err)
	assert.True(t, musthave)
}
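
For context, the Badger-backed store treats a realm as a key prefix over one shared database, so two handles for the same realm see the same data. The following is an illustrative, stand-alone sketch of that behaviour using a plain map; the type and method names here are made up for the example and are not the hive.go API.

package main

import "fmt"

// prefixedStore is an illustrative stand-in for a prefix-based realm view:
// every realm handle shares the same backing map and only differs in the
// prefix it prepends to keys.
type prefixedStore struct {
	data   map[string][]byte
	prefix string
}

// withRealm returns a view over the same map, scoped by the realm prefix.
func (s *prefixedStore) withRealm(realm string) *prefixedStore {
	return &prefixedStore{data: s.data, prefix: realm}
}

func (s *prefixedStore) set(key, value []byte) {
	s.data[s.prefix+string(key)] = value
}

func (s *prefixedStore) has(key []byte) bool {
	_, ok := s.data[s.prefix+string(key)]
	return ok
}

func main() {
	db := &prefixedStore{data: map[string][]byte{}}

	r1 := db.withRealm("really?")
	r1.set([]byte("key"), []byte("value"))

	// a second handle for the same realm sees the write, because both
	// handles share the same backing map
	r2 := db.withRealm("really?")
	fmt.Println(r2.has([]byte("key"))) // true
}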

Making it suitable to run on FreeBSD

Description

hive.go is used by Hornet. The only thing preventing Hornet from compiling from source on FreeBSD is a missing platform file, "constants_freebsd.go", in this dependency. Its content can be identical to "constants_darwin.go".

Motivation

This would enlarge the user base.

Are you planning to do it yourself in a pull request?

I could, but it is such a minimal change that it is hardly worth it; the developers can perform it in a split second.

Seed provided with node.seed not stored, when goshimmer crashes at very first start

Bug description

GoShimmer does not store the provided node.seed in the peer database when it is started for the very first time and does not shut down gracefully.

hive.go reads the seed from the command line and keeps it in memory. The seed is only stored when GoShimmer stops gracefully:

2021-12-02T18:33:59+01:00 INFO Peer peer/plugin.go:68 saved identity 9bUpriDHjQDa

If GoShimmer crashes during the very first run, the seed is not stored in the peerdb. When GoShimmer is started a second time, it checks whether a seed is stored in the peerdb and generates a new one if not. This results in an error:

2021-12-02T18:43:26+01:00       FATAL   Peer    peer/plugin.go:100      private key derived from the seed defined in the config does not correspond with the already stored private key in the database: identities - pub keys (cfg/db): 9bUpriDHjQDaQhRpaPYsn94rVWX18WPnXSZV5xX7GxEx vs. DQgLQkcDBRzdkLEshBJzzwdX8YRQ6SuR9LhbAxngyR6S

Go version

  • Go version: go1.17.3 linux/amd64

Hardware specification

What hardware are you using?

  • Operating system: Docker and Ubuntu
  • RAM: 32 GB
  • Cores: 8
  • Device: Workstation

Steps To reproduce the bug


  1. Make sure no peerdb exists: rm peerdb/*
  2. Start GoShimmer with node.seed: ./goshimmer --node.seed="base64:YOUR-SEED"
  3. Kill GoShimmer: kill -SIGKILL 58168
  4. Start GoShimmer with node.seed: ./goshimmer --node.seed="base64:YOUR-SEED"
  5. Error: private key derived from the seed defined in the config does not correspond with the already stored private key in the database

Expected behaviour

Correct node.seed is stored in peerdb and GoShimmer starts without error.

Actual behaviour

GoShimmer crashes after the second start with the error shown below.

Errors

2021-12-02T18:51:16+01:00       INFO    Node    banner/plugin.go:40     GoShimmer version v0.8.3 ...
2021-12-02T18:51:16+01:00       INFO    Node    banner/plugin.go:41     Loading plugins ...
2021-12-02T18:51:16+01:00       INFO    Node    node/node.go:107        Loading Plugin: Banner ... done
2021-12-02T18:51:16+01:00       INFO    Node    node/node.go:107        Loading Plugin: Config ... done
2021-12-02T18:51:16+01:00       INFO    Node    node/node.go:107        Loading Plugin: Logger ... done
2021-12-02T18:51:16+01:00       INFO    Node    node/node.go:107        Loading Plugin: CLI ... done
2021-12-02T18:51:16+01:00       INFO    Node    node/node.go:107        Loading Plugin: GracefulShutdown ... done
2021-12-02T18:51:16+01:00       INFO    Database        database/plugin.go:136  Running database garbage collection...
2021-12-02T18:51:16+01:00       INFO    Database        database/plugin.go:142  Database garbage collection done, took 1.159082ms...
2021-12-02T18:51:16+01:00       INFO    Node    node/node.go:107        Loading Plugin: Database ... done
2021-12-02T18:51:16+01:00       FATAL   Peer    peer/plugin.go:100      private key derived from the seed defined in the config does not correspond with the already stored private key in the database: identities - pub keys (cfg/db): 9bUpriDHjQDaQhRpaPYsn94rVWX18WPnXSZV5xX7GxEx vs. 7sqXTjVPsURM4Gc9oKgLdnJdURFBVasHJqGxDozqkyhi
github.com/iotaledger/goshimmer/plugins/peer.configureLocalPeer

Failure to compile on 32bit systems

if l > math.MaxUint32 {

math.MaxUint32 (1<<32 - 1) is larger than the maximum int value on 32-bit systems,
which causes compilation errors on lines 264 and 265:

/home/ee2/Projects/Proj1/pkg/mod/github.com/iotaledger/hive.go/serializer/[email protected]/serializer.go:262:10:
math.MaxUint32 (untyped int constant 4294967295) overflows int

/home/ee2/Projects/Proj1/pkg/mod/github.com/iotaledger/hive.go/serializer/[email protected]/serializer.go:263:113:
cannot use math.MaxUint32 (untyped int constant 4294967295) as int value in argument to fmt.Errorf (overflows)

One of our community members encountered these errors while compiling for Android, which is still commonly a 32-bit platform.

The cause seems to be that the l parameter of writeSliceLength is an int. It might be better to make this a uint64 instead, since lengths are unsigned anyway, and accept the three casts that will be necessary as a result.
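
A minimal sketch of the suggested change, under the assumption that the length check lives in a helper shaped like writeSliceLength (the surrounding serializer code is omitted here); taking the length as a uint64 keeps the comparison against math.MaxUint32 valid even where int is 32 bits wide:

package serializer

import (
	"fmt"
	"math"
)

// writeSliceLength is a sketch of the proposed signature: the length is taken
// as a uint64 so that the comparison against math.MaxUint32 never overflows,
// regardless of the platform's int size. Callers would cast their len(...)
// results to uint64 at the call site.
func writeSliceLength(l uint64) error {
	if l > math.MaxUint32 {
		return fmt.Errorf("slice length %d exceeds the maximum of %d", l, uint64(math.MaxUint32))
	}

	// ... encode the length as a uint32 and write it out ...
	return nil
}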

Flatten & cleanup hive.go packages

  • ... and move stuff from goshimmer into here.
  • many generics are wrappers around non-generic types -> need to be made native generics
  • clean up APIs and make them more consistent
  • check downstream dependencies (eg crypto libs?)
  • move generic stuff from GoShimmer into hive
  • move stuff from hive into GoShimmer / new node implementation if it's specific to the node
  • unify / reuse between node implementations as much as possible (eg proper networking layer)
  • probably some more things

ObjectStorage tests fail occasionally on slow GitHub actions

Bug description

ObjectStorage tests sometimes fail on CI with a 10-minute timeout.
One example is: https://github.com/iotaledger/hive.go/runs/922612527?check_suite_focus=true

The log says that two goroutines are waiting to acquire the objectStorage.cacheMutex lock. Somewhere in the code the lock is either not released or still in use.

2020-07-29T09:18:23.0737955Z ?   	github.com/iotaledger/hive.go/objectstorage	[no test files]
2020-07-29T09:28:17.2860513Z panic: test timed out after 10m0s
2020-07-29T09:28:17.2860851Z 
2020-07-29T09:28:17.2861026Z goroutine 204031 [running]:
2020-07-29T09:28:17.2861157Z testing.(*M).startAlarm.func1()
2020-07-29T09:28:17.2861361Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/testing/testing.go:1377 +0xdf
2020-07-29T09:28:17.2861589Z created by time.goFunc
2020-07-29T09:28:17.2861776Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/time/sleep.go:168 +0x44
2020-07-29T09:28:17.2861930Z 
2020-07-29T09:28:17.2862069Z goroutine 1 [chan receive, 9 minutes]:
2020-07-29T09:28:17.2862314Z testing.(*T).Run(0xc0000acb00, 0xa46595, 0x1d, 0xa5ccb8, 0x48e301)
2020-07-29T09:28:17.2862509Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/testing/testing.go:961 +0x377
2020-07-29T09:28:17.2862749Z testing.runTests.func1(0xc0000ac300)
2020-07-29T09:28:17.2862941Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/testing/testing.go:1202 +0x78
2020-07-29T09:28:17.2863164Z testing.tRunner(0xc0000ac300, 0xc0000bbdc0)
2020-07-29T09:28:17.2863353Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/testing/testing.go:909 +0xc9
2020-07-29T09:28:17.2863599Z testing.runTests(0xc00000f480, 0x1032460, 0xd, 0xd, 0x0)
2020-07-29T09:28:17.2863979Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/testing/testing.go:1200 +0x2a7
2020-07-29T09:28:17.2864950Z testing.(*M).Run(0xc000116180, 0x0)
2020-07-29T09:28:17.2865146Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/testing/testing.go:1117 +0x176
2020-07-29T09:28:17.2865360Z main.main()
2020-07-29T09:28:17.2865471Z 	_testmain.go:74 +0x135
2020-07-29T09:28:17.2865545Z 
2020-07-29T09:28:17.2865688Z goroutine 6 [chan receive, 9 minutes]:
2020-07-29T09:28:17.2865860Z github.com/iotaledger/hive.go/objectstorage.init.0.func1()
2020-07-29T09:28:17.2866307Z 	/home/runner/work/hive.go/hive.go/objectstorage/leak_detection.go:146 +0x43
2020-07-29T09:28:17.2866567Z created by github.com/iotaledger/hive.go/objectstorage.init.0
2020-07-29T09:28:17.2866846Z 	/home/runner/work/hive.go/hive.go/objectstorage/leak_detection.go:144 +0x35
2020-07-29T09:28:17.2867018Z 

this one is waiting on the cacheMutex:
2020-07-29T09:28:17.2867187Z goroutine 16 [semacquire, 9 minutes]: 
2020-07-29T09:28:17.2867654Z sync.runtime_SemacquireMutex(0xc0009ee92c, 0x7f0408309800, 0x0)
2020-07-29T09:28:17.2868367Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/runtime/sema.go:71 +0x47
2020-07-29T09:28:17.2868601Z sync.(*RWMutex).RLock(...)
2020-07-29T09:28:17.2868786Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/sync/rwmutex.go:50
2020-07-29T09:28:17.2869137Z github.com/iotaledger/hive.go/objectstorage.(*ObjectStorage).deepIterateThroughCachedElements(0xc0009ee900, 0xc00028ed50, 0xc000058e20, 0xc000109800)
2020-07-29T09:28:17.2869426Z 	/home/runner/work/hive.go/hive.go/objectstorage/object_storage.go:916 +0x495
2020-07-29T09:28:17.2869718Z github.com/iotaledger/hive.go/objectstorage.(*ObjectStorage).flush(0xc0009ee900)
2020-07-29T09:28:17.2869987Z 	/home/runner/work/hive.go/hive.go/objectstorage/object_storage.go:889 +0xf2
2020-07-29T09:28:17.2870265Z github.com/iotaledger/hive.go/objectstorage.(*ObjectStorage).Shutdown(0xc0009ee900)
2020-07-29T09:28:17.2870533Z 	/home/runner/work/hive.go/hive.go/objectstorage/object_storage.go:473 +0x33
2020-07-29T09:28:17.2870803Z github.com/iotaledger/hive.go/objectstorage/test.TestStoreIfAbsentTriggersOnce(0xc0000acb00)
2020-07-29T09:28:17.2871095Z 	/home/runner/work/hive.go/hive.go/objectstorage/test/objectstorage_test.go:364 +0x2a9
2020-07-29T09:28:17.2871330Z testing.tRunner(0xc0000acb00, 0xa5ccb8)
2020-07-29T09:28:17.2871522Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/testing/testing.go:909 +0xc9
2020-07-29T09:28:17.2871730Z created by testing.(*T).Run
2020-07-29T09:28:17.2871920Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/testing/testing.go:960 +0x350
2020-07-29T09:28:17.2872071Z 
2020-07-29T09:28:17.2872218Z goroutine 10 [select]:
2020-07-29T09:28:17.2872422Z github.com/iotaledger/hive.go/objectstorage.(*BatchedWriter).runBatchWriter(0xc00006d1c0)
2020-07-29T09:28:17.2872692Z 	/home/runner/work/hive.go/hive.go/objectstorage/batch_writer.go:144 +0x13c
2020-07-29T09:28:17.2872881Z created by github.com/iotaledger/hive.go/objectstorage.(*BatchedWriter).StartBatchWriter
2020-07-29T09:28:17.2873147Z 	/home/runner/work/hive.go/hive.go/objectstorage/batch_writer.go:44 +0x93
2020-07-29T09:28:17.2873330Z 
2020-07-29T09:28:17.2873465Z goroutine 21 [select]:
2020-07-29T09:28:17.2873740Z github.com/iotaledger/hive.go/objectstorage.(*BatchedWriter).runBatchWriter(0xc000180240)
2020-07-29T09:28:17.2874009Z 	/home/runner/work/hive.go/hive.go/objectstorage/batch_writer.go:144 +0x13c
2020-07-29T09:28:17.2874295Z created by github.com/iotaledger/hive.go/objectstorage.(*BatchedWriter).StartBatchWriter
2020-07-29T09:28:17.2874562Z 	/home/runner/work/hive.go/hive.go/objectstorage/batch_writer.go:44 +0x93
2020-07-29T09:28:17.2874709Z 

this one is waiting on the cacheMutex:
2020-07-29T09:28:17.2874865Z goroutine 142572 [semacquire, 9 minutes]:
2020-07-29T09:28:17.2875096Z sync.runtime_SemacquireMutex(0xc0009ee928, 0x0, 0x0)
2020-07-29T09:28:17.2875283Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/runtime/sema.go:71 +0x47
2020-07-29T09:28:17.2875423Z sync.(*RWMutex).Lock(0xc0009ee920)
2020-07-29T09:28:17.2875672Z 	/opt/hostedtoolcache/go/1.13.14/x64/src/sync/rwmutex.go:103 +0x88
2020-07-29T09:28:17.2875973Z github.com/iotaledger/hive.go/objectstorage.(*BatchedWriter).releaseObject(0xc001582400, 0xc0012b8f00)
2020-07-29T09:28:17.2876356Z 	/home/runner/work/hive.go/hive.go/objectstorage/batch_writer.go:124 +0x43
2020-07-29T09:28:17.2876643Z github.com/iotaledger/hive.go/objectstorage.(*BatchedWriter).runBatchWriter(0xc001582400)
2020-07-29T09:28:17.2876912Z 	/home/runner/work/hive.go/hive.go/objectstorage/batch_writer.go:170 +0x251
2020-07-29T09:28:17.2877195Z created by github.com/iotaledger/hive.go/objectstorage.(*BatchedWriter).StartBatchWriter
2020-07-29T09:28:17.2877548Z 	/home/runner/work/hive.go/hive.go/objectstorage/batch_writer.go:44 +0x93
2020-07-29T09:28:17.2877787Z FAIL	github.com/iotaledger/hive.go/objectstorage/test	600.013s

It must be a write lock that is holding it, because one of the goroutines trying to acquire it is requesting a read lock.
One suspicious part of the code where the lock might not have been released is this method:

func (objectStorage *ObjectStorage) accessPartitionedCache(key []byte, createMissingCachedObject bool) (cachedObject *CachedObjectImpl, cacheHit bool) {
	// acquire read lock so nobody can write to the cache
	objectStorage.cacheMutex.RLock()

	// ensure appropriate lock is unlocked
	var writeLocked bool
	defer func() {
		if writeLocked {
			objectStorage.cacheMutex.Unlock()
		} else {
			objectStorage.cacheMutex.RUnlock()
		}
	}()

	// initialize variables for the loop
	keyPartitionCount := len(objectStorage.options.keyPartitions)
	currentPartition := objectStorage.cachedObjects
	keyOffset := 0
	traversedPartitions := make([]string, 0)

	// loop through partitions up until the object layer
	for i := 0; i < keyPartitionCount-1; i++ {
		// determine the current key segment
		keyPartitionLength := objectStorage.options.keyPartitions[i]
		partitionKey := string(key[keyOffset : keyOffset+keyPartitionLength])
		keyOffset += keyPartitionLength

		// if the target partition is found: advance to the next level
		subPartition, subPartitionExists := currentPartition[partitionKey]
		if subPartitionExists {
			currentPartition = subPartition.(map[string]interface{})
			traversedPartitions = append(traversedPartitions, partitionKey)

			continue
		}

		// abort if we are not supposed to create new entries
		if !createMissingCachedObject {
			return
		}

		// switch to write locks and check for existence again
		if !writeLocked {
			objectStorage.partitionsManager.Retain(traversedPartitions)
			// defer in a loop is usually bad, but this only gets called once because we switch to a write lock once
			defer objectStorage.partitionsManager.Release(traversedPartitions)

			objectStorage.cacheMutex.RUnlock()
			objectStorage.cacheMutex.Lock()
			writeLocked = true

			// if the target partition was created while switching locks: advance to the next level
			subPartition, subPartitionExists = currentPartition[partitionKey]
			if subPartitionExists {
				currentPartition = subPartition.(map[string]interface{})

				continue
			}
		}

		// create and advance partition
		subPartition = make(map[string]interface{})
		currentPartition[partitionKey] = subPartition
		currentPartition = subPartition.(map[string]interface{})
	}

	// determine the object key
	keyPartitionLength := objectStorage.options.keyPartitions[keyPartitionCount-1]
	partitionKey := string(key[keyOffset : keyOffset+keyPartitionLength])

	// return if object exists
	if alreadyCachedObject, cachedObjectExists := currentPartition[partitionKey]; cachedObjectExists {
		cacheHit = true
		cachedObject = alreadyCachedObject.(*CachedObjectImpl).retain().(*CachedObjectImpl)

		return
	}

	// abort if we are not supposed to create new entries
	if !createMissingCachedObject {
		return
	}

	// switch to write locks and check for existence again
	if !writeLocked {
		objectStorage.partitionsManager.Retain(traversedPartitions)
		defer objectStorage.partitionsManager.Release(traversedPartitions)

		objectStorage.cacheMutex.RUnlock()
		objectStorage.cacheMutex.Lock()
		writeLocked = true

		if alreadyCachedObject, cachedObjectExists := currentPartition[partitionKey]; cachedObjectExists {
			cacheHit = true
			cachedObject = alreadyCachedObject.(*CachedObjectImpl).retain().(*CachedObjectImpl)

			return
		}
	}

	// mark objectStorage as non-empty
	if objectStorage.size == 0 {
		objectStorage.cachedObjectsEmpty.Add(1)
	}

	// create a new cached object ...
	cachedObject = newCachedObject(objectStorage, key)
	cachedObject.retain()

	// ... and store it
	currentPartition[partitionKey] = cachedObject
	objectStorage.size++

	return
}

Steps To reproduce the bug

  1. Run the tests on GitHub Actions and you will most likely see it.

The issue never happens when running the tests locally on your machine.

Maybe related to #147
