
graphsense-transformation's Introduction

Graphsense Website

This is based on https://github.com/nicolas-van/bootstrap-4-github-pages.

A Bootstrap 4 project for GitHub Pages and Jekyll.

  • A full Bootstrap 4 theme usable both on GitHub Pages and with standalone Jekyll.
  • Recompiles Bootstrap from SCSS files, which makes it possible to customize Bootstrap's variables and use Bootstrap themes.
  • Full support for Bootstrap's JavaScript plugins.
  • Supports all features of GitHub Pages and Jekyll.

See the website for demonstration and documentation.

Development

With Docker installed, run make watch and point your browser to localhost:4000.

Statistics

Run make REST_ENDPOINT=http://example.com stats to fetch the latest Graphsense statistics and commit them.

graphsense-transformation's People

Contributors

defconst, mdragaschnig, romankarl, soad003


graphsense-transformation's Issues

Transformation job: Size exceeds Integer.MAX_VALUE

Hello,
While running the transformation job (rdd at TransformationJob.scala:92) using submit.sh, I got this error:
ERROR Executor: Exception in task 37460.0 in stage 1.0 (TID 37464) java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE

I'm running Cassandra 3.11 and Spark 2.2.2 under Scala 2.11.8 in Ubuntu 18.04 Server.
Any clue?
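
For context: this error typically means that a single Spark partition or shuffle block grew beyond 2 GB, the maximum size of a Java byte array. A common workaround, assuming the default partition count is simply too low for a full blockchain, is to raise it so partitions stay well below that limit:

// Minimal sketch: raise the shuffle partition count so no single shuffle
// block exceeds 2 GB; 4096 is an assumption, tune it to your data volume.
val spark = org.apache.spark.sql.SparkSession.builder
  .appName("TransformationJob")
  .config("spark.sql.shuffle.partitions", "4096") // default is 200
  .getOrCreate()
// For plain RDD stages, rdd.repartition(4096) has the same effect.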

Merge cluster ID into address table

At the moment the mapping between addresses and clusters is stored in a separate table, address_cluster, which requires an additional lookup when retrieving address entities.

Since there is an n:1 relationship between addresses and clusters, we could instead store the cluster_id as part of the address table and drop the address_cluster table.
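
A minimal Spark sketch of the proposed merge (DataFrame and column names are assumptions):

import org.apache.spark.sql.DataFrame

// Hypothetical inputs: address(address, ...) and addressCluster(address, cluster_id)
def mergeClusterId(address: DataFrame, addressCluster: DataFrame): DataFrame =
  // n:1 relationship: each address maps to at most one cluster, so a left
  // join adds exactly one cluster_id column without duplicating rows
  address.join(addressCluster, Seq("address"), "left")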

ScalaReflectionLock error in submit.sh

Hi,

I'm trying to get the graphsense-transformation running for testing within the COPKIT project.

The sbt test runs fine, but I'm getting the following exception in submit.sh:

INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@abff8b7{/metrics/json,null,AVAILABLE,@spark}
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/catalyst/package$ScalaReflectionLock$
at org.apache.spark.sql.catalyst.ReflectionLock$.<init>(ReflectionLock.scala:5)
at org.apache.spark.sql.catalyst.ReflectionLock$.<clinit>(ReflectionLock.scala)
at com.datastax.spark.connector.mapper.DefaultColumnMapper.<init>(DefaultColumnMapper.scala:44)
at com.datastax.spark.connector.mapper.LowPriorityColumnMapper$class.defaultColumnMapper(ColumnMapper.scala:51)
at com.datastax.spark.connector.mapper.ColumnMapper$.defaultColumnMapper(ColumnMapper.scala:55)
at at.ac.ait.TransformationJob$.main(TransformationJob.scala:75)
at at.ac.ait.TransformationJob.main(TransformationJob.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.catalyst.package$ScalaReflectionLock$
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 17 more

I'm running Spark 2.3.1 under Scala 2.11.8 in Ubuntu 18.04 Server.

I have (unsuccessfully) updated the project/Dependencies.scala file to include the latest versions of spark-sql (2.3.2) and spark-cassandra-connector (2.3.2).

Any clue?
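
For context: org.apache.spark.sql.catalyst.package$ScalaReflectionLock$ was removed in Spark 2.3, so this error usually indicates that a spark-cassandra-connector built against Spark 2.2 is still on the classpath. A sketch of project/Dependencies.scala with matching versions (the exact numbers are assumptions; run sbt clean before repackaging so no stale classes survive):

// project/Dependencies.scala (sketch; keep Spark and the connector on the
// same minor version line)
import sbt._

object Dependencies {
  val sparkVersion = "2.3.1"     // must match the Spark version on the cluster
  val connectorVersion = "2.3.2" // connector line built against Spark 2.3

  val sparkSql = "org.apache.spark" %% "spark-sql" % sparkVersion % "provided"
  val connector = "com.datastax.spark" %% "spark-cassandra-connector" % connectorVersion
}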

Compute anomalous growth score

Anomalous growth is one reliability indicator and should therefore be computed for each cluster. If we store growth rates in a table, we could even display them as a chart.
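
As raw material for such a score, the new addresses per cluster could be bucketed by period, e.g. (a sketch; the weekly bucket and column names are assumptions):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

// Sketch: number of addresses first seen per cluster and week
def weeklyGrowth(clusterAddresses: DataFrame): DataFrame =
  clusterAddresses
    .withColumn("week", floor(col("first_tx_timestamp") / (7 * 24 * 3600)))
    .groupBy("cluster", "week")
    .count()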

Clusters of size 1 in LTC

Certain LTC addresses (e.g., LQ3B36Yv2rBTxdgAdYpU2UcEZsaNwXeATk) are of size 1 and therefore, under the current design, do not form a cluster.

Revisit need for fields (e.g., src and dst properties)

At the moment there are many fields in the dst schema that are not used by the dashboard. Please review the schema, mark the tables/fields that are still in use, and remove all others from the transformation job (this should reduce computation time).

Use address int IDs instead of address byte arrays

Compute an int ID for each address right at the beginning of the transformation. Store that ID as "internal_id" in the address table and use it throughout the subsequent steps (cluster ID = min address ID; relation src and dst become IDs; possibly bucketed mod X).
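
A sketch of the ID assignment (names are assumptions). zipWithIndex yields dense IDs, unlike monotonically_increasing_id, at the cost of an extra Spark job:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

// Sketch: assign dense int IDs to distinct addresses
def assignAddressIds(spark: SparkSession, addresses: RDD[String]) = {
  import spark.implicits._
  addresses
    .sortBy(identity) // a stable ordering makes the IDs reproducible
    .zipWithIndex()
    .map { case (address, id) => (address, id.toInt) } // the "internal_id"
    .toDF("address", "internal_id")
}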

Generalize value

Generalize the value type for arbitrary fiat values; align with the Ethereum table:

CREATE TYPE currency (
value bigint,
fiat_values list<float>
);
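
On the Scala side this UDT could map to a case class, e.g. (a sketch; the spark-cassandra-connector maps UDT fields to case-class fields by name):

// Sketch of a matching Scala-side type for the proposed UDT
case class Currency(value: Long, fiatValues: Seq[Float])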

cluster_addresses: eliminate unused fields

The dashboard displays:

  • Address
  • First usage
  • Last usage
  • Balance
  • Received

Therefore, cluster_addresses can be reduced to (@myrho please confirm):

CREATE TABLE cluster_addresses (
cluster_group int,
cluster int,
address_id int,
first_tx_timestamp int,
last_tx_timestamp int,
total_received FROZEN <currency>,
total_spent FROZEN <currency>,
PRIMARY KEY (cluster_group, cluster, address_id)
);

Unresolved dependency issue: at.ac.ait#graphsense-clustering_2.11;0.4.1: not found

I got stuck at "Execute Transformation Locally".

To reproduce:

user@system: ./scripts/create_target_schema.sh
Creating target keyspace in Cassandra
user@system: sbt test
[warn] 	::::::::::::::::::::::::::::::::::::::::::::::
[warn] 	::          UNRESOLVED DEPENDENCIES         ::
[warn] 	::::::::::::::::::::::::::::::::::::::::::::::
[warn] 	:: at.ac.ait#graphsense-clustering_2.11;0.4.1: not found
[warn] 	::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn] 	Note: Unresolved dependencies path:
[warn] 		at.ac.ait:graphsense-clustering_2.11:0.4.1 (/graphsense-transformation/build.sbt#L27)
[warn] 		  +- at.ac.ait:graphsense-transformation_2.11:0.4.1
[error] sbt.librarymanagement.ResolveException: unresolved dependency: at.ac.ait#graphsense-clustering_2.11;0.4.1: not found
[error] 	at sbt.internal.librarymanagement.IvyActions$.resolveAndRetrieve(IvyActions.scala:332)
[error] 	at sbt.internal.librarymanagement.IvyActions$.$anonfun$updateEither$1(IvyActions.scala:208)
[error] 	at sbt.internal.librarymanagement.IvySbt$Module.$anonfun$withModule$1(Ivy.scala:239)
[error] 	at sbt.internal.librarymanagement.IvySbt.$anonfun$withIvy$1(Ivy.scala:204)
[error] 	at sbt.internal.librarymanagement.IvySbt.sbt$internal$librarymanagement$IvySbt$$action$1(Ivy.scala:70)
[error] 	at sbt.internal.librarymanagement.IvySbt$$anon$3.call(Ivy.scala:77)
[error] 	at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:95)
[error] 	at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:80)
[error] 	at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:99)
[error] 	at xsbt.boot.Using$.withResource(Using.scala:10)
[error] 	at xsbt.boot.Using$.apply(Using.scala:9)
[error] 	at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:60)
[error] 	at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:50)
[error] 	at xsbt.boot.Locks$.apply0(Locks.scala:31)
[error] 	at xsbt.boot.Locks$.apply(Locks.scala:28)
[error] 	at sbt.internal.librarymanagement.IvySbt.withDefaultLogger(Ivy.scala:77)
[error] 	at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:199)
[error] 	at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:196)
[error] 	at sbt.internal.librarymanagement.IvySbt$Module.withModule(Ivy.scala:238)
[error] 	at sbt.internal.librarymanagement.IvyActions$.updateEither(IvyActions.scala:193)
[error] 	at sbt.librarymanagement.ivy.IvyDependencyResolution.update(IvyDependencyResolution.scala:20)
[error] 	at sbt.librarymanagement.DependencyResolution.update(DependencyResolution.scala:56)
[error] 	at sbt.internal.LibraryManagement$.resolve$1(LibraryManagement.scala:45)
[error] 	at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$12(LibraryManagement.scala:93)
[error] 	at sbt.util.Tracked$.$anonfun$lastOutput$1(Tracked.scala:68)
[error] 	at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$19(LibraryManagement.scala:106)
[error] 	at scala.util.control.Exception$Catch.apply(Exception.scala:224)
[error] 	at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11(LibraryManagement.scala:106)
[error] 	at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11$adapted(LibraryManagement.scala:89)
[error] 	at sbt.util.Tracked$.$anonfun$inputChanged$1(Tracked.scala:149)
[error] 	at sbt.internal.LibraryManagement$.cachedUpdate(LibraryManagement.scala:120)
[error] 	at sbt.Classpaths$.$anonfun$updateTask$5(Defaults.scala:2561)
[error] 	at scala.Function1.$anonfun$compose$1(Function1.scala:44)
[error] 	at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:40)
[error] 	at sbt.std.Transform$$anon$4.work(System.scala:67)
[error] 	at sbt.Execute.$anonfun$submit$2(Execute.scala:269)
[error] 	at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] 	at sbt.Execute.work(Execute.scala:278)
[error] 	at sbt.Execute.$anonfun$submit$1(Execute.scala:269)
[error] 	at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:178)
[error] 	at sbt.CompletionService$$anon$2.call(CompletionService.scala:37)
[error] 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error] 	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
[error] 	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
[error] 	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
[error] 	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
[error] 	at java.base/java.lang.Thread.run(Thread.java:834)
[error] (update) sbt.librarymanagement.ResolveException: unresolved dependency: at.ac.ait#graphsense-clustering_2.11;0.4.1: not found
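
A note on the likely cause: graphsense-clustering is not published to a public artifact repository, so sbt cannot resolve it from the network. It most likely has to be built and installed into the local Ivy repository first, i.e. clone the graphsense-clustering repository and run sbt publishLocal there (with a version matching the one referenced in build.sbt) before sbt test can resolve the dependency.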

Always fill the tx_list field in cluster_outgoing_relations

Currently, in the table cluster_outgoing_relations, the field tx_list is only non-empty if there are fewer than N = 100 txs between two clusters. GraphSense users, especially in the dashboard, usually need as few as one tx as proof that two clusters are linked, not all N.

I'd suggest always filling this field with at least one tx hash. The maximum number of listed txs could even be limited to 1.
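
A sketch of the proposed aggregation (column names and the cap are assumptions; slice requires Spark 2.4+):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

// Sketch: keep at least one but at most txLimit tx hashes per cluster relation
def relationTxList(clusterTxs: DataFrame, txLimit: Int = 1): DataFrame =
  clusterTxs
    .groupBy("src_cluster", "dst_cluster")
    .agg(slice(sort_array(collect_list(col("tx_hash"))), 1, txLimit).as("tx_list"))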

Compute number of tagged addresses in summary statistics

At the moment the summary statistics show the number of tags. In the long run we should display the number of tagged addresses on the dashboard landing page, since a cluster tag obviously tags many addresses. So:

tagged_addresses = total addresses in tagged clusters + tagged addresses in untagged clusters.
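
As a Spark sketch of that formula (DataFrame and column names are assumptions):

import org.apache.spark.sql.DataFrame

// Sketch: tagged_addresses = addresses in tagged clusters
//                          + tagged addresses in untagged clusters
def taggedAddresses(addressCluster: DataFrame, // (address, cluster)
                    clusterTags: DataFrame,    // (cluster)
                    addressTags: DataFrame     // (address)
                   ): Long = {
  val inTaggedClusters = addressCluster.join(clusterTags, "cluster").select("address")
  val inUntaggedClusters = addressTags.select("address").except(inTaggedClusters)
  inTaggedClusters.union(inUntaggedClusters).distinct().count()
}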

address_transactions: replace blob with id

CREATE TABLE address_transactions (
address_id_group int,
address_id int,
tx_id bigint,
value bigint,
height int,
timestamp int,
PRIMARY KEY (address_id_group, address_id, tx_id)
);

Flatten address_transactions partition distribution

Compute and add secondary partition lookup IDs for flattening skewed tables:

CREATE TABLE address_transactions (
address_id_group int,
address_id_secondary_group int,
address_id int,
...
);

CREATE TABLE address_transactions_secondary_ids (
address_id_group int PRIMARY KEY,
max_secondary_id int
);
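
A sketch of how the secondary IDs could be computed (bucketSize and column names are assumptions):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Sketch: split oversized address_id_group partitions into secondary buckets
// of at most bucketSize rows
def addSecondaryGroup(addressTxs: DataFrame, bucketSize: Int = 50000): DataFrame = {
  val w = Window.partitionBy("address_id_group").orderBy("address_id", "tx_id")
  addressTxs.withColumn(
    "address_id_secondary_group",
    ((row_number().over(w) - 1) / bucketSize).cast("int"))
}

address_transactions_secondary_ids would then hold the per-group maximum of address_id_secondary_group.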

Entity Total Received / Total Spent computation: distinguish between within-cluster and external transactions

At the moment, total received and total spent are computed by summing over all incoming and outgoing transactions of an entity. However, this also includes within-cluster transactions, i.e., transactions whose input and output addresses belong to the same cluster.

Proposed solution: compute two different values for total received and spent (a sketch follows the list):

  • total_received_all -> considers all transactions, also within-cluster transactions
  • total_received -> considers only transactions from other entities
    ...the same for total_spent
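
A minimal Spark sketch of the two aggregates (column names are assumptions):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

// Sketch: total_received_all sums everything; total_received excludes
// transfers whose source lies in the same cluster
def receivedTotals(incoming: DataFrame): DataFrame =
  incoming.groupBy("cluster").agg(
    sum("value").as("total_received_all"),
    sum(when(col("src_cluster") =!= col("cluster"), col("value"))
      .otherwise(0L)).as("total_received"))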

Include noExplicit/ImplicitTags in address/cluster tables

Precomputing the number of explicit and implicit tags (implicit only in the case of addresses) during the transformation reduces the number of requests needed when computing the egonet. A counting sketch follows the suggested fields below.

Suggested fields for tables

Address

  • noExplicitTags: Int
  • noImplicitTags: Int
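
The counts themselves would be simple aggregations, e.g. (a sketch; the is_explicit flag and column names are assumptions):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

// Sketch: precompute per-address tag counts during the transformation
def tagCounts(addressTags: DataFrame): DataFrame =
  addressTags.groupBy("address").agg(
    sum(when(col("is_explicit"), 1).otherwise(0)).as("no_explicit_tags"),
    sum(when(!col("is_explicit"), 1).otherwise(0)).as("no_implicit_tags"))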

Writing table job stops

Hello.

I am trying to set this up on my laptop, but it stops working when I run ./submit.

I checked the Spark job at http://localhost:4040/jobs and it is still running, but it does not write the computed transaction data to the Cassandra table.

My laptop setup:

  • spark 2.4.3
  • openjdk version 1.8.0_212
  • sbt 1.2.8
  • cassandra 3.11.4

Any ideas? I would like to know how to solve this problem.
Thanks.

Hardware requirements

Hello,
I am trying to run the project on my MacBook (Intel Core i7 CPU, 16 GB memory), but I am having many issues: first I got an out-of-memory error, and after fixing that, my disk filled up.

What are the hardware requirements for running the project?

Thank you .

Memory utilization

Something strange is happening with RAM. I get a lot of warnings about memory (screenshot omitted). I set it to 52 GB in submit.sh (screenshot omitted), but it looks like the job does not take the memory size from the configs, because only 12 GB of the 60 GB available are used (screenshot omitted).
Any ideas?
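
A likely explanation (assuming the job runs in Spark local mode, as the submit.sh setup suggests): in local mode the executors live inside the driver JVM, so spark.executor.memory has no effect, and spark.driver.memory cannot be raised from within an already running JVM either. The memory has to be passed to spark-submit itself, e.g. as --driver-memory 52g, before the JVM starts.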

Installation Issue when running sbt test

Hi,
I am trying to run the sbt test and sbt package steps to get this up and running, and I run into the following errors:

[error] sbt.librarymanagement.ResolveException: unresolved dependency: at.ac.ait#graphsense-clustering_2.11;0.4.0: not found
[error] 	at sbt.internal.librarymanagement.IvyActions$.resolveAndRetrieve(IvyActions.scala:332)
[error] 	at sbt.internal.librarymanagement.IvyActions$.$anonfun$updateEither$1(IvyActions.scala:208)
[error] 	at sbt.internal.librarymanagement.IvySbt$Module.$anonfun$withModule$1(Ivy.scala:239)
[error] 	at sbt.internal.librarymanagement.IvySbt.$anonfun$withIvy$1(Ivy.scala:204)
[error] 	at sbt.internal.librarymanagement.IvySbt.sbt$internal$librarymanagement$IvySbt$$action$1(Ivy.scala:70)
[error] 	at sbt.internal.librarymanagement.IvySbt$$anon$3.call(Ivy.scala:77)
[error] 	at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:95)
[error] 	at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:80)
[error] 	at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:99)
[error] 	at xsbt.boot.Using$.withResource(Using.scala:10)
[error] 	at xsbt.boot.Using$.apply(Using.scala:9)
[error] 	at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:60)
[error] 	at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:50)
[error] 	at xsbt.boot.Locks$.apply0(Locks.scala:31)
[error] 	at xsbt.boot.Locks$.apply(Locks.scala:28)
[error] 	at sbt.internal.librarymanagement.IvySbt.withDefaultLogger(Ivy.scala:77)
[error] 	at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:199)
[error] 	at sbt.internal.librarymanagement.IvySbt.withIvy(Ivy.scala:196)
[error] 	at sbt.internal.librarymanagement.IvySbt$Module.withModule(Ivy.scala:238)
[error] 	at sbt.internal.librarymanagement.IvyActions$.updateEither(IvyActions.scala:193)
[error] 	at sbt.librarymanagement.ivy.IvyDependencyResolution.update(IvyDependencyResolution.scala:20)
[error] 	at sbt.librarymanagement.DependencyResolution.update(DependencyResolution.scala:56)
[error] 	at sbt.internal.LibraryManagement$.resolve$1(LibraryManagement.scala:45)
[error] 	at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$12(LibraryManagement.scala:93)
[error] 	at sbt.util.Tracked$.$anonfun$lastOutput$1(Tracked.scala:68)
[error] 	at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$19(LibraryManagement.scala:106)
[error] 	at scala.util.control.Exception$Catch.apply(Exception.scala:224)
[error] 	at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11(LibraryManagement.scala:106)
[error] 	at sbt.internal.LibraryManagement$.$anonfun$cachedUpdate$11$adapted(LibraryManagement.scala:89)
[error] 	at sbt.util.Tracked$.$anonfun$inputChanged$1(Tracked.scala:149)
[error] 	at sbt.internal.LibraryManagement$.cachedUpdate(LibraryManagement.scala:120)
[error] 	at sbt.Classpaths$.$anonfun$updateTask$5(Defaults.scala:2561)
[error] 	at scala.Function1.$anonfun$compose$1(Function1.scala:44)
[error] 	at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:40)
[error] 	at sbt.std.Transform$$anon$4.work(System.scala:67)
[error] 	at sbt.Execute.$anonfun$submit$2(Execute.scala:269)
[error] 	at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] 	at sbt.Execute.work(Execute.scala:278)
[error] 	at sbt.Execute.$anonfun$submit$1(Execute.scala:269)
[error] 	at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:178)
[error] 	at sbt.CompletionService$$anon$2.call(CompletionService.scala:37)
[error] 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[error] 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[error] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[error] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[error] 	at java.lang.Thread.run(Thread.java:748)
[error] (update) sbt.librarymanagement.ResolveException: unresolved dependency: at.ac.ait#graphsense-clustering_2.11;0.4.0: not found

Any help would be much appreciated.

Compute tagging conflicts

Tagging conflicts are another reliability indicator. We could quite easily compute semantic similarities between tag labels and expose a score; some research is needed on this first.

Wrong estimated value computation in 1:n transaction

In a 1-to-many address transaction (57739c57fa3caa12ce6edda73fce10b1656415d875f7b96bf0e65bc83a368007), the estimated value in the address graph should reflect the output value (e.g., 0.1321 for 1Cs9tFxrJpWLY7z6a7oAzbjyvnE2r3a1cC).

On the ./submit stage of installation

When attempting the submit stage of installing graphsense-transformation, I get a SparkSubmit error because the TransformationJob class cannot be loaded.

Error:

19/04/25 22:57:21 WARN DependencyUtils: Local jar /home/dockeruser/graphsense-transformation/target/scala-2.11/graphsense-transformation_2.11-0.4.0.jar does not exist, skipping.
19/04/25 22:57:21 WARN DependencyUtils: Local jar /root/.ivy2/local/at.ac.ait/graphsense-clustering_2.11/0.4.0/jars/graphsense-clustering_2.11.jar does not exist, skipping.
19/04/25 22:57:21 WARN SparkSubmit$$anon$2: Failed to load at.ac.ait.TransformationJob.
java.lang.ClassNotFoundException: at.ac.ait.TransformationJob
        at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:810)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/04/25 22:57:21 INFO ShutdownHookManager: Shutdown hook called
19/04/25 22:57:21 INFO ShutdownHookManager: Deleting directory /tmp/spark-0623bdfa-3959-4d2c-a6c7-8364305461af

Any help would be appreciated.
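
A note on the likely cause: the first two WARN lines say that both local jars do not exist yet. That suggests sbt package (which produces target/scala-2.11/graphsense-transformation_2.11-0.4.0.jar) and sbt publishLocal in graphsense-clustering were not run before ./submit, so spark-submit skips the jars and then cannot find at.ac.ait.TransformationJob. Building both artifacts first should resolve the ClassNotFoundException.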

ZEC exchange rate issue

ZEC Txid: t1P9CEYDzUFpKkqjWGdbCeQBFt2z7aoLxHU

The Zcash amount should be roughly 4.26 EUR, not 490.

Address Total received in EUR does not match incoming_relations total received

Example:

1MYVnbz9BecBihQcXtMA7rnDxVKUWdH52Z shows a total received of 111.5.

1DkRtM5BJ48Gi45uF9whJrKnPqpkz7EyB is an outgoing node of that address; when listing incoming address relations, it shows the above address with a total received of 111.5.

However, when switching to EUR, the values no longer match: 649,387 vs. 961,751.8.

It seems that something is wrong with the conversion.

Adapt address table

If the address lookup tables are written to the raw keyspace, update the address table as follows:

CREATE TABLE address (
addr_id_group int,
addr_id bigint,
no_incoming_txs int,
no_outgoing_txs int,
first_tx FROZEN <tx_id_time>,
last_tx FROZEN <tx_id_time>,
total_received FROZEN <currency>,
total_spent FROZEN <currency>,
in_degree int,
out_degree int,
PRIMARY KEY (addr_id_group, addr_id)
);

Compute tables for lookup-by-tag

Compute two tables supporting lookup of addresses / clusters by tag. Proposed schema:

address_by_tag
|- tagPrefix (first three tag characters)
|- tagPrefix5 (first five tag characters) - if needed
|- tag (lowercase; whitespace and punctuation removed)
|- address (explicitly tagged address)

cluster_by_tag (same as above, with cluster instead of address)
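
The normalization and prefixes could be derived as follows (a sketch; the regex and column names are assumptions):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

// Sketch: normalize tag labels and derive the lookup prefixes
def addressByTag(tags: DataFrame): DataFrame = {
  val norm = regexp_replace(lower(col("label")), "[\\s\\p{Punct}]", "")
  tags.select(
    substring(norm, 1, 3).as("tag_prefix"),
    substring(norm, 1, 5).as("tag_prefix5"), // if needed
    norm.as("tag"),
    col("address"))
}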

CoinJoins are not properly ignored

Wasabi Wallet shows some examples of CoinJoin transactions. BlockSci should be able to detect and remove them, but this is not happening. Below is a CoinJoin tx example:

a7157780b7c696ab24767113d9d34cdbc0eba5c394c89aec4ed1a9feb326bea5
