
dispersy's Introduction

Tribler


Towards making BitTorrent anonymous and impossible to shut down.

We use our own dedicated Tor-like network for anonymous torrent downloading: an onion-routing overlay with hidden-services-based seeding and end-to-end encryption, for which we implemented and enhanced the Tor protocol specifications.

Tribler aims to give anonymous access to content. We are trying to make privacy, strong cryptography, and authentication the Internet norm.

For the past 11 years we have been building a very robust Peer-to-Peer system. Today Tribler is robust: "the only way to take Tribler down is to take The Internet down" (but a single software bug could end everything).

Obtaining the latest release

Simply download the latest package for your OS from the release page.

Obtaining support

If you found a bug or have a feature request, please make sure you read our contributing page and then open an issue. We will have a look at it ASAP.

Contributing

Contributions are very welcome! If you are interested in contributing code or otherwise, please have a look at our contributing page. Have a look at the issue tracker if you are looking for inspiration :).

Running Tribler from the repository

We support development on Linux, macOS and Windows. We have written documentation that guides you through installing the required packages when setting up a Tribler development environment.

Packaging Tribler

We have written guides on how to package Tribler for distribution on various systems.

Docker support

A Dockerfile is provided with the source code, which can be used to build the Docker image.

To build the docker image:

docker build -t triblercore/triblercore:latest .

To run the built docker image:

docker run -p 20100:20100 --net="host" triblercore/triblercore:latest

Note that by default the REST API is bound to localhost inside the container, so to access the API from the host machine the container network needs to be set to host mode (--net="host").

To use the local state directory and downloads directory, the volumes can be mounted:

docker run -p 20100:20100 --net="host" -v ~/.Tribler:/state -v ~/downloads/TriblerDownloads:/downloads triblercore/triblercore:latest

The REST APIs are now accessible at: http://localhost:20100/docs

Docker Compose

The Tribler core can also be started using Docker Compose. For that, a docker-compose.yml file is available in the project root directory.
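For reference, a minimal sketch of what such a compose file could look like (the file shipped in the repository may differ; the service name and mounted paths below are assumptions based on the docker run examples above):

```yaml
# hypothetical docker-compose.yml sketch, mirroring the docker run flags above
services:
  triblercore:
    build: .
    image: triblercore/triblercore:latest
    network_mode: host   # REST API binds to localhost inside the container
    volumes:
      - ~/.Tribler:/state
      - ~/downloads/TriblerDownloads:/downloads
```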

To run via docker compose:

docker-compose up

To run in detached mode:

docker-compose up -d

To stop Tribler:

docker-compose down

Get in touch!

We like to hear your feedback and suggestions. To reach out to us, you can join our Discord server or create a post on our forums.

License

This file is part of Tribler, Copyright 2004-2023. Tribler is licensed under the GNU General Public License, version 3 (GPL-3.0), as published by the Free Software Foundation on 29 June 2007. Tribler is free software: you can redistribute it and/or modify it under the terms of this license. Tribler is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. For more details, see the full terms and conditions in the LICENSE.txt file at the root of this repository, or visit https://www.gnu.org/licenses/gpl-3.0.html.

dispersy's People

Contributors

badrock, boudewijn-tribler, brussee, captain-coder, devos50, egbertbouman, ichorid, joswinter, lfdversluis, lipufei, mitchellolsthoorn, nielszeilemaker, pimveldhuisen, qstokkink, rjruigrok, snorberhuis, synctext, whirm, xoriole


dispersy's Issues

Cleanup Candidates.

With the introduction of _iter_category and _iter_categories, candidates are no longer kept active through incoming and outgoing walks in other communities.

We decided to drop this cross community activity feature entirely. The following cleanup can be performed:

  • Candidates become community specific
  • remove Candidate is_any_active, associate, is_associated, disassociate, in_community, all_inactive methods
  • rewrite Candidate is_all_obsolete into is_obsolete
  • remove community parameters
  • remove global Dispersy candidate list (the code still uses this every now and then, must be rewritten to use community candidates only)

This issue is related to #38. Though if possible we want to fix #38 before doing a code cleanup.

Code readability and resolve pylint warnings

Goal: make code easier to understand for new developers and reduce pylint errors/warnings

Currently there are some automatic warnings (not all need fixing):
http://jenkins.tribler.org/jenkins/job/Test_dispersy_devel/616/violations/

The first step is moving functionality out of the main dispersy.py in order to reduce its size from the current 4631 SLOC:
http://jenkins.tribler.org/jenkins/job/Test_dispersy_devel/616/violations/file/tribler/Tribler/dispersy/dispersy.py/

Feedback is needed: what to change

Test performance difference with logger

We should test the performance difference between calling the logger with and without if __debug__.

If the performance difference is negligible we won't need to use if __debug__, and we can also remove the exception we make for this from the pylint config file (#8).

Create a PeerCache using bootstraptribler.txt

I was planning to add a peercache to Dispersy. Looking at the code, the bootstraptribler.txt file seemed like a logical choice to start with.

I'm planning to add another column, which determines if we should or should not sync with a peer. For our normal bootstrap peers it would turn sync off, and for all cached peers sync would be turned on.

By modifying get_bootstrap_candidates() in bootstrap.py, I could make it return either a BootstrapCandidate or a normal Candidate depending on the sync on/off setting.

Most of the other code would still work, as the 1% probability of selecting a bootstrap/peercache peer should be fine, and the create_introduction_request turns off sync depending on the instance-type of the candidate, i.e. whenever it gets a BootstrapCandidate.

However, I'm a bit concerned about the

assert all(not sock_address in self._candidates for sock_address in self._dispersy._bootstrap_candidates.iterkeys()), "none of the bootstrap candidates may be in self._candidates"

lines I'm seeing in the code. These will definitely cause problems.
I get why these were added, but is it possible to remove them from the get_candidate iterators etc.? I guess we should only check this assert whenever we add a new candidate to _candidates, right?

Tribler decorator dependency

Currently Tribler is using the attach_profiler decorator from Dispersy.
I have no problems with this decorator; it is actually quite handy. However, could we think of a better solution than directly importing the decorator from Dispersy?

Maybe a shared library containing a couple of these decorators etc?

Please create a basic tutorial

A basic tutorial showing how to configure, instantiate, and perform basic operations with dispersy would be very helpful.

Dispersy.stop() problems

We are experiencing some problems with the Dispersy.stop() method with regard to the timeout property. If the callback is not able to stop within this time, it returns before running all expired tasks, which by itself is, I guess, expected.

However, if one of those not-yet-run expired tasks is a dispersythread_data_came_in task, things will break. The Dispersy.stop() method has by then cleaned/removed all singletons, and hence we get stack traces like:

E         dispersy/callback:679  _loop                     | 
Traceback (most recent call last):
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/callback.py", line 677, in _loop
    result = call[0](*call[1], **call[2])
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/endpoint.py", line 136, in dispersythread_data_came_in
    timestamp)
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/dispersy.py", line 1786, in on_incoming_packets
    self._on_batch_cache(meta, batch)
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/dispersy.py", line 1858, in _on_batch_cache
    messages = list(self._convert_batch_into_messages(batch))
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/dispersy.py", line 2061, in _convert_batch_into_messages
    yield conversion.decode_message(candidate, packet)
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/conversion.py", line 1408, in decode_message
    return self._decode_message(candidate, data, verify, False)
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/conversion.py", line 1349, in _decode_message
    decode_functions.authentication(placeholder)
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/conversion.py", line 1170, in _decode_member_authentication
    members = [member for member in self._community.dispersy.get_members_from_id(member_id) if member.has_identity(self._community)]
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/dispersy.py", line 705, in get_members_from_id
    if public_key]
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/member.py", line 370, in __init__
    super(Member, self).__init__(public_key, private_key)
  File "/home/jenkins/workspace/Test_tribler_full-ui-run_61x/tribler/Tribler/dispersy/member.py", line 154, in __init__
    assert DispersyDatabase.has_instance(), "DispersyDatabase has not yet been created"
AssertionError: DispersyDatabase has not yet been created
E         dispersy/callback:679  _loop                     

Can we implement a fix for this? Maybe not process new messages while in the "stopping" state?

Calls to convert_packet_to_meta_message() might pass tunnel-prefixed packets

Line endpoint.py:212 calls convert_packet_to_meta_message() which expects a packet without the optional Swift tunnel prefix. If this gets called with a tunnel-prefixed message it will result in a get_community() call with the wrong CID. The tunnel prefix is not removed before the call.

convert_packet_to_meta_message() strips 2 bytes (the Dispersy version field) from the beginning of the packet to get the CID, then calls get_community(cid) (which might have side-effects). Assuming the CID is deadbeaf:

  • without tunnel prefix: 0001deadbeaf => deadbeaf (correct)
  • with tunnel prefix: ffffffff0001deadbeaf => ffff0001dead

This might also apply to: endpoint.py:377, endpoint.py:396, and endpoint.py:142.

For example, this affects TrackerDispersy, which tracks communities when its get_community() is called.
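A minimal sketch of the proposed fix: strip the tunnel prefix, if present, before extracting the CID. The prefix value, field sizes, and function name below are illustrative assumptions, not the actual Dispersy code.

```python
TUNNEL_PREFIX = b"\xff\xff\xff\xff"  # assumed 4-byte Swift tunnel marker

def extract_cid(packet: bytes) -> bytes:
    """Return the community id, stripping the optional tunnel prefix first."""
    if packet.startswith(TUNNEL_PREFIX):
        packet = packet[len(TUNNEL_PREFIX):]
    # skip the 2-byte Dispersy version field; the CID follows
    return packet[2:22]
```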

Encrypt packets sent to a candidate

I want to start encrypting packets which are sent to candidates. However, I want to vary the encryption key depending on the candidate.

Therefore, I was thinking of implementing a new endpoint which accepts any other endpoint as a parameter. Next, when sending packets using the send method, this new endpoint would use a callback to figure out whether packets sent to this candidate should be encrypted or not.
However, as it is now, the endpoint does not know for which community these packets are sent. Therefore, it is a bit difficult to encrypt packets for only one community.

Any ideas how I can improve on this idea?

Timeout-problem in RequestCache

During the DAS4 runs I often get this error:

"callback.py", line 613, in _loop
    result = call[0](*call[1], **call[2])
"requestcache.py", line 103, in _on_timeout
    cache = self._identifiers[identifier]
KeyError: 33437

Looking at the code, this can only be caused by calling the _on_timeout method twice, or by _on_timeout still being called after unregistering it.
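One possible hardening (a sketch of the idea, not the actual requestcache.py): make _on_timeout idempotent by popping with a default, so a duplicate or late invocation becomes a no-op instead of a KeyError.

```python
class RequestCache:
    def __init__(self):
        self._identifiers = {}

    def set(self, identifier, cache):
        self._identifiers[identifier] = cache

    def _on_timeout(self, identifier):
        # pop with a default instead of indexing, so a duplicate or
        # late timeout callback cannot raise KeyError
        cache = self._identifiers.pop(identifier, None)
        if cache is None:
            return
        cache.on_timeout()
```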

Run unit tests in -O optimized mode

Before we started using nosetests, all unit tests were run with __debug__ both enabled and disabled. Unfortunately nosetests doesn't (easily?) allow us to run in -O optimized mode. However, I would like to see this feature again to avoid problems like the one fixed in #39.

RequestCache.pop() behavior

First discussed here #106 (comment).

We should improve the pop behavior, it doesn't make sense.

  • We could remove the cleanup_delay entirely, although this would mean that we no longer have the ability to ensure that an identifier/number isn't used for some time. It is currently used for the introduction request, adding 4.5 seconds before the identifier can be used again. Without the cleanup_delay the randomness in the numbers will have to prevent clashes.
  • We could add the identifier to a different dictionary or set, removing it from RequestCache._identifiers. This would result in has, get, and pop no longer being able to find the identifier. We would need a new way to see if an identifier is unclaimed though, for instance by introducing is_used or something similar that checks _identifiers and the new dict/set.
  • I'm sure there are other options.

Anyway, removing the cleanup_delay entirely is the simplest change, and would result in a simplification of the RequestCache. Are there cases where the delay is essential?

LAN address never changes after startup.

Dispersy assesses its LAN address once after startup, while the WAN address keeps being continuously assessed. This allows the WAN address to keep working in roaming scenarios while the LAN address breaks.

A quick and simple fix would be to reassess (by calling _guess_lan_address) the LAN address whenever the WAN IP address (so we will ignore the port number) changes.
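A sketch of that quick fix (the class and attribute names are simplified assumptions, not the actual Dispersy code): reassess the LAN address only when the WAN IP changes, ignoring port-only changes.

```python
class AddressTracker:
    def __init__(self, wan_address, guess_lan_address):
        self._wan_address = wan_address            # (ip, port) tuple
        self._guess_lan_address = guess_lan_address
        self._lan_address = guess_lan_address()    # assessed once at startup

    def update_wan_address(self, new_wan_address):
        # only a change of the WAN *IP* triggers a LAN reassessment;
        # port-only changes are ignored
        if new_wan_address[0] != self._wan_address[0]:
            self._lan_address = self._guess_lan_address()
        self._wan_address = new_wan_address
```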

Checking messages with sequence numbers

A few months ago we discovered that a bug in handling incoming sequence numbers had caused peers to incorrectly accept messages. This in turn caused peers to return incorrect messages when asked for messages with a specific sequence number (range). This resulted in most peers sending an endless stream of missing-sequence requests and incorrect responses.

This is what happened:

  1. messages are received and gathered in batches
  2. the whole batch is decoded (packet string -> Message instance)
  3. global time and sequence numbers are checked for the whole batch
  4. community check is performed for the whole batch (i.e. check_undo)
  5. the whole batch is stored in the database
  6. the whole batch is handled by the community (i.e. on_undo)

The problem was that some messages were dropped or delayed in step 4. This created gaps in the sequence numbers, but this had already been checked and passed in the previous step 3.

A few months ago I solved this by simply adding a step 4.5, which did the same as step 3: removing or delaying the messages that had become invalid because of the gaps. However, this is currently only done for the undo_{own,other} messages. Hence we need a proper fix for this problem.
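The proposed step 4.5 amounts to re-checking for gaps after the community checks. A generic sketch of such a re-check (the message objects here are stand-ins, not Dispersy's classes):

```python
def drop_after_gap(messages):
    """Keep only the leading run of consecutive sequence numbers.

    Intended to run after the community check (step 4), which may have
    dropped or delayed messages and thereby created gaps that the earlier
    sequence-number check (step 3) could not see.
    """
    kept = []
    expected = None
    for message in messages:  # assumed sorted by sequence number
        if expected is not None and message.sequence_number != expected:
            break  # gap found: everything from here on must be delayed
        kept.append(message)
        expected = message.sequence_number + 1
    return kept
```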

RequestCache set/has

I got a community which uses the RequestCache to store requests. When receiving a message, I check if the RequestCache has any requests with this identifier stored; if not, I accept it.

However, if the RequestCache does not have this type of Cache stored but a different one, the has method will return false, yet the set method will still not allow me to store the new Cache.

Can we improve upon this? Why is the request-cache global?

variable 'community' referenced before assignment

As reported by user VVS on our forum

Traceback (most recent call last):
  File "/home/vvs/tribler-6.2.0-rc1/Tribler/dispersy/dispersy.py", line 2703, in store_update_forward
    messages[0].handle_callback(messages)
  File "/home/vvs/tribler-6.2.0-rc1/Tribler/dispersy/dispersy.py", line 2492, in on_introduction_response
    self.wan_address_vote(payload.destination_address, candidate)
  File "/home/vvs/tribler-6.2.0-rc1/Tribler/dispersy/dispersy.py", line 1066, in wan_address_vote
    for candidate in [candidate for candidate in community.candidates.itervalues() if candidate.wan_address == self._wan_address]:
UnboundLocalError: local variable 'community' referenced before assignment

Possible attack vector with IPv4 spoofing and 16-bit RequestCache guessing

Dispersy is light-weight and uses only a two-way handshake. We have no design yet which preserves our light-weight character and prevents IPv4 spoofing.
Dispersy uses a RequestCache to store outstanding requests and as a mechanism to issue timeouts on pending requests. The request cache has a 16-bit random identifier, unique only per community and message type (intro-req, missing-proof, etc).

When a peer is sent a request, no check is done that the IPv4 address of the response matches.

By sending a flood of 65k packets to a single peer it is possible to obtain a false match with an outstanding RequestCache entry.
Code pointer: https://github.com/tribler/dispersy/blob/master/dispersy.py#L2402

Complication: the private-search community now uses the request cache as a "transaction number", with multiple peers involved.
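The numbers behind the attack, as a back-of-the-envelope check (not Dispersy code): a 16-bit identifier gives only 2**16 = 65536 possible values, so an attacker who floods all of them is guaranteed a match with any outstanding entry.

```python
IDENTIFIER_BITS = 16
SPACE = 2 ** IDENTIFIER_BITS  # 65536 possible identifiers

def hit_probability(guesses: int, outstanding: int = 1) -> float:
    """Probability that `guesses` distinct identifiers hit at least one of
    `outstanding` cache entries, assuming uniformly random identifiers."""
    miss = 1.0
    for i in range(outstanding):
        miss *= (SPACE - guesses - i) / (SPACE - i)
    return 1.0 - miss
```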

Respond with peer specific conversion.

A peer sending a request with conversion N should ideally get a response with conversion N.

We can only do this for messages that use CandidateDestination; the candidate instance should hold a list of valid conversions, of which the highest can be used when encoding the message.

Acceptable_global_time is checked when sync is turned off

Currently, the acceptable_global_time property is checked in the check_full_sync method.
However, in communities which have sync turned off (searchcommunity), torrent messages which are created on demand by other peers are dropped, as the current global_time is higher than the default range.

Callback call method hiding Exceptions

Whenever an exception occurs in a method which is scheduled using the call method, I get a traceback consisting of the method "calling" the call method.

Could we expose the original traceback?
An example can be found in Tribler/Main/Utility/GuiDBHandler.py, where I include the traceback in the exception report and print it to the console before raising the exception.

A better solution seems to be to provide the raise statement with the traceback:
http://docs.python.org/2/reference/simple_stmts.html#grammar-token-raise_stmt as suggested here:
http://stackoverflow.com/questions/1603940/how-can-i-modify-a-python-traceback-object-when-raising-an-exception/13898994#13898994
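In modern Python the suggestion boils down to capturing sys.exc_info() where the scheduled call fails and re-raising with the original traceback attached. A sketch of the idea (Dispersy itself was Python 2, where the three-argument raise from the linked documentation applies instead):

```python
import sys

def run_captured(func):
    """Run a scheduled call, capturing the exception info on failure."""
    try:
        return func(), None
    except Exception:
        return None, sys.exc_info()

def reraise(exc_info):
    """Re-raise later, at the reporting site, with the original traceback."""
    _exc_type, exc_value, tb = exc_info
    raise exc_value.with_traceback(tb)
```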

Example community with an object-relational mapper

Outcome: a tutorial with one or all three ORM packages used
Goal: use higher-level relational algebra, instead of hand-coded SQL
Motivation: less code, easy to expand and re-use

Options: SQLAlchemy, Storm, Django.

One of many discussions on this: http://stackoverflow.com/questions/53428/what-are-some-good-python-orm-solutions

FYI: originally created by Johan in the Tribler project Tribler/tribler#130

Migrate from dprint to logging

Boudewijn and I (I'm sure Niels too :)) think that we should move to the standard python logging system.

Pros:

  • Dprint is a custom made logging framework that only a few people understand.
  • We will be able to integrate better with the rest of the python ecosystem (nose comes to mind).
  • It will be easier for newcomers to develop and debug dispersy.
  • We will trim dispersy's codebase a bit.

The estimate is around a day of work.

It's not a fun task, but I volunteer to do it.

Johan, do you approve? :)

Attack vector with peer list pollution: DoS, Sybil, eclipse, sinkhole attack

Problem: sending fake identities to victim will pollute internal data structures
Solution: as a first line of defense we use rate control

Each overlay has a list of neighbors and Dispersy is no different.
The incoming and re-visit list of peers is easy to pollute from any Internet location against any victim. Simply create new fake identities and send them at full speed to a single UDP port. The victim will quickly have only fake nodes in its caches and will need to fall back to a central bootstrap server after many failed connection attempts.
Or worse, it will not detect that all neighbors come from a single IPv4 address or /24 block.
Dispersy connects and syncs with them, without knowing it is no longer connected to any honest peer.

Proof-of-concept implementation of an attack: http://jenkins.tribler.org/job/Experiment_AllChannel+Channelcommunity_attacker/18/
This attack does not create fake identities, only pollute candidate list with 1 identity on several ports.

Other tickets have priority, but in 2014 we can hopefully address this known vulnerability.

Dispersy should apply rate control at three levels: /32, /24 and /16 blocks of addresses. We need to ensure that no single block (or small set of blocks) can dominate the neighbor list. Exact details: t.b.d.
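A sketch of what such prefix-based rate control could look like (the caps and class below are illustrative assumptions, not a Dispersy design):

```python
import ipaddress
from collections import Counter

class PrefixRateLimiter:
    """Cap how many accepted candidates may share a /32, /24 or /16 block,
    so that no single block can dominate the neighbor list."""

    def __init__(self, limits=None):
        self.limits = limits or {32: 2, 24: 8, 16: 32}  # assumed caps
        self.counts = {bits: Counter() for bits in self.limits}

    def allow(self, ip: str) -> bool:
        nets = {bits: ipaddress.ip_network(f"{ip}/{bits}", strict=False)
                for bits in self.limits}
        # reject if any enclosing block already reached its cap
        if any(self.counts[bits][net] >= self.limits[bits]
               for bits, net in nets.items()):
            return False
        for bits, net in nets.items():
            self.counts[bits][net] += 1
        return True
```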

Why does Dispersy have a MetaObject class?

I have a question regarding the MetaObject object. All payload objects have to extend this and implement an Implementation class.

But why? What's the benefit of this approach vs a normal constructor?

Dispersy release policy

We need to decide on the Dispersy release policy.

We are starting to get projects using it, should we do a Dispersy release every time we do a Tribler release?

Get rid of singletons

As Boudewijn told me, we can do without them, and this will make dispersy's start()/stop() methods ( #6 ) easier to implement (fewer things to clean up manually).

Dispersy.stop() is unloading communities in random order.

We should remember the order in which communities are defined using Dispersy.define_auto_load() and use the reverse of that order when unloading communities when Dispersy.stop() is called.

This will prevent bugs from occurring when communities depend on each other, i.e. in a future database cleanup we want AllChannelCommunity to create a database that all ChannelCommunities will need.
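A minimal sketch of the bookkeeping (names simplified, not the actual Dispersy API): record the definition order and walk it in reverse on stop.

```python
class Dispersy:
    def __init__(self):
        self._auto_load = []  # (name, unload_callback) in definition order

    def define_auto_load(self, name, unload_callback):
        self._auto_load.append((name, unload_callback))

    def stop(self):
        # unload communities in reverse definition order, so that e.g.
        # ChannelCommunities go down before the AllChannelCommunity
        # whose database they depend on
        for _name, unload_callback in reversed(self._auto_load):
            unload_callback()
```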

Add version / debug message

Unfortunately we do not know the Tribler / Dispersy versions of peers we are connected to. A simple way to solve this is to add a message that requests this information.

It is possible to add some debug numbers to such a message, such as number of dropped / delayed packets, etc.

logger.py must not call config

Currently logger.py is calling config. This has to be moved to the main.py and unit-test base class, as this is currently disabling loggers when Dispersy is used as a library, e.g. in Tribler and Gumby.
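The standard pattern for this (a generic sketch, not the actual logger.py): the library only creates named loggers and attaches a NullHandler, while handler and level configuration is left entirely to the embedding application.

```python
import logging

# library side (e.g. dispersy's logger.py): never call basicConfig or
# fileConfig here; just hand out named loggers and silence the
# "no handlers could be found" warning with a NullHandler
logging.getLogger("dispersy").addHandler(logging.NullHandler())

def get_logger(name):
    return logging.getLogger(name)

# application side (e.g. Tribler's main.py or a unit-test base class) is
# the only place that configures output, for example:
# logging.basicConfig(level=logging.INFO)
```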

Dispersy walker has fewer candidates than expected (2)

(This issue is a duplicate of #38, which was closed after it fixed a bug causing the trackers to no longer respond.)

We would expect more candidates sooner. Especially the AllChannelCommunity takes much longer to obtain a good number of candidates than we would expect.

Strangely enough the walk success rate is relatively high (around 90%). This contradicts the lack of available candidates.

This behavior must be either solved or explained. Please investigate.

User-specific callback ids should be either unicode or string

Currently the id_ parameter can be specified as either unicode or string. This results in warnings, as seen below.

/Tribler/dispersy/callback.py:399: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
  if tup[1] == id_:

Solutions:

  1. Allow only strings
  2. Allow only unicode strings
  3. Allow both, but always convert them to either string or unicode string
  4. Others?

I slightly prefer plain strings over unicode. I am not a fan of the third option, as it results in unnecessary 'under the hood' conversions.

Less verbose debug output

Since we enabled the Dispersy debug output in our nose runs, we have large amounts of output. Perhaps too much.

The logger provides DEBUG, INFO, WARNING, ERROR, and CRITICAL levels. Currently most of Dispersy's output is DEBUG, while a select few messages are INFO (i.e. start, stop, IP change, candidate statistics).

There are four options:

  1. Keep it as it is (debug output will remain very large)
  2. Move all debug to info (no distinction can be made between what is currently info)
  3. Put the SQL statements behind a separate if flag.
  4. Disable the SQL statement output entirely.

Candidate.is_obsolete is global

Communities attempt to clean their candidates at a regular interval. However, because is_obsolete is global, a candidate which is active in another community will not be cleaned.

This makes sense, as we don't want to garbage collect this specific candidate instance. However, the community should remove it from its local candidate list and unvote its wan address vote.

Can we change this behaviour and pass is_obsolete the community as an argument?

Community naming convention.

Currently three naming conventions are used within community.py:

  • Name without 'dispersy' prefix
    For example: get_member, acceptable_global_time, ...
  • Name with 'dispersy' prefix
    For example: dispersy_enable_candidate_walker, dispersy_store, ...
  • Name with 'create_dispersy' prefix
    For example: create_dispersy_authorize, create_dispersy_signature_request, ...

Time permitting we can rename (some of) these methods/properties to use the same convention. At the very least we should choose one of the above conventions and ensure that any new methods/properties use the same convention.

Dispersy walker has fewer candidates than expected

We would expect more candidates sooner. Especially the AllChannelCommunity takes much longer to obtain a good number of candidates than we would expect.

Strangely enough the walk success rate is relatively high (around 80%). This contradicts the lack of available candidates.

This behaviour must be either solved or explained. Please investigate.

Dispersy.stop currently always returns True.

Obviously it should return either True or False depending on success or failure.

  • Endpoint.close should return either True or False,
  • Database.close should return either True or False,
  • Callback stop already returns True or False.

Also, Tribler's LaunchManyCore should output a warning when Dispersy is unable to shut down.
