eellak / gsoc2018-3gm

💫 Automated codification of Greek Legislation with NLP

Home Page: https://openlaws.ellak.gr/

License: GNU General Public License v3.0

Python 39.73% Jupyter Notebook 12.21% TeX 3.59% Shell 0.53% CSS 11.46% HTML 10.37% Makefile 0.61% TypeScript 21.22% JavaScript 0.28%
government-documents legal-texts text-mining codification government-gazette nlp automation python3 gsoc-2018 natural-language-processing

gsoc2018-3gm's Introduction


🚀 Greek Government Gazette Text Mining, Cross Linking and Codification - 3gm

Welcome to the Government Gazette text mining, cross-linking and codification project (3gm for short), which applies Natural Language Processing methods and practices to Greek legislation.

This project aims to provide the most recent version of each law, i.e. an automatically maintained codex, via NLP methods and practices.

About the project

We live in a complex regulatory environment. As citizens, we obey government regulations from many authorities. As members of organized societies and groups, we must obey organizational policies and rules. As social beings, we are bound by the conventions we make with others. As individuals, we are bound by personal rules of conduct. The sheer number and size of regulations can be daunting. We can agree on some general principles but, at the same time, disagree on how these principles apply to specific situations. To minimize such disagreements, regulators are often obliged to create numerous or very large regulations to deal with special cases.

In recent years, plenty of attention has gathered around analyzing public-sector texts via text mining methods enabled by modern libraries, algorithms and practices, and brought to the forefront by open source projects such as textblob, spaCy, SciPy, TensorFlow and NLTK. These collaborative efforts mark a shift towards more efficient understanding of natural language by machines, which can be used in conjunction with public documents to provide useful tools for legislators. This emerging sector is usually referred to as "Computational Law".

This project, developed under the auspices of the Google Summer of Code 2018 program, extracts Government Gazette (ΦΕΚ) texts from the National Printing House (ET), cross-links them with each other and, finally, identifies amendments and applies them to the legal text, providing automatic codification of Greek legislation using Natural Language Processing methods and techniques. This eliminates bureaucratic procedures and saves considerable time for lawyers looking for the most recent versions of statutes in legal databases. Amendment detection is automated so that the amendments and the laws they modify can be merged into a single consolidated law, a procedure known as codification of the law. The new "merged" / modified / codified laws show the current text of a law at any moment. This has traditionally been done by hand, and our aim was to automate it.

Finally, the laws are clustered into topics according to their content using an unsupervised machine learning model (Latent Dirichlet Allocation) to provide a more holistic representation of Greek legislation. Also, for easier indexing, PageRank was used so that the interconnections of the laws are taken into account: the more a legislative text is referenced by other texts, the more important it is ranked.
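The following sketch is a minimal, self-contained illustration of the two techniques mentioned above, using the gensim and networkx libraries listed later in this README. The tiny corpus and citation graph are made-up placeholders; this is not the project's actual 3gm/topic_models.py code.

from gensim import corpora, models
import networkx as nx

# Toy tokenized "laws"; the real pipeline tokenizes full GGG texts.
docs = [["νόμος", "φόρος", "εισόδημα"],
        ["νόμος", "παιδεία", "πανεπιστήμιο"],
        ["φόρος", "ΦΠΑ", "εισόδημα"]]
dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, passes=10)
print(lda.print_topics())

# Citation graph: an edge u -> v means law u references law v,
# so frequently referenced laws receive a higher PageRank score.
g = nx.DiGraph([("ν. 4474/2017", "ν. 4170/2013"), ("ν. 4512/2018", "ν. 4170/2013")])
print(sorted(nx.pagerank(g).items(), key=lambda kv: -kv[1]))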

Through the analysis, categorization and codification of GG documents, this project facilitates key elements of everyday life, such as the elimination of bureaucracy and the efficient management of public documents, enabling substantial savings for lawyers and citizens.

A presentation of the project, given at FOSSCOMM 2018 at the University of Crete, is available here.

Demo

The project is hosted at 3gm.ellak.gr or openlaws.ellak.gr. A video presentation of the project is available here.

Timeline

You can view the detailed timeline here. What has been done during the program can be found in the Final Progress Report.


Google Summer of Code 2019

This repository will host the changes and code developed for 3gm as part of the Google Summer of Code 2019. This year's effort mainly aims to enhance the NLP functionality of the project and is based on this project proposal. The timeline of the project is described here, and you can also find a worklog documenting the progress made during development.

The main goals for GSoC-2019 are populating the database with more types of amendments, widening the range of feature extraction and training a new Doc2Vec model and a new NER annotator specifically for our corpus.

Migrating Data

The first week of GSoC-2019 was devoted to a data migration project. In its scope, we had to mine the website of the Greek National Printing House and upload as many GGG issues as possible to the respective Internet Archive Collection. So far, 87,874 issues have been uploaded, in addition to the ~45,000 files that the collection contained initially, and this number will continue to grow. The main goal of this endeavour is to make the Greek legislation archive more accessible.

We tried to document our insights from this process. We would like to evolve this into an entry in the project wiki, titled "A simple guide to mining the Greek Government Gazette".

NER model Training

After uploading a major part of the Greek Government Gazette issues, including all primary-type issues, it was time to start building a dataset to train a new NER tagger based on the Greek spaCy model. Doing so requires an annotation tool; one that is fully compatible with spaCy is prodigy. We contacted its developers and they provided a research licence for the duration of the project.

To mine, prepare and annotate our data we followed this workflow and the annotation guidelines described here.

All of the above documents will be incorporated into the project wiki shortly.

As a result of this process we have created a dataset containing around 3,000 sentences. A first version of this dataset can be found in the project's data folder. We have also deployed the prodigy annotator to showcase our progress; you can find it here if you want to support this year's project. All annotations gathered will be used for model training after quality control.

After obtaining a large enough dataset, we trained the small and medium-sized Greek spaCy models using the prodigy training recipes. The models showed significant improvement after training. A version of the small NER model that we trained can be found in the data directory of this repo. Our goal now is to optimize the model and evaluate it properly. As a first step we will use prodigy's train-curve recipe to see how the model performs when trained with different portions of our data. Finally, we will develop a Python script to train the spaCy model, document all its metrics and tune hyperparameters. This process is documented in this report.
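For readers who want a starting point, the snippet below sketches a spaCy 2.x-style update loop of the kind such a training script could use (spaCy 2 was the current release during GSoC 2019). The training sentence, entity offsets, 'LAW' label and output path are hypothetical placeholders; the models mentioned above were actually trained with the prodigy recipes.

import random
import spacy

# Hypothetical annotated sentence in spaCy's (start, end, label) offset format.
TRAIN_DATA = [
    ("Ο ν. 4009/2011 τροποποιείται.", {"entities": [(2, 14, "LAW")]}),
]

nlp = spacy.load("el_core_news_sm")            # pre-trained Greek model
ner = nlp.get_pipe("ner")
ner.add_label("LAW")                           # hypothetical label for this sketch

other_pipes = [p for p in nlp.pipe_names if p != "ner"]
with nlp.disable_pipes(*other_pipes):          # update only the NER component
    optimizer = nlp.resume_training()
    for _ in range(10):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], sgd=optimizer, drop=0.35, losses=losses)
        print(losses)

nlp.to_disk("models/el_ner_ggg")               # hypothetical output path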

The final version of the NER model is located in the models folder, alongside a word-embedding model containing around 20,000 word vectors.

The model that is most efficient in terms of performance and complexity will then be integrated into the 3gm app.

Broadening fact extraction

During this year's GSOC we focused a lot on enhancing the NLP capabilities of the project.

As part of this procedure it is vital to broaden fact extraction in the project. Using regular expressions we will work on the entities file, aiming to make it possible for the app to identify useful information such as metrics, codes, IDs, contact info, etc.

We have created a script to test regular expressions for fact extraction. Unfortunately, there is very little consistency in how information is written across issues, which makes entity extraction difficult.

After optimizing the extraction queries we integrated them into the entities module, which can be found in the 3gm directory. We now have to use the regular expressions to extract entities in the pparser module, the module responsible for extracting amendments, laws and ratifications.
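As an illustration of this approach, the snippet below shows simplified regular expressions for a few of the fact types mentioned above (monetary amounts, percentages, dates). The patterns are toy examples written for this README, not the ones in the project's entities module.

import re

# Simplified example patterns; the project's entities module is more extensive.
PATTERNS = {
    "monetary": re.compile(r"\d{1,3}(?:\.\d{3})*(?:,\d+)?\s*(?:€|ευρώ)"),
    "percentage": re.compile(r"\d+(?:,\d+)?\s*%"),
    "date": re.compile(r"\d{1,2}[./]\d{1,2}[./]\d{4}"),
}

def extract_facts(text):
    """Return every match of each pattern found in the text."""
    return {name: pattern.findall(text) for name, pattern in PATTERNS.items()}

sample = "Επιβάλλεται πρόστιμο 1.500,00 ευρώ, ήτοι ποσοστό 2,5 %, έως 31.12.2019."
print(extract_facts(sample))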

Training a new Doc2vec model

We will train a new doc2vec model using the gensim library, following the workflow proposed in the project wiki. We will use the codifier to create a large corpus and subsequently train the gensim model on it. To make sure that the model is efficient, we will have to create a corpus of several thousand issues and then fine-tune the model's hyperparameters.

For the time being we have created a corpus file containing 2,878 laws and presidential decrees, totaling around 223 MB. We have also trained a doc2vec model that can be found in the models directory. Our goal is to create as big a corpus as possible, which is why we will continue to expand it.
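A minimal gensim sketch of this workflow is shown below (gensim 3.x-style API, the version in use at the time). The documents, tags and output path are placeholders, not the project's actual corpus or model.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# One TaggedDocument per law, tagged with its identifier (placeholder data).
corpus = [
    TaggedDocument(words=["νόμος", "φορολογία", "εισόδημα"], tags=["ν. 4172/2013"]),
    TaggedDocument(words=["νόμος", "παιδεία", "πανεπιστήμιο"], tags=["ν. 4009/2011"]),
]

model = Doc2Vec(vector_size=100, min_count=1, epochs=40)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)
model.save("models/doc2vec_ggg.model")                 # hypothetical output path

# Similarity query: laws closest to a new piece of text.
vec = model.infer_vector(["φορολογία", "εισόδημα"])
print(model.docvecs.most_similar([vec], topn=2))       # gensim 3.x attribute; model.dv in gensim 4.x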

Creating a natural language model

Even though it was not included in the initial project proposal, we also decided to create a natural language model that generates text, aiming to make use of the word vectors we had produced earlier using prodigy. To achieve this we will deploy transfer learning techniques.

Our approach involves training a variation of a character-level LSTM model on a corpus of GGG texts. The idea is to use the embeddings produced earlier in an embedding layer and then stack the language model on top of it. To train the model we are using Google Colab with TPU acceleration, on a variation of this notebook provided by the TensorFlow Hub authors.
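The snippet below is a rough tf.keras sketch of that idea: an embedding layer, optionally initialised from pre-computed vectors, with an LSTM language model stacked on top. The vocabulary size, dimensions and random placeholder vectors are assumptions made for illustration; the actual training runs on the Colab notebook mentioned above.

import numpy as np
import tensorflow as tf

vocab_size, embed_dim, seq_len = 5000, 100, 64                  # placeholder hyperparameters
pretrained = np.random.rand(vocab_size, embed_dim)              # stand-in for the real vectors

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(pretrained)),
    tf.keras.layers.LSTM(256, return_sequences=True),           # language model stacked on top
    tf.keras.layers.Dense(vocab_size, activation="softmax"),    # predict the next token
])
model.build(input_shape=(None, seq_len))
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()
# model.fit(x, y, ...) would then be run on sequences drawn from the GGG corpus.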

Documentation

As part of our effort to document the changes made to the project during GSoC-2019, we thought it vital to update the project's wiki and integrate the changes into it. You can follow up on the process in this repo.

Deliverables

The deliverables for the GSOC-2019 include:

  1. An expanded version of the Internet Archive collection containing a total of 134,113 issues from several issue types.
  2. A new Named Entity Recognition model trained exclusively on Greek Government Gazette texts.
  3. An expanded entities.py module with broadened fact extraction functionality.
  4. A new Doc2vec model containing around 3,000 vectors.

Final Progress Report

You can find the final progress report, in the form of a GitHub gist, at the following link.

Google Summer of Code 2018

The project met and exceeded its goals for Google Summer of Code 2018. Link

Google Summer of Code participant: Marios Papachristou (papachristoumarios)

Organization: GFOSS - Open Technologies Alliance


Contributors

Mentors for GSOC 2019

Mentors for GSOC 2018

Development

  • Marios Papachristou (Original Developer - Google Summer of Code 2018)
  • Theodore Papadopoulos (AngularJS UI)
  • Sotirios Papadiamantis (Google Summer of Code 2019)

Overview


Technologies used

  1. The project is written in Python 3.x using the following libraries: spaCy, gensim, selenium, pdfminer.six, networkx, Flask-RESTful, Flask, pytest, numpy, pymongo, sklearn, pyocr, bs4, pillow and wand.
  2. The information is stored in MongoDB (a document-oriented database) and is accessible through a RESTful API (a minimal sketch of this setup follows the list).
  3. The UI is based on Angular 7.
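The snippet below is a minimal, hypothetical sketch of the storage and API layers described in items 2 and 3 above: a law document stored in MongoDB (via pymongo) and served by a Flask-RESTful resource. The database name, collection name and route are illustrative placeholders, not the project's actual schema or endpoints.

from flask import Flask
from flask_restful import Api, Resource
from pymongo import MongoClient

app = Flask(__name__)
api = Api(app)
db = MongoClient("mongodb://localhost:27017")["gazette"]   # hypothetical database name

class Statute(Resource):
    def get(self, statute_id):
        # Look up a single law document by its identifier, hiding Mongo's _id field.
        doc = db.statutes.find_one({"_id": statute_id}, {"_id": 0})
        if doc is None:
            return {"error": "not found"}, 404
        return doc, 200

api.add_resource(Statute, "/statute/<string:statute_id>")

if __name__ == "__main__":
    app.run(debug=True)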

Project Features & Production Ready Tools

  1. Document parser can parse PDFs from Government Gazette Issues (see the data for examples). The documents are split into articles in order to detect amendments.
  2. Parser for existing laws.
  3. Named Entities for Legal Acts (e.g. Laws, Legislative Decrees etc.) encoded in regular expressions.
  4. Similarity analyzer using topic models for finding Government Gazette Issues that have the same topics.
    1. We use an unsupervised model to extract the topics and then group Issues by topics for cross-linking between Government Gazette Documents. Topic modelling is done with the LDA algorithm as illustrated in the Wiki Page. The source code is located at 3gm/topic_models.py.
    2. There is also a Doc2Vec approach.
  5. Documented end-to-end procedure at the Project Wiki
  6. MongoDB Integration
  7. Fetching Tool for automated fetching of documents from ET
  8. Parallelized tool for batch conversion of documents with pdf2txt (for newer documents) or Google Tesseract 4.0 (for performing OCR on older documents) with pdfminer.six, tesseract and pyocr
  9. Digitalized archive of Government Gazette Issues from 1976 - today in PDF and plaintext format. Conversion of documents is done either via pdfminer.six or tesseract (for OCR on older documents).
  10. Web application written in Flask located at 3gm/app.py hosted at 3gm.ellak.gr
  11. RESTful API written in flask-restful for providing versions of the laws.
  12. Unit tests integrated to Travis CI.
  13. Versioning system for laws with support for checkouts, rollbacks etc.
  14. Ranking of laws using PageRank provided by the networkx package.
  15. Summarization Module using TextRank for providing summaries at the search results.
  16. Amendment Detection Algorithm. For example (taken from Greek Government Gazette):

Μετά το άρθρο 9Α του ν. 4170/2013, που προστέθηκε με το άρθρο 3 του ν. 4474/2017, προστίθεται άρθρο 9ΑΑ, ως εξής:

Main Body / Extract

Άρθρο 9ΑΑ

Πεδίο εφαρμογής και προϋποθέσεις της υποχρεωτικής αυτόματης ανταλλαγής πληροφοριών όσον αφορά στην Έκθεση ανά Χώρα

  1. Η Τελική Μητρική Οντότητα ενός Ομίλου Πολυεθνικής Επιχείρησης (Ομίλου ΠΕ) που έχει τη φορολογική της κατοικία στην Ελλάδα ή οποιαδήποτε άλλη Αναφέρουσα Οντότητα, σύμφωνα με το Παράρτημα ΙΙΙ Τμήμα ΙΙ, υποβάλλει την Έκθεση ανά Χώρα όσον αφορά το οικείο Φορολογικό Έτος Υποβολής Εκθέσεων εντός δώδεκα (12) μηνών από την τελευταία ημέρα του Φορολογικού Έτους Υποβολής Εκθέσεων του Ομίλου ΠΕ, σύμφωνα με το Παράρτημα ΙΙΙ Τμήμα ΙΙ.

The above text signifies the addition of an article to an existing law. We use a combination of heuristics and NLP from the spaCy package to detect the keywords (e.g. verbs, subjects etc.):

  • Detect keywords for additions, removals, replacements etc.
  • Detect the subject, which in Greek is in the nominative case. The subject is also part of some keywords such as article (άρθρο), paragraph (παράγραφος), period (εδάφιο), phrase (φράση) etc. These words have a subset relationship, which means that once the algorithm finds the subject it should look up its predecessors. This results in a structure like the following:

  • A Python dictionary is generated:
{'action': 'αντικαθίσταται',
 'law': {'article': {'_id': '9AA',
                     'content': 'Πεδίο εφαρμογής και προϋποθέσεις της υποχρεωτικής αυτόματης ανταλλαγής πληροφοριών όσον αφορά στην Έκθεση ανά Χώρα 1. Η Τελική Μητρική Οντότητα ενός Ομίλου Πολυεθνικής Επιχείρησης (Ομίλου ΠΕ) που έχει τη φορολογική της κατοικία στην Ελλάδα ή οποιαδήποτε άλλη Αναφέρουσα Οντότητα, σύμφωνα με το Παράρτημα ΙΙΙ Τμήμα ΙΙ, υποβάλλει την Έκθεση ανά Χώρα όσον αφορά το οικείο Φορολογικό Έτος Υποβολής Εκθέσεων εντός δώδεκα (12) μηνών από την τελευταία ημέρα του Φορολογικού Έτους Υποβολής Εκθέσεων του Ομίλου ΠΕ, σύμφωνα με το Παράρτημα ΙΙΙ Τμήμα ΙΙ.'},
         '_id': 'ν. 4170/2013'},
 '_id': 14}
  • This dictionary is then translated into a MongoDB operation (in this case an insertion into the database), and the information is stored in the database.
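As a rough illustration of that last step, the sketch below shows how such a dictionary could be turned into a pymongo operation. The collection and field names are assumptions made for the example, not the project's actual database schema, and the article content is abbreviated to refer back to the dictionary shown above.

from pymongo import MongoClient

# The dictionary produced by the amendment detector (content abbreviated here;
# see the full example above).
detected = {
    "action": "αντικαθίσταται",
    "law": {"article": {"_id": "9AA", "content": "Πεδίο εφαρμογής και προϋποθέσεις ..."},
            "_id": "ν. 4170/2013"},
    "_id": 14,
}

db = MongoClient("mongodb://localhost:27017")["gazette"]   # hypothetical database name
law = detected["law"]
db.laws.update_one(
    {"_id": law["_id"]},                                   # the statute being amended
    {"$set": {"articles." + law["article"]["_id"]: law["article"]["content"]}},
    upsert=True,                                           # create the entry if it does not exist
)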

For more information visit the corresponding Wiki Page


Challenges

  1. Government Gazette Issues may not always follow guidelines.
  2. Improving heuristics.
  3. Gathering Information.
  4. Digitizing very old articles.

Mailing List

Development Mailing List: [email protected]

License

The project is open-sourced as a part of the Google Summer of Code program and vision. The GNU GPLv3 license is adopted; for more information see LICENSE.

gsoc2018-3gm's People

Contributors

dependabot[bot], dspinellis, ellakdev, papachristoumarios, spapadiamantis, thodoris


gsoc2018-3gm's Issues

Add ground truth evaluation tool

A tool for comparison with ground truth is needed. The tool will include

  1. Downloading the ground truth (e.g. the Raptarchis codification from the Ministry of the Interior)
  2. Parsing the ground truth to create comparable extracts of legislation
  3. Using WER (word error rate) to measure the accuracy of our method with respect to the ground truth (a minimal sketch follows below)
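A minimal sketch of step 3 is shown below: a plain word-level edit distance, normalised by the length of the ground-truth text. It is only an illustration of the metric, not a tool that exists in the repository.

def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("το άρθρο 9Α αντικαθίσταται", "το άρθρο 9Α τροποποιείται"))  # 0.25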

NER Annotator

Description

Develop an NER annotator module (in the form of a web app with Flask) for annotating named entities. One excellent (sadly closed-source) example is prodigy.ai.

Deliverables

  • NER annotator module

Estimated Time: 3 weeks

UI Refactoring

Change UI as follows:

codify_law.html

Τρέχουσα Μορφή του ν. 1234/4325

  • Σύνδεσμος Για ιστορικό Εκδόσεων
  • Ποιοι νόμοι τον τροποποιούν + hyperlinks
  • (Optionally) Ποιους νόμους τροποποιεί.
  • Ετικέτες και παρόμοια θεματολογία

history.html
Accordion elements

Ο ν. 4009/2011 όπως ισχύει σήμερα

Ιστορικό

Ευρετήριο

  • ν. 4485/2017
  • ν. 4405/2016
  • ν. 4310/2014
  • ...
  • Αρχική μορφή του ν. 4009/2011

ν. 4485/2017

... (κείμενο των αλλαγών)
(Δεσμός:) [Εμφάνιση του ν. 4009/2011 μέχρι και τις αλλαγές του ν. 4485/2017]
Απόσπασμα Συνδέσμων + Status εφαρμογής.

ν. 4405/2016

... (κείμενο των αλλαγών)
(Δεσμός:) [Εμφάνιση του ν. 4009/2011 μέχρι και τις αλλαγές του ν. 4405/2016]

ν. 4310/2014

... (κείμενο των αλλαγών)
(Δεσμός:) [Εμφάνιση του ν. 4009/2011 μέχρι και τις αλλαγές του ν. 4310/2014]

Αρχική μορφή του ν. 4009/2011

Full history of a codified version of a law

Add the ability to see the full history of a codified version of a law.
This requires creating a page where the user can see at the same time all changes made by all amending laws to a legal text.
This is not difficult in terms of the algorithm / code. The required code is almost complete.
The difficulty lies in developing the right user interface, one that facilitates the user experience and helps the end user better understand the applied changes.

Greek Government Gazette Corpus Analysis

Description

Analyzing a corpus allows us to draw conclusions concerning its contents. Another way to understand how legislation is organized is to study closely the legislative graphs produced by the codifier.

Deliverables

  • Report of corpus analysis on the GGG corpus as a whole, containing information about specific metrics such as frequency distributions, collocations, lexical diversity (the percentage of distinct words in a document), most frequent words, as well as dynamic and structural data from the legislative graphs (a small illustration of some of these metrics follows below).

Total time: 10 days
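As a small, hypothetical illustration of the kind of metrics listed above, the snippet below computes a frequency distribution, lexical diversity and bigram collocations with NLTK on a placeholder token list; the real analysis would run over the full GGG corpus.

import nltk

tokens = ["νόμος", "άρθρο", "νόμος", "τροποποιείται", "άρθρο", "παράγραφος"]   # placeholder tokens

fdist = nltk.FreqDist(tokens)
print(fdist.most_common(3))                     # most frequent words

diversity = len(set(tokens)) / len(tokens)      # percentage of distinct words
print(f"lexical diversity: {diversity:.2f}")

finder = nltk.collocations.BigramCollocationFinder.from_words(tokens)
print(finder.nbest(nltk.collocations.BigramAssocMeasures().pmi, 2))   # collocations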

Broaden Fact Extraction

Description

Broad range of fact extraction functionality, such as:

  1. Monetary amounts, non-monetary amounts, percentages, ratios
  2. Conditional statements and constraints, like "λιγότερο από" or "μετά από"
  3. Dates, recurring dates, and durations
  4. Courts, regulations, and citations

Work here should be done primarily using the built-in re module.
You are also free to use any other tool (e.g. ML or something similar) to improve it.

Deliverables

The code should be committed to the entities.py module:

  • Monetary amounts, non-monetary amounts, percentages, ratios
  • Conditional statements and constraints, like "λιγότερο από" or "μετά από"
  • Dates, recurring dates, and durations
  • Courts, regulations, and citations

Estimated Time: 2 weeks

Bug in numbering of articles

In cases where more than one article is inserted into another law, the system retains the numbering of only the first inserted article, omits the others and treats the last inserted one as Article 0.
For example, Law 4512/2018 Article 353, which introduces new Articles 71A, 71B and 71C into another law, ends up as follows: https://3gm.ellak.gr/statute/l_4512_2018/codified
That is, the text of 71C was introduced as C.

Docker Support

Docker support would allow much easier installation and should be easier to be platform agnostic.

The current installation documentation not only has its own platform-specific instructions but also links to another project's installation documentation, which in turn has its own platform-specific instructions.


Train NER with spaCy and embody it in the project

Description

Train an NER (Named Entity Recognizer) specialized for Government Gazette texts using the NLP library spaCy. The NER should be extended (do not override the pre-existing labels and do not add new labels if they are not needed, since that would reduce the model's accuracy).

The pre-existing Greek NER can be found here

Deliverables

  • Tag Map
  • Annotated dataset (> 3000 sentences)
  • spaCy's trained model
  • Embody to the project (module & web application)
  • API Endpoints

Estimated Time: 1.5 months

Show Article Titles

It would be good to add the required functionality to export and display the titles of the articles.

Domain certificate expired

Hi, anytime I try to access your project's link I get this error in my browser (Firefox):

Did Not Connect: Potential Security Issue

Firefox detected an issue and did not continue to openlaws.ellak.gr. The website is either misconfigured or your computer clock is set to the wrong time.

It’s likely the website’s certificate is expired, which prevents Firefox from connecting securely.

What can you do about it?

openlaws.ellak.gr has a security policy called HTTP Strict Transport Security (HSTS), which means that Firefox can only connect to it securely. You can’t add an exception to visit this site.

The issue is most likely with the website, and there is nothing you can do to resolve it. You can notify the website’s administrator about the problem.

Learn more…

Hope to solve this issue soon, thanks

Law Summarization

Developments in Greek politics during the last few years have required large pieces of legislative text to be published in the GGG at once, the most recent being the publication of the fourth memorandum. Effective summarization of laws therefore becomes more and more useful.

I propose training a machine learning algorithm that can provide a comprehensive summary of each law or act in relation to its size.

3gm.ellak.gr

Please configure 3gm.ellak.gr to work with the VM at 83.212.109.156 .

Improve word and document embeddings

Description

Do work on document and word embeddings. Refer to the Wiki Page for more information. We are using gensim as a library.

Deliverables

  • Improved model
  • Demonstrable similarity analyzer

Estimated Time: 1 week

Improve RESTful API

Description & Deliverables

Related blog post

  • API Endpoints
  • Token based authentication
  • Token issuing functionality
  • Limits on requests
  • Load testing with locust.io (a minimal locustfile sketch follows below)

Estimated Time: 1 week
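A minimal locustfile for the locust.io deliverable might look like the sketch below; the endpoint and host are hypothetical placeholders rather than the project's actual API routes.

from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)          # seconds between simulated requests

    @task
    def get_statute(self):
        # Hypothetical endpoint returning the codified version of a law.
        self.client.get("/statute/l_4009_2011/codified")

# Run with:  locust -f locustfile.py --host https://3gm.ellak.gr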

Responsive Web Application

Description

Make the web application responsive (using Bootstrap)

Deliverables

  • Responsive layout for small and medium screen size for all pages of the web application
  • Specification for route templates (e.g. see EUR-lex standards)

Estimated Time: 4 days

Train and Develop tools for Classification

Description & Deliverables

  • Train Segmentation models for legal concepts such as pages or sections.
  • Pre-trained classifiers for document type and clause type
  • Develop Tools for building new clustering and classification methods
  • API Endpoints for the above

Estimated Time: 2-3 months

Broaden legislative acts extraction

The codifier module currently detects, codifies and stores all laws and presidential decrees found in the Greek Government Gazette issues. Even though these types of acts are the most important in Greek legislation, they do not account for the largest part of GGG issues.

We want to broaden extraction capabilities using regular expressions to include parliamentary regulations (Κανονισμός της Βουλής), treaties (Συνθήκες), Prime Minister, Minister and Deputy Minister decisions, as well as acts of appointment, dismissal and transfer of public officials. An extensive account of the types of legislative acts per GGG issue can be found on the Ethnikon Typografeion website (http://www.et.gr/index.php/f-e-k/teyxi).

As mentioned, legislative act extraction is currently performed using regular expressions in the entities module. An alternative would be to train a neural network that detects legislative acts and determines their type, although this would be a more complex solution depending on the number of issue types.

Since the GGG corpus contains a great number of types of legislative acts, we should prioritize those found in the main issue types, such as 'Α', 'Β', 'Γ' and 'Δ'.

Create legal dictionaries in Greek

Description

Many applications of natural language processing and machine learning to text can benefit from a controlled lexicon of expert-selected terms (i.e., a dictionary). This is especially true of highly technical language, such as legal text. However, no open source, freely-available dictionaries of this nature have been available in Greek. Creating new legal dictionaries would greatly benefit 3gm, but also the automatic codification of Greek legal text in general.

Deliverables

  • dictionary of geopolitical entities, actors and divisions (e.g., countries, states, provinces)
  • legal dictionary containing common terms, courts and acts
  • financial dictionary
  • dictionary of public administration offices and public administrations
  • dictionary of naval terms and flags

Total time: 3 weeks

CLI Interface for codifier tool

  1. source at sys.argv[1]
  2. target at sys.stdin
  3. output at sys.stdout

Example usage:

codifier.py amendment-1.txt <initial-version.txt >amended-version.txt

Optionally --input and --output flags

Extension

<initial-version.txt codifier.py amendment-1.txt |
codifier.py amendment-2.txt |
codifier.py amendment-3.txt >final-version.txt
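A minimal sketch of the requested command-line skeleton is shown below. The apply_amendments function is a hypothetical stand-in for the project's actual codification logic; the point of the sketch is the argument and stream handling described above.

import argparse
import sys

def apply_amendments(amendment_text, law_text):
    """Placeholder: would return law_text with the amendments applied."""
    return law_text

def main():
    parser = argparse.ArgumentParser(description="Apply an amendment file to a law.")
    parser.add_argument("amendment", help="file containing the amendment text")
    parser.add_argument("--input", type=argparse.FileType("r"), default=sys.stdin,
                        help="law to amend (defaults to stdin)")
    parser.add_argument("--output", type=argparse.FileType("w"), default=sys.stdout,
                        help="where to write the amended law (defaults to stdout)")
    args = parser.parse_args()

    with open(args.amendment) as f:
        amendment = f.read()
    args.output.write(apply_amendments(amendment, args.input.read()))

if __name__ == "__main__":
    main()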

Help Page

Implement a help page explaining the tags and functionality.

Ability for interactive feedback / amendments on the algorithmically generated codified text

Add the ability for end users to provide interactive feedback / amendments on the algorithmically generated codified text.
In some cases, the resulting codified text contains erroneous references or is missing some references (links). For those cases, it would be nice to have a procedure that allows:

  • Simple users to verbally describe a problem
  • Advanced users to interactively process / delete / modify / insert the correct references between 2 legal texts.
