
spaces's Introduction

SPACES

An end-to-end long-text summarization model (for the judicial summarization track of the CAIL 2020 challenge).

Blog post: https://kexue.fm/archives/8046

Name

We call our model SPACES, which happens to be one of the domain names of Scientific Spaces (https://spaces.ac.cn). The letters stand for:

  • S:Sparse Softmax;
  • P:Pretrained Language Model;
  • A:Abstractive;
  • C:Copy Mechanism;
  • E:Extractive;
  • S:Special Words。

As the name suggests, this is a word-level "extract-then-generate" summarization model with pretraining and a copy mechanism, incorporating some of our latest research on text generation.

Usage

Environment: tensorflow 1.14 + keras 2.3.1 + bert4keras 0.9.7

(On Windows, use bert4keras>=0.9.8.)

First edit the path configuration in snippets.py, then run the code below.
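
The exact variable names live in snippets.py itself; as a rough sketch of what to point where (nezha_checkpoint_path is confirmed by a traceback in the issues below, the other names are assumptions):

# snippets.py -- illustrative path configuration; apart from
# nezha_checkpoint_path the variable names here are assumptions,
# so edit the real assignments in your copy
nezha_config_path = '/path/to/NEZHA-Base/bert_config.json'
nezha_checkpoint_path = '/path/to/NEZHA-Base/model.ckpt'
nezha_dict_path = '/path/to/NEZHA-Base/vocab.txt'
data_json = '/path/to/datasets/train.json'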

Training:

#! /bin/bash

python extract_convert.py
python extract_vectorize.py

for ((i=0; i<15; i++));
    do
        python extract_model.py $i
    done

python seq2seq_convert.py
python seq2seq_model.py
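
On Windows, where bash may be unavailable, the same pipeline can be driven from Python. A sketch (assuming the scripts run from the repository root; extract_model.py taking the fold index as its first argument matches both the loop above and the tracebacks in the issues below):

import subprocess
import sys

# Run each stage in order, aborting on the first failure (check=True)
subprocess.run([sys.executable, 'extract_convert.py'], check=True)
subprocess.run([sys.executable, 'extract_vectorize.py'], check=True)

for fold in range(15):  # one extractive model per fold, as in the bash loop
    subprocess.run([sys.executable, 'extract_model.py', str(fold)], check=True)

subprocess.run([sys.executable, 'seq2seq_convert.py'], check=True)
subprocess.run([sys.executable, 'seq2seq_model.py'], check=True)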

Prediction:

from final import *
summary = predict(text, topk=3)
print(summary)

Contact

QQ group: 808623966; for the WeChat group, add the bot WeChat ID spaces_ac_cn.

spaces's People

Contributors

bojone, junphy-jan

spaces's Issues

How to train on GPU?

After forking the repo, training only occupies about 100 MB of GPU memory and is very slow. What could be the cause, and what needs changing?

Question about final.py and how long the entire program takes to run (sorry, I can't read Chinese but I can use a translator)

After running all the steps, I ran final.py and ended up with this metric: {'main': 0.01531614241776759, 'rouge-1': 0.021052631273130198, 'rouge-2': 0.006711409134723671, 'rouge-l': 0.021052631273130198}. Can someone tell me what this means exactly? Also, how long is the program supposed to take? It took me several hours to run everything once, so I was wondering whether that's normal. Thanks!

Environment and training speed

Using the given environment (tensorflow 1.14 + keras 2.3.1 + bert4keras 0.9.7) to train the abstractive model, it runs slower on the lab server (3090) than on my own laptop. I don't know why; any advice would be appreciated.

Can't join the QQ group

Hi, I'd like to join the group to learn, but the admin rejected my request.

Purpose of with_mlm='linear'

Regarding the line with_mlm='linear': isn't with_mlm a boolean variable? Why is it assigned a string?
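
For context, a plausible reading based on how bert4keras's models.py defines the flag (worth verifying against your installed version): with_mlm is treated as truthy/falsy rather than strictly boolean; True selects the default softmax activation for the MLM head, while a string such as 'linear' is used directly as the head's activation, so the head outputs raw logits:

from bert4keras.models import build_transformer_model

# config_path / checkpoint_path are placeholders for your pretrained model files
# with_mlm=True     -> MLM head with softmax (probabilities)
# with_mlm='linear' -> MLM head with linear activation (raw logits), useful
#                      when a custom loss or softmax is applied downstream
model = build_transformer_model(
    config_path,
    checkpoint_path,
    with_mlm='linear',
)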

ValueError: invalid literal for int() with base 10: '-f'

Running the test code

from final import *
summary = predict(text, topk=3)
print(summary)

produces the following error:

ValueError Traceback (most recent call last)
in <module>()
----> 1 from final import *
2 summary = predict(text, topk=3)
3 print(summary)

1 frames
/content/drive/My Drive/python_work/SPACES/extract_model.py in <module>()
27 fold = 0
28 else:
---> 29 fold = int(sys.argv[1])
30
31

ValueError: invalid literal for int() with base 10: '-f'

Environment (Colab):
tensorflow==1.14
bert4keras==0.9.8
keras==2.3.1
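
For what it's worth, the traceback points at extract_model.py reading the fold index from sys.argv[1], and notebook kernels (Colab/Jupyter) inject their own flags such as -f /path/kernel.json into sys.argv. A defensive rewrite of that spot, as a sketch rather than a confirmed patch:

import sys

# Fall back to fold 0 when the argument is absent or not an integer,
# e.g. the '-f .../kernel.json' flag a notebook kernel injects
try:
    fold = int(sys.argv[1])
except (IndexError, ValueError):
    fold = 0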

A few questions

Hi Su, after reading the project code I have some questions, mainly about seq2seq_model.py.

When handling the BIO labels, your code does labels = source_labels + target_labels[1:], which makes the labels one unit shorter than the other inputs.
def compute_copy_loss(self, inputs, mask=None):
    _, y_mask, y_true, _, y_pred = inputs
    y_mask = K.cumsum(y_mask[:, ::-1], axis=1)[:, ::-1]
    y_mask = K.cast(K.greater(y_mask, 0.5), K.floatx())
    y_mask = y_mask[:, 1:]   # mask, shortened by one position
    y_pred = y_pred[:, :-1]  # predictions, shifted by one position
    loss = K.sparse_categorical_crossentropy(y_true, y_pred)
    loss = K.sum(loss * y_mask) / K.sum(y_mask)
    return loss
After this processing y_mask effectively says the whole sequence contributes to the loss, but don't y_pred and y_true then end up misaligned?
For example, say y_pred is a 3-way classification over the sequence and labels holds the corresponding gold classes:

y_pred: cls a b seq cls c d (seq)
labels: cls a b seq c d seq (BIO label input)

Even with y_pred = y_pred[:, :-1] (shifting the predictions by one position), the lengths match, but the positions still seem off.

Dataset

Is the full dataset available anywhere?

Memory issues

When I first ran extract_convert.py, it hung inside the convert function. It turned out to be a memory problem: after reducing the number of threads/processes from 100 down to 10, it ran successfully.

Running extract_model.py still runs out of memory and training hangs. I can only shrink the training set, which hurts model quality; next I'll have to consider loading the data in batches.

My machine has 16 GB of RAM. Su, what is your memory configuration? Or is this perhaps not a memory problem at all but something else?
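
For reference, tracebacks elsewhere in this list show convert() driving bert4keras's parallel_apply with max_queue_size=200, so the workers reduction described above would look roughly like this (a sketch; the actual call site in extract_convert.py may differ):

from bert4keras.snippets import parallel_apply

results = parallel_apply(
    func=convert_one,   # hypothetical per-sample conversion function
    iterable=data,      # the loaded training samples
    workers=10,         # reduced from 100 to cap memory use
    max_queue_size=200,
)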

What is this error when running seq2seq_model.py?

File "seq2seq_model.py", line 181, in
compound_tokens=compound_tokens,
File "/root/.virtualenvs/torch36/lib/python3.6/site-packages/bert4keras/models.py", line 2297, in build_transformer_model
transformer.load_weights_from_checkpoint(checkpoint_path)
File "/root/.virtualenvs/torch36/lib/python3.6/site-packages/bert4keras/models.py", line 255, in load_weights_from_checkpoint
values = [self.load_variable(checkpoint, v) for v in variables]
File "/root/.virtualenvs/torch36/lib/python3.6/site-packages/bert4keras/models.py", line 255, in
values = [self.load_variable(checkpoint, v) for v in variables]
File "/root/.virtualenvs/torch36/lib/python3.6/site-packages/bert4keras/models.py", line 649, in load_variable
variable = super(BERT, self).load_variable(checkpoint, name)
File "/root/.virtualenvs/torch36/lib/python3.6/site-packages/bert4keras/models.py", line 232, in load_variable
return tf.train.load_variable(checkpoint, name)
File "/root/.virtualenvs/torch36/lib/python3.6/site-packages/tensorflow_core/python/training/checkpoint_utils.py", line 84, in load_variable
return reader.get_tensor(name)
File "/root/.virtualenvs/torch36/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py", line 915, in get_tensor
return CheckpointReader_GetTensor(self, compat.as_bytes(tensor_str))
tensorflow.python.framework.errors_impl.OutOfRangeError: Read less bytes than requested

Generated summary is one repeated character or word

Hi Su. I got the SPACES model running end to end over the last few days, but the final predicted summary is just a single character or word repeated; it never produces normal sentences. Do you know what might cause this?

OutOfRangeError

tensorflow.python.framework.errors_impl.OutOfRangeError: Read less bytes than requested
Su, what causes this problem?

The blog seems closed to comments, so asking here

Su, at the end of the post you mention that during training both the encoder and the decoder get a BIO prediction added; where exactly are these two places in the code? So far I only see one Dense classification layer added in the training part of seq2seq_model.

Trained seq2seq model gives different outputs/scores on the same test set on different machines?

Hi Su, once the seq2seq model is trained, its generated summaries on a given machine are deterministic, which is expected. But on a different machine the same model produces slightly different results. Is this normal? Is a random parameter set somewhere in the code? Why does the same machine always give identical output while inference on another machine differs? I suspected version mismatches (keras, bert4keras, tf), but with identical versions the same model still generates different results across machines. Any advice? Many thanks!

Prediction not running on GPU

When running extract_model.py, the training phase uses the GPU, but during evaluation model.predict does not seem to: GPU power draw stays low, and training all 14 folds takes about as long as predicting a single fold, so I suspect prediction runs on the CPU.
By contrast, seq2seq_convert.py, which is pure prediction, does run on the GPU and is fast.

Hoping for your advice!

About the pretrained model roberta_wwm_ext

Hi Su, for the pretrained RoBERTa, did you use the brightmart release, or HFL's release on Hugging Face with the .h5 model converted to a ckpt? The brightmart version was trained with sequence length 256 and doesn't seem to handle 512-length text well, but your blog says you used a 512 version.

Error at prediction time

Su, after training finished, prediction failed with "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above." I modified the prediction code with:

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
keras.backend.tensorflow_backend.set_session(tf.Session(config=config))

but then got "tensorflow.python.framework.errors_impl.FailedPreconditionError". Could it be because my server runs tensorflow 1.13.1? I can't get 1.14 to install either, not sure why. I'm new to this; please advise.

seq2seq_words.py

python seq2seq_words.py is referenced, but I don't see a seq2seq_words.py file anywhere?

Low GPU usage and high CPU usage during training

Hi Su, when running seq2seq_model.py for generative training, the GPU holds only 257 MB while CPU usage is huge, and one epoch takes about 2 hours. My server has four 40 GB cards, so I suspect training isn't actually using the GPU. Setting os.environ["CUDA_VISIBLE_DEVICES"] = "1" directly didn't help; the CPU is still doing most of the work. How should the code be set up to train on the GPU and speed things up? Thanks!
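
One general TensorFlow caveat that may or may not apply here: CUDA_VISIBLE_DEVICES only takes effect if it is set before TensorFlow initializes, so the assignment has to precede the first tensorflow/keras/bert4keras import:

import os

# Must run before TensorFlow is imported anywhere in the process;
# setting it after initialization has no effect on device placement
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import tensorflow as tf  # import only after the variable is set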

AttributeError and FileNotFoundError

Hi Su, I'm running SPACES on Windows and hit two problems.
(1)
Traceback (most recent call last):
File "seq2seq_convert.py", line 6, in
from extract_model import *
File "E:\git\SPACES\extract_model.py", line 190, in
model.load_weights('weights/extract_model.%s.weights' % fold)
File "C:\Users\clab320.conda\envs\spaces\lib\site-packages\keras\engine\saving.py", line 492, in load_wrapper
return load_function(*args, **kwargs)
File "C:\Users\clab320.conda\envs\spaces\lib\site-packages\keras\engine\network.py", line 1230, in load_weights
f, self.layers, reshape=reshape)
File "C:\Users\clab320.conda\envs\spaces\lib\site-packages\keras\engine\saving.py", line 1183, in load_weights_from_hdf5_group
original_keras_version = f.attrs['keras_version'].decode('utf8')
AttributeError: 'str' object has no attribute 'decode'
Searching around suggests this happens when the value is already a str rather than bytes, so it cannot be decoded.
But saving.py contains many statements of this form, so I'd appreciate your take.

(2)
Traceback (most recent call last):
File "seq2seq_model.py", line 304, in
data = load_data(data_seq2seq_json)
File "seq2seq_model.py", line 41, in load_data
with open(filename) as f:
File "C:\Users\clab320.conda\envs\spaces\lib\site-packages\bert4keras\snippets.py", line 92, in init
self.file = open(name, mode, encoding=encoding, errors=errors)
FileNotFoundError: [Errno 2] No such file or directory: '/git/SPACES/datasets/train_seq2seq.json'
Here the problem is that train_seq2seq.json doesn't exist at all.

Hoping you can shed some light on these. Thanks!

About EMA and user_dict

Hi Su, why does the extractive model skip EMA while the generative model uses it?
Also, the vocabularies in user_dict and user_dict_2 are very well curated; were they compiled by hand?

Loss won't go down

Hi Su, running your code, the extractive model works well (rouge-1 around 0.5), but when training the generative model the loss never decreases: after 50 epochs it is unchanged, the ROUGE score is 0, and the predictions are repeated useless characters, with no code changes on my side. Has anyone hit the same thing and figured out a fix? (The only modification: running extract_convert.py raised TypeError: Object of type int64 is not JSON serializable, so I added a custom serialization rule, shown below, after which the preprocessed data saved normally.) Nothing else was changed.
class NpEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        else:
            return super(NpEncoder, self).default(obj)
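
For completeness, the encoder above is passed via the cls argument when serializing (assuming import json and import numpy as np):

data = {'count': np.int64(3)}           # np.int64 is not JSON serializable by default
print(json.dumps(data, cls=NpEncoder))  # -> {"count": 3}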

What is the nezha config?

What is the nezha configuration in snippets.py? Is it the path to a BERT-style model, and how many parameters does that model have?

Environment and batch size

ubuntu 18.04
2080ti
tensorflow 1.14
keras 2.3.1
bert4keras 0.9.7
batch_size can only be 3, otherwise it OOMs. Su, what hardware do you run this on and with what batch size? A friend with the same setup says batch size 8 works, and 3 is really too slow for me. Also, how do I do multi-GPU with bert4keras?

Running the scripts

#! /bin/bash

python extract_convert.py
python extract_vectorize.py

for ((i=0; i<15; i++));
    do
        python extract_model.py $i
    done

python seq2seq_convert.py
python seq2seq_model.py

Is this meant to be put into a single Python file and run in one go? Why can't I run it? I'm a beginner; asking the author. Thanks!

OSError

Traceback (most recent call last):
File "seq2seq_convert.py", line 62, in
convert(data_seq2seq_json, data, data_x)
File "seq2seq_convert.py", line 41, in convert
total_results.append(fold_convert(data, data_x, fold))
File "seq2seq_convert.py", line 16, in fold_convert
model.load_weights('weights/extract_model.%s.weights' % fold)
File "D:\Anaconda3\envs\python3.6\lib\site-packages\keras\engine\saving.py", line 492, in load_wrapper
return load_function(*args, **kwargs)
File "D:\Anaconda3\envs\python3.6\lib\site-packages\keras\engine\network.py", line 1221, in load_weights
with h5py.File(filepath, mode='r') as f:
File "D:\Anaconda3\envs\python3.6\lib\site-packages\h5py_hl\files.py", line 408, in init
swmr=swmr)
File "D:\Anaconda3\envs\python3.6\lib\site-packages\h5py_hl\files.py", line 173, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = 'weights/extract_model.1.weights', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

This error appears during the run, even though I checked that the path is correct (an absolute path, per the author's advice). I couldn't find an answer online either, so asking here. Thanks!

Hi Su! Asking about AttributeError: module 'keras.engine.base_layer' has no attribute 'Node'

Traceback (most recent call last):
File "E:\wp\SPACES-main\extract_vectorize.py", line 10, in
from bert4keras.models import build_transformer_model
File "E:\python\anaconda\lib\site-packages\bert4keras\models.py", line 6, in
from bert4keras.layers import *
File "E:\python\anaconda\lib\site-packages\bert4keras\layers.py", line 100, in
NodeBase = keras.engine.base_layer.Node
AttributeError: module 'keras.engine.base_layer' has no attribute 'Node'

This is on Windows with Keras 2.3.1. Which part is going wrong here...

ValueError

Traceback (most recent call last):
File "extract_model.py", line 183, in
callbacks=[evaluator]
File "D:\Anaconda3\envs\python3.6\lib\site-packages\keras\engine\training.py", line 1239, in fit
validation_freq=validation_freq)
File "D:\Anaconda3\envs\python3.6\lib\site-packages\keras\engine\training_arrays.py", line 216, in fit_loop
callbacks.on_epoch_end(epoch, epoch_logs)
File "D:\Anaconda3\envs\python3.6\lib\site-packages\keras\callbacks\callbacks.py", line 152, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "extract_model.py", line 149, in on_epoch_end
metrics = evaluate(valid_data, valid_x, threshold + 0.1)
File "extract_model.py", line 130, in evaluate
y_pred = model.predict(data_x)[:, :, 0]
File "D:\Anaconda3\envs\python3.6\lib\site-packages\keras\engine\training.py", line 1441, in predict
x, _, _ = self._standardize_user_data(x)
File "D:\Anaconda3\envs\python3.6\lib\site-packages\keras\engine\training.py", line 579, in _standardize_user_data
exception_prefix='input')
File "D:\Anaconda3\envs\python3.6\lib\site-packages\keras\engine\training_utils.py", line 135, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (0, 1)

This error occurs while the shell script is running. Also, only weight files 0-4 are generated before it breaks off. Do you know the cause? Thanks!

ValueError: high is out of bounds for int32

File "E:/myproject/SPACES-sfzy/extract_convert.py", line 91, in
data = convert(data)
File "E:/myproject/SPACES-sfzy/extract_convert.py", line 77, in convert
max_queue_size=200
File "D:\Anaconda3\envs\sfzy\lib\site-packages\bert4keras\snippets.py", line 159, in parallel_apply
random_seeds = np.random.randint(0, 2**32, workers)
File "mtrand.pyx", line 744, in numpy.random.mtrand.RandomState.randint
File "_bounded_integers.pyx", line 1343, in numpy.random._bounded_integers._rand_int32
ValueError: high is out of bounds for int32

Hi, I hit this error running the extraction code with your recommended configuration, and a long search online turned up no good fix. Have you run into it? Any help would be appreciated.
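
For background (a general numpy-on-Windows note, not a fix confirmed in this thread): numpy's default integer type on Windows is 32-bit, so the upper bound 2**32 in np.random.randint overflows; requesting a 64-bit dtype sidesteps it, which suggests patching that line in bert4keras's snippets.py accordingly:

import numpy as np

# On Windows np.random.randint defaults to int32, so high=2**32 overflows:
#   np.random.randint(0, 2**32, 4)  # ValueError: high is out of bounds for int32
seeds = np.random.randint(0, 2**32, 4, dtype='int64')  # explicit 64-bit bound works
print(seeds)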

parallel_apply fails in extract_convert.py

Hi,
running extract_convert.py fails at the line data = convert(data); it seems to be a multiprocessing issue? How should I modify it?

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\D00477~1\AppData\Local\Temp\jieba.cache
Loading model cost 0.541 seconds.
Prefix dict has been built successfully.
转换数据:   0%|          | 0/4047 [00:00<?, ?it/s]
Traceback (most recent call last):
File "D:/2. 2021AI项目/5_RPA项目/8_自动摘要生成/5_CAIL2020/1_SPACES_pytorch/extract_convert.py", line 83, in <module>
data = convert(data)
File "D:/2. 2021AI项目/5_RPA项目/8_自动摘要生成/5_CAIL2020/1_SPACES_pytorch/extract_convert.py", line 69, in convert
max_queue_size=200
File "D:\2. 2021AI项目\5_RPA项目\8_自动摘要生成\5_CAIL2020\1_SPACES_pytorch\snippets.py", line 430, in parallel_apply
return [d for i, d in generator]
File "D:\2. 2021AI项目\5_RPA项目\8_自动摘要生成\5_CAIL2020\1_SPACES_pytorch\snippets.py", line 430, in
return [d for i, d in generator]
File "D:\2. 2021AI项目\5_RPA项目\8_自动摘要生成\5_CAIL2020\1_SPACES_pytorch\snippets.py", line 503, in parallel_apply_generator
pool = Pool(workers, worker_step, (in_queue, out_queue))
File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 119, in Pool
context=self.get_context())
File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\pool.py", line 174, in init
self._repopulate_pool()
File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
w.start()
File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\popen_spawn_win32.py", line 65, in init
reduction.dump(process_obj, to_child)
File "D:\software\anaconda\anaconda3\envs\pytorch\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'parallel_apply_generator.<locals>.worker_step'
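
For background (a general Python note rather than a verified fix for this fork): on Windows, multiprocessing starts workers with the 'spawn' method, which pickles the target function; a closure such as parallel_apply_generator.<locals>.worker_step cannot be pickled, so the pool dies before doing any work. The constraint in miniature:

from multiprocessing import Pool

def square(x):
    # Module-level functions are picklable and survive the 'spawn' start
    # method Windows uses; functions defined inside another function do not
    return x * x

if __name__ == '__main__':  # guard required under 'spawn'
    with Pool(4) as pool:
        print(pool.map(square, range(8)))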

position embeddings not found when loading NEZHA-base

Traceback (most recent call last):
  File "final.py", line 12, in <module>
    import extract_vectorize as vectorize
  File "/mnt/data/liuts/competition/cail-2020/SPACES/extract_vectorize.py", line 35, in <module>
    nezha_checkpoint_path,
  File "/mnt/data/liuts/competition/cail-2020/venv/lib/python3.6/site-packages/bert4keras/models.py", line 2297, in build_transformer_model
    transformer.load_weights_from_checkpoint(checkpoint_path)
  File "/mnt/data/liuts/competition/cail-2020/venv/lib/python3.6/site-packages/bert4keras/models.py", line 255, in load_weights_from_checkpoint
    values = [self.load_variable(checkpoint, v) for v in variables]
  File "/mnt/data/liuts/competition/cail-2020/venv/lib/python3.6/site-packages/bert4keras/models.py", line 255, in <listcomp>
    values = [self.load_variable(checkpoint, v) for v in variables]
  File "/mnt/data/liuts/competition/cail-2020/venv/lib/python3.6/site-packages/bert4keras/models.py", line 649, in load_variable
    variable = super(BERT, self).load_variable(checkpoint, name)
  File "/mnt/data/liuts/competition/cail-2020/venv/lib/python3.6/site-packages/bert4keras/models.py", line 232, in load_variable
    return tf.train.load_variable(checkpoint, name)
  File "/mnt/data/liuts/competition/cail-2020/venv/lib/python3.6/site-packages/tensorflow/python/training/checkpoint_utils.py", line 84, in load_variable
    return reader.get_tensor(name)
  File "/mnt/data/liuts/competition/cail-2020/venv/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 678, in get_tensor
    return CheckpointReader_GetTensor(self, compat.as_bytes(tensor_str))
tensorflow.python.framework.errors_impl.NotFoundError: Key bert/embeddings/position_embeddings not found in checkpoint

The pretrained model was downloaded from the links at https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-TensorFlow; I tried both the Baidu and Google drive copies, and the problem persists...
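
For context (a general bert4keras note, not a confirmed resolution of this report): NEZHA uses relative position encodings, so its checkpoints contain no bert/embeddings/position_embeddings variable, and loading one with the default model='bert' fails with exactly this missing key. build_transformer_model takes a model argument for this:

from bert4keras.models import build_transformer_model

# nezha_config_path / nezha_checkpoint_path as configured in snippets.py
encoder = build_transformer_model(
    nezha_config_path,
    nezha_checkpoint_path,
    model='nezha',  # the default 'bert' expects absolute position embeddings,
                    # which NEZHA checkpoints do not contain
)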
