zuoqing1988 / ZQCNN
An inference framework with many useful demos; please star it if you find it useful.
License: MIT License
Hi there. Thank you for sharing this work. I have some questions about the inference speed. The inference time table says arcface-r50 takes about 700 ms on a 3.6 GHz CPU. I am wondering how this result was measured. Specifically, did you use only ZQCNN for acceleration, or did you also use another library such as OpenMP or MKL-DNN?
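As a side note on making such timings comparable: a common pitfall is timing a single cold run. A minimal benchmarking sketch (the timed lambda below is a stand-in workload, not the actual ZQCNN forward pass):

```python
import time
import statistics

def benchmark(fn, warmup=5, repeats=50):
    """Median wall-clock time of fn() in milliseconds, after warmup runs.

    Warmup runs let caches and any lazy initialization settle; the median
    over repeats is robust to scheduler noise."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

# Stand-in workload; replace with the forward pass being measured.
print(f"{benchmark(lambda: sum(range(10000))):.3f} ms")
```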
As titled.
It appears to be impossible for me to download files from the Baidu drive. It forces me to download an EXE file, which I cannot run because I'm on Linux (even setting aside that Baidu's programs are well-known spyware).
Is there another way to download these models?
Do you have 32-bit lib and dll builds of this convolution library? I compiled one myself with CMake, but it doesn't seem as fast as yours.
Dr. Zuo, could you share the training code for MTCNN?
My environment is Win10 + VS2015 (the one you provided). I ran into the problem shown while testing SampleMTCNN. How can I solve it? Thanks.
The head-detection ZQCNN models you released include six model files. I loaded them in SampleMTCNN in the order below for testing,
and got the error: failed to open file model/headdet1-dw24-fast.zqparams;
The model-loading code is:
mtcnn.Init("model/headdet1-dw20-fast.zqparams", "model/headdet1-dw20-fast-16.nchwbin",
"model/headdet1-dw24-fast.zqparams", "model/headdet1-dw24-fast-16.nchwbin",
"model/headdet1-dw48-fast.zqparams", "model/headdet1-dw48-fast-16.nchwbin",
thread_num, false);
What is the problem?
Hello,
I used your mxnet2caffe code to convert the mobilefacenet model. Running check_results, I found the outputs differ. Could you advise where the problem might be?
[
[[ 0.017276 -0.4222807 -0.07165871 0.7135863 -0.12447087 0.06586622
-0.16812389 -0.13903007 0.04360439 -0.22122525 0.0435597 0.11936011
-0.23474081 -0.3577653 -0.1679546 -0.13885504 0.16633391 0.25873622
-0.33021694 -0.11786669 -0.02872146 -0.477086 -0.11106284 0.11695821
-0.12934922 0.255884 -0.1942169 0.12630616 -0.23865224 0.18448791
0.10559033 -0.088712 -0.04691517 0.26358733 0.18018812 0.14805603
-0.2553117 0.5503354 0.5195001 -0.34429112 -0.50690997 0.46173176
0.3151336 0.02643948 -0.2646044 -0.13466054 -0.02762383 0.37433448
-0.44442838 -0.14106162 -0.03115291 -0.7350194 -0.55731094 0.02851256
-0.01867473 -0.22451565 0.0659817 0.03267149 -0.07442598 -0.00755793
-0.05205738 0.04383722 0.42879567 -0.25555742 -0.23541915 0.41604832
-0.06175407 -0.2166583 -0.14290327 -0.01595623 0.64858043 -0.13120429
-0.19330469 -0.2647119 0.02685797 0.38061866 0.1258779 -0.1579753
0.01175252 -0.3725114 -0.4544143 0.2203731 0.07688574 -0.19915707
-0.05907691 -0.26913345 0.12543568 0.20025031 -0.2774456 0.12992482
0.0010437 0.06121505 0.1540927 0.10459166 0.08840342 0.43114156
0.17922197 -0.493483 -0.7070367 -0.5172252 -0.46376437 -0.29713553
-0.03089709 0.47260872 0.22349297 0.19986378 0.2616748 0.1465985
0.1783604 -0.23194014 0.10261983 0.21745919 0.4326741 -0.04612612
0.6061288 0.34292504 0.51188225 -0.23691243 0.13759308 -0.22194609
-0.5215853 -0.13520943 -0.18358245 0.13728186 -0.27748924 0.07769431
-0.33542892 0.42282426]]
<NDArray 1x128 @cpu(0)>]
[[ 6.2732756e-02 -5.9952173e-02 2.2413557e-02 -3.1469498e-02
6.5951027e-02 -2.8367542e-02 3.2204002e-02 7.7424929e-02
-9.7055621e-02 -6.9253430e-02 -4.5930523e-02 -7.6132409e-02
-5.0844617e-02 -6.4346619e-02 -1.5729671e-02 5.3271189e-02
5.3257262e-04 -5.2365875e-03 1.4655352e-02 -4.3126322e-02
-1.0661008e-02 -8.7465690e-03 -2.8252389e-02 -7.0996016e-02
-1.9464381e-03 -2.8986113e-02 -3.6444955e-02 3.6922205e-02
-3.8776599e-02 -3.2794930e-02 -9.8875435e-03 6.2595280e-03
-5.0185490e-03 -3.4949131e-02 4.1359983e-02 3.1499282e-02
-2.8646499e-02 -5.9114099e-02 1.7148184e-02 -4.8444647e-02
-9.7029276e-02 4.4342317e-04 -2.7198687e-02 -2.8752765e-02
4.1433349e-02 1.2933503e-02 -4.9672965e-02 -3.5463576e-03
4.7990292e-02 2.5810581e-03 3.1260908e-02 1.5094576e-02
-1.3907658e-02 -2.7079349e-03 5.8484511e-03 -1.4168183e-02
1.7932409e-02 -3.7768781e-02 -1.8586438e-02 -1.0990779e-02
-4.7573805e-02 -5.8607054e-03 -4.9528219e-02 -2.2688817e-02
-6.9002919e-02 -2.1926099e-03 -3.6008663e-02 -2.7441682e-02
-2.9889676e-03 -3.6228251e-02 -8.7704889e-02 -3.7730277e-02
4.5867212e-02 -1.6259417e-02 6.2029455e-03 -1.7036878e-02
-3.9387021e-02 1.5584700e-02 3.4619402e-03 -4.7294483e-02
-4.2392161e-02 -6.7054451e-02 -3.1520490e-02 -1.1477423e-01
7.1737356e-03 5.5631641e-02 4.3232810e-02 -2.7725386e-02
3.7312735e-02 1.1762280e-02 -1.0883956e-01 4.5307346e-02
-3.2287091e-02 -1.7094158e-02 -1.6419319e-02 -3.3274885e-02
-3.8385842e-02 4.6371106e-02 -4.4565663e-02 1.4135682e-02
-2.6000340e-02 -4.3430896e-03 -5.8525845e-02 2.6870819e-02
-1.5334387e-02 2.2968277e-05 6.3471328e-03 -4.4829208e-02
-1.8722905e-02 -1.7468868e-02 -5.2707534e-02 3.7894405e-02
-1.9262727e-02 7.9293303e-02 -2.5021762e-02 2.7414378e-02
2.9555865e-02 -2.5431961e-03 -7.2565585e-02 3.5143670e-02
4.9341552e-02 -8.4194131e-02 4.3513231e-02 -2.1488478e-03
-2.2843815e-03 4.7319688e-02 6.3874900e-02 1.4988746e-02]]
The first is the mxnet output; the second is the caffe model output.
My steps: first I ran json2prototxt.py to convert the json to a prototxt.
Then, in the generated prototxt, I changed bottom: "_mulscalar0" to the previous layer's "data".
Next I ran mxnet2caffe.py to convert the mxnet model to a caffe model.
Finally I ran check_results.py, which produced the outputs above; the resulting 128-dim embeddings differ.
Thanks for any advice.
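Rather than eyeballing the raw values, the mismatch between the two embeddings can be quantified with cosine similarity (matching converted models typically score very close to 1.0). A minimal numpy sketch, assuming the two outputs have been copied into arrays:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D embedding vectors."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Truncated example values from the two dumps above (first 3 of 128 dims).
emb_mxnet = np.array([0.017276, -0.4222807, -0.07165871])
emb_caffe = np.array([0.062732756, -0.059952173, 0.022413557])
print(cosine_similarity(emb_mxnet, emb_mxnet))  # identical vectors -> 1.0
print(cosine_similarity(emb_mxnet, emb_caffe))
```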
About these two models: SphereFace06bn has higher accuracy than Mobile-SphereFace10bn and is also faster. Is it not recommended just because the model is larger?
SphereFace06bn | 98.7%-98.8% | - | not recommended
Mobile-SphereFace10bn | 98.6%-98.7% | good cost-performance ratio
Where in ZQCNN is the best place to add a cap on the maximum number of faces? Would adding such an upper limit make it faster?
Hello, about head detection: what parameters should I set in SampleMTCNN to reproduce the result on head5.jpg from the train-mtcnn-head project? I set them as follows and the results are very poor.
if (!mtcnn.Init("model/headdet1-dw20-fast.zqparams", "model/headdet1-dw20-fast-16.nchwbin",
"model/headdet2-dw24-fast.zqparams", "model/headdet2-dw24-fast-16.nchwbin",
"model/headdet3-dw48-fast.zqparams", "model/headdet3-dw48-fast-16.nchwbin", thread_num,false))
{
cout << "failed to init!\n";
return EXIT_FAILURE;
}
mtcnn.SetPara(image0.cols, image0.rows, 20, 0.4, 0.5, 0.7, 0.4, 0.5, 0.5, 0.709, 3, 20, 2, false);
Any advice is appreciated, thanks.
I converted the mxnet model of GenderAge-r50 according to the wiki and compiled the GenderAge sample successfully. But I found that all face images (3*112*112) get the result of age 28/29 and Male.
Thanks for your work!
I tested SamplePet on my company's i7 server PC. It costs 300 ms/frame for 'det1.zqparams' and 100 ms for 'det1_dw20_fast.zqparams', while on the same server PC the original MTCNN runs at 20 ms/frame. Did I make a mistake? HELP!
After porting to Linux, running SampleSphereFaceNet crashes in ZQ::ZQ_CNN_Tensor4D::ConvertFromBGR.
If I change
ZQ_CNN_Tensor4D_NHW_C_Align128bit input0, input1;
to ZQ_CNN_Tensor4D_NHW_C_Align0 input0, input1;
the problem goes away. Why is that? Will ZQCNN get a Linux version in the future?
Looking to buy the closed-source 106-point landmark model; waiting online for a reply.
hi 👍
This is a good project. Could you add a CMake build and Linux support? Please point me to a doc.
I rebuilt the project under Linux and ran SampleMTCNN as a demo test. The models load successfully, but mtcnn.find106 returns false already at the pnet_stage. The code is copied verbatim with no changes; I'm completely stumped.
My data are all 112*96. I use the MobileFaceNet-res2-6-10-2-dim256 training script, modified following "InsightFace: how to train 112*96". But training reports the error below.
How should I fix it?
mxnet.base.MXNetError: [12:15:09] c:\jenkins\workspace\mxnet-tag\mxnet\src\operator\tensor./matrix_op-inl.h:659: Check failed: e <= len (104 vs. 96) slicing with end[1]=104 exceeds limit of 96
When using this library in one process for both face detection (MTCNN) and face recognition (MobilefacenetRes), face detection misbehaves; I suspect the underlying classes are not thread-safe.
A small issue I found while debugging your code: in the line memset(dst_slice_ptr - dstPixelStep*dst_borderW + dstWidthStep*dst_borderH, 0, sizeof(float)*dstWidthStep*dst_borderH); isn't the pad in the height direction pointing to the wrong place? Shouldn't it be the lower edge of the ROI region?
Hello, what does prob.txt in the data represent, and how is it computed?
Hello, is Linux supported?
I updated your ZQCNN library and stepped through your SampleMTCNN code in VS2015. The function zq_cnn_conv_no_padding_32f_kernel3x3_C3_omp fails with an Illegal Instruction error. Does my machine not support the AVX vector extensions, or do I need to install something else?
Test code below:
for (int i = 0; i < 1000; i++)
{
std::string prototxt_file = "./model/model-r50-am.zqparams";
std::string caffemodel_file = "./model/model-r50-am.nchwbin";
std::string out_blob_name = "fc5";
ZQ_FaceRecognizerArcFaceZQCNN* pFaceZQCNN = new ZQ_FaceRecognizerArcFaceZQCNN();
if (!pFaceZQCNN->Init("", prototxt_file, caffemodel_file, out_blob_name))
{
cout << "failed to init arcface\n";
return 0;
}
delete pFaceZQCNN;
}
Memory grows on every loop iteration; delete pFaceZQCNN does not fully release it.
When I compile ZQCNN
@zuoqing1988 Thank you for your work. Is there a C++ implementation of the MTCNN-based 5-point face alignment?
Compiling the latest ZQCNN, many functions referenced in ZQ_CNN_Forward_SSEUtils.cpp have no definitions, e.g. zq_cnn_avgpooling_nopadding_suredivided_32f_align256bit_kernel2x2_omp.
I changed your ZQ_CNN_USE_SSETYPE to ZQ_CNN_SSETYPE_AVX, and compiling zq_cnn_relu_32f_align_c.c fails; with ZQ_CNN_SSETYPE_AVX2 it compiles fine. Is an #undef somewhere in your code causing this? VS2015 build output:
zq_cnn_prelu_32f_align_c.c
1> zq_cnn_relu_32f_align_c.c
1>e:\publicsvn\zqcnn-v0.0\trunk\zqcnn\layers_c\zq_cnn_relu_32f_align_c_raw.h(213): warning C4013: 'zq_mm_fmadd_ps' undefined; assuming extern returning int
1>e:\publicsvn\zqcnn-v0.0\trunk\zqcnn\layers_c\zq_cnn_relu_32f_align_c_raw.h(213): error C2440: 'function': cannot convert from 'int' to '__m256'
1>e:\publicsvn\zqcnn-v0.0\trunk\zqcnn\layers_c\zq_cnn_relu_32f_align_c_raw.h(213): warning C4024: '_mm256_store_ps': different types for formal and actual parameter 2
1>e:\publicsvn\zqcnn-v0.0\trunk\zqcnn\layers_c\zq_cnn_relu_32f_align_c_raw.h(216): error C2440: 'function': cannot convert from 'int' to '__m256'
Hi all,
I'm playing with the FacialEmotion caffe model. I can see the input to the network is of dim (10, 1, 42, 42). But the fer2013 dataset seems to have 48x48 images.
Can anyone elaborate on what the input to the model is?
Many thanks.
In pnet, with the pre-fix computation, the mapH and mapW sizes did not match the dimensions scoreH = score->GetH(); scoreW = score->GetW(); produced by pnet[0].Forward(pnet_images[i]).
After the fix below, the computation is consistent.
void _compute_Pnet_single_thread(std::vector<std::vector<float> >& maps,
    std::vector<int>& mapH, std::vector<int>& mapW)
{
    int scale_num = 0;
    for (int i = 0; i < scales.size(); i++)
    {
        int changedH = (int)ceil(height*scales[i]);
        int changedW = (int)ceil(width*scales[i]);
        if (changedH < pnet_size || changedW < pnet_size)
            continue;
        scale_num++;
        /* before the fix:
        mapH.push_back((changedH - pnet_size) / pnet_stride + 1);
        mapW.push_back((changedW - pnet_size) / pnet_stride + 1);*/
        // after the fix:
        mapH.push_back((int)ceil((changedH - pnet_size)*1.0 / pnet_stride) + 1);
        mapW.push_back((int)ceil((changedW - pnet_size)*1.0 / pnet_stride) + 1);
    }
}
Test image:
data\keliamoniz1.jpg 640x480
Result shown before the fix:
Result shown after the fix:
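The difference between the two formulas shows up whenever (changedH - pnet_size) is not a multiple of the stride: integer division rounds the last partial window away, while the ceil version keeps it, matching what the forward pass produces. A small sketch (pnet_size=20 and stride=2 are illustrative defaults, not values read from this repo's config):

```python
import math

def map_size_floor(changed, pnet_size=20, stride=2):
    # Formula before the fix: integer (floor) division.
    return (changed - pnet_size) // stride + 1

def map_size_ceil(changed, pnet_size=20, stride=2):
    # Formula after the fix: round the stride division up.
    return math.ceil((changed - pnet_size) / stride) + 1

# When (changed - pnet_size) divides evenly by the stride, both agree...
print(map_size_floor(30), map_size_ceil(30))  # 6 6
# ...but with a leftover, the ceil version yields one extra row/column.
print(map_size_floor(31), map_size_ceil(31))  # 6 7
```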
Could you provide model conversion scripts in python instead of Matlab?
Hi, I've tried out some mobilefacenet-res models and I'm impressed by the performance.
How do you come up with those net structures, is there any paper about those specific mobilefacenet-res models or you just try different network configurations and find out which one is better?
And is the training code for mobilefacenet-res available for fine-tuning, or perhaps for training the net from scratch? If not, would it be hard to modify the training code in the insightface repository to train mobilefacenet-res models?
Many thanks.
As titled.
Hello, thank you for open-sourcing such high-quality code. Would it be possible to release the training dataset as well?
The code below reproduces the problem. It uses MTCNN for face detection and then extracts face features; after some number of iterations (a few hundred to a thousand), face detection fails (mtcnn.Find returns false).
#include "ZQ_CNN_MTCNN.h"
#include "ZQ_FaceRecognizerArcFaceZQCNN.h"
#include <iostream>
#include "opencv2\opencv.hpp"
#include "ZQ_CNN_ComplieConfig.h"
#if ZQ_CNN_USE_BLAS_GEMM
#include <cblas.h>
#pragma comment(lib,"libopenblas.lib")
#endif
using namespace std;
using namespace cv;
using namespace ZQ;
int main()
{
#if ZQ_CNN_USE_BLAS_GEMM
openblas_set_num_threads(4);
#endif
for (int i = 0; i<10000;i++)
{
printf("i = %d\n", i);
Mat image0 = cv::imread("data\\AE_.jpg", 1);
if (image0.empty())
{
cout << "empty image\n";
return EXIT_FAILURE;
}
std::string prototxt_file = "model\\mobilefacenet-v0.zqparams";
std::string caffemodel_file = "model\\mobilefacenet-v0.nchwbin";
std::string out_blob_name = "fc5";
ZQ_FaceRecognizerArcFaceZQCNN* pFaceZQCNN = new ZQ_FaceRecognizerArcFaceZQCNN();
if (!pFaceZQCNN->Init("", prototxt_file, caffemodel_file, out_blob_name))
{
cout << "failed to init arcface\n";
return 0;
}
int FeatureLength = pFaceZQCNN->GetFeatDim();
float * outFeature = new float[FeatureLength];
std::vector<ZQ_CNN_BBox> thirdBbox;
ZQ_CNN_MTCNN mtcnn;
if (!mtcnn.Init("model\\det1.zqparams", "model\\det1.nchwbin", "model\\det2.zqparams",
"model\\det2.nchwbin", "model\\det3.zqparams", "model\\det3.nchwbin"))
{
cout << "failed to init!\n";
return EXIT_FAILURE;
}
mtcnn.SetPara(image0.cols, image0.rows, 60, 0.6, 0.7, 0.7, 0.5, 0.5, 0.5);
if (!mtcnn.Find(image0.data, image0.cols, image0.rows, image0.step[0], thirdBbox))
{
cout << "failed to find face!\n";
return EXIT_FAILURE;
}
float face5point_x[5] = { 0 };
float face5point_y[5] = { 0 };
for (int num = 0; num < 5; num++)
{
face5point_x[num] = *(thirdBbox[0].ppoint + num);
face5point_y[num] = *(thirdBbox[0].ppoint + num + 5);
}
Mat NormFace = Mat(pFaceZQCNN->GetCropHeight(), pFaceZQCNN->GetCropWidth(), CV_8UC3);
pFaceZQCNN->CropImage(image0.data, image0.cols, image0.rows, image0.step[0], ZQ_PIXEL_FMT_BGR, face5point_x, face5point_y,
(unsigned char*)(NormFace.data), NormFace.step[0]);
if (!pFaceZQCNN->ExtractFeature((unsigned char*)(NormFace.data), NormFace.step[0],
ZQ_PIXEL_FMT_BGR, outFeature, true))
{
cout << "failed to ExtractFeature!\n";
return EXIT_FAILURE;
}
delete pFaceZQCNN;
delete[]outFeature;
}
return EXIT_SUCCESS;
}
Hi all, I'm going to test mobilefacenet-res4-8-16-4-dim512's performance.
Before testing, can anyone tell me how the face alignment is done here? Is it done using cv2 warpAffine like in the original mxnet insightface implementation, or something else?
I've seen a cpp file that may contain the alignment procedure, but I am not familiar with cpp, so any help or explanation is appreciated.
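For context, the original insightface alignment estimates a similarity transform (the Umeyama method behind skimage's SimilarityTransform) from the five detected landmarks to a fixed template, then warps with cv2.warpAffine. A minimal numpy sketch of the estimation step; the 112x112 template coordinates below are the ones circulated in insightface forks and are an assumption here, not taken from this repo:

```python
import numpy as np

# 5-point landmark template for 112x112 crops (assumed, see note above).
TEMPLATE = np.array([
    [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
    [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float64)

def similarity_transform(src, dst):
    """Umeyama least-squares similarity (scale + rotation + translation)
    mapping src points onto dst. Returns a 2x3 affine matrix suitable for
    cv2.warpAffine(img, M, (112, 112))."""
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against an accidental reflection in the recovered rotation.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])
```

Given the five MTCNN landmarks as `src`, `similarity_transform(src, TEMPLATE)` yields the matrix to warp the face into the crop.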
Hello, thank you very much for sharing. I am now training mtcnn and ran into the error below.
F:\train-mtcnn-head>set MXNET_CUDNN_AUTOTUNE_DEFAULT=0
F:\train-mtcnn-head>python example\train_P_net20.py --gpus 0 --lr 0.001 --image_set train_20_1 --prefix model/pnet20 --end_epoch 16 --lr_epoch 8,14 --frequent 10 --batch_size 1000 --thread_num 24
D:\ProgramData\Anaconda2\lib\site-packages\urllib3\contrib\pyopenssl.py:46: DeprecationWarning: OpenSSL.rand is deprecated - you should use os.urandom instead
import OpenSSL.SSL
Called with argument:
Namespace(batch_size=1000, begin_epoch=0, dataset_path='data/mtcnn', end_epoch=16, epoch=0, frequent=10, gpu_ids='0', image_set='train_20_1', lr=0.001, lr_epoch='8,14', prefix='model/pnet20', pretrained='model/pnet20', resume=False, root_path='data', thread_num=24)
init weights and bias:
hello3
F:\train-mtcnn-head\example\train.py:38: DeprecationWarning: Calling initializer with init(str, NDArray) has been deprecated. please use init(mx.init.InitDesc(...), NDArray) instead.
init(k, args[k])
F:\train-mtcnn-head\example\train.py:55: DeprecationWarning: Calling initializer with init(str, NDArray) has been deprecated. please use init(mx.init.InitDesc(...), NDArray) instead.
init(k, auxs[k])
lr 0.001 lr_epoch [8, 14] lr_epoch_diff [8, 14]
Traceback (most recent call last):
File "example\train_P_net20.py", line 62, in <module>
args.begin_epoch, args.end_epoch, args.batch_size, args.thread_num, args.frequent, args.lr, lr_epoch, args.resume)
File "example\train_P_net20.py", line 17, in train_P_net20
20, True, True, frequent, not resume, lr, lr_epoch)
File "F:\train-mtcnn-head\example\train.py", line 90, in train_net
arg_params=args, aux_params=auxs, begin_epoch=begin_epoch, num_epoch=end_epoch)
File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\base_module.py", line 460, in fit
for_training=True, force_rebind=force_rebind)
File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\module.py", line 429, in bind
state_names=self._state_names)
File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\executor_group.py", line 265, in init
self.bind_exec(data_shapes, label_shapes, shared_group)
File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\executor_group.py", line 361, in bind_exec
shared_group))
File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\executor_group.py", line 639, in _bind_ith_exec
shared_buffer=shared_data_arrays, **input_shapes)
File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\symbol\symbol.py", line 1518, in simple_bind
raise RuntimeError(error_msg)
RuntimeError: simple_bind error. Arguments:
data: (1000, 3L, 20L, 20L)
bbox_target: (1000, 4L)
label: (1000,)
[14:26:44] C:\projects\mxnet-distro-win\mxnet-build\src\storage\storage.cc:125: Compile with USE_CUDA=1 to enable GPU usage
Have you seen this error? Is my mxnet version wrong?
Thanks for your work! But I ran into some problems while testing and would like help:
1. On Windows with an i5-8400 (single thread), MTCNN on 640*480 (4.jpg) takes 32 ms (after changing ZQ_CNN_SSETYPE_AVX2 to ZQ_CNN_SSETYPE_AVX). Can this be made faster, and how?
2. I tried to replace the first two models with the fast versions you uploaded, and model initialization fails. How do I fix this?
Thanks.
Hello, have you ever measured the data asked about in the title? Compared with the original caffe-mtcnn, is the accuracy better, and are the speed and GPU memory usage improved?
I want to save your rnet input images to take a look, but what I write out is wrong. Here is my code:
ZQ_CNN_Tensor4D_NHW_C_Align128bit& net_Img = task_rnet_images[0];
int nWidth = net_Img.GetW();
int nHeight = net_Img.GetH();
auto data = net_Img.GetFirstPixelPtr();
cv::Mat mat_(nHeight, nWidth, CV_32FC4, data);
cv::imwrite("e:\temp.jpg", mat_);
Following your method I have implemented detection on images; how can I run detection on video?
Hello, does ZQCNN enable SSE acceleration by default? Is it possible to compute using only standard C/C++?
What are the input mean and scale for each model? Is the input preprocessing the same for face recognition, attribute recognition, and the other models?
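For reference, insightface-family recognition models conventionally normalize inputs with mean 127.5 and scale 1/128, i.e. (pixel - 127.5) * 0.0078125; whether every ZQCNN model follows this convention is an assumption that should be checked against each model's .zqparams file. A sketch of that normalization:

```python
import numpy as np

def normalize_face(img_u8, mean=127.5, scale=0.0078125):
    """Map uint8 pixels into roughly [-1, 1].

    mean/scale here are the insightface-style defaults (an assumption);
    the authoritative values live in each model's .zqparams file."""
    return (img_u8.astype(np.float32) - mean) * scale

img = np.array([[0, 127, 255]], dtype=np.uint8)
print(normalize_face(img))  # ~[-0.996, -0.0039, 0.996]
```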
Step 4: I don't quite understand where to add it. Is there a concrete example with it already added?