
SparseNeuS's People

Contributors

flamehaze1115, xxlong0


SparseNeuS's Issues

Have you tried applying it to a real-world dataset?

Thanks for your excellent work!

I want to test it in the real world, but first I have to write an LLFF dataloader myself. Do you think it will work well in a place like a textured market that meets the Manhattan assumption? Although the geometry is simple, how much detail can be preserved in novel view synthesis?

Best Regards,
Tao

What is the training time of one epoch (200k iterations, as in your config)?

Hello, thanks for your excellent work! I got confused about the training and inference time when trying to reproduce your model. I trained the first stage of SparseNeuS on two V100 GPUs, but the training time seems too long to be acceptable. Maybe something is wrong with my conf, so could you please give some information about the expected training and inference time?

I want to try my own dataset and hope you can give some advice

Hello, I want to try training on my own dataset, but I ran into some problems. I hope you can give me some advice. Thanks.

1) The camera intrinsic and extrinsic parameters in my data are not uniform, so I modified some of the code in "dtu_general.py". I am confused about the role of the "pair.txt" file: does each scene need its own?

2) The resolution of my images is not 640x512 and not a multiple of 32. Does this affect running the program? (See the padding sketch after this list.)

3) I selected 9 scenes from the DTU dataset, removed the background, and trained on them. The results were very bad. Do I need to increase the amount of data, or is something else wrong?
[screenshots of the poor reconstruction results attached]
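
Regarding question 2, most MVS-style feature extractors downsample by powers of two, so image dimensions usually need to be divisible by 32; the exact requirement for SparseNeuS is not confirmed here. A minimal, repo-independent padding sketch (assuming OpenCV and NumPy):

    import cv2
    import numpy as np

    def pad_to_multiple(img: np.ndarray, multiple: int = 32) -> np.ndarray:
        """Zero-pad an HxWxC image on the bottom/right so H and W are multiples of `multiple`."""
        h, w = img.shape[:2]
        pad_h = (-h) % multiple
        pad_w = (-w) % multiple
        return cv2.copyMakeBorder(img, 0, pad_h, 0, pad_w,
                                  cv2.BORDER_CONSTANT, value=0)

    img = cv2.imread("example.png")  # hypothetical input image
    print(img.shape, "->", pad_to_multiple(img).shape)

Padding only at the bottom/right keeps the principal point (cx, cy) unchanged; cropping instead would require adjusting the intrinsics.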

"weight_sum" and "alpha_sum" become zero in generic process

Hi, author.
I ran the generic and finetune code successfully on the DTU dataset.
On a customized dataset, however, the parameters "weight_sum" and "alpha_sum" become zero during the generic stage.
How can I avoid this? By introducing "visibility_beta" and "visibility_weight_thred", or in some other way?

About a val_step parameter error and sdf_network_lod1 loading failure

Hi, thanks for your excellent work and code! I just ran the code following the "Easy to try" instructions and got an error about the save_vis parameter:
[screenshot of the error attached]
Checking the code, I find that the generic model passes the save_vis parameter to the val_step function, while the finetune model uses val_depth there.
[screenshot of the relevant code attached]
As a separate problem, the log says that sdf_network_lod1 fails to load, and the mesh extracted after 3 epochs does not look right.
[screenshot of the loading failure attached]
Should I modify visibility_weight_thred?
[screenshot of the extracted mesh attached]

Testing on the DTU_TEST and small cavities on meshes for some scans

Hello, thanks for the great work! I am currently trying to test the model after training both lod0 and lod1 as described in the repository. For testing, however, the expected dataset layout appears to match the sample finetuning dataset rather than the training dataset. So I decided to use the Dtu_Fit dataset class to load the DTU_TEST dataset you provided in one of the issues, together with the validation code in trainer_generic.py.

However, when I reconstruct some of the scans, I observe small cavities, as in the following screenshot:
[screenshot of the mesh cavities attached]
Can you provide some guidance on testing the trained network on the test dataset?

Confusion about the mask in the general training

Hi, I downloaded dtu_training.rar from the link you provided and did not find a Masks folder. However, the dataloader dtu_general.py contains the path mask_filename = os.path.join(self.root_dir, f'Masks/{scan}train/mask{vid:04d}.png').
For the training images in the Rectified folder, the backgrounds appear to be white. I am not sure whether your code treats the white region as the mask instead of reading a mask file during training. Could you please clarify? Thanks!
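
If the mask files are indeed missing, one workaround (my own suggestion, not something from the repo) is to derive a rough mask by thresholding the near-white background of the Rectified images:

    import cv2
    import numpy as np

    def mask_from_white_bg(img_path: str, thresh: int = 250) -> np.ndarray:
        """Treat pixels whose channels are all near 255 as background; return a 0/255 mask."""
        img = cv2.imread(img_path)                      # HxWx3
        background = np.all(img >= thresh, axis=-1)
        mask = (~background).astype(np.uint8) * 255     # 255 = object, 0 = background
        kernel = np.ones((5, 5), np.uint8)              # small opening removes speckles
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

Whether this matches the masks the authors actually trained with is unclear, so treat it as a stopgap.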

Generic training issue: the lod1 stage cannot extract a reliable mesh

Hi, thanks for your excellent paper and codes!
I have run your training code (only the generic stage, not the finetune stage), but the lod1 results are confusing.

In the lod0 stage, the extracted mesh looks reliable. I trained lod0 for 190k iterations and saved the checkpoint, then used it as the resumed checkpoint at the beginning of lod1 training. (The released code has some issues loading the lod0 checkpoint; I manually modified the load_checkpoint function.) The mesh extracted after about 190k iterations is shown below:
[lod0 mesh screenshot attached]

In the lod1 stage, the extracted mesh looks unreliable. The mesh extracted after about 60k iterations is shown below:
[lod1 mesh screenshot attached]

I did not modify any conf files or network parameters. A colleague of mine found the same confusing results in lod1 training.

Is there an error in the released code?

How to train and predict without background?

Dear author,

We are working on a project where we need the mesh of the object without the background. I noticed that, by default, the input to the demo code does not have the background removed. I now want to train on my object without background. My question is: after image matting, should I fill the background region with black or white? Or should I convert the image to RGBA with the alpha channel encoding opacity? (See the compositing sketch below.)

Thanks!

Jiakui
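
For reference, a small sketch (my own, not from the repo) that composites an RGBA matte onto a solid background, so that either convention can be tried:

    from PIL import Image

    def composite_on_solid(rgba_path: str, color=(255, 255, 255)) -> Image.Image:
        """Composite an RGBA image onto a solid background color."""
        rgba = Image.open(rgba_path).convert("RGBA")
        background = Image.new("RGBA", rgba.size, color + (255,))
        return Image.alpha_composite(background, rgba).convert("RGB")

    # "object_rgba.png" is a hypothetical matted input image.
    composite_on_solid("object_rgba.png", (255, 255, 255)).save("object_white.png")
    composite_on_solid("object_rgba.png", (0, 0, 0)).save("object_black.png")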

Some confusion about Table 1 in the paper.

Thank you for your nice work, but I'm confused about Table 1.

In Table 1, the mean Chamfer distance of IDR is 3.39, UNISURF is 4.39, NeuS is 4.00, and COLMAP is 1.52.
But in their own papers, the Chamfer distance of IDR is 0.9, UNISURF is 1.02, and NeuS is 0.84. In addition, COLMAP with trim=7 is 0.65 and with trim=0 is 1.36 in the IDR paper.

What is the difference between your results and theirs?

Looking forward to your reply!
Thank you.

Code release date

Hi! Thanks for your excellent work!
Do you have a plan for when to release your code?

RuntimeError: CUDA error: invalid configuration argument.

Hi, I want to render my own data without training. I followed NeuS/preprocess_custom_data, using COLMAP to prepare my own test data. The results are in the same format as sample_data/scan_114.
My command is
python exp_runner_finetune.py \
    --mode val --conf ./confs/finetune.conf --is_finetune \
    --checkpoint_path ./weights/ckpt.pth \
    --case_name scan114 --train_imgs_idx 0 1 2 --test_imgs_idx 0 1 2 --near 200 --far 800 \
    --visibility_beta 0.010 --visibility_gama 0.010 --visibility_weight_thred 0.7
Without any changes other than the case_name, scan114 validates successfully, but my own data raises the error: RuntimeError: CUDA error: invalid configuration argument.
The output:

detected 8 GPUs
base_exp_dir: ./exp/dtu/finetune/DTU/seen_imgs_0_1_2/2022_08_31_12_46_19
[exp_runner_finetune.py:177 - init() ] Find checkpoint: ./weights/ckpt.pth
sdf_network_lod1 load fails
[exp_runner_finetune.py:483 - load_checkpoint() ] End
/DATA/disk1/epic/yanzhu/miniconda3/envs/nrvgn/lib/python3.9/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2228.)
return VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
type ncc patch_size 3 beta 0.01 gama 0.01 weight_thred [0.7]
[dtu_fit.py:45 - init() ] Load data: Begin
[dtu_fit.py:107 - init() ] Load data: End
[dtu_fit.py:45 - init() ] Load data: Begin
[dtu_fit.py:107 - init() ] Load data: End
/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/data/dtu_fit.py:238: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
sample['partial_vol_origin'] = torch.tensor(self.partial_vol_origin, dtype=torch.float32)
/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/data/dtu_fit.py:238: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
sample['partial_vol_origin'] = torch.tensor(self.partial_vol_origin, dtype=torch.float32)
/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/data/dtu_fit.py:238: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
sample['partial_vol_origin'] = torch.tensor(self.partial_vol_origin, dtype=torch.float32)
/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/data/dtu_fit.py:238: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
sample['partial_vol_origin'] = torch.tensor(self.partial_vol_origin, dtype=torch.float32)
/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/data/dtu_fit.py:209: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
sample['partial_vol_origin'] = torch.tensor(self.partial_vol_origin, dtype=torch.float32)
Traceback (most recent call last):
File "/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/exp_runner_finetune.py", line 596, in <module>
runner = Runner(args.conf, args.mode, args.is_continue,
File "/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/exp_runner_finetune.py", line 202, in __init__
self.initialize_network()
File "/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/exp_runner_finetune.py", line 283, in initialize_network
self.trainer.initialize_finetune_network(sample, train_from_scratch=self.train_from_scratch)
File "/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/models/trainer_finetune.py", line 231, in initialize_finetune_network
con_volume, con_mask_volume, _ = self.prepare_con_volume(sample)
File "/DATA/disk1/epic/yanzhu/miniconda3/envs/nrvgn/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/models/trainer_finetune.py", line 176, in prepare_con_volume
conditional_features_lod0 = self.sdf_network_lod0.get_conditional_volume(
File "/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/models/sparse_sdf_network.py", line 364, in get_conditional_volume
feat = self.sparse_costreg_net(sparse_feat)
File "/DATA/disk1/epic/yanzhu/miniconda3/envs/nrvgn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/tsparse/modules.py", line 293, in forward
conv0 = self.conv0(x)
File "/DATA/disk1/epic/yanzhu/miniconda3/envs/nrvgn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/DATA/disk1/epic/ciyuruan/SparseNeuS/SparseNeuS/tsparse/modules.py", line 107, in forward
out = self.net(x)
File "/DATA/disk1/epic/yanzhu/miniconda3/envs/nrvgn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/DATA/disk1/epic/yanzhu/miniconda3/envs/nrvgn/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/DATA/disk1/epic/yanzhu/miniconda3/envs/nrvgn/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ciyuruan/.local/lib/python3.9/site-packages/torchsparse/nn/modules/conv.py", line 66, in forward
return F.conv3d(input,
File "/home/ciyuruan/.local/lib/python3.9/site-packages/torchsparse/nn/functional/conv.py", line 114, in conv3d
results = F.sphashquery(queries, references)
File "/home/ciyuruan/.local/lib/python3.9/site-packages/torchsparse/nn/functional/query.py", line 21, in sphashquery
output = torchsparse.backend.hash_query_cuda(queries, references,
RuntimeError: CUDA error: invalid configuration argument

The correct output with scan114:
[screenshot of the successful validation attached]
I also printed world_mat_0 to check whether there is a transposition:
[screenshot comparing the two world_mat_0 matrices attached]
The left is my test data (seen), the right is the example (scan114).
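
For what it is worth, this torchsparse error can occur when the sparse feature volume ends up empty, e.g. because the custom scene is not normalized the way the DTU samples are; that is only a guess. A small sanity check (assuming a NeuS-style cameras_sphere.npz with world_mat_i and scale_mat_i entries) that prints each camera center in the normalized frame:

    import cv2
    import numpy as np

    cams = np.load("cameras_sphere.npz")          # path is an assumption
    for i in range(3):                            # the three seen views
        P = (cams[f"world_mat_{i}"] @ cams[f"scale_mat_{i}"])[:3, :4]
        K, R, t = cv2.decomposeProjectionMatrix(P)[:3]
        center = (t[:3] / t[3]).ravel()           # camera center in normalized coordinates
        print(i, "center:", center, "|center| =", np.linalg.norm(center))

If these distances differ wildly from those of scan_114, the scale normalization of the custom data is probably off.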

About view synthesis

Hi! Thanks for your excellent work!
SparseNeuS can reconstruct geometry quickly and accurately from sparse views, which is amazing. However, when I run view synthesis with the provided model parameters, the results are not very good, and I am not sure whether I am running it correctly.
Also, do you have a quantitative analysis of view synthesis?
[rendered novel-view results attached]

Why is a depth loss required to train SparseNeuS?

Dear author,

In the paper, I noticed that only a color loss is mentioned; however, your code requires depth to train SparseNeuS from scratch. If I do not have depth information, can I still train SparseNeuS from scratch? If so, how should I do it?

Thanks so much !

Jiakui

Blending weights of the fine-tuning network in the paper

[screenshot of the relevant paper section attached]

Hello!
thanks for your great work!
I do not understand the blending weights of the fine-tuning network in the paper:

  1. You mention that a CNN-based blending network is used in the generic setting, but in Section "3.2 Appearance prediction -> Blending weights" I only find an MLP-based blending network, not a CNN.
  2. In the equation for the MLP f′c, there is no index i on the right-hand side, so it seems the image index is not used to compute the blending weight of image i (see the worked form below).

Could you please help me the figure out my issues? Thank you!

Best regards
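
Regarding point 2, one common way to write such per-view blending weights (an IBRNet-style formulation; this is my reading and may not match the paper's exact notation) does carry the index i through the per-view inputs:

    w_i = \frac{\exp\big(f'_c(\mathbf{f}_i, \Delta\mathbf{d}_i)\big)}{\sum_{j=1}^{N} \exp\big(f'_c(\mathbf{f}_j, \Delta\mathbf{d}_j)\big)}, \qquad \hat{\mathbf{c}} = \sum_{i=1}^{N} w_i \, \mathbf{c}_i

where \mathbf{f}_i is the feature extracted from source image i, \Delta\mathbf{d}_i is the difference between the query viewing direction and the direction towards source view i, and \mathbf{c}_i is the color sampled from image i. Under this reading, the index i enters through the per-view inputs of f'_c even if the equation in the paper omits it.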

About dependencies

Hello, thank you for sharing the code,
I've had some trouble setting up the environment. Could you provide the versions of the Python dependencies?
Thanks

What do I need to do to make the network predict only the object (without background)?

Dear author,

Thanks for your excellent work! I want to obtain the model without the background, so I downloaded your training set and removed the backgrounds.
No matter whether "use_white_bkgd" is set to "True" or "False", the reconstructed model still contains background. Do I need to modify some parameters in the "general_lod0.conf" file?

By "background" I mean the part outside the predicted object, as shown in the image below:
[screenshot of the reconstructed model with background attached]

The parameter I modified is this one:
[screenshot of the modified config attached]

These are a training image after background removal and the predicted model. I have tried both white and black backgrounds, and neither works very well.
[training images and reconstructed model screenshots attached]

To quickly verify my ideas, I currently train on only one scene (scan7).
What do I need to do to make the network predict only the object? Thanks.

Problem running sample_bashs

Hi, thanks for your work.
When I ran sample_bashs for an easy test, I got the following errors:

pyparsing.exceptions.ParseException: Expected '}', found '='  (at char 1374), (line:67, col:16)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "exp_runner_finetune.py", line 608, in <module>
    clip_wh=[int(x) for x in args.clip_wh]
  File "exp_runner_finetune.py", line 58, in __init__
    self.conf = ConfigFactory.parse_file(conf_path)
  File "/home/pai/lib/python3.6/site-packages/pyhocon/config_parser.py", line 142, in parse_file
    return cls.parse_string(content, os.path.dirname(filename), resolve, unresolved_value)
  File "/home/pai/lib/python3.6/site-packages/pyhocon/config_parser.py", line 192, in parse_string
    return ConfigParser().parse(content, basedir, resolve, unresolved_value)
  File "/home/pai/lib/python3.6/site-packages/pyhocon/config_parser.py", line 455, in parse
    config = config_expr.parseString(content, parseAll=True)[0]
  File "/home/pai/lib/python3.6/site-packages/pyparsing/core.py", line 1134, in parse_string
    raise exc.with_traceback(None)
pyparsing.exceptions.ParseSyntaxException: Expected '}', found '='  (at char 1374), (line:67, col:16)

I am new to NeuS and just want to give it a try, so how can I fix this and run a test?
Thanks for your reply.

Question about coordinate system.

Dear author,

I want to change the dataset, so could you tell me which coordinate system you use? Do you use the OpenGL convention (right-handed, positive x to the right, positive y up, positive z backwards)?

Thanks so much !

Hanlin

How to determine near and far for custom dataset?

Thanks for your great work!

I ran the code on the DTU dataset in NeuS format, which contains image, mask, cameras_sphere.npz and cameras_large.npz. How should I determine near and far for this dataset (or for a custom dataset created from COLMAP)?

I tried to use scale_mat_0 from camera_dict to replace self.cal_scale_mat() in dtu_fit.py, but it does not seem correct.

self.scale_mat = camera_dict['scale_mat_0'].astype(np.float32)
self.scale_factor = 1. / self.scale_mat[0,0]

# ! estimate scale_mat
# self.scale_mat, self.scale_factor = self.cal_scale_mat(
#     img_hw=[self.img_wh[1], self.img_wh[0]],
#     intrinsics=self.all_intrinsics[self.train_img_idx],
#     extrinsics=self.all_w2cs[self.train_img_idx],
#     near_fars=self.raw_near_fars[self.train_img_idx],
#     factor=1.1)
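
For what it is worth, once the scene is normalized so that the object sits inside a unit sphere at the origin (which is what the scale_mat / cal_scale_mat normalization aims for), a simple heuristic for near/far is the distance from each camera center to the origin plus or minus the sphere radius. A minimal sketch (my own suggestion, not code from the repo):

    import numpy as np

    def near_far_from_sphere(c2w: np.ndarray, radius: float = 1.0, margin: float = 0.1):
        """Estimate near/far for a camera looking at a sphere of given radius at the origin.

        c2w: 4x4 camera-to-world matrix in the normalized (scale_mat-applied) frame.
        """
        dist = np.linalg.norm(c2w[:3, 3])   # distance of the camera center to the origin
        near = max(dist - radius - margin, 1e-3)
        far = dist + radius + margin
        return near, far

Note that the --near 200 --far 800 values in the sample finetune command look like raw DTU units rather than normalized ones, so it is worth checking which frame the dataloader expects before plugging numbers in.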

How to prepare my dataset?

Hi, thanks for your great work! I tried inference on my own data, but the result was very bad. Can you tell me how to prepare my data? I have extrinsics and intrinsics from COLMAP.

RuntimeError: grad can be implicitly created only for scalar outputs

Hi, thanks for your amazing work.
The following error occurs when running the provided bash file:
Traceback (most recent call last):
File "/media/disk8T/zmx/sparseneus/exp_runner_finetune.py", line 609, in <module>
runner.train()
File "/media/disk8T/zmx/sparseneus/exp_runner_finetune.py", line 320, in train
loss.backward()
File "/home/zhumingxia/anaconda3/envs/spneus/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward
torch.autograd.backward(
File "/home/zhumingxia/anaconda3/envs/spneus/lib/python3.10/site-packages/torch/autograd/__init__.py", line 190, in backward
grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
File "/home/zhumingxia/anaconda3/envs/spneus/lib/python3.10/site-packages/torch/autograd/__init__.py", line 85, in _make_grads
raise RuntimeError("grad can be implicitly created only for scalar outputs")
raise RuntimeError("grad can be implicitly created only for scalar outputs")
Thank you in advance.
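
This error generally means that loss is a non-scalar tensor when .backward() is called. A generic illustration of the two usual remedies (not tied to the SparseNeuS code; the actual cause here may be a config or batching issue):

    import torch

    pred = torch.randn(8, 3, requires_grad=True)
    target = torch.randn(8, 3)

    loss = (pred - target) ** 2        # shape (8, 3): loss.backward() would raise this error

    # Remedy 1: reduce the loss to a scalar before calling backward().
    loss.mean().backward()

    # Remedy 2: pass an explicit gradient with the same shape as the loss.
    pred.grad = None
    loss = (pred - target) ** 2
    loss.backward(gradient=torch.ones_like(loss))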

Unable to get the valid object mesh with the provided sample bash command

Dear authors,

Thank you for your great work. I tried to run the bash command you provided:

bash ./sample_bashs/dtu_scan118.sh

The first 1k finetuning steps look good, but after 1k steps (at 2k or 3k steps) the object mesh disappears and only the background mesh is extracted.

Please take a look at the two figures for comparison (first at 1k steps, second at 2k steps):

[mesh screenshots at 1k and 2k steps attached]

Do you have any ideas for reproducing the correct mesh? Thank you for your time.

Some confusion about using the ground-truth mask file of DTU.

In the training stage, I noticed that you use the ground-truth mask files provided by DTU to sample more rays within valid regions. These ground-truth masks remove most non-object regions, which seems to introduce a mask prior like IDR, or is there something I am misunderstanding?

Question regarding lod0 and lod1

First of all, thank you for your great work on neural implicit surface representation!
I was wondering what the lod0 and lod1 portions of your implementation represent.
From what I have understood so far, these correspond to lod0 => coarse level and lod1 => fine level in the cascaded geometry reasoning diagram of the paper. Is this understanding correct?

Thanks,
Jason Jeong

How to evaluate the generated mesh and ground truth on DTU?

Thanks for your nice work!

The generated meshes contain background, which affects the results. How did you deal with it?
Did you filter the generated mesh and the GT mesh according to the image masks?
Can you provide the evaluation code and the processed GT meshes?
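
One crude way to strip the background before evaluation (my own sketch, not the authors' protocol, and assuming the mesh is in the normalized frame where the object sits inside a sphere of roughly unit radius) is to drop faces that lie entirely outside that sphere:

    import numpy as np
    import trimesh

    def crop_to_sphere(mesh_path: str, radius: float = 1.0) -> trimesh.Trimesh:
        """Remove faces whose vertices all lie outside a sphere centered at the origin."""
        mesh = trimesh.load(mesh_path, process=False)
        inside = np.linalg.norm(mesh.vertices, axis=1) <= radius
        keep = inside[mesh.faces].any(axis=1)      # keep faces touching the sphere
        mesh.update_faces(keep)
        mesh.remove_unreferenced_vertices()
        return mesh

    crop_to_sphere("generated_mesh.ply").export("generated_cropped.ply")

The official DTU evaluation additionally uses the observation masks (ObsMask files), so this is only a rough approximation.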

Texture on the model

First of all, thanks for the amazing work. It is really incredible.
It is the first time I have seen such a great result in such a small amount of time.

I just have one question for now: how do I get the texture onto the model? The script seems to produce something, but I cannot find how to apply or obtain the texture on the mesh.

Thanks a lot

Reconstruction results

Hi, thanks for your amazing work. In the results section of the paper, you compare against other methods such as MVSNeRF. I wanted to ask how you obtained the MVSNeRF results: I tried running marching cubes on MVSNeRF with a density threshold of 10 and am not getting satisfactory results. If possible, I would also be grateful if you could share the code you used.

Thank you in advance

Finetune in scan118 error

Hi, I succeeded in extracting a correct mesh in Generic mode, but in Finetune mode the extracted mesh becomes confusing during training.

At the beginning of finetuning (0 iterations):
[mesh screenshot attached]

At the end of finetuning (11k iterations):
[mesh screenshot attached]

It seems that finetuning on scan118 degenerates. I did not modify any config or source code, so what is the problem? Can anyone who encountered the same issue share a solution? Many thanks!

missing model code

Hi, thank you for sharing the code,
but some code is missing from the core models, such as sdf_network, FastRenderer, and PatchProjector. Is there any plan to upload this code?

How to create custom dataset from colmap?

Dear author,

I have many images captured with a cellphone, and I can run COLMAP to obtain the extrinsics and intrinsics. Can you tell me how to proceed to run the training and inference stages? What is the key step to make the COLMAP result work for SparseNeuS? (See the conversion sketch below.)

Thanks so much !
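
For reference, the NeuS-style preprocessing (which sample_data/scan_114 appears to follow) stores, for every view i, a projection matrix world_mat_i = K @ [R|t] built from the COLMAP intrinsics/extrinsics, plus a scale_mat_i that maps a unit sphere onto the object region. A rough sketch of building such a cameras_sphere.npz (my own, based on the NeuS convention; not verified against this repo):

    import numpy as np

    def build_camera_entries(K, w2c_list, center, radius):
        """K: 3x3 intrinsics; w2c_list: 4x4 world-to-camera matrices from COLMAP;
        center, radius: a sphere enclosing the object (e.g. from the sparse point cloud)."""
        K44 = np.eye(4)
        K44[:3, :3] = K
        scale_mat = np.diag([radius, radius, radius, 1.0]).astype(np.float32)
        scale_mat[:3, 3] = center                      # unit sphere -> object region
        entries = {}
        for i, w2c in enumerate(w2c_list):
            entries[f"world_mat_{i}"] = (K44 @ w2c).astype(np.float32)
            entries[f"scale_mat_{i}"] = scale_mat
        return entries

    # np.savez("cameras_sphere.npz", **build_camera_entries(K, w2c_list, center, radius))

Choosing center and radius so that the object really lies inside the unit sphere after normalization seems to matter most; see also the near/far discussion in the issue above.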

About training data preparation

Hi Xiaoxiao, I just want to train the model from scratch. I downloaded the mvs_training data and the DTU data that you mention in the data section. When I run the training code, I find that the resolution of the depth maps differs from the one used here:
[screenshot of the depth-loading code attached]
The depth loaded from the pfm files is 160x128, so I guess the depth should be preprocessed here.
Do you plan to publish more detailed instructions for preparing the dataset, or for training the model on one's own data?
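
In case it helps, MVSNet-style training data usually stores depth maps at 1/4 of the image resolution (160x128 for 640x512 images), so resizing them to the expected size is straightforward. A small sketch (assuming OpenCV; not the authors' exact preprocessing):

    import cv2
    import numpy as np

    def resize_depth(depth: np.ndarray, target_hw=(512, 640)) -> np.ndarray:
        """Upsample a low-resolution depth map; nearest-neighbor keeps depth edges sharp."""
        h, w = target_hw
        return cv2.resize(depth, (w, h), interpolation=cv2.INTER_NEAREST)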

The dataloader and training parameters are missing.

Hi, thanks for your interesting work.

But the dataloaders for train/test in exp_runner_generic.py are not available:
from data.dtu_perview import MVSDatasetDtuPerView
from data.dtu_idr import DtuIDR

The training parameters are also missing.
Is there any plan to publish them?

DTU Quantitative results

Hi, I recently ran the training code in Generic mode, extracted the meshes, and then used the script
https://github.com/jzhangbs/DTUeval-python to evaluate the Chamfer Distance for the 15 test scenes listed in Table 1 of your paper.
However, the quantitative results I got are worse than those in Table 1, e.g.:

  • scan24: 2.2(I got) VS 1.68(Paper)
  • scan37: 5.15(I got) VS 3.06(Paper)
  • scan50: 3.2(I got) VS 2.25(Paper)

I use --mode val on the command line to extract the mesh, and in the conf file I set test_ref_view = [23]. I could not find the reference-view setting in the paper's experiments, so maybe you used a different pairs.txt for the Chamfer Distance test? Can you share some details of your experiment?

Thanks a lot.
