
maptr's Introduction

MapTR

An End-to-End Framework for Online Vectorized HD Map Construction

Bencheng Liao1,2,3 *, Shaoyu Chen1,3 *, Yunchi Zhang1,3, Bo Jiang1,3, Tianheng Cheng1,3, Qian Zhang3, Wenyu Liu1, Chang Huang3, Xinggang Wang1 📧

1 School of EIC, HUST, 2 Institute of Artificial Intelligence, HUST, 3 Horizon Robotics

(*) equal contribution, (📧) corresponding author.

arXiv preprint (arXiv 2208.14437)

OpenReview: ICLR'23, accepted as a Spotlight

Extended arXiv preprint: MapTRv2 (arXiv 2308.05736)

News

  • Aug. 31st, 2023: The initial MapTRv2 is released on the maptrv2 branch. Please run git checkout maptrv2 to use it.
  • Aug. 14th, 2023: As requested by many researchers, the code of the MapTR-based map annotation framework (VMA) will soon be released at https://github.com/hustvl/VMA.
  • Aug. 10th, 2023: We release MapTRv2 on arXiv. MapTRv2 demonstrates much stronger performance and much faster convergence. To better meet the requirements of downstream planners (like PDM), we introduce an extra semantic element, the centerline (using the path-wise modeling proposed by LaneGAP). Code & models will be released in late August. Please stay tuned!
  • May 12th, 2023: MapTR now supports various BEV encoders, such as the BEVFormer encoder and BEVFusion bevpool. Check it out!
  • Apr. 20th, 2023: We extend MapTR to a general map annotation framework (paper, code), with high flexibility in terms of spatial scale and element type.
  • Mar. 22nd, 2023: By leveraging MapTR, VAD (paper, code) models the driving scene as a fully vectorized representation, achieving SoTA end-to-end planning performance!
  • Jan. 21st, 2023: MapTR is accepted to ICLR 2023 as a Spotlight Presentation!
  • Nov. 11th, 2022: We release an initial version of MapTR.
  • Aug. 31st, 2022: We release our paper on arXiv. Code/models are coming soon. Please stay tuned! ☕️

Introduction

MapTR/MapTRv2 is a simple, fast and strong online vectorized HD map construction framework.

framework

High-definition (HD) maps provide abundant and precise static environmental information about the driving scene, serving as a fundamental and indispensable component for planning in autonomous driving systems. In this paper, we present Map TRansformer (MapTR), an end-to-end framework for online vectorized HD map construction. We propose a unified permutation-equivalent modeling approach, i.e., modeling each map element as a point set with a group of equivalent permutations, which accurately describes the shape of the map element and stabilizes the learning process. We design a hierarchical query embedding scheme to flexibly encode structured map information and perform hierarchical bipartite matching for map element learning. To speed up convergence, we further introduce auxiliary one-to-many matching and dense supervision. The proposed method copes well with various map elements of arbitrary shape. It runs at real-time inference speed and achieves state-of-the-art performance on both the nuScenes and Argoverse2 datasets. Abundant qualitative results show stable and robust map construction quality in complex and varied driving scenes.
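
To make the permutation-equivalent idea concrete, here is a minimal sketch (function and variable names are illustrative, not this repo's API) of how the equivalent permutations of a map element's point set can be enumerated: an open polyline admits two orderings (forward and reverse), while a closed polygon additionally admits any start vertex, giving 2·Nv orderings.

```python
from typing import List

import numpy as np

def equivalent_permutations(points: np.ndarray, is_closed: bool) -> List[np.ndarray]:
    """Enumerate the vertex orderings that describe the same shape.

    points: (Nv, 2) array of ordered vertices.
    An open polyline is shape-equivalent under a direction flip (2 orderings);
    a closed polygon is additionally equivalent under any choice of start
    vertex (2 * Nv orderings).
    """
    forward, backward = points, points[::-1]
    if not is_closed:
        return [forward, backward]
    return [np.roll(seq, shift, axis=0)
            for seq in (forward, backward)
            for shift in range(len(points))]

# A 4-vertex closed boundary admits 8 equivalent orderings; an open divider only 2.
rect = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
assert len(equivalent_permutations(rect, is_closed=True)) == 8
assert len(equivalent_permutations(rect, is_closed=False)) == 2
```

During training, the point-level loss is then taken over this permutation group rather than a single arbitrary ordering, which is what the paragraph above means by stabilizing the learning process.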

Models

Results from the MapTRv2 paper

comparison

| Method  | Backbone | Lr Schd | mAP  | FPS  |
| ------- | -------- | ------- | ---- | ---- |
| MapTR   | R18      | 110ep   | 45.9 | 35.0 |
| MapTR   | R50      | 24ep    | 50.3 | 15.1 |
| MapTR   | R50      | 110ep   | 58.7 | 15.1 |
| MapTRv2 | R18      | 110ep   | 52.3 | 33.7 |
| MapTRv2 | R50      | 24ep    | 61.5 | 14.1 |
| MapTRv2 | R50      | 110ep   | 68.7 | 14.1 |
| MapTRv2 | V2-99    | 110ep   | 73.4 | 9.9  |

Notes:

  • FPS is measured on an NVIDIA RTX 3090 GPU with batch size 1 (containing 6 view images).
  • All experiments are performed on 8 NVIDIA GeForce RTX 3090 GPUs.

Results from this repo.

MapTR

nuScenes dataset

| Method      | Backbone       | BEVEncoder | Lr Schd | mAP  | FPS  | Memory         | Config | Download    |
| ----------- | -------------- | ---------- | ------- | ---- | ---- | -------------- | ------ | ----------- |
| MapTR-nano  | R18            | GKT        | 110ep   | 46.3 | 35.0 | 11907M (bs 24) | config | model / log |
| MapTR-tiny  | R50            | GKT        | 24ep    | 50.0 | 15.1 | 10287M (bs 4)  | config | model / log |
| MapTR-tiny  | R50            | GKT        | 110ep   | 59.3 | 15.1 | 10287M (bs 4)  | config | model / log |
| MapTR-tiny  | Camera & LiDAR | GKT        | 24ep    | 62.7 | 6.0  | 11858M (bs 4)  | config | model / log |
| MapTR-tiny  | R50            | bevpool    | 24ep    | 50.1 | 14.7 | 9817M (bs 4)   | config | model / log |
| MapTR-tiny  | R50            | bevformer  | 24ep    | 48.7 | 15.0 | 10219M (bs 4)  | config | model / log |
| MapTR-tiny+ | R50            | GKT        | 24ep    | 51.3 | 15.1 | 15158M (bs 4)  | config | model / log |
| MapTR-tiny+ | R50            | bevformer  | 24ep    | 53.3 | 15.0 | 15087M (bs 4)  | config | model / log |

Notes:

  • + means that we introduce the temporal setting.

MapTRv2

Please run git checkout maptrv2 and follow the installation instructions on that branch to use the following checkpoints.

nuScenes dataset

| Method   | Backbone | BEVEncoder | Lr Schd | mAP  | FPS  | Memory         | Config | Download    |
| -------- | -------- | ---------- | ------- | ---- | ---- | -------------- | ------ | ----------- |
| MapTRv2  | R50      | bevpool    | 24ep    | 61.4 | 14.1 | 19426M (bs 24) | config | model / log |
| MapTRv2* | R50      | bevpool    | 24ep    | 54.3 | WIP  | 20363M (bs 24) | config | model / log |

Argoverse2 dataset

| Method   | Backbone | BEVEncoder | Lr Schd | mAP  | FPS  | Memory         | Config | Download    |
| -------- | -------- | ---------- | ------- | ---- | ---- | -------------- | ------ | ----------- |
| MapTRv2  | R50      | bevpool    | 6ep     | 64.3 | 14.1 | 20580M (bs 24) | config | model / log |
| MapTRv2* | R50      | bevpool    | 6ep     | 61.3 | WIP  | 21515M (bs 24) | config | model / log |

Notes:

  • * means that we introduce an extra semantic element, the centerline (using the path-wise modeling proposed by LaneGAP).

Qualitative results on nuScenes val split and Argoverse2 val split

MapTR/MapTRv2 maintains stable and robust map construction quality in various driving scenes.

visualization

MapTRv2 on the whole nuScenes val split

Youtube

MapTRv2 on the whole Argoverse2 val split

Youtube

End-to-end Planning based on MapTR

e2e_planning.mp4

Getting Started

Catalog

  • temporal modules
  • centerline detection & topology support (refer to maptrv2 branch)
  • multi-modal checkpoints
  • multi-modal code
  • lidar modality code
  • argoverse2 dataset
  • Nuscenes dataset
  • MapTR checkpoints
  • MapTR code
  • Initialization

Acknowledgements

MapTR is based on mmdetection3d. It is also greatly inspired by the following outstanding contributions to the open-source community: BEVFusion, BEVFormer, HDMapNet, GKT, VectorMapNet.

Citation

If you find MapTR useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entries.

@inproceedings{MapTR,
  title={MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction},
  author={Liao, Bencheng and Chen, Shaoyu and Wang, Xinggang and Cheng, Tianheng and Zhang, Qian and Liu, Wenyu and Huang, Chang},
  booktitle={International Conference on Learning Representations},
  year={2023}
}
@article{maptrv2,
  title={MapTRv2: An End-to-End Framework for Online Vectorized HD Map Construction},
  author={Liao, Bencheng and Chen, Shaoyu and Zhang, Yunchi and Jiang, Bo and Zhang, Qian and Liu, Wenyu and Huang, Chang and Wang, Xinggang},
  journal={arXiv preprint arXiv:2308.05736},
  year={2023}
}
@article{lanegap,
  title={Lane Graph as Path: Continuity-preserving Path-wise Modeling for Online Lane Graph Construction},
  author={Liao, Bencheng and Chen, Shaoyu and Jiang, Bo and Cheng, Tianheng and Zhang, Qian and Liu, Wenyu and Huang, Chang and Wang, Xinggang},
  journal={arXiv preprint arXiv:2303.08815},
  year={2023}
}

maptr's People

Contributors

legendbc · outsidercsy · zyc10ud


maptr's Issues

Welcome update to OpenMMLab 2.0

I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 1.0 and 2.0 branches of each repo:

| Library          | OpenMMLab 1.0 branch | OpenMMLab 2.0 branch |
| ---------------- | -------------------- | -------------------- |
| MMEngine         |                      | 0.x                  |
| MMCV             | 1.x                  | 2.x                  |
| MMDetection      | 0.x, 1.x, 2.x        | 3.x                  |
| MMAction2        | 0.x                  | 1.x                  |
| MMClassification | 0.x                  | 1.x                  |
| MMSegmentation   | 0.x                  | 1.x                  |
| MMDetection3D    | 0.x                  | 1.x                  |
| MMEditing        | 0.x                  | 1.x                  |
| MMPose           | 0.x                  | 1.x                  |
| MMDeploy         | 0.x                  | 1.x                  |
| MMTracking       | 0.x                  | 1.x                  |
| MMOCR            | 0.x                  | 1.x                  |
| MMRazor          | 0.x                  | 1.x                  |
| MMSelfSup        | 0.x                  | 1.x                  |
| MMRotate         | 1.x                  | 1.x                  |
| MMYOLO           |                      | 0.x                  |

Attention: please create a new virtual environment for OpenMMLab 2.0.

Some clarifications on the architecture

Hi, I've just read the paper, and the idea and results are impressive.
I know you haven't released the source code yet, but since I want to present this paper at my workplace, I would like to better understand the architecture of your network. I have two questions:

  1. What are the dimensions of the queries in the map decoder?
    I understood that there are Nv point-level queries and N instance-level queries, and that for each instance query i we add the different point queries to get the hierarchical queries, so in the end there are Nv x N hierarchical queries. What I wish to know is: what is the dimension of each hierarchical query q^{hie}_{ij}?
  2. In the map decoder, you say that each query q^{hie}_{ij} predicts the 2-dimensional normalized BEV coordinate (xij, yij) of the reference point pij. Is this done as described in the Deformable Attention paper?
    Meaning, for each object query, the 2-D normalized coordinate of the reference point is predicted from its query embedding via a learnable linear projection followed by a sigmoid function. (A sketch of exactly this mechanism follows below.)

Many thanks in advance.
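
For readers with the same question: in Deformable-DETR-style decoders, the reference point is indeed obtained from the query embedding by a learnable linear projection followed by a sigmoid. A minimal sketch under that assumption (the dimensions and module names are illustrative, not taken from this repo):

```python
import torch
import torch.nn as nn

embed_dim = 256                      # assumed hierarchical query dimension
num_instances, num_points = 50, 20   # N instance-level x Nv point-level queries

# One hierarchical query per (instance, point) pair, each of size embed_dim.
q_hie = torch.randn(num_instances * num_points, embed_dim)

# Deformable-DETR-style reference points: a learnable linear projection plus
# sigmoid maps each query embedding to a normalized 2-D BEV coordinate in [0, 1].
ref_point_head = nn.Sequential(nn.Linear(embed_dim, 2), nn.Sigmoid())
reference_points = ref_point_head(q_hie)  # (N * Nv, 2)
print(reference_points.shape)             # torch.Size([1000, 2])
```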

visualization question

There are a few GT styles for visualization in tools/maptr, such as "fixed_num_pts", 'fixed_num_pts_torch', "polyline_pts", 'polyline_pts_quiver', 'polyline_dilated_pts', 'shift_fixed_num_pts', 'shift_fixed_num_pts_v1...3'.
Would you mind telling me which one is the default?

Output coordinates

Thank you for your work! Which coordinate system are the final point outputs of the network in?

Inferior re-implementation results

Hi, I tried the training script maptr_tiny_r50_24e from your code and used 3 RTX 3090 GPUs to run the experiments. However, the final validation mAP is 44, which is much lower than the mAP of 50 reported in your paper. Please find the log here: log_google_drive.

Is there anything that I missed? Could the number of GPUs affect the final results (though the difference seems too large for that)? Could you please provide some suggestions? Thanks for your consideration.

reproduce 2D-to-BEV transformation

@LegendBC Thanks for your great work!

Would you mind also releasing your implementation of the other 2D-to-BEV transformations used for the ablation in Table 3? I would like to check the details and make a fair comparison of this component on other hardware platforms, as you have done. Thanks.

Codes for visualization

Thanks for the awesome work!

I just wish to ask whether you could share the code for visualizing the GT and predicted maps.

Thanks in advance.

question about eval

I ran the command ./tools/dist_test_map.sh ./projects/configs/maptr/maptr_nano_r18_110e.py ./ckpts/maptr_nano_r18_110e.pth 1
and met the following problem:

Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 51, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "/workspace/MapTR/projects/mmdet3d_plugin/datasets/map_utils/tpfp.py", line 57, in custom_tpfp_gen
    matrix = custom_polyline_score(
  File "/workspace/MapTR/projects/mmdet3d_plugin/datasets/map_utils/tpfp_chamfer.py", line 52, in custom_polyline_score
    if o.intersects(pline):
AttributeError: 'numpy.int64' object has no attribute 'intersects'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "./tools/test.py", line 262, in <module>
    main()
  File "./tools/test.py", line 258, in main
    print(dataset.evaluate(outputs, **eval_kwargs))
  File "/workspace/MapTR/projects/mmdet3d_plugin/datasets/nuscenes_map_dataset.py", line 1482, in evaluate
    ret_dict = self._evaluate_single(result_files[name], metric=metric)
  File "/workspace/MapTR/projects/mmdet3d_plugin/datasets/nuscenes_map_dataset.py", line 1416, in _evaluate_single
    mAP, cls_ap = eval_map(
  File "/workspace/MapTR/projects/mmdet3d_plugin/datasets/map_utils/mean_ap.py", line 256, in eval_map
    tpfp = pool.starmap(
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 372, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 771, in get
    raise self._value
AttributeError: 'numpy.int64' object has no attribute 'intersects'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1584) of binary: /usr/bin/python3
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 689, in run
    elastic_launch(
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
***************************************
         ./tools/test.py FAILED        
=======================================
Root Cause:
[0]:
  time: 2022-12-15_10:15:34
  rank: 0 (local_rank: 0)
  exitcode: 1 (pid: 1584)
  error_file: <N/A>
  msg: "Process failed with exitcode 1"
=======================================
Other Failures:
  <NO_OTHER_FAILURES>
***************************************

Does anyone know the reason and solutions?
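
One likely cause (an assumption based on the traceback, not confirmed by the maintainers here): Shapely 2.0 changed STRtree.query to return integer indices instead of geometry objects, so code written against Shapely 1.x receives numpy.int64 values where it expects geometries, which produces exactly this AttributeError. Pinning the package below 2.0 (pip install 'shapely<2.0') is the usual workaround for 1.x-era code. A small sketch of the version difference:

```python
import numpy as np
from shapely.geometry import LineString
from shapely.strtree import STRtree

lines = [LineString([(0, 0), (1, 1)]), LineString([(2, 2), (3, 3)])]
tree = STRtree(lines)
query_geom = LineString([(0, 1), (1, 0)])

for o in tree.query(query_geom):
    # Shapely 1.x yields geometry objects here; Shapely 2.x yields integer
    # indices, so calling o.intersects(...) directly raises
    # "AttributeError: 'numpy.int64' object has no attribute 'intersects'".
    geom = lines[int(o)] if isinstance(o, (int, np.integer)) else o
    if geom.intersects(query_geom):
        print("intersection found")
```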

Are there any modifications to mmdetection3d?

Thanks for the gorgeous work! I see that a full mmdetection3d project is included. Are there any differences or code modifications relative to the original open-source version?
Thank you.

problem about nuScenes datasets

Hi,
Thanks for sharing such a great job!

I want to reproduce your results. Does the dataset have to be prepared as the full nuScenes dataset, or does v1.0-mini work?
If v1.0-mini is possible, how should the mini dataset's structure be set up?

Thanks!

code details

Hi, thanks for your great work; I'm interested in it.

I noticed that you predict a bbox in the code and constrain it with a loss function during training. What is the role of the predicted bbox? I am quite confused, and it seems that you did not mention it in the paper.

Looking forward to your reply

problem of training

When I run the code with one GPU, I face this problem:
Traceback (most recent call last):
  File "tools/train.py", line 259, in <module>
    main()
  File "tools/train.py", line 215, in main
    model = build_model(
  File "/home/xx/data/MapTR/mmdetection3d/mmdet3d/models/builder.py", line 84, in build_model
    return build_detector(cfg, train_cfg=train_cfg, test_cfg=test_cfg)
  File "/home/xx/data/MapTR/mmdetection3d/mmdet3d/models/builder.py", line 57, in build_detector
    return DETECTORS.build(
  File "/home/xx/anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/utils/registry.py", line 212, in build
    return self.build_func(*args, **kwargs, registry=self)
  File "/home/xx/anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/xx/anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/utils/registry.py", line 44, in build_from_cfg
    raise KeyError(
KeyError: 'MapTR is not in the models registry'
Could you help me fix this issue?
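
This error usually means the custom plugin package was never imported, so its @DETECTORS.register_module() decorators never ran. The evaluation logs elsewhere on this page print projects.mmdet3d_plugin, which suggests the repo's entry scripts import the plugin from fields in the config. A hedged sketch of that BEVFormer-style mechanism (the plugin/plugin_dir field names follow the common convention and are assumptions here, not verified against this repo's tools/train.py):

```python
import importlib

from mmcv import Config

cfg = Config.fromfile('projects/configs/maptr/maptr_tiny_r50_24e.py')

# If the entry script skips this step, MapTR is never registered and
# build_model() fails with "MapTR is not in the models registry".
if cfg.get('plugin', False):
    plugin_dir = cfg.get('plugin_dir', 'projects/mmdet3d_plugin/')
    module_path = plugin_dir.rstrip('/').replace('/', '.')  # -> projects.mmdet3d_plugin
    importlib.import_module(module_path)  # runs the register_module decorators
```

So the first things to check are that you launch from the repo root and that the config's plugin fields were not removed.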

CUDA out of memory while training

I use 2 NVIDIA TITAN RTX 24GB GPUs to train this model:
./tools/dist_train.sh ./projects/configs/maptr/maptr_nano_r18_110e.py 2
but I get CUDA out of memory while moving the model to the GPU.

Will the network weights and the dataset be open-sourced?

Hi! I am working on a topic about HD-map semantic recognition and came across this paper. Its semantic content is richer than what we currently work with, including road markings. Will the network weights, or the dataset you used for training, be open-sourced? Please help a student out. Thank you!

Visualize Prediction Failure following the docs

Hi, thanks for the nice work! I strictly followed the docs you provided and completed the training and evaluation parts (maptr_tiny_r50_24e.py). However, when I try to visualize the predictions, I meet this problem:

Traceback (most recent call last):
  File "tools/maptr/vis_pred.py", line 401, in <module>
    main()
  File "tools/maptr/vis_pred.py", line 235, in main
    pts_filename = img_metas[0]['pts_filename']
KeyError: 'pts_filename'

It seems to be caused by something in the data. Could you please tell me how to modify the raw code to get it right? Thanks a lot.

Error when running the model: the model and loaded state dict do not match exactly

Hi! When I run the MapTR-tiny model, it prompts the following error:


load checkpoint from local path: ./ckpts/maptr_tiny_r50_24e.pth
The model and loaded state dict do not match exactly

unexpected key in source state_dict: pts_bbox_head.transformer.encoder.layers.0.attentions.1.attention.grid_offsets


The executed command is as follows:

/tools/dist_test_map.sh ./projects/configs/maptr/maptr_tiny_r50_24e.py ./ckpts/maptr_tiny_r50_24e.pth 1 

Could this be caused by an incorrect parameter input? If so, is there a good solution? Thanks!

Appendix: full log output from the model run

(maptr) ➜  MapTR git:(main) ✗ ./tools/dist_test_map.sh ./projects/configs/maptr/maptr_tiny_r50_24e.py ./ckpts/maptr_tiny_r50_24e.pth 1 
/home/test/anaconda3/envs/maptr/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torch.distributed.run.
Note that --use_env is set by default in torch.distributed.run.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
projects.mmdet3d_plugin
/home/test/code/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/home/test/code/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/home/test/code/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/home/test/anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/home/test/anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/home/test/anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/home/test/anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:92: UserWarning: The arguments `dropout` in MultiheadAttention has been deprecated, now you can separately set `attn_drop`(float), proj_drop(float), and `dropout_layer`(dict) 
  warnings.warn('The arguments `dropout` in MultiheadAttention '
load checkpoint from local path: ./ckpts/maptr_tiny_r50_24e.pth
The model and loaded state dict do not match exactly

unexpected key in source state_dict: pts_bbox_head.transformer.encoder.layers.0.attentions.1.attention.grid_offsets

[                                                  ] 0/6019, elapsed: 0s, ETA:/home/test/anaconda3/envs/maptr/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at  ../aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
[                                                  ] 1/6019, 0.3 task/s, elapsed: 4s, ETA: 22939s/home/test/code/MapTR/projects/mmdet3d_plugin/core/bbox/coders/nms_free_coder.py:236: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  self.post_center_range = torch.tensor(
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 6019/6019, 1.6 task/s, elapsed: 3695s, ETA:     0s
Formating bboxes of pts_bbox
Start to convert map detection format...
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 6019/6019, 53846.5 task/s, elapsed: 0s, ETA:     0s
data/nuscenes/nuscenes_map_anns_val.json exist, not update
Results writes to test/maptr_tiny_r50_24e/Sat_Feb__4_12_41_21_2023/pts_bbox/nuscmap_results.json
Evaluating bboxes of pts_bbox
Formating results & gts by classes
results path: /home/test/code/MapTR/test/maptr_tiny_r50_24e/Sat_Feb__4_12_41_21_2023/pts_bbox/nuscmap_results.json
Formatting ...
Cls data formatting done in 21.579294s!! with /home/test/code/MapTR/test/maptr_tiny_r50_24e/Sat_Feb__4_12_41_21_2023/pts_bbox/cls_formatted.pkl
-*-*-*-*-*-*-*-*-*-*use metric:chamfer-*-*-*-*-*-*-*-*-*-*
-*-*-*-*-*-*-*-*-*-*threshhold:0.5-*-*-*-*-*-*-*-*-*-*
cls:divider done in 1.090212s!!
cls:ped_crossing done in 0.051536s!!
cls:boundary done in 0.068384s!!

+--------------+-------+------+--------+-------+
| class        | gts   | dets | recall | ap    |
+--------------+-------+------+--------+-------+
| divider      | 27332 | 0    | 0.000  | 0.000 |
| ped_crossing | 6406  | 0    | 0.000  | 0.000 |
| boundary     | 21050 | 0    | 0.000  | 0.000 |
+--------------+-------+------+--------+-------+
| mAP          |       |      |        | 0.000 |
+--------------+-------+------+--------+-------+
-*-*-*-*-*-*-*-*-*-*threshhold:1.0-*-*-*-*-*-*-*-*-*-*
cls:divider done in 1.094373s!!
cls:ped_crossing done in 0.058753s!!
cls:boundary done in 0.068401s!!

+--------------+-------+------+--------+-------+
| class        | gts   | dets | recall | ap    |
+--------------+-------+------+--------+-------+
| divider      | 27332 | 0    | 0.000  | 0.000 |
| ped_crossing | 6406  | 0    | 0.000  | 0.000 |
| boundary     | 21050 | 0    | 0.000  | 0.000 |
+--------------+-------+------+--------+-------+
| mAP          |       |      |        | 0.000 |
+--------------+-------+------+--------+-------+
-*-*-*-*-*-*-*-*-*-*threshhold:1.5-*-*-*-*-*-*-*-*-*-*
cls:divider done in 1.080050s!!
cls:ped_crossing done in 0.055158s!!
cls:boundary done in 0.067125s!!

+--------------+-------+------+--------+-------+
| class        | gts   | dets | recall | ap    |
+--------------+-------+------+--------+-------+
| divider      | 27332 | 0    | 0.000  | 0.000 |
| ped_crossing | 6406  | 0    | 0.000  | 0.000 |
| boundary     | 21050 | 0    | 0.000  | 0.000 |
+--------------+-------+------+--------+-------+
| mAP          |       |      |        | 0.000 |
+--------------+-------+------+--------+-------+
divider: 0.0
ped_crossing: 0.0
boundary: 0.0
map: 0.0
{'NuscMap_chamfer/divider_AP': 0.0, 'NuscMap_chamfer/ped_crossing_AP': 0.0, 'NuscMap_chamfer/boundary_AP': 0.0, 'NuscMap_chamfer/mAP': 0.0, 'NuscMap_chamfer/divider_AP_thr_0.5': 0.0, 'NuscMap_chamfer/divider_AP_thr_1.0': 0.0, 'NuscMap_chamfer/divider_AP_thr_1.5': 0.0, 'NuscMap_chamfer/ped_crossing_AP_thr_0.5': 0.0, 'NuscMap_chamfer/ped_crossing_AP_thr_1.0': 0.0, 'NuscMap_chamfer/ped_crossing_AP_thr_1.5': 0.0, 'NuscMap_chamfer/boundary_AP_thr_0.5': 0.0, 'NuscMap_chamfer/boundary_AP_thr_1.0': 0.0, 'NuscMap_chamfer/boundary_AP_thr_1.5': 0.0}

'rotate' is not defined

First of all, thanks for open-sourcing this work!
When training with len_queue=2, I get the following error:
File ".../maptr/projects/mmdet3d_plugin/maptr/modules/transformer.py", line 140, in get_bev_features
NameError: name 'rotate' is not defined
The rotate function is indeed not defined anywhere in the code. Could you open-source the rotate function or add it to the project? Thanks.

level of feature maps

Hi!
Thanks for the impressive work!
I wonder whether you have tried multiple levels of feature maps. If so, what is the mAP?

Some questions about annotations

Hi! First, thanks for your excellent work! But I still have some questions about the annotations.
Could you please tell me whether the GT is point-wise or polylines? Looking forward to your reply, thanks!

About the number of points in a point set

I have just finished reading this paper; excellent work!
Are all road elements represented with 20 points? Is there a criterion for selecting these 20 points, such as equal spacing? For polygonal road elements like guide arrows, how should the points be chosen? Equal spacing seems hard to apply around corners. If there is code for this processing, could you point out where it is in the codebase?
Looking forward to your reply; thanks.
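
For context, equidistant resampling of a polyline is straightforward with shapely (already a dependency, judging by the tracebacks on this page). This is a generic sketch of arc-length interpolation, not the repo's verified preprocessing code; it also illustrates the corner problem raised above, since equidistant samples generally do not land exactly on sharp corners:

```python
import numpy as np
from shapely.geometry import LineString

def resample_polyline(points: np.ndarray, num_pts: int = 20) -> np.ndarray:
    """Resample ordered vertices to num_pts points equally spaced in arc
    length. Corners are hit only if a sample distance coincides with them."""
    line = LineString(points)
    distances = np.linspace(0.0, line.length, num_pts)
    return np.array([line.interpolate(d).coords[0] for d in distances])

# An L-shaped divider resampled to the 20 points discussed above.
polyline = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0]])
print(resample_polyline(polyline, num_pts=20).shape)  # (20, 2)
```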

Abnormal loss tendency when training with a smaller batch size

When reproducing the experiment with maptr_tiny_r50_24e.py using 2 samples (per GPU) x 4 GPUs (2080 Ti), the obtained loss curves look abnormal, especially the loss_dir group:
[screenshots of the loss curves]

However, the reproduction using 2 samples (per GPU) x 8 GPUs (2080 Ti) went well overall, with the desired loss tendency. Has anyone met the same problem? What might be the cause?

increase point_cloud_range

Hi, thanks for your great work!
I want to increase the range to [-15.0, -30.0, -2.0, 60.0, 30.0, 2.0]. I modified MapTRNMSFreeCoder and point_cloud_range, but the results are not very good. Can you give some suggestions?
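
A general checklist for this kind of change (illustrative field names, not verified against this repo's configs): every range-dependent setting has to move together, and the BEV grid usually has to grow with the range so that per-cell resolution is preserved.

```python
# Hypothetical config excerpt -- names follow common MapTR-style configs
# and are assumptions, not this repo's verified keys.
point_cloud_range = [-15.0, -30.0, -2.0, 60.0, 30.0, 2.0]  # enlarged range

bbox_coder = dict(
    type='MapTRNMSFreeCoder',
    pc_range=point_cloud_range,  # decoding must denormalize into the new range
    max_num=50,                  # illustrative value
)

# If the grid stays fixed while the range grows, each BEV cell covers more
# ground and thin map elements lose resolution, which alone can hurt mAP.
bev_h, bev_w = 200, 100          # illustrative values; scale with the range
```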

questions about training

Hello, first of all, thank you for such a good job. There is one more question I'd like to ask: I only used part 1 of nuScenes for training because of disk-space limitations, but the mAP is only 0.006. Do you know any possible reasons? Looking forward to your reply, thank you very much!

[result-table screenshots at thresholds 0.5 / 1.0 / 1.5]

`ann_file` vs `map_ann_file` in config file

Dear authors,
First of all, I would like to express my admiration for the good work you have here.

I have a question regarding the parameters ann_file and map_ann_file from your config file:

ann_file=ann_file + 'nuscenes_infos_temporal_val.pkl',
map_ann_file=data_root + 'nuscenes_map_anns_val.json', 

I understand that ann_file is the path to the annotation file created by your create_data.py script. However, I don't know where map_ann_file comes from. Could you please elaborate on this parameter?
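
A hedged reading, based on the evaluation log shown later on this page (which prints "data/nuscenes/nuscenes_map_anns_val.json exist, not update"): map_ann_file points at the vectorized-map ground truth used by the map evaluator, and it appears to be written on the first evaluation run if it does not already exist, rather than by create_data.py. A sketch of the relationship:

```python
# Hedged config sketch of the two annotation paths discussed above.
data_root = 'data/nuscenes/'
ann_file = data_root  # base path for the .pkl info files

val = dict(
    # produced ahead of time by tools/create_data.py:
    ann_file=ann_file + 'nuscenes_infos_temporal_val.pkl',
    # vectorized map GT for evaluation; the eval log on this page suggests it
    # is generated on the first evaluation run when the file is absent:
    map_ann_file=data_root + 'nuscenes_map_anns_val.json',
)
```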

evaluation problem

Hi,
I followed the instructions in Installation and Eval. But when I run with
./tools/dist_test_map.sh ./projects/configs/maptr/maptr_tiny_r50_24e.py ./ckpts/maptr_tiny_r50_24e.pth 1

I get the following errors. Do you have any suggestions? Thanks.

projects.mmdet3d_plugin
/home/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/home/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/home/MapTR/projects/mmdet3d_plugin/bevformer/modules/custom_base_transformer_layer.py:94: UserWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/root/miniconda3/envs/maptr/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `feedforward_channels` in BaseTransformerLayer has been deprecated, now you should set `feedforward_channels` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/root/miniconda3/envs/maptr/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `ffn_dropout` in BaseTransformerLayer has been deprecated, now you should set `ffn_drop` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/root/miniconda3/envs/maptr/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:341: UserWarning: The arguments `ffn_num_fcs` in BaseTransformerLayer has been deprecated, now you should set `num_fcs` and other FFN related arguments to a dict named `ffn_cfgs`. 
  warnings.warn(
/root/miniconda3/envs/maptr/lib/python3.8/site-packages/mmcv/cnn/bricks/transformer.py:92: UserWarning: The arguments `dropout` in MultiheadAttention has been deprecated, now you can separately set `attn_drop`(float), proj_drop(float), and `dropout_layer`(dict) 
  warnings.warn('The arguments `dropout` in MultiheadAttention '
load checkpoint from local path: ./ckpts/maptr_tiny_r50_24e.pth
The model and loaded state dict do not match exactly

unexpected key in source state_dict: pts_bbox_head.transformer.encoder.layers.0.attentions.1.attention.grid_offsets

[                                                  ] 0/6019, elapsed: 0s, ETA:Traceback (most recent call last):
  File "./tools/test.py", line 262, in <module>
    main()
  File "./tools/test.py", line 233, in main
    outputs = custom_multi_gpu_test(model, data_loader, args.tmpdir,
  File "/home/MapTR/projects/mmdet3d_plugin/bevformer/apis/test.py", line 70, in custom_multi_gpu_test
    for i, data in enumerate(data_loader):
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/MapTR/projects/mmdet3d_plugin/datasets/nuscenes_map_dataset.py", line 1238, in __getitem__
    return self.prepare_test_data(idx)
  File "/home/MapTR/projects/mmdet3d_plugin/datasets/nuscenes_map_dataset.py", line 1225, in prepare_test_data
    input_dict = self.get_data_info(index)
  File "/home/MapTR/projects/mmdet3d_plugin/datasets/nuscenes_map_dataset.py", line 1129, in get_data_info
    map_location = info['map_location'],
KeyError: 'map_location'

/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torch.distributed.run.
Note that --use_env is set by default in torch.distributed.run.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See 
https://pytorch.org/docs/stable/distributed.html#launch-utility for 
further instructions

  warnings.warn(
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 12733) of binary: /root/miniconda3/envs/maptr/bin/python3
Traceback (most recent call last):
  File "/root/miniconda3/envs/maptr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/envs/maptr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/distributed/run.py", line 689, in run
    elastic_launch(
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/root/miniconda3/envs/maptr/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
***************************************
         ./tools/test.py FAILED        
=======================================
Root Cause:
[0]:
  time: 2023-03-22_13:47:54
  rank: 0 (local_rank: 0)
  exitcode: 1 (pid: 12733)
  error_file: <N/A>
  msg: "Process failed with exitcode 1"
=======================================
Other Failures:
  <NO_OTHER_FAILURES>
***************************************

Map topology support

Hi,

Thanks for sharing such a great job!

I have read the paper and would like to ask: is the topology information of intersections included in the output?

Thanks again!

Failure in the training process

When I tried to reproduce this work, I failed to train the model.

Here are some details about it:
[screenshots of the environment, the GPU status, and the error message]

Hope you can help me fix this problem. Thanks a lot!

Inference speed on RTX 2080Ti Vs. RTX 3090

Hi, the work is awesome and thanks for sharing the source code.
I find it confusing that we get a much higher inference speed on an RTX 2080 Ti than the provided results on an RTX 3090. We ran benchmark.py with the provided checkpoints and got 14.2 FPS (11.2 FPS in the paper) with MapTR-tiny and 45 FPS (25.1 FPS in the paper) with MapTR-nano. Is there something we omitted? Or was the running speed in the paper measured without fp16 during inference?
Hope to hear from you soon.

A question about the polyline equivalent permutations

I've just read the paper on arXiv, and the results are really impressive. Here is a question I want to ask: the paper mentions that the direction of the divider between two opposite lanes is difficult to distinguish, so in your formulation all dividers (polylines) are treated as having two equivalent permutations. Will this change cause trouble when learning dividers between two lanes going in the same direction? Have you counted the ratio of dividers between opposing lanes versus dividers between same-direction lanes over the entire nuScenes dataset? Thanks!

Lower mAP

Hi,

When I evaluate with your provided maptr_tiny_r50_24e.pth, I get an obviously lower mAP than the one in your log. Is this reasonable, or did I set something up incorrectly?

-*-*-*-*-*-*-*-*-*-*use metric:chamfer-*-*-*-*-*-*-*-*-*-*
-*-*-*-*-*-*-*-*-*-*threshhold:0.5-*-*-*-*-*-*-*-*-*-*
cls:divider done in 10.783781s!!
cls:ped_crossing done in 2.050395s!!
cls:boundary done in 5.882559s!!

+--------------+-------+--------+--------+-------+
| class        | gts   | dets   | recall | ap    |
+--------------+-------+--------+--------+-------+
| divider      | 27332 | 125430 | 0.472  | 0.266 |
| ped_crossing | 6406  | 45757  | 0.233  | 0.088 |
| boundary     | 21050 | 129763 | 0.341  | 0.146 |
+--------------+-------+--------+--------+-------+
| mAP          |       |        |        | 0.167 |
+--------------+-------+--------+--------+-------+
-*-*-*-*-*-*-*-*-*-*threshhold:1.0-*-*-*-*-*-*-*-*-*-*
cls:divider done in 11.069366s!!
cls:ped_crossing done in 2.089410s!!
cls:boundary done in 5.985780s!!

+--------------+-------+--------+--------+-------+
| class        | gts   | dets   | recall | ap    |
+--------------+-------+--------+--------+-------+
| divider      | 27332 | 125430 | 0.741  | 0.543 |
| ped_crossing | 6406  | 45757  | 0.579  | 0.391 |
| boundary     | 21050 | 129763 | 0.684  | 0.503 |
+--------------+-------+--------+--------+-------+
| mAP          |       |        |        | 0.479 |
+--------------+-------+--------+--------+-------+
-*-*-*-*-*-*-*-*-*-*threshhold:1.5-*-*-*-*-*-*-*-*-*-*
cls:divider done in 10.957865s!!
cls:ped_crossing done in 2.032547s!!
cls:boundary done in 5.931236s!!

+--------------+-------+--------+--------+-------+
| class        | gts   | dets   | recall | ap    |
+--------------+-------+--------+--------+-------+
| divider      | 27332 | 125430 | 0.848  | 0.673 |
| ped_crossing | 6406  | 45757  | 0.778  | 0.609 |
| boundary     | 21050 | 129763 | 0.839  | 0.697 |
+--------------+-------+--------+--------+-------+
| mAP          |       |        |        | 0.660 |
+--------------+-------+--------+--------+-------+
divider: 0.49397581815719604
ped_crossing: 0.3623682012160619
boundary: 0.44848064581553143
map: 0.4349415550629298
{'NuscMap_chamfer/divider_AP': 0.49397581815719604, 
'NuscMap_chamfer/ped_crossing_AP': 0.3623682012160619, 
'NuscMap_chamfer/boundary_AP': 0.44848064581553143, 
'NuscMap_chamfer/mAP': 0.4349415550629298,
'NuscMap_chamfer/divider_AP_thr_0.5': 0.2660278081893921, 
'NuscMap_chamfer/divider_AP_thr_1.0': 0.5427717566490173, 
'NuscMap_chamfer/divider_AP_thr_1.5': 0.6731278896331787,
'NuscMap_chamfer/ped_crossing_AP_thr_0.5': 0.08782391250133514, 
'NuscMap_chamfer/ped_crossing_AP_thr_1.0': 0.3905932307243347, 
'NuscMap_chamfer/ped_crossing_AP_thr_1.5': 0.6086874604225159, 
'NuscMap_chamfer/boundary_AP_thr_0.5': 0.1457921862602234,  
'NuscMap_chamfer/boundary_AP_thr_1.0': 0.5028878450393677, 
'NuscMap_chamfer/boundary_AP_thr_1.5': 0.6967619061470032}

How to run inference with the provided weights?

Thanks for open-sourcing this excellent work!
I want to run inference with the trained weights you provide and reproduce a demo, i.e., using the scripts in path/to/MapTR/tools/maptr (vis_pred.py, etc.). How exactly should they be used (which command-line arguments, where to place the various files, etc.)?
Thanks!

Cannot find mmcv._ext

When I prepared the data, I got the following error. How can I fix it?

python tools/create_data.py nuscenes --root-path ./data --out-dir ./data --extra-tag nuscenes --version v1.0 --canbus ./data
Traceback (most recent call last):
  File "tools/create_data.py", line 6, in <module>
    from data_converter.create_gt_database import create_groundtruth_database
  File "/.../apps/MapTR/tools/data_converter/create_gt_database.py", line 6, in <module>
    from mmcv.ops import roi_align
  File ".../anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/ops/__init__.py", line 2, in <module>
    from .assign_score_withk import assign_score_withk
  File ".../anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/ops/assign_score_withk.py", line 5, in <module>
    ext_module = ext_loader.load_ext(
  File ".../anaconda3/envs/maptr/lib/python3.8/site-packages/mmcv/utils/ext_loader.py", line 13, in load_ext
    ext = importlib.import_module('mmcv.' + name)
  File ".../anaconda3/envs/maptr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named 'mmcv._ext'

Effect of perception range on performance

Hi, thanks for your great work!
Based on your repo, we did some exploration. We found that when increasing the perception range, the performance drops significantly. Did you make the same discovery? How can this phenomenon be explained, and how can its effects be mitigated?

Some detailed questions about the Work

Hi! First of all, thanks for your excellent work! It's a really impressive and ideal modeling method for HD map elements. I'm also trying to follow your work, and there are some questions about it. I would appreciate it if you could find time to answer:

  1. The way instance queries q{inst} and shared point queries q{pts} are fused into hierarchical query embeddings q{hie}.
    Here is my thought:
    (1) initialize the 2 query embeddings separately;
    (2) use a linear layer to reduce the q{pts} dims from 256 to 1;
    (3) merge the context and position of q{pts} into each q{inst};
    (4) only q{hie} is used in the decoder attention module.
    (A broadcast-sum reading of this fusion is sketched after this issue.)

  2. The initialization of q{inst} and q{pts}.
    If we just fuse q{hie} in the above way, how do we make sure the point-level information is really learned by q{pts}?

  3. A cost question in instance-level matching.
    The paper says the point2point cost is used as the position matching cost, which is similar to the point-level matching cost. But in the model forward pass, q{hie} doesn't match any order or permutation of the point sets. This means that when we use the Hungarian algorithm to find the optimal instance-level assignment, the unmatched order and permutation will lead to an erroneous assignment between predictions and GTs, which will cause underfitting problems.

  4. Questions about the evaluation.

  • As the paper mentions, the Chamfer distance D{chamfer} is used as the evaluation metric. But how are a predicted instance and a GT instance paired: do we need to calculate D{chamfer} between every prediction and every GT, then choose the lowest D{chamfer} to pair each instance?
  • Also, I think the Chamfer distance D{chamfer} is not an accurate way to evaluate point sets. A low distance cost may occur when only two points of each point set are close while the other points are far from their correspondences. What is your view on this situation?
  • Is the confidence of predicted instances used in evaluation to filter out low-confidence instances?

These are all my questions. Thanks for your great work!
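
On question 1: a broadcast-sum reading (consistent with the earlier issue on this page, which describes adding the shared point queries to each instance query) keeps every hierarchical query at the embedding dimension instead of multiplying dimensions. A minimal sketch of that interpretation, not this repo's verified code:

```python
import torch

embed_dim = 256
num_instances, num_points = 50, 20  # N instance-level, Nv shared point-level

q_inst = torch.nn.Parameter(torch.randn(num_instances, embed_dim))
q_pts = torch.nn.Parameter(torch.randn(num_points, embed_dim))

# Broadcast sum: every instance shares the same Nv point embeddings, giving
# N * Nv hierarchical queries, each still embed_dim-dimensional.
q_hie = q_inst[:, None, :] + q_pts[None, :, :]  # (N, Nv, C)
q_hie = q_hie.flatten(0, 1)                     # (N * Nv, C) for the decoder
print(q_hie.shape)                              # torch.Size([1000, 256])
```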

Hierarchical matching question

Hi Bencheng, thanks for your great work.
My question is about hierarchical matching.

In your paper, you first perform instance-level matching to find the corresponding lane lines, and then perform point-level matching to find the internal points of each lane line.
Is my understanding correct?

If so, I think there should be two Hungarian matching processes.
But I can't find the corresponding code. There seems to be only one matching process, which differs from what is described in the paper:

matched_row_inds, matched_col_inds = linear_sum_assignment(cost)

pts_cost, order_index = torch.min(pts_cost_ordered, 2)
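
The two quoted lines are consistent with a single-pass implementation: the point-level "matching" collapses to a minimum over the pre-enumerated equivalent permutations, and the resulting cost feeds one instance-level Hungarian assignment. A hedged sketch of that pattern with illustrative shapes, not the repo's exact code:

```python
import torch
from scipy.optimize import linear_sum_assignment

num_preds, num_gts, num_perms = 4, 3, 2  # illustrative sizes

# Point-level cost of every prediction against every permutation of every GT.
pts_cost_ordered = torch.rand(num_preds, num_gts, num_perms)

# Point-level step: just a min over the equivalent permutations...
pts_cost, order_index = torch.min(pts_cost_ordered, 2)  # (num_preds, num_gts)

# ...then the combined cost is solved by a single Hungarian assignment.
cls_cost = torch.rand(num_preds, num_gts)
cost = cls_cost + pts_cost
matched_row_inds, matched_col_inds = linear_sum_assignment(cost.numpy())
print(matched_row_inds, matched_col_inds)
```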

loss bbox and loss iou?

Hi Bencheng, awesome work! Congratulations!

I found that the loss_bbox and loss_iou weights have been set to 0 in the matching cost and in the loss calculation. Is there any improvement after adding them?

argoverse2 question

The lane segments in the Argoverse2 map are broken into several pieces. How do you deal with this problem?

About sharing .pkl files

Hi there,

Since I'm interested in model inference, would it be possible for you to share nuscenes_infos_temporal_{train,val}.pkl?

Thanks in advance,
Lewis

Validation set

Hi there. In your repo, the validation set is the same as the testing set. Is this the normal practice for nuScenes in the computer vision community?

Argoverse2 release

Hi @LegendBC, thanks for such a wonderful job!

When will the Argoverse2-related work be released (data converter, configs, models, training, etc.)?

GT labels for evaluation

Thanks for sharing the great work!

When you evaluate after inference, it seems that you use evenly sampled GT labels (i.e., with eval_use_same_gt_sample_num_flag set to True). Is this correct? If so, may I know whether setting eval_use_same_gt_sample_num_flag to False would affect the final AP score? I think it would be more appropriate to use the raw GT labels without sampling for evaluation, as VectorMapNet does.

Thanks for your consideration.
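
For readers following this evaluation thread, the metric under discussion is the Chamfer distance between a predicted and a GT point set. A generic sketch (one common symmetric convention; the repo's exact normalization may differ):

```python
import numpy as np

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 2) and (M, 2) point sets:
    the average of the mean nearest-neighbor distances in both directions."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0

pred = np.array([[0.0, 0.0], [1.0, 0.0]])
gt = np.array([[0.0, 0.1], [1.0, 0.1]])
print(chamfer_distance(pred, gt))  # ~0.1
```

This also illustrates the concern raised in an earlier issue: the metric only aggregates nearest-neighbor distances, so a few close points can mask a globally poor alignment.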
