
Parallel mAP_evaluation

This repo parallelizes mAP_evaluation using python's multiprocessing module.

As you may know, in Lyft's 3D object detection challenge the evaluation metric mAP is calculated as the mean of the mAPs at IoU thresholds 0.5, 0.55, 0.6, ..., 0.95 (see here). Looping over these 10 thresholds one by one is a time-consuming process. Here's how it looks when you do so:

Only one hyperthread is fully utilized; the rest are idle.

In this repo you can find a parallelized implementation of mAP evaluation (mAP_evaluation.py), which uses Python's built-in multiprocessing module to compute the APs for all 10 IoU thresholds simultaneously. Here's how it looks with the parallelized version:

The parallel implementation is ~10x faster than the for-loop implementation.
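The fan-out described above can be sketched with multiprocessing (a minimal sketch, not the repo's exact code: the real script spawns one worker per threshold and calls the SDK's get_average_precisions; average_precision below is a dummy stand-in for it):

```python
from functools import partial
from multiprocessing import Pool

def average_precision(gt, predictions, iou_threshold):
    # Dummy stand-in for lyft_dataset_sdk's get_average_precisions,
    # which scores predictions against ground truth at one IoU threshold.
    return max(0.0, 1.0 - iou_threshold)  # placeholder value

def evaluate_all(gt, predictions, thresholds):
    """Compute the AP at every threshold in parallel; return (mAP, per-IoU APs)."""
    worker = partial(average_precision, gt, predictions)
    with Pool(processes=len(thresholds)) as pool:
        aps = pool.map(worker, thresholds)
    return sum(aps) / len(aps), dict(zip(thresholds, aps))

if __name__ == "__main__":
    iou_thresholds = [0.5 + 0.05 * i for i in range(10)]
    mean_ap, per_iou = evaluate_all([], [], iou_thresholds)
```

Because each threshold's AP is independent of the others, the ten computations need no coordination, which is why the speedup is close to the number of cores used.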

Requirements

  • Lyft's dataset SDK (lyft_dataset_sdk) link
  • fire
  • numpy
  • pathlib (part of the standard library since Python 3.4)

Instructions

Like the official mAP_evaluation script, this script expects the predictions and ground truth to be in the following format:

pred_file: JSON file with predictions in the global frame, in the format of:

predictions = [{
    'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207fbb039a550991a5149214f98cec136ac',
    'translation': [971.8343488872263, 1713.6816097857359, -25.82534357061308],
    'size': [2.519726579986132, 7.810161372666739, 3.483438286096803],
    'rotation': [0.10913582721095375, 0.04099572636992043, 0.01927712319721745, 1.029328402625659],
    'name': 'car',
    'score': 0.3077029437237213
}]

gt_file: JSON file with ground truth annotations in the global frame, in the format of:

gt = [{
    'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207fbb039a550991a5149214f98cec136ac',
    'translation': [974.2811881299899, 1714.6815014457964, -23.689857123368846],
    'size': [1.796, 4.488, 1.664],
    'rotation': [0.14882026466054782, 0, 0, 0.9888642620837121],
    'name': 'car'
}]

output_dir: a directory to save the final results.
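For reference, a minimal pair of input files in this format can be generated like so (the token and numbers are placeholders for illustration, not real Lyft data):

```python
import json

# Predictions carry a confidence score; ground truth entries do not.
predictions = [{
    "sample_token": "abc123",                 # placeholder token
    "translation": [971.8, 1713.7, -25.8],    # box center, global frame (m)
    "size": [2.5, 7.8, 3.5],                  # width, length, height (m)
    "rotation": [0.109, 0.041, 0.019, 1.029], # quaternion
    "name": "car",
    "score": 0.31,
}]
gt = [{
    "sample_token": "abc123",
    "translation": [974.3, 1714.7, -23.7],
    "size": [1.8, 4.5, 1.7],
    "rotation": [0.149, 0, 0, 0.989],
    "name": "car",
}]

with open("pred_data.json", "w") as f:
    json.dump(predictions, f)
with open("gt_data.json", "w") as f:
    json.dump(gt, f)
```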

I've included a sample pred_file and gt_file in the tmp folder of this repository. Here's how you can run the evaluation script:

python mAP_evaluation.py --gt_file="tmp/gt_data.json" --pred_file="tmp/pred_data.json" --output_dir="tmp/"

After this command finishes, you'll find a metric_summary.json file in tmp containing the mAPs for all the IoU thresholds as well as the overall mAP.
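If you want to post-process the summary, recomputing the overall score is a one-liner. Note the key layout assumed here (one entry per IoU threshold plus an overall "mAP" key) is an assumption based on the description above, not a documented schema:

```python
import json

def overall_map(summary_path):
    # Average the per-IoU entries, skipping the overall "mAP" key
    # (assumed key layout; adjust to match your metric_summary.json).
    with open(summary_path) as f:
        summary = json.load(f)
    per_iou = [v for k, v in summary.items() if k != "mAP"]
    return sum(per_iou) / len(per_iou)
```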

Enjoy!


Issues

about the formula

Hi, thanks for your great work, but I do have a doubt here:
The evaluation page says:
(image: mAP formula)
So it's not just running the same script with different thresholds over and over again. Do you have any ideas? Thanks! Maybe I've misunderstood your code.

AssertionError

I am trying to evaluate my prediction file, and mAP_evaluation.py gives the following error:
Starting mAP computation
/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py:312: RuntimeWarning: invalid value encountered in true_divide
recalls = tp / float(num_gts)
/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py:314: RuntimeWarning: invalid value encountered in greater_equal
assert np.all(0 <= recalls) & np.all(recalls <= 1)
/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py:314: RuntimeWarning: invalid value encountered in less_equal
assert np.all(0 <= recalls) & np.all(recalls <= 1)
Process Process-10:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "mAP_evaluation.py", line 44, in save_AP
AP = get_average_precisions(gt, predictions, class_names, iou_threshold)
File "/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py", line 369, in get_average_precisions
gt_by_class_name[class_name], pred_by_class_name[class_name], iou_threshold
File "/usr/local/lib/python3.6/dist-packages/lyft_dataset_sdk/eval/detection/mAP_evaluation.py", line 314, in recall_precision
assert np.all(0 <= recalls) & np.all(recalls <= 1)
AssertionError
[The same RuntimeWarnings and AssertionError traceback repeat for Process-1 through Process-9.]
Traceback (most recent call last):
File "mAP_evaluation.py", line 126, in
fire.Fire(main)
File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 471, in _Fire
target=component.name)
File "/usr/local/lib/python3.6/dist-packages/fire/core.py", line 675, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "mAP_evaluation.py", line 115, in main
metric, overall_AP = get_metric_overall_AP(iou_th_range, output_dir, class_names)
File "mAP_evaluation.py", line 66, in get_metric_overall_AP
with open(str(summary_path), 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'tmp/metric_summary_0.5.json'
I could not figure out why this is happening. Can you help me out?
