
py-motmetrics's Introduction


py-motmetrics

The py-motmetrics library provides a Python implementation of metrics for benchmarking multiple object trackers (MOT).

While benchmarking single object trackers is rather straightforward, measuring the performance of multiple object trackers needs careful design as multiple correspondence constellations can arise (see image below). A variety of methods have been proposed in the past and while there is no general agreement on a single method, the methods of [1,2,3,4] have received considerable attention in recent years. py-motmetrics implements these metrics.


Pictures courtesy of Bernardin, Keni, and Rainer Stiefelhagen [1]

In particular py-motmetrics supports CLEAR-MOT[1,2] metrics and ID[4] metrics. Both metrics attempt to find a minimum cost assignment between ground truth objects and predictions. However, while CLEAR-MOT solves the assignment problem on a local per-frame basis, ID-MEASURE solves the bipartite graph matching by finding the minimum cost of objects and predictions over all frames. This blog-post by Ergys illustrates the differences in more detail.

Features at a glance

  • Variety of metrics
    Provides MOTA, MOTP, track quality measures, global ID measures and more. The results are comparable with the popular MOTChallenge benchmarks (*1).
  • Distance agnostic
    Supports Euclidean, Intersection over Union and other distance measures.
  • Complete event history
    Tracks all relevant per-frame events such as correspondences, misses, false alarms and switches.
  • Flexible solver backend
    Support for switching minimum assignment cost solvers. Supports scipy, ortools, munkres out of the box. Auto-tunes solver selection based on availability and problem size.
  • Easy to extend
    Events and summaries use pandas data structures for storage and analysis. New metrics can reuse values already computed by the metrics they depend on (see the sketch after this list).
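To illustrate the last point, below is a minimal sketch of registering a custom metric. It assumes that the metrics host created by mm.metrics.create exposes a register method which passes the event data as the first argument and the listed dependency values afterwards; the metric name switch_ratio is hypothetical.

import motmetrics as mm

mh = mm.metrics.create()

# Hypothetical metric: ratio of identity switches to matches,
# reusing the already computed num_switches and num_matches values.
def switch_ratio(df, num_switches, num_matches):
    return num_switches / max(num_matches, 1)

mh.register(switch_ratio, deps=['num_switches', 'num_matches'],
            formatter='{:.2f}'.format)

# The new metric can then be requested like any built-in one, e.g.
# mh.compute(acc, metrics=['switch_ratio'], name='acc')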

Metrics

py-motmetrics implements the following metrics. The metrics have been aligned with what is reported by MOTChallenge benchmarks.

import motmetrics as mm
# List all default metrics
mh = mm.metrics.create()
print(mh.list_metrics_markdown())
Name Description
num_frames Total number of frames.
num_matches Total number of matches.
num_switches Total number of track switches.
num_false_positives Total number of false positives (false-alarms).
num_misses Total number of misses.
num_detections Total number of detected objects including matches and switches.
num_objects Total number of unique object appearances over all frames.
num_predictions Total number of unique prediction appearances over all frames.
num_unique_objects Total number of unique object ids encountered.
mostly_tracked Number of objects tracked for at least 80 percent of lifespan.
partially_tracked Number of objects tracked between 20 and 80 percent of lifespan.
mostly_lost Number of objects tracked less than 20 percent of lifespan.
num_fragmentations Total number of switches from tracked to not tracked.
motp Multiple object tracker precision.
mota Multiple object tracker accuracy.
precision Number of detected objects over sum of detected and false positives.
recall Number of detections over number of objects.
idfp ID measures: Number of false positive matches after global min-cost matching.
idfn ID measures: Number of false negative matches after global min-cost matching.
idtp ID measures: Number of true positive matches after global min-cost matching.
idp ID measures: global min-cost precision.
idr ID measures: global min-cost recall.
idf1 ID measures: global min-cost F1 score.
obj_frequencies pd.Series Total number of occurrences of individual objects over all frames.
pred_frequencies pd.Series Total number of occurrences of individual predictions over all frames.
track_ratios pd.Series Ratio of assigned to total appearance count per unique object id.
id_global_assignment dict ID measures: Global min-cost assignment for ID measures.

MOTChallenge compatibility

py-motmetrics produces results compatible with popular MOTChallenge benchmarks (*1). Below are two results taken from the MOTChallenge Matlab devkit, corresponding to the results of the CEM tracker on the training set of the 2D MOT 2015 benchmark.


TUD-Campus
 IDF1  IDP  IDR| Rcll  Prcn   FAR| GT  MT  PT  ML|   FP    FN  IDs   FM| MOTA  MOTP MOTAL
 55.8 73.0 45.1| 58.2  94.1  0.18|  8   1   6   1|   13   150    7    7| 52.6  72.3  54.3

TUD-Stadtmitte
 IDF1  IDP  IDR| Rcll  Prcn   FAR| GT  MT  PT  ML|   FP    FN  IDs   FM| MOTA  MOTP MOTAL
 64.5 82.0 53.1| 60.9  94.0  0.25| 10   5   4   1|   45   452    7    6| 56.4  65.4  56.9

In comparison to py-motmetrics

                IDF1   IDP   IDR  Rcll  Prcn GT MT PT ML FP  FN IDs  FM  MOTA  MOTP
TUD-Campus     55.8% 73.0% 45.1% 58.2% 94.1%  8  1  6  1 13 150   7   7 52.6% 0.277
TUD-Stadtmitte 64.5% 82.0% 53.1% 60.9% 94.0% 10  5  4  1 45 452   7   6 56.4% 0.346

(*1) Besides naming conventions, the only obvious differences are

  • Metric FAR is missing. This metric is given implicitly and can be recovered by FalsePos / Frames * 100.
  • Metric MOTP seems to be off. To convert, compute (1 - MOTP) * 100. MOTChallenge benchmarks compute MOTP as a percentage, while py-motmetrics sticks to the original definition of average distance over number of assigned objects [1]. Both conversions are sketched below.
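For example, given a summary data frame produced by mh.compute (as in the usage examples further below), the two conversions might be computed as follows; the column names follow the metric names listed above.

far = summary['num_false_positives'] / summary['num_frames'] * 100
motp_percent = (1. - summary['motp']) * 100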

You can compare tracker results to ground truth in MOTChallenge format by

python -m motmetrics.apps.eval_motchallenge --help
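For example, assuming data/train/ holds your ground truth sequences and data/test/ holds your tracker results (directory names are placeholders; see --help for the expected layout), an invocation could look like

python -m motmetrics.apps.eval_motchallenge data/train/ data/test/ --loglevel debug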

For MOT16/17, you can run

python -m motmetrics.apps.evaluateTracking --help

Installation

To install the latest development version of py-motmetrics (usually a bit more recent than the PyPI release below)

pip install git+https://github.com/cheind/py-motmetrics.git

Install via PyPI

To install py-motmetrics use pip

pip install motmetrics

Python 3.5/3.6/3.9 and numpy, pandas and scipy are required. If no binary packages are available for your platform and building source packages fails, you might want to try a distribution like Conda (see below) to install the dependencies.

Alternatively, for development, clone or fork this repository and install it in editable mode.

pip install -e <path/to/setup.py>

Install via Conda

In case you are using Conda, a simple way to run py-motmetrics is to create a virtual environment with all the necessary dependencies

conda env create -f environment.yml
> activate motmetrics-env

Then activate / source motmetrics-env, install py-motmetrics, and run the tests.

activate motmetrics-env
pip install .
pytest

In case you already have an environment, you can install the dependencies from within it by

conda install --file requirements.txt
pip install .
pytest

Usage

Populating the accumulator

import motmetrics as mm
import numpy as np

# Create an accumulator that will be updated during each frame
acc = mm.MOTAccumulator(auto_id=True)

# Call update once per frame. For now, assume distances between
# frame objects / hypotheses are given.
acc.update(
    [1, 2],                     # Ground truth objects in this frame
    [1, 2, 3],                  # Detector hypotheses in this frame
    [
        [0.1, np.nan, 0.3],     # Distances from object 1 to hypotheses 1, 2, 3
        [0.5,  0.2,   0.3]      # Distances from object 2 to hypotheses 1, 2, 3
    ]
)

The code above updates an event accumulator with data from a single frame. Here we assume that pairwise object / hypothesis distances have already been computed. Note the np.nan inside the distance matrix. It signals that object 1 cannot be paired with hypothesis 2. To inspect the current event history, simply print the events associated with the accumulator.

print(acc.events) # a pandas DataFrame containing all events

"""
                Type  OId HId    D
FrameId Event
0       0        RAW    1   1  0.1
        1        RAW    1   2  NaN
        2        RAW    1   3  0.3
        3        RAW    2   1  0.5
        4        RAW    2   2  0.2
        5        RAW    2   3  0.3
        6      MATCH    1   1  0.1
        7      MATCH    2   2  0.2
        8         FP  NaN   3  NaN
"""

The above data frame contains RAW and MOT events. To obtain just the MOT events, type

print(acc.mot_events) # a pandas DataFrame containing MOT only events

"""
                Type  OId HId    D
FrameId Event
0       6      MATCH    1   1  0.1
        7      MATCH    2   2  0.2
        8         FP  NaN   3  NaN
"""

Meaning, object 1 was matched to hypothesis 1 with distance 0.1. Similarly, object 2 was matched to hypothesis 2 with distance 0.2. Hypothesis 3 could not be matched to any remaining object and generated a false positive (FP). Possible assignments are computed by minimizing the total assignment distance (Kuhn-Munkres algorithm).
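For illustration only, the same per-frame assignment can be reproduced directly with SciPy's Hungarian solver. Note this is not how py-motmetrics handles forbidden pairings internally; here the np.nan entries are simply replaced by a large cost.

import numpy as np
from scipy.optimize import linear_sum_assignment

dists = np.array([[0.1, np.nan, 0.3],
                  [0.5, 0.2,    0.3]])
costs = np.where(np.isnan(dists), 1e9, dists)  # forbid nan pairings via a large cost
rows, cols = linear_sum_assignment(costs)
print(list(zip(rows, cols)))  # pairs (0, 0) and (1, 1): object 1 <-> hyp 1, object 2 <-> hyp 2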

Continuing from above

frameid = acc.update(
    [1, 2],
    [1],
    [
        [0.2],
        [0.4]
    ]
)
print(acc.mot_events.loc[frameid])

"""
        Type OId  HId    D
Event
2      MATCH   1    1  0.2
3       MISS   2  NaN  NaN
"""

While object 1 was matched, object 2 could not be matched because no hypothesis was left to pair it with.

frameid = acc.update(
    [1, 2],
    [1, 3],
    [
        [0.6, 0.2],
        [0.1, 0.6]
    ]
)
print(acc.mot_events.loc[frameid])

"""
         Type OId HId    D
Event
4       MATCH   1   1  0.6
5      SWITCH   2   3  0.6
"""

Object 2 is now tracked by hypothesis 3, leading to a track switch. Note that although a pairing (1, 3) with cost less than 0.6 is possible, the algorithm prefers to continue track assignments from past frames, which is a property of MOT metrics.

Computing metrics

Once the accumulator has been populated you can compute and display metrics. Continuing the example from above

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['num_frames', 'mota', 'motp'], name='acc')
print(summary)

"""
     num_frames  mota  motp
acc           3   0.5  0.34
"""

Computing metrics for multiple accumulators or accumulator views is also possible

summary = mh.compute_many(
    [acc, acc.events.loc[0:1]],
    metrics=['num_frames', 'mota', 'motp'],
    names=['full', 'part'])
print(summary)

"""
      num_frames  mota      motp
full           3   0.5  0.340000
part           2   0.5  0.166667
"""

Finally, you may want to reformat column names and how column values are displayed.

strsummary = mm.io.render_summary(
    summary,
    formatters={'mota' : '{:.2%}'.format},
    namemap={'mota': 'MOTA', 'motp' : 'MOTP'}
)
print(strsummary)

"""
      num_frames   MOTA      MOTP
full           3 50.00%  0.340000
part           2 50.00%  0.166667
"""

For MOTChallenge, py-motmetrics provides predefined metric selectors, formatters and metric names, so that the result looks like what is provided via their Matlab devkit.

summary = mh.compute_many(
    [acc, acc.events.loc[0:1]],
    metrics=mm.metrics.motchallenge_metrics,
    names=['full', 'part'])

strsummary = mm.io.render_summary(
    summary,
    formatters=mh.formatters,
    namemap=mm.io.motchallenge_metric_names
)
print(strsummary)

"""
      IDF1   IDP   IDR  Rcll  Prcn GT MT PT ML FP FN IDs  FM  MOTA  MOTP
full 83.3% 83.3% 83.3% 83.3% 83.3%  2  1  1  0  1  1   1   1 50.0% 0.340
part 75.0% 75.0% 75.0% 75.0% 75.0%  2  1  1  0  1  1   0   0 50.0% 0.167
"""

In order to generate an overall summary that computes the metrics jointly over all accumulators, add generate_overall=True as follows

summary = mh.compute_many(
    [acc, acc.events.loc[0:1]],
    metrics=mm.metrics.motchallenge_metrics,
    names=['full', 'part'],
    generate_overall=True
    )

strsummary = mm.io.render_summary(
    summary,
    formatters=mh.formatters,
    namemap=mm.io.motchallenge_metric_names
)
print(strsummary)

"""
         IDF1   IDP   IDR  Rcll  Prcn GT MT PT ML FP FN IDs  FM  MOTA  MOTP
full    83.3% 83.3% 83.3% 83.3% 83.3%  2  1  1  0  1  1   1   1 50.0% 0.340
part    75.0% 75.0% 75.0% 75.0% 75.0%  2  1  1  0  1  1   0   0 50.0% 0.167
OVERALL 80.0% 80.0% 80.0% 80.0% 80.0%  4  2  2  0  2  2   1   1 50.0% 0.275
"""

Computing distances

Up until this point we assumed the pairwise object/hypothesis distances to be known. Usually this is not the case. You are mostly given either rectangles or points (centroids) of related objects. To compute a distance matrix from them you can use the motmetrics.distances module as shown below.

Euclidean norm squared on points

# Object related points
o = np.array([
    [1., 2],
    [2., 2],
    [3., 2],
])

# Hypothesis related points
h = np.array([
    [0., 0],
    [1., 1],
])

C = mm.distances.norm2squared_matrix(o, h, max_d2=5.)

"""
[[  5.   1.]
 [ nan   2.]
 [ nan   5.]]
"""

Intersection over union norm for 2D rectangles

a = np.array([
    [0, 0, 1, 2],    # Format X, Y, Width, Height
    [0, 0, 0.8, 1.5],
])

b = np.array([
    [0, 0, 1, 2],
    [0, 0, 1, 1],
    [0.1, 0.2, 2, 2],
])
mm.distances.iou_matrix(a, b, max_iou=0.5)

"""
[[ 0.          0.5                nan]
 [ 0.4         0.42857143         nan]]
"""

Solver backends

For large datasets, solving the minimum cost assignment becomes the dominant part of the runtime. py-motmetrics therefore supports several solvers out of the box (e.g. scipy, ortools, munkres; see the feature list above).

A runtime comparison for differently sized cost matrices is available in the repository (the benchmark plot is not reproduced here).

In that plot the x-axis is scaled logarithmically; missing bars indicate excessive runtime or errors in the returned result.

By default py-motmetrics will try to find an available LAP solver in a fixed preference order. In order to temporarily replace the default solver use

costs = ...
mysolver = lambda x: ... # solver code that returns pairings

with lap.set_default_solver(mysolver):
    ...
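As a further sketch, assuming set_default_solver also accepts one of the names reported by motmetrics.lap.available_solvers, a specific backend could be forced for a block of computations:

from motmetrics import lap

print(lap.available_solvers)  # e.g. ['scipy'], depending on installed packages

with lap.set_default_solver('scipy'):
    summary = mh.compute(acc, metrics=['mota'], name='acc')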

For custom datasets

Use this section as a guide for calculating MOT metrics for your custom dataset.

Before you begin, make sure you have your ground truth and tracker output data in the form of text files. The code below assumes the MOT16 format for both the ground truth and the tracker output. The data is arranged in the following sequence:

<frame number>, <object id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <confidence>, <x>, <y>, <z>

A sample ground truth/tracker output file is shown below. If you are using a custom dataset, it is highly likely that you will have to create your own ground truth file. If you already have a MOT16 format ground truth file, you can use it directly; otherwise, you will need a MOT16 annotator tool to create the annotations (ground truth). You can use any tool to create your ground truth data, just make sure it follows the MOT16 format.

If you can't find a tool to create your ground truth files, you can use this free MOT16 annotator tool to create ground truth for your dataset which can then be used in conjunction with your tracker output to generate the MOT metrics.

1,1,763.00,272.00,189.00,38.00,1,-1,-1,-1
1,2,412.00,265.00,153.00,30.00,1,-1,-1,-1
2,1,762.00,269.00,185.00,41.00,1,-1,-1,-1
2,2,413.00,267.00,151.00,26.00,1,-1,-1,-1
3,1,760.00,272.00,186.00,38.00,1,-1,-1,-1

You can read more about MOT16 format here.

The following function loads the ground truth and tracker output files, processes them and produces a set of metrics.

def motMetricsEnhancedCalculator(gtSource, tSource):
  # import required packages
  import motmetrics as mm
  import numpy as np
  
  # load ground truth
  gt = np.loadtxt(gtSource, delimiter=',')

  # load tracking output
  t = np.loadtxt(tSource, delimiter=',')

  # Create an accumulator that will be updated during each frame
  acc = mm.MOTAccumulator(auto_id=True)

  # Max frame number may be different for gt and t files
  for frame in range(int(gt[:,0].max())):
    frame += 1 # detection and frame numbers begin at 1

    # select id, x, y, width, height for current frame
    # required format for distance calculation is X, Y, Width, Height \
    # We already have this format
    gt_dets = gt[gt[:,0]==frame,1:6] # select all detections in gt
    t_dets = t[t[:,0]==frame,1:6] # select all detections in t

    C = mm.distances.iou_matrix(gt_dets[:,1:], t_dets[:,1:], \
                                max_iou=0.5) # format: gt, t

    # Call update once per frame.
    # format: gt object ids, t object ids, distance
    acc.update(gt_dets[:,0].astype('int').tolist(), \
              t_dets[:,0].astype('int').tolist(), C)

  mh = mm.metrics.create()

  summary = mh.compute(acc, metrics=['num_frames', 'idf1', 'idp', 'idr', \
                                     'recall', 'precision', 'num_objects', \
                                     'mostly_tracked', 'partially_tracked', \
                                     'mostly_lost', 'num_false_positives', \
                                     'num_misses', 'num_switches', \
                                     'num_fragmentations', 'mota', 'motp' \
                                    ], \
                      name='acc')

  strsummary = mm.io.render_summary(
      summary,
      #formatters={'mota' : '{:.2%}'.format},
      namemap={'idf1': 'IDF1', 'idp': 'IDP', 'idr': 'IDR', 'recall': 'Rcll', \
               'precision': 'Prcn', 'num_objects': 'GT', \
               'mostly_tracked' : 'MT', 'partially_tracked': 'PT', \
               'mostly_lost' : 'ML', 'num_false_positives': 'FP', \
               'num_misses': 'FN', 'num_switches' : 'IDsw', \
               'num_fragmentations' : 'FM', 'mota': 'MOTA', 'motp' : 'MOTP',  \
              }
  )
  print(strsummary)

Run the function by pointing it to the ground truth and tracker output files. A sample output is shown below.

# Calculate the MOT metrics
motMetricsEnhancedCalculator('gt/groundtruth.txt', \
                             'to/trackeroutput.txt')
"""
     num_frames  IDF1       IDP       IDR      Rcll      Prcn   GT  MT  PT  ML  FP  FN  IDsw  FM      MOTA      MOTP
acc         150  0.75  0.857143  0.666667  0.743295  0.955665  261   0   2   0   9  67     1  12  0.704981  0.244387
"""

Running tests

py-motmetrics uses the pytest framework. To run the tests, simply cd into the source directory and run pytest.
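For example, starting from a fresh clone:

git clone https://github.com/cheind/py-motmetrics.git
cd py-motmetrics
pip install -e .
pytest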

References

  1. Bernardin, Keni, and Rainer Stiefelhagen. "Evaluating multiple object tracking performance: the CLEAR MOT metrics." EURASIP Journal on Image and Video Processing 2008.1 (2008): 1-10.
  2. Milan, Anton, et al. "MOT16: A benchmark for multi-object tracking." arXiv preprint arXiv:1603.00831 (2016).
  3. Li, Yuan, Chang Huang, and Ram Nevatia. "Learning to associate: Hybridboosted multi-target tracker for crowded scene." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.
  4. Ristani, E., F. Solera, R. S. Zou, R. Cucchiara, and C. Tomasi. "Performance measures and a data set for multi-target, multi-camera tracking." ECCV 2016 Workshop on Benchmarking Multi-Target Tracking.

Docker

Update ground truth and test data:

The /data/train directory should contain MOT 2D 2015 ground truth files. The /data/test directory should contain your results.

You can check usage and directory listing at https://github.com/cheind/py-motmetrics/blob/master/motmetrics/apps/eval_motchallenge.py

Build Image

docker build -t desired-image-name -f Dockerfile .

Run Image

docker run desired-image-name

(credits to christosavg)

License

MIT License

Copyright (c) 2017-2022 Christoph Heindl
Copyright (c) 2018 Toka
Copyright (c) 2019-2022 Jack Valmadre

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

py-motmetrics's People

Contributors

alexlitz, angelcarro, ardeshir-shon, borda, cheind, christosavg, cinabars, hakanardo, hanzhi713, helicopt, jvlmdr, khalidw, lihi-gur-arie, marthateye, michael-hoss, muaz-urwa, shensheng27, smidm


py-motmetrics's Issues

Why is my MOTP greater than 1?

         IDF1   IDP   IDR  Rcll  Prcn GT MT PT ML  FP FN IDs  FM  MOTA   MOTP
Overall 67.5% 63.8% 71.7% 91.2% 81.1%  4  4  0  0 198 82  11  25 68.8% 19.907

These are the results that I'm getting on my custom dataset. I expect the MOTP to be in the range 0-1, however as you see it is 19.907. What could be the reason for this? I'm computing my cost matrix as follows.

cost_matrix = mot.distances.norm2squared_matrix(ground_truth, detections, max_d2=max_distance)
acc.update(gt_labels, tracks.keys(), cost_matrix)

I am working on a 2D RADAR tracker, not bounding boxes on images. Could it be an issue with IoU assumptions made in computing the cost matrix?

[Bug?] Inconsistent result reproduction

On running the metric evaluator for GT vs GT, the metric table has some odd/off entries.

Data

  • Ground truth:
1,1,912,484,97,109,0,7,1
2,1,912,484,97,109,0,7,1
3,1,912,484,97,109,0,7,1
4,1,912,484,97,109,0,7,1
5,1,912,484,97,109,0,7,1
6,1,912,484,97,109,0,7,1
7,1,912,484,97,109,0,7,1
8,1,912,484,97,109,0,7,1
9,1,912,484,97,109,0,7,1
10,1,912,484,97,109,0,7,1
11,1,912,484,97,109,0,7,1
12,1,912,484,97,109,0,7,1
13,1,912,484,97,109,0,7,1
14,1,912,484,97,109,0,7,1
15,1,912,484,97,109,0,7,1
16,1,912,484,97,109,0,7,1
...
  • "Test"file:
1,1,912,484,97,109,0,7,1
2,1,912,484,97,109,0,7,1
3,1,912,484,97,109,0,7,1
4,1,912,484,97,109,0,7,1
5,1,912,484,97,109,0,7,1
6,1,912,484,97,109,0,7,1
7,1,912,484,97,109,0,7,1
8,1,912,484,97,109,0,7,1
9,1,912,484,97,109,0,7,1
10,1,912,484,97,109,0,7,1
11,1,912,484,97,109,0,7,1
12,1,912,484,97,109,0,7,1
13,1,912,484,97,109,0,7,1
14,1,912,484,97,109,0,7,1
15,1,912,484,97,109,0,7,1
16,1,912,484,97,109,0,7,1
...

Results

  • Traceback:
(mot-eval) <removed>@ubuntu:~/mot16-evaluation/py-motmetrics$ python3 -m motmetrics.apps.eval_motchallenge data/train/ data/test/ --loglevel debug --solver lapsolver --fmt mot16
11:35:31 INFO - Found 7 groundtruths and 7 test 
11:35:31 INFO - Available LAP solvers ['lapsolver', '
11:35:31 INFO - Default LAP solver '
11:35:31 INFO - Loading 
11:35:35 INFO - Comparing MOT16-02...
11:35:50 INFO - Comparing MOT16-09...
11:35:53 INFO - Comparing MOT16-05...
11:35:57 INFO - Comparing MOT16-04...
11:37:12 INFO - Comparing MOT16-10...
11:37:20 INFO - Comparing MOT16-11...
11:37:25 INFO - Comparing MOT16-13...
11:37:32 INFO - Running metrics
/home/intern23/mot16-evaluation/py-motmetrics/motmetrics/mot.py:243: FutureWarning: the 'labels' keyword is deprecated, use 'codes' instead          idx = pd.MultiIndex(levels=[[],[]], labels=[[],[]], names=['FrameId','Event'])                                                                    
  • Table:
          IDF1   IDP    IDR   Rcll  Prcn  GT  MT PT ML    FP FN IDs  FM   MOTA  MOTP
MOT16-02 75.8% 61.1% 100.0% 100.0% 61.1%  54  54  0  0 11360  0   0   0  36.3% 0.000
MOT16-09 74.6% 59.5% 100.0% 100.0% 59.5%  25  25  0  0  3573  0   0   0  32.0% 0.000 
MOT16-05 94.1% 88.9% 100.0% 100.0% 88.9% 125 125  0  0   853  0   0   0  87.5% 0.000 
MOT16-04 61.1% 44.0% 100.0% 100.0% 44.0%  83  83  0  0 60448  0   0   0 -27.1% 0.000 
MOT16-10 84.2% 72.8% 100.0% 100.0% 72.8%  54  54  0  0  4611  0   0   0  62.6% 0.000 
MOT16-11 95.3% 91.0% 100.0% 100.0% 91.0%  69  69  0  0   902  0   0   0  90.2% 0.000
MOT16-13 74.6% 59.4% 100.0% 100.0% 59.4% 107 107  0  0  7813  0   0   0  31.8% 0.000
OVERALL  71.1% 55.2% 100.0% 100.0% 55.2% 517 517  0  0 89560  0   0   0  18.9% 0.000
  • What are the FPs calculated on the basis of?
  • Why is MOTA negative?
  • Is MOTP supposed to be 0?

(possible duplicate of #43)

nan values still present

Hi and thank you for your work.
I've been facing 'the label [nan] is not in the [index]' when executing the example.py script.

The error is also present when trying to test my data on the ground truth, using
python -m motmetrics.apps.eval_motchallenge

It seems to occur at:
File "/usr/local/lib/python3.5/dist-packages/motmetrics/metrics.py", line 335, in id_global_assignment
df_o = df.loc[o, 'D'].dropna()

Is this a possible pandas bug not removing NaN values? Perhaps I'm doing something wrong.
Thanks a lot.

Edit: After scrolling through the other issues again, I found EchoTheory's answer. Thankfully I'm building in a Docker environment, so I'd strongly suggest you do the same (or use a virtual env).

Maybe a Bug

@cheind Hi, cheind. I tried to import motmetrics after I installed it, but got an error: AttributeError: module 'importlib' has no attribute 'util'. My environment is Python 3.5.2. It may be a bug. I fixed it by changing import importlib to import importlib plus from importlib import util in lap.py line 148, and using available_solvers = [s[0] for s in solvers if util.find_spec(s[0]) is not None] in line 162.

FN and FP numbers are way off

Hi @cheind , great work with the repo and all the help.
When I try this on my own tracking algorithm and my own dataset (in MOT Challenge format), I get quite weird values.

         IDF1  IDP  IDR Rcll Prcn GT MT PT ML  FP IDs FM   MOTA   MOTP num_objects num_predictions  FN
test     3.6% 3.6% 3.6% 5.4% 5.5%  3  0  0  3 467   1  2 -88.7% 61.671         497             494 470
OVERALL  3.6% 3.6% 3.6% 5.4% 5.5%  3  0  0  3 467   1  2 -88.7% 61.671         497             494 470

Sorry for my formatting skills, but the problem is that the MOTA is negative because FP (467) + FN (470) is greater than the total number of objects (497), which should not be possible. I think the scores are computed after graph matching, so it is not necessary that the tracker assigns exactly the same id as the gt, is it? Also, are there any other factors I need to take care of in my gt and ts txt files?
Thank you.
My gt and ts files are in here

Problem in eval_motchallenge.py

I prepared test environment using MOT17-04-FRCNN\gt\gt.txt.

gt_04\1\gt\gt.txt
test_04\1.txt

"1.txt" file is the same with "gt.txt".

I executed eval.motchallenge.py, and expected a perfect score becase two file is the same. But the result is a little strange, specially for MOTA.


python eval_motchallenge.py gt_04 test_04
04:26:35 INFO - Found 1 groundtruths and 1 test files.
04:26:35 INFO - Available LAP solvers ['scipy']
04:26:35 INFO - Default LAP solver 'scipy'
04:26:35 INFO - Loading files.
04:26:38 INFO - Comparing 1...
04:27:48 INFO - Running metrics
         IDF1   IDP    IDR   Rcll  Prcn GT MT PT ML    FP FN IDs FM   MOTA  MOTP
1       61.1% 44.0% 100.0% 100.0% 44.0% 83 83  0  0 60448  0   0  0 -27.1% 0.000
OVERALL 61.1% 44.0% 100.0% 100.0% 44.0% 83 83  0  0 60448  0   0  0 -27.1% 0.000
04:31:50 INFO - Completed

What is the problem?

False positives are over-reported

The motmetrics module seems to be reporting false positive tracks in situations when none exist. I confirmed this by running eval_motchallenge with the 2DMOT2015 ground truth files copied and used as the test results. The metric results are below - most are perfect, as would be expected, but FP can get quite high, and this affects downstream metrics like IDF1 and MOTA.

                 IDF1    IDP    IDR   Rcll   Prcn  GT  MT PT ML   FP FN IDs  FM   MOTA   MOTP
KITTI-13        90.1%  82.0% 100.0% 100.0%  82.0%  42  42  0  0  167  0   0   0  78.1% -0.000
ADL-Rundle-8   100.0% 100.0% 100.0% 100.0% 100.0%  28  28  0  0    0  0   0   0 100.0%  0.000
Venice-2       100.0% 100.0% 100.0% 100.0% 100.0%  26  26  0  0    0  0   0   0 100.0%  0.000
TUD-Campus     100.0% 100.0% 100.0% 100.0% 100.0%   8   8  0  0    0  0   0   0 100.0%  0.000
KITTI-17        93.2%  87.3% 100.0% 100.0%  87.3%   9   9  0  0   99  0   0   0  85.5% -0.000
ETH-Bahnhof     82.8%  70.6% 100.0% 100.0%  70.6% 171 171  0  0 2255  0   0   0  58.4%  0.000
PETS09-S2L1     98.1%  96.3% 100.0% 100.0%  96.3%  19  19  0  0  174  0   0   0  96.1%  0.000
TUD-Stadtmitte 100.0% 100.0% 100.0% 100.0% 100.0%  10  10  0  0    0  0   0   0 100.0%  0.000
ADL-Rundle-6   100.0% 100.0% 100.0% 100.0% 100.0%  24  24  0  0    0  0   0   0 100.0%  0.000
ETH-Sunnyday    98.9%  97.7% 100.0% 100.0%  97.7%  30  30  0  0   43  0   0   0  97.7%  0.000
ETH-Pedcross2   96.2%  92.7% 100.0% 100.0%  92.7% 133 133  0  0  495  0   0   0  92.1%  0.000
OVERALL         96.1%  92.5% 100.0% 100.0%  92.5% 500 500  0  0 3233  0   0   0  91.9%  0.000

I will look into this a bit and open a PR if I find the cause.

Aggregated Metrics

Hello! Thank you for your huge work! How can you compute the aggregated metrics on the whole dataset? It is available in the Matlab eval kit. Thanks very much!

NaN MOTP and MOTA

Hi, my name is Olivier. I tried to run py-motmetrics with Spyder and I ran into an issue.

I don't know why, but when I run py-motmetrics this way

mh= mm.metrics.create()

GroundTruth = mm.io.loadtxt("gt.txt")

Test= mm.io.loadtxt("Test.txt")
acc = mm.utils.compare_to_groundtruth(GroundTruth, Test)
summary = mh.compute( acc, metrics=[ "num_frames", "mostly_tracked", "partially_tracked", "mostly_lost", "num_fragmentations", "mota", "motp", "num_detections", 'num_matches', 'num_switches', ], name="acc", )
strsummary = mm.io.render_summary( summary, formatters=mh.formatters, namemap=mm.io.motchallenge_metric_names )
print(strsummary)

I got this message
num_frames MT PT ML FM MOTA MOTP num_detections num_matches IDs
acc 101 1 0 1 0 0.0% nan 0 0 0
C:\Users\osacchi\AppData\Local\Continuum\anaconda3\lib\site-packages\motmetrics\metrics.py:290: RuntimeWarning: invalid value encountered in double_scalars
  return df.noraw['D'].sum() / num_detections

When I use data like "MOT17DetLabels/train/MOT17-02/gt/gt.txt" and "res/MOT17Det/DPM/data/MOT17-02.txt" there is no problem, but when I use my data I get this error.
Normally my data should be in the same format as the MOTChallenge data (same dtypes, same number of columns, etc.).

My dataframes look like this.
Ground truth (created by me):

X Y Width Height Confidence ClassId Visibility
FrameId Id
1 -1 0 0 150 2 0 7 1.0
2 -1 9 9 150 2 0 7 1.0
3 -1 19 19 150 2 0 7 1.0
4 -1 29 29 150 2 0 7 1.0
5 -1 39 39 150 2 0 7 1.0
6 -1 50 50 150 2 0 7 1.0
7 -1 60 60 150 2 0 7 1.0
8 -1 70 70 150 2 0 7 1.0
9 -1 80 80 150 2 0 7 1.0
10 -1 91 91 150 2 0 7 1.0
11 -1 101 101 150 2 0 7 1.0

and Test looks like this:

X Y Width Height Confidence ClassId Visibility
FrameId Id
1 -1 0.000000 0.000000 150 10 2 -1 -1 -1
2 -1 1.000000 1.000000 150 10 2 -1 -1 -1
3 -1 2.000000 2.000000 150 10 2 -1 -1 -1
4 -1 3.000000 3.000000 150 10 2 -1 -1 -1
5 -1 4.000000 4.000000 150 10 2 -1 -1 -1
6 -1 50.199972 50.200520 150 10 2 -1 -1 -1
7 -1 60.439959 60.439959 150 10 2 -1 -1 -1
8 -1 70.680017 70.680007 150 10 2 -1 -1 -1
9 -1 80.919994 80.919998 150 10 2 -1 -1 -1
10 -1 91.160002 91.160001 150 10 2 -1 -1 -1
11 -1 101.399999 101.400000 150 10 2 -1 -1 -1

If I try to use Py-motmetrics on Test with Test or GroundTruth on GroundTruth it works.

num_frames MT PT ML FM MOTA MOTP num_detections num_matches IDs
acc 101 1 0 0 0 100.0% 0.000 101 101 0

Do you have any idea?

Thanks in advance and sorry for my horrible English.

pytest fails

Dear py-motmetrics,
it seems pytest fails for the master and develop branches on a clean Ubuntu 16.04 LTS. Or do I need to run additional commands before running pytest?

root@ubuntu-s-1vcpu-1gb-fra1-01:~/py-motmetrics# pytest
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-3.6.1, py-1.5.3, pluggy-0.6.0
rootdir: /root/py-motmetrics, inifile:
collected 18 items

motmetrics/tests/test_distances.py ...                                   [ 16%]
motmetrics/tests/test_io.py ..                                           [ 27%]
motmetrics/tests/test_lap.py ..                                          [ 38%]
motmetrics/tests/test_metrics.py ...F.F                                  [ 72%]
motmetrics/tests/test_mot.py F...F                                       [100%]

=================================== FAILURES ===================================
________________________________ test_mota_motp ________________________________

    def test_mota_motp():
        acc = mm.MOTAccumulator()

        # All FP
        acc.update([], ['a', 'b'], [], frameid=0)
        # All miss
        acc.update([1, 2], [], [], frameid=1)
        # Match
        acc.update([1, 2], ['a', 'b'], [[1, 0.5], [0.3, 1]], frameid=2)
        # Switch
        acc.update([1, 2], ['a', 'b'], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)
        # Match. Better new match is available but should prefer history
        acc.update([1, 2], ['a', 'b'], [[5, 1], [1, 5]], frameid=4)
        # No data
        acc.update([], [], [], frameid=5)

        mh = mm.metrics.create()
        metr = mh.compute(acc, metrics=['motp', 'mota', 'num_predictions'], return_dataframe=False, return_cached=True)

        assert metr['num_matches'] == 4
        assert metr['num_false_positives'] == 2
        assert metr['num_misses'] == 2
        assert metr['num_switches'] == 2
        assert metr['num_detections'] == 6
>       assert metr['num_objects'] == 8
E       assert 10 == 8

motmetrics/tests/test_metrics.py:90: AssertionError
___________________________ test_motchallenge_files ____________________________

self = <pandas.core.indexing._LocIndexer object at 0x7fe0a48059f8>, key = 'nan'
axis = 0

    @Appender(_NDFrameIndexer._validate_key.__doc__)
    def _validate_key(self, key, axis):
        ax = self.obj._get_axis(axis)

        # valid for a label where all labels are in the index
        # slice of labels (where start-end in labels)
        # slice of integers (only if in the labels)
        # boolean

        if isinstance(key, slice):
            return

        elif com.is_bool_indexer(key):
            return

        elif not is_list_like_indexer(key):

            def error():
                if isna(key):
                    raise TypeError("cannot use label indexing with a null "
                                    "key")
                raise KeyError(u"the label [{key}] is not in the [{axis}]"
                               .format(key=key,
                                       axis=self.obj._get_axis_name(axis)))

            try:
                key = self._convert_scalar_indexer(key, axis)
                if not ax.contains(key):
>                   error()

/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py:1790:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def error():
        if isna(key):
            raise TypeError("cannot use label indexing with a null "
                            "key")
        raise KeyError(u"the label [{key}] is not in the [{axis}]"
                       .format(key=key,
>                              axis=self.obj._get_axis_name(axis)))
E       KeyError: 'the label [nan] is not in the [index]'

/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py:1785: KeyError

During handling of the above exception, another exception occurred:

    def test_motchallenge_files():
        dnames = [
            'TUD-Campus',
            'TUD-Stadtmitte',
        ]

        def compute_motchallenge(dname):
            df_gt = mm.io.loadtxt(os.path.join(dname,'gt.txt'))
            df_test = mm.io.loadtxt(os.path.join(dname,'test.txt'))
            return mm.utils.compare_to_groundtruth(df_gt, df_test, 'iou', distth=0.5)

        accs = [compute_motchallenge(os.path.join(DATA_DIR, d)) for d in dnames]

        # For testing
        # [a.events.to_pickle(n) for (a,n) in zip(accs, dnames)]

        mh = mm.metrics.create()
>       summary = mh.compute_many(accs, metrics=mm.metrics.motchallenge_metrics, names=dnames, generate_overall=True)

motmetrics/tests/test_metrics.py:133:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
motmetrics/metrics.py:191: in compute_many
    partials = [self.compute(acc, metrics=metrics, name=name) for acc, name in zip(dfs, names)]
motmetrics/metrics.py:191: in <listcomp>
    partials = [self.compute(acc, metrics=metrics, name=name) for acc, name in zip(dfs, names)]
motmetrics/metrics.py:142: in compute
    cache[mname] = self._compute(df_map, mname, cache, parent='summarize')
motmetrics/metrics.py:203: in _compute
    v = cache[depname] = self._compute(df_map, depname, cache, parent=name)
motmetrics/metrics.py:203: in _compute
    v = cache[depname] = self._compute(df_map, depname, cache, parent=name)
motmetrics/metrics.py:205: in _compute
    return minfo['fnc'](df_map, *vals)
motmetrics/metrics.py:335: in id_global_assignment
    df_o = df.loc[o, 'D'].dropna()
/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py:1472: in __getitem__
    return self._getitem_tuple(key)
/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py:870: in _getitem_tuple
    return self._getitem_lowerdim(tup)
/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py:998: in _getitem_lowerdim
    section = self._getitem_axis(key, axis=i)
/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py:1911: in _getitem_axis
    self._validate_key(key, axis)
/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py:1798: in _validate_key
    error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def error():
        if isna(key):
            raise TypeError("cannot use label indexing with a null "
                            "key")
        raise KeyError(u"the label [{key}] is not in the [{axis}]"
                       .format(key=key,
>                              axis=self.obj._get_axis_name(axis)))
E       KeyError: 'the label [nan] is not in the [index]'

/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py:1785: KeyError
_________________________________ test_events __________________________________

    def test_events():
        acc = mm.MOTAccumulator()

        # All FP
        acc.update([], ['a', 'b'], [], frameid=0)
        # All miss
        acc.update([1, 2], [], [], frameid=1)
        # Match
        acc.update([1, 2], ['a', 'b'], [[1, 0.5], [0.3, 1]], frameid=2)
        # Switch
        acc.update([1, 2], ['a', 'b'], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)
        # Match. Better new match is available but should prefer history
        acc.update([1, 2], ['a', 'b'], [[5, 1], [1, 5]], frameid=4)
        # No data
        acc.update([], [], [], frameid=5)

        expect = mm.MOTAccumulator.new_event_dataframe()
        expect.loc[(0, 0), :] = ['RAW', np.nan, 'a', np.nan]
        expect.loc[(0, 1), :] = ['RAW', np.nan, 'b', np.nan]
        expect.loc[(0, 2), :] = ['FP', np.nan, 'a', np.nan]
        expect.loc[(0, 3), :] = ['FP', np.nan, 'b', np.nan]

        expect.loc[(1, 0), :] = ['RAW', 1, np.nan, np.nan]
        expect.loc[(1, 1), :] = ['RAW', 2, np.nan, np.nan]
        expect.loc[(1, 2), :] = ['MISS', 1, np.nan, np.nan]
        expect.loc[(1, 3), :] = ['MISS', 2, np.nan, np.nan]

        expect.loc[(2, 0), :] = ['RAW', 1, 'a', 1.0]
        expect.loc[(2, 1), :] = ['RAW', 1, 'b', 0.5]
        expect.loc[(2, 2), :] = ['RAW', 2, 'a', 0.3]
        expect.loc[(2, 3), :] = ['RAW', 2, 'b', 1.0]
        expect.loc[(2, 4), :] = ['MATCH', 1, 'b', 0.5]
        expect.loc[(2, 5), :] = ['MATCH', 2, 'a', 0.3]

        expect.loc[(3, 0), :] = ['RAW', 1, 'a', 0.2]
        expect.loc[(3, 1), :] = ['RAW', 1, 'b', np.nan]
        expect.loc[(3, 2), :] = ['RAW', 2, 'a', np.nan]
        expect.loc[(3, 3), :] = ['RAW', 2, 'b', 0.1]
        expect.loc[(3, 4), :] = ['SWITCH', 1, 'a', 0.2]
        expect.loc[(3, 5), :] = ['SWITCH', 2, 'b', 0.1]

        expect.loc[(4, 0), :] = ['RAW', 1, 'a', 5.]
        expect.loc[(4, 1), :] = ['RAW', 1, 'b', 1.]
        expect.loc[(4, 2), :] = ['RAW', 2, 'a', 1.]
        expect.loc[(4, 3), :] = ['RAW', 2, 'b', 5.]
        expect.loc[(4, 4), :] = ['MATCH', 1, 'a', 5.]
        expect.loc[(4, 5), :] = ['MATCH', 2, 'b', 5.]
        # frame 5 generates no events

>       assert pd.DataFrame.equals(acc.events, expect)
E       AssertionError: assert False
E        +  where False = <function NDFrame.equals at 0x7fe0ac346bf8>(                Type  OId  HId    D\nFrameId Event\n0       0         RAW  nan    a  NaN\n       ...  a  1.0\n        3         RAW    2    b  5.0\n        4       MATCH    1    a  5.0\n        5       MATCH    2    b  5.0,                 Type  OId  HId    D\nFrameId Event\n0       0         RAW  NaN    a  NaN\n       ...  a  1.0\n        3         RAW    2    b  5.0\n        4       MATCH    1    a  5.0\n        5       MATCH    2    b  5.0)
E        +    where <function NDFrame.equals at 0x7fe0ac346bf8> = <class 'pandas.core.frame.DataFrame'>.equals
E        +      where <class 'pandas.core.frame.DataFrame'> = pd.DataFrame
E        +    and                 Type  OId  HId    D\nFrameId Event\n0       0         RAW  nan    a  NaN\n       ...  a  1.0\n        3         RAW    2    b  5.0\n        4       MATCH    1    a  5.0\n        5       MATCH    2    b  5.0 = <motmetrics.mot.MOTAccumulator object at 0x7fe0a47c7390>.events

motmetrics/tests/test_mot.py:57: AssertionError
____________________________ test_merge_dataframes _____________________________

    def test_merge_dataframes():
        acc = mm.MOTAccumulator()

        acc.update([], ['a', 'b'], [], frameid=0)
        acc.update([1, 2], [], [], frameid=1)
        acc.update([1, 2], ['a', 'b'], [[1, 0.5], [0.3, 1]], frameid=2)
        acc.update([1, 2], ['a', 'b'], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)

        r, mappings = mm.MOTAccumulator.merge_event_dataframes([acc.events, acc.events], return_mappings=True)

        expect = mm.MOTAccumulator.new_event_dataframe()
        expect.loc[(0, 0), :] = ['RAW', np.nan, mappings[0]['hid_map']['a'], np.nan]
        expect.loc[(0, 1), :] = ['RAW', np.nan, mappings[0]['hid_map']['b'], np.nan]
        expect.loc[(0, 2), :] = ['FP', np.nan, mappings[0]['hid_map']['a'], np.nan]
        expect.loc[(0, 3), :] = ['FP', np.nan, mappings[0]['hid_map']['b'], np.nan]

>       expect.loc[(1, 0), :] = ['RAW', mappings[0]['oid_map'][1], np.nan, np.nan]
E       KeyError: 1

motmetrics/tests/test_mot.py:126: KeyError
===================== 4 failed, 14 passed in 2.75 seconds ======================

Reference Results

Hi,

I am trying to understand the results meaning.
I have compared one tracking ground truth file with itself and got the following:
           IDF1   IDP    IDR   Rcll  Prcn GT MT PT ML   FP FN IDs FM  MOTA  MOTP
CVPR19-01 86.2% 75.8% 100.0% 100.0% 75.8% 74 74  0  0 6349  0   0  0 68.0% 0.000
OVERALL   86.2% 75.8% 100.0% 100.0% 75.8% 74 74  0  0 6349  0   0  0 68.0% 0.000

Shouldn't the MOTA and MOTP be 100%?
Shouldn't the Prcn be 100%?

Non-ASCII character '\xc3' in file io.py

Hey, when running your code I get the following error
SyntaxError: Non-ASCII character '\xc3' in file ../../../py-motmetrics/motmetrics/io.py on line 19, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
inserting "# -- coding: utf-8 --" in the beginning of the io.py file helps.
could you update the repo, please?

Assertion error, while testing.

While running pytest I encountered this error. I followed the procedure as you explained, and even though I got the result I'm not sure what this error means.

  np.testing.assert_allclose(summary, expected, atol=1e-3)

E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=0.001
E
E (shapes (2, 15), (3, 15) mismatch)
E x: array([[1.667152e+00, 1.653115e+00, 1.667152e+00, 9.512373e-01,
E 9.512373e-01, 1.000000e+00, 1.000000e+00, 0.000000e+00,
E 0.000000e+00, 2.010000e+02, 2.010000e+02, 0.000000e+00,...
E y: array([[5.57659e-01, 7.29730e-01, 4.51253e-01, 5.82173e-01, 9.41441e-01,
E 8.00000e+00, 1.00000e+00, 6.00000e+00, 1.00000e+00, 1.30000e+01,
E 1.50000e+02, 7.00000e+00, 7.00000e+00, 5.26462e-01, 2.77201e-01],...

motmetrics/tests/test_metrics.py:143: AssertionError

I have sped up some of the components

I have sped up some of the components, for faster combined result calculation. I have tested it in many cases; it is actually faster than the original implementation. If you want, maybe we can make a pull request.

Mismatched ID's reporting MATCH

I was plotting some of my results when I noticed a situation where for frames 0-116 OId 24 was matched with HId 3 (Type == 'MATCH' for all those frames), however when we get to the final frame, 117, OId 22 was matched with HId 3 (also Type == 'MATCH'). How can this be possible?
acc_oid_22.xlsx
acc_hid_3.xlsx

df = acc.mot_events
df[df['HId'] == 3]   # see attachment acc_hid_3.xlsx (last line)
df[df['OId'] == 22]  # see attachment acc_oid_22.xlsx (first line)

mh.compute AttributeError: 'list' object has no attribute 'astype'

I dropped into the debugger and it fails in the return statement from compute(). If I define index=None or leave it out it works (See details of debug session below).

I'm running:

numpy==1.14.5
pandas==0.23.4
motmetrics==1.1.3

I searched on the web but didn't find anything that useful. I'll continue to look.

Here's my calling code:

def match_annotations(gold, pred, match='gold'):
    """Compute object detection statistics as a function of IoU, using py-motmetrics."""

    acc = mm.MOTAccumulator()

    f0 = min(gold.min_frame(), pred.min_frame())
    f1 = max(gold.max_frame(), pred.max_frame())
    for f in range(f0, f1 + 1):
        g = gold.get_frame(f).loc[:, ['left', 'top', 'width', 'height']]
        p = pred.get_frame(f).loc[:, ['left', 'top', 'width', 'height']]

        dists = mm.distances.iou_matrix(g, p, max_iou=0.5)
        print('G', list(g.index), '\nP', list(p.index), '\nD', dists, '\nF', f, '\n')
        acc.update(list(g.index), list(p.index), dists, frameid=f)

    mh = mm.metrics.create()
    import pdb; pdb.set_trace()
    result = mh.compute(acc, name=None)
    return result

where get_frame() returns a DataFrame that has additional columns, and the output of the print statement is:

G [0, 1] 
P [0, 1] 
D [[0.2 nan]
 [nan 0.2]] 
F 0 

G [2, 3] 
P [2, 3, 4, 5] 
D [[0.28571429 0.5        0.                nan]
 [       nan 0.44444444        nan 0.        ]] 
F 1 

Suggestions?

(Pdb) l                             
147            if return_cached:   
148                 data = cache    
149             else:               
150                 data = OrderedDict([(k, cache[k]) for k in metrics]) 
151                                 
152  ->         return pd.DataFrame(data, index=[name]) if return_dataframe else data                                                             
153                                 
154         def compute_many(self, dfs, metrics=None, names=None, generate_overall=False):                                                        
155             """Compute metrics on multiple dataframe / accumulators. 
156                                 
157             Params              

(Pdb) pd.DataFrame(data, index=[name])                                   
*** AttributeError: 'list' object has no attribute 'astype' 
             
(Pdb) p name                        
0                                   

(Pdb) p data
OrderedDict([('num_frames', 2), ('obj_frequencies', 3    1
2    1
1    1
0    1
Name: OId, dtype: int64), ('pred_frequencies', 5    1
4    1
3    1
2    1
1    1
0    1
Name: HId, dtype: int64), ('num_matches', 4), ('num_switches', 0), ('num_false_positives', 2), ('num_misses', 0), ('num_detections', 4), ('num_objects', 4), ('num_predictions', 6), ('num_unique_objects', 4), ('track_ratios', 3    1.0
2    1.0
1    1.0
0    1.0
Name: OId, dtype: float64), ('mostly_tracked', 4), ('partially_tracked', 0), ('mostly_lost', 0), ('num_fragmentations', 0), ('motp', 0.09999999999999998), ('mota', 0.5), ('precision', 0.6666666666666666), ('recall', 1.0), ('id_global_assignment', {'fpmatrix': array([[ 0.,  1.,  1.,  1.,  1.,  1.,  0.,  0.,  0.,  0.],
       [ 1.,  0.,  1.,  1.,  1.,  1.,  0.,  0.,  0.,  0.],
       [ 1.,  1.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.],
       [ 1.,  1.,  1.,  0.,  1.,  0.,  0.,  0.,  0.,  0.],
       [ 1., nan, nan, nan, nan, nan,  0.,  0.,  0.,  0.],
       [nan,  1., nan, nan, nan, nan,  0.,  0.,  0.,  0.],
       [nan, nan,  1., nan, nan, nan,  0.,  0.,  0.,  0.],
       [nan, nan, nan,  1., nan, nan,  0.,  0.,  0.,  0.],
       [nan, nan, nan, nan,  1., nan,  0.,  0.,  0.,  0.],
       [nan, nan, nan, nan, nan,  1.,  0.,  0.,  0.,  0.]]), 'fnmatrix': array([[ 0.,  1.,  1.,  1.,  1.,  1.,  1., nan, nan, nan],
       [ 1.,  0.,  1.,  1.,  1.,  1., nan,  1., nan, nan],
       [ 1.,  1.,  0.,  0.,  0.,  1., nan, nan,  1., nan],
       [ 1.,  1.,  1.,  0.,  1.,  0., nan, nan, nan,  1.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]]), 'rids': array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), 'cids': array([0, 1, 2, 3, 6, 7, 8, 9, 4, 5]), 'costs': array([[ 0.,  2.,  2.,  2.,  2.,  2.,  1., nan, nan, nan],
       [ 2.,  0.,  2.,  2.,  2.,  2., nan,  1., nan, nan],
       [ 2.,  2.,  0.,  0.,  0.,  2., nan, nan,  1., nan],
       [ 2.,  2.,  2.,  0.,  2.,  0., nan, nan, nan,  1.],
       [ 1., nan, nan, nan, nan, nan,  0.,  0.,  0.,  0.],
       [nan,  1., nan, nan, nan, nan,  0.,  0.,  0.,  0.],
       [nan, nan,  1., nan, nan, nan,  0.,  0.,  0.,  0.],
       [nan, nan, nan,  1., nan, nan,  0.,  0.,  0.,  0.],
       [nan, nan, nan, nan,  1., nan,  0.,  0.,  0.,  0.],
       [nan, nan, nan, nan, nan,  1.,  0.,  0.,  0.,  0.]]), 'min_cost': 2.0}), ('idfp', 2.0), ('idfn', 0.0), ('idtp', 4.0), ('idp', 0.6666666666666666), ('idr', 1.0), ('idf1', 0.8)])
(Pdb) pd.DataFrame(data, index=None)
          num_frames  obj_frequencies  pred_frequencies  num_matches  num_switches  ...   idfn  idtp       idp  idr  idf1
0                  2              1.0               1.0            4             0  ...    0.0   4.0  0.666667  1.0   0.8
1                  2              1.0               1.0            4             0  ...    0.0   4.0  0.666667  1.0   0.8
2                  2              1.0               1.0            4             0  ...    0.0   4.0  0.666667  1.0   0.8
3                  2              1.0               1.0            4             0  ...    0.0   4.0  0.666667  1.0   0.8
4                  2              NaN               1.0            4             0  ...    0.0   4.0  0.666667  1.0   0.8
5                  2              NaN               1.0            4             0  ...    0.0   4.0  0.666667  1.0   0.8
cids               2              NaN               NaN            4             0  ...    0.0   4.0  0.666667  1.0   0.8
costs              2              NaN               NaN            4             0  ...    0.0   4.0  0.666667  1.0   0.8
fnmatrix           2              NaN               NaN            4             0  ...    0.0   4.0  0.666667  1.0   0.8
fpmatrix           2              NaN               NaN            4             0  ...    0.0   4.0  0.666667  1.0   0.8
min_cost           2              NaN               NaN            4             0  ...    0.0   4.0  0.666667  1.0   0.8
rids               2              NaN               NaN            4             0  ...    0.0   4.0  0.666667  1.0   0.8

[12 rows x 27 columns]

(Pdb) pd.DataFrame(data, index=[0])
*** AttributeError: 'list' object has no attribute 'astype'

(Pdb) pd.DataFrame(data, index=['w'])
*** AttributeError: 'list' object has no attribute 'astype'

Configure T, max distance which keeps association

After an association is made between two objects, according to the paper the association should be kept if it does not exceed a threshold T in the next frame. How can I configure this threshold? Based on what I have seen in the code it looks like it is just being checked if the distance is finite, which is very different from the paper. See [0], section 2.1.1, the first sentence or [2] for the original implementation and [1] for the place in the code which I think is wrong in this implementation.

[0] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.367.6279&rep=rep1&type=pdf
[1] https://github.com/cheind/py-motmetrics/blob/master/motmetrics/mot.py#L182
[2] https://github.com/glisanti/CLEAR-MOT/blob/master/GreedyAssociation.m#L40

How to set max_iou when evaluate my MOT algorithm?

When I want to evaluate my MOT algorithm with py-motmetrics, I find that max_iou has to be set by hand, but different max_iou values affect the metric results. I don't know how to set the value of max_iou when I run mm.distances.iou_matrix(). Can anyone tell me what it should be to match the official test? 0.5 or other values?

Which method to call to execute this program?

How do I pass GT and test files to this app?
Can you explain what the files in etc/data/iotest are and how these differ from etc/data/TUD-Campus?
Can you give examples for the above in your documentation?

Thanks
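A minimal Python sketch of the flow I believe is intended (load ground truth and result files, build an accumulator, compute a summary); the paths are placeholders:

import motmetrics as mm

gt = mm.io.loadtxt('TUD-Campus/gt.txt')    # ground truth, MOTChallenge text format
ts = mm.io.loadtxt('TUD-Campus/test.txt')  # tracker output, same format

# Build the accumulator by matching hypotheses to ground truth per frame.
acc = mm.utils.compare_to_groundtruth(gt, ts, 'iou', distth=0.5)

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=mm.metrics.motchallenge_metrics, name='TUD-Campus')
print(mm.io.render_summary(summary, namemap=mm.io.motchallenge_metric_names))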

IDF1, IDR and IDP greater than 100% after post-processing tracks using re-id methods

I have a bunch of objects that I am tracking using the Deep SORT algorithm, which produces decent results when evaluated with this library. However, as in all multi-object tracking applications, one ground-truth identity (oid) usually ends up being assigned multiple identities during testing (hid). I am using re-identification methods to try to correct some of the mistakes made by Deep SORT.

Fundamentally, in an ideal scenario, if oid 1 is assigned hids 6, 8, 32 and 35 over the time it is tracked, the re-identification system would recognise that hids 8, 32 and 35 are the same as 6 and set them all to 6, since that was the first hid assigned. So something that looks like this as the input

frame  id        x       y      width  height  conf
1      6       1466.00  578.00  195.00  390.00     1
2      8       1475.00    2.00  222.00  375.00     1
3      32       700.48  580.39  276.07  453.66     1
4      35       770.00    5.00  378.00   99.00     1

would be modified to look like this

frame  id        x       y      width  height  conf
1      6      1466.00  578.00  195.00  390.00     1
2      6      1475.00    2.00  222.00  375.00     1
3      6       700.48  580.39  276.07  453.66     1
4      6       770.00    5.00  378.00   99.00     1

So the only thing that is modified is the hid, everything else is left the same.

When I process the original, I get the results I would expect; however, when I process the modified version, my IDF1, IDR and IDP are all above 100%.

I've traced this problem back to idfn(df, id_global_assignment) in metrics.py, which returns a negative number (which I reckon is the cause, but I'm not sure). However, I can't figure out what in the input I'm providing could be causing this.

Memory usage

I'm running this for a 900-frame sequence and it brings my system to its knees, consuming probably 20GB of memory. Any ideas?

metrics output giving all zeros / nans

Hi
I am trying to run motmetrics in a conda environment using the instructions provided, but I get the following warnings and all-NaN output while running on the example data provided in the folder.

(motmetrics-env) my-mac $ python -m motmetrics.apps.eval_motchallenge motmetrics/data/TUD-Campus/gt.txt  motmetrics/data/TUD-Campus/test.txt 
05:01:09 INFO - Found 0 groundtruths and 0 test files.
05:01:09 INFO - Available LAP solvers ['scipy']
05:01:09 INFO - Default LAP solver 'scipy'
05:01:09 INFO - Loading files.
05:01:09 INFO - Running metrics
~/Desktop/py-motmetrics/motmetrics/metrics.py:378: RuntimeWarning: invalid value encountered in double_scalars
  return 2 * idtp / (num_objects + num_predictions)
~/Desktop/py-motmetrics/motmetrics/metrics.py:370: RuntimeWarning: invalid value encountered in double_scalars
  return idtp / (idtp + idfp)
~/Desktop/py-motmetrics/motmetrics/metrics.py:374: RuntimeWarning: invalid value encountered in double_scalars
  return idtp / (idtp + idfn)
~/Desktop/py-motmetrics/motmetrics/metrics.py:302: RuntimeWarning: invalid value encountered in long_scalars
  return num_detections / num_objects
~/Desktop/py-motmetrics/motmetrics/metrics.py:298: RuntimeWarning: invalid value encountered in long_scalars
  return num_detections / (num_false_positives + num_detections)
~/Desktop/py-motmetrics/motmetrics/metrics.py:294: RuntimeWarning: invalid value encountered in long_scalars
  return 1. - (num_misses + num_switches + num_false_positives) / num_objects
~/Desktop/py-motmetrics/motmetrics/metrics.py:290: RuntimeWarning: invalid value encountered in double_scalars
  return df.noraw['D'].sum() / num_detections
        IDF1  IDP  IDR Rcll Prcn GT MT PT ML FP FN IDs  FM MOTA MOTP
OVERALL nan% nan% nan% nan% nan%  0  0  0  0  0  0   0   0 nan%  nan
05:01:09 INFO - Completed

Then, I tried to run pytest and got the following failures. Any idea what might be wrong?

$ pytest
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: ~/Desktop/py-motmetrics, inifile:
collected 18 items                                                                                                                                                            

motmetrics/tests/test_distances.py ...                                                                                                                                  [ 16%]
motmetrics/tests/test_io.py ..                                                                                                                                          [ 27%]
motmetrics/tests/test_lap.py ..                                                                                                                                         [ 38%]
motmetrics/tests/test_metrics.py ...F.F                                                                                                                                 [ 72%]
motmetrics/tests/test_mot.py F...F                                                                                                                                      [100%]

================================================================================== FAILURES ===================================================================================
_______________________________________________________________________________ test_mota_motp ________________________________________________________________________________

   def test_mota_motp():
       acc = mm.MOTAccumulator()
   
       # All FP
       acc.update([], ['a', 'b'], [], frameid=0)
       # All miss
       acc.update([1, 2], [], [], frameid=1)
       # Match
       acc.update([1, 2], ['a', 'b'], [[1, 0.5], [0.3, 1]], frameid=2)
       # Switch
       acc.update([1, 2], ['a', 'b'], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)
       # Match. Better new match is available but should prefer history
       acc.update([1, 2], ['a', 'b'], [[5, 1], [1, 5]], frameid=4)
       # No data
       acc.update([], [], [], frameid=5)
   
       mh = mm.metrics.create()
       metr = mh.compute(acc, metrics=['motp', 'mota', 'num_predictions'], return_dataframe=False, return_cached=True)
   
       assert metr['num_matches'] == 4
       assert metr['num_false_positives'] == 2
       assert metr['num_misses'] == 2
       assert metr['num_switches'] == 2
       assert metr['num_detections'] == 6
>       assert metr['num_objects'] == 8
E       assert 10 == 8

motmetrics/tests/test_metrics.py:90: AssertionError
___________________________________________________________________________ test_motchallenge_files ___________________________________________________________________________

self = <pandas.core.indexing._LocIndexer object at 0x10f626e58>, key = 'nan', axis = 0

   @Appender(_NDFrameIndexer._validate_key.__doc__)
   def _validate_key(self, key, axis):
       ax = self.obj._get_axis(axis)
   
       # valid for a label where all labels are in the index
       # slice of labels (where start-end in labels)
       # slice of integers (only if in the labels)
       # boolean
   
       if isinstance(key, slice):
           return
   
       elif com.is_bool_indexer(key):
           return
   
       elif not is_list_like_indexer(key):
   
           def error():
               if isna(key):
                   raise TypeError("cannot use label indexing with a null "
                                   "key")
               raise KeyError(u"the label [{key}] is not in the [{axis}]"
                              .format(key=key,
                                      axis=self.obj._get_axis_name(axis)))
   
           try:
               key = self._convert_scalar_indexer(key, axis)
               if not ax.contains(key):
>                   error()

../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1790: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

   def error():
       if isna(key):
           raise TypeError("cannot use label indexing with a null "
                           "key")
       raise KeyError(u"the label [{key}] is not in the [{axis}]"
                      .format(key=key,
>                              axis=self.obj._get_axis_name(axis)))
E       KeyError: 'the label [nan] is not in the [index]'

../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1785: KeyError

During handling of the above exception, another exception occurred:

   def test_motchallenge_files():
       dnames = [
           'TUD-Campus',
           'TUD-Stadtmitte',
       ]
   
       def compute_motchallenge(dname):
           df_gt = mm.io.loadtxt(os.path.join(dname,'gt.txt'))
           df_test = mm.io.loadtxt(os.path.join(dname,'test.txt'))
           return mm.utils.compare_to_groundtruth(df_gt, df_test, 'iou', distth=0.5)
   
       accs = [compute_motchallenge(os.path.join(DATA_DIR, d)) for d in dnames]
   
       # For testing
       # [a.events.to_pickle(n) for (a,n) in zip(accs, dnames)]
   
       mh = mm.metrics.create()
>       summary = mh.compute_many(accs, metrics=mm.metrics.motchallenge_metrics, names=dnames, generate_overall=True)

motmetrics/tests/test_metrics.py:133: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
motmetrics/metrics.py:191: in compute_many
   partials = [self.compute(acc, metrics=metrics, name=name) for acc, name in zip(dfs, names)]
motmetrics/metrics.py:191: in <listcomp>
   partials = [self.compute(acc, metrics=metrics, name=name) for acc, name in zip(dfs, names)]
motmetrics/metrics.py:142: in compute
   cache[mname] = self._compute(df_map, mname, cache, parent='summarize')
motmetrics/metrics.py:203: in _compute
   v = cache[depname] = self._compute(df_map, depname, cache, parent=name)
motmetrics/metrics.py:203: in _compute
   v = cache[depname] = self._compute(df_map, depname, cache, parent=name)
motmetrics/metrics.py:205: in _compute
   return minfo['fnc'](df_map, *vals)
motmetrics/metrics.py:335: in id_global_assignment
   df_o = df.loc[o, 'D'].dropna()
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1472: in __getitem__
   return self._getitem_tuple(key)
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:870: in _getitem_tuple
   return self._getitem_lowerdim(tup)
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:998: in _getitem_lowerdim
   section = self._getitem_axis(key, axis=i)
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1911: in _getitem_axis
   self._validate_key(key, axis)
../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1798: in _validate_key
   error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

   def error():
       if isna(key):
           raise TypeError("cannot use label indexing with a null "
                           "key")
       raise KeyError(u"the label [{key}] is not in the [{axis}]"
                      .format(key=key,
>                              axis=self.obj._get_axis_name(axis)))
E       KeyError: 'the label [nan] is not in the [index]'

../../anaconda/envs/motmetrics-env/lib/python3.6/site-packages/pandas/core/indexing.py:1785: KeyError
_________________________________________________________________________________ test_events _________________________________________________________________________________

   def test_events():
       acc = mm.MOTAccumulator()
   
       # All FP
       acc.update([], ['a', 'b'], [], frameid=0)
       # All miss
       acc.update([1, 2], [], [], frameid=1)
       # Match
       acc.update([1, 2], ['a', 'b'], [[1, 0.5], [0.3, 1]], frameid=2)
       # Switch
       acc.update([1, 2], ['a', 'b'], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)
       # Match. Better new match is available but should prefer history
       acc.update([1, 2], ['a', 'b'], [[5, 1], [1, 5]], frameid=4)
       # No data
       acc.update([], [], [], frameid=5)
   
       expect = mm.MOTAccumulator.new_event_dataframe()
       expect.loc[(0, 0), :] = ['RAW', np.nan, 'a', np.nan]
       expect.loc[(0, 1), :] = ['RAW', np.nan, 'b', np.nan]
       expect.loc[(0, 2), :] = ['FP', np.nan, 'a', np.nan]
       expect.loc[(0, 3), :] = ['FP', np.nan, 'b', np.nan]
   
       expect.loc[(1, 0), :] = ['RAW', 1, np.nan, np.nan]
       expect.loc[(1, 1), :] = ['RAW', 2, np.nan, np.nan]
       expect.loc[(1, 2), :] = ['MISS', 1, np.nan, np.nan]
       expect.loc[(1, 3), :] = ['MISS', 2, np.nan, np.nan]
   
       expect.loc[(2, 0), :] = ['RAW', 1, 'a', 1.0]
       expect.loc[(2, 1), :] = ['RAW', 1, 'b', 0.5]
       expect.loc[(2, 2), :] = ['RAW', 2, 'a', 0.3]
       expect.loc[(2, 3), :] = ['RAW', 2, 'b', 1.0]
       expect.loc[(2, 4), :] = ['MATCH', 1, 'b', 0.5]
       expect.loc[(2, 5), :] = ['MATCH', 2, 'a', 0.3]
   
       expect.loc[(3, 0), :] = ['RAW', 1, 'a', 0.2]
       expect.loc[(3, 1), :] = ['RAW', 1, 'b', np.nan]
       expect.loc[(3, 2), :] = ['RAW', 2, 'a', np.nan]
       expect.loc[(3, 3), :] = ['RAW', 2, 'b', 0.1]
       expect.loc[(3, 4), :] = ['SWITCH', 1, 'a', 0.2]
       expect.loc[(3, 5), :] = ['SWITCH', 2, 'b', 0.1]
   
       expect.loc[(4, 0), :] = ['RAW', 1, 'a', 5.]
       expect.loc[(4, 1), :] = ['RAW', 1, 'b', 1.]
       expect.loc[(4, 2), :] = ['RAW', 2, 'a', 1.]
       expect.loc[(4, 3), :] = ['RAW', 2, 'b', 5.]
       expect.loc[(4, 4), :] = ['MATCH', 1, 'a', 5.]
       expect.loc[(4, 5), :] = ['MATCH', 2, 'b', 5.]
       # frame 5 generates no events
   
>       assert pd.DataFrame.equals(acc.events, expect)
E       AssertionError: assert False
E        +  where False = <function NDFrame.equals at 0x1067a3488>(                 Type  OId  HId    D\nFrameId Event                       \n0       0         RAW  nan    a  NaN\n       ...  a  1.0\n        3         RAW    2    b  5.0\n        4       MATCH    1    a  5.0\n        5       MATCH    2    b  5.0,                  Type  OId  HId    D\nFrameId Event                       \n0       0         RAW  NaN    a  NaN\n       ...  a  1.0\n        3         RAW    2    b  5.0\n        4       MATCH    1    a  5.0\n        5       MATCH    2    b  5.0)
E        +    where <function NDFrame.equals at 0x1067a3488> = <class 'pandas.core.frame.DataFrame'>.equals
E        +      where <class 'pandas.core.frame.DataFrame'> = pd.DataFrame
E        +    and                    Type  OId  HId    D\nFrameId Event                       \n0       0         RAW  nan    a  NaN\n       ...  a  1.0\n        3         RAW    2    b  5.0\n        4       MATCH    1    a  5.0\n        5       MATCH    2    b  5.0 = <motmetrics.mot.MOTAccumulator object at 0x10f7f2550>.events

motmetrics/tests/test_mot.py:57: AssertionError
____________________________________________________________________________ test_merge_dataframes ____________________________________________________________________________

   def test_merge_dataframes():
       acc = mm.MOTAccumulator()
   
       acc.update([], ['a', 'b'], [], frameid=0)
       acc.update([1, 2], [], [], frameid=1)
       acc.update([1, 2], ['a', 'b'], [[1, 0.5], [0.3, 1]], frameid=2)
       acc.update([1, 2], ['a', 'b'], [[0.2, np.nan], [np.nan, 0.1]], frameid=3)
   
       r, mappings = mm.MOTAccumulator.merge_event_dataframes([acc.events, acc.events], return_mappings=True)
   
       expect = mm.MOTAccumulator.new_event_dataframe()
       expect.loc[(0, 0), :] = ['RAW', np.nan, mappings[0]['hid_map']['a'], np.nan]
       expect.loc[(0, 1), :] = ['RAW', np.nan, mappings[0]['hid_map']['b'], np.nan]
       expect.loc[(0, 2), :] = ['FP', np.nan, mappings[0]['hid_map']['a'], np.nan]
       expect.loc[(0, 3), :] = ['FP', np.nan, mappings[0]['hid_map']['b'], np.nan]
   
>       expect.loc[(1, 0), :] = ['RAW', mappings[0]['oid_map'][1], np.nan, np.nan]
E       KeyError: 1

motmetrics/tests/test_mot.py:126: KeyError
===================================================================== 4 failed, 14 passed in 2.18 seconds =====================================================================
(motmetrics-env) 

Detection Scores

Hello, thank you very much for publishing this excellent repository; it's very useful.
I have one question: I only want to take into account detections with high confidence and run the tracking on those detections. However, I think motmetrics considers all the detections, so my evaluation result is not what it should be. Do you have any suggestions for this problem?
I would very much appreciate your help on this issue.
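A minimal sketch of one way this could be done, assuming MOTChallenge-format files and that the dataframe returned by mm.io.loadtxt exposes a Confidence column; the 0.5 threshold below is an arbitrary example, not a recommendation:

import motmetrics as mm

gt = mm.io.loadtxt('gt.txt')    # ground truth
ts = mm.io.loadtxt('test.txt')  # tracker output with confidence scores

# Drop low-confidence detections before building the accumulator.
ts = ts[ts['Confidence'] >= 0.5]

acc = mm.utils.compare_to_groundtruth(gt, ts, 'iou', distth=0.5)
mh = mm.metrics.create()
summary = mh.compute(acc, metrics=mm.metrics.motchallenge_metrics, name='filtered')
print(mm.io.render_summary(summary))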

Nice work, but I get an abnormal result when I run my own data file.

I have converted my own file into the standard 10-column format.
But I get a result where MOTA is -82.5% and many metrics are 0, which I'm confused about.
The output is like the following:

03:52:35 INFO - Found 1 groundtruths and 1 test files.
03:52:35 INFO - Available LAP solvers ['scipy']
03:52:35 INFO - Default LAP solver 'scipy'
03:52:35 INFO - Loading files.
03:52:36 INFO - Comparing ADL-Rundle-6...
03:52:37 INFO - Running metrics
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP
ADL-Rundle-6 0.0% 0.0% 0.0% 0.0% 0.0% 24 0 0 24 4134 5008 0 0 -82.5% 0.480
OVERALL 0.0% 0.0% 0.0% 0.0% 0.0% 24 0 0 24 4134 5008 0 0 -82.5% 0.480
03:52:40 INFO - Completed

I then ran your own data files,
and got a result like the following (which looks like a normal one):

03:55:33 INFO - Found 11 groundtruths and 11 test files.
03:55:33 INFO - Available LAP solvers ['scipy']
03:55:33 INFO - Default LAP solver 'scipy'
03:55:33 INFO - Loading files.
03:55:34 INFO - Comparing KITTI-13...
03:55:34 INFO - Comparing PETS09-S2L1...
03:55:37 INFO - Comparing TUD-Stadtmitte...
03:55:37 INFO - Comparing Venice-2...
03:55:40 INFO - Comparing ETH-Pedcross2...
03:55:41 INFO - Comparing ADL-Rundle-8...
03:55:43 INFO - Comparing ETH-Sunnyday...
03:55:44 INFO - Comparing ETH-Bahnhof...
03:55:47 INFO - Comparing ADL-Rundle-6...
03:55:49 INFO - Comparing TUD-Campus...
03:55:49 INFO - Comparing KITTI-17...
03:55:49 INFO - Running metrics
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP
KITTI-13 29.6% 38.8% 23.9% 28.1% 45.6% 42 0 17 25 255 548 4 5 -5.9% 0.318
PETS09-S2L1 66.6% 66.6% 66.6% 91.8% 91.8% 19 18 1 0 368 367 31 37 82.9% 0.269
TUD-Stadtmitte 64.5% 82.0% 53.1% 60.9% 94.0% 10 5 4 1 45 452 7 6 56.4% 0.346
Venice-2 35.5% 43.6% 29.9% 42.0% 61.3% 26 4 16 6 1890 4144 42 52 14.9% 0.274
ETH-Pedcross2 16.9% 61.3% 9.8% 12.2% 76.0% 133 1 17 115 240 5502 17 21 8.0% 0.301
ADL-Rundle-8 28.3% 33.9% 24.3% 40.1% 55.8% 28 5 13 10 2152 4064 37 45 7.8% 0.261
ETH-Sunnyday 49.9% 83.2% 35.7% 37.1% 86.4% 30 7 4 19 108 1169 1 4 31.2% 0.240
ETH-Bahnhof 54.2% 60.2% 49.3% 58.9% 72.0% 171 41 55 75 1241 2223 17 69 35.7% 0.282
ADL-Rundle-6 36.3% 50.0% 28.5% 42.1% 73.9% 24 1 18 5 744 2898 53 45 26.2% 0.275
TUD-Campus 55.8% 73.0% 45.1% 58.2% 94.1% 8 1 6 1 13 150 7 7 52.6% 0.277
KITTI-17 55.8% 72.2% 45.5% 52.4% 83.1% 9 1 7 1 73 325 6 10 40.8% 0.277
OVERALL 41.2% 53.2% 33.6% 45.3% 71.7% 500 84 158 258 7129 21842 222 301 26.8% 0.276
03:58:56 INFO - Completed

If you have any idea about this fault, please contact me, thanks very much.
It's quite abnormal; I only tested one video from MOT15-2D.

Can this repo be used to evaluate detector performance?

This repo is a great piece of work, and I can embed it into my multi-target tracking framework. Now I want to use my own private detector, so I wonder whether this repo can be used to evaluate detectors for multi-target detection, e.g. with metrics like MODA, MODP and so on?

Crash on all empty hypotheses in accumulator

Hello, this is somewhat of an edge case (maybe similar to #23). I run the motmetrics evaluation on a set of experiments and it can happen that there are no hypotheses at all. Something like this:

import numpy
import motmetrics

# Ground-truth objects appear in every frame, but there are no hypotheses at all.
acc = motmetrics.MOTAccumulator(auto_id=True)
acc.update([2], [], numpy.empty((0, 0)))
acc.update([2, 3], [], numpy.empty((0, 0)))
acc.update([3], [], numpy.empty((0, 0)))

metrics = motmetrics.metrics.create()
summary = metrics.compute(acc)

That will cause a crash on motmetrics.py:L377:

IndexError: cannot do a non-empty take from an empty axes.

I'd expect to get results like:
frames: 3 , mostly_lost: 2, num_misses: 4, mota: 0, motp: nan, ...

Instruction for MOT Detection

Hey there. First, thank you for the great work you have done in this project!

I am wondering whether you have an example for the specific task of detection. I got a little lost trying to arrange the ground-truth and prediction data I have from another framework to work with py-motmetrics. I also couldn't figure it out just by reading the code.

Would you mind helping me figure out how to do it? Thanks.

PS:
I use ground truth data from my own dataset. I am aware it needs to follow the definitions at https://motchallenge.net/instructions/, as does the predictions file.

mh.compute_many with generate_overall seems to freeze

Hi all,
I called mh.compute_many(generate_overall=True) with data from 7 input videos (from the MOT training set) and the call seems to be stuck in a never-ending loop. It has been about 16 hours since I made the call and there is still no result.

filtered_summary = mh.compute_many(
            [x[1] for x in filtered_results], 
            metrics=mm.metrics.motchallenge_metrics, 
            names=[x[0] for x in filtered_results],
            generate_overall=True
        )

I get results in about 20 minutes when the generate_overall option is not set.

Is there any technique to make this operation faster?
Thanks in advance
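For reference, my reading (an assumption, not an authoritative statement) is that generate_overall=True merges all per-sequence event dataframes and then computes the metrics once over the merged events, so the global ID assignment runs over every frame of every sequence at once. A sketch of that equivalent flow, which may help pinpoint where the time goes:

import motmetrics as mm

mh = mm.metrics.create()

# 'filtered_results' is assumed to be the same list of (name, accumulator)
# pairs as in the snippet above.
accs  = [x[1] for x in filtered_results]
names = [x[0] for x in filtered_results]

# Per-sequence summaries (the fast path reported above).
summary = mh.compute_many(accs, metrics=mm.metrics.motchallenge_metrics, names=names)

# What generate_overall adds, as I understand it: merge all events, compute once.
merged = mm.MOTAccumulator.merge_event_dataframes([a.events for a in accs])
overall = mh.compute(merged, metrics=mm.metrics.motchallenge_metrics, name='OVERALL')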

Error during id_global_assignment

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 1790, in _validate_key
    error()
  File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 1785, in error
    axis=self.obj._get_axis_name(axis)))
KeyError: 'the label [nan] is not in the [index]'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "evaluate.py", line 96, in <module>
    main(args)
  File "evaluate.py", line 86, in main
    summary = mh.compute(acc, metrics=[ 'id_global_assignment'],  name='acc')
  File "/usr/local/lib/python3.5/dist-packages/motmetrics/metrics.py", line 142, in compute
    cache[mname] = self._compute(df_map, mname, cache, parent='summarize')            
  File "/usr/local/lib/python3.5/dist-packages/motmetrics/metrics.py", line 205, in _compute
    return minfo['fnc'](df_map, *vals)
  File "/usr/local/lib/python3.5/dist-packages/motmetrics/metrics.py", line 335, in id_global_assignment
    df_o = df.loc[o, 'D'].dropna()
  File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 1472, in __getitem__
    return self._getitem_tuple(key)
  File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 870, in _getitem_tuple
    return self._getitem_lowerdim(tup)
  File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 998, in _getitem_lowerdim
    section = self._getitem_axis(key, axis=i)
  File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 1911, in _getitem_axis
    self._validate_key(key, axis)
  File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 1798, in _validate_key
    error()
  File "/usr/local/lib/python3.5/dist-packages/pandas/core/indexing.py", line 1785, in error
    axis=self.obj._get_axis_name(axis)))
KeyError: 'the label [nan] is not in the [index]'

many 0 and nan return

Hi, I ran my own data and it returns many zeros and NaN values; I succeeded with the TUD-Campus example.
my gt txt:
2,1,1702,246,31,61,1,-1,-1,-1
2,2,2351,313,39,77,1,-1,-1,-1
2,3,1745,191,24,57,1,-1,-1,-1
2,4,2199,175,25,60,1,-1,-1,-1
my test txt:
2,1,485.700000,267.020000,53.650000,76.000000,-1,-1,-1,-1
2,2,210.860000,172.800000,28.490000,56.980000,-1,-1,-1,-1
2,3,325.400000,106.000000,26.970000,65.000000,-1,-1,-1,-1
2,4,289.370000,231.810000,31.520000,59.140000,-1,-1,-1,-1
my res:
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP
0.0% 0.0% 0.0% 0.0% 0.0% 31 0 0 31 5865 7734 0 0 -75.8% nan
0.0% 0.0% 0.0% 0.0% 0.0% 31 0 0 31 5865 7734 0 0 -75.8% nan
and there is a runtime warning:
/python3.5/site-packages/motmetrics/metrics.py:290: RuntimeWarning: invalid value encountered in double_scalars return df.noraw['D'].sum() / num_detections

What problem am I running into?
Thanks.

running time

I tried to evaluate the metrics of my result with: python -m motmetrics.apps.eval_motchallenge "/gt" "/test"
If there is only one *.txt file in /test, I get the evaluation result quickly. However, if there are many *.txt files in the folder /test, I do not get the result even after a long time.

accumulator.events.index.lexsort_depth == 1 causes num_fragmentations to crash

Hello and thanks for your metrics library.
I'm trying to compute metrics for a single accumulator and am running into the error below. (I can successfully use compute_many on a list of accumulators.) I have noticed that accumulator.events.index.lexsort_depth == 1 and that is what is causing the problem. I could partially get rid of the error by adding 'dfo = dfo.sort_index()' in metrics.py num_fragmentations.
Any pointers to where else I can fix this? I wasn't able to change accumulator.events with a sort.

Thanks


File "/Users/melissa.stockman/devl/tracking/tracking/metrics/metric_calculator.py", line 116, in xcam_calculate
summary = mh.compute(accumulator, metrics=mm.metrics.motchallenge_metrics, name=self.xcam_accum_name)
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/motmetrics/metrics.py", line 142, in compute
cache[mname] = self._compute(df_map, mname, cache, parent='summarize')
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/motmetrics/metrics.py", line 205, in _compute
return minfo['fnc'](df_map, *vals)
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/motmetrics/metrics.py", line 284, in num_fragmentations
diffs = dfo.loc[first:last].Type.apply(lambda x: 1 if x == 'MISS' else 0).diff()
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/pandas/core/indexing.py", line 1478, in getitem
return self._getitem_axis(maybe_callable, axis=axis)
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/pandas/core/indexing.py", line 1866, in _getitem_axis
return self._get_slice_axis(key, axis=axis)
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/pandas/core/indexing.py", line 1511, in _get_slice_axis
slice_obj.step, kind=self.name)
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 4107, in slice_indexer
kind=kind)
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/pandas/core/indexes/multi.py", line 2146, in slice_locs
return super(MultiIndex, self).slice_locs(start, end, step, kind=kind)
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 4308, in slice_locs
start_slice = self.get_slice_bound(start, 'left', kind)
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/pandas/core/indexes/multi.py", line 2090, in get_slice_bound
return self._partial_tup_index(label, side=side)
File "/Users/melissa.stockman/devl/tracking/venv/lib/python3.6/site-packages/pandas/core/indexes/multi.py", line 2153, in _partial_tup_index
(len(tup), self.lexsort_depth))
pandas.errors.UnsortedIndexError: 'Key length (2) was greater than MultiIndex lexsort depth (1)'


This is the mot_events:

                   Type                                   OId  HId              D

FrameId Event
1543358520000 2 MATCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 7232.000000
3 FP NaN 1 NaN
6 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 6782.265625
7 MATCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 9516.812500
8 MISS 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 NaN NaN
1 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 576.000000
1543358520500 2 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 15049.230713
3 FP NaN 1 NaN
6 MATCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 3202.250000
7 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 18670.890625
8 MISS 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 NaN NaN
2 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 1792.265625
3 MISS b3f8d06a-2b00-556a-b7c2-18af9f701aca NaN NaN
1543358521000 2 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 17283.985407
3 FP NaN 1 NaN
6 MATCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 4740.765625
7 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 42373.250000
8 MISS 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 NaN NaN
2 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 1643.515625
3 MISS b3f8d06a-2b00-556a-b7c2-18af9f701aca NaN NaN
1543358521500 4 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 4165.015625
5 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 54787.781907
4 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 4 172112.900785
5 MATCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 5 33730.562500
6 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 7913.000000
7 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 6 14436.250000
8 MISS 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 NaN NaN
1543358522000 4 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 4228.515625
5 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 14912.504390
4 MATCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 5 9111.250000
5 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 4 173238.706159
6 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 439.765625
7 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 6 7941.515625
8 MISS 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 NaN NaN
2 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 7 12294.140625
3 MISS b3f8d06a-2b00-556a-b7c2-18af9f701aca NaN NaN
1543358522500 6 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 59762.179383
7 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 754.890625
8 MISS 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 NaN NaN
6 MATCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 5 83324.265625
7 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 33461.546577
8 MISS b3f8d06a-2b00-556a-b7c2-18af9f701aca NaN NaN
6 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 2284.765625
7 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 6 689.520767
8 MISS 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 NaN NaN
2 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 4906.250000
3 MISS 59dcc018-8c9a-5c59-bb7a-e014893ca92e NaN NaN
1543358523000 9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 2650.062500
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 63939.023656
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 8620.140625
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 2162.250000
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 3227.312500
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 8986.140625
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 350.265625
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 2494.890625
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 6 360.312500
2 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 137.515625
3 MISS 59dcc018-8c9a-5c59-bb7a-e014893ca92e NaN NaN
1543358523500 9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 5566.140625
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 2902.390625
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 81865.129200
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 1970.312500
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 752.515625
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 10006.812500
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 2398.765625
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 610.000000
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 6 611.140625
4 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 2405.640625
5 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 556.562500
1543358524000 9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 7087.390625
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 2381.562500
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 71169.686046
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 436.000000
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 11855.886515
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 3471.250000
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 6 4186.562500
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 928.671499
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 2.140625
6 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 2416.250000
7 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 5301.562500
8 MISS 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 NaN NaN
1543358524500 9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 33180.146638
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 3844.265625
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 3399.640625
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 4119.062500
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 26989.786040
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 5434.812500
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 120.015625
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 6 24139.179066
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 1340.093824
6 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 22096.562500
7 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 2337.250000
8 MISS 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 NaN NaN
1543358525000 9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 12126.649307
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 11745.390625
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 12852.085307
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 4026.908427
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 1854.272520
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 30328.384181
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 433.390625
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 114.390625
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 6 188995.372394
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 12768.390625
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 1465.093763
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 11389.062500
1543358525500 9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 14596.049757
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 3816.015625
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 7247.250000
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 6188.562500
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 7767.250000
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 6943.344437
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 676.015625
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 6 314552.173324
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 338.312500
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 5376.515625
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 5557.250000
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 11892.265625
1543358526000 9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 8874.015625
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 16816.757606
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 5873.640625
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 6052.765625
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 21846.672001
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 19643.890625
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 1676.140625
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 3 837245.000000
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 6 13810.076282
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 14359.369644
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 841.228633
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 1707.250000
1543358526500 9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 2306.250000
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 52993.373560
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 9109.390625
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 66346.080339
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 2439.209175
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 29303.015625
6 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 5876.003768
7 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 562.500000
8 FP NaN 6 NaN
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 40394.334813
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 11893.290199
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 2435.515625
1543358527000 9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 2851.312500
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 6282.000000
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 38071.482497
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 2729.140625
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 18868.640625
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 80495.525461
6 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 1037.250000
7 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 2590.518119
8 FP NaN 6 NaN
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 22332.441614
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 19744.765625
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 48597.128981
1543358527500 9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 150.890625
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 27280.253014
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 9861.015625
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 48330.988311
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 29200.562500
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 7695.390625
6 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 786.812500
7 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 6 28930.171751
8 FP NaN 3 NaN
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 16367.490597
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 26650.000000
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 33396.406085
1543358528000 9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 5694.577172
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 11600.562500
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 1316.250000
9 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 21086.890625
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 10261.250000
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 40387.983154
6 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 6 78400.359546
7 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 1465.390625
8 FP NaN 3 NaN
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 14550.250000
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 30.015625
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 14823.678546
1543358528500 9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 3272.312500
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 10986.640625
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 3559.765625
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 8636.312500
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 26789.630757
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 13384.390625
6 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 1599.250000
7 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 6 16368.441221
8 FP NaN 3 NaN
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 4369.015625
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 3080.265625
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 20190.896688
1543358529000 9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 5631.015625
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 10114.062500
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 3367.853089
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 3957.015625
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 31451.686585
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 16952.562500
6 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 366.250000
7 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 5295.224129
8 FP NaN 6 NaN
9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 3142.250000
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 22054.779926
11 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 1460.000000
1543358529500 9 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 2 1318.250000
10 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 1 4968.765625
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 10 10333.562500
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 5 35556.585153
10 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 4 769.250000
11 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 9 31566.312500
6 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 3 1431.212951
7 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 8 628.562500
8 FP NaN 6 NaN
9 SWITCH b3f8d06a-2b00-556a-b7c2-18af9f701aca 7 156.250000
10 SWITCH 80bc2a54-8821-5ef7-9fd6-b48c21bdaac8 12 23850.628253
11 SWITCH 59dcc018-8c9a-5c59-bb7a-e014893ca92e 11 2221.250000

Using metrics with custom detections

Hello, I have built a tracker using YOLO+SORT and I would like to evaluate its performance against a benchmark. Is your code going to work with detections I produced using a YOLO detector?
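A minimal per-frame sketch of how custom detections can be fed in, assuming boxes in (x, y, width, height) format; the IDs, boxes and threshold below are made up for illustration:

import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# One update() call per frame: ground-truth ids, tracker ids, and a distance
# matrix between them (here the IoU distance with a 0.5 threshold).
gt_ids,  gt_boxes  = [1, 2], np.array([[10., 10., 20., 40.], [60., 60., 20., 40.]])
trk_ids, trk_boxes = ['a', 'b'], np.array([[11., 11., 20., 40.], [200., 200., 20., 40.]])

dists = mm.distances.iou_matrix(gt_boxes, trk_boxes, max_iou=0.5)
acc.update(gt_ids, trk_ids, dists)

mh = mm.metrics.create()
print(mh.compute(acc, metrics=['num_frames', 'mota', 'motp'], name='acc'))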

setting IOU threshold: max_iou or min_iou?

Two different parameters can be used to decide whether a detector hypothesis (H) and a groundtruth object (O) are a match or not: max_d2 and max_iou.

Obviously, if H and O have a very large Euclidean distance then they are not a match, and the (H, O) pair with the smallest distance is the best match. So it makes sense to define a maximum acceptable value for the mismatch threshold, i.e. max_d2. This means we are interested in minimizing distances, and a distance bigger than max_d2 is ignored.

However, the (H, O) pair with the biggest IOU is the best match, and an (H, O) pair that has zero overlap, and therefore zero IOU, is a mismatch. This means we want to maximize IOU (as opposed to minimizing the distance). It seems to me that we should be defining a min_iou threshold rather than a max_iou.

Can you explain why the threshold we define is used as a maximum allowable IOU and not the minimum acceptable IOU?

MOT16/MOT17 compatible evaluation

First of all, great work!

I am wondering whether there is any chance that the evaluation scheme used in MOT16/MOT17 is planned to be implemented?

The important part is described in 4.1.3 of [2]:
They first remove from the results all bounding boxes that are assigned to instances of uninteresting, but perhaps technically correct, subclasses of persons such as cyclists, reflections or static persons. This is done so that results containing them are not punished. Finally, only objects from the ground-truth class "pedestrian" are used for the evaluation (the ground truth also contains vehicles etc.).

I think the expected behaviour is to use this evaluation scheme when using the mot16 data format, especially when calling "python -m motmetrics.apps.eval_motchallenge ... --fmt='mot16'".
Otherwise the results differ significantly compared to the ones obtained by the official devkit.

[2] Milan, Anton, et al. "Mot16: A benchmark for multi-object tracking." arXiv preprint arXiv:1603.00831 (2016).

Interpreting data fields in MOT Benchmark, ground truth data

Hi, Thank you for this repo. Great work!

I am having some difficulty understanding the ground-truth data. I have downloaded the ground truth data from the MOT Challenge 2015. I understand the first six columns of the dataset but not the remaining four. The following is sample data from the directory <\2DMOT2015\train\ETH-Bahnhof\gt>:

frame no., object_id, bb_left, bb_top, bb_width ,bb_height, (?), (?), (?), (?)
1, 1, 212, 204, 20, 57, 0, -3.1784, 16.34, 0.45739
1, 2, 223, 181, 36,104, 1, -1.407, 9.0212, 0.68774

Such data fields exist in the ground truth for the videos ETH-Bahnhof, ETH-Sunnyday, PETS09-S2L1 and TUD-Stadtmitte. The other videos in the 2DMOT2015 dataset are in the standard format with the last four values set to -1.

I have some questions about the source code, in mot.py and metrics.py

In mot.py

there is a piece of code as follows:

# 1. Try to re-establish tracks from previous correspondences
for i in range(oids.shape[0]):
    if not oids[i] in self.m: # only consider objects that have been tracked
        continue

    hprev = self.m[oids[i]]  
    j, = np.where(hids==hprev)  
    if j.shape[0] == 0:  
        continue
    j = j[0]  

    if not dists[i, j] == INVDIST:
        oids[i] = ma.masked
        hids[j] = ma.masked
        self.m[oids.data[i]] = hids.data[j] # ?? is this really necessary?
        self.events.loc[(frameid, next(eid)), :] = ['MATCH', oids.data[i], hids.data[j], dists[i, j]]

Regarding the second-to-last line, I don't think this assignment is necessary. Isn't it the same pair as in the previous frame? In my humble opinion, there's no need for this assignment.

In metrics.py

the _compute() function

def _compute(self, df, name, cache, parent=None):
    """Compute metric and resolve dependencies."""
    assert name in self.metrics, 'Cannot find metric {} required by {}.'.format(name, parent)
    minfo = self.metrics[name]
    vals = []
    for depname in minfo['deps']:
        v = cache.get(depname, None)  
        if v is None:  
            v = cache[depname] = self._compute(df, depname, cache, parent=name)
        vals.append(v)
    return minfo['fnc'](df, *vals)

the parameter parent serves no function here.

I don't know whether my reading is right or I am missing something. Please shed some light on these two parts.

Would like to know the detail of ID-score

I am very new to multiple object tracking, and I am confused about the calculation of the ID scores (IDFN, IDTP, ...).
In metrics.py, the matching between objects (ground-truth trajectories) and hypotheses (tracked trajectories) is computed based on cost optimization.
The calculation seems to be using the per-frame matching (Hungarian?) results, because the event dataframe without the "match" rows is used.
Why is that? In my understanding, the global matching is not related to the per-frame matching, but is that incorrect?
Please give me some more explanation about the global matching, as well as the meanings of fpmatrix and fnmatrix, if you don't mind.

MOT metrics for DEEP SORT

I have the output files of a tracking algorithm (Deep SORT) after a successful run on the MOT16 dataset, and I also have the ground truth for each of the sequences from the MOT16 dataset.
How can I give both directories (output and ground truth) as input to get the metric scores as output?
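A sketch of the invocation and directory layout I believe eval_motchallenge expects: one <SEQUENCE>/gt/gt.txt per sequence on the ground-truth side and one <SEQUENCE>.txt per sequence on the results side. The sequence names are examples and the layout is my assumption, so double-check against the script's help:

MOT16/train/
    MOT16-02/gt/gt.txt
    MOT16-04/gt/gt.txt
results/
    MOT16-02.txt
    MOT16-04.txt

python -m motmetrics.apps.eval_motchallenge MOT16/train results --fmt='mot16'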

Negative value in MOTA

Hello
I am testing my tracker against the ground truth in the training set, but the problem is that for some image sequences the MOTA is a negative value. I have also used the SST tracker, whose link is available from the MOTChallenge website, and the same result appears for some training sequences. Could you guide me on what is wrong here?

Thank you,
Morteza

I really appreciate this code, but I ran into some mistakes when I wanted to use your data to evaluate the performance.

I have changed the directories of the groundtruth data and the test data to the following format:
*/data/test/gt/gt.txt
*/data/TUD-Stadtmitte/test.txt

and I ran the eval_motchallenge.py code as follows:
~/zmh/py-motmetrics-master/motmetrics/apps$ python eval_motchallenge.py /home/zs/zmh/py-motmetrics-master/motmetrics/data/test/gt /home/zs/zmh/py-motmetrics-master/motmetrics/data/TUD-Stadtmitte

(The former is the root path of the files; is that right?)
But there is a warning: "WARNING - No ground truth for test, skipping."
and "INFO - Found 0 groundtruths and 1 test files."
The complete output follows:
08:45:20 INFO - Found 0 groundtruths and 1 test files.
08:45:20 INFO - Available LAP solvers ['scipy']
08:45:20 INFO - Default LAP solver 'scipy'
08:45:20 INFO - Loading files.
08:45:21 WARNING - No ground truth for test, skipping.
08:45:21 INFO - Running metrics
/home/zs/.virtualenvs/zmh/lib/python3.5/site-packages/motmetrics/metrics.py:378: RuntimeWarning: invalid value encountered in double_scalars
return 2 * idtp / (num_objects + num_predictions)
/home/zs/.virtualenvs/zmh/lib/python3.5/site-packages/motmetrics/metrics.py:370: RuntimeWarning: invalid value encountered in double_scalars
return idtp / (idtp + idfp)
/home/zs/.virtualenvs/zmh/lib/python3.5/site-packages/motmetrics/metrics.py:374: RuntimeWarning: invalid value encountered in double_scalars
return idtp / (idtp + idfn)
/home/zs/.virtualenvs/zmh/lib/python3.5/site-packages/motmetrics/metrics.py:302: RuntimeWarning: invalid value encountered in long_scalars
return num_detections / num_objects
/home/zs/.virtualenvs/zmh/lib/python3.5/site-packages/motmetrics/metrics.py:298: RuntimeWarning: invalid value encountered in long_scalars
return num_detections / (num_false_positives + num_detections)
/home/zs/.virtualenvs/zmh/lib/python3.5/site-packages/motmetrics/metrics.py:294: RuntimeWarning: invalid value encountered in long_scalars
return 1. - (num_misses + num_switches + num_false_positives) / num_objects
/home/zs/.virtualenvs/zmh/lib/python3.5/site-packages/motmetrics/metrics.py:290: RuntimeWarning: invalid value encountered in double_scalars
return df.noraw['D'].sum() / num_detections
IDF1 IDP IDR Rcll Prcn GT MT PT ML FP FN IDs FM MOTA MOTP
OVERALL nan% nan% nan% nan% nan% 0 0 0 0 0 0 0 0 nan% nan
08:45:21 INFO - Completed

Am I doing something wrong when running this code, or can the data folder not be used to test it?
Hoping for your help, thanks!

Incorrect edge case

This library gives incorrect results in a specific edge case. In the MOT16 paper, section 4.1.1, it states:

if a ground truth object i is matched to hypothesis j at time t − 1 and the distance (or dissimilarity) between i and j in frame t is below td, then the correspondence between i and j is carried over to frame t even if there exists another hypothesis that is closer to the actual target.

However, this library uses whatever the most recently matched hypothesis j was, even if that match did not occur at time t-1. This creates an incorrect metric when:

At time 1, object A matches prediction X
At time 2, prediction X moves too far away from object A, and no other prediction matches it so it is a false negative.
At time 3, prediction X moves back to being close enough to match object A, but prediction Y appears and is even closer to object A.

In the authors' metric (https://bitbucket.org/amilan/motchallenge-devkit/src/default/utils/clearMOTMex.cpp) this results in object A matching to prediction Y, however this library will match it to prediction X.
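A hedged reproduction sketch of the scenario above (the distances are made up; the intent is only to set up "matched at time 1, out of range at time 2, both X and the closer Y in range at time 3" and then inspect which hypothesis receives the event):

import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# Time 1: object A (id 1) matches prediction X.
acc.update([1], ['X'], [[0.2]])
# Time 2: X has moved too far away (NaN = impossible match), so A is a miss.
acc.update([1], ['X'], [[np.nan]])
# Time 3: X is back in range, but Y is even closer to A.
acc.update([1], ['X', 'Y'], [[0.4, 0.1]])

# Inspect whether A ends up paired with X (carried-over history) or Y (closest).
print(acc.mot_events)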

Another issue I found is that num_frames does not count frames with zero objects and zero predictions. Any metrics down the line that divide by num_frames would be affected.
