
soccernetv2-devkit's Introduction

SoccerNetv2-DevKit

Welcome to the SoccerNet-V2 Development Kit for the SoccerNet Benchmark and Challenge. This kit is meant to help you get started with the SoccerNet data and the proposed tasks. More information about the dataset can be found on our official website.

SoccerNet-v2 is an extension of SoccerNet-v1 with new and challenging tasks including action spotting, camera shot segmentation with boundary detection, and a novel replay grounding task.

The dataset consists of 500 complete soccer games including:

  • Full untrimmed broadcast videos in both low and high resolution.
  • Pre-computed features such as ResNet-152.
  • Annotations of actions among 17 classes (Labels-v2.json).
  • Annotations of camera replays linked to actions (Labels-cameras.json).
  • Annotations of camera changes and camera types for 200 games (Labels-cameras.json).

Participate in our upcoming Challenge in the CVPR 2021 International Challenge on Activity Recognition Workshop and try to win up to $1,000 sponsored by Second Spectrum! All details can be found on the challenge website, or on the main page.

The participation deadline is fixed at the 30th of May 2021. The official rules and guidelines are available in ChallengeRules.md.

How to download SoccerNet-v2

A SoccerNet pip package to easily download the data and the annotations is available.

To install the pip package simply run:

pip install SoccerNet

Please follow the instructions provided in the Download folder of this repository. Also note that signing a Non-Disclosure Agreement (NDA) is required to access the LQ and HQ videos: NDA.
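For reference, a minimal download script using the pip package might look as follows (a sketch based on the Download folder's DownloadSoccerNet.py, which is also quoted in the issues further down this page; the local directory is an arbitrary choice):

from SoccerNet.Downloader import SoccerNetDownloader

downloader = SoccerNetDownloader(LocalDirectory="./SoccerNet")  # any local path
downloader.password = input("Password?\n")  # obtained after signing the NDA

# Labels and pre-computed features:
downloader.downloadGames(files=["Labels-v2.json"], split=["train", "valid", "test"])
downloader.downloadGames(files=["1_ResNET_TF2_PCA512.npy", "2_ResNET_TF2_PCA512.npy"], split=["train", "valid", "test"])

# LQ videos (NDA password required):
downloader.downloadGames(files=["1.mkv", "2.mkv"], split=["train", "valid", "test"])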

How to extract video features

As this was one of the most requested features for SoccerNet-V1, this repository provides functions to automatically extract the ResNet-152 features and compute the PCA on your own broadcast videos. These functions allow you to test pre-trained action spotting, camera segmentation, or replay grounding models on your own games.

The functions to extract the video features can be found in the Features folder.
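As a sketch of how the extraction can be scripted, the following mirrors the command-line invocation quoted in the issues further down this page; the input video path is a placeholder:

import subprocess

# Extract ResNet-152 features reduced to 512-D with the provided PCA files.
subprocess.run([
    "python", "Features/VideoFeatureExtractor.py",
    "--path_video", "my_game.mkv",                      # placeholder input video
    "--path_features", "my_game_ResNET_TF2_PCA512.npy",
    "--PCA", "Features/pca_512_TF2.pkl",
    "--PCA_scaler", "Features/average_512_TF2.pkl",
], check=True)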

Baseline Implementations

This repository contains several baselines for each task, presented in the SoccerNet-V2 paper or in subsequent papers. You can use this code to build upon our methods and improve performance.

Evaluation

This repository and the pip package provide evaluation functions for the three proposed tasks, based on predictions saved in JSON format. See the Evaluation folder of this repository for more details.
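As an illustrative sketch of a spotting prediction file (the authoritative schema lives in the Evaluation folder's "Output Format" section; the field values below are made up, and the game path is one of the games mentioned in the issues on this page):

import json

predictions = {
    "UrlLocal": "england_epl/2014-2015/2015-05-17 - 18-00 Manchester United 1 - 1 Arsenal",
    "predictions": [
        {
            "gameTime": "1 - 00:31",   # half - mm:ss
            "label": "Goal",           # one of the 17 action classes
            "position": "31500",       # milliseconds from the start of the half
            "half": "1",
            "confidence": "0.95",
        },
    ],
}

with open("results_spotting.json", "w") as f:
    json.dump(predictions, f, indent=4)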

Visualizations

Finally, this repository provides the annotation tool used to annotate the actions, the camera types, and the replays. This tool can also be used to visualize the annotations. Please follow the instructions in the dedicated folder for more details.

Citation

For further information, check out the paper and supplementary material: https://arxiv.org/abs/2011.13367

Please cite our work if you use our dataset:

@InProceedings{Deliège2020SoccerNetv2,
      title={SoccerNet-v2: A Dataset and Benchmarks for Holistic Understanding of Broadcast Soccer Videos}, 
      author={Adrien Deliège and Anthony Cioppa and Silvio Giancola and Meisam J. Seikavandi and Jacob V. Dueholm and Kamal Nasrollahi and Bernard Ghanem and Thomas B. Moeslund and Marc Van Droogenbroeck},
      year={2021},
      booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
      month = {June},
}

soccernetv2-devkit's People

Contributors

cioppaanthony, gmberton, jagob, meisamjam, silviogiancola

soccernetv2-devkit's Issues

CALF GCN assertion error while processing

I tried to train the CALF GCN model with the following command line:

python src/main.py --SoccerNet_path=/datasets/soccernet \
                               --features=ResNET_TF2_PCA512.npy \
                               --num_features=512 \
                               --model_name=calib_GCN \
                               --batch_size 32 \
                               --evaluation_frequency 20 \
                               --chunks_per_epoch 18000 \
                               --model_name=calib_GCN_run_${i}  \
                               --backbone_feature=2DConv \
                               --backbone_player=resGCN-14 \
                               --dist_graph_player=25 \
                               --feature_multiplier 2 \
                               --class_split visual

This line is basically the same as the one from the README.md file here, but without the calibration option. The result I get from running it is the following:

2021-07-12 16:32:33,120 [MainThread  ] [INFO ]  Starting main function
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]  Parameters:
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]   SoccerNet_path : /datasets/soccernet
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]         features : ResNET_TF2_PCA512.npy
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]       max_epochs : 1000
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]     load_weights : None
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]       model_name : calib_GCN_run_
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]             mode : 0
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]        test_only : False
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]        challenge : False
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]          teacher : False
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]             tiny : None
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]      class_split : visual
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]         K_params : None
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]     num_features : 512
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]  chunks_per_epoch : 18000
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]  evaluation_frequency : 20
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]      dim_capsule : 16
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]        framerate : 2
2021-07-12 16:32:33,120 [MainThread  ] [INFO ]       chunk_size : 120
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  receptive_field : 40
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]     lambda_coord : 5.0
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]     lambda_noobj : 0.5
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  loss_weight_segmentation : 0.000367
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  loss_weight_detection : 1.0
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]   num_detections : 15
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  feature_multiplier : 2
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  backbone_player : resGCN-14
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  backbone_feature : 2DConv
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]      calibration : False
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  calibration_field : False
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  calibration_cone : False
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  calibration_confidence : False
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  dim_representation_w : 64
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  dim_representation_h : 32
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  dim_representation_c : 3
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  dim_representation_player : 2
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]  dist_graph_player : 25
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]     with_dropout : 0.0
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]       batch_size : 32
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]               LR : 0.001
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]         patience : 25
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]              GPU : -1
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]   max_num_worker : 4
2021-07-12 16:32:33,121 [MainThread  ] [INFO ]         loglevel : INFO
2021-07-12 16:32:33,255 [MainThread  ] [INFO ]  Checking/Download features and labels locally
2021-07-12 16:32:35,530 [MainThread  ] [INFO ]  Pre-compute clips
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 300/300 [08:38<00:00,  1.73s/it]
2021-07-12 16:41:13,821 [MainThread  ] [INFO ]  Checking/Download features and labels locally
2021-07-12 16:41:19,122 [MainThread  ] [INFO ]  Pre-compute clips
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [05:06<00:00,  3.06s/it]
2021-07-12 16:46:25,305 [MainThread  ] [INFO ]  Checking/Download features and labels locally
2021-07-12 16:46:25,808 [MainThread  ] [INFO ]  Checking/Download features and labels locally
2021-07-12 16:46:28,088 [MainThread  ] [INFO ]  ContextAwareModel(
  (conv_1): Conv2d(1, 128, kernel_size=(1, 512), stride=(1, 1))
  (conv_2): Conv2d(128, 32, kernel_size=(1, 1), stride=(1, 1))
  (pad_p_1): ZeroPad2d(padding=(0, 0, 5, 6), value=0.0)
  (pad_p_2): ZeroPad2d(padding=(0, 0, 13, 13), value=0.0)
  (pad_p_3): ZeroPad2d(padding=(0, 0, 19, 20), value=0.0)
  (pad_p_4): ZeroPad2d(padding=(0, 0, 39, 40), value=0.0)
  (conv_p_1): Conv2d(32, 8, kernel_size=(12, 1), stride=(1, 1))
  (conv_p_2): Conv2d(32, 16, kernel_size=(27, 1), stride=(1, 1))
  (conv_p_3): Conv2d(32, 32, kernel_size=(40, 1), stride=(1, 1))
  (conv_p_4): Conv2d(32, 64, kernel_size=(80, 1), stride=(1, 1))
  (node_encoder): Linear(in_features=8, out_features=64, bias=True)
  (edge_encoder): Linear(in_features=8, out_features=64, bias=True)
  (layers): ModuleList(
    (0): DeepGCNLayer(block=res)
    (1): DeepGCNLayer(block=res)
    (2): DeepGCNLayer(block=res)
    (3): DeepGCNLayer(block=res)
    (4): DeepGCNLayer(block=res)
    (5): DeepGCNLayer(block=res)
    (6): DeepGCNLayer(block=res)
    (7): DeepGCNLayer(block=res)
    (8): DeepGCNLayer(block=res)
    (9): DeepGCNLayer(block=res)
    (10): DeepGCNLayer(block=res)
    (11): DeepGCNLayer(block=res)
    (12): DeepGCNLayer(block=res)
    (13): DeepGCNLayer(block=res)
  )
  (lin): Linear(in_features=64, out_features=152, bias=True)
  (pad_seg): ZeroPad2d(padding=(0, 0, 1, 1), value=0.0)
  (conv_seg): Conv2d(304, 128, kernel_size=(3, 1), stride=(1, 1))
  (batch_seg): BatchNorm2d(240, eps=0.001, momentum=0.01, affine=True, track_running_stats=True)
  (max_pool_spot): MaxPool2d(kernel_size=(3, 1), stride=(2, 1), padding=0, dilation=1, ceil_mode=False)
  (pad_spot_1): ZeroPad2d(padding=(0, 0, 1, 1), value=0.0)
  (conv_spot_1): Conv2d(136, 32, kernel_size=(3, 1), stride=(1, 1))
  (max_pool_spot_1): MaxPool2d(kernel_size=(3, 1), stride=(2, 1), padding=0, dilation=1, ceil_mode=False)
  (pad_spot_2): ZeroPad2d(padding=(0, 0, 1, 1), value=0.0)
  (conv_spot_2): Conv2d(32, 16, kernel_size=(3, 1), stride=(1, 1))
  (max_pool_spot_2): MaxPool2d(kernel_size=(3, 1), stride=(2, 1), padding=0, dilation=1, ceil_mode=False)
  (conv_conf): Conv2d(464, 30, kernel_size=(1, 1), stride=(1, 1))
  (conv_class): Conv2d(464, 120, kernel_size=(1, 1), stride=(1, 1))
  (softmax): Softmax(dim=-1)
)
2021-07-12 16:46:28,089 [MainThread  ] [INFO ]  Total number of parameters: 741828
2021-07-12 16:46:28,090 [MainThread  ] [INFO ]  start training
  0%|                                                                                                                                   | 0/563 [00:05<?, ?it/s]
Traceback (most recent call last):
  File "src/main.py", line 213, in <module>
    main(args)
  File "src/main.py", line 82, in main
    trainer(train_loader, val_loader, val_metric_loader, test_loader,
  File "/code/soccerNetv2-devkit/Task1-ActionSpotting/CALF_Calibration_GCN/src/train.py", line 36, in trainer
    loss_training = train(
  File "/code/soccerNetv2-devkit/Task1-ActionSpotting/CALF_Calibration_GCN/src/train.py", line 150, in train
    output_segmentation, output_spotting = model(feats, representations)
  File "/home/gorayni/anaconda3/envs/CALF-pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/code/soccerNetv2-devkit/Task1-ActionSpotting/CALF_Calibration_GCN/src/model.py", line 127, in forward
    r_concatenation = self.forward_GCN(inputs, representation_inputs)
  File "/code/soccerNetv2-devkit/Task1-ActionSpotting/CALF_Calibration_GCN/src/model.py", line 476, in forward_GCN
    x = self.layers[0].conv(x, edge_index)
  File "/home/gorayni/anaconda3/envs/CALF-pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/gorayni/anaconda3/envs/CALF-pytorch/lib/python3.8/site-packages/torch_geometric/nn/conv/gen_conv.py", line 152, in forward
    out = self.propagate(edge_index, x=x, edge_attr=edge_attr, size=size)
  File "/home/gorayni/anaconda3/envs/CALF-pytorch/lib/python3.8/site-packages/torch_geometric/nn/conv/message_passing.py", line 216, in propagate
    size = self.__check_input__(edge_index, size)
  File "/home/gorayni/anaconda3/envs/CALF-pytorch/lib/python3.8/site-packages/torch_geometric/nn/conv/message_passing.py", line 91, in __check_input__
    assert edge_index.dim() == 2
AssertionError

Making player bounding box code

Hi, I read your Camera Calibration and Player Localization paper, and it was awesome work.

So, I'd like to try to add a "ball detection" process, since you detected players only.

Unfortunately, I couldn't find the code that produces the "player_boundingbox_maskrcnn.json" and "field_calibration_ccbv.json" files as output.

Could you let me know where I can find the code?

Best Regards.

Custom Annotations/Labels for different sport

Hi SoccerNet Dev team again, apologies for the 3rd issue in a row, but I'm really fascinated with this work!

As a seasoned Badminton fan, I would love to apply this architecture to Badminton matches. Is it possible to do so?
As I've gathered from the material, there are a few places where I need to change the code to accomplish this:

  1. When using the annotation tool (when I get it working after my issue #21 is solved), I need to change some code so that when I press ENTER the menu should display my custom labels.

  2. In ./Task1-ActionSpotting/CALF/src/config/classes.py, with custom labels, EVENT_DICTIONARY_V2 and INVERSE_EVENT_DICTIONARY_V2 must be changed (to my custom labels, and just incrementing the dictionary value I assume), but does K_V2 have to change? If so, what does K_V2 change to, or how do I calculate it?

  3. In ./Task1-ActionSpotting/CALF/src/model.py, I of course need to make sure the matrices line-up, as I have a different number of class predictions.

Is there anywhere else the code needs to be changed that I'm missing? I haven't yet looked at the code thoroughly for points 1 & 3, and I'm unsure about the theory in point 2 (a rough sketch of point 2 follows below), but if anyone could point me in the right direction, that'd be greatly appreciated!
Thank you so much again for reading, and looking forward to your response!
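For context on point 2, here is a rough sketch of what I imagine the swapped dictionaries would look like; the badminton labels are hypothetical, and the right K_V2 values are exactly what I'm unsure about:

# Hypothetical ./Task1-ActionSpotting/CALF/src/config/classes.py for badminton
EVENT_DICTIONARY_V2 = {"Smash": 0, "Drop shot": 1, "Service fault": 2}
INVERSE_EVENT_DICTIONARY_V2 = {v: k for k, v in EVENT_DICTIONARY_V2.items()}

# K_V2 seems to hold per-class temporal context parameters, so it would need
# one column per class as well -- how to choose its values is the question.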

Homography params in calibration_ccbv.json

How can I use those homography params to transform a current frame into a top view?

For example, if I use the OpenCV function below, what is H? (Do I need to normalize the JSON's params? Invert them?)
What are the output dimensions of this transform?

warp = cv2.warpPerspective(currFrame, H, (outWidth, outHeight))
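Here is a minimal sketch of what I mean, assuming the calibration file is a list of per-frame entries whose "homography" field holds 9 row-major floats (as in the dictionary.json excerpt further down this page); whether H maps the frame to the template or the inverse is exactly my question:

import json
import numpy as np
import cv2

frame = cv2.imread("frame.jpg")                 # a frame extracted from the video
with open("1_field_calib_ccbv.json") as f:      # per-frame calibration predictions
    calib = json.load(f)

H = np.array(calib[0]["homography"], dtype=np.float64).reshape(3, 3)
H /= H[2, 2]                                    # normalize so H[2, 2] == 1

# If H maps frame pixels to template (top-view) coordinates, warp directly;
# otherwise pass np.linalg.inv(H). The output size should match the template.
out_width, out_height = 1920, 1080              # placeholder dimensions
top_view = cv2.warpPerspective(frame, H, (out_width, out_height))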

Question about ResNET_TF2.npy

Hi owner!
I am new to this devkit and am following NetVLAD++ as a starting point. I found that the downloaded data are all .npy files, e.g. 1_ResNET_TF2.npy, instead of raw video. I wonder how the data is preprocessed and why it is divided into 1_ResNET and 2_ResNET. Could you provide some hints on it?

Also, when trying to test NetVLAD++, I found something weird during testSpotting(). The program crashed at the data with index 36. More specifically, it is related to "europe_uefa-champions-league/2016-2017/2017-05-02 - 21-45 Real Madrid 3 - 0 Atl. Madrid/2_ResNET_TF2.npy". The error is given as follows:

cannot reshape array of size 4997088 into shape (5400,2048)

I wonder if I downloaded 2_ResNET_TF2.npy incorrectly, or whether the index-36 data should be avoided?
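As a quick sanity check before reshaping (a sketch; note that 4997088 is not divisible by 2048, which points to a truncated or corrupted file rather than an indexing problem):

import numpy as np

feats = np.load("2_ResNET_TF2.npy")
# Every frame should contribute one 2048-D ResNet feature, so the total size
# must be a multiple of 2048; here 4997088 % 2048 == 2016, so it is not.
if feats.size % 2048 != 0:
    print("file looks truncated/corrupted -- re-download it")
else:
    feats = feats.reshape(-1, 2048)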

Thank you for reading my issue in advance.

Question about the input image channels of Features/ExtractResNET_TF2.py

When I read preprocess_input in tf.keras, it seems that the input image channel is expected to be "RGB", but the image loaded by OpenCV is supposed to be "BGR", right?
In that case, FrameCV.frames would still be BGR, which would be different from the input image expected by tf.keras.preprocess_input and tf.keras.applications, right?

Please let me know if I am wrong as I am not a frequent user of tensorflow.
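If that reading is right, the fix would be a single color conversion before preprocessing; a minimal sketch of the idea:

import cv2
import numpy as np
from tensorflow.keras.applications.resnet import preprocess_input

frame_bgr = cv2.imread("frame.jpg")                     # OpenCV decodes to BGR
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)  # convert to RGB first

# ResNet's preprocess_input expects RGB input (in 'caffe' mode it flips the
# channels to BGR itself and subtracts the ImageNet channel means).
batch = preprocess_input(frame_rgb.astype(np.float32)[np.newaxis])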

Incorrect field calibration JSON prediction file present under game folder

Hi,

The wrong field calibration file seems to be present under some game folders. Hence, the number of frames extracted from a 25 fps video will not match the number of predictions inside field_calibration.json, because 'UrlLocal' points to a different game video.

For example, refer to this game:
england_epl/2014-2015/2015-05-17 - 18-00 Manchester United 1 - 1 Arsenal
https://exrcsdrive.kaust.edu.sa/exrcsdrive/index.php/s/9eRjic29XTk0gS9?path=%2Fengland_epl%2F2014-2015

Check these files:

1_field_calibration.json contains "UrlLocal": "/ibex/scratch/giancos/SoccerNet_field/england_epl/2014-2015/2015-04-11 - 19-30 Burnley 0 - 1 Arsenal/1_HQ_25.mkv",

2_field_calibration.json contains "UrlLocal": "/ibex/scratch/giancos/SoccerNet_field/england_epl/2015-2016/2015-08-08 - 19-30 Chelsea 2 - 2 Swansea/2_HQ_25.mkv",

There are 217 such instances out of 1000 where the game folder is incorrectly mapped to the game video.

Mismatch Players' Bounding Boxes

I found that several games have mismatched players' bounding boxes, and possibly mismatched calibration as well. The games with this problem were the following:

italy_serie-a/2016-2017/2017-03-04 - 17-00 AS Roma 1 - 2 Napoli
italy_serie-a/2015-2016/2015-09-22 - 21-45 Udinese 2 - 3 AC Milan   
italy_serie-a/2015-2016/2015-11-22 - 22-45 Inter 4 - 0 Frosinone
italy_serie-a/2016-2017/2016-08-27 - 21-45 Napoli 4 - 2 AC Milan
italy_serie-a/2016-2017/2016-08-28 - 21-45 Cagliari 2 - 2 AS Roma
italy_serie-a/2016-2017/2016-09-11 - 16-00 AC Milan 0 - 1 Udinese
italy_serie-a/2016-2017/2016-09-16 - 21-45 Sampdoria 0 - 1 AC Milan
italy_serie-a/2016-2017/2016-09-18 - 21-45 Fiorentina 1 - 0 AS Roma
italy_serie-a/2016-2017/2016-09-20 - 21-45 AC Milan 2 - 0 Lazio
italy_serie-a/2016-2017/2016-09-21 - 21-45 AS Roma 4 - 0 Crotone
italy_serie-a/2016-2017/2016-09-25 - 13-30 Torino 3 - 1 AS Roma
italy_serie-a/2016-2017/2016-10-02 - 21-45 AS Roma 2 - 1 Inter
italy_serie-a/2016-2017/2017-02-10 - 22-45 Napoli 2 - 0 Genoa
italy_serie-a/2016-2017/2017-02-25 - 20-00 Napoli 0 - 2 Atalanta
italy_serie-a/2016-2017/2017-04-15 - 21-45 Napoli 3 - 0 Udinese
spain_laliga/2015-2016/2015-09-12 - 17-00 Espanyol 0 - 6 Real Madrid
spain_laliga/2015-2016/2015-09-12 - 21-30 Atl. Madrid 1 - 2 Barcelona   
spain_laliga/2016-2017/2016-11-19 - 18-15 Barcelona 0 - 0 Malaga
spain_laliga/2016-2017/2017-01-08 - 22-45 Villarreal 1 - 1 Barcelona
spain_laliga/2016-2017/2017-04-26 - 22-30 Dep. La Coruna 2 - 6 Real Madrid
spain_laliga/2019-2020/2020-02-09 - 18-00 Osasuna 1 - 4 Real Madrid
spain_laliga/2019-2020/2020-02-16 - 18-00 Real Madrid 2 - 2 Celta Vigo
england_epl/2015-2016/2015-11-07 - 20-30 Stoke City 1 - 0 Chelsea
england_epl/2015-2016/2016-04-09 - 17-00 Swansea 1 - 0 Chelsea
england_epl/2016-2017/2017-01-03 - 18-00 Bournemouth 3 - 3 Arsenal                   
europe_uefa-champions-league/2014-2015/2014-11-04 - 22-45 Arsenal 3 - 3 Anderlecht
europe_uefa-champions-league/2014-2015/2014-11-05 - 22-45 Ajax 0 - 2 Barcelona
europe_uefa-champions-league/2015-2016/2015-11-03 - 22-45 PSV 2 - 0 Wolfsburg   
france_ligue-1/2016-2017/2017-04-18 - 19-30 Metz 2 - 3 Paris SG
germany_bundesliga/2014-2015/2015-05-09 - 16-30 Dortmund 2 - 0 Hertha Berlin 

For the rest of the games the bounding boxes match perfectly. I extract the frames by sampling at 2 fps from one half of a game as follows:

ffmpeg -ss "$start_time" \
       -t "$duration_time" \
       -i "$half_match_HQ_video_path"
       -vf fps=fps=2.:round=down \
       -vsync 1 \
       -q:v 1 \
       "$frames_dest_dir"/%05d.jpg

I initially thought that I was incorrectly sampling frames from the videos, but I also tried with the official frame loader class FrameCV from DataLoader.py, with no success.

I am not quite sure what the problem might be. Is there an appropriate way to sample the frames from the videos, or are the bounding boxes wrong, or is the data mislabeled?
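For reference, my FrameCV attempt looked roughly like this (a sketch; I am assuming the argument names of the SoccerNet package here):

from SoccerNet.DataLoader import FrameCV

# Sample one half at 2 fps with the same center crop used for the official
# features; start/duration trimming can be taken from video.ini.
video = FrameCV("1_HQ.mkv", FPS=2.0, transform="crop", start=None, duration=None)
frames = video.frames    # numpy array of shape (num_frames, H, W, 3)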

Question relating to CALF Average-mAP metric

Hi,

I'm studying the metric used for CALF in Task1-ActionSpotting - specifically average_mAP, delta_curve, and compute_mAP in metrics_visibility_fast.py.

I understand that you use delta=5 seconds around each GT label, and num_intervals=12 intervals (also num_classes=17). If this is the case, I assume we can...

In average_mAP, change,

integral_unshown = 0.0
for i in np.arange(len(mAP_unshown)-1):
    integral_unshown += 5*(mAP_unshown[i]+mAP_unshown[i+1])/2
a_mAP_unshown = integral_unshown/(5*(len(mAP_unshown)-1))
a_mAP_unshown = a_mAP_unshown*17/13

into

integral_unshown = 0.0
for i in np.arange(len(mAP_unshown)-1):
    integral_unshown += delta * (mAP_unshown[i] + mAP_unshown[i+1]) / 2
a_mAP_unshown = integral_unshown / (delta * (len(mAP_unshown)-1))
a_mAP_unshown = a_mAP_unshown * num_classes / (num_intervals+1)

In delta_curve, change (np.arange(12)*5 + 5)*framerate into (np.arange(num_intervals)*delta + delta)*framerate

In compute_mAP, change

  1. for j in np.arange(11)/10: into for j in np.arange(num_intervals-1)/10:, and
  2. mAP_per_class = AP/11 into mAP_per_class = AP / (num_intervals-1)

Are these 3 assumptions correct?

Also, in average_mAP, when averaging the mAPs, how come a_mAP and a_mAP_visible are identical, but a_mAP_unshown has an extra line: a_mAP_unshown = a_mAP_unshown*17/13?

Hoping you can help me better understand the metric your team used. Thank you for reading!

[Question] Features extracted from LQ and HQ versions of the same video are not matching

Thanks for sharing this amazing work. I tried to extract the 512-dimensional ResNet features after PCA using VideoFeatureExtractor.py, using a few downloaded low resolution (LQ) videos as input, and the extracted features exactly match the features provided for download at the same fps. However, when I tried the same with the corresponding high resolution (HQ) versions of the videos as inputs, the features no longer match. I also used the start times from the video.ini files to ensure that the feature extractions in the two cases are synchronized.

I used the crop transform, so, in the case of HQ videos, the frames were first resized to 398x224, which is the resolution at which the LQ versions were encoded. I also checked the frame data, and the same-sized frame tensors for the LQ and HQ videos of the same game are different! If the resized frames were losslessly compressed while encoding to generate the given LQ videos, the tensors ought to have been the same at this point. So, perhaps lossy compression was applied, which is causing the discrepancy?

I would like to know whether the models trained with the features extracted from the LQ videos would give approximately the same level of performance when tested with the features extracted from the HQ videos as above. If not, could you please share what preprocessing needs to be applied to HQ videos to get the same features and hence the same performance? For example, if compression was the issue above, then sharing the compression parameters used to generate the LQ videos would most likely solve the issue. Thanks for your help.

Starting from scratch

I would like to know where to start when using this kit.
Can I use the already trained models for fine-tuning, or do I have to train from scratch using your features?

UnboundLocalError: local variable 'jsonGamesFile' referenced before assignment

Hello, thanks for sharing the great dataset and codebase!

I'm facing the following issue when trying to evaluate spotting performance on the "val" split, by running python EvaluateSpotting.py --SoccerNet_path /path/to/SoccerNet/ --predictions_path /path/to/SoccerNet/outputs/ --split "val":

File "<path_to>/SoccerNet/utils.py", line 48, in getListGames
    with open(jsonGamesFile, "r") as json_file:
UnboundLocalError: local variable 'jsonGamesFile' referenced before assignment

The "/path/to/SoccerNet/outputs/" is the path to a directory (not zipped) with the structure as specified in https://github.com/SilvioGiancola/SoccerNetv2-DevKit/tree/main/Evaluation under "Output Format", but I only have the "results_spotting.json" prediction file for each game (not the "results_segmentation.json" and "results_ground.json" ones). Each file follows the structure specified under "Task 1: results_spotting.json".
Do you know how to solve it?
Thanks again.

Matteo

Models CALF_calibration approach

Hi guys, first of all thank you for sharing all this knowledge, it is impressive. I was wondering if it is possible to access the trained models of the CALF_calibration approach, since training the networks from scratch is very slow in my case. I am doing my master's thesis and trying to reproduce your experiments, which is why I am interested.

Sorry for the inconvenience and thank you very much.

Trying to reduce the dataset for training, but an exception occurs

I downloaded SoccerNet locally, then removed multiple games from the files (SoccerNetGamesTrain.json, SoccerNetGamesValid.json, SoccerNetGamesTest.json),

but the following exception occurs:
2021-06-26 12:46:56,913 [MainThread ] [INFO ] Total number of parameters: 578245
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
  cpuset_checked))
2021-06-26 12:46:56,915 [MainThread ] [INFO ] start training
  0%|          | 0/563 [00:00<?, ?it/s]
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Train 1: Time 0.040s (it:0.030s) Data:0.001s (it:0.000s) Loss 1.4038e+02 Loss Seg 2.6572e+04 Loss Spot 1.3063e+02 : 100%|█████| 563/563 [00:22<00:00, 25.23it/s]
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
  0%|          | 0/563 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "src/main.py", line 164, in <module>
    main(args)
  File "src/main.py", line 77, in main
    max_epochs=args.max_epochs, evaluation_frequency=args.evaluation_frequency)
  File "/content/drive/My Drive/SoccerNetv2-DevKit/Task1-ActionSpotting/CALF/src/train.py", line 53, in trainer
    train = False)
  File "/content/drive/My Drive/SoccerNetv2-DevKit/Task1-ActionSpotting/CALF/src/train.py", line 138, in train
    for i, (feats, labels, targets) in t:
  File "/usr/local/lib/python3.7/dist-packages/tqdm/std.py", line 1104, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.

Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/drive/My Drive/SoccerNetv2-DevKit/Task1-ActionSpotting/CALF/src/dataset.py", line 127, in __getitem__
    event_selection = random.randint(0, len(self.game_anchors[class_selection])-1)
  File "/usr/lib/python3.7/random.py", line 222, in randint
    return self.randrange(a, b+1)
  File "/usr/lib/python3.7/random.py", line 200, in randrange
    raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (0,0, 0)

Question related to camera mIoU metric

Greetings,

Thanks for making all this code available, this repo is fantastic!

I wanted to ask about some details regarding the camera mIoU metric. On initially going over the SoccerNet-V2 paper, I got the idea that the camera class "Other" should be treated just as any of the other classes. When I went over the code in more detail, I am second-guessing if that's the case and wanted to ask for your thoughts on what should be the ideal treatment of the "Other" camera class.

Looking at Camera_Type_DICTIONARY:

"Close-up side staff":7,"Close-up corner":8,"Close-up behind the goal":9,"Inside the goal":10,"Public":11,"other":12,"I don't know":12}
the "Other" camera label is lower-cased as "other", whereas in my JSON files they show up with initial upper-case as "Other", so I wonder if those labels are being ignored when being read here: At the same time, I guess the "I don't know" annotation will still get mapped to the last label 12 for "Other"?

Ignoring the "Other" label might be consistent with the weights assigned to the different classes for CALF segmentation, where the last weight (for "Other") is zero:

self.weight = torch.tensor([0.00059148, 0.0011937, 0.0257837, 0.02792131, 0.29968935, 0.02532632, 0.09931299, 0.00722039, 0.04672078, 0.01889945, 0.24348754, 0.03107723, 0], dtype=torch.float).cuda()

At the same time, I was confused since the evaluation code seems to take all classes into account when computing mIoU:

for cl in range(dataloader.dataset.num_classes_sgementation):
    cur_gt_mask = (target_np == cl)
    cur_pred_mask = (pred_np == cl)
    # print(cur_gt_mask)
    # print(cur_pred_mask)
    I = np.sum(np.logical_and(cur_gt_mask, cur_pred_mask), dtype=np.float32)
    U = np.sum(np.logical_or(cur_gt_mask, cur_pred_mask), dtype=np.float32)
    if U > 0:
        part_intersect[cl] += I
        part_union[cl] += U

pred_np = segmentation_long_half_2.max(dim=1)[1].numpy()  # pred.squeeze(0).cpu().numpy()
target_np = label_half2.max(dim=1)[1].numpy()  # targets.squeeze(0).cpu().numpy()

for cl in range(dataloader.dataset.num_classes_sgementation):
    cur_gt_mask = (target_np == cl)
    cur_pred_mask = (pred_np == cl)
    I = np.sum(np.logical_and(cur_gt_mask, cur_pred_mask), dtype=np.float32) + 1
    U = np.sum(np.logical_or(cur_gt_mask, cur_pred_mask), dtype=np.float32) + 1
    if U > 0:
        part_intersect[cl] += I
        part_union[cl] += U

So I wanted to get your thoughts on what the appropriate definition of mIoU should be, and also to confirm whether "I don't know" should be treated the same as "Other" for evaluation purposes?

Thanks very much!
Joao

[Annotation Tool Error] "main.py", ModuleNotFoundError: No module named 'interface'

Hi Silvio Giancola & SoccerNetv2-DevKit team, this is a great repo you have put together here, thank you so much for making it public!

I came across this SoccerNet-v2 challenge - Tutorial #2 (live session) video from your YouTube channel.
I'm following along in "02:18 Demo 1: Annotation and visualization tool", and upon running main.py, I've come across an error:

Traceback (most recent call last):
  File "main.py", line 4, in <module>
    from interface.main_window import MainWindow
ModuleNotFoundError: No module named 'interface'

My Steps:

  1. Cloned the repo with git clone
  2. Created environment based on 4 lines in Annotation's "Getting Started"
  3. Ran ./Annotation/actions/src/main.py

If there's anyone that could let me know if this is an error on my end, or how to fix this, I'd be greatly appreciated!

(Note: Regarding the issue I opened a few minutes ago, it was a mis-click, but I can't delete it - apologies for the inconvenience)

Is the NDA access to videos still available?

Hi, I tried to use https://soccer-net.org/ to get the password to download the videos, but I got an ERR_CONNECTION_REFUSED error. Can I still use this website to fetch the video password, or is there any other way to get the videos?

Feature Extraction for Custom Video - GPU not compatible?

Hi again,

I'm testing the feature extraction function on my custom video. From the root, I'm running:

python Features/VideoFeatureExtractor.py \
--path_video 'Download/BWF TV (Copy)/20210317-20210321 YONEX All England Open Badminton Championships 2021/20210319 - YONEX All England Open 2021 _ Day 3 - Kento Momota (JPN) [1] vs Lee Zii Jia (MAS) [6].mp4' \ 
--path_features='Download/BWF TV (Copy)/20210317-20210321 YONEX All England Open Badminton Championships 2021/features.npy' \
--PCA=Features/pca_512_TF2.pkl \
--PCA_scaler=Features/average_512_TF2.pkl

And getting quite a long error:

2021-09-09 13:55:54.204894: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-09-09 13:55:55.004666: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-09-09 13:55:55.027734: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-09 13:55:55.028163: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
pciBusID: 0000:0b:00.0 name: GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.68GiB deviceMemoryBandwidth: 871.81GiB/s
2021-09-09 13:55:55.028178: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-09-09 13:55:55.029177: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-09-09 13:55:55.030163: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-09-09 13:55:55.030307: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-09-09 13:55:55.031239: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-09-09 13:55:55.031700: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-09-09 13:55:55.033647: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-09-09 13:55:55.033742: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-09 13:55:55.034211: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-09 13:55:55.034623: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-09-09 13:55:55.034969: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-09 13:55:55.059732: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3393815000 Hz
2021-09-09 13:55:55.060647: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5563c9e3d200 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-09-09 13:55:55.060669: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2021-09-09 13:55:55.130591: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-09 13:55:55.131080: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5563c99bee50 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-09-09 13:55:55.131093: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce RTX 3090, Compute Capability 8.6
2021-09-09 13:55:55.131224: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-09 13:55:55.131729: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
pciBusID: 0000:0b:00.0 name: GeForce RTX 3090 computeCapability: 8.6
coreClock: 1.695GHz coreCount: 82 deviceMemorySize: 23.68GiB deviceMemoryBandwidth: 871.81GiB/s
2021-09-09 13:55:55.131750: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-09-09 13:55:55.131779: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-09-09 13:55:55.131790: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-09-09 13:55:55.131799: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-09-09 13:55:55.131807: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-09-09 13:55:55.131814: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-09-09 13:55:55.131823: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-09-09 13:55:55.131868: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-09 13:55:55.132376: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-09 13:55:55.132841: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-09-09 13:55:55.132860: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1

After the Terminal hanging for a few minutes, I get:

Traceback (most recent call last):
  File "Features/VideoFeatureExtractor.py", line 200, in <module>
    FPS=args.FPS)
  File "Features/VideoFeatureExtractor.py", line 66, in __init__
    classes=1000)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/keras/applications/resnet.py", line 517, in ResNet152
    input_tensor, input_shape, pooling, classes, **kwargs)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/keras/applications/resnet.py", line 171, in ResNet
    x = layers.Conv2D(64, 7, strides=2, use_bias=use_bias, name='conv1_conv')(x)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 926, in __call__
    input_list)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1098, in _functional_construction_call
    self._maybe_build(inputs)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 2643, in _maybe_build
    self.build(input_shapes)  # pylint:disable=not-callable
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py", line 204, in build
    dtype=self.dtype)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 614, in add_weight
    caching_device=caching_device)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py", line 750, in _add_variable_with_custom_getter
    **kwargs_for_getter)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 145, in make_variable
    shape=variable_shape if variable_shape else None)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 260, in __call__
    return cls._variable_v1_call(*args, **kwargs)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 221, in _variable_v1_call
    shape=shape)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 199, in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 2597, in default_variable_creator
    shape=shape)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 264, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 1518, in __init__
    distribute_strategy=distribute_strategy)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 1651, in _init_from_args
    initial_value() if init_from_fn else initial_value,
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/keras/initializers/initializers_v2.py", line 397, in __call__
    return super(VarianceScaling, self).__call__(shape, dtype=_get_dtype(dtype))
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/init_ops_v2.py", line 561, in __call__
    return self._random_generator.random_uniform(shape, -limit, limit, dtype)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/init_ops_v2.py", line 1044, in random_uniform
    shape=shape, minval=minval, maxval=maxval, dtype=dtype, seed=self.seed)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
    return target(*args, **kwargs)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/ops/random_ops.py", line 288, in random_uniform
    shape = tensor_util.shape_tensor(shape)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/framework/tensor_util.py", line 1029, in shape_tensor
    return ops.convert_to_tensor(shape, dtype=dtype, name="shape")
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1499, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 338, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 264, in constant
    allow_broadcast=True)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 275, in _constant_impl
    return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 300, in _constant_eager_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/framework/constant_op.py", line 97, in convert_to_eager_tensor
    ctx.ensure_initialized()
  File "/home/wilsonchan/anaconda3/envs/SoccerNet-FeatureExtraction/lib/python3.7/site-packages/tensorflow/python/eager/context.py", line 539, in ensure_initialized
    context_handle = pywrap_tfe.TFE_NewContext(opts)
tensorflow.python.framework.errors_impl.InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: device kernel image is invalid

I'm using an RTX 3090, and it seems it isn't compatible. Do I need to "rebuild TensorFlow with the appropriate compiler flags"?

I tried re-installing TensorFlow with pip install tensorflow but nothing was updated.

If you or your team has any leads on what the issue is I'd greatly appreciate it if you'd let me know! Thank you so much for reading this issue, again.

Impossible to open archive

I tried to download the project with the GitHub button. I downloaded SoccerNetv2-DevKit-main.zip, but when I try to open it, every program reports an error (damaged archive). How can I resolve this?

Camera Change Detection

Hi, can you tell me how the Labels-cameras.json file is generated?
I am assuming it is generated through inference run by BasicModel-Segmentation. Is it run from the inference code for Task2-BasicModelSegmentation, and if so, can it be shared? It seems to be commented out in the main.py for Task 2.

Thank you for your help in advance!
This seems to be very interesting research, and your loss function is amazing both conceptually and practically. Kudos!

Lots of Questions on CALF-Calibration

Hello SoccerNet Dev Team,

I'm currently in the process of reading your paper on CALF-Calibration, and the entire pipeline along with the results are very impressive. I have a few (actually, many) questions on parts of the paper/code I'm confused by, and would really appreciate it if any of you could help clear up my confusion. I know I have many questions written below, and completely understand if you're unable to answer them due to the volume. Still, I would love to dig deeper into your work, and it would be amazing if you could help me do so.
As always, thank you so much for your time, and I'm of course looking forward to your new discoveries!

Note: As mentioned in my previous issues, I'm trying to use action-spotting in the context of badminton.


  1. Section 3: Calibration Algorithm
    Here it says "We base our calibration on the Camera Calibration for Broadcast Videos (CCBV) of Sha et al. [38], but we write our own implementation, given the absence of usable public code". I thought the public implementation was here, based on mentions in #19 and #32.

  2. Section 3: Our training process
    Since there is not a large enough public dataset of ground-truth calibrations, it seems you needed to use a student-teacher distillation approach. Why is this approach required? It's also mentioned that you use the "Xeebra" from Evs to obtain the pseudo-GT calibrations. In the CCBV repo, in ./calibration_data/model.png, I assume I have to swap this out for a badminton court for my application. And in ./calibration_data/dictionary.json, the data format is:
[
    {
        "posX": 0.03973018142882254,
        "posY": 68.63033722968056,
        "posZ": -15.718964999679423,
        "focalLength": 4576.520967734781,
        "pan": 7.544952667759858,
        "tilt": 77.55662442882397,
        "template_id": 0,
        "calibration": [
            4576.520967734781,
            0.0,
            960.0,
            0.0,
            4576.520967734781,
            540.0,
            0.0,
            0.0,
            1.0
        ],
        "homography": [
            4659.98895334099,
            -328.4171986412632,
            25605.797859625953,
            -60.243484492166004,
            454.8368668686411,
            40864.09122521112,
            0.1282196063258542,
            -0.9680549606981291,
            69.81988273325322
        ],
        "homography_resize": [
            621.3318481445312,
            -43.788963317871094,
            3414.106689453125,
            -8.032465934753418,
            60.64492416381836,
            5448.544921875,
            0.1282196044921875,
            -0.9680549502372742,
            69.81988525390625
        ],
        "image": "/home/fmg/sources/mmeu/data/meanshift2/gmm/dictionary-four/dict-0000.png"
    },
    ...
]

I assume this is the format that the "Xeebra" Evs product writes the data in? And I believe the model predictions are in the 1_field_calib_ccbv.json files?

As a follow-up to Q1: if you didn't use the CCBV code, have you open-sourced your student calibration algorithm?


  3. Section 3: Player localization
    It says for each frame, you use Mask R-CNN to obtain the bounding box, segmentation mask, and average RGB color of each detected/segmented person. When checking the 1_player_boundingbox_maskrcnn.json files, I see bbox, color and onfield predictions. Is onfield the image segmentation mask?

  4. Section 3: Player localization
    It then says "Then, we compute a field mask following [10] to filter out the bounding boxes that do not intersect the field,
    thus removing e.g. staff and detections in the public". Is this lines 233-237?

  5. Section 3: Player localization
    Following up on Q4, it then says "We use the homography computed by CCBV-SN to estimate the player localization on the
    field in real-world coordinates from the middle point of the bottom of their bounding box". Is this lines 260-273?

  6. Code Blob
    From #6 I gather that lines 240-257 transform the current frame into a top view representation? What's a "calibration cone"? Do you have an image of what it looks like?

  7. Section 4: Top view image representations
    From issue #32 I've gathered that lines 68-82 save the top view images, and I can just edit the save paths to keep them instead of overwriting them. You also read in images src/config/radar.png and src/config/model-radar-mini.png. What are these "radar" images?

  8. Section 4: Feature vector representations
    It seems that lines 89-130 load the model required to get the feature vector representations from the top view representations (depending on which backbone is used), and these are computed & saved in lines 303-324?

  9. Section 4: Player graph representation
    Is this somewhere in the repo? I can't find this code anywhere.

  10. Other
    What do lines 276-286 and lines 289-297 do?

About training clips with TemporallyAwarePooling

Hi, I was wondering why, at training time of TemporallyAwarePooling (NetVLAD++), you don't also consider overlapping clips. Wouldn't this produce a lot more training data (~30x more)?
PS: many thanks for the repo: it's wonderful to download a repo and see that it runs without errors out of the box! Also, the fact that training lasts 50 minutes and is reproducible is amazing! The ML community needs more people like you :)

Task1 CALF Training - PermissionError: [Errno 13] Permission denied

Hi @SilvioGiancola,

I'm trying to run the CALF code for Task 1 with around 400 GB of the entire dataset so far (download is taking a while, so I wanted to try training with the data I have for now).

When in, /Task1-ActionSpotting/CALF, running,

python src/main.py --SoccerNet_path="/../../Download/Videos" \
--features=ResNET_TF2_PCA512.npy \
--num_features=512 \
--model_name=CALF_v2 \
--batch_size 32 \
--evaluation_frequency 20 \
--chunks_per_epoch 18000

I get error PermissionError: [Errno 13] Permission denied: '/../../Download'

Download/Videos is where I downloaded the SoccerNet dataset. It has folders like "england_epl", "europe_uefa-champions-league", etc.
And, in DownloadSoccerNet.py, I referred to this folder as the directory to download to: mySoccerNetDownloader = SoccerNetDownloader(LocalDirectory="./Videos")

Do you have any idea what the issue is?
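For what it's worth, one thing I noticed while double-checking paths: the leading slash makes --SoccerNet_path absolute, so "/../../Download/Videos" resolves to "/Download/Videos" at the filesystem root, which is normally not readable. A quick check (hypothetical paths):

import os

# "/.." from the root stays at the root, so this collapses to /Download/Videos:
print(os.path.abspath("/../../Download/Videos"))   # -> /Download/Videos

# Without the leading slash, it resolves relative to the current directory:
print(os.path.abspath("../../Download/Videos"))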

Thank you so much for your time! Looking forward to your response.

Download SoccerNet Dataset using API - HTTP Error 404: Not Found

Hi again SoccerNet Team, thank you so much for providing the dataset for download! It'll help a lot with my learning process. That being said, I'm having an issue with downloading the dataset using the API.

I'm running the script from ./Download as:
python DownloadSoccerNet.py

I've edited DownloadSoccerNet.py to download all relevant data (as far as I know of) as:

import SoccerNet
from SoccerNet.Downloader import SoccerNetDownloader

mySoccerNetDownloader = SoccerNetDownloader(LocalDirectory="./Videos")
mySoccerNetDownloader.password = input("Password for Videos?\n")

# Download SoccerNet labels
mySoccerNetDownloader.downloadGames(files=["Labels.json"], split=["train","valid","test"]) # download labels
mySoccerNetDownloader.downloadGames(files=["Labels-v2.json"], split=["train","valid","test"]) # download labels SN v2
mySoccerNetDownloader.downloadGames(files=["Labels-cameras.json"], split=["train","valid","test"]) # download labels for camera shot
mySoccerNetDownloader.downloadGames(files=["Labels-replays.json"], split=["train","valid","test"])

# Download SoccerNet features
mySoccerNetDownloader.downloadGames(files=["1_ResNET_TF2.npy", "2_ResNET_TF2.npy"], split=["train","valid","test"]) # download Features
mySoccerNetDownloader.downloadGames(files=["1_ResNET_TF2_PCA512.npy", "2_ResNET_TF2_PCA512.npy"], split=["train","valid","test"]) # download Features reduced with PCA
mySoccerNetDownloader.downloadGames(files=["1_player_boundingbox_maskrcnn.json", "2_player_boundingbox_maskrcnn.json"], split=["train","valid","test"]) # download Player Bounding Boxes inferred with MaskRCNN
mySoccerNetDownloader.downloadGames(files=["1_field_calibration_ccbv.json", "2_field_calibration_ccbv.json"], split=["train","valid","test"]) # download Field Calibration inferred with CCBV

mySoccerNetDownloader.downloadGames(files=["1.mkv", "2.mkv"], split=["train","valid","test"]) # download LQ Videos
mySoccerNetDownloader.downloadGames(files=["1_HQ.mkv", "2_HQ.mkv", "video.ini"], split=["train","valid","test"]) # download HQ Videos

After getting the password from signing the NDA and entering it, the download started as expected. I kept my PC on for 2 days and accumulated 320 GB of the dataset (I didn't expect it to be over 320 GB!), then decided to give it a break. After turning the PC back on and running the script again, I get this output:

./Videos/england_epl/2014-2015/2015-02-21 - 18-00 Chelsea 1 - 1 Burnley/Labels.json already exists
./Videos/england_epl/2014-2015/2015-02-21 - 18-00 Crystal Palace 1 - 2 Arsenal/Labels.json already exists
./Videos/england_epl/2014-2015/2015-02-21 - 18-00 Swansea 2 - 1 Manchester United/Labels.json already exists
./Videos/england_epl/2014-2015/2015-02-22 - 19-15 Southampton 0 - 2 Liverpool/Labels.json already exists
./Videos/england_epl/2015-2016/2015-08-08 - 19-30 Chelsea 2 - 2 Swansea/Labels.json already exists
./Videos/england_epl/2015-2016/2015-08-29 - 17-00 Chelsea 1 - 2 Crystal Palace/Labels.json already exists
...
./Videos/spain_laliga/2016-2017/2017-03-12 - 22-45 Real Madrid 2 - 1 Betis/Labels-cameras.json already exists
./Videos/spain_laliga/2016-2017/2017-04-02 - 17-15 Real Madrid 3 - 0 Alaves/Labels-cameras.json already exists
./Videos/spain_laliga/2016-2017/2017-04-08 - 21-45 Malaga 2 - 0 Barcelona/Labels-cameras.json already exists
./Videos/spain_laliga/2016-2017/2017-04-26 - 20-30 Barcelona 7 - 1 Osasuna/Labels-cameras.json already exists
HTTP Error 404: Not Found
HTTP Error 404: Not Found

After that there is no more output, but the script hasn't finished running.

I'd be grateful if anyone could look into this bug so I can complete the download, let me know how big the full dataset is, and point out anything else I may want to download.
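In case it helps to narrow down the 404s, here is a minimal sketch (standard library only; the expected-file list is mine) that I run to see which per-game files are still missing locally:

import os

EXPECTED = ["Labels.json", "Labels-v2.json", "Labels-cameras.json",
            "1_ResNET_TF2.npy", "2_ResNET_TF2.npy",
            "1_ResNET_TF2_PCA512.npy", "2_ResNET_TF2_PCA512.npy"]

for root, dirs, files in os.walk("./Videos"):
    if not dirs:  # leaf directories correspond to individual games
        missing = [f for f in EXPECTED if f not in files]
        if missing:
            print(root, "->", missing)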
Cheers! Thank you so much for reading this issue.

Replay grounding json

@SilvioGiancola I have to ask: how can we get a human-readable JSON from Task 3 (replay grounding) for an external video? Any hint or advice I could follow would be a great help.

Questions Regarding Baidu Embeddings

Hi Silvio, I have some questions about the Baidu embeddings; I wonder if you have any information about them. I couldn't find answers to these questions in either their GitHub repo or the published paper:

  1. Have the Baidu embeddings already gone through PCA, or are they still "raw" features? I noticed they are of dimension Tx8576, which probably means they are still "raw"; am I right about this?
  2. In your opinion, if I were to reduce the dimension, would it be better to reduce them with PCA, or to pass them through a fully connected layer as in the TemporallyAwarePooling implementation (see the sketch after this list)?
    self.feature_extractor = nn.Linear(self.input_size, 512)
  3. If they are still raw, do you have any idea what the initial dimension was before they were flattened to 8576? They used 398x224 video as mentioned in the paper, but it's not possible to reshape the features to that. I was thinking I could use them in a video-transformer-based architecture (MViT etc.) if I were able to reshape them to the original dimension.
  4. Is any of their fine-tuned feature extraction code publicly available? I think not, but I'm asking anyway in case you know of any, since very little public information is available about their embeddings.
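For reference, a sketch of the two reduction options I mean in question 2 (shapes and numbers are mine, not from the Baidu release):

import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

feats = np.random.rand(2700, 8576).astype(np.float32)  # stand-in for one half's embeddings

# Option A: offline PCA, fit once on (a subset of) the training features.
pca = PCA(n_components=512)
reduced_pca = pca.fit_transform(feats)                 # (2700, 512), fixed projection

# Option B: a learned linear projection trained end-to-end with the model,
# as in TemporallyAwarePooling's feature_extractor.
projection = nn.Linear(8576, 512)
reduced_fcl = projection(torch.from_numpy(feats))      # (2700, 512), learned jointly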

Thanks!

NetVLAD++ Model RAM consumption?

Hi SoccerNet Dev Team,

I've managed to plug my own dataset into NetVLAD++, but I am unable to train because I overload my 32 GB of RAM.

I have ~80 badminton matches of ~50 minutes each, with ResNet-152 features sampled at 5 fps. After loading my dataset, ~18/32 GB of RAM are used. The program then gets killed while loading the model. I'm confused why, as the model is only ~5.5 GB as shown in the TorchInfo summary below, so I believe I should still have ~8 GB to spare. Is this a feature of NetVLAD++ specifically? I noticed that in #28 experiments were done with 60-90 GB of RAM.
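For context, my own back-of-envelope estimate of the raw feature footprint (a sketch; float32 assumed, and any duplication during clip construction would multiply this):

matches, minutes, fps, dim = 80, 50, 5, 2048
frames = matches * minutes * 60 * fps      # 1,200,000 frames
gib = frames * dim * 4 / 1024**3
print(f"{gib:.1f} GiB")                    # ~9.2 GiB for the features alone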

Thank you for reading, and looking forward to your insights!

TorchInfo Summary:

==========================================================================================
Layer (type:depth-idx)                   Output Shape              Param #
==========================================================================================
NetVLAD_plus_plus                        --                        --
├─Linear: 1-1                            [5236, 512]               1,049,088
├─NetVLAD: 1-2                           [44, 14336]               28,672
├─NetVLAD: 1-3                           [44, 14336]               28,672
├─Dropout: 1-4                           [44, 28672]               --
├─Linear: 1-5                            [44, 3]                   86,019
├─Sigmoid: 1-6                           [44, 3]                   --
==========================================================================================
Total params: 1,192,451
Trainable params: 1,192,451
Non-trainable params: 0
Total mult-adds (G): 5.50
==========================================================================================
Input size (MB): 42.89
Forward/backward pass size (MB): 31.54
Params size (MB): 4.77
Estimated Total Size (MB): 79.20
==========================================================================================

How To Get Top Down

Hi,

I am new to using SoccerNet. Do you know what code I can run to get the top-down views as PNGs? I have all the mkv files downloaded, and here is a picture of the files I have downloaded.
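From skimming the calibration code in this repo, my current guess is something like the following (an untested sketch; the pitch size, scale and offset are my assumptions, not an official recipe):

import json
import cv2
import numpy as np

with open("1_field_calibration_ccbv.json") as f:
    calib = json.load(f)

cap = cv2.VideoCapture("1.mkv")
ok, frame = cap.read()                     # grab one frame to warp

H = np.reshape(calib["predictions"][0][0]["homography"], (3, 3))
H = H / H[2, 2]                            # normalize, as the devkit does
H_inv = np.linalg.inv(H)                   # image -> field coordinates (meters)

# Map field meters to output pixels, assuming coordinates centered on a
# 105 x 68 m pitch and 10 px per meter (my assumption).
S = np.array([[10.0, 0.0, 525.0],
              [0.0, 10.0, 340.0],
              [0.0, 0.0, 1.0]])
warped = cv2.warpPerspective(frame, S @ H_inv, (1050, 680))
cv2.imwrite("top_down_0.png", warped)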

Arth Bohra

About Task1-ActionSpotting‘s results

I see this result in the Task 1 code:
Example: NetVLAD on SoccerNet v2 (17 classes - Average-mAP=31.37%), which is consistent with the result of my experiment.
But in the paper, in Table 2, the result is 39.7. I want to know what optimizations caused the improvement in results. Thanks a lot.

NAN with TemporallyAwarePooling

I'm running into a NaN loss after 220 epochs with NetVLAD++ in the TemporallyAwarePooling context. Any suggestions? Different learning rates haven't fixed it. I think it might be coming from the weighted NLLLoss, but I'm not sure yet.
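For reference, the kind of guard I'm experimenting with in the meantime (a sketch; it assumes the NaN comes from log(0) on saturated sigmoid outputs, which I have not confirmed):

import torch

def safe_weighted_nll(probs, targets, weights, eps=1e-8):
    # Clamp probabilities away from 0 and 1 so the logs stay finite.
    probs = probs.clamp(eps, 1.0 - eps)
    loss = -(weights * (targets * torch.log(probs)
                        + (1 - targets) * torch.log(1 - probs)))
    return loss.mean()

# Gradient clipping is the other thing I plan to try:
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)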

Custom Feature Extractor - Features/VideoFeatureExtractor.py: TensorFlow vs PyTorch, Output Dimension, Extraction Speed

Hi SoccerNet Dev Team,

I have a question regarding some technical aspects of Features/VideoFeatureExtractor.py.

I see that you were considering using PyTorch for the backend, but ultimately decided to go with TensorFlow. Is there a reason for this?

Also, when extracting the frames of the video, I see that you're using SoccerNet.DataLoader.Frame or SoccerNet.DataLoader.FrameCV (depending on the OpenCV/SKVideo backend) at 2 FPS, then preprocessing with tensorflow.keras.applications.resnet.preprocess_input.
After running a custom video through this pipeline, I end up with frames.shape = (8393, 224, 224, 3). What confuses me even more is that when loading the ResNET_TF2.npy file, the shape is (8393, 2048).
Could you please explain what these dimensions represent? I believe 3 is the number of color channels and 2048 the number of ResNet features, but I'm unsure where 8393 and 224 come from.
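My current understanding of the shapes, sketched (the duration is my video's; please correct me if I'm wrong):

duration_s = 8393 / 2          # 2 FPS extraction -> 8393 frames ~= a 70-minute video
frame_shape = (224, 224, 3)    # 224x224 RGB is ResNet's expected input resolution
feature_dim = 2048             # ResNet-152's global-pool output per frame
print(duration_s / 60)         # ~69.9 minutes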

Regarding optimization: during extraction, my CPU only runs at ~15% of its maximum. Is there a way to optimize this in the backend (SoccerNet.DataLoader.Frame and SoccerNet.DataLoader.FrameCV, I assume)?

Thank you for your time again, and hoping you can help me in understanding your code a little better!

CNN architecture

Can you explain the neural network (CNN) architecture in SoccerNetv2-DevKit/Task1-ActionSpotting/CALF/src/model.py?

In particular, the number of layers and so on.

Annotation Tool for Actions - Remove "halves", third class requirement, Events List not adjustable, json files extra information

Hi again,

I'm trying to remove the "halves" (1 or 2) in Labels-v2.json when using the Annotation Tool; I believe it has something to do with Annotation/actions/interface/main_window.py and self.half = 1.

I'm also trying to remove the need for a 3rd class, specifically I don't need Annotation/actions/config/third_class.txt. It seems that something must be changed in Annotation/actions/interface/event_selection.py to allow this.

After opening my video with the tool, it seems the Events List (right side) has shrunk and can't be adjusted (see image below). Is there a way to change the code to make this adjustment possible?
[Screenshot from 2021-09-09 13-18-56]

As always, thank you so much for taking the time to read my issue!

Labeling "kind-of" visible actions for Task1-ActionSpotting

Hi again,

In certain scenarios, I'm sure there are cases where, at the moment an action occurs, it is not visible, but then after, say, 2 s, the action is still in the process of occurring and becomes visible. In these cases, would you label the action as "visible" or "not shown"? An idea I had was to label it "not shown" first, then after 2 s, when the action is visible, label the same action again, but this time as "visible".

Does the visibility label impact training at all, or just the metric calculation?

Looking forward to your response. Thank you for reading!

annotation tool error

Hi, thanks for providing this great work!
I'm currently doing a project related to soccer, and I want to annotate videos myself for my own task.
I tried to use the annotation tool, but I got an error when running python main.py in Annotation/replays/src.

Traceback (most recent call last):
  File "main.py", line 4, in <module>
    from interface.main_window import MainWindow
  File "/home/jihwan/SoccerNetv2-DevKit/Annotation/replays/src/interface/main_window.py", line 4, in <module>
    from PyQt5.QtMultimedia import QMediaPlayer
ImportError: libpulse-mainloop-glib.so.0: cannot open shared object file: No such file or directory

What should I do to fix this?

Index Error for Calibration Predictions at CALF GCN

I followed the README.md instructions step by step for training the Task 1 Action Spotting CALF GCN model at https://github.com/SilvioGiancola/SoccerNetv2-DevKit/tree/main/Task1-ActionSpotting/CALF_Calibration_GCN and got the following error:

Traceback (most recent call last):
  File "src/main.py", line 213, in <module>
    main(args)
  File "src/main.py", line 27, in main
    dataset_Train = SoccerNetClips(path=args.SoccerNet_path, split="train", args=args)
  File "/soccerNetv2-devkit/Task1-ActionSpotting/CALF_Calibration_GCN/src/dataset.py", line 462, in __init__
    confidence = calibration_half2["predictions"][i][0]["confidence"]
IndexError: list index out of range

It seems that the cause of this error is that the number of calibration predictions is not equal to the number of bounding boxes. This error is thrown for several match halves.
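For now I get past the crash with a guard like this (a local workaround mirroring the loop in dataset.py, not a fix for the underlying mismatch), skipping frames whose prediction list is empty before indexing [0]:

# In CALF_Calibration_GCN/src/dataset.py, just before reading the confidence:
if len(calibration_half2["predictions"][i]) == 0:
    continue
confidence = calibration_half2["predictions"][i][0]["confidence"]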

Problem with the homography data in calibration json

I am working on 1_field_calibration_ccbv.json to extract frames from the video using the corresponding video.ini file, under the assumption that an HQ video is 50 fps downsampled to 2 fps.

Using the start-time second present in the .ini file and the count of frames present, I map each homography matrix to its corresponding frame. However, homography data seems to be present even for replay frames, with a high confidence score.
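Concretely, my mapping logic is the following (a sketch; the helper is mine):

def frame_to_timestamp(frame_index, start_second, fps_features=2.0):
    # Calibration entries are at 2 fps, offset by the start time from video.ini.
    seconds = start_second + frame_index / fps_features
    m, s = divmod(int(seconds), 60)
    h, m = divmod(m, 60)
    return f"{h}:{m:02d}:{s:02d}"

print(frame_to_timestamp(1162, 0))  # -> 0:09:41, the frame discussed below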

Attached is an example. [Frame image] This frame, located at 0:09:41 (calculated using the above assumption), has the following data present in the calibration JSON:

"homography": [
3334.76123046875,
-1385.3936767578125,
-23787.22265625,
-74.32776641845703,
260.42376708984375,
31227.6171875,
-0.8536908626556396,
-1.3100050687789917,
85.80413818359375
],
"conf": 0.931109577411863

I am using the file Task1-ActionSpotting/Calibration_GCN/src/dataset.py (https://github.com/SilvioGiancola/SoccerNetv2-DevKit/blob/deda7eaae95de2637d43730a7708fc8199245ad9/Task1-ActionSpotting/Calibration_GCN/src/dataset.py)

together with the following code to draw the calibration cones:
# Visualization
# for i, frame in enumerate(representation_half1):
#     cv2.imwrite("outputs/test/" + str(i) + ".png", frame)

representation_half2 = np.zeros((feat_half2.shape[0], self.representation_height, self.representation_width, self.representation_channel), dtype=np.uint8)
bbox_predictions = bbox_half2
ratio_width = bbox_predictions["size"][2] / self.representation_width
ratio_height = bbox_predictions["size"][1] / self.representation_height
for i, bbox in enumerate(bbox_predictions["predictions"][0:feat_half2.shape[0]]):
    if self.args.calibration:
        confidence = calibration_half2["predictions"][i][0]["confidence"]
        if confidence < self.calibration_threshold:
            continue
        homography = calibration_half2["predictions"][i][0]["homography"]
        homography = np.reshape(homography, (3, 3))
        homography = homography / homography[2, 2]
        homography = np.linalg.inv(homography)

        # Draw the field lines
        if self.args.calibration_field:
            representation_half2[i, int(0.025 * self.representation_height): int(0.025 * self.representation_height) + self.radar_image.shape[0], int(0.025 * self.representation_width): int(0.025 * self.representation_width) + self.radar_image.shape[1]] = self.radar_image

        # Draw the calibration cones
        if self.args.calibration_cone:
            frame_top_left_projected = unproject_image_point(homography, np.array([0, 0, 1]))
            frame_top_right_projected = unproject_image_point(homography, np.array([calibration_half2["size"][2], 0, 1]))
            frame_bottom_left_projected = unproject_image_point(homography, np.array([0, calibration_half2["size"][1], 1]))
            frame_bottom_right_projected = unproject_image_point(homography, np.array([calibration_half2["size"][2], calibration_half2["size"][1], 1]))
            frame_top_left_radar = meter2radar(frame_top_left_projected, self.dim_terrain, (self.representation_height, self.representation_width, self.representation_channel))
            frame_top_right_radar = meter2radar(frame_top_right_projected, self.dim_terrain, (self.representation_height, self.representation_width, self.representation_channel))
            frame_bottom_left_radar = meter2radar(frame_bottom_left_projected, self.dim_terrain, (self.representation_height, self.representation_width, self.representation_channel))
            frame_bottom_right_radar = meter2radar(frame_bottom_right_projected, self.dim_terrain, (self.representation_height, self.representation_width, self.representation_channel))

            pts = np.array([frame_top_left_radar[0:2], frame_top_right_radar[0:2], frame_bottom_right_radar[0:2], frame_bottom_left_radar[0:2]], np.int32)

            representation_half2[i] = cv2.polylines(representation_half2[i], [pts], True, (255, 255, 255), 1)

It seems that either the homography data present for this frame is incorrect, or its confidence value is, or is there a problem with my logic for mapping frames to the homography data in 1_field_calibration_ccbv.json?

Training NetVLAD++ process killed

Hello,
while executing python src/main.py --SoccerNet_path=my/path/to/soccernet from the SoccerNetv2-DevKit/Task1-ActionSpotting/TemporallyAwarePooling folder, the process gets an out-of-memory kill signal from the kernel. This doesn't happen with the reduced features, i.e. with --features ResNET152_TF2_PCA512.npy.
The signal is sent when running line 131 of SoccerNetv2-DevKit/Task1-ActionSpotting/TemporallyAwarePooling/src/dataset.py, which is called from line 24 of SoccerNetv2-DevKit/Task1-ActionSpotting/TemporallyAwarePooling/src/main.py.

Here is the log from my /var/log/syslog:
Sep 21 11:26:03 MS-7B79 kernel: [440320.649437] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1004.slice/session-752.scope,task=python,pid=126284,uid=1004
Sep 21 11:26:03 MS-7B79 kernel: [440320.649479] Out of memory: Killed process 126284 (python) total-vm:55748164kB, anon-rss:31574096kB, file-rss:0kB, shmem-rss:4kB, UID:1004 pgtables:63392kB oom_score_adj:0
Sep 21 11:26:03 MS-7B79 kernel: [440321.292489] oom_reaper: reaped process 126284 (python), now anon-rss:0kB, file-rss:0kB, shmem-rss:4kB

My config is:
OS: Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-84-generic x86_64)
Ram: 32GB
GPU: NVIDIA GeForce RTX 2080

Maybe I simply don't have enough RAM? Do you have any suggestions?
