
pycococreator's Introduction

pycococreator

Please cite using: DOI

pycococreator is a set of tools to help create COCO datasets. It includes functions to generate annotations in uncompressed RLE ("crowd") and polygons in the format COCO requires.
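As a rough illustration of the uncompressed RLE ("crowd") format mentioned above, here is a toy re-implementation of the encoding (the helper name is mine, not pycococreator's API): COCO counts alternating runs of zeros and ones in column-major order, always starting with the zero run.

```python
import numpy as np

def binary_mask_to_rle(binary_mask):
    # Flatten column-major (Fortran order), then count alternating runs
    # of 0s and 1s, starting with the run of 0s (possibly of length 0).
    flat = binary_mask.ravel(order="F")
    counts, prev, run = [], 0, 0
    for pixel in flat:
        if pixel != prev:
            counts.append(run)
            prev, run = pixel, 1
        else:
            run += 1
    counts.append(run)
    return {"counts": counts, "size": list(binary_mask.shape)}

mask = np.array([[0, 1],
                 [0, 1]], dtype=np.uint8)
print(binary_mask_to_rle(mask))  # -> {'counts': [2, 2], 'size': [2, 2]}
```

pycococreator's own `create_annotation_info` produces this structure for `is_crowd` annotations and polygon lists otherwise.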

Read more here https://patrickwasp.com/create-your-own-coco-style-dataset/


Install

pip install git+https://github.com/waspinator/[email protected]

If you need to install pycocotools for python 3, try the following:

sudo apt-get install python3-dev
pip install cython
pip install git+git://github.com/waspinator/[email protected]

pycococreator's People

Contributors

hannarud, waspinator


pycococreator's Issues

How to run?

Could you please provide a clearer description of how to run your code? I installed the package but am unsure what to do next. When I try to run pycococreatortools.py, it says that numpy is not installed (but it is installed).

Getting error when trying to install in Kaggle

After running !pip install git+git://github.com/waspinator/[email protected]

I get the following error:

Collecting git+git://github.com/waspinator/[email protected]
  Cloning git://github.com/waspinator/pycococreator.git (to revision 0.2.0) to /tmp/pip-req-build-m589ikd6
  Running command git clone --filter=blob:none --quiet git://github.com/waspinator/pycococreator.git /tmp/pip-req-build-m589ikd6
  fatal: remote error:
    The unauthenticated git protocol on port 9418 is no longer supported.
  Please see https://github.blog/2021-09-01-improving-git-protocol-security-github/ for more information.
  error: subprocess-exited-with-error
  
  × git clone --filter=blob:none --quiet git://github.com/waspinator/pycococreator.git /tmp/pip-req-build-m589ikd6 did not run successfully.
  │ exit code: 128
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× git clone --filter=blob:none --quiet git://github.com/waspinator/pycococreator.git /tmp/pip-req-build-m589ikd6 did not run successfully.
│ exit code: 128
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
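For what it's worth, the fatal error above is GitHub's 2022 shutdown of the unauthenticated git:// protocol, not a pycococreator or pip problem; switching the URL scheme to https:// lets pip clone the repository:

```shell
# git:// on port 9418 is no longer served by GitHub; install over HTTPS instead
pip install "git+https://github.com/waspinator/[email protected]"

# or tell git to rewrite git:// GitHub URLs globally
git config --global url."https://github.com/".insteadOf git://github.com/
```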

Preparing Custom Dataset - "NameError: name 'fnmatch' is not defined"

I have a dataset folder that contains two folders named images and annotations (../train/images and ../train/annotations); each of these folders contains images.

I have seen this blog. I modified the code and tried to run it in Colab to generate the dataset. Then I got this error: NameError: name 'fnmatch' is not defined

How can I solve the issue? Can anyone guide me through the procedure for creating a dataset to apply Mask R-CNN?
@waspinator @hannarud
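The NameError just means the script never imported the standard-library fnmatch module that the blog-post code uses to pair images with annotation files. A minimal sketch (the helper and naming pattern below are illustrative, not the exact blog code):

```python
import fnmatch  # adding this import at the top of the script fixes the NameError

def filter_for_annotations(files, image_filename):
    # Match annotation files that share the image's basename,
    # e.g. img1.jpg -> img1_mask_0.png (pattern is an assumption).
    basename = image_filename.rsplit('.', 1)[0]
    return [f for f in files if fnmatch.fnmatch(f, basename + '_*.png')]

print(filter_for_annotations(['img1_mask_0.png', 'img2_mask_0.png'], 'img1.jpg'))
# -> ['img1_mask_0.png']
```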

When visualizing with own images: ValueError: cannot reshape array of size 5 into shape (2,2)

Hey, I recreated the steps with my own image, as attached. Adapting your shapes_to_coco.py script by changing the class names worked. However, in the last step, visualizing with your Jupyter notebook (there is a small error there: the annotation file is in the train dir, not the annotations dir), I get the following error: ValueError: cannot reshape array of size 5 into shape (2,2). A screenshot is attached too.

Maybe we can discuss this in detail via mail to speed up fixing it: [email protected]

Failed to transform dataset with digits as filename

Just want to bring to your attention that the following test case,

./train/annotations/139284-0989-1254.txt

fails to transform to an annotation in JSON format. I assume your framework should support this.

Could you give me some instructions to work around this?

ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.

(venv) root@cc3180440f4b:/opendr/src/opendr/perception/panoptic_segmentation/efficient_ps/algorithm/EfficientPS# python tools/convert_cityscapes.py ./data_2/ ./data/cityscapes/
Loading Cityscapes from ./data_2/
Converting train ...
  0%| | 0/2975 [00:00<?, ?it/s]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
    return list(map(*args))
  File "tools/convert_cityscapes.py", line 163, in call
    coco_ann_i = pct.create_annotation_info(
  File "/opendr/venv/lib/python3.8/site-packages/pycococreatortools/pycococreatortools.py", line 99, in create_annotation_info
    segmentation = binary_mask_to_polygon(binary_mask, tolerance)
  File "/opendr/venv/lib/python3.8/site-packages/pycococreatortools/pycococreatortools.py", line 48, in binary_mask_to_polygon
    contours = np.subtract(contours, 1)
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.

I am trying to convert my Cityscapes dataset to COCO style using pycococreator, with the following subfolders:

  1. gtFine
  2. leftImg8bit

Inside gtFine -> train -> <city_name> are the following items:

1. aachen_000057_000019_gtFine_color.png
2. aachen_000057_000019_gtFine_instanceIds.png
3. aachen_000057_000019_gtFine_labelIds.png
4. aachen_000057_000019_gtFine_polygons.json

How can I solve the above ValueError issue?
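One plausible cause, assuming the installed pycococreatortools matches the traceback above: skimage's find_contours returns a list of (N, 2) arrays whose lengths differ per contour, and NumPy 1.24+ refuses to build the ragged array that `np.subtract(contours, 1)` over the whole list implies. Shifting each contour individually sidesteps that:

```python
import numpy as np

# Two contours of different lengths, like skimage.measure.find_contours returns:
contours = [np.array([[1.0, 1.0], [1.0, 3.0], [3.0, 3.0]]),
            np.array([[5.0, 5.0], [5.0, 6.0]])]

# np.subtract(contours, 1) would try to stack these into a single array
# and raise the ValueError; subtracting per contour keeps them separate:
contours = [np.subtract(c, 1) for c in contours]
print(contours[0][0])  # -> [0. 0.]
```

Patching that one line in pycococreatortools.py (line 48 in the traceback) should let the conversion proceed.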

IndexError: list index out of range

Full error:

Traceback (most recent call last):
  File "segment.py", line 108, in <module>
    class_id = [x['id'] for x in CATEGORIES if x['name'] in annotation_filename][0]
IndexError: list index out of range

Here's the code:

INFO = {
    "description": "Fashion Dataset",
    "url": "https://github.com/waspinator/pycococreator",
    "version": "0.1.0",
    "year": 2020,
    "contributor": "Abu Noman Md. Sakib",
    "date_created": datetime.datetime.utcnow().isoformat(' ')
}

LICENSES = [
    {
        "id": 1,
        "name": "GB",
        "url": "GB"
    }
]

CATEGORIES = [
    {
        'id': 1,
        'name': 'mask',
        'supercategory': 'fashion',
    }
]

coco_output = {
    "info": INFO,
    "licenses": LICENSES,
    "categories": CATEGORIES,
    "images": [],
    "annotations": []
}

image_id = 1
segmentation_id = 1

ROOT_DIR = "train"
IMAGE_DIR = "train/images"
ANNOTATION_DIR = "train/annotations"

image_files = [f for f in listdir(IMAGE_DIR) if isfile(join(IMAGE_DIR, f))]
annotation_files = [f for f in listdir(ANNOTATION_DIR) if isfile(join(ANNOTATION_DIR, f))]

# go through each image
for image_filename in image_files:
    image = Image.open(IMAGE_DIR + '/' + image_filename)
    image_info = pycococreatortools.create_image_info(image_id, os.path.basename(image_filename), image.size)

    coco_output["images"].append(image_info)

# go through each associated annotation
for annotation_filename in annotation_files:

    print(annotation_filename)
    class_id = [x['id'] for x in CATEGORIES if x['name'] in annotation_filename][0]
    category_info = {'id': class_id, 'is_crowd': 'crowd' in image_filename}
    binary_mask = np.asarray(Image.open(annotation_filename).convert('1')).astype(np.uint8)

    annotation_info = pycococreatortools.create_annotation_info(segmentation_id, image_id, category_info, binary_mask, image.size, tolerance=2)

    if annotation_info is not None:
        coco_output["annotations"].append(annotation_info)

    segmentation_id = segmentation_id + 1

    image_id = image_id + 1

with open('train/images.json', 'w') as output_json_file:
    json.dump(coco_output, output_json_file)

I still don't know if the code works or not. Please help!
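The IndexError means the list comprehension came back empty for some annotation filename, i.e. a file whose name contains none of the CATEGORIES names ('mask' here), so indexing [0] fails. A guarded lookup reports the offending file instead of crashing (sketch only; note the posted loop also opens annotation_filename without the ANNOTATION_DIR prefix and reuses image_filename from the previous loop, which likely needs fixing too):

```python
CATEGORIES = [{'id': 1, 'name': 'mask', 'supercategory': 'fashion'}]

def class_id_for(annotation_filename):
    # An empty match list used to raise IndexError on [0];
    # report which file failed to match instead.
    matches = [c['id'] for c in CATEGORIES if c['name'] in annotation_filename]
    if not matches:
        raise ValueError(f"no category name found in {annotation_filename!r}")
    return matches[0]

print(class_id_for('photo_01_mask.png'))  # -> 1
```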

Extra annotations

Hi guys, does anyone have a problem adding extra annotations for some images?

Error processing image when using matterport Mask_RCNN

Hi, thanks for your code. I used it to produce a .json file for my dataset, but I have a problem when using it with Mask RCNN: it seems to fail to read the instance (mask) labels. I don't know what's wrong, so I would appreciate your suggestions, thanks.
I modified coco.py to read my .json file and images; then I get the errors below:

Epoch 1/40
ERROR:root:Error processing image {'source': 'coco', 'path': '/home/he/sundata/SUNCGxS_IMG/train/color/008557.jpg', 'annotations': [{'iscrowd': 0, 'bbox': [46.0, 194.0, 29.0, 20.0], 'category_id': 1, 'area': 496, 'image_id': 7355, 'segmentation': [[53.0, 213.5, 46.5, 211.0, 46.0, 195.5, 73.5, 195.0, 74.0, 210.5, 53.0, 213.5]], 'width': 640, 'height': 480, 'id': 74576}, {'iscrowd': 0, 'bbox': [299.0, 0.0, 235.0, 222.0], 'category_id': 23, 'area': 13517, 'image_id': 7355, 'segmentation': [[332.0, 205.5, 306.5, 203.0, 304.5, 49.0, 305.5, 19.0, 311.5, 18.0, 301.0, 19.5, 298.5, 15.0, 310.0, 14.5, 313.0, 11.5, 318.0, 13.5, 319.0, 10.5, 324.0, 12.5, 326.0, 9.5, 336.0, 11.5, 338.0, 8.5, 345.0, 10.5, 353.0, 9.5, 355.0, 6.5, 371.0, 7.5, 373.0, 4.5, 380.0, 6.5, 451.0, 0, 499.0, 32.5, 501.0, 35.5, 478.5, 46.0, 499.5, 75.0, 494.0, 95.5, 459.5, 54.0, 452.5, 35.0, 449.0, 5.5, 376.5, 14.0, 372.5, 42.0, 365.5, 60.0, 332.5, 104.0, 327.5, 117.0, 334.5, 165.0, 332.0, 205.5], [377.5, 11.0, 450.5, 3.0, 391.0, 7.5, 377.5, 11.0], [368.5, 12.0, 372.5, 11.0, 359.5, 12.0, 368.5, 12.0], [349.5, 14.0, 355.5, 13.0, 342.5, 14.0, 349.5, 14.0], [331.5, 16.0, 337.5, 14.0, 329.5, 15.0, 331.5, 16.0], [511.0, 221.5, 491.5, 218.0, 490.5, 183.0, 500.5, 132.0, 512.0, 116.5, 530.5, 128.0, 533.5, 170.0, 532.5, 186.0, 500.5, 203.0, 511.0, 221.5]], 'width': 640, 'height': 480, 'id': 74577}, {'iscrowd': 0, 'bbox': [198.0, 204.0, 353.0, 209.0], 'category_id': 10, 'area': 32022, 'image_id': 7355, 'segmentation': [[513.0, 412.5, 484.5, 405.0, 488.0, 330.5, 206.0, 279.5, 203.5, 278.0, 203.5, 262.0, 198.5, 260.0, 198.5, 250.0, 275.0, 229.5, 285.0, 230.5, 287.5, 209.0, 298.0, 203.5, 511.0, 221.5, 515.5, 227.0, 512.5, 260.0, 517.0, 257.5, 536.0, 259.5, 550.5, 282.0, 506.5, 368.0, 528.5, 383.0, 515.5, 403.0, 513.0, 412.5], [464.0, 381.5, 452.5, 379.0, 465.0, 365.5, 464.0, 381.5]], 'width': 640, 'height': 480, 'id': 74578}, {'iscrowd': 0, 'bbox': [0.0, 42.0, 188.0, 174.0], 'category_id': 12, 'area': 5764, 'image_id': 
7355, 'segmentation': [[123.0, 215.5, 88.5, 209.0, 132.0, 202.5, 131.0, 183.5, 0, 197.0, 0, 186.0, 4.0, 184.5, 130.5, 173.0, 129.5, 144.0, 127.0, 140.5, 110.5, 140.0, 113.0, 137.5, 127.0, 137.5, 128.5, 134.0, 124.5, 68.0, 96.0, 64.5, 96.0, 73.5, 86.5, 73.0, 86.0, 83.5, 39.0, 80.5, 33.5, 77.0, 33.0, 68.5, 24.5, 67.0, 23.0, 57.5, 0.0, 55.5, 0.0, 41.5, 182.5, 63.0, 182.0, 66.5, 173.0, 67.5, 130.0, 67.5, 129.5, 72.0, 133.5, 134.0, 136.0, 137.5, 151.5, 138.0, 135.5, 141.0, 136.5, 172.0, 187.5, 175.0, 181.0, 178.5, 137.0, 182.5, 137.5, 201.0, 154.0, 199.5, 184.0, 202.5, 184.5, 205.0, 123.0, 215.5], [93.5, 71.0, 91.0, 63.5, 26.5, 58.0, 28.0, 66.5, 93.5, 71.0], [85.5, 81.0, 84.0, 72.5, 35.5, 69.0, 37.0, 78.5, 85.5, 81.0]], 'width': 640, 'height': 480, 'id': 74579}, {'iscrowd': 0, 'bbox': [2.0, 150.0, 43.0, 36.0], 'category_id': 1, 'area': 1318, 'image_id': 7355, 'segmentation': [[14.0, 184.5, 5.5, 183.0, 1.5, 151.0, 42.0, 150.5, 44.5, 181.0, 14.0, 184.5]], 'width': 640, 'height': 480, 'id': 74580}, {'iscrowd': 0, 'bbox': [63.0, 44.0, 90.0, 152.0], 'category_id': 20, 'area': 7111, 'image_id': 7355, 'segmentation': [[142.0, 58.5, 136.5, 57.0, 135.5, 48.0, 142.0, 47.5, 143.0, 43.5, 142.0, 58.5], [101.0, 53.5, 89.5, 52.0, 101.0, 50.5, 101.0, 53.5], [144.0, 58.5, 144.0, 51.5, 144.0, 58.5], [93.0, 71.5, 74.0, 70.5, 74.5, 63.0, 91.0, 63.5, 93.5, 65.0, 93.0, 71.5], [79.0, 178.5, 69.5, 178.0, 62.5, 108.0, 70.5, 83.0, 86.0, 83.5, 86.5, 73.0, 96.0, 73.5, 95.5, 65.0, 101.0, 64.5, 104.5, 66.0, 127.5, 112.0, 126.5, 137.0, 110.5, 139.0, 126.5, 141.0, 127.5, 173.0, 79.0, 178.5], [147.0, 137.5, 141.5, 137.0, 137.0, 68.5, 143.5, 70.0, 144.0, 78.5, 145.5, 70.0, 147.0, 137.5], [85.0, 81.5, 73.0, 79.5, 73.5, 72.0, 84.0, 72.5, 85.0, 81.5], [145.0, 172.5, 141.5, 141.0, 147.0, 139.5, 148.0, 148.5, 149.5, 141.0, 150.5, 171.0, 145.0, 172.5], [152.0, 195.5, 137.5, 194.0, 143.5, 192.0, 145.0, 181.5, 151.5, 182.0, 151.5, 189.0, 148.5, 192.0, 152.0, 195.5], [132.0, 194.5, 80.5, 189.0, 129.0, 183.5, 
132.0, 194.5], [116.5, 189.0, 113.5, 189.0, 116.5, 189.0]], 'width': 640, 'height': 480, 'id': 74581}, {'iscrowd': 0, 'bbox': [0.0, 205.0, 194.0, 130.0], 'category_id': 26, 'area': 16028, 'image_id': 7355, 'segmentation': [[33.0, 334.5, 30.5, 334.0, 28.5, 321.0, 24.0, 313.5, 0, 308.0, 0.0, 234.5, 187.5, 205.0, 183.5, 229.0, 191.5, 234.0, 193.0, 276.5, 179.0, 276.5, 84.0, 305.5, 84.0, 317.5, 33.0, 334.5]], 'width': 640, 'height': 480, 'id': 74582}, {'iscrowd': 0, 'bbox': [0.0, 64.0, 23.0, 68.0], 'category_id': 8, 'area': 1307, 'image_id': 7355, 'segmentation': [[22.0, 131.5, 0, 131.0, 0.0, 63.5, 15.5, 65.0, 22.0, 131.5]], 'width': 640, 'height': 480, 'id': 74583}, {'iscrowd': 0, 'bbox': [0.0, 209.0, 119.0, 26.0], 'category_id': 13, 'area': 1344, 'image_id': 7355, 'segmentation': [[4.0, 234.5, 0, 234.0, 0.0, 220.5, 82.0, 208.5, 118.5, 214.0, 4.0, 234.5]], 'width': 640, 'height': 480, 'id': 74584}, {'iscrowd': 0, 'bbox': [0.0, 157.0, 7.0, 29.0], 'category_id': 1, 'area': 90, 'image_id': 7355, 'segmentation': [[1.0, 159.5, 0.0, 156.5, 1.0, 159.5], [1.0, 185.5, 0, 171.0, 4.0, 170.5, 6.5, 184.0, 1.0, 185.5]], 'width': 640, 'height': 480, 'id': 74585}, {'iscrowd': 0, 'bbox': [184.0, 182.0, 75.0, 66.0], 'category_id': 1, 'area': 3187, 'image_id': 7355, 'segmentation': [[209.0, 247.5, 205.5, 247.0, 204.0, 240.5, 183.5, 229.0, 187.5, 217.0, 185.5, 209.0, 194.5, 199.0, 193.5, 194.0, 200.0, 188.5, 202.0, 191.5, 201.5, 188.0, 205.0, 185.5, 214.0, 181.5, 216.0, 184.5, 232.0, 184.5, 232.5, 189.0, 241.0, 191.5, 247.5, 198.0, 247.5, 204.0, 250.5, 203.0, 252.5, 210.0, 249.5, 219.0, 257.0, 214.5, 258.5, 217.0, 235.0, 240.5, 209.0, 247.5]], 'width': 640, 'height': 480, 'id': 74586}, {'iscrowd': 0, 'bbox': [0.0, 191.0, 72.0, 30.0], 'category_id': 8, 'area': 1102, 'image_id': 7355, 'segmentation': [[5.0, 220.5, 0, 220.0, 0.0, 197.5, 63.0, 190.5, 71.5, 192.0, 46.0, 195.5, 47.0, 214.5, 5.0, 220.5]], 'width': 640, 'height': 480, 'id': 74587}, {'iscrowd': 0, 'bbox': [328.0, 13.0, 183.0, 
206.0], 'category_id': 20, 'area': 26567, 'image_id': 7355, 'segmentation': [[449.0, 13.5, 446.5, 13.0, 449.0, 13.5], [439.0, 14.5, 436.5, 14.0, 439.0, 14.5], [429.0, 15.5, 426.5, 15.0, 429.0, 15.5], [419.0, 16.5, 416.5, 16.0, 419.0, 16.5], [409.0, 17.5, 406.5, 17.0, 409.0, 17.5], [399.0, 18.5, 396.5, 18.0, 399.0, 18.5], [491.0, 218.5, 396.0, 212.5, 334.5, 205.0, 334.5, 102.0, 365.5, 60.0, 376.0, 24.5, 450.0, 17.5, 453.5, 39.0, 459.5, 54.0, 494.5, 96.0, 490.5, 105.0, 510.5, 116.0, 500.5, 132.0, 493.5, 160.0, 490.5, 183.0, 491.0, 218.5], [389.0, 19.5, 386.5, 19.0, 389.0, 19.5], [379.0, 20.5, 376.5, 20.0, 379.0, 20.5], [328.0, 119.5, 328.0, 116.5, 328.0, 119.5]], 'width': 640, 'height': 480, 'id': 74588}, {'iscrowd': 0, 'bbox': [84.0, 276.0, 406.0, 204.0], 'category_id': 6, 'area': 41269, 'image_id': 7355, 'segmentation': [[440.0, 479.5, 436.5, 479.0, 438.5, 434.0, 436.0, 427.5, 399.0, 417.5, 386.0, 424.5, 359.0, 448.5, 220.0, 403.5, 217.5, 378.0, 211.0, 365.5, 183.0, 357.5, 158.0, 368.5, 154.5, 350.0, 137.5, 345.0, 161.5, 334.0, 159.0, 332.5, 117.0, 321.5, 118.5, 349.0, 88.0, 360.5, 83.5, 306.0, 188.0, 275.5, 488.0, 330.5, 482.5, 444.0, 472.0, 459.5, 460.5, 456.0, 462.5, 407.0, 460.0, 406.5, 443.5, 427.0, 440.0, 479.5], [464.5, 385.0, 465.0, 365.5, 449.5, 382.0, 464.5, 385.0]], 'width': 640, 'height': 480, 'id': 74589}, {'iscrowd': 0, 'bbox': [0.0, 309.0, 158.0, 133.0], 'category_id': 10, 'area': 6296, 'image_id': 7355, 'segmentation': [[0.0, 441.5, 0.0, 308.5, 14.0, 309.5, 26.5, 316.0, 33.0, 355.5, 42.0, 354.5, 69.0, 365.5, 77.0, 365.5, 130.0, 344.5, 148.0, 347.5, 156.5, 353.0, 156.0, 369.5, 117.0, 387.5, 89.0, 377.5, 71.0, 375.5, 8.0, 401.5, 4.5, 412.0, 10.5, 436.0, 0.0, 441.5]], 'width': 640, 'height': 480, 'id': 74590}, {'iscrowd': 0, 'bbox': [41.0, 33.0, 104.0, 160.0], 'category_id': 23, 'area': 4363, 'image_id': 7355, 'segmentation': [[136.0, 57.5, 102.0, 53.5, 100.0, 42.5, 74.0, 45.5, 74.0, 50.5, 42.0, 46.5, 40.5, 44.0, 137.0, 32.5, 144.5, 38.0, 135.5, 39.0, 
136.0, 57.5], [135.5, 37.0, 131.5, 37.0, 135.5, 37.0], [107.5, 40.0, 104.5, 40.0, 107.5, 40.0], [80.5, 43.0, 99.5, 41.0, 75.5, 43.0, 80.5, 43.0], [73.0, 69.5, 46.0, 67.5, 45.5, 62.0, 50.0, 59.5, 74.0, 62.5, 73.0, 69.5], [126.0, 109.5, 106.5, 74.0, 105.0, 65.5, 124.5, 68.0, 126.0, 109.5], [141.0, 137.5, 136.0, 137.5, 133.5, 134.0, 130.0, 67.5, 136.0, 68.5, 137.5, 72.0, 141.0, 137.5], [72.0, 79.5, 48.0, 78.5, 45.5, 70.0, 73.0, 71.5, 72.0, 79.5], [57.0, 180.5, 49.0, 80.5, 70.5, 83.0, 62.5, 108.0, 69.5, 178.0, 57.0, 180.5], [127.0, 137.5, 128.0, 127.5, 127.0, 137.5], [130.0, 173.5, 127.5, 173.0, 126.5, 164.0, 127.0, 140.5, 129.5, 144.0, 130.0, 173.5], [143.0, 172.5, 135.5, 170.0, 136.0, 140.5, 141.5, 141.0, 143.0, 172.5], [143.0, 192.5, 137.5, 192.0, 137.0, 182.5, 143.5, 183.0, 143.0, 192.5], [132.0, 191.5, 129.5, 191.0, 130.0, 183.5, 132.0, 191.5]], 'width': 640, 'height': 480, 'id': 74591}, {'iscrowd': 0, 'bbox': [452.0, 0.0, 188.0, 480.0], 'category_id': 1, 'area': 55519, 'image_id': 7355, 'segmentation': [[639.0, 479.5, 561.0, 479.5, 543.5, 461.0, 578.5, 418.0, 506.5, 368.0, 550.5, 282.0, 500.5, 203.0, 539.0, 184.5, 540.5, 179.0, 529.5, 126.0, 490.5, 105.0, 493.5, 102.0, 499.5, 75.0, 478.5, 46.0, 501.5, 35.0, 452.0, 0, 639.5, 0.0, 639.0, 479.5]], 'width': 640, 'height': 480, 'id': 74592}, {'iscrowd': 0, 'bbox': [0.0, 358.0, 439.0, 122.0], 'category_id': 10, 'area': 32061, 'image_id': 7355, 'segmentation': [[436.0, 479.5, 0.0, 479.5, 0, 442.0, 10.5, 436.0, 4.5, 412.0, 8.0, 401.5, 71.0, 375.5, 89.0, 377.5, 117.0, 387.5, 178.0, 358.5, 188.0, 357.5, 208.0, 363.5, 213.5, 368.0, 218.5, 384.0, 221.0, 418.5, 235.0, 417.5, 265.0, 427.5, 274.0, 433.5, 286.0, 434.5, 319.0, 445.5, 331.0, 453.5, 335.0, 451.5, 353.0, 457.5, 386.0, 424.5, 399.0, 417.5, 436.0, 427.5, 438.5, 434.0, 436.0, 479.5]], 'width': 640, 'height': 480, 'id': 74593}, {'iscrowd': 0, 'bbox': [0.0, 0.0, 96.0, 16.0], 'category_id': 34, 'area': 807, 'image_id': 7355, 'segmentation': [[4.0, 15.5, 0, 15.0, 0.0, 0, 
95.5, 0.0, 4.0, 15.5]], 'width': 640, 'height': 480, 'id': 74594}], 'width': 640, 'height': 480, 'id': 7355}
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/mask_rcnn-2.1-py3.5.egg/mrcnn/model.py", line 1710, in data_generator
    use_mini_mask=config.USE_MINI_MASK)
  File "/usr/local/lib/python3.5/dist-packages/mask_rcnn-2.1-py3.5.egg/mrcnn/model.py", line 1276, in load_image_gt
    source_class_ids = dataset.source_class_ids[dataset.image_info[image_id]["source"]]
KeyError: 'coco'
[... the same ERROR:root:Error processing image dump and KeyError: 'coco' traceback repeat for each subsequent training image ...]
150.0, 151.0, 157.0], 'category_id': 21, 'area': 9389, 'image_id': 3614, 'segmentation': [[388.0, 306.5, 373.5, 304.0, 388.5, 289.0, 383.5, 284.0, 386.5, 276.0, 373.0, 269.5, 365.5, 262.0, 365.5, 257.0, 371.0, 252.5, 392.0, 249.5, 416.0, 230.5, 423.0, 228.5, 437.5, 205.0, 455.0, 187.5, 465.0, 184.5, 485.0, 186.5, 481.5, 183.0, 490.5, 154.0, 502.0, 149.5, 511.0, 152.5, 516.5, 161.0, 515.5, 169.0, 495.5, 191.0, 512.5, 212.0, 511.5, 220.0, 500.5, 242.0, 487.0, 253.5, 469.0, 256.5, 459.5, 275.0, 452.0, 281.5, 436.0, 290.5, 411.0, 292.5, 402.0, 304.5, 388.0, 306.5], [428.5, 250.0, 442.5, 239.0, 450.0, 218.5, 421.0, 242.5, 402.5, 248.0, 428.5, 250.0], [481.5, 242.0, 482.0, 239.5, 481.5, 242.0], [413.5, 273.0, 415.0, 270.5, 409.5, 272.0, 413.5, 273.0]], 'width': 640, 'height': 480, 'id': 36065}], 'width': 640, 'height': 480, 'id': 3614}
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/mask_rcnn-2.1-py3.5.egg/mrcnn/model.py", line 1710, in data_generator
    use_mini_mask=config.USE_MINI_MASK)
  File "/usr/local/lib/python3.5/dist-packages/mask_rcnn-2.1-py3.5.egg/mrcnn/model.py", line 1276, in load_image_gt
    source_class_ids = dataset.source_class_ids[dataset.image_info[image_id]["source"]]
KeyError: 'coco'
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 568, in data_generator_task
    generator_output = next(self._generator)
  File "/usr/local/lib/python3.5/dist-packages/mask_rcnn-2.1-py3.5.egg/mrcnn/model.py", line 1710, in data_generator
    use_mini_mask=config.USE_MINI_MASK)
  File "/usr/local/lib/python3.5/dist-packages/mask_rcnn-2.1-py3.5.egg/mrcnn/model.py", line 1276, in load_image_gt
    source_class_ids = dataset.source_class_ids[dataset.image_info[image_id]["source"]]
KeyError: 'coco'

Traceback (most recent call last):
  File "samples/house/house.py", line 428, in <module>
    augmentation=augmentation)
  File "/usr/local/lib/python3.5/dist-packages/mask_rcnn-2.1-py3.5.egg/mrcnn/model.py", line 2375, in train
  File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 2011, in fit_generator
    generator_output = next(output_generator)
StopIteration

```

Panoptic Polygons Overlapping

I've been trying to build polygons for a model that requires me to convert them to panoptic format. I've been running into issues because I can't make everything a crowd, but if I don't, I get polygons that overlap. I should also note that I have double- and triple-checked (going so far as to subtract all the masks from each other). Is there any way around this?

skimage to scikit-image

Hi, thanks for this tool! Can you please change the skimage dependency to scikit-image so I can just pip install git+this_repo?
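For context, the PyPI distribution is named scikit-image even though the import name is skimage, so the dependency list in setup.py would need the distribution name. A sketch of the suggested change (an excerpt only; the other setup() arguments are omitted here):

```python
from setuptools import setup

setup(
    # ... other arguments unchanged ...
    # PyPI distribution names belong here; "skimage" is only the import name.
    install_requires=["scikit-image", "cython", "numpy"],
)
```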

NameError: name 'pycococreatortools' is not defined

I have installed using this:

!pip install cython
!pip install git+git://github.com/waspinator/[email protected]

But I still get this error. Here is the whole code:

import datetime
import json
import os
from os import listdir
from os.path import isfile, join

import numpy as np
from PIL import Image

INFO = {
    "description": "Fashion Dataset",
    "url": "https://github.com/waspinator/pycococreator",
    "version": "0.1.0",
    "year": 2020,
    "contributor": "Abu Noman Md. Sakib",
    "date_created": datetime.datetime.utcnow().isoformat(' ')
}

LICENSES = [
    {
        "id": 1,
        "name": "GB",
        "url": "GB"
    }
]

CATEGORIES = [
    {
        'id': 1,
        'name': 'mask',
        'supercategory': 'fashion',
    }
]

coco_output = {
    "info": INFO,
    "licenses": LICENSES,
    "categories": CATEGORIES,
    "images": [],
    "annotations": []
}

image_id = 1
segmentation_id = 1

ROOT_DIR = "train"
IMAGE_DIR = "train/images"
ANNOTATION_DIR = "train/annotations"

image_files = [f for f in listdir(IMAGE_DIR) if isfile(join(IMAGE_DIR, f))]
annotation_files = [f for f in listdir(ANNOTATION_DIR) if isfile(join(ANNOTATION_DIR, f))]

# go through each image
for image_filename in image_files:
    image = Image.open(IMAGE_DIR + '/' + image_filename)
    image_info = pycococreatortools.create_image_info(image_id, os.path.basename(image_filename), image.size)

    coco_output["images"].append(image_info)

# go through each associated annotation
for annotation_filename in annotation_files:

    print(annotation_filename)
    class_id = [x['id'] for x in CATEGORIES if x['name'] in annotation_filename][0]
    category_info = {'id': class_id, 'is_crowd': 'crowd' in image_filename}
    binary_mask = np.asarray(Image.open(annotation_filename).convert('1')).astype(np.uint8)

    annotation_info = pycococreatortools.create_annotation_info(segmentation_id, image_id, category_info, binary_mask, image.size, tolerance=2)

    if annotation_info is not None:
        coco_output["annotations"].append(annotation_info)

    segmentation_id = segmentation_id + 1

    image_id = image_id + 1

with open('train/images.json', 'w') as output_json_file:
    json.dump(coco_output, output_json_file)
    

I still don't know whether the code works or not. Please help!
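For what it's worth, a NameError right after a successful install usually just means the name was never bound in the script; pip installing a package does not import anything. In this package the helper functions live in a module of the same name, so the missing line is most likely the import below (an assumption based on the repo's example scripts; adjust to your install):

```python
# Without this line, every call like pycococreatortools.create_image_info(...)
# raises NameError, no matter how the package was installed.
try:
    from pycococreatortools import pycococreatortools
except ImportError:
    # Fallback so the name at least exists while you debug the install.
    pycococreatortools = None
```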

Bug: mismatch between image class names and predefined class names when names share a word.

This line of code finds the wrong class id if one class name is contained in another.
https://github.com/waspinator/pycococreator/blob/master/examples/shapes/shapes_to_coco.py#L102

E.g. if the class name of the image contains race car, it will trigger the first option, because car is a valid match.

CATEGORIES = [
    {
        'id': 1,
        'name': 'car',
        'supercategory': 'vehicles',
    },
    {
        'id': 2,
        'name': 'race car',
        'supercategory': 'vehicles',
    }
]
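One way around the substring collision (a sketch, not a change that exists in the repo) is to prefer the longest category name that occurs in the filename, so race car beats its substring car:

```python
CATEGORIES = [
    {"id": 1, "name": "car", "supercategory": "vehicles"},
    {"id": 2, "name": "race car", "supercategory": "vehicles"},
]

def class_id_for(filename):
    # Collect every category whose name occurs in the filename, then keep
    # the longest one, so "race car" wins over its substring "car".
    matches = [c for c in CATEGORIES if c["name"] in filename]
    return max(matches, key=lambda c: len(c["name"]))["id"] if matches else None

print(class_id_for("photo_race car_1.png"))  # 2, not 1
print(class_id_for("photo_car_1.png"))       # 1
```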

VGG annotation to COCO?

I have labeled my data using the VGG annotator, which provides a JSON file at the end. Is there a way to convert that file into COCO format? Thank you.

Holes are filled

It seems to me that if the annotation sources contain holes, the generated JSON fills them in.

My Json is different from COCO

Hi everyone! I have successfully generated the JSON file for my dataset, but the style is different, as follows:

[screenshot of the generated JSON]
It seems to have extra "" in it; can I resolve this? Thank you!

Circle Annotation Bug

Hello,

When I use your tool, there is a bug.

If I have a rope arranged like a circle, your tool gives me this output:

This is the full mask:
circlecorrect

This is the mask which i use with your tool:
circlecorrect2

And this is when i inspect my data with the annotation file:
circlebug2

Does anybody know how I can solve this problem?

Pycococreator with mutiprocessing

Could we get a version that uses the power of multiprocessing? With very large datasets, the present code takes too much time to process them.

Thanks in advance!

Tutorial for part before pycococreater

Hi,
Thank you for the post; it was really informative. I am new to this, so I am having difficulty converting my dataset to COCO format, specifically the part about using an annotator tool and binary masks. Can you please explain this in detail?
Thanks

COCO sample previews show multiple sample masks

I'm not sure what the deal is, but I set up my dataset as described in your guide. The json file generates fine, but when I generate several random previews, some of my samples have some technicolor nonsense going on.

By that I mean 11 masks (they happen to be consecutive masks in the index) are drawn/assigned over the one sample. I've checked the json file, and it looks like all these annotations have been assigned to the one sample for some reason. Each sample is named according to the suggested format, so I'm not sure what's up.

For example:

image filename: DSC_5409_1.jpg
annotation filename: DSC_5409_1_mask_1.png

image: MVI_0155_1107_140.jpg
annotation: MVI_0155_1107_140_mask_1.png

There's the same number of unique filenames under each directory, and I've stripped the extensions and "_mask_1" from the annotation filenames to ensure they match.

Anyone else have this problem?

===

To explain further, I have a source image, let's say MVI_0155_1107.jpg. This is sampled using a sliding window technique to produce x number of samples saved as MVI_0155_1107_#.jpg, with x going as high as 350 in some cases.

...ah, that's the problem. When it reaches images xx_3.jpg, and starts looking for annotations, it's going to pick up ANY annotations with 'xx_3' in the filename. So, if I have annotations with: xx_3, xx_30, xx_31, xx_32, xx_33, xx_34, xx_35, xx_36, xx_37, xx_38, and xx_39, that leaves me with 11 annotations assigned to one sample.

So it's a matter of improving how the script parses annotation filenames for comparison. I'll try something like this...

#  I'm going to index out what I need.  To do that...

words_to_strip = annotation_filename.split('_')[-2:]  # in our case we'll get ['mask', '1.png']

char_sum = 0
for word in words_to_strip:
    char_sum += len(word)

# We still need to add 2 to account for the underscores before 'mask' and before the index.

char_sum += 2

# Use the sum to index out the part of the filename we want to compare.

annotation_filename_match = annotation_filename[:-char_sum]

Now we can use the image filename to match to the appropriate annotation regardless of naming scheme.

There are probably better ways to do this, so please feel free to suggest! I'm not closing this yet because I haven't tested my method. :)
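A slightly tidier version of the same idea (a sketch that assumes the *_mask_N.png suffix convention): drop the last two underscore-separated pieces with rsplit and compare stems exactly, so xx_3 no longer collects xx_30 through xx_39:

```python
import os

def image_stem(annotation_filename):
    # "DSC_5409_1_mask_1.png" -> "DSC_5409_1": strip the trailing
    # "mask" and "1.png" pieces in one step.
    return annotation_filename.rsplit("_", 2)[0]

def matches(image_filename, annotation_filename):
    # Exact comparison of stems instead of a substring test.
    return image_stem(annotation_filename) == os.path.splitext(image_filename)[0]

print(matches("MVI_0155_1107_3.jpg", "MVI_0155_1107_3_mask_1.png"))   # True
print(matches("MVI_0155_1107_3.jpg", "MVI_0155_1107_30_mask_1.png"))  # False
```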

Error processing image, ValueError: Input image expected to be RGB, RGBA or gray.

I'm getting this error very intermittently during training (approx. 10 times per 1000 iterations). I have variable size images and masks, so I'm thinking this may be an issue with some of the very large images in my dataset (for example, sizes 5456x3632, 2592x1944, etc.). It continues to train without crashing due to the error, but I'm unsure if there will be any negative consequences later on.

ERROR:root:Error processing image {'id': 3452, 'source': 'coco', 'path': '/home/docker_user/data/typeb_data/train2018/00001207.jpg', 'width': 5456, 'height': 3632, 'annotations': [{'id': 8215, 'image_id': 3452, 'category_id': 1, 'iscrowd': 0, 'area': 2581761, 'bbox': [2971.0, 657.0, 2217.0, 2258.0], 'segmentation': [[4691.0, 2914.5, 4657.0, 2912.5, 4640.0, 2909.5, 4595.0, 2909.5, 4571.0, 2903.5, 4522.0, 2879.5, 4503.0, 2841.5, 4491.0, 2839.5, 4475.0, 2833.5, 4446.5, 2818.0, 4443.5, 2767.0, 4438.5, 2735.0, 4432.5, 2722.0, 4416.5, 2710.0, 4411.5, 2696.0, 4400.5, 2676.0, 4397.5, 2662.0, 4400.5, 2642.0, 4408.0, 2632.5, 4424.0, 2631.5, 4431.0, 2639.5, 4455.0, 2643.5, 4470.0, 2643.5, 4508.0, 2630.5, 4510.5, 2627.0, 4511.5, 2617.0, 4516.5, 2603.0, 4539.5, 2552.0, 4523.5, 2530.0, 4501.0, 2512.5, 4454.0, 2491.5, 4420.0, 2471.5, 4399.5, 2449.0, 4388.5, 2427.0, 4375.5, 2394.0, 4362.5, 2368.0, 4352.0, 2360.5, 4348.5, 2355.0, 4343.5, 2324.0, 4355.5, 2302.0, 4370.5, 2280.0, 4363.0, 2251.5, 4343.5, 2256.0, 4337.5, 2282.0, 4322.5, 2312.0, 4298.5, 2340.0, 4323.5, 2445.0, 4323.0, 2447.5, 4316.0, 2449.5, 3752.0, 2482.5, 3726.0, 2481.5, 3725.5, 2475.0, 3729.5, 2470.0, 3795.5, 2398.0, 3817.5, 2370.0, 3819.5, 2347.0, 3819.5, 2338.0, 3817.5, 2331.0, 3808.0, 2324.5, 3786.0, 2319.5, 3731.5, 2383.0, 3698.5, 2411.0, 3692.5, 2460.0, 3699.5, 2481.0, 3708.5, 2486.0, 3717.5, 2504.0, 3715.5, 2533.0, 3717.5, 2566.0, 3715.5, 2579.0, 3714.5, 2614.0, 3710.5, 2630.0, 3669.5, 2660.0, 3658.0, 2684.5, 3627.0, 2707.5, 3589.0, 2727.5, 3548.0, 2758.5, 3481.0, 2758.5, 3381.0, 2704.5, 3343.5, 2657.0, 3318.0, 2632.5, 3296.0, 2617.5, 3251.0, 2623.5, 3196.0, 2619.5, 3161.0, 2606.5, 3119.0, 2583.5, 3094.5, 2547.0, 3051.5, 2510.0, 3027.5, 2478.0, 3009.5, 2442.0, 3000.5, 2408.0, 2977.5, 2379.0, 2972.5, 2333.0, 2970.5, 2262.0, 2973.5, 2150.0, 2996.0, 2127.5, 3021.5, 2108.0, 3037.5, 2080.0, 3045.5, 2059.0, 3058.0, 2039.5, 3095.0, 2021.5, 3116.0, 2004.5, 3139.0, 1963.5, 3204.5, 1931.0, 3200.5, 1922.0, 3185.5, 1910.0, 
3192.5, 1882.0, 3203.5, 1859.0, 3208.5, 1853.0, 3241.5, 1817.0, 3266.0, 1795.5, 3281.5, 1779.0, 3284.5, 1748.0, 3294.5, 1730.0, 3336.0, 1710.5, 3351.0, 1706.5, 3392.0, 1683.5, 3450.0, 1671.5, 3448.5, 1651.0, 3455.5, 1631.0, 3480.0, 1618.5, 3561.0, 1609.5, 3584.0, 1595.5, 3642.0, 1595.5, 3665.5, 1593.0, 3663.0, 1574.5, 3652.0, 1574.5, 3632.0, 1580.5, 3612.0, 1570.5, 3584.5, 1543.0, 3564.5, 1502.0, 3545.5, 1477.0, 3540.5, 1465.0, 3519.5, 1403.0, 3499.5, 1350.0, 3479.5, 1286.0, 3475.5, 1208.0, 3482.5, 1080.0, 3518.5, 940.0, 3556.5, 887.0, 3617.5, 816.0, 3699.0, 751.5, 3815.0, 697.5, 3909.0, 669.5, 4044.0, 664.5, 4087.0, 656.5, 4171.0, 664.5, 4261.0, 682.5, 4266.0, 684.5, 4354.0, 748.5, 4426.5, 823.0, 4504.5, 1012.0, 4522.5, 1098.0, 4517.5, 1195.0, 4480.5, 1320.0, 4391.5, 1451.0, 4354.5, 1513.0, 4347.5, 1531.0, 4305.5, 1662.0, 4328.0, 1669.5, 4390.0, 1682.5, 4433.0, 1697.5, 4477.0, 1725.5, 4500.0, 1754.5, 4524.5, 1772.0, 4538.5, 1851.0, 4574.0, 1871.5, 4661.5, 1963.0, 4672.5, 1975.0, 4701.5, 2025.0, 4728.0, 2035.5, 4764.0, 2059.5, 4820.0, 2100.5, 4858.5, 2147.0, 4886.0, 2187.5, 4928.0, 2208.5, 4930.5, 2211.0, 4972.0, 2273.5, 4996.0, 2279.5, 5020.0, 2282.5, 5102.0, 2338.5, 5156.5, 2406.0, 5183.5, 2459.0, 5187.5, 2496.0, 5157.5, 2543.0, 5145.5, 2587.0, 5133.5, 2603.0, 5124.5, 2628.0, 5122.0, 2631.5, 5090.0, 2648.5, 5077.0, 2653.5, 5058.0, 2686.5, 5037.0, 2700.5, 5011.0, 2724.5, 4989.0, 2741.5, 4983.0, 2744.5, 4947.0, 2751.5, 4919.5, 2792.0, 4853.0, 2868.5, 4795.0, 2895.5, 4771.0, 2902.5, 4737.0, 2909.5, 4709.0, 2903.5, 4691.0, 2914.5]], 'width': 5456, 'height': 3632}, {'id': 8216, 'image_id': 3452, 'category_id': 2, 'iscrowd': 0, 'area': 2590712, 'bbox': [2968.0, 658.0, 2218.0, 2256.0], 'segmentation': [[4669.0, 2913.5, 4641.0, 2909.5, 4588.0, 2909.5, 4553.0, 2901.5, 4527.5, 2874.0, 4510.0, 2848.5, 4498.0, 2839.5, 4470.0, 2831.5, 4459.5, 2812.0, 4439.5, 2800.0, 4437.5, 2785.0, 4437.5, 2728.0, 4419.5, 2718.0, 4400.5, 2677.0, 4390.5, 2663.0, 4402.0, 2642.5, 4424.0, 2631.5, 
4436.0, 2644.5, 4439.0, 2644.5, 4469.0, 2639.5, 4512.0, 2628.5, 4517.5, 2617.0, 4524.5, 2582.0, 4543.5, 2558.0, 4532.0, 2539.5, 4507.0, 2528.5, 4481.0, 2508.5, 4432.0, 2484.5, 4412.5, 2462.0, 4400.5, 2440.0, 4381.5, 2414.0, 4370.5, 2373.0, 4368.0, 2368.5, 4348.5, 2355.0, 4343.5, 2336.0, 4344.5, 2316.0, 4354.5, 2298.0, 4359.5, 2285.0, 4362.5, 2263.0, 4362.5, 2259.0, 4360.0, 2255.5, 4342.5, 2254.0, 4344.5, 2264.0, 4334.5, 2300.0, 4311.5, 2322.0, 4303.5, 2338.0, 4322.5, 2443.0, 4322.0, 2445.5, 4314.0, 2446.5, 3715.0, 2484.5, 3718.5, 2479.0, 3789.5, 2400.0, 3804.5, 2376.0, 3817.5, 2359.0, 3821.5, 2346.0, 3811.0, 2334.5, 3790.0, 2319.5, 3720.0, 2385.5, 3696.5, 2412.0, 3697.5, 2437.0, 3695.5, 2477.0, 3702.5, 2491.0, 3719.5, 2510.0, 3719.5, 2538.0, 3714.5, 2547.0, 3719.5, 2558.0, 3719.5, 2562.0, 3715.5, 2581.0, 3711.5, 2637.0, 3669.0, 2661.5, 3649.0, 2698.5, 3631.0, 2707.5, 3603.0, 2717.5, 3592.0, 2727.5, 3560.0, 2747.5, 3524.0, 2761.5, 3487.0, 2753.5, 3456.0, 2752.5, 3402.0, 2722.5, 3369.5, 2697.0, 3343.5, 2641.0, 3332.5, 2626.0, 3308.0, 2612.5, 3287.0, 2613.5, 3252.0, 2619.5, 3206.0, 2624.5, 3200.0, 2624.5, 3165.0, 2617.5, 3127.0, 2581.5, 3099.0, 2560.5, 3079.0, 2550.5, 3056.0, 2536.5, 3032.5, 2495.0, 3010.5, 2472.0, 2999.5, 2420.0, 2972.5, 2340.0, 2972.5, 2303.0, 2967.5, 2229.0, 2976.5, 2154.0, 2983.5, 2140.0, 3012.5, 2106.0, 3017.0, 2101.5, 3041.5, 2089.0, 3039.5, 2080.0, 3045.5, 2069.0, 3050.5, 2048.0, 3069.0, 2028.5, 3105.0, 2020.5, 3129.0, 1978.5, 3157.0, 1960.5, 3179.0, 1943.5, 3206.5, 1928.0, 3199.5, 1923.0, 3191.5, 1913.0, 3186.5, 1894.0, 3199.5, 1877.0, 3206.5, 1862.0, 3235.5, 1824.0, 3272.5, 1787.0, 3275.5, 1770.0, 3289.0, 1739.5, 3305.0, 1729.5, 3344.0, 1711.5, 3360.0, 1698.5, 3371.0, 1692.5, 3399.0, 1680.5, 3436.0, 1669.5, 3444.0, 1669.5, 3461.0, 1621.5, 3509.0, 1612.5, 3551.0, 1611.5, 3569.0, 1606.5, 3586.0, 1596.5, 3661.5, 1593.0, 3659.0, 1568.5, 3632.0, 1583.5, 3605.5, 1566.0, 3577.5, 1524.0, 3552.5, 1491.0, 3508.5, 1387.0, 3492.5, 1355.0, 3476.5, 1278.0, 
3476.5, 1218.0, 3473.5, 1161.0, 3502.5, 979.0, 3504.5, 973.0, 3510.5, 964.0, 3599.0, 835.5, 3786.0, 705.5, 3867.0, 673.5, 4048.0, 662.5, 4088.0, 661.5, 4125.0, 657.5, 4153.0, 660.5, 4187.0, 673.5, 4287.0, 695.5, 4317.0, 715.5, 4346.0, 739.5, 4421.5, 810.0, 4466.5, 885.0, 4496.5, 941.0, 4516.5, 985.0, 4529.5, 1155.0, 4505.5, 1248.0, 4492.5, 1323.0, 4475.0, 1333.5, 4465.0, 1338.5, 4455.0, 1340.5, 4452.5, 1343.0, 4402.5, 1444.0, 4385.5, 1462.0, 4371.5, 1503.0, 4352.5, 1527.0, 4342.5, 1568.0, 4298.5, 1669.0, 4333.0, 1669.5, 4374.0, 1672.5, 4408.0, 1684.5, 4448.0, 1704.5, 4500.0, 1741.5, 4544.5, 1788.0, 4545.5, 1847.0, 4547.5, 1850.0, 4564.0, 1870.5, 4609.5, 1897.0, 4710.5, 2035.0, 4780.0, 2068.5, 4832.0, 2110.5, 4835.5, 2114.0, 4883.5, 2196.0, 4976.0, 2268.5, 4979.0, 2270.5, 5025.0, 2274.5, 5103.0, 2328.5, 5146.5, 2378.0, 5165.5, 2427.0, 5185.5, 2464.0, 5179.5, 2512.0, 5175.0, 2518.5, 5159.5, 2531.0, 5155.5, 2571.0, 5153.5, 2578.0, 5139.5, 2600.0, 5092.5, 2649.0, 5054.0, 2692.5, 4998.0, 2735.5, 4946.0, 2756.5, 4931.5, 2775.0, 4913.5, 2803.0, 4875.0, 2839.5, 4839.0, 2866.5, 4802.0, 2882.5, 4777.0, 2895.5, 4746.0, 2905.5, 4695.0, 2905.5, 4669.0, 2913.5]], 'width': 5456, 'height': 3632}]}
Traceback (most recent call last):
  File "/root/anaconda3/lib/python3.6/site-packages/mask_rcnn-2.1-py3.6.egg/mrcnn/model.py", line 1695, in data_generator
    use_mini_mask=config.USE_MINI_MASK)
  File "/root/anaconda3/lib/python3.6/site-packages/mask_rcnn-2.1-py3.6.egg/mrcnn/model.py", line 1209, in load_image_gt
    image = dataset.load_image(image_id)
  File "/root/anaconda3/lib/python3.6/site-packages/mask_rcnn-2.1-py3.6.egg/mrcnn/utils.py", line 367, in load_image
    image = skimage.color.gray2rgb(image)
  File "/root/anaconda3/lib/python3.6/site-packages/skimage/color/colorconv.py", line 862, in gray2rgb
    raise ValueError("Input image expected to be RGB, RGBA or gray.")
ValueError: Input image expected to be RGB, RGBA or gray.

TypeError: Expected bytes, got list

Hey @waspinator, thanks for this work. I have a problem with the decoding of RLE.
I set 'is_crowd' to 1 to produce the RLE format in shapes_to_coco.py, but when I use Mask R-CNN from facebookresearch I hit a bug related to your program.

File "C:\Users\DTMLLUAdminUser\Anaconda3\envs\detectron2\lib\site-packages\pycocotools\mask.py", line 91, in decode
return _mask.decode([rleObjs])[:,:,0]
File "pycocotools\_mask.pyx", line 146, in pycocotools._mask.decode
File "pycocotools\_mask.pyx", line 128, in pycocotools._mask._frString
TypeError: Expected bytes, got list

Do you have any idea what I should do to fix this bug? Thank you.

Bug: mismatch of "bbox" and "segmentation"

The dictionary below is the annotation information for the first image, as produced by pycococreator.
This link is an image I use to explain my question.

https://light-tree.tistory.com/123

As far as I know, "bbox" contains the top-left coordinate and the bottom-right coordinate.

As a simple check that the data is fine, I printed the max values of "bbox" and "segmentation".
As I understand it, the max value of "segmentation" and the max value of "bbox" should be the same
(whatever that max value corresponds to, bottom-right x or y).

But the max of "bbox" is 765, which is the bottom-right y coordinate,
while the max of "segmentation" is 787.5, a point on the shoe bottom in my image.

Is this my mistake? How can I fix this?

{"bbox": [93.0, 23.0, 254.0, 765.0], "segmentation": [[320.0, 787.5, 284.0, 787.5, 260.0, 781.5, 233.0, 779.5, 227.5, 771.0, 228.5, 762.0, 233.5, 756.0, 231.0, 742.5, 227.0, 747.5, 220.0, 749.5, 210.0, 749.5, 201.0, 743.5, 196.0, 748.5, 181.0, 748.5, 169.0, 744.5, 148.5, 725.0, 141.5, 707.0, 132.0, 705.5, 122.0, 699.5, 107.5, 680.0, 107.5, 671.0, 115.0, 661.5, 136.0, 648.5, 152.0, 645.5, 153.5, 638.0, 164.0, 631.5, 176.0, 616.5, 204.5, 593.0, 206.5, 588.0, 203.0, 571.5, 196.0, 578.5, 187.5, 575.0, 187.5, 571.0, 195.5, 562.0, 195.5, 557.0, 186.0, 545.5, 157.0, 549.5, 149.5, 542.0, 151.5, 528.0, 144.5, 510.0, 144.5, 493.0, 132.5, 455.0, 129.5, 418.0, 132.5, 374.0, 118.5, 316.0, 109.5, 298.0, 103.5, 267.0, 97.5, 256.0, 92.5, 235.0, 93.5, 218.0, 110.5, 188.0, 136.0, 163.5, 144.5, 161.0, 140.0, 159.5, 133.5, 150.0, 134.5, 129.0, 127.0, 126.5, 112.5, 108.0, 112.5, 101.0, 116.5, 95.0, 115.5, 75.0, 119.5, 70.0, 119.5, 59.0, 146.0, 30.5, 166.0, 22.5, 190.0, 23.5, 215.0, 30.5, 246.5, 65.0, 247.5, 107.0, 253.5, 124.0, 250.5, 159.0, 244.0, 166.5, 226.0, 173.5, 221.5, 181.0, 232.0, 183.5, 245.5, 193.0, 272.5, 237.0, 291.5, 257.0, 299.5, 270.0, 303.5, 284.0, 303.5, 304.0, 299.5, 324.0, 291.5, 341.0, 313.5, 397.0, 324.5, 416.0, 335.5, 428.0, 333.5, 443.0, 322.5, 455.0, 330.5, 491.0, 338.5, 508.0, 337.5, 516.0, 333.0, 519.5, 324.0, 519.5, 320.5, 529.0, 321.5, 539.0, 330.5, 555.0, 331.5, 571.0, 323.5, 583.0, 297.5, 605.0, 292.0, 617.5, 270.5, 621.0, 269.5, 634.0, 274.5, 651.0, 269.5, 657.0, 267.5, 672.0, 268.5, 693.0, 280.5, 727.0, 288.0, 729.5, 296.0, 741.5, 303.0, 741.5, 309.0, 736.5, 317.0, 737.5, 333.0, 755.5, 341.0, 756.5, 346.5, 763.0, 346.5, 768.0, 339.0, 778.5, 320.0, 787.5]], "area": 114927, "id": 1, "category_id": 1, "height": 1080, "image_id": 1, "iscrowd": 0, "width": 1920}

Thanks.
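For reference, COCO's "bbox" field is [x, y, width, height], not a pair of corner coordinates, so the two maxima are not expected to match directly. Checking the numbers above:

```python
x, y, w, h = 93.0, 23.0, 254.0, 765.0  # COCO bbox: [x, y, width, height]

# The right and bottom extents of the box are x + w and y + h.
print(y + h)  # 788.0, which covers the segmentation maximum y of 787.5
print(x + w)  # 347.0, which covers the polygon's maximum x of 346.5
```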

The installed module is not found in Jupyter Notebook

Thanks for sharing your library.
But the module 'pycocotools' is not found in your Jupyter Notebook example 'visualize_coco'
after installation.
I can find a library named 'pycococreatortools' by searching with the 'pip list' command,
but not pycocotools.

Is there any specific requirement for your library?
I currently use an Anaconda 3 environment and I installed it using the Anaconda Prompt.

  • I am completely new to the Python language.

Question: overlapping annotations?

Does this happen to support overlapping masks? For example, 'car' and 'tire'. I can see how that would be represented as different objects in an XML file, but how would that be represented in a JPEG mask? Thanks!

I see that there's the is_crowd parameter, but I'm not sure how that could be determined from a JPEG mask.

Mask RCNN and RLE

Hello everyone,

I am running the matterport Mask_RCNN on my own dataset, created using waspinator/pycococreator.

Everything works fine when I use polygons, but as soon as I use RLE (because I have crowded images), I get an error when trying to display a few images from the training dataset.

I had problems with images with holes when using polygons, and I assume that is why pycococreator uses RLE when the image is crowded. This part works perfectly, but since Mask R-CNN requires polygons, I am stuck in this loop.

Is there a way to convert RLE to polygons? I am not even sure this would solve the problem.
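Not a full answer, but the first half can be sketched without pycotools installed: uncompressed COCO RLE is a list of run lengths alternating background/foreground, starting with background, laid out in column-major order. Once decoded to a binary mask, polygons can be recovered with a contour tracer such as skimage.measure.find_contours (the same routine pycococreator uses for polygon mode):

```python
import numpy as np

def rle_decode(counts, height, width):
    # COCO uncompressed RLE: run lengths alternate 0s and 1s, starting
    # with the 0 (background) run, in column-major (Fortran) order.
    flat = np.zeros(height * width, dtype=np.uint8)
    pos, value = 0, 0
    for run in counts:
        flat[pos:pos + run] = value
        pos += run
        value = 1 - value
    return flat.reshape((height, width), order="F")

mask = rle_decode([2, 3, 4], height=3, width=3)
print(mask.sum())  # 3 foreground pixels
```

Whether polygons extracted this way behave well for genuinely crowded regions is another question; a crowd region with many components yields many small polygons.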

result from shapes_to_coco.py is strange

Hello.
I ran the shapes_to_coco.py code on the shapes dataset, but the result is strange.

[screenshot of the strange result]

I wonder if you can guess where this problem is coming from. Maybe there is a problem in converting the mask.
