
deep-blind-watermark-removal's Introduction

Hi there 👋

🔭 I’m currently working on image/video editing and synthesis. Please visit my website for further details.

⏰ Very busy days! I am trying my best to solve the issues!

📍 We are hiring research interns to publish high-quality research papers! See more information here.

☕️ You can buy me a coffee if you find my projects helpful.

I have co-built a Discord server for text-to-video generation; try it here: Discord.

deep-blind-watermark-removal's People

Contributors

vinthony


deep-blind-watermark-removal's Issues

Modifying COCO dataset class to include 256x256 Random Crop gives CUDA error

Hi, I was trying to train the model on the logo dataset that you provided, without resizing the images.
Instead of a hard resize to 256x256, which distorts the aspect ratio, I decided to use random cropping; however, after a few epochs I run into a CUDA error.

Here is the error:

/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [228,0,0], thread: [0,0,0] Assertion `input_val >= zero && input_val <= one` failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [228,0,0], thread: [1,0,0] Assertion `input_val >= zero && input_val <= one` failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:102: operator(): block: [228,0,0], thread: [2,0,0] Assertion `input_val >= zero && input_val <= one` failed.
... (the same assertion is repeated for every remaining thread of blocks [228,0,0] and [197,0,0]) ...
Traceback (most recent call last):
  File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/deep-blind-watermark-removal_patch/main.py", line 71, in <module>
    main(args)
  File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/deep-blind-watermark-removal_patch/main.py", line 41, in main
    Machine.train(epoch)
  File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/deep-blind-watermark-removal_patch/scripts/machines/VX.py", line 129, in train
    l2_loss,att_loss,wm_loss,style_loss,ssim_loss = self.loss(outputs[0],self.norm(target),outputs[1],mask,outputs[2],self.norm(wm))
  File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/wm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/deep-blind-watermark-removal_patch/scripts/machines/VX.py", line 85, in forward
    att_loss = self.attLoss(pred_ms, mask)
  File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/wm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/wm/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 530, in forward
    return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
  File "/home/qblocks/shashank/Development/Oct_21/Watermark_removal/wm/lib/python3.8/site-packages/torch/nn/functional.py", line 2525, in binary_cross_entropy
    return torch._C._nn.binary_cross_entropy(
RuntimeError: CUDA error: device-side assert triggered

Initially I thought this might be due to errors in the input mask, since BCE expects values between 0 and 1. However, upon printing the values of mask and pred_ms (the prediction), I found that the model's prediction tensor was NaN.

mask tensor([[[[1., 1., 1.,  ..., 0., 0., 0.],
          [1., 1., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          ...,
          [1., 1., 1.,  ..., 0., 0., 0.],
          [1., 1., 1.,  ..., 0., 0., 0.],
          [1., 1., 1.,  ..., 0., 0., 0.]]],


        [[[0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          ...,
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.]]],


        [[[0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          ...,
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.]]],


        [[[0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          ...,
          [1., 0., 0.,  ..., 1., 1., 1.],
          [1., 1., 0.,  ..., 1., 1., 0.],
          [1., 1., 1.,  ..., 1., 0., 0.]]]], device='cuda:0')

pred_mask tensor([[[[nan, nan, nan,  ..., nan, nan, nan],
          [nan, nan, nan,  ..., nan, nan, nan],
          [nan, nan, nan,  ..., nan, nan, nan],
          ...,
          [nan, nan, nan,  ..., nan, nan, nan],
          [nan, nan, nan,  ..., nan, nan, nan],
          [nan, nan, nan,  ..., nan, nan, nan]]],

        ... (the remaining three batch elements are likewise entirely nan) ...

        ], device='cuda:0', grad_fn=<SigmoidBackward>)
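
Since the prediction has already gone through a sigmoid (grad_fn=<SigmoidBackward>), a NaN here points to the network output blowing up earlier in training rather than to bad mask values. One way to fail fast on the Python side, before the device-side assert fires, is to guard the input of the BCE call. A minimal sketch (the names pred_ms and mask follow VX.py's forward shown in the traceback; the guard itself is an addition, not the repository's code):

import torch
import torch.nn.functional as F

def checked_att_loss(pred_ms, mask):
    # Raise a readable Python error instead of letting binary_cross_entropy hit
    # the device-side assert when the prediction leaves [0, 1].
    if torch.isnan(pred_ms).any() or torch.isinf(pred_ms).any():
        raise RuntimeError("pred_ms contains NaN/Inf; check earlier layers or lower the learning rate")
    # Clamp tiny numerical overshoots; genuine NaNs are caught above, not hidden.
    return F.binary_cross_entropy(pred_ms.clamp(0.0, 1.0), mask)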

Following is the code that I am using to crop the patches:

from __future__ import print_function, absolute_import

import os
import csv
import numpy as np
import json
import random
import math
import matplotlib.pyplot as plt
from collections import namedtuple
from os import listdir
from os.path import isfile, join

import torch
# torch.manual_seed(17)
import torch.utils.data as data

from scripts.utils.osutils import *
from scripts.utils.imutils import *
from scripts.utils.transforms import *
import torchvision.transforms as transforms
from PIL import Image
from PIL import ImageEnhance
from PIL import ImageFilter
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

import glob

class COCO(data.Dataset):
    def __init__(self, train, config=None, sample=None, gan_norm=False):

        self.train = []
        self.anno = []
        self.mask = []
        self.wm = []
        self.input_size = config.input_size
        self.normalized_input = config.normalized_input
        self.base_folder = config.base_dir
        self.dataset = train+config.data

        # Note: the attributes above already dereference config, so in practice a
        # config object is required; this flag is the only field with a fallback.
        if config is None:
            self.data_augumentation = False
        else:
            self.data_augumentation = config.data_augumentation

        self.istrain = self.dataset.find('train') != -1
        self.sample = sample if sample is not None else []
        self.gan_norm = gan_norm

        file_paths2 = sorted(glob.glob(join(self.base_folder,'wm_DIV2K','full_*',self.dataset,'image/*')))

        for fl2 in file_paths2:  
            file_name2 = fl2.split('/')[-1]
            self.train.append(fl2)
            self.mask.append(fl2.replace('/image/','/mask/'))
            self.wm.append(fl2.replace('/image/','/wm/'))
            self.anno.append(os.path.join(self.base_folder,'wm_DIV2K','natural',self.dataset,file_name2.split('-')[0]+'.'+file_name2.split('.')[-1]))

        if len(self.sample) > 0:
            self.train = [ self.train[i] for i in self.sample ]
            self.mask = [ self.mask[i] for i in self.sample ]
            self.anno = [ self.anno[i] for i in self.sample ]
            self.wm = [ self.wm[i] for i in self.sample ]  # keep wm aligned with the other lists

        self.trans = transforms.Compose([
                transforms.ToTensor(),
            ])

        print('total Dataset of '+self.dataset+' is : ', len(self.train))


    def __getitem__(self, index):
        img = Image.open(self.train[index]).convert('RGB')
        mask = Image.open(self.mask[index]).convert('L')
        anno = Image.open(self.anno[index]).convert('RGB')
        wm = Image.open(self.wm[index]).convert('RGB')

        W, H = img.size

        # If the image is smaller than the crop size in either dimension,
        # fall back to a (distorting) square resize so RandomCrop has enough pixels.
        if W < self.input_size or H < self.input_size:
            img = img.resize((self.input_size, self.input_size))
            mask = mask.resize((self.input_size, self.input_size))
            anno = anno.resize((self.input_size, self.input_size))
            wm = wm.resize((self.input_size, self.input_size))
            

        i, j, h, w = transforms.RandomCrop.get_params(img, output_size=(self.input_size, self.input_size))
        img = transforms.functional.crop(img,i,j,h,w)
        mask = transforms.functional.crop(mask,i,j,h,w)
        anno = transforms.functional.crop(anno,i,j,h,w)
        wm = transforms.functional.crop(wm,i,j,h,w)
        
        
        img = self.trans(img)
        anno = self.trans(anno)
        mask = self.trans(mask)        
        wm = self.trans(wm)

        return {"image": img,
                "target": anno, 
                "mask": mask, 
                "wm": wm,
                "name": self.train[index].split('/')[-1],
                "imgurl":self.train[index],
                "maskurl":self.mask[index],
                "targeturl":self.anno[index],
                "wmurl":self.wm[index]
                }

    def __len__(self):

        return len(self.train)
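
A minimal harness to exercise the loader on its own could look like the sketch below; the config object only exposes the fields the class above reads, and base_dir is a placeholder that must point at your wm_DIV2K layout (this is a sketch, not the repository's own argparse config):

from types import SimpleNamespace
from torch.utils.data import DataLoader

# Placeholder config exposing only the fields COCO.__init__ reads above.
cfg = SimpleNamespace(
    input_size=256,
    normalized_input=False,
    base_dir='/path/to/datasets',   # must contain the wm_DIV2K folder used in __init__
    data='COCO',                    # so self.dataset becomes e.g. 'trainCOCO'
    data_augumentation=False,       # spelling follows the config key used by the class
)

dataset = COCO('train', config=cfg)
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
batch = next(iter(loader))
# The mask stays within [0, 1] after ToTensor; the crash above comes from pred_ms, not the mask.
print(batch['image'].shape, batch['mask'].min().item(), batch['mask'].max().item())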

Any help would be appreciated!

Other Pre-trained Models

Dear XiaoDong,

When will you be able to upload the pre-trained model trained on the gray images?

Looking forward to your favourable reply.

I can't find COCO_val2014.png

I tried your evaluate.sh and downloaded your datasets (10kgray, 10kmid).
I ran "bash examples/evaluate.sh"
but got: FileNotFoundError: [Errno 2] No such file or directory: '/home/dell/watermark/10kgray/masks/natural/COCO_val2014.png'
What is "COCO_val2014.png"?

On video?

Would this model be available for videos?

Model unable to run on multiple GPUs

I have multiple GPUs and I would like to train the model on them for faster training.
I see that you have already implemented multi-GPU training using nn.DataParallel. There were some bugs in VX.py, which were solved after I converted "self.model" to "self.model.module".

Yet, even after ensuring that I am using "CUDA_VISIBLE_DEVICES=0,1", I still see only GPU0's memory being filled and not GPU1's.

The model gives a CUDA out-of-memory error if I try to use an input size >= 512 with a batch size of 12 or even 8.

Any idea why it is only using 1 of the 2 GPUs?

Thanks
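
For what it's worth, a quick sanity check that PyTorch sees both devices and that DataParallel actually splits the batch (a generic sketch with a stand-in network, not the repository's launch script):

import torch
import torch.nn as nn

print(torch.cuda.device_count())   # should report 2 when CUDA_VISIBLE_DEVICES=0,1 is set

# Stand-in network; replace with the repository's model.
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 3, 3, padding=1))
model = nn.DataParallel(net).cuda()        # call the wrapper itself in forward passes
x = torch.randn(12, 3, 256, 256).cuda()    # DataParallel splits dim 0 (6 images per GPU here)
out = model(x)
print(out.shape)

Note that calling self.model.module(...) bypasses the DataParallel wrapper entirely, so the forward pass runs on a single GPU; .module should only be used for attribute access such as saving weights.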

Supplementary Material

Kindly share your supplementary material, which has the model architecture details mentioned in your paper. Thanks!

Can I get some evaluating metric or something else?

Excuse me, how can I evaluate the results of removing watermarks?

Do you have some metric like mAP or IoU? I just wonder how to evaluate the results of watermark removal.

Thx!
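
For watermark removal the usual numbers are image-restoration metrics on the recovered image (PSNR/SSIM against the watermark-free ground truth), plus IoU or F1 for the predicted mask, rather than mAP. A minimal sketch with scikit-image (assumed to be installed; this is not the repository's evaluation script):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def restoration_metrics(pred, target):
    """pred/target: HxWx3 uint8 arrays (restored image vs. watermark-free ground truth)."""
    psnr = peak_signal_noise_ratio(target, pred, data_range=255)
    # channel_axis needs scikit-image >= 0.19 (older versions use multichannel=True).
    ssim = structural_similarity(target, pred, channel_axis=-1, data_range=255)
    return psnr, ssim

def mask_iou(pred_mask, gt_mask, thr=0.5):
    """pred_mask/gt_mask: HxW arrays in [0, 1]; IoU of the predicted watermark region."""
    p, g = pred_mask > thr, gt_mask > thr
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union > 0 else 1.0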

The results before and after running are different

[attached screenshot showing the differing results]

Code as follows:

import numpy as np
import torch
import torchvision
from PIL import Image

img = Image.open('XX.jpg').resize((256, 256))
image = np.array(img)
ih = torch.tensor(np.transpose(image, (2, 0, 1))) / 255.0
ih = ih.unsqueeze(0)
imoutput, immask, imwatermark = model(ih.cuda())   # model: the loaded network from this repo
# Markdown swallowed the '*' in the original; 'im' presumably means the input 'ih' (an assumption).
imcoarser = imoutput[1] * immask + ih.cuda() * (1 - immask)
imrefine = imoutput[0] * immask + ih.cuda() * (1 - immask)
imwatermark = imwatermark * immask
torchvision.utils.save_image(imcoarser, 'imrefine.jpg')   # note: saves the coarse output as 'imrefine.jpg'

Try on my image

Hi, can I try it on a watermarked image for which I do not have a mask? If so, could you provide an example Colab notebook that removes the watermark from an uploaded image or from a test folder?

Corrupted images in dataset

Thanks for your excellent work! However, I found some corrupted images in the dataset. In 10kgray, when reading val_images/image/COCO_val2014_000000087481-Kaporal_Jeans_Logo-185.png, train_images/image/COCO_val2014_000000260525-Breyers_Logo-163.png, and train_images/image/COCO_val2014_000000510651-University_Of_Oxford_Logo_Text-179.png with imageio.v2.imread, an "image file is truncated" error occurs. In fact, the latter two images do not contain the watermark and are thus not usable.

The three truncated images:

COCO_val2014_000000087481-Kaporal_Jeans_Logo-185
COCO_val2014_000000260525-Breyers_Logo-163
COCO_val2014_000000510651-University_Of_Oxford_Logo_Text-179
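
A quick scan to flag truncated files before training could look like the sketch below (it assumes the 10kgray layout named above, i.e. train_images/image and val_images/image; Pillow raises on truncated PNGs when LOAD_TRUNCATED_IMAGES is left off):

import glob
import os
from PIL import Image, ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = False   # make truncation raise instead of being silently padded

def find_bad_images(root):
    """Return (path, error) pairs under root/{train,val}_images/image that fail a full decode."""
    bad = []
    for path in sorted(glob.glob(os.path.join(root, '*_images', 'image', '*.png'))):
        try:
            with Image.open(path) as img:
                img.load()                # force the full decode, not just the header
        except OSError as err:            # "image file is truncated", bad CRC, etc.
            bad.append((path, str(err)))
    return bad

for path, err in find_bad_images('/path/to/10kgray'):   # placeholder root
    print(path, '->', err)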

Some question about dataset

Thanks for your impressive work.
I have some questions about the datasets in the OneDrive link.
I guess:
10kmid.zip is LOGO-L
10khigh.zip is LOGO-H
10kgray.zip is LOGO-Gray
27kpng.zip is LOGO-30k
I wonder whether this mapping is correct, because there are some differences in naming.
Looking forward to your reply.
