
kaggle-carvana-image-masking-challenge's People

Contributors

metakermit, petrosgk


kaggle-carvana-image-masking-challenge's Issues

Masks format

What is the format of the masks? Is it possible to provide an example?
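For anyone hitting this, a quick way to inspect a mask's format is to load one and print its properties. A small sketch; the file name below is a placeholder:

    import cv2
    import numpy as np

    # Inspect any mask from input/train_masks/ (placeholder file name)
    mask = cv2.imread('input/train_masks/example_01_mask.png', cv2.IMREAD_GRAYSCALE)
    print(mask.shape, mask.dtype)   # spatial size and bit depth
    print(np.unique(mask))          # the pixel values used for background vs. car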

Is the data augmentation performed accurately?

Hi,

Is the following code in train.py correct?

import cv2
import numpy as np

for id in ids_train_batch.values:
    # load the image and its mask, resized to the network input size
    img = cv2.imread('input/train/{}.jpg'.format(id))
    img = cv2.resize(img, (input_size, input_size))
    mask = cv2.imread('input/train_masks/{}_mask.png'.format(id), cv2.IMREAD_GRAYSCALE)
    mask = cv2.resize(mask, (input_size, input_size))
    # random colour-space augmentation (image only)
    img = randomHueSaturationValue(img,
                                   hue_shift_limit=(-50, 50),
                                   sat_shift_limit=(-5, 5),
                                   val_shift_limit=(-15, 15))
    # random geometric augmentation (applied to image and mask together)
    img, mask = randomShiftScaleRotate(img, mask,
                                       shift_limit=(-0.0625, 0.0625),
                                       scale_limit=(-0.1, 0.1),
                                       rotate_limit=(-0, 0))
    img, mask = randomHorizontalFlip(img, mask)
    mask = np.expand_dims(mask, axis=2)
    x_batch.append(img)
    y_batch.append(mask)

From what I understand about data augmentation, the idea is to "add" more training data. The code above, however, updates the original image/mask in place: it applies the image-processing operations to them in series and then uses only the final processed versions as training data. So, effectively, we did not increase the amount of training data. Should we do the following instead?

  • Add the original img/mask to x_batch/y_batch
  • Add every output of the data augmentation algorithms to x_batch/y_batch

I know this will increase training time a lot, but it seems like the right thing to do (a rough sketch of what I mean follows below). Please correct me if I'm wrong.
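For illustration, here is a minimal sketch of the variant proposed above: keep the original sample and append each augmented copy as an extra sample. It reuses the helpers and variables from train.py and is not the repository's actual implementation. Note that because the existing generator draws fresh random augmentations every epoch, the in-place approach also exposes the model to many variants of each image over the course of training, just not within a single batch.

    for id in ids_train_batch.values:
        img = cv2.imread('input/train/{}.jpg'.format(id))
        img = cv2.resize(img, (input_size, input_size))
        mask = cv2.imread('input/train_masks/{}_mask.png'.format(id), cv2.IMREAD_GRAYSCALE)
        mask = cv2.resize(mask, (input_size, input_size))

        # keep the original, un-augmented sample
        x_batch.append(img)
        y_batch.append(np.expand_dims(mask, axis=2))

        # append an independently augmented copy as an extra sample
        aug_img = randomHueSaturationValue(img,
                                           hue_shift_limit=(-50, 50),
                                           sat_shift_limit=(-5, 5),
                                           val_shift_limit=(-15, 15))
        aug_img, aug_mask = randomShiftScaleRotate(aug_img, mask,
                                                   shift_limit=(-0.0625, 0.0625),
                                                   scale_limit=(-0.1, 0.1),
                                                   rotate_limit=(-0, 0))
        aug_img, aug_mask = randomHorizontalFlip(aug_img, aug_mask)
        x_batch.append(aug_img)
        y_batch.append(np.expand_dims(aug_mask, axis=2))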

Thanks,

TypeError on validation_steps in model.fit_generator

I keep getting a TypeError related to validation_steps. I looked it up on Stack Overflow and tried a few suggestions, but I cannot solve this issue. Has anyone solved it?

/home/airikiho/anaconda3/envs/py27/lib/python2.7/site-packages/keras/callbacks.py:494: RuntimeWarning: Early stopping requires val_dice_loss available!
  (self.monitor), RuntimeWarning)
Traceback (most recent call last):
  File "train.py", line 174, in <module>
    validation_steps=np.ceil(float(len(ids_valid_split)) / float(batch_size)))
  File "/home/airikiho/anaconda3/envs/py27/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/airikiho/anaconda3/envs/py27/lib/python2.7/site-packages/keras/engine/training.py", line 1939, in fit_generator
    callbacks.on_epoch_end(epoch, epoch_logs)
  File "/home/airikiho/anaconda3/envs/py27/lib/python2.7/site-packages/keras/callbacks.py", line 77, in on_epoch_end
    callback.on_epoch_end(epoch, logs)
  File "/home/airikiho/anaconda3/envs/py27/lib/python2.7/site-packages/keras/callbacks.py", line 496, in on_epoch_end
    if self.monitor_op(current - self.min_delta, self.best):
TypeError: unsupported operand type(s) for -: 'NoneType' and 'float'
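The RuntimeWarning above suggests EarlyStopping is monitoring val_dice_loss, which is not among the quantities Keras actually logs, so the callback receives None for the current value and the subtraction fails. A minimal sketch of one way to avoid the crash; this is an assumption about the cause, not necessarily the repository's intended fix:

    from keras.callbacks import EarlyStopping

    # The monitored quantity must be a key Keras actually logs
    # (e.g. val_loss, or val_dice_loss only if a metric named dice_loss is compiled in).
    early_stopping = EarlyStopping(monitor='val_loss', patience=4, mode='min', verbose=1)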

Getting StopIteration error

Hey, I am getting a StopIteration error when valid_generator() is called. The error is given below.

StopIteration                             Traceback (most recent call last)
in ()
      6     #callbacks=callbacks,
      7     validation_data=valid_generator(),
----> 8     validation_steps=np.ceil(float(len(ids_valid_split)) / float(batch_size)))

/Users/shivamchandhok/anaconda2/lib/python2.7/site-packages/keras/legacy/interfaces.pyc in wrapper(*args, **kwargs)
     85         warnings.warn('Update your ' + object_name +
     86                       ' call to the Keras 2 API: ' + signature, stacklevel=2)
---> 87         return func(*args, **kwargs)
     88     wrapper._original_function = func
     89     return wrapper

/Users/shivamchandhok/anaconda2/lib/python2.7/site-packages/keras/engine/training.pyc in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, initial_epoch)
   1807                 batch_index = 0
   1808                 while steps_done < steps_per_epoch:
-> 1809                     generator_output = next(output_generator)
   1810
   1811                     if not hasattr(generator_output, '__len__'):

StopIteration:
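A StopIteration at this point usually means the validation generator was exhausted before fit_generator had pulled all validation_steps batches; Keras expects generators to loop over the data indefinitely. A generic sketch of that pattern (load_valid_batch is a hypothetical helper, not something from this repo):

    def valid_generator():
        # Keras generators must yield batches forever; fit_generator stops an epoch
        # after validation_steps batches rather than on StopIteration.
        while True:
            for start in range(0, len(ids_valid_split), batch_size):
                ids_batch = ids_valid_split[start:start + batch_size]
                x_batch, y_batch = load_valid_batch(ids_batch)  # hypothetical helper
                yield np.array(x_batch), np.array(y_batch)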

Something wrong with my code? I am just following your code. Thanks.

ValueError                                Traceback (most recent call last)
in ()
      5     callbacks=callbacks,
      6     validation_data=valid_generator(),
----> 7     validation_steps=np.ceil(float(len(ids_valid_split)) // float(batch_size)))

/home/yang/anaconda3/lib/python3.5/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     86         warnings.warn('Update your ' + object_name +
     87                       ' call to the Keras 2 API: ' + signature, stacklevel=2)
---> 88         return func(*args, **kwargs)
     89     wrapper._legacy_support_signature = inspect.getargspec(func)
     90     return wrapper

/home/yang/anaconda3/lib/python3.5/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_q_size, workers, pickle_safe, initial_epoch)
   1875                                      'a tuple (x, y, sample_weight) '
   1876                                      'or (x, y). Found: ' +
-> 1877                                      str(generator_output))
   1878             if len(generator_output) == 2:
   1879                 x, y = generator_output

ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None

train_mask.csv does not exist

Hello, I put all the training images and masks into the right folders. When I run the network, the following error occurs:
"
IOError: [Errno 2] File Input/Train_masks.csv does not exist: 'Input/Trains_masks.csv'
"

Could you give me a small example (for only 1 class plus background) of how the csv file should look?
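For what it's worth, a sketch of the layout of the competition's train_masks.csv as I remember it: one row per image, with an img column holding the image file name and an rle_mask column holding the run-length-encoded mask. The file names and numbers below are made-up placeholders, so double-check against your own download:

    img,rle_mask
    0cdf5b5d0ce1_01.jpg,879386 40 881253 141 883140 205 ...
    0cdf5b5d0ce1_02.jpg,873779 4 875695 7 877612 9 ...

For a single foreground class plus background, one RLE string per image is enough; each pair of numbers is a start pixel and a run length over the flattened mask.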

running predict_multithreaded.py just halts in the data_loader function

@metakermit thanks for sharing your work. There is a problem I cannot figure out:
when I run predict_multithreaded.py, the code gets stuck in an infinite loop inside the data_loader function. I know little about multithreading and could not find a definitive answer to this problem. I am looking forward to your reply, many thanks :)

Weights can't be saved successfully?

The log says the metric val_dice_loss is not available.
My Keras version is 2.0.8, using the TF backend, TF-GPU version 1.3.0. Here are the logs:

Epoch 1/100
/usr/local/lib/python2.7/dist-packages/keras/callbacks.py:496: RuntimeWarning: Early stopping conditioned on metric `val_dice_loss` which is not available. Available metrics are: dice_coeff,loss,val_loss,val_dice_coeff
  (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
/usr/local/lib/python2.7/dist-packages/keras/callbacks.py:889: RuntimeWarning: Reduce LR on plateau conditioned on metric `val_dice_loss` which is not available. Available metrics are: dice_coeff,loss,lr,val_loss,val_dice_coeff
  (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
/usr/local/lib/python2.7/dist-packages/keras/callbacks.py:405: RuntimeWarning: Can save best model only with val_dice_loss available, skipping.
  'skipping.' % (self.monitor), RuntimeWarning)
113s - loss: 0.1976 - dice_coeff: 0.8820 - val_loss: 1.7334 - val_dice_coeff: 0.0515
Epoch 2/100
108s - loss: 0.0844 - dice_coeff: 0.9491 - val_loss: 0.0929 - val_dice_coeff: 0.9475
Epoch 3/100
109s - loss: 0.0561 - dice_coeff: 0.9678 - val_loss: 0.0456 - val_dice_coeff: 0.9742


Some problem with opencv reading png file

This looks like a bug with cv2 reading the PNG files. I am wondering whether you have experienced something similar.

  File "train.py", line 115, in train_generator
    mask = cv2.resize(mask, (input_size, input_size))
cv2.error: /feedstock_root/build_artefacts/opencv_1490907195496/work/opencv-3.2.0/modules/imgproc/src/imgwarp.cpp:3492: error: (-215) ssize.width > 0 && ssize.height > 0 in function resize
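That (-215) ssize.width > 0 assertion from cv2.resize usually means cv2.imread returned None (missing file, wrong extension, or a bad path), so an empty image reached resize. A minimal defensive check; the path pattern mirrors train.py but should be treated as an assumption:

    mask_path = 'input/train_masks/{}_mask.png'.format(id)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    if mask is None:
        # cv2.imread fails silently and returns None for unreadable files
        raise IOError('Could not read mask: {}'.format(mask_path))
    mask = cv2.resize(mask, (input_size, input_size))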

num_classes?

Is num_classes the filters argument in the code below?

classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(up1)
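For reference, Conv2D's first positional argument is filters, so num_classes there sets the number of output channels, one per class. A sketch with the keyword spelled out; functionally the same line:

    from keras.layers import Conv2D

    # 1x1 convolution producing `num_classes` output channels,
    # each squashed to [0, 1] by the sigmoid activation
    classify = Conv2D(filters=num_classes, kernel_size=(1, 1), activation='sigmoid')(up1)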

Correct architecture for lots of small objects?

Hello,
I have a question regarding the architecture.
I have images like this example, and want to do image segmentation for the white spots.

[image: 0177_1_4]

Do I understand it correctly that in your architecture there are no limitations on the number of objects that can be detected? Are there limitations on the size of the objects?
If not, I think your architecture would be a good starting point for a beginner...

Thanks for a short answer!

Mapping of `val_dice_loss` to the loss function `bce_dice_loss`

While defining the various callbacks, the monitor property is set to the string value val_dice_loss, and while defining the model, the loss property is set to the bce_dice_loss function. But how does Keras come to know that the monitor value val_dice_loss corresponds to the bce_dice_loss function?
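It doesn't map by itself: Keras builds its log keys from the compiled loss (logged as loss/val_loss) and from the names of the functions passed to metrics, prefixing validation values with val_. So a key called val_dice_loss only exists if a metric literally named dice_loss is compiled into the model. A minimal sketch; the import path for the loss helpers is an assumption, and the optimizer choice is incidental:

    # assumed module layout for the repo's loss helpers
    from losses import bce_dice_loss, dice_loss, dice_coeff

    model.compile(optimizer='adam',
                  loss=bce_dice_loss,               # logged as loss / val_loss
                  metrics=[dice_coeff, dice_loss])  # logged as dice_coeff, dice_loss,
                                                    # val_dice_coeff, val_dice_loss

    # Only now can callbacks monitor 'val_dice_loss', because that key is in the logs.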

How to visualize the predicted result?

I follow this code:

    import os
    import cv2
    import numpy as np
    import params  # the repo module providing model_factory()

    model = params.model_factory()
    model.load_weights('./exp/' + exp_dir + '/weights/best_weights.hdf5')

    test_dir = './input_all/venous/test'
    test_output = './input_all/venous/test_result'
    for img in os.listdir(test_dir):
        img_name = img
        img_array = cv2.imread(os.path.join(test_dir, img))
        img_array = img_array / 255
        # add batch_size dim
        img_array = np.expand_dims(img_array, axis=0)

        # get predicted result
        result = model.predict(img_array)

        result = np.squeeze(result, axis=0)
        result = np.reshape(result, (512, 512, 1))
        result = result * 255

        print(np.unique(result))
        print("*" * 80)

I always get results like this:

[4.6687322e-08 4.8040700e-08 5.0133583e-08 ... 2.2982184e+02 2.3062683e+02
2.3154042e+02]

not 0 and 255, as in the mask format?
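Those continuous values are expected: with a sigmoid output the network predicts per-pixel probabilities, not a hard 0/255 mask. A common post-processing step is to threshold the probabilities. This is only a sketch, not necessarily what the repository's prediction script does, and the output file naming is an assumption; it reuses img_name and test_output from the snippet above:

    # work from the raw sigmoid output (before the `* 255` scaling above)
    prob = np.squeeze(model.predict(img_array))           # per-pixel probabilities in [0, 1]
    binary_mask = (prob > 0.5).astype(np.uint8) * 255     # hard 0/255 mask; 0.5 is a tunable threshold
    out_name = img_name.rsplit('.', 1)[0] + '_mask.png'   # output naming is an assumption
    cv2.imwrite(os.path.join(test_output, out_name), binary_mask)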

Issues with the mogrify command

I'm getting a lot of these messages after the mogrify step. I don't know if this is normal?

mogrify: profile 'icc': 'RGB ': RGB color space not permitted on grayscale PNG

The majority of the pngs seem to be generated…

problem about dice_coeff

Thanks for sharing your work, but I have a problem with the loss function.
In dice_coeff, y_true is a label but y_pred is a probability, so what does dice_coeff mean?
I know that when y_pred is also a label, dice_coeff is a proper evaluation metric for this task.
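For reference, the usual "soft" Dice coefficient simply plugs the predicted probabilities into the same overlap formula, so it measures agreement weighted by confidence and stays differentiable for training. A minimal Keras-backend sketch of that standard definition, not necessarily the exact code in this repo (the smooth constant is an assumption):

    from keras import backend as K

    def soft_dice_coeff(y_true, y_pred, smooth=1.0):
        # y_true: binary ground-truth mask, y_pred: per-pixel probabilities in [0, 1]
        y_true_f = K.flatten(y_true)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_f * y_pred_f)
        return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)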

number of classes

Hi,
I was trying to understand your code. Does the number of classes equal the number of channels in the masks? Thanks for your help.
