
eye-in-the-sky's People

Contributors

cybr17crwlr, dependabot[bot], manideep2510


eye-in-the-sky's Issues

libtiff

When I try to install libtiff, it fails with this error:
Building wheel for libtiff (setup.py) ... error

Issue with pretrained weight file model_onehot.h5

I am trying to use the pretrained weight file model_onehot.h5. I changed the file name in all relevant places in test_unet.py. However, when the weight file is loaded, it gives the following error:

ValueError: Dimension 0 in both shapes must be equal, but are 1 and 9. Shapes are [1,1,16,3] and [9,16,1,1]. for 'Assign_82' (op: 'Assign') with input shapes: [1,1,16,3], [9,16,1,1].

It seems there is a dimension mismatch between the saved weights and the freshly built model when model.load_weights(weights_file) is called.
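One way to debug this (a sketch, assuming h5py is installed; `model_onehot.h5` stands in for whatever checkpoint path you use) is to list the weight shapes stored in the file and compare them against the shapes of the model you are building. A mismatch in the final 1×1 convolution usually means the checkpoint was trained with a different number of output classes than the model expects:

```python
import h5py

def checkpoint_weight_shapes(path):
    """Collect the shape of every weight tensor stored in a Keras .h5 file."""
    shapes = {}
    def visit(name, obj):
        # Only datasets carry actual weight arrays; groups are just layer folders.
        if isinstance(obj, h5py.Dataset):
            shapes[name] = tuple(obj.shape)
    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return shapes

# Hypothetical usage, comparing against a freshly built model:
# for name, shape in checkpoint_weight_shapes("model_onehot.h5").items():
#     print(name, shape)
```

If the checkpoint's last-layer kernel ends in a different class count than the `num_class` you pass to `UNet`, rebuilding the model with the checkpoint's class count should make `load_weights` succeed.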

requirements.txt file has incorrect structure

Hi, could you please fix the structure of the requirements.txt file so that anybody can use the pip install -r requirements.txt command? Also, you didn't mention the versions of the libraries you used.
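For reference, a pip-installable requirements.txt lists one requirement per line, optionally pinned with `==`. The packages below match the imports in this repo's scripts, but the pinned versions are guesses for the 2019-era Keras 1.x/2.x + TF 1.x stack, not the authors' tested versions:

```text
numpy==1.16.4
tensorflow-gpu==1.13.1
keras==2.2.4
libtiff==0.4.2
opencv-python==4.1.0.25
scikit-image==0.15.0
matplotlib==3.0.3
Pillow==6.0.0
scipy==1.2.1
```

Note that `scipy` must stay below 1.3 for `scipy.misc.imresize` (used in the model file below) to exist.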

How can I run this code in a Kaggle kernel?

When I try to run the code in a Kaggle kernel it shows an import error for libtiff. Can you help with this? I tried replacing libtiff with PIL, but that raised another error.
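If pylibtiff won't install in the kernel, one workaround (a sketch, not the repo's code) is to replace `TIFF.open(path).read_image()` with a Pillow-based reader. Be aware that Pillow does not handle every multi-band GeoTIFF layout, which may be the error you hit when you tried PIL directly:

```python
import numpy as np
from PIL import Image

def read_tiff(path):
    """Read a .tif into a numpy array, mimicking libtiff's TIFF.read_image()."""
    with Image.open(path) as im:
        return np.array(im)
```

If Pillow refuses the 4-band satellite tiles, the `tifffile` package (pip-installable in Kaggle kernels) is a closer drop-in: `tifffile.imread(path)` also returns a numpy array.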

Library error libtiff

I'm getting the following error when trying to run unet.py:
cannot import name 'tif_lzw' from 'libtiff'

NameError: name 'iou' is not defined in unet.py

Hi, it seems the metric function iou is missing from unet.py.

My stack trace:
File "/macierz/home/s174520/eye-in-the-sky/unet.py", line 84, in UNet
model.compile(optimizer = Adam(lr = 0.000001), loss = 'categorical_crossentropy', metrics = ['accuracy', iou])
NameError: name 'iou' is not defined

Edit:
I checked the commit history. I think you added the iou function to test_unet.py instead of unet.py, and it is never passed to the UNet model.
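For anyone hitting this before a fix lands, here is a minimal per-class IoU in plain NumPy, a sketch of the missing metric assuming one-hot masks whose last axis is the class channel (the repo's actual iou is written against the Keras backend, so this is not a drop-in for model.compile):

```python
import numpy as np

def iou(y_true, y_pred, eps=1e-7):
    """Mean intersection-over-union for one-hot masks of shape (..., num_class)."""
    # Harden the prediction: 1 where a class scores highest, 0 elsewhere.
    pred = (y_pred == y_pred.max(axis=-1, keepdims=True)).astype(np.float32)
    reduce_axes = tuple(range(y_true.ndim - 1))  # everything except the class axis
    inter = (y_true * pred).sum(axis=reduce_axes)
    union = (y_true + pred - y_true * pred).sum(axis=reduce_axes)
    # eps guards against empty classes (0/0); average over classes.
    return float(((inter + eps) / (union + eps)).mean())
```

A Keras-metric version would replace the NumPy reductions with `K.sum` and `K.argmax` so it stays inside the TensorFlow graph.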

MemoryError

I ran into some trouble: when I read all the train_x images I get a MemoryError. The image size is 720068004, which confuses me. Could you help me?
The content of train.py is:

#!usr/bin/env python
#-*- coding:utf-8 _*-
#@author:mqray
#@file: train.py
#@time: 2019/6/28 12:35

import glob, os
import numpy as np
import matplotlib.pyplot as plt
from libtiff import TIFF
from funcs import *
from keras.preprocessing.image import ImageDataGenerator
import model  # assuming the U-Net definition file below is importable as `model`

model = model.UNet(16)

train_src_filelist = glob.glob(r'E:\2019rscup_segamentation\data\main_train\src\*.tif')
train_label_filelist = glob.glob(r'E:\2019rscup_segamentation\data\main_train\label\*.tif')
val_src_filelist = glob.glob(r'E:\2019rscup_segamentation\data\main_val\src\*.tif')
val_label_filelist = glob.glob(r'E:\2019rscup_segamentation\data\main_val\label\*.tif')
test_src_filelist =  glob.glob(r'E:\2019rscup_segamentation\data\main_test\*.tif')
print(train_src_filelist)
# training set
train_x = []
for train_src in train_src_filelist:
    tif = TIFF.open(train_src)
    img = tif.read_image()
    crop_lists = crops(img)
    train_x = train_x + crop_lists
    # print(train_x.dtype)
# print(len(train_src_tmp))

trainx = np.asarray(train_x)


train_y = []
for train_label in train_label_filelist:  # was train_label_filelist[0], which iterates the characters of one path
    tif = TIFF.open(train_label)
    img = tif.read_image()

    crop_lists = crops(img)
    train_y = train_y + crop_lists
trainy = np.asarray(train_y)


# validation set
val_x = []
for val_src in val_src_filelist:
    tif = TIFF.open(val_src)
    img= tif.read_image()

    crop_lists = crops(img)
    val_x = val_x + crop_lists
valx = np.asarray(val_x)

val_y =[]
for val_label in val_label_filelist:
    tif = TIFF.open(val_label)
    img = tif.read_image()

    crop_lists = crops(img)
    val_y = val_y + crop_lists
valy = np.asarray(val_y)

color_dict = {0:(0,200,0),
              1:(150,250,0),
              2:(150,200,150),
              3:(200,0,200),
              4:(150,0,250),
              5:(150,150,250),
              6:(250,200,0),
              7:(200,200,0),
              8:(200,0,0),
              9:(250,0,150),
              10:(200,150,150),
              11:(250,150,150),
              12:(0,0,200),
              13:(0,150,200),
              14:(0,200,250),
              15:(0,0,0)}

'''
One-hot encode the label values
'''
trainy_hot = []
for i in range(trainy.shape[0]):
    # rgb_to_onehot expects a label image, not a file path
    hot_img = rgb_to_onehot(trainy[i], color_dict)
    trainy_hot.append(hot_img)
trainy_hot = np.asarray(trainy_hot)

val_hot = []
for i in range(valy.shape[0]):
    # same fix: pass the cropped label image, not a file path
    hot_img = rgb_to_onehot(valy[i], color_dict)
    val_hot.append(hot_img)
val_hot = np.asarray(val_hot)

trainy  = trainy / np.max(trainy)
valy  = valy / np.max(valy)

# data augmentation

datagen_args = dict(rotation_range=45.,
                         width_shift_range=0.1,
                         height_shift_range=0.1,
                         shear_range=0.2,
                         zoom_range=0.2,
                         horizontal_flip=True,
                         vertical_flip=True,
                         fill_mode='reflect')
x_datagen = ImageDataGenerator(**datagen_args)
y_datagen = ImageDataGenerator(**datagen_args)
seed = 1
batch_size = 16
x_datagen.fit(trainx, augment=True, seed=seed)  # trainx (array), not train_x (list)
y_datagen.fit(trainy, augment=True, seed=seed)
x_generator = x_datagen.flow(trainx, batch_size=batch_size, seed=seed)
y_generator = y_datagen.flow(trainy, batch_size=batch_size, seed=seed)
train_generator = zip(x_generator, y_generator)
X_datagen_val = ImageDataGenerator()
Y_datagen_val = ImageDataGenerator()
X_datagen_val.fit(valx, augment=True, seed=seed)
Y_datagen_val.fit(valy, augment=True, seed=seed)
X_test_augmented = X_datagen_val.flow(valx, batch_size=batch_size, seed=seed)
Y_test_augmented = Y_datagen_val.flow(valy, batch_size=batch_size, seed=seed)
test_generator = zip(X_test_augmented, Y_test_augmented)
history = model.fit_generator(train_generator, validation_data=test_generator, validation_steps=batch_size/2, epochs = 10, steps_per_epoch=len(x_generator))
model.save("model_augment.h5")



# history = model.fit(train_src_tmp,trainy_hot,epochs=1,validation_data=(val_src_x,val_hot),batch_size=1,verbose=1)
# model.save('model_onehot.h5')

print(history.history.keys())
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('acc')
plt.xlabel('epoch')
plt.legend(['train','val'], loc='upper left')
plt.savefig('acc_plot.jpg')
plt.show()
plt.close()

print(history.history.keys())
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','val'], loc='upper left')
plt.savefig('loss_plot.jpg')
plt.show()
plt.close()

and the model file is:

#!usr/bin/env python
#-*- coding:utf-8 _*-
#@author:mqray
#@file: uunet.py
#@time: 2019/6/24 10:48

import PIL
from PIL import Image
import matplotlib.pyplot as plt
from libtiff import TIFF
from libtiff import TIFFfile, TIFFimage
from scipy.misc import imresize
import numpy as np
import glob
import cv2
import os
import math
import skimage.io as io
import skimage.transform as trans
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras.preprocessing.image import ImageDataGenerator
from keras import backend as K


# %matplotlib inline

def UNet(num_class,shape=(512, 512, 4)):
    # Left side of the U-Net
    inputs = Input(shape)
    #    in_shape = inputs.shape
    #    print(in_shape)
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='random_normal')(inputs)
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv1)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='random_normal')(pool1)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv2)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='random_normal')(pool2)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv3)
    conv3 = BatchNormalization()(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='random_normal')(pool3)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv4)
    conv4 = BatchNormalization()(conv4)
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)

    # Bottom of the U-Net
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='random_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv5)
    conv5 = BatchNormalization()(conv5)
    drop5 = Dropout(0.5)(conv5)

    # Upsampling Starts, right side of the U-Net
    up6 = Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='random_normal')(
        UpSampling2D(size=(2, 2))(drop5))
    merge6 = concatenate([drop4, up6], axis=3)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='random_normal')(merge6)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv6)
    conv6 = BatchNormalization()(conv6)

    up7 = Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='random_normal')(
        UpSampling2D(size=(2, 2))(conv6))
    merge7 = concatenate([conv3, up7], axis=3)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='random_normal')(merge7)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv7)
    conv7 = BatchNormalization()(conv7)

    up8 = Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='random_normal')(
        UpSampling2D(size=(2, 2))(conv7))
    merge8 = concatenate([conv2, up8], axis=3)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='random_normal')(merge8)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv8)
    conv8 = BatchNormalization()(conv8)

    up9 = Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='random_normal')(
        UpSampling2D(size=(2, 2))(conv8))
    merge9 = concatenate([conv1, up9], axis=3)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='random_normal')(merge9)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv9)
    conv9 = Conv2D(16, 3, activation='relu', padding='same', kernel_initializer='random_normal')(conv9)
    conv9 = BatchNormalization()(conv9)

    # Output layer of the U-Net with a softmax activation
    conv10 = Conv2D(num_class, 1, activation='softmax')(conv9)

    model = Model(inputs=inputs, outputs=conv10)  # `input=`/`output=` only work in very old Keras

    model.compile(optimizer=Adam(lr=0.000001), loss='categorical_crossentropy', metrics=['accuracy'])

    model.summary()

    # filelist_modelweights = sorted(glob.glob('*.h5'), key=numericalSort)

    # if 'model_nocropping.h5' in filelist_modelweights:
    #   model.load_weights('model_nocropping.h5')
    return model
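The MemoryError above is plausible from arithmetic alone. A sketch, where 7200 × 6800 × 4 is my reading of the image size quoted in the report: holding every scene (and every 512×512 crop of it) in Python lists as float arrays multiplies the footprint far past typical RAM, so the usual fix is to stream crops from disk with a generator instead of materializing them all:

```python
# Back-of-the-envelope memory estimate (assumed numbers, not measured ones):
# one 7200 x 6800 scene with 4 bands, stored as float32.
bytes_per_scene = 7200 * 6800 * 4 * 4  # H * W * bands * sizeof(float32)
gib = bytes_per_scene / 2**30
print(f"one scene as float32: {gib:.2f} GiB")  # ~0.73 GiB before any crops

# Holding, say, 20 such scenes' worth of crops at once:
print(f"20 scenes: {20 * gib:.1f} GiB")
```

Keeping the pixels as uint8 until just before training, or yielding crops lazily from a generator passed to fit_generator, keeps peak usage to a batch at a time.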

Please mention the validation set in README.md

Hi, I think everyone who presented in this competition played it a bit dirty by not mentioning the validation set. I'd like to request that you list the images used in the validation set, so that readers are not misled by the results if they try to reproduce them. By the way, nice code; we'll also make our code public soon :)

resource exhausted error on GTX 1070

Dear friend!
When I use your code, I get a "resource exhausted" error.
Could you tell me what GPU you used? Mine is a GTX 1070 (8 GB).
Could you also give me some advice that doesn't require changing my GPU?

Thanks! Hope to get your help!
