
active-learning-bayesian-convolutional-neural-networks's People

Contributors

riashat


active-learning-bayesian-convolutional-neural-networks's Issues

Monte Carlo sampling?

The paper describes computing variational inference via Monte Carlo estimates. Where is the Monte Carlo estimation performed in the code?
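
(For context, a minimal sketch of what the Monte Carlo estimation would look like, written in modern Keras syntax rather than the repo's Keras 1 code; mc_dropout_predict and T are illustrative names, not from the repository. Each forward pass with dropout kept active is one Monte Carlo sample, and averaging the passes approximates the predictive distribution.)

`
import numpy as np

# Hedged sketch, not the repo's code: T stochastic forward passes with
# dropout active, averaged into an approximate predictive distribution.
def mc_dropout_predict(model, x, T=100):
    # model(x, training=True) samples a fresh dropout mask on every call,
    # i.e. one Monte Carlo draw from the approximate weight posterior.
    probs = np.stack([model(x, training=True).numpy() for _ in range(T)])
    return probs.mean(axis=0)
`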

How do you implement the BCNN?

In Active-Learning-Bayesian-Convolutional-Neural-Networks/ConvNets/active_learning/BCNN_cifar10.py, the architecture of the model is still a plain CNN rather than a BCNN. Moreover, the training method in this file is SGD + momentum, which is the standard CNN recipe rather than a BCNN one (a BCNN would typically be trained with something like Bayes by Backprop). So how do you implement the BCNN in Keras in your experiment?
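
(One hedged reading: in the MC-dropout framework of Gal & Ghahramani, which this repository appears to follow, a standard dropout CNN trained with SGD already corresponds to approximate variational inference, so no Bayes-by-Backprop training loop is needed; the Bayesian behaviour comes from keeping dropout stochastic at prediction time. A self-contained illustration in modern Keras syntax, not the repo's Keras 1 code:)

`
import numpy as np
from tensorflow import keras

# Illustrative only: a dropout CNN trained normally, queried both
# deterministically and stochastically.
model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(32, 3, activation='relu'),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation='softmax'),
])
x_batch = np.random.rand(4, 32, 32, 3).astype('float32')

det = model.predict(x_batch)                    # dropout off: deterministic CNN
sample = model(x_batch, training=True).numpy()  # dropout on: one posterior sample
`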

Standard deviation is not properly calculated in the Segnet code

Hello,

In the code for the standard deviation acquisition function (Segnet), the standard deviation does not appear to be calculated correctly. Lines 236-237 in the demo read:

    for d_iter in range(dropout_iterations):
        L = np.append(L, All_Dropout_Scores[t, r+10])

Notice that the All_Dropout_Scores index doesn't use d_iter, so the L array effectively contains copies of the same value. The calculated STD is therefore 0. This would explain why the results for this approach in the related paper are similar to those of random acquisition.
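
(If the intended layout of All_Dropout_Scores is [dropout iteration, pool index], the fix would presumably be to index with d_iter; this is an assumption about the array layout, not a confirmed patch:)

    for d_iter in range(dropout_iterations):
        L = np.append(L, All_Dropout_Scores[d_iter, r+10])  # assumed layout: rows = dropout iterations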

Why doesn't active learning work when applied to the CNN model?

Can you give me some advice? I am trying to apply a traditional active learning method, such as maximal entropy, to a CNN model, but it fails.

The network:
`

# Keras 1.x imports (assumed from the API used below); X_train and
# nb_classes are defined elsewhere in the script.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.optimizers import SGD

model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same',
                        input_shape=X_train.shape[1:]))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

# let's train the model using SGD + momentum (how original).
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

`
The active sampling function:

`
import numpy as np  # assumed import; the snippet uses NumPy throughout

def getData(proba, data, label, batch_data, batch_label, num, flag):
    # batch_size and get_index are defined elsewhere in the script.
    tmpdata = np.empty((num, 3, 32, 32), dtype='float32')
    tmplabel = np.empty((num, 10), dtype='uint8')
    if num == batch_size:
        # Maximal-entropy acquisition: H(y|x) = -sum_c p_c * log2(p_c)
        Class_Log_Probability = np.log2(proba)
        Entropy_Each_Cell = -np.multiply(proba, Class_Log_Probability)
        Entropy = np.sum(Entropy_Each_Cell, axis=1)
        index = select_sort(Entropy, num, flag)
    else:
        index = get_index(flag)
    print(index)

    for i in range(num):
        t = index[i]
        flag[t] = 1  # mark the pool point as acquired
        tmpdata[i] = data[t]
        tmplabel[i] = label[t]
    batch_data = np.vstack([batch_data, tmpdata])
    batch_label = np.vstack([batch_label, tmplabel])
    return batch_data, batch_label, data, label, flag


def select_sort(list_proba, num, flag):
    # Greedy top-num selection: repeatedly pick the largest entropy among
    # points that are not yet selected and not yet acquired (flag == 0).
    list_len = len(list_proba)
    index = []
    while len(index) < num:
        max_index = -1
        max_value = -10
        for j in range(0, list_len):
            if list_proba[j] > max_value and j not in index and flag[j] == 0:
                max_index = j
                max_value = list_proba[j]
        index.append(max_index)
    return index
`
Dataset: CIFAR-10

The result: the random method performs better. Why?
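
(For reference, a vectorized sketch of the same maximal-entropy selection; max_entropy_indices is a hypothetical helper, not part of the repository or the question above:)

`
import numpy as np

# Hypothetical vectorized equivalent of select_sort: rank the pool by
# predictive entropy and return the num highest-entropy unseen points.
def max_entropy_indices(proba, num, flag):
    entropy = -np.sum(proba * np.log2(proba + 1e-12), axis=1)
    entropy[np.asarray(flag, dtype=bool)] = -np.inf  # skip acquired points
    return np.argsort(entropy)[::-1][:num]
`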

Concerns about Deterministic BALD (Softmax_Bald?)

When reading the paper "Deep Bayesian Active Learning with Image Data", I was interested in the results of Figure 2. Specifically, I wanted to replicate the BALD vs. Deterministic BALD comparison.
I followed the file-naming logic, which led me to the code in Softmax_Bald_Q10_N1000.py. So my first question is whether this code is the one behind the results of Deterministic BALD, since it uses predict() instead of stochastic_predict().
Assuming I got that right, I wondered how the average entropy could have been calculated when there is only a single instance of the predictions.
When looking at the code, for softmax_iterations = 1, the values of G_X = Entropy_Average_Pi and F_X = Average_Entropy should be equal, because there is no averaging operation involved. However, when I ran the code, the values in U_X = G_X - F_X were, in fact, not zeroed out as they should have been.
Eventually, it turned out that the empty arrays created before the loop, namely score_All and All_Entropy_Softmax, had the default dtype=np.float64, while the softmax_score resulting from model.predict() was of type np.float32. Hence, subtracting these arrays, or any subsequent results, produces a non-zero difference.

To verify this, it's just a matter of specifying the dtype parameters explicitly:

score_All = np.zeros(shape=(X_Pool_Dropout.shape[0], nb_classes), dtype=np.float32)
All_Entropy_Softmax = np.zeros(shape=X_Pool_Dropout.shape[0], dtype=np.float32)

Or remove the loop altogether, since it only runs for one iteration anyway.
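
(A minimal standalone reproduction of the dtype effect, using made-up probabilities; it shows that evaluating the same entropy in float64 vs. float32 leaves a small non-zero residue instead of exact zeros:)

`
import numpy as np

# Made-up probabilities; the point is only the dtype interaction.
p32 = np.random.rand(4, 10).astype(np.float32)
p32 /= p32.sum(axis=1, keepdims=True)

p64 = p32.astype(np.float64)
entropy64 = -np.sum(p64 * np.log2(p64), axis=1)  # float64 path (like score_All)
entropy32 = -np.sum(p32 * np.log2(p32), axis=1)  # float32 path (like softmax_score)
print(entropy64 - entropy32)  # ~1e-7 noise rather than exact zeros
`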
