
mnist_challenge's Issues

About versions

When I try to run this code, I always run into version incompatibilities.
Could you tell me which versions of Python and TensorFlow you used?

Release Model

Are you going to release the challenge model very soon? It is Oct 19th now. Thanks!

Adversarial performance for MNIST models varies widely with different random seed initializations

Hello, thank you for the repo. I'm trying to reproduce the results you reported in the paper. In the process, I used eight different seeds to initialize the weight matrices, and I observe that the adversarial performance varies widely for the standard model without adversarial training (for eps = 0.3, the range is 0.02%-2.2%). Have you tested the model with different random initialization seeds, and do you also see such high variance in the performance? Thank you very much!
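For anyone comparing runs like this, a minimal sketch of pinning the random seeds (the seed value and the NumPy-based initialisation below are illustrative assumptions; in the TF1 code you would additionally call tf.set_random_seed):

```python
import random

import numpy as np

SEED = 0  # arbitrary illustrative choice
random.seed(SEED)
np.random.seed(SEED)
# In TF1-style code, also pin the graph-level seed before building the model:
# tf.set_random_seed(SEED)

# With the seeds pinned, a truncated-normal-style initialisation is reproducible:
w_a = np.random.normal(0.0, 0.1, size=(3, 3))
np.random.seed(SEED)
w_b = np.random.normal(0.0, 0.1, size=(3, 3))
```

Note this only removes initialisation variance; data shuffling and GPU nondeterminism can still make runs differ.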

Why can epsilon be larger than 1?

To whom it may concern,
I have read your paper Towards Deep Learning Models Resistant to Adversarial Attacks and have tried the code provided in this project.
Thanks for your good work! However, I still have some questions about the paper and the code.
Some experiments in your paper use epsilon = 8, but according to the code every perturbed pixel is clipped to the range 0 to 1. And when I set epsilon to a number larger than 1, I cannot reproduce the accuracy you report in the paper. So why can epsilon be set larger than 1? Am I misunderstanding something in the paper?
Looking forward to your reply.
Best wishes,
May 22nd, by JY
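For what it's worth, one common source of this confusion (an assumption here, not a confirmed answer from the authors) is pixel scale: an epsilon of 8 on a [0, 255] pixel range corresponds to 8/255 ≈ 0.031 on the [0, 1] range this code uses. A small NumPy sketch of the projection step on the [0, 1] scale:

```python
import numpy as np

# Hypothetical scale conversion: eps = 8 on [0, 255] pixels
# is eps = 8/255 on the [0, 1] pixels used by this code.
eps_255 = 8.0
eps_01 = eps_255 / 255.0  # ~0.0314

# One PGD-style projection step on the [0, 1] scale:
x_nat = np.array([0.2, 0.5, 0.9])               # clean pixels in [0, 1]
x_adv = x_nat + np.array([0.05, -0.05, 0.05])   # candidate perturbed pixels
x_adv = np.clip(x_adv, x_nat - eps_01, x_nat + eps_01)  # project into eps-ball
x_adv = np.clip(x_adv, 0.0, 1.0)                # keep a valid pixel range
```

Under this reading, setting epsilon to a value above 1 on [0, 1] pixels allows any image to be mapped to any other, so reported accuracies would not be comparable.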

Question about reproducing your results

Hello! I tried to reproduce your results but only got an adversarial accuracy of 90.44%, which is less than the 93.2% reported in your paper. Did you use parameters different from those in this code?

Parameters for training the robust adv_trained/secret networks?

Could I know how many iterations you used to train the adv_trained/secret networks (the ones you have published)? They are indeed very robust against CW attacks when I tested them on MNIST. I am trying to train a robust network from scratch: I ran your source code with the untouched settings in config.json (iterations = 100,000 and batch size = 50), but I cannot get a network that is very robust against the CW attack. I wonder if I am not training with enough iterations. Any help would be appreciated!
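For reference, the settings mentioned above would sit in config.json roughly as follows (a sketch only; the exact field names are assumptions, so check them against the repo's actual config.json):

```json
{
  "max_num_training_steps": 100000,
  "training_batch_size": 50
}
```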

About random restarts

Hi, interesting work! I have some questions about random restarts and hope you can answer them.
I saw from the code that it basically adds a random perturbation before the PGD steps, but I don't understand the meaning of "20 restarts" in Table 1 of your paper. Does it mean attacking 20 times and taking the average? By the way, is this restart very important for the final performance? If you test with no random restarts, can it retain the accuracy, or will it get worse? Thanks
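As I understand the usual evaluation protocol (an assumption on my part, not the authors' answer), restarts do not average: the attack is run 20 times from different random starting points, and an example counts as broken if any restart succeeds, i.e. the worst case over restarts is kept. A toy NumPy sketch of PGD with restarts on a one-dimensional surrogate loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x):
    # toy objective standing in for the network's loss on one example
    return np.sin(5 * x) + 0.5 * x

def grad(x):
    return 5 * np.cos(5 * x) + 0.5

def pgd(x_nat, eps=0.3, step=0.05, n_steps=40):
    # random start inside the eps-ball, then projected sign-gradient ascent
    x = x_nat + rng.uniform(-eps, eps)
    for _ in range(n_steps):
        x = x + step * np.sign(grad(x))
        x = np.clip(x, x_nat - eps, x_nat + eps)
    return x

def pgd_with_restarts(x_nat, n_restarts=20, **kw):
    # keep the single worst-case perturbation over all restarts, not the
    # average: one successful restart is enough to count the example as broken
    candidates = [pgd(x_nat, **kw) for _ in range(n_restarts)]
    return max(candidates, key=loss)

x_adv = pgd_with_restarts(0.0)
```

Because PGD only finds a local maximum of a non-concave loss, extra random starts can only help the attack, which is why reported robust accuracy usually drops (or stays flat) as restarts are added.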

What does this line do?

Hi,
What does this line of code do? How was the constant of 50 decided? I am trying to reproduce this in PyTorch.

Many thanks
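The issue does not reproduce the line it links to, but a constant like 50 often appears as the confidence margin kappa in a CW-style (Carlini-Wagner) loss; the NumPy sketch below is an assumption about what such a line computes, not a verified reading of the repo:

```python
import numpy as np

def cw_margin_loss(logits, label, kappa=50.0):
    # CW-style margin: positive while the correct logit still beats the best
    # wrong logit by less than kappa; it reaches zero only once the attack has
    # pushed a wrong logit ahead of the correct one by the full margin, so a
    # larger kappa demands higher-confidence adversarial examples.
    correct = logits[label]
    wrong = np.max(np.delete(logits, label))
    return max(correct - wrong + kappa, 0.0)

logits = np.array([10.0, 2.0, -1.0])
print(cw_margin_loss(logits, label=0))  # 10 - 2 + 50 = 58.0
```

Under that reading, 50 would be a tuning choice (how confidently misclassified an example must be) rather than a derived constant; a PyTorch port could treat it as a hyperparameter.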

tf.Variable() seemingly cannot be replaced with tf.get_variable()

I replaced tf.Variable() in the functions _weight_variable and _bias_variable with tf.get_variable(), and with this change I cannot train a robust network that resists the CW attack. In contrast, when I run the unchanged source code, it does train a robust network. I am really confused about why this happens. The following is the only code I changed. Please help.

@staticmethod
def _weight_variable(shape, name):
    initial = tf.initializers.truncated_normal(stddev=0.1)
    return tf.get_variable(shape=shape, name=name, initializer=initial)

@staticmethod
def _bias_variable(shape, name):
    initial = tf.constant(0.1, shape=shape)
    return tf.get_variable(name=name, initializer=initial)
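One possible explanation (an assumption, since the snippet does not show how these helpers are called): tf.Variable always creates a fresh variable, while tf.get_variable looks the name up in a per-scope registry and, depending on the reuse mode, either raises or silently returns the existing variable when the same name is requested twice, so layers that accidentally pass the same name end up sharing one set of weights. A plain-Python sketch of that registry behaviour (an illustration, not TF code):

```python
import numpy as np

_registry = {}  # stands in for a TF variable scope's variable store

def get_variable(name, shape, rng, reuse=False):
    # Mimics tf.get_variable: look the name up first, create only if absent.
    if name in _registry:
        if not reuse:
            raise ValueError(f"Variable '{name}' already exists")
        return _registry[name]  # shared storage: callers see one array
    _registry[name] = rng.standard_normal(shape) * 0.1
    return _registry[name]

rng = np.random.default_rng(0)
w1 = get_variable("W_conv", (3, 3), rng, reuse=True)
w2 = get_variable("W_conv", (3, 3), rng, reuse=True)
# w1 and w2 are the same array: two "layers" would share and co-update
# one weight tensor, which could easily hurt robust training.
```

If this is the cause, giving every layer a unique name (or a unique variable scope) should make the get_variable version match the tf.Variable version.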

Requirements to reproduce the results

Hello, thanks for your code. Could you provide a requirements file stating which libraries are required and their corresponding versions, such as the version of TensorFlow? Thanks a lot.
