madrylab / mnist_challenge
A challenge to explore adversarial robustness of neural networks on MNIST.
License: MIT License
For the reported results with 100 iterations, is the eps_iter/"a" value (in config.json) still 0.01?
Hello, great work. For the loss L = Lfadv + αLGAN + βLhinge in the paper, where can I find it in the code?
Is there a PyTorch version of this challenge? TensorFlow usually conflicts with my PyTorch setup, since I use Advertorch as the attack library, and I ran into version conflicts between torch and tensorflow.
When I try to run this code, I always hit a version incompatibility. Could you tell me which versions of Python and TensorFlow you used?
Are you going to release the challenge model very soon? It is Oct 19th now. Thanks!
Is there any adversarial attack whose added noise survives a resize attack? (adversarial image -> converted to a high/low resolution image -> resized back to the original adversarial image size)
Hello, thank you for the repo. I'm trying to reproduce the results that you reported in the paper. In the process, I used different seeds to initialize the weight matrix (8 different seeds) and I observe that the adversarial performance varies widely for the standard model without adversarial training (for eps = 0.3, the range is 0.02%-2.2%). I wonder whether you have tested the model with different random initialization seeds and whether you also see such high variance in the performance. Thank you very much!
To whom it may concern,
I have read your paper Towards Deep Learning Models Resistant to Adversarial Attacks and tried the code provided in this project.
Thanks for your great work! However, I still have some questions about the paper and the code.
Some experiments in your paper use epsilon equal to 8, but according to the code every perturbed pixel is clipped to the range 0 to 1. When I set epsilon to a value larger than 1, I cannot reproduce the accuracy reported in the paper. So how can epsilon be set larger than 1? Am I misunderstanding something in the paper?
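A likely resolution of the question above is a scale mismatch: epsilon values quoted as 8 in the paper refer to the 0-255 pixel scale (the CIFAR10 experiments), while this MNIST code operates on images scaled to [0, 1] (where config.json uses eps = 0.3). A minimal sketch of the conversion, assuming the 0-255 convention:

```python
# Hedged sketch: rescaling an L-infinity budget quoted on the 0-255
# pixel scale (e.g. eps = 8) to the [0, 1] scale this MNIST code uses.
def eps_to_unit_scale(eps_255):
    """Convert a perturbation budget from 0-255 pixels to [0, 1] images."""
    return eps_255 / 255.0

print(eps_to_unit_scale(8))  # about 0.0314, well below 1
```

So an epsilon of 8 on the 0-255 scale is a small fraction of the [0, 1] range and never conflicts with the per-pixel clipping.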
Looking forward to your reply.
Best Wishes,
May 22nd, by JY
Hi authors,
The CW attack in this repo is:
Line 34 in 1c61741
However, the original cw attack implementation is:
wrong_logit = tf.reduce_max((1-label_mask) * model.pre_softmax - 1e4*label_mask, axis=1)
I also found that the repo MadryLab/cifar10_challenge has fixed this issue in commit 958b8f.
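To illustrate why the `- 1e4*label_mask` term matters, here is a hedged numpy sketch (not the repo's TF code): without the large negative penalty, the masked true-class slot contributes 0 to the max, which can exceed every wrong logit whenever all wrong logits are negative, corrupting the CW margin.

```python
import numpy as np

# label_mask is one-hot on the true class; logits are pre-softmax scores.
logits = np.array([2.0, -1.0, -3.0])   # class 0 is the true class
label_mask = np.array([1.0, 0.0, 0.0])

correct_logit = np.max(label_mask * logits)  # 2.0

# Without the penalty, the masked true-class slot is 0, and the max
# picks 0 instead of the largest *wrong* logit (-1.0):
buggy_wrong = np.max((1 - label_mask) * logits)                     # 0.0
fixed_wrong = np.max((1 - label_mask) * logits - 1e4 * label_mask)  # -1.0
```

The penalized version always selects the largest logit among the wrong classes, which is what the original CW formulation intends.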
Hello! I tried to reproduce your result but only got adversarial accuracy of 90.44% which is less than the 93.2% reported in your paper. Did you use different parameters from those in this code?
Could I know how many iterations you used to train the adv_trained/secret networks (the ones you have published)? They are indeed very robust to CW attacks when I tested them on MNIST. I tried to train a robust network from scratch by running your source code with the unchanged settings in config.json (100,000 iterations and batch size 50), but I cannot get a network that is very robust against the CW attack. I wonder if I am not training with enough iterations. Any help will be appreciated!
Hi, interesting work! I have some questions about random restarts and hope you could answer them for me.
I saw from the code that it basically adds a random perturbation before the PGD steps, but I don't understand the meaning of "restarts 20" in Table 1 of the paper. Does it mean attacking 20 times and taking the average? Also, is this restart important for the final performance? Did you test without random restarts, and does the accuracy stay the same or get worse? Thanks
Hi,
What does this line of code do? How was the constant of 50 decided? I am trying to reproduce this in PyTorch.
Many thanks
Hi. I feel that you should keep this challenge open and accept submissions even though you have released your secret model. Any particular reason for closing this on Oct 15th?
I just replaced tf.Variable() in the functions _weight_variable and _bias_variable with tf.get_variable(), and now training no longer produces a network robust to the CW attack. In contrast, running the unchanged source code does train a robust network. I am really confused about why this happens. The following is the only code I changed. Please help.
    @staticmethod
    def _weight_variable(shape, name):
        initial = tf.initializers.truncated_normal(stddev=0.1)
        return tf.get_variable(shape=shape, name=name, initializer=initial)

    @staticmethod
    def _bias_variable(shape, name):
        initial = tf.constant(0.1, shape=shape)
        return tf.get_variable(name=name, initializer=initial)
Hello, thanks for your code. Could you provide a requirements file stating which libraries are needed and their versions, such as the TensorFlow version? Thanks a lot.
I saw the line x = x_nat + np.random.uniform(-self.epsilon, self.epsilon, x_nat.shape)
in function perturb
in class LinfPGDAttack
which adds random noise to the original image, but there is no separate code for a random restarting point. I am not sure whether the random restart step can be omitted.
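On the question above: the uniform-noise line is itself the random start; restarts come from invoking the attack repeatedly, so no separate restart code appears inside perturb. A minimal numpy sketch of an L-infinity PGD attack with that random start (`grad_fn` is a hypothetical loss-gradient oracle, not repo code):

```python
import numpy as np

def pgd_perturb(x_nat, grad_fn, epsilon=0.3, a=0.01, k=40, rng=None):
    rng = rng or np.random.default_rng()
    # Random starting point inside the eps-ball (the line quoted above).
    x = x_nat + rng.uniform(-epsilon, epsilon, x_nat.shape)
    for _ in range(k):
        x = x + a * np.sign(grad_fn(x))                   # ascent step
        x = np.clip(x, x_nat - epsilon, x_nat + epsilon)  # project to ball
        x = np.clip(x, 0.0, 1.0)                          # valid pixel range
    return x
```

Calling pgd_perturb several times on the same input yields the "random restarts" evaluated in the paper; with the random start removed, every call would follow the identical deterministic trajectory.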