
evademl-zoo's People

Contributors

fqdhlyc, mzweilin


evademl-zoo's Issues

Optional parameters

Hi mzweilin, is it possible to use only the defense part, i.e., without specifying any attacks?

For example, with this command:

python3 main.py --dataset_name ImageNet --model_name MobileNet --nb_examples 10 --balance_sampling --detection "FeatureSqueezing?squeezers=bit_depth_5,median_filter_2_2,non_local_means_color_11_3_4&distance_measure=l1&threshold=1.2128;"

import error in main.py

Hello, I would like to ask whether the datasets import in the main program refers to Hugging Face's datasets library. The problem I'm having now is that I cannot import the module from datasets.
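(A quick way to check, assuming the repository ships its own local datasets package that must shadow any pip-installed library of the same name; this is a sketch, not a confirmed diagnosis:)

import datasets
# If this prints a site-packages path, a pip-installed "datasets" library
# (e.g. Hugging Face's) is shadowing the repository's local package. Run
# main.py from the EvadeML-Zoo root, or remove/rename the conflicting
# package so the local datasets/ directory wins the import.
print(datasets.__file__)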

Dependency broken issue

Hi,

Thank you very much for opening such great source code.

I tried to reproduce your paper but found that the example command line no longer works with today's cleverhans and keras.

python main.py --dataset_name MNIST --model_name carlini
--nb_examples 2000 --balance_sampling
--attacks "FGSM?eps=0.1;"
--robustness "none;FeatureSqueezing?squeezer=bit_depth_1;"
--detection "FeatureSqueezing?squeezers=bit_depth_1,median_filter_2_2&distance_measure=l1&fpr=0.05;"

Can you share the specific keras and cleverhans versions that this code depends on?
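(When filing this kind of report, a generic way to capture the environment; this sketch only reports what is installed and does not assert a known-good set of versions:)

import cleverhans
import keras
import tensorflow

# Print the versions actually installed, so they can be compared against
# whatever the code was originally developed with.
print("tensorflow:", tensorflow.__version__)
print("keras:", keras.__version__)
print("cleverhans:", cleverhans.__version__)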

How can I use custom models rather than pre-trained models?

To cite your paper (Feature Squeezing), I would like to run separate training and detection tests on the CIFAR10 and MNIST datasets using my own models instead of the pre-trained ones.

For CIFAR10, I want to experiment with the Classifiers.get('resnet34') model provided by from classification_models.keras import Classifiers.

For MNIST, I want to experiment with a simple custom model, as shown below.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Keras 2 syntax; the original used the Keras 1 form Convolution2D(32, 5, 5, ...).
model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(1, img_rows, img_cols), activation='relu'))  # channels-first input
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(nb_classes, activation='softmax'))

I hope you can help.
regards

########################################################################
python main.py --dataset_name CIFAR-10 --model_name RESNET34
--nb_examples 7329 --balance_sampling True
--attacks "FGSM?eps=0.2;"
--clip 1
--robustness "none;FeatureSqueezing?squeezer=median_filter_3_3;"
--detection "FeatureSqueezing?squeezers=median_filter_3_3&distance_measure=l1&fpr=0.05;"

python main.py --dataset_name MNIST --model_name CUSTOM_MODEL
--nb_examples 9790 --balance_sampling True
--attacks "FGSM?eps=0.4;"
--clip 1
--robustness "none;FeatureSqueezing?squeezer=bit_depth_1;"
--detection "FeatureSqueezing?squeezers=bit_depth_1&distance_measure=l1&fpr=0.05;"
########################################################################

FileNotFoundError

Hi, I tried to run:

python main.py --dataset_name ImageNet --model_name MobileNet --attacks "fgsm?eps=0.0078;" --detection "FeatureSqueezing?squeezers=bit_depth_5,median_filter_2_2&distance_measure=l1&fpr=0.05;"

But I get: FileNotFoundError: [Errno 2] No such file or directory: '/Users/mac/Downloads/EvadeML-Zoo-master/externals/universal/python/__init__.py'. I really couldn't find the file at that path; how should I get it?

Looking forward to your reply
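(A guess rather than a confirmed diagnosis: the externals/ code appears to be vendored as git submodules, so fetching them, and creating the package marker if it is still absent, may resolve the error:)

git submodule update --init --recursive
touch externals/universal/python/__init__.py   # hypothetical workaround, only if the file is still missing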

the wrong densenet package

Hello sir,
I have seen that in your models/densenet_models.py you used the package "densenet",
but the only "densenet" package I could find was written in 2020, at https://github.com/okason97/DenseNet-Tensorflow2.
So I don't know which package you used in your code; can you send me the package that you used?
Thank you very much!

About the python version and core dumped error

Hi,

Sorry to disturb you. I tried to run the code on Python 3.5, but numpy 1.13.3 required Python 3.7 and tensorflow-gpu 1.1.0 requires Python 2.7, 3.5, or 3.6. So I tried Python 3.5 + tensorflow-gpu 1.1.0 + numpy 1.10.0, which results in a core dumped error when running the code. May I ask the exact Python version used for the implementation? Thank you.

what is the difference between the listed pretrained models

Hi, I want to ask what the difference is between the "PGDbase" and "PGDtrained" models listed here for the MNIST dataset. Also, there seem to be two Cleverhans MNIST models, which I found in your code here; what is the difference between these two? Finally, can you provide training details on how these models were pretrained, for example, how many epochs were used to train each model, or other settings?

pre-generated adversarial examples' original labels

Hi,

Appreciate your detailed code and tutorial for reproducing the experiments. For the pre-generated adversarial examples, is there any way I can find the original labels? I am trying to calculate the SAE rate, but without the original labels I have no idea which images are successful adversarial examples.
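(For reference, a minimal sketch of the bookkeeping once the original labels are available; model, X_legit, X_adv, and y_true are placeholders, not names from this repository:)

import numpy as np

# An example is a successful adversarial example (SAE) if the model
# classifies the original correctly but misclassifies the adversarial version.
pred_legit = np.argmax(model.predict(X_legit), axis=1)
pred_adv = np.argmax(model.predict(X_adv), axis=1)
sae_mask = (pred_legit == y_true) & (pred_adv != y_true)
print("SAE rate: %.2f%%" % (100.0 * sae_mask.mean()))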

thanks

How to reproduce the ImageNet results

Dear Mzweilin,
First of all, thank you for your incredible work.

Anyway, I had some trouble reproducing the paper's results.
More precisely, I tried to reproduce the ImageNet results (FGSM attack).
So, I used the ImageNet validation set, with the same settings as the paper for the feature squeezing detection:

  • bit depth 5, median filter 2x2, and non-local means 11-3-4 as the squeezers
  • 1.2128 as the threshold

So, I got ~23% as the attack detection rate (instead of ~43%).
Why is this result so different from the paper's?

To help you figure out what I did,
I report the input command line along with the output.

Input:

pipenv run python main.py --dataset_name ImageNet --model_name MobileNet --attacks "fgsm?eps=0.0078;" --detection 
"FeatureSqueezing?squeezers=bit_depth_5,median_filter_2_2,non_local_means_color_11_3_4&distance_measure=l1&threshold=1.2128;"

Output:

  Running attack: fgsm {'eps': 0.0078}
 Loading adversarial examples from [ImageNet_100_6cf69_mobilenet_fgsm?eps=0.0078.pickle].

 ---Attack (uint8): fgsm?eps=0.0078
 Success rate: 99.00%, Mean confidence of SAEs: 99.47%
 ### Statistics of the SAEs:
 L2 dist: 3.0134, Li dist: 0.0078, L0 dist_value: 98.5%, L0 dist_pixel: 99.4%
 ===Adversarial image examples are saved in  results/ImageNet_100_6cf69_mobilenet/ImageNet_100_6cf69_mobilenet_attacks_0b2d7_examples.png
 Loaded an existing detection dataset.
 Loaded a pre-defined threshold value 1.212800
 Detector: FeatureSqueezing?squeezers=bit_depth_5,median_filter_2_2,non_local_means_color_11_3_4&distance_measure=l1&threshold=1.2128
 Accuracy: 0.570000      TPR: 0.224490   FPR: 0.098039   ROC-AUC: 0.692677
 Detection rate on SAEs: 0.2292    11/ 48         fgsm?eps=0.0078
 Overall detection rate on SAEs: 0.229167 (11/48)
 ### Excluding FAEs:
 Overall TPR: 0.229167   ROC-AUC: 0.688725
 Overall detection rate on FAEs: 0.0000     0/  1

As you can see, I also got 57% accuracy.
Do I have to calculate the attack detection rate relative to that 57%? (e.g. 23 * 100/57 ~= 40%)
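(Restating that arithmetic explicitly, under the assumption that normalizing by the detector's accuracy is what is intended:)

# From the log above: 11 of 48 SAEs were detected, and accuracy was 0.57.
detection_rate = 11 / 48              # ~0.229
normalized = detection_rate / 0.57    # ~0.402, i.e. the ~40% in the question
print(detection_rate, normalized)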

Did I make some mistake (in the command line)?
And/or is my last assumption right?

Thanks.

The score threshold selection in train phase

Hi,
thank you for your excellent work. Here is some of my confusion.
I notice in the paper's Section V (experimental setup) that the training phase only needs legitimate examples and does not depend on adversarial examples. But the implementation in the build_detection_dataset function in base.py is:
import random

random.seed(1234)
length = len(X_detect)
train_ratio = 0.5
# Sample half of X_detect (legitimate and adversarial examples mixed) for training.
train_idx = random.sample(range(length), int(train_ratio * length))
train_test_seq = [1 if idx in train_idx else 0 for idx in range(length)]

And I changed the code to:
train_idx = range(len(X_leg_all))
Is that right?

Anyway, both settings get 100% accuracy on my CW-attack adversarial data.
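(For context, a minimal sketch of the legitimate-only threshold selection described in the paper; distances_legit is a placeholder for the squeezing distances of legitimate training examples, and this is not the repository's code:)

import numpy as np

def select_threshold(distances_legit, fpr=0.05):
    # Pick the threshold so that at most `fpr` of legitimate examples
    # exceed it, i.e. hit the target false positive rate on the training set.
    return np.percentile(distances_legit, 100 * (1 - fpr))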

Besides, the commented code in the evaluate_detections function on line 239 is:
# Example: --detection "FeatureSqueezing? distance_measure=l1&squeezers=median_smoothing_2,bit_depth_4;"
But the squeezer median_smoothing_2 is not in the squeezer_list?

Does median_filter correspond to local smoothing, and do non_local_means_color and non_local_means_bw correspond to non-local smoothing?

Thank you in advance. Looking forward to your reply!

adversarial MNIST sample

Hi,

I want to generate adversarial MNIST dataset.

Is it right to run main.py like this?

python main.py --dataset_name MNIST --attacks "FGSM?eps=0.1;"

If this is right, which output file contains the adversarial examples that I can use?
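(Judging by the log output quoted in the ImageNet reproduction issue above, attack results are cached as pickle files under results/; a hypothetical loading sketch with a placeholder path, assuming the pickle holds an array of adversarial inputs:)

import pickle

# Placeholder path: the real name encodes dataset, sample count, a hash,
# the model, and the attack spec, e.g. "..._fgsm?eps=0.1.pickle".
with open("results/<run_dir>/<run_name>_fgsm?eps=0.1.pickle", "rb") as f:
    X_adv = pickle.load(f)
print(type(X_adv))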

ImportError

I read a really good paper.
After following Example 5 in the README, an ImportError occurred, and I'm asking what to do.

ImportError: No module named models.carlini_models

image preprocessing

Hi,
First of all thank you for this interesting work.
I have a question about the image preprocessing. In the file keras_models.py there is a function named scaling_tf():

def scaling_tf(X, input_range_type):
    """
    Convert to [0, 255], then subtract the means and convert to BGR.
    """
    if input_range_type == 1:
        # The input data range is [0, 1].
        # Convert to [0, 255] by multiplying by 255.
        X = X * 255
    elif input_range_type == 2:
        # The input data range is [-0.5, 0.5].
        # Convert to [0, 255] by adding 0.5 element-wise, then multiplying by 255.
        X = (X + 0.5) * 255
    elif input_range_type == 3:
        # The input data range is [-1, 1].
        # Convert to [0, 1] by x/2 + 0.5, then to [0, 255].
        X = (X / 2 + 0.5) * 255

    # Caution: results in zero gradients.
    # X_uint8 = tf.clip_by_value(tf.rint(X), 0, 255)
    red, green, blue = tf.split(X, 3, 3)
    X_bgr = tf.concat([
            blue - VGG_MEAN[0],
            green - VGG_MEAN[1],
            red - VGG_MEAN[2],
            # TODO: swap 0 and 2. Should be 2,1,0 according to Keras' original code.
        ], 3)

    # x[:, :, :, 0] -= 103.939
    # x[:, :, :, 1] -= 116.779
    # x[:, :, :, 2] -= 123.68
    return X_bgr

In this code you split the channels of the image into red, green, and blue in order to perform the conversion to BGR.

There is a TODO comment which says that VGG_MEAN[0] should be swapped with VGG_MEAN[2].

I want to ask whether you recommend making this modification.
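(For comparison, a sketch of the preprocessing Keras itself applies for VGG-style models in "caffe" mode; this is offered as context, not as this repository's code:)

import numpy as np

def keras_style_preprocess(x):
    # x: float array of RGB images in [0, 255], shape (N, H, W, 3).
    x = x[..., ::-1]  # flip RGB -> BGR
    # Subtract the per-channel ImageNet means in BGR order.
    x = x - np.array([103.939, 116.779, 123.68])
    return x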

Thank you for your time.
Have a nice day.
