This repository contains the implementation of three adversarial example attack methods (FGSM, I-FGSM, MI-FGSM) and one defense, defensive distillation, against all attacks, using the MNIST dataset.
The code:

```python
# converting target labels to soft labels
for data in train_loader:
    input, label = data[0].to(device), data[1].to(device)
    softlabel = F.log_softmax(modelF(input), dim=1)
    data[1] = softlabel
```
This does not seem to convert the target labels to soft labels: `data` is a temporary list yielded by the DataLoader, so assigning to `data[1]` only rebinds that local element and has no effect on the labels actually used in the training code that follows.
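One way to avoid the problem is to not mutate the batch at all, and instead compute the teacher's soft labels inside the training step and feed them directly into the loss. Below is a minimal sketch of such a distillation step; the names `teacher`/`student`, the temperature `T`, and the helper `distill_step` are my assumptions, not the repository's actual API (the repo appears to call the teacher `modelF`):

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, inputs, T=20.0):
    """One defensive-distillation training step (hypothetical sketch).

    The student is trained on the teacher's temperature-softened output
    distribution instead of the hard labels, so the soft labels are
    computed and consumed here rather than written back into the batch.
    """
    with torch.no_grad():
        # Teacher soft labels: probabilities, softened by temperature T.
        soft_labels = F.softmax(teacher(inputs) / T, dim=1)
    optimizer.zero_grad()
    log_probs = F.log_softmax(student(inputs) / T, dim=1)
    # Cross-entropy against soft targets: -sum_k p_teacher(k) * log p_student(k)
    loss = -(soft_labels * log_probs).sum(dim=1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point is that the soft labels exist only as a tensor passed into the loss; nothing relies on modifying `data[1]`.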
I am also not sure the MI-FGSM code is correct: the gradient is expected to be recomputed at each iteration (on the current adversarial example) and accumulated into the momentum term, rather than computed once and reused for all iterations.
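For reference, here is a minimal sketch of what I would expect the MI-FGSM loop to look like, following Dong et al. (2018). The function name `mi_fgsm` and its parameters are illustrative, not taken from this repository; the key point is that `torch.autograd.grad` is called inside the loop on the current `x_adv`:

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=0.3, steps=10, mu=1.0):
    """MI-FGSM sketch: fresh gradient each iteration, accumulated with momentum."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # momentum accumulator
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)  # recomputed every step
        # Normalize the gradient by its L1 norm before adding to the momentum.
        norm = grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = mu * g + grad / norm
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

If the repository's implementation computes the gradient once before the loop and only re-applies its sign, that would reduce to a scaled FGSM rather than MI-FGSM.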