cfa_for_anomaly_localization's Issues
Low performance compared to the Table 4 of the paper.
When I run your code, the performance is lower than that listed in Table 4 of the paper. In particular, for the Screw class, the image AUROC is 97.3% in the table, but I got 90.6%. What could cause this difference, and how can I get a value similar to the paper's?
There seems to be no patch operation
Thanks for your code.
The paper mentions a patch transformation, but I cannot find it in the code. Am I missing something?
Inference Speed and CFA++?
Hello, and congrats on this work.
I wonder if you have studied the inference time. The paper doesn't say anything about that.
Do we need to update C(centroid)?
Thanks for your code. I want to check: do we need to update C at the end of each epoch? In Algorithm 1 of the paper, it seems that C is also updated at the end of the for loop.
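For reference, the EMA-style centroid update described in Algorithm 1 can be sketched as follows. This is a minimal illustration, not the repository's code; `ema_update` and the smoothing factor `alpha` are hypothetical.

```python
# Minimal sketch of an exponential-moving-average (EMA) update for a
# centroid c given a new feature phi_p; alpha is a hypothetical
# smoothing factor, not a value taken from the repository.
def ema_update(c, phi_p, alpha=0.9):
    return alpha * c + (1 - alpha) * phi_p

c = 0.0
for phi in [1.0, 1.0, 1.0]:
    c = ema_update(c, phi)
print(round(c, 3))  # 0.271
```

The key property is that each call blends a small fraction of the newest feature into the centroid, so C drifts toward the current feature distribution over iterations.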
The input size of the image
How can I use different h and w for the input image? It seems the method only accepts inputs where w = h.
@sungwool
Thank you!
Random seed
So the performance depends on the random seed. Did you use different seeds for each class? If so, could I ask which seed you used for each class?
In 'train_cfa.py' of your repository, the random seed was set to 1024.
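As a side note, a single fixed seed like the 1024 in train_cfa.py makes runs reproducible rather than tuning results per class. A minimal stdlib illustration of what seeding buys you:

```python
import random

# Re-seeding with the same value reproduces the same random draws; a
# fixed seed like 1024 is for reproducibility, not per-class tuning.
random.seed(1024)
first = [random.random() for _ in range(3)]
random.seed(1024)
second = [random.random() for _ in range(3)]
print(first == second)  # True
```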
convert to ONNX and TRT
Thanks for your code. This is interesting and works well.
Have you tried converting it to TensorRT?
How to reproduce results
Hi,
What is the configuration to reproduce the highest I-AUROC performance as mentioned in your work?
I have tried the configuration below:
- Rd: True
- cnn: wrn50_2
- size: 224
- gamma_c: 1
- gamma_d: 1
But it seems there's still a considerable performance gap against the reported one.
A problem with argument Rd
In this repository, the official example command for training is as below:
python trainer_cfa.py --class_name all --data_path [/path/to/dataset/] --cnn wrn50_2 --Rd False --size 224 --gamma_c 1 --gamma_d 1
However, if you do this, args.Rd always evaluates to True. So if you want to train with args.Rd = False, I suggest running the code with the following command (just remove the Rd argument):
python trainer_cfa.py --class_name all --data_path [/path/to/dataset/] --cnn wrn50_2 --size 224 --gamma_c 1 --gamma_d 1
When I run the code with the Rd argument present (so that args.Rd returns True), I get 90.6% image AUROC on the screw data. After making args.Rd return False, I get 94.5% image AUROC on the screw data.
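The root cause is that argparse with `type=bool` calls `bool("False")`, and any non-empty string is truthy. A common workaround is an explicit string-to-bool converter; this is a sketch, and `str2bool` is a hypothetical helper, not from the repository:

```python
import argparse

# Any non-empty string is truthy, so `--Rd False` with type=bool
# still yields True:
assert bool("False") is True

def str2bool(v):
    # Explicitly map common string spellings to booleans.
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "1"):
        return True
    if v.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError("Boolean value expected.")

parser = argparse.ArgumentParser()
parser.add_argument("--Rd", type=str2bool, default=True)
print(parser.parse_args(["--Rd", "False"]).Rd)  # False
```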
Thank you.
model save
I'm confused about why the model is saved here. From the code, it seems the model has not been trained; it is just a pre-trained model.
torch.save(
    {
        "epoch": epoch,
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    },
    os.path.join(checkpoint_dir, "best.pt"),
)
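One possible explanation: the backbone `model` stays frozen (it is kept in `model.eval()` and the optimizer only receives `loss_fn` parameters), while the learnable state lives in the `loss_fn` (DSVDD) module. A checkpoint that captures the training would therefore need to include that module's state. A hedged sketch, where `save_checkpoint` is a hypothetical helper rather than the repository's code:

```python
import os
import torch

def save_checkpoint(loss_fn, optimizer, epoch, checkpoint_dir):
    # The frozen backbone never changes, so the trained state is in
    # loss_fn (the adapted descriptor / memory bank) and the optimizer.
    os.makedirs(checkpoint_dir, exist_ok=True)
    torch.save(
        {
            "epoch": epoch,
            "loss_fn_state_dict": loss_fn.state_dict(),
            "optimizer_state_dict": optimizer.state_dict(),
        },
        os.path.join(checkpoint_dir, "best.pt"),
    )
```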
Train code
model = model.to(device)
model.eval()
loss_fn = DSVDD(model, train_loader, args.cnn, args.gamma_c, args.gamma_d, device)
loss_fn = loss_fn.to(device)

epochs = 30
params = [{'params': loss_fn.parameters()}]
optimizer = optim.AdamW(params=params,
                        lr=1e-3,
                        weight_decay=5e-4,
                        amsgrad=True)

best_pxl_pro = -1
for epoch in tqdm(range(epochs), '%s -->' % (class_name)):
    r'TEST PHASE'
    test_imgs = list()
    gt_mask_list = list()
    gt_list = list()
    heatmaps = None

    loss_fn.train()
    for (x, _, _) in train_loader:
        optimizer.zero_grad()
        p = model(x.to(device))
        loss, _ = loss_fn(p)
        loss.backward()
        optimizer.step()
    loss_fn.eval()
How to save the model?
@sungwool Hello, I found no model saved after training. How can I save the trained model?
Question on Memory bank Compression vs. PatchCore´s Greedy Coreset Sampling
First of all: really interesting work. I am currently digging into image AD, and I implemented PatchCore for TensorFlow with several additions. I think your approach of feature adaptation for the training images is really interesting. However, I was wondering if you have tried comparing your memory bank compression with the greedy coreset sampling from PatchCore. In my opinion, the two are interchangeable in your algorithm, right? Have you run any experiments on that?
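For comparison, PatchCore-style greedy (k-center) coreset selection can be sketched roughly like this. This is a minimal NumPy illustration of the idea, not either repository's implementation, and `greedy_coreset` is an illustrative name:

```python
import numpy as np

def greedy_coreset(features, n_select, seed=0):
    # Greedy k-center: repeatedly pick the point farthest from the
    # points selected so far, maximizing coverage of the feature space.
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(features.shape[0]))]
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(n_select - 1):
        idx = int(np.argmax(dists))
        selected.append(idx)
        dists = np.minimum(dists,
                           np.linalg.norm(features - features[idx], axis=1))
    return selected

feats = np.random.default_rng(1).normal(size=(100, 8))
subset = greedy_coreset(feats, 10)
print(len(subset), len(set(subset)))  # 10 10
```

Unlike K-means compression, which replaces features with cluster centers, this keeps a subset of the original features, which is part of why the two are often seen as interchangeable plug-ins for a memory bank.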
Pretrained model
Hi @sungwool ,
Do you have any plan to share your pretrained model? Thx~
test image
how do test any image using own weight?
A Problem with the _init_centroid Function Execution
When you execute _init_centroid, the weights and biases of nn.Conv2d inside CoordConv2d in the Descriptor are randomly initialized. How can you ensure that self.C (the memory bank) is a stable feature vector that can represent all the data?
Welcome to chat with me: my WeChat is 18566763313 (China), and my email is [email protected].
error when training own database
What are the Terms of Use?
Thank you for sharing your excellent source code.
This repository does not specify a license, but to what extent is it available?
For example, is it allowed to modify it for commercial use?
The running result is inconsistent with README
I used your source code, but the running result is inconsistent with the README. Taking screw as an example:
- image ROCAUC: 0.946
- pixel ROCAUC: 0.984
- pixel ROCAUC: 0.936
The result is the same after multiple attempts, and I did not modify your hyperparameters.
The results cannot be reproduced.
Hello, the method is interesting, and thanks for your code.
I tried your official code and only got 98.5% AUC at the image level, which is a little lower than the result in your paper. Please check whether the released code is consistent with your original code.
Thanks!
Memory Bank Compression
In the paper it's said: "However, if Ci-1 is updated through EMA iteratively along φi(pt), the final C can store the core normal features representing X." In my opinion, this means that the coreset and the feature adaptation are learned iteratively. But to me it looks like the coreset is initialized once in DSVDD.__init__(), adapted to all training samples in self._init_centroid, and then compressed with K-means. Afterwards, the feature adaptation is learned with a fixed coreset. Am I missing something?
Intuition in initialization of centroids C
Why is C updated gradually in the _init_centroid function? Why not add up all images of a subclass and compute an average centroid at the end?
self.C = ((self.C * i) + torch.mean(phi_p, dim=0, keepdim=True).detach()) / (i + 1)
So the first random picture in the class is given a lot of weight, right?
Have you carried out experiments on this?
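For what it's worth, the update `self.C = ((self.C * i) + mean_i) / (i + 1)` is an incremental (running) mean: assuming the loop index starts at 0 and the batch sizes are equal, every batch mean ends up weighted equally, so the first image is not over-weighted. A quick stdlib check:

```python
def running_mean(batch_means):
    # Incremental mean: c_{i+1} = (c_i * i + m_i) / (i + 1),
    # mirroring the _init_centroid update with the index starting at 0.
    c = 0.0
    for i, m in enumerate(batch_means):
        c = (c * i + m) / (i + 1)
    return c

means = [2.0, 4.0, 9.0]
print(running_mean(means))       # 5.0
print(sum(means) / len(means))   # 5.0
```

The incremental form just avoids holding all batch means in memory; the result matches averaging everything at the end.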