
pscc-net's Issues

About the big gap between my evaluation results and the metrics (Localization AUC of pre-trained models) reported in the paper

Thank you very much for open-sourcing the relevant code, datasets, and pre-trained models.

I ran inference with your released pre-trained model directly on the public datasets (CASIAv1, COVERAGE, NIST16, IMD20, Columbia), and there is a large gap between my evaluation results and the metrics reported in the paper. I also tried retraining on the dataset you provided, but could not reproduce the reported generalization. Could you check the uploaded pre-trained model, or provide the code used for metric evaluation?
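For anyone debugging the same gap, here is a minimal pixel-level F1/AUC sketch using scikit-learn. This is an assumed protocol, not the authors' official evaluation code; pred_mask and gt_mask are placeholder names for the predicted probability map and the ground-truth mask.

import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def evaluate_pair(pred_mask, gt_mask, thresh=0.5):
    # pred_mask: HxW floats in [0, 1]; gt_mask: HxW binary mask (assumed shapes).
    pred = pred_mask.flatten()
    gt = (gt_mask.flatten() > 0.5).astype(np.uint8)
    # AUC is computed on the raw probabilities; it is undefined if gt has only one class.
    auc = roc_auc_score(gt, pred) if gt.min() != gt.max() else float("nan")
    # F1 is computed on the binarized map; the threshold choice strongly affects it.
    f1 = f1_score(gt, (pred > thresh).astype(np.uint8), zero_division=0)
    return auc, f1

Note that averaging per-image scores versus pooling all pixels, and how masks are resized, can each shift the final numbers by several points.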

Regarding the image size of evaluation

Hello and thank you for your very interesting work,

While experimenting with the provided pre-trained models, I found that results were worse when images were evaluated at their original size than when they were limited to 256x256 (retaining aspect ratio). The Columbia dataset reproduces the issue.

So I would like to ask: which input size gives the best results, and which one was used for the evaluation reported in the paper?
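For reference, limiting the longer side to 256 px while keeping aspect ratio, as described in the question above (not confirmed as the paper's protocol), can be done with a small PIL helper:

from PIL import Image

def resize_keep_aspect(img, max_side=256):
    # Downscale so the longer side is at most max_side, preserving aspect ratio.
    w, h = img.size
    scale = max_side / max(w, h)
    if scale >= 1.0:
        return img  # never upscale images that are already small enough
    return img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)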

About AUC

Hello author, I used the code you provided for testing, but my AUC on COVERAGE was only 65, far below the number in the paper. Is there something wrong with my AUC evaluation method?

About train process

I want to retrain your network, but after a few hundred images the loss stops changing. What could cause this problem?

Parameters in the released checkpoint

Hello, I would like to ask whether the parameters in the checkpoint provided with the code are only pre-trained or already fine-tuned. With the original parameters I get an F1 score of only about 40 and an AUC of just over 80, but I suspect my computation is wrong. If convenient, could you share the code for the evaluation metrics?

About the source code.

How soon can the source code be released? Could the pre-trained model and evaluation code be released first? Looking forward to your reply. Best regards.

splice/mask

The splice dataset you provided does not include a mask folder. Can you provide it? Thanks!

About training results

Hello, why do I get the following output when I run the training script?

=> loading HRNet pretrained model models/hrnet_w18_small_v2.pth
HRNet weight-loading succeeds: ./checkpoint/HRNet_checkpoint/HRNet.pth
NLCDetection weight-loading succeeds: ./checkpoint/NLCDetection_checkpoint/NLCDetection.pth
DetectionHead weight-loading succeeds: ./checkpoint/DetectionHead_checkpoint/DetectionHead.pth
length of traindata: 500
previous_score 0.9871
authentic_ratio: 0.25 fake_ratio: 0.75
resuming FENet by loading epoch 30
resuming SegNet by loading epoch 30
resuming ClsNet by loading epoch 30

Process finished with exit code 0
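A likely explanation (an assumption, since the full script is not shown here): the restored checkpoints already record epoch 30, and if the configured total number of epochs is also 30, a standard resume loop has nothing left to run and exits cleanly:

start_epoch = 30   # epoch restored from the checkpoint
num_epochs = 30    # total epochs configured for the run

for epoch in range(start_epoch, num_epochs):  # range(30, 30) is empty
    pass  # the training step would run here

# execution falls straight through, so the process exits with code 0

If that is the cause, deleting or renaming the checkpoint directories, or increasing the epoch count, should let training proceed.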

About the environment configuration

Hello author, I wonder whether the previous environment-setup information was overwritten when you updated the code; the repository currently has no concrete environment details. Could you provide the environment configuration for this code?
Thanks!

Config file

I am working with this code and trying to contribute further, but I have questions about the following values.

pscc_args.val_num = 200
pscc_args.train_num = 100000

# authentic, splice, copymove, removal
pscc_args.train_ratio = [0.25, 0.25, 0.25, 0.25]

May I know why you set train_num to 100000? As far as I checked, the copy-move list has 100000 entries while the authentic list has 81910, so I am confused about what the train_num value should be.

Next, on what basis did you choose val_num = 200?

Furthermore, I am trying to use only two classes instead of the four you mentioned; what should train_ratio be then? For instance, you used [0.25, 0.25, 0.25, 0.25] for authentic, splice, copymove, and removal. Does that mean a 0.25 fraction of the samples is taken from each? It would be good if you could elaborate on this too.

Many thanks in advance.
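For what it's worth, one plausible reading of this config (an assumption about the sampler, not a confirmed answer) is that train_num is a nominal epoch length and train_ratio splits it across the class lists, each sampled independently, so the per-class list sizes need not match train_num exactly:

train_num = 100000
train_ratio = [0.25, 0.25, 0.25, 0.25]  # authentic, splice, copymove, removal

# Samples drawn per class per epoch under this reading (illustrative only).
per_class = [int(train_num * r) for r in train_ratio]
print(per_class)  # [25000, 25000, 25000, 25000]

# A two-class setup (e.g. authentic vs. manipulated) would then just use
# ratios that sum to 1:
two_class_ratio = [0.5, 0.5]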

About the pre-training checkpoint you released

Hello! I would like to ask which dataset the pre-trained model released in this repository was trained on. Also, could you release the test code used to calculate the evaluation metrics?

Very good

This paper is very good. Excuse me, could you publish the heat-map, F1, testing, and fine-tuning code for the small datasets in the paper?

Localization AUC

My AUC on COVERAGE is only 68%, far from your 84.7%. What could be the reason for this significant difference? Did you adjust other parameters or anything else? I trained the model using the original parameters.
Looking forward to your answer!

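One common source of such gaps (an assumption, since the official evaluation code is not released) is the averaging protocol: the mean of per-image AUCs and a single AUC over all pooled pixels can differ by many points on a small dataset like COVERAGE. A sketch of both, with preds and gts as placeholder lists of NumPy arrays:

import numpy as np
from sklearn.metrics import roc_auc_score

def auc_per_image(preds, gts):
    # Mean of per-image AUCs, skipping masks that are all-0 or all-1.
    scores = [roc_auc_score((g.flatten() > 0.5).astype(np.uint8), p.flatten())
              for p, g in zip(preds, gts)
              if 0.0 < (g > 0.5).mean() < 1.0]
    return float(np.mean(scores))

def auc_pooled(preds, gts):
    # Single AUC over every pixel of every image pooled together.
    p = np.concatenate([x.flatten() for x in preds])
    g = np.concatenate([(x.flatten() > 0.5).astype(np.uint8) for x in gts])
    return roc_auc_score(g, p)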

Request for help

Thank you for open-sourcing your work; it is excellent. I downloaded the code and parameter weights you provided. Because I did not find your evaluation code, I used my own evaluation-metric code, and my test results differ somewhat from yours. Below are my evaluation method and test results. Could you share your evaluation code or point out my mistakes?

About the Training Datasets

Hi,

I wanted to download the datasets that you provided on Baidu Cloud; however, I don't have an account, and registration requires a phone number based in China. Could you provide another link with the same contents?

Thank you.

About evaluation


Hello, I have two questions that I would like to ask you:

1. The "score" metric used for evaluation in the code does not seem to appear in the paper. How should this metric be understood, and could you provide official "score" values for the released .pth model on the various datasets?

2. Could you provide the test program used to compute the F1 and AUC values in the paper? My calculated results show some deviation.
