selfblendedimages's Issues

FF++ c23 c40 result

In Section 4.4, Cross-Manipulation Evaluation, the paper says, "We use the raw version for evaluation as well as the competitors." But most CVPR 2021 and ICCV 2021 papers use c23 FF++ results for comparison, and there are no c23 or c40 results in your paper. As we know, the raw result on FF++ is nearly 100% AUC, which makes comparisons on raw almost meaningless.

What's more, compared with FTCN and LipForensics, the "Robustness to unseen perturbations" evaluation is missing from the paper, so I would like to know the robustness of Self-Blended Images.

How does the landmark augmentation code improve model performance?

Thanks for the great work. I have some questions about exist_bi: the AUC on CDF before using the landmark augmentation code was 82%, but after using the landmark augmentation code the performance improved by almost ten percent. However, as far as I can see, the training in sbi.py only wraps

mask = random_get_hull(landmark, img)[:, :, 0]  # this function is not in the landmark augmentation code

between logging.disable(logging.FATAL) and logging.disable(logging.NOTSET). So how does exist_bi affect model performance?

Saliency map and t-SNE

I am a beginner in deepfake research and I am interested in your brilliant work. In your paper, Figure 5 is the saliency map visualization of your model trained on SBIs, and Figure 6 is the feature space visualization through t-SNE. I really want to reproduce them, so can you release your code? I would be very grateful if you could provide the code.

how to make it faster

I tried to run inference_dataset.py, but it takes around 6-9 s to get a result. How can I make it faster?
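For what it's worth, an illustrative sketch rather than the repo's API: most of the 6-9 s is typically spent decoding frames and running face detection on each one, so sampling fewer frames per video is the simplest speed-up. sample_frames and n_frames below are hypothetical names:

import cv2
import numpy as np

# Minimal sketch: decode only n_frames evenly spaced frames per video
# instead of every frame before running detection and classification.
def sample_frames(video_path, n_frames=16):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    idxs = np.linspace(0, max(total - 1, 0), n_frames, dtype=int)
    frames = []
    for i in idxs:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames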

About {phase}.json

Thanks for your contributions. In the scripts, I didn't find how the {phase}.json file is generated. And what is recorded in the JSON file?
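For reference, not from the original thread: {phase}.json appears to refer to the official FaceForensics++ split files (train.json, val.json, test.json), which are plain JSON lists of [source, target] video-ID pairs. A minimal loading sketch, assuming the data layout from the README:

import json

# Hypothetical path following the README's expected data layout.
with open('data/FaceForensics++/train.json') as f:
    train_split = json.load(f)

# Each entry is a pair of video IDs such as ["953", "974"]; the two
# orderings name the corresponding manipulated videos (e.g. "953_974").
print(train_split[0])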

The AUC of FF++(c23)

I ran inference on Celeb-DF-v2 and FF++ (c23) with your pretrained model. I get similar results on the former (AUC: 0.9381, AP: 0.9669). However, the video-level AUC on the latter is only 0.9051, with an AP of 0.9797. I also separately tested the performance on Deepfakes, FaceSwap, Face2Face, NeuralTextures, and FaceShifter. The results are as follows: DF-AUC: 0.9856/AP: 0.9900; FS-AUC: 0.9698/AP: 0.9747; F2F-AUC: 0.9094/AP: 0.9192; NT-AUC: 0.8254/AP: 0.8427; FSh-AUC: 0.8351/AP: 0.8427. Except for Deepfakes, the other results differ from those reported in your supplementary material. Did you run inference on the FF++ c23 dataset? I do not think the c23 videos should cause such a significant performance drop. Would you provide some suggestions to solve this problem?

DFDC Preview dataset

Hello, I have been unable to download DFDC Preview. Could you please share it via a cloud drive? Thank you in advance.

About error in sbi.py?

Mr. Shiohara: when I reproduced the GitHub code from your paper, I encountered an error in the reorder_landmark function in sbi.py: "index 77 is out of bounds for axis 0 with size 68". Shouldn't there be only 68 landmarks? Why does the index go up to 80? I would appreciate it if you could reply!
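A likely explanation, though not confirmed in this thread: the preprocessing appears to assume dlib's 81-point landmark model rather than the standard 68-point one, which is why reorder_landmark indexes up to 80. A minimal sketch, assuming the shape_predictor_81_face_landmarks.dat model file:

import dlib

# Assumption: the 81-point predictor is the one expected; with the common
# 68-point model, index 77 is out of bounds exactly as reported.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_81_face_landmarks.dat')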

Resnet34 test on CDF

Hi,
I trained from scratch using ResNet-34 with the Adam optimizer, but unfortunately the test AUC on CDF is only 59%. Do you provide a pretrained ResNet-34 model and weights?

About M

A question about Fig. 3: the mask after Convex Hull is binary, right (1 for the face region, 0 elsewhere)? And after Mask Augmentation, does M only take fractional values near the face region, while the rest stays unchanged?
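For illustration only, an assumption rather than the repo's exact augmentation: a binary hull mask can be turned into a soft blending mask M by blurring, so that only values near the face boundary become fractional:

import cv2
import numpy as np

# Minimal sketch: build a binary convex-hull mask from dummy landmarks,
# then blur it so the boundary becomes fractional (interior ~1, outside ~0).
pts = np.random.randint(64, 192, size=(81, 2)).astype(np.int32)  # dummy landmarks
mask = np.zeros((256, 256), dtype=np.uint8)
cv2.fillConvexPoly(mask, cv2.convexHull(pts), 1)
soft_mask = cv2.GaussianBlur(mask.astype(np.float32), (51, 51), 0)  # blending mask M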

About train_sbi.py

In train_sbi.py:

for step, data in enumerate(tqdm(train_loader)):
    img = data['img'].to(device, non_blocking=True).float()
    target = data['label'].to(device, non_blocking=True).long()

but in the SBI dataset, __getitem__ only returns img_f and img_r; there is no label. Could someone solve my problem?
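One plausible answer, as a sketch under assumptions rather than the repo's confirmed code: the labels can be created in the dataloader's collate function, which stacks the real images and their SBIs and assigns labels 0 and 1:

import numpy as np
import torch

# Minimal sketch of a collate_fn building 'img' and 'label' from
# (img_f, img_r) pairs: real images get label 0, their SBIs label 1.
def collate_fn(batch):
    img_f, img_r = zip(*batch)
    return {
        'img': torch.from_numpy(np.stack(img_r + img_f)).float(),
        'label': torch.tensor([0] * len(img_r) + [1] * len(img_f)),
    }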

Question about SBI dataloader during training.

Thanks for your amazing work! I have a question about the SBI dataloader: are the SBI images kept the same across different epochs? In the source code, the worker_init_fn function is set to np.random.seed(np.random.get_state()[1][0] + worker_id).
I found that when num_workers > 0, the np.random.get_state() of each worker is the same in different epochs, so the random numbers produced by the dataloader repeat across epochs. Does that mean the SBI dataset stays the same throughout the training phase? Demo code is below:

import torch
from torch.utils.data import Dataset
import numpy as np
import random

class TestDataset(Dataset):
    def __init__(self):
        self.datas = np.arange(16)
        print('init')

    def __len__(self):
        return len(self.datas)

    def __getitem__(self, index):
        data = self.datas[index]
        random_data = np.random.uniform(0.0, 1.0)
        return  data, random_data
    
    def worker_init_fn(self, worker_id):
        np.random.seed(np.random.get_state()[1][0] + worker_id)

if __name__ == '__main__':
    simple_dataset = TestDataset()
    dataloader = torch.utils.data.DataLoader(simple_dataset, 
                                             batch_size=2,
                                             shuffle=True,
                                             worker_init_fn=simple_dataset.worker_init_fn,
                                             num_workers=2)
    n_epoch = 3
    seed = 5
    torch.manual_seed(seed)
    np.random.seed(seed)
    random.seed(seed)
    for epoch in range(n_epoch):
        print('epoch_%d'%epoch)
        np.random.seed(seed + epoch)
        for step, data in enumerate(dataloader):
            print(data)

And the output is :

init
epoch_0
[tensor([12,  3], dtype=torch.int32), tensor([0.1520, 0.8205], dtype=torch.float64)]
[tensor([2, 8], dtype=torch.int32), tensor([0.9544, 0.0756], dtype=torch.float64)]
[tensor([ 9, 15], dtype=torch.int32), tensor([0.7755, 0.5411], dtype=torch.float64)]
[tensor([14,  6], dtype=torch.int32), tensor([0.4624, 0.1885], dtype=torch.float64)]
[tensor([ 4, 11], dtype=torch.int32), tensor([0.3422, 0.2918], dtype=torch.float64)]
[tensor([1, 7], dtype=torch.int32), tensor([0.5054, 0.6988], dtype=torch.float64)]
[tensor([10,  0], dtype=torch.int32), tensor([0.4065, 0.4715], dtype=torch.float64)]
[tensor([13,  5], dtype=torch.int32), tensor([0.9122, 0.9069], dtype=torch.float64)]
epoch_1
[tensor([13,  8], dtype=torch.int32), tensor([0.1520, 0.8205], dtype=torch.float64)]
[tensor([15,  7], dtype=torch.int32), tensor([0.9544, 0.0756], dtype=torch.float64)]
[tensor([2, 1], dtype=torch.int32), tensor([0.7755, 0.5411], dtype=torch.float64)]
[tensor([11,  6], dtype=torch.int32), tensor([0.4624, 0.1885], dtype=torch.float64)]
[tensor([14,  4], dtype=torch.int32), tensor([0.3422, 0.2918], dtype=torch.float64)]
[tensor([12,  3], dtype=torch.int32), tensor([0.5054, 0.6988], dtype=torch.float64)]
[tensor([9, 5], dtype=torch.int32), tensor([0.4065, 0.4715], dtype=torch.float64)]
[tensor([10,  0], dtype=torch.int32), tensor([0.9122, 0.9069], dtype=torch.float64)]
epoch_2
[tensor([8, 5], dtype=torch.int32), tensor([0.1520, 0.8205], dtype=torch.float64)]
[tensor([6, 7], dtype=torch.int32), tensor([0.9544, 0.0756], dtype=torch.float64)]
[tensor([ 3, 12], dtype=torch.int32), tensor([0.7755, 0.5411], dtype=torch.float64)]
[tensor([13, 14], dtype=torch.int32), tensor([0.4624, 0.1885], dtype=torch.float64)]
[tensor([9, 1], dtype=torch.int32), tensor([0.3422, 0.2918], dtype=torch.float64)]
[tensor([15, 10], dtype=torch.int32), tensor([0.5054, 0.6988], dtype=torch.float64)]
[tensor([2, 0], dtype=torch.int32), tensor([0.4065, 0.4715], dtype=torch.float64)]
[tensor([ 4, 11], dtype=torch.int32), tensor([0.9122, 0.9069], dtype=torch.float64)]
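A common fix, not from the original thread: derive the NumPy seed from torch.initial_seed() inside worker_init_fn. PyTorch draws a fresh base seed for the workers every epoch, so the NumPy draws stop repeating:

import numpy as np
import torch

# Sketch: inside a worker, torch.initial_seed() already encodes both the
# per-epoch base seed and the worker id, so worker_id need not be added.
def worker_init_fn(worker_id):
    np.random.seed(torch.initial_seed() % 2**32)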

Where to download the FF++ train.json / val.json / test.json split files

Where can I download these files: train.json, val.json, and test.json? I have already downloaded FaceForensics++.
data
└── FaceForensics++
    ├── original_sequences
    │   └── youtube
    │       └── raw
    │           └── videos
    │               └── *.mp4
    ├── train.json
    ├── val.json
    └── test.json

I can't find the weights folder

Hi
According to the README I need to place the pretrained EfficientNet-B4 weights in the ./weights/ folder, but I can't find the weights folder.

Many thanks!

Train CDF

Can you release the training code for CDF?

Can not achieve good results on NT/F2F/FS while training on raw

I followed the settings in Section 4.1, Implementation Details, and tried to train a detector on FF++ raw with only real faces and self-blended fake faces. However, when testing on FF++ raw, it only performs well on Deepfakes (138/140 correct); the results on NeuralTextures (≈50% acc), Face2Face (≈60% acc), and FaceSwap (≈60% acc) are much worse. I also tried the released pretrained weights on FF++ raw, and everything matches the data in the paper. This problem really confuses me; has anyone met a similar issue?

Question about bi_online_generation.py

I saw src/library/bi_online_generation.py in the previous code, but this file is not in the latest code base or commit history. According to

if os.path.isfile('/app/src/utils/library/bi_online_generation.py'):
    sys.path.append('/app/src/utils/library/')
    print('exist library')
    exist_bi = True
else:
    exist_bi = False

It seems that bi_online_generation.py is needed.
I used the previous code (with bi_online_generation.py) for training, and the model's test result on CDF reaches 0.92, which is consistent with the results you released.
But when I use the latest code (without bi_online_generation.py) for training, the model's test result on CDF is less than 0.8. Does this have anything to do with bi_online_generation.py?

pre_trained_models.py

I can't find this file. I installed retinaface via pip; am I using a wrong version of retinaface?
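For reference, an assumption inferred from the import from retinaface.pre_trained_models import get_model used in the inference scripts: the code appears to expect the retinaface_pytorch package, not other PyPI distributions named retinaface. A minimal sketch:

# pip install retinaface_pytorch  (assumed package; other "retinaface"
# distributions on PyPI do not provide pre_trained_models)
from retinaface.pre_trained_models import get_model

face_detector = get_model("resnet50_2020-07-20", max_size=2048)
face_detector.eval()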

SBI dataset

Hi, I am working on a knowledge distillation project. I'm wondering if you have saved the SBI data separately; would you mind providing the data, please?

Dataset partitioning

Hello, I would like to ask how the DFD test set is divided. Thank you in advance.

About the fake images

A batch consists of half real images and their SBIs; how do you deal with the fake images that already exist in the deepfake datasets?

GradCAM++

Could you please share the code for visualizing detections on images with GradCAM++?

Split method of DFDC-preview dataset.

Thanks for your amazing work. In the paper, I saw that SBI achieves great performance on DFDC-Preview. I want to follow your work and test the model's performance on DFDC-P; however, I can't find the split method for DFDC-P. It would be appreciated if you could release it. Thanks!

DataParallel

Hi, do you have a DataParallel version? And would you mind providing ResNet-34 test results on FF++?

The AUC of CelebDF

I ran the inference on Celeb-DF-v2 with your pretrained model, but the AUC is 0.9235, which is lower than yours, even though my torch and torchvision versions are the same as yours. Do you know the reason? Can you provide the version of OpenCV you used?

Test on FF++

@mapooon

I want to test on the FF++ NT dataset and revised the provided inference_dataset.py as shown in the screenshot:
(screenshot)

Unfortunately, something goes wrong:
(screenshot)

If I can test on the FF++ dataset successfully, it will help me a lot!

Many thanks,
Shawn

Inferencing without face detector leads to very low performance

Hi, I have my face dataset ready and want to avoid running face detection again during inference, as it brings a huge time cost. However, when I commented out the face detection code and ran inference directly on my cropped faces from FF++ (extracted with FFmpeg and dlib), the performance dropped a lot. I wonder if there is a solution.
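One plausible cause, offered as an assumption rather than a confirmed answer: the model sees detector-style crops with extra margin during training, so tight dlib crops shift the input distribution. A sketch of loosening a bounding box before cropping; expand_bbox and the margin value are illustrative, not the repo's parameters:

# Sketch: expand an (x0, y0, x1, y1) face box by a relative margin and
# clamp to the image bounds, approximating looser detector-style crops.
def expand_bbox(x0, y0, x1, y1, img_w, img_h, margin=0.3):
    w, h = x1 - x0, y1 - y0
    x0 = max(0, int(x0 - margin * w))
    y0 = max(0, int(y0 - margin * h))
    x1 = min(img_w, int(x1 + margin * w))
    y1 = min(img_h, int(y1 + margin * h))
    return x0, y0, x1, y1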

The AUC on CDF stays at 0.5

Dear experts, my test AUC on the CDF dataset is 0.5 after 100 epochs of training with batch size 16. I then changed the second stage of training to 10 epochs, and the test result is still 0.5. Could you give me some advice on this problem? Thank you very much.


The accuracy is terrible

I used the weights you provided to test on train_sample from the DFDC data provided by Kaggle, and the accuracy was too low. Can you help me understand why? To make testing easier, I first resize the image to 512×512, then use RetinaFace to get the face (as in your code), without changing any other code.

Docker image

Hi, I am trying to run bash build.sh, but it shows this error:

#6 88.30 E: Unable to locate package apt-get
#6 88.30 E: Unable to locate package install
------
executor failed running [/bin/sh -c apt-get update && apt-get upgrade -y &&     apt-get install -y libgl1-mesa-dev &&     apt-get install -y cmake &&     apt-get -y 

Also, I went through the previous issues and came across https://hub.docker.com/r/mapooon/sbi to pull the Docker image, but it shows a 404 "page not found" error. Can you please provide me the Docker image?

Would you mind providing docker images through dockerhub or rar files?

Thanks for your brilliant work. I trained the model according to the official guide on a single Tesla V100 and achieved only 0.84 AUC on CDF, even though landmark augmentation was employed. However, I can get the claimed results when using the provided pretrained model. The reason, I guess, may be that the provided Dockerfile does not pin package versions, which causes differences in the environment. So, I wonder if you would mind providing Docker images through Docker Hub or as rar files. Many thanks!

Faulty imports in the inference_image.py

As stated, I succeeded in running src/inference/inference_image.py by adding the OpenCV import (import cv2), which the current repo code does not include; also, the os library is imported twice.

import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets,transforms,models,utils
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
from PIL import Image
import sys
import random
import shutil
from model import Detector
import argparse
from datetime import datetime
from tqdm import tqdm
from retinaface.pre_trained_models import get_model
from preprocess import extract_face
import warnings
warnings.filterwarnings('ignore')

Can be a quick fix by changing a few lines and pushing 😄

GradCam++ implementation

You mentioned in your paper that you applied GradCAM++ to your model; could you please share this code as well?

About the build image

Hi,
I experienced a problem building an image from build.sh. Do you still have the built image for this project? Your link in the other issue has expired. Thank you very much.

training: FileNotFoundError

FileNotFoundError: [Errno 2] No such file or directory: 'output/sbi_base_11_23_20_26_08/'
Hello, I am at the training stage; it always reports that the file cannot be found. What is going wrong?
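A common workaround, assuming the cause is simply that the run directory is never created: make the output directory before any logging or checkpointing. The path below is the hypothetical run directory taken from the error message:

import os

# Sketch: ensure the per-run output directory exists before writing to it.
save_path = 'output/sbi_base_11_23_20_26_08/'
os.makedirs(save_path, exist_ok=True)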

Training valid AUC too low

Thanks for your brilliant work! I followed your video processing code and trained the model with batch size 28 on two 1080 Ti GPUs for nearly 80 epochs, but the validation loss is still very high (about 3.8) and the validation accuracy is stuck at 0.53. The training log is below. I tried 3 times from a fresh start, but the results were similar, as bad as the first time. On my other laptop (one 1060), due to memory limits I set the batch size to 4; the validation accuracy is high, up to 90%, but the cross-dataset result on CDF is only 66%, which is far from the 93% of your provided pretrained model. So may I ask what the main problem is here, and how can I get a better result?

Epoch 1/100 | train loss: 0.6957, train acc: 0.5051, val loss: 0.6945, val acc: 0.4991, val auc: 0.4983
Epoch 2/100 | train loss: 0.6949, train acc: 0.5020, val loss: 0.6938, val acc: 0.5055, val auc: 0.5062
Epoch 3/100 | train loss: 0.6939, train acc: 0.5061, val loss: 0.6905, val acc: 0.5345, val auc: 0.5479
Epoch 4/100 | train loss: 0.6931, train acc: 0.5092, val loss: 0.6911, val acc: 0.5195, val auc: 0.5345
Epoch 5/100 | train loss: 0.6929, train acc: 0.5121, val loss: 0.6911, val acc: 0.5256, val auc: 0.5413
Epoch 6/100 | train loss: 0.6927, train acc: 0.5152, val loss: 0.6899, val acc: 0.5389, val auc: 0.5537
Epoch 7/100 | train loss: 0.6922, train acc: 0.5182, val loss: 0.6886, val acc: 0.5470, val auc: 0.5739
Epoch 8/100 | train loss: 0.6918, train acc: 0.5210, val loss: 0.6892, val acc: 0.5391, val auc: 0.5636
Epoch 9/100 | train loss: 0.6912, train acc: 0.5286, val loss: 0.6890, val acc: 0.5345, val auc: 0.5636
Epoch 10/100 | train loss: 0.6906, train acc: 0.5307, val loss: 0.6897, val acc: 0.5330, val auc: 0.5577
Epoch 11/100 | train loss: 0.6906, train acc: 0.5286, val loss: 0.6888, val acc: 0.5459, val auc: 0.5651
Epoch 12/100 | train loss: 0.6894, train acc: 0.5396, val loss: 0.6880, val acc: 0.5488, val auc: 0.5694
Epoch 13/100 | train loss: 0.6883, train acc: 0.5418, val loss: 0.6875, val acc: 0.5478, val auc: 0.5714
Epoch 14/100 | train loss: 0.6872, train acc: 0.5494, val loss: 0.6857, val acc: 0.5651, val auc: 0.5857
Epoch 15/100 | train loss: 0.6859, train acc: 0.5514, val loss: 0.6844, val acc: 0.5600, val auc: 0.5864
Epoch 16/100 | train loss: 0.6829, train acc: 0.5643, val loss: 0.6815, val acc: 0.5787, val auc: 0.6059
Epoch 17/100 | train loss: 0.6788, train acc: 0.5791, val loss: 0.6808, val acc: 0.5748, val auc: 0.6029
Epoch 18/100 | train loss: 0.6709, train acc: 0.6011, val loss: 0.6721, val acc: 0.5950, val auc: 0.6340
Epoch 19/100 | train loss: 0.6565, train acc: 0.6383, val loss: 0.6596, val acc: 0.6409, val auc: 0.6729
Epoch 20/100 | train loss: 0.6043, train acc: 0.7358, val loss: 0.6487, val acc: 0.6480, val auc: 0.6844
Epoch 21/100 | train loss: 0.4083, train acc: 0.8870, val loss: 0.6878, val acc: 0.6033, val auc: 0.7112
Epoch 22/100 | train loss: 0.1422, train acc: 0.9700, val loss: 1.0582, val acc: 0.5720, val auc: 0.6615
Epoch 23/100 | train loss: 0.0659, train acc: 0.9852, val loss: 1.4780, val acc: 0.5259, val auc: 0.6610
Epoch 24/100 | train loss: 0.0353, train acc: 0.9918, val loss: 1.9196, val acc: 0.5036, val auc: 0.5579
Epoch 25/100 | train loss: 0.0349, train acc: 0.9921, val loss: 2.0033, val acc: 0.5147, val auc: 0.6118
Epoch 26/100 | train loss: 0.0154, train acc: 0.9982, val loss: 2.2417, val acc: 0.5201, val auc: 0.5975
Epoch 27/100 | train loss: 0.0135, train acc: 0.9975, val loss: 2.7043, val acc: 0.5063, val auc: 0.5585
Epoch 28/100 | train loss: 0.0102, train acc: 0.9988, val loss: 2.6171, val acc: 0.5312, val auc: 0.5991
Epoch 29/100 | train loss: 0.0094, train acc: 0.9978, val loss: 2.8837, val acc: 0.5129, val auc: 0.5996
Epoch 30/100 | train loss: 0.0079, train acc: 0.9987, val loss: 3.0676, val acc: 0.5196, val auc: 0.5721
Epoch 31/100 | train loss: 0.0086, train acc: 0.9970, val loss: 3.2782, val acc: 0.5121, val auc: 0.5628
Epoch 32/100 | train loss: 0.0055, train acc: 0.9986, val loss: 3.3657, val acc: 0.5152, val auc: 0.5466
Epoch 33/100 | train loss: 0.0046, train acc: 0.9992, val loss: 3.3215, val acc: 0.5300, val auc: 0.5612
Epoch 34/100 | train loss: 0.0034, train acc: 0.9990, val loss: 3.5200, val acc: 0.5254, val auc: 0.5206
Epoch 35/100 | train loss: 0.0026, train acc: 0.9995, val loss: 3.8392, val acc: 0.5340, val auc: 0.5411
Epoch 36/100 | train loss: 0.0083, train acc: 0.9968, val loss: 3.8063, val acc: 0.5348, val auc: 0.5629
Epoch 37/100 | train loss: 0.0021, train acc: 0.9999, val loss: 4.0034, val acc: 0.5317, val auc: 0.5678
Epoch 38/100 | train loss: 0.0084, train acc: 0.9968, val loss: 4.0282, val acc: 0.5080, val auc: 0.5787
Epoch 39/100 | train loss: 0.0024, train acc: 1.0000, val loss: 4.4494, val acc: 0.5214, val auc: 0.5455
Epoch 40/100 | train loss: 0.0022, train acc: 0.9998, val loss: 4.4703, val acc: 0.5241, val auc: 0.5653
Epoch 41/100 | train loss: 0.0017, train acc: 0.9998, val loss: 4.7169, val acc: 0.5313, val auc: 0.5503
Epoch 42/100 | train loss: 0.0040, train acc: 0.9986, val loss: 4.8927, val acc: 0.5358, val auc: 0.5620
Epoch 43/100 | train loss: 0.0024, train acc: 0.9993, val loss: 4.4155, val acc: 0.5335, val auc: 0.5910
Epoch 44/100 | train loss: 0.0033, train acc: 0.9985, val loss: 5.8817, val acc: 0.5286, val auc: 0.5152
Epoch 45/100 | train loss: 0.0063, train acc: 0.9988, val loss: 5.2544, val acc: 0.5322, val auc: 0.5107
Epoch 46/100 | train loss: 0.0035, train acc: 0.9987, val loss: 5.5359, val acc: 0.5273, val auc: 0.5316
Epoch 47/100 | train loss: 0.0014, train acc: 0.9998, val loss: 5.3060, val acc: 0.5277, val auc: 0.5540
Epoch 48/100 | train loss: 0.0009, train acc: 1.0000, val loss: 5.3945, val acc: 0.5143, val auc: 0.5317
Epoch 49/100 | train loss: 0.0006, train acc: 1.0000, val loss: 5.8465, val acc: 0.5263, val auc: 0.5461
Epoch 50/100 | train loss: 0.0006, train acc: 1.0000, val loss: 5.1220, val acc: 0.5559, val auc: 0.5769
Epoch 51/100 | train loss: 0.0046, train acc: 0.9988, val loss: 4.3058, val acc: 0.5523, val auc: 0.5456
Epoch 52/100 | train loss: 0.0008, train acc: 1.0000, val loss: 4.6946, val acc: 0.5429, val auc: 0.5244
Epoch 53/100 | train loss: 0.0005, train acc: 1.0000, val loss: 4.9155, val acc: 0.5375, val auc: 0.5274
Epoch 54/100 | train loss: 0.0008, train acc: 1.0000, val loss: 5.2772, val acc: 0.5273, val auc: 0.5283
Epoch 55/100 | train loss: 0.0010, train acc: 0.9999, val loss: 5.0639, val acc: 0.5277, val auc: 0.5495
Epoch 56/100 | train loss: 0.0011, train acc: 0.9999, val loss: 4.7789, val acc: 0.5344, val auc: 0.5525
Epoch 57/100 | train loss: 0.0008, train acc: 1.0000, val loss: 4.4146, val acc: 0.5406, val auc: 0.5842
Epoch 58/100 | train loss: 0.0005, train acc: 1.0000, val loss: 5.5026, val acc: 0.5210, val auc: 0.5553
Epoch 59/100 | train loss: 0.0019, train acc: 0.9994, val loss: 5.3561, val acc: 0.5694, val auc: 0.5495
Epoch 60/100 | train loss: 0.0052, train acc: 0.9984, val loss: 8.7304, val acc: 0.5268, val auc: 0.5271
Epoch 61/100 | train loss: 0.0008, train acc: 1.0000, val loss: 8.1215, val acc: 0.5165, val auc: 0.5369
Epoch 62/100 | train loss: 0.0026, train acc: 0.9989, val loss: 8.4821, val acc: 0.5281, val auc: 0.5232
Epoch 63/100 | train loss: 0.0025, train acc: 0.9991, val loss: 7.9619, val acc: 0.5335, val auc: 0.5330
Epoch 64/100 | train loss: 0.0021, train acc: 0.9990, val loss: 4.6351, val acc: 0.5040, val auc: 0.5348
Epoch 65/100 | train loss: 0.0006, train acc: 0.9999, val loss: 4.4135, val acc: 0.5098, val auc: 0.5105
Epoch 66/100 | train loss: 0.0006, train acc: 0.9998, val loss: 4.6362, val acc: 0.5013, val auc: 0.5327
Epoch 67/100 | train loss: 0.0006, train acc: 0.9999, val loss: 4.1294, val acc: 0.5264, val auc: 0.5434
Epoch 68/100 | train loss: 0.0005, train acc: 1.0000, val loss: 4.2482, val acc: 0.5357, val auc: 0.5387
Epoch 69/100 | train loss: 0.0004, train acc: 1.0000, val loss: 4.5430, val acc: 0.5219, val auc: 0.5367
Epoch 70/100 | train loss: 0.0004, train acc: 1.0000, val loss: 4.7145, val acc: 0.5143, val auc: 0.5129
Epoch 71/100 | train loss: 0.0006, train acc: 0.9999, val loss: 4.8528, val acc: 0.5183, val auc: 0.5158
Epoch 72/100 | train loss: 0.0002, train acc: 1.0000, val loss: 4.9054, val acc: 0.5183, val auc: 0.5200
Epoch 73/100 | train loss: 0.0002, train acc: 1.0000, val loss: 4.9477, val acc: 0.5184, val auc: 0.5129
Epoch 74/100 | train loss: 0.0003, train acc: 1.0000, val loss: 5.0406, val acc: 0.5201, val auc: 0.5104
Epoch 75/100 | train loss: 0.0008, train acc: 0.9997, val loss: 5.0495, val acc: 0.5188, val auc: 0.5066
Epoch 76/100 | train loss: 0.0003, train acc: 1.0000, val loss: 5.5569, val acc: 0.5282, val auc: 0.5319
Epoch 77/100 | train loss: 0.0003, train acc: 1.0000, val loss: 5.0770, val acc: 0.5326, val auc: 0.5362

Docker

Hi,
I ran the code in build.sh from the command line and the executor failed. Then I created a container with the code in exec.sh, but I don't know what it does. Was there anything wrong in what I did? I found another issue saying that we have to run dockerfiles/Dockerfile. The error was:
executor failed running
