
lsc-cnn's Issues

Small question about the paper

Hi,
I have a small question about a figure in your paper: the exact configuration of the feature extractor.

Can you tell me what 2C|64, 3C|512, etc. mean? Thank you very much!

Asking for the author's help: train error

When I train lsc-cnn, the following error occurs:

Traceback (most recent call last):
File "/home/Linux-doc1/Yxp/lsc-cnn-master/data_reader.py", line 725, in _test_one_image
predicted_maps_full_size
UnboundLocalError: local variable 'predicted_maps_full_size' referenced before assignment

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/Linux-doc1/Yxp/lsc-cnn-master/main.py", line 1113, in
# -- Train the model
File "/home/Linux-doc1/Yxp/lsc-cnn-master/main.py", line 1053, in train
network_functions=networkFunctions(),
File "/home/Linux-doc1/Yxp/lsc-cnn-master/main.py", line 930, in train_networks

File "/home/Linux-doc1/Yxp/lsc-cnn-master/main.py", line 657, in test_lsccnn
for e_idx, e_iter in enumerate(e):
File "/home/Linux-doc1/Yxp/lsc-cnn-master/data_reader.py", line 281, in iterate_over_test_data
pred_maps_full_size = self._test_one_image(crops, test_function)
File "/home/Linux-doc1/Yxp/lsc-cnn-master/data_reader.py", line 727, in _test_one_image
predicted_maps_full_size = [np.zeros((pmap.shape[1], crops[6].shape[0], crops[6].shape[1])) for pmap in results]
File "/home/Linux-doc1/Yxp/lsc-cnn-master/data_reader.py", line 727, in
predicted_maps_full_size = [np.zeros((pmap.shape[1], crops[6].shape[0], crops[6].shape[1])) for pmap in results]
AttributeError: 'list' object has no attribute 'shape'

I did not make any changes to the original code. Whenever I start training or testing, this error occurs. I would really like to know why this bug happens, please!
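A minimal workaround sketch, assuming (not confirmed) that test_function in _test_one_image can return plain Python lists instead of NumPy arrays, which would explain the AttributeError on .shape:

import numpy as np

# Hypothetical guard for _test_one_image in data_reader.py: coerce each
# prediction map to a NumPy array before reading .shape.
results = [np.asarray(pmap) for pmap in results]
predicted_maps_full_size = [np.zeros((pmap.shape[1], crops[6].shape[0], crops[6].shape[1])) for pmap in results]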


Is the error_function file wrong?

This happens when I run the test code. Does the environment have to be Ubuntu, or is something else wrong? Thanks!

(pytorch) E:\example\LSC-CNN\lsc-cnn>python main.py --dataset="partb" --gpu=0 -skip-init-tests --start-epoch=24 --threshold=0.25
Traceback (most recent call last):
File "main.py", line 22, in
from error_function import offset_sum
ModuleNotFoundError: No module named 'error_function'
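error_function appears to be a Cython extension (there is an error_function.pyx in the repo) and must be compiled before main.py can import it. A minimal build sketch, assuming the .pyx file sits next to main.py and uses NumPy:

# setup_error_function.py -- hypothetical helper; run with:
#   python setup_error_function.py build_ext --inplace
from setuptools import setup
from Cython.Build import cythonize
import numpy as np

setup(ext_modules=cythonize('error_function.pyx'),
      include_dirs=[np.get_include()])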

Test other datasets

Sorry for creating another issue, but did you finish your script for testing your own images, which you mentioned in #6?

I already tried your hint to change the images in the directory and keep the mat files, but I get completely wrong values for different images.
#6 (comment)


../dataset/UCF-QNRF_ECCV18/testing/images/test1.jpg
Pred: 1.0 gt: 2.0


../dataset/UCF-QNRF_ECCV18/testing/images/testhighRes.jpg
Pred: 1.0 gt: 2.0

Thanks

How to make it faster?

The current model takes around 0.5 seconds per image on an RTX 2080 8 GB GPU. Are there ways to make it run faster? Maybe change the backbone network, or something else you can suggest?

I am fine with a less accurate result, but I want to run it at around 10 fps. Any help or suggestions will be appreciated.
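Not an authoritative answer, but some generic PyTorch levers that usually help; a sketch assuming network is the loaded LSC-CNN model and batch a preprocessed input tensor (both names are placeholders):

import torch

# Hypothetical inference-speed sketch; not the repo's API.
network = network.cuda().eval()
with torch.no_grad():              # skip autograd bookkeeping
    network = network.half()       # FP16 inference on Volta/Turing GPUs
    out = network(batch.half().cuda())
# Other levers: downscale the input image before inference, or swap the
# VGG-16 backbone for a lighter one and retrain.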

How to generate the GT of the UCF-QNRF dataset?

When I run python main.py --dataset="ucfqnrf" --gpu=2, I get the following errors:

/home/xuwei/miniconda3/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.6 of module 'error_function' does not match runtime version 3.7
return f(*args, **kwds)
In data_reader.__init__: Can't read meta data in /data/weixu/qnrf_dotmaps_predictionScale_2; call create_dataset_files.
{'test': ['/data/weixu/UCF-QNRF_ECCV18/Test', '/data/weixu/UCF-QNRF_ECCV18/Test'], 'train': ['/data/weixu/UCF-QNRF_ECCV18/Train', '/data/weixu/UCF-QNRF_ECCV18/Train']} <data_reader.DataReader object at 0x7f8a6cd3b790>
CREATING DATASET...
In data_reader.create_dataset_files: Deleted old /data/weixu/qnrf_dotmaps_predictionScale_2/test.
In data_reader.create_dataset_files: /data/weixu/qnrf_dotmaps_predictionScale_2/test created.
In data_reader.create_dataset_files: Deleted old /data/weixu/qnrf_dotmaps_predictionScale_2/train.
In data_reader.create_dataset_files: /data/weixu/qnrf_dotmaps_predictionScale_2/train created.
In data_reader.create_dataset_files: Deleted old /data/weixu/qnrf_dotmaps_predictionScale_2/test_valid.
In data_reader.create_dataset_files: /data/weixu/qnrf_dotmaps_predictionScale_2/test_valid created.
Processing img_0001.jpg ...
Traceback (most recent call last):
File "/home/xuwei/miniconda3/lib/python3.7/site-packages/scipy/io/matlab/mio.py", line 31, in _open_file
return open(file_like, 'rb'), True
FileNotFoundError: [Errno 2] No such file or directory: '/data/weixu/UCF-QNRF_ECCV18/Test/GT_img_0001.mat'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 1110, in
train()
File "main.py", line 1029, in train
test_batch_size=4)
File "/home/xuwei/projects/synchronous/lsc-cnn-master/data_reader.py", line 200, in create_dataset_files
self._dump_all_test_images(set_name)
File "/home/xuwei/projects/synchronous/lsc-cnn-master/data_reader.py", line 827, in _dump_all_test_images
data = self._read_image_and_gt_prediction(paths, file_name, kernel)
File "/home/xuwei/projects/synchronous/lsc-cnn-master/data_reader.py", line 790, in read_image_and_gt_prediction
'GT
' + tmp + '.mat'))
File "/home/xuwei/miniconda3/lib/python3.7/site-packages/scipy/io/matlab/mio.py", line 207, in loadmat
MR, file_opened = mat_reader_factory(file_name, appendmat, **kwargs)
File "/home/xuwei/miniconda3/lib/python3.7/site-packages/scipy/io/matlab/mio.py", line 62, in mat_reader_factory
byte_stream, file_opened = _open_file(file_name, appendmat)
File "/home/xuwei/miniconda3/lib/python3.7/site-packages/scipy/io/matlab/mio.py", line 37, in _open_file
return open(file_like, 'rb'), True
FileNotFoundError: [Errno 2] No such file or directory: '/data/weixu/UCF-QNRF_ECCV18/Test/GT_img_0001.mat'
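One possible cause (an assumption, not confirmed by the authors): UCF-QNRF ships its annotations as img_XXXX_ann.mat, while the traceback shows the reader looking for GT_img_XXXX.mat next to the images. A minimal renaming sketch:

import glob, os, shutil

# Hypothetical fix-up: copy img_0001_ann.mat -> GT_img_0001.mat so the
# 'GT_' + name + '.mat' lookup in data_reader.py can find the file.
for src in glob.glob('/data/weixu/UCF-QNRF_ECCV18/Test/*_ann.mat'):
    stem = os.path.basename(src).replace('_ann.mat', '')
    shutil.copy(src, os.path.join(os.path.dirname(src), 'GT_' + stem + '.mat'))

Note that the variable layout inside the UCF-QNRF .mat files may also differ from ShanghaiTech's; see the KeyError: 'image_info' issue further down.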

Why are some points skipped in test mode, and why are the predicted results and the ground truth the same?

In my test process:
Processing IMG_400.jpg ...
In data_reader.create_heatmap: Error in annotations; 1 point(s) skipped.
Processing IMG_44.jpg ...
Processing IMG_54.jpg ...
In data_reader.create_heatmap: Error in annotations; 1 point(s) skipped.
Processing IMG_59.jpg ...
Processing IMG_65.jpg ...
In data_reader.create_heatmap: Error in annotations; 2 point(s) skipped.

My predicted results' path is "/home4/shuai/projects/LSC-CNN/dataset/stpartb_dotmaps_predscale0.5_rgb_ddcnn++_test/test/1/".
I notice that the predicted counts and the ground truth are the same.

Can you answer my question? Thanks.

TypeError: 'NoneType' object is not iterable

Hi,

When I run the code:

python main.py --dataset="parta" --gpu=2 --start-epoch=0 --epochs=30

I get this error:

In data_reader.__init__: Can't read meta data in ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30; call create_dataset_files.
{'test': ['../dataset/ST_partA/test_data/images', '../dataset/ST_partA/test_data/ground_truth'], 'train': ['../dataset/ST_partA/train_data/images', '../dataset/ST_partA/train_data/ground_truth']} <data_reader.DataReader object at 0x7fde80abd4a8>
CREATING DATASET...
In data_reader.create_dataset_files: Deleted old ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test.
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test created.
In data_reader.create_dataset_files: Deleted old ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/train.
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/train created.
In data_reader.create_dataset_files: Deleted old ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test_valid.
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test_valid created.
Processing IMG_1.jpg ...
Traceback (most recent call last):
File "main.py", line 1096, in
train()
File "main.py", line 1017, in train
test_batch_size=4)
File "/data/lsc/lsc-cnn/data_reader.py", line 200, in create_dataset_files
self._dump_all_test_images(set_name)
File "/data/lsc/lsc-cnn/data_reader.py", line 826, in _dump_all_test_images
data = self._read_image_and_gt_prediction(paths, file_name, kernel)
File "/data/lsc/lsc-cnn/data_reader.py", line 789, in read_image_and_gt_prediction
'GT
' + tmp + '.mat'))
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 141, in loadmat
MR, file_opened = mat_reader_factory(file_name, appendmat, **kwargs)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 64, in mat_reader_factory
byte_stream, file_opened = _open_file(file_name, appendmat)
TypeError: 'NoneType' object is not iterable

Can you help me with this?

Where do I find the head count detected for each image?

I have followed all the instructions and ran the test (using the pre-trained models provided) on Part A using the command :

python main.py --dataset="parta" --gpu=2 --start-epoch=13 --epochs=13 --threshold=0.21

I have got boxed images in models/dump_test.

But I want to know the efficiency of this model (ground truth vs. detected head count, as presented on page 11 of the paper).

  • If I have to compute it, how can I? (see the sketch below)

  • If it is already computed, where can I find it?
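If it has to be computed manually, a minimal sketch of the standard crowd-counting metrics, assuming you can collect per-image predicted and ground-truth counts (the list names below are placeholders, not the repo's API):

import numpy as np

# pred_counts / gt_counts: per-image head counts collected from your run.
pred_counts = np.array([312.0, 145.0, 88.0])   # example values
gt_counts = np.array([300.0, 150.0, 90.0])

mae = np.mean(np.abs(pred_counts - gt_counts))          # Mean Absolute Error
mse = np.sqrt(np.mean((pred_counts - gt_counts) ** 2))  # RMSE; reported as MSE in most crowd-counting papers
print('MAE: %.2f  MSE: %.2f' % (mae, mse))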

File Not Found Error

FileNotFoundError: [Errno 2] No such file or directory: './models/train2/snapshots/losses.pkl'
How should we place the downloaded models folder? I have placed it everywhere possible but still get this error.
Any help is appreciated.
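An assumption from the relative path in the error: main.py resolves ./models against the current working directory, so the downloaded models folder should sit next to main.py and you should launch from the repo root. A quick check:

import os
# Run from the directory containing main.py; should print True.
print(os.path.exists('./models/train2/snapshots/losses.pkl'))

It may also be that losses.pkl is only written during training, in which case a fresh pretrained download would lack it until a training run creates it.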

How to solve this error?

File "/content/drive/MyDrive/lsc-cnn-master/main.py", line 1109, in
train()
File "/content/drive/MyDrive/lsc-cnn-master/main.py", line 1029, in train
test_batch_size=4)
File "/content/drive/MyDrive/lsc-cnn-master/data_reader.py", line 201, in create_dataset_files
self._dump_all_test_images(set_name)
File "/content/drive/MyDrive/lsc-cnn-master/data_reader.py", line 828, in _dump_all_test_images
crops = self._get_one_image_test_crops(data)
File "/content/drive/MyDrive/lsc-cnn-master/data_reader.py", line 556, in _get_one_image_test_crops
<= data[0].shape[WIDTH_IDX] and
IndexError: tuple index out of range
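One common cause (an assumption): some dataset images are grayscale, so the loaded array is 2-D and indexing a third axis via shape raises IndexError: tuple index out of range. A minimal one-off pass to force 3-channel images:

import os
from PIL import Image

img_dir = '../dataset/ST_partA/test_data/images'   # adjust to your layout
for name in os.listdir(img_dir):
    path = os.path.join(img_dir, name)
    img = Image.open(path)
    if img.mode != 'RGB':                  # grayscale (or palette) image
        img.convert('RGB').save(path)      # rewrite as 3-channel RGB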

Shape error when running the test on Part A

Training works as README.md describes. However, when I then try to run the test code as below,

python3 main.py --dataset="parta" --gpu=2 --start-epoch=13 --epochs=13 --threshold=0.21

the following error occurs:

Traceback (most recent call last):
File "main.py", line 1110, in
train()
File "main.py", line 1049, in train
log_path=model_save_path)
File "main.py", line 998, in train_networks
_, txt = test_lsccnn(test_funcs, dataset, 'test', network, './models/dump_test', thresh=threshold)
File "main.py", line 653, in test_lsccnn
for e_idx, e_iter in enumerate(e):
File "/home/kiichi.otsuka/lsc-cnn/data_reader.py", line 281, in iterate_over_test_data
pred_maps_full_size = self._test_one_image(crops, test_function)
File "/home/kiichi.otsuka/lsc-cnn/data_reader.py", line 766, in _test_one_image
roi_rel_slice[2]: roi_rel_slice[3]]
ValueError: operands could not be broadcast together with shapes (8,76,76) (8,4,76,112) (8,76,76)

It seems the shape of the result is incorrect. Could you give me a hint for solving this error?

Error when running "python main.py --dataset="parta" --gpu=2 --start-epoch=0 --epochs=30"

python main.py --dataset="parta" --gpu=2 --start-epoch=0 --epochs=30
In data_reader.__init__: Can't read meta data in ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30; call create_dataset_files.
{'test': ['../dataset/ST_partA/test_data/images', '../dataset/ST_partA/test_data/ground_truth'], 'train': ['../dataset/ST_partA/train_data/images', '../dataset/ST_partA/train_data/ground_truth']} <data_reader.DataReader object at 0x7f2d2b928ef0>
CREATING DATASET...
In data_reader.create_dataset_files: Deleted old ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test.
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test created.
In data_reader.create_dataset_files: Deleted old ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/train.
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/train created.
In data_reader.create_dataset_files: Deleted old ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test_valid.
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test_valid created.
Processing IMG_1.jpg ...
Traceback (most recent call last):
File "main.py", line 1108, in
train()
File "main.py", line 1029, in train
test_batch_size=4)
File "/home/fire/lsc-cnn/data_reader.py", line 200, in create_dataset_files
self._dump_all_test_images(set_name)
File "/home/fire/lsc-cnn/data_reader.py", line 826, in _dump_all_test_images
data = self._read_image_and_gt_prediction(paths, file_name, kernel)
File "/home/fire/lsc-cnn/data_reader.py", line 789, in read_image_and_gt_prediction
'GT
' + tmp + '.mat'))
File "/home/fire/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 141, in loadmat
MR, file_opened = mat_reader_factory(file_name, appendmat, **kwargs)
File "/home/fire/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 64, in mat_reader_factory
byte_stream, file_opened = _open_file(file_name, appendmat)
TypeError: 'NoneType' object is not iterable

Dataset path error for UCF-QNRF

I tried to run the framework with the UCF-QNRF dataset, but every time I start the test or training process I get the following error:

root@bf38862e4add:/lsc# python3 main.py --dataset="ucfqnrf" --gpu=1 --start-epoch=46 --epochs=46 --threshold=0.20

In data_reader.__init__: Can't read meta data in ../dataset/qnrf_dotmaps_predictionScale_2; call create_dataset_files.
{'test': ['../dataset/UCF-QNRF_ECCV18/Test/', '../dataset/UCF-QNRF_ECCV18/Test/'], 'train': ['../dataset/UCF-QNRF_ECCV18/Train/', '../dataset/UCF-QNRF_ECCV18/Train/']} <data_reader.DataReader object at 0x7fccccc43ef0>
CREATING DATASET...
In data_reader.create_dataset_files: Deleted old ../dataset/qnrf_dotmaps_predictionScale_2/test.
In data_reader.create_dataset_files: ../dataset/qnrf_dotmaps_predictionScale_2/test created.
In data_reader.create_dataset_files: Deleted old ../dataset/qnrf_dotmaps_predictionScale_2/train.
In data_reader.create_dataset_files: ../dataset/qnrf_dotmaps_predictionScale_2/train created.
In data_reader.create_dataset_files: Deleted old ../dataset/qnrf_dotmaps_predictionScale_2/test_valid.
In data_reader.create_dataset_files: ../dataset/qnrf_dotmaps_predictionScale_2/test_valid created.
Processing img_0001.jpg ...
Traceback (most recent call last):
  File "main.py", line 1108, in <module>
    train()
  File "main.py", line 1029, in train
    test_batch_size=4)
  File "/lsc/data_reader.py", line 200, in create_dataset_files
    self._dump_all_test_images(set_name)
  File "/lsc/data_reader.py", line 826, in _dump_all_test_images
    data = self._read_image_and_gt_prediction(paths, file_name, kernel)
  File "/lsc/data_reader.py", line 789, in _read_image_and_gt_prediction
    'GT_' + tmp + '.mat'))
  File "/usr/local/lib/python3.6/dist-packages/scipy/io/matlab/mio.py", line 141, in loadmat
    MR, file_opened = mat_reader_factory(file_name, appendmat, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/scipy/io/matlab/mio.py", line 64, in mat_reader_factory
    byte_stream, file_opened = _open_file(file_name, appendmat)
TypeError: 'NoneType' object is not iterable

My setup looks like this:

/lsc-cnn
   -- network.py
   -- main.py
   -- ....
/dataset
|-- UCF-QNRF_ECCV18
|   |-- Test
|   `-- Train
`-- qnrf_dotmaps_predictionScale_2
    |-- test
    |   |-- 0
    |   `-- 1
    |-- test_valid
    `-- train

I already tried to separate the images and mat files, but I still get the same error.

dataset_paths = {'test': ['../dataset/UCF-QNRF_ECCV18/Test/images',
                          '../dataset/UCF-QNRF_ECCV18/Test/ground-truth'],
                 'train': ['../dataset/UCF-QNRF_ECCV18/Train/images',
                           '../dataset/UCF-QNRF_ECCV18/Train/ground-truth']}

Can you please help me to specify the correct settings and paths?
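A scipy loadmat call failing with TypeError: 'NoneType' object is not iterable on old scipy versions usually just means the file path does not exist. Before changing code, a quick sanity check that every image has a matching ground-truth file, assuming the 'GT_' + name + '.mat' naming from the traceback:

import os

img_dir = '../dataset/UCF-QNRF_ECCV18/Test'        # adjust to your layout
missing = [n for n in os.listdir(img_dir)
           if n.lower().endswith('.jpg') and
           not os.path.exists(os.path.join(img_dir, 'GT_' + os.path.splitext(n)[0] + '.mat'))]
print('%d images without a matching GT_*.mat' % len(missing))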

Facing error in offset_sum

Hey, thanks for open-sourcing this.

I am a beginner; this error is probably very basic, but I really don't know how to handle it.

line 15
def offset_sum(long long [:, :] sorted_idx, double [:, :] d, long long n, long long m, long long max_dist):
^
SyntaxError: invalid syntax
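The typed arguments (long long [:, :]) are Cython syntax, not plain Python, so the file cannot be run or imported directly; it has to be compiled as a Cython extension (see the setup sketch in the error_function issue above). Alternatively, a minimal pyximport sketch that compiles the module on first import:

import numpy as np
import pyximport

# Hypothetical quick check, run from the repo root.
pyximport.install(setup_args={'include_dirs': np.get_include()})
import error_function        # compiles error_function.pyx on first import
print(error_function.offset_sum)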

Can't execute the code for the UCF-QNRF dataset

Dear paper authors, I am not able to execute the code for the UCF-QNRF dataset. I am running the code on Windows with PyCharm Professional 2019. The code for the ShanghaiTech dataset works fine, but I get the following error with the other dataset:

In data_reader.__init__: Can't read meta data in ../dataset/qnrf_dotmaps_predictionScale_2; call genDatasetFiles.

{'test': ['../dataset/UCF-QNRF_ECCV18/Test/images', '../dataset/UCF-QNRF_ECCV18/Test/ground_truth'], 'train': ['../dataset/UCF-QNRF_ECCV18/Train/images', '../dataset/UCF-QNRF_ECCV18/Train/ground_truth']} <readData.DataReader object at 0x0000019C881DFE88>
CREATING DATASET...
In data_reader.genDatasetFiles: Deleted old ../dataset/qnrf_dotmaps_predictionScale_2\test.
In data_reader.genDatasetFiles: ../dataset/qnrf_dotmaps_predictionScale_2\test created.
In data_reader.genDatasetFiles: Deleted old ../dataset/qnrf_dotmaps_predictionScale_2\train.
In data_reader.genDatasetFiles: ../dataset/qnrf_dotmaps_predictionScale_2\train created.
In data_reader.genDatasetFiles: Deleted old ../dataset/qnrf_dotmaps_predictionScale_2\test_valid.
In data_reader.genDatasetFiles: ../dataset/qnrf_dotmaps_predictionScale_2\test_valid created.
Processing img_0001.jpg ...
Traceback (most recent call last):
File "D:/crowd_count_code_sumit/crowd_code/mainPro.py", line 1122, in
train()
File "D:/crowd_count_code_sumit/crowd_code/mainPro.py", line 1015, in train
test_batch_size=1)
File "D:\crowd_count_code_sumit\crowd_code\readData.py", line 198, in genDatasetFiles
self._dump_all_test_images(set_name)
File "D:\crowd_count_code_sumit\crowd_code\readData.py", line 824, in _dump_all_test_images
data = self._read_image_and_gt_prediction(paths, file_name, kernel)
File "D:\crowd_count_code_sumit\crowd_code\readData.py", line 788, in _read_image_and_gt_prediction
gt_annotation_points = data_mat['image_info'][0, 0]['location'][0, 0]
KeyError: 'image_info'

Process finished with exit code 1

Please help me remove this error.
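An assumption about the cause: ShanghaiTech .mat files store points under image_info, while UCF-QNRF annotation files store them under annPoints, so the ShanghaiTech-specific line shown in the traceback fails on UCF-QNRF. A hedged loader sketch handling both layouts:

from scipy.io import loadmat

def load_gt_points(mat_path):
    # Return head annotations from either .mat layout (both keys assumed).
    data_mat = loadmat(mat_path)
    if 'image_info' in data_mat:     # ShanghaiTech format
        return data_mat['image_info'][0, 0]['location'][0, 0]
    if 'annPoints' in data_mat:      # UCF-QNRF format
        return data_mat['annPoints']
    raise KeyError('no known annotation key in ' + mat_path)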

Error: Can't read meta data; call create_dataset_files

Hello, after downloading the st_parta dataset and running the following command:
python main.py --dataset="parta" --gpu=2 --start-epoch=13 --epochs=13 --threshold=0.21

I get the following error:

In data_reader.__init__: Can't read meta data in ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30; call create_dataset_files.
{'test': ['../dataset/ST_partA/test_data/images', '../dataset/ST_partA/test_data/ground_truth'], 'train': ['../dataset/ST_partA/train_data/images', '../dataset/ST_partA/train_data/ground_truth']} <data_reader.DataReader object at 0x7f237be32ba8>
CREATING DATASET...
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30 does not exists; but created.
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test created.
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/train created.
In data_reader.create_dataset_files: ../dataset/stparta_dotmaps_predscale0.5_rgb_ddcnn++_test_val_30/test_valid created.
Processing IMG_1.jpg ...
Traceback (most recent call last):
  File "main.py", line 1096, in <module>
    train()
  File "main.py", line 1017, in train
    test_batch_size=4)
  File "/media/mounir/a1340c42-9115-49f7-9b0b-61e804384f0e/PycharmProjectsHDD/LSC-CNN-Counting/lsc-cnn/data_reader.py", line 200, in create_dataset_files
    self._dump_all_test_images(set_name)
  File "/media/mounir/a1340c42-9115-49f7-9b0b-61e804384f0e/PycharmProjectsHDD/LSC-CNN-Counting/lsc-cnn/data_reader.py", line 826, in _dump_all_test_images
    data = self._read_image_and_gt_prediction(paths, file_name, kernel)
  File "/media/mounir/a1340c42-9115-49f7-9b0b-61e804384f0e/PycharmProjectsHDD/LSC-CNN-Counting/lsc-cnn/data_reader.py", line 789, in _read_image_and_gt_prediction
    'GT_' + tmp + '.mat'))
  File "/home/mounir/anaconda3/envs/pytorch-gpu/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 141, in loadmat
    MR, file_opened = mat_reader_factory(file_name, appendmat, **kwargs)
  File "/home/mounir/anaconda3/envs/pytorch-gpu/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 64, in mat_reader_factory
    byte_stream, file_opened = _open_file(file_name, appendmat)
TypeError: 'NoneType' object is not iterable

Can you tell me what this error is due to?

AttributeError: 'float' object has no attribute 'item'

Hello! I tried to train for epochs 0 to 3, and after finishing epoch 0 a message appeared asking me to re-run the code. Re-running training with the same command, I get the error in the title. The full message is the following:

Training0...
LR: 0.001000000000.
Classification Model
/home/user/miniconda3/envs/lsc/lib/python3.6/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='elementwise_mean' instead.
warnings.warn(warning.format(ret))
Traceback (most recent call last):
File "main.py", line 1108, in
train()
File "main.py", line 1049, in train
log_path=model_save_path)
File "main.py", line 897, in train_networks
losses, hist_boxes, hist_boxes_gt = train_funcs[0](Xs, Ys, hist_boxes, hist_boxes_gt, loss_weights, network)
File "main.py", line 343, in train_function
losses.append(loss_.item())
AttributeError: 'float' object has no attribute 'item'

Any clues would be appreciated!
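Not verified against the authors' setup, but this usually comes from a PyTorch version mismatch: on some versions the reduced loss is already a Python float, which has no .item(). A minimal defensive sketch around the line main.py quotes:

# Hypothetical patch near main.py line 343: accept tensor or float losses.
value = loss_.item() if hasattr(loss_, 'item') else float(loss_)
losses.append(value)

Pinning PyTorch to the version the repo was developed against may also make the original line work as-is.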

Training data

Hi

I adjusted the code to train my own dataset.
What is the ideal dataset size you would recommend to train with?

I tried to train with a reduced OpenImages dataset of 132 GB of pictures, and it created about 2.2 TB during preprocessing; it is still processing pictures. Would you recommend a smaller dataset?

Thanks

How can we test the code on CPU only?

I tried to replace all the torch instructions that involve CUDA in the main script, but each time I run the code to test a model, it still looks for a GPU.
Can you tell me how I can run the code on CPU only to test a model?
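A generic PyTorch recipe (not repo-specific): load checkpoints with map_location='cpu' and keep all tensors on the CPU. A sketch, where the checkpoint path and model class are placeholders:

import torch

device = torch.device('cpu')
checkpoint = torch.load('models/train2/snapshots/scale_4_epoch_24.pth',
                        map_location=device)   # remap CUDA tensors to CPU
# network = LSCCNN()                           # hypothetical model class
# network.load_state_dict(checkpoint)
# network.to(device).eval()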

How to test the model?

I notice that "main.py" file don't have "skip-init-tests" parameter. It has only code about training the model.How to test the "scale_4_epoch_24.pth"?

usage: main.py [-h] [--epochs N] [--gpu GPU] [--start-epoch N] [-b N]
[--patches N] [--dataset DATASET] [--lr LR] [--momentum M]
[--threshold M] [--weight-decay W] [--mle] [--lsccnn]
[--trained-model PATH [PATH ...]]
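Judging from the other issues in this list, testing is driven through main.py itself by pointing --start-epoch and --epochs at the snapshot's epoch, for example (values illustrative; match them to your snapshot and dataset):

python main.py --dataset="partb" --gpu=0 --start-epoch=24 --epochs=24 --threshold=0.25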
