
cfcnet's Issues

train NYUDataset error

Hello, I tried to run your train_depth_complete.py but an error occurred. I ran the code on Windows, set num_threads to 0, and set the paths in options.py.

As a beginner, I don't know whether this is caused by not running on a Linux system or by not setting the run parameters correctly. I hope you can help me find the problem. Thank you very much. Here is the output of my run:

D:\Anaconda\envs\pytorch\python.exe D:/科研资料/CCA/CFCNet-master/train_depth_complete.py
----------------- Options ---------------
batch_size: 16
checkpoints_dir: ./checkpoints
continue_train: False
epoch: latest
epoch_count: 1
gpu_ids: 0
init_gain: 0.02
init_type: xavier
isTrain: True [default: None]
lambda_L1: 100.0
lr: 0.001
lr_decay_epochs: 100
lr_decay_iters: 5000000
lr_gamma: 0.9
lr_policy: lambda
max_dataset_size: inf
model: DCCA_sparse
momentum: 0.9
nP: 500
name: experiment_name
niter: 400
no_flip: True
norm: instance
num_threads: 0
phase: train
print_freq: 1
resize_or_crop: none
save_epoch_freq: 1
seed: 0
serial_batches: False
suffix:
test_path: D:/科研资料/CCA/CFCNet-master/nyudepthv2/val/
train_path: D:/科研资料/CCA/CFCNet-master/nyudepthv2/train/
verbose: False
weight_decay: 0.0005
----------------- End -------------------
Found 47584 images in train folder.
Found 654 images in val folder.
----------------- Options ---------------
batch_size: 16
checkpoints_dir: ./checkpoints
continue_train: False
epoch: latest
epoch_count: 1
gpu_ids: 0
init_gain: 0.02
init_type: xavier
isTrain: True [default: None]
lambda_L1: 100.0
lr: 0.001
lr_decay_epochs: 100
lr_decay_iters: 5000000
lr_gamma: 0.9
lr_policy: lambda
max_dataset_size: inf
model: DCCA_sparse
momentum: 0.9
nP: 500
name: experiment_name
niter: 400
no_flip: True
norm: instance
num_threads: 0
phase: train
print_freq: 1
resize_or_crop: none
save_epoch_freq: 1
seed: 0
serial_batches: False
suffix:
test_path: D:/科研资料/CCA/CFCNet-master/nyudepthv2/val/
train_path: D:/科研资料/CCA/CFCNet-master/nyudepthv2/train/
verbose: False
weight_decay: 0.0005
----------------- End -------------------
#training images = 2974
#test images = 654
initialize network with xavier
initialize network with xavier
model [DCCASparseNetModel] was created
---------- Networks initialized -------------
[Network DCCASparseNet] Total number of parameters : 40.417 M

Traceback (most recent call last):
  File "D:/科研资料/CCA/CFCNet-master/train_depth_complete.py", line 72, in <module>
    nn = next(iterator)
  File "D:\Anaconda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __next__
    data = self._next_data()
  File "D:\Anaconda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "D:\Anaconda\envs\pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "D:\Anaconda\envs\pytorch\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "D:\Anaconda\envs\pytorch\lib\site-packages\torch\utils\data\_utils\collate.py", line 83, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "D:\Anaconda\envs\pytorch\lib\site-packages\torch\utils\data\_utils\collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [6, 224, 224] at entry 0 and [8, 224, 224] at entry 3

Process finished with exit code 1
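
One way to narrow this down (a sketch with placeholder names, not code from the repository): load the training set with batch_size=1 so default_collate never has to stack samples, and record which indices produce a different channel count. Here train_dataset stands for whatever dataset object train_depth_complete.py builds.

from torch.utils.data import DataLoader

# Sketch only: `train_dataset` is a placeholder for the dataset built by the
# repo's data loading code. batch_size=1 sidesteps the failing stack(), so each
# sample's shapes can be inspected on its own.
loader = DataLoader(train_dataset, batch_size=1, shuffle=False, num_workers=0)
seen = {}
for i, sample in enumerate(loader):
    tensors = sample if isinstance(sample, (list, tuple)) else [sample]
    shapes = tuple(tuple(t.shape) for t in tensors if hasattr(t, "shape"))
    seen.setdefault(shapes, []).append(i)

# Shape combinations that occur for only a few indices point at the offending
# samples (here, the ones stacked to 6 channels instead of 8, or the reverse).
for shapes, indices in seen.items():
    print(shapes, "->", len(indices), "samples, first at index", indices[0])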

Question about SAConv

Thanks for your great work.
In SAConv:

class SAConv(nn.Module):
	# Convolution layer for sparse data
	def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, bias=True):
		super(SAConv, self).__init__()
		self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=padding, dilation=dilation, bias=False)
		self.if_bias = bias
		if self.if_bias:
			self.bias = nn.Parameter(torch.zeros(out_channels).float(), requires_grad=True)
		self.pool = nn.MaxPool2d(kernel_size, stride=stride, padding=padding, dilation=dilation)
		nn.init.kaiming_normal_(self.conv.weight, mode='fan_out', nonlinearity='relu')
		self.pool.require_grad = False

	def forward(self, input):
		x, m = input
		x = x * m
		x = self.conv(x)
		weights = torch.ones(torch.Size([1, 1, 3, 3])).cuda()
		mc = F.conv2d(m, weights, bias=None, stride=self.conv.stride, padding=self.conv.padding, dilation=self.conv.dilation)
		mc = torch.clamp(mc, min=1e-5)
		mc = 1. / mc * 9

		if self.if_bias:
			x = x + self.bias.view(1, self.bias.size(0), 1, 1).expand_as(x)
		m = self.pool(m)

		return x, m

It seems that weights and the normalizer mc are computed, but mc is never applied to x.
What does that mean?
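
For comparison, below is a minimal sketch of how such a normalizer is usually applied in sparsity-invariant convolutions (Uhrig et al., 2017): the valid-pixel count mc is multiplied back into the convolution output. This is only an illustration of where mc would plug in, not the authors' code, and it assumes a square kernel as in the snippet above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SAConvNormalized(nn.Module):
	# Hypothetical variant of SAConv in which mc is actually used to renormalize
	# the output by the number of valid pixels under each kernel window.
	def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, bias=True):
		super().__init__()
		self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, padding=padding, dilation=dilation, bias=False)
		self.if_bias = bias
		if self.if_bias:
			self.bias = nn.Parameter(torch.zeros(out_channels))
		self.pool = nn.MaxPool2d(kernel_size, stride=stride, padding=padding, dilation=dilation)
		self.kernel_size = kernel_size

	def forward(self, input):
		x, m = input
		x = self.conv(x * m)
		# Count valid mask pixels under each window, then renormalize the output.
		weights = torch.ones(1, 1, self.kernel_size, self.kernel_size, device=x.device)
		mc = F.conv2d(m, weights, stride=self.conv.stride, padding=self.conv.padding, dilation=self.conv.dilation)
		mc = torch.clamp(mc, min=1e-5)
		x = x * (self.kernel_size ** 2 / mc)  # this is where mc would be applied
		if self.if_bias:
			x = x + self.bias.view(1, -1, 1, 1)
		m = self.pool(m)
		return x, m

Whether CFCNet drops this normalization on purpose is exactly what the question above is asking; the sketch only shows the conventional place for it.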

Pretrained NYU error

I downloaded the pretrained NYU weights (500.pth) and edited line 138 of base_model.py to

state_dict = torch.load('.../500.pth', map_location=str(self.device))

When I run python evaluate.py --name nyu --checkpoints_dir ... --train_path ~/nyudepthv2/nyudepthv2/val/ --test_path ~/nyudepthv2/nyudepthv2/val/

I get:

----------------- Options ---------------
           batch_size: 16
      checkpoints_dir: /home/haozhen/Downloads/500.pth	[default: ./checkpoints]
       continue_train: False                         
                epoch: latest                        
          epoch_count: 1                             
              gpu_ids: 0                             
            init_gain: 0.02                          
            init_type: xavier                        
              isTrain: False                         	[default: None]
                   lr: 0.001                         
      lr_decay_epochs: 100                           
       lr_decay_iters: 5000000                       
             lr_gamma: 0.9                           
            lr_policy: lambda                        
     max_dataset_size: inf                           
                model: DCCA_sparse                   
             momentum: 0.9                           
                   nP: 500                           
                 name: nyu                           	[default: experiment_name]
                niter: 400                           
              no_flip: True                          
                 norm: instance                      
          num_threads: 8                             
                phase: train                         
           print_freq: 1                             
       resize_or_crop: none                          
      save_epoch_freq: 1                             
                 seed: 0                             
       serial_batches: False                         
               suffix:                               
            test_path: /home/haozhen/Documents/nyudepthv2/val	[default: None]
           train_path: /home/haozhen/Documents/nyudepthv2/val	[default: None]
              verbose: False                         
         weight_decay: 0.0005                        

----------------- End -------------------
Found 654 images in val folder.
#test images = 654
initialize network with xavier
initialize network with xavier
model [DCCASparseNetModel] was created
loading the model from /home/haozhen/Downloads/500.pth/nyu/latest_net_DCCASparseNet.pth
---------- Networks initialized -------------
[Network DCCASparseNet] Total number of parameters : 40.417 M

torch.Size([1, 6, 224, 224]) torch.Size([1, 1, 224, 224])
Traceback (most recent call last):
  File "/home/haozhen/codes/CFCNet/evaluate.py", line 99, in <module>
    model.set_new_input(data,target)
  File "/home/haozhen/codes/CFCNet/models/DCCA_sparse_model.py", line 68, in set_new_input
    self.mask = input[:,7,:,:].to(self.device).unsqueeze(1)
IndexError: index 7 is out of bounds for dimension 1 with size 6

How to do ORB sparse sampling?

Thanks for your work!
As shown in the paper, there are three strategies for sampling the dense/semi-dense depth maps. I read your code and understand two of them (Uniform & Stereo), but the ORB sparsifier is not shown in the code. Could you tell me how to do it?
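
For reference, here is a minimal sketch of one way an ORB-based sparsifier could work, assuming OpenCV is available: detect ORB keypoints on the RGB image and keep the dense depth only at those locations. This is a guess at the intended sampling, not code from the repository, and the function and variable names are purely illustrative.

import cv2
import numpy as np

def orb_sparsify(rgb, dense_depth, n_points=500):
    # rgb: HxWx3 uint8 image; dense_depth: HxW float depth map.
    # Returns a sparse depth map that is non-zero only at ORB keypoint locations.
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    orb = cv2.ORB_create(nfeatures=n_points)
    keypoints = orb.detect(gray, None)
    sparse_depth = np.zeros_like(dense_depth)
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if 0 <= v < dense_depth.shape[0] and 0 <= u < dense_depth.shape[1]:
            sparse_depth[v, u] = dense_depth[v, u]
    return sparse_depth

In practice one would probably also skip keypoints that land on invalid (zero) depth pixels, so that the number of sampled points stays comparable to the other two strategies.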

Pretrained NYU inference error

I downloaded the pretrained NYU weights (500.pth) and placed them at nyu/latest_net_DCCASparseNet.pth.

When I run python evaluate.py --name nyu --checkpoints_dir . --train_path ~/nyudepthv2/nyudepthv2/train/ --test_path ~/nyudepthv2/nyudepthv2/val/
I get:

Found 654 images in val folder.
#test images = 654
initialize network with xavier
initialize network with xavier
model [DCCASparseNetModel] was created
loading the model from ./nyu/latest_net_DCCASparseNet.pth
---------- Networks initialized -------------
[Network DCCASparseNet] Total number of parameters : 40.417 M
-----------------------------------------------
Traceback (most recent call last):
  File "evaluate.py", line 95, in <module>
    model.test_depth_evaluation()
  File "/home/pcheng/CFCNet/models/DCCA_sparse_model.py", line 99, in test_depth_evaluation
    self.test_result.evaluate(self.depth_est.data, self.depth_image.data)
  File "/home/pcheng/CFCNet/models/DCCA_sparse_model.py", line 161, in evaluate
    output = output[valid_mask]
IndexError: The shape of the mask [1, 1, 228, 141] at index 3 does not match the shape of the indexed tensor [1, 1, 228, 912] at index 3
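
A small sanity-check sketch (placeholder names, with the attribute names taken from the traceback above, not code from the repo) that would surface this mismatch before the masked indexing: compare the prediction and ground-truth shapes right after the forward pass in evaluate.py.

# Hypothetical check before model.test_depth_evaluation(); `model` is the object
# created in evaluate.py, and depth_est / depth_image are the attributes named
# in the traceback above.
pred = model.depth_est.data
gt = model.depth_image.data
assert pred.shape == gt.shape, (
    "prediction %s vs ground truth %s" % (tuple(pred.shape), tuple(gt.shape)))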
