
shape-guided's People

Contributors

jayliu0313, sam-chu-07

shape-guided's Issues

A minor bug in cut_patches.py

Hi, excellent work.
I noticed that cut_patches.py normalizes the point cloud with the following line:

pointcloud_s_t = pointcloud_s_t / (np.array([np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0]), np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0]), np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0])]))

Should it instead be:

pointcloud_s_t = pointcloud_s_t / (np.array([np.max(pointcloud_s[:,0]) - np.min(pointcloud_s[:,0]), np.max(pointcloud_s[:,1]) - np.min(pointcloud_s[:,1]), np.max(pointcloud_s[:,2]) - np.min(pointcloud_s[:,2])]))
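For reference, here is a minimal self-contained sketch of the difference (variable names are illustrative, and the translation step is assumed, not copied from the repository): the original line divides all three axes by the x-extent, while the fix divides each axis by its own extent.

    import numpy as np

    pointcloud_s = np.random.rand(500, 3) * [1.0, 2.0, 4.0]   # unequal extents per axis
    pointcloud_s_t = pointcloud_s - pointcloud_s.min(axis=0)  # translated copy (assumed)

    # Buggy: every axis is divided by the x-extent
    extent_x = pointcloud_s[:, 0].max() - pointcloud_s[:, 0].min()
    buggy = pointcloud_s_t / np.array([extent_x, extent_x, extent_x])

    # Fixed: each axis is divided by its own extent
    extents = pointcloud_s.max(axis=0) - pointcloud_s.min(axis=0)
    fixed = pointcloud_s_t / extents

    print(buggy.max(axis=0))  # y and z can exceed 1 when their extents exceed x's
    print(fixed.max(axis=0))  # approximately [1, 1, 1]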

The poor results

I didn't train the 3D expert model myself; I directly used the best checkpoint of the 3D expert model in ./checkpoint. However, I obtained relatively poor results, as shown in the figure below. I wonder whether the checkpoint might be the cause. Also, if I want to train the 3D expert model myself to get optimal parameters, what should I do? Could you provide more details? Thank you for the excellent work; I'm looking forward to your response.

[Figure: Shape-Guided result]

What difference does the parameter 'split' make in the function 'get_feature'?

def get_feature(self, points_all, points_idx, data_id, split='test'):
    
    total_feature = None
    total_rgb_feature_indices = []
    
    for patch in range(len(points_all)):
        points = points_all[patch].reshape(-1, self.POINT_NUM, 3)
        indices = points_idx[patch].reshape(self.POINT_NUM)

        # compute the corresponding locations of the RGB features
        rgb_f_indices = get_relative_rgb_f_indices(indices, self.image_size, 28)
        # extract sdf features
        feature = self.sdf_model.get_feature(points.to(self.device))
        
        if patch == 0:
            total_feature = feature
        else:
            total_feature = torch.cat((total_feature, feature), 0)
            
        if split == 'test':
            total_rgb_feature_indices.append(rgb_f_indices)
        elif split == 'train':
            total_rgb_feature_indices.append(data_id * 784 + rgb_f_indices)
        else:
            raise KeyError(f"unknown split: {split}")
        
    return total_feature, total_rgb_feature_indices

What difference does the parameter 'split' make in the function 'get_feature'?
Why, when split is 'train', is data_id * 784 added to total_rgb_feature_indices?
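One observation that may help (an assumption, not an answer from the authors): 784 = 28 × 28, which matches the 28 passed to get_relative_rgb_f_indices, so the RGB feature map presumably has 784 spatial locations per image. If the features of all training images are stacked into a single bank, per-image indices must be offset by data_id * 784 to land in the right image's block. A sketch of that presumed layout (all names hypothetical):

    import torch

    # Hypothetical layout: 28x28 RGB feature maps of all training images,
    # flattened and stacked into one bank of shape (num_images * 784, C).
    num_images, C = 10, 64
    bank = torch.randn(num_images * 784, C)

    data_id = 3                                 # which training image a patch came from
    rgb_f_indices = torch.tensor([0, 27, 783])  # per-image indices in [0, 784)

    # split == 'test': only one image's features are present, use local indices
    local_features = bank[:784][rgb_f_indices]
    # split == 'train': offset into the concatenated bank of all training images
    global_features = bank[data_id * 784 + rgb_f_indices]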

Where is “core.modules.model”?

In the 'sdf_features.py' file under the 'core' directory, the import statement on line 6, 'from core.modules.model import *', fails. It appears that 'core.modules.model' is missing; both the 'load_encoder' function on line 22 and the 'local_decoder' function on line 24 are presumably defined in that module. Could you please advise me on how to address this issue? Thanks in advance.

An OS error is raised when I try to run main.py

I followed your instructions for data preprocessing and cut_paste, but when I run the main script I encounter an unexpected error. Here is the error output; could you help me fix it?

SDF Device: cuda
sdf device cuda

Running on class bagel

Extracting train features for class bagel:   0%|          | 0/244 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/wt/miniconda3/envs/shape_guide/lib/python3.8/multiprocessing/queues.py", line 239, in _feed
  File "/home/wt/miniconda3/envs/shape_guide/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps
  File "/home/wt/miniconda3/envs/shape_guide/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 348, in reduce_storage
  File "/home/wt/miniconda3/envs/shape_guide/lib/python3.8/multiprocessing/reduction.py", line 198, in DupFd
  File "/home/wt/miniconda3/envs/shape_guide/lib/python3.8/multiprocessing/resource_sharer.py", line 48, in __init__
OSError: [Errno 24] Too many open files
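This error is a common failure mode of PyTorch's default file-descriptor sharing strategy when DataLoader workers exchange many tensors, and is not specific to this repository. Two general workarounds are raising the open-file limit in the shell (e.g. ulimit -n 4096) or switching the sharing strategy before creating the loaders:

    # General PyTorch workaround (not repo-specific): share tensors through the
    # file system instead of keeping one file descriptor open per tensor.
    import torch.multiprocessing
    torch.multiprocessing.set_sharing_strategy('file_system')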

PCP Model instead of PointNet

According to the paper, PointNet (2017) is used for 3D feature extraction.
However, the code here uses a modified PCP model.
Could you explain why that is, and why PointNet was not used?

main.py error

[screenshot of the error]
Hello, when I run main.py for validation on Windows I hit a network problem, and even going through a VPN doesn't solve it. Is there a known fix?

Can we train the 3D model not one sample at a time but one patch at a time?

Hey, @jayliu0313
Can we train the 3D model not one sample at a time but one patch at a time?
I think the PointNet and the NIF in Shape-Guided are designed to operate on a single patch, so we don't have to cut each sample's point cloud into 341 × 500 × 3 (patches × point_num × xyz); we can concatenate all the samples along the patch dimension. With 200 samples the result would be 68200 × 500 × 3, and we could then increase the batch size for training the 3D model, maybe to 32 (see the sketch below).
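A rough sketch of the proposed batching, with shapes taken from the numbers above (a small sample count is used here to keep the example light):

    import torch

    # 4 samples for illustration; with the 200 assumed above, flat is (68200, 500, 3)
    all_patches = torch.randn(4, 341, 500, 3)  # samples x patches x points x xyz
    flat = all_patches.reshape(-1, 500, 3)     # (4 * 341, 500, 3): one row per patch

    batch_size = 32
    for start in range(0, flat.shape[0], batch_size):
        batch = flat[start:start + batch_size]  # (<=32, 500, 3) patches per step
        # a real training step would feed `batch` to the 3D expert model here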

RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

Traceback (most recent call last):
  File "/private/Shape-Guided-main/main.py", line 92, in <module>
    patchcore.align()
  File "/private/Shape-Guided-main/core/shape_guide_core.py", line 111, in align
    self.methods.predict_align_data(sample, align_data_id)
  File "/private/Shape-Guided-main/core/rgb_sdf_feature.py", line 163, in predict_align_data
    rgb_map, rgb_s = self.Dict_compute_rgb_map(rgb_features_size28, rgb_features_indices, lib_idices, mode='alignment')
  File "/private/Shape-Guided-main/core/features.py", line 118, in Dict_compute_rgb_map
    knn_features = shape_guide_rgb_features_28[knn_idx[patch]]
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

How can I solve this problem?
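The usual cause is fancy-indexing a CPU tensor with a CUDA index tensor (or the reverse). A generic fix, using illustrative names since the repository's variable layout may differ, is to move the index onto the indexed tensor's device before indexing:

    import torch

    features = torch.randn(784, 64)  # the indexed tensor, on CPU here
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    knn_idx = torch.tensor([0, 5, 9], device=device)  # index tensor, possibly on CUDA

    # Indexing fails when the devices differ; aligning them fixes it:
    knn_features = features[knn_idx.to(features.device)]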

Cannot increase the batch size

def get_data_loader(split, class_name, img_size, datasets_path, grid_path, shuffle=False):
    if split in ['train', 'validation']:
        dataset = MVTec3DTrain(split=split, class_name=class_name, img_size=img_size, grid_path=grid_path)
    elif split in ['test']:
        dataset = MVTec3DTest(class_name=class_name, img_size=img_size, dataset_path=datasets_path, grid_path=grid_path)
    data_loader = DataLoader(dataset=dataset, batch_size=1, shuffle=shuffle, num_workers=1, drop_last=False, pin_memory=True)
    return data_loader

When I tried to increase the batch size, an error occurred saying the lists must all be the same length. Why is batch_size=1 used here? (A possible workaround is sketched below.)
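A plausible reason (an assumption, not confirmed by the authors) is that each sample yields a different number of patches, so the default collate function cannot stack samples into one tensor. A list-returning collate_fn is one general way to batch such variable-length samples:

    import torch
    from torch.utils.data import DataLoader, Dataset

    class VarPatchDataset(Dataset):
        # Dummy dataset: each item has a different number of patches.
        def __len__(self):
            return 8

        def __getitem__(self, i):
            return torch.randn(3 + i, 500, 3)  # (num_patches, point_num, xyz)

    def list_collate(batch):
        # Keep the samples as a list instead of stacking them, since
        # differing patch counts cannot be stacked into a single tensor.
        return list(batch)

    loader = DataLoader(VarPatchDataset(), batch_size=4, collate_fn=list_collate)
    for batch in loader:
        print([x.shape for x in batch])  # mixed patch counts within one batch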
