semanticstf's People

Contributors: weihao1115

semanticstf's Issues

Question about SemanticKITTI→SemanticSTF Evaluation

Hi, and thanks for sharing the good work!

I have a general question regarding the evaluation setup shown in Table 2 of the paper.
When you train your model on SemanticKITTI, you classify points into one of the 19 classes (car, ..., traf.). However, SemanticSTF also has 4 adverse-weather classes in addition to the 19 base classes.
How do you segment those points if your model outputs only 19 logits?

The same applies to the SynLiDAR→SemanticSTF evaluation.
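One convention consistent with the dataset class quoted in a later issue on this page (which maps the adverse-weather label 20 to 0) is to remap those points to the ignore index, so a 19-logit model is only scored on the shared classes. This is my reading, not the authors' confirmed protocol; a minimal NumPy sketch, assuming ignore id 0 and a single shared "invalid/weather" id 20:

```python
import numpy as np

IGNORE_ID = 0    # assumption: 0 is the ignore index, as in SemanticKITTI
WEATHER_ID = 20  # assumption: all adverse-weather points share one label id

def remap_for_19_class_eval(labels: np.ndarray) -> np.ndarray:
    """Send adverse-weather points to the ignore id so a model that only
    outputs 19 logits is evaluated on the shared classes alone."""
    labels = labels.copy()
    labels[labels == WEATHER_ID] = IGNORE_ID
    return labels

labels = np.array([1, 5, 20, 19, 20])
print(remap_for_19_class_eval(labels))  # weather points (id 20) become 0
```

Points with the ignore id would then simply be excluded from the per-class IoU accumulation.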

Got an error that the numbers of samples differ when using SemanticSTF

Hi, I'm raising an issue because I'm having a problem using SemanticSTF.

When I load xyz point data and label data from the dataloader for the SemanticSTF validation set, I get an error that they have different sample lengths, as follows.

 File "~/clip3d/dataloader/dataset.py", line 1072, in __getitem__
    labels = labels[inds]
IndexError: index 124898 is out of bounds for axis 0 with size 111392

When I actually print the shapes, they look like this: the xyz point data on the left and the label data on the right.

(138785, 4) (111028, 1)
(137495, 4) (109996, 1)
(138860, 4) (111088, 1)
(139240, 4) (111392, 1)
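A quick consistency check on the shapes above (my own arithmetic, not a confirmed diagnosis): every pair of counts matches exactly if each point in the `.bin` file actually stores five float32 values rather than four, in which case `reshape((-1, 4))` misparses the buffer:

```python
# Pairs of (point count when read as 4 floats, label count) from above.
pairs = [(138785, 111028), (137495, 109996), (138860, 111088), (139240, 111392)]

for n4, n_labels in pairs:
    total_floats = n4 * 4        # total float32 values actually in the file
    n5 = total_floats // 5       # point count if each point has 5 channels
    print(n4, n_labels, n5, n5 == n_labels)
```

If that holds for the actual files, `np.fromfile(...).reshape((-1, 5))` would make the point and label counts agree; comparing the file size from `os.path.getsize` against both hypotheses is worth doing before changing the loader.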

I'm currently using a different codebase (2DPASS), so there may be differences, but I don't see any code in SemanticSTF/PointDR/core/datasets/semantic_stf.py that specifically handles this, so I'm wondering if there's something wrong with the dataset you uploaded.

The code below is the dataset class that loads xyz point data and label data from SemanticSTF.

import os
import yaml
import numpy as np
from PIL import Image

from torch.utils import data

def absoluteFilePaths(directory, num_vote):
    for dirpath, _, filenames in os.walk(directory):
        filenames.sort()
        for f in filenames:
            for _ in range(num_vote):
                yield os.path.abspath(os.path.join(dirpath, f))


class SemanticKITTISTF(data.Dataset):
    def __init__(self, config, data_path, num_vote=1):
        with open(config['dataset_params']['label_mapping'], 'r') as stream:
            semkittiyaml = yaml.safe_load(stream)

        self.config = config
        # self.corruption = corruption
        self.imageset = 'val'
        self.num_vote = num_vote
        self.learning_map = semkittiyaml['learning_map']
        original_data_path = data_path
        self.im_idx = []
        self.im_idx += absoluteFilePaths('/'.join([data_path.replace('sequences', 'SemanticSTF'), 'val', 'velodyne']), num_vote)
        self.im_idx = sorted(self.im_idx)
        # added
        calib_path = os.path.join(original_data_path, '08', "calib.txt")
        calib = self.read_calib(calib_path)
        self.proj_matrix = np.matmul(calib["P2"], calib["Tr"]) 
        print('dataloader corruption_dataset init length im_idx', len(self.im_idx))

        self.no_image = False
        # if config['model_params']['model_architecture'] == "baseline" or config['baseline_only'] is True:
        if config['model_params']['model_architecture'] != "arch_2dpass" or config['baseline_only'] is True:
            self.no_image = True

        # print(self.im_idx[0])
        # print(self.im_idx[0].replace('velodyne', 'labels')[:-3] + 'label')
            
    def __len__(self):
        'Denotes the total number of samples'
        return len(self.im_idx)

    @staticmethod
    def read_calib(calib_path):
        """
        :param calib_path: Path to a calibration text file.
        :return: dict with calibration matrices.
        """
        calib_all = {}
        with open(calib_path, 'r') as f:
            for line in f.readlines():
                if line == '\n':
                    break
                key, value = line.split(':', 1)
                calib_all[key] = np.array([float(x) for x in value.split()])

        # reshape matrices
        calib_out = {}
        calib_out['P2'] = calib_all['P2'].reshape(3, 4)  # 3x4 projection matrix for left camera
        calib_out['Tr'] = np.identity(4)  # 4x4 matrix
        calib_out['Tr'][:3, :4] = calib_all['Tr'].reshape(3, 4)

        return calib_out

    def __getitem__(self, index):
        # print('SemanticKITTIC getitem')
        raw_data = np.fromfile(self.im_idx[index], dtype=np.float32).reshape((-1, 4))

        origin_len = len(raw_data)
        raw_data = raw_data[:, :4]
        points = raw_data[:, :3]

        annotated_data = np.fromfile(self.im_idx[index].replace('velodyne', 'labels')[:-3] + 'label',
                                     dtype=np.uint32).reshape((-1, 1))
        instance_label = annotated_data >> 16     # high 16 bits: instance id
        annotated_data = annotated_data & 0xFFFF  # low 16 bits: semantic label

        ## change invalid (snow, fog, rain ...) to ignore label
        idx_20 = np.where(annotated_data == 20)
        annotated_data[idx_20] = 0

        print(raw_data.shape, annotated_data.shape)
        print(np.unique(annotated_data))

        if self.config['dataset_params']['ignore_label'] != 0:
            annotated_data -= 1
            annotated_data[annotated_data == -1] = self.config['dataset_params']['ignore_label']

        if not self.no_image:
            # added
            image_file = self.im_idx[index].replace('velodyne', 'image_2').replace('.bin', '.png')
            image = Image.open(image_file)
            
        data_dict = {}
        data_dict['xyz'] = points
        data_dict['labels'] = annotated_data.astype(np.uint8)
        data_dict['instance_label'] = instance_label
        data_dict['signal'] = raw_data[:, 3:4]
        data_dict['origin_len'] = origin_len
        if not self.no_image:
            data_dict['img'] = image # added
            data_dict['proj_matrix'] = self.proj_matrix # added

        return data_dict, self.im_idx[index]

Thank you in advance for your kind response.
Sincerely,

Encountered a bug

Hi, thanks for sharing your great work.
I am trying to reproduce and follow your research, but there is a bug when I try to train. I am new to this area; could you please check it?
[2023-09-04 18:59:19.072] Epoch 1/50 started.
0% 0/4783 [00:00<?, ?it/s]/project/RDS-FEI-HMZDet-RW/pointdr3/lib/python3.7/site-packages/torchsparse/nn/functional/conv.py:117: UserWarning: This overload of nonzero is deprecated:
    nonzero(Tensor input, *, Tensor out)
Consider using one of the following signatures instead:
    nonzero(Tensor input, *, bool as_tuple) (Triggered internally at /usr/local/src/PYTORCH/27-apr-2020/pytorch/torch/csrc/utils/python_arg_parser.cpp:760.)
  nbmaps = torch.nonzero(results != -1)
Traceback (most recent call last):
  File "train.py", line 108, in <module>
    main()
  File "train.py", line 103, in main
    Saver(),
  File "/project/RDS-FEI-HMZDet-RW/pointdr3/lib/python3.7/site-packages/torchpack/train/trainer.py", line 39, in train_with_defaults
    callbacks=callbacks)
  File "/project/RDS-FEI-HMZDet-RW/SemanticSTF-master/PointDR/core/trainers.py", line 183, in train
    output_dict = self.run_step(feed_dict)
  File "/project/RDS-FEI-HMZDet-RW/pointdr3/lib/python3.7/site-packages/torchpack/train/trainer.py", line 125, in run_step
    output_dict = self._run_step(feed_dict)
  File "/project/RDS-FEI-HMZDet-RW/SemanticSTF-master/PointDR/core/trainers.py", line 75, in _run_step
    pred_2, feat_2 = self.model(inputs_2)
  File "/usr/local/python/3.7.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 562, in __call__
    result = self.forward(*input, **kwargs)
  File "/project/RDS-FEI-HMZDet-RW/SemanticSTF-master/PointDR/core/models/semantic_kitti/minkunet_dr.py", line 201, in forward
    x2 = self.stage2(x1)
  File "/usr/local/python/3.7.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 562, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/python/3.7.7/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/usr/local/python/3.7.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 562, in __call__
    result = self.forward(*input, **kwargs)
  File "/project/RDS-FEI-HMZDet-RW/SemanticSTF-master/PointDR/core/models/semantic_kitti/minkunet_dr.py", line 74, in forward
    out = self.relu(self.net(x) + self.downsample(x))
  File "/usr/local/python/3.7.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 562, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/python/3.7.7/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/usr/local/python/3.7.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 562, in __call__
    result = self.forward(*input, **kwargs)
  File "/project/RDS-FEI-HMZDet-RW/pointdr3/lib/python3.7/site-packages/torchsparse/nn/modules/conv.py", line 72, in forward
    transposed=self.transposed)
  File "/project/RDS-FEI-HMZDet-RW/pointdr3/lib/python3.7/site-packages/torchsparse/nn/functional/conv.py", line 98, in conv3d
    feats = feats.matmul(weight)
RuntimeError: expected scalar type Half but found Float
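Not an authoritative diagnosis, but "expected scalar type Half but found Float" usually means one operand of the matmul was converted to fp16 (e.g., by calling `.half()` on the model or enabling mixed precision) while the other stayed fp32. The generic fix is to align the dtypes before the product, sketched here with NumPy standing in for the sparse-conv tensors (NumPy would promote automatically, but PyTorch's matmul requires matching dtypes, hence the explicit cast):

```python
import numpy as np

feats = np.random.rand(8, 3).astype(np.float32)   # stand-in for input features
weight = np.random.rand(3, 4).astype(np.float16)  # stand-in for half-precision weights

# Align dtypes before the matmul -- the PyTorch analogue would be
# `weight = weight.to(feats.dtype)`, or keeping the whole model in fp32.
out = feats @ weight.astype(feats.dtype)
print(out.dtype)  # float32
```

In this trainer, checking whether AMP/fp16 is enabled in the config (or whether the model was half-converted somewhere) would be the first thing to verify.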

The config of the baseline

Thank you for your work. Could you provide the baseline training configuration? I trained MinkUNet for 50 epochs without any DG methods and tested it on the SemanticSTF dataset, but the mIoU is only about 15%.

CUDA error: an illegal memory access was encountered

Hi, thanks for sharing your great work. When I run this code, I encounter the error "CUDA error: an illegal memory access was encountered". It happens during the evaluation of epoch 8 with batch size 2, and during the evaluation of epoch 1 with batch size 4. Could you help me figure this issue out?
Here is the complete information:

[2023-09-29 23:19:55.970] Epoch 8/50 started.
[loss] = 0.256, [loss_1] = 0.219, [loss_2] = 0.373: 100% 2271/2271 [05:10<00:00, 7.31it/s]
[2023-09-29 23:25:06.476] Training finished in 5 minutes 10 seconds.
19% 24/125 [00:08<00:37, 2.72it/s]
Traceback (most recent call last):
  File "train.py", line 110, in <module>
    main()
  File "train.py", line 91, in main
    trainer.train_with_defaults(
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torchpack/train/trainer.py", line 37, in train_with_defaults
    self.train(dataflow=dataflow,
  File "/home/zcc/zsl/SemanticSTF-master/PointDR/core/trainers.py", line 207, in train
    self.trigger_epoch()
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torchpack/train/trainer.py", line 156, in trigger_epoch
    self.callbacks.trigger_epoch()
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torchpack/callbacks/callback.py", line 90, in trigger_epoch
    self._trigger_epoch()
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torchpack/callbacks/callback.py", line 308, in _trigger_epoch
    callback.trigger_epoch()
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torchpack/callbacks/callback.py", line 90, in trigger_epoch
    self._trigger_epoch()
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torchpack/callbacks/inference.py", line 29, in _trigger_epoch
    self._trigger()
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torchpack/callbacks/inference.py", line 38, in _trigger
    output_dict = self.trainer.run_step(feed_dict)
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torchpack/train/trainer.py", line 125, in run_step
    output_dict = self._run_step(feed_dict)
  File "/home/zcc/zsl/SemanticSTF-master/PointDR/core/trainers.py", line 68, in _run_step
    outputs_1, feat_1 = self.model(inputs_1)
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zcc/zsl/SemanticSTF-master/PointDR/core/models/semantic_kitti/minkunet_dr.py", line 200, in forward
    x1 = self.stage1(x0)
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zcc/zsl/SemanticSTF-master/PointDR/core/models/semantic_kitti/minkunet_dr.py", line 74, in forward
    out = self.relu(self.net(x) + self.downsample(x))
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/home/zcc/.conda/envs/stf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "torchsparse/nn/modules/conv.pyx", line 99, in torchsparse.nn.modules.conv.Conv3d.forward
  File "torchsparse/nn/functional/conv/conv.pyx", line 89, in torchsparse.nn.functional.conv.conv.conv3d
  File "torchsparse/nn/functional/conv/kmap/build_kmap.pyx", line 83, in torchsparse.nn.functional.conv.kmap.build_kmap.build_kernel_map
  File "torchsparse/nn/functional/conv/kmap/func/hashmap_on_the_fly.pyx", line 63, in torchsparse.nn.functional.conv.kmap.func.hashmap_on_the_fly.build_kmap_implicit_GEMM_hashmap_on_the_fly
RuntimeError: CUDA error: an illegal memory access was encountered

My environment:
Python: 3.8.16
CUDA: 11.1
torch: 1.10.0
TorchSparse: 2.0.0b0
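Because CUDA kernel launches are asynchronous, the frame where an "illegal memory access" surfaces is often not the operation that actually faulted. A common first debugging step (general CUDA practice, not something the authors prescribe) is to re-run with synchronous launches so the traceback points at the real culprit:

```shell
# Slower, but the error is reported at the faulting call instead of at a
# later, unrelated kernel. train.py is the script from the traceback above.
CUDA_LAUNCH_BLOCKING=1 python train.py
```

If the synchronous trace still ends in the TorchSparse kernel-map build, checking the input coordinates for out-of-range or NaN values before voxelization would be a reasonable next step.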
