
vilib's Introduction

CUDA Visual Library

This repository holds some GPU optimized algorithms by the "Robotics and Perception Group" at the Dep. of Informatics, "University of Zurich", and Dep. of Neuroinformatics, ETH and University of Zurich.

Now available as a simple ROS Node!

Publication

If you use this code in an academic context, please cite the following IROS 2020 paper.

Balazs Nagy, Philipp Foehn, and Davide Scaramuzza: Faster than FAST: GPU-Accelerated Frontend for High-Speed VIO, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2020.

@inproceedings{Nagy2020,
  author = {Nagy, Balazs and Foehn, Philipp and Scaramuzza, Davide},
  title = {{Faster than FAST}: {GPU}-Accelerated Frontend for High-Speed {VIO}},
  booktitle = {IEEE/RSJ Int. Conf. Intell. Robot. Syst. (IROS)},
  year = {2020},
  doi = {10.1109/IROS45743.2020.9340851}
}

ROS Quick Start

Our front-end is now available as a simple ROS Node based on our library, OpenCV, and CUDA!

Quick Start in your catkin workspace using a EuRoC dataset:

git clone git@github.com:uzh-rpg/vilib.git
catkin build vilib_tracker

# Get one of the EuRoC datasets, or download it manually.
wget http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/machine_hall/MH_01_easy/MH_01_easy.bag

# Launch the node with the example EuRoC config.
roslaunch vilib_tracker euroc.launch

# In a separate terminal, start to play the bag.
rosbag play MH_01_easy.bag

# If you want to visualize the feature tracks on the image, use this launch argument:
roslaunch vilib_tracker euroc.launch publish_debug_image:=true

# and then inspect the image using
rqt_image_view

# Note: visualizing images has a large negative impact on runtime performance!

If you don't have the dependencies yet:

  • Install a version of ROS according to the official installation instructions.
  • CUDA: see the desktop setup guide below.
  • OpenCV
    • OpenCV with ROS: make sure you have the ROS packages cv_bridge, image_transport, and sensor_msgs.
    • OpenCV on the Jetson: the latest Jetson SDKs installed with the NVIDIA SDK Manager should provide OpenCV. However, it might be necessary to create a symlink: sudo ln -s /usr/local/opencv-4.3 /usr/local/opencv
    • Alternatively, you can follow the build instructions below.
  • Eigen
    • Install using your package manager, e.g. sudo apt-get install libeigen3-dev

Organization

This library focuses on the front-end of VIO pipelines. We tried to organize functionalities into the following categories:

  • Storage: various storage-related functionalities
  • Preprocessing: image preprocessing functionalities
  • Feature detection: various feature detectors and detection utilities
  • High-level functionalities: more sophisticated algorithms for other front-end tasks

Getting started on a CUDA-enabled desktop computer

The following guide was written for Ubuntu 18.04, but the steps should be similar on other operating systems. This guide installs the latest CUDA toolkit and driver directly from NVIDIA. Alternatively, through the package manager of your OS (e.g. apt, yum), you should be able to install an NVIDIA driver and the CUDA toolkit with a one-liner.

# Download the latest NVIDIA CUDA Toolkit from their website:
# Note: I specifically downloaded the .run file, but the others should also
#       suffice
https://developer.nvidia.com/cuda-toolkit

# Enter console-only mode (on next start-up)
sudo systemctl set-default multi-user.target

# Reboot
sudo shutdown -r now

# Log in and remove old display drivers
# i)  Remove the Nouveau driver
# ii) Remove the previously installed NVIDIA driver
sudo apt --purge remove xserver-xorg-video-nouveau
sudo apt purge 'nvidia*'

# Reboot
sudo shutdown -r now

# Now there shouldn't be any display-specific kernel module loaded
lsmod | grep nouveau
lsmod | grep nvidia

# Run the installer
# Note: I didn't run the nvidia-xconfig on a multi-GPU laptop
sudo ./cuda_10.0.130_410.48_linux.run

# Add the executables and the libraries to the appropriate paths:
# Open your .bashrc file
vim ~/.bashrc
# ... and append to the bottom (you might need to change the path)
# for the CUDA 10.0 Toolkit
export PATH=/usr/local/cuda-10.0/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:${LD_LIBRARY_PATH}

# Return to the graphical mode (on next start-up)
sudo systemctl set-default graphical.target

# Reboot
sudo shutdown -r now

# Log in and verify
nvidia-smi
# Example output:
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 410.48                 Driver Version: 410.48                    |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
# | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
# |===============================+======================+======================|
# |   0  GeForce GTX 960M    Off  | 00000000:01:00.0 Off |                  N/A |
# | N/A   61C    P8    N/A /  N/A |    442MiB /  4046MiB |      0%      Default |
# +-------------------------------+----------------------+----------------------+
#
# +-----------------------------------------------------------------------------+
# | Processes:                                                       GPU Memory |
# |  GPU       PID   Type   Process name                             Usage      |
# |=============================================================================|
# |    0      1378      G   /usr/lib/xorg/Xorg                           193MiB |
# |    0      1510      G   /usr/bin/gnome-shell                         172MiB |
# |    0      3881      G   ...-token=CD62689F151B18325B90AE72DCDA2460    73MiB |
# +-----------------------------------------------------------------------------+

nvcc --version
# Example output:
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2018 NVIDIA Corporation
# Built on Sat_Aug_25_21:08:01_CDT_2018
# Cuda compilation tools, release 10.0, V10.0.130

How to use

Compile without cmake

  1. Compile the library
# Clean any previous build artifacts
make clean
# Compile the shared library
make solib -j4
  2. Compile the test suite (optional)
# We prepared a test suite for the library
# that verifies the code and provides an example for the available functionalities
make test -j4
# Download the dataset: some tests require a dataset
# We used the Machine Hall 01 from ETH Zürich.
cd test/images
# Follow the instructions of the downloader script:
./create_feature_detector_evaluation_data.sh
# Once the dataset has been acquired successfully,
# simply run the test suite:
./test_vilib
  3. Install the library
# Default installation paths:
# Header files: /usr/local/vilib/include
# Library files: /usr/local/vilib/lib
# Clean previous installations
sudo make uninstall
# Install the last compiled version
sudo make install
  4. Adapt your target application’s Makefile to locate the library
# i) Compilation stage
CXX_INCLUDE_DIRS += -I<path to the include directory of the visual lib>
# ii) Linking stage
CXX_LD_DIRS += -L<path to the directory containing libvilib.so>
CXX_LD_LIBRARIES += -lvilib
# If, however, the shared library was not installed to a regular
# library folder:
CXX_LD_FLAGS += -Wl,-rpath,<path to the directory containing the .so>
# or modify the LD_LIBRARY_PATH environment variable

Compile with cmake

Vilib follows the standard patterns for building a cmake project:

# make a build directory at the top of the vilib source directory:
mkdir build
# create make files and build the library. Adjust the install prefix to
# match your install directory
cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr/local -DCMAKE_BUILD_TYPE=Release ..
make install -j 8

After this, vilib can be included into another cmake project in the usual way. An example CMakeLists.txt file for compiling the tests that come with vilib looks like this:

cmake_minimum_required(VERSION 3.10)

include(CheckLanguage)
check_language(CUDA)
if (CMAKE_CUDA_COMPILER)
   project(vilib-test LANGUAGES CXX CUDA)
else()
   project(vilib-test LANGUAGES CXX)
endif()

find_package(vilib REQUIRED)
find_package(CUDA REQUIRED)
# only necessary if you happen to use opencv
find_package(OpenCV COMPONENTS core imgproc features2d highgui)

message(STATUS "Found CUDA ${CUDA_VERSION_STRING} at ${CUDA_TOOLKIT_ROOT_DIR}")

file(GLOB_RECURSE VILIB_TEST_SOURCES
  src/*.cpp
  src/*.cu
  )

add_executable(vilib_tests ${VILIB_TEST_SOURCES})
include_directories(include)

target_link_libraries(vilib_tests
  vilib::vilib
  opencv_core opencv_imgproc opencv_features2d opencv_highgui
  ${CUDA_LIBRARIES})

install(TARGETS vilib_tests
  DESTINATION lib)

Examples

The test suite serves two purposes: verifying the functionality and providing examples for setting up the library calls properly.

The EuRoC Machine Hall dataset mentioned in the paper for feature detection and tracking can be downloaded through our custom script. This is the dataset that is used by default in the test code. Please note that in our online example, the test image count has been reduced from the original 3682 to 100 for a quicker evaluation, but this may be readjusted at any time here.

In case you would like to use the library in your application, we kindly ask you to consult the examples below:

  • Feature detection: here
  • Feature tracking: here

Dependencies

Eigen (mandatory)

Make sure that this library (vilib) is compiled with:

  • The same Eigen version that your end-application is using
  • The same compilation flags, paying special attention to the vectorization flags

More about the common headaches: here.
More information about the library: here.

# Install Eigen3 headers (as it is header-only) via the package manager
sudo apt-get install libeigen3-dev
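To check which Eigen version the package manager installed (so you can match it in your end-application), you can inspect the version macros in Eigen's headers; the path below is the default apt location on Debian/Ubuntu and may differ on your system:

```shell
# Eigen is header-only; its version is encoded in three macros
grep "#define EIGEN_WORLD_VERSION\|#define EIGEN_MAJOR_VERSION\|#define EIGEN_MINOR_VERSION" \
  /usr/include/eigen3/Eigen/src/Core/util/Macros.h
```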

OpenCV (mandatory)

One can use a custom installation of OpenCV if needed, or just use the version that comes with the package manager. In both cases below, consult the CUSTOM_OPENCV_SUPPORT variable in the Makefile.

Use the OpenCV version that comes with the package manager [default]

# Make sure that the Makefile variable is set to *zero*
CUSTOM_OPENCV_SUPPORT=0

Use a custom OpenCV version that you compile yourself from scratch

# Make sure that the Makefile variable is set to *one*
# And adjust the location of the custom library,
# also located in the Makefile
CUSTOM_OPENCV_SUPPORT=1
#
# Update your installed packages
sudo apt-get update
sudo apt-get upgrade
sudo apt-get autoremove
sync

#
# Install OpenCV dependencies
sudo apt-get install build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python3-numpy libtbb2 libtbb-dev libcanberra-gtk-module
sudo apt-get install libjpeg-dev libpng-dev libtiff5-dev libdc1394-22-dev libeigen3-dev libtheora-dev libvorbis-dev libxvidcore-dev libx264-dev sphinx-common libtbb-dev yasm libfaac-dev libopencore-amrnb-dev libopencore-amrwb-dev libopenexr-dev libgstreamer-plugins-base1.0-dev libavutil-dev libavfilter-dev libavresample-dev

#
# Download OpenCV (opencv, opencv_contrib)
# - make sure you have enough space on that disk
# - you also might need to change the version depending on the current state of OpenCV
# - the version of opencv and opencv_contrib should match (in order to avoid compilation issues)
# OpenCV
git clone https://github.com/opencv/opencv.git
cd opencv
git checkout 4.3.0
cd ..
# OpenCV-Contrib
git clone https://github.com/opencv/opencv_contrib.git
cd opencv_contrib
git checkout 4.3.0
cd ..

#
# Build & Install OpenCV
mkdir -p opencv/build
cd opencv/build
# Configure the build parameters
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local/opencv-4.3 \
      -D INSTALL_C_EXAMPLES=OFF \
      -D INSTALL_PYTHON_EXAMPLES=OFF \
      -D BUILD_EXAMPLES=OFF \
      -D BUILD_TESTS=OFF \
      -D ENABLE_FAST_MATH=1 \
      -D WITH_TBB=ON \
      -D WITH_V4L=ON \
      -D WITH_QT=OFF \
      -D WITH_OPENGL=ON \
      -D WITH_OPENCL=OFF \
      -D WITH_CUDA=OFF \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules ..
# Start building
# - use your number of CPU cores
make -j4
# ..and "install" OpenCV
sudo make install
# create symlink (in order to support multiple installations)
sudo ln -s /usr/local/opencv-4.3 /usr/local/opencv
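To sanity-check that the custom build is in place, OpenCV installs a small opencv_version utility alongside the libraries; the path below assumes the symlink created above:

```shell
# Should print the checked-out version, e.g. 4.3.0
/usr/local/opencv/bin/opencv_version
```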

More information about the library is available here.

Optional ROS Type support

ROS cv_bridge type support was made optional, but it is built by default when compiling the ROS wrapper in a catkin environment. However, if you want to use the cv_bridge type-conversion functionality in your own ROS packages, enable it with:

# 1) Either adjust the Makefile:
ROS_SUPPORT?=1
# 2) or just compile accordingly:
make solib ROS_SUPPORT=1 -j4

vilib's People

Contributors

baliika, berndpfrommer, foehnx


vilib's Issues

TEST_IMAGE_LIST_EUROC_4000_3000

Hi, I have rewritten a CMakeLists.txt and run the demo.
I am able to run the example normally and reproduce the results correctly. Now I want to try my own dataset: imitating the content of create_feature_detector_evaluation_data.sh, I generated image_list_4000_3000.txt, but when I run the test code, the following error appears. What is the problem?

/home/xds/Documents/code/vilib/visual_lib/tests_demo
### Image Pyramid
 CPU (w/o. preallocated array)      : min: 365, max: 526, avg: 394.57 usec
 GPU Device (w. preallocated array) : min: 11.36, max: 28.608, avg: 13.2752 usec
 GPU Host (w. preallocated array)   : min: 10, max: 27, avg: 11.47 usec
 Success: OK
### SubframePool
 Pool creation (with 10 frames)     : min: 367, max: 4085, avg: 505.84 usec
 Preallocated access                : min: 0, max: 5, avg: 0.328 usec
 New allocation                     : min: 2, max: 7993, avg: 43.1495 usec
 Success: OK
### PyramidPool
 Pool creation (with 10 frames)     : min: 590, max: 1803, avg: 731.74 usec
 Preallocated access                : min: 2, max: 19, avg: 2.82 usec
 New allocation                     : min: 14, max: 707, avg: 67.7765 usec
 Success: OK
### FAST detector
CUDA Error: invalid argument (err_num=1)
File: /tmp/tmp.O1K1K5c0YU/src/storage/opencv.cpp | Line: 82
tests_demo: /tmp/tmp.O1K1K5c0YU/src/storage/opencv.cpp:82: void vilib::opencv_copy_from_image_common(const cv::Mat&, unsigned char*, unsigned int, bool, cudaStream_t, cudaMemcpyKind): Assertion `0' failed.

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)


and there is my image_list_4000_3000.txt:

4000
3000
test/images/euroc/images/4000_3000/2020_04_09_14_32_05_001119.jpg
test/images/euroc/images/4000_3000/2020_04_09_14_32_05_001120.jpg
test/images/euroc/images/4000_3000/2020_04_09_14_32_05_001121.jpg
test/images/euroc/images/4000_3000/2020_04_09_14_32_05_001122.jpg
test/images/euroc/images/4000_3000/2020_04_09_14_32_05_001123.jpg
test/images/euroc/images/4000_3000/2020_04_09_14_32_05_001124.jpg
test/images/euroc/images/4000_3000/2020_04_09_14_32_05_001125.jpg
...
...

83 pictures in total
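For reference, a list file in the format shown above (image width and height on the first two lines, followed by one image path per line) can be generated with a sketch like the following; the directory and resolution are the ones from this question:

```shell
# Header: image width and height, then the sorted image paths
{
  echo 4000
  echo 3000
  ls test/images/euroc/images/4000_3000/*.jpg | sort
} > image_list_4000_3000.txt
```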

There is the define:

// Frame preprocessing
#define PYRAMID_LEVELS                       1
#define PYRAMID_MIN_LEVEL                    0
#define PYRAMID_MAX_LEVEL                    PYRAMID_LEVELS

// FAST detector parameters
#define FAST_EPSILON                         (10.0f)
#define FAST_MIN_ARC_LENGTH                  10
// Remark: the Rosten CPU version only works with 
//         SUM_OF_ABS_DIFF_ON_ARC and MAX_THRESHOLD
#define FAST_SCORE                           SUM_OF_ABS_DIFF_ON_ARC

// NMS parameters
#define HORIZONTAL_BORDER                    0
#define VERTICAL_BORDER                      0
#define CELL_SIZE_WIDTH                      32
#define CELL_SIZE_HEIGHT                     32

// Test framework options
#define DISPLAY_PYRAMID_CPU                  0
#define DISPLAY_DETECTED_FEATURES_CPU        0
#define DISPLAY_DETECTED_FEATURES_GPU        1 // display the GPU results
#define ENABLE_CPU_VERSION                   1
#define ENABLE_GPU_VERSION                   1
// Remark: the subset verification only works with the scores mentioned above
//         for the CPU version
#define ENABLE_SUBSET_VERIFICATION           1
#define ENABLE_SUBSET_VERIFICATION_MSG       1
#define ENABLE_SUBSET_VERIFICATION_IMG       0
#define ENABLE_SUBSET_VERIFICATION_IMG_SAVE  0

// Test framework statistics
#define STAT_ID_DETECTOR_TIMER               0
#define STAT_ID_FEATURE_COUNT                1

ICE-BA integration with vilib

Dear @baliika @foehnx

I recently read your publication, thanks for making the VIO framework available.

I am trying to re-create the results you got by combining the VIO with the ICE-BA backend bundle adjustment, but I am having some doubts on how you combined your framework and ICE-BA.

My current understanding is that there is a three-step approach to it:

  1. Run the detection and tracking using vilib (and somehow store these data)
  2. Create dat files for each frame using the images as well as the stored feature points.
  3. Finally run the back-end bash script using the .dat files generated from 2.

If so, how did you manage to create the .dat files without using the ice-ba executable?

Is this correct approach? If not could you point me in the correct direction?

Best,
Ilyass

How to use vilib for a slam system?

Hello, I found the vilib library difficult to use for VIO, mainly because in the tracking part I could not find the correspondence between the feature points of previous and subsequent frames. Maybe I need your guidance, thank you!

Ask for documentation

Hi! I am impressed by your work and am trying to use the library in one of my projects for feature detection and tracking. However, the lack of documentation adds a lot of difficulty for me. Could you add some documentation about how to use the library?

OpenCV parameter compatibility

Hi,
We are currently testing the Faster-than-FAST tracker against OpenCV's calcOpticalFlowPyrLK on the KITTI dataset.
We have observed that the two methods sometimes produce significantly different outputs.
Have you ever tested the conformity of the proposed tracker with OpenCV's calcOpticalFlowPyrLK? If so, could you let us know which parameters you were using?

Cuda Problem happens within test_vilib

I compiled and ran this project on a Jetson AGX Xavier developer kit. When I ran the test called test_vilib, Image Pyramid, SubframePool, and PyramidPool all showed success, but the FAST detector showed no result and the test program paused there, so I decided to find where it pauses.
I found that the FAST_CPU detector is completely fine, and the test pauses within a member function called copyGridToHost belonging to FAST_GPU's parent class DetectorBaseGPU. Specifically, the test halts just after it successfully runs
CUDA_API_CALL(cudaMemcpyAsync(h_feature_grid_, d_feature_grid_, feature_grid_bytes_, cudaMemcpyDeviceToHost, stream_));
it halts while executing CUDA_API_CALL(cudaStreamSynchronize(stream_)).
Since I am not very familiar with the CUDA API, I haven't tried removing this synchronization code; I just need to know how to fix this bug in the project.
I have read the paper; your results are exciting, and I really hope to reproduce them on my machine!
Thanks in advance :)

FAST features detected on top of each other

Do I understand this right that at the cell boundaries one can get the same feature detected twice, at different levels?
In the attached image (top left corner) essentially the same feature is detected twice, at different levels:

x: 735, y: 542 level: 0
x: 734, y: 544 level: 1

Note: circle size is related to detection level.

Parameters are as follows:
cell_width: 32
cell_height: 32
pyramid_min_level: 0
pyramid_max_level: 2
threshold: 10.0
min_arc_length: 10
horizontal_border: 8
vertical_border: 8

Is there anything I can do to avoid that?
Thanks!

fast_features_checkerboard

How to make a live webcam test

I want to test vilib on my mobile robot, but the test binary is based on the EuRoC dataset. Can you tell me how to make a live webcam test?

Running in windows?

Hi, I would like to use this on Windows, but the Makefile gives me multiple errors. Would it be possible to supply a CMake option for building this library?

Thanks!

test_vilib results - Jetson Nano vs TX2

Hi,

I'm trying to run vilib on my Jetson Nano to compare the results with a TX2. I managed to run the EuRoC Machine Hall 1 dataset example on my Jetson Nano by following the Examples section.

Could you please provide the console output when running the same example on a Jetson TX2? Here is the output that I get:

### Image Pyramid
 CPU (w/o. preallocated array)      : min: 248, max: 750, avg: 265.69 usec
 GPU Device (w. preallocated array) : min: 286.458, max: 303.334, avg: 290.041 usec
 GPU Host (w. preallocated array)   : min: 102, max: 156, avg: 106.98 usec
 Success: OK
### SubframePool
 Pool creation (with 10 frames)     : min: 3537, max: 4025, avg: 3710.1 usec
 Preallocated access                : min: 0, max: 2, avg: 0.47 usec
 New allocation                     : min: 10, max: 882, avg: 342.326 usec
 Success: OK
### PyramidPool
 Pool creation (with 10 frames)     : min: 5460, max: 5769, avg: 5564.71 usec
 Preallocated access                : min: 1, max: 108, avg: 2.527 usec
 New allocation                     : min: 45, max: 2643, avg: 525.929 usec
 Success: OK
### FAST detector
 CPU ---
 FAST: min: 11306, max: 16425, avg: 13537.4 [usec]
 FAST feature count: min: 3258, max: 7875, avg: 4671.76 [1]
 GPU ---
 FAST: min: 7523, max: 10749, avg: 8008.92 [usec]
 FAST feature count: min: 270, max: 340, avg: 316.68 [1]
 Success: OK
### Feature Tracker
 Note: No verification performed
 GPU ---
 Tracker execution time: min: 1523, max: 11502, avg: 2262.44 [usec]
 Tracked feature count: min: 0, max: 49, avg: 19.98 [1]
 Detected feature count: min: 0, max: 50, avg: 0.87 [1]
 Total feature count: min: 16, max: 50, avg: 20.85 [1]
 Feature track life: min: 0, max: 99, avg: 22.9655 [1]
 Success: OK
### Overall
Success: OK

Thanks

ToDo: Unit Testing and Continuous Integration

Enabling continuous integration checks through unit tests should help catch various possible bugs.
This issue might take a while to be implemented and resolved.
It will require testing on a CUDA-enabled Jenkins machine or similar.

  • Create unit tests within GoogleTest
  • Create simpler cmake for building
  • Setup our Jenkins and webhooks.
  • Deploy CI by requiring successful tests for a pull request merge.

Compile Error(error: identifier "__shfl_xor_sync" is undefined)

Sorry, the following error occurred when we compiled the project; we don't know why. Thank you!

src/feature_tracker/feature_tracker_cuda_tools.cu(146): error: identifier "__shfl_xor_sync" is undefined
detected during instantiation of "void vilib::feature_tracker_cuda_tools::track_features_kernel<T,affine_est_offset,affine_est_gain>(int, int, int, float, vilib::image_pyramid_descriptor_t, vilib::pyramid_patch_descriptor_t, const int *, const T *, const float *, const float2 *, float2 *, float2 *, float4 *, float *) [with T=int, affine_est_offset=true, affine_est_gain=true]"
(332): here

src/feature_tracker/feature_tracker_cuda_tools.cu(652): error: a designator into a template-dependent type is not allowed

src/feature_tracker/feature_tracker_cuda_tools.cu(652): error: a designator into a template-dependent type is not allowed

src/feature_tracker/feature_tracker_cuda_tools.cu(677): error: identifier "__syncwarp" is undefined

src/feature_tracker/feature_tracker_cuda_tools.cu(579): error: identifier "__shfl_down_sync" is undefined

OpenCV not found

I'm trying to compile the library following the "How to use" section instructions, but when I run make solib -j4, I get this kind of error:

include/vilib/storage/subframe.h:25:32: fatal error: opencv2/core/mat.hpp: No such file or directory

I am using OpenCV 4.1.0 that is compiled from sources. As a consequence, I set these Makefile variables

CUSTOM_OPENCV_SUPPORT=1
CUSTOM_OPENCV_INSTALLATION_PATH=/usr/local/include/opencv4

Any idea why it is not working? I think it might come from CXX_NVCC_INCLUDES, CXX_LD_DIRS or CXX_LD_FLAGS, but I don't know where these should point.

Why CPU version generates much more keypoints than GPU one?

Here is the result (4671 vs. 316.68):

### FAST detector
 CPU ---
 FAST: min: 4559, max: 6822, avg: 5477.3 [usec]
 FAST feature count: min: 3258, max: 7875, avg: 4671.76 [1]
 GPU ---
 FAST: min: 677, max: 971, avg: 716.67 [usec]
 FAST feature count: min: 270, max: 340, avg: 316.68 [1]
 Note: No verification performed
 Success: OK

Any ideas?

How to run the code?

After compiling the library and completing the various configurations, I couldn't find the necessary instructions in the readme for running the compiled code of the whole project, so I want to know how to start running the code and how to use the dataset mentioned in the paper. Please add more details about these issues, thanks!

Hard to understand [pyramid_gpu.cu], may more details ?

Hello!
When I review the vilib code, I am confused by the function image_halfsample_gpu_kernel. Could you explain it in more detail, and which theory it is based on? The main lines that are hard to understand are these three:

const int dst  = y*pitch_dst_px/N + x; //every thread writes N bytes. the next row starts at pitch_dst_px/N
int src_top    = y*pitch_src_px + x*N; //every thread reads in Nx2 bytes
int src_bottom = y*pitch_src_px + x*N + (pitch_src_px/2);

location : pyramid_gpu.cu
Thanks!

add mask feature?

the process is like:
loop:

  1. track features by KLT
  2. sort features by lifetime
  3. circle the features in the mask image
  4. remove the newer one if two points are too close
  5. find new features with the mask image

so, in the detection method, is it possible to add a filtering (mask) function?

Problems when Compiling

The problem is that some header files cannot be found when compiling:
src/preprocess/pyramid_cpu.cpp:26:36: fatal error: visual_lib/simd_common.h: No such file or directory
It seems that there is no header file named simd_common.h in the repo?

Does the starting trace of a point start with an integer?

I add the code.
std::cout<<"first_pos_: "<<track.first_pos_[0]<<", "<<track.first_pos_[1]<<std::endl;
std::cout<<"cur_pos_ : "<<track.cur_pos_[0]<<", "<<track.cur_pos_[1]<<std::endl;
Get the following results.
first_pos_: 698, 36
cur_pos_ : 694.986, 33.3397
first_pos_: 708, 294
cur_pos_ : 705.637, 278.984
first_pos_: 640, 16
cur_pos_ : 636.937, 12.2501
first_pos_: 378, 48
cur_pos_ : 374.201, 37.2297
first_pos_: 700, 24
cur_pos_ : 697.136, 21.1231
...

Does the starting trace of a point start with an integer?

Thank you !

The tracking effect is not the same as the function of Opencv

I passed the tracked feature points into the vilib library and got the tracking results, which are shown in white in the picture. Giving the same set of coordinates to the OpenCV function calcOpticalFlowPyrLK, the results are shown in black. By comparison, some points from vilib are not as good as OpenCV's, and the tracking range is not as long as OpenCV's. Is this normal?
Thanks again!
Screenshot from 2020-11-11 20-32-37
Screenshot from 2020-11-11 20-33-49
