yzrobot / online_learning
[ROS package] Online Learning for Human Detection in 3D Point Clouds
License: GNU General Public License v3.0
Hello!
We are looking at using your repo, but it doesn't seem to say which version of ROS it is using. Is there a specific version needed to support your repo?
Thanks,
Josh
Hi:
I want to ask an unrelated question about the dataset from your paper "3DOF Pedestrian Trajectory Prediction Learned from Long-Term Autonomous Mobile Robot Deployment Data". I am sorry to ask it here, but there is no explanation on the dataset webpage "https://lcas.lincoln.ac.uk/wp/3dof-pedestrian-trajectory-dataset/".
What does each column mean in the data.csv files in the strands2 and minerva folders? Was that dataset computed from the rosbags in this project? Another question: how can the data be transformed from strands2 to minerva, or are they unrelated?
Last question: the dataset at "https://lcas.lincoln.ac.uk/wp/research/data-sets-software/l-cas-multisensor-people-dataset/" cannot be found.
Hi,
I launched object3d_detector and passed a bag file recorded with a VLP-16 Velodyne lidar as the argument, but I get the following error:
process[rosbag_play-1]: started with pid [23284]
process[cloud_node-2]: started with pid [23285]
process[object3d_detector-3]: started with pid [23292]
[FATAL] [1541099625.238843188]: Error opening file: ~/MassVassar.bag
process[bayes_people_tracker-4]: started with pid [23306]
YAML Exception: yaml-cpp: error at line 0, column 0: invalid node; this may result from using a map iterator as a sequence iterator, or vice-versa
[ERROR] [1541099625.248293481]: Unable to open calibration file: /home/golnaz/online_ws/src/online_learning/object3d_detector/config/vlp16.yaml
process[rviz-5]: started with pid [23325]
[ INFO] [1541099625.291080353]: Found filter type: UKF
[ INFO] [1541099625.292908679]: Constant Velocity Model noise: [std_limit:1,x:1.4,y:1.4]
[ INFO] [1541099625.292954698]: Created UKF based tracker using constant velocity prediction model.
[ INFO] [1541099625.293570025]: Found detector: object3d_detector ==> [matching_algorithm:NN,noise_params:[x:0.1,y:0.1],observation_model:CARTESIAN,seq_size:4,seq_time:0.3,topic:/object3d_detector/poses]
[ INFO] [1541099625.293616579]: Adding detector model for: object3d_detector.
[ INFO] [1541099625.307439327]: [object3d detector] load SVM model from '/home/golnaz/online_ws/src/online_learning/object3d_detector/libsvm/pedestrian.model'.
[ INFO] [1541099625.307535510]: [object3d detector] load SVM range from '/home/golnaz/online_ws/src/online_learning/object3d_detector/libsvm/pedestrian.range'.
================================================================================REQUIRED process [rosbag_play-1] has died!
process has died [pid 23284, exit code 1, cmd /opt/ros/kinetic/lib/rosbag/play --clock ~/MassVassar.bag __name:=rosbag_play __log:=/home/golnaz/.ros/log/f8ecc04e-de06-11e8-ab35-f0d5bfb4922c/rosbag_play-1.log].
log file: /home/golnaz/.ros/log/f8ecc04e-de06-11e8-ab35-f0d5bfb4922c/rosbag_play-1*.log
Initiating shutdown!
================================================================================
[rviz-5] killing on exit
[bayes_people_tracker-4] killing on exit
[object3d_detector-3] killing on exit
[cloud_node-2] killing on exit
[rosbag_play-1] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
Is there something wrong with my bagfile? The topics I recorded in my bagfile were /velodyne_points and /tf.
As shown above, there is also an issue with opening the vlp16.yaml file.
Do you know how I can resolve this issue? Thanks!
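One possible cause (an assumption, not confirmed): roslaunch passes "~/MassVassar.bag" to rosbag play literally, without shell tilde expansion, which would explain the "Error opening file" line. A sketch of a workaround, assuming the launch file exposes the bag path as an argument named "bag" (as the snippet later in this thread suggests; adjust to your launch file):

```shell
# Use an absolute path -- "~" is not expanded when roslaunch builds the
# rosbag play command line. The home directory below is taken from the log.
roslaunch object3d_detector object3d_detector.launch bag:=/home/golnaz/MassVassar.bag
```

The vlp16.yaml error is separate: verify that the calibration file actually exists at the path printed in the log.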
Hi, Thank you for the awesome work.
When I ran your project, I encountered some errors. Please see the information below:
[driver_nodelet-2] process has died [pid 13061, exit code 255, cmd /opt/ros/noetic/lib/nodelet/nodelet velodyne2_nodelet_manager __name:=driver_nodelet __log:=/home/lk3696/.ros/log/16ed2878-26a1-11ec-834a-19b1195a40f3/driver_nodelet-2.log].
log file: /home/lk3696/.ros/log/16ed2878-26a1-11ec-834a-19b1195a40f3/driver_nodelet-2*.log
[cloud_nodelet-3] process has died [pid 13062, exit code 255, cmd /opt/ros/noetic/lib/nodelet/nodelet velodyne_nodelet_manager __name:=cloud_nodelet __log:=/home/lk3696/.ros/log/16ed2878-26a1-11ec-834a-19b1195a40f3/cloud_nodelet-3.log].
log file: /home/lk3696/.ros/log/16ed2878-26a1-11ec-834a-19b1195a40f3/cloud_nodelet-3*.log
I use Ubuntu 20.04 with real lidar (VLP-16) data.
Additionally, I uncommented lines 13-28 and changed args="load velodyne_driver/DriverNodelet velodyne_nodelet_manager" to args="velodyne_nodelet_manager" in object3d_detector.launch.
Do you know how I can fix it?
Any help is much appreciated!
Thank you for your work in advance.
I have some questions about the feature 5 source code in online_learning/object3d_detector/src/object3d_detector.cpp.
In the function Get3ZoneCovarianceMatrix, the covariance matrix is 3x3, but only elements (0,1), (0,2) and (1,2) are used. Since the matrix is symmetric, it has six unique elements, so why use only those three? For my samples, all covariance elements are valid; none is near zero.
In addition, feature 5 comes from the paper "Pedestrian Detection and Tracking Using Three-Dimensional LADAR Data". According to the description of the feature there, the covariance matrix is 2D, which confuses me.
Could you explain the difference? Thank you.
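For reference, a small sketch of the three entries in question (illustrative names, not the repository's code): the covariance of a 3D point cluster is a symmetric 3x3 matrix with six unique values, three variances on the diagonal plus three covariances off it.

```python
import numpy as np

# Illustrative sketch (not the repository's code): extract the three unique
# off-diagonal covariances (xy, xz, yz) of a 3D point cluster. The three
# diagonal entries are the per-axis variances.
def offdiag_covariances(points):
    """Return the three unique off-diagonal covariances: (xy, xz, yz)."""
    cov = np.cov(points, rowvar=False)  # 3x3 for an Nx3 input
    return cov[0, 1], cov[0, 2], cov[1, 2]

rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))  # a synthetic cluster of 100 points
xy, xz, yz = offdiag_covariances(pts)
```

By symmetry, cov[1, 0], cov[2, 0] and cov[2, 1] carry no additional information, which is why only one triangle of the matrix needs to be read.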
Hello, thank you for your great project. I have configured the launch file so that the project runs on Ubuntu 20.04. I have also created some bag files with three topics recorded, /velodyne_packets, /odom, and /tf, so that I can train and build my own pedestrian model.
However, as far as I know, can the training process only be run while the robot is stationary? In other words, when I moved the robot with the LiDAR, the model was not good even though there were many support vectors. So my question is: how can I train in a large environment that requires the robot to move? Also, is it possible to train with multiple bag files? For example, can I record bag files in different locations and train on them altogether? If I misunderstood anything, please let me know. Thank you for your time and consideration.
Hi, I'm a newbie in robotics. Can I run your code with a laser range finder? What specific code would need to change?
Hi,
When I run the code, I see the bounding boxes flickering. Why are the bounding boxes blinking (not steady) while the IDs are stable? Does it mean the clustering algorithm sometimes fails to detect the pedestrians?
Thanks,
Golnaz
Hi, when I try to build it, it gives me the following error:
-- Could not find the required component 'people_msgs'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found.
CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
Could not find a package configuration file provided by "people_msgs" with
any of the following names:
people_msgsConfig.cmake
people_msgs-config.cmake
Add the installation prefix of "people_msgs" to CMAKE_PREFIX_PATH or set
"people_msgs_DIR" to a directory containing one of the above files. If
"people_msgs" provides a separate development package or SDK, be sure it
has been installed.
Call Stack (most recent call first):
online_learning/bayes_people_tracker/CMakeLists.txt:7 (find_package)
Do you know how I can fix it? Thanks!
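A likely fix, assuming a standard ROS Kinetic installation on Ubuntu: people_msgs is released as a prebuilt Debian package, so installing it and rebuilding the workspace should satisfy the find_package call.

```shell
# Install the missing dependency from the ROS package repositories
# (assumes ROS Kinetic on Ubuntu; adjust the distro prefix otherwise).
sudo apt-get install ros-kinetic-people-msgs
# Rebuild the workspace (workspace path is illustrative).
cd ~/catkin_ws && catkin_make
```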
Thank you for your work.
I have already added the point cloud bag path like:
name="bag" value="/home/**/LCAS_20160523_1200_1218.bag"
and executed the command
$ roslaunch object3d_detector object3d_detector.launch
I got the following output:
... logging to /home/chen/.ros/log/993bd5aa-f262-11ea-a9fc-409f38ea5d55/roslaunch-chen-9831.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/__init__.py", line 306, in main
p.start()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/parent.py", line 268, in start
self._start_infrastructure()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/parent.py", line 217, in _start_infrastructure
self._load_config()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/parent.py", line 132, in _load_config
roslaunch_strs=self.roslaunch_strs, verbose=self.verbose)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/config.py", line 451, in load_config_default
loader.load(f, config, verbose=verbose)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 749, in load
self._load_launch(launch, ros_config, is_core=core, filename=filename, argv=argv, verbose=verbose)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 721, in _load_launch
self._recurse_load(ros_config, launch.childNodes, self.root_context, None, is_core, verbose)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 657, in _recurse_load
n = self._node_tag(tag, context, ros_config, default_machine, verbose=verbose)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 95, in __call__
return f(*args, **kwds)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 408, in _node_tag
self._param_tag(t, param_ns, ros_config, force_local=True, verbose=verbose)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 95, in __call__
return f(*args, **kwds)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 257, in _param_tag
vals = self.opt_attrs(tag, context, ('value', 'textfile', 'binfile', 'command'))
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 202, in opt_attrs
return [self.resolve_args(tag_value(tag,a), context) for a in attrs]
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/xmlloader.py", line 183, in resolve_args
return substitution_args.resolve_args(args, context=context.resolve_dict, resolve_anon=self.resolve_anon)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 370, in resolve_args
resolved = _resolve_args(resolved, context, resolve_anon, commands)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 383, in _resolve_args
resolved = commands[command](resolved, a, args, context)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 151, in _find
source_path_to_packages=source_path_to_packages)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/roslaunch/substitution_args.py", line 197, in _find_executable
full_path = _get_executable_path(rp.get_path(args[0]), path)
File "/usr/lib/python2.7/dist-packages/rospkg/rospack.py", line 203, in get_path
raise ResourceNotFound(name, ros_paths=self._ros_paths)
ResourceNotFound: velodyne_pointcloud
ROS path [0]=/opt/ros/kinetic/share/ros
ROS path [1]=/home/chen/ssh/develop/person_detection_ziliao/robosense_SDK/git_program/yzrobot/online_learning/catkin_ws/src
ROS path [2]=/opt/next/share
ROS path [3]=/opt/bzrobot_vision/share
ROS path [4]=/opt/bzrobot_patrol/share
ROS path [5]=/opt/ros/kinetic/share
I don't know what went wrong. Have I missed something? Thank you in advance.
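The ResourceNotFound error suggests the velodyne driver stack is not on any of the ROS paths listed above. One hedged fix, assuming ROS Kinetic on Ubuntu: install the prebuilt velodyne packages (or clone ros-drivers/velodyne into the workspace and build it), then re-source the environment.

```shell
# Standard ros-drivers releases; adjust the distro prefix for other ROS versions.
sudo apt-get install ros-kinetic-velodyne ros-kinetic-velodyne-pointcloud
# Re-source so the new package is on ROS_PACKAGE_PATH.
source /opt/ros/kinetic/setup.bash
```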
Excuse me,
I have played my own bag file collected from a VLP-16, but the result is poor.
May I ask how to train my own SVM model myself? I am new to ML.
Thank you.
In config/detectors.yaml in the Kinetic branch, it seems like the observation_model parameter is missing from both upper_body_detector and leg_detector. Running the detector notified me of this, and I added the parameters. Please see the code from my detectors.yaml:
bayes_people_tracker:
  filter_type: "UKF" # The Kalman filter type: EKF = Extended Kalman Filter, UKF = Unscented Kalman Filter
  cv_noise_params: # The noise for the constant velocity prediction model
    x: 1.4
    y: 1.4
  detectors: # Add detectors under this namespace
    upper_body_detector: # Name of detector (used internally to identify them). Has to be unique.
      observation_model: "CARTESIAN"
      topic: "/upper_body_detector/bounding_box_centres" # The topic on which the geometry_msgs/PoseArray is published
      cartesian_noise_params: # The noise for the cartesian observation model
        x: 0.5
        y: 0.5
      matching_algorithm: "NNJPDA" # The algorithm to match different detections. NN = Nearest Neighbour, NNJPDA = NN Joint Probabilistic Data Association
    leg_detector: # Name of detector (used internally to identify them). Has to be unique.
      observation_model: "CARTESIAN"
      topic: "/to_pose_array/leg_detector" # The topic on which the geometry_msgs/PoseArray is published
      cartesian_noise_params: # The noise for the cartesian observation model
        x: 0.2
        y: 0.2
      matching_algorithm: "NNJPDA" # The algorithm to match different detections. NN = Nearest Neighbour, NNJPDA = NN Joint Probabilistic Data Association
Hi, I'm trying to roslaunch people_tracker.launch, but there is an error:
[FATAL] [1542597847.533215683]: Unknown observation model! is not specified. Unable to add leg_detector to the tracker. Please use either CARTESIAN or POLAR as observation models.
What's wrong, and how can I fix it?
Hi,
I think it does not work with a Velodyne HDL-64 because different features are extracted from the points, so could you please tell me how to use LIBSVM to obtain the six human features from the points?
best wishes
Thanks.
Hi, Yan @yzrobot , thank you for your project.
I have read your paper, in the abstract you said that
The learning framework requires a minimal set of labelled samples (e.g.
one or several samples) to initialise a classifier.
and I know you had developed the cloud_annotation_tool to label pedestrians with the loaded pcd file.
I want to know how to use these labelled examples (the pcd files and their corresponding annotation txt files) to train the SVM classifier. Do you have any suggestions or clues?
Many thanks!
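One hedged sketch of a possible offline workflow, assuming the labelled clusters' features have been exported to libsvm's sparse text format (train.txt is a hypothetical export, not a file this repo produces; the output names mirror the pedestrian.model / pedestrian.range files the detector loads at startup):

```shell
# Scale training features and save the per-feature ranges for reuse at test time.
svm-scale -s pedestrian.range train.txt > train.scaled
# Train an SVM with probability estimates enabled (-b 1).
svm-train -b 1 train.scaled pedestrian.model
```

Saving the range file with -s matters: test-time features must be scaled with the same per-feature min/max as the training data.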
Hello,
Thank you for the awesome work. I am trying to reimplement the repo, but I have some questions; please give me some advice. Thank you.
I ran roslaunch object3d_detector object3d_detector.launch and successfully detected humans, but I can't get the human trajectory. I don't know where the error is. The attached pictures show the running screen, the rostopic list, and the output of rosrun rqt_graph rqt_graph. Thank you for your help.
Best Regards.
Hi, thanks for your good work. When I run "people_tracker.launch", I run into the same problem as https://github.com/yzrobot/online_learning/issues/6. I have changed "detectors.yaml" as you said, but I still cannot obtain the tracking markers or trajectory shown in your video, just the following picture. Do you know how to solve this problem? Thank you in advance.
Thank you for the awesome project. I am trying to reimplement the repo, but I got some questions I'd like to ask.
As I understood from your paper, an initial model trained offline is used in the first round, but I could not find the code that imports an external model in object_3d_detector_ol.cpp.
I ran the object_3d_detector_3d with the LCAS bag file, but the resulting SVM model has over 1500 SVs, and the sv_coef values are all just 8 and -8.
Hi,
I am trying to run the multisensor object detection on the LCAS data but I get the following error:
ERROR: cannot launch node of type [bayes_people_tracker/bayes_people_tracker_ol_ms]: can't locate node [bayes_people_tracker_ol_ms] in package [bayes_people_tracker]
When I look in the catkin_ws/devel/lib/ folder, the only nodes I see are bayes_people_tracker and bayes_people_tracker_ol. Is there a way to build the bayes_people_tracker_ol_ms node? I built the multisensor branch simply using catkin_make. Is there another method I should use?
Thanks,
Luke
Hello, what does the range file mean?
How can I generate this file from my own dataset?
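If the .range file follows the output format of libsvm's svm-scale tool (an assumption worth verifying against your copy), it holds a header, the target scaling bounds, then one min/max pair per feature index. A minimal sketch with an illustrative parser name:

```python
# Hedged sketch: parse a libsvm svm-scale ".range" file, assuming the usual
# layout -- "x" header, target lower/upper bounds, then "index min max" rows.
def parse_libsvm_range(text):
    lines = text.strip().splitlines()
    assert lines[0] == "x", "expected the feature-scaling header line"
    lower, upper = map(float, lines[1].split())  # target range, e.g. [-1, 1]
    per_feature = {}
    for row in lines[2:]:
        idx, fmin, fmax = row.split()
        per_feature[int(idx)] = (float(fmin), float(fmax))
    return lower, upper, per_feature

sample = """x
-1 1
1 0.0 2.5
2 -3.0 3.0"""
lower, upper, ranges = parse_libsvm_range(sample)
```

Such a file is typically produced from your own data with svm-scale's -s option, which records the scaling parameters while scaling the training set.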
Hi:
Thank you for your work. This is what I need.
I have a 64-beam lidar, but this package seems to support only 16-beam lidars. What should I do to use my lidar data with this package?
Looking forward to your reply.
Hi Zhi,
We were able to run your package on our dataset, but when I run it on your dataset provided here (bag file):
https://lcas.lincoln.ac.uk/wp/research/data-sets-software/l-cas-3d-point-cloud-people-dataset/
it opens RViz but does not show anything. Do you have any thoughts on that?
Thanks,
Golnaz
Thanks in advance!