Node to filter raw human detection locations with the Wire package
Here is how the human detection pipeline currently works:
- The `darknet` node pushes out bounding boxes of detected objects.
- These bounding boxes are captured by the `rgb_human_detection_node` along with near-synchronous depth images. Image transformations are used to rectify the bounding boxes into the depth frame, from which a depth value is extracted and projected into the real world relative to the RealSense camera. These relative human locations in (x, y, z) are then pushed out as `/dragoon_messages/Objects`.
- This node (`detection_filter`) grabs these relative human detection locations and pre-processes them (April 12: pre-processing only consists of ignoring relative detections over 8 m away or under 0.05 m away). These pre-processed detections are then transformed to the `/map` frame using the TF tree and are published as `wire_msgs/WorldEvidence` to be fed into the global Kalman filter.
- The global Kalman filter (`wire_core`) takes in these `wire_msgs/WorldEvidence` evidences, does probability magic, and outputs `wire_msgs/WorldState` messages representing our filtered human detections in the `/map` frame.
- The visualizer eats up `wire_msgs/WorldEvidence` and `wire_msgs/WorldState` to display markers that represent the world evidence (pre-processed detections transformed to the global frame) and the world state (aka filtered detections) in their appropriate places in the `/map` frame.
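The per-detection math above (back-project a pixel with its depth into camera-relative (x, y, z), then apply the range gate) can be sketched like this, assuming a plain pinhole camera model. The intrinsics and helper names here are made up for illustration; only the 0.05 m / 8 m bounds come from the notes above.

```python
import math

def backproject(u, v, depth, fx, fy, cx, cy):
    """Project pixel (u, v) with a depth reading into (x, y, z)
    relative to the camera, pinhole-style."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def passes_range_gate(point, min_range=0.05, max_range=8.0):
    """detection_filter pre-processing: drop detections closer than
    0.05 m or farther than 8 m from the camera."""
    r = math.sqrt(sum(c * c for c in point))
    return min_range < r < max_range

# A detection at the image center, 3 m out, survives the gate;
# a spurious 12 m detection does not.
p_near = backproject(320, 240, 3.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
p_far = backproject(100, 50, 12.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```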
So we clearly have some issues with the human detection pipeline. Where are they? Fuck me I don't know but let's find out. Something is certainly wrong in the `rgb_human_detection_node`, `detection_filter`, and/or `visualizer`, or there's some synchronization/timing issue between imagery. Testing out an updated node anywhere along this chain requires removing some of the messages from the rosbag above. Specifically, it requires removing all messages along this pipeline downstream of the targeted node.
That said, the best way to try out new algorithms is the following procedure:
- Remove all rosmsgs downstream of the `darknet` node
- Run all downstream nodes locally
- Replay the rosbag with `use_sim_time` set to true and the `--clock` flag passed to `rosbag play` so the bag drives the clock
Use this command to filter the new bag into a filtered one called `April12_Filtered.bag` or whatever the fuck you want, I'm not your dad:

```shell
rosbag filter April12_SVD_test_human_detection_tf_off.bag April12_Filtered.bag "topic!='/world_state' and topic!='/world_evidence' and topic!='/ObjectPoses'"
```
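If you end up targeting a different node and need to drop a different set of topics, note that the filter argument is just a per-message boolean expression over `topic`. A tiny sketch for generating it (the helper name is made up for this note):

```python
def filter_expr(drop_topics):
    """Build the boolean expression that `rosbag filter` evaluates
    per message: keep a message only if its topic is none of the
    ones we want to drop."""
    return " and ".join("topic!='{}'".format(t) for t in drop_topics)

# Reproduces the expression in the command above:
expr = filter_expr(["/world_state", "/world_evidence", "/ObjectPoses"])
```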
- Pull our repos for `darknet_ros_msgs` (see the last section for cloning only the `msgs` without the rest), `RealSenseDev` (which includes `rgb_human_detection_node`), `detection_filter`, `wire`, and `visualizer` into your `catkin_ws/src/`. If you haven't already installed jsk for the visualizer then: `sudo apt-get install ros-melodic-jsk-visualization`. There might be other dependencies, just hmu if you run into issues building these nodes. `wire` is gonna throw a fuck ton of warnings when you build about some highly problematic `Cube`, idk what they are but don't worry about them.
- The `rgb_human_detection_node` requires the pyrealsense2 libraries: `pip install pyrealsense2`
- `catkin_make`, duh
- `roslaunch dragoon_bringup detection_testing.launch`. This launches `rgb_human_detection_node`, `detection_filter`, `wire_core`, and `visualizer`
- `rosbag play <FILTERED_BAGFILE>.bag --clock`
There you have it. Happy bug hunting
If you just want the messages without having to pull the whole darknet repo with all the weights and shit:
```shell
cd catkin_ws/src/
mkdir darknet_ros
cd darknet_ros
git init
git remote add origin https://github.com/howde-robotics/darknet_ros.git
git fetch
git checkout origin/master darknet_ros_msgs/*
```