
concert_description's Introduction

concert_description

ROS package containing the CONCERT modular robot's simulation scripts and launch files

Docker image

A ready-to-use Docker container is provided, which can be executed with .docker/run-docker.bash. Upon first execution, a large amount of data might be downloaded. The container can be used to follow the rest of this readme.

To update the image to the latest version

docker pull arturolaurenzi/concert_description

To locally build the image

.docker/build-docker.bash [--no-cache] 

Dependencies

  • ROS (desktop full is recommended), moveit-core
  • XBot2 binaries (see here for instructions)
  • The modular Python3 package (will be installed by forest)

Setup

In addition to using Docker, you can set up concert_description using forest.

  1. Install forest:
[sudo] pip3 install hhcm-forest
  2. Create a forest workspace. We are going to call it concert_ws for the sake of this example:
mkdir concert_ws && cd concert_ws
  3. Initialize the forest workspace and add recipes:
forest init
source setup.bash
echo "source $PWD/setup.bash" >> /home/USER/.bashrc
forest add-recipes git@github.com:advrhumanoids/multidof_recipes.git --tag master

Where you should substitute USER with your username.

Optional: if you don't have any SSH key set up on your system, also run:

export HHCM_FOREST_CLONE_DEFAULT_PROTO=https

and consider adding it to your .bashrc.

  4. Finally, just run:
forest grow concert_description

which will clone this repo and install the modular package.

If you have the XBot2 binaries installed you are ready to simulate the CONCERT robot!


P.S. If you also want to run the IK example below, remember to run:

forest grow centauro_cartesio -j 4

Quickstart (CONCERT example)

Launch the simulation environment, including the xbot2 process

mon launch concert_gazebo concert.launch [rviz:=true]


Note: the launch file also accepts a series of additional arguments for selecting whether or not to simulate sensors. For example, to run a simulation that also loads the Gazebo plugins for the Realsense cameras, the Velodyne lidars, and the ultrasound sensors, run:

mon launch concert_gazebo concert.launch realsense:=true velodyne:=true ultrasound:=true

You'll need the proper dependencies installed in your setup for sensor simulation to work. See the forest recipe for this package.

Launch XBot2's monitoring GUI

xbot2-gui

Run a homing motion (it is a default, simple real-time plugin)

rosservice call /xbotcore/homing/switch 1

or click Start on the GUI, next to the homing label.


Enable robot control via ROS

rosservice call /xbotcore/ros_ctrl/switch 1

or click Start on the GUI, next to the ros_ctrl label. NOTE: you must not be publishing messages on the /xbotcore/command topic when starting this module! Once started, messages published on the /xbotcore/command topic are forwarded to the simulator. For debugging purposes, this can also be done via the GUI's sliders.
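Both switch services can also be called programmatically. A minimal rospy sketch, assuming the plugin switch services are of type std_srvs/SetBool (consistent with the rosservice call ... 1 usage above):

import rospy
from std_srvs.srv import SetBool

rospy.init_node('plugin_switcher')

for plugin in ['homing', 'ros_ctrl']:
    srv = '/xbotcore/%s/switch' % plugin
    rospy.wait_for_service(srv)
    switch = rospy.ServiceProxy(srv, SetBool)
    res = switch(True)  # True = start the plugin, False = stop it
    rospy.loginfo('%s: success=%s message=%s', srv, res.success, res.message)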

Move the base with IK

First, make sure that the ros_ctrl module is enabled and that the robot arm is not in a singular configuration (e.g., run the homing module once). Then, invoke the following launch file:

mon launch concert_cartesio concert.launch xbot:=true gui:=true

Then, right-click on the interactive marker and select Continuous Ctrl. Move the marker around, and see the resulting motion in Gazebo.

Note that this last part requires additional dependencies (see also setup-docker.bash), which can be installed via the hhcm-forest tool. Follow the setup instructions above and then invoke:

forest grow centauro_cartesio

Note: to control the base in velocity mode (i.e., via geometry_msgs/TwistStamped messages), you must first invoke the following ROS service:

rosservice call /cartesian/base_link/set_control_mode velocity

Upon successful return, you can move the base by continuously sending velocity commands to the topic /cartesian/base_link/velocity_reference. Note that the msg.header.frame_id field of the published messages can usefully be set to base_link in order to have the commanded twist interpreted w.r.t. the local frame.
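For reference, a minimal publisher sketch; the topic and frame_id come from the text above, while the 50 Hz rate and the 0.1 m/s forward velocity are arbitrary example values:

import rospy
from geometry_msgs.msg import TwistStamped

rospy.init_node('base_velocity_commander')
pub = rospy.Publisher('/cartesian/base_link/velocity_reference',
                      TwistStamped, queue_size=1)

rate = rospy.Rate(50)  # commands must be sent continuously; 50 Hz is an example
while not rospy.is_shutdown():
    msg = TwistStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = 'base_link'  # interpret the twist in the local frame
    msg.twist.linear.x = 0.1           # example: move forward at 0.1 m/s
    pub.publish(msg)
    rate.sleep()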

Deploy instructions

When launching the simulation environment (mon launch concert_gazebo concert.launch), a Python file is used to generate the robot model and write the URDF, SRDF, etc. By default this file is the concert_example.py script in concert_examples, although it can be changed by passing the path to another script via the modular_description argument of the launch file.

When the Python script is executed, the required files are generated in the /tmp folder and used by Gazebo, XBot2, etc. To save these files in a non-temporary folder, the deploy argument can be passed to the Python script. For instance, running:

roscd concert_examples
python3 concert_example.py --deploy ~/concert_ws/ros_src/ --robot-name my_concert_package

will deploy a ROS package called my_concert_package in the ~/concert_ws/ros_src directory. This can now be used as an independent ROS package, which can be shared or stored as usual.

Further documentation

The robot API: https://advrhumanoids.github.io/XBotInterface/

XBot2: https://advrhumanoids.github.io/xbot2/ , https://github.com/ADVRHumanoids/xbot2_examples

CartesIO: https://advrhumanoids.github.io/CartesianInterface/

concert_description's People

Contributors

alaurenzi, edoardoromiti

concert_description's Issues

Create a common simulation environment in gazebo

I propose creating a gazebo world from the BIM model.
It would benefit the project to have the entire drilling mission defined in one simulation environment.
Parts that would need to be included:

  • geometries from BIM model
  • Pre-defined drilling positions
  • ArUco markers for localization of drilling position
  • Humans walking through the scene or doing some typical working motion (can be provided by TUM)
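
As a possible starting point, world elements such as the pre-defined drilling positions could be spawned programmatically through the standard gazebo_ros services. A sketch, where the marker SDF and the poses are placeholders:

import rospy
from gazebo_msgs.srv import SpawnModel
from geometry_msgs.msg import Pose, Point

# Minimal static box used as a visual marker (placeholder geometry)
MARKER_SDF = """
<sdf version='1.6'><model name='drill_marker'>
  <static>true</static>
  <link name='link'><visual name='v'>
    <geometry><box><size>0.05 0.05 0.05</size></box></geometry>
  </visual></link>
</model></sdf>
"""

rospy.init_node('spawn_drill_markers')
rospy.wait_for_service('/gazebo/spawn_sdf_model')
spawn = rospy.ServiceProxy('/gazebo/spawn_sdf_model', SpawnModel)

# Hypothetical pre-defined drilling positions, in world coordinates
for i, (x, y, z) in enumerate([(1.0, 0.0, 1.5), (1.0, 0.5, 1.5)]):
    spawn(model_name='drill_pos_%d' % i,
          model_xml=MARKER_SDF,
          robot_namespace='',
          initial_pose=Pose(position=Point(x, y, z)),
          reference_frame='world')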

Could not run IK control in RViZ

Hi,
I am trying to run the last step from the README: https://github.com/ADVRHumanoids/concert_description#move-the-base-with-ik
Unfortunately, I run into this error:

    /ros_server_node: [ok  ] OpenSotBackEndQPOases so library found! 
    /ros_server_node: [err ] ERROR: INITIALIZING STACK 1 
    /ros_server_node: terminate called after throwing an instance of 'std::runtime_error'
    /ros_server_node:   what():  Can Not initizalize SoT!
               /rviz: [RenderWindow* rviz::RenderSystem::makeRenderWindow]: Stereo is NOT SUPPORTED
               /rviz: [RenderSystem::detectGlVersion]: OpenGL device: Quadro RTX 4000/PCIe/SSE2
               /rviz: [RenderSystem::detectGlVersion]: OpenGl version: 4.6 (GLSL 4.6).
    /ros_server_node: /ros_server_node died from signal 6
    /ros_server_node: /ros_server_node left a core dump
    /ros_server_node: Determined pattern '/tmp/rosmon-node-x1tlg1/core'
    /ros_server_node: Could not find a matching core file :-(

Steps I took:
Run docker:

cd docker && ./run-docker.bash

Start roscore:

roscore

Start simulation:

mon launch concert_gazebo concert.launch rviz:=true

Start xbot2 GUI:

xbot2-gui

Enabled ros_ctrl in the GUI.
Launched concert cartesio:

mon launch concert_cartesio concert.launch xbot:=true gui:=true

--> This threw the above error.
RViz looks like this: (screenshot attached).
Gazebo looks fine, as far as I can judge.

unable to quickstart concert

Hi, Concert team.
I tried to quickstart the Concert as shown in your guide. I followed the preceding steps and installed all the requirements (no errors shown), but the line
mon launch concert_gazebo concert.launch [rviz:=true]
just runs silently and I cannot proceed with the setup. What might be the problem here?

Also, I generated the .urdf file using the python script you've provided and set up the config package using MoveIt, but even though
roslaunch concert_moveit_config demo.launch
runs successfully and allows me to jump between poses I had created,
neither of the following lets me open a Gazebo world with the robot in it:
roslaunch gazebo_ros empty_world.launch paused:=true use_sim_time:=false gui:=true throttled:=false recording:=false debug:=true && rosrun gazebo_ros spawn_model -file /home/airat/concert_ws/concert_gazebo.urdf -urdf -x 0 -y 0 -z 1 -model concert;
roslaunch concert_moveit_config gazebo.launch
Thank you!

Forest Grow Error

I pulled the new commits on the repository and I can no longer do the Docker build using the script build-docker.bash inside the docker folder.

(screenshot of the build error attached)

Gravity compensation of joint impedance controller

We are currently conformance-checking our safety models.
It seems like the gravity compensation of the controller is not well-tuned.
The green capsules show how the robot should look when given the desired joint position [2.5, 1.5, 0.0, 0.0, 0.0, 0.0]; the actual robot hangs significantly lower due to gravity in the simulation (first image). When gravity is disabled, the pose matches perfectly (second image).

Could you check if the gravity compensation is correct @EdoardoRomiti?

Gazebo segmentation fault

I tried running Gazebo with the CONCERT robot using the mon command. At some point it shows a segmentation fault and I get stuck there. Could you please help me with it?
I took into account the fixes you'd suggested in this issue, but none of them seem to have helped.

The steps I take are the following:
0. Clone the repo using forest. Install nvidia-docker2 and some other things to get the run-docker.bash file running.

  1. Run
sudo systemctl daemon-reload
sudo systemctl restart docker
  2. Run ./run-docker.bash, which opens a new terminal.
  3. Split that terminal and run:
    3.1. roscore
    3.2. mon launch concert_gazebo concert.launch [rviz:=true]. This command shows the following output:
$ mon launch concert_gazebo concert.launch [rviz:=true]
Loaded launch file in 0.017532s
Still loading parameter '/robot_description_semantic'...
Still loading parameter '/robot_description_gz'...
Still loading parameter '/robot_description_xbot'...
ROS_MASTER_URI: 'http://localhost:11311'
roscore is already running.
Running as '/rosmon_1677501005641155397'

/xbot2: [info][xbot2-core] waiting for socket 'gz_to_xbot2_time'..
/urdf_spawner: [SpawnModelNode.run]: Loading model XML from ros parameter robot_description_gz
/urdf_spawner: [spawn_urdf_model_client]: Waiting for service /gazebo/spawn_urdf_model
/gazebo: [GazeboRosApiPlugin::Load]: Finished loading Gazebo ROS API Plugin.
/gazebo: [service::exists]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
/gazebo: libGL error: MESA-LOADER: failed to retrieve device information
/gazebo_gui: [GazeboRosApiPlugin::Load]: Finished loading Gazebo ROS API Plugin.
/gazebo_gui: [service::exists]: waitForService: Service [/gazebo_gui/set_physics_properties] has not been advertised, waiting...
/gazebo_gui: libGL error: MESA-LOADER: failed to retrieve device information
/xbot2: [info][xbot2-core] waiting for socket 'gz_to_xbot2_time'..
/gazebo: Segmentation fault (core dumped)
/gazebo: /gazebo exited with status 139
/gazebo_gui: Segmentation fault (core dumped)
/gazebo_gui: /gazebo_gui exited with status 139
/xbot2: [info][xbot2-core] waiting for socket 'gz_to_xbot2_time'..
/xbot2: [info][xbot2-core] waiting for socket 'gz_to_xbot2_time'..
/xbot2: [info][xbot2-core] waiting for socket 'gz_to_xbot2_time'..

3.2.* Alternatively, I ran the other mon command and got this:

$ mon launch concert_cartesio concert.launch xbot:=true gui:=true
Loaded launch file in 0.027434s
ROS_MASTER_URI: 'http://localhost:11311'
roscore is already running.
Running as '/rosmon_1677501084613753010'

/ik_rviz: QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-user'
/ik_rviz: [VisualizerApp::init]: rviz version 1.14.19
/ik_rviz: [VisualizerApp::init]: compiled against Qt version 5.12.8
/ik_rviz: [VisualizerApp::init]: compiled against OGRE version 1.9.0 (Ghadamon)
/interactive_markers: [service::exists]: waitForService: Service [/cartesian/get_task_list] has not been advertised, waiting...
/ik_rviz: [RenderSystem::forceGlVersion]: Forcing OpenGl version 0.
/ros_server_node: [info] Configuring from ROS parameter server
/ros_server_node: terminate called after throwing an instance of 'std::runtime_error'
/ros_server_node:   what():  robot_description parameter not set
/ik_rviz: libGL error: MESA-LOADER: failed to retrieve device information
/ik_rviz: libGL error: MESA-LOADER: failed to retrieve device information
/ros_server_node: /ros_server_node died from signal 6
/ros_server_node: /ros_server_node left a core dump
/ros_server_node: Determined pattern '/tmp/rosmon-node-UqSGUH/core'
/ros_server_node: Could not find a matching core file :-(
/ik_rviz: /ik_rviz died from signal 11
/ik_rviz: /ik_rviz left a core dump
/ik_rviz: Determined pattern '/tmp/rosmon-node-yDw1HE/core'
/ik_rviz: Could not find a matching core file :-(

How to include cameras in the URDF model?

At the moment, to generate the CONCERT robot URDF we use the modular package. As can be seen from modular.launch, it accepts arguments to control whether or not to add cameras, velodyne, etc.
In this launch file we use it to generate two URDF files, let's call them urdf_gz and urdf_xbot:

<!-- Load the URDF/SRDF into the ROS Parameter Server -->
    <param name="robot_description_gz" 
        command="python3 $(arg modular_description) -o urdf 
                                                    -a gazebo_urdf:=true
                                                        realsense:=$(arg realsense)
                                                        velodyne:=$(arg velodyne)
                                                    -r modularbot_gz"/>

    <param name="robot_description_xbot" 
        command="python3 $(arg modular_description) -o urdf 
                                                    -a gazebo_urdf:=false
                                                        realsense:=$(arg realsense)
                                                        velodyne:=$(arg velodyne)
                                                    -r modularbot"/>

Basically, the possible arguments are:

  • gazebo_urdf. Controls whether urdf_gz or urdf_xbot is generated, by controlling the inclusion of:
    • the floating joint needed by xbot but not by gazebo
    • all the gazebo tags, which will not be included in the urdf used by xbot. This is mainly to have a "cleaner" urdf to be used by xbot (and other libraries that need just kin./dyn. parameters) without all the gazebo tags with simulation parameters and plugins, which are needed just by gazebo
  • velodyne. Controls the inclusion of the Velodyne lidars (true to simulate them).
  • realsense. Controls the inclusion of the Realsense cameras (true to simulate them).

In particular, for the cameras I think it is a bit tricky to determine how to include them, depending on whether we are simulating them or not. For example, see the discussion regarding this for the Centauro platform: #26 and #29.

In general, I think there are two ways (feel free to propose a third):

  1. let XBot publish all the tfs when simulating the cameras (realsense:=true). This means the camera tf tree will be part of both urdf_gz and urdf_xbot.
    When realsense:=false only the root frames (*_bottom_screw_frame for the D camera and the *_pose_frame for the T camera) are included in the model, so that the realsense node (or an equivalent one) will take care of publishing the rest of the tf tree.
    This is a similar approach to that used for Centauro, and it's how it is currently implemented (last commit: a7bd2ce).
    Possible drawbacks of this approach are:

    • the fact that the urdf used in simulation is different than the one used on the real robot (according to @alaurenzi this has caused some issues in the past),
    • that the gazebo tag and plugin included by the realsense xacro will end up in the urdf_xbot anyway (unless we modify realsense_gazebo_description)
  2. publish all camera tfs with external robot_state_publishers when simulating the cameras (realsense:=true). This means the camera tf tree will be part only of urdf_gz.
    The urdf_xbot will contain only the root frames (*_bottom_screw_frame and *_pose_frame) both for realsense:=true or realsense:=false. So it will be the same urdf both in simulation and on the real robot.
    The tfs will be published by the realsense node (or an equivalent one) when on the real robot, while in simulation they will need to be published by external robot_state_publishers, since xbot will not be publishing them anymore.
    The only drawback here is that we'll need 4 separate "robot_description"s (one for each camera) and 4 separate robot_state_publishers, making the launchfile a bit more complex. But the other two drawbacks of option 1 should be solved.

What do you think @alaurenzi @liesrock @aled96 @torydebra?
Considering your experience in the past with the cameras what do you think will be the best option? Or do you have other suggestions?

Robot-Model mismatch in number of joints

I am a student of Jakob Thumm and I have a problem with running the Xbot ROS API example notebook on the concert_description robot. Whenever I try q0 = robot.getJointPosition(), I get a 13-dimensional vector back, but model.setJointPosition(q0) assumes a 19-dimensional vector. I guess this is because of cfg.set_string_parameter('model_type', 'RBDL'). Which model_type do I have to choose for the concert robot?
Another issue I encounter is that ros_control stops whenever I send a robot.move() over the xbot.RobotInterface. Is this expected behaviour?
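
For context, the 13 vs. 19 mismatch is consistent with the model exposing a 6-DoF floating base on top of the 13 actuated joints (13 + 6 = 19), rather than with the model_type choice. A sketch of the usual sync pattern, assuming the Python bindings mirror the C++ XBotInterface API (parameter and method names taken from the XBotInterface Python examples, so treat them as assumptions):

import rospy
from xbot_interface import config_options as co
from xbot_interface import xbot_interface as xbot

rospy.init_node('xbot_sync_example')

# Config as in the example notebook (names assumed)
cfg = co.ConfigOptions()
cfg.set_urdf(rospy.get_param('xbotcore/robot_description'))
cfg.set_srdf(rospy.get_param('xbotcore/robot_description_semantic'))
cfg.generate_jidmap()
cfg.set_bool_parameter('is_model_floating_base', True)
cfg.set_string_parameter('model_type', 'RBDL')
cfg.set_string_parameter('framework', 'ROS')

robot = xbot.RobotInterface(cfg)  # 13-dim: actuated joints only
model = xbot.ModelInterface(cfg)  # 19-dim: 13 joints + 6 floating-base DoFs

# Instead of model.setJointPosition(robot.getJointPosition()), let the
# model map the robot state internally (syncFrom assumed from the C++ API):
model.syncFrom(robot)
model.update()
q_model = model.getJointPosition()  # 19-dim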

Imu Message

I'm using the CONCERT platform odometry, on topic "/concert_odometry/base_link/odom", to localize the robot with respect to its starting position. The x and y coordinates seem to work quite well, but the orientation angle on the navigation plane does not seem to be accurate in the same way.
Unfortunately, small errors in the orientation angle create more visible localization problems, which can only be compensated for through the use of sensors (checking the differences between the environment and the given map).
I have noticed that a common solution is to use the orientation component of an IMU, which is already on the CONCERT platform (e.g. the Neobotix settings here, or even the turtlebot3, from what I have seen here). The meaning of the robot_localization node settings used by Neobotix is explained here.
For these reasons it would also be useful to have an /imu topic with a standard sensor_msgs/Imu message, as defined here, so I can test whether this improves the localization performance of the navigation stack.
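
A minimal sketch of such a bridge, republishing the odometry orientation as sensor_msgs/Imu; note the pose topic type is an assumption here (geometry_msgs/PoseStamped) and the covariances are placeholders:

import rospy
from geometry_msgs.msg import PoseStamped
from sensor_msgs.msg import Imu

def on_pose(msg):
    imu = Imu()
    imu.header = msg.header
    imu.orientation = msg.pose.orientation
    # Placeholder diagonal orientation covariance
    imu.orientation_covariance = [0.05, 0, 0, 0, 0.05, 0, 0, 0, 0.05]
    # -1 in element 0 marks angular velocity / acceleration as not provided
    imu.angular_velocity_covariance = [-1.0] + [0.0] * 8
    imu.linear_acceleration_covariance = [-1.0] + [0.0] * 8
    pub.publish(imu)

rospy.init_node('imu_bridge')
pub = rospy.Publisher('/imu', Imu, queue_size=10)
rospy.Subscriber('/concert_odometry/base_link/pose', PoseStamped, on_pose)
rospy.spin()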
Thank you!

Adding Dalsa camera to the URDF just for Real Robot not for the simulation

Hello All,

As there is an integration meeting between the 17th and 21st of July 2023, we wish to integrate our Dalsa fisheye cameras on the robot for 3D pose estimation.
For this we want the following transforms to give out exact 3D poses:

  1. t_fisheye_robotBase (transform between the fisheye camera and the mobile base)
  2. t_robotBase_map (transform between the robot base and the map)

For this I thought it would be a good idea to add the camera to the URDF of the real robot, so that the fisheye cameras are available in the tf tree and, after localization, we get the required tfs accurately.
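
Until the camera is part of the URDF, one possible stopgap is to publish t_fisheye_robotBase as a static transform. A sketch; the frame names and the mounting offset below are placeholders, not measured values:

import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('fisheye_static_tf')

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'base_link'        # robot base frame (name assumed)
t.child_frame_id = 'fisheye_camera'    # hypothetical camera frame
t.transform.translation.x = 0.30       # placeholder mounting offset [m]
t.transform.translation.z = 0.50
t.transform.rotation.w = 1.0           # identity orientation

br = tf2_ros.StaticTransformBroadcaster()
br.sendTransform(t)
rospy.spin()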

Please let us know if further information is required. @alaurenzi @EdoardoRomiti

Fix Travis PR Build

It fails due to not being able to perform Docker's login command, since secrets are not shared with external PRs. We should disable this section when [ "$TRAVIS_PULL_REQUEST" != "false" ].

Docker Installation Failed

While installing concert_description by running
./build-docker.bash,
the following error occurred:

calling "cmake /home/user/concert_ws/src/centauro_cartesio/. -DCMAKE_INSTALL_PREFIX=/home/user/concert_ws/install -DCMAKE_BUILD_TYPE=RelWithDebInfo"
returned 1
..[centauro_cartesio] configuring failed
[concert_all] failed to install dependency centauro_cartesio
(failed)
The command '/bin/bash -ic forest grow concert_all --verbose --jobs 4 --pwd user' returned a non-zero code: 1

This happens with the newest version, even after re-cloning the repository.

Modular pkg does not compile

I noticed, as reported here, that the modular pkg does not compile. I cloned the repository and then ran the command catkin_make.

Looking at the commits on the modular_hhcm repository, I noticed that the repository inside the container is not updated; indeed the commit is acd71a21cde925efbdd628ec4cc5b7a83cd5843c instead of f1b784cc9f7ca7fa69cbac7cbeabb57641b3acf9.
I tried to update the repo manually, but there is a new error:
"Multiple packages found with the same name "modular_resources""

namespace discrepancies found on hardware wrst concert_description

Hy guys!

Opening this non urgent issue with the links to the commits that include the namespace changes/discrepancies to concert_description when porting the control- as well as the navigation-ws to hardware.
You can find the neccessary changes for the control-ws here and those for the navigation-ws here.

We commited the changes on a separate branch, so let us know to which namespaces you want to stick, either those present in concert_description or those we found on the hardware.

Thanks MT

Gazebo Settings

I noticed that the simulation seems to be very heavy, I think because of the Gazebo physics settings, here.
I'm trying to add a more complex simulation world, but if I change these settings (to maintain a real-time factor close to one) the robot simulation doesn't work. Could you tell me how I could modify them, and what the limits are?
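
For inspecting and experimenting with these settings at runtime, a sketch using the standard gazebo_ros services; the target values are guesses, and too large a time step is presumably what breaks the robot simulation:

import rospy
from gazebo_msgs.srv import GetPhysicsProperties, SetPhysicsProperties

rospy.init_node('tune_physics')
rospy.wait_for_service('/gazebo/get_physics_properties')
get_props = rospy.ServiceProxy('/gazebo/get_physics_properties', GetPhysicsProperties)
set_props = rospy.ServiceProxy('/gazebo/set_physics_properties', SetPhysicsProperties)

p = get_props()
# Real-time factor ~= time_step * max_update_rate
rospy.loginfo('time_step=%f max_update_rate=%f', p.time_step, p.max_update_rate)

# Example tweak: cap the update rate so the RTF target is 1,
# keeping the solver time step unchanged
set_props(time_step=p.time_step,
          max_update_rate=1.0 / p.time_step,
          gravity=p.gravity,
          ode_config=p.ode_config)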

Cannot Start Concert_Gazebo in docker

Hello,
after building and starting the Docker image, I tried to start the program with mon launch concert_gazebo concert.launch
but the following error occurred:

Traceback (most recent call last):
  File "/home/user/concert_ws/ros_src/concert_description/concert_examples/concert_example.py", line 6, in <module>
    urdf_writer = UrdfWriter(speedup=True, floating_base=True)
  File "/home/user/concert_ws/src/modular/src/modular/URDF_writer.py", line 1229, in __init__
    self.base_link = ModuleNode.ModuleNode(data, "base_link")
  File "/home/user/concert_ws/src/modular/src/modular/ModuleNode.py", line 547, in __init__
    super(ModuleNode, self).__init__(dictionary, filename, format=format, template_d=template_dictionary)
  File "/home/user/concert_ws/src/modular/src/modular/ModuleNode.py", line 329, in __init__
    self.interpreter = interpreter_map[format](self, d, filename)
  File "/home/user/concert_ws/src/modular/src/modular/ModuleNode.py", line 298, in __init__
    self.owner.set_size()
  File "/home/user/concert_ws/src/modular/src/modular/ModuleNode.py", line 358, in set_size
    setattr(self, 'size', switcher.get(self.size, "Invalid size"))
AttributeError: can't set attribute
(the same traceback is printed two more times)
Could not load launch file: /home/user/concert_ws/ros_src/concert_description/concert_gazebo/launch/modular.launch:58: <param> command failed (exit status 1)

Sourcing setup.bash also didn't help.
Can you tell me how to proceed?

Thanks

Visualizing Robot with a recorded rosbag file

(Screenshots attached of the rostopics I have and of the rosnodes which are running.)

I have concert_description installed and sourced; instead of robot_description I have set xbotcore/robot_description, but I am still not able to visualize the robot in RViz.
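
One possible workaround, sketched below under the assumption that the URDF is indeed available on xbotcore/robot_description: RViz's RobotModel display reads the robot_description parameter by default, so either point the display's Robot Description property at xbotcore/robot_description, or copy the parameter over before starting RViz:

import rospy

rospy.init_node('copy_robot_description')
# Copy the xbot-namespaced URDF to the global name RViz expects
rospy.set_param('robot_description', rospy.get_param('xbotcore/robot_description'))

Note also that with a recorded bag the /tf tree must be present in the bag (or republished, e.g. by a robot_state_publisher fed with the joint states) for the model to be posed correctly.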

Odom Topic

To develop the navigation stack it would be useful to have an "/odom" topic of type nav_msgs/Odometry (http://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html) where the velocity of the robot with respect to the robot base_link is published.

I also noticed that in the topic /concert_odometry/base_link/twist the covariances are zero (do they have the right value?).
Also, what covariances do the values in the topic /concert_odometry/base_link/pose have?

In particular, we need a message of type nav_msgs/Odometry coming from the robot. It must have child_frame_id equal to the robot's "base_link", and the frame_id inside the header must be "odom". It must also contain at least the robot's velocities with respect to base_link (e.g., positive x velocity if it is moving forward) and non-zero covariances (e.g., just the values along the diagonal set to 0.05).
This message is needed to then filter with an EKF and estimate the tf between "odom" and "base_link". This estimate will then be corrected with sensor-based localization.
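
A minimal sketch of a publisher matching this spec; the actual twist source is left as a placeholder:

import rospy
from nav_msgs.msg import Odometry

rospy.init_node('odom_publisher')
pub = rospy.Publisher('/odom', Odometry, queue_size=10)

rate = rospy.Rate(50)
while not rospy.is_shutdown():
    odom = Odometry()
    odom.header.stamp = rospy.Time.now()
    odom.header.frame_id = 'odom'
    odom.child_frame_id = 'base_link'
    # Placeholder: fill from the robot odometry (e.g. the
    # /concert_odometry/base_link/twist topic mentioned above)
    odom.twist.twist.linear.x = 0.0
    # Non-zero diagonal covariances, as requested (row-major 6x6)
    for i in range(6):
        odom.pose.covariance[6 * i + i] = 0.05
        odom.twist.covariance[6 * i + i] = 0.05
    pub.publish(odom)
    rate.sleep()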

Homing Example doesn't work on concert_description

Changing the modular.yaml from

  homing:
    thread: rt_main
    type: homing
    params:
      time: { value: 5.0, type: double }

to

  homing:
    thread: rt_main
    type: homing_example
    params:
      time: { value: 5.0, type: double }

results in the following errors:

              /xbot2: [err ][J_wheel_A] [J_wheel_A] resource 'Position' undefined
              /xbot2: [err ][robot_homing] [J_wheel_A] failed to acquire 'Position' control mode
              /xbot2: [info][rt_main] xbot2 thread 'rt_main' transits to state 'Run' 
              /xbot2: [err ][J_wheel_B] [J_wheel_B] resource 'Position' undefined
              /xbot2: [err ][robot_homing] [J_wheel_B] failed to acquire 'Position' control mode
              /xbot2: [err ][J_wheel_C] [J_wheel_C] resource 'Position' undefined
              /xbot2: [err ][robot_homing] [J_wheel_C] failed to acquire 'Position' control mode
              /xbot2: [err ][J_wheel_D] [J_wheel_D] resource 'Position' undefined
              /xbot2: [err ][robot_homing] [J_wheel_D] failed to acquire 'Position' control mode

I guess it is similar to #9, and caused by this line :

_robot->setControlMode(ControlMode::Position());

The same also happens in self-written C++ plugins. I tried changing the control mode to ControlMode::Velocity(); then no errors are thrown, but the robot still doesn't move.
