rgring / drl_local_planner_ros_stable_baselines
License: BSD 3-Clause "New" or "Revised" License
Hi,
Thanks for your work. I have read your master's thesis; it describes a modified diff_drive plugin. Where can I find the corresponding code?
Thanks a lot!
Hi,
Thanks for your great work.
There are some problems when I test the trained model.
1. I trained the model successfully.
2. However, when I execute the trained PPO agent, the error shown in the terminal is:
Failed to call step_simulation_ service
3. Then, when I run "roslaunch rl_agent run_ppo2_agent.launch mode:="train"",
the error shown in the terminal is:
File "/home/hitsz/drl_local_planner_ws/src/drl_local_planner_ros_stable_baselines/rl_agent/scripts/run_scripts/run_ppo.py", line 152, in
num_stacks=int(sys.argv[8]))
ValueError: invalid literal for int() with base 10: '__name:=run_ppo.py'
4. Then I modified run_ppo2_agent.launch
from: node pkg="rl_agent" type="run_ppo.py" name="run_ppo.py" output="screen" args="ppo2_foo CnnPolicy_multi_input_vel2 $(arg mode) 1 0 1 ped"
to: node pkg="rl_agent" type="run_ppo.py" name="run_ppo.py" output="screen" args="ppo2_foo CnnPolicy_multi_input_vel2 $(arg mode) 1 0 1 ped 4" /
Adding the missing ninth argument ("4", for num_stacks) solved the error from step 3.
5. However, another error occurs in "src/drl_local_planner_ros_stable_baselines/rl_agent/src/rl_agent/env_wrapper/ros_env_disc_img_vel.py", line 36:
img_width = rospy.get_param("%s/rl_agent/img_width_pos" % ns) + rospy.get_param("%s/rl_agent/img_width_neg" % ns)
The error shown in the terminal is:
raise KeyError(key)
KeyError: 'sim1/rl_agent/img_width_pos'
Have you seen these problems? Can you give me some suggestions?
Thanks a lot!
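For context on the step-3 error: `__name:=run_ppo.py` is a remapping argument that roslaunch appends to every node's command line, so `sys.argv[8]` no longer points at a user argument. The usual fix is to strip remapping arguments before parsing (in a ROS node one would call `rospy.myargv(argv=sys.argv)`); a pure-Python sketch of the same logic, with an illustrative argv:

```python
import sys

def strip_ros_remappings(argv):
    """Drop ROS remapping arguments (anything containing ':=') from argv,
    mimicking what rospy.myargv() does for a node's command line."""
    return [a for a in argv if ':=' not in a]

# Illustrative argv as roslaunch would hand it to run_ppo.py:
argv = ['run_ppo.py', 'ppo2_foo', 'CnnPolicy_multi_input_vel2', 'train',
        '1', '0', '1', 'ped', '4', '__name:=run_ppo.py', '__log:=/tmp/node.log']
clean = strip_ros_remappings(argv)
num_stacks = int(clean[8])  # no longer trips over '__name:=run_ppo.py'
print(num_stacks)  # -> 4
```

For the step-5 KeyError, the parameters under `sim1/rl_agent/` (e.g. `img_width_pos`) must be loaded onto the parameter server (via the corresponding params YAML in the launch file) before the node starts; the lookup itself is fine.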
Hi, RGring!
May I ask whether the vehicle's wheels are omnidirectional or conventional? Is there a vehicle kinematics model?
Thank you in advance!!!
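For context: the thesis mentions a modified diff_drive plugin, which suggests a differential-drive (not omnidirectional) kinematic model, where the planner outputs a linear velocity v and angular velocity w. A sketch of the standard differential-drive inverse kinematics, with illustrative wheel radius and track width (not values taken from this project):

```python
def diff_drive_wheel_speeds(v, w, wheel_radius=0.1, track_width=0.5):
    """Convert a (v, w) twist command into left/right wheel angular
    velocities [rad/s] for a differential-drive robot.
    wheel_radius and track_width are illustrative parameters."""
    w_left = (v - w * track_width / 2.0) / wheel_radius
    w_right = (v + w * track_width / 2.0) / wheel_radius
    return w_left, w_right

# Driving straight at 1 m/s: both wheels spin at 10 rad/s
print(diff_drive_wheel_speeds(1.0, 0.0))  # -> (10.0, 10.0)
```

A pure rotation (v = 0) makes the wheels spin in opposite directions, which is the defining constraint of a differential drive: it cannot translate sideways.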
Hi, thanks for open-sourcing the project.
I get the following error while building the project:
CMake Error: The following variables are used in this project, but they are set to NOTFOUND. Please set them or make sure they are set and tested correctly in the CMake files: LUA_INCLUDE_DIR [(ADVANCED)]
Does the project require Lua?
Thanks,
Tanvir
Hi @RGring, I am new to ROS and stable-baselines. I successfully ran your code. I want to deploy the trained agent on my HSR robot, which has a laser range finder. Could you point me to the code or topics I should focus on to deploy it on my robot?
Hello, RGring. I tried to catkin_make drl_local_planner_ros_stable_baselines and got the error "fatal error: flatland_msgs/Step.h: No such file or directory". I found the cause: toggle_setup_init.cpp (in drl_local_planner_ros_stable_baselines/rl_bringup/src) includes <flatland_msgs/Step.h>, but there is no Step.msg in flatland_msgs. How can I solve this problem?
Thanks a lot for your answers and your code!
Hello everyone,
I am getting an error when compiling with catkin_make -DCMAKE_BUILD_TYPE=Release; it is the following:
[ 84%] Building CXX object drl_local_planner_forks/pedsim/pedsim_simulator/CMakeFiles/pedsim_simulator.dir/include/pedsim_simulator/element/moc_waypoint.cxx.o
[ 85%] Linking CXX executable /home/sasm/catkin_ws/devel/lib/pedsim_simulator/pedsim_simulator
[ 85%] Built target pedsim_simulator
Makefile:159: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j8 -l8" failed
I have already installed the dependencies. When compiling with plain catkin_make, I get a bunch of errors from pedsim. May I compile pedsim externally?
Hi,
I've been trying to load the PPO2 models
PPO2.load("example_agents/ppo2_1_raw_data_cont_0/ppo2_1_raw_data_cont_0.pkl")
and get the following error: ValueError: Cannot feed value of shape (1344, 256) for Tensor 'Placeholder_4:0', which has shape '(1600, 256)'
I did some digging: on the one hand, the weights saved in your pickle file are for an fc1 layer of shape (1344, 256); on the other hand, the conv1d operations defined in your custom stable-baselines fork lead to an fc1 layer of shape (1600, 256).
For the weights to have the correct shape (1344, 256), the output of the second conv1d should be of shape (?, 21, 64), but instead we obtain (?, 25, 64).
import numpy as np
import tensorflow as tf
# conv1d and conv_to_fc are provided by the custom stable-baselines fork

def laser_cnn_multi_input(state, **kwargs):
    """
    1D Conv Network
    :param state: (TensorFlow Tensor) state input placeholder
    :param kwargs: (dict) Extra keyword parameters for the convolutional layers of the CNN
    :return: (TensorFlow Tensor) The CNN output layer
    """
    # split the stacked input into the laser-scan and waypoint channels
    scan = tf.squeeze(state[:, :, 0:kwargs['laser_scan_len'], :], axis=1)
    wps = tf.squeeze(state[:, :, kwargs['laser_scan_len']:, -1], axis=1)
    kwargs_conv = {}
    activ = tf.nn.relu
    layer_1 = activ(conv1d(scan, 'c1d_1', n_filters=32, filter_size=5, stride=2, init_scale=np.sqrt(2), **kwargs_conv))
    layer_2 = activ(conv1d(layer_1, 'c1d_2', n_filters=64, filter_size=3, stride=2, init_scale=np.sqrt(2), **kwargs_conv))
    layer_2f = conv_to_fc(layer_2)
where conv1d is defined here.
I've made sure to use TensorFlow 1.13.1.
Could it be that you used a different version of the conv1d code during training?
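The fc1 size is fully determined by the conv1d output-length arithmetic, so the mismatch can also come from a different laser_scan_len at training time rather than different conv1d code. A sketch of the arithmetic (the input lengths 100 and 84 and the SAME padding are illustrative guesses, not values confirmed by the repo):

```python
import math

def conv1d_out_len(length, filter_size, stride, padding='SAME'):
    """Output length of a 1-D convolution, per TensorFlow's conventions."""
    if padding == 'SAME':
        return math.ceil(length / stride)
    # 'VALID'
    return (length - filter_size) // stride + 1

def fc1_size(scan_len, padding='SAME'):
    l1 = conv1d_out_len(scan_len, 5, 2, padding)  # c1d_1: filter 5, stride 2
    l2 = conv1d_out_len(l1, 3, 2, padding)        # c1d_2: filter 3, stride 2
    return l2 * 64                                # flattened by conv_to_fc

print(fc1_size(100, 'SAME'))  # -> 1600 (25 * 64), the shape the graph expects
print(fc1_size(84, 'SAME'))   # -> 1344 (21 * 64), the shape in the pickle
```

So a scan length of 84 would produce exactly the saved (1344, 256) fc1 weights, while 100 produces (1600, 256); comparing laser_scan_len between training config and the current launch files would confirm or rule this out.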
Hello,
When I tried to run
python rl_agent/scripts/train_scripts/train_ppo.py
I got this error:
Traceback (most recent call last):
File "rl_agent/scripts/train_scripts/train_ppo.py", line 13, in
from rl_agent.env_wrapper.ros_env_cont_img import RosEnvContImg
File "/home/hantewin/test/src/drl_local_planner_ros_stable_baselines-master/rl_agent/src/rl_agent/env_wrapper/ros_env_cont_img.py", line 19, in
from rl_agent.env_wrapper.ros_env_img import RosEnvImg
File "/home/hantewin/test/src/drl_local_planner_ros_stable_baselines-master/rl_agent/src/rl_agent/env_wrapper/ros_env_img.py", line 15, in
from rl_agent.env_wrapper.ros_env import RosEnvAbs
File "/home/hantewin/test/src/drl_local_planner_ros_stable_baselines-master/rl_agent/src/rl_agent/env_wrapper/ros_env.py", line 32, in
from rl_agent.env_utils.task_generator import TaskGenerator
File "/home/hantewin/test/src/drl_local_planner_ros_stable_baselines-master/rl_agent/src/rl_agent/env_utils/task_generator.py", line 29, in
from pedsim_srvs.srv import SpawnPeds
ImportError: cannot import name 'SpawnPeds'
Can you help me, please? I don't know how to deal with it. I already ran chmod 777 as well.
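An `ImportError: cannot import name 'SpawnPeds'` from `pedsim_srvs.srv` usually means the pedsim_srvs package was not built (or the workspace's devel/setup.bash was not sourced), so the generated Python service module is not on the path; file permissions (chmod) are not the cause. A small pure-Python diagnostic for checking whether a module is reachable:

```python
import importlib.util

def module_available(name):
    """Return True if `name` can be imported from the current PYTHONPATH."""
    return importlib.util.find_spec(name) is not None

# After `catkin_make` and `source devel/setup.bash`, this should print True:
print(module_available('pedsim_srvs'))
# Sanity check on a stdlib module:
print(module_available('os'))  # -> True
```

If `pedsim_srvs` reports False from the same shell that launches train_ppo.py, the fix is to build the workspace and source its setup file, not to change permissions.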
Hi
Thanks for your work.
Is there any way to create new scenario XML files (for example, like the pedsim_scenario.xml file in your project)?
Thanks a lot!
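Since pedsim scenario files are plain XML, new ones can also be generated programmatically. A hedged sketch using the standard library (the element and attribute names `waypoint`, `agent`, `addwaypoint` follow the usual pedsim_simulator scenario format, but should be checked against the pedsim_scenario.xml shipped in this repo; coordinates are illustrative):

```python
import xml.etree.ElementTree as ET

def build_scenario():
    """Build a minimal pedsim-style scenario: one agent walking
    between two waypoints. All values are illustrative."""
    scenario = ET.Element('scenario')
    ET.SubElement(scenario, 'waypoint', id='w1', x='2', y='2', r='0.5')
    ET.SubElement(scenario, 'waypoint', id='w2', x='10', y='2', r='0.5')
    agent = ET.SubElement(scenario, 'agent', x='2', y='2', n='1',
                          dx='0.5', dy='0.5', type='0')
    ET.SubElement(agent, 'addwaypoint', id='w1')
    ET.SubElement(agent, 'addwaypoint', id='w2')
    return ET.tostring(scenario, encoding='unicode')

xml_str = build_scenario()
print(xml_str)
```

Writing the returned string to a file produces a scenario that can be pointed to from the launch configuration, in the same way pedsim_scenario.xml is referenced.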