ahq1993 / mpnet
Motion Planning Networks
License: MIT License
Hello,
I read the MPNet paper and it's really amazing.
In the paper, you mention implementing this project on ROS with MoveIt!.
Could you provide information or a tutorial for running this project on ROS?
If you are willing to, I would really appreciate it.
CAI
When running neuralplanner.py
on a previously unseen sample 2D environment (e.g., environment 150), GPU utilization is only about 10%, and the planner takes about 10 minutes to run.
The computer has an RTX 3060 GPU so I would think it should be much faster.
Does it sound like something is configured incorrectly, or is this expected?
Hello! I am trying to build this package in Ubuntu 18.04.
As written in the instructions in README, I installed the dependencies including libbot2.
However, after I successfully built libbot2 and ran
$ make
in the /MPNet/data_generations folder, the error message below came up.
-- No package 'bot2-vis' found
CMake Error at /usr/share/cmake-3.10/Modules/FindPkgConfig.cmake:419 (message):
A required package was not found
Call Stack (most recent call first):
/usr/share/cmake-3.10/Modules/FindPkgConfig.cmake:597 (_pkg_check_modules_internal)
CMakeLists.txt:17 (pkg_check_modules)
I installed libbot with
$ sudo make BUILD_PREFIX=/usr/local
Is there any solution to this problem?
Thank you in advance!!
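For what it's worth, `pkg_check_modules` locates bot2-vis through pkg-config, which only searches its own path list. A possible fix, assuming libbot2's .pc files landed under /usr/local/lib/pkgconfig (matching the BUILD_PREFIX used above — adjust if your install went elsewhere), is to point pkg-config there before re-running cmake:

```shell
# Assumed prefix: libbot2 installed with BUILD_PREFIX=/usr/local, so its
# pkg-config metadata (bot2-vis.pc etc.) should be in /usr/local/lib/pkgconfig.
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
```

After exporting, `pkg-config --exists bot2-vis` should succeed and the CMake configure step should find the package.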
When compiling the dataset I got the following error:
CMakeFiles/viewer.dir/main_viewer.cpp.o: In function `main':
main_viewer.cpp:(.text.startup+0x2c): undefined reference to `g_thread_init'
I think this is due to an updated version of lcm in which g_thread_init was removed. Removing the reference to it in main_viewer.cpp fixes this issue.
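The workaround above can be sketched as follows. The real target file is main_viewer.cpp in the viewer sources (exact path assumed); the demo below edits a stand-in copy so the command can be tried safely. Deleting the call is safe on modern systems because g_thread_init() has been a no-op since GLib 2.32:

```shell
# Stand-in copy of the problematic file (the real one lives in the viewer sources).
cat > /tmp/main_viewer_demo.cpp <<'EOF'
#include <glib.h>
int main(int argc, char **argv) {
    g_thread_init(NULL);  // deprecated in GLib 2.32, later removed entirely
    return 0;
}
EOF
# Strip the obsolete call; modern GLib initialises threading automatically.
sed -i 's/g_thread_init *([^)]*); *//' /tmp/main_viewer_demo.cpp
```

Applying the same `sed` to the real main_viewer.cpp (or deleting the line by hand) should resolve the undefined reference.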
Hi Ahmed,
I think you have done a great job in the field of Path Planning.
I read your paper and was wondering about the point cloud data fed to the ENet: was it point cloud data from a sensor at the end effector (or some other position), or point cloud data generated for a known environment? The paper doesn't go into the details of the data.
What kind of point cloud data did you use?
@ahq1993
Thanks for sharing such great work.
It seems that libbot2 is not supported on Windows.
So could you please explain the format of the dataset produced after going through all the data generation steps?
I will try to generate the dataset myself once I thoroughly understand the required format.
I think that joint angles and velocities, and link positions and velocities (translational and rotational), are usually used in manipulation tasks.
I want to know which information is used as input to the neural network (apart from the feature Z from the environment's point cloud).
Thanks for your kind response in advance ^ ^!
When compiling the dataset, I got the following error:
[ 25%] Building CXX object src/CMakeFiles/rrtstar.dir/rrts_main.cpp.o
/home/huyu/github/MPNet/data_generation/rrtstar/src/rrts_main.cpp:56:6: error: expected constructor, destructor, or type conversion before ‘(’ token
56 | mkdir(env_path.c_str(),ACCESSPERMS); // create folder with env label to store generated trajectories
| ^
In file included from /home/huyu/github/MPNet/data_generation/rrtstar/src/rrts_main.cpp:13:
/home/huyu/github/MPNet/data_generation/rrtstar/src/rrts.hpp: In member function ‘int RRTstar::Planner<State, Trajectory, System>::iteration(double (&)[2], double, double) [with State = SingleIntegrator::State; Trajectory = SingleIntegrator::Trajectory; System = SingleIntegrator::System]’:
/home/huyu/github/MPNet/data_generation/rrtstar/src/rrts.hpp:506:11: warning: control reaches end of non-void function [-Wreturn-type]
506 | State stateRandom;
| ^~~~~~~~~~~
make[4]: *** [src/CMakeFiles/rrtstar.dir/build.make:76:src/CMakeFiles/rrtstar.dir/rrts_main.cpp.o] Error 1
make[3]: *** [CMakeFiles/Makefile2:125:src/CMakeFiles/rrtstar.dir/all] Error 2
make[2]: *** [Makefile:136:all] Error 2
make[1]: *** [Makefile:25:all] Error 2
make: *** [Makefile:15:all] Error 2
Not sure what the difference is between some files in the provided S2D library.
I am struggling to figure out the difference between the obstacle point cloud data files in dataset/obs_cloud/ and the obs.dat and obs_perm2.dat files in the dataset/ directory.
For example, in visualizer.py, point cloud data from the obs_cloud/ directory gives a point cloud representation of seven obstacle boxes. However, it looks like neuralplanner.py (via dataloader.py) uses obstacles from obs.dat and obs_perm2.dat, both within the datasets/ directory.
If anybody could explain the difference it would be much appreciated! Thanks!
Hi there,
I’m doing some work based on your project. Thanks for sharing your great work. I really need the data for the complex 2D experiment, because I do not know the specific details of how to generate it. Could you put it on Google Drive as well, if possible?
Any help or response will be highly appreciated. Thanks in advance!
Hi, Thanks for your code.
However, our trajectory generation does not seem to work well. We cannot see a complete trajectory in the viewer.
The viewer shows the trajectory generation procedure as follows,
And the recorded result is in the file below.
2018-11-22-viewer.03.ppms.gz
Here is our code to generate trajectories.
Thanks for your help.
Hey,
Don't know if this is an issue, but I'm looking into your paper and trying to reproduce your results. However, I'm stuck on the encoder network. I get a mean squared error of roughly 2. I plot the reconstructed data, but I don't think it looks good enough. Did you also see roughly the same loss, or did your reconstruction match the input data perfectly?
You're doing really cool work, keep it up! 👍
Hi, Thanks for your code.
Can you explain in more detail how to run main_viewer and rrts_main?
After successfully building data_generation, I got viewer in data_generation/viewer/build/bin/ and rrtstar in data_generation/build/bin/. However, when I try to run them, the terminal gives me
./viewer: error while loading shared libraries: libbot2-vis.so.1: cannot open shared object file: No such file or directory
./rrtstar: error while loading shared libraries: libbot2-core.so.1: cannot open shared object file: No such file or directory
Could you help me with it? Is there any parameter I should pass to the script?
Thanks for your help.
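For what it's worth, "cannot open shared object file" at launch usually means the runtime linker cannot find the libbot2 libraries. A sketch assuming they were installed under /usr/local/lib (adjust the path to your actual install prefix):

```shell
# Quick fix for the current shell session:
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
# Persistent alternative: register the directory with the dynamic linker cache.
# echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/libbot2.conf
# sudo ldconfig
```

With the path exported (or the cache updated), `./viewer` and `./rrtstar` should load libbot2-vis.so.1 and libbot2-core.so.1.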
Hi! Thank you for your generous sharing and I am sure I will learn a lot from it!
I have some confusion and hope to get your answer:
(1) I am curious about how to efficiently build a random environment and collect PCL data.
(2) Is all of this work based on Gazebo?
Or maybe I'm missing something in reading your article and code.
Looking forward to your reply, thanks!
The size of the 2D environment you provided seems to be 40x40 (probably x: -20 to +20, y: -20 to +20).
However, when I run the code, the neural network's output (x or y) goes above 20 or below -20. Is there any restriction, such as upper or lower bounds, on the network's output?
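For reference, if the released planner does not itself bound raw network outputs, one workaround is to project predicted waypoints back into the workspace. A minimal sketch, assuming the [-20, 20] bounds described above (taken from the question, not from the code):

```python
import numpy as np

# Assumed workspace half-width for the 40x40 2D environments.
BOUND = 20.0

def clamp_waypoint(xy):
    """Project a predicted (x, y) state back into the workspace box."""
    return np.clip(np.asarray(xy, dtype=float), -BOUND, BOUND)

# An out-of-bounds x coordinate is pulled back to the +20 boundary.
print(clamp_waypoint([23.7, -4.2]))
```

Whether clamping is appropriate (versus resampling or rejecting the waypoint) depends on how the planner's collision checking treats boundary states.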
Can MPNet be trained for a 6-DOF or 7-DOF robot?
Hi Thank you for sharing your work,
I am trying to reproduce the results. I downloaded the 2D sample data and ran the following commands:
Assuming paths to the obstacle point clouds are declared, train the obstacle encoder: python MPNET/AE/CAE.py
Assuming paths to the demonstration dataset and obstacle encoder are declared, run the MPNet trainer: python MPNET/train.py
It trained correctly, but when I run the neural planner with:
python MPNET/neuralplanner.py
it shows the above-stated output. I don't know whether it is correct or not. How can I get the computed path so I can visualize it with visualizer.py?
Thank you
Hi,
I have trained mpnet_trainer by running python MPNET/train.py, and am pointing neuralplanner.py to my trained models.
Is neuralplanner.py supposed to be able to generate individual paths for a given environment? If so, it is not clear to me how to pass neuralplanner.py an environment to generate a path for. If there is a way to do this, please let me know!
Thanks
Spencer
I saw in the paper that supplementary material with implementation parameters is available at http://sites.google.com/view/mpnet/home. I went through the website but did not find the training parameters. Can I get some help with that?