weiyx16 / active-perception
Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment
Home Page: https://weiyx16.github.io/RobotGrasping
License: MIT License
Hello, you have done really awesome work. I am wondering where I can find the complete version of the paper. Is there a full manuscript that could be provided? Many thanks.
Hi, I ran the command:
python main.py --is_train=True
It fails with "No module named 'dqn'", even though the dqn folder exists. The error seems to come from
from dqn.agent import Agent
Could you please help me?
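One common cause (an assumption, since I cannot see your setup) is launching main.py from outside the DQN directory, so Python's module search path never includes the folder containing the dqn/ package. A minimal sketch of a workaround, placed at the top of main.py before the dqn import:

```python
import os
import sys

# A sketch, assuming main.py sits in the DQN directory next to the dqn/
# package (as the tracebacks in this thread suggest). Prepending that
# directory to sys.path lets `from dqn.agent import Agent` resolve even
# when the script is launched from another working directory.
repo_root = os.path.dirname(os.path.abspath(sys.argv[0]))
sys.path.insert(0, repo_root)
```

If that doesn't help, also check that the dqn/ folder contains an __init__.py, since without it Python 3 may not treat it as an importable package in all layouts.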
Hello, I have recently been running your code on Ubuntu 16.04, ROS Kinetic, and V-REP 3.6.1. When I start the V-REP scene simulation and run the command "python main.py --is_train=True" to begin training, the following error occurs. It seems the affordance map cannot be created in the earlier step "os.system(affordance_cmd)". Would you please help me with this issue? Thanks a lot! The log is as follows:
(active-perception) ys@huang:~/Active-Perception/DQN$ python main.py --is_train=True
/home/ys/anaconda3/envs/active-perception/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/home/ys/anaconda3/envs/active-perception/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
--- Successfully load vrep ---
[*] Use GPU Fraction is : 0.6000
2019-09-25 20:43:31.029367: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
[*] Current Configuration
{'Lua_PATH': '../affordance_model/infer.lua',
'batch_size': 16,
'ckpt_dir': './dqn/checkpoint',
'cnn_format': 'NHWC',
'discount': 0.6,
'end_metric': 0.85,
'env_name': 'Act_Perc',
'ep_end': 0.2,
'ep_end_t': 10000.0,
'ep_start': 1.0,
'history_length': 4,
'inChannel': 4,
'is_sim': True,
'is_train': True,
'learn_start': 200.0,
'learning_rate': 0.001,
'learning_rate_decay': 0.96,
'learning_rate_decay_step': 200,
'learning_rate_minimum': 0.00025,
'max_reward': 1.0,
'max_step': 20000,
'memory_size': 2500,
'min_reward': -1.0,
'model_dir': './dqn/model',
'scale': 50,
'scene_num': 24,
'screen_height': 128,
'screen_width': 128,
'target_q_update_step': 50,
'test_scene_num': 6,
'test_step': 200,
'train_frequency': 4}
Simulation started
Connection success!
[*] Initialize the simulation environment
[*] Build Deep Q-Network
[*] Build Q-Evaluate Scope
[*] Build Q-Target Scope
[*] Build Weights Transform Scope
[*] Build Optimize Scope
[*] Build Summary Scope
[*] Initial All Variables
[*] Loading checkpoints...
[*] Load SUCCESS: ./dqn/checkpoint/dqn_model_ckpt-13400
[*] Assign Weights from Prediction to Target
[*] Random init the scene 13 with 0 object removed
/home/ys/torch/install/bin/luajit: cannot open <../affordance_model/model.t7> in mode r at /home/ys/torch/pkg/torch/lib/TH/THDiskFile.c:673
stack traceback:
[C]: at 0x7fb4af384440
[C]: in function 'DiskFile'
/home/ys/torch/install/share/lua/5.1/torch/File.lua:405: in function 'load'
../affordance_model/infer.lua:36: in main chunk
[C]: in function 'dofile'
...e/ys/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50
Traceback (most recent call last):
File "main.py", line 88, in <module>
tf.app.run()
File "/home/ys/anaconda3/envs/active-perception/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "main.py", line 72, in main
agent.train()
File "/home/ys/Active-Perception/DQN/dqn/agent_18.py", line 50, in train
screen, reward, action, terminal = self.env.new_scene()
File "/home/ys/Active-Perception/DQN/simulation/environment.py", line 483, in new_scene
self.screen, self.local_afford_past, self.location_2d = self.camera.local_patch(self.index, (self.screen_height, self.screen_width))
File "/home/ys/Active-Perception/DQN/simulation/environment.py", line 211, in local_patch
raise Exception(' [!] !!!!!!!!!!! Error occurred during creating affordance map')
Exception: [!] !!!!!!!!!!! Error occurred during creating affordance map
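The luajit line above suggests the real failure is that infer.lua cannot open the pretrained affordance model at ../affordance_model/model.t7, which the repository likely expects you to download separately (an assumption based on the log alone). A pre-flight check along these lines, run before shelling out to the Lua script, would surface the problem earlier:

```python
import os

def affordance_model_ready(model_path="../affordance_model/model.t7"):
    """Return True when the pretrained Torch .t7 model file is present.

    The Lua inference script fails with a THDiskFile error when this file
    is missing, which then surfaces as the 'Error occurred during creating
    affordance map' exception raised in local_patch.
    """
    return os.path.isfile(model_path)
```

The path and helper name here are illustrative, taken from the log rather than verified against the repository.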
The checkpoint file in DQN/dqn/checkpoint seems to be invalid.
self.load_model() in agent_8.py raises the following error:
[*] Loading checkpoints...
2019-07-23 16:57:57.724805: W tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key prediction/7_lq_1/biases not found in checkpoint
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1278, in _do_call
return fn(*args)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1263, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key prediction/7_lq_1/biases not found in checkpoint
[[Node: save/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
During handling of the above exception, another exception occurred:
etc...
dqn_model_ckpt-13400.data-00000-of-00001 seems to contain the model data. However, my attempt to load it as a checkpoint is also unsuccessful.
[*] Loading checkpoints...
[!] Load FAILED: ./dqn/checkpoint
[*] Created model with fresh parameters.
Hi, I'm testing your environment now.
It ends quickly with 'Simulation ended!' when I run python main.py --is_train=True.
Could you please tell me how to train my own model?
Sorry to bother you again.
-- Maximum Location at [210 227]
-- Metric for current frame: 0.734154
With peak_dis: 0.173634, flatten: 0.820850, max_value: 0.924716
13478it [15:28, 9.80s/it]
-- Push from [152. 198.] to [106.74516600406096, 243.25483399593904]
-- Maximum Location at [180 340]
-- Metric for current frame: 0.769855
With peak_dis: 0.264738, flatten: 0.859865, max_value: 0.852448
13479it [15:38, 9.88s/it]
-- Push from [193. 311.] to [257.0, 311.0]
-- Maximum Location at [175 349]
-- Metric for current frame: 0.880095
With peak_dis: 1.000000, flatten: 0.890577, max_value: 0.785750
[!!] Nearest distance: 0.084278
[*] Update the scene 16 with 3 object removed
-- Maximum Location at [175 349]
-- Metric for current frame: 0.880095
With peak_dis: 1.000000, flatten: 0.890577, max_value: 0.785750
13480it [15:57, 12.57s/it]
-- Push from [146. 398.] to [210.0, 398.0]
-- Maximum Location at [370 335]
Traceback (most recent call last):
File "main.py", line 88, in <module>
tf.app.run()
File "/home/ys/anaconda3/envs/active-perception/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "main.py", line 72, in main
agent.train()
File "/home/ys/Active-Perception/DQN/dqn/agent_18.py", line 66, in train
screen, reward, terminal = self.env.act(action, if_train=True)
File "/home/ys/Active-Perception/DQN/simulation/environment.py", line 517, in act
self.screen, self.local_afford_new, self.location_2d = self.camera.local_patch(self.index, (self.screen_height, self.screen_width))
File "/home/ys/Active-Perception/DQN/simulation/environment.py", line 222, in local_patch
return get_patch(location_2d, self.cur_color, self.cur_depth, post_afford, patch_size)
File "/home/ys/Active-Perception/DQN/util/utils.py", line 44, in get_patch
patch_rgbd[:,:,0:3] = patch_color
ValueError: could not broadcast input array from shape (118,128,3) into shape (128,128,3)
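The shape mismatch (118 rows instead of 128) suggests the crop window around the maximum location ran past the image border, so the returned color patch is smaller than the fixed-size target buffer. A generic zero-padding guard (a sketch, not the repository's actual get_patch) avoids the broadcast error:

```python
import numpy as np

def pad_to_size(patch, size):
    """Zero-pad a cropped patch so it always fills a (size, size, C) buffer.

    When the crop window crosses the image border, the slice comes back
    short (e.g. 118x128x3 instead of 128x128x3); copying it into a zeroed
    buffer of the target shape sidesteps the broadcast ValueError.
    """
    h, w = patch.shape[:2]
    padded = np.zeros((size, size) + patch.shape[2:], dtype=patch.dtype)
    padded[:h, :w] = patch
    return padded
```

An alternative fix is to clamp the patch center so the window stays fully inside the image; padding, however, keeps downstream tensor shapes consistent without shifting the patch location.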
Hi,
I tried to run the code but got this error:
ImportError: libpcl_visualization.so.1.8: cannot open shared object file: No such file or directory
I have tried installing the PCL module, but the problem persists. Could I get some help settling this issue? I would appreciate it.
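That ImportError usually means the dynamic loader cannot find the PCL 1.8 visualization runtime, not that the Python binding itself is missing (an assumption based on the message alone). On Ubuntu 16.04 the typical remedies are installing the distribution's libpcl packages or extending LD_LIBRARY_PATH to the directory containing libpcl_visualization.so.1.8. You can check whether the loader can see the library from Python:

```python
import ctypes.util

# ctypes.util.find_library searches the same loader paths the ImportError
# complained about; None here means the PCL runtime is not discoverable,
# so install PCL 1.8 or extend LD_LIBRARY_PATH before retrying.
lib = ctypes.util.find_library("pcl_visualization")
print("PCL visualization runtime:", lib or "not found")
```

If find_library reports the library but the import still fails, the installed PCL version may not match the 1.8 soname the binding was built against.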