online-3d-bpp-pct's Introduction

Introduction

We are committed to continuously promoting the development of 3D packing technology.

The following are the features we have developed:

  • Online packing solver [1, 2, 3].
  • Online packing with lookahead [1].
  • Packing stability solution [2].
  • Packing in continuous domain [2].
  • Custom-constrained packing [2].
  • Online packing with buffer [3].
  • Irregular shape packing [3].
  • Packing with physical constraints [3].
  • Basic tools for rendering, packing shape processing, and simulation scenarios [4].

If you are interested in 3D packing, I strongly recommend you take a look. All kinds of questions and potential collaboration are welcome!

Learning Efficient Online 3D Bin Packing on Packing Configuration Trees

We propose to enhance the practical applicability of the online 3D Bin Packing Problem (BPP) via learning on a hierarchical packing configuration tree, which makes it easy for the deep reinforcement learning (DRL) model to handle practical constraints and to perform well even with a continuous solution space. Compared to our previous work, the advantages of this repo are:

  • Container (bin) size and item sizes can be set arbitrarily.
  • Continuous online 3D-BPP is allowed and a continuous environment is provided.
  • Algorithms to approximate stability are provided (see our other work).
  • Better performance and the ability to account for more complex constraints.
  • More adequate heuristic baselines for domain development.
  • More stable training.

See these links for a video demonstration: YouTube, bilibili.

If you are interested, please star this repo!

PCT

Paper

For more details, please see our paper Learning Efficient Online 3D Bin Packing on Packing Configuration Trees which has been accepted at ICLR 2022. If this code is useful for your work, please cite our paper:

@inproceedings{zhao2022learning,
  title={Learning Efficient Online 3D Bin Packing on Packing Configuration Trees},
  author={Hang Zhao and Yang Yu and Kai Xu},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=bfuGjlCwAq}
}

Dependencies

  • NumPy
  • gym
  • Python>=3.7
  • PyTorch >=1.7
  • My suggestion: Python 3.7, gym 0.13.0, torch 1.10, OS: Ubuntu 16.04

Quick start

For training online 3D-BPP on setting 2 (mentioned in our paper) with our PCT method and the default arguments:

python main.py 

The training data is generated on the fly. The training logs (tensorboard) are saved in './logs/runs'. Related file backups are saved in './logs/experiment'.
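
To monitor training, pointing a standard TensorBoard instance at that log directory should work (assuming TensorBoard is installed in your environment):

tensorboard --logdir ./logs/runs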

Usage

Data description

Describe your 3D container size and 3D item sizes in 'givenData.py':

container_size: A vector of length 3 describing the size of the container in the x, y, z dimensions.
item_size_set: A list recording the size of each item; each item size is also described by a vector of length 3.
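
For reference, here is a minimal sketch of what 'givenData.py' could contain for a discrete 10 x 10 x 10 bin with item edges between 1 and 5 (the exact variable layout in the repo may differ slightly):

container_size = (10, 10, 10)  # container extent along the x, y, z dimensions

# Enumerate every discrete item size whose edges range from 1 to 5.
item_size_set = []
for x in range(1, 6):
    for y in range(1, 6):
        for z in range(1, 6):
            item_size_set.append((x, y, z))

# For the continuous environment, item sizes are sampled from a distribution
# instead (see the --sample-from-distribution flag), so the discrete
# enumeration above does not apply there.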

Dataset

You can download the prepared dataset from here. The dataset consists of 3000 randomly generated trajectories, each with 150 items. Each item is a vector of length 3 or 4: the first three numbers give the item's size, and the fourth (if present) gives its density.
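
As a quick sanity check, the downloaded .pt file can presumably be inspected with torch.load; a sketch under that assumption (the exact nesting of the saved object may differ):

import torch

# Load the prepared trajectories; the file is assumed to be a pickled Python object.
trajectories = torch.load('setting123_discrete.pt')
print(len(trajectories))     # expected: 3000 trajectories
print(len(trajectories[0]))  # expected: 150 items in the first trajectory
print(trajectories[0][0])    # one item: a length-3 or length-4 vector (size, plus optional density)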

Model

We provide pretrained models trained using the EMS scheme in a discrete environment, where the bin size is (10,10,10) and item sizes range from 1 to 5.

Training

To train online 3D-BPP on setting 1 with 80 internal nodes and 50 leaf nodes:

python main.py --setting 1 --internal-node-holder 80 --leaf-node-holder 50

If you want to train a model that works in the continuous domain, add '--continuous', and don't forget to change your problem setup in 'givenData.py':

python main.py --continuous --sample-from-distribution --setting 1 --internal-node-holder 80 --leaf-node-holder 50

Warm start

You can initialize a run using a pre-trained model:

python main.py --load-model --model-path path/to/your/model

Evaluation

To evaluate a model, run evaluation.py with the --evaluate flag:

python evaluation.py --evaluate --load-model --model-path path/to/your/model --load-dataset --dataset-path path/to/your/dataset

Heuristic

Run heuristic.py to test the heuristic baselines; the source of each heuristic algorithm is noted in the code.

Running the heuristic on setting 1 (discrete) with the LSAH method:

python heuristic.py --setting 1 --heuristic LSAH --load-dataset  --dataset-path setting123_discrete.pt

Running the heuristic on setting 2 (continuous) with the OnlineBPH method:

python heuristic.py --continuous --setting 2 --heuristic OnlineBPH --load-dataset  --dataset-path setting2_continuous.pt

Help

python main.py -h
python evaluation.py -h
python heuristic.py -h

License

This source code is released only for academic use. Please do not use it for commercial purposes without the authorization of the author.

online-3d-bpp-pct's People

Contributors

alexfrom0815


online-3d-bpp-pct's Issues

about performance in the continuous environment

Firstly, thank you so much for sharing the code. This is really excellent work!
I trained a model using the EMS scheme in a continuous environment and then tested it on the 'setting13_continuous.pt' dataset that you provide, but I struggle to achieve the same performance as in the paper. In the paper, PCT & EMS on setting 3 achieves 66.6% utilization, but I can only reach 62.6%.

Here is my training command:
python main.py --continuous --setting 3 --internal-node-holder 80 --leaf-node-holder 50 --sample-from-distribution --sample-left-bound 0.1 --sample-right-bound 0.5

Training ran for 20K iterations, and the hyperparameters in tools.py were left exactly at their defaults. (I also found that setting 'shuffle' to False during training seems to give better performance.)
Could you please give me some advice on improving the result? Thank you so much!

How to modify "internal nodes" to update the stacking space?

Thank you for sharing, this is incredible work!
I've read your paper and am interested in the real-world experiments. I found that the size of the new box can be defined by the class bincreator, but I'd love to know how you "correct the descriptor of the corresponding internal node b ∈ B with the offset position". Could you please point to the specific location in the code?
Finally, I would also like to know what your LICENSE is and whether I can modify the code for further study and research.
Thank you in any case, and best wishes!

ValueError: cannot find context for 'fork'

Dear author:
Thanks for your sharing! I am really interested in your work! I'm an undergraduate student and new to reinforcement learning.
When I type "python main.py --no-cuda ", I get that error quickly.

Traceback (most recent call last):
File "main.py", line 61, in
main(args)
File "main.py", line 43, in main
envs = make_vec_envs(args, './logs/runinfo', True)
File "G:\graduate design\code\Online-3D-BPP-PCT\envs.py", line 112, in make_vec_envs
envs = ShmemVecEnv(envs, spaces, context='fork')
File "G:\graduate design\code\Online-3D-BPP-PCT\wrapper\shmem_vec_env.py", line 30, in init
ctx = mp.get_context(context)
File "E:\anaconda\envs\Online3D\lib\multiprocessing\context.py", line 238, in get_context
return super().get_context(method)
File "E:\anaconda\envs\Online3D\lib\multiprocessing\context.py", line 192, in get_context
raise ValueError('cannot find context for %r' % method) from None
ValueError: cannot find context for 'fork'

Thanks in advance.

Regards

Running on cpu

Dear author:
Thanks for your sharing! I am really interested in your work! I'm a graduate student and new to reinforcement learning. How can I modify the code to train the model on the CPU instead of the GPU?

AssertionError: You must specify a action space

Dear authors:

I am trying to reproduce the experiments reported in your ICLR paper, but I get the error "You must specify a action space"; the full error output is below. Would you mind providing some guidance on this?

/home/liwj/gitcode/Online-3D-BPP-PCT/attention_model.py:9: DeprecationWarning: __class__ not set defining 'AttentionModelFixed' as <class 'attention_model.AttentionModelFixed'>. Was __classcell__ propagated to type.__new__?
class AttentionModelFixed(NamedTuple):
Please input the experiment name
pct
[(1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 1, 4), (1, 1, 5), (1, 2, 1), (1, 2, 2), (1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 1), (1, 3, 2), (1, 3, 3), (1, 3, 4), (1, 3, 5), (1, 4, 1), (1, 4, 2), (1, 4, 3), (1, 4, 4), (1, 4, 5), (1, 5, 1), (1, 5, 2), (1, 5, 3), (1, 5, 4), (1, 5, 5), (2, 1, 1), (2, 1, 2), (2, 1, 3), (2, 1, 4), (2, 1, 5), (2, 2, 1), (2, 2, 2), (2, 2, 3), (2, 2, 4), (2, 2, 5), (2, 3, 1), (2, 3, 2), (2, 3, 3), (2, 3, 4), (2, 3, 5), (2, 4, 1), (2, 4, 2), (2, 4, 3), (2, 4, 4), (2, 4, 5), (2, 5, 1), (2, 5, 2), (2, 5, 3), (2, 5, 4), (2, 5, 5), (3, 1, 1), (3, 1, 2), (3, 1, 3), (3, 1, 4), (3, 1, 5), (3, 2, 1), (3, 2, 2), (3, 2, 3), (3, 2, 4), (3, 2, 5), (3, 3, 1), (3, 3, 2), (3, 3, 3), (3, 3, 4), (3, 3, 5), (3, 4, 1), (3, 4, 2), (3, 4, 3), (3, 4, 4), (3, 4, 5), (3, 5, 1), (3, 5, 2), (3, 5, 3), (3, 5, 4), (3, 5, 5), (4, 1, 1), (4, 1, 2), (4, 1, 3), (4, 1, 4), (4, 1, 5), (4, 2, 1), (4, 2, 2), (4, 2, 3), (4, 2, 4), (4, 2, 5), (4, 3, 1), (4, 3, 2), (4, 3, 3), (4, 3, 4), (4, 3, 5), (4, 4, 1), (4, 4, 2), (4, 4, 3), (4, 4, 4), (4, 4, 5), (4, 5, 1), (4, 5, 2), (4, 5, 3), (4, 5, 4), (4, 5, 5), (5, 1, 1), (5, 1, 2), (5, 1, 3), (5, 1, 4), (5, 1, 5), (5, 2, 1), (5, 2, 2), (5, 2, 3), (5, 2, 4), (5, 2, 5), (5, 3, 1), (5, 3, 2), (5, 3, 3), (5, 3, 4), (5, 3, 5), (5, 4, 1), (5, 4, 2), (5, 4, 3), (5, 4, 4), (5, 4, 5), (5, 5, 1), (5, 5, 2), (5, 5, 3), (5, 5, 4), (5, 5, 5)]
Traceback (most recent call last):
File "main.py", line 61, in
main(args)
File "main.py", line 43, in main
envs = make_vec_envs(args, './logs/runinfo', True)
File "/home/liwj/gitcode/Online-3D-BPP-PCT/envs.py", line 104, in make_vec_envs
sample_right_bound=args.sample_right_bound
File "/home/liwj/.conda/envs/pytorch-wenjie/lib/python3.7/site-packages/gym/envs/registration.py", line 601, in make
env = PassiveEnvChecker(env)
File "/home/liwj/.conda/envs/pytorch-wenjie/lib/python3.7/site-packages/gym/wrappers/env_checker.py", line 24, in init
), "You must specify a action space. https://www.gymlibrary.ml/content/environment_creation/"
AssertionError: You must specify a action space. https://www.gymlibrary.ml/content/environment_creation/

AttributeError: 'PackingDiscrete' object has no attribute 'action_space'

Hi,

When I type "python main.py ", I get that error quickly.

Full error trace:

Traceback (most recent call last):
File "main.py", line 55, in
main(args)
File "main.py", line 40, in main
envs = make_vec_envs(args, './logs/runinfo', True)
File "G:\graduate design\code\Online-3D-BPP-PCT-main\envs.py", line 108, in make_vec_envs
spaces = [env.observation_space, env.action_space]
File "E:\anaconda\envs\Online3D\lib\site-packages\gym\core.py", line 229, in getattr
return getattr(self.env, name)
AttributeError: 'PackingDiscrete' object has no attribute 'action_space'

Thanks in advance.

Regards

About env PctDiscrete0

Using the 'drop_box_virtual' function to check whether a position is feasible may still allow dangling placements.

Running the pretrained models had no effect

When I downloaded the pre-trained model and dataset and evaluated them using the command below, the average ratio was only 0.01:

python evaluation.py --evaluate --load-model --model-path models/setting2_discrete.pt --load-dataset --dataset-path datasets/setting123_discrete.pt
Evaluation using 100 episodes
Mean ratio 0.01000, mean length0.01000

How to get the 3D visualization?

Hi, I am very interested in your work. I want to visualize your results like those done in the C.5 section (VISUALIZED RESULTS) of your paper. Is there any off-the-shelf code that I can use? Thanks!

What are the main factors that affect the training effect?

Hi, I would like to ask: what are the main factors that affect the training result? Is it the case that the more leaf nodes and internal nodes there are, the better the training result?
Even with preview, the model I trained didn't work very well.

Thanks~!

Online learning or Offline learning

Hello, I have a question about whether this project uses online or offline learning. As far as I understand, online learning usually feeds in one piece of data at a time (not a whole batch) and updates the weights immediately, while offline learning, similar to batch learning, updates the weights after training on a batch. Can we feed in one data point at a time when training this project?

Thank you in advance for your reply, and best wishes to you!

RuntimeError: __class__ not set defining

Dear author:
Thanks for your sharing! When I ran main.py, I got an error:

Traceback (most recent call last):
File "E:/User002/Online-3D-BPP-PCT-main/main.py", line 6, in
from model import *
File "E:\User002\Online-3D-BPP-PCT-main\model.py", line 4, in
from attention_model import AttentionModel
File "E:\User002\Online-3D-BPP-PCT-main\attention_model.py", line 9, in
class AttentionModelFixed(NamedTuple):
RuntimeError: __class__ not set defining 'AttentionModelFixed' as <class 'attention_model.AttentionModelFixed'>. Was __classcell__ propagated to type.__new__?

Process finished with exit code 1

Python 3.8, PyTorch 1.9.0

The network structure

Thank you very much for your selfless sharing.
I noticed that you did not show the details of the network structure in the paper, but you showed your deep reinforcement learning network structure based on the ACKTR algorithm in the Online-3D-BPP-DRL (https://github.com/alexfrom0815/Online-3D-BPP-DRL) paper. Is the deep reinforcement learning network structure the same in this PCT paper? Or can you suggest a quick way to see the structure of the PCT network?
Thanks in advance for your reply. Best wishes for you!

AttributeError: 'PackingDiscrete' object has no attribute 'action_space'

Dear Sir

I ran Quick start, typing in "python main.py", but I got the error:

/home/steven/Online-3D-BPP-PCT-main/attention_model.py:9: DeprecationWarning: __class__ not set defining 'AttentionModelFixed' as <class 'attention_model.AttentionModelFixed'>. Was __classcell__ propagated to type.__new__?
class AttentionModelFixed(NamedTuple):
/home/steven/Online-3D-BPP-PCT-main/wrapper/shmem_vec_env.py:17: DeprecationWarning: np.bool is a deprecated alias for the builtin bool. To silence this warning, use bool by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.bool_ here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
np.bool: ctypes.c_bool}
Please input the experiment name
demo
[(1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 1, 4), (1, 1, 5), (1, 2, 1), (1, 2, 2), (1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 1), (1, 3, 2), (1, 3, 3), (1, 3, 4), (1, 3, 5), (1, 4, 1), (1, 4, 2), (1, 4, 3), (1, 4, 4), (1, 4, 5), (1, 5, 1), (1, 5, 2), (1, 5, 3), (1, 5, 4), (1, 5, 5), (2, 1, 1), (2, 1, 2), (2, 1, 3), (2, 1, 4), (2, 1, 5), (2, 2, 1), (2, 2, 2), (2, 2, 3), (2, 2, 4), (2, 2, 5), (2, 3, 1), (2, 3, 2), (2, 3, 3), (2, 3, 4), (2, 3, 5), (2, 4, 1), (2, 4, 2), (2, 4, 3), (2, 4, 4), (2, 4, 5), (2, 5, 1), (2, 5, 2), (2, 5, 3), (2, 5, 4), (2, 5, 5), (3, 1, 1), (3, 1, 2), (3, 1, 3), (3, 1, 4), (3, 1, 5), (3, 2, 1), (3, 2, 2), (3, 2, 3), (3, 2, 4), (3, 2, 5), (3, 3, 1), (3, 3, 2), (3, 3, 3), (3, 3, 4), (3, 3, 5), (3, 4, 1), (3, 4, 2), (3, 4, 3), (3, 4, 4), (3, 4, 5), (3, 5, 1), (3, 5, 2), (3, 5, 3), (3, 5, 4), (3, 5, 5), (4, 1, 1), (4, 1, 2), (4, 1, 3), (4, 1, 4), (4, 1, 5), (4, 2, 1), (4, 2, 2), (4, 2, 3), (4, 2, 4), (4, 2, 5), (4, 3, 1), (4, 3, 2), (4, 3, 3), (4, 3, 4), (4, 3, 5), (4, 4, 1), (4, 4, 2), (4, 4, 3), (4, 4, 4), (4, 4, 5), (4, 5, 1), (4, 5, 2), (4, 5, 3), (4, 5, 4), (4, 5, 5), (5, 1, 1), (5, 1, 2), (5, 1, 3), (5, 1, 4), (5, 1, 5), (5, 2, 1), (5, 2, 2), (5, 2, 3), (5, 2, 4), (5, 2, 5), (5, 3, 1), (5, 3, 2), (5, 3, 3), (5, 3, 4), (5, 3, 5), (5, 4, 1), (5, 4, 2), (5, 4, 3), (5, 4, 4), (5, 4, 5), (5, 5, 1), (5, 5, 2), (5, 5, 3), (5, 5, 4), (5, 5, 5)]
Traceback (most recent call last):
File "main.py", line 61, in
main(args)
File "main.py", line 43, in main
envs = make_vec_envs(args, './logs/runinfo', True)
File "/home/steven/Online-3D-BPP-PCT-main/envs.py", line 107, in make_vec_envs
spaces = [env.observation_space, env.action_space]
File "/home/steven/anaconda3/envs/bpp/lib/python3.7/site-packages/gym/core.py", line 229, in getattr
return getattr(self.env, name)
AttributeError: 'PackingDiscrete' object has no attribute 'action_space'

My Envs: Python == 3.7.13, pytorch == 1.10.1, gym == 0.23.1, Ubuntu 18.04

Thanks!

An error occurs if x and y in the container size are too large

If I set the container size to [120, 100, 180], it reports the following error.
('now step' is a step-count log message that I added.)

now step = 1
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
/home/jintao.chen/anaconda3/envs/python3.7_work/lib/python3.7/site-packages/torch/nn/modules/module.py:1117: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/kfac.py:195: UserWarning: torch.symeig is deprecated in favor of torch.linalg.eigh and will be removed in a future PyTorch release.
The default behavior has changed from using the upper triangular portion of the matrix by default to using the lower triangular portion.
L, _ = torch.symeig(A, upper=upper)
should be replaced with
L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L')
and
L, V = torch.symeig(A, eigenvectors=True)
should be replaced with
L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L') (Triggered internally at ../aten/src/ATen/native/BatchLinearAlgebra.cpp:2794.)
self.m_gg[m], eigenvectors=True)
now step = 2
now step = 3
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
now step = 4
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
now step = 5
now step = 6
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
now step = 7
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
now step = 8
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
now step = 9
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
now step = 10
Time version: test-2023.12.05-11-21-13 is training
Updates 10, num timesteps 3520, FPS 589
Last 6 training episodes: mean/median reward 0.4/0.4, min/max reward 0.2/0.6
The dist entropy 2.81792, the value loss 0.05034, the action loss 0.02634
The mean space ratio is 0.039128163580246914, the ratio threshold is0.060708796296296295

now step = 11
now step = 12
now step = 13
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
now step = 14
now step = 15
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
now step = 16
/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py:65: RuntimeWarning: invalid value encountered in true_divide
new_stack_centre /= new_stack_mass
Process ForkProcess-32:
Traceback (most recent call last):
File "/home/jintao.chen/anaconda3/envs/python3.7_work/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/home/jintao.chen/anaconda3/envs/python3.7_work/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/wrapper/shmem_vec_env.py", line 140, in _subproc_worker
obs, reward, done, info = env.step(data)
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/wrapper/monitor.py", line 54, in step
ob, rew, done, info = self.env.step(action)
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/bin3D.py", line 159, in step
succeeded = self.space.drop_box(next_box, idx, rotation_flag, self.next_den, self.setting)
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/pct_envs/PctDiscrete0/space.py", line 386, in drop_box
[lx, ly, max_h, lx + x, ly + y, max_h + z, density, 0, 1])
IndexError: index 80 is out of bounds for axis 0 with size 80
now step = 17
(the identical traceback is also printed for ForkProcess-62, -27, -9, -12, -22, -4, -16, -59, -37, and -44)
Traceback (most recent call last):
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/main.py", line 61, in
main(args)
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/main.py", line 56, in main
trainTool.train_n_steps(envs, args, device)
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/train_tools.py", line 69, in train_n_steps
obs, reward, done, infos = envs.step(selected_leaf_node.cpu().numpy())
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/wrapper/vec_env.py", line 108, in step
return self.step_wait()
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/envs.py", line 179, in step_wait
obs, reward, done, info = self.venv.step_wait()
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/wrapper/shmem_vec_env.py", line 77, in step_wait
outs = [pipe.recv() for pipe in self.parent_pipes]
File "/home/jintao.chen/agv/Online-3D-BPP-PCT_water/wrapper/shmem_vec_env.py", line 77, in
outs = [pipe.recv() for pipe in self.parent_pipes]
File "/home/jintao.chen/anaconda3/envs/python3.7_work/lib/python3.7/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/home/jintao.chen/anaconda3/envs/python3.7_work/lib/python3.7/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/home/jintao.chen/anaconda3/envs/python3.7_work/lib/python3.7/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError

Process finished with exit code 1

AssertionError

Dear author:
Thanks for your sharing! I am really interested in your work!
When I type "python main.py --no-cuda ", I get that error after hours of training.

Time version: 3-2022.03.19-12-45-46 is training
Updates 15320, num timesteps 4902720, FPS 225
Last 10 training episodes: mean/median reward 7.9/8.0, min/max reward 7.1/8.4
The dist entropy 0.68102, the value loss 0.59049, the action loss 0.06197
The mean space ratio is 0.7919, the ratio threshold is0.946

Traceback (most recent call last):
File "main.py", line 61, in
main(args)
File "main.py", line 56, in main
trainTool.train_n_steps(envs, args, device)
File "/home/Online-3D-BPP-PCT/train_tools.py", line 66, in train_n_steps
selectedlogProb, selectedIdx, dist_entropy, _ = self.PCT_policy(all_nodes, normFactor = factor)
File "/home/.conda/envs/Online3D/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/Online-3D-BPP-PCT/model.py", line 22, in forward
o, p, dist_entropy, hidden, _= self.actor(items, deterministic, normFactor = normFactor, evaluate = evaluate)
File "/home/.conda/envs/Online3D/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/Online-3D-BPP-PCT/attention_model.py", line 129, in forward
valid_length = valid_length)
File "/home/Online-3D-BPP-PCT/attention_model.py", line 136, in _inner
log_p, mask = self._get_log_p(fixed, mask)
File "/home/Online-3D-BPP-PCT/attention_model.py", line 198, in _get_log_p
assert not torch.isnan(log_p).any()
AssertionError

Python == 3.7.7, torch == 1.10.1, OS: Ubuntu 18.04

How to add limitations that can affect the model output?

I've tried making the modification in the get_reward function of bin3D.py and found that it doesn't affect the results given by the model. So, if I want to give the model some constraints or preferences, should I modify the forward function in DRL_GAT?

    def forward(self, items, deterministic = False, normFactor = 1, evaluate = False):
        # action_log_prob, pointers, dist_entropy, hidden, dist
        o, p, dist_entropy, hidden, _= self.actor(items, deterministic, normFactor = normFactor, evaluate = evaluate)
        values = self.critic(hidden)
        return o, p, dist_entropy,values

But I can't think of a reasonable way to pass the environment's existing state information into forward. Any suggestions?
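
One generic idea I am considering (just a sketch of the concept, not the repo's actual API; the placement format and constraints below are illustrative) is to filter candidate placements before they are handed to the policy, so that disallowed placements never appear as leaf nodes:

def filter_placements(placements, max_height=None, keep_out_zones=()):
    """Drop candidate placements that violate custom constraints.

    placements: iterable of (x, y, z, sx, sy, sz) tuples giving the position and
    size of a candidate box placement (an illustrative format, not the repo's).
    max_height: optional cap on the top face of any placed box.
    keep_out_zones: optional (x_min, y_min, x_max, y_max) rectangles that
    placements must not overlap in the floor plane.
    """
    kept = []
    for (x, y, z, sx, sy, sz) in placements:
        if max_height is not None and z + sz > max_height:
            continue  # too tall: never offer this placement to the policy
        overlaps = any(
            x < x_max and x + sx > x_min and y < y_max and y + sy > y_min
            for (x_min, y_min, x_max, y_max) in keep_out_zones
        )
        if overlaps:
            continue  # intersects a keep-out zone
        kept.append((x, y, z, sx, sy, sz))
    return kept

Constraining the candidate set this way means the policy never even sees a forbidden placement, which seems more direct than shaping the reward.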

Struggling to achieve the same performance in the discrete and continuous environment

Hi, excellent work on your paper and thank you so much for sharing your code! This is fascinating research! I wanted to explore your work more through the code, but I was struggling to achieve the same performance in the discrete and continuous environments. I think the error is on lines 160 and 164 of PCTContinuous0/bin3D.py: the break statements should be indented. This gives me the same performance between discrete and continuous :)

KeyError: 'PctDiscrete-v0'

Dear authors,
While executing main.py in CPU mode, I get this error: "KeyError: 'PctDiscrete-v0'". I was wondering if you could help me with this. Sincerely,
I'm running the code on Win10 with Python 3.7, gym==0.15.7, torch==1.10.1.
This is the traceback:
Traceback (most recent call last):
File "C:\Users\aketfi\Anaconda3\envs\3Dpacking\lib\site-packages\gym\envs\registration.py", line 132, in spec
return self.env_specs[id]
KeyError: 'PctDiscrete-v0'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\aketfi\Anaconda3\envs\3Dpacking\lib\multiprocessing\process.py", line 297, in _bootstrap
self.run()
File "C:\Users\aketfi\Anaconda3\envs\3Dpacking\lib\multiprocessing\process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\aketfi\Online-3D-BPP-PCT\wrapper\shmem_vec_env.py", line 131, in _subproc_worker
env = env_fn_wrapper.x()
File "C:\Users\aketfi\Online-3D-BPP-PCT\envs.py", line 46, in _thunk
sample_right_bound = args.sample_right_bound
File "C:\Users\aketfi\Anaconda3\envs\3Dpacking\lib\site-packages\gym\envs\registration.py", line 156, in make
return registry.make(id, **kwargs)
File "C:\Users\aketfi\Anaconda3\envs\3Dpacking\lib\site-packages\gym\envs\registration.py", line 100, in make
spec = self.spec(path)
File "C:\Users\aketfi\Anaconda3\envs\3Dpacking\lib\site-packages\gym\envs\registration.py", line 142, in spec
raise error.UnregisteredEnv('No registered env with id: {}'.format(id))

Usage in the real world

Hello ! Thank you very much for this wonderful work!
I'd like to reproduce the experiment with a real robot (C-6 of your paper). I'm having trouble adjusting the code in order to continuously feed the PCT pipeline new boxes.
The random boxes seem to be generated in PackingDiscrete.gen_next_box, but the function is called at the end of PackingDiscrete.step, in PackingDiscrete.cur_observation().

The ideal workflow, as I understand it, would be :

  • Capture a new Box (sx,sy,sz)
  • Call PackingDiscrete.gen_next_box with box dimensions
  • Generate the observation vector and forward the network with it
  • Call env.step() with the selected_leaf_node
  • Get the box target location from the latest box added in env.packed

Do you think that would work correctly? Did you do something similar for your real-world packing video?
If you still have some code of that experiment somewhere, I would be very glad to consult it :)
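
For concreteness, here is a rough sketch of the loop I have in mind (real_robot_packing_loop, capture_new_box, policy, and the env.next_box attribute are assumptions on my side, not names taken from the repo):

def real_robot_packing_loop(env, policy, capture_new_box):
    """Hypothetical glue loop for the workflow listed above.

    env is assumed to behave like PackingDiscrete, policy is the trained PCT
    network, and capture_new_box() returns the measured (sx, sy, sz) of the
    next incoming box; none of these signatures are taken from the repo.
    """
    while True:
        sx, sy, sz = capture_new_box()
        # Assumption: make the environment use the measured box instead of a
        # random one, e.g. by adapting PackingDiscrete.gen_next_box.
        env.next_box = (sx, sy, sz)
        obs = env.cur_observation()               # build the PCT observation for this box
        selected_leaf_node = policy(obs)          # forward pass through the network
        _, reward, done, info = env.step(selected_leaf_node)
        yield env.packed[-1]                      # target pose of the box just placed
        if done:
            break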

Thanks again for sharing your work and for your time!

leaf node generation for real-world experiments

Hello
Great work.
I am curious on the following two aspects of the work in real-world use-case.

  1. Validity of the tree and leaf nodes after placing an item:
    In real-world scenarios, the actual pose where the robot placed the object and the pose given by the policy might differ a little because of manipulation errors and noise. Additionally, there is also a chance that the object currently being placed might move or disturb previously placed objects. In these scenarios, some of the existing nodes in the tree might not be valid (as corner points might have moved by a small amount), and so the whole tree structure needs to be regenerated to represent the current state of the packing bin accurately. How do you address this? As far as I understood from the paper (correct me if I am wrong), only the leaf nodes are removed if they are not valid, but what about the internal and leaf nodes whose values are slightly off?

  2. Orientation representation in state:
    While considering a new incoming item (Sx, Sy, Sz), if we want to consider placements with more than one orientation for the item, how are these orientation values encoded into the state representation?

Is it possible to add "preview" (like bpp-k) to the code?

Dear author,
Thanks for your sharing!
I found that the "BPP-K" function from your paper "Online 3D Bin Packing with Constrained Deep Reinforcement Learning" is not included in this code. Is it possible to add the preview function to the code?
Thanks for your sharing again!
