This repository contains our team's project source code (@rajk853, @saif61) for the TUM Advanced Deep Learning for Robotics SS21 course.
In this project, we investigate Supervised Learning (SL) approaches to Neural Motion Planning (NMP). T. Jurgenson and A. Tamar (2019) claim that

> supervised learning approaches are inferior in their accuracy due to insufficient data on the boundary of the obstacles, an issue that RL methods mitigate by actively exploring the domain.

We therefore investigate the Image-to-Image and Image-to-Coordinate approaches to NMP using SL.
- Install Conda
- Clone this repository:

  ```shell
  git clone https://github.com/RajK853/tum-adlr-ss21-11.git ~/adlr
  ```

- Create and activate the conda environment with the following commands:

  ```shell
  cd ~/adlr
  conda env create -f environment_gpu.yaml
  conda activate adlr
  ```
Our interactive plot is available at . Anyone can open it in their browser to interact with one of our trained models available under the `sample_models` directory, which contains one model each for the Image-to-Image and Image-to-Coordinate approaches.

Open it as a notebook. Execute the command below, where `${PATH_TO_DB_FILE}` is the location of the `.db` file on your local machine:

```shell
python demo_plot.py ${PATH_TO_DB_FILE}
```
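The demo expects a `.db` file; assuming it is an SQLite database (an assumption on our part, since the actual loader lives inside `demo_plot.py`), a quick way to inspect what such a file contains is:

```python
import sqlite3

def list_tables(db_file):
    """Return the table names inside an SQLite .db file."""
    with sqlite3.connect(db_file) as con:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()
    return [row[0] for row in rows]
```

If the file turns out not to be SQLite, the first query raises `sqlite3.DatabaseError` ("file is not a database"), which makes this a cheap sanity check before launching the demo.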
- Set the database path environment variable `DB_PATH`:

  ```shell
  export DB_PATH=${PATH_TO_DB_FILE}
  ```
- Create a YAML config file (e.g. `focal.yaml`):

  ```yaml
  Focal:
    epochs: 30
    log_dir: results
    batch_size: 64
    path_row_config:
      train: [0, 3000000, 200]
      validation: [3000000, 4000000, 100]
      test: [4000000, 4100000, 250]
    model_config:
      lr: 0.001
      input_shape: [64, 64, 2]
      num_db: 7              # Total number of Dense blocks
      convs_per_db: 2        # Convolutional blocks per Dense block
      growth_rate: 16        # Growth rate of the DenseNet
      num_channels: 16       # Number of channels in the first Transition block
    loss_config:
      name: focal
      gamma: 1.3
      beta: 0.75
      weight: 0.01
  ```

  Sample configuration files are available here.
- Execute the Python script:

  ```shell
  python train_image2image_model.py focal.yaml
  ```
To train the Image-to-Coordinate model, use `train_image2vector_model.py` instead. Try it in Colab.
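The training scripts pick up the database location from the `DB_PATH` environment variable set above. The triples under `path_row_config` look like `[start, end, step]` row ranges over the database, though that reading is our assumption, not documented behavior. A minimal sketch under that assumption:

```python
import os

# DB_PATH is set earlier with `export DB_PATH=...`.
db_path = os.environ.get("DB_PATH", "")

def selected_rows(start, end, step):
    """Hypothetical reading of a path_row_config triple:
    every `step`-th database row in [start, end)."""
    return range(start, end, step)

# Splits from the sample config above:
train_rows = selected_rows(0, 3_000_000, 200)
validation_rows = selected_rows(3_000_000, 4_000_000, 100)
print(len(train_rows), len(validation_rows))  # 15000 10000
```

Under this reading, the train/validation/test splits use disjoint row ranges, so no sample appears in more than one split.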
From each model training session, the following components are logged in the `results` directory:

- `model.tf`: Trained model in TensorFlow's `.tf` format
- `tb_logs`: TensorBoard log information
- `test_images`: Images with model predictions on the test data set
- `model.png`: PNG image of the model architecture graph