
LearningToNavigate

Preparing conda env

Assuming you have conda installed, let's prepare a conda env:

# We require python>=3.7 and cmake>=3.10
conda create -n habitat python=3.7 cmake=3.14.0
conda activate habitat

conda install habitat-sim -c conda-forge -c aihabitat

  • To install habitat-sim with bullet physics
    conda install habitat-sim withbullet headless -c conda-forge -c aihabitat
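
After installing habitat-sim, a quick sanity check (with the habitat env active) is to import the package:

conda activate habitat
python -c "import habitat_sim; print('habitat-sim imported successfully')"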
    

Clone the repository

git clone https://github.com/sashankmodali/LearningToNavigate.git
cd LearningToNavigate

Install Habitat-lab using the following commands:

git clone https://github.com/sashankmodali/habitat-lab.git
cd habitat-lab
pip install -r requirements.txt
python setup.py develop --all # install habitat and habitat_baselines
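
As a quick check that the install succeeded (assuming the commands above completed without errors), both packages should import:

python -c "import habitat, habitat_baselines; print('habitat-lab installed')"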

Datasets

The dataset can be downloaded from here
Gibson scene datasets can be downloaded from here
Object datasets can be downloaded from here

Folder Structure

The folder structure should be as follows:

LearningToNavigate/
  habitat-lab/
  data/
    scene_datasets/
      gibson/
        Adrian.glb
        Adrian.navmesh
        ...
    datasets/
      pointnav/
        gibson/
          v1/
            train/
            val/
            ...
    object_datasets/
      banana.glb
      ...        
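
A quick way to confirm the data is laid out as expected (a sanity check run from the repository root, assuming the layout above) is:

ls data/scene_datasets/gibson | head
ls data/datasets/pointnav/gibson/v1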

Create a symbolic link to the data folder inside habitat-lab:

cd habitat-lab
ln -s ../data data
cd ..
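
The symlink lets habitat-lab resolve its dataset paths (which are given relative to its own root as data/...) against the shared data folder. To confirm the link resolves:

ls -l habitat-lab/data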

After setting up the environment:

  1. For Milestone 1, run the following:
. milestone1.sh

OR

conda activate habitat

python main.py --print_images 1

  2. To generate training data and train depth1, run the following:
. generate_training_data.sh

python train_depth1.py

OR

conda activate habitat

python main.py --print_images 1 -d ./training_data/ -el 10000 --task generate_train

python train_depth1.py

  3. For Milestone 2, run the following:
. milestone2.sh

OR

conda activate habitat

python nslam.py --split val --eval 1 --train_global 0 --train_local 0 --train_slam 0 --load_global pretrained_models/model_best.global --load_local pretrained_models/model_best.local --load_slam pretrained_models/model_best.slam -n 1 --print_images 1

python generate_video.py

  4. For Final Evaluations, run the following:
. eval_ppo_st.sh

AND

. eval_ppo.sh

AND

. eval_ans.sh

Then, the results can be obtained in /tmp/dump/[experiment]/episodes/1/1/

To generate a video, first update the tmp directory path in generate_video.py, then run:

python generate_video.py

The resulting video can then be found at /tmp/dump/[experiment]/episodes/1/1video.avi
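
To browse the dumped episode frames and generated outputs (paths assume the default dump location above):

ls /tmp/dump/*/episodes/1/1/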

