
Text2Robot

Ryan Ringel, Zachary Charlick, Jiaxun Liu, Boxi Xia, Boyuan Chen
Duke University

Overview

This repo contains the Fusion360 and Python implementation for the paper "Text2Robot." Our pipeline automatically converts a text prompt into a quadrupedal robot. We use a state-of-the-art text-to-mesh generative model to initialize the pipeline and convert the resulting static mesh into a kinetic robot model. We then evolve the robot's control and morphology simultaneously using our evolutionary algorithm.

[Figure: teaser]

Citation

If you find our paper or codebase helpful, please consider citing:

@article{ringel2024text2robot,
      title={Text2Robot: Evolutionary Robot Design from Text Descriptions}, 
      author={Ryan P. Ringel and Zachary S. Charlick and Jiaxun Liu and Boxi Xia and Boyuan Chen},
      year={2024},
      eprint={2406.19963},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2406.19963}, 
}

Content

Project Structure

├── conda_env_py38.yaml                     # Conda environment for isaacgym
├── Assembly Instructions.pdf               # Instructions for print and assembly of bots
├── Evolutionary_Algorithm
│   ├── driver.py                           # Evolutionary algo driver
│   ├── Example_Frog_Experiment             # Example Experiment URDF_Bank and directory
│   ├── experiments                         # Customize command line overrides for your experiment
│   ├── extract_rewards_from_tensorboard_file.py
│   ├── gen_config_file.py
│   ├── greatest_reward.py
│   ├── init_population.py
│   ├── __init__.py
│   ├── output
│   ├── __pycache__
│   ├── swapping.py
│   ├── train_util.py
│   └── util.py
├── Fusion360_Scripts                               
│   ├── Install_Packages                   # Install necessary python packages for Fusion360
│   └── Wrapper                            # Mesh to Robot Model(s) w/ geometric slicing
├── __init__.py
├── legged_env
│   ├── assets
│   ├── envs                               # Navigate here for visualization & sim2real
│   ├── __init__.py
│   ├── __pycache__
│   └── README.md
├── README.md
├── Sim2Real                               # Sim2Real receiver for Raspberry Pi
│   └── receiver.py
└── STL_Files                              # Example Text-to-mesh and modular components
    ├── Electronics_Modules
    └── Example_Meshes

Installation

The installation has been tested on Ubuntu 22.04.4 LTS with CUDA 12.3. The experiments are performed on several different servers with either PNY RTX A6000, NVIDIA GeForce RTX 3090, or NVIDIA A100 PCIe GPUs.

The conda yaml file conda_env_py38.yaml can be found in the base folder of the repo.

# Optional: if you use micromamba instead of conda
alias conda="micromamba"

# Create environment
conda env create --file conda_env_py38.yaml -y

# Activate the environment
conda activate py38

# Export library path
export LD_LIBRARY_PATH=${CONDA_PREFIX}/lib
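
As a quick sanity check that the environment works (a minimal sketch, not part of the repo; note that isaacgym must be imported before torch):

# check_env.py -- hypothetical sanity check, not part of the repo
import isaacgym  # must be imported before torch
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())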

For the Fusion360 geometric slicing installation, see the Mesh2CAD section.

Text2Mesh

We use the Meshy website (https://www.meshy.ai/) to generate STL meshes from text prompts. The STLs used in our experiments are provided in STL_Files/Example_Meshes.

[Figure: Meshy home screen]

Additional meshes can be generated using Meshy, although they are not guaranteed to work with the provided slicing script. Include the keywords "quadrupedal walking robot" in the text prompt for best results.

[Figure: Meshy text prompt]

Mesh2CAD

STL meshes can be converted to a Fusion360 assembly using the provided sliceBody script. Fusion360 installation instructions can be found at https://www.autodesk.com/campaigns/education/fusion-360. First open Fusion360 and preprocess the generated or downloaded mesh: insert the mesh into a Fusion360 document, then use the Convert Mesh operation to convert it into a BRep body. Organic mesh conversion requires the Fusion Design Extension.

[Figure: Insert Mesh and Convert Mesh operations]

To slice the BRep body, add both Python scripts in the Fusion360_Scripts folder to Fusion360. To link an add-in in Fusion360, click the green plus, then navigate to and select the Wrapper and Install_Packages folders.

[Figure: adding the scripts in Fusion360]

Prior to running the Wrapper script, add the "Polyethylene Low Density" material to your favorites: navigate to the Plastics folder of the Fusion material library, right-click on Polyethylene Low Density, and add it to favorites.

[Figure: adding the material to favorites]

Run Install_Packages to install the necessary Python libraries to the same file path as the Wrapper script. Running the Wrapper script then converts the preprocessed BRep body into a robot assembly. If the mesh does not slice properly, adjusting the steps in the slicebyDX function of slicebody changes the location of the shoulder slices.

[Figure: adjusting the slice steps]
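
As a rough illustration of what those steps control (hypothetical names and values only; the actual logic lives in slicebody), the shoulder slices can be thought of as fractional cut positions along the body:

# Hypothetical sketch of how slice steps map to cut locations (not the repo's actual code).
body_x_min, body_x_max = 0.0, 0.30   # bounding box of the BRep body along x, in meters
steps = [0.25, 0.75]                 # fractional positions of the shoulder slices

slice_planes_x = [body_x_min + s * (body_x_max - body_x_min) for s in steps]
print(slice_planes_x)                # roughly [0.075, 0.225]; shifting the steps moves the slices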

CAD2URDF

Uncommenting the URDF exporter function in the Wrapper script will export the generated robot as a URDF. Uncommenting the loop will create 30 variations of the generated robot and export them all as URDFs; this may take up to 15 minutes.

[Figure: URDF exporter]

Evolutionary Loop

To further evolve robots using our evolutionary algorithm, create an experiment directory similar to our provided example, Evolutionary_Algorithm/Example_Frog_Experiment. If you wish to customize the training parameters, create a configuration file for the experiment in the Evolutionary_Algorithm/experiments directory, modeled after our example. The experiment directory should contain a URDF_Bank folder holding the entire gene pool. We recommend using at least 5 prompts for a total of 150 robot models; if you have fewer, you will need to modify Evolutionary_Algorithm/init_population.py to initialize a smaller first generation. You can include as many robots as you would like!

You can run driver.py out of the box to train our example Frog experiment! To use your own models instead, duplicate and rename Example_Frog_Experiment.py under the Evolutionary_Algorithm/experiments folder, and change the robot_names array to include the name of each prompt in the bank. Ensure the naming convention for your robots is correct: for example, with a bank of a single prompt 'Frog' (30 models in total), each model should follow the naming convention Frog_Frog_top-z-9_Frog_bottom-z-9.urdf, with 'z' replaced by one of the three axis orientations [x, y, z] and '9' by one of the limb scales 0-9.
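
For reference, a short script like the following (a sketch that assumes the naming convention above) enumerates the 30 expected filenames for one prompt, which can help verify a URDF_Bank before training:

# Sketch: list the expected URDF filenames for one prompt, assuming the naming
# convention described above (adjust if your bank differs).
prompt = "Frog"
orientations = ["x", "y", "z"]   # slice axis
scales = range(10)               # limb scales 0-9

expected = [
    f"{prompt}_{prompt}_top-{axis}-{s}_{prompt}_bottom-{axis}-{s}.urdf"
    for axis in orientations
    for s in scales
]
print(len(expected))             # 30 models per prompt
print(expected[-1])              # Frog_Frog_top-z-9_Frog_bottom-z-9.urdf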

Lines 20-25 of driver.py can be edited to change the number of generations of evolution, as well as preferences for energy efficiency, velocity tracking accuracy, or performance on rough terrain.

max_generations = 55
inform_based_on_energy = False
inform_based_on_velocity = False
rough_terrain = False

You will also need to adjust the GPU configuration settings on line 67 of driver.py to match your system properties.
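
To see what your system offers before editing that line, you can query the visible CUDA devices (a generic check, not part of the repo):

# Generic GPU inventory check (not part of the repo).
import torch

print("CUDA devices:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))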

After setting up the experiment, navigate to the Evolutionary_Algorithm folder and run driver.py. As noted above, driver.py controls the maximum number of generations of evolution, rough vs. flat terrain, and whether to apply an additional user-specified preference for velocity-tracking accuracy or energy efficiency.

After running the experiment, you can visualize any of the evolved robots and their walking policies. Navigate to legged_env/envs and run:

bash run.sh example -pk

to visualize an example checkpoint of an evolved frog. To visualize your own bot, create a new entry in exp.sh modeled after example; there you can specify the path to the robot URDF and the path to the checkpoint file. -pk selects playback mode with keyboard input enabled, which lets you control the velocity of your robot using i, j, k, l as arrow keys and u and o to control yaw (rotational velocity). For a full list of playback options, see run.sh. If you run into unexpected errors, make sure you have the conda environment activated!

Sim2Real

Instructions for printing and assembling your robot can be found in Assembly Instructions.pdf.

For Sim2Real, copy Sim2Real/receiver.py onto your robot's Raspberry Pi. You will also need to set up the Python servo control library found at https://github.com/ethanlipson/PyLX-16A.

Running the receiver.py Python script will listen for the UDP packets transmitted while a robot is being played in simulation. The target_url of the data receiver must match that of the data publisher. The publisher URL can be modified via the isaacgym command line override task.env.dataPublisher.target_url, and can be specified in the bash profile for playback, legged_env/envs/exp.sh. We provide an example in this file.

example(){
    base
    task=RobotDog
    PLAY_ARGS+=(
        num_envs=1
        checkpoint=assets/checkpoints/example_frog.pth
    )
    BASE_ARGS+=(
        # task.env.terrain.terrainType=plane
        ++task.env.urdfAsset.root="assets/urdf/example_frog"
        task.env.urdfAsset.file="frog.urdf"
        task.env.randomCommandVelocityRanges.linear_x=[-0.5,0.5]
        task.env.randomCommandVelocityRanges.linear_y=[-0.5,0.5]
        task.env.randomCommandVelocityRanges.yaw=[-1,1]

        task.env.dataPublisher.enable=true
        task.env.dataPublisher.target_url=udp://10.172.14.96:9870
    )
}
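
On the robot side, the receiver amounts to listening on the same UDP port; a minimal sketch (for illustration only; use the provided receiver.py on the robot, with the port from your target_url) looks like:

# Minimal UDP listener sketch (illustration only; use the provided receiver.py).
import socket

HOST, PORT = "0.0.0.0", 9870          # port must match the target_url above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))
print(f"listening on udp://{HOST}:{PORT}")

while True:
    data, addr = sock.recvfrom(4096)      # blocks until a datagram arrives
    print(f"{addr}: {len(data)} bytes")   # decode and drive the servos here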

License

This repository is released under the Apache License 2.0. See LICENSE for additional details.
