
Motion Imitation

Further development (new features, bug fixes, etc.) happens in the master branch. The 'paper' branch of this repository contains the original code accompanying the paper:

"Learning Agile Robotic Locomotion Skills by Imitating Animals",

by Xue Bin Peng et al. It provides a Gym environment for training a simulated quadruped robot to imitate various reference motions, and example training code for learning the policies.

Learning Agile Robotic Locomotion Skills by Imitating Animals

Project page: https://xbpeng.github.io/projects/Robotic_Imitation/index.html

Getting Started

We use this repository with Python 3.7 or Python 3.8 on Ubuntu, MacOS and Windows.

  • Install the MPC extension (optional): python3 setup.py install --user

Install dependencies:

  • Install MPI: sudo apt install libopenmpi-dev
  • Install requirements: pip3 install -r requirements.txt

and it should be good to go.

Training Imitation Models

To train a policy, run the following command:

python3 motion_imitation/run.py --mode train --motion_file motion_imitation/data/motions/dog_pace.txt --int_save_freq 10000000 --visualize

  • --mode can be either train or test.
  • --motion_file specifies the reference motion that the robot is to imitate. motion_imitation/data/motions/ contains different reference motion clips.
  • --int_save_freq specifies how often, in policy steps, intermediate policies are saved.
  • --visualize enables rendering; omit the flag to disable it.
  • The trained model and logs will be written to output/.

For parallel training with MPI run:

mpiexec -n 8 python3 motion_imitation/run.py --mode train --motion_file motion_imitation/data/motions/dog_pace.txt --int_save_freq 10000000

  • -n specifies the number of parallel processes.

Testing Imitation Models

To test a trained model, run the following command:

python3 motion_imitation/run.py --mode test --motion_file motion_imitation/data/motions/dog_pace.txt --model_file motion_imitation/data/policies/dog_pace.zip --visualize

  • --model_file specifies the .zip file that contains the trained model. Pretrained models are available in motion_imitation/data/policies/.

Motion Capture Data

  • motion_imitation/data/motions/ contains different reference motion clips.
  • motion_imitation/data/policies/ contains pretrained models for the different reference motions.

For more information on the reference motion data format, see the DeepMimic documentation.

Locomotion using Model Predictive Control

Whole-body MPC locomotion for the real A1 robot and in PyBullet simulation.

Getting started with MPC and the environment

To start, just clone the codebase, and install the dependencies using

pip install -r requirements.txt

Then, you can explore the environments by running:

python3 -m motion_imitation.examples.test_env_gui --robot_type=A1 --motor_control_mode=Position --on_rack=True

The three command-line flags are:

  • robot_type: choose between A1 and Laikago for different robots.
  • motor_control_mode: choose between Position and Torque for different motor control modes.
  • on_rack: whether to fix the robot's base on a rack. Setting on_rack=True is handy for debugging and visualizing open-loop gaits.
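Equivalently, an environment can be built programmatically. Below is a hedged sketch: build_regular_env exists in this codebase, but the exact keyword arguments and the robot_config enum name are assumptions to verify against motion_imitation/envs/env_builder.py.

from motion_imitation.envs import env_builder
from motion_imitation.robots import a1, robot_config

# Build an A1 environment fixed on a rack with position-controlled motors
# (mirrors the test_env_gui flags above; argument names are assumptions).
env = env_builder.build_regular_env(
    robot_class=a1.A1,
    motor_control_mode=robot_config.MotorControlMode.POSITION,
    enable_rendering=True,
    on_rack=True)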

The gym interface

Additionally, the codebase can be directly installed as a pip package. Just run:

pip3 install motion_imitation --user

Then, you can directly invoke the default gym environment in Python:

import gym
env = gym.make('motion_imitation:A1GymEnv-v1')

Note that the pybullet rendering is slightly different from Mujoco. To enable GUI rendering and visualize the training process, you can call:

import gym
env = gym.make('motion_imitation:A1GymEnv-v1', render=True)

which will pop up the standard pybullet renderer.

And you can always call env.render(mode='rgb_array') to generate frames.
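Putting the pieces together, here is a minimal rollout sketch, assuming the Gym 0.17-era API that this repo pins (reset() returns an observation; step() returns obs, reward, done, info):

import gym

env = gym.make('motion_imitation:A1GymEnv-v1', render=True)
obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()  # random actions, just to exercise the env
    obs, reward, done, info = env.step(action)
    frame = env.render(mode='rgb_array')  # RGB numpy array of the current view
    if done:
        obs = env.reset()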

Running MPC on the real A1 robot

Since the SDK from Unitree is implemented in C++, we found that the most practical way to interface with the robot is through a C++/Python binding built with pybind11.

Step 1: Build and Test the robot interface

To start, build the python interface by running the following:

cd third_party/unitree_legged_sdk
mkdir build
cd build
cmake ..
make

Then copy the built robot_interface.XXX.so file to the main directory (where you can see this README.md file).

Step 2: Set up correct permissions for the non-sudo user

Since the Unitree SDK requires memory locking and high process priority, which are usually not granted without sudo, add the following lines to /etc/security/limits.conf:

<username> soft memlock unlimited
<username> hard memlock unlimited
<username> soft nice eip
<username> hard nice eip

You may need to reboot the computer for the above changes to take effect.

Step 3: Test the robot interface

Test the Python interface by running:

sudo python3 -m motion_imitation.examples.test_robot_interface

If the previous steps were completed correctly, the script should finish without throwing any errors.

Note that this code does not do anything on the actual robot.
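For reference, a hedged sketch of talking to the interface directly; RobotInterface, receive_observation and send_command are names used by this codebase, but the 60-element zero command (five values per motor, twelve motors) is an assumption to check against a1_robot.py:

import numpy as np
import robot_interface  # the robot_interface.XXX.so built in Step 1

interface = robot_interface.RobotInterface()
state = interface.receive_observation()  # read the current low-level robot state
interface.send_command(np.zeros(60))     # an all-zero command, i.e. no torques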

Running the Whole-body MPC controller

To see the whole-body MPC controller in sim, run:

python3 -m motion_imitation.examples.whole_body_controller_example

To see the whole-body MPC controller on the real robot, run:

sudo python3 -m motion_imitation.examples.whole_body_controller_robot_example

Credits

This repo was developed at Google Robotics and is maintained by one of its members, Erwin Coumans. The original Motion Imitation code was written by Jason Peng during an internship and as a student researcher at Google Robotics. Some of the MPC parts for the A1, including running on the real A1, were written by Yuxiang Yang, a former resident researcher at Google Robotics.


Disclaimer: This is not an official Google product.


motion_imitation's Issues

Position drifting happens when setting v=0 in whole_body_controller_example.py

In this function,

_generate_example_linear_angular_speed(t)

I set both the linear and angular velocity to zero. When the simulated A1 robot marks time for tens of seconds, it gradually drifts away from its original position. I would like to know the reason for this phenomenon: is it due to imprecise dynamic modeling? How do we fix it?
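For reference, a minimal sketch of the change described above; the (linear_speed, angular_speed) return convention is an assumption to verify against whole_body_controller_example.py:

def _generate_example_linear_angular_speed(t):
  # Command zero linear and angular velocity so the robot marks time in place.
  return [0.0, 0.0, 0.0], 0.0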

Training infrastructure

Hi,
I was wondering if there is any guideline on the recommended infrastructure for running training with this library.
I've read this, but it would be great to have more info! I was thinking of using GKE but am not really sure how to set it up.

Thanks!

hyperparameters

Hello.
How are the hyperparameters in laikago.py, laikago_constants.py and laikago_pose_utils.py designed?
Specifically, I would like to know the principles of these parameter designs:
laikago.py
INIT_RACK_POSITION = [0, 0, 1]
INIT_POSITION = [0, 0, 0.48]
JOINT_DIRECTIONS = np.array([-1, 1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1])# ??
HIP_JOINT_OFFSET = 0.0# ??
UPPER_LEG_JOINT_OFFSET = -0.6 # ??
KNEE_JOINT_OFFSET = 0.66 # ??

MAX_MOTOR_ANGLE_CHANGE_PER_STEP = 0.2 # ??
_DEFAULT_HIP_POSITIONS = (
(0.21, -0.1157, 0),
(0.21, 0.1157, 0),
(-0.21, -0.1157, 0),
(-0.21, 0.1157, 0),
)

ABDUCTION_P_GAIN = 220.0
ABDUCTION_D_GAIN = 0.3
HIP_P_GAIN = 220.0
HIP_D_GAIN = 2.0
KNEE_P_GAIN = 220.0
KNEE_D_GAIN = 2.0

_BODY_B_FIELD_NUMBER = 2
_LINK_A_FIELD_NUMBER = 3

#--------------------------------------
laikago_constants.py
INIT_RACK_POSITION = [0, 0, 1]
INIT_POSITION = [0, 0, 0.48]

INIT_ORIENTATION = pyb.getQuaternionFromEuler([math.pi / 2.0, 0, math.pi / 2.0]) #??

INIT_ABDUCTION_ANGLE = 0 #?
INIT_HIP_ANGLE = 0.67
INIT_KNEE_ANGLE = -1.25

JOINT_DIRECTIONS = collections.OrderedDict(
zip(JOINT_NAMES, (-1, 1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1)))

HIP_JOINT_OFFSET = 0.0 #?
UPPER_LEG_JOINT_OFFSET = -0.6
KNEE_JOINT_OFFSET = 0.66

MAX_MOTOR_ANGLE_CHANGE_PER_STEP = 0.12

HIP_POSITIONS = collections.OrderedDict((
(LEG_NAMES[0], (0.21, -0.1157, 0)),
(LEG_NAMES[1], (0.21, 0.1157, 0)),
(LEG_NAMES[2], (-0.21, -0.1157, 0)),
(LEG_NAMES[3], (-0.21, 0.1157, 0)),
))

#---------------------------------------------------------
laikago_pose_utils.py
LAIKAGO_DEFAULT_ABDUCTION_ANGLE = 0
LAIKAGO_DEFAULT_HIP_ANGLE = 0.67
LAIKAGO_DEFAULT_KNEE_ANGLE = -1.25

Thank you very much.

using GPU

Hi, thank you for sharing your code. I would like to know whether this code can run on a GPU. I built an environment for TensorFlow with GPU support, but I find that it cannot use the GPU.
Thank you.

Crash in the Intel OpenGL driver on Linux/Ubuntu

Sorry to interrupt, but after I entered the following command as instructed:
python3 -m motion_imitation.examples.test_env_gui --robot_type=A1 --motor_control_mode=Position --on_rack=True
the UI comes up, but the terminal prints the following message:
workaround for some crash in the Intel OpenGL driver on Linux/Ubuntu
After a while, the UI crashes, with this output:

numActiveThreads = 0
stopping threads
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed
finished
numActiveThreads = 0
btShutDownExampleBrowser stopping threads
Thread with taskId 0 exiting
Thread TERMINATED
destroy semaphore
semaphore destroyed
destroy main semaphore
main semaphore destroyed

I do not know how to fix this; thanks in advance.

The quadruped robot swings violently in the real environment.

Hello.
I trained an RL algorithm on the A1, which performs well in the simulation environment.
I used a file modified from whole_body_controller_example.py to run this algorithm on the real robot, but the quadruped swings very sharply.
Some parameters are as follows:
p = bullet_client.BulletClient(connection_mode=pybullet.DIRECT)
robot = a1_robot.A1Robot(pybullet_client=p, urdf_filename="envs/a1/urdf/a1.urdf", enable_action_interpolation=True, action_repeat=8, time_step=0.0025)
Could you give me some suggestions?
Thank you very much.

How to change the third-party SDK so it can be used on the real Laikago

Recently I have been focusing on changing the code so it can be used on the real Laikago robot, but some problems occurred:
In file included from /home/ubuntu/motion_imitation/third_party/unitree_legged_sdk/pybind11/include/pybind11/operators.h:12:0,
                 from /home/ubuntu/motion_imitation/third_party/unitree_legged_sdk/python_interface.cpp:9:
/home/ubuntu/motion_imitation/third_party/unitree_legged_sdk/pybind11/include/pybind11/pybind11.h: In instantiation of 'pybind11::class_<type_, options>::def_readwrite(const char*, D C::*, const Extra& ...)::<lambda(pybind11::class_<type_, options>::type&, const D&)> [with C = laikago::IMU; D = float [4]; Extra = {}; type_ = laikago::IMU; options = {}; pybind11::class_<type_, options>::type = laikago::IMU]':
/home/ubuntu/motion_imitation/third_party/unitree_legged_sdk/pybind11/include/pybind11/pybind11.h:1326:28: required from 'pybind11::class_<type_, options>& pybind11::class_<type_, options>::def_readwrite(const char*, D C::*, const Extra& ...) [with C = laikago::IMU; D = float [4]; Extra = {}; type_ = laikago::IMU; options = {}]'
/home/ubuntu/motion_imitation/third_party/unitree_legged_sdk/python_interface.cpp:72:54: required from here
/home/ubuntu/motion_imitation/third_party/unitree_legged_sdk/pybind11/include/pybind11/pybind11.h:1326:65: error: invalid array assignment
fset([pm](type &c, const D &value) { c.*pm = value; }, is_method(*this));

If anyone knows how to solve these problems, please reply. Thank you.

Additional motor torques

This is not an issue, just a question for a better understanding: how do you calculate the additional motor torques used in the hybrid control of your robot? Do you subtract the actual motor torques from the desired motor torques?

Install requirements issue

Hello!
When I try to install the required libraries using the command:
pip3 install -r requirements.txt
I get an error with this output:

Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: attrs==19.3.0 in /usr/lib/python3/dist-packages (from -r requirements.txt (line 1)) (19.3.0)
Collecting gym==0.17.1 (from -r requirements.txt (line 2))
  Using cached gym-0.17.1.tar.gz (1.6 MB)
  Preparing metadata (setup.py) ... done
Requirement already satisfied: mpi4py==3.0.3 in /usr/lib/python3/dist-packages (from -r requirements.txt (line 3)) (3.0.3)
Collecting numpy==1.17.3 (from -r requirements.txt (line 4))
  Using cached numpy-1.17.3-cp38-cp38-manylinux1_x86_64.whl (20.5 MB)
Requirement already satisfied: pybullet>=3.0.6 in /usr/local/lib/python3.8/dist-packages (from -r requirements.txt (line 5)) (3.2.5)
ERROR: Ignored the following versions that require a different python version: 1.25.0 Requires-Python >=3.9; 1.25.0rc1 Requires-Python >=3.9; 1.25.1 Requires-Python >=3.9
ERROR: Could not find a version that satisfies the requirement tensorflow==1.15.4 (from versions: 2.2.0, 2.2.1, 2.2.2, 2.2.3, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.3.4, 2.4.0, 2.4.1, 2.4.2, 2.4.3, 2.4.4, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0rc0, 2.6.0rc1, 2.6.0rc2, 2.6.0, 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.7.0rc0, 2.7.0rc1, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.8.0rc0, 2.8.0rc1, 2.8.0, 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.9.0rc0, 2.9.0rc1, 2.9.0rc2, 2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.10.0rc0, 2.10.0rc1, 2.10.0rc2, 2.10.0rc3, 2.10.0, 2.10.1, 2.11.0rc0, 2.11.0rc1, 2.11.0rc2, 2.11.0, 2.11.1, 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0)
ERROR: No matching distribution found for tensorflow==1.15.4

My system is:
Ubuntu 20.04
Python 3.8.10

A bug in the sensor.py file

Hi, thank you for your brilliant work.
I found a small bug in the BoxSpaceSensor class in sensor.py:

if isinstance(lower_bound, float):
  self._lower_bound = np.full(shape, lower_bound, dtype=dtype)
else:
  self._lower_bound = np.array(lower_bound)

but in the BasePositionSensor class in robot_sensors.py, the default parameters of the constructor are

  def __init__(self,
               lower_bound: _FLOAT_OR_ARRAY = -100,
               upper_bound: _FLOAT_OR_ARRAY = 100,
               name: typing.Text = "BasePosition",
               dtype: typing.Type[typing.Any] = np.float64) -> None:
    """Constructs BasePositionSensor.
    Args:
      lower_bound: the lower bound of the base position of the robot.
      upper_bound: the upper bound of the base position of the robot.
      name: the name of the sensor
      dtype: data type of sensor value
    """
    super(BasePositionSensor, self).__init__(
        name=name,
        shape=(3,),  # x, y, z
        lower_bound=lower_bound,
        upper_bound=upper_bound,
        dtype=dtype)

So the result of isinstance(lower_bound, float) is False for BasePositionSensor, and BasePositionSensor._lower_bound becomes a scalar (a 0-d array) when it should be an array with one entry per dimension, for example [x, y, z].

So when I add sensor_wrappers.HistoricSensorWrapper(wrapped_sensor=robot_sensors.BasePositionSensor(), num_history=5) to the sensors, the observation returned by step() grows by 15 elements, but env.observation_space.shape[0] only grows by 5!
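To see why, a quick numpy check (not from the repo, just illustrating the behavior):

import numpy as np

np.array(-100).shape                   # () -- a 0-d scalar array, not shape (3,)
np.full((3,), -100, dtype=np.float64)  # array([-100., -100., -100.]) -- the intended bound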

I think correcting the BoxSpaceSensor class as follows should fix it:

if isinstance(lower_bound, (float, int)):
  self._lower_bound = np.full(shape, lower_bound, dtype=dtype)
else:
  self._lower_bound = np.array(lower_bound)

Thanks again for your outstanding work :)

How can I fine-tune the policy in the real world

I'm working on a real Laikago robot and have no problem training the policy in simulation.

However, I cannot find the code to fine-tune it in the real world, which the paper calls domain adaptation.

Please let me know if I missed something.

Thanks a lot.

Adding a new robot

Is there any plan to add new robots (e.g. B1)? Or can the code, as is, be modified to add a new robot?

Error in whole_body_controller_example

Hi, I got the error "FileNotFoundError: [Errno 2] No such file or directory: '/sys/class/input/event24/device/name'" while running whole_body_controller_example. Is the joypad necessary for this program?
Also, when I ran mpc_example.py, it crashed and returned the error:

in ComputeJacobian
return super(Laikago, self).ComputeJacobian(leg_id)[(2, 0, 1), :]
IndexError: index 2 is out of bounds for axis 0 with size 0

Any help? Thanks.

Question about the structure of the encoder for domain adaptation

Thank you so much for providing us with the code. It's such great work!

However, I'm curious about the structure of the encoder used for domain adaptation during training. From your paper, I gather that the shape of the encoder's output is the same as that of its input, the dynamics parameters, but I'm not sure whether that is correct. May I have your suggestions? Thank you!

dynamic model

Hello. Is the dynamic model of the Laikago public? If it is, where are the relevant program files? Thank you.

os.environ["CUDA_VISIBLE_DEVICES"] = '-1'

Hi, I put a GPU-based neural network in this project and noticed that run.py makes CUDA invisible:
https://github.com/google-research/motion_imitation/blob/bbc04b1c3b083215f14858ce3435e3f0cb34e4d0/motion_imitation/run.py#L149

And I get an error like this:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Once I comment out this line, the error disappears.

I wonder why the CUDA environment is made invisible. Is there a specific concern?

Thanks in advance.
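For what it's worth, a minimal sketch of the workaround the error message itself suggests, for loading a CUDA-trained checkpoint while CUDA is hidden ('policy.pt' is a placeholder path):

import torch

# Map CUDA-saved tensors onto the CPU so deserialization works without a GPU.
model = torch.load('policy.pt', map_location=torch.device('cpu'))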

Question on the friction setting for randomization

Hi, thank you for your great work!

However, I have a question about one of the randomization settings. In the paper, you mention that the training range for lateral friction is [0.05, 1.25], which is a relatively large range. In the code, however, it seems to be [0.5, 1.25].

Which one did you use in the paper's experiments during training, and why? Is it because friction is a particularly crucial factor?

Thank you!

[retarget_motion] Inverse Kinematics Issue with limits

Hello.

When you call the inverse kinematics method here, you don't include the jointRanges parameter, and as I understand from the PyBullet documentation, not including it implies that the limits and rest pose are ignored.

When testing with Laikago, the program also works well if you remove lowerLimits, upperLimits and restPoses from that call, so I think they are having no effect: the optimizer finds the correct solution anyway, without taking the joint limits into account.

But I am testing with another robot, and since the inverse kinematics does not take the joint limits into account, it returns an incorrect solution. I have:

joint_lim_low = [-0.35, 0.0, -2.44, -0.35, 0.0, -2.44, -0.35, 0.0, -2.44, -0.35, 0.0, -2.44]
joint_lim_high = [0.35, 1.57, 0.0, 0.35, 1.57, 0.0, 0.35, 1.57, 0.0, 0.35, 1.57, 0.00]

And I am getting out-of-range values in joints 0 and 11:

joint_pose = [-0.37, 0.23, -0.34, 0.26, 0.14, -0.18, -0.22, 0.13, -0.22, 0.33, 0.17, 0.07]

This is the result of the inverse kinematics solution (see the rear right leg).


I tried to add the jointRanges parameter (computed as the difference of the limits), but pybullet crashes (with both my robot and Laikago).
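For reference, a sketch of how the null-space IK variant is typically invoked in PyBullet; robot_id, end_effector_id and target_pos are hypothetical placeholders, and the rest pose is an assumed neutral stance:

import pybullet as p

joint_lim_low = [-0.35, 0.0, -2.44] * 4
joint_lim_high = [0.35, 1.57, 0.0] * 4
joint_ranges = [h - l for l, h in zip(joint_lim_low, joint_lim_high)]
rest_pose = [0.0, 0.67, -1.25] * 4  # assumed neutral pose

joint_pose = p.calculateInverseKinematics(
    robot_id, end_effector_id, target_pos,
    lowerLimits=joint_lim_low,
    upperLimits=joint_lim_high,
    jointRanges=joint_ranges,
    restPoses=rest_pose)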

An issue about sim2real

Hello, I tried to execute whole_body_controller_example in PyBullet and it works well. When I use whole_body_controller_robot_example on a real A1 robot with MPC, the robot jumps up and falls down abruptly. Is there any setting I have missed?

Missing file for the A1 simulation

Hi Erwin,

a1_sim.py is missing from the mpc_controller directory. laikago_sim also does not seem to be working since the latest push: "AttributeError: module 'mpc_controller.laikago_sim' has no attribute 'URDF_NAME'".

Ways to test robustness of learned policy?

Thanks for sharing the code!

Is there an easy way to test the robustness of a policy, such as by applying external forces to the body? An example would be greatly appreciated!
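One common approach (a hedged sketch, not from this repo): apply an external push to the base with PyBullet mid-rollout. env.pybullet_client and env.robot follow this codebase's conventions, but env.robot.quadruped as the body handle is an assumption:

import pybullet

# Push the base with 50 N along +x for one control step, then step the env as usual.
env.pybullet_client.applyExternalForce(
    objectUniqueId=env.robot.quadruped,  # hypothetical handle to the robot body
    linkIndex=-1,                        # -1 targets the base link
    forceObj=[50, 0, 0],
    posObj=[0, 0, 0],
    flags=pybullet.LINK_FRAME)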

Head and tail?

If you really want to duplicate the movement of animals, don't you need the balance of a tail, and the steady gaze from eyes in a head connected by a neck to the body? These are part of animals' motion and grace.

Adapting this implementation of PPO for DeepMimic

Hello,

I wanted to ask whether you think it would be feasible to adapt this stable_baselines implementation of PPO to DeepMimic. If so, what are the main differences in the learning algorithm?

Reading both papers, it didn't look like there were major differences in the PPO implementation, apart from the reward functions.

Is there any specific reason why you decided to use PPO1 instead of PPO2?

Project dependencies may have API risk issues

Hi, in motion_imitation, inappropriate dependency versioning constraints can cause risks.

Below are the dependencies and version constraints that the project is using:

attrs==19.3.0
gym==0.17.1
mpi4py==3.0.3
numpy==1.17.3
pybullet>=3.0.6
tensorflow==1.15.4
tensorboard==1.15.0
typing==3.7.4.1
stable-baselines==2.10.0
tqdm
numba
quadprog
inputs

The == version constraint introduces a risk of dependency conflicts because the dependency scope is too strict.
Version constraints with no upper bound, or *, introduce a risk of missing-API errors, because the latest version of a dependency may remove some APIs.

After further analysis, in this project,
The version constraint of dependency numpy can be changed to >=1.8.0,<=1.23.0rc3.
The version constraint of dependency tqdm can be changed to >=4.36.0,<=4.64.0.
The version constraint of dependency numba can be changed to >=0.7.2,<=0.11.0.

The above suggestions reduce dependency conflicts as far as possible while admitting the latest versions that do not introduce errors into the project.
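Applied to requirements.txt, the three suggested relaxations would read:

numpy>=1.8.0,<=1.23.0rc3
tqdm>=4.36.0,<=4.64.0
numba>=0.7.2,<=0.11.0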

The invocation of the current project includes all the following methods.

The calling methods from the numpy
distutils.extension.Extension
numpy.linalg.norm
numpy.linalg.pinv
distutils.extension.Extension.__init__
numpy.linalg.inv
distutils.core.setup
The calling methods from the tqdm
tqdm.tqdm
The calling methods from the numba
random.seed
[The original issue continues with a long, auto-generated list of every internal and third-party function the project invokes (over a thousand entries), omitted here.]

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.

Unexpected behavior on hardware running whole_body_controller_example.py

First, thank you for open-sourcing your code! I am using the baseline MPC controller from your whole_body_controller_example.py. I have made minimal changes to your code, and it works in PyBullet simulation.

However, it produces unexpected behavior on my A1 hardware if I switch on the use_real_robot flag: the back legs suddenly jerk backwards, almost breaking the hardware. I tried lowering the gains by a large factor and suspending the robot to observe the trajectories, and I see the back legs tilting inwards and rubbing against each other.

Have you made modifications to your A1 that might make the code unable to work on a standard A1? Do you have suggestions for making your baseline MPC run on a real robot?

Thank you!

Yu

ModuleNotFoundError: No module named 'locomotion'

When I try to use gym:
env = gym.make('motion_imitation:A1GymEnv-v1', render=True)
an error occurs. I don't know what's wrong with this, and wonder if someone could help fix it.

Traceback (most recent call last):
File "", line 1, in
File "/home/robot/anaconda2/envs/stone_legged/lib/python3.6/site-packages/gym/envs/registration.py", line 142, in make
return registry.make(id, **kwargs)
File "/home/robot/anaconda2/envs/stone_legged/lib/python3.6/site-packages/gym/envs/registration.py", line 87, in make
env = spec.make(**kwargs)
File "/home/robot/anaconda2/envs/stone_legged/lib/python3.6/site-packages/gym/envs/registration.py", line 58, in make
cls = load(self.entry_point)
File "/home/robot/anaconda2/envs/stone_legged/lib/python3.6/site-packages/gym/envs/registration.py", line 17, in load
mod = importlib.import_module(mod_name)
File "/home/robot/anaconda2/envs/stone_legged/lib/python3.6/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 994, in _gcd_import
File "", line 971, in _find_and_load
File "", line 941, in _find_and_load_unlocked
File "", line 219, in _call_with_frames_removed
File "", line 994, in _gcd_import
File "", line 971, in _find_and_load
File "", line 941, in _find_and_load_unlocked
File "", line 219, in _call_with_frames_removed
File "", line 994, in _gcd_import
File "", line 971, in _find_and_load
File "", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'locomotion'
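
A quick way to narrow this down is to list what the installed package actually registers with Gym. A minimal diagnostic sketch, assuming the classic gym registry API and that importing motion_imitation triggers its env registration (as the 'motion_imitation:' prefix suggests):

from gym import envs

import motion_imitation  # importing the package should trigger its env registration

# Print every registered id containing "A1"; if nothing shows up, the
# installed package is stale or its entry point (here the missing
# 'locomotion' module) no longer matches the package layout.
print([spec.id for spec in envs.registry.all() if "A1" in spec.id])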

the format of datasets

What is the format of the datasets in data/motions? I looked at the format in DeepMimic, but that one is for a human body.
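
For what it's worth, the clips appear to follow the DeepMimic JSON layout. A minimal loading sketch; the key names "FrameDuration" and "Frames" are assumptions based on that format:

import json

# Load a reference motion clip (DeepMimic-style JSON text file).
with open("motion_imitation/data/motions/dog_pace.txt") as f:
    motion = json.load(f)

frames = motion["Frames"]     # list of per-frame pose vectors
dt = motion["FrameDuration"]  # seconds between consecutive frames
print(len(frames), "frames,", len(frames[0]), "values per frame,", dt, "s/frame")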

python versions

I had difficulties using this system with Python 3.9, so I set up a conda environment with Python 3.7. It took me a while to work this out. Could you put a small note in the readme about the Python version that you use?

Btw, great repo!
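
Until such a note lands, a one-line interpreter check (matching the Python 3.7/3.8 versions the readme targets) can save some head-scratching:

import sys

# Fail fast if the interpreter is outside the versions the repo targets.
assert sys.version_info[:2] in [(3, 7), (3, 8)], f"Untested Python: {sys.version}"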

How to set the PD gains

Hi, thanks for sharing this great project with the community.

I wanted to know how you set the PD gains for the robot. The gains seem to be set rather precisely in laikago.py. Did you perform tuning to find these values? And are these values the same for the real robots (MPC-controlled and/or the RL controller)?

Thanks again and really hoping you can answer this question!
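
For context, gains like these feed the usual joint-space PD law. A generic sketch; the gain values below are illustrative placeholders, not the repo's tuned numbers:

import numpy as np

# Generic joint-space PD control law: tau = kp * (q_des - q) - kd * dq.
kp = np.full(12, 100.0)  # illustrative position gains, one per joint
kd = np.full(12, 1.0)    # illustrative velocity (damping) gains

def pd_torques(q_des, q, dq):
    """Compute motor torques from desired and measured joint states."""
    return kp * (q_des - q) - kd * dq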

locomotion_controller_example.py not working on Ubuntu

Hello, I found that the example in the file "locomotion_controller_example.py" does not work on an Ubuntu 16.04 machine with an NVIDIA GPU and CUDA 10.0 enabled. The example produces flying robots instead of walking the reference trajectory. The same file works on a MacBook, though. When I transfer the actions computed on the MacBook onto the Ubuntu machine, it then works. I think there may be something in the controller that does not fit the Ubuntu machine's environment (I followed the guidelines in the README file to install the environment). I tried this on multiple Ubuntu machines and got consistently failing results.

Reference robot velocities 0

When computing the reward for the difference between the Laikago reference velocities and the simulated robot's velocities, the reference robot's joint velocities are always 0 for all joints. My guess is that the reference model is a kinematic model and the velocities are set to zero automatically somewhere.
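
If the reference model is indeed purely kinematic, one workaround is to recover joint velocities by finite-differencing consecutive reference frames. A sketch under that assumption:

import numpy as np

def finite_diff_velocities(frames, frame_duration):
    """Approximate per-frame joint velocities from a kinematic reference clip.

    frames: (T, N) array of joint angles; frame_duration: seconds per frame.
    Returns a (T-1, N) array of forward-difference velocity estimates.
    """
    frames = np.asarray(frames)
    return (frames[1:] - frames[:-1]) / frame_duration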

MPC controller: robot deviates from its trajectory

Hello,

When I run locomotion_controller_example.py, I observe that the robot deviates from the commanded trajectory. It is not so bad when the robot is walking, but if I try to keep the robot in place (all legs in stance mode) using the code commented out in the same file, then the base of the robot moves backwards and the robot falls. I didn't change anything in the code; I just followed the setup instructions and ran the example.

Do you have any idea where this problem comes from?

Best,
Stephen

Question about the contact forces

You did a wonderful job in your locomotion_controller_example repository. I was analyzing your code and wondered whether the contact forces of torque_stance_leg_controller are calculated by the file mpc_osqp.cc.

I send you my greetings, and I hope you can answer my question.
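
For reference, once a contact force is available (however it is solved for, e.g. by a QP such as the one in mpc_osqp.cc), mapping it to joint torques is classically done with the Jacobian transpose. A generic sketch, not the repo's exact code:

import numpy as np

def contact_force_to_torques(foot_jacobian, contact_force):
    """Map a desired 3D foot contact force to joint torques: tau = J^T f.

    foot_jacobian: (3, n_joints) linear Jacobian of the foot position.
    contact_force: (3,) desired ground-reaction force at the foot.
    """
    return foot_jacobian.T @ np.asarray(contact_force)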

How many steps does it take to converge?

Hi, I have trained the policy for over 50M steps, but it does not seem to have converged yet.

I wonder how many steps in total it needs to finish training?

(BTW, I use 16-way parallel training.)

Thanks.

Question about .config file

Can you give a little more detail about what sim_toe_joint_ids and sim_hip_joint_ids are, and how they are calculated in the retarget_config_laikago.py file?

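One way to see where such joint indices come from is to enumerate the URDF's joints in pybullet and match them by name. A sketch using the laikago URDF shipped with pybullet_data:

import pybullet as p
import pybullet_data

# Enumerate joints so ids like sim_toe_joint_ids can be matched by name.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
robot = p.loadURDF("laikago/laikago.urdf")
for j in range(p.getNumJoints(robot)):
    print(j, p.getJointInfo(robot, j)[1].decode())
p.disconnect()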

Brief explanation of input data format

Could you include just a brief explanation of the input data format in your readme? I've dug into the MotionData class and found that it should represent a pose of the form
[root position, root orientation, joint poses (e.g. rotations)], but I'm still a bit confused about what that entails. I don't understand why each frame would be of length 19.

Is there any documentation on this format for a pose?
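
For a 12-DoF quadruped, one decomposition consistent with that description adds up to exactly 19. A sketch of this interpretation (an assumption based on the DeepMimic-style format, not confirmed documentation):

import numpy as np

# 3 (root position) + 4 (root orientation quaternion) + 12 (joint rotations) = 19
frame = np.zeros(19)      # one pose frame from the motion clip
root_pos = frame[0:3]     # x, y, z
root_rot = frame[3:7]     # orientation quaternion
joint_pose = frame[7:19]  # one angle per actuated joint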

MPC baseline

Hi, I encountered the following error when compiling, are there any bugs or I didn't install correctly?
Following is the error:

Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/yandol/motion_imitation/motion_imitation/examples/mpc_example.py", line 153, in <module>
    app.run(main)
  File "/home/yandol/.local/lib/python3.8/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/home/yandol/.local/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/home/yandol/motion_imitation/motion_imitation/examples/mpc_example.py", line 149, in main
    _run_example()
  File "/home/yandol/motion_imitation/motion_imitation/examples/mpc_example.py", line 141, in _run_example
    hybrid_action, info = controller.get_action()
  File "/home/yandol/motion_imitation/mpc_controller/locomotion_controller.py", line 88, in get_action
    stance_action, qp_sol = self._stance_leg_controller.get_action()
  File "/home/yandol/motion_imitation/mpc_controller/torque_stance_leg_controller.py", line 179, in get_action
    motor_torques = self._robot.MapContactForceToJointTorques(leg_id, force)
  File "/home/yandol/motion_imitation/motion_imitation/robots/minitaur.py", line 695, in MapContactForceToJointTorques
    jv = self.ComputeJacobian(leg_id)
  File "/home/yandol/motion_imitation/motion_imitation/robots/laikago.py", line 249, in ComputeJacobian
    return super(Laikago, self).ComputeJacobian(leg_id)[(2, 0, 1), :]
IndexError: index 2 is out of bounds for axis 0 with size 0

New robot URDF and robot orientation

Hello.

I'm trying to change the URDF to a different robot; my idea is to reuse the laikago code and change the files and descriptions.

I changed the urdf location here, and I ensured that all joint names coincide with the original laikago's (the new robot is 12-DoF too).

The new robot's urdf has a different orientation than the laikago urdf, so I need to change the initial orientation; I have done that here and here.

I changed this:

[math.pi / 2.0, 0, math.pi / 2.0]

to this:

[0.0, 0.0, 0.0]

When pybullet starts, the robot starts with the correct orientation (looking along the positive X axis).

But when training or testing starts, the orientation of the reference and simulated robots changes (after this line).

I haven't found out why the initial orientation is not kept.
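
For anyone debugging the same thing: Euler angles like the ones above are typically converted to a quaternion before being applied to the base. A sketch of that conversion with pybullet's built-in helper, using the two angle sets from this issue:

import math
import pybullet as p

p.connect(p.DIRECT)
# Orientation used by the original laikago setup, Euler angles -> quaternion.
laikago_quat = p.getQuaternionFromEuler([math.pi / 2.0, 0, math.pi / 2.0])
# Orientation for the new robot (identity rotation).
new_quat = p.getQuaternionFromEuler([0.0, 0.0, 0.0])
print(laikago_quat, new_quat)
p.disconnect()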

MPC's function in this program

Hi, I wonder what the function of the MPC folder is in your program. Do you use it in combination with your imitation learning, and how do you use it in this robot system? Thanks!

Env Issues

Hi, Erwin.
First of all, thank you for sharing your code.
I followed the steps and it worked.

But I ran into an error, 'No registered env with id: A1GtmEnv-v1', in the 'Getting started with MPC and the environment' section
when I ran this Python code:

import gym
env = gym.make('motion_imitation:A1GymEnv-v1', render=True)

Do you know how to fix it?

Thank you in advance.
Euihyun Han
