arise-initiative / robosuite
robosuite: A Modular Simulation Framework and Benchmark for Robot Learning
Home Page: https://robosuite.ai
License: Other
It would be nice if robosuite officially supported MuJoCo 2.0. I've run most of the existing environments with MuJoCo 2.0 and they seem to work fine. It's possible that the upgrade is just a matter of bumping the mujoco version in requirements.txt and setup.py, with no changes needed in the robosuite code base.
I'd be happy to submit a PR for this.
test_linear_interpolator produced the following error on the v1.0 branch:
Testing controller EE_IK with trajectory pos and interpolator=linear...
Completed trajectory. Took 30 timesteps total.
Traceback (most recent call last):
File "test_linear_interpolator.py", line 170, in <module>
test_linear_interpolator()
File "test_linear_interpolator.py", line 160, in test_linear_interpolator
assert timesteps[1] > min_ratio * timesteps[0], "Error: Interpolated trajectory time should be longer " \
AssertionError: Error: Interpolated trajectory time should be longer than non-interpolated!
Hello,
Thanks again for the repo, it has been really helpful! :)
Could you please provide some information on how you decided on the velocity actuator gains for the Sawyer, as well as the control ranges for both the velocity and torque versions of the Sawyer?
I have set up a pushing task with a small cubic object (5 cm edges, 200 g) on the table arena, and am noticing that if the friction of the object is high enough (0.8-0.9), the Sawyer fails to provide enough torque to push the object.
This is fixed if I increase the velocity actuator gains, but I would like to better understand whether this is a sensible thing to do, and how to get a good idea of what sensible gains are.
I would also like to build a PID controller on top of MuJoCo torque actuators by modifying the base, sawyer_robot and sawyer modules, but I am struggling to figure out how to get sensible P and I gains for each joint.
I would appreciate any insights you could give me on this!
I'm trying to replay some demos in various tasks and environments. They work when use-actions is set to False, but they always fail when it is set to True.
Is there a solution? Is it caused by some source of non-determinism?
In case it ever comes up for anyone else: I was getting NameError: name '_raise_glfw_errors_as_exceptions'
when running demo.py after following the instructions in the README from scratch. I had to downgrade glfw from 1.10.0 to fix it. I went ahead and used 1.7.0 since that's what I've used before, but given that 1.10.0 came out yesterday, it's likely just a 1.10.0 issue.
Hello,
This repository is fantastic; thank you for the good work!
I have noticed that in the UniformRandomSampler under models.tasks.placementsampler, the z_rotation argument defaults to "random"; in the SawyerLift environment it is passed as True; and in the sample_quat function where it is used, the only supported options seem to be [None, Iterable, scalar].
In case I am misunderstanding something, and because I am not sure what the intended defaults originally were, I am just raising an issue here for now!
Thanks!
This project looks very interesting! I recently released a real robot dataset with a UR5 and a block stacking task.
Do you think it might be feasible & worthwhile to integrate the following?
Side note: Have you also seen the MIME dataset?
I'd very much like to hear your thoughts, so thanks for your time and consideration!
The placement_initializer argument is ignored, and no placement initializer is used by the PickPlaceTask to place objects; instead the placement is hardcoded so that objects are placed on the left side of the table (the bin).

Hi, I am wondering whether there is a way to remove/add objects (and also change properties of objects) in a particular environment on the fly?
It seems that this used to be a problem with mujoco-py, but they have fixed the read-write issue. How can I do the same in robosuite environments?
Great work! The repo provides images (256x256) as extracted observations. However, I'm encountering some difficulties performing coordinate conversions from the 3D scene to the 2D image and vice versa. For example, the table in the scene is defined in the MJCF model with pos="0.5, 0.3, 0.8" and a quat; I want to extract the 2D image of this particular region and, after performing some detections on objects in this cropped region, transfer the 2D coordinates back to the 3D scene in pos and quat format. Can you provide some guidance on how to do this?
Many thanks!
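A hedged sketch of the pinhole-projection math involved, assuming a mujoco-py MjSim: the intrinsics follow from model.cam_fovy and the image height, and the extrinsics from data.cam_xpos / data.cam_xmat. Axis and sign conventions vary between setups, so treat this as a starting point to verify, not a definitive implementation:

```python
# Hedged sketch: project a 3D world point into pixel coordinates for a
# MuJoCo camera. cam_pos/cam_mat would come from sim.data.cam_xpos and
# sim.data.cam_xmat (reshaped to 3x3); fovy_deg from sim.model.cam_fovy.
import numpy as np

def world_to_pixel(point, cam_pos, cam_mat, fovy_deg, height, width):
    """Return (row, col) pixel coordinates of a world-frame 3D point."""
    # Express the point in the camera frame; MuJoCo cameras look down -z.
    p_cam = cam_mat.T @ (np.asarray(point, dtype=float) - cam_pos)
    depth = -p_cam[2]  # positive for points in front of the camera
    focal = 0.5 * height / np.tan(np.deg2rad(fovy_deg) / 2.0)
    col = width / 2.0 + focal * p_cam[0] / depth
    row = height / 2.0 - focal * p_cam[1] / depth
    return row, col
```

Going the other way (2D back to 3D) additionally requires the depth image, since a single pixel only constrains a ray through the scene.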
In robosuite/models/base.py, why does the get_model() function use StringIO? Can't load_model_from_xml take a string directly? The current approach appears to create a new .xml file under /tmp/ every time it is called.
def get_model(self, mode="mujoco_py"):
    """
    Returns a MjModel instance from the current xml tree.
    """
    available_modes = ["mujoco_py"]
    with io.StringIO() as string:
        string.write(ET.tostring(self.root, encoding="unicode"))
        if mode == "mujoco_py":
            from mujoco_py import load_model_from_xml
            model = load_model_from_xml(string.getvalue())
            return model
        raise ValueError(
            "Unkown model mode: {}. Available options are: {}".format(
                mode, ",".join(available_modes)
            )
        )
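For what it's worth, a minimal sketch of the question above: ET.tostring already returns a plain string, so the StringIO round-trip looks unnecessary (the mujoco_py call is left as a comment since it needs a working MuJoCo install):

```python
# Minimal sketch: serialize an ElementTree root straight to a string,
# with no StringIO intermediary.
import xml.etree.ElementTree as ET

def model_xml(root):
    """Return the XML of an ElementTree element as a unicode string."""
    return ET.tostring(root, encoding="unicode")

# from mujoco_py import load_model_from_xml
# model = load_model_from_xml(model_xml(self.root))
```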
Hi,
I am trying to load different objects in your amazing environment to train a general-purpose grasper. For that I have to provide a different XML for each object. In these XMLs there are "site" parameters which specify the topmost and bottommost parts of the object, relative to its center; looking at those values, they are only computed in the z-direction.

However, when I use the models provided in the meshes of this repository and try to compute the top-site and bottom-site, I do not get the same answers as those written in the XML files. My approach: I take the mean of the STL vertices as the center, subtract that center to recenter everything, and then take the min/max in the z-direction to get the top and bottom sites.

Maybe I am doing something wrong; any help regarding this topic would be great.
Thanks
I've found that images rendered off-screen by mujoco_py (with sim.render) are upside down and mirrored compared to images rendered by the MuJoCo viewer. While the former issue can be fixed by adjusting the camera transform, the latter can only be fixed by flipping the image buffers in code.
For example, below is the default image returned by the sawyer_pick_place environment, a rotated version of it for convenience, and the image shown by the MuJoCo viewer. Note that the bin is mirrored between the rotated and "real" images.
I believe the MuJoCo viewer's rendering is correct, as the right Baxter gripper appears on the robot's left in the sim rendering. To fix this issue I flipped the image buffer's width axis in code, though there may be a better way to resolve this.
[Images: from sim | rotated 180 degrees | from viewer]
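A minimal numpy sketch of the buffer flip mentioned above; which axis to reverse depends on which transforms you have already applied (rows for upside-down, columns for mirrored):

```python
# Minimal sketch: undo the inversion of a mujoco_py offscreen render by
# reversing the row (height) axis, or the column (width) axis if the
# image is mirrored.
import numpy as np

def flip_rows(img):
    """Reverse the height axis of an (H, W, C) image."""
    return img[::-1]

def flip_cols(img):
    """Reverse the width axis of an (H, W, C) image."""
    return img[:, ::-1]
```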
Hi,
I am trying to generate camera observations of rollouts of a policy trained with Surreal.
However I found that the Sawyer robot rendered with env.unwrapped.render() onscreen and the observation rendered with env.unwrapped.sim.render() look different. More specifically, the one rendered in the simulation window has more details whereas the one rendered in offscreen mode does not resemble the real look of the robot:
Is there a way to obtain the more realistic-looking robot in the observation images?
Thanks
Hi,
simulation speed with the standard parameters is currently very slow.
A single env.step() call takes about 0.15 seconds.
Is there a parameter that allows speeding this up? I know there are MuJoCo solver parameters, such as the number of iterations taken at every step. Is that exposed anywhere in robosuite?
Thanks!
The MujocoGeneratedObject class supports directly setting a vector of friction coefficients, but subclasses such as BoxObject (see here) instead convert the friction parameter to a friction_range that is only used by the superclass to sample a translational friction value, while keeping the other friction parameters at their default values. The solution is to make sure that the friction argument is passed through from subclasses to the super calls.
I want to replay the demonstrations.
However, when I run python playback_demonstrations_from_hdf5.py --folder ../models/assets/demonstrations/SawyerPickPlace/ --use-actions under robosuite/robosuite/scripts/, the program fails the assertion and the robot can't grab the objects.
It failed with both mujoco-py==2.0.2.2 & robosuite==0.3.0 and with mujoco-py==1.50.1.68 & robosuite==0.2.0.
Could you give any suggestions?
BTW, when I installed surreal, it said:
"ERROR: surreal 0.2.1 has requirement mujoco-py<1.50.2,>=1.50.1, but you'll have mujoco-py 2.0.2.2 which is incompatible."
"ERROR: robosuite 0.3.0 has requirement mujoco-py==2.0.2.2, but you'll have mujoco-py 1.50.1.68 which is incompatible."
Does this mean RoboTurk collected the demonstrations using mujoco-py 1.50.1.68, which may be incompatible with the current robosuite?
The function reset_from_xml_string in base.py doesn't call _reset_internal, which can be problematic when subclasses expect it to be called on every env reset. We should factor MjSim creation out of _reset_internal and then call it in reset_from_xml_string.
Following the README.md: when I run $ python robosuite/demo.py, everything works fine. However, when I run the code under the Quick Start section of your README.md:

import numpy as np
import robosuite as suite

# create environment instance
env = suite.make("SawyerLift", has_renderer=True)

# reset the environment
env.reset()

for i in range(1000):
    action = np.random.randn(env.dof)  # sample random action
    obs, reward, done, info = env.step(action)  # take action in the environment
    env.render()  # render on display

I get a blank (black) window that appears for three seconds before vanishing. I don't see any images or animations. I find that strange, since running demo.py worked fine.
Hello,
Please correct me if I am wrong, but I have noticed that your MuJoCo XML files have been constructed using the "standard" URDF description of the Sawyer provided by Rethink here.
Do you by any chance have functionality that could take the URDF obtained from the parameter server of the actual physical Sawyer and update the MuJoCo XML files accordingly? (Did you have to write all the XML files by hand, or do you have scripts that helped you obtain them from the URDF description?)
Thanks!
The links to playback_demonstrations_from_hdf5 and collect_human_demonstrations have incorrect relative paths to the actual documents.
Current link:
https://github.com/StanfordVL/robosuite/blob/master/docs/robosuite/scripts/collect_human_demonstrations.py
Actual link:
https://github.com/StanfordVL/robosuite/blob/master/robosuite/scripts/collect_human_demonstrations.py
Hello!
I would like to use the ik_wrapper and the sawyer_ik_controller in order to control the robot in end-effector pose, but I am noticing that my environment consumes more and more memory over time!
Because I want to use distributed environments in order to speed up learning, my program ends up running out of memory.
I am wondering if this could be a memory leak somewhere in the controller. The only thing I can think of is pybullet (looking at the code, I don't see where else this could be coming from).
Any ideas? Perhaps a recommendation of other packages I could use to replace pybullet for the inverse kinematics?
Many Thanks,
Eugene
There's an issue with the camera width being ignored. See below:
https://github.com/StanfordVL/robosuite/blob/master/robosuite/environments/sawyer.py#L90
https://github.com/StanfordVL/robosuite/blob/master/robosuite/environments/panda.py#L90
I read through all the documentation and a lot of details necessary to run and contribute to this project appear to be missing so I'd appreciate your help. Will the documentation be expanded at some point?
Examples of missing info for the robosuite:
Thanks for your help!
In some environments (which don't look to have been merged yet, such as the wiping environments in the vices_19 branch), data from a sensor with id sensor_id is referenced using env.sim.data.sensordata[sensor_id*3:sensor_id*3+3], where the magic number 3 is based on the assumption that all previous sensors have 3 values associated with them. If sensors with different dimensions are added (e.g. 1-dim contact sensors), the returned values will be wrong.
Note: this is because MuJoCo/mujoco-py simply puts all sensor data in a single flat array in env.sim.data.sensordata (in the order the sensors were added).
We will eventually need some way of tracking the kinds of sensors that have been added to mitigate the issue; filing it here for reference in the meantime. Until then, users will have to manually keep track of which sensors are used and be careful with how they reference them.
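Until such bookkeeping exists, MuJoCo's own model tables can serve: mjModel records each sensor's offset and width in sensor_adr and sensor_dim (exposed by mujoco-py as sim.model.sensor_adr / sim.model.sensor_dim). A sketch of the generic slice:

```python
# Sketch: read one sensor's values from the flat sensordata array using
# the model's per-sensor address/dimension tables instead of a fixed
# stride of 3. This works for mixed sensor widths.
import numpy as np

def read_sensor(sensordata, sensor_adr, sensor_dim, sensor_id):
    """Return the slice of sensordata belonging to sensor_id."""
    start = sensor_adr[sensor_id]
    return sensordata[start:start + sensor_dim[sensor_id]]

# Usage with a live sim (attribute names from mujoco-py):
#   vals = read_sensor(env.sim.data.sensordata, env.sim.model.sensor_adr,
#                      env.sim.model.sensor_dim, sensor_id)
```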
As the number of episodes increases, at a fixed step obs starts returning black images until the end of the entire episode.
Very strangely, there are no problems for the first hundred or so episodes; the resulting images are normal and clear.
But once the bad picture appears, it keeps appearing regularly!
In https://github.com/SurrealAI/surreal/blob/da705c02a243dbc7709c6002a02f1f8df6007674/surreal/main/ddpg_configs.py#L133 I found that your team has also encountered similar bad pictures; I really want to know how you avoided it.

env = suite.make(
    'SawyerLift',
    has_renderer=True,
    use_camera_obs=True,
    camera_depth=False,
    ignore_done=False,
    render_visual_mesh=False,
    reward_shaping=True,
    camera_height=Origin_size,
    camera_width=Origin_size,
    camera_name=camera_name,
    control_freq=10,
    reach_flag=False,
)
While MuJoCo has advantages, $500 per year is a very high barrier to entry for many. It will severely limit any organic community of tutorials and blogs which could spring up around this repository.
Please consider support for an open source and free engine such as Bullet.
Hello, thank you very much for the repo.
When I followed the steps of 'How to build a custom environment', I found a problem. In the 'adding the object' part, the example code is
from robosuite.models.objects import BoxObject
from robosuite.utils.mjcf_utils import new_joint
object_mjcf = BoxObject()
world.merge_asset(object_mjcf)
obj = object_mjcf.get_collision(name="box_object", site=True)
obj.append(new_joint(name="box_object", type="free"))
obj.set("pos", [0, 0, 0.5])
world.worldbody.append(obj)
At this part I tried to change the position of the object, i.e. change obj.set("pos", [0, 0, 0.5]) to other values, but the rendering result shows that the position of the object remains unchanged (at the bottom of the table, as shown in the attached picture).
Has anybody else faced this problem, and could you please give me some advice? Thank you very much!
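One thing that may be worth checking in the snippet above (an assumption on my part, not verified against this robosuite version): ElementTree attribute values must be strings, so passing a Python list to obj.set may not serialize into the MJCF pos attribute as intended. A sketch of the space-separated string form MJCF expects:

```python
# Sketch: write an MJCF "pos" attribute as the space-separated string
# MuJoCo's XML parser expects, rather than a Python list.
import xml.etree.ElementTree as ET

def set_pos(elem, pos):
    """Write a 3-vector into elem's 'pos' attribute as 'x y z'."""
    elem.set("pos", " ".join(str(x) for x in pos))

body = ET.Element("body")
set_pos(body, [0, 0, 0.8])
```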
To make it easier for other developers to contribute to robosuite, I would recommend following PEP8 (https://www.python.org/dev/peps/pep-0008/) and providing some standardized way of linting the files, e.g. black or yapf.
Robosuite is an amazing open-source codebase that helps researchers a lot in benchmarking their algorithms. I really appreciate your effort and kindness in sharing this source code. Could you share the real-robot API interface for working with a real Sawyer robot? That would also help people a lot to see the algorithms work on a real robot. Currently, I am working with a real Sawyer robot. I really look forward to your sharing.
Thank you
Hi,
I'm trying to play back demonstrations using inverse kinematics using this script:
https://github.com/kpertsch/robosuite/blob/dev_frederik/robosuite/scripts/replay_dataset.py#L60
Using the "bins-Bread" data it produces motions that look random and do not pick up anything.
Could this be an issue with the control_freq parameter? Was this data collected with control_freq=100?
Thanks a lot!
Hi, thank you very much for providing a complete simulated robotic arm.
However, I now want to get more varied pictures of the state, and I don't know how to move the objects (e.g. bottles, milk).
I tried to modify the location of the random initialization, but the change was not obvious.
Perhaps this project is too big for me to review all the details, especially since my English is not fluent.
Maybe I simply could not find your example programs?
Can you please point me in a direction to learn how to move those objects, whether in your project or in mujoco-py?
Hi,
I find the robosuite framework very cool. Does it support manipulating soft objects like cloth and ropes? As far as I understand, MuJoCo does support soft-object manipulation. Does that support come out of the box in robosuite? If not, any ideas on how to do it?
Thanks!
Hi,
I want to integrate robosuite with Kinova's Jaco2 (j2n6s300) robot arm model.
Are there tutorials or guidelines for this?
Thank you.
Are there any plans to make robosuite compatible with Python 3.6? TensorFlow doesn't offer support for Python 3.7, and I want to use both together.
Hi,
I'm generating demo data through the "collect_human_demonstrations" script. I have a question related to this.
Can I get the data corresponding to the robot's actions in the demo (perhaps the torques of the robot joints)?
Thanks for your work, it is very exciting for robotics. When I run the demo, I get an error; could you help me? Thanks very much!
GLFW error (code %d): %s 65544 b'X11: RandR gamma ramp support seems broken' Creating window glfw Creating window glfw
Hi,
I'd like to compute inverse kinematics for an absolute end-effector position. I'm trying to reuse parts of robosuite's inverse kinematics controller for this.
Here is my attempt at doing it:
https://github.com/kpertsch/robosuite/blob/dev_frederik/robosuite/scripts/test_absolute_inv_kin.py
Unfortunately the solution is incorrect; am I missing some transformations? (I added the transform from base to world, but that didn't help.)
Thanks!
This is a query regarding interfacing the Novint Falcon and getting force feedback from the simulation. Is it possible?
MJCF complains about "Error: mass and inertia of moving bodies must be positive"
There is a bug where the x_range and y_range arguments are ignored by the UniformRandomPegsSampler placement initializer, probably because it needs to place multiple object types (square and round nuts); see here.
This placement initializer should probably take dictionaries as input, so that ranges can be specified per object type, and use the input dictionary appropriately.
Hello,
I noticed that the renderer sometimes behaves strangely with regard to rendering frequency and timing.
It is possible to see this by slightly changing the demo.py file, adding sleep and print calls.
Launching the demo then with, say, Sawyer lift, we can see that it only renders every so often, not at every call of render(). Moreover, if we set up a test at a control frequency of 5 Hz and allow for 20 timesteps, we can see that it does not finish rendering in the 4 s that would be expected in real time.
Upon further investigation, this comes from mujoco-py, which implements a compensation to make the rendering real-time (c.f. lines 194-204 in mjviewer.py).
Now, this compensation relies on sim.nbsubsteps, which tells mujoco-py how many simulation steps to take for every control step. Here it is set to 1, but the functionality is re-implemented in base.py (note that step in base.py reasons in terms of frequencies and time, while sim.nbsubsteps just considers the number of simulation substeps).
The result is that every time render() is called, mujoco-py compensates time as if only one simulation step had been taken between consecutive render calls (since sim.nbsubsteps = 1), while there are actually more, depending on control_freq; this makes the whole thing go out of sync.
I made a quick and dirty hack to compensate for this, but better ideas are more than welcome. In base.py:
- set self.viewer._render_every_frame = True in _reset_internal to stop the compensation by mujoco-py;
- add time.sleep(1/self.control_freq) to get back to real time.
This does not take into account the actual compute time between render calls (as opposed to mujoco-py, which does), so it is actually not showing real time, but it is an approximate fix if you don't need to be very exact.
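The workaround above can be sketched roughly as follows (the attribute name is taken from the issue text and mujoco-py's MjViewer; the exact placement inside robosuite's base.py is hypothetical):

```python
# Rough sketch of the two-part hack: disable mujoco-py's real-time
# compensation, then throttle the loop to one control period per render.
import time

def throttle(control_freq):
    """Sleep for one control period to approximate real-time playback."""
    time.sleep(1.0 / control_freq)

# In _reset_internal, after the viewer is created (hypothetical):
#   self.viewer._render_every_frame = True
# In the rollout loop, after each env.render():
#   throttle(self.control_freq)
```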
Thanks!!
We should add solref and solimp arguments to MujocoGeneratedObject and its subclasses to easily play around and experiment with contact modeling. The default behavior for some objects is pretty bad; for example, thin cylinders tend to sink into the table.
Hi,
I'd like to set object positions during simulation; however, mujoco-py only supports setting the complete qpos vector.
Is there a way to get the qpos-address of the object joints? Or is there another way to modify object positions/orientations?
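A minimal sketch, assuming the object was given a free joint (the joint name below is hypothetical): mujoco-py's named accessor sidesteps computing qpos addresses by hand:

```python
# Sketch: place an object by writing its free joint's 7-dim qpos
# (x, y, z, qw, qx, qy, qz) through mujoco-py's named accessor.
import numpy as np

def set_object_pose(sim, joint_name, pos, quat):
    """Set a free joint's pose, then recompute kinematics."""
    qpos = np.concatenate([np.asarray(pos, float), np.asarray(quat, float)])
    sim.data.set_joint_qpos(joint_name, qpos)
    sim.forward()  # propagate the new state without advancing dynamics

# Usage (hypothetical joint name):
#   set_object_pose(env.sim, "box_object", [0.5, 0.0, 0.9], [1, 0, 0, 0])
```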
Thanks!
Hi,
I think this is related to issue #17 and probably has to do with environment configuration. I am on Ubuntu 18.04 and followed all the installation instructions. I was able to run the demo successfully. However, in the tutorial, whenever use_camera_obs is set to True, I get the following error message:
File "/home/robosuite/robosuite/environments/base.py", line 149, in reset
return self._get_observation()
File "/home/robosuite/robosuite/environments/sawyer_lift.py", line 274, in _get_observation
depth=self.camera_depth,
File "mjsim.pyx", line 149, in mujoco_py.cymj.MjSim.render
File "mjsim.pyx", line 151, in mujoco_py.cymj.MjSim.render
File "mjrendercontext.pyx", line 43, in mujoco_py.cymj.MjRenderContext.init
File "mjrendercontext.pyx", line 108, in mujoco_py.cymj.MjRenderContext._setup_opengl_context
File "opengl_context.pyx", line 128, in mujoco_py.cymj.OffscreenOpenGLContext.init
RuntimeError: Failed to initialize OpenGL
I am just wondering if you know what is causing this problem.
Thanks!
Hi,
Thank you for providing this benchmarking framework.
I have a couple of inconveniences with using this framework.
First, when rendering via env.render(), the rendering window reopens whenever the environment is reset. Is there a solution for this?
Second, a GLFW missing error occurs if the "has_offscreen_render" option is set to True while the "has_render" option is False. If both options are True, the rendering window blinks.
Thank you.