sapien's People

Contributors

angli66, colin97, fbxiang, jetd1, jiayuan-gu, kent0318, mrznone, yzqin

sapien's Issues

PartNet-Mobility dataset

Hi there,

Thanks for sharing the project and dataset.

I would like to visualize PartNet-Mobility objects locally (possibly in a Jupyter notebook), similar to the browsing and controlling widgets provided on your website, and render/take pictures from different views and with different poses of the object parts.

Could you provide some leading points?

P.S. So far, I have gone through the rendering part of the tutorial documentation. It seems that most functionality lives in the viewer built on the Vulkan renderer, but there are no controlling widgets for changing the pose of the object parts.
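
For reference, a minimal offscreen-rendering sketch along these lines, loosely following the camera tutorial: part poses are controlled through the articulation's joint positions (set_qpos), and views through the mounted camera's pose. The URDF path is hypothetical and the exact add_mounted_camera signature differs across SAPIEN versions, so treat this as a starting point rather than an official recipe.

import numpy as np
import sapien.core as sapien
from PIL import Image

engine = sapien.Engine()
renderer = sapien.VulkanRenderer(offscreen_only=True)
engine.set_renderer(renderer)

scene = engine.create_scene()
scene.set_timestep(1 / 100.0)
scene.set_ambient_light([0.5, 0.5, 0.5])
scene.add_directional_light([0, 1, -1], [0.5, 0.5, 0.5])

loader = scene.create_urdf_loader()
loader.fix_root_link = True
asset = loader.load_kinematic('partnet-mobility/179/mobility.urdf')  # hypothetical local path

# Articulate the parts: qpos has one entry per movable joint.
qpos = np.zeros(asset.dof)
qpos[0] = 0.5  # e.g. open one part halfway
asset.set_qpos(qpos)

# Mount a camera on a kinematic actor and choose a viewpoint.
cam_mount = scene.create_actor_builder().build_kinematic()
camera = scene.add_mounted_camera(
    name='camera', actor=cam_mount, pose=sapien.Pose(),
    width=640, height=480, fovy=np.deg2rad(35), near=0.1, far=100)
cam_mount.set_pose(sapien.Pose([-2, 0, 1]))

scene.step()
scene.update_render()
camera.take_picture()
rgba = camera.get_float_texture('Color')  # H x W x 4 floats in [0, 1]
Image.fromarray((rgba * 255).clip(0, 255).astype('uint8')).save('view.png')

Looping over qpos values and camera poses then gives renders from different views and part configurations; the same pattern works inside a Jupyter notebook since no window is created.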

Incorrect textures when rendering RGB images

System:

  • OS version: MacOS Big Sur 11.4
  • Python version (if applicable): Python 3.8.10
  • SAPIEN version (pip freeze | grep sapien): sapien==1.0.0rc2
  • Environment: Desktop

Describe the bug

I observed incorrect textures when rendering RGB images using camera.py.

The incorrect textures are deterministic on my MacBook; they do not look like random noise.

However, the exact same code produces different textures on another MacBook (Catalina).

Note that it happens to other instances like chairs, too.

Screenshots


Keyboard 13023 rendered by MacBook Big Sur 11.4


Keyboard (13023) rendered by MacBook Catalina 10.15.7

Incompatible function arguments in 'camera.py'

System:

  • OS version: Ubuntu 18.04
  • Python version (if applicable): 3.8.12
  • SAPIEN version (pip freeze | grep sapien): 1.1.1
  • Environment: Desktop

Describe the bug
I have just begun studying SAPIEN. I couldn't run the example script camera.py: I changed the URDF path and ran the file, but I got an incompatible function arguments error.

To Reproduce
My environment packages are:
(sapien) berk@berk-HP:~/VS_Project/SAPIEN$ conda list
# packages in environment at /home/berk/anaconda3/envs/sapien:

Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 4.5 1_gnu
addict 2.4.0 pypi_0 pypi
anyio 3.3.4 pypi_0 pypi
argon2-cffi 21.1.0 pypi_0 pypi
attrs 21.2.0 pypi_0 pypi
babel 2.9.1 pypi_0 pypi
backcall 0.2.0 pypi_0 pypi
bleach 4.1.0 pypi_0 pypi
ca-certificates 2021.10.26 h06a4308_2
certifi 2021.10.8 py38h06a4308_0
cffi 1.15.0 pypi_0 pypi
charset-normalizer 2.0.7 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
debugpy 1.5.1 pypi_0 pypi
decorator 5.1.0 pypi_0 pypi
defusedxml 0.7.1 pypi_0 pypi
deprecation 2.1.0 pypi_0 pypi
entrypoints 0.3 pypi_0 pypi
idna 3.3 pypi_0 pypi
ipykernel 6.4.2 pypi_0 pypi
ipython 7.29.0 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 7.6.5 pypi_0 pypi
jedi 0.18.0 pypi_0 pypi
jinja2 3.0.2 pypi_0 pypi
joblib 1.1.0 pypi_0 pypi
json5 0.9.6 pypi_0 pypi
jsonschema 4.1.2 pypi_0 pypi
jupyter-client 7.0.6 pypi_0 pypi
jupyter-core 4.9.1 pypi_0 pypi
jupyter-packaging 0.11.0 pypi_0 pypi
jupyter-server 1.11.1 pypi_0 pypi
jupyterlab 3.2.1 pypi_0 pypi
jupyterlab-pygments 0.1.2 pypi_0 pypi
jupyterlab-server 2.8.2 pypi_0 pypi
jupyterlab-widgets 1.0.2 pypi_0 pypi
kiwisolver 1.3.2 pypi_0 pypi
ld_impl_linux-64 2.35.1 h7274673_9
libedit 3.1.20210714 h7f8727e_0
libffi 3.3 he6710b0_2
libgcc-ng 9.3.0 h5101ec6_17
libgomp 9.3.0 h5101ec6_17
libstdcxx-ng 9.3.0 hd4cf53a_17
markupsafe 2.0.1 pypi_0 pypi
matplotlib 3.4.3 pypi_0 pypi
matplotlib-inline 0.1.3 pypi_0 pypi
mistune 0.8.4 pypi_0 pypi
nbclassic 0.3.4 pypi_0 pypi
nbclient 0.5.4 pypi_0 pypi
nbconvert 6.2.0 pypi_0 pypi
nbformat 5.1.3 pypi_0 pypi
ncurses 6.2 he6710b0_1
nest-asyncio 1.5.1 pypi_0 pypi
notebook 6.4.5 pypi_0 pypi
numpy 1.21.3 pypi_0 pypi
open3d 0.13.0 pypi_0 pypi
openssl 1.1.1l h7f8727e_0
packaging 21.2 pypi_0 pypi
pandas 1.3.4 pypi_0 pypi
pandocfilters 1.5.0 pypi_0 pypi
parso 0.8.2 pypi_0 pypi
pexpect 4.8.0 pypi_0 pypi
pickleshare 0.7.5 pypi_0 pypi
pillow 8.4.0 pypi_0 pypi
pip 21.2.4 py38h06a4308_0
prometheus-client 0.12.0 pypi_0 pypi
prompt-toolkit 3.0.21 pypi_0 pypi
ptyprocess 0.7.0 pypi_0 pypi
pycparser 2.20 pypi_0 pypi
pygments 2.10.0 pypi_0 pypi
pyparsing 2.4.7 pypi_0 pypi
pyrsistent 0.18.0 pypi_0 pypi
python 3.8.12 h12debd9_0
python-dateutil 2.8.2 pypi_0 pypi
python_abi 3.8 2_cp38 conda-forge
pytz 2021.3 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
pyzmq 22.3.0 pypi_0 pypi
readline 8.1 h27cfd23_0
requests 2.26.0 pypi_0 pypi
requests-unixsocket 0.2.0 pypi_0 pypi
sapien 1.1.1 pypi_0 pypi
scikit-learn 1.0.1 pypi_0 pypi
scipy 1.7.1 pypi_0 pypi
send2trash 1.8.0 pypi_0 pypi
setuptools 58.0.4 py38h06a4308_0
six 1.16.0 pypi_0 pypi
sniffio 1.2.0 pypi_0 pypi
sqlite 3.36.0 hc218d9a_0
terminado 0.12.1 pypi_0 pypi
testpath 0.5.0 pypi_0 pypi
threadpoolctl 3.0.0 pypi_0 pypi
tk 8.6.11 h1ccaba5_0
tomlkit 0.7.2 pypi_0 pypi
tornado 6.1 pypi_0 pypi
tqdm 4.62.3 pypi_0 pypi
traitlets 5.1.1 pypi_0 pypi
transforms3d 0.3.1 pypi_0 pypi
tzdata 2021e hda174b7_0
urllib3 1.26.7 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
websocket-client 1.2.1 pypi_0 pypi
wheel 0.37.0 pyhd3eb1b0_1
widgetsnbextension 3.5.2 pypi_0 pypi
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3

Screenshots
(screenshot of the incompatible function arguments traceback)

Can you suggest a solution?

Thank you in advance.

Hardware requirement

Hi, thanks for your nice project. I wonder whether there are any hardware requirements for running SAPIEN (built from source), such as a recommended CPU or GPU. Is it possible to run SAPIEN without GPU support?

MPlib installation issue on Mac

When I try to install MPlib using pip install mplib I get:

ERROR: Could not find a version that satisfies the requirement mplib (from versions: none)
ERROR: No matching distribution found for mplib

I am on MacOS with python 3.7 and pip 21.3.1. Any ideas?

Kinematic constraint between a gripper and a target object

Is your feature request related to a problem? Please describe.
Hi, is there a way to implement a "sticky gripper" in ManiSkill, i.e., create a kinematic constraint on contact? Thanks.

Describe the solution you'd like
I think all I need to do is to create a fixed link between the target joint and the gripper finger?
In ManiSkill, here is what we did:

parent = env.agent.finger1_joint.articulation.get_builder().create_link_builder()
child = env.target_joint.articulation.get_builder().create_link_builder(parent)
child.set_joint_properties(
    'fixed', limits=[],
    pose_in_parent=sapien.core.Pose(p=[0, 0, 0], q=[0, 0, 0, 0]),
    pose_in_child=sapien.core.Pose(p=[0, 0, 0], q=[0, 0, 0, 0]),
    friction=1, damping=1)

but it had no visible effect on the environment

Additional context
Basically, it is a gripper that, upon contacting the object, creates a kinematic constraint that never lets go of the object (analogous to a suction gripper). The core question is how to create a fixed kinematic constraint between the gripper and the target.
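
One possible direction (an assumption on my part, not something confirmed by the ManiSkill code above): instead of editing the articulation builders, create a 6-DOF drive between the finger link and the target at contact time and lock every axis, which behaves like a rigid weld. scene.create_drive and Drive.lock_motion appear to exist in recent SAPIEN releases, but the names and availability should be checked against the version you use.

import sapien.core as sapien

def stick(scene, finger_link, target_actor):
    # Pose of the target expressed in the finger frame at the moment of contact.
    rel = finger_link.get_pose().inv().transform(target_actor.get_pose())
    # Drive frames: `rel` in the finger, identity in the target, so the
    # constraint holds the current relative pose.
    drive = scene.create_drive(finger_link, rel, target_actor, sapien.Pose())
    drive.lock_motion(True, True, True, True, True, True)  # lock x, y, z, rx, ry, rz
    return drive

Calling stick(...) after detecting contact (e.g. via scene.get_contacts()) would then emulate a suction gripper that never releases the object.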

Sapien 1.1.1 set camera intrinsics

I am trying to directly set the camera intrinsics from the focal lengths and principal point. For example, I have this intrinsic matrix:
1169.621094 0.000000 646.295044 0.000000
0.000000 1167.105103 489.927032 0.000000
0.000000 0.000000 1.000000 0.000000
0.000000 0.000000 0.000000 1.000000
Thank you!
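
Not an authoritative answer, but if your SAPIEN version only lets you set fovy, the pinhole relation fovy = 2 * atan(H / (2 * fy)) gives the field of view that matches the focal length; newer releases also appear to expose direct intrinsic setters, which would be the cleaner route if available. A small sketch (the 1280 x 960 image size is an assumption inferred from the principal point):

import numpy as np

fx, fy = 1169.621094, 1167.105103   # focal lengths in pixels
cx, cy = 646.295044, 489.927032     # principal point in pixels
width, height = 1280, 960           # assumed image size; substitute your own

fovy = 2 * np.arctan(height / (2 * fy))
fovx = 2 * np.arctan(width / (2 * fx))
print(np.rad2deg(fovy), np.rad2deg(fovx))

Note that this ignores any offset between (cx, cy) and the exact image center; reproducing that offset requires an API that accepts fx, fy, cx, cy directly.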

Caught an unknown exception

System:

  • OS version: MacOS
  • Python version (if applicable): Python 3.8
  • SAPIEN version (pip freeze | grep sapien): 1.0.0.rc2
  • Environment: Desktop

Describe the bug
When I run visualization, specifically this step:
self.controller.render()
The code throws the error:
RuntimeError: Caught an unknown exception!

Regarding Sapien Challenge

Hey, I saw on the Sapien website that a challenge is going to be released. What is the approximate date around which this challenge will be released?
Thanks!

How to set the mass, friction, and ray-tracing option?

I have some questions about SAPIEN:

  1. I found a function named actor_builder.set_mass_and_inertia, but I am not sure what the second and third required parameters are. How should I use this function to define the mass?

  2. I am wondering if it is possible to customize friction between objects in SAPIEN.

  3. How should I enable ray-tracing in SAPIEN for scene rendering?

I have gone through the documentation but cannot find the related APIs.
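
For what it is worth, a hedged sketch of how these three things are commonly set up; parameter names depend on the SAPIEN version, and the ray-tracing part assumes the Kuafu renderer shipped with SAPIEN 2, so double-check everything against the API reference for your install.

import sapien.core as sapien

engine = sapien.Engine()

# 3. Ray tracing: the Kuafu renderer (SAPIEN 2) ray traces; the default
#    VulkanRenderer is a rasterizer.
config = sapien.KuafuConfig()
config.spp = 64                     # samples per pixel
renderer = sapien.KuafuRenderer(config)
engine.set_renderer(renderer)

scene = engine.create_scene()

# 2. Friction: a physical material carries static friction, dynamic friction,
#    and restitution; pass it when adding collision shapes.
material = engine.create_physical_material(0.8, 0.6, 0.1)

builder = scene.create_actor_builder()
builder.add_box_collision(half_size=[0.1, 0.1, 0.1], material=material)
builder.add_box_visual(half_size=[0.1, 0.1, 0.1])

# 1. Mass and inertia: set_mass_and_inertia(mass, cmass_pose, inertia) takes
#    the mass in kg, the pose of the center of mass in the actor frame, and
#    the principal moments of inertia about that frame's axes.
builder.set_mass_and_inertia(1.5, sapien.Pose([0, 0, 0]), [0.01, 0.01, 0.01])

box = builder.build(name='box')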

Segmentation Fault when initializing with Vulkan

I have been stuck on the Vulkan environment setup for days. After installing SAPIEN via pip install sapien and trying to run hello_world.py, it shows vk::PhysicalDevice::getSurfaceFormatsKHR: ErrorInitializationFailed.

I think the problem may be related to Vulkan, so I installed the Vulkan dependencies via apt-get install libvulkan1 mesa-vulkan-drivers vulkan-utils and ran the command vulkaninfo, which ends with the segmentation fault shown below:


(sapien) root@node146246:~# vulkaninfo
===========
VULKAN INFO
===========

Vulkan Instance Version: 1.1.70

Instance Extensions:
====================
Instance Extensions     count = 15
        VK_KHR_device_group_creation        : extension revision  1
        VK_KHR_display                      : extension revision 23
        VK_KHR_external_fence_capabilities  : extension revision  1
        VK_KHR_external_memory_capabilities : extension revision  1
        VK_KHR_external_semaphore_capabilities: extension revision  1
        VK_KHR_get_physical_device_properties2: extension revision  2
        VK_KHR_get_surface_capabilities2    : extension revision  1
        VK_KHR_surface                      : extension revision 25
        VK_KHR_xcb_surface                  : extension revision  6
        VK_KHR_xlib_surface                 : extension revision  6
        VK_EXT_acquire_xlib_display         : extension revision  1
        VK_EXT_debug_report                 : extension revision  9
        VK_EXT_debug_utils                  : extension revision  2
        VK_EXT_direct_mode_display          : extension revision  1
        VK_EXT_display_surface_counter      : extension revision  1
Layers: count = 3
=======
VK_LAYER_NV_optimus (NVIDIA Optimus layer) Vulkan version 1.2.168, layer version 1
        Layer Extensions        count = 0
        Devices         count = 8
                GPU id       : 0 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 1 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 2 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 3 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 4 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 5 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 6 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 7 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0

VK_LAYER_NV_optimus (NVIDIA Optimus layer) Vulkan version 1.2.168, layer version 1
        Layer Extensions        count = 0
        Devices         count = 8
                GPU id       : 0 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 1 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 2 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 3 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 4 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 5 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 6 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 7 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0

VK_LAYER_LUNARG_standard_validation (LunarG Standard Validation Layer) Vulkan version 1.0.70, layer version 1
        Layer Extensions        count = 0
        Devices         count = 8
                GPU id       : 0 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 1 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 2 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 3 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 4 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 5 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 6 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0
                GPU id       : 7 (NVIDIA TITAN Xp)
                Layer-Device Extensions count = 0

Presentable Surfaces:
=====================
GPU id       : 0 (NVIDIA TITAN Xp)
Surface type : VK_KHR_xcb_surface
Formats:                count = 0
Present Modes:          count = 3
        FIFO_KHR
        FIFO_RELAXED_KHR
        IMMEDIATE_KHR
Segmentation fault (core dumped)

The project is running on a headless remote server with Ubuntu 18.04 and 8 TITAN Xp GPUs. I tried installing VNC for desktop visualization, but nothing changed. Can anybody help me with the environment setup? What should I do to set up the Vulkan environment correctly? There is very little information about this on the Internet. Thanks a lot!

Unexpected Segmentation Fault

System:

  • OS version: Ubuntu 20.04
  • Python version (if applicable): Python 3.8
  • SAPIEN version (pip freeze | grep sapien): 0.7.1dev0
  • Environment: Desktop

Describe the bug
Running a minimal example gives a segmentation fault immediately after the engine.create_scene() call.
I tried running SAPIEN on VMware Workstation Player 15, Windows Subsystem for Linux (WSL), and a physical machine with a GTX 1050; the outcome is the same in all cases.

The code

import sapien.core as sapien

engine = sapien.Engine()
renderer = sapien.VulkanRenderer()
engine.set_renderer(renderer)

scene0 = engine.create_scene()
scene0.set_timestep(1 / 240)

Screenshots
(The Ubuntu system language is Chinese, so the 'segmentation fault' message in the screenshots is in Chinese.)
Using OptifuserRenderer:
(screenshot taken 2021-03-03 19:40:32)

Using VulkanRenderer:
(screenshot taken 2021-03-03 19:41:47)

Segmentation fault on sapien.VulkanRenderer()

System:

  • OS version: MacOS 10.15.1
  • Python version (if applicable): Python 3.7
  • SAPIEN version (pip freeze | grep sapien): 1.0.0rc2
  • Environment: Local machine

Describe the bug

import sapien.core as sapien
renderer = sapien.VulkanRenderer()

This results in the program quitting with no error message and Segmentation fault: 11

Any ideas?

Velocity control

First, thanks for this great library. Great work!

The documentation provides great examples of how to do position control using the built-in PhysX controllers (as well as custom controllers), but it looks like velocity control isn't fully exposed. Following the API calls, it seems that setDriveVelocityTarget needs to be exposed in SArticulation much like setDriveTarget already is. I can submit a PR for this, but is there anything I am missing?
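
For reference, later SAPIEN releases appear to expose a velocity drive on joints; a minimal sketch assuming the 2.x Python API (method names should be verified against your version, and a PR exposing the binding would still help versions that lack it):

def command_joint_velocities(robot, velocities):
    # `robot` is a sapien.core.Articulation loaded earlier (e.g. via a URDF loader).
    for joint, v in zip(robot.get_active_joints(), velocities):
        # stiffness = 0 so the drive acts purely on velocity error;
        # damping is the velocity gain, force_limit caps the drive force.
        joint.set_drive_property(stiffness=0.0, damping=50.0, force_limit=100.0)
        joint.set_drive_velocity_target(v)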

Multiprocess problem

I built a class with self.sim = sapien_core.Engine().
I build one object, delete it, and then build another new one, and I get this runtime error:

RuntimeError: logger with name 'SAPIEN' already exists

vkCore: Missing physical device extension: VK_KHR_pipeline_library

System:

  • OS version: Ubuntu 18.04
  • Python version (if applicable): Python 3.8.12
  • SAPIEN version (pip freeze | grep sapien): 2.0a0
  • Environment: Desktop

Describe the bug

engine = sapien.Engine()
render_config = sapien.KuafuConfig()
render_config.use_viewer = False
render_config.spp = 32
render_config.max_bounces = 8
render_config.use_denoiser = True
renderer = sapien.KuafuRenderer(render_config)

gives:

INFO - 2021-11-04 13:30:47,535 - topics - topicmanager initialized
[2021-11-04 13:30:47.674] [kuafu] [info] Camera is not yet usable due to uninitialized context!
[2021-11-04 13:30:47.674] [kuafu] [info] Offscreen mode enabled.
[2021-11-04 13:30:47.674] [kuafu] [warning] Denoiser ON! You must have an NVIDIA GPU with driver version > 470 installed.
vkCore: Missing physical device extension: VK_KHR_pipeline_library
[1]    11190 segmentation fault (core dumped)  python ./renderer/sapien_render.py

Expected behavior

The exact same code runs on another desktop with the same Vulkan-related packages installed:

$ apt list --installed | grep vulkan

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libvulkan-dev/bionic,now 1.2.189.0~rc1-1lunarg18.04-1 amd64 [installed,automatic]
libvulkan1/bionic,now 1.2.189.0~rc1-1lunarg18.04-1 amd64 [installed]
lunarg-vulkan-layers/bionic,now 1.2.189.0~rc2-1lunarg18.04-1 amd64 [installed,automatic]
mesa-vulkan-drivers/bionic-updates,now 20.0.8-0ubuntu1~18.04.1 amd64 [installed]
vulkan-extensionlayer/bionic,now 1.2.189.0~rc1-1lunarg18.04-1 amd64 [installed,automatic]
vulkan-headers/bionic,bionic,now 1.2.189.0~rc1-1lunarg18.04-1 all [installed,automatic]
vulkan-sdk/bionic,bionic,now 1.2.189.0~rc2-1lunarg18.04-1 all [installed]
vulkan-tools/bionic,now 1.2.189.0~rc2-1lunarg18.04-1 amd64 [installed,automatic]
vulkan-validationlayers/bionic,now 1.2.189.0~rc2-1lunarg18.04-1 amd64 [installed,automatic]
vulkan-validationlayers-dev/bionic,now 1.2.189.0~rc2-1lunarg18.04-1 amd64 [installed,automatic]

The only noted difference is that the NVIDIA driver version on the functioning desktop is 470.xx, while the version on the failing desktop is 495.44 (which satisfies the "driver version > 470" requirement mentioned in the terminal output above). Nevertheless, this difference should not cause problems with Vulkan.

Is there an easy way to get the 2D pixel position of a 3D point?

Sorry to trouble again!
I am trying to obtain the 2D pixel position of a 3D point after projection by a perspective camera.
What confuses me are the extrinsic matrix and the coordinate conventions. How can I get the camera extrinsic parameters from the camera pose I use for camera_mount_actor.set_pose and the pose argument in camera = scene.add_mounted_camera?
I tried to use get_model_matrix and get_projection_matrix, but I still don't understand what the two matrices mean...
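
Not an official answer, but as far as I understand get_model_matrix() is the camera-to-world transform in the OpenGL convention, so its inverse is the extrinsic matrix, and get_projection_matrix() maps camera coordinates to clip space. A sketch of projecting a world point to pixel coordinates under those assumptions (conventions can differ between versions and renderers, so sanity-check by projecting a point whose pixel location you already know):

import numpy as np

def world_point_to_pixel(camera, point_world):
    model = camera.get_model_matrix()       # camera -> world (OpenGL convention)
    extrinsic = np.linalg.inv(model)        # world  -> camera, i.e. the extrinsics
    proj = camera.get_projection_matrix()

    p_cam = extrinsic @ np.append(point_world, 1.0)
    p_clip = proj @ p_cam
    ndc = p_clip[:3] / p_clip[3]            # normalized device coordinates in [-1, 1]

    u = (ndc[0] * 0.5 + 0.5) * camera.width             # column
    v = (1.0 - (ndc[1] * 0.5 + 0.5)) * camera.height    # row; image origin is top-left
    return u, v

As far as I can tell, the pose you pass to set_pose / add_mounted_camera is the camera's pose in the world (in SAPIEN's x-forward convention), and get_model_matrix() is that same pose re-expressed in the graphics convention, so inverting it yields the extrinsics.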

Requirements for the latest version of SAPIEN

System:

  • OS version: macOS Big Sur (Version 11.1)
  • Python version (if applicable): 3.7
  • SAPIEN version: 0.6.0.dev0

Describe the bug
Hey, I am unable to install SAPIEN versions newer than 0.6.0.dev0 on my machine. What are the exact requirements for later versions?

Explicitly close a scene in the engine

In my project, the simulator needs to reset the environment many times.

I found that there might be two ways to achieve so:

  1. clear all objects (also called actors).
  2. delete the scene and re-initialize it. In my case, I just assign None to it.

However, approach 1 throws an exception, and with approach 2 the memory usage grows gradually and each re-initialization becomes slower and slower.

Is there any way to explicitly and completely delete a scene in an engine?

How to move end-effector to handle?

I downloaded a drawer (sapien_assets_id = 41083) and am using a panda robot. How can I move the end-effector to the handle of the drawer? Currently, I am guessing and checking both the pose of the end-effector with the self.move_to_pose(pose, with_screw) function, as well as the pose of the drawer, which as you can imagine is very slow and tedious.

I'm not entirely sure how to get the handle location: asset.get_joints()[-2].get_pose_in_child() gives me Pose([0, 0, 0], [0.707107, 0, -0.707107, 0]) while asset.get_joints()[-2].get_child_link().pose gives me Pose([0, 1.86265e-09, 0], [1, 0, 0, 0]); why are these two different? Additionally, both of these values don't seem to update even after I try setting the pose with asset.set_pose(sapien.Pose([1.0155, 0.25, -0.1])) -- is this because these values are relative to the root pose?
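
Not a definitive answer, but the two values live in different frames, which is likely why they differ: get_pose_in_child() is the joint frame expressed in the child link's frame (a constant that never changes), while .pose on a link is a world pose. A sketch of composing them to get the joint frame (roughly where the handle's joint sits) in world coordinates, assuming Pose.transform composes poses and that link poses refresh once the articulation has actually been moved or stepped:

# `asset` is the loaded drawer articulation from the question.
joint = asset.get_joints()[-2]
child_link = joint.get_child_link()

# World pose of the joint frame = world pose of the child link composed with
# the joint frame expressed in the child link.
handle_joint_world = child_link.get_pose().transform(joint.get_pose_in_child())
print(handle_joint_world)

If the printed pose still looks stale after asset.set_pose(...), it is worth querying again after a scene.step() / scene.update_render(), since cached link poses may only refresh then.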

Failed to cook mesh

System:

  • OS version: Ubuntu 18.04
  • Python version (if applicable): 3.6.13
  • SAPIEN version (pip freeze | grep sapien): 0.8.0
  • Environment: Ubuntu Desktop

Describe the bug
I have downloaded the full PartNet-Mobility dataset, but it reports errors when I try to load some particular objects, such as the TrashCan instances with ids 102259, 103012, and 102244. See the screenshot.

Screenshots
(screenshot: sapien_error)

RuntimeError: vk::Device::allocateDescriptorSetsUnique: ErrorOutOfPoolMemory

System:

  • OS version: Ubuntu 16.04
  • Python version (if applicable): Python 3.7
  • SAPIEN version (pip freeze | grep sapien): sapien-2.0-cp37-cp37m-manylinux2014_x86_64.whl
  • Environment: Desktop

Describe the bug
Hi fb, I'd like to use SAPIEN to iterate over all 2347 shapes within one Python script (like a dataloader). To achieve that, I built one unique engine (= sapien.Engine(0, 0.001, 0.005)), one unique renderer (= sapien.VulkanRenderer(offscreen_only=not self.settings['show_gui'])), one unique scene (= self.engine.create_scene(config=scene_config)), and one unique viewer (Viewer(self.renderer)).

Therefore, at the beginning of each iteration I (1) create a new camera and load a new object, (2) update the viewer's parameters (including x, y, z, r, p, y, and fovy) given the newly created camera, and (3) call self.viewer.render() for visualization purposes; at the end of each iteration I remove them from self.scene.

For the first 210+ objects, everything works fine: I can successfully iterate over them and pop up a viewer window for each. However, the following error then shows up. I suppose this error has nothing to do with corrupted shapes, as (1) when I repeat the script multiple times, it still runs successfully for the first 210+ iterations but stops at different iterations (like 213, 215, etc.), and (2) I have monitored both my GPU and RAM, and both stay far below their limits when the error pops up.

May I have your guidance or comments on this, please? Thanks!

Update: I have re-run the scripts without self.viewer (and self.viewer.render()), i.e., with no visualization/GUI; the same error pops up at self.camera.take_picture() after around 210+ iterations.

Error traceback:
Traceback (most recent call last):
  File "xx", line xx, in xx
    self.viewer.render()
  File "/home/xx/anaconda3/lib/python3.7/site-packages/sapien/utils/viewer.py", line 1786, in render
    self.info_window,
RuntimeError: vk::Device::allocateDescriptorSetsUnique: ErrorOutOfPoolMemory

No matching distribution found for sapien

System:

  • OS version: MacOS Catalina / Ubuntu 20.04
  • Python version: Python 3.6
  • SAPIEN version: Default
  • Environment: Desktop

Describe the bug
When I run "pip install sapien" or "pip3 install sapien", on either macOS or Ubuntu, the package cannot be found or installed, and the following errors occur:

ERROR: Could not find a version that satisfies the requirement sapien (from versions: none)
ERROR: No matching distribution found for sapien

I am wondering if there is something wrong with my system or python version?

Clear a single actor

Is there a way to explicitly clear only one specific actor in the scene?
Thank you!
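
A hedged pointer: recent SAPIEN versions appear to expose per-actor removal on the scene (verify the exact method names for your version):

import sapien.core as sapien

engine = sapien.Engine()
renderer = sapien.VulkanRenderer(offscreen_only=True)
engine.set_renderer(renderer)
scene = engine.create_scene()

builder = scene.create_actor_builder()
builder.add_box_visual(half_size=[0.1, 0.1, 0.1])
box = builder.build(name='box')

# Remove just this actor; everything else in the scene is untouched.
scene.remove_actor(box)
# Articulations appear to have a counterpart: scene.remove_articulation(...).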

Error (cannot build dynamic articulation with more than 64 links) when loading some instances with loader.load_kinematic(urdf_path)

System:

  • OS version: MacOS Big Sur
  • Python version (if applicable): Python 3.8.10
  • SAPIEN version (pip freeze | grep sapien): sapien==1.0.0rc2

Describe the bug
I downloaded the script from https://sapien.ucsd.edu/docs/latest/tutorial/rendering/camera.html, modified the instance index, and got a bug.
original:

    loader = scene.create_urdf_loader()
    loader.fix_root_link = True
    urdf_path = '../assets/179/mobility.urdf'
    # load as a kinematic articulation
    asset = loader.load_kinematic(urdf_path)
    assert asset, 'URDF not loaded.'

mine:

    loader = scene.create_urdf_loader()
    loader.fix_root_link = True
    urdf_path = '../assets/13027/mobility.urdf'
    # load as a kinematic articulation
    asset = loader.load_kinematic(urdf_path)
    assert asset, 'URDF not loaded.'

and I got this message:

[2021-07-07 14:14:52.532] [SAPIEN] [error] cannot build dynamic articulation with more than 64 links
[1] 35806 segmentation fault /opt/anaconda3/envs/detection/bin/python

It seems that asset = loader.load_kinematic(urdf_path) does not work.

This is the 13027 folder in the dataset (which contains a keyboard with 100+ parts)

I observed this error when there are too many parts in one instance, and I have not been able to render images for many instances in the Keyboard category. How can I fix that problem?

Thanks!

Foundation object exists already. Only one instance per process can be created.

System:

  • OS version: Ubuntu 16.04
  • Python version (if applicable): Python 3.6
  • SAPIEN version : 0.7.0.dev0
  • Environment: conda virtual environment

Describe the bug
I generate the scene in a loop; for every scene, I load models into it one by one, then remove all the models and set scene = None. But in the third cycle, when I create the scene, the program exits abnormally. The error messages are as follows.

[SAPIEN] [critical] Foundation object exists already. Only one instance per process can be created.
[SAPIEN] [critical] Foundation destruction failed due to pending module references. Close/release all depending modules first.
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

Get segmentation for a moving link

I move a link by adjusting the joints, and I wish to get the segmentation mask for that specific moving link.
Is there any way to map an adjusted joint to a moving link, and then map the moving link's name to a specific label in the result of camera.get_uint32_texture('Segmentation')?
Or is there another way to reach this goal?

When I do:

seg_labels = camera.get_uint32_texture('Segmentation')
seg_array = np.array(seg_labels)
print(np.unique(seg_array[:, :, 0]))

the output is hard to relate to the different links:
[0 1 12 18 19 20 21 31]
Also, sorry, I don't understand how the segmentation rendering works in the source code.

I would really appreciate it if you could help. Thank you!
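
Not an official answer, but as far as I can tell every actor/link carries a segmentation id (link.get_id()), and the 'Segmentation' texture stacks a per-visual id and a per-actor/link id in its first two channels (the channel order may differ between versions, so check both). A sketch that maps a joint to its child link and then to a mask, reusing the asset and camera from the question:

import numpy as np

seg = camera.get_uint32_texture('Segmentation')   # H x W x 4, uint32
mesh_ids, link_ids = seg[..., 0], seg[..., 1]     # per-visual ids, per-link ids

# Name -> segmentation id lookup over the articulation's links.
id_by_name = {link.get_name(): link.get_id() for link in asset.get_links()}

# The link moved by a given joint is that joint's child link; pick the joint you adjusted.
joint = asset.get_joints()[-1]
moving_link = joint.get_child_link()
mask = link_ids == moving_link.get_id()
print(moving_link.get_name(), int(mask.sum()), 'pixels')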

About the render

Is it possible to use the renderer in SAPIEN as a differentiable renderer for back-propagation?
I.e., use SAPIEN rendering as a component in a neural network pipeline and back-propagate losses to modules before the renderer.
Thanks a lot!

ImportError with SAPIEN 2.0

System:

  • OS version: Ubuntu 18.04
  • Python version (if applicable): Python 3.7
  • SAPIEN version (pip freeze | grep sapien): 2.0a0
  • Environment: Desktop

Describe the bug
I try to import SAPIEN 2 and get the ImportError below.

Traceback (most recent call last):
  File "xxxxxx.py", line 1, in <module>
    import sapien.core as sapien
  File "/home/xxx/anaconda3/envs/xx/lib/python3.7/site-packages/sapien/__init__.py", line 1, in <module>
    from sapien import core, sensor, asset, example, utils
  File "/home/xxx/anaconda3/envs/xx/lib/python3.7/site-packages/sapien/core/__init__.py", line 1, in <module>
    from .pysapien import *
ImportError: libvulkan.so.1: cannot open shared object file: No such file or directory

To Reproduce
Steps to reproduce the behavior:

  1. pip install sapien==2.0a0
  2. python

import sapien.core as sapien

Additional context
I can correctly import sapien 1.1.1 in the same way, but it fails with sapien 2.0a0.

About joint features after loading urdf

I have loaded the URDF of model 167, in which, for example, two of the initial joint definitions are:

<joint name="joint_1" type="continuous">
		<origin xyz="0 -0.4051616942026166 -0.18586969420261662"/>
		<axis xyz="1 0 0"/>
		<child link="link_1"/>
		<parent link="link_2"/>
	</joint>
<joint name="joint_3" type="continuous">
		<origin xyz="-0.0015881912962019719 0 -0.18586980870379802"/>
		<axis xyz="0 1 0"/>
		<child link="link_3"/>
		<parent link="link_2"/>
	</joint>

However, when I then print j.type and np.rad2deg(t3d.euler.quat2euler(j.get_pose_in_parent().q)), I get:
for joint_1, type revolute and
[ 1.80000000e+02 -0.00000000e+00 -2.03555475e-12] = [180, 0, 0]
for joint_3, type revolute and
[-9.00000000e+01 -3.41509397e-06 9.00000000e+01] = [-90, 0, 90]
By default in a URDF file, I guess origin rpy should be [0, 0, 0], so why do the two joints have different get_pose_in_parent results, and why has the joint type changed?
Thanks a lot!
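
A hedged explanation plus a small check: my understanding is that the URDF loader re-expresses each joint frame so that the joint's rotation axis becomes the local x-axis (the PhysX convention), which would also explain why "continuous" shows up as revolute (it is loaded as an unlimited revolute joint). Rotating the x-axis by the reported quaternion should therefore recover the <axis> from the URDF, which the values above are consistent with:

import numpy as np
import transforms3d as t3d

# Quaternions equivalent to the Euler angles printed above (default 'sxyz' axes).
q_joint_1 = t3d.euler.euler2quat(*np.deg2rad([180, 0, 0]))
q_joint_3 = t3d.euler.euler2quat(*np.deg2rad([-90, 0, 90]))

for name, q, urdf_axis in [('joint_1', q_joint_1, [1, 0, 0]),
                           ('joint_3', q_joint_3, [0, 1, 0])]:
    # Local x-axis of the joint frame expressed in the parent frame.
    axis_in_parent = t3d.quaternions.quat2mat(q) @ np.array([1.0, 0.0, 0.0])
    print(name, np.round(axis_in_parent, 3), 'vs URDF axis', urdf_axis)

Both joints should print an axis matching their URDF <axis>, i.e. the orientation change encodes the axis alignment rather than any modification of the model.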

Set gravity in a scene

SAPIEN version: 0.7.0.dev0
System: Ubuntu 16.04
When I want to set gravity in a scene, I use sapien.engine.create_scene(gravity=[0, 0, -2]) but get this TypeError:

TypeError: create_scene(): incompatible function arguments. The following argument types are supported:
1. (self: sapien.core.pysapien.Engine, config: sapien.core.pysapien.SceneConfig = <sapien.core.pysapien.SceneConfig object at 0x7f92ec7b6928>) -> sapien.core.pysapien.Scene
Invoked with: <sapien.core.pysapien.Engine object at 0x7f92f2338fb8>; kwargs: gravity=[0, 0, -2]

Why would this happen and how can I solve this problem?
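
In case it helps: as the error message says, create_scene takes a SceneConfig object rather than keyword arguments. A sketch assuming SceneConfig exposes a gravity field (it appears to in this version):

import sapien.core as sapien

engine = sapien.Engine()

config = sapien.SceneConfig()
config.gravity = [0, 0, -2]        # m/s^2; the default is [0, 0, -9.81]
scene = engine.create_scene(config)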

Scenarios from paper

Thanks for the great resource!

Does this repo contain the scenes presented in the paper:
(figure from the SAPIEN paper showing scenes populated with objects and robots)

Or do you have to manually populate all these objects/robots?

Trouble downloading the dataset

System:

  • OS version: Ubuntu 20.04
  • Python version: Python 3.8
  • SAPIEN version (pip freeze | grep sapien):
  • Environment: Desktop

Describe the bug
I am getting the following error when downloading the dataset.
NameError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 urdf = download_partnet_mobility(sapien_assets_id, token=my_token)
NameError: name 'download_partnet_mobility' is not defined

To Reproduce
Steps to reproduce the behavior:

  1. Pip install sapien and then follow the download steps on the SAPIEN website
  2. import sapien
    my_token = "my token in quotes"
    sapien_assets_id = 179
    urdf = download_partnet_mobility(sapien_assets_id, token=my_token)

Expected behavior
Successful download of the partnet dataset

Additional context
I have successfully installed sapien using pip into my Python virtual environment. At the moment, my virtual environment has Python 3.8 and the sapien and requests modules.
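
Possibly relevant: the NameError just means the helper was never imported into the session. As far as I know the download helper lives in sapien.asset (the module path may vary by version, so verify against your install):

from sapien.asset import download_partnet_mobility

my_token = "my token in quotes"
sapien_assets_id = 179
urdf = download_partnet_mobility(sapien_assets_id, token=my_token)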

Vulkan ErrorInitializationFailed

System:

  • OS version: Ubuntu 18.04
  • Python version: Python 3.8.8
  • SAPIEN version: 0.8.0.dev
  • Environment: Server with xvfb

Describe the bug
It works fine with the OpenGL version of sapien, but when I switch to Vulkan, it reports:

RuntimeError: vk::PhysicalDevice::getSurfaceFormatsKHR: ErrorInitializationFailed

Any advice?

Any demo for grasping ?

Is there any detailed demo or example code for grasping? It is a basic problem in robotics, and an example would be useful for us to build on.
Thanks.

Install SAPIEN 0.6.0

How can I install SAPIEN version 0.6.0? Can I install it using the command "pip install"?

Manual object manipulation

Hi, thanks for sharing the simulator. I have been very impressed, and would like to invest more time on this. I have a few questions and I am not sure if you have any easy solution (potentially for non-robotics people).

  1. I want to try out how good the physical simulation is at articulating objects with a gripper (say, the Panda's). However, since I come from a vision background, writing a custom controller is pretty resource-intensive for just a try-out. Is there any easy solution that would let me poke around with a gripper to open doors, etc.?
  2. I am also looking for a robotic hand (similar to human hands). Is there anything available elsewhere that would be compatible with SAPIEN?

Thanks

Soft finger model loading

Are parameters like patchRadius and minPatchRadius for the whole model or just the finger? Can I change these parameters after building the object? I find that the object's attributes do not contain patchRadius.

Add proximity sensor?

Is your feature request related to a problem? Please describe.
It is hard to implement behavior like a suction cup, where I only need distance information along a ray (check whether suction is successful by checking if a proximity sensor detects any object; the V-REP way).

Describe the solution you'd like
I would like a proximity sensor parameterized by pose, sensing distance, and sensing volume. The sensor could be attached to a body the way cameras are, and it could return the distance and the object ID if something is detected. Please see the PyRep proximity sensor for reference.

Describe alternatives you've considered
Camera with 1x1 resolution.

Additional context
Trying to implement this.
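
Regarding the "camera with 1x1 resolution" alternative: a hedged sketch of reading a single distance value from a tiny mounted camera's Position texture (the texture name and sign convention follow the camera tutorial, and the add_mounted_camera signature varies across versions, so verify both):

import numpy as np
import sapien.core as sapien

engine = sapien.Engine()
renderer = sapien.VulkanRenderer(offscreen_only=True)
engine.set_renderer(renderer)
scene = engine.create_scene()

# A 1x1 "proximity sensor": mount it where the suction cup sits; far clip = sensing range.
mount = scene.create_actor_builder().build_kinematic(name='sensor_mount')
sensor = scene.add_mounted_camera(
    name='proximity', actor=mount, pose=sapien.Pose(),
    width=1, height=1, fovy=np.deg2rad(1), near=0.001, far=0.5)

scene.step()
scene.update_render()
sensor.take_picture()

position = sensor.get_float_texture('Position')   # 1 x 1 x 4, camera-frame coordinates
hit = position[0, 0, 3] < 1                       # w < 1 marks a pixel that hit geometry
distance = -position[0, 0, 2]                     # z is negative in front of the camera
print('detected:', bool(hit), 'distance:', float(distance) if hit else None)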
