
mohou_ros's Introduction


This package provides a ROS wrapper for mohou, which enables data collection (via kinesthetic teaching or an HTC Vive controller), conversion of the data to mohou's format, training, and execution on a real robot.

installation

Install this package with pip:

pip3 install -e .

If you are worried about polluting your system Python, use virtualenv or a similar tool. A future direction is to use a catkin virtual environment.

If you get stuck installing opencv-python because of skbuild, please refer to https://stackoverflow.com/questions/63448467/installing-opencv-fails-because-it-cannot-find-skbuild

Workspace build (Noetic)

We need to build a workspace because pr2eus is not yet released for Noetic:

sudo apt-get install python3-wstool python3-catkin-tools
source /opt/ros/noetic/setup.bash
mkdir -p ~/mohou_ws/src
cd ~/mohou_ws/src
git clone https://github.com/HiroIshida/mohou_ros.git
wstool init
wstool merge mohou_ros/rosinstall.noetic
wstool update
rosdep install --from-paths . --ignore-src -y -r
cd ~/mohou_ws
catkin init
catkin build

Workspace build (Melodic + Python3)

If you are using melodic, you need to build some packages from source with the following configuration to use python3.

sudo apt-get install python3-catkin-pkg-modules python3-rospkg-modules python3-venv python3-empy
sudo apt-get install python-catkin-tools python-wstool ros-melodic-rostest
source /opt/ros/melodic/setup.bash
mkdir -p ~/mohou_ws/src
cd ~/mohou_ws/src
git clone https://github.com/HiroIshida/mohou_ros.git
wstool init
wstool merge mohou_ros/rosinstall.melodic
wstool update
rosdep install --from-paths . --ignore-src -y -r
cd ~/mohou_ws
catkin init
catkin config -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so
catkin build

Usage

Most of the commands below end with a -pn {your_project_name} option. This option can be omitted by setting a global project configuration: add a primary_project_name entry to ~/.mohou/setting.yaml. To do that, run the following command (replace your_project_name with your project name):

echo "primary_project_name: your_project_name" >> ~/.mohou/setting.yaml
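The fallback behaviour described above can be sketched in Python. This is an illustration only: the function name and the line-based parsing are assumptions, not the actual CLI implementation.

```python
from pathlib import Path


def resolve_project_name(cli_name=None, setting_path="~/.mohou/setting.yaml"):
    """Return the explicit CLI name if given, else fall back to
    primary_project_name in ~/.mohou/setting.yaml (illustrative sketch)."""
    if cli_name is not None:
        return cli_name
    path = Path(setting_path).expanduser()
    if path.exists():
        for line in path.read_text().splitlines():
            key, _, value = line.partition(":")
            if key.strip() == "primary_project_name":
                return value.strip()
    raise ValueError("no project name given and primary_project_name is not set")
```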

(1) prepare per-project config file

mohou and mohou_ros manage each project through a corresponding project directory under ~/.mohou. For example, if your project is named pr2_kitchen, all configs and rosbags must be put under the ~/.mohou/pr2_kitchen directory. Training results, such as the trained autoencoder and trained LSTM, are also put under the project directory. So, the first step is creating a project directory:

mkdir ~/.mohou/{your_project_name}

Then, create a ~/.mohou/{your_project_name}/main_config.yaml file and edit it while referencing an example. main_config.yaml basically configures which ROS topic (e.g. CompressedImage and JointState) is converted to which mohou primitive type (e.g. AngleVector and RGBImage).
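To illustrate what such a topic-to-type mapping does, here is a minimal validation sketch. The mapping structure and the helper name are assumptions for illustration; the actual schema of main_config.yaml may differ, so consult the linked example.

```python
def validate_topic_mapping(mapping, known_types=("AngleVector", "RGBImage", "GripperState")):
    """Check a hypothetical {mohou_type: ros_topic} mapping (illustrative only)."""
    for mohou_type, topic in mapping.items():
        # each key must be a mohou primitive type, each value a global topic name
        assert mohou_type in known_types, "unknown mohou type: {}".format(mohou_type)
        assert topic.startswith("/"), "topic must be a global name: {}".format(topic)
    return True
```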

(1') adding custom topic conversion rule (advanced)

Each conversion rule can be found in mohou_ros_utils/conversion.py. Let us explain how to add a new custom rule. Say you bought a new robot named Pikachu and want to add a rule from PikachuDualArmGripperStates (a made-up message type) to GripperState. In that case, add the following class to mohou_ros_utils/conversion.py:

@dataclass
class PikachuGripperStateConverter(MessageConverter[PikachuDualArmGripperStates, GripperState]):
    @classmethod
    def out_element_type(cls) -> Type[GripperState]:
        return GripperState

    def apply(self, msg: PikachuDualArmGripperStates) -> GripperState:  # type: ignore[override]
        # please write the main conversion rule that converts the message to a single vector
        values = []
        values.append(msg.right_arm_gripper.finger1.command)
        values.append(msg.right_arm_gripper.finger2.command)
        values.append(msg.left_arm_gripper.finger1.command)
        values.append(msg.left_arm_gripper.finger2.command)
        vec = np.array(values)
        return GripperState(vec)

    @classmethod
    def from_config(cls, config: Config):  # in many cases you don't have to edit this
        assert cls.is_compatible(config)
        return cls.from_config_topic_name_only(config)

    @classmethod
    def input_message_type(cls) -> Type[PikachuDualArmGripperStates]:
        return PikachuDualArmGripperStates

Any class name is fine. After adding the class, the high-level converter, reading ~/.mohou/{your_project_name}/main_config.yaml, automatically selects the compatible converter according to the input and output types (in this case, PikachuDualArmGripperStates to GripperState).
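The type-based selection described above can be sketched as follows. This is a minimal illustration; select_converter is a made-up name, not the actual mohou_ros API.

```python
def select_converter(converters, msg_type, element_type):
    """Pick the converter whose declared input/output types match (sketch)."""
    for conv in converters:
        if conv.input_message_type() is msg_type and conv.out_element_type() is element_type:
            return conv
    raise LookupError("no converter from {} to {}".format(msg_type, element_type))
```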

(2) save home position

The home position is the initial robot joint configuration. By saving and applying it, you can keep the conditions of the data-collection phase and the test phase consistent.

rosrun mohou_ros save_home_position.py -pn {your_project_name}

(3) save rosbag

Please save your rosbag files under ~/.mohou/{project_name}/rosbag. Each rosbag file name must end with the .bag extension.

You can collect rosbags in whatever way you like. To make it easier, this package provides

save_rosbag.py

rosrun mohou_ros save_rosbag.py -pn {your_project_name} # with `--gif` option, gif files of rgb image will be dumped for debugging

(PR2 only) You can also use a Vive controller to teach the PR2 robot and save rosbags:

rosrun mohou_ros vive_controller_pr2.py -pn {your_project_name}

The button-to-usage map of the Vive controllers is as follows. Note that the right and left arm controllers have slightly different usages. The following image is taken from https://github.com/knorth55/eus_vive

Right arm controller

Button Usage
1 start / stop saving rosbag
2 start / stop tracking
7 grasp / open
8 return to home position

Left arm controller

Button Usage
1 delete the latest rosbag
2 start / stop tracking
7 grasp / open
8 return to home position

NOTE: when you delete the latest rosbag right after you stop saving, please wait a few seconds.

(4) tuning the image config

Interactively create the image config, which includes crop, Gaussian blur, and HSV filter settings.

# press Ctrl-C to quit and save the configuration as `image_config.yaml` under the project folder.
rosrun mohou_ros tune_image_filter.py -pn {your_project_name}

GaussianBlurFilter:kernel_width = 5 is recommended. Altering the HSV values is not recommended. ResolutionChangeResizer:resol changes the image resolution; it must be 112 or 224 due to the implementation on the mohou side.
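The crop-and-resize part of the filter chain can be illustrated with plain NumPy. This mimics only the idea that the final resolution must be 112 or 224; the real filters live in mohou_ros and the function below is not its API.

```python
import numpy as np


def crop_and_resize(image, crop_box, resol=224):
    """Crop an HxWxC image to crop_box = (x0, y0, x1, y1), then resize to
    resol x resol with nearest-neighbor sampling (a stand-in for cv2.resize)."""
    assert resol in (112, 224), "mohou expects 112x112 or 224x224 images"
    x0, y0, x1, y1 = crop_box
    cropped = image[y0:y1, x0:x1]
    ys = np.linspace(0, cropped.shape[0] - 1, resol).astype(int)
    xs = np.linspace(0, cropped.shape[1] - 1, resol).astype(int)
    return cropped[np.ix_(ys, xs)]
```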

(5) creating bundles

A bundle here means the binary dataset that will be used for training the deep networks. We must create this bundle by converting the rosbag data. This section explains how.

Convert the rosbag data to a bundle for training the autoencoder

For better autoencoder training, many images are required, so we want to create a bundle with a high frequency (20 Hz or higher is recommended).

rosrun mohou_ros bags2chunk.py -hz 20 -remove_policy donothing -postfix autoencoder -untouch 5 --compress --gif -pn {your_project_name}

Here, untouch means the number of episodes that will be kept untouched (not used in training). This is helpful when you want to use them only for visualization or debugging.

Other command line options for this script are:

option meaning
--compress compress the RGBImage sequence using mp4 when dumping the EpisodeBundle. This reduces the bundle size by a factor of more than 10.
--gif dump a gif image corresponding to each episode
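The frequency selection done by the -hz option can be sketched as picking messages spaced at least one period apart. This is an illustration, not the actual bags2chunk.py implementation.

```python
def resample(stamps, hz):
    """Return indices of messages spaced at least 1/hz seconds apart.
    stamps must be a sorted list of timestamps in seconds (sketch)."""
    period = 1.0 / hz
    picked = [0]  # always keep the first message
    for i in range(1, len(stamps)):
        if stamps[i] - stamps[picked[-1]] >= period:
            picked.append(i)
    return picked
```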

Convert the rosbag data to a bundle for training the LSTM

On the other hand, LSTM training requires lower-frequency data (5 Hz to 8 Hz is recommended).

rosrun mohou_ros bags2chunk.py -hz 5 -remove_policy remove -untouch 5 --compress --gif -pn {your_project_name}

remove-init policy

Sometimes the data is static in the initial phase of an episode, which is usually bad for LSTM training because the policy becomes dependent on a long history. remove_policy may fix such data.

  1. remove_policy = remove: an overly long static initial sequence is removed, and the trimmed sequence is added to the bundle. (recommended if many of your episodes need to be amended)

  2. remove_policy = skip: an episode with an overly long static initial sequence is skipped and not added to the bundle.

  3. remove_policy = donothing: every episode is added to the bundle regardless of the initial static phase. (recommended for autoencoder training)

Currently these removers handle only the initial state.
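The three policies above can be sketched as follows (illustrative; the real logic lives in bags2chunk.py and the function name is made up):

```python
def apply_remove_policy(seq, policy, static_len):
    """Apply a remove_policy to an episode whose first static_len steps are static."""
    if policy == "donothing":
        return seq               # keep the episode as-is
    if policy == "remove":
        return seq[static_len:]  # trim the static prefix, keep the episode
    if policy == "skip":
        return None              # drop the whole episode
    raise ValueError("unknown policy: {}".format(policy))
```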

(6) training

rosrun mohou_ros train.py -pn {your_project_name} -n_vae 1500 -n_lstm 20000

(7) execution (currently pr2 only)

set to home position

rosrun mohou_ros reset_to_home.py -pn {your_project_name}

execution using the trained policy

Without the --force argument, the real robot will not move, i.e., it is a dry run. The --terminate argument can be used to automatically terminate the program when the value of TerminateFlag exceeds a threshold. The threshold can be set with the -tt argument.

rosrun mohou_ros execute_pr2.py -pn {your_project_name} --force
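The --force / --terminate behaviour described above amounts to a per-step check like the following. This is a minimal sketch with made-up names, not the actual executor code.

```python
def step(force, terminate_flag, threshold, terminate_enabled):
    """Decide what one control step does under the CLI flags (sketch)."""
    if terminate_enabled and terminate_flag > threshold:
        return "terminated"  # --terminate: stop when TerminateFlag exceeds -tt
    if not force:
        return "dryrun"      # without --force, compute but do not move the robot
    return "sent"            # with --force, send the command to the real robot
```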

mohou_ros's People

Contributors

hiroishida, kanazawanaoaki, ojh6404


mohou_ros's Issues

Executor does not work on melodic because tf2 cannot be imported

description

I am trying to add a rostest for the executor (#31), and I found that the melodic test fails while the noetic test passes. The CI log reveals that the cause is an import error of tf2 when importing the skrobot.interfaces.ros.pr2 module.

If I remember correctly, this bug showed up when you (@Kanazawanaoaki) ran the executor on melodic when you tried the mohou package for the first time. So, I would like you to fix this error.

Traceback (most recent call last):
  File "/home/runner/work/catkin_ws/src/mohou_ros/rostest/run_data_collection.py", line 17, in <module>
    from mohou_ros_utils.rosbag import bag_to_seqs
  File "/home/runner/.local/lib/python3.6/site-packages/mohou_ros_utils/rosbag.py", line 6, in <module>
    from mohou_ros_utils.interpolator import (
  File "/home/runner/.local/lib/python3.6/site-packages/mohou_ros_utils/interpolator.py", line 12, in <module>
    from mohou_ros_utils.conversion import imgmsg_to_numpy, numpy_to_imgmsg
  File "/home/runner/.local/lib/python3.6/site-packages/mohou_ros_utils/conversion.py", line 23, in <module>
    from mohou_ros_utils.utils import deprecated
  File "/home/runner/.local/lib/python3.6/site-packages/mohou_ros_utils/utils.py", line 8, in <module>
    from skrobot.coordinates import Coordinates
  File "/home/runner/.local/lib/python3.6/site-packages/skrobot/__init__.py", line 15, in <module>
    from skrobot import interfaces
  File "/home/runner/.local/lib/python3.6/site-packages/skrobot/interfaces/__init__.py", line 6, in <module>
    from .ros import *
  File "/home/runner/.local/lib/python3.6/site-packages/skrobot/interfaces/ros/__init__.py", line 8, in <module>
    from .pr2 import PR2ROSRobotInterface
  File "/home/runner/.local/lib/python3.6/site-packages/skrobot/interfaces/ros/pr2.py", line 10, in <module>
    from .move_base import ROSRobotMoveBaseInterface
  File "/home/runner/.local/lib/python3.6/site-packages/skrobot/interfaces/ros/move_base.py", line 22, in <module>
    from .transform_listener import TransformListener
  File "/home/runner/.local/lib/python3.6/site-packages/skrobot/interfaces/ros/transform_listener.py", line 3, in <module>
    import tf
  File "/opt/ros/melodic/lib/python2.7/dist-packages/tf/__init__.py", line 30, in <module>
    from tf2_ros import TransformException as Exception, ConnectivityException, LookupException, ExtrapolationException
  File "/opt/ros/melodic/lib/python2.7/dist-packages/tf2_ros/__init__.py", line 38, in <module>
    from tf2_py import *
  File "/opt/ros/melodic/lib/python2.7/dist-packages/tf2_py/__init__.py", line 38, in <module>
    from ._tf2 import *
ImportError: dynamic module does not define module export function (PyInit__tf2)

NOTE

When you look at the CI log, you may find it strange that the error occurred in the data-collection phase, which has worked fine so far. The reason is as follows.

When one uses scikit-robot via pip installation, this error does not show up in the data-collection phase but only in the execution phase, because the pr2 module is imported from scikit-robot only in the execution phase. However, in order to surface the error explicitly, I am using the following modified scikit-robot package in the GitHub workflow. This is why the import error appears in the data-collection test.

modified scikit-robot package
HiroIshida/scikit-robot@3ffb23c

modification in the workflow:
68f0e9f

SVD error

Only in the rostest in GitHub Actions, in the LSTM training phase, an SVD error sometimes occurs when augmenting the trajectory:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/runner/.local/lib/python3.6/site-packages/mohou/script/train_lstm.py", line 76, in <module>
    context_list=context_list,
  File "/home/runner/.local/lib/python3.6/site-packages/mohou/script_utils.py", line 130, in train_lstm
    static_context_list=context_list,
  File "/home/runner/.local/lib/python3.6/site-packages/mohou/dataset/sequence_dataset.py", line 197, in from_bundle
    seq_auged = augmentor.apply(seq)
  File "/home/runner/.local/lib/python3.6/site-packages/mohou/dataset/sequence_dataset.py", line 87, in apply
    noises = np.random.multivariate_normal(mean, covmat_scaled, n_seqlen)
  File "mtrand.pyx", line 4084, in numpy.random.mtrand.RandomState.multivariate_normal
  File "<__array_function__ internals>", line 6, in svd
  File "/usr/local/lib/python3.6/dist-packages/numpy/linalg/linalg.py", line 1626, in svd
    u, s, vh = gufunc(a, signature=signature, extobj=extobj)
  File "/usr/local/lib/python3.6/dist-packages/numpy/linalg/linalg.py", line 106, in _raise_linalgerror_svd_nonconvergence
    raise LinAlgError("SVD did not converge")
numpy.linalg.LinAlgError: SVD did not converge
Traceback (most recent call last):
  File "/home/runner/.local/lib/python3.6/site-packages/mohou/default.py", line 104, in create_default_propagator
    tcach_lstm = TrainCache.load(project_path, compat_lstm_type)
  File "/home/runner/.local/lib/python3.6/site-packages/mohou/trainer.py", line 237, in load
    tcache_list = cls.load_all(project_path, model_type, **kwargs)
  File "/home/runner/.local/lib/python3.6/site-packages/mohou/trainer.py", line 226, in load_all
    raise FileNotFoundError
FileNotFoundError

`reset_to_home.py` causes error on pr1040 but not on gazebo simulator

problem (probably a PR2 hardware or pr1040-specific issue)

(This issue uses the version after #63.)
Running rosrun mohou_ros reset_to_home.py on the real robot
causes the following error:

[ERROR] [1662316480.793608]: 1 non-controlling joint angles are not consistent with home position.
{'joint_name': 'head_tilt_joint', 'angle': 1.1361046232931888, 'angle_desired': 1.0728538912009142, 'error': 0.06325073209227461, 'threshold': 0.03490658503988659}

However, in the gazebo simulator, after roslaunch pr2_gazebo pr2_empty_world.launch, the same command works without problems.

I also conducted additional experiments, running the same command with different initial tilt angles:
case 1: tilt_joint set to the highest value (neck pointing to the ground)
case 2: tilt_joint set to the lowest value (neck pointing to the sky)
In case 2 the command works well, but in case 1 the same error occurred.

I also checked the /head_traj_controller/follow_joint_trajectory/feedback topic in case 1; the output is shown below. It shows that the desired neck position is set correctly, but the controller cannot drive the joint to the desired angle. So I conclude that this is a hardware-rooted problem rather than a scikit-robot software issue.

desired: 
    positions: [-9.237630237555333e-05, 1.0728548206764066]
    velocities: [-0.00010230853856615929, -0.0008065962224733658]
    accelerations: [0.0459219098688491, 0.3620464093036396]
    effort: [-0.3858327788697062, -6.280792393178126]
    time_from_start: 
      secs: 1
      nsecs: 997752683
  actual: 
    positions: [0.00734261242871972, 1.1346385467215137]
    velocities: [0.0, 0.0]
    accelerations: []
    effort: [-0.38978100827336315, -6.277091636535645]
    time_from_start: 
      secs: 1
      nsecs: 997752683
  error: 
    positions: [-0.007434988731095274, -0.06178372604510707]
    velocities: [0.00010230853856615929, 0.0008065962224733658]
    accelerations: []
    effort: [-0.0039482294036569665, 0.003700756642481373]
    time_from_start: 
      secs: 1
      nsecs: 997752683
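The consistency check in the original error message can be reproduced with the numbers from the log (the helper name below is made up for illustration):

```python
def check_joint(angle, angle_desired, threshold):
    """Compare an actual joint angle with the desired one against a threshold."""
    error = abs(angle - angle_desired)
    return {"error": error, "consistent": error <= threshold}


# values taken from the [ERROR] log above: error ~0.063 > threshold ~0.035,
# so the home-position check fails, matching the reported behaviour
result = check_joint(1.1361046232931888, 1.0728538912009142, 0.03490658503988659)
```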

Actually, the same problem occurs when using Euslisp. The following code computes the difference between the desired and actual joint angles; the output was 3.26 [deg].

(load "package://pr2eus/pr2-interface.l")
(pr2-init)

(setq *desired* 60)
(setq *joint* (send *pr2* :head_tilt_joint))
(send *joint* :joint-angle *desired*)

(send *ri* :angle-vector (send *pr2* :angle-vector))
(send *ri* :wait-interpolation)

(send *pr2* :angle-vector (send *ri* :state :potentio-vector))

(print (- (send *joint* :joint-angle) *desired*))

home position

home_position.yaml

bl_caster_l_wheel_joint: -1747.4960929448357
bl_caster_r_wheel_joint: -1598.6965015232092
bl_caster_rotation_joint: 0.7839066620752853
br_caster_l_wheel_joint: 1070.9315347971171
br_caster_r_wheel_joint: 1234.7029486263784
br_caster_rotation_joint: 2.355731935983192
fl_caster_l_wheel_joint: 2303.9117993989953
fl_caster_r_wheel_joint: 2468.8539379256895
fl_caster_rotation_joint: -0.7855964009142032
fr_caster_l_wheel_joint: -1962.6303188481893
fr_caster_r_wheel_joint: -1812.2672281283776
fr_caster_rotation_joint: 0.7872200605797226
head_pan_joint: -9.249018477611539e-05
head_tilt_joint: 1.0728538912009142
l_elbow_flex_joint: -1.919922381206603
l_forearm_roll_joint: -0.34919022050429266
l_gripper_joint: 0.08644129011040648
l_gripper_l_finger_joint: 0.5043376890868512
l_gripper_l_finger_tip_joint: 0.5043376890868512
l_gripper_motor_screw_joint: 0.0
l_gripper_motor_slider_joint: 0.0
l_gripper_r_finger_joint: 0.5043376890868512
l_gripper_r_finger_tip_joint: 0.5043376890868512
l_shoulder_lift_joint: 0.8726545105347081
l_shoulder_pan_joint: 1.30900678614617
l_upper_arm_roll_joint: 1.9197462652738873
l_wrist_flex_joint: -0.17464744792839326
l_wrist_roll_joint: -0.17455149038717366
laser_tilt_mount_joint: 0.4299841839399005
r_elbow_flex_joint: -2.0506503335859287
r_forearm_roll_joint: 4.584582902038177
r_gripper_joint: 0.08704873817414047
r_gripper_l_finger_joint: 0.5082783783631448
r_gripper_l_finger_tip_joint: 0.5082783783631448
r_gripper_motor_screw_joint: 0.0
r_gripper_motor_slider_joint: 0.0
r_gripper_r_finger_joint: 0.5082783783631448
r_gripper_r_finger_tip_joint: 0.5082783783631448
r_shoulder_lift_joint: 0.10846794379949935
r_shoulder_pan_joint: -0.6483372056934948
r_upper_arm_roll_joint: -1.1107506377386362
r_wrist_flex_joint: -1.9900264969748427
r_wrist_roll_joint: -3.3873396124245696
torso_lift_joint: 0.29982514252114517
torso_lift_motor_screw_joint: 0.0

Segmentation fault occurred when tuning the image config

When I followed the directions of "tuning the image config" and pressed "q", a segmentation fault occurred and image_config.yaml was not saved.

$ rosrun mohou_ros tune_image_filter.py -pn blue_box_continue_long
Failed to load Python extension for LZ4 support. LZ4 compression will not be available.
initialize window
Segmentation fault (core dumped)

Workflow

  1. mirror mode
  2. save_rosbag
  3. tune image_cropper
  4. convert_to_chunk
  5. mohou learning
  6. executor

TODO list

  1. Use scikit_robot to implement mirror mode (1hour)
  2. show debug_gif when converting chunk (as pybullet demo) 1hour
  3. implement executor using scikit robot 1hour

How to start vive node

GUI

  1. click Steam icon on desktop
  2. Start (click green button) STEAM VR.

terminal

  1. leus@p1040-vrstation:~$ source ~/mohou_vive_ws/devel/setup.bash
  2. rossetmaster pr1040 && rossetip
  3. roslaunch vive_ros vive_ctrl.launch

TODO

  1. stop subscriber when certain executor is terminated
  2. experiment!

RGB images output during training look wrong

Currently, the RGB images output as logs during training (the images in train_data_gifs, autoencoder_result, and lstm_result) have wrong colors. The cause seems to be that ROS images are in BGR order, while mohou assumes RGB order.

For the images output at execution time (execution_debug_data), mohou_ros seems to convert the channel order with bgr2rgb at output time.

class DebugImages:
    robot_camera: RGBImage
    network_input: RGBImage
    reconstructed: RGBImage
    onestep_lookahaed_reconstructed: RGBImage

    def numpy(self) -> np.ndarray:
        fig = plt.figure()

        def bgr2rgb(arr: np.ndarray) -> np.ndarray:
            return arr[..., ::-1].copy()

        font = {"fontsize": 8, "fontweight": "medium"}
        ax1 = fig.add_subplot(1, 4, 1)
        ax1.imshow(bgr2rgb(self.robot_camera.numpy()))
        ax1.set_title("robot camera", fontdict=font)
        ax2 = fig.add_subplot(1, 4, 2)
        ax2.imshow(bgr2rgb(self.network_input.numpy()))
        ax2.set_title("network input", fontdict=font)
        ax3 = fig.add_subplot(1, 4, 3)
        ax3.imshow(bgr2rgb(self.reconstructed.numpy()))
        ax3.set_title("network output reconstructed", fontdict=font)
        ax3 = fig.add_subplot(1, 4, 4)
        ax3.imshow(bgr2rgb(self.onestep_lookahaed_reconstructed.numpy()))
        ax3.set_title("reconstructed lookahead \n (one step)", fontdict=font)
        arr = canvas_to_ndarray(fig)
        plt.figure().clear()
        plt.close()
        plt.cla()
        plt.clf()  # noqa
        return arr

Since the training programs basically call functions on the mohou side, would it be good to pass information that the images are BGR via an argument when mohou_ros calls mohou's functions, and have mohou convert the channel order before outputting images when that argument is given?

train_data_gifs

Image output part:

if dump_gif:
    gif_dir_path = config.project_path / "train_data_gifs"
    gif_dir_path.mkdir(exist_ok=True)
    if postfix is None:
        gif_file_path = gif_dir_path / "{}.gif".format(bagname)
    else:
        gif_file_path = gif_dir_path / "{}-{}.gif".format(bagname, postfix)
    episode_init_removed.save_debug_gif(str(gif_file_path), fps=20)

This function in mohou is called: https://github.com/HiroIshida/mohou/blob/505f4ed5378c4499fdde673eb00c0f025df7338a/mohou/types.py#L877-L889
For this case, it might be possible to handle it with changes on the mohou_ros side only, similarly to execution_debug_data?

autoencoder_result

Corresponding part in mohou_ros:

cmd_visualize_vae = "python3 -m mohou.script.visualize_autoencoder_result -pp {pp}".format(
    pp=project_path
)

Image output part of the corresponding program in mohou: https://github.com/HiroIshida/mohou/blob/05a8cfd7862d7374592dd1ef6668f8d01d063557/mohou/script/visualize_autoencoder_result.py#L34-L37
Functions called from it: https://github.com/HiroIshida/mohou/blob/05a8cfd7862d7374592dd1ef6668f8d01d063557/mohou/script_utils.py#L164-L205
https://github.com/HiroIshida/mohou/blob/05a8cfd7862d7374592dd1ef6668f8d01d063557/mohou/script_utils.py#L208-L223

lstm_result

Corresponding part in mohou_ros:

cmd_visualize_lstm = "python3 -m mohou.script.visualize_lstm_result -pp {}".format(project_path)

Image output part of the corresponding program in mohou: https://github.com/HiroIshida/mohou/blob/05a8cfd7862d7374592dd1ef6668f8d01d063557/mohou/script/visualize_lstm_result.py#L43
Functions called from it: https://github.com/HiroIshida/mohou/blob/05a8cfd7862d7374592dd1ef6668f8d01d063557/mohou/script_utils.py#L323-L406

Policy Execution with GrayImage

This is an error that occurs when executing a policy trained with GrayImage.

[INFO] [1687937233.427162]: on timer..
a
Exception in thread Thread-87:
Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/timer.py", line 237, in run
    self._callback(TimerEvent(last_expected, last_real, current_expected, current_real, last_duration))
  File "/home/oh/ros/pr2_ws/src/mohou_ros/mohou_ros_utils/pr2/executor.py", line 37, in on_timer
    super().on_timer(event)
  File "/home/oh/ros/pr2_ws/src/mohou_ros/mohou_ros_utils/executor.py", line 186, in on_timer
    self.propagator.feed(edict_current)
  File "/home/oh/ros/pr2_ws/src/mohou_ros/mohou/mohou/propagator.py", line 130, in feed
    self._feed(elem_dict)
  File "/home/oh/ros/pr2_ws/src/mohou_ros/mohou/mohou/propagator.py", line 141, in _feed
    state_with_flag = self.encoding_rule.apply(elem_dict)
  File "/home/oh/ros/pr2_ws/src/mohou_ros/mohou/mohou/encoding_rule.py", line 308, in apply
    vector = encoder.forward(elem_dict[elem_type])
  File "/home/oh/ros/pr2_ws/src/mohou_ros/mohou/mohou/types.py", line 485, in __getitem__
    return super().__getitem__(key)  # type: ignore
KeyError: <class 'mohou.types.GrayImage'>

rostest CI fails due to a pyopenssl error

rostest CI fails with the following error. The problem seems to be that pyopenssl is not compatible with the newest pip used in GitHub Actions.
see: https://askubuntu.com/questions/1428181/module-lib-has-no-attribute-x509-v-flag-cb-issuer-check

Traceback (most recent call last):
  File "/home/runner/work/catkin_ws/src/mohou_ros/rostest/euslisp_core/test_pr2eus_commander.py", line 7, in <module>
    from skrobot.interfaces.ros import PR2ROSRobotInterface  # type: ignore
  File "/home/runner/.local/lib/python3.6/site-packages/skrobot/__init__.py", line 10, in <module>
    from skrobot import data
  File "/home/runner/.local/lib/python3.6/site-packages/skrobot/data/__init__.py", line 4, in <module>
    import gdown
  File "/home/runner/.local/lib/python3.6/site-packages/gdown/__init__.py", line 5, in <module>
    from .cached_download import cached_download
  File "/home/runner/.local/lib/python3.6/site-packages/gdown/cached_download.py", line 12, in <module>
    from .download import download
  File "/home/runner/.local/lib/python3.6/site-packages/gdown/download.py", line 13, in <module>
    import requests
  File "/usr/lib/python3/dist-packages/requests/__init__.py", line 84, in <module>
    from urllib3.contrib import pyopenssl
  File "/usr/lib/python3/dist-packages/urllib3/contrib/pyopenssl.py", line 46, in <module>
    import OpenSSL.SSL
  File "/usr/lib/python3/dist-packages/OpenSSL/__init__.py", line 8, in <module>
    from OpenSSL import crypto, SSL
  File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 1550, in <module>
    class X509StoreFlags(object):
  File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 1570, in X509StoreFlags
    CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
