agile_flight's People

Contributors

antonilo, kelia, lbfd, tongtybj, yun-long

agile_flight's Issues

Goals?

Hi, could you please let me know where the goal coordinates for each environment are specified?
Thanks

rewards ?

Hi, when I tried to print the info returned when an episode is done, I found this problem:
Screenshot from 2022-04-04 22-50-24
As you can see, the total reward 'r' is not equal to the sum of all four reward components. Why does this happen? Can you explain in more detail, please?
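
A minimal way to sanity-check this on the Python side (a sketch, not the repository's API; the component key names in the info dict are placeholders to replace with whatever your build actually reports):

def check_reward_decomposition(info, component_keys, total_key="r", tol=1e-4):
    # compare the total episode reward against the sum of the reported components
    components = [float(info[k]) for k in component_keys if k in info]
    total = float(info[total_key])
    print(f"total={total:.4f}  sum(components)={sum(components):.4f}")
    return abs(total - sum(components)) < tol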

Changing environment/scene during training

Hi
I've spent some time looking for a way to change the environment/scene during training, but I didn't find anything easy.
I found addDynamicObstacle (but not its counterpart), and also addStaticObsacle, but the latter does not seem to be used anywhere.

Do you have any tips on how I can achieve this? (I want the drone to be placed in a random environment/scene during training.)

I also have questions about continuing training with rpg_baselines. Is it possible?
I've used model.save() and model.load(), but the agent does not seem to retain anything from the previous training; it's as if it starts over.

If it is possible with rpg_baselines, could you please tell me how to continue training properly?

(If I can't change the environment/scene during training, I would like to train, stop, switch the environment/scene, load, and retrain in a fresh environment.)
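
For reference, a minimal sketch of how continued training usually works with an SB3-style PPO, assuming the rpg_baselines reimplementation keeps the stable-baselines3 load/learn interface (the checkpoint path is hypothetical, and env is the wrapped training environment constructed as in run_vision_ppo.py):

from rpg_baselines.torch.common.ppo import PPO

# env: the wrapped VisionEnv_v1 training environment (assumed to exist already)
model = PPO.load("saved/PPO_1/policy/model.zip", env=env)   # hypothetical checkpoint path
# reset_num_timesteps=False keeps the timestep counter so training continues
# instead of starting from scratch.
model.learn(total_timesteps=int(1e7), reset_num_timesteps=False)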

Best r

RL Environment Command Modes

Hi! I followed the Python setup steps and was looking into training an RL policy with the provided environment, using the run_vision_demo.py script as a reference. Looking into the environment code, it seems that only one type of action is supported, namely CTBR actions. In contrast, the ROS setup supports all three command modes. Are there any plans to support the other action modes in the RL environment? I'm aware that your lab has found CTBR commands to be the best for policy training, but I feel this should still be an option for other participants to experiment with.

On a somewhat related note, I also had a few other concerns with the current configs for the simulation environments. I could open other issues if this is the preferred method to address these concerns.

  1. I noticed that the simulation step time is 0.02 s by default (i.e., sim_dt in the flightmare/flightpy/configs/vision/config.yaml file), which amounts to 50 Hz. Is this fixed for the competition, or are we free to modify it as we please? If we are going to use lower-level command modes like SRT and CTBR, a higher simulation step frequency seems important (see the sketch after this list).
  2. I'm not entirely sure about this concern, but looking through the LINVEL command mode code, it seems that it doesn't actually modify the drone's rotor thrusts and instead just changes the drone's current position in the environment (I'm using the velocity_reference.cpp file as a reference). I was wondering if there is a plan to translate these state updates into actual rotor thrusts, since that would provide more realistic movement dynamics for the drone.
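
For the sim_dt point above, a minimal sketch of editing the value programmatically, assuming the same ruamel.yaml round-trip loading the demo scripts use (the exact key layout inside config.yaml may differ in your checkout):

import os
from ruamel.yaml import YAML

yaml = YAML()
cfg_path = os.path.join(os.environ["FLIGHTMARE_PATH"], "flightpy/configs/vision/config.yaml")
with open(cfg_path) as f:
    cfg = yaml.load(f)

cfg["simulation"]["sim_dt"] = 0.01   # 100 Hz instead of the default 0.02 s (50 Hz)

with open(cfg_path, "w") as f:
    yaml.dump(cfg, f)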

Finally, I feel that a lot of questions could be answered more quickly through a different communication platform. Would you consider creating a Discord server, for instance, to facilitate the quick questions one might have about the competition? Having a space for all participants to discuss their hurdles and concerns would be beneficial for everyone.

Custom reward function

For the DodgeDrone competition, what is the best way to go about making our custom reward function? Currently, the reward function seems to be defined in the Flightmare flightrl source code, which we would have to recompile every time we edit the reward function.

Could not import "pyqt" bindings of qt_gui_cpp library

Hi,
After completing the installation, when running roslaunch envsim visionenv_sim.launch render:=True, I get the following output:

...
[VisionEnv]    Camera has been added. Skipping the camera configuration.
[UnityBridge]  Initializing ZMQ connection!
[UnityBridge]  Initializing ZMQ connections done!
[VisionEnv]    Flightmare Bridge created.
[UnityBridge]  Trying to Connect Unity.
[.......Could not import "pyqt" bindings of qt_gui_cpp library - so C++ plugins will not be available:
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/qt_gui_cpp/cpp_binding_helper.py", line 43, in <module>
    from . import libqt_gui_cpp_sip
ImportError: /home/name/anaconda3/lib/python3.9/site-packages/PyQt5/../../../libQt5Core.so.5: version `Qt_5.12' not found (required by /opt/ros/noetic/lib/python3/dist-packages/qt_gui_cpp/libqt_gui_cpp_sip.so)
............................................................................................................................................................[UnityBridge]  Flightmare Unity is connected.
...

Ignoring the Could not import "pyqt" bindings of qt_gui_cpp library... message, everything seems fine. However, when pressing Connect and Arm in the GUI, I get the following errors:

[ INFO] [1649252353.976127531, 1649252345.667026281]: Computing active: true!
[ROS Bridge]   Deactivated!
[ROS Bridge]   Activated!
[Pipeline]     Bridge failed!
[ROS Bridge]   Deactivated!
[Pipeline]     Bridge failed!
[Pipeline]     Bridge failed!
[Pipeline]     Bridge failed!
[Pipeline]     Bridge failed!
[Pipeline]     Bridge failed!
[Pipeline]     Bridge failed!
...

Unfortunately, the [Pipeline] Bridge failed! message keeps repeating unless I kill the node(s).

I am looking forward to your answer,

Jon

Using published topics vs. user code

Hello,
Are we permitted to use data from the published topics in our code, or must everything originate from the parameters passed to the functions in user_code.py?

goal_vel ?

The README says "The goal is to proceed as fast as possible 60m in positive x-direction without colliding into obstacles and exiting a pre-defined bounding box". Then what does goal_vel stand for? Should the drone be at that velocity when it reaches the goal, or is there no restriction on velocity?

Drone control

Hi Team,

I want to ask whether Flightmare supports controlling the drone with a keyboard or radio controller.

Thank you in advance.

Flightmare Client IP

I am not sure about the IP address for Flightmare, since I am still unable to connect to the simulator. Could you give me any information about it?

Thanks!

Data interpretation logged by tensorboard_log

Thank you for the interesting simulator!

I ran run_vision_ppo.py with the following command:
python3 -m python.run_vision_ppo --render 0 --train 1
I found the training data in the envtest/python/saved directory (e.g. PPO_1, PPO_2), including some policies saved during training (/policy) and test trajectories (/TestTraj).
The questions I would like to ask are as follows.

  1. Where is the reward transition logged during training (see the sketch after this list)?
  2. What does each axis mean in the TestTraj/Plots graphs?
  3. Which code defines the plotting and logging parameters?
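
As a quick way to see what actually gets logged during training, a minimal sketch using TensorBoard's event-file reader; the scalar tag name is an assumption following stable-baselines3 conventions, so list the tags first to see what your run recorded:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("envtest/python/saved/PPO_1")   # the run directory mentioned above
acc.Reload()
print(acc.Tags()["scalars"])                           # every scalar series that was logged
for event in acc.Scalars("rollout/ep_rew_mean"):       # hypothetical tag name
    print(event.step, event.value)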

unable to run visionenv_sim.launch file

After following the instructions given in the README.md, I ran the command roslaunch envsim visionenv_sim.launch render:=True. The Flightmare simulator and RViz windows opened up, but I also got the following error:

[dodgeros_gui-4] process has died [pid 2562, exit code 1, cmd /opt/ros/melodic/lib/rqt_gui/rqt_gui -s dodgeros_gui.basic_flight.BasicFlight --args --quad_name kingfisher __name:=dodgeros_gui __log:=/home/deepak/.ros/log/cc19a17a-b780-11ec-b7cf-a4b1c1152af2/dodgeros_gui-4.log].
log file: /home/deepak/.ros/log/cc19a17a-b780-11ec-b7cf-a4b1c1152af2/dodgeros_gui-4*.log

[ INFO] [1649452436.562074185]: Loading Pilot Params from simple_sim_pilot.yaml in /home/deepak/DodgeDrone/src/agile_flight/envsim/parameters
Loading Pilot parameters from "/home/deepak/DodgeDrone/src/agile_flight/envsim/parameters/simple_sim_pilot.yaml"
terminate called after throwing an instance of 'agi::ParameterException'
what(): Dodgelib Parameter Exception: Quadrotor file is set manually an in YAML!

Q: Depth camera transformation?

Hello,

I'm having a hard time relating the output of the depth camera to real-world coordinate space (x, y, z in meters).
Could you help me interpret the 8-bit image by explaining the transformations under the hood?
For example, does the depth scale linearly? Is the value projected onto the camera plane or measured straight towards the camera position, etc.?
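
For reference, a minimal sketch of the simplest possible interpretation, assuming the 8-bit value scales linearly with metric depth along the optical axis up to some maximum range; both the linearity and the maximum range are assumptions that need to be confirmed against the simulator's camera settings:

import numpy as np

MAX_DEPTH_M = 100.0  # assumed far plane in meters

def depth_u8_to_meters(depth_u8: np.ndarray) -> np.ndarray:
    # maps 0..255 linearly to 0..MAX_DEPTH_M
    return depth_u8.astype(np.float32) / 255.0 * MAX_DEPTH_M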

Thank you,
Fuda

From RGBD to point cloud

Hi,

Given the current RGBD sensor data, I wondered if we are allowed to convert it to a point cloud.

To do this, I intended to use the depth_image_proc package. Its usage requires two topics, camera_info (sensor_msgs/CameraInfo) and image_rect (sensor_msgs/Image). From my understanding, the latter is already available as /kingfisher/dodgeros_pilot/unity/depth, but the former is not.

Trying to come up with a solution, I bumped into this method. Unfortunately, I have not been able to make it work yet. Given that this post is a little old and not specific to the DodgeDrone challenge, I wondered if there is any other way to retrieve the camera_info topic.
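
One workaround, sketched below, is to publish the missing camera_info yourself from a small ROS node next to the depth topic; the topic name, resolution, and focal length are placeholders that would have to be derived from the simulated camera's actual resolution and field of view:

import rospy
from sensor_msgs.msg import CameraInfo

rospy.init_node("depth_camera_info_publisher")
pub = rospy.Publisher("/kingfisher/dodgeros_pilot/unity/camera_info", CameraInfo, queue_size=1)

info = CameraInfo()
info.width, info.height = 320, 240                  # assumed image size
fx = fy = 160.0                                     # placeholder focal length in pixels
cx, cy = info.width / 2.0, info.height / 2.0
info.distortion_model = "plumb_bob"
info.D = [0.0, 0.0, 0.0, 0.0, 0.0]
info.K = [fx, 0.0, cx, 0.0, fy, cy, 0.0, 0.0, 1.0]
info.P = [fx, 0.0, cx, 0.0, 0.0, fy, cy, 0.0, 0.0, 0.0, 1.0, 0.0]

rate = rospy.Rate(30)
while not rospy.is_shutdown():
    info.header.stamp = rospy.Time.now()
    info.header.frame_id = "camera"
    pub.publish(info)
    rate.sleep()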

Thanks in advance,

Jon

visionenv_sim.launch prints out yellow and red messages

Thanks for the great work. When I follow the steps in the instructions,
roslaunch envsim visionenv_sim.launch render:=true
it eventually shows '[UnityBridge] Flightmare Unity is connected.' but also prints some warning (yellow) and red messages. I would just like to check whether this is normal. It says:

****************** message in yellow ****************
[PilotParams] Did not create inner controller ''!
[Pilot] Did not create bridge 'ROS'.
Using debug bridge, register externally!


******message in red *****************
[dodgeros_gui-4] process has died [pid 47655, exit code -11, cmd /opt/ros/noetic/lib/rqt_gui/rqt_gui -s dodgeros_gui.basic_flight.BasicFlight --args --quad_name kingfisher __name:=dodgeros_gui __log:=/home/minghan/.ros/log/c1d886b4-ab18-11ec-b3ca-07c585af6c36/dodgeros_gui-4.log].
log file: /home/minghan/.ros/log/c1d886b4-ab18-11ec-b3ca-07c585af6c36/dodgeros_gui-4*.log


Could I ask whether these messages are normal, or whether there is a setup issue? Thank you very much.

Which code defines "iteration" in the eval function in ppo.py during training?

Thank you for the interesting challenge!

I know that the recording of Policy, RMS, and Test Traj is done in the eval function.

I would like to ask the following questions.

  1. Which code defines how often the eval function is called?
  2. When the agent is trained with PPO with total_timesteps=int(5 * 1e7), the run always finishes at iter_02000. Could you tell me why this iteration number (2000) does not change (see the sketch after this list)?
  3. Does "iteration" mean how many episodes the agent has experienced, or how many learning iterations it has performed?

No run_competition.py in launch_evaluation.bash

In the README, I found the following sentence:

If you want to perform steps 1-3 automatically, you can use the launch_evaluation.bash N script provided in this folder.

That is great, but I cannot find run_competition.py in launch_evaluation.bash.

no IMU information available

This is the command I tried
$ rostopic echo /kingfisher/dodgeros_pilot/imu_in

The only message that pops up is:
WARNING: no messages received and simulated time is active.
Is /clock being published?

Any idea how I can obtain IMU information? Thanks!
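
A minimal sketch of the usual check for this warning: with use_sim_time=True, subscribers only receive data once /clock is being published, i.e. once the simulator node is actually up. Whether IMU data is published on the imu_in topic at all depends on the simulator configuration, so the topic below is taken verbatim from the question:

import rospy

rospy.init_node("imu_probe")
rospy.wait_for_message("/clock", rospy.AnyMsg)   # blocks until simulated time exists
msg = rospy.wait_for_message("/kingfisher/dodgeros_pilot/imu_in", rospy.AnyMsg, timeout=10.0)
print("received a message on imu_in")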

Training On Camera Data

Hey,

While using the vision env, if I call getImage it returns an array of zeros. After reading through the Flightmare documentation, it seems this is because the vision env does not actually set up Unity:

setUnity(unity_render_);
connectUnity();
updateUnity(frame_id);

Does this seem correct? If so, is there a way to train on a server with no display? My GPU box is remote and I can't forward the Unity GUI. Is there a headless mode for Unity that I am missing?
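
A minimal sketch of the call order implied by the snippet above, assuming the Python wrapper forwards connectUnity()/getImage() the way the demo script suggests (method names and flags may differ in your checkout). The image buffer stays all-zero until the Unity renderer is launched and connected; on a remote machine without a display, the standalone still needs a (virtual) X server such as Xvfb, which is a workaround assumption rather than a documented headless mode:

import os
from ruamel.yaml import YAML, dump, RoundTripDumper
from flightgym import VisionEnv_v1
from rpg_baselines.torch.envs import vec_env_wrapper as wrapper

cfg = YAML().load(open(os.environ["FLIGHTMARE_PATH"] + "/flightpy/configs/vision/config.yaml", "r"))
os.system(os.environ["FLIGHTMARE_PATH"] + "/flightrender/RPG_Flightmare.x86_64 &")

env = wrapper.FlightEnvVec(VisionEnv_v1(dump(cfg, Dumper=RoundTripDumper), False))
env.connectUnity()               # assumed to block until the renderer answers over ZMQ
obs = env.reset()
img = env.getImage(rgb=False)    # hypothetical flag; should now contain real pixels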

Error -roslaunch unable to launch [kingfisher/dodgeros_pilot-2]

Hi!
I ran into the following error when I typed "roslaunch envsim visionenv_sim.launch render:=True".

... logging to /home/user/.ros/log/9b8562be-a8f9-11ec-8908-f9ada68fd0d8/roslaunch-user-ubuntu-3814.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://user-ubuntu:35019/

SUMMARY
========

PARAMETERS
 * /kingfisher/dodgeros_pilot/agi_param_dir: /home/user/icra2...
 * /kingfisher/dodgeros_pilot/camera_config: /home/user/icra2...
 * /kingfisher/dodgeros_pilot/low_level_controller: Simple
 * /kingfisher/dodgeros_pilot/pilot_config: simple_sim_pilot....
 * /kingfisher/dodgeros_pilot/real_time_factor: 1.0
 * /kingfisher/dodgeros_pilot/render: True
 * /kingfisher/dodgeros_pilot/ros_param_dir: /home/user/icra2...
 * /kingfisher/dodgeros_pilot/use_bem_propeller_model: False
 * /rosdistro: noetic
 * /rosversion: 1.15.14
 * /use_sim_time: True

NODES
  /
    dodgeros_gui (rqt_gui/rqt_gui)
    flight_render (flightrender/RPG_Flightmare.x86_64)
  /kingfisher/
    dodgeros_pilot (envsim/visionsim_node)
    viz_face (rviz/rviz)

auto-starting new master
process[master]: started with pid [3822]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to 9b8562be-a8f9-11ec-8908-f9ada68fd0d8
process[rosout-1]: started with pid [3832]
started core service [/rosout]
process[kingfisher/dodgeros_pilot-2]: started with pid [3839]
process[kingfisher/viz_face-3]: started with pid [3840]
process[dodgeros_gui-4]: started with pid [3841]
process[flight_render-5]: started with pid [3842]
[ WARN] [1647855055.853602462]: Pilot Config:        simple_sim_pilot.yaml
[ WARN] [1647855055.854849416]: Agi Param Directory: /home/user/icra22_competition_ws/src/agile_flight/dodgedrone_simulation/dodgelib/params
[ WARN] [1647855055.854895142]: ROS Param Directory: /home/user/icra22_competition_ws/src/agile_flight/envsim/parameters
[ INFO] [1647855055.854923451]: Loading Pilot Params from simple_sim_pilot.yaml in /home/user/icra22_competition_ws/src/agile_flight/envsim/parameters
Loading Pilot parameters from "/home/user/icra22_competition_ws/src/agile_flight/envsim/parameters/simple_sim_pilot.yaml"
[PilotParams]  Did not create inner controller ''!
[Pilot]        Did not create bridge 'ROS'.
Using debug bridge, register externally!
[ INFO] [1647855055.860944218]: Loaded pipeline:
Estimator:
Type: Feedthrough
File: "/home/user/icra22_competition_ws/src/agile_flight/dodgedrone_simulation/dodgelib/params/feedthrough.yaml"
Sampler:
Type: Time
File: ""
Outer Controller:
Type: GEO
File: "/home/user/icra22_competition_ws/src/agile_flight/dodgedrone_simulation/dodgelib/params/geo.yaml"
Inner Controller:
Type: 
File: ""
Bridge:
Type: ROS
File: ""

[Pilot]        Register external bridge: [ROS Bridge]   
 which was not active and used.
[kingfisher/dodgeros_pilot-2] process has died [pid 3839, exit code -11, cmd /home/user/icra22_competition_ws/devel/lib/envsim/visionsim_node __name:=dodgeros_pilot __log:=/home/user/.ros/log/9b8562be-a8f9-11ec-8908-f9ada68fd0d8/kingfisher-dodgeros_pilot-2.log].
log file: /home/user/.ros/log/9b8562be-a8f9-11ec-8908-f9ada68fd0d8/kingfisher-dodgeros_pilot-2*.log

installation issues with ubuntu 18.04

Hi,

I'm trying to install the package on Ubuntu 18.04, and there are some issues.

  1. I changed the gcc/g++ version of my machine to 9.4.0 and executed the 'cmake .. && make -j3' commands. They finished without errors. However, when I ran the unit test binary ./test_lib, 9 tests failed. The output is:

Running main() from /home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightlib/externals/googletest-src/googletest/src/gtest_main.cc
[==========] Running 32 tests from 10 test suites.
[----------] Global test environment set-up.
[----------] 4 tests from QuadrotorDynamics
[ RUN ] QuadrotorDynamics.Constructor
Quadrotor Dynamics:
mass = [1]
t_BM = [ 0.075 -0.075 -0.075 0.075]
[-0.1 0.1 -0.1 0.1]
[0 0 0 0]
inertia = [0.0025 0 0]
[ 0 0.0021 0]
[ 0 0 0.0043]
motor_omega_min = [0]
motor_omega_max = [2e+03]
motor_tau_inv = [30.3]
thrust_map = [1.56e-06 0 0]
kappa = [0.016]
thrust_min = [0]
thrust_max = [6.25]
cthrust_min = [0]
cthrust_max = [25]
omega_max = [6 6 2]

 1      1      1      1

-0.1 0.1 -0.1 0.1
-0.075 0.075 0.075 -0.075
-0.016 -0.016 0.016 0.016
[ OK ] QuadrotorDynamics.Constructor (0 ms)
[ RUN ] QuadrotorDynamics.Dynamics
[ OK ] QuadrotorDynamics.Dynamics (0 ms)
[ RUN ] QuadrotorDynamics.VectorReference
[ OK ] QuadrotorDynamics.VectorReference (0 ms)
[ RUN ] QuadrotorDynamics.LoadParams
unknown file: Failure
C++ exception with description "bad file: $/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/control/config.yaml" thrown in the test body.
[ FAILED ] QuadrotorDynamics.LoadParams (13 ms)
[----------] 4 tests from QuadrotorDynamics (14 ms total)

[----------] 4 tests from Quadrotor
[ RUN ] Quadrotor.Constructor
[Quadrotor] Configuration file P�-ʣU does not exists.
unknown file: Failure
C++ exception with description "bad file: $/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/control/config.yaml" thrown in the test body.
[ FAILED ] Quadrotor.Constructor (0 ms)
[ RUN ] Quadrotor.ResetSimulator
[ OK ] Quadrotor.ResetSimulator (0 ms)
[ RUN ] Quadrotor.RunQuadCmdFeedThrough
[ OK ] Quadrotor.RunQuadCmdFeedThrough (1 ms)
[ RUN ] Quadrotor.RunSimulatorBodyRate
[ OK ] Quadrotor.RunSimulatorBodyRate (1 ms)
[----------] 4 tests from Quadrotor (3 ms total)

[----------] 1 test from RGBCamera
[ RUN ] RGBCamera.Constructor
[ OK ] RGBCamera.Constructor (0 ms)
[----------] 1 test from RGBCamera (0 ms total)

[----------] 3 tests from QuadrotorEnv
[ RUN ] QuadrotorEnv.Constructor
[QuadrotorEnv] Environment configuration path "$/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/control/config.yaml".
[QaudrotorEnv] Configuration file P�-ʣU does not exists.
unknown file: Failure
C++ exception with description "bad file: $/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/control/config.yaml" thrown in the test body.
[ FAILED ] QuadrotorEnv.Constructor (0 ms)
[ RUN ] QuadrotorEnv.ResetEnv
[QaudrotorEnv] Configuration file P�-ʣU does not exists.
unknown file: Failure
C++ exception with description "bad file: $/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/control/config.yaml" thrown in the test body.
[ FAILED ] QuadrotorEnv.ResetEnv (0 ms)
[ RUN ] QuadrotorEnv.StepEnv
[QaudrotorEnv] Configuration file P�-ʣU does not exists.
unknown file: Failure
C++ exception with description "bad file: $/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/control/config.yaml" thrown in the test body.
[ FAILED ] QuadrotorEnv.StepEnv (0 ms)
[----------] 3 tests from QuadrotorEnv (0 ms total)

[----------] 3 tests from QuadrotorVecEnv
[ RUN ] QuadrotorVecEnv.Constructor
[QaudrotorEnv] Configuration file P�-ʣU does not exists.
unknown file: Failure
C++ exception with description "bad file: $/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/control/config.yaml" thrown in the test body.
[ FAILED ] QuadrotorVecEnv.Constructor (0 ms)
[ RUN ] QuadrotorVecEnv.ResetEnv
[QaudrotorEnv] Configuration file P�-ʣU does not exists.
unknown file: Failure
C++ exception with description "bad file: $/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/control/config.yaml" thrown in the test body.
[ FAILED ] QuadrotorVecEnv.ResetEnv (0 ms)
[ RUN ] QuadrotorVecEnv.StepEnv
unknown file: Failure
C++ exception with description "bad file: $/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/control/config.yaml" thrown in the test body.
[ FAILED ] QuadrotorVecEnv.StepEnv (0 ms)
[----------] 3 tests from QuadrotorVecEnv (0 ms total)

[----------] 3 tests from VisionEnv
[ RUN ] VisionEnv.Constructor
[VisionEnv] Environment configuration path "$/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/vision/config.yaml".
[VisionEnv] Configuration file P�-ʣU does not exists.
unknown file: Failure
C++ exception with description "bad file: $/home/xintong/Documents/PyProjects/RL_Drone_2022/flightmare/flightpy/configs/vision/config.yaml" thrown in the test body.
[ FAILED ] VisionEnv.Constructor (0 ms)
[ RUN ] VisionEnv.ResetEnv
[ OK ] VisionEnv.ResetEnv (0 ms)
[ RUN ] VisionEnv.StepEnv
[ OK ] VisionEnv.StepEnv (0 ms)
[----------] 3 tests from VisionEnv (0 ms total)

[----------] 5 tests from EigenChecks
[ RUN ] EigenChecks.EigenVersionOutput
Eigen Version: 3.3.4
[ OK ] EigenChecks.EigenVersionOutput (0 ms)
[ RUN ] EigenChecks.EigenQuaternionSequence
[ OK ] EigenChecks.EigenQuaternionSequence (0 ms)
[ RUN ] EigenChecks.EigenQuaternionRotationDirection
[ OK ] EigenChecks.EigenQuaternionRotationDirection (0 ms)
[ RUN ] EigenChecks.QuaternionCrossMatrix
[ OK ] EigenChecks.QuaternionCrossMatrix (0 ms)
[ RUN ] EigenChecks.MatrixColumnwiseDotProduct
[ OK ] EigenChecks.MatrixColumnwiseDotProduct (0 ms)
[----------] 5 tests from EigenChecks (0 ms total)

[----------] 4 tests from Integrators
[ RUN ] Integrators.ManualEulerAccelerationCheck
[ OK ] Integrators.ManualEulerAccelerationCheck (0 ms)
[ RUN ] Integrators.ManualRungeKuttaAccelerationCheck
[ OK ] Integrators.ManualRungeKuttaAccelerationCheck (1 ms)
[ RUN ] Integrators.QuadStateInterface
[ OK ] Integrators.QuadStateInterface (1 ms)
[ RUN ] Integrators.CheckEulerAgainstRungeKutta
[ OK ] Integrators.CheckEulerAgainstRungeKutta (12 ms)
[----------] 4 tests from Integrators (15 ms total)

[----------] 2 tests from Logger
[ RUN ] Logger.SimpleLogging
[Test] This is a text stream log.
[Test] This is an info log.
[Test] This could be a warning, but just for demo.
[Test] This could be an error, but just for demo.
[Test] You can print strings like "text", and formatted numbers like 3.142.
[Test] You can use the stream operator '<<' just like with 'std::cout'.
[Test] This can be helpul for printing complex objects like Eigen vector and matrices:
A vector:
0 1 2 3
A Matrix:
1 0 0
0 1 0
0 0 1
[Test] And also our own defined objects, like so:
A timer:
[Printing] Timing Timer in 1 calls
[Printing] mean|std: 0.0311 | 0 ms [min|max: 0.0311 | 0.0311 ms]

A QuadState
State at 0s: [0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ OK ] Logger.SimpleLogging (0 ms)
[ RUN ] Logger.NoColorLogging
[Test] This is a text stream log.
[Test] Info: This is an info log.
[Test] Warning: This could be a warning, but just for demo.
[Test] Error: This could be an error, but just for demo.
[ OK ] Logger.NoColorLogging (0 ms)
[----------] 2 tests from Logger (0 ms total)

[----------] 3 tests from QuadState
[ RUN ] QuadState.Constructor
[ OK ] QuadState.Constructor (0 ms)
[ RUN ] QuadState.Accessors
[ OK ] QuadState.Accessors (0 ms)
[ RUN ] QuadState.Compare
[ OK ] QuadState.Compare (0 ms)
[----------] 3 tests from QuadState (0 ms total)

[----------] Global test environment tear-down
[==========] 32 tests from 10 test suites ran. (35 ms total)
[ PASSED ] 23 tests.
[ FAILED ] 9 tests, listed below:
[ FAILED ] QuadrotorDynamics.LoadParams
[ FAILED ] Quadrotor.Constructor
[ FAILED ] QuadrotorEnv.Constructor
[ FAILED ] QuadrotorEnv.ResetEnv
[ FAILED ] QuadrotorEnv.StepEnv
[ FAILED ] QuadrotorVecEnv.Constructor
[ FAILED ] QuadrotorVecEnv.ResetEnv
[ FAILED ] QuadrotorVecEnv.StepEnv
[ FAILED ] VisionEnv.Constructor

9 FAILED TESTS

reset env?

Hi, I initialized the env as follows:

      self.num_envs = self.env_sim_config["simulation"]["num_envs"]
      # load the Unity standalone; make sure you have downloaded it.
      os.system(os.environ["FLIGHTMARE_PATH"] + "/flightrender/RPG_Flightmare.x86_64 &")
      self.env = VisionEnv_v1(dump(self.env_sim_config, Dumper=RoundTripDumper), False)
      self.env = wrapper.FlightEnvVec(self.env)
      self.env.reset(random=True)

For example, num_envs is 4, meaning I have 4 envs. When one env is done (done=True), how can I reset just that env?
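
For context, a minimal sketch of how the done flags are usually consumed with a vectorized environment (env constructed as in the snippet above, and a gym-style step signature assumed). Whether FlightEnvVec resets a finished sub-environment internally, as vectorized RL environments commonly do, or only exposes a global reset() is an assumption to verify in the wrapper source; the loop below only shows the Python-side bookkeeping:

import numpy as np

obs = self.env.reset(random=True)
episode_returns = np.zeros(self.num_envs)

for step in range(1000):
    actions = np.zeros((self.num_envs, 4), dtype=np.float32)   # placeholder policy; CTBR action dim assumed
    obs, rewards, dones, infos = self.env.step(actions)        # gym-style step signature assumed
    episode_returns += np.asarray(rewards).reshape(-1)
    for i in np.where(np.asarray(dones))[0]:
        print(f"env {i} finished with return {episode_returns[i]:.2f}")
        episode_returns[i] = 0.0   # assumes the backend has already reset env i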

Thanks.

Unable to build flightlib

When I try to build flightlib with

cd flightlib/build
cmake ..
make -j10

I get the following error on the make -j10 step:

/usr/bin/ld: libflightlib.a(vec_env_base.cpp.o): in function `flightlib::VecEnvBase<flightlib::QuadrotorEnv>::VecEnvBase()':
vec_env_base.cpp:(.text._ZN9flightlib10VecEnvBaseINS_12QuadrotorEnvEEC2Ev[_ZN9flightlib10VecEnvBaseINS_12QuadrotorEnvEEC5Ev]+0x11b): undefined reference to `omp_set_num_threads'
/usr/bin/ld: libflightlib.a(vec_env_base.cpp.o): in function `flightlib::VecEnvBase<flightlib::QuadrotorEnv>::configEnv(YAML::Node const&)':
vec_env_base.cpp:(.text._ZN9flightlib10VecEnvBaseINS_12QuadrotorEnvEE9configEnvERKN4YAML4NodeE[_ZN9flightlib10VecEnvBaseINS_12QuadrotorEnvEE9configEnvERKN4YAML4NodeE]+0x6ec): undefined reference to `omp_set_num_threads'
/usr/bin/ld: libflightlib.a(vec_env_base.cpp.o): in function `flightlib::VecEnvBase<flightlib::VisionEnv>::VecEnvBase()':
vec_env_base.cpp:(.text._ZN9flightlib10VecEnvBaseINS_9VisionEnvEEC2Ev[_ZN9flightlib10VecEnvBaseINS_9VisionEnvEEC5Ev]+0x11b): undefined reference to `omp_set_num_threads'
/usr/bin/ld: libflightlib.a(vec_env_base.cpp.o): in function `flightlib::VecEnvBase<flightlib::VisionEnv>::configEnv(YAML::Node const&)':
vec_env_base.cpp:(.text._ZN9flightlib10VecEnvBaseINS_9VisionEnvEE9configEnvERKN4YAML4NodeE[_ZN9flightlib10VecEnvBaseINS_9VisionEnvEE9configEnvERKN4YAML4NodeE]+0x6ec): undefined reference to `omp_set_num_threads'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/test_lib.dir/build.make:287: test_lib] Error 1
make[1]: *** [CMakeFiles/Makefile2:193: CMakeFiles/test_lib.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 85%] Linking CXX shared module flightgym.cpython-38-x86_64-linux-gnu.so
[ 85%] Built target flightgym
make: *** [Makefile:141: all] Error 2

I did add functions to vision_env.cpp and vision_vec_env.cpp, but I didn't change any of the OpenMP-related code.

Could we change environment during training?

Hi, I want to switch to another environment automatically during training. Is that possible with the provided code?

Also, can we use both the depth image and the distance from obstacles?

Does the simulator give only the distance to obstacles in front of the drone?

RuntimeError: Address already in use

This error occurs when running python run_vision_demo.py:

[UnityBridge] Initializing ZMQ connection!
Traceback (most recent call last):
File "run_vision_demo.py", line 120, in
main()
File "run_vision_demo.py", line 50, in main
env = VisionEnv_v1(dump(cfg, Dumper=RoundTripDumper), False)
RuntimeError: Address already in use

Unable to run the vision-flight demo

I ran through the install scripts, first setup_ros.bash and then setup_py.bash. Everything ran without errors, but when I run the vision demo I get the following:

Traceback (most recent call last):
  File "run_vision_demo.py", line 10, in <module>
    from rpg_baselines.torch.envs import vec_env_wrapper as wrapper
ModuleNotFoundError: No module named 'rpg_baselines.torch'

ROS Bridge deactivated / uniplot

Hi. First, thanks for your great work and your efforts in holding this competition.
I am experiencing an issue where the ROS Bridge continuously gets deactivated, even though I publish commands via ROS faster than 100 Hz.
The UAV in the simulator seems to fly as I expect with the commands I send, but it still gets deactivated.

I attach the error messages below:

[ INFO] [1647953337.210806394, 1647953328.236202240]: OFF command received!
[ROS Bridge]   Deactivated!

[ INFO] [1647953423.736785555, 1647953389.336202383]: OFF command received!
[ROS Bridge]   Deactivated!
publishing and latching message for 3.0 seconds
[ INFO] [1647953427.241718100, 1647953391.756202221]: Resetting simulator!
publishing and latching message for 3.0 seconds
[ INFO] [1647953430.747328953, 1647953394.196202278]: Computing active: true!
[ROS Bridge]   Deactivated!
[ROS Bridge]   Activated!
rollout_9
/home/mason/ws/competition_ws/src/agile_flight
Traceback (most recent call last):
  File "evaluation_node.py", line 12, in <module>
    from uniplot import plot
ImportError: No module named uniplot

Thank you in advance.
K.

P.S. I tried to install uniplot in multiple ways and verified that it is installed, but I keep getting the same error.

Obstacle in state-based case

Hi, I have a question about the obstacle definition in the state-based case. The README mentions the "metric distance to obstacles". On the other hand, the Obstacle msg has position and scale. Is the position relative? What exactly is scale? Regards,
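
For illustration, a minimal sketch of one plausible reading (an assumption to confirm against the message definition): position as the obstacle center expressed relative to the drone and scale as the obstacle's full extent, so the metric distance to the obstacle surface would be the center distance minus half the scale:

import numpy as np

def obstacle_surface_distance(position_xyz, scale_xyz):
    center_dist = np.linalg.norm(np.asarray(position_xyz, dtype=float))
    radius = 0.5 * float(max(scale_xyz))   # assumed: scale is a diameter-like extent
    return center_dist - radius

print(obstacle_surface_distance([3.0, 1.0, 0.0], [1.0, 1.0, 1.0]))   # ~2.66 m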

Does the forest scene look like the promotion video

Hi,

Below is what I see when setting the environment to forest:
image

But it seems different from the promotion video's forest:
image

Is there anything wrong with my installation or configuration? Any feedback is appreciated!

Changing environment

Following the description in the README, we tried changing the environment level in this file and setting the Unity scene_id to easy/medium/hard and to all possible environments. However, when we launch the simulator using the given roslaunch command (roslaunch envsim visionenv_sim.launch render:=True), the environment does not change.

Are we missing some additional parameters that we need to set to change the environment in RViz and in Flightmare?

Thanks in advance!

rpg_baselines ppo

Hi,
I wonder what the reason was for reimplementing the sb3 PPO in rpg_baselines?
Regards,

Goal check in RL

Thank you for the interesting challenge!

I tried to train the quadrotor to move well.
I read the code of vision_env.cpp and config.yaml, but there is no information about the goal position (+x 60 m). Where is this information, or should I provide it myself when I want to use it?
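
For reference, a minimal sketch of a goal check one could add on the Python side. The 60 m target in +x comes from the README; treating the first state entry as the world-frame x position is an assumption to verify against the observation/state layout used by vision_env:

GOAL_X = 60.0   # from the README: fly 60 m in the positive x-direction

def reached_goal(state) -> bool:
    # state[0] assumed to be the quadrotor's world-frame x position
    return float(state[0]) >= GOAL_X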

Pyside error?

Hi, when I run roslaunch envsim visionenv_sim.launch render:=True, it successfully opens RViz, Unity, etc., but I still get one or two errors.

  1. PySide error: ImportError: cannot import name 'libqt_gui_cpp_shiboken'
  2. Error in the YAML file

Screenshot from 2022-03-28 22-24-43
Do you know what the problem is? Thanks.

Unable to run evaluation_node.py

I went through the setup and ran the roslaunch file, which launched the simulator, the GUI, and RViz.
As soon as I launched the evaluation_node.py file, I got the following:

Traceback (most recent call last):
  File "evaluation_node.py", line 8, in <module>
    from dodgeros_msgs.msg import Command, QuadState
ModuleNotFoundError: No module named 'dodgeros_msgs'

Any fixes for the above?

Training script - RL

Hi Team,

The control modality choices are available in the ROS-side scripts, and we wanted to use them for training an RL policy.

  1. Which ROS-side script, similar to run_vision_ppo in Python, should we use to get started with training an RL policy?
  2. Is there a corresponding VisionEnv_v1 training environment for the ROS side, so that we could use the other control modalities? (Kindly let me know if such a thing is not required and the existing environment can be used; I am new to this.)

Thanks

Some environments are maybe buggy

Hi, first of all, thanks for your great work and your endeavor to push the boundaries of flying drones.

level: "hard"
env_folder: "environment_0"
scene_id: 3

Maybe I'm wrong, but it seems that there may be something wrong with some of the provided environments (some obstacles seem to be invisible, which makes things more "challenging" ^_^).
If you look carefully at the attached video, the evaluation code running in the bottom-right corner prints "crashed" even when there are no obstacles around the drone. Note that I did not modify anything in the evaluation code, and since I began this challenge it is the first time (and the first environment) where I have encountered this behavior.

Thanks in advance for checking that out.

Best regards,

debug.mp4

g++ and gcc to version 9.3.0

Hi,
Is this requirement strict, or is g++/gcc >= 9.3 enough? The default version on Ubuntu 20.04 is 9.4.0, and downgrading from 9.4 to 9.3 is not easy and is not described in the referenced link. I've managed to build and run the stack so far with 9.4, but I wonder whether I should go further with this setup. Regards,

ModuleNotFoundError: No module named 'flightrl'

Thank you for the interesting challenge.
I ran python.run_vision_ppo with the following command, but it says there is no module named flightrl:

$ python3 -m python.run_vision_ppo --render 0 --train 1
Traceback (most recent call last):
  File "/home/myworkspace/anaconda3/envs/agileflight/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/hamyworkspaceu/anaconda3/envs/agileflight/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/myworkspace/icra22_competition_ws/src/agile_flight/envtest/python/run_vision_ppo.py", line 14, in <module>
    from rpg_baselines.torch.common.ppo import PPO
  File "/home/myworkspace/anaconda3/envs/agileflight/lib/python3.8/site-packages/rpg_baselines/torch/common/ppo.py", line 21, in <module>
    from flightrl.rpg_baselines.torch.common.on_policy_algorithm import \
ModuleNotFoundError: No module named 'flightrl'

I did the following things (cf. https://github.com/uzh-rpg/agile_flight/blob/main/envtest/python/README.md):

  • Ran ./setup_py.bash within the conda environment.
  • Checked that $ python3 -m python.run_vision_demo --render 1 runs well.

I had several bugs when running ./setup_py.bash, which I fixed myself.
If these fixes cause any problems, please tell me.

A. ModuleNotFoundError: No module named 'rpg_baselines.torch'
Based on this issue, I changed flightmare/flightpy/flightrl/setup.py from
packages=['rpg_baselines'],
to
packages=find_packages()
#5 (comment)

training on cloud

I ran into a few problems compiling on the cloud, such as GCC version issues.

Could you indicate which files should be moved to the cloud so that compilation there is not needed?

Could you provide a simplified setup_py.bash that can be used on the cloud with compiled files transferred from the local machine?
