
pddlstream's People

Contributors

aidan-curtis, caelan, cpaxton, sea-bass


pddlstream's Issues

Customize environments

Hello!

I am trying to modify the code so that I can compare it to other planning approaches, and I would be glad if you could provide some guidance. The first task I would like to solve is moving objects in 3D, e.g., from some initial pose to some goal pose. Can you please advise how to modify the code to solve this problem with custom objects (created as blocks of a certain size or constructed from meshes) in a custom environment (a table of a certain size, a certain robot pose)? Maybe you can point me to the files/functions I would need to update in order to customize the environment. Thank you very much in advance!
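
One approach, sketched below with entirely placeholder names and values, is to describe the custom environment declaratively and have a problem-construction helper (like the examples' `*_problem` functions) create the bodies and the corresponding facts:

```python
# Hedged sketch: describe the environment declaratively; all names, shapes,
# and poses below are placeholders, not pddlstream API.
environment = {
    'table':  {'shape': 'box', 'extents': (1.0, 1.5, 0.7),
               'pose': (0.0, 0.0, 0.35)},
    'block0': {'shape': 'box', 'extents': (0.05, 0.05, 0.05),
               'initial_pose': (0.3, 0.0, 0.75),
               'goal_pose': (-0.3, 0.2, 0.75)},
}

def goal_facts(env):
    # derive an ('AtPose', body, goal_pose) fact for every movable with a goal
    return [('AtPose', name, spec['goal_pose'])
            for name, spec in env.items() if 'goal_pose' in spec]

print(goal_facts(environment))  # [('AtPose', 'block0', (-0.3, 0.2, 0.75))]
```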

Some issues when running drake

Hello, I'm trying to run the drake demo. However, I find that pydrake now only supports Python 3, and the drake API documentation has diverged considerably from your code. Can you please share the environment settings for the drake demo, including your Ubuntu version, Python version, and the date of the binary package? Thanks a lot!

Write a manual

Hello,

Can you write a short manual on how to use your solver? I'd like to try pddlstream on my custom problem, and it's very hard to get started. I am especially interested in using it in the PyBullet Kuka environment.

-Dmitry

Include individual action costs in solution

When you solve a problem using PDDLStream, you get access to all the individual actions and the total cost of the plan as the first and second elements of the solution tuple.

However, it would be really convenient to also get the cost of each action individually as part of this solution.

The easiest thing to do would be for the Action namedtuple to include name, args, and an additional cost field.

Right now there is the workaround of re-evaluating each cost function for each action in the plan, but it would be nice to avoid having to do this.
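
The proposed extension could be sketched as follows; note this is a hypothetical change, not the current pddlstream API:

```python
from collections import namedtuple

# Hypothetical extension (not the current pddlstream API): give the Action
# tuple a third field so per-action costs travel with the plan.
Action = namedtuple('Action', ['name', 'args', 'cost'])

def plan_cost(plan):
    # the total cost is then just the sum of the per-action costs
    return sum(action.cost for action in plan)

plan = [Action('move', ('q0', 'q1'), 2.5),
        Action('pick', ('block', 'p0'), 1.0)]
print(plan_cost(plan))  # 3.5
```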

Does PDDLStream support multi agent planning?

In the example figure below, the goal is to put block B in the red region. First, I want r0 to move block B closer to r1. Then r1 takes block B to the red region.

If it supports multi agent planning, where do I specify that?
[image]
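
One common encoding, sketched below with placeholder identifiers, needs no special multi-agent mechanism: model each robot as an ordinary object and parameterize every action by the acting robot.

```python
# Hedged sketch: each robot is just an object; actions take the acting robot
# as a parameter. All identifiers below are placeholders.
init = [
    ('Robot', 'r0'), ('Robot', 'r1'),
    ('AtConf', 'r0', 'q0'), ('AtConf', 'r1', 'q1'),
    ('Block', 'B'), ('AtPose', 'B', 'p0'),
]
goal = ('InRegion', 'B', 'red')

# the planner then chooses which robot performs each action
robots = [fact[1] for fact in init if fact[0] == 'Robot']
print(robots)  # ['r0', 'r1']
```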

Make a package

Hello,

Can you make a package to simplify the use?

-Dmitry

examples.pybullet.pr2.run failed

Hi Caelan,

Thanks for the great repo! I was able to run the kuka example, but when I was trying to run examples.pybullet.pr2.run, it gives me the following error:

assert(isinstance(fd, pddl.Literal) and not fd.negated)

Any idea what might be the cause?

incremental algorithm on kitchen example

Hi Caelan, thanks for sharing your code.
I chose to start with the simplest algorithm, so I'm trying to run the incremental algorithm on the kitchen example. I commented out "stream sample-motion" in stream.pddl and uncommented "stream sample-motion-h". The incremental algorithm now runs on the PDDLStream problem, but the sample-motion-h stream always generates the same pose, so the algorithm never solves the problem.
Perhaps there are some other things I should edit in the code to get the algorithm running properly. Can you please guide me on how to do that?
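
A common cause of this symptom, sketched below on a placeholder 1-DOF domain, is a stream implemented as a function returning one fixed value instead of a generator that yields a fresh sample on every call:

```python
import random

# Hedged sketch: a sampling stream should be a generator producing fresh
# values each time; a fixed return value gives the incremental algorithm
# nothing new to work with. The 1-DOF domain below is a placeholder.
def sample_motion_gen(low=0.0, high=1.0):
    while True:
        q = (random.uniform(low, high),)  # a fresh configuration sample
        yield (q,)                        # a tuple of output objects

gen = sample_motion_gen()
samples = [next(gen) for _ in range(3)]
print(len(samples))  # 3
```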

PyBullet visualization problem

The installation of PDDLStream and pybullet went well. But when I tried the PR2 demo, python -m examples.pybullet.pr2.run, the visualization didn't work (snapshot below), while the planning itself seemed to run smoothly. The only things I can see in the Bullet browser are two moving "cooked" and "cleaned" labels...

I know it is not necessarily PDDLStream's problem and is likely a pybullet thing, but do you have any hints? The pybullet install on my Ubuntu 16.04 is clean, without any modification.

[image]

EDIT: This error has been reproduced on two of my Ubuntu 16.04 virtual machines.

Collision disabled

Hi,

When running the tamp example (from the main branch), plans ignore collisions with or without the --cfree flag. Is there something else I need to change? I am also getting the following warnings, which seem related.

    b3Printf: No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
    b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
    b3Printf: r_gripper_tool_frame
    b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
    b3Printf: No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
    b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
    b3Printf: l_gripper_led_frame
    b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
    b3Printf: No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
    b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
    b3Printf: l_gripper_tool_frame
    b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
    b3Printf: No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
    b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
    b3Printf: l_forearm_cam_optical_frame
    b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:
    b3Printf: No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
    b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:

Thanks in advance for any help!

file handle leak

I noticed that there are some file handle leaks in the code. The symptom is that if you run the planner, e.g., solve_focused, in an infinite loop, it eventually crashes with OSError: [Errno 24] Too many open files: 'temp/'. I've been looking for where the leak happens, but no luck so far. You did a great job of preventing such leaks by using context managers when opening files, so I suspect the leak happens in the FastDownward process rather than in the main Python process, but that's just a guess.
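
If the descriptors do come from repeated planner subprocesses, one general mitigation (a sketch, not a diagnosis of pddlstream's actual code) is the context-manager form of `Popen`, which guarantees the pipe handles are closed after each call, even on exceptions:

```python
import subprocess

# Hedged sketch: "with Popen(...)" closes the subprocess's pipe handles when
# the block exits, so calling it in a loop does not accumulate descriptors.
def run_once():
    with subprocess.Popen(['echo', 'ok'], stdout=subprocess.PIPE) as proc:
        out, _ = proc.communicate()
    return out.decode().strip()

for _ in range(3):  # safe to call repeatedly; no handles leak
    result = run_once()
print(result)  # ok
```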

'examples.pybullet.kuka.run' failed when I used 'AtPose'

Hi Caelan,

I really appreciate your great work. I was able to run the example code "examples.pybullet.kuka.run", but when I tried to revise the goal state, it would not run successfully. I tried 'AtPose', but it didn't work. To eliminate other variables, I set the goal object pose to the initial pose. The code in the 'pddlstream_from_problem' function is:

    #Line 93 function:pddlstream_from_problem
    body = movable[0]
    pose = BodyPose(body, get_pose(body))
    goal = ('and',
            ('AtConf', conf),
            ('AtPose', body, pose),
    )

The code with the above goal fails no matter what pose parameters I try. The log shows the stream plan and action plan are both None:

    Attempt: 1 | Results: 24 | Depth: 0 | Success: False
    Stream plan (inf, 0, inf): False
    Action plan (inf, inf): False

If I comment out the 'AtPose' line, the code runs successfully. I tried to debug it but could not find the reason. Could you help me check this problem? I really appreciate it.

Thanks,
Nick Tian
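
A likely cause, sketched below with placeholder names, is that the goal fact references a pose object the planner has never been told about; a `('AtPose', body, pose)` goal is only reachable if the pose object itself appears in the initial facts:

```python
# Hedged sketch of a likely cause: a goal fact ('AtPose', body, pose) is only
# reachable if the pose object is declared in init; a freshly constructed
# BodyPose that never appears there gives the search no way to certify it.
# Placeholder names throughout.
body = 'block0'
goal_pose = ('p_goal',)  # stands in for a BodyPose object
init = [
    ('Pose', body, goal_pose),       # declare the pose object itself
    ('AtPose', body, ('p_init',)),
]
goal = ('and', ('AtPose', body, goal_pose))
print(('Pose', body, goal_pose) in init)  # True
```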

How to control the order of the objects to be manipulated?

Hi Caelan,

Thanks for your great work. I have tried the example code, manipulating a set of identical objects onto the goal panel with no priority among them. The result shows the algorithm randomly chooses which object to manipulate. My question is: if I want the robot to first manipulate a particular subset of the objects, how can I control the order?

Thanks for your help. Looking forward to your reply.

Nick
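
One general way to impose an order, sketched below with placeholder objects, is an explicit precedence relation that a pick action checks in its precondition:

```python
# Hedged sketch: an explicit precedence relation that a pick action's
# precondition consults. Placeholder object names below.
precedence = [('blockA', 'blockB'), ('blockB', 'blockC')]  # (first, later)

def allowed_next(candidate, placed, precedence):
    # candidate may be manipulated only once everything before it is placed
    return all(first in placed
               for (first, later) in precedence if later == candidate)

print(allowed_next('blockB', {'blockA'}, precedence))  # True
print(allowed_next('blockC', {'blockA'}, precedence))  # False
```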

No module named pddl.f_expression

python -m examples.continuous_tamp.run

pddlstream/pddlstream/algorithms/downward.py", line 58, in <module>
    import pddl.f_expression
ModuleNotFoundError: No module named 'pddl.f_expression'

Optimistic algorithms unable to solve TAMP problem with movable obstacles

Hello,

I'm trying to solve a TAMP problem where a robot r is in an office-like map in a starting position s, and it has to traverse n doors {d0, d1, ..., dn}, initially closed, to reach a final destination g. Walls of the map are fixed obstacles, while doors are movable obstacles.
I have two actions: open and move. The move action moves the robot between two locations while avoiding obstacles (the doors and the walls); I have a stream that tries to compute a collision-free path via RRT and an ad-hoc collision checker. open allows the robot to open a door when positioned in front of it, like pushing a button (b0 for d0, b1 for d1, ..., bn for dn). Once the button is pushed, the door configuration changes instantaneously from closed to open.

The incremental algorithm (with FastDownward) can find a valid plan. For example, if n=2:

    move(r, s, b0, d0, c0, d1, c1)   # move r from s to button b0 with d0 and d1 closed
    open(d0, c0, o0)                 # open d0
    move(r, b0, b1, d0, o0, d1, c1)  # move r from button b0 to button b1 with d0 open and d1 closed
    open(d1, c1, o1)                 # open d1
    move(r, b1, g, d0, o0, d1, o1)   # move from button b1 to g with all doors open

All the other algorithms (focused, binding, adaptive, with FastDownward) are instead unable to solve the problem, even when the number of doors is small. For example, with n=2, the focused algorithm exits with

    Stream plan (inf, 0, inf): False
    Action plan (inf, inf): False
    Summary: {complexity: 2, cost: inf, evaluations: 31, iterations: 10, length: inf, run_time: 36.669, sample_time: 35.535, search_time: 1.134, skeletons: 0, solutions: 0, solved: False, timeout: False}

Precisely, the status is INFEASIBLE because exhausted=True when calling iterative_plan_streams() of refinement.py.

Is this behavior reasonable? Could you please help me solve this issue?

Can't find a plan in the 'Kuka Cleaning and Cooking' scenario using a Panda

Hello, I've tried to change the robot in the 'Kuka Cleaning and Cooking' scenario to the Franka Panda arm (without the hand). Here are the changes I made:

  • pybullet_tools/kuka_primitives.py
TOOL_FRAMES = {
    'panda': 'panda_link8',
  }
  • pybullet_tools/utils.py
PANDA_ARM_URDF = "models/franka_description/robots/panda_arm.urdf"
  • models/franka_description/robots/panda_arm.urdf
    • replace every occurrence of package://franka_description with ..
  • examples/pybullet/kuka/run.py
robot = load_model(PANDA_ARM_URDF, fixed_base=True)

However, the plan exploration stops after just a few attempts and I get the following result:

    Solved: False
    Cost: inf
    Length: inf
    Deferred: 0
    Evaluations: 2

The program exits at https://github.com/caelan/pddlstream/blob/main/examples/pybullet/kuka/run.py#L192 (because plan is None).

Maybe I need to change something else to use another robot arm? Any (other) hints?
Thanks!
Matteo

TypeError: unsupported operand type(s) for -: 'tuple' and 'float'

I encountered this issue when I tried to run the command python -m examples.pybullet.tamp.run:

    File "/home/lu/Desktop/Code/pddlstream/examples/pybullet/utils/pybullet_tools/utils.py", line 3728, in <genexpr>
      return tuple(circular_difference(value2, value1) if circular else (value2 - value1)
    TypeError: unsupported operand type(s) for -: 'tuple' and 'float'

After debugging, I found the problem is caused by the apply function in pr2_primitives.py:

    def apply(self, state, **kwargs):
        joints = get_gripper_joints(self.robot, self.arm)
        start_conf = get_joint_positions(self.robot, joints)
        end_conf = [self.position] * len(joints)
        if self.teleport:
            path = [start_conf, end_conf]
        else:
            extend_fn = get_extend_fn(self.robot, joints)
            path = [start_conf] + list(extend_fn(start_conf, end_conf))
        for positions in path:
            set_joint_positions(self.robot, joints, positions)
            yield positions

The type of start_conf is tuple with value (0.548, 0.548, 0.548, 0.548); however, end_conf is a list containing four copies of the tuple (0.4298039215686276, 0.4298039215686276, 0.4298039215686276, 0.4298039215686276).

In the joint_from_name function, there are four joints: l_gripper_l_finger_joint, l_gripper_r_finger_joint, l_gripper_l_finger_tip_joint, and l_gripper_r_finger_tip_joint.

I suspect the problem is caused by the dimension mismatch between start_conf and end_conf: get_extend_fn recursively calls get_difference_fn, which finally crashes at line 3728.
I would appreciate it if anyone could offer any advice.
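
A sketch of the suspected fix follows, with placeholder values; the idea is that if self.position already holds a full joint configuration (a tuple), replicating it per joint produces the observed list of tuples, so it should only be replicated when it is a scalar:

```python
# Hedged sketch of the suspected fix: only replicate position per joint when
# it is a scalar; pass a full configuration through unchanged. Placeholders.
position = (0.43, 0.43, 0.43, 0.43)  # example gripper configuration
joints = [0, 1, 2, 3]

if isinstance(position, (tuple, list)) and len(position) == len(joints):
    end_conf = list(position)            # already one value per joint
else:
    end_conf = [position] * len(joints)  # scalar: replicate per joint

print(end_conf)  # [0.43, 0.43, 0.43, 0.43]
```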

FastDownward version broken?

There seems to be something wrong with the version of Fast Downward included here. I think it has to do with the blind search heuristic: it fails to solve trivial problems.

Here is a minimal domain and problem file that reproduce the bug:

(define (domain sanity)
    (:requirements :strips)
    (:predicates
        (isA ?obj)
        (isB ?obj)
    )
)
(define (problem check) 
 (:domain sanity)
    (:objects
        A
        B
    )
    (:INIT
        (isA A)
        (isB B)
    )
    (:goal 
        (or (isA A) (isB B))
    )
)

Planner output:

# /pddlstream/FastDownward/fast-downward.py --plan-file plan pddl/sanity/domain.pddl pddl/sanity/problem.pddl  --heuristic "h=blind()" --search "astar(h)"
INFO     Running translator.
INFO     translator stdin: None
INFO     translator time limit: None
INFO     translator memory limit: None
INFO     translator command line string: /opt/conda/envs/ikea/bin/python /pddlstream/FastDownward/builds/release32/bin/translate/translate.py pddl/sanity/domain.pddl pddl/sanity/problem.pddl --sas-file output.sas
Parsing...
Parsing: [0.000s CPU, 0.002s wall-clock]
Normalizing task... [0.000s CPU, 0.000s wall-clock]
Instantiating...
Generating Datalog program... [0.000s CPU, 0.000s wall-clock]
Normalizing Datalog program...
Normalizing Datalog program: [0.000s CPU, 0.000s wall-clock]
Preparing model... [0.000s CPU, 0.000s wall-clock]
Generated 5 rules.
Computing model... [0.000s CPU, 0.000s wall-clock]
10 relevant atoms
0 auxiliary atoms
10 final queue length
11 total queue pushes
Completing instantiation... [0.000s CPU, 0.000s wall-clock]
Instantiating: [0.000s CPU, 0.001s wall-clock]
Computing fact groups...
Finding invariants...
0 initial candidates
Finding invariants: [0.000s CPU, 0.000s wall-clock]
Checking invariant weight... [0.000s CPU, 0.000s wall-clock]
Instantiating groups... [0.000s CPU, 0.000s wall-clock]
Collecting mutex groups... [0.000s CPU, 0.000s wall-clock]
Choosing groups...
1 uncovered facts
Choosing groups: [0.000s CPU, 0.000s wall-clock]
Building translation key... [0.000s CPU, 0.000s wall-clock]
Computing fact groups: [0.000s CPU, 0.000s wall-clock]
Building STRIPS to SAS dictionary... [0.000s CPU, 0.000s wall-clock]
Building dictionary for full mutex groups... [0.000s CPU, 0.000s wall-clock]
Building mutex information...
Building mutex information: [0.000s CPU, 0.000s wall-clock]
Translating task...
Processing axioms...
Simplifying axioms... [0.000s CPU, 0.000s wall-clock]
Processing axioms: [0.000s CPU, 0.000s wall-clock]
Translating task: [0.000s CPU, 0.000s wall-clock]
0 effect conditions simplified
0 implied preconditions added
Detecting unreachable propositions...
0 operators removed
0 axioms removed
0 propositions removed
Detecting unreachable propositions: [0.000s CPU, 0.000s wall-clock]
Reordering and filtering variables...
1 of 1 variables necessary.
0 of 0 mutex groups necessary.
0 of 0 operators necessary.
1 of 1 axiom rules necessary.
Reordering and filtering variables: [0.000s CPU, 0.000s wall-clock]
Translator variables: 1
Translator derived variables: 1
Translator facts: 2
Translator goal facts: 1
Translator mutex groups: 0
Translator total mutex groups size: 0
Translator operators: 0
Translator axioms: 1
Translator task size: 5
Translator peak memory: 37164 KB
Writing output... [0.000s CPU, 0.000s wall-clock]
Done! [0.000s CPU, 0.005s wall-clock]

translate exit code: 0
INFO     Running search (release32).
INFO     search stdin: output.sas
INFO     search time limit: None
INFO     search memory limit: None
INFO     search command line string: /pddlstream/FastDownward/builds/release32/bin/downward --heuristic 'h=blind()' --search 'astar(h)' --internal-plan-file plan < output.sas
reading input... [t=2.1421e-05s]
done reading input! [t=6.4301e-05s]
Initializing blind search heuristic...
Building successor generator...done! [t=0.000172728s]
peak memory difference for successor generator creation: 0 KB
time for successor generation creation: 2.683e-06s
Variables: 1
FactPairs: 2
Bytes per state: 4
Conducting best first search with reopening closed nodes, (real) bound = 2147483647
Initial state is a dead end.
Initial heuristic value for blind: infinity
pruning method: none
Completely explored state space -- no solution!
Actual search time: 7.493e-06s [t=0.000232899s]
Expanded 0 state(s).
Reopened 0 state(s).
Evaluated 1 state(s).
Evaluations: 1
Generated 0 state(s).
Dead ends: 0 state(s).
Number of registered states: 1
Int hash set load factor: 1/1 = 1
Int hash set resizes: 0
Search time: 2.8906e-05s
Total time: 0.000236778s
Search stopped without finding a solution.
Peak memory: 4908 KB

search exit code: 12
Driver aborting after search

Issue with running the Tamp and Kuka examples with args.simulate=True

I've been trying to run examples.pybullet.tamp.run and examples.pybullet.kuka.run with args.simulate set to True; however, once I reach the simulation (specifically the calls to control_commands() and command.control(), respectively), it freezes. In tamp, the simulation runs through four steps before freezing; the problem seems to be that it cannot step forward because some condition is unsatisfied. In kuka, as soon as the simulation starts, the platform and all objects immediately drop through the plane they rest on; the robot cannot find them, so it freezes soon after. This can be reproduced by inserting args.simulate=True immediately after args is defined in each example.

Easiest way to extract predicate value

What would be the easiest way to tell whether a predicate is true given the current state of the environment? It looks like the evaluations returned by solve_focused cover the entire plan.
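
Since pddlstream facts are plain tuples, one possibility (a sketch; the fact set below is a placeholder, and the real evaluations would first need converting to facts) is a simple membership test:

```python
# Hedged sketch: if you can obtain the current fact set (e.g. converted from
# the evaluations), membership answers "is this predicate true now?".
def holds(fact, facts):
    return fact in facts

state = {('AtConf', 'q0'), ('Holding', 'blockA')}  # placeholder fact set
print(holds(('Holding', 'blockA'), state))  # True
print(holds(('Holding', 'blockB'), state))  # False
```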

What is :rule ?

In the stream file for the discrete TAMP example, the first few lines read:

  (:rule
    :inputs (?q ?p)
    :domain (Kin ?q ?p)
    :certified (and (Conf ?q) (Pose ?p))
  )

If I comment this out, no plan is found. I'm wondering what this :rule is for and what it does.
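
One hedged reading, sketched procedurally below, is that a :rule acts like an axiom: whenever its domain facts hold, its certified facts can be inferred without invoking any stream, which is why removing it can starve the planner of Conf and Pose facts.

```python
# Hedged procedural reading of the :rule above: whenever Kin(?q ?p) holds,
# Conf(?q) and Pose(?p) are inferred for free.
def apply_rule(facts):
    derived = set(facts)
    for fact in facts:
        if fact[0] == 'Kin':
            _, q, p = fact
            derived.add(('Conf', q))
            derived.add(('Pose', p))
    return derived

print(sorted(apply_rule({('Kin', 'q0', 'p0')})))
# [('Conf', 'q0'), ('Kin', 'q0', 'p0'), ('Pose', 'p0')]
```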

:typing support

Hi. As the number of arguments in my PDDL actions has grown, I've noticed that solving the underlying PDDL problem has become more and more of a limiting factor. I believe this could be mitigated substantially if my action preconditions were typed, reducing the number of actions the planner considers; however, typing is not supported at the moment. Could you give some insight into why that is (my intuition is that typing the outputs of streams might be tricky for some reason)? If you've given any thought to adding support for this feature, do you know what would need to change in the codebase to enable it?
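
A common workaround, sketched below with placeholder objects, is to emulate :typing with unary "type" predicates in preconditions, which prunes groundings during instantiation much as real typing would:

```python
# Hedged workaround sketch: unary type predicates filter groundings.
# All object names below are placeholders.
init = [('Arm', 'left'), ('Arm', 'right'),
        ('Graspable', 'block0'), ('Surface', 'table')]
objects = ['left', 'right', 'block0', 'table']

def pick_groundings(objects, init):
    # only (arm, obj) pairs whose type facts hold survive grounding
    return [(a, o) for a in objects for o in objects
            if ('Arm', a) in init and ('Graspable', o) in init]

print(pick_groundings(objects, init))  # [('left', 'block0'), ('right', 'block0')]
```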
