
dd's Introduction

TuLiP

This is the source repository for TuLiP, the temporal logic planning toolbox. The project website is http://tulip-control.org

Installation

In most cases, it suffices to:

pip install .

TuLiP can also be installed from PyPI:

pip install tulip

This will install the latest release, together with required dependencies. To find out what dependencies (including optional ones) are installed, call:

tulip.interfaces.print_env()

For detailed instructions, including notes about dependencies and troubleshooting, consult https://tulip-control.sourceforge.io/doc/install.html. The next section describes how to build documentation. A test suite is provided under tests/. Consult the section "Testing" below.

Pip can install the latest development snapshot too:

pip install https://github.com/tulip-control/tulip-control/archive/master.zip

Code under development can be unstable, so trying pip install tulip first is recommended.

Documentation

There are two main sources of documentation outside the code. The "user" documentation is under doc/ and is built with Sphinx, so the usual steps apply:

make html

API documentation is generated using Epydoc and can also be built from the doc directory:

make api

Built copies for the most recent release of TuLiP are available online at:

Command summaries are provided by make help. Besides the above sources, you may also read API documentation using the standard pydoc tool, e.g.:

pydoc tulip

Testing

Tests are performed using pytest. From the root of the source tree (i.e., where setup.py is located), run:

./run_tests.py

to perform basic tests. To try all available tests, run ./run_tests.py full. For alternatives and a usage summary, run ./run_tests.py -h.

Contact and support

License

This is free software released under the terms of the BSD 3-Clause License. There is no warranty; not even for merchantability or fitness for a particular purpose. Consult LICENSE for copying conditions.

When code is modified or re-distributed, the LICENSE file should accompany the code or any subset of it, however small. As an alternative, the LICENSE text can be copied within files, if so desired.

If this software is useful to you, we kindly ask that you cite us:

I. Filippidis, S. Dathathri, S. C. Livingston, N. Ozay and R. M. Murray,
"Control design for hybrid systems with TuLiP: The Temporal Logic Planning
toolbox," 2016 IEEE Conference on Control Applications (CCA), Buenos Aires,
Argentina, 2016, pp. 1030-1041, doi: 10.1109/CCA.2016.7587949.

dd's People

Contributors

johnyf, lummax, maweki, slivingston

dd's Issues

steps for installing BuDDy and `dd.buddy`

Python: Python 3.8.2
OS: 5.6.2-arch1-2

python setup.py install --fetch --buddy installs CUDD and then fails while compiling BuDDy with:

gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fno-semantic-interposition -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fPIC -I/home/h3ssto/_git/masterarbeit/code/dd_test/.venv/include -I/usr/include/python3.8 -c dd/buddy.c -o build/temp.linux-x86_64-3.8/dd/buddy.o
dd/buddy.c:541:10: fatal error: bdd.h: No such file or directory
  541 | #include "bdd.h"
      |          ^~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1

Checking queries where variable is True

Hey

This is a beginner level question.
From a BDD, how can you extract the possible queries where a particular variable is TRUE/FALSE?
Right now I have a list of variables and I generate the BDD.
I am not quite sure how to retrieve the queries for the cases where a variable is TRUE and FALSE.

Thanks in advance for your help

Cheers,
Chris
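
A minimal sketch of one way to do this, assuming a dd version that provides BDD.let and BDD.pick_iter (the expression here is illustrative):

from dd import autoref as _bdd

bdd = _bdd.BDD()
bdd.declare('x', 'y', 'z')
u = bdd.add_expr(r'(x /\ y) \/ ~z')
# restrict u to the case where x is TRUE, then enumerate
v = bdd.let(dict(x=True), u)
for assignment in bdd.pick_iter(v):
    print(assignment)

Repeating the same with x=False gives the FALSE case.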

Installation does not work with python3-based pip

As it says in the title.

$ pip --version
pip 7.1.2 from /home/maweki/.local/lib/python3.4/site-packages (python 3.4)
$ python --version
Python 2.7.10

when I try to install dd via pip install --user dd I get the following error

Collecting dd
Using cached dd-0.1.3.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "", line 20, in
File "/tmp/pip-build-ew5h83vp/dd/setup.py", line 4, in
import download
File "/tmp/pip-build-ew5h83vp/dd/download.py", line 7, in
import urllib2
ImportError: No module named 'urllib2'

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-ew5h83vp/dd

It does work with pip2 install --user dd, though. So maybe this package should not be made available for Python 3.

wrap CUDD v3.0.0

It should be decided whether to offer an option for linking to cudd < 3.0.0.

ensure source distribution is independent of optional dependencies

9254305 introduced use of cysignals to allow Ctrl+C interruption of CUDD calls. Cython's conditional compilation was used to cimport the cysignals package or not. If cysignals is present (i.e., already installed) when cythonizing dd/cudd.pyx, then dd/cudd.c and thus the module dd.cudd will depend on cysignals.

Windows is unsupported as of cysignals >= 1.7.0. So cysignals is an optional dependency of dd:

https://github.com/johnyf/dd/blob/dc0b3c3a178064a3eeb1d2a8245bc6635c52860f/setup.py#L48-L49

https://github.com/johnyf/dd/blob/dc0b3c3a178064a3eeb1d2a8245bc6635c52860f/requirements.txt#L2

Wheel and source distributions present a challenge in this area.

Wheels

If cysignals is present when creating a wheel file (python setup.py bdist_wheel), then dd.cudd will depend on cysignals. Wheels with compiled dd.cudd are platform specific, so in principle this dependence can always be satisfied on that specific platform (otherwise cysignals would have been absent when building the wheel file). However, cysignals is absent from install_requires. So the resulting wheel file will be installed with missing requirements.

A solution is to use an environment marker for conditional dependency specification (PEP 508, see setuptools docs). For example:

install_requires=[
    'cysignals >= 1.7.0; platform_system == "Linux" or platform_system == "Darwin"',
    ...
    ]

In theory, this would work, adding cysignals to the requirements of Linux and Darwin wheels.

Source archives

What about the source distribution? As recommended by Cython's docs, we include both cudd.pyx (source) and cudd.c (generated by cythonize) in the source distribution, in order to allow compiling dd.cudd in absence of Cython (at least in principle).

If cysignals happens to be installed in the environment of python setup.py sdist, then the generated cudd.c depends on cysignals (headers due to cimport). So compiling in absence of cysignals will raise an error, even though the environment marker platform_system (discussed above) would ensure that cysignals is present where possible.

In other words, Cython's conditionals are interpreted "early", when cythonize runs. If that happens on a platform different than the target one, then cysignals is injected as a dependency when it shouldn't (contrast to an #ifdef).

Moreover, there should be one source distribution, not multiple (unlike wheels, which can be platform-specific). So dd/cudd.c should be independent of cysignals.

With the current implementation:

https://github.com/johnyf/dd/blob/dc0b3c3a178064a3eeb1d2a8245bc6635c52860f/download.py#L86-L87

we need to uninstall cysignals before running python setup.py sdist --cudd (and clean dd/cudd.c, if it already exists; an additional complication). It is easy to forget this step. Asserting that:

assert cysignals is None or 'sdist' not in args  # where `args` are the arguments passed to `setup.py sdist`

would avoid forgetting. A smoother alternative is to set HAVE_CYSIGNALS = False if building a source distribution (in that case we should rename the directive to USE_CYSIGNALS).
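
A sketch of the smoother alternative, assuming cythonize accepts compile_time_env (variable names here are illustrative, not dd's actual build script):

# in setup.py
import sys
from Cython.Build import cythonize

try:
    import cysignals
except ImportError:
    cysignals = None
# force the directive off for source distributions, so that the
# generated dd/cudd.c never cimports cysignals
building_sdist = 'sdist' in sys.argv
use_cysignals = cysignals is not None and not building_sdist
extensions = cythonize(
    'dd/cudd.pyx',
    compile_time_env=dict(USE_CYSIGNALS=use_cysignals))
# in dd/cudd.pyx, the conditional compilation then reads:
#     IF USE_CYSIGNALS:
#         from cysignals.signals cimport sig_on, sig_off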

Conclusion

In principle, all this would work. Simple is better than complex [PEP 20]. Indeed, if install_requires contains cysignals, then python setup.py install --cudd raises an error:

error: Setup script exited with error: The package cysignals will not function correctly when built as egg. Therefore, it cannot be installed using 'python setup.py install' or 'easy_install'. Instead, use 'pip install' to install cysignals.

However, we cannot install dd.cudd from source using pip (pip installs it only from wheels), due to how --install-option propagates. Even if we could, python setup.py install should remain possible.

Thus, cysignals cannot be included in install_requires, so neither source nor wheel distributions should depend on cysignals. To ensure this, a suitable assertion is needed (as above, but also for bdist), and cleaning before creating the distribution. Adapting the following Makefile rules seems appropriate:

https://github.com/johnyf/dd/blob/dc0b3c3a178064a3eeb1d2a8245bc6635c52860f/Makefile#L15-L17

https://github.com/johnyf/dd/blob/dc0b3c3a178064a3eeb1d2a8245bc6635c52860f/Makefile#L36-L44

To use cysignals, one would pip install cysignals cython (or pip install -r requirements.txt), and then python setup.py install --cudd.

error when linking to CUDD built by user

I've downloaded CUDD-2.5.1 separately and installed it using make. Then, I've downloaded "dd-master" and issued the following command in the terminal:

~/dd-master$ sudo python setup.py install --cudd="/home/cse/cudd-2.5.1"

I got the following error message:

/usr/bin/ld: /home/cse/cudd-2.5.1/cudd/libcudd.a(cuddAPI.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/home/cse/cudd-2.5.1/cudd/libcudd.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

I'm unable to rectify this error. I'm using Ubuntu-16.04, X86_64 machine with Python 2.7.12, gcc version 5.4.0. Kindly help me to install dd-master successfully.
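
A commonly suggested fix is to rebuild CUDD as position-independent code, by adding -fPIC to XCFLAGS. The remaining flags below are the usual cudd-2.5.1 defaults for x86_64, so treat them as an assumption for this machine:

cd /home/cse/cudd-2.5.1
make clean
make XCFLAGS="-fPIC -mtune=native -DHAVE_IEEE_754 -DBSD -DSIZEOF_VOID_P=8 -DSIZEOF_LONG=8"

and then rerun the setup.py command above.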

consider referencing the manager `cudd.BDD` in `cudd.Function.manager`

Currently, the attribute dd.autoref.Function.bdd points to the "low-level" manager dd.bdd.BDD, as indicated by dd.autoref.BDD._wrap.
Similarly, the attribute dd.cudd.Function.manager points to the "low-level" manager cudd/cudd.h/DdManager.
(Also, note the non-uniformity of the attribute name.)

If I recall correctly, the reason was that flat is better than nested. In other words, if Function.manager pointed to the associated python object dd.cudd.BDD, then accessing the associated DdManager would require reading the corresponding attribute of Function.manager. Besides, this is less efficient.

However, not referencing the python manager object within the Function object has some undesired and clumsy consequences:

  • having to del each node defined, in order to avoid assertion errors upon exit, or even segfaults (depending on the underlying C package)

  • need two lines to initialize a node, for example

    f = Function()
    f.init(self.manager, u)

    where self.manager is a low-level manager.

An alternative worth considering is to reference the python object of the corresponding manager within the Function instance. Possibly, also retrieve the underlying "low-level" manager in Function.init and assign its value to some other attribute of Function. This will ensure less clutter in user code. (Unless one writes del to explicitly delete nodes to dereference them, which is an optimization.)

A DdManager is always associated with an existing BDD instance, so the proposed approach doesn't introduce any restrictions.

Initialization

Either way, the initialization of a Function costs the same, because the CUDD manager is obtained after access to an attribute (either BDD.manager or Function.manager). In other words, the current

f = Function()
f.init(self.manager, u)

will be of equal cost as

f = Function()
f.init(self, u)

because the attribute self.manager that is accessed within the caller in the first approach will be accessed within init in the second approach.
Besides, this cost should be negligible, compared to the bulk of the runtime, which is consumed within CUDD BDD operations.
The init method is not visible outside Cython. So, changes to how a Function is initialized are strictly internal.

Currently, there is no way to initialize a Function from "scratch" outside cudd.pyx, because Function.init is a cdef. Suppose that we make the first argument a BDD (instead of a DdManager). Then, we still cannot call Function.init in Python. We must make the second argument a Function (instead of a DdNode). This second change results also from considering symmetry of arguments to the initializer (BDD and Function vs DdManager and DdNode).

We could consider whether it is useful to have the constructor's second argument be a Function.

There is little use for such an init (which could also be __init__ after these changes in argument types) in user space. Usually, we make a Function to wrap a DdNode. So, the reason is that we want to create a Function that wraps a lower-level object, not because we want to copy an existing Function.

(Python offers other interfaces for copying. Also, I've never needed to copy a Function. BDD nodes are treated as immutable objects for almost all purposes. In the context of a specific variable order, this corresponds well to their uniqueness. Therefore, "copying" a Function shouldn't be needed for anything, and a second reference to the same Function instance should suffice.)

From the above, it follows that an initialization function is needed that takes a BDD and a DdNode.
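
A minimal Cython sketch of this direction (the attribute names and the cimported CUDD declarations are assumptions, not dd's actual internals):

cdef class Function:
    cdef public BDD bdd        # the Python manager object
    cdef DdManager *manager    # low-level manager, cached here
    cdef DdNode *node

    cdef init(self, BDD bdd, DdNode *node):
        self.bdd = bdd
        self.manager = bdd.manager  # one attribute access, cached
        self.node = node
        Cudd_Ref(node)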

merge some of `compose`, `rename`, `cofactor`, `evaluate`

These methods are redundant, because the method BDD.compose subsumes the rest.
compose substitutes a given function for a variable (in general, simultaneous substitution).
PEP20: "There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch."

Their differences are in:

  1. signature (contrasted below)
  2. efficiency of algorithm, made possible due to specificity.

In more detail:

  • compose takes a dict that maps each variable name to a BDD node (e.g., as int in dd.bdd).
  • rename substitutes variables for variables.
    It has more convenient interface and more efficient implementation.
    rename takes a dict that maps each variable name to a variable name as str.
  • cofactor substitutes constants for variables.
    cofactor takes a dict that maps each variable name to a bool.
  • evaluate substitutes constants for variables.
    evaluate takes a dict that maps each variable name to a bool.
    evaluate requires that all variables in the support be given a value.
    This restriction is unnecessary.

Relevant discussions where some of these similarities have also been observed are #10 and #11.

It can also be remarked that "rename" is somewhat inaccurate, because the object to be renamed is not the given argument, so the reference is indirect (i.e., it is grammatically incorrect).
"cofactor" is confusing, because it has several meanings, none of which is a non-technical word, and can be mystifying to users less familiar with English.

So, evaluate can be removed, in favor of cofactor.
This leaves rename and cofactor as two different special cases of compose (for projection functions and terminals, respectively), with signature adapted by case (str and bool).

It remains to compare the algorithms of compose, rename, and cofactor, and decide whether there is any difference in efficiency.
If not, then rename and cofactor can be removed, and the signature of compose modified to allow for variables to be mapped to:

  • BDD nodes, or
  • variable names as str, or
  • constant values as bool,

as discussed also below, regarding argument types.
The difference in substituted items can be recognized within compose, at the time of substitution.
In principle, this can allow mixed composition too, for example:

substitutions = dict(x='y', z=True, w=bdd.add_expr(r'u /\ v'))

If there is a difference in algorithm efficiency, then either the functions can be automatically selected from within compose, depending on arguments (PEP 20: nested design, so likely not a good idea), or they be left as separate methods.
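
To make the recognition-by-type concrete, a hypothetical helper (not dd API) that classifies what is substituted for each variable:

def _classify(value):
    # what is substituted for a variable, by type
    if isinstance(value, bool):
        return 'constant'  # the cofactor case
    elif isinstance(value, str):
        return 'variable'  # the rename case
    else:
        return 'node'      # general composition (anything not bool/str)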

A method substitute as alternative to compose

compose is a model-theoretic name.
substitute is a proof-theoretic name, and arguably more elegant.
I prefer "substitute". However, substitute is:

  • more difficult to type on a QWERTY keyboard
  • less readable
  • a longer word
  • less standard as name in BDD software implementations
  • requires the argument mapping to be values |-> keys (to match readably with "substitute b for a"), in contrast with keys |-> values (to match readably with "to compose, map a to b"). The latter reflects the order of operations, and accommodates slightly more linear thinking. It also agrees with how dict and comprehensions are usually described within Python.

than compose. So, it seems that substitute may not be a choice better than compose.

A method map as alternative to compose

map is a simple and widely used name, that corresponds to the underlying mapping that occurs.
However, the mapping of the given node by applying substitution is different than the given mapping of what variable should be substituted with what function.
So, compose seems to more accurately describe the operation.

Too liberal input is dangerous

Besides, untyped arguments are admitted, by checking their bool value.
This can allow bugs where the node dd.BDD.false == -1 is passed as "value" for a variable, but bool(-1) == True, so the result is wrong, without any exception raised.
This inconsistency arises mainly due to the multiple alternatives allowed here and there:

  • bool
  • BDD nodes, which in dd.bdd are int
  • variable names as str
  • variable levels as int

This lack of uniformity is a consequence of incremental refactoring and API changes that accumulated, and should be considered transitory.
By restricting the allowed inputs by type to:

  • bool for convenient truth values
  • str for human-friendly and convenient variable names
  • anything else considered a BDD node (whatever the representation happens to be: int, user-defined class like Function, etc. -- excluding bool and str)

the conflicts and bugs of this sort will be avoided, because:

  • user reference to variable levels is a legacy feature that is deprecated (user should not use levels to identify variables). This removes the int conflict.
  • no casting (explicit or implicit, as in if foo:) to bool
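
To make the pitfall above concrete (assuming dd.bdd, where FALSE is the integer -1, as noted above):

from dd import bdd as _bdd

b = _bdd.BDD()
u = b.false        # in dd.bdd, the node FALSE is the integer -1
assert u == -1
assert bool(u)     # bool(-1) is True: truthiness misreads FALSE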

consider using LRU cache for BDD methods

In dd.bdd.BDD, some memoization happens per function call. The exception is the attribute BDD._ite_table, which persists between calls to BDD.ite. Consider using an LRU cache via a decorator. Python 3 offers functools.lru_cache, and a backport to Python 2 exists. A side improvement will be more readable code. The cache should be cleared during reordering, as done for _ite_table.
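
A minimal sketch of the decorator approach (a toy class, not dd's actual ite):

import functools

class Manager:
    @functools.lru_cache(maxsize=2**16)
    def ite(self, g, u, v):
        # placeholder for the recursive if-then-else computation
        return (g, u, v)

    def reorder(self):
        # memoized results are keyed on nodes, which become
        # stale when levels change, so clear the cache
        Manager.ite.cache_clear()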

Places that would benefit from this change:

Benchmark the change in performance.
Some packages other than the standard library worth considering:

bdd._next_free_int throws exception when using CPython implementation

Arguments to the xrange function are limited to the size of a native C long when using CPython, which can cause the bdd._next_free_int method to throw an exception (e.g., when self.max_nodes is set to the default of sys.maxsize). I suggest using a simple while loop that increments in each iteration until a free node is found, or itertools.count if a generator is preferred.
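
A sketch of the itertools.count variant (here succ stands for the node table, as in dd.bdd's _succ):

import itertools

def next_free_int(succ, start):
    # count upward without materializing a range bounded by sys.maxsize
    for u in itertools.count(start):
        if u not in succ:
            return u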

support mixing of automatic and user-defined level initialization

An incorrect initialization can result if a user initializes some dd.bdd.BDD bits without explicitly passing a level, and some other bits by passing explicit levels. The error arises if:

  1. a variable is placed lower, skipping some levels,
  2. then automated initialization is resumed for the remaining variables, and
  3. the number of variables remaining is sufficient for the levels selected automatically to overlap with the explicitly requested one.

This behavior should be checked and avoided (there are already checks for other exceptions that can arise, for example trying to redefine a variable but at a different level). The simplest check to avoid this is to ensure that if a variable is fresh, then its level is occupied by no other variable.

The culprit is how new levels are picked:

level = len(self.ordering)

This fails when:

>>> import dd.bdd
>>> b = dd.bdd.BDD()
>>> b.add_var('b', level=1)  # skip a level
>>> b.add_var('a')  # mix with automated initialization
>>> b.vars
{'a': 1, 'b': 1}

This observation resulted while discussing tulip-control/tulip-control#117.
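
A sketch of the simplest check described above (a toy class, not dd's actual add_var):

import itertools

class BDD:
    def __init__(self):
        self.vars = dict()  # variable name -> level

    def add_var(self, var, level=None):
        taken = set(self.vars.values())
        if level is None:
            # pick the lowest unoccupied level, not len(self.vars)
            level = next(k for k in itertools.count() if k not in taken)
        # a fresh variable must go to a level of no other variable
        assert level not in taken, (var, level)
        self.vars[var] = level
        return level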

Error with bdd.dump

First, thank you for the excellent library.

I installed dd[dot] and I pasted in the example from https://github.com/johnyf/dd/blob/master/doc.md#plotting to try to get a pdf of my bdd. However, I get an error whenever I call dump for either a pdf or a pickle. I'm using version 0.3.0 but haven't tried any earlier versions.

...
/usr/local/lib/python3.4/dist-packages/dd/bdd.py in descendants(self, roots)
    238         Nodes are represented as positive integers.
    239         """
--> 240         for u in roots:
    241             assert u in self, u
    242         nodes = set()

TypeError: 'NoneType' object is not iterable

Any thoughts on this?

One additional question out of curiosity. I noticed in the docs you say "Some less frequently used BDD methods are...evaluate...". I was wondering why evaluate isn't used very often. Since BDDs are basically binary functions, wouldn't people want to evaluate them? I first noticed this in the pyeda library that is missing this feature completely.

Method for obtaining the paths from leaves to root

Hi @slivingston @johnyf , great lib, especially interested in dd.

I was trying to find or add a method for discovering all paths from "true" leaves all the way up to the root (sometimes called cutsets). For example, see the attached:

https://github.com/glarange/BDD/blob/master/fig_5_.paths.pdf
https://github.com/glarange/BDD/blob/master/fig_4_Paths.pdf

Most BDD applications I've seen are related to circuit design or mapping. Cutsets have applications in reliability theory and probabilistic contingency analysis (for nuclear reactors, the electric grid, etc.). Once you find all paths, you can assign probabilities to nodes, and finding the exact probability of the top event (root), or failure, becomes trivial.

First step would be to find parent nodes ending in "true" leaves. I've indicated such nodes in fig 4 and 5 above.

Is there a method in dd that does something similar? Or if not, could you point me in the right direction to add such a method?

Thanks,
Gui
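
Not a cutset algorithm, but a possible starting point: enumerating satisfying assignments, each of which follows a root-to-TRUE path (a sketch, assuming a dd version that provides pick_iter; the expression is illustrative):

from dd import autoref as _bdd

bdd = _bdd.BDD()
bdd.declare('x', 'y', 'z')
u = bdd.add_expr(r'(x /\ y) \/ ~z')
for assignment in bdd.pick_iter(u):
    print(assignment)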

decide what `full` and `care_bits` mean

dd enumeration API

In dd.bdd.BDD.sat_iter and dd.cudd.BDD.sat_iter the meaning is:

  • if not full, then return assignments as they result from traversing the BDD. Depending on the variable order, this can yield varying results
  • if full, then:
    • if care_bits is None, then return assignments to all the variables of the BDD manager (usually this isn't what we want)
    • else assert support \subseteq care_bits

So, full says whether full assignments are desired.
care_bits matters only if full is given, and is explicit, in that it specifies exactly over which variables to enumerate.

However, there is little point in providing fewer care_bits than support, because that is guaranteed to raise an AssertionError about missing variables.

In particular, care_bits = set() is useless, because why would you enumerate a non-constant BDD over no variables?

Another point is whether care_bits should be used as an upper or lower bound. Upper bound means "at most these bits". Lower bound means "at least these bits".

Perhaps a more reasonable alternative would be to enumerate over the union support \cup care_bits, and so avoid both checking whether support \subseteq care_bits, as well as the need to raise any exceptions. (We could consider allowing even variables that are not in the symbol table, though that would work only in BDDs, not for variables that take integer values.)

Removing the exception leaves that case free for a new meaning to be chosen. It may be worth considering letting full=False, care_bits={...} mean that care_bits should always be assigned, and any missing vars in support be assigned only along BDD paths on which they appear.

Another problem is that support \subseteq care_bits is checked whenever care_bits is not None, even when full = False. Consider changing this behavior in dd.

remove Python 3.4, add Python 3.7 and Python 3.8

remove support for Python 3.4 because

Python 3.4 has now reached its end-of-life and has been retired. No more releases will be made.

(https://www.python.org/dev/peps/pep-0429/)

also, add support for Python versions 3.7 and 3.8 (released earlier this month; https://www.python.org/dev/peps/pep-0569/)

to complete this issue, perform the following tasks:

  1. update CI testing to remove or add Python versions accordingly
  2. update the list of Python versions in setup.py (see the sketch below)
  3. open new issues if errors are discovered against Python 3.7 or 3.8
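
For task 2, the classifier list in setup.py might look like this (the exact set of supported versions is an assumption):

# in setup.py (sketch)
from setuptools import setup

setup(
    name='dd',
    classifiers=[
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
    ],
)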

consider renaming `sat_iter` to `pick_iter`

Following the TLA+ proof syntax PICK x: P(x). A method pick has been added in a2dfd2a, following the example of omega.symbolic.fol.Context.pick, which includes a method pick_iter. The pick method is only a convenience wrapper around pick_iter.

Motivation:

  • "pick" is a verb, so compliance to standard practice (use verbs for functions/methods that modify program state)
  • "sat" is not a word
  • sat_iter isn't readable
  • sat_iter contains an underscore
  • "pick" emphasizes the operation's metatheoretic nature, reminding that the returned entity is an assignment as dict, not a BDD (syntactic entity)
  • CUDD prefers this naming choice too: Cudd_bddPickOneCube and Cudd_bddPickOneMinterm.

unreachable: ftp://vlsi.colorado.edu/pub/cudd-3.0.0.tar.gz

The download location for CUDD is unreachable, raising a timeout error during tests on Travis CI (https://travis-ci.org/johnyf/dd/jobs/351104140#L862), when calling urllib.request.urlopen:

https://github.com/johnyf/dd/blob/c07b6cf0edec1b4caa0cff42c0f9b6bbd12d786f/download.py#L154

To ensure stability of tests and package installation, fetch the same (compressed) tarball from a mirror location, for example: https://sourceforge.net/projects/cudd-mirror/files/cudd-3.0.0.tar.gz/download
The tarball hash will remain unchanged:

https://github.com/johnyf/dd/blob/c07b6cf0edec1b4caa0cff42c0f9b6bbd12d786f/download.py#L34-L36

The download location of CUDD does not affect the installation from PyPI of dd >= 0.5.3 on Linux environments with Python 3.5 or 3.6, because wheel files containing a compiled CUDD are available from PyPI.
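
A sketch of fetching from the mirror while keeping the existing hash check (the sha256 value stays the one already recorded in download.py):

import hashlib
import urllib.request

CUDD_MIRROR = ('https://sourceforge.net/projects/cudd-mirror/files/'
               'cudd-3.0.0.tar.gz/download')

def fetch(url, sha256, fname='cudd-3.0.0.tar.gz'):
    with urllib.request.urlopen(url) as u:
        blob = u.read()
    # verify the tarball is byte-identical before writing it out
    digest = hashlib.sha256(blob).hexdigest()
    assert digest == sha256, (digest, sha256)
    with open(fname, 'wb') as f:
        f.write(blob)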

Reachability Example not working

Thank you for the great package.

I encountered the following exception:

bash-4.4$ pwd
/home/weigl/dd-0.5.1/examples
bash-4.4$ python2 reachability.py 
Traceback (most recent call last):
  File "reachability.py", line 56, in <module>
    q = least_fixpoint(transitions, bdd)
  File "reachability.py", line 44, in least_fixpoint
    next_q = _bdd.rename(q, bdd, prime)
  File "/home/weigl/.local/lib/python2.7/site-packages/dd/autoref.py", line 237, in rename
    r = _bdd.rename(u.node, u.bdd, dvars)
  File "/home/weigl/.local/lib/python2.7/site-packages/dd/bdd.py", line 1536, in rename
    assert abs(u) in bdd, u
  File "/home/weigl/.local/lib/python2.7/site-packages/dd/autoref.py", line 45, in __contains__
    assert self is u.bdd
AttributeError: 'int' object has no attribute 'bdd'

dd version: 0.5.1, installed via pip
Python: 2.7.13 on a Fedora 26

Cudd pick_iter reordering

The current implementation of pick_iter in the cudd wrapper sets reordering to true before terminating, instead of restoring the original configuration.
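
The intended behavior, sketched with dd.cudd.BDD.configure (assuming configure() with no arguments returns the current settings as a dict):

from dd import cudd

bdd = cudd.BDD()
old = bdd.configure()['reordering']  # remember the original setting
bdd.configure(reordering=False)
try:
    pass  # ... enumerate, without reordering interleaved ...
finally:
    bdd.configure(reordering=old)    # restore, instead of forcing True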

Showing false value sets as true in diagram

I have used the solution for the queens problem from the "examples" folder in order to draw a png graph and use it in another program, but

  1. there is only one terminal node "True"
  2. going lower down the decision diagram, we can find false value sets, which seem to ignore the only rule (concerning at least one queen in every row)

Attached is an image produced with the code after minor changes.
On the image, variables x11-x22 indicate all variables created.

It seems the problem is solved only for n > 4, but I am not confident.

plot nodes that represent external references

Wouldn't the rule that "only an "else" branch can be negated, signified by a -1 annotation" handicap the power to form BDDs for certain clauses?
For instance, the BDDs for

p cnf 1 1
1 0

and

p cnf 1 1
-1 0

are shown to be the same by this tool, which is obviously not the case; it seems the former example is incorrectly drawn by dd.
new.pdf

Assignment of BDD/ ZDD

  1. If I use from dd.cudd_zdd import ZDD as BDD at the beginning of my code, does that mean all subsequent computations will calculate ZDDs? Will even bdd.dump dump a ZDD?

  2. Here is my code:

declaredvariables = ['a', 'b', 'c']
# in an iteration:
declaredvariables[index] = bdd.add_expr(expression)

At the very end, I want to calculate both a combined BDD and ZDD, so:

combined = a & b & c
bdd.dump(' ', combined)
zdd.dump(' ', combined)

However, I see that after executing the first part of the code, all entries declaredvariables[index] are changed to some numbers like @94046918771 @94046918667 @9404698931.

If I do this, will it work:

combined = declaredvariables[0] & declaredvariables[1] & declaredvariables[2]

Setting memory limits

This is less of an issue and more of a documentation refinement.
It has been said "BDD.configure to read and set the parameters “max memory”, “loose up to”, “max cache hard”, “min hit”, and “max growth”." This is followed by tips to set "memory_estimate".

It would be great if you/someone could provide definitions for all these settings. In particular, memory_estimate and max memory are quite confusing, as both appear to mean the same thing.
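
For what it's worth, my reading of the two settings, as a sketch (treat the semantics stated in the comments as assumptions, not documentation):

from dd import cudd

GiB = 2**30
# memory_estimate: initial allocation hint passed to CUDD
# when the manager is created
bdd = cudd.BDD(memory_estimate=1 * GiB)
# max_memory: the ceiling that CUDD may grow to later
bdd.configure(max_memory=4 * GiB)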

the graph function is not equal to the original function

The function I want to express in the dd package is "!x & y & z", and the following is my code.

from dd import autoref as _bdd

bdd = _bdd.BDD()
bdd.declare('x', 'y', 'z')

u = bdd.add_expr('(!x) & y & z')
bdd.collect_garbage()
bdd.dump('a.png')

I get the attached graphic.

According to the rules, I deduce the function "x | !x & !(y&z)" from the graph function, and it is not equal to the original function "!x & y & z".
What did I do wrong during this process?

setuptools installs release candidates, unlike pip

The recommended way to fetch and compile CUDD and its Cython bindings is python setup.py install --fetch --cudd. This uses setuptools for installing dependencies, so it allows release candidates, as it happened with networkx == 1.11rc2. In contrast, pip does not allow release candidates by default. This inconsistency can cause confusion, especially between local and CI builds. Fix the behavior of pip --install-option to avoid running python setup.py directly.

Returning BDDs from function

Hi, I'd like to use dd in an experimental model checker, but I keep having an issue with memory management in Python. Namely, a method of bigClass like this:

def next(self):
    setBDDencoding = self.manager.true
    #... compute stuff
    return setBDDencoding

used like this:

niceBDD = oldBDD | bigClass.next()

throws:

Exception ignored in: <bound method BDD.__del__ of <dd.bdd.BDD object at 0x7f14e009d7f0>>
Traceback (most recent call last):
  File "/home/me/.local/lib/python3.6/site-packages/dd/bdd.py", line 183, in __del__
AssertionError: {1: 7, 10: 2, 16: 1, 17: 1, 24: 1, 25: 1, 28: 1, 29: 1}

I'd appreciate some suggestions how to deal with it.

A note: the error is not reproduced under dd.cudd.

Same code works with pure python (dd import autoref as _bdd) but not dd.cudd

From what I understand, I may not be using bdd.add_expr correctly, or it might be the naming of the bdd:

This is an example line:

expression = '!a&!b&c&d | !a&b&!c&d'

a = bdd.add_expr(expression)  # the error occurs here
b = bdd.add_expr(expression)
z = a & b

I have newly installed the comprehensive CUDD version, as I need to work with ZDDs, and I am working in the Spyder IDE. This is the error I get when I try to run the same code that works with the pure Python version: https://ibb.co/yWbfRtW

Any advice please?

consider renaming `add_var` to `declare`

Methods and functions that change state should preferably have a verb as name. "add_var" isn't a verb. It's not even an English word. To what are we "adding"?

Strictly speaking, we are extending a formal system by a definition of a nullary operator name. We call a nullary operator a "variable". In any given interpretation, the operator is applied to no arguments, yielding a value (for variables in BDD.vars, it so happens that we always reason about models that assign to the variables values in the set BOOLEAN in TLA+). For example, the operator "x" is applied to yield "x()". For brevity, we never write the empty parentheses.

In Python, "add" is used mostly for unordered containers, with set as a prototype. I named the method add_var after networkx conventions. In networkx though, adding nodes and edges makes more sense for a graph as a collection of containers. It is awkward to conceptualize a BDD as a container of variables.

Practicality:

  • on widely used keyboard arrangements, add_var involves pressing SHIFT + -
  • declare is shorter than add_vars (the plural)

Readability:

  • declare reads more naturally: declare('x', 'y', 'z') reads "declare [the variables] x, y, z"
  • declare is a little ambiguous as to what we are declaring. But a little familiarity with programming should suffice. Besides, after one reads the docstring once, they don't have to repeat "vars" throughout their code. Forgetting that variables are "declared" is a little difficult.

obtain SSL certificates with Python >= 3.6 on macOS

Running python setup.py install --fetch --cudd with Python 3.6 (built from source) on OS X can encounter the following error due to absent SSL certificates:

urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:748)>

Solution:

Other approaches (with reference to dc0b3c3):

  • Use certifi to supply CA certificates to urlopen:

diff --git a/download.py b/download.py
index 81a228c..711e279 100644
--- a/download.py
+++ b/download.py
@@ -12,6 +12,7 @@ except ImportError:
     import urllib.request, urllib.error, urllib.parse
     urllib2 = urllib.request

+import certifi
 try:
     import cysignals
 except ImportError:
@@ -152,7 +153,7 @@ def _join(paths):

 def fetch(url, sha256, fname=None):
     print('++ download: {url}'.format(url=url))
-    u = urllib2.urlopen(url)
+    u = urllib2.urlopen(url, cafile=certifi.where())
     if fname is None:
         fname = CUDD_TARBALL
     with open(fname, 'wb') as f:
  • Manually download cudd-3.0.0.tar.gz to the setup.py directory (e.g., from SourceForge), unpack, and run make with the appropriate flags. The flags are listed inside download.py, so a shortcut after unpacking is:

    python -c 'from download import make_cudd; make_cudd()'
    python setup.py install --cudd
    
  • The tarball hash is checked anyway, so bypassing SSL certificate verification inside download.py is an (unrecommended) option.

diff --git a/download.py b/download.py
index 81a228c..b89b2d4 100644
--- a/download.py
+++ b/download.py
@@ -3,6 +3,7 @@ import ctypes
 import hashlib
 import os
 import shutil
+import ssl
 import subprocess
 import sys
 import tarfile
@@ -152,7 +153,8 @@ def _join(paths):

 def fetch(url, sha256, fname=None):
     print('++ download: {url}'.format(url=url))
-    u = urllib2.urlopen(url)
+    context = ssl._create_unverified_context()
+    u = urllib2.urlopen(url, context=context)
     if fname is None:
         fname = CUDD_TARBALL
     with open(fname, 'wb') as f:

invoke reordering dynamically

As of 355fc6b, reordering with Rudell's sifting can be invoked by the user. It is not triggered dynamically by changes in the size of the manager. This can likely be achieved simply by making the computation re-entrant and wrapping it with a decorator that manages the reordering and reentry.
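
A sketch of such a decorator (the exception and the re-entry logic are assumptions, not dd code):

import functools

class _NeedsReordering(Exception):
    """Raised when the manager grows past a size threshold."""

def reorder_on_growth(func):
    @functools.wraps(func)
    def wrapper(bdd, *args, **kwargs):
        try:
            return func(bdd, *args, **kwargs)
        except _NeedsReordering:
            bdd.reorder()  # Rudell's sifting, as above
            return func(bdd, *args, **kwargs)  # re-enter once
    return wrapper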

best links for CUDD ?

The original reference URL for CUDD, http://vlsi.colorado.edu/~fabio/CUDD/, is no longer valid. Does anyone know of a preferred alternative?

There are various copies of the CUDD API documentation and code hosted around the Web, but I did not find one that is ordained by Fabio (the original author of CUDD).

One idea is to link to the most recent snapshot of http://vlsi.colorado.edu/~fabio/CUDD/ on the Internet Archive,
https://web.archive.org/web/20180127051756/http://vlsi.colorado.edu/~fabio/CUDD/html/index.html, which is just the Doxygen-generated API documentation for CUDD v3. This might be usefully supplemented by a link to the documentation as of version 2.5, which has a more welcoming introduction at
https://web.archive.org/web/20150317121927/http://vlsi.colorado.edu/~fabio/CUDD/node1.html

dump BDDs from CUDD to JSON

Dump BDDs from dd.cudd.BDD to a JSON file, and load them too. A two-stage approach seems most promising:

  1. dump to a shelf file using shelve
  2. iteratively load the shelf file, and dump it iteratively to a JSON file.

The first step uses the shelf as a cache for the depth-first traversal (typically implemented recursively) of the BDD to find which nodes to dump. We cannot know which nodes to dump without first finding those reachable from the roots we want to dump. Naively, we could traverse the BDD and so dump all reachable nodes. Due to a BDD's structure, this can result in an exponential amount of duplicate work.

This is why visited nodes are memoized, which corresponds to maintaining a set of "visited" nodes in a graph search. In other algorithmic applications, one would simply mark the visited nodes as visited. In CUDD, there isn't space for "marking" the nodes. We could instead add the visited nodes to a separate "set" in main memory. In demanding use cases, CUDD fills most of the main memory, so this isn't possible, because it would essentially duplicate within main memory the BDD we want to save.

I think that DDDMP approaches this problem by removing and adding nodes to the unique table. I consider this undesirable, because it affects the existing cache (hashing information, repeatability, etc.). A dumping operation is extraneous to the BDD manager, so it shouldn't have side effects on the manager.

Main memory cannot serve for storing "visited" information during traversal without interfering with CUDD, but there's the disk. Nowadays, the disk is vastly larger than main memory. So, why not use the disk to store the "visited" status of each node? (For example, the enumerative model checker TLC uses the disk to store the state space.)

Even better, since all we want to do is dump to the disk, why not use the target file itself as the store of "visited" information? The only challenge is a dict-like interface for quickly checking containment of nodes in the file. The shelve module from Python's standard library provides exactly this interface.

The second step is just a conversion of the entire shelf file to a JSON file. In other words, the first step identifies and isolates the information that we want to store, and the second step puts this information in the target file format.
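
A sketch of step 1, where the shelf itself records which nodes are visited (successors is an assumed helper returning (level, low, high), with low being None at terminal nodes):

import shelve

def dump_bdd(roots, successors, path):
    with shelve.open(path) as shelf:
        stack = list(roots)
        while stack:
            u = stack.pop()
            key = str(abs(u))  # shelve keys must be strings
            if key in shelf:
                continue       # visited: already stored on disk
            level, low, high = successors(u)
            shelf[key] = (level, low, high)
            if low is not None:
                stack += [low, high]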

ijson seems suitable for step 2. Compared to json-streamer, ijson is preferred for:

  • a simpler API
  • several backends, including a pure Python one (both packages are based on the C library yajl)
  • apparently wider adoption, which usually reflects experience
  • more years in development.

Previous comments that are relevant:

undeclaring unused variables

I propose to write a method for the dd.bdd.BDD class that deletes unnecessary variables.

This is how I have written it:

    def prune(self):
        """Remove variables not of importance to BDD."""

        # Clean garbage
        self.collect_garbage()
        levels = self._levels()
        empty_levels = []
        full_levels = []

        i = 0
        trans_lev = dict()
        inv_lev = dict()
        lev = 0
        for lev in sorted(levels):
            if levels[lev] == set():  # for empty set
                empty_levels += [lev]  # unneeded level
            else:
                full_levels += [lev] # needed level
                trans_lev[lev] = i
                inv_lev[i] = lev
                i += 1
        # created dictionary trans_lev: old_level => new_level
        # created dictionary inv_lev: new_level => old_level
        trans_lev[lev+1] = i

        inv_lev[i] = lev+1
        print(trans_lev)
        print(inv_lev)
        empty_vars = [self._level_to_var.pop(emp_lev) for emp_lev in empty_levels]
        # list with empty vars (same order as empty_levels)
        # remove empty levels from self._level_to_var

        # start changing the BDD
        for lev in full_levels:
            var = self._level_to_var.pop(lev)  # remove old level
            self.vars[var] = trans_lev[lev]  # new level
            self._level_to_var[trans_lev[lev]] = var  # new level to var
        for e_vars in empty_vars:
            self.vars.pop(e_vars)

        for u in self._succ:
            if u != 1:  # the last node is not saved in self._pred
                self._pred.pop(self._succ[u])
                self._pred[(trans_lev[self._succ[u][0]], self._succ[u][1], self._succ[u][2])] = u
            self._succ[u] = (trans_lev[self._succ[u][0]], self._succ[u][1], self._succ[u][2])
        # self._pred = dict()  # (i, low, high) -> u
        # self._succ = dict()  # u -> (i, low, high)

        return empty_vars

First, the levels of the BDD are checked and the empty levels are remembered. A mapping from the old to the new levels is created. Variable names of empty levels are removed. Then the levels in the dictionaries _succ and _pred are changed.

dddmp support_vars

Support variable names are an optional field in the headers of dddmp 2.0 files. As of now, dddmp.py will crash at lines 267 and 202-203.

Vector Compose Algorithm?

I wonder which algorithm is used for the implementation of vector compose. Are you aware of any results on the time and space complexities of this operation?

consider removing the argument `bdd` of `cudd` and `autoref` functions

At the low level (dd.bdd, and within CUDD), edges (signed nodes) are simply integers. They do not contain information that identifies the manager to which they are associated. Care is required to avoid using a node from one manager with another manager (because the same integer can appear in two different managers). Therefore, the manager should be an argument for functions in the dd.bdd module (and is passed around as DdManager * within CUDD). (Methods obtain the same information from the argument self.)

At higher levels, this isn't needed. A cudd.Function references the manager to which it belongs (#13), so functions in the module cudd can obtain this information from the Function object they get as argument. So, a separate BDD argument is unnecessary. This is the case for autoref functions too.

With this change, calls to functions from these two modules will be simplified a little.
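
For illustration, with the rename call from the autoref module (the simplified signature is the proposal, not current dd):

# current (the manager passed explicitly):
v = _bdd.rename(u, bdd, dvars)
# proposed (the manager read from u.bdd):
v = _bdd.rename(u, dvars)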

interpretation of BDD graph plot

When plotting a simple bdd via dump, I get an inconsistent graph.
Expression: "x /\ y /\ ~z"
Output graph is attached.
sat_iter returns the right decision set dict(x=True, y=True, z=False), but the graph (attached) looks wrong to me.

out.pdf

Code to reproduce:

>>> from dd import autoref as _bdd
>>> bdd = _bdd.BDD()
>>> [bdd.add_var(k) for k in "xyz"]
>>> u = bdd.add_expr("x /\ y /\ ~z")
>>> bdd.collect_garbage()
>>> bdd.dump("out.pdf")

Subsumption check "bdd1 \subseteq bdd2"

hi! this is a question not an issue:

is there a way to check whether one BDD node is subsumed by another?
Why: to check in the fixpoint computation if the initial state is in the winning region.

In pure quantification and boolean operators this can be implemented as:

def is_subset_of(x,y):
    return (forall{vars} (x=>y)) == BDD.TRUE

But maybe there is a more efficient way in dd to do this?
E.g., CUDD has Cudd_bddLeq, which seems to be the subsumption check.
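
One observation, as a sketch: since x => y is either valid or not, the quantification can be dropped, because forall over all variables of a formula yields TRUE iff the formula's BDD is already the TRUE node:

def is_subset_of(x, y, bdd):
    # ~x \/ y is TRUE everywhere iff x is subsumed by y
    return (~x | y) == bdd.true

Also, if I recall correctly, dd.cudd.Function overloads comparisons, so x <= y performs this check via Cudd_bddLeq.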
