ssoelvsten / adiar

An I/O-efficient implementation of (Binary) Decision Diagrams

Home Page: https://ssoelvsten.github.io/adiar/
License: MIT License
Currently, the Quantification and the ZDD subset functions take a `label_file` as input. This works, but it feels quite clunky. One can keep this interface, but also provide more general and easier-to-use functions.

Provide a predicate, which is a function of type label → bool. Common examples of such a predicate would be odd, even, eq(arg), and lt(arg) / gt(arg).
bdd_exists
bdd_forall
zdd_project
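A sketch of what such a predicate interface could look like; the names, the `std::function` representation, and the width of `label_t` are all assumptions rather than Adiar's actual API:

```cpp
#include <cstdint>
#include <functional>

// Hypothetical predicate type: maps a label to whether it should be quantified.
using label_t = uint32_t;
using label_predicate = std::function<bool(label_t)>;

// A few common predicates (names are suggestions only).
inline label_predicate odd()           { return [](label_t l) { return l % 2 == 1; }; }
inline label_predicate even()          { return [](label_t l) { return l % 2 == 0; }; }
inline label_predicate eq(label_t arg) { return [arg](label_t l) { return l == arg; }; }
inline label_predicate lt(label_t arg) { return [arg](label_t l) { return l < arg; }; }
inline label_predicate gt(label_t arg) { return [arg](label_t l) { return l > arg; }; }
```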
Currently we always use a single bucket for the levelized priority queue. Experimentally, this provides a performance increase of 25% or more for reasonably large instances. Yet, bucketing should not work well in practice if the levels are very narrow; in that case, a normal priority queue would work better.

In #98, #168, and #165 we derive estimates of the maximal size of the priority queue. This is also an upper bound on the size of each sorter in the levelized priority queue. If this estimate falls below some specific ε value, then we should use a non-bucketed version instead.
Implement a non-bucketed levelized priority queue as a separate template class. This one should have a similar public interface to the bucketed version we currently provide.
This also needs to be thoroughly unit tested to have similar behaviour to the bucketed one.
The question is: what value of ε should be used?
Another question is: can we not do better than just reusing the above bounds? Can we weigh that value against one or more of the following?
Currently the code is littered with statements like this:

```cpp
#if COOM_DEBUG
print_node(node)
#endif
```

which quickly gets unwieldy. Maybe we should move the `#if COOM_DEBUG` inside the print functions and place them in a `debug` namespace (inside the `coom` namespace), such that the line becomes

```cpp
debug::print_node(node)
```

Then the C++ compiler should be able to simply remove the unnecessary function call if its body is empty.
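A sketch of the proposed `debug` namespace, with a stand-in node type (the printing format is made up):

```cpp
#include <cstdint>
#include <iostream>

struct node_t { uint64_t uid, low, high; }; // stand-in for coom's node type

namespace coom {
  namespace debug {
    // With COOM_DEBUG undefined (assumed to be the production default), the
    // body is empty, so an optimising compiler can remove the call entirely.
    inline void print_node(const node_t &n)
    {
#if COOM_DEBUG
      std::cerr << n.uid << " -> (" << n.low << ", " << n.high << ")\n";
#else
      (void) n; // avoid an unused-parameter warning
#endif
    }
  }
}
```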
The type `uint16_t` only occupies 16 bits, whereas `uint64_t` occupies 64 bits. Since we only use 16 bits for the label anyway, we may as well reduce the size of `label_t`, which will impact the size of the meta file and the stack; the latter is probably not too important.

This was changed to `uint32_t`, which occupies 32 bits, as discussed in issue #40 and changed in pull request #107. The nice thing about 16 bits was that the type/compiler could directly help with overflows etc.
Following up on #16, we could also clean up the code heavily by moving the `COOM_ASSERT` compiler flag behind a `forceinline` function. As it is inlined, the compiler is able to remove all computations related to these assertions in the production version.
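A sketch of hiding the `COOM_ASSERT` flag behind an inline function (names are assumed; `forceinline` is spelled differently per compiler, so plain `inline` is used here):

```cpp
#include <cstdio>
#include <cstdlib>

// With the COOM_ASSERT flag undefined, the body is empty and the compiler
// can strip the whole check at the call site.
inline void coom_assert(bool cond, const char *msg)
{
#if COOM_ASSERT
  if (!cond) { std::fprintf(stderr, "%s\n", msg); std::abort(); }
#else
  (void) cond; (void) msg;
#endif
}
```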
We already provide a dot output that can be used before and after reduction, and most debugging anyway uses specifically added console logging rather than anything standard. So, we might as well remove the complexity and clean up the code.
Currently, the Apply algorithm treats being given the same underlying `node_file` twice the same way as being given two different files. Since the refactor, we have begun returning the same file in multiple places, so we would gain from identifying and special-casing this situation.

One will notice that the result is at this point independent of the given BDD, except for whether an odd number of the two inputs have been negated:

- If no or both inputs are negated, then the sinks will always be the same at the end of recursion. The cases to check are (T,T) and (F,F).
- If one of the inputs is negated, then the sinks will always be different. The cases to check are (T,F) and (F,T).

All of these cases can only result in one of four outcomes. Checking all of this takes only O(1) work, and all of it would be hidden behind a cheap `node_file` pointer comparison guard.
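The O(1) resolution can be sketched as follows, with our own illustrative result encoding (not Adiar's):

```cpp
#include <functional>

using bool_op = std::function<bool(bool, bool)>;

// When both inputs share the same node_file, the recursion only ever reaches
// the sink pairs (T,T)/(F,F) (equal negation flags) or (T,F)/(F,T) (differing
// flags). Evaluating `op` on those two pairs decides the whole Apply.
enum class apply_result { FALSE_SINK, TRUE_SINK, INPUT, NEGATED_INPUT };

inline apply_result apply_same_file(const bool_op &op, bool negate1, bool negate2)
{
  const bool flip = (negate1 != negate2);
  const bool on_true  = op(true,  flip ? false : true);   // sinks (T,T) or (T,F)
  const bool on_false = op(false, flip ? true  : false);  // sinks (F,F) or (F,T)

  if (on_true == on_false) { return on_true ? apply_result::TRUE_SINK
                                            : apply_result::FALSE_SINK; }
  // Result behaves like the first input as given (or its negation).
  return on_true ? apply_result::INPUT : apply_result::NEGATED_INPUT;
}
```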
As pointed out by @AnnaBlume99, in Logic in Computer Science the low edge is dashed while the high edge is solid. The DOT output has it the other (wrong) way around. Double-checking with the papers of Tom van Dijk, the same is the case there.
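For reference, the intended convention in DOT would look as follows (a hypothetical snippet, not Adiar's actual output code):

```dot
digraph bdd {
  n1 [label="x0"];
  n2 [label="x1"];
  t0 [shape=box, label="0"];
  n1 -> n2 [style=dashed];  // low edge: dashed
  n1 -> t0 [style=solid];   // high edge: solid
}
```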
Many pieces of software that use an OBDD make use of the CUDD package. To better interface with these and to ease transitioning to this library (for whatever reason anyone would like to do so), one might want to also provide the same interface for COOM. This is most likely best done with an adapter.
Our current examples do not exhibit this problem, but the `bdd_pathcount` and `bdd_satcount` functions can potentially output values much larger than 2^64 - 1. There are three possible solutions:
1. Add the option to use `double` instead of `uint64_t` for a "non-exact" output.
2. Use a fixed-width unsigned integer that is `std::trivially_copyable`, e.g. with a fixed width of 1024 bits. This effectively postpones the problem to 2^1024 paths and satisfiable solutions.
3. Use `boost::cpp_int` or `boost::gmp_int`. These support an arbitrary number of bits, which means that, unlike (1) and (2), we do not just delay when the problem happens.
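A minimal self-contained illustration of the overflow and of option (1) using `double` (a sketch, not Adiar's implementation):

```cpp
#include <cmath>
#include <cstdint>

// The number of assignments over n variables is 2^n, which overflows
// uint64_t already at n = 64. A double keeps going, at the cost of only
// being exact up to 2^53.
inline bool fits_uint64(unsigned n) { return n < 64; }

inline double approx_assignments(unsigned n)
{
  return std::ldexp(1.0, static_cast<int>(n)); // 2^n as a double
}
```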
To this end, we need to create custom serialize and deserialize functions for these variable-length entities. TPIE already supports a lot of this out-of-the-box when it comes to `tpie::file_stream` and `tpie::merge_sorter`, but not (yet) their `tpie::priority_queue`. So, we would first need to add support for such non-trivial elements in a priority queue. Note that `tpie::merge_sorter` already supports variable-sized elements.

The `quantify(const bdd&, const label_file&)` functions are good for when the argument is an lvalue and should be kept. But if the value is the result of some other computation (i.e. is an rvalue), then we can optimise the garbage collection a bit by accumulating onto the input bdd rather than the new variable `out`.
Currently we have memory-based computations in three places:
The last two even contain some code duplication. So instead, I would like an adiar/memory.h file that has these functions:
- `void set_limit(size_t memory_limit_bytes)` from adiar/adiar.h.

Functions for `tpie::merge_sorter` (alternatively, these can be made static functions on the wrapper classes described in #98):

- `merge_sorter_min()`: Returns a `struct { size_t phase_1; size_t phase_2; size_t phase_3; }` that provides the minimal amount of memory for each phase. The `phase_1` and `phase_3` values are to be copied from the `m_single_block()` function in adiar/levelized_priority_queue.h and the `sorter_phase_1` variable in adiar/reduce.cpp. The `phase_2` value should be twice that.
- `merge_sorter_parallel(size_t memory_bytes, size_t sorters = 1, size_t max_elem = UINT_MAX)`: Derives the best values to use for `sorters` many `tpie::merge_sorter`s that should exist in parallel and share some memory (e.g. with the levelized priority queue and in the Reduce algorithm). The `max_elem` argument may be skipped until we look at #152.

Functions for `tpie::file_stream`:

- `file_stream_memory_usage()`: Returns the memory usage of a `tpie::file_stream` with `T` typed elements. This is necessary to implement #98.

This file should probably only be used internally but then be called from adiar/adiar.h. Maybe a new nested namespace `adiar::memory` is needed to resolve name conflicts; that is, if we want to provide an `adiar::set_limit` function in adiar/adiar.h that calls something in adiar/memory.cpp.
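A sketch of what adiar/memory.h could look like; the names follow the issue, but the concrete values are placeholders rather than TPIE's real numbers:

```cpp
#include <cstddef>

namespace adiar {
  namespace memory {
    struct merge_sorter_memory { std::size_t phase_1, phase_2, phase_3; };

    // Placeholder: phase 1 and 3 get a single block, phase 2 twice that.
    // The real values must come from TPIE / m_single_block().
    inline merge_sorter_memory merge_sorter_min(std::size_t single_block_bytes)
    {
      return { single_block_bytes, 2 * single_block_bytes, single_block_bytes };
    }

    // Evenly share the given memory between the sorters alive in parallel.
    inline std::size_t merge_sorter_parallel(std::size_t memory_bytes,
                                             std::size_t sorters = 1)
    {
      return memory_bytes / sorters;
    }
  }
}
```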
Adding the level size in the meta file provides multiple benefits
Also, should the meta file maybe be renamed to level file?
The current set of GitHub actions try to help ensure the correctness of the BDD library. To this end, we may also want to look into one or more of the following.
Unit test coverage: CodeCov is free to use for open source projects and works easily with the lcov reports one can generate.
Code checkers: Examples could be SonarQube, but it requires a server of our own to run the software.
Other things I've looked at already.
Codacy looks to be easy to get started with and has a lot of things it complains about. Quite a few of them I recognise from Cppcheck. One should double check what options are available first.
Qodana looks promising, but does not (yet) support C++.
SoftaCheck does not work well, since it does not think to include submodules. I got 247 errors, most of which were due to the Bandit framework not being in scope.
We should have a simple action for each OS, such that we can check whether it truly compiles on most common configurations. One can copy over most of it from the Actions on my programming challenge repository.
Benchmark change in performance: whenever a pull request changes a prior algorithm, I have to manually run a benchmark to gauge whether there was any dangerous slowdown in performance. The only preexisting Action I have found is Benchmark Action.

But it is not sufficient for my case: to minimise the possible error in the results, I have to run the same benchmark on the main and the feature branch interchangeably. The measurements are so precise that we cannot use numbers obtained on a different machine several months ago. Furthermore, the disk/CPU of the computer may slow down unexpectedly while it is running. The relative performance difference only makes sense for a computer in the same state. So, I run my experiments as follows:
- Skip jobs for unrelated changes. (Added with #176)
- Cancel workflow actions: this is useful when force-pushing to fix some minor things on a pull request. For example, look into using this GitHub Action. (Added with #117)
- Use of the Release Drafter action to automate/keep track of the changes since the last release. (Added in #128)
Context
With #128 the sink arcs in an `arc_file` were split in two: the ones given in-order and the ones given out-of-order. This reduces the number of elements involved in the sorting of sink arcs before calling `adiar::reduce`. Yet, it also introduces an overhead when writing to and reading from these files, since the two sink arc files need to be merged.

This seems to result in a 1% or 2% relative slowdown in performance. When we have implemented many of the internal memory milestone issues, these "few" percent of performance decrease can become quite noticeable.
Tasks
We can reuse the idea from the rest of the internal memory milestone and change the type of the `arc_stream` and `arc_writer` at the start of each algorithm:

- Extend the `arc_stream` and `arc_writer` classes with a boolean `sort_sinks`. Based on this boolean, they run the current merging/splitting logic or (at compile time) go to only one of the branches.
- Use the `sort_sinks` parameter in a top-down sweep.
- Create a `__reduce` with the boolean moved into a template parameter.

Alternatively, we may also consider whether we can redesign the if-statement to favour the no-sorting case.
Additional Context
The following is a list of whether an operation outputs edges in order (🟢) or not (🔴):
bdd_apply
bdd_ite
bdd_restrict
bdd_exists
/ bdd_forall
zdd_binop
zdd_change
zdd_complement
zdd_expand
zdd_offset
zdd_onset
zdd_project
If one tries to compile the `test_coom_data.cpp` test cases with `#include <coom/data.h>` rather than `#include <coom/data.cpp>`, then compilation fails with the following errors:
```
test.cpp:(.text+0x648): undefined reference to `coom::value_of(unsigned long)'
test.cpp:(.text+0x659): undefined reference to `coom::value_of(unsigned long)'
```
After messing around a lot, it is evident that the `coom/data.h` path is correctly placed and the compiler happily does all its work. The linker, on the other hand, is unable to find the implementation of `coom::value_of(unsigned long)` in the object file of `data.cpp`.

This seems to be because I had the header file include the source file. Apparently one should do it the other way around (as we currently already almost do). So, the linker may work if we actually do as expected. This would also heavily improve the compilation time...
The BuDDy package has functions within C++ to obtain the library version.
One can place CMake variables inside the source code, such that we can merely define and bump the project version inside of CMake and then C++ follows along nicely. Here is another source on how to do so. The most reasonable place to put it would be `adiar/adiar.h.in` or a new file `adiar/version.h.in`.
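A minimal sketch of the usual way to wire this up with CMake's `configure_file` (the version number is a placeholder, and the file contents follow the issue's suggested file name):

```
# CMakeLists.txt
project(adiar VERSION 1.2.0)
configure_file(adiar/version.h.in adiar/version.h)

# adiar/version.h.in
#define ADIAR_VERSION_MAJOR @PROJECT_VERSION_MAJOR@
#define ADIAR_VERSION_MINOR @PROJECT_VERSION_MINOR@
#define ADIAR_VERSION_PATCH @PROJECT_VERSION_PATCH@
```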
In [Arge96], Arge describes the possibility of using an edge-based representation (source, data, target), which immediately makes the output of Apply reverse-sorted for the next Reduce. This also happens when one has a node-based representation with the outgoing edges containing all the information of the children.

We notice, though, that on top of this observation of Arge there is another important benefit to using an edge-based representation: the increase in size is completely covered by the need for the dependency graph.
The unit tests for temporary files have been passing since 655aa7b, though not in the GitHub Action, since it sees an exit code of 134 due to a "terminate called without an active exception".
Tom van Dijk has suggested the following format for a BDD:
Options are (optional) key-value pairs; each written on a single line as follows
.<option> <value>
Possible options are:

| option name | value         |
|-------------|---------------|
| count       | uint          |
| order       | "pre", "post" |
| type        | string        |

This is then followed by a set of nodes, starting with

```
.nodes
```
- Terminals are given with the `t` key and are of the form

  ```
  t "type" "value"
  ```

- Nodes are of the form

  ```
  b <label> '-'?<then> '-'?<else>
  ```

  where *then* is also known as *high* and *else* is also known as *low*. The optional '-' character marks an edge as complemented, which depends on the attributed edge feature. Every node is identified by its line number or by the name of another bdd in the same file.

Finally, the list is ended with

```
.end
```
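As an illustration, the BDD for x0 ∧ x1 might then be written as follows (this is our reading of the proposal; in particular, the exact line-numbering scheme is an assumption):

```
.type bdd
.order post
.nodes
t "bool" "0"
t "bool" "1"
b 1 2 1
b 0 3 1
.end
```

Here `b 1 2 1` is the x1 node with its then-edge pointing at line 2 (the true terminal) and its else-edge at line 1 (the false terminal), and `b 0 3 1` is the root.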
This feature might be blocked until we support attributed edges.
The simplest way to implement this would be to push each `{ source_line, target_line }` pair to a `tpie::merge_sorter`. By use of a B-tree we could also store the id of all prior nodes and that way skip the merge sorter. Most likely this will not run as fast as using the sorter; furthermore, it also takes up much more space.
The "reference another bdd" feature also makes things much more complicated. At that point, one probably has to first check whether such a reference exists, then "inline" the other BDD inside of this one, and then run the actual Reduce algorithm.

Notice that while this is quite close to #126, it is also quite disjoint (both in terms of how to do these and the use case they are solving). The purpose of this is to have a common file format that can be used between different BDD packages, while the other is supposed to be used by Adiar alone using the least amount of space.
I wonder why it works anyways...
Currently, if no files are provided, then `make dot` will create an input for a `filestream<node>` and an output of `filestream<arc>` for testing. This should be removed.
When running the 15-Queens benchmark on the Grendel-S cluster with varying amounts of memory, we get a "Resource memory limit exceeded". Specifically, we get one for 128, 256, 512, and 1024 MB of memory. Presumably, this is because we somewhere allocate more memory than we actually are given. Maybe it is due to some stream object being created after the priority queues.
The branch feature/statistics adds gathering and printing of statistics ( see also pull request #141 ). This has been requested by Tom van Dijk.
Specifically, this adds two CMake options:

- `ADIAR_STATS`: Gather the statistics that have a very low O(1) overhead in every function.
- `ADIAR_STATS_EXTRA`: Gather much more verbose statistics for every recursion step.

Both add some overhead, but I hope that the hierarchical gathering of statistics allows you to choose a balance between the fidelity of your statistics and the impact on running time. One can argue whether the naming of these options could be better.

It then adds two new functions to Adiar (names derived from BuDDy):

- `adiar::adiar_stats()` obtains a struct with all the gathered raw data.
- `adiar::adiar_printstat(std::ostream &o)` pretty-prints the set of statistics to `o` (default: `std::cout`).

This was done on the main branch before the ZDD refactor in #128. The feature/statistics branch should be made anew on top of the ZDD branch, while also adding the remaining set of interesting statistics.
We can abstract the create_node
function to also automatically extract the pointer to its children.
With #169 and #168 we added support for the levelized priority queue switching to internal memory if its max_size fits into the given amount of main memory. We now similarly want to derive a max_size for the levelized priority queues in all other algorithms and then switch to an internal memory variant if possible.

We'll start with the simple bounds below; later (#165) we will use some (most often) much better bounds. So, it is a good idea to already now add an (inline) helper function to compute each max_size below.

The following are some simple O(1)-derivable bounds on the levelized priority queues. Based on these, we can switch between the different versions.
- `count`: N (Steffan)
  There are a total of 2N arcs to forward the count, but to create two requests from a single node at least one request has to be consumed.
- `product_construction`: (Nf + 2) · (Ng + 2) + 2 (Anna)
  At worst, every node is paired with every other node (or leaf). Yet again, even though this totals 2(N1 + 2) · (N2 + 2) arcs, at least one arc is consumed for every two created. Unlike for Path/SAT Count, we push the requests for the children before popping their parent.
- `quantification`: Nf² + 2 (Anna)
  At worst, every node is paired with every other node and we are also traversing the entire original BDD. Notice that from the use of `is_negating` and `is_commutative` we never have tuples with boolean values, so there is no +2 inside of the quadratic.
- `bdd_ite`: Nf · (Ng + 2) · (Nh + 2) + Ng + Nh + 2 (Anna)
  Same argumentation as for the product construction, but similar to Quantification we can abuse the collapsing into only one or two parts of the input.
- `substitution`: N + 2 (Anna)
  There are a total of 2N arcs, where again at most half can be present at the same time, plus the two arcs for the current node.
- `compare_check`: N1 · N2 (Steffan or Anna)
  This depends on the early-termination strategy for the level. The ones used for `zdd_subset`, `zdd_subseteq`, and `zdd_disjoint` all have the above quadratic bound.
- `is_isomorphic`: N
- `intercut`: (Anna)
  This algorithm has two priority queues, each of which has its own responsibilities.
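The bounds above could be provided as inline helpers along these lines (a sketch; the names are suggestions and overflow of `std::size_t` for large N is not handled):

```cpp
#include <cstddef>

namespace adiar {
  namespace bound {
    inline std::size_t count(std::size_t N)        { return N; }
    inline std::size_t substitution(std::size_t N) { return N + 2; }

    inline std::size_t product_construction(std::size_t Nf, std::size_t Ng)
    { return (Nf + 2) * (Ng + 2) + 2; }

    inline std::size_t quantification(std::size_t Nf)
    { return Nf * Nf + 2; }

    inline std::size_t ite(std::size_t Nf, std::size_t Ng, std::size_t Nh)
    { return Nf * (Ng + 2) * (Nh + 2) + Ng + Nh + 2; }

    inline std::size_t compare_check(std::size_t N1, std::size_t N2)
    { return N1 * N2; }
  }
}
```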
This way we can graphically draw the output
With #14, an O(N) time and O(N/B) I/O algorithm was created for the not operator. One will notice, though, that it is merely a mapping on each element. That means one could map to the negated node inside the other algorithms at no extra cost in the number of I/Os and only a very small O(N) amount of extra time.

The most elegant solution would be some kind of simplistic decorator pattern on top of the `tpie::file_stream`, creating a `coom::node_stream`. With this we can then do the following:

- Provide `read()`, `read_back()`, and `peek()`.
- Provide `peek_back()`, which might be useful.
- Provide a `seek_node(node_ptr)` function to abstract away quite a lot. On the other hand, this may also result in a considerable slowdown if used incorrectly.
- Provide a `seek_root()` / `seek_start()` method together with a `seek_end()`, which internally do the exact opposite thing. Similarly, `read()` would internally call `read_back()`.

The only problem is that one should be wary not to pass the same internal `tpie::file_stream` to the same algorithm twice, since that would result in the file stream going out of sync with the algorithm. But that by itself is already an issue at the current moment.
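A toy sketch of the decorator idea, with a `std::vector` standing in for `tpie::file_stream` and a simplified node type, so negation is just flipping the two sink values (all names here are illustrative; the real decorator would also leave internal children untouched and wrap seeking and peeking):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy node: only the values of its two sink children are stored.
struct node_t { uint64_t label; bool low_sink, high_sink; };

class node_stream
{
  const std::vector<node_t> &m_file; // stand-in for tpie::file_stream<node_t>
  std::size_t m_idx = 0;
  const bool m_negate;

public:
  node_stream(const std::vector<node_t> &file, bool negate = false)
    : m_file(file), m_negate(negate)
  { }

  bool can_read() const { return m_idx < m_file.size(); }

  // Reads pass through; with m_negate set, sink values are flipped on the
  // fly, providing NOT at no extra I/O cost.
  node_t read()
  {
    node_t n = m_file[m_idx++];
    if (m_negate) { n.low_sink = !n.low_sink; n.high_sink = !n.high_sink; }
    return n;
  }
};
```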
Currently, there are a few places one can optimise the `appD_data` priority queue within Apply.

- It uses 64 bits for the entire `from_1` bit-flag. Yet, all flag bits are already occupied by `is_high` (on `source`) or reserved for complement edges (`t1` and `t2`). The sink flags on `t1` and `t2` are also in use, so that leaves the sink flag on `source` as the only place to put it.

  EDIT: The `from_1` flag can be removed. See #101.

- It stores `from_1`, `data_low`, and `data_high` for all requests forwarded across the layer. Yet, two requests to the same `t1` and `t2` do not need to store this same data multiple times. We could split `appD_data` into two priority queues: one still to pick the next request from, and another in sync with it holding all other requests to the same pair (and without duplication of `data`).
While working on #71, I accidentally introduced some Resource Limit exceeded warnings, where Adiar claimed more memory than was intended. For competitive comparison with other libraries, it is fine to just make sure that this case does not occur. We can also enforce a crash, if this happens, with `tpie::resource_manager::set_enforcement(ENFORCE_THROW)`. The question is whether that is really of interest.
All the functions in bdd/build.h are not canonical in the sense that their `id`s are not taken from `MAX_LABEL` and decreasing, and they are (maybe?) not sorted based on their children.
TPIE handles possible errors by throwing exceptions. In the case of running out of space on the disk, it throws a `tpie::out_of_space_exception`, which we internally catch.

In this case, the user may want to do one of the following things (specified in the `exec_policy`):

- Rethrow it as an `adiar::out_of_space_exception` (which is just an alias for TPIE's exception; otherwise, we'd introduce breaking changes).
- Return a `null_ptr` Decision Diagram (similar to CUDD) to silently propagate the error.

For the cases of `adiar::bdd_ite` where the inputs have disjunct levels, they are all merely zipped together. Here the canonicity is not set correctly in the case that all three BDDs are canonical (which is the usual case).
Some of the worst-case bounds in #98 have a +2 which could be removed by popping all ingoing arcs from the priority queue before pushing the new recursion requests for the children.
Yet, removing the +2 is not really worth making the algorithms more incomprehensible; they already are hard to get your head around as of now. So, if this makes the code much worse, then one may want to just ignore it.
Before publication of this repository, the `setup` and `setup-*` targets should be removed from the `makefile`. Instead, one should ensure that all these dependencies are mentioned in README.md.
The `bdd` and `__bdd` classes work really well to completely hide the file management and garbage collection from both the clients' and the developers' points of view. The copy-constructor of the `bdd` class does not kick in, though, in ternaries:

```cpp
bdd a = bdd_ithvar(0);
bdd b = bdd_nithvar(1);
bdd c = a == b ? a : bdd_xor(a,b);
```

This will give an "operands to ?: have different types ‘adiar::bdd’ and ‘adiar::__bdd’" error.
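A minimal stand-alone reproduction, under the assumption that the cause is implicit conversions existing in both directions between `bdd` and `__bdd`, so the conditional operator cannot pick a unique common type; a user-side workaround is an explicit conversion in one branch:

```cpp
// X plays the role of bdd, __X the role of __bdd.
struct X;

struct __X {
  int v;
  explicit __X(int v) : v(v) {}
  __X(const X &x);            // converts from X, like __bdd(const bdd &)
};

struct X {
  int v;
  explicit X(int v) : v(v) {}
  X(const __X &u) : v(u.v) {} // converts from __X, like bdd(const __bdd &)
};

inline __X::__X(const X &x) : v(x.v) {}

inline X pick(bool cond, const X &a, const __X &tmp)
{
  // X c = cond ? a : tmp;   // error: operands to ?: have different types
  return cond ? a : X(tmp);  // explicit conversion resolves the ambiguity
}
```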
Currently the arc-based representation uses 6 bytes while we could sacrifice one more bit in the node_ptr representation to reduce down to only 4 bytes. This will result in only a 4/3 increase in size due to the change of representation.
Sorting is never by is_high first but always second, either after source or after target. So, if we place it on the lowest bit of source (or target), then we incorporate the secondary sort by is_high directly into the integer comparison.
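A sketch of the bit trick (illustrative names, ignoring the other flag bits mentioned elsewhere):

```cpp
#include <cstdint>

// Put is_high in the lowest bit of source: a plain integer comparison on
// the packed value then sorts by source first and is_high second.
inline uint64_t pack(uint64_t source, bool is_high)
{ return (source << 1) | (is_high ? 1u : 0u); }

inline uint64_t source_of(uint64_t packed)  { return packed >> 1; }
inline bool     is_high_of(uint64_t packed) { return packed & 1u; }
```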
Currently, there is a lot of code duplication between all of the examples, such as the statistics update and timing functions. This only got worse as I added these examples to the benchmarking repository.
One thing we'd need to investigate is whether we can do the following:
If we can do that, then I'd propose that we skip `example`, as it will be removed when moving to bdd-benchmark.

We have a `bdd_nodecount` to obtain the number of nodes in the BDD, but from the Sylvan `Bdd` class it also looks like we may want to provide a `bdd_varcount` for the number of variables (i.e. the size of the meta file).
Currently, each file is a set of unit tests by itself, and the test output is quite hard to navigate. Together with #69, we may also want to sort the unit tests into data structures, bdd algorithms, and other.
Since all algorithms work layer-by-layer, the graph is acyclic, and all nodes within a layer are independent, one may have a priority queue aware of the current layer number i with k buckets for the layers i+1, i+2, ..., i+k. Each bucket is then sorted when one finally reaches layer j > i. If a request is made to a layer j > i + k, then it is placed within a spillover priority queue that one merges with the buckets.

Mathias Rav mentions that when all layers are not very wide (L = Θ(N)), then the mere TPIE priority queue is the best bet. So, we might want to be able to differentiate between "good" input OBDDs where we have k = 0 and "bad" inputs where k > 0.
Provided all algorithms output a secondary file_stream of layer sizes, one can make the priority queue dynamically infer the number of buckets needed to optimise its behaviour by looking ahead at the size of the layers.
As long as a single layer can stay in memory, nothing is specifically gained by use of the `tpie::merge_sorter`. But when the layer has to go to disk (the total size of all buckets exceeds M), then the `tpie::merge_sorter` already sorts the base cases before they go out to the disk; this will save N/B very heavy I/Os during the sorting. For k small enough and each sorter given M/k memory, this will definitely have quite a positive impact on performance; for larger k, maybe also.
As mentioned by Gerth Stølting Brodal, to further optimise it we may want to manually manage the blocks used for the k buckets, as long as k blocks may exist in memory at any given point in time.
Have one block in memory for the bucket for the current layer i in memory to pop from. When it is empty, get the next block. This will then only use O(Ni/B) where Ni is the size of the i'th bucket.
Have k blocks in memory for each bucket. When the block is full, then push it out of memory to the bucket. Similar to the merge_sorter, we may already have this sorted before pushing.
I'm not sure whether there is a major difference between the above and just using the `tpie::merge_sorter`, but it is worth thinking about and having written down.
Currently we use `coom_assert` primarily to check user input. We should split `coom_assert` up for internal and external usage.
Currently we already use `can_left_shortcut` and `can_right_shortcut` to prune the generated temporary result. We also shortcut on the root, but we can additionally inverse-shortcut the root. For example, if one is computing the AND of two BDDs and one of them is the true sink, then the result is exactly the other one. Similarly, for OR a false sink is irrelevant, and for XOR a true sink merely negates the other input.
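This inverse shortcutting can be decided in O(1) by evaluating the operator against both sinks of the other input; a sketch with our own illustrative encoding:

```cpp
// When one argument of a binary operator is a sink, the whole Apply
// collapses to a constant, the other input, or the other input negated.
enum class collapse { FALSE_SINK, TRUE_SINK, OTHER, OTHER_NEGATED };

template <typename Op>
collapse shortcut_with_sink(const Op &op, bool sink_value)
{
  const bool on_true  = op(sink_value, true);   // other input reaching T
  const bool on_false = op(sink_value, false);  // other input reaching F

  if (on_true == on_false) { return on_true ? collapse::TRUE_SINK
                                            : collapse::FALSE_SINK; }
  return on_true ? collapse::OTHER : collapse::OTHER_NEGATED;
}
```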
Right now all source files are in the `src/coom/` folder. The `evaluate`, `apply`, `restrict`, and `quantify` algorithms and more could all be moved to a `bdd/` subfolder. This both makes the files easier to find and ensures we can differentiate between files for _BDD_s and _ZDD_s, should that become a Bachelor's project.
Despite the fact that Adiar works by storing everything on disk, we currently do not allow the user to truly save/load files
on disk.
Solution

- A `__levelized_file_stats<elem_t>` struct that `levelized_file<elem_t>` publicly inherits from.
- Store the `__levelized_file_stats<elem_t>` in the file header of the levels file, as shown in the External memory queue example.
- Header values:
  - `.version = <adiar::version()>` (see #140)
  - `.diagram_type = <bdd / zdd>`
  - `.negate = <negation flag>`
  - `.elem_size = <sizeof(node)>`
  - `.nodes_path = <path/to/nodes>`
  - `.levels_path = <path/to/levels>`

`void bdd_save(const bdd &f, std::string file_path)`:

- Move the files (`move(prefix_path)`) to place them at the requested place, or alternatively `copy(file)` + `move(prefix_path)`. If we want to reuse persisted files, then we can abort early and just goto step 4 (`make_persistent()`).

`bdd bdd_load(std::string file_path)`:

- Open the `node_file` using the non-default path-specific constructor.
- Then use the current constructor for a `bdd` that takes both a `node_file` and a `negate` boolean, and return that.

These functions can be defined in adiar/bdd/io.cpp and adiar/zdd/io.cpp.
There are functions within `data.h` and `data.cpp` that we'd like the end-user to use. When they do, we'd also like to run a few sanity checks on their input and provide error messages if they use them incorrectly. But we don't want these checks to run within any of the COOM algorithms.

The best solution at this point seems to be a template function with a `bool check` parameter and a `constexpr if`. This will also make the functions inlinable again, which might be of benefit.
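A sketch of the `bool check` + `constexpr if` idea, with a simplified stand-in for the real `value_of`:

```cpp
#include <cstdint>
#include <stdexcept>

// The user-facing instantiation (check = true) validates its input; the
// internal instantiation (check = false) compiles the validation away and
// stays trivially inlinable.
template <bool check>
inline uint64_t value_of(uint64_t sink)
{
  if constexpr (check) {
    if (sink > 1) { throw std::invalid_argument("not a sink value"); }
  }
  return sink & 1u;
}
```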
Also, we should find a better name than `bdd_counter`. Maybe `bdd_threshold`... or? Or, maybe instead of exposing the templated function, we should just provide a set of functions, one for each operator.
The `bdd_sink` and `bdd_ithvar` functions will be used often. We can save space and improve computation on them by ensuring the reuse of said files.

- `bdd_sink`: We can have a `node_file` initiated on the first call; in fact, that way we can make both the true and false sinks the very same file. See #87 for an attempt to do so.
- `bdd_ithvar`: We could have a hash-table of the previously created ithvars.
The Composition of two BDDs f and g for some label i ∊ [n] is f|_{x_i = g}, which is to be interpreted as f(x_1, ..., x_{i-1}, g(x_1, ..., x_n), x_{i+1}, ..., x_n), i.e. g dictates the value of the i'th input variable. This function is used in some model checking algorithms.
Based on BuDDy, the declarations in adiar/bdd.h should be

```cpp
//////////////////////////////////////////////////////////////////////////////
/// \brief Functional composition.
///
/// \param f   Outer function
/// \param g   Inner function
/// \param var The variable that \c g replaces in the input of \c f
///
/// \returns \f$ f|_{x_{\mathit{var}} = g} \f$
//////////////////////////////////////////////////////////////////////////////
__bdd bdd_compose(const bdd &f, const bdd &g, label_t var);

inline __bdd bdd_compose(const bdd &f, const bdd &g, const bdd &var)
{ return bdd_compose(f, g, min_label(var)); }
```
Implementation
This can be implemented with a single sweep through f and g by reusing the ideas in the Quantification and the If-Then-Else algorithms. A priority queue contains requests on triples (t1, t2, t3), where t2 and t3 are nodes from f and t1 is from g. Here, similar to If-Then-Else, t2 represents the true (high) branch on variable i and t3 the false (low) branch.

For level i, we deal with it somewhat similarly to Quantify, where a single node of f is turned into two. To this end, we need to introduce the same idea as we did for the product construction etc. with the `std::variant<skipto, output>` and the `policy::may_skip` optimisation, where we use `skipto` if g is not on level i.

The pruning of the If-Then-Else still applies. Ideas from Quantification also somewhat apply here. All of this should be possible to do with the policy pattern on the algorithm in adiar/bdd/if_then_else.cpp.
Refactor `bdd_ite` into a call to prod3. The following things should stay in `bdd_ite`. When this is done, then slowly add enough hooks in the policy of prod3 to vary the behaviour between the two algorithms.
Partially in the same spirit as #15, it would be useful to hide the initialisation of TPIE in an init function for Adiar. This function can provide one or more of the following options: