pathfinding's People

Contributors

art049, bors[bot], catlee, conradludgate, cuviper, danielhuang, danielvschoor, dependabot-preview[bot], dependabot-support, dependabot[bot], dsp, dzhu, eholum, ehsanul, enet4, felix91gr, gbid, jarede-dev, kerollmops, leesongun, ltratt, ludwigpacifici, mdsteele, nwolber, radix, renovate[bot], samueltardieu, sylvainroy, uriopass


pathfinding's Issues

A* with limited # iterations

Maybe I have a niche use case, but I think it would be nice to have a version of astar where I can pass in the maximum number of iterations. This is useful in real-time settings where I might want to bail out early and just take the best path I have so far. It should be easy to add such a thing; if you would merge a PR, I will write it.
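As a workaround available today, an iteration cap can be emulated from the outside: count expansions inside the successors closure and return no successors once the budget is spent, so the search terminates early. A self-contained sketch, with a minimal BFS standing in for the crate's functions (which accept `FnMut` closures the same way):

```rust
use std::collections::{HashMap, VecDeque};

// Minimal BFS standing in for one of the crate's search functions; the point
// is the capped *closure* below, which would work unchanged with a function
// taking an `FnMut(&N) -> Vec<N>` successors argument.
fn bfs(start: u32, mut successors: impl FnMut(&u32) -> Vec<u32>, goal: u32) -> Option<Vec<u32>> {
    let mut parents: HashMap<u32, u32> = HashMap::new();
    let mut queue = VecDeque::from([start]);
    while let Some(n) = queue.pop_front() {
        if n == goal {
            // Walk the parent links back to the start to rebuild the path.
            let mut path = vec![n];
            let mut cur = n;
            while cur != start {
                cur = parents[&cur];
                path.push(cur);
            }
            path.reverse();
            return Some(path);
        }
        for s in successors(&n) {
            if s != start && !parents.contains_key(&s) {
                parents.insert(s, n);
                queue.push_back(s);
            }
        }
    }
    None
}

fn main() {
    // Line graph 0 -> 1 -> ... -> 9. Cap the search at 3 node expansions by
    // counting inside the successors closure and returning no successors
    // once the budget is spent (the "bail out early" behaviour).
    let mut expanded = 0;
    let capped = |n: &u32| {
        expanded += 1;
        if expanded > 3 {
            return vec![]; // budget spent: stop expanding
        }
        if *n < 9 { vec![*n + 1] } else { vec![] }
    };
    assert_eq!(bfs(0, capped, 9), None); // gave up before reaching 9

    let free = |n: &u32| if *n < 9 { vec![*n + 1] } else { vec![] };
    assert_eq!(bfs(0, free, 9).unwrap().len(), 10); // full path 0..=9
}
```

Note that this bails out with no path rather than the best partial path, so it only covers part of the request; returning the best-so-far path would indeed need support inside the crate.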

topological_sort: Please either provide full cycle or document how to obtain it

The documentation for pathfinding::topological_sort states that if a cycle exists, it'll return one of the nodes in the cycle. Any number of algorithms could easily obtain the full cycle, or for that matter all cycles. If it would be trivial to do so, please consider providing the full cycle in the error. If not, please consider documenting, in the same function documentation, a simple, efficient way to find that cycle (or all cycles).
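For reference, once topological_sort reports a node lying on a cycle, one cycle through it can be recovered with a second search: walk from that node back to itself. A self-contained sketch (the graph representation and names are illustrative, not a pathfinding API):

```rust
use std::collections::{HashMap, VecDeque};

/// Given a node `n` reported to lie on a cycle, recover one cycle through it
/// by searching from `n` back to itself (BFS yields a shortest such cycle).
/// The cycle is returned as `[n, ..., n]`.
fn cycle_through(n: u32, edges: &HashMap<u32, Vec<u32>>) -> Option<Vec<u32>> {
    let mut parent: HashMap<u32, u32> = HashMap::new();
    let mut queue = VecDeque::from([n]);
    while let Some(cur) = queue.pop_front() {
        for &next in edges.get(&cur).into_iter().flatten() {
            if next == n {
                // Reconstruct n -> ... -> cur -> n from the parent links.
                let mut tail = vec![];
                let mut c = cur;
                while c != n {
                    tail.push(c);
                    c = parent[&c];
                }
                tail.reverse();
                let mut cycle = vec![n];
                cycle.extend(tail);
                cycle.push(n);
                return Some(cycle);
            }
            if !parent.contains_key(&next) {
                parent.insert(next, cur);
                queue.push_back(next);
            }
        }
    }
    None
}

fn main() {
    let edges = HashMap::from([(1, vec![2]), (2, vec![3]), (3, vec![1, 4])]);
    assert_eq!(cycle_through(1, &edges), Some(vec![1, 2, 3, 1]));
    assert_eq!(cycle_through(4, &edges), None); // 4 is not on a cycle
}
```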

Can't use dijkstra with float costs

dijkstra(
^^^^^^^^ the trait `std::cmp::Ord` is not implemented for `{float}`

C: Zero + Ord + Copy,
          --- required by this bound in `pathfinding::directed::dijkstra::dijkstra`

Since f32 is PartialOrd but not Ord, I can't use the dijkstra function with e.g. f32 as the Cost type.
My workaround for now has been to multiply by some constant factor like 1000 and cast to i32, but I think the algorithm should still work if the constraint on C was reduced to PartialOrd instead of Ord. Maybe PartialOrd + PartialEq?
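Until the bound is relaxed, the usual workaround is a totally ordered wrapper type rather than scaling to integers. The ordered-float crate provides one; a minimal hand-rolled equivalent using `f32::total_cmp` might look like this (note that `dijkstra` also requires `Zero`, which this sketch omits, so a real wrapper would additionally need `Add` and a `num_traits::Zero` impl):

```rust
use std::cmp::Ordering;

/// Minimal totally-ordered f32 wrapper, similar in spirit to
/// `ordered_float::OrderedFloat`. `f32::total_cmp` gives a total order over
/// all floats (including NaN), so implementing `Eq`/`Ord` is sound.
#[derive(Clone, Copy, Debug)]
struct Cost(f32);

impl PartialEq for Cost {
    fn eq(&self, other: &Self) -> bool {
        self.0.total_cmp(&other.0) == Ordering::Equal
    }
}
impl Eq for Cost {}
impl PartialOrd for Cost {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Cost {
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.total_cmp(&other.0)
    }
}

fn main() {
    let mut costs = [Cost(2.5), Cost(0.1), Cost(1.0)];
    costs.sort(); // requires Ord, which plain f32 lacks
    assert_eq!(costs[0].0, 0.1);
    assert!(Cost(1.5) < Cost(2.0));
}
```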

A more consistent API using traits

Most of the algorithms have very similar signatures, since the objective of all of them is finding the best path.

Therefore, this seems to be a perfect case for traits, making the API more consistent and the functions cleaner, with fewer generic arguments. The inconveniences are that this would be a breaking change and that closures couldn't be used directly without a wrapper struct (although passing several closures as parameters of a function is a bit cumbersome anyway, so this inconvenience is not so important).

For instance, a trait Graph could have a type N, a type C, a neighbors function to get neighbors, a heuristic function to return the heuristic (with default implementation returning 0 for the cases when a heuristic algorithm is not wanted) and a cost function returning the cost between two neighbors (with default implementation returning 1 for the unweighted graphs). This would also solve #145.

Of course, this is only a preliminary suggestion.
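To make the suggestion concrete, here is one possible shape such a trait could take. All names are illustrative, not an actual pathfinding API; `From<u8>` stands in for `Zero`/`One` bounds so the defaults compile without extra dependencies:

```rust
/// Illustrative only: one possible shape for the proposed trait.
trait Graph {
    type N;
    type C: Copy + Ord + From<u8>;

    fn neighbors(&self, node: &Self::N) -> Vec<Self::N>;

    /// Default heuristic: always 0, for non-heuristic algorithms
    /// (with A*, this degrades to Dijkstra).
    fn heuristic(&self, _node: &Self::N) -> Self::C {
        Self::C::from(0u8)
    }

    /// Default cost: 1 for every edge (unweighted graphs).
    fn cost(&self, _a: &Self::N, _b: &Self::N) -> Self::C {
        Self::C::from(1u8)
    }
}

/// Example: an unbounded 4-connected grid only has to provide `neighbors`.
struct Grid;

impl Graph for Grid {
    type N = (i32, i32);
    type C = u32;

    fn neighbors(&self, &(x, y): &Self::N) -> Vec<Self::N> {
        vec![(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    }
}

fn main() {
    let g = Grid;
    assert_eq!(g.neighbors(&(0, 0)).len(), 4);
    assert_eq!(g.cost(&(0, 0), &(1, 0)), 1);
    assert_eq!(g.heuristic(&(5, 5)), 0);
}
```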

Consistent function signatures

In the current implementation, IDA* and IDDFS have different trait bounds and function signatures while IDDFS is a special case of IDA*. The following signature might be more general and useful in some cases, and it allows us to implement IDDFS as a special case of IDA*.

pub fn idastar_general<N, C, FN, IN, FH, FS, E>(
    start: &N,
    mut successors: FN,
    mut heuristic: FH,
    mut success: FS,
) -> Option<(Vec<E>, C)>
where
    N: Eq,
    C: Zero + Ord + Copy,
    FN: FnMut(&N) -> IN,
    IN: IntoIterator<Item = (E, C)>,
    FH: FnMut(&N) -> C,
    FS: FnMut(&N) -> bool,
    E: // stands for edge, able to output the arriving node

In my personal opinion, this does not necessarily have to be a breaking change, as existing functions can be implemented using the above.

#Opened-Nodes benchmark

This is an idea that I think is worth pursuing. The explanation (copied from my question on the forums):

This technique consists of counting how many nodes a particular algorithm opens. The fewer nodes an algorithm opens, the faster it tends to be in general.

For example, if you count the nodes opened by A* vs. Dijkstra (assuming your heuristic is admissible in A*) you'll find that A* never opens more nodes. Similarly, when comparing different heuristics for A*, you'll find that between two admissible heuristics h1 and h2 such that h1 ≤ h2, the opened-node count for A* will always be lower when using h2 than when using h1. And these differences tend to show in the execution time as well.

Not always, however, because of cache effects and heap vs. stack usage in different algorithms, but in general it's a good guiding measurement.

I'd like to have some sort of benchmark run with the command cargo run --example node_count or so, which runs some example problems of differing structure against the available algorithms, and then displays the results.

I was also thinking that it could be useful for users of this crate to be able to pick an algorithm and run this "counting benchmark" against it, to see how well one algorithm behaves on their particular problem. But that might be out of scope for this idea atm.

less heap allocation in astar

I was wondering if there is a way to cache the BinaryHeap and IndexMap that astar uses internally to reduce heap allocations? Is there a way to have a variant of astar that allows us to pass those in as a parameter?

JPS

I've been having a play around with trying to implement jump point search. I've got some very basic working code and I was wondering whether there would be any interest in a PR to add it to the library?

I'm still very new to rust, so it likely will take me a while to try and modify to fit in with the existing algorithms, but if someone is willing to review my code and provide some pointers I'm happy to take a shot at it.

topological_sort_into_groups

For an application I'm working on, I'd be interested in adding a new topological_sort_into_groups function to pathfinding::directed::topological_sort that, rather than just producing a single ordering of nodes, would partition the nodes into groups -- the first group would have all nodes with no dependencies, the second group would have all nodes whose only dependencies are in the first group, and so on. Concatenating the groups would then produce a valid topological sort regardless of how the nodes in each group were reordered.

This should be doable in O(V+E) time with a pretty straightforward variation on Kahn's algorithm. (Maybe this specific variation already has a name? If so, I'm not sure what it is.)

I'd be happy to put together a PR for this if it would be welcome.
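The Kahn's-algorithm variation described above can be sketched as follows (std-only, with made-up names; not a proposed final API):

```rust
use std::collections::HashMap;

/// Layered topological sort: group 0 holds nodes with no dependencies, and
/// group k holds nodes whose dependencies all sit in groups < k. Here
/// `edges[a]` lists the nodes that depend on `a`. Runs in O(V + E).
/// Errors if a cycle prevents grouping every node.
fn sort_into_groups(
    nodes: &[u32],
    edges: &HashMap<u32, Vec<u32>>,
) -> Result<Vec<Vec<u32>>, ()> {
    let mut indegree: HashMap<u32, usize> = nodes.iter().map(|&n| (n, 0)).collect();
    for targets in edges.values() {
        for &t in targets {
            *indegree.get_mut(&t).unwrap() += 1;
        }
    }
    // Start with every node that has no dependencies at all.
    let mut current: Vec<u32> = nodes.iter().copied().filter(|n| indegree[n] == 0).collect();
    let mut groups = Vec::new();
    let mut placed = 0;
    while !current.is_empty() {
        placed += current.len();
        let mut next = Vec::new();
        for &n in &current {
            for &t in edges.get(&n).into_iter().flatten() {
                let d = indegree.get_mut(&t).unwrap();
                *d -= 1;
                if *d == 0 {
                    next.push(t); // all of t's dependencies are now placed
                }
            }
        }
        groups.push(current);
        current = next;
    }
    if placed == nodes.len() { Ok(groups) } else { Err(()) } // leftovers => cycle
}

fn main() {
    // 1 and 2 have no dependencies; 3 depends on both; 4 depends on 3.
    let edges = HashMap::from([(1, vec![3]), (2, vec![3]), (3, vec![4])]);
    let groups = sort_into_groups(&[1, 2, 3, 4], &edges).unwrap();
    assert_eq!(groups, vec![vec![1, 2], vec![3], vec![4]]);

    // A cycle makes grouping impossible.
    let cyclic = HashMap::from([(1, vec![2]), (2, vec![1])]);
    assert!(sort_into_groups(&[1, 2], &cyclic).is_err());
}
```

Concatenating the returned groups gives a valid topological order, as described above, regardless of how each group is internally reordered.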

Obstacles

Is there a way to easily implement obstacles for a_star?

Negative cost values lead to wrong results in dijkstra and A*

I know this was stupid on my part, but during advent of code I tried using Dijkstra to optimize a path. So instead of minimizing cost, I wanted to optimize a score. I tried it by using a negative cost, which returned invalid results. It was easily fixed by using max_score_per_step - current_score as a cost, then calculating the final score with final_score = step_count * max_score_per_step - total_cost.
As I said, it's my fault for trying to misuse the algorithms.

That said, I think there should either be a warning in the documentation that all costs must be positive or even a check/panic in case costs are negative. That way the algorithms never return wrong paths.
Another possibility would be to use the minimum value for cost, instead of zero. For example by creating a trait MinValue that returns 0 for unsigned values and the smallest negative number for signed values. But that would break compatibility with custom types, so I can see how it could be problematic.

Possible to quickly compute "all nodes within a distance of D" from an initial node?

Given an undirected graph G (obviously fine to think of this as directed), I'd like to have a quick way to answer the question: given a starting node N and a distance D, what are all the nodes that are less than or equal to D from node N?

Context: I'm working on a tiny city sim game on a grid. The rule is that the building you may build on node N is constrained by the pre-existing buildings that are "connected" to node N, i.e. the buildings that are within distance D of node N. I'm currently pre-computing a set of the nodes that are connected to any given node, but my algorithm is so slow that it takes seconds on even a 6x6 graph 😞! The reason that this isn't just "check the Manhattan distance" is because there are some buildings that you can travel between for zero cost.
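One answer is plain Dijkstra with a cost cutoff: never push a node whose tentative cost exceeds D. Unlike a Manhattan-distance test, this handles the zero-cost edges correctly. A self-contained sketch (names and the `u32` node type are illustrative):

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

/// All nodes reachable from `start` with total cost <= `max_cost`, mapped to
/// their best costs. Plain Dijkstra that never pushes past the cutoff, so
/// zero-cost edges are handled correctly.
fn within_distance(
    start: u32,
    max_cost: u32,
    successors: impl Fn(u32) -> Vec<(u32, u32)>,
) -> HashMap<u32, u32> {
    let mut best = HashMap::from([(start, 0)]);
    // `Reverse` turns the max-heap into a min-heap on cost.
    let mut heap = BinaryHeap::from([(Reverse(0u32), start)]);
    while let Some((Reverse(cost), node)) = heap.pop() {
        if cost > best[&node] {
            continue; // stale heap entry
        }
        for (next, edge_cost) in successors(node) {
            let c = cost + edge_cost;
            if c <= max_cost && c < *best.get(&next).unwrap_or(&u32::MAX) {
                best.insert(next, c);
                heap.push((Reverse(c), next));
            }
        }
    }
    best
}

fn main() {
    // Chain 0 -> 1 -> 2 -> 3 with cost 1 per edge, plus a zero-cost edge 0 -> 10.
    let succ = |n: u32| match n {
        0 => vec![(1, 1), (10, 0)],
        1 => vec![(2, 1)],
        2 => vec![(3, 1)],
        _ => vec![],
    };
    let near = within_distance(0, 2, succ);
    assert_eq!(near.len(), 4); // 0, 1, 2 and the zero-cost neighbour 10
    assert_eq!(near[&10], 0);
    assert!(!near.contains_key(&3)); // cost 3 > cutoff 2
}
```

For the city-sim case this replaces the slow precomputation: run it once per node (or once per changed node), and the returned map is exactly the "connected" set.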

[Brainstorm?] More kinds of problems for benchmarks

Currently the benchmarks only do search on a grid. But the structure of a grid isn't representative of all the kinds of problems for Search, far from it. The idea would be to have at least 2 more kinds of problems, with a different state-space structure.

DFS takes `start` in by move, but BFS and A* take it by reference

The algorithms provided in this crate have almost identical APIs, which is really nice. I was able to switch from A* to IDA* to Fringe to DFS with almost no work, which was really neat. (It turned out that I couldn't come up with a heuristic that would make A* better than DFS for my use-case. But the point is it was really easy to try different stuff out thanks to you.)

My only complaint is that DFS inexplicably requires start to be passed in via a move, while the other algorithms don't.

I haven't looked at the other algorithms all that much, but from a cursory glance I think toposort should also take its start parameter in by reference as well.

Anyway, thanks for making this crate, and sorry for raining on your parade by asking for a breaking change =P

Bottlenecked on astar allocation and hashing performance

All of the below refers to astar, but it may also apply to other functions. I only use astar at this time.


I'm working on optimizing some pathing code that depends on the pathfinding crate. Pathfinding is a very hot code path for my application.

As I profile and benchmark my application I'm finding that roughly 50% of the pathing calculation time is being spent allocating memory in hashbrown.


The other roughly 50% of the time is being spent hashing successors when looking them up in the parents IndexMap.

Both of these bottlenecks are addressable, I think.

Non-API-changing potential improvements

  • It looks like the default hasher is being used for the IndexMap. This could be sped up by using a faster hasher. This would be at the cost of an extra dependency, however.
    • Note. The API changing idea below would handle this without any new dependencies on pathfinding's side. In my opinion the below approach is better. I'm only suggesting this in case the below approach is controversial for some reason.

API-changing potential improvements

The caller of pathfinding's astar will often have a great deal of knowledge about the aspects that impact how much memory is needed to pathfind.

For example, imagine a flat 100x100 grid. When pathfinding, in this example case, a tile's successors are the 8 tiles that surround it.

We know that

  • A maximum of 10,000 unique tiles will be visited
  • A tile has its 8 adjacent successors (or fewer if it is along the edge of the grid)

Given this information, it is possible to avoid all of the parents' allocations by simply passing in a map that has a capacity of 10,000.

Also, because we know that we only need a maximum of 10,000 entries, all of the hashing could be avoided as well. For example, each neighbor could be given an identifier between 0-9999 and that can be used to insert / look it up in a pre-allocated vector.

I've explained a simple case, but these types of controls would also exist in more complex cases. Although they might require more code to implement.

Ideally, all of this is of no concern to pathfinding.

Instead of using the IndexMap for parents, astar accepts an &dyn SomeParentHolderTrait.

Now the caller can hand tailor how parents are looked up, how hashing occurs (if at all), etc.
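To make this concrete, here is one possible shape for the (hypothetical) trait, together with a Vec-backed implementation for the dense-grid case described above; all names are made up:

```rust
/// Hypothetical sketch of the proposed parent-storage abstraction; the trait
/// name and methods are illustrative, not a `pathfinding` API. Values are
/// `(parent id, best cost so far)`.
trait ParentHolder {
    fn get(&self, node: usize) -> Option<(usize, u32)>;
    fn set(&mut self, node: usize, parent: usize, cost: u32);
}

/// Dense-grid case from the example: node ids are 0..n, so lookups are
/// plain indexing. One up-front allocation, no hashing at all.
struct VecParents(Vec<Option<(usize, u32)>>);

impl VecParents {
    fn with_capacity(n: usize) -> Self {
        VecParents(vec![None; n])
    }
}

impl ParentHolder for VecParents {
    fn get(&self, node: usize) -> Option<(usize, u32)> {
        self.0[node]
    }
    fn set(&mut self, node: usize, parent: usize, cost: u32) {
        self.0[node] = Some((parent, cost));
    }
}

fn main() {
    // 100x100 grid: tile (r, c) gets id r * 100 + c.
    let mut parents = VecParents::with_capacity(100 * 100);
    parents.set(101, 1, 3); // tile (1, 1) reached from tile (0, 1) at cost 3
    assert_eq!(parents.get(101), Some((1, 3)));
    assert_eq!(parents.get(0), None);
}
```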

In short

  • allocating parents is a performance bottleneck in astar
  • hashing is a performance bottleneck in astar
  • both of these bottlenecks can be removed by allowing the caller to pass in the parents holder

Open Questions

The latter approach would be a breaking change.

Could we potentially introduce a new generic method and have the old astar call it? Then mark astar as deprecated, noting that it will be replaced by astar_generic in v3?

This would allow this to be released without introducing a breaking change.

Potential Path Forwards

  1. See what @samueltardieu has to say
  2. I'd be happy to help with implementing this by:
  • First PRing a benchmark (or a few) so that we know the performance baseline
  • PRing the API-changing potential improvements and seeing how they impact the benchmark(s)

Related Issues

#234

What about adding IDDFS? (I can do it)

I think it's a good algorithm to have. It has the following properties:

  • It's complete (like BFS)
  • Its memory footprint is small (like DFS)

I'll talk about its time complexity at the end because it's not as important as these properties.

IDDFS may be the best of the "basic", non-heuristic, non-weighted algorithms. I like it a lot because it uses the same concepts behind IDA*, which is my favorite heuristic search algorithm ❤️


As promised, here is the time complexity:

  • If O(f(n)) is the time complexity of BFS, the time complexity of IDDFS will be O(k * f(n)) = O(f(n)) for some constant k. That makes it suitable as an alternative to BFS.
  • If IDDFS and IDA* are implemented without using heap allocation, they tend to be faster than BFS and A*. If the neighbors function can be given as an iterator, then this certainly should be the case ✨
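For reference, the core of IDDFS is just a depth-limited DFS rerun with a growing limit. A minimal std-only sketch (names are illustrative; a crate version would presumably return the path rather than the depth):

```rust
/// Minimal IDDFS sketch: rerun a depth-limited DFS with a growing limit.
/// Returns the number of moves needed to reach `goal`, or `None` if it is
/// not found within `max_depth`.
fn iddfs(
    start: u32,
    successors: impl Fn(u32) -> Vec<u32> + Copy,
    goal: u32,
    max_depth: usize,
) -> Option<usize> {
    fn depth_limited(
        node: u32,
        successors: impl Fn(u32) -> Vec<u32> + Copy,
        goal: u32,
        depth: usize,
    ) -> bool {
        if node == goal {
            return true;
        }
        if depth == 0 {
            return false; // memory stays O(depth): only the DFS stack lives
        }
        successors(node)
            .into_iter()
            .any(|n| depth_limited(n, successors, goal, depth - 1))
    }
    (0..=max_depth).find(|&d| depth_limited(start, successors, goal, d))
}

fn main() {
    // Chain 0 -> 1 -> 2 -> ... -> 9.
    let succ = |n: u32| if n < 9 { vec![n + 1] } else { vec![] };
    assert_eq!(iddfs(0, succ, 3, 10), Some(3)); // found at depth 3
    assert_eq!(iddfs(0, succ, 42, 10), None); // unreachable
}
```

Because the first depth at which the goal appears is returned, the result is optimal in number of moves, like BFS, while memory use stays proportional to the depth, like DFS.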

Cost of movements

If I have understood the documentation correctly, all algorithms assume a constant cost for every movement. Thus, the best path is the path with the fewest movements.

However, most such algorithms (probably all, but I do not know all of them) can support costs. For instance, the evaluation function used by A* and IDA* is f = g + h, where g is the cost and h is the user-defined heuristic. Dijkstra's evaluation function is f = g as well, considering only the cost.

I think that supporting costs would make the library much more useful.

Demo for Munkres Algorithm

Hi there, I'm playing around with the crate and I haven't been able to get the Munkres algorithm working; I suspect I'm missing something fundamental. Please have a look:

let d2: Vec<Vec<i32>> = vec![vec![2; 3]; 3];
let matrix = Matrix::from_vec(
    d2.len(),
    d2.get(0).unwrap().len(),
    d2,
)
.unwrap();
let res = kuhn_munkres(&matrix);

But I'm getting trait bound errors when passing matrix to the function. How should I be going about it?

how to set a timeout for algorithms?

In some cases (complex or large scenes), the search time can be large. Do we have a way to break out of the loop, or do we need to add a new time-limit argument to the algorithms?

Further work for benchmarks

I think there are two changes/upgrades to the benchmarks that could make them more precise:

  1. Shuffling the neighbours function's output, such that any algorithm that could benefit from preordering of the nodes is on the same playing field as the other algorithms. I'm mainly thinking here about DFS, which always picks the first option it can use, and therefore it benefits (or falls behind) by a huge amount depending on the ordering of its options.
    I was more keen on this idea a while ago, but now I'm thinking that it could be detrimental if some algorithm depends on a "consistent" ordering of the neighbors. By this I don't mean that it benefits or falls behind, but that it actually affects its correctness. This may not be a good idea, or it could actually be used to test for robustness. I want to think more about it 🤔

  2. Adding more kinds of problems to the benchmarks. Currently the benchmarks use only a search on a grid. But the structure of a grid isn't representative of all the kinds of problems for Search. I think we could use a couple more kinds of problems, with a different state-space structure 🙂

Adding some method of cancelation to A* computations of unknown length

Hello!

I was hoping to use the A* implementation in this crate for a project of mine, but I'm running into this problem: if the space I'm searching doesn't have a quick enough solution, the function will eventually eat all available memory and grind the entire computer to a halt. The only workaround I can think of would be spawning a new process to run it and killing the entire process if the computation becomes too costly.

I'd like to add some way to signal cancellation of the operation, so I can do something like spawn a thread to run this computation, and if I don't receive a result quickly enough I'll signal it to drop the calculation and use some fallback strategy instead.

I can always fork and add this functionality myself, but if this is a helpful piece of functionality to have, we can discuss how best to add it to this project (maybe a new version of the function taking a receiver as a parameter?).
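Pending a built-in mechanism, cancellation can be bolted on from the outside: share an AtomicBool with the search thread and have the successors closure return no successors once it is set, so the search drains its frontier and returns no path. A sketch, with a toy search loop standing in for astar:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

/// Spawn a search on a worker thread and cancel it from outside via a shared
/// `AtomicBool`. The same closure pattern would apply to a `successors`
/// closure passed to `astar`; a toy frontier loop stands in for it here.
fn run_cancellable() -> u64 {
    let cancel = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&cancel);

    let worker = thread::spawn(move || {
        let successors = |n: u64| -> Vec<u64> {
            if flag.load(Ordering::Relaxed) {
                return vec![]; // cancelled: pretend this node is a dead end
            }
            vec![n + 1, n * 2] // effectively infinite state space
        };
        let mut frontier = vec![0u64];
        let mut expanded = 0u64;
        while let Some(n) = frontier.pop() {
            expanded += 1;
            frontier.extend(successors(n));
            frontier.truncate(10_000); // bound the demo's memory
        }
        expanded
    });

    cancel.store(true, Ordering::Relaxed); // caller gives up
    worker.join().unwrap() // returns shortly after the flag is observed
}

fn main() {
    assert!(run_cancellable() >= 1);
}
```

This avoids the kill-the-process workaround, but note it only frees memory once the flag is observed; a receiver parameter checked inside the algorithm's main loop, as suggested above, would be cleaner.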

Thoughts on removing `successors` allocation

Hey!

I'm exploring using pathfinding's astar implementation.

I'm noticing that the successors function requires you to allocate a vector.

This is probably fine for most use cases - but if you're targeting a top-performance pathfinding implementation it might be a non-starter.

Has removing this allocation been considered?


The signature for successors is currently:

fn successors(&self) -> Vec<(Pos, usize)> {

Has it been considered to instead have fn successors(&self) -> &Vec<(Pos, usize)> { be the signature?

Then the documentation could just show an example of using a re-usable Vec<(Pos, usize)>. The ergonomics would be roughly identical, minus the user needing to clear the vector out or something like that...

i.e. (rough sketch)

let mut successors = vec![];

astar(
    start,
    |p| {
        successors.clear();
        p.successors(&mut successors)
    },
    |p| p.chebyshev_distance(end),
    |p| p == end,
);

Thoughts?

Thanks!

Usage of FnMut in `astar`

Hi @samueltardieu, long time no see ^^
I hope the global situation is treating you as well as it can. ❤️

I was taking a peek at the docs once again, and the fact that FnMut is used in astar grabbed my attention. Why is it that successors, heuristic and success have to be FnMut? Couldn't they be just Fn?

Thanks in advance :)

[Feature Request] Expand planner algorithm offering

This project already has a good selection of planning algorithms, but some common categories that are currently not represented include:

  • Any-angle planners (such as Theta* and its derivatives)
  • Sample-based planners (such as RRT and its derivatives)

Is there appetite for expanding the project along this axis?

A "which function" guide for beginners?

The selection of algorithms here is amazing! 😍 As an effect of that, it's taking me a while to figure out which function I want, even though I've heard of most of the algorithms before. For beginners (or lazy people like me 😁), what do you think of adding a "which function should I choose" section to the README? I'd be happy to try writing this. The goal wouldn't be to solve every problem, but rather to provide reasonable guidance on where to start.

Question: is the astar function deterministic?

I'm coding a game that relies on deterministic outcomes and wanted to make sure that, given the same inputs, the astar function will always produce the same results. I'm fairly new to Rust, and I noticed that the crate uses rand, hence the question.

Slight optimizations

The astar and bfs implementations seem to use a pattern if !hash.contains(key) { hash.insert(key); }, which is essentially two separate lookups into the hash table. The entry API would avoid one of them, with slightly better performance, e.g. for astar:

match parents.entry(neighbour.clone()) {
    Vacant(ent) => {
        ent.insert((node.clone(), new_cost));
        to_see.push(..);
    },
    Occupied(ent) => if new_cost < ent.get().1 {
        ent.into_mut().1 = new_cost;
        to_see.push(..);
    },
}

Using BTreeMap instead of HashMap could be better in some cases provided that nodes that are visited after each other are also sorted close to each other; in theory BTreeMap would be more cache friendly in such case. That may not be generally applicable though, and the Ord vs. Eq + Hash requirement might be too strict.

Question about kuhn-munkres / assignment problem

Hi, thanks for making this crate :)
Do you know any crate that works like kuhn_munkres but works for assigning multiple columns to one row?
(If there are a different number of rows and columns.)

I have an unbalanced assignment problem which is also modified to allow one "destination" to be assigned to multiple "sources". There are also limits, so it cannot simply be reduced to a balanced assignment problem with leftover sources then assigned to their best-matching destination:
In my use case (a hobby project) I want to do MIDI channel routing between up to N source channels (which each have an assigned GM instrument, a certain ADSR envelope, and a certain maximum number of concurrent voices, i.e. their effective polyphony) and M destination channels (synthesizers; often M <= N, but it can be larger too), each of which also has a max polyphony limit and an ADSR envelope. There is additionally a compatibility score for each pairing of the 128 possible MIDI instruments with each of the synthesizers (because not every synth can play the timbre of every instrument).
Without the polyphony limit, I could just treat it as an assignment problem (calculating the score of each pairing by only taking into account instrument and envelope compatibility), use this crate to solve it, and then assign leftover source channels to their best-matching destination channel.
But with the polyphony limit, considering each pair score in isolation is not valid, because the polyphony of a destination is used up by all of its sources, so they compete in a knapsack-like way.
This means that the global configuration has to be considered and scored as a whole.
I'm not sure what this kind of problem is called, or which algorithm would be best.
N and M are both up to 16, and it's important to find the best matching in the shortest time possible.
Do you know which algorithm/approach (or existing crate) is most suitable for this? :)

yen returns paths with the same cost for big graphs

Here is a part of a test program that uses the yen function:

let path = yen(
  &start,
  |&vertex| {
    graph
      .get_neighbors(vertex)
      .iter()
      .map(|&neighbor| (neighbor, 1))
      .collect::<Vec<_>>()
  },
  |&vertex| vertex == end,
  10,
);

println!("{:?}", path);

And its output:

[([0, 77, 10], 2), ([0, 5, 49, 10], 3), ([0, 6, 97, 10], 3), ([0, 9, 90, 10], 3), ([0, 66, 83, 10], 3), ([0, 15, 77, 10], 3), ([0, 66, 35, 10], 3), ([0, 16, 90, 10], 3), ([0, 32, 2, 10], 3), ([0, 16, 35, 10], 3)]

The k-th shortest path should be the k-th element of a cost-sorted list of paths with DIFFERENT costs, starting from the shortest path.

The full program can be found here.

An implementation of Yen's algorithm can be found on Wikipedia.

kuhn_munkres doesn't terminate for some inputs

Here's how to reproduce:

use ordered_float::OrderedFloat;
use pathfinding::{prelude::{kuhn_munkres_min, Matrix}};

fn main() {
    // Using f32::NAN or f32::NEG_INFINITY has the same effect here
    let mat = Matrix::new(2, 2, OrderedFloat(f32::INFINITY));
    let res = kuhn_munkres_min(&mat);
    // This line is never reached
    dbg!(res);
}

OrderedFloat comes from the ordered-float crate.

Maximum distance

It would be nice to short-circuit a search if you only want to consider solutions within a certain cost.

I'm mostly interested in the astar implementation (though at least Dijkstra's would also be useful), so passing the current cost and returning an Option<bool> (where None means terminate the search) could work nicely without changing much.

I'm building a turn-based strategy game, but I think this could be useful for other uses of a pathfinding library.

Retrieving the next shortest paths from astar_bag

Hello,

I am working with the astar_bag algorithm right now because I needed to retrieve all of the shortest paths. But I need one more thing: I would also like to retrieve the next shortest paths.

I am wondering how I could construct an astar_bag_iter or something that keeps the state of the astar_bag and reuses it at each iteration.

let mut iter = astar_bag_iter(...);

assert_eq!(iter.next(), Some((0, ...)));
assert_eq!(iter.next(), Some((1, ...)));
assert_eq!(iter.next(), Some((2, ...)));
// ...
assert_eq!(iter.next(), None);

Do you think it could even be possible? I see that min_cost is used to retrieve the shortest paths, but it is updated in place and bigger costs are not saved.

Can you help me a little bit?

OSM based offline routing library

I'm not sure this is the right repo/location for this, but here goes anyway:
I'm looking for an offline routing (pathfinding) library that can work on a mobile device.
This should include different profiles, for example bike, hiking, 4wd...
The data I use is OSM based and has tags to indicate different roads, blocks, etc.
The closest thing I found which is cross-platform is Valhalla, but it's in C++, which is not "nice".
Any information on this aspect from this library's point of view would be great.
Is it possible to use this library for this? What's needed in order for this to happen? Has anyone attempted it?

Possibly incorrect Ord for SmallestCostHolder (in astar)

Here is the code in question:

impl<K: Ord> Ord for SmallestCostHolder<K> {
    fn cmp(&self, other: &Self) -> Ordering {
        match other.estimated_cost.cmp(&self.estimated_cost) {
            Ordering::Equal => self.cost.cmp(&other.cost),
            s => s,
        }
    }
}

It seems to me that the inner cost comparison should be other.cost.cmp(&self.cost) preserving the sense (of the outer comparison) that if self has a lesser cost than other, cmp will produce Ordering::Greater (for the max heap).

[question - documentation] How good/relevant `pathfinding` compared to `petgraph`?

Hello,
I was searching for a crate to manipulate a graph and use some algorithms on it, especially Dijkstra. I read in many places that petgraph is the go-to solution in Rust for such things. But when I searched "Dijkstra" on lib.rs, your crate was ranked higher. Given how recommended petgraph seems to be in the Rust community, I was really surprised that there is no mention of it in the readme. Can this crate interact with petgraph? Is one better than the other (faster, more memory efficient, …)? Why should I choose one over the other, especially since lib.rs appears to consider that this crate has better quality?
Sincerely and thanks for the hard work.

Partial Typescript Deno/Node port

I've made a partial Typescript port of this library for Deno and Node.js: https://github.com/Macil/lazy_pathfinding. It currently has implementations of A*, Dijkstra, path counting, and connected components algorithms, ported from this library with similar APIs exposed and tests. I just wanted to mention it somewhere that people might find it interesting.

Thanks for this library! I've previously used this library when solving Advent of Code with Rust, and when I wanted to solve some newer Advent of Code challenges with Deno, I found myself sorely missing this library. Most other implementations I could find of A* required the entire graph to be created ahead of time, which does not work well for certain Advent of Code challenges, unlike this library where you pass in a successors function that only needs to lazily create nodes as they're encountered.

Ability to return edges instead of/addition to nodes

I'm using the A* implementation to find the shortest path in a complicated directed graph, which works great!

The problem is, my graph often has multiple edges of different weights going between two nodes, and I need to know which one is taken. I can manually search between each pair of nodes, but this feels cumbersome. I could also replace the node type with (Edge, Node) to keep track of the edges taken, but this breaks the Eq and Hash traits.
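For what it's worth, the (Edge, Node) idea can be made to work by implementing Eq and Hash manually so that only the node part is compared; the search then still deduplicates states by node, while each state carries the edge it arrived through. A sketch with placeholder id types:

```rust
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Node wrapper that carries the incoming edge but compares and hashes by
/// node only. The `u32` ids are placeholders for real edge/node types.
#[derive(Clone, Debug)]
struct Via {
    edge: u32, // incoming edge id; deliberately ignored by Eq/Hash
    node: u32,
}

impl PartialEq for Via {
    fn eq(&self, other: &Self) -> bool {
        self.node == other.node
    }
}
impl Eq for Via {}
impl Hash for Via {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.node.hash(state); // must agree with Eq: hash the node only
    }
}

fn main() {
    let mut seen = HashSet::new();
    assert!(seen.insert(Via { edge: 1, node: 5 }));
    // Same node via a different edge counts as an already-seen state:
    assert!(!seen.insert(Via { edge: 2, node: 5 }));
    assert_eq!(seen.len(), 1);
}
```

Caveat: which edge ends up recorded for a node depends on which equivalent state the search keeps, which is exactly the shortest-path behaviour wanted here, but it does mean parallel edges of equal weight are reported arbitrarily.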

assertion failed: `(left == right)`

Hello, I got this:

thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `7`,
 right: `5`', src/main.rs:22:1

with this:

use pathfinding::prelude::bfs;

#[derive(Clone, Debug, Eq, Hash, Ord, PartialEq, PartialOrd)]
struct Pos(i32, i32);

impl Pos {
  fn successors(&self) -> Vec<Pos> {
    let &Pos(x, y) = self;
    vec![Pos(x+1,y+2), Pos(x+1,y-2), Pos(x-1,y+2), Pos(x-1,y-2),
         Pos(x+2,y+1), Pos(x+2,y-1), Pos(x-2,y+1), Pos(x-2,y-1)]
  }
}

fn main() {
    static GOAL: Pos = Pos(8, 8);

    let result = bfs(&Pos(1, 1), |p| p.successors(), |p| *p == GOAL);

    //println!("{:?}", result);
    assert_eq!(result.expect("no path found").len(), 5);
}

bfs_loop performance problem

Hi!
The bfs_loop() function is O(N²), unfortunately.
Consider such an example:
The graph contains 2N+1 nodes, from 0 to 2N.
Edges:

  • From 0 to each node in the segment [1, N] (N edges)
  • From each node in the segment [1, N] to N+1 (N edges)
  • From each node in the segment [N+1, 2N-1] to the next node (N-1 edges)
  • There are no edges from the last node, 2N.

(A visualisation of the graph with N=5 was attached; image omitted.)

How bfs_loop(0, ...) processes this graph:
It can't find an edge from 0 to 0, so it starts iterating over 0's siblings.
For each sibling a BFS is launched, which traverses O(N) nodes.
BFS is launched O(N) times, once from each node in [1, N].
O(N*N) in total.

Matrix: direction aware varient of neighbours method

Would it make sense to add a direction-aware version of Matrix::neighbours to this crate?

Along the lines of:

pub fn neighbours_enumerated_by_direction(
    &self,
    (r,c): (usize, usize),
    diagonals: bool,
) -> impl Iterator<Item = ((isize, isize), (usize, usize))>

Potentially it could be better if the directions were an enum.

My context for this is that I'm working on some flow field pathfinding algorithms in which it's required to keep track of the direction of minimum values of neighbours.
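For illustration, here is a free-standing version of the proposed method for a rows × cols grid; the name and signature follow this proposal, not an existing Matrix API:

```rust
/// Each in-bounds neighbour of `(r, c)` in a `rows x cols` grid, paired with
/// the `(dr, dc)` offset leading to it. Sketch of the proposed method.
fn neighbours_with_direction(
    rows: usize,
    cols: usize,
    (r, c): (usize, usize),
    diagonals: bool,
) -> Vec<((isize, isize), (usize, usize))> {
    let mut out = Vec::new();
    for dr in -1isize..=1 {
        for dc in -1isize..=1 {
            if dr == 0 && dc == 0 {
                continue; // a cell is not its own neighbour
            }
            if !diagonals && dr != 0 && dc != 0 {
                continue; // skip diagonal moves when not requested
            }
            let (nr, nc) = (r as isize + dr, c as isize + dc);
            if nr >= 0 && nc >= 0 && (nr as usize) < rows && (nc as usize) < cols {
                out.push(((dr, dc), (nr as usize, nc as usize)));
            }
        }
    }
    out
}

fn main() {
    assert_eq!(neighbours_with_direction(3, 3, (1, 1), true).len(), 8);
    assert_eq!(neighbours_with_direction(3, 3, (0, 0), false).len(), 2);
    // Each entry is (offset, coordinate), which is what a flow field needs
    // to remember the direction toward the minimum-value neighbour.
}
```

An enum for the eight directions, as suggested, would indeed be a nicer return type than the raw (isize, isize) offsets used in this sketch.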
