
pyflagser's Introduction


pyflagser

pyflagser is a Python API for the flagser C++ library by Daniel Lütgehetmann, which computes the homology of directed flag complexes. Please check out the original luetge/flagser GitHub repository for more information.

Project genesis

pyflagser is the result of a collaborative effort between L2F SA, the Laboratory for Topology and Neuroscience at EPFL, and the Institute of Reconfigurable & Embedded Digital Systems (REDS) of HEIG-VD.

Installation

Dependencies

pyflagser requires:

  • Python (>= 3.7)
  • NumPy (>= 1.17.0)
  • SciPy (>= 0.17.0)

User installation

If you already have a working installation of NumPy and SciPy, the easiest way to install pyflagser is with pip:

python -m pip install -U pyflagser

Documentation

API reference (stable release): https://docs-pyflagser.giotto.ai

Contributing

We welcome new contributors of all experience levels. The Giotto community goals are to be helpful, welcoming, and effective. To learn more about making a contribution to pyflagser, please see the CONTRIBUTING.rst file.

Developer installation

C++ dependencies:

  • C++14 compatible compiler
  • CMake >= 3.9

Source code

You can check the latest sources with the command:

git clone https://github.com/giotto-ai/pyflagser.git

To install:

From the cloned repository's root directory, run

python -m pip install -e ".[tests]"

This way, you can pull the library's latest changes and make them immediately available on your machine.

Testing

After installation, you can launch the test suite from outside the source directory:

pytest pyflagser

Changelog

See the RELEASE.rst file for a history of notable changes to pyflagser.

Contacts:

[email protected]

pyflagser's People

Contributors: flomlo, giotto-learn, gtauzin, reds-heig, ulupo


pyflagser's Issues

Computations take a long time

Description

I am not aware of how long computations with flagser should take, but it seems like the example in #15 takes more than a few minutes (actually, it did not terminate on my machine).

Steps/Code to Reproduce

Follow step 1. in #15.

Expected Results

Computation terminates in a few minutes for a distance matrix of size 50x50.

Actual Results

Does not terminate in 15 minutes.

Versions

Darwin-19.3.0-x86_64-i386-64bit
Python 3.7.6 (default, Dec 30 2019, 19:38:26)
[Clang 11.0.0 (clang-1100.0.33.16)]
pyflagser 0.1.0

Computations following an interrupted call fail

Description

When run, the flagser function creates an output file (output_flagser_file) that is deleted at termination. However, if one computation terminates early (for example, because it is interrupted) and a subsequent one is launched in the same way, the latter fails with:
The output file already exists, aborting.
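A possible user-side workaround (a hypothetical helper, not part of pyflagser's API) is to run each computation from a fresh temporary working directory, so a stale output file left by an interrupted run cannot collide with the next one:

```python
import os
import tempfile

# Hypothetical workaround: run each flagser computation inside its own
# temporary working directory so leftover output files never collide.
# This assumes flagser writes its output file to the current directory,
# as the "run in the same directory" repro below suggests.
def in_fresh_tmpdir(func, *args, **kwargs):
    cwd = os.getcwd()
    with tempfile.TemporaryDirectory() as tmp:
        os.chdir(tmp)        # run inside the throwaway directory
        try:
            return func(*args, **kwargs)
        finally:
            os.chdir(cwd)    # always restore the original directory
```

Usage would be `phs = in_fresh_tmpdir(flagser, dist, max_dimension=3)`; the temporary directory and any stale output file are removed on exit.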

Steps/Code to Reproduce

  1. Run
import numpy as np
from pyflagser import flagser
dist = np.loadtxt('dist_eight.txt')
phs = flagser(dist, min_dimension=0, max_dimension=3, directed=True, coeff=2, approximation=20)
  2. Interrupt after some time (computations take several minutes on my local machine).
  3. Run the following in the same directory:
import numpy as np
from pyflagser import flagser
dist = np.array([[0.        , 0.        , 2.98500989, 4.35619449],
       		 [3.35619449, 0.        , 0.        , 4.91401302],
       		 [3.77040805, 2.57079633, 0.        , 0.        ],
       		 [3.57079633, 4.27051191, 2.19961173, 0.        ]])
phs = flagser(dist, min_dimension=0, max_dimension=3, directed=True, coeff=2, approximation=20)

Expected Results

Terminating without an error.

Actual Results

The output file already exists, aborting

Versions

Darwin-19.3.0-x86_64-i386-64bit
Python 3.7.6 (default, Dec 30 2019, 19:38:26)
[Clang 11.0.0 (clang-1100.0.33.16)]
pyflagser 0.1.0

Relax condition on square shape on sparse input

Description

It seems to me that it is unnecessarily unfriendly to users dealing with sparse matrices to require that these be square.

Steps/Code to Reproduce

The example of a directed 2-simplex already displays this. Since this is a triangle with edges 0 -> 1, 1 -> 2, and 0 -> 2, the user might decide to represent it as:

import numpy as np
from scipy.sparse import coo_matrix

data = np.ones(3)
row = np.array([0, 1, 0])
col = np.array([1, 2, 2])
ad = coo_matrix((data, (row, col)))

ad has shape (2, 3) when constructed this way. When passing this input to either _extract_unweighted_graph or _extract_weighted_graph, the user first gets a warning, and then ultimately an error from the C++ code like this:

RuntimeError: Out of bounds, tried to add an edge involving vertex 2, but there are only 2 vertices.

This is because the number of vertices is guessed to be adjacency_matrix.shape[0] in either

vertices = np.ones(adjacency_matrix.shape[0], dtype=np.float)

or

vertices = np.asarray(adjacency_matrix.diagonal())

Proposal

I suggest that we estimate the number of vertices as max(adjacency_matrix.shape) instead of the size of axis 0. I also propose that we remove the warning about non-square input, or keep it only for dense input.
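The shape mismatch can be reproduced with SciPy alone; the proposal amounts to taking the larger of the two axes as the vertex count (a sketch of the proposed behaviour, not what pyflagser currently does):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Directed 2-simplex with edges 0 -> 1, 1 -> 2 and 0 -> 2.
data = np.ones(3)
row = np.array([0, 1, 0])
col = np.array([1, 2, 2])
ad = coo_matrix((data, (row, col)))

print(ad.shape)  # (2, 3): the inferred shape is not square

n_vertices_current = ad.shape[0]     # 2 -- triggers the out-of-bounds error
n_vertices_proposed = max(ad.shape)  # 3 -- covers every vertex in the graph
```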

Support for filtration algos

Description

Right now, we only support the default filtration algorithm. However, flagser includes a number of ways to compute filtrations from edge and vertex weights. It even allows the user to build their own filtration function.

`pyflagser.flagser_count_unweighted` crashes on really huge graphs

Description

Whilst analysing the (admittedly, huge) connectome from this statistical reconstruction of a mouse v1 (https://www.dropbox.com/sh/w5u31m3hq6u2x5m/AAAGAOP5nQlTuyA8Uhg3hMTZa/GLIF_network/network/v1_v1_edges.h5?dl=0),

pyflagser.flagser_count_unweighted throws RuntimeError around 20s after starting:
RuntimeError: Out of bounds, tried to add an edge involving vertex 34902, but there are only 34316 vertices.

Admittedly, this .h5 file contains a massive graph (250k vertices, 70 million edges).
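As a diagnostic, a NumPy-only pre-flight check (a hypothetical helper, not part of pyflagser) can tell whether the edge list itself references vertices beyond the inferred vertex count before handing the graph to flagser:

```python
import numpy as np

# Hypothetical pre-flight check: verify that every edge endpoint is a
# valid vertex index before calling flagser_count_unweighted.
def check_edges(edges, n_vertices):
    edges = np.asarray(edges)
    bad = edges[(edges[:, :2] >= n_vertices).any(axis=1)]
    if bad.size:
        raise ValueError(
            f"{len(bad)} edges reference vertices >= {n_vertices}, "
            f"e.g. {bad[0].tolist()}")

check_edges(np.array([[0, 1], [1, 2]]), n_vertices=3)  # passes silently
```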

Steps/Code to Reproduce

  1. load big graph into python
  2. run flagser_count_unweighted(graph)

Expected Results

pyflagser performs its duties and does not crash.

Actual Results

----> 1 flagser_count_unweighted(x)

~/.local/lib/python3.9/site-packages/pyflagser/flagser_count.py in flagser_count_unweighted(adjacency_matrix, directed)
     51 
     52     # Call flagser_count binding
---> 53     cell_count = compute_cell_count(vertices, edges, directed)
     54 
     55     return cell_count

RuntimeError: Out of bounds, tried to add an edge involving vertex 34902, but there are only 34316 vertices.

Versions

Python 3.9.2 (default, Feb 20 2021, 18:40:11) 
[GCC 10.2.0]
pyflagser 0.4.4

Further notes

I'll look into it soon. I've got a few suspects (e.g. compiling flagser with 32-bit uints for vertices, which is a compile flag), I would also like to see what happens with the flagio functions.

I'll also have a look at the minimum graph size required to trigger the bug.

If you have other hot leads, let me know.

Missing features from flagser binding

Description

The first version of the bindings for flagser is done in #1. The current Python interface does not support all the features described in the flagser docs.

The following to-do list reports the current status of the missing features (the list is not sorted by priority):

  • max-dim support: the maximal homology dimension to be computed
  • min-dim dim: the minimal homology dimension to be computed
  • approximate n: skip all cells creating columns in the reduction matrix with more than n entries. Use this for hard problems; a good value is often 100000. Increase for higher precision, decrease for faster computation. Warning: for non-trivial filtrations, this makes theoretical bounds on the error hard to estimate, although usually the real error is much lower than the theoretical one. Use for exploration and validate later with longer computation times.
  • components: compute the directed flag complex for each individual connected component of the input graph. Warning: this currently only works for the trivial filtration. Additionally, this ignores all isolated vertices.
  • undirected: computes the undirected flag complex instead
  • h5 support? Is it necessary if we eventually implement passing the data directly to flagser?
  • Defining a filtration algorithm (I personally did not fully understand this section)
  • modulus, called coeff in the Python code

Other useful improvements could be:

  • instead of passing a file to flagser, giving it the data directly. This would avoid storing any data on disk.
  • Because of its implementation, flagser needs to write a file to disk. We'll need to check with the maintainer of flagser about updating this behavior so it is no longer a requirement, since for now I have to manage the writing of the file in the Python bindings.

This list shall be updated as features are implemented and as other missing features are identified.

Regards,
Julián

Filtration algorithm vertex_degree produces segmentation fault

Description

PR #21 added tests for different filtrations, but testing the algorithm vertex_degree produced a segmentation fault with the dataset d5.flag.

Steps/Code to Reproduce

from pyflagser import loadflag, flagser  # import completed from the calls below
flag_matrix = loadflag('flagser/test/d5.flag')
res = flagser(flag_matrix, max_dimension=1, directed=False, filtration='vertex_degree')

Expected Results

Filtration results; in general, no segmentation fault.

Actual Results

segmentation fault (core dumped)

Versions

Version 0.2.0

Zero values outside of the diagonal

Description

When we set multiple off-diagonal values in the flagser input matrix to 0 in the undirected case, pyflagser behaves differently from ripser.

Interestingly, when we replace the zeros on the upper and lower off-diagonals with a small positive value (e.g. 1e-8), the result is as expected.
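That workaround can be sketched with NumPy alone (desingularise is a hypothetical helper name; 1e-8 is an arbitrary epsilon below the smallest true distance):

```python
import numpy as np

# Replace exact zeros off the main diagonal with a tiny positive value,
# mirroring the workaround described above.
def desingularise(dist, eps=1e-8):
    out = dist.copy()
    off_diag = ~np.eye(len(dist), dtype=bool)
    out[off_diag & (out == 0.)] = eps
    return out

d = np.array([[0., 0., 2.],
              [0., 0., 1.],
              [2., 1., 0.]])
print(desingularise(d))  # zeros at (0, 1) and (1, 0) become 1e-8
```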

Steps/Code to Reproduce

import numpy as np
from pyflagser import flagser
from gtda.homology import VietorisRipsPersistence
from sklearn.metrics import euclidean_distances

nb_points = 5
t = np.linspace(0, 1, nb_points + 1)
x, y = np.cos(2*np.pi*t - 0.1), np.sin(2*np.pi*t - 0.1)
X = np.stack([x, y], axis=1)
dist_0 = euclidean_distances(X)

dist_0[range(0, nb_points-1), range(1, nb_points)] = 0.
dist_0[range(1, nb_points), range(0, nb_points-1)] = 0.

flagser(dist_0, min_dimension=0, max_dimension=1, directed=False, coeff=2)['dgms'][1]

Expected Results

array([[1.17557049, 1.90211308]])

Actual Results

array([[1.90211308,        inf]])

Versions

macOS-10.15.3-x86_64-i386-64bit
Python 3.8.2 (default, Mar 26 2020, 10:43:30)
[Clang 4.0.1 (tags/RELEASE_401/final)]
pyflagser 0.2.1

[BUG] Error installing pyflagser

python -m pip install -U pyflagser

Error I am getting:

Collecting pyflagser
Using cached pyflagser-0.3.0.tar.gz (25 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
Traceback (most recent call last):
File "", line 2, in
File "", line 34, in
File "/private/var/folders/lq/dh7kcps136n72nc_dhrly7nw0000gn/T/pip-install-3x_4gd09/pyflagser_02a23086291f40b789f9e2eed638fbda/setup.py", line 19, in
with open('requirements.txt') as f:
^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

error 'flagser/src/flagser-count.cpp' file not found

Dear all,
If I try to compile after cloning the git repository, I receive this error:

fatal error: 'flagser/src/flagser-count.cpp' file not found
#include <flagser/src/flagser-count.cpp>

I checked flagser/src/ and the file is not present.

Could you help me?

Thanks

Error installing pyflagser in macOS 12.4 FileNotFoundError

Description

I have been trying to install pyflagser to use giotto-tda. I have already installed all the requirements but I keep having this error.

Steps/Code to Reproduce

python3 -m pip install -U pyflagser

Expected Results

No error is thrown

Actual Results

juanfelipe@MacBook-Pro ~ % python3 -m pip install -U pyflagser
Collecting pyflagser
Using cached pyflagser-0.3.0.tar.gz (25 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "", line 2, in
File "", line 34, in
File "/private/var/folders/t6/zxsqvny541x63pkb9k4wphrc0000gn/T/pip-install-sbk29w75/pyflagser_697adfa9f3e645cea0ee7553000be1f6/setup.py", line 19, in
with open('requirements.txt') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Versions

What can I do to solve this? Thanks

Warnings and crashes for FlagserPersistence with n_jobs>1

Description

With FlagserPersistence (from giotto-tda), for n_jobs>1, I get many notifications of

"Error deleting flagser output file: No such file or directory"

and eventually a

TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.
The exit codes of the workers are {UNKNOWN(255)}

It seems to be related to a memory issue, but I could not trace an increased usage (to my surprise, actually).

Steps/Code to Reproduce

import numpy as np

from gtda.point_clouds import ConsecutiveRescaling
from gtda.homology import FlagserPersistence

pc = np.random.randn(1000, 60, 2)
CR = ConsecutiveRescaling(factor=1.)
distance_matrices = CR.fit_transform(pc)
FlagserPersistence(n_jobs=9).fit_transform(distance_matrices)

Expected Results

An array of persistence diagrams.

Actual Results

---------------------------------------------------------------------------
TerminatedWorkerError                     Traceback (most recent call last)
<ipython-input-4-ee0d8999551c> in <module>
      2 CR = ConsecutiveRescaling(factor=1.)
      3 distance_matrices = CR.fit_transform(pc)
----> 4 FlagserPersistence(n_jobs=9).fit_transform(distance_matrices)

~/Libs/giotto-tda/gtda/utils/_docs.py in fit_transform_wrapper(*args, **kwargs)
    104         @wraps(original_fit_transform)
    105         def fit_transform_wrapper(*args, **kwargs):
--> 106             return original_fit_transform(*args, **kwargs)
    107         fit_transform_wrapper.__doc__ = \
    108             make_fit_transform_docs(fit_docs, transform_docs)

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params)
    688         if y is None:
    689             # fit method of arity 1 (unsupervised transformation)
--> 690             return self.fit(X, **fit_params).transform(X)
    691         else:
    692             # fit method of arity 2 (supervised transformation)

~/Libs/giotto-tda/gtda/homology/simplicial.py in transform(self, X, y)
   1306         X = check_point_clouds(X, accept_sparse=True, distance_matrices=True)
   1307 
-> 1308         Xt = Parallel(n_jobs=self.n_jobs)(
   1309             delayed(self._flagser_diagram)(x) for x in X)
   1310 

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/parallel.py in __call__(self, iterable)
   1059 
   1060             with self._backend.retrieval_context():
-> 1061                 self.retrieve()
   1062             # Make sure that we get a last message telling us we are done
   1063             elapsed_time = time.time() - self._start_time

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/parallel.py in retrieve(self)
    938             try:
    939                 if getattr(self._backend, 'supports_timeout', False):
--> 940                     self._output.extend(job.get(timeout=self.timeout))
    941                 else:
    942                     self._output.extend(job.get())

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/_parallel_backends.py in wrap_future_result(future, timeout)
    540         AsyncResults.get from multiprocessing."""
    541         try:
--> 542             return future.result(timeout=timeout)
    543         except CfTimeoutError as e:
    544             raise TimeoutError from e

~/snap/miniconda3/envs/ts_rp/lib/python3.8/concurrent/futures/_base.py in result(self, timeout)
    437                 raise CancelledError()
    438             elif self._state == FINISHED:
--> 439                 return self.__get_result()
    440             else:
    441                 raise TimeoutError()

~/snap/miniconda3/envs/ts_rp/lib/python3.8/concurrent/futures/_base.py in __get_result(self)
    386     def __get_result(self):
    387         if self._exception:
--> 388             raise self._exception
    389         else:
    390             return self._result

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/externals/loky/_base.py in _invoke_callbacks(self)
    623         for callback in self._done_callbacks:
    624             try:
--> 625                 callback(self)
    626             except BaseException:
    627                 LOGGER.exception('exception calling callback for %r', self)

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/parallel.py in __call__(self, out)
    364         with self.parallel._lock:
    365             if self.parallel._original_iterator is not None:
--> 366                 self.parallel.dispatch_next()
    367 
    368 

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/parallel.py in dispatch_next(self)
    797 
    798         """
--> 799         if not self.dispatch_one_batch(self._original_iterator):
    800             self._iterating = False
    801             self._original_iterator = None

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/parallel.py in dispatch_one_batch(self, iterator)
    864                 return False
    865             else:
--> 866                 self._dispatch(tasks)
    867                 return True
    868 

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/parallel.py in _dispatch(self, batch)
    782         with self._lock:
    783             job_idx = len(self._jobs)
--> 784             job = self._backend.apply_async(batch, callback=cb)
    785             # A job can complete so quickly than its callback is
    786             # called before we get here, causing self._jobs to

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/_parallel_backends.py in apply_async(self, func, callback)
    529     def apply_async(self, func, callback=None):
    530         """Schedule a func to be run"""
--> 531         future = self._workers.submit(SafeFunction(func))
    532         future.get = functools.partial(self.wrap_future_result, future)
    533         if callback is not None:

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/externals/loky/reusable_executor.py in submit(self, fn, *args, **kwargs)
    175     def submit(self, fn, *args, **kwargs):
    176         with self._submit_resize_lock:
--> 177             return super(_ReusablePoolExecutor, self).submit(
    178                 fn, *args, **kwargs)
    179 

~/snap/miniconda3/envs/ts_rp/lib/python3.8/site-packages/joblib/externals/loky/process_executor.py in submit(self, fn, *args, **kwargs)
   1100         with self._flags.shutdown_lock:
   1101             if self._flags.broken is not None:
-> 1102                 raise self._flags.broken
   1103             if self._flags.shutdown:
   1104                 raise ShutdownExecutorError(

TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.

The exit codes of the workers are {UNKNOWN(255), UNKNOWN(255)}

Versions

Linux-5.0.0-1070-oem-osp1-x86_64-with-glibc2.10
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0]
pyflagser 0.4.1

flagser_weighted does not terminate/takes forever when coeff > 2

Description

flagser_weighted seems to take forever, or even not to terminate, on computations for trivial weighted graphs when coeff is 3 or above.

Steps/Code to Reproduce

import numpy as np
from pyflagser import flagser_weighted

X = np.random.random((5, 5))
np.fill_diagonal(X, 0.)

flagser_weighted(X, coeff=3)

Expected Results

flagser_weighted terminates in a reasonable time.

Actual Results

flagser_weighted does not terminate. But note that it would terminate if I had X = np.random.random((3, 3)) or X = np.random.random((4, 4)) instead.

Versions

macOS-10.15.6-x86_64-i386-64bit
Python 3.8.2 (default, May 6 2020, 02:49:43)
[Clang 4.0.1 (tags/RELEASE_401/final)]
pyflagser 0.4.0

I observe this both with the PyPI version and with the developer install.

ERROR pyflagser - ModuleNotFoundError: No module named 'pyflagser.modules'

pytest failed after running it on Ubuntu 16.04.7 LTS, with the following messages:

  1. Log after installation of pyflagser:

(giottoTDA) oarti001@raptor:~/pyflagser$ python -m pip install -U pyflagser
Collecting pyflagser
Downloading pyflagser-0.4.4-cp39-cp39-manylinux2010_x86_64.whl (400 kB)
|████████████████████████████████| 400 kB 33.2 MB/s
Collecting numpy>=1.17.0
Downloading numpy-1.21.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB)
|████████████████████████████████| 15.7 MB 133.9 MB/s
Collecting scipy>=0.17.0
Using cached scipy-1.7.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl (28.4 MB)
Installing collected packages: numpy, scipy, pyflagser
Successfully installed numpy-1.21.0 pyflagser-0.4.4 scipy-1.7.0

  2. Log after running pytest:

(giottoTDA) oarti001@raptor:~$ pytest pyflagser
========================================================================================== test session starts ===========================================================================================
platform linux -- Python 3.9.5, pytest-6.2.4, py-1.9.0, pluggy-0.12.0
rootdir: /disk/raptor/lclhome/oarti001/pyflagser, configfile: setup.cfg
collected 0 items / 1 error

=============================================================== ERRORS
_______________________________________________________________________ ERROR collecting test session
/lclhome/oarti001/miniconda3/envs/giottoTDA/lib/python3.9/importlib/init.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
:1030: in _gcd_import
???
:1007: in _find_and_load
???
:972: in _find_and_load_unlocked
???
:228: in _call_with_frames_removed
???
:1030: in _gcd_import
???
:1007: in _find_and_load
???
:972: in _find_and_load_unlocked
???
:228: in _call_with_frames_removed
???
:1030: in _gcd_import
???
:1007: in _find_and_load
???
:986: in _find_and_load_unlocked
???
:680: in _load_unlocked
???
:855: in exec_module
???
:228: in _call_with_frames_removed
???
pyflagser/pyflagser/init.py:5: in
from .flagser import flagser_unweighted, flagser_weighted
pyflagser/pyflagser/flagser.py:6: in
from .modules.flagser_pybind import compute_homology, AVAILABLE_FILTRATIONS
E ModuleNotFoundError: No module named 'pyflagser.modules'
====short test summary info =========================================================================================
ERROR pyflagser - ModuleNotFoundError: No module named 'pyflagser.modules'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection
============================================================ 1 error in 0.21s =======

Am I missing something? Thanks

Feature request: Allow (read only) access to the directed flag complex via python

Motivation

The implementation of the data structure needed for the directed flag complex seems to be quite memory-efficient in the flagser code. The same holds true for the generation of the directed flag complex.
Beyond Betti numbers and simplex counts, which are currently available via the pyflagser API, one can make use of the flag complex to calculate other interesting characteristics of the underlying directed graph.

Goal

  • Enhance pyflagser with the option to generate the directed flag complex (via flagser)
  • Python bindings to this C++ flag complex object.

As the flag complex corresponding to a directed graph should not be mutable anyway (once generated), read-only access would be enough.

Current progress:

I volunteer to look into this matter, as I need it for my own project anyway.

  • I've got reasonable knowledge of directed flag complex generation (I have already implemented it in Python myself)
  • I've got absolutely no clue about Python/C++ bindings and thus have to research that first.
  • I've never contributed to a project that takes itself seriously. So please have patience with me :)

Hit me up if you're interested in discussing details!

Versions

Python: 3.9, CPython interpreter
pyflagser: 0.4.3

One dimensional output for empty persistence diagram

Description

Let diag denote the output of flagser_weighted. According to the docstrings, diag['dgms'] is a list of persistence diagrams, each of shape (*, 2). Applied to the example below, the shape is (0,).
null_h_one.npy.zip

Steps/Code to Reproduce

import numpy as np
from pyflagser import flagser_weighted
null_h = np.load('null_h_one.npy')
diag = flagser_weighted(null_h,
                 directed=True,
                 filtration='max', coeff=2, max_edge_weight=np.inf,
                 min_dimension=0,
                 max_dimension=1,
                 approximation=10000,)

Expected Results

diag['dgms'][1].shape == (something, 2)

Actual Results

diag['dgms'][1].shape == (0,)
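Until this is fixed, a user-side normalisation (normalise_dgms is a hypothetical helper, not part of pyflagser) can coerce the degenerate empty output back to shape (0, 2):

```python
import numpy as np

# Hypothetical post-processing: force every diagram to shape (*, 2),
# turning the degenerate (0,) output into an empty (0, 2) array.
def normalise_dgms(dgms):
    return [np.asarray(d).reshape(-1, 2) for d in dgms]

dgms = [np.array([[0.1, 0.5]]), np.array([])]  # second diagram has shape (0,)
fixed = normalise_dgms(dgms)
print([d.shape for d in fixed])  # [(1, 2), (0, 2)]
```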

Versions

macOS-10.15.5-x86_64-i386-64bit
Python 3.8.2 (default, Mar 26 2020, 10:43:30)
[Clang 4.0.1 (tags/RELEASE_401/final)]
pyflagser 0.3.1

Incorrect outputs of the flagser bindings.

Description

  • The min and max dimension parameters do not allow a barcode of the adequate dimension to be returned
  • cell_count is only being returned if min_dim = 0 and max_dim = np.inf

The problem arises in C++ flagser and I have made a PR to flagser fixing it.

`save_weighted_flag` crashes (and is uncovered by tests)

Description

save_weighted_flag throws an error instead of doing its job.

Steps/Code to Reproduce

  1. Have a binary adjacency matrix G
  2. Call save_weighted_flag('foo.bar', G)

Expected Results

saves flag to foo.bar

Actual Results

crashes:

Traceback (most recent call last):
  File "/home/florian/project/digraph-analyzer/playground.py", line 14, in <module>
    save_unweighted_flag('/tmp/bbp_1.flag', G)
  File "/home/florian/.local/lib/python3.9/site-packages/pyflagser/flagio.py", line 196, in save_unweighted_flag
    np.savetxt(f, edges, comments='', header='dim 1', fmt='%i %i')
  File "<__array_function__ internals>", line 5, in savetxt
  File "/usr/lib/python3.9/site-packages/numpy/lib/npyio.py", line 1404, in savetxt
    raise error
ValueError: fmt has wrong number of % formats:  %i %i

Versions

everything is fresh from the git

Fix:

The following diff fixes this:

[florian@frankenstein-archlinux pyflagser]$ git diff
diff --git a/pyflagser/flagio.py b/pyflagser/flagio.py
index 7f3ebc9..446bf17 100644
--- a/pyflagser/flagio.py
+++ b/pyflagser/flagio.py
@@ -191,9 +191,8 @@ def save_unweighted_flag(fname, adjacency_matrix):
     vertices, edges = _extract_unweighted_graph(adjacency_matrix)
 
     with open(fname, 'w') as f:
-        np.savetxt(f, vertices, delimiter=' ', comments='', header='dim 0',
-                   fmt='%.18e')
-        np.savetxt(f, edges, comments='', header='dim 1', fmt='%i %i')
+        np.savetxt(f, vertices.reshape((1,-1)), delimiter=' ', header='dim 0', fmt='%i')
+        np.savetxt(f, edges, comments='', header='dim 1', fmt='%i %i %i')
 
 
 def save_weighted_flag(fname, adjacency_matrix, max_edge_weight=None):

I'll open a PR soon. First need to figure out how :D
