
Simulated Bifurcation for Python


The Simulated Bifurcation (SB) algorithm is a fast and highly parallelizable state-of-the-art algorithm for combinatorial optimization, inspired by quantum physics and spin dynamics. It relies on Hamiltonian quantum mechanics to find local minima of Ising problems. The latest accuracy tests showed a median optimality gap of less than 1% on high-dimensional instances.

This open-source package utilizes PyTorch to leverage GPU computations, harnessing the high potential for parallelization offered by the SB algorithm.

It also provides an API to define Ising models or other NP-hard and NP-complete problems (QUBO, Karp problems, ...) that can be solved using the SB algorithm.

⚙️ Install

  • CPU: pip install simulated-bifurcation
  • GPU: install PyTorch with GPU support, then pip install simulated-bifurcation

🧪 The Simulated Bifurcation (SB) algorithm

Ising model

An Ising problem, given a null-diagonal symmetric square matrix $J$ of size $N \times N$ and a vector $h$ of size $N$, consists in finding the spin vector $\mathbf{s} = (s_{1}, ..., s_{N})$, called the ground state (each $s_{i}$ being equal to either 1 or -1), such that the following value, called the Ising energy, is minimal:

$$- \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} J_{ij}s_{i}s_{j} + \sum_{i=1}^{N} h_{i}s_{i}$$
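
For reference, the Ising energy can be computed directly with PyTorch; the sketch below (an illustrative 3-spin instance, not part of the package API) brute-forces the ground state to make the quantity minimized by SB explicit.

import itertools

import torch

# Illustrative instance: J is square, symmetric and null-diagonal
J = torch.tensor([[0.0, 1.0, -2.0], [1.0, 0.0, 3.0], [-2.0, 3.0, 0.0]])
h = torch.tensor([1.0, -1.0, 0.5])

def ising_energy(s: torch.Tensor) -> torch.Tensor:
    # -1/2 * s^T J s + h^T s
    return -0.5 * s @ J @ s + h @ s

# Brute-force search of the ground state (only tractable for very small N)
ground_state = min(
    (torch.tensor(s) for s in itertools.product([-1.0, 1.0], repeat=3)),
    key=lambda s: ising_energy(s).item(),
)
print(ground_state, ising_energy(ground_state))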

This problem is known to be NP-hard, but it is very useful since it appears in many sectors such as finance, transportation or chemistry, and many well-known optimization problems (QUBO, MAXCUT, the Knapsack problem, etc.) can be derived from it.

The Simulated Bifurcation algorithm was originally introduced to solve Ising problems by simulating the adiabatic evolution of spins in a quantum Hamiltonian system, but can also be generalized to a wider range of optimization problems.

Usage on polynomial instances

The SB algorithm can be used to minimize or maximize multivariable polynomials of degree two, i.e. polynomials of the form

$$\sum_{i=1}^{N} \sum_{j=1}^{N} M_{ij}x_{i}x_{j} + \sum_{i=1}^{N} v_{i}x_{i} + c$$

where the $x_{i}$'s can be spins, binary variables or non-negative integers.

Such a polynomial is the sum of a quadratic form, a linear form and a constant term, a formulation that underlies many optimization problems.

The minimize and maximize functions respectively minimize and maximize the value of such polynomials over a given type of input values, relying on the SB algorithm. Both return the best polynomial value found by the SB algorithm, along with its associated input vector.

The input types must be passed to the domain argument:

  • spin (default value) for a spin optimization: the optimal vector will only have ±1 values
  • binary for a binary optimization: the optimal vector will only have 0 or 1 values
  • intX for an X-bit encoded integer optimization: the optimal vector will only have integer values encoded with X bits or fewer, i.e. belonging to the range 0 to $2^{X} - 1$.

For instance, 9-bit integers correspond to the int9 input type and the accepted values span from 0 to 511.

import torch

import simulated_bifurcation as sb

matrix = torch.tensor([[1, 1, 2], [0, -1, -2], [-2, 0, 2]])
vector = torch.tensor([-1, 0, 2])
constant = 2.0
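
For reference only (plain PyTorch, not the package API), the value of the degree-two polynomial defined by these tensors at a given point x is x^T M x + v^T x + c:

x = torch.tensor([1, 0, 1])  # illustrative input
value = x @ matrix @ x + vector @ x + constant  # quadratic + linear + constant terms
print(value)  # 6.0 for this instance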

The package provides a polynomial API to build quadratic multivariate polynomials, either from coefficient tensors or from SymPy expressions, the latter allowing polynomials to be written in a more natural mathematical form. Two entry points are available, each accepting both kinds of input, so the four following code snippets all create equivalent polynomials.

  1. Using the QuadraticPolynomial class
from simulated_bifurcation.core import QuadraticPolynomial

With tensors

polynomial = QuadraticPolynomial(matrix, vector, constant)

With a SymPy expression

from sympy import poly, symbols
x, y, z = symbols("x y z")
expression = poly(
    x**2 - y**2 + 2 * z**2
    + x * y + 2 * x * z
    - 2 * y * z
    - 2 * z * x
    - x + 2 * z
    + 2
)

polynomial = QuadraticPolynomial(expression)

  2. Using the sb.build_model function

With tensors

polynomial = sb.build_model(matrix, vector, constant)

With a SymPy expression

from sympy import poly, symbols
x, y, z = symbols("x y z")
expression = poly(
    x**2 - y**2 + 2 * z**2
    + x * y + 2 * x * z
    - 2 * y * z
    - 2 * z * x
    - x + 2 * z
    + 2
)

polynomial = sb.build_model(expression)

The same coefficient tensors or SymPy expressions can be passed directly to the minimize and maximize functions to optimize the polynomial over the chosen domain.

Minimization

# Spin minimization
spin_value, spin_vector = sb.minimize(matrix, vector, constant, domain='spin')

# Binary minimization
binary_value, binary_vector = sb.minimize(matrix, vector, constant, domain='binary')

# 3-bits integer minimization
int_value, int_vector = sb.minimize(matrix, vector, constant, domain='int3')

Or, using a SymPy expression:

# Spin minimization
spin_value, spin_vector = sb.minimize(expression, domain='spin')

# Binary minimization
binary_value, binary_vector = sb.minimize(expression, domain='binary')

# 3-bits integer minimization
int_value, int_vector = sb.minimize(expression, domain='int3')

Maximization

# Spin maximization
spin_value, spin_vector = sb.maximize(matrix, vector, constant, domain='spin')

# Binary maximization
binary_value, binary_vector = sb.maximize(matrix, vector, constant, domain='binary')

# 10-bits integer maximization
int_value, int_vector = sb.maximize(matrix, vector, constant, domain='int10')

Or, using a SymPy expression:

# Spin maximization
spin_value, spin_vector = sb.maximize(expression, domain='spin')

# Binary maximization
binary_value, binary_vector = sb.maximize(expression, domain='binary')

# 10-bits integer maximization
int_value, int_vector = sb.maximize(expression, domain='int10')

For both functions, only the matrix is required; the vector and constant terms are optional.
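
For instance, a purely quadratic spin optimization can be run from the matrix alone (the omitted vector and constant are then presumably treated as zero):

sb.minimize(matrix)  # spin domain by default; vector and constant omitted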

Parallelization (multi-agent search)

The Simulated Bifurcation algorithm is highly parallelizable since it mainly relies on linear matrix operations. To take advantage of this property, this implementation offers the possibility to perform a multi-agent search of the optimal solution by evolving several spin vectors in parallel (each one being called an agent). The number of agents is set by the agents parameter of the minimize and maximize functions.

💡 Tip: it is faster to run the algorithm once with N agents than to run it N times with a single agent.

# Efficient computation ✔️
sb.minimize(matrix, agents=100)

# Slower cumbersome computation ❌
for _ in range(100):
    sb.minimize(matrix, agents=1)

GPU computation

This parallelization of the algorithm can also be exploited by performing the computations on a GPU to speed them up significantly. To do so, simply set the device argument to 'cuda' when calling the optimization functions (or when instantiating an Ising model):

sb.minimize(matrix, device='cuda')

Early stopping

The Simulated Bifurcation algorithm stops after a certain number of iterations, defined by the parameter max_steps of the minimize and maximize functions. However, this implementation comes with the possibility to perform early stopping and save computation time by defining convergence conditions.

At regular intervals, the energy of the agents is sampled and compared with its previous value to calculate their stability period. If an agent's stability period exceeds a convergence threshold, it is considered to have converged and its value is frozen. If all agents converge before the maximum number of iterations has been reached, the algorithm stops.

  • The sampling period and the convergence threshold are respectively set using the sampling_period and convergence_threshold parameters of the minimize and maximize functions.
  • To use early stopping in the SB algorithm, set the use_window parameter to True.
  • If only some agents have converged when the maximum number of iterations is reached, the algorithm stops and only these agents are considered in the results.
# Stop with maximal iterations
sb.minimize(matrix, max_steps=10000)

# Early stopping
sb.minimize(
    matrix,
    sampling_period=30,
    convergence_threshold=50,
    use_window=True,
)

Optimization results

By default, SB returns the best vector and objective value found. However, it can also be configured to return the vectors of all agents along with their associated objective values. To do so, set the best_only parameter of the minimize and maximize functions to False (default is True).

best_vector, best_value = sb.minimize(matrix, best_only=True)
vectors, values = sb.maximize(matrix, best_only=False)

💡 Advanced usages

This section covers more advanced uses of the SB algorithm that are closer to the quantum theory from which it is derived. To better understand the concepts at stake, we recommend reading the papers on the theory behind the SB algorithm by Goto et al.:

  • Goto, H., Tatsumura, K., & Dixon, A. R. (2019). Combinatorial optimization by simulating adiabatic bifurcations in nonlinear Hamiltonian systems. Science advances, 5(4), eaav2372.
  • Kanao, T., & Goto, H. (2022). Simulated bifurcation assisted by thermal fluctuation. Communications Physics, 5(1), 153.
  • Goto, H., Endo, K., Suzuki, M., Sakai, Y., Kanao, T., Hamakawa, Y., ... & Tatsumura, K. (2021). High-performance combinatorial optimization based on classical mechanics. Science Advances, 7(6), eabe7953.

SB Algorithm modes

The SB algorithm is available in four different versions (Goto et al.) that result in small variations in the algorithm's general operation. The four modes are:

  1. Ballistic SB (bSB): uses the particles' position for the SB matrix computations; usually faster but less accurate.
  2. Discrete SB (dSB): uses the sign of the particles' position for the SB matrix computations; usually slower but more accurate.
  3. Heated ballistic SB (HbSB): uses the bSB algorithm with a supplementary non-symplectic term to allow a higher solution space exploration.
  4. Heated discrete SB (HdSB): uses the dSB algorithm with a supplementary non-symplectic term to allow a higher solution space exploration.

These modes can be selected by setting the ballistic and heated parameters to either True or False in the Ising.optimize method or the minimize/maximize functions.

sb.minimize(matrix, ballistic=True, heated=False)  # bSB
sb.minimize(matrix, ballistic=False, heated=True)  # HdSB

sb.maximize(matrix, ballistic=False, heated=False)  # dSB
sb.maximize(matrix, ballistic=True, heated=True)  # HbSB

SB Algorithm's hyperparameters setting

The SB algorithm has a set of hyperparameters corresponding to physical constants derived from quantum theory, which have been fine-tuned (Goto et al.) to give the best results most of the time. Nevertheless, the relevance of specific hyperparameters may vary depending on the properties of the instances. For this purpose, the set_env function can be used to modify their value.

# Custom hyperparameters values
sb.set_env(time_step=.1, pressure_slope=.01, heat_coefficient=.06)

# Default hyperparameters values
sb.reset_env()

Derived optimization models

Many mathematical problems (QUBO, Travelling Salesman Problem, MAXCUT, ...) can be written as order-two multivariate polynomial problems, and thus can be solved using the Simulated Bifurcation algorithm. Some of them are already implemented in the models module:

🔬 Physics

  • Ising model

📐 Mathematics

  • Quadratic Unconstrained Binary Optimization (QUBO)
  • Number partitioning

💸 Finance

  • Markowitz model

Custom models

You are also free to create your own models using our API. Depending on the type of model you wish to implement, you can create a subclass of the ABCModel class to quickly and efficiently link your custom model to an Ising problem and solve it using the SB algorithm. Such a model must have a domain class attribute that sets the definition domain of all its instances.

The advantage of doing so is that your model can directly call the optimize method that it inherits from the QuadraticPolynomial interface without having to redefine it.

For instance, here is how the QUBO model was implemented:

The QUBO problem consists, given an upper triangular matrix $Q$, in finding the binary vector that minimizes the value $$\sum_{i=1}^{N} \sum_{j=1}^{N} Q_{ij}x_{i}x_{j}$$

from typing import Optional, Union

import numpy as np
import torch

from simulated_bifurcation.models import ABCModel


class QUBO(ABCModel):

    domain = "binary"

    def __init__(
        self,
        Q: Union[torch.Tensor, np.ndarray],
        dtype: Optional[torch.dtype] = None,
        device: Optional[Union[str, torch.device]] = None,
    ) -> None:
        super().__init__(Q, dtype=dtype, device=device)
        self.Q = self[2]

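A hypothetical usage sketch (the Q matrix below is illustrative; the maximize call mirrors the QUBO(Q).maximize(...) usage appearing in the issues further down):

Q = torch.tensor([[2.0, -15.5], [0.0, 3.0]])  # illustrative upper-triangular matrix
qubo = QUBO(Q)
vector, value = qubo.maximize(agents=50, verbose=False)  # inherited from the polynomial interface
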
You can check Andrew Lucas' paper on Ising formulations of NP-complete and NP-hard problems, including all of Karp's 21 NP-complete problems.

🔎 Lucas, A. (2014). Ising formulations of many NP problems. Frontiers in physics, 2, 5.

🔗 Cite this work

If you are using this code for your own projects, please cite our work:

@software{Ageron_Simulated_Bifurcation_SB_2023,
    author = {Ageron, Romain and Bouquet, Thomas and Pugliese, Lorenzo},
    month = nov,
    title = {{Simulated Bifurcation (SB) algorithm for Python}},
    url = {https://github.com/bqth29/simulated-bifurcation-algorithm},
    version = {1.2.1},
    year = {2023},
}

simulated-bifurcation-algorithm's People

Contributors

bqth29, busybeaver-42, dependabot[bot]


simulated-bifurcation-algorithm's Issues

RuntimeError: expected scalar type Double but found Float

I used the following code to optimize the proposed dataset:

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
import yfinance as yf

import simulated_bifurcation as sb


asset_name = 'AAPL'


def generate_weights(data):
    # Get the number of columns in the data
    num_cols = data.shape[1]

    # Generate random weights between 0 and 1
    weights = np.random.rand(num_cols, num_cols)

    # Normalize the weights
    normalized_weights = weights / np.sum(weights)

    return normalized_weights


def generate_normalized_weights(data):
    # Get the number of rows in the data
    num_rows = data.shape[0]

    # Generate random weights between 0 and 1
    weights = np.random.rand(num_rows, 1)

    # Normalize the weights
    normalized_weights = weights / np.sum(weights)
    print(normalized_weights.shape)

    return normalized_weights.flatten()


data = yf.download(tickers=asset_name, period='1y', interval='1d')
m_sb = torch.DoubleTensor(generate_weights(data)).double()

sb.set_env(time_step=.1, pressure_slope=.01, heat_coefficient=.06)
best_vector, best_value = sb.maximize(m_sb,  # domain='int10',
                                      agents=100, device='cuda',
                                      max_steps=10000, sampling_period=30, ballistic=True,
                                      convergence_threshold=50, use_window=True, heated=True, best_only=True)

I faced the following issue; any suggestions on how to solve it?

🔁 Iterations       :   0%|          | 0/10000 [00:00<?, ? steps/s]
🏁 Bifurcated agents:   0%|          | 0/100 [00:00<?, ? agents/s]
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
[<ipython-input-65-480d0b2ec372>](https://localhost:8080/#) in <cell line: 2>()
      1 sb.set_env(time_step=.1, pressure_slope=.01, heat_coefficient=.06)
----> 2 best_vector, best_value = sb.maximize(matrix, #domain='int10',
      3                                       agents=100, device='cuda',
      4                                       max_steps=10000, sampling_period=30, ballistic= True,
      5                                       convergence_threshold=50, use_window=True, heated=True, best_only=True)

7 frames
[/usr/local/lib/python3.10/dist-packages/simulated_bifurcation/optimizer/stop_window.py](https://localhost:8080/#) in __compare_energies(self, sampled_spins)
    109 
    110     def __compare_energies(self, sampled_spins: torch.Tensor) -> None:
--> 111         energies = torch.nn.functional.bilinear(
    112             sampled_spins.t(), sampled_spins.t(), torch.unsqueeze(self.ising_tensor, 0)
    113         ).reshape(self.n_agents)

RuntimeError: expected scalar type Double but found Float

[ENH] Add Integer Portfolio Optimization

In this paper, they demonstrate the use of SB for solving the integer portfolio optimization problem. I'm sure you are super busy, but it would be useful to reproduce this work on different numbers of assets in order to demonstrate both the quality and the scalability of this package.

[ENH] Replace `logging.warning` with `warnings.warn`

As discussed in #30

It might be more useful to use warnings.warn instead of logging.warning as the former can be caught like an exception and the user can specify some other settings to try that might improve convergence and not require human intervention. Currently, with logging.warning, there's no easy way to programmatically know that the simulation did not converge (there's no way to introspect the agents or the state of the simulator that I am aware of).

Thank you for your consideration!

[BUG] Algorithm performance dwindles when the number of agents increases

Considering 6 binary variables $x_1$, $x_2$, $x_3$, $x_4$, $x_5$ and $x_6$, we seek to solve

$$\mbox{Maximize } 2x_1 + 2x_2 + 2x_3 + 2x_4 + 4.5x_5 + 3x_6$$

subject to the following constraints:

  • $x_1 + x_2 \leq 1$
  • $x_1 + x_3 \leq 1$
  • $x_2 + x_3 + x_4 \leq 1$
  • $x_3 + x_4 \leq 1$
  • $x_4 + x_5 \leq 1$
  • $x_5 + x_6 \leq 1$

Introducing a penalty coefficient $P$, we can write this LP problem as a QUBO problem that we want to maximize:

$$2x_1 + 2x_2 + 2x_3 + 2x_4 + 4.5x_5 + 3x_6 - Px_1x_2 - Px_1x_3 - Px_2x_3 - Px_2x_4 - 2Px_3x_4 - Px_4x_5 - Px_5x_6$$

We can choose any value of $P$ as long as a single violated constraint results in a negative objective function. Thus, we set $P = 2 + 2 + 2 + 2 + 4.5 + 3 = 15.5$.

The optimal situation is met when only $x_1$, $x_4$ and $x_6$ are set to $1$ with an objective function of $7$.

This can be written in terms of PyTorch tensors:

import torch

P = 15.5
Q = torch.tensor(
    [
        [2, -P, -P, 0, 0, 0],
        [0, 2, -P, -P, 0, 0],
        [0, 0, 2, -2 * P, 0, 0],
        [0, 0, 0, 2, -P, 0],
        [0, 0, 0, 0, 4.5, -P],
        [0, 0, 0, 0, 0, 3],
    ]
)

I tried to use the SB algorithm to solve this problem but observed an unexpected behavior. The code snippet below reproduces it. In short, I used different numbers of agents, with 100 runs of the SB algorithm each time, and looked at the objective values obtained:

from collections import defaultdict
from simulated_bifurcation.models import QUBO

torch.manual_seed(42) #  Reproducibility
study = {}

for agents in [1, 2, 5, 20, 50, 100]:
    study[agents] = defaultdict(int)
    for _ in range(100):
        _, obj = QUBO(Q).maximize(agents=agents, verbose=False)
        study[agents][obj] += 1

Plotting the study's data as pie charts for a clearer visualization, I got the following (figure: distribution of the objective values obtained for each number of agents).

We can clearly see that the optimal value becomes rarer and rarer as the number of agents increases, which is counter-intuitive since using more agents should yield an objective value at least as good as the one obtained with fewer agents. At the same time, as the number of agents increases, the bad values become more and more frequent.

SB Optimizer computation dtype v. Model dtype

Currently, the oscillators in the SB optimizer have the same dtype as the IsingCore model, which itself inherits its dtype from the polynomial model defined by the user. Although it makes sense to create a polynomial model with an integer dtype (int8, int32, ...) and to cast the SB results to this dtype to allow a full-integer workflow, it is counter-productive to use this very dtype for the SB optimization itself, because the oscillators' range of values is [-1, 1], which does not work with integer values.

Thus, it would be nice to allow the user to choose one dtype for the model and another for the optimization.

Several options are available to remedy this problem:

Option 1: int to float mapping

The dtype provided to the sb.optimize, sb.minimize and sb.maximize functions is used for the model, and the SB computation dtype is derived from it:

  • if the dtype is a float (float8, float16, float32, float64) it is also used for SB
  • if the dtype is an integer (int8, int16, int32, int64), SB uses the float dtype encoded on the same number of bits (int8 -> float8, int16 -> float16, etc.)

Option 2: dtype is only for SB computation

The dtype passed is only used for the SB computation (a float dtype is required). If the model to optimize is created first, it can have any dtype, but the equivalent Ising model will have its own dtype. If the polynomial is directly provided in the sb.maximize or sb.minimize function, its dtype will be the SB computation one as well.

Option 3: use two parameters in functions

The optimization functions take two parameters, model_dtype and computation_dtype, which are respectively used to create the model and to run SB.

ValueError: Matrix must be square.


I used the following code to perform an optimization task; I read in the docs that an Ising problem is defined by a null-diagonal square symmetric matrix.

sb.set_env(time_step=.1, pressure_slope=.01, heat_coefficient=.06)
best_vector, best_value = sb.maximize(matrix, agents=100, device='cuda',
                                      max_steps=10000, sampling_period=30, ballistic=True,
                                      convergence_threshold=50, use_window=True, heated=True, best_only=True)

The following error appears:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
[<ipython-input-65-6fffbfba281a>](https://localhost:8080/#) in <cell line: 2>()
      1 sb.set_env(time_step=.1, pressure_slope=.01, heat_coefficient=.06)
----> 2 best_vector, best_value = sb.maximize(matrix, agents=100, device='cuda', 
      3                                       max_steps=10000, sampling_period=30, ballistic= True,
      4                                       convergence_threshold=50, use_window=True, heated=True, best_only=True)
      5 

6 frames
[/usr/local/lib/python3.10/dist-packages/simulated_bifurcation/simulated_bifurcation.py](https://localhost:8080/#) in maximize(matrix, vector, constant, domain, dtype, device, agents, max_steps, best_only, ballistic, heated, verbose, use_window, sampling_period, convergence_threshold, timeout, input_type)
    864         domain = input_type
    865 
--> 866     return optimize(
    867         matrix,
    868         vector,

[/usr/local/lib/python3.10/dist-packages/simulated_bifurcation/simulated_bifurcation.py](https://localhost:8080/#) in optimize(matrix, vector, constant, domain, dtype, device, agents, max_steps, best_only, ballistic, heated, minimize, verbose, use_window, sampling_period, convergence_threshold, timeout, input_type)
    315         domain = input_type
    316 
--> 317     model = build_model(
    318         matrix=matrix,
    319         vector=vector,

[/usr/local/lib/python3.10/dist-packages/simulated_bifurcation/simulated_bifurcation.py](https://localhost:8080/#) in build_model(matrix, vector, constant, domain, dtype, device, input_type)
   1043 
   1044     if domain == "spin":
-> 1045         return SpinQuadraticPolynomial(
   1046             matrix=matrix,
   1047             vector=vector,

[/usr/local/lib/python3.10/dist-packages/simulated_bifurcation/polynomial/spin_polynomial.py](https://localhost:8080/#) in __init__(self, matrix, vector, constant, dtype, device, silence_deprecation_warning)
    155             )
    156 
--> 157         super().__init__(
    158             matrix,
    159             vector,

[/usr/local/lib/python3.10/dist-packages/simulated_bifurcation/polynomial/base_multivariate_polynomial.py](https://localhost:8080/#) in __init__(self, matrix, vector, constant, accepted_values, dtype, device, silence_deprecation_warning)
     81             )
     82         self.__check_device(device)
---> 83         self.__init_matrix(matrix, dtype, device)
     84         self.__init_vector(vector, dtype, device)
     85         self.__init_constant(constant, dtype, device)

[/usr/local/lib/python3.10/dist-packages/simulated_bifurcation/polynomial/base_multivariate_polynomial.py](https://localhost:8080/#) in __init_matrix(self, matrix, dtype, device)
    189     ) -> None:
    190         tensor_matrix = self._cast_matrix_to_tensor(matrix, dtype, device)
--> 191         self.__check_square_matrix(tensor_matrix)
    192         self.__matrix = tensor_matrix
    193         self.__dimension = tensor_matrix.shape[0]

[/usr/local/lib/python3.10/dist-packages/simulated_bifurcation/polynomial/base_multivariate_polynomial.py](https://localhost:8080/#) in __check_square_matrix(matrix)
    252             raise ValueError(f"Matrix requires two dimension, got {matrix.ndim}.")
    253         if matrix.shape[0] != matrix.shape[1]:
--> 254             raise ValueError("Matrix must be square.")
    255 
    256     def __check_vector_shape(self, vector: torch.Tensor) -> None:

ValueError: Matrix must be square.

[ENH] Speed up computation when agents have converged

When the stop window is used, it could be interesting to remove the converged agents to reduce the number of computations and thus speed up the convergence of the other agents.

Currently, the converged agents' spins are stored so their final value is frozen, but they are not removed from the oscillator tensors, which means that some spins are updated for no reason.

[BUG] MemoryError when creating an instance of IntegerPolynomial for integers with a large number of bits

Description

Creating an instance of IntegerPolynomial over integers with a large number of bits raises a MemoryError.

The IntegerPolynomial.__init__ creates a list of all accepted values which has size 2**number_of_bits.

Code example

import torch

import simulated_bifurcation as sb

matrix = torch.zeros(2, 2)  # any square matrix (illustrative)
sb.build_model(matrix, input_type="int42")

Traceback

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\agero\PycharmProjects\simulated-bifurcation-algorithm\.test_env\.pypi_env_no_gpu\lib\site-packages\simulated_bifurcation\simulated_bifurcation.py", line 823, in build_model
    return IntegerPolynomial(
  File "C:\Users\agero\PycharmProjects\simulated-bifurcation-algorithm\.test_env\.pypi_env_no_gpu\lib\site-packages\simulated_bifurcation\polynomial\integer_polynomial.py", line 60, in __init__
    matrix, vector, constant, [*range(2**number_of_bits)], dtype, device
MemoryError

TypeError: 'type' object is not subscriptable when importing simulated_bifurcation

Hi,

When importing simulated_bifurcation, I got an error: "TypeError: 'type' object is not subscriptable".

Error message (traceback excerpt):

     10 class IsingPolynomialInterface(ABC):
     11
     12     """

simulated_bifurcation/polynomial/ising_polynomial_interface.py in IsingPolynomialInterface()
     28     vector: Union[torch.Tensor, np.ndarray, None] = None,
     29     constant: Union[int, float, None] = None,
---> 30     accepted_values: Union[torch.Tensor, np.ndarray, list[int], None] = None,
     31     dtype: torch.dtype = torch.float32,
     32     device: str = "cpu",

Could you help solve this problem? Thanks
