ljvmiranda921 / pyswarms

A research toolkit for particle swarm optimization in Python

Home Page: https://pyswarms.readthedocs.io/en/latest/

License: MIT License

Languages: Python 97.91%, Makefile 0.95%, TeX 0.89%, Shell 0.25%

Topics: particle-swarm-optimization, optimization-tools, pso, global-optimization, swarm-intelligence, machine-learning, discrete-optimization, optimization, optimization-algorithms, metaheuristics, algorithm

pyswarms's Introduction

PySwarms Logo


NOTICE: I am not actively maintaining this repository anymore. My research interests have changed in the past few years. I highly recommend checking out scikit-opt for metaheuristic methods including PSO.

PySwarms is an extensible research toolkit for particle swarm optimization (PSO) in Python.

It is intended for swarm intelligence researchers, practitioners, and students who prefer a high-level declarative interface for implementing PSO in their problems. PySwarms enables basic optimization with PSO and interaction with swarm optimizations. Check out more features below!

Features

  • High-level module for Particle Swarm Optimization. For a list of all optimizers, check this link.
  • Built-in objective functions to test optimization algorithms.
  • Plotting environment for cost histories and particle movement.
  • Hyperparameter search tools to optimize swarm behaviour.
  • (For Devs and Researchers): Highly-extensible API for implementing your own techniques.

Installation

To install PySwarms, run this command in your terminal:

$ pip install pyswarms

This is the preferred method to install PySwarms, as it will always install the most recent stable release.

In case you want to install the bleeding-edge version, clone this repo:

$ git clone -b development https://github.com/ljvmiranda921/pyswarms.git

and then run

$ cd pyswarms
$ python setup.py install

To install PySwarms on Fedora, use:

$ dnf install python3-pyswarms

Running in a Vagrant Box

To run PySwarms in a Vagrant Box, install Vagrant by going to https://www.vagrantup.com/downloads.html and downloading the proper package from the HashiCorp website.

Afterward, run the following command in the project directory:

$ vagrant provision
$ vagrant up
$ vagrant ssh

Now you're ready to develop your contributions in a premade virtual environment.

Basic Usage

PySwarms provides a high-level implementation of various particle swarm optimization algorithms. Thus, it aims to be user-friendly and customizable. In addition, supporting modules can be used to help you in your optimization problem.

Optimizing a sphere function

You can import PySwarms as any other Python module,

import pyswarms as ps

Suppose we want to find the minimum of f(x) = x^2 using global-best PSO. Simply import the built-in sphere function, pyswarms.utils.functions.sphere(), and the necessary optimizer:

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
# Set-up hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options)
# Perform optimization
best_cost, best_pos = optimizer.optimize(fx.sphere, iters=100)

Sphere Optimization

This will run the optimizer for 100 iterations and then return the best cost and best position found by the swarm. In addition, you can access various histories through properties of the class:

# Obtain the cost history
optimizer.cost_history
# Obtain the position history
optimizer.pos_history
# Obtain the velocity history
optimizer.velocity_history

You can also obtain the mean personal-best and mean neighbor histories for local-best PSO implementations; simply call optimizer.mean_pbest_history and optimizer.mean_neighbor_history, respectively.
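
For instance, a minimal local-best run might look like this (a sketch; note that LocalBestPSO additionally expects the 'k' and 'p' options):

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

# local-best PSO needs 'k' (number of neighbors) and 'p' (Minkowski p-norm)
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9, 'k': 3, 'p': 2}
lbest = ps.single.LocalBestPSO(n_particles=10, dimensions=2, options=options)
lbest.optimize(fx.sphere, iters=100)

lbest.mean_pbest_history     # mean personal-best cost per iteration
lbest.mean_neighbor_history  # mean neighborhood-best cost per iteration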

Hyperparameter search tools

PySwarms implements grid search and random search techniques to find the best parameters for your optimizer. Setting them up is easy. In this example, let's try using pyswarms.utils.search.RandomSearch to find the optimal parameters for the LocalBestPSO optimizer.

Here, we input a range, enclosed in tuples, to define the space in which the parameters will be found. Thus, (1,5) pertains to a range from 1 to 5.

import numpy as np
import pyswarms as ps
from pyswarms.utils.search import RandomSearch
from pyswarms.utils.functions import single_obj as fx

# Set-up choices for the parameters
options = {
    'c1': (1,5),
    'c2': (6,10),
    'w': (2,5),
    'k': (11, 15),
    'p': 1
}

# Create a RandomSearch object
# n_selection_iters is the number of iterations to run the searcher
# iters is the number of iterations to run the optimizer
g = RandomSearch(ps.single.LocalBestPSO, n_particles=40,
            dimensions=20, options=options, objective_func=fx.sphere,
            iters=10, n_selection_iters=100)

best_score, best_options = g.search()

This then returns the best score found during optimization, and the hyperparameter options that enable it.

>>> best_score
1.41978545901
>>> best_options['c1']
1.543556887693
>>> best_options['c2']
9.504769054771

Swarm visualization

It is also possible to plot the optimizer's performance. The plotters module is built on top of matplotlib, making it highly customizable.

import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
from pyswarms.utils.plotters import plot_cost_history, plot_contour, plot_surface
import matplotlib.pyplot as plt
# Set-up optimizer
options = {'c1':0.5, 'c2':0.3, 'w':0.9}
optimizer = ps.single.GlobalBestPSO(n_particles=50, dimensions=2, options=options)
optimizer.optimize(fx.sphere, iters=100)
# Plot the cost
plot_cost_history(optimizer.cost_history)
plt.show()

CostHistory

We can also plot the animation...

from pyswarms.utils.plotters.formatters import Mesher, Designer
# Plot the sphere function's mesh for better plots
m = Mesher(func=fx.sphere,
           limits=[(-1,1), (-1,1)])
# Adjust figure limits
d = Designer(limits=[(-1,1), (-1,1), (-0.1,1)],
             label=['x-axis', 'y-axis', 'z-axis'])

In 2D,

plot_contour(pos_history=optimizer.pos_history, mesher=m, designer=d, mark=(0,0))

Contour

Or in 3D!

pos_history_3d = m.compute_history_3d(optimizer.pos_history) # preprocessing
animation3d = plot_surface(pos_history=pos_history_3d,
                           mesher=m, designer=d,
                           mark=(0,0,0))    

Surface

Contributing

PySwarms is currently maintained by a small yet dedicated team, and we would appreciate it if you could lend a hand with the following:

  • Find bugs and fix them
  • Update documentation in docstrings
  • Implement new optimizers to our collection
  • Make utility functions more robust

We would also like to acknowledge all our contributors, past and present, for making this project successful!

If you wish to contribute, check out our contributing guide. Moreover, you can also see the list of features that need some help in our Issues page.

Most importantly, first-time contributors are welcome to join! I try my best to help you get started and enable you to make your first Pull Request! Let's learn from each other!

Credits

This project was inspired by the pyswarm module that performs PSO with constrained support. The package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

Cite us

Are you using PySwarms in your project or research? Please cite us!

  • Miranda L.J., (2018). PySwarms: a research toolkit for Particle Swarm Optimization in Python. Journal of Open Source Software, 3(21), 433, https://doi.org/10.21105/joss.00433

@article{pyswarmsJOSS2018,
    author  = {Lester James V. Miranda},
    title   = "{P}y{S}warms, a research-toolkit for {P}article {S}warm {O}ptimization in {P}ython",
    journal = {Journal of Open Source Software},
    year    = {2018},
    volume  = {3},
    issue   = {21},
    doi     = {10.21105/joss.00433},
    url     = {https://doi.org/10.21105/joss.00433}
}

Projects citing PySwarms

Not on the list? Ping us in the Issue Tracker!

  • Gousios, Georgios. Lecture notes for the TU Delft TI3110TU course Algorithms and Data Structures. Accessed May 22, 2018. http://gousios.org/courses/algo-ds/book/string-distance.html#sop-example-using-pyswarms.
  • Nandy, Abhishek, and Manisha Biswas., "Applying Python to Reinforcement Learning." Reinforcement Learning. Apress, Berkeley, CA, 2018. 89-128.
  • Benedetti, Marcello, et al., "A generative modeling approach for benchmarking and training shallow quantum circuits." arXiv preprint arXiv:1801.07686 (2018).
  • Vrbančič et al., "NiaPy: Python microframework for building nature-inspired algorithms." Journal of Open Source Software, 3(23), 613, https://doi.org/10.21105/joss.00613
  • Häse, Florian, et al. "Phoenics: A Bayesian optimizer for chemistry." ACS Central Science. 4.9 (2018): 1134-1145.
  • Szynkiewicz, Pawel. "A Comparative Study of PSO and CMA-ES Algorithms on Black-box Optimization Benchmarks." Journal of Telecommunications and Information Technology 4 (2018): 5.
  • Mistry, Miten, et al. "Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded." Imperial College London (2018).
  • Vishwakarma, Gaurav. Machine Learning Model Selection for Predicting Properties of High Refractive Index Polymers Dissertation. State University of New York at Buffalo, 2018.
  • Uluturk Ismail, et al. "Efficient 3D Placement of Access Points in an Aerial Wireless Network." 2019 16th IEEE Annual Consumer Communications and Networking Conference (CCNC) IEEE (2019): 1-7.
  • Downey A., Theisen C., et al. "Cam-based passive variable friction device for structural control." Engineering Structures Elsevier (2019): 430-439.
  • Thaler S., Paehler L., Adams, N.A. "Sparse identification of truncation errors." Journal of Computational Physics Elsevier (2019): vol. 397
  • Lin, Y.H., He, D., Wang, Y. Lee, L.J. "Last-mile Delivery: Optimal Locker Location under Multinomial Logit Choice Model" https://arxiv.org/abs/2002.10153
  • Park J., Kim S., Lee, J. "Supplemental Material for Ultimate Light trapping in free-form plasmonic waveguide" KAIST, University of Cambridge, and Cornell University http://www.jlab.or.kr/documents/publications/2019PRApplied_SI.pdf
  • Pasha A., Latha P.H., "Bio-inspired dimensionality reduction for Parkinson's Disease Classification," Health Information Science and Systems, Springer (2020).
  • Carmichael Z., Syed, H., et al. "Analysis of Wide and Deep Echo State Networks for Multiscale Spatiotemporal Time-Series Forecasting," Proceedings of the 7th Annual Neuro-inspired Computational Elements ACM (2019), nb. 7: 1-10 https://doi.org/10.1145/3320288.3320303
  • Klonowski, J. "Optimizing Message to Virtual Link Assignment in Avionics Full-Duplex Switched Ethernet Networks" Proquest
  • Haidar, A., Jan, ZM. "Evolving One-Dimensional Deep Convolutional Neural Network: A Swarm-based Approach," IEEE Congress on Evolutionary Computation (2019) https://doi.org/10.1109/CEC.2019.8790036
  • Shang, Z. "Performance Evaluation of the Control Plane in OpenFlow Networks," Freie Universität Berlin (2020)
  • Linker, F. "Industrial Benchmark for Fuzzy Particle Swarm Reinforcement Learning," Leipzig University (2020)
  • Vetter, A. Yan, C. et al. "Computational rule-based approach for corner correction of non-Manhattan geometries in mask aligner photolithography," Optics (2019). vol. 27, issue 22: 32523-32535 https://doi.org/10.1364/OE.27.032523
  • Wang, Q., Megherbi, N., Breckon T.P., "A Reference Architecture for Plausible Threat Image Projection (TIP) Within 3D X-ray Computed Tomography Volumes" https://arxiv.org/abs/2001.05459
  • Menke, Tim, Häse, Florian, et al. "Automated discovery of superconducting circuits and its application to 4-local coupler design," arXiv preprint: https://arxiv.org/abs/1912.03322

Others

Like it? Love it? Leave us a star on GitHub to show your appreciation!

Contributors

Thanks goes to these wonderful people (emoji key):


  • Aaron: 🚧 💻 📖 ⚠️ 🤔 👀
  • Carl-K: 💻 ⚠️
  • Siobhán K Cronin: 💻 🚧 🤔
  • Andrew Jarcho: ⚠️ 💻
  • Mamady: 💻
  • Jay Speidell: 💻
  • Eric: 🐛 💻
  • CPapadim: 🐛 💻
  • JiangHui: 💻
  • Jericho Arcelao: 💻
  • James D. Bohrman: 💻
  • bradahoward: 💻
  • ThomasCES: 💻
  • Daniel Correia: 🐛 💻
  • fluencer: 💡 📖
  • miguelcocruz: 📖 💡
  • Steven Beardwell: 💻 🚧 📖 🤔
  • Nathaniel Ngo: 📖
  • Aneal Sharma: 📖
  • Chris McClure: 📖 💡
  • Christopher Angell: 📖
  • Kutim: 🐛
  • Jake Souter: 🐛 💻
  • Ian Zhang: 📖 💡
  • Zach: 📖
  • Michel Lavoie: 🐛
  • ewekam: 📖
  • Ivyna Santino: 📖 💡
  • Muhammad Yasirroni: 📖
  • Christian Kastner: 📖 📦
  • Nishant Rodrigues: 💻
  • msat59: 💻 🐛
  • Diego: 📖
  • Shaad Alaka: 📖
  • Krzysztof Błażewicz: 🐛
  • Jorge Castillo: 📖
  • Philipp Danner: 💻
  • Nikhil Sethi: 💻 📖
  • firefly-cpp: 📖

This project follows the all-contributors specification. Contributions of any kind welcome!


pyswarms's Issues

Port all tests into pytest

It might be better to restructure the unit tests with pytest. I enjoyed its parametric testing, which may be useful when expanding our feature set.
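
For illustration, a sketch of what parametric tests could look like (the test name and cases here are hypothetical):

import numpy as np
import pytest
from pyswarms.utils.functions import single_obj as fx

# each (function, argmin) pair becomes its own generated test case
@pytest.mark.parametrize("func,argmin", [
    (fx.sphere, np.zeros((1, 2))),
    (fx.rastrigin, np.zeros((1, 2))),
])
def test_returns_minimum_at_argmin(func, argmin):
    assert np.isclose(func(argmin), 0.0).all()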

Implement Constrained Objective Functions

Context

For the v.0.2.0 release (#5), we're planning to add two new major features, and one of them is constrained optimization via PSO (#8). In PySwarms, we don't only provide primitives for optimizers, but also built-in objective functions to test various methods (such as in the single-objective case). In this Issue, we will be implementing constrained objective functions.

What you'll do

  • Write a module pyswarms.utils.functions.constrained_obj containing two constrained objective functions (let's start with two for now).
  • Create unit tests for the said functions in the module tests.utils.functions.test_constrainedobj.

What makes this different from the single_obj module?

Single-objective functions take an input vector and compute the fitness of that vector given an equation. So if your objective function is the sphere function, you simply supply it with an input vector, say [0,0], and it gives you the corresponding fitness, i.e., 0.
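
To make that concrete, a quick sketch (note that PySwarms' built-ins operate on a swarm matrix, so the vector [0,0] is passed as a 1x2 array; the exact function name may differ across versions):

import numpy as np
from pyswarms.utils.functions import single_obj as fx

# fitness of the vector [0, 0] under the sphere function
print(fx.sphere(np.zeros((1, 2))))  # -> [0.]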

Constrained objective functions are different in the sense that you don't only have a method that computes a vector's fitness, but also constraints that limit where these vectors can go. These are useful in applications where you want to optimize with respect to a certain limit, say, in mechanical engineering where the yield stress should not exceed a value x. Of course, you can have multiple constraints in a given problem, and we will consider that here.

What should the data structure look like?

Unlike single-objective functions, where we only have one method per function, constrained optimization will take a more object-oriented approach. Before, we had something like this:

def objective_function(X):
    # Compute a fitness given X, say, just (X-5)**2
    J = (X-5)**2
    return J

Now, we will have something like:

class MyConstrainedObjective(object):
    """My constrained objective function

    Most constrained optimization problems are defined in the following manner:
    min J(X) = something something, such that
        equality constraint: h(x), g(x)
        inequality constraint: i(x), j(x)
    """

    @classmethod
    def objective_func(cls, X):
        """Contains the actual fitness function that computes for an input X"""
        # Compute for J
        return J

    @classmethod
    def ineq_cons(cls):
        """Returns all inequality constraints of the given problem"""
        # Assuming we have two inequality constraints
        return (cls.ineq_1, cls.ineq_2)

    @classmethod
    def eq_cons(cls):
        """Returns all equality constraints of the given problem"""
        # Assuming we only have one equality constraint
        return cls.eq_1

    @staticmethod
    def ineq_1(X):
        """Inequality constraint 1 that checks X"""
        return X <= 5  # for example

    # the same goes for ineq_2 and eq_1

If you think there's a better data model for this, feel free to suggest a better way of handling these constraints. This means that when we're performing a constrained optimization problem, we only need to call the objective function, and the inequality and equality constraints if possible:

cons_pso = ConstrainedPSO()
cons_pso.optimize(objective_func=MyConstrainedObjective.objective_func,
                  ineq_cons=MyConstrainedObjective.ineq_cons,
                  eq_cons=MyConstrainedObjective.eq_cons)

A good resource for constrained optimization problems can be found in this link. It's Wikipedia, I know, but this one is pretty accurate. For starters, you can try the first two methods.

Writing tests

You can check on how we write tests in the tests module of this package. Currently, we're using the unittest package to do these tests. Some of the things we'd like to check are the following:

  • Does the objective function approximately return the minimum when fed with the argmin (the argument that gives the optimal value)?
  • Do the methods raise an Error when fed with values outside of bounds, or of the wrong type, etc.?
  • Other things you can think of that increase test coverage; meaning, all methods you write should be tested.

Other things you need to know

  • PySwarms is pretty much a documentation junkie, so please put docstrings in all classes, methods, and modules you create. This is important so that others can easily pick up what you're working on. You can refer to the numpy documentation for a primer on how we do things.
  • We also follow PEP8 rules for styling Python code. If you wish to check your code against PEP8, please run a flake8 test on your file.
  • This whole Issue may be quite long and overwhelming at first, but don't worry! I also wanted to help you make your first contribution! And hope we both learn in the process. 👍
  • Currently, I'm building a prototype of a constrained optimizer. Ideally, I hope that we can test how the optimizer I'm building will interface with the objective functions you are building 😄

Getting Started

Capitalized, short (50 chars or less) summary

More detailed explanatory text, if necessary.  Wrap it to about 72
characters or so.  In some contexts, the first line is treated as the
subject of an email and the rest of the text as the body.  The blank
line separating the summary from the body is critical (unless you omit
the body entirely); tools like rebase can get confused if you run the
two together.

Write your commit message in the imperative: "Fix bug" and not "Fixed bug"
or "Fixes bug."  This convention matches up with commit messages generated
by commands like git merge and git revert.

Further paragraphs come after blank lines.

- Bullet points are okay, too

- Typically a hyphen or asterisk is used for the bullet, followed by a
  single space, with blank lines in between, but conventions vary here

- Use a hanging indent

Author: your-github-username
E-mail: your-email

Fix issue on Ackley Function

  • PySwarms version: 0.1.5
  • Python version: 2.7
  • Operating System: Linux

Description

It seems that the Ackley function is not working on Python 2.7:

def ackley_func(x):
    if not np.logical_and(x >= -32, x <= 32).all():
        raise ValueError('Input for Ackley function must be within [-32, 32].')

    d = x.shape[1]
    j = (-20.0 * np.exp(-0.2 * np.sqrt((1/d) * (x**2).sum(axis=1)))
        - np.exp((1/d) * np.cos(2 * np.pi * x).sum(axis=1))
        + 20.0
        + np.exp(1))

    return j

When given an input of np.zeros([3,2]), most Python 3.x versions return something close to np.zeros(3). But when run against Python 2.7.x, we're getting [1.718, 1.718, 1.718].
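
The reported value is suspiciously close to e - 1 ≈ 1.718, which points at Python 2's integer division: with d an int, 1/d evaluates to 0, both exponential terms collapse to 1, and the function returns -20 - 1 + 20 + e. A sketch of a fix:

from __future__ import division  # make / behave as true division on Python 2

# ...or force float division explicitly inside ackley_func:
j = (-20.0 * np.exp(-0.2 * np.sqrt((1.0 / d) * (x**2).sum(axis=1)))
    - np.exp((1.0 / d) * np.cos(2 * np.pi * x).sum(axis=1))
    + 20.0
    + np.exp(1))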

Add plotting module

I think it is better to decouple visualization from optimization.

Before, we had a PlotEnvironment class that takes an optimizer instance, runs the optimization, and plots whatever the user requires (cost history, animations, etc.).

Some problems I see:

  • What if I want to experiment with my swarm first before visualization?
  • What if optimization takes a long time? Does it mean that the PlotEnvironment class
    will repeat this whole process again?
  • What if I really want to plot this particular history because of some interesting property
    I found in this rollout? PlotEnvironment will just ignore that because it has its own rollout.

It's better if we have the user do the optimization first, get the histories (via the properties in the SwarmOptimizer class, e.g., my_optimizer.cost_history), and pass them to the plotting function (which doesn't need to be contained in a class).

Rough draft:

import pyswarms as ps
from pyswarms.utils.plotters import (plot_cost_history, plot_2d_trajectory)

# Setup PSO
optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=2, options=my_options)
optimizer.optimize(f, iters=1000) # Run for 1k iterations

# Obtain history
swarm_cost_history = optimizer.cost_history  # Of shape (1000, )
swarm_pos_history = optimizer.pos_history # Of shape (1000, 100, 2)
swarm_velocity_history = optimizer.velocity_history # Of shape (1000, 100, 2)

# Plot!
# Plotters can still accept an Axis class from matplotlib

plot_cost_history(swarm_cost_history, title='My cost history')

# Plotting velocity is optional. Only plot if an arg is passed
# Still returns an animate class from matplotlib
# Contour can be turned on or off. Make a separate "backend" for this so that proper errors can
#   be passed.
plot_2d_trajectory(swarm_pos_history, swarm_velocity_history, contour=True)

Applying large bounds results in a non-optimal solution and a Value Error

Describe the bug
If I try to apply a large upper bound to my problem, I no longer get the same optimal solution I was getting. I would think that the bounds only come into play when a particle reaches them, and therefore they should not affect the solution too much.

Also, if the bounds are large enough PySwarms returns the error "ValueError: operands could not be broadcast together with shapes (0,) (10,2)"

Here are some examples, using the basic example provided. This first example works fine:

# Create bounds 
max_bound = 10e2 * np.ones(2)
min_bound = -10e2 * np.ones(2)
bounds = (min_bound, max_bound)

# Call instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options, bounds=bounds)

# Perform optimization
cost, pos = optimizer.optimize(fx.sphere_func, print_step=100, iters=1000, verbose=3)

This next one gives a result that is far from the near-zero optimum.

# Create bounds
max_bound = 10e20 * np.ones(2)
min_bound = -10e2 * np.ones(2)
bounds = (min_bound, max_bound)

# Call instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options, bounds=bounds)

# Perform optimization
cost, pos = optimizer.optimize(fx.sphere_func, print_step=100, iters=1000, verbose=3)

And this last one returns the error "ValueError: operands could not be broadcast together with shapes (0,) (10,2) "

# Create bounds
max_bound = 10e200 * np.ones(2)
min_bound = -10e2 * np.ones(2)
bounds = (min_bound, max_bound)

# Call instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options, bounds=bounds)

# Perform optimization
cost, pos = optimizer.optimize(fx.sphere_func, print_step=100, iters=1000, verbose=3)

Maybe I am misunderstanding what is meant by the term bounds, but I don't think it should behave like this. Also, a nice feature would be to allow the user to set an infinite upper bound, say (0, inf), in case they only want to consider positive numbers.
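
A hedged guess at what's happening, consistent with the "precision and overflow" note in the documentation task list further down this page: positions are initialized uniformly inside the bounds, so a bound near 1e201 produces coordinates whose squares overflow float64.

import numpy as np

pos = np.random.uniform(-10e2, 10e200, (10, 2))  # huge initial positions
print((pos ** 2).sum(axis=1))  # the sphere cost overflows to inf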

More explicit documentation in paper.md

Reference: openjournals/joss-reviews#433

Right now, the article is a bit short; we don't want a full-length paper, but per the author guidelines we do expect between 250 and 1000 words. Perhaps you could describe the implementation a bit more, or better yet explain some example use cases. Examples of the software being used in research (whether published already, or in progress) are also helpful. In addition, it may be helpful to explain PSO in a sentence or two, with an appropriate reference, at the beginning.

Successfully implemented PySwarms in my regression problem!

Good day, sir! I just want to thank you for helping me with my assignment. I really appreciate you and, most especially, PySwarms <3 I really love it! I am happy that I was able to implement PySwarms in a regression problem using MSE as the loss function. I would not have been able to accomplish it if it weren't for your kindness. Once again, thank you very much and may you have a bright future 👍

Make a Jupyter Notebook tutorial for Inverse Kinematics and PSO

The idea is simple: implement a Jupyter notebook tutorial for inverse kinematics using PySwarms. I already have a tutorial (using natively-written PSO) on this blog. It might be nice to have a PySwarms equivalent. You just need to follow that tutorial, and swap everything over to PySwarms.

A good template can be found here

Notes: Please work on the development branch. You can find a good StackOverflow question here.

Binary PSO cost problem

  • PySwarms version: 0.1.6a

Description

"The cost doesn't seem to monotonically decrease from one iteration to the next. It seems like the cost that should be returned each iteration as the best cost is the one that is the historically best across all particles, yet on some iterations the cost seems to jump up, and the final cost when the algorithm completes isn't the minimum cost it encountered and returned in previous iterations. "

"When I reduce the inertia, the problem reduces (but it's still there). That behavior suggests to me that the global best position is computed from only particles' current positions and not their past positions. That way, as particles become more likely to explore, they are more likely to move into and out of good solutions. Could that be what's going on? That the global / social best positions are computed only on the current iteration and not on the history of found solutions?"

What I Did

import numpy as np
import pyswarms as ps
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=100, n_features=300, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)

# Define cost function
line_model = LinearRegression()
def rmse_particle(pos):
    line_model.fit(X_train[:,pos==1], y_train)
    pred = line_model.predict(X_test[:,pos==1])
    return np.sqrt(mean_squared_error(pred, y_test)) 

def rmse_func(positions):
    n_particles = positions.shape[0]
    j = [rmse_particle(positions[i]) for i in range(n_particles)]
    return np.array(j)

# Perform optimization
options = {'c1': 0.5, 'c2': 0.5, 'w':0, 'k':30 , 'p':2}
optimizer = ps.discrete.BinaryPSO(n_particles=30, dimensions=300, options=options)
optimizer.reset() 
cost, pos = optimizer.optimize(rmse_func, print_step=1, iters=300, verbose=2)

Crash on custom objective function

  • PySwarms version: head
  • Python version: 3.6
  • Operating System: linux

Description

Tried to optimize a harder function than the sphere, which is basically trivial. The sphere function, with all else held constant, works fine.

What I Did

import pyswarms as ps
from numpy import cos, sqrt

def Griewangk(x):
    sum1 = 0 
    prod1 = 1
    for i in range(len(x)):
        sum1 += pow(x[i], 2)
        prod1 *= cos(x[i]/sqrt(i+1))
    f = sum1/4000.0 - prod1 + 1
    return f

# Set-up hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}

# Call instance of GlobalBestPSO
optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=10,
                                    options=options)

# Perform optimization
stats = optimizer.optimize(Griewangk, iters=1000)
print(stats)

yields

ValueError                                Traceback (most recent call last)
<ipython-input-40-515ac38c5127> in <module>()
     19 
     20 # Perform optimization
---> 21 stats = optimizer.optimize(Griewangk, iters=1000)
     22 print(stats)

~/miniconda3/envs/jupy36/lib/python3.6/site-packages/pyswarms/single/global_best.py in optimize(self, objective_func, iters, print_step, verbose)
    153             _m = np.repeat(m[:, np.newaxis], self.dimensions, axis=1)
    154             self.personal_best_pos = np.where(~_m, self.personal_best_pos,
--> 155                                               self.pos)
    156 
    157             # Get the minima of the pbest and check if it's less than

ValueError: operands could not be broadcast together with shapes (10,10) (100,10) (100,10)
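
The shapes in the error hint at the likely cause: optimize() hands the objective function the whole swarm, an array of shape (n_particles, dimensions), and expects one cost per particle back, shape (n_particles,) (see also the note under "Documentation fixes" below). The per-element Griewangk above returns the wrong shape. A vectorized sketch:

import numpy as np

def griewank(x):
    """Vectorized Griewank: x has shape (n_particles, dimensions),
    returns costs of shape (n_particles,)."""
    d = x.shape[1]
    sum_term = (x ** 2).sum(axis=1) / 4000.0
    prod_term = np.prod(np.cos(x / np.sqrt(np.arange(1, d + 1))), axis=1)
    return sum_term - prod_term + 1.0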

Fix .travis.yml

  • PySwarms version: 0.1.0
  • Python version: 3.6.2
  • Operating System: Windows 10

Description

Uh-oh, I think I messed up .travis.yml, breaking the build. For now, I've just turned off the sync to Travis CI so that it will stop sending me notifications of my failure :(

What I Did

I added these lines of code to .travis.yml. My test suite uses the numpy package, so I decided to install it as a system environment. Little did I know that it would break like this. Please check build #21 on your Travis site.

virtualenv:
  system_site_packages: true
before_install:
  - sudo apt-get install -qq python-numpy python-scipy

Boundary Conditions

High-dimensional searches with the current boundary conditions result in a lot of lost optimization steps where many particles do not move.

Multiple solutions exist to deal with these issues. We want to implement reflective boundary conditions that keep particles within the bounds in a way that still samples the whole parameter space.
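
As a sketch of the idea (names hypothetical, not an existing PySwarms API): reflective boundaries mirror an out-of-bounds coordinate back across the violated bound instead of leaving the particle stuck at it.

import numpy as np

def reflect(position, lb, ub):
    """Mirror out-of-bounds coordinates back across the violated bound."""
    pos = np.where(position < lb, 2 * lb - position, position)
    pos = np.where(pos > ub, 2 * ub - pos, pos)
    # clip in case the reflection itself overshoots the opposite bound
    return np.clip(pos, lb, ub)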

Fix PEP-8 Violations

As of 2017-10-28, flake8 was reporting 397 violations.

A summary of the output of flake8 --statistics --count pyswarms test is below:

6 E125 continuation line with same indent as next logical line
3 E127 continuation line over-indented for visual indent
46 E128 continuation line under-indented for visual indent
1 E202 whitespace before ')'
4 E211 whitespace before '('
2 E221 multiple spaces before operator
78 E231 missing whitespace after ','
1 E261 at least two spaces before inline comment
15 E265 block comment should start with '# '
2 E271 multiple spaces after keyword
21 E302 expected 2 blank lines, found 1
2 E303 too many blank lines (2)
69 E501 line too long (80 > 79 characters)
1 E502 the backslash is redundant between brackets
10 F401 'logging' imported but unused
1 F841 local variable 'params' is assigned to but never used
108 W291 trailing whitespace
13 W292 no newline at end of file
10 W293 blank line contains whitespace
4 W391 blank line at end of file
397

Documentation fixes

Documentation Task-List

  • RST file for Inverse Kinematics
  • Add citation in the Pyramid topology to this paper
  • Use command style in the doc string's "subject line"(PEP257)
  • Update documentation and docstrings using http://www.hemingwayapp.com/ (Readability ~Grade 9)
  • Merge (file merge, not git merge! 😆 ) basic_optimization example with **kwargs example (add RST file too)
  • Note on large boundaries, etc. (see #152)
  • Note on precision and overflow (see #152)
  • Note on creating custom-objective function (return value should be (n_particles) see #109 )
  • Update README (put Siobhan and Aaron as Collaborators).

[DOC] Allow easier access to different doc pages

Going from one documentation page to another requires going through an intermediate page. For example, going from discrete.binary to single.global_best requires two clicks.

On the discrete.binary page, I see no link to single.global_best. That's illustrated by the figures below:

(Screenshots of the discrete.binary and single.global_best pages, taken 2017-11-02.)

It would be useful to have links in-between sections.

This isn't blocking for openjournals/joss-reviews#433 but it'd make docs for this package more accessible.

The documentation would be easier to use if this was resolved.

Prepare for new release (v.0.1.9)

Thanks to @mamadyonline, we have expanded our feature set, which now includes:

  • Set tolerance to decide when to break iteration
  • Set initial position if the user wants to; and
  • Fix for rosenbrock function.

I will try to draft release notes next week so we can go to v0.1.9 by the end of the month.

Things to do:

How can I implement PySwarms in regression problems?

I actually have one, sir! I am trying to compare the implementation of BP NN vs PSO NN. I have a code for BP NN using TensorFlow. Here is my code:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib import learn
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn import datasets, linear_model
from sklearn import model_selection
from sklearn import preprocessing

import tensorflow as tf
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import math
import calendar
import time

region = "NCR"
filename = region + ".csv"
x = np.array(pd.read_csv(filename, usecols=[1,2,3,4], header=None))
y = np.array((pd.read_csv(filename, usecols=[5], header=None)))
normalized_X = preprocessing.scale(x)

X_train, X_test, Y_train, Y_test = model_selection.train_test_split(
    normalized_X, y, test_size=0.33333333, shuffle=False, stratify=None)

total_len = X_train.shape[0]

# Parameters

learning_rate = 0.01
training_epochs = 600
batch_size = X_train.shape[0]
display_step = 1
model_path = "/tmp/models/" + region + ".pkt"

# Network Parameters

n_hidden_1 = 256
n_hidden_2 = 256
n_hidden_3 = 256
n_hidden_4 = 256
n_input = X_train.shape[1]
n_classes = 1

# TensorFlow graph input

normalized_X = tf.placeholder("float", [None, 4])
y = tf.placeholder("float", [None, 1])

# Create model

def multilayer_perceptron(normalized_X, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(normalized_X, weights['w1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)

    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['w2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)

    # Hidden layer with RELU activation
    layer_3 = tf.add(tf.matmul(layer_2, weights['w3']), biases['b3'])
    layer_3 = tf.nn.relu(layer_3)

    # Hidden layer with RELU activation
    layer_4 = tf.add(tf.matmul(layer_3, weights['w4']), biases['b4'])
    layer_4 = tf.nn.relu(layer_4)

    # Output layer with linear activation
    out_layer = tf.matmul(layer_4, weights['output']) + biases['output']
    return out_layer

# Define weights and bias

weights = {
    'w1': tf.Variable(tf.random_normal([n_input, n_hidden_1], 0, 1)),
    'w2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], 0, 1)),
    'w3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3], 0, 1)),
    'w4': tf.Variable(tf.random_normal([n_hidden_3, n_hidden_4], 0, 1)),
    'output': tf.Variable(tf.random_normal([n_hidden_4, n_classes], 0, 1))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1], 0, 1)),
    'b2': tf.Variable(tf.random_normal([n_hidden_2], 0, 1)),
    'b3': tf.Variable(tf.random_normal([n_hidden_3], 0, 1)),
    'b4': tf.Variable(tf.random_normal([n_hidden_4], 0, 1)),
    'output': tf.Variable(tf.random_normal([n_classes], 0, 1))
}

# Construct model

pred = multilayer_perceptron(normalized_X, weights, biases)

# Define cost and optimizer functions

cost = tf.reduce_mean(tf.square(pred-y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Saver op to save and restore all the variables

saver = tf.train.Saver()

# Launch the training session

start_time = time.time()
print("Initializing training session...\n")
with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.0
        total_batch = int(total_len/batch_size)

        # Loop over all batches
        for i in range(total_batch):
            batch_x = X_train[i*batch_size:(i+1)*batch_size]
            batch_y = Y_train[i*batch_size:(i+1)*batch_size]
            feed_dict = {
                normalized_X: batch_x,
                y: batch_y,
            }

            # Run optimization op (backprop) and cost op (to get loss value)
            _, c, p = sess.run([optimizer, cost, pred], feed_dict=feed_dict)

            # Compute average cost
            avg_cost += c / total_batch

        # Sample prediction
        actual_value = batch_y
        predicted_value = p

        # Display outputs per epoch
        if epoch % display_step == 0:
            # print("Epoch:", '%03d' % (epoch+1), "cost:", "{:.3f}".format(avg_cost))
            print("Epoch:", '%03d' % (epoch+1))
            print("----------------------------------------------------------------------------------------------------------------")
            j = 1
            year = 2008
            for i in range(len(actual_value)):
                if i % 12 == 0:
                    print("\n" + str(year))
                    j = 1
                    year += 1
                else:
                    j += 1
                print(calendar.month_name[j] + ":", "Actual value =", '%01f' % actual_value[i] + " mwh, Predicted value =", '%01f' % predicted_value[i] +
                      " mwh, Forecast error:", "{:.6f}".format(np.mean(np.abs((actual_value[i] - predicted_value[i]) / actual_value[i])) * 100))

            print("================================================================================================================")

    print("Training Finished!\n")
    print("Execution Time:", (time.time() - start_time), "seconds")

    # Save model weights
    save_path = saver.save(sess, model_path)

    # Test model
    predictions = sess.run(pred, feed_dict={normalized_X: X_test})

    # Accuracy metrics
    MAPE = 1/len(actual_value) * np.sum(abs(actual_value-predicted_value)/actual_value) * 100
    Accuracy = 100 - MAPE
    print("MAPE:", "{:.6f}".format(MAPE), "\nPrediction Accuracy:", "{:.6f}".format(Accuracy) + "%")

    print("\nPredictions:\n", predictions)

Add pre-commit hooks and formatting on code

Is your feature request related to a problem? Please describe.
We can streamline code review if formatting is done automatically and stylistic checks (e.g., flake8) run on the contributor's side. I don't want to sound too nitpicky during code reviews whenever I point out formatting mistakes, so let's automate that part.

Describe the solution you'd like

  • Add pre-commit hooks, describe how to do this on CONTRIBUTING.md
  • Add flake8 and black formatting styles on top-level directory
  • Add black badge on README

Example on Binary PSO for Feature Selection

Hi everyone, I have already completed the Binary PSO algorithm and I would like it to be tested. This issue may require some domain expertise, but beginners are also welcome to join!

So I need some help providing an example of using Binary PSO for feature selection. As you can see, we would like to expand the use cases in our documentation. Would you like to lend us a hand?

Things you will do:

  1. Create a tutorial in a Jupyter notebook. You can find some example tutorials in this link. Make sure that everything is clear, concise, and easy to follow.
  2. Convert the Jupyter notebook into an .rst (reST) file. After writing the tutorial, simply convert the notebook to an .rst file. You can find this under [File] -> [Download/Save As...] -> [.rst (reST) file] in the Menu Bar.
  3. Make a pull request and we'll iterate from there!

Swarm visualization for different topologies

The goal is to have a Jupyter Notebook where different topologies traversing a fairly difficult objective function are visualized. As a reference for how it should look, see the README or the visualization example. There is also a discussion of this topic in a former issue.

Possible things to include in this visualization are:

  • The behaviour of different topologies
  • Differences in the swarm behaviour for extreme social or cognitive parameters
  • Your own ideas😋

I guess this is a very demanding first contribution, so if you have any questions don't hesitate 😄.
I recommend using the Anaconda distribution for Python. If you use Anaconda, you can follow the instructions here if you don't have the ffmpeg package, as well as the instructions for getting started in the CONTRIBUTING file. (Don't forget to open the cmd as an administrator on Windows so you can install packages without problems.)

Notes: Please work on the development branch. You can find a good StackOverflow question here. For a more advanced guide to the GitHub workflow, there is this cheatsheet available.

See also: #148

Update documentation in Development Branch

Description

A lot has changed since we added the backend module in PR #115. The API is still the same, but we changed the backend. It would be nice if we can add some documentation in ReadTheDocs for this backend.

In addition, it would also be nice to add new notebook examples on how to use this backend to develop your own swarm algorithms without using the pre-made GlobalBestPSO, etc. implementations.

If you're up to the task, just ping me here! We'll be working together.

Things to do:

  • Add backend module (.rst files) in the ReadTheDocs documentation
  • Maybe add a separate RTD docs for master and development branch?
  • Jupyter notebook example on backend module (and clean-up/updates of existing notebooks if necessary).
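
For a taste of what such a notebook might cover, here's a rough sketch of the backend loop (following the structure introduced in PR #115; exact names may differ between versions):

import pyswarms.backend as P
from pyswarms.backend.topology import Star
from pyswarms.utils.functions import single_obj as fx

my_topology = Star()
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
my_swarm = P.create_swarm(n_particles=20, dimensions=2, options=options)

for _ in range(100):
    # evaluate the swarm, then update personal bests and the global best
    my_swarm.current_cost = fx.sphere(my_swarm.position)
    my_swarm.pbest_cost = fx.sphere(my_swarm.pbest_pos)
    my_swarm.pbest_pos, my_swarm.pbest_cost = P.compute_pbest(my_swarm)
    my_swarm.best_pos, my_swarm.best_cost = my_topology.compute_gbest(my_swarm)
    # move the swarm
    my_swarm.velocity = my_topology.compute_velocity(my_swarm)
    my_swarm.position = my_topology.compute_position(my_swarm)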

Implement Discrete PSO Variants

Hi! Thank you for checking out this issue!

Currently, this whole part is pretty much open. As of now, I'm planning to give PySwarms four major optimization capabilities:

  • single-objective continuous optimization
  • single-objective discrete optimization
  • multi-objective optimization, and
  • constrained optimization.

We've established some grounds on single-objective continuous optimization (with the standard implementations of global-best and local-best PSO). But we haven't done anything yet for discrete optimization. Would you like to give us a headstart?

These are the steps that will be undertaken to close this issue:

  1. Creating an abstract class in the pyswarms.base module. This will provide a skeleton on how other implementations of the same optimization nature would be written. Take for example how global-best (pyswarms.single.gbest) and local-best (pyswarms.single.lbest) are inheriting from the class SwarmBase (this is the abstract class for single-objective continuous).

  2. Implementing a standard PSO algorithm inheriting the abstract class written in Step 1. This means that a particular discrete PSO optimization algorithm will be implemented while inheriting the base class.

  3. Writing unit tests and use-case examples This is to show how the proposed skeleton and algorithm will be used by the end-user, and of course some unit tests to check its robustness (please check the tests directory)

As you can see, these steps are asking for a lot of things. Right now, we're setting this as low-priority because I am currently writing the abstract classes for the other PSO variants. If you want to be a super-contributor, then go ahead and do all the steps above. 👍 😄 But I believe it would be much better if I set up a basis first and then we iterate from there.

But perhaps, the best way to contribute on this issue would be the following:
(note that these contributions don't require pull requests)

  • Propose features on how to implement the abstract classes. What do you think are the things to consider when making an abstract class for discrete PSO? You can use your domain-knowledge, and your past experience in handling discrete optimization problems to point out some helpful guides on how to set-up the abstract classes. I can take all of these into consideration when making the first commit in this issue.
  • Suggest discrete PSO implementations that can be implemented in the future. If you're planning to do this, please link the paper where it came from (it's okay if there's paywall). It would be better if the research is highly-cited, and is coming from reputable journals in the field of computational intelligence.

That's it for this issue!! For any questions, just drop a comment here!

Bound checking in rastrigin_func()

I posted this in a previous issue that is now closed.

When trying to have the global-best PSO solve the Rastrigin function, I got the following error:

ValueError: Input for Rastrigin function must be within [-5.12, 5.12].

The initial positions are clearly within bounds. I also checked the source code, and even tried the condition that checks for the bounds. All seems to be in order. What makes it even more puzzling is that it works perfectly fine for the Ackley function! Am I not seeing something very obvious?

My wrapper is as follows:

import itertools
import numpy as np
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx

def run_global_best_pso(n_dims, test_func, n_inds, n_gens,\
                        initial_positions=None,\
                        c1=0.5, c2=0.3, w=0.9,\
                        random_seed=12345
        ):
        options = {'c1':c1, 'c2':c2, 'w':w}

        optimizer = ps.single.GlobalBestPSO(n_particles=n_inds, dimensions=n_dims, options=options)
        if initial_positions is not None:
                optimizer.pos = np.array(initial_positions).copy()

        stats = optimizer.optimize(test_func, iters=n_gens)
        pos_history = optimizer.get_pos_history

        return(pos_history)

if __name__ == "__main__":
        n_inds = 100
        n_gens = 10000

        initial_positions = list(itertools.repeat([2.0, 3.0, -3.0], n_inds))

        run_global_best_pso(n_dims=3, test_func=fx.rastrigin_func,\
                                n_inds=n_inds, n_gens=n_gens,\
                                initial_positions=initial_positions)

Update: It works when using different values for the initial positions to solve Rastrigin, e.g., [2.0, 0, -0.02] works but [2.0, 3.0, -0.02] does not.
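
A hedged guess, not confirmed in the thread: since no bounds are passed to the optimizer, particles can fly outside [-5.12, 5.12] mid-run even when their initial positions are inside it, and the function's bound check then raises. Constraining the swarm itself may help:

import numpy as np
import pyswarms as ps

bounds = (np.full(3, -5.12), np.full(3, 5.12))
optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=3,
                                    options={'c1': 0.5, 'c2': 0.3, 'w': 0.9},
                                    bounds=bounds)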

Implement Multi-Objective PSO Variants

Hi! Thank you for checking out this issue!

Currently, this whole part is pretty much open. As of now, I'm planning to give PySwarms four major optimization capabilities:

  • single-objective continuous optimization
  • single-objective discrete optimization
  • multi-objective optimization, and
  • constrained optimization.

We've established some grounds on single-objective continuous optimization (with the standard implementations of global-best and local-best PSO). But we haven't done anything yet for multi-objective optimization. Would you like to give us a headstart?

These are the steps that will be undertaken to close this issue:

  1. Creating an abstract class in the pyswarms.base module. This will provide a skeleton on how other implementations of the same optimization nature would be written. Take for example how global-best (pyswarms.single.gbest) and local-best (pyswarms.single.lbest) are inheriting from the class SwarmBase (this is the abstract class for single-objective continuous).

  2. Implementing a standard PSO algorithm inheriting the abstract class written in Step 1. This means that a particular multi-objective PSO optimization algorithm will be implemented while inheriting the base class.

  3. Writing unit tests and use-case examples This is to show how the proposed skeleton and algorithm will be used by the end-user, and of course some unit tests to check its robustness (please check the tests directory)

As you can see, these steps are asking for a lot of things. Right now, we're setting this as low-priority because I am currently writing the abstract classes for the other PSO variants. If you want to be a super-contributor, then go ahead and do all the steps above. 👍 😄 But I believe it would be much better if I set up a basis first and then we iterate from there.

But perhaps, the best way to contribute on this issue would be the following:
(note that these contributions don't require pull requests)

  • Propose features on how to implement the abstract classes. What do you think are the things to consider when making an abstract class for multi-objective PSO? You can use your domain-knowledge, and your past experience in handling multi-objective optimization problems to point out some helpful guides on how to set-up the abstract classes. I can take all of these into consideration when making the first commit in this issue.
  • Suggest multi-objective PSO implementations that can be implemented in the future. If you're planning to do this, please link the paper where it came from (it's okay if there's paywall). It would be better if the research is highly-cited, and is coming from reputable journals in the field of computational intelligence.

That's it for this issue!! For any questions, just drop a comment here!

Update (10/5/2017)

  • Setting this to high-priority signifying that this is a major undertaking for the development roadmap

TypeError: 'bool' object is not subscriptable

  • PySwarms version: 0.19
  • Python version: 3.6
  • Operating System: Mac OS

Description

Describe what you were trying to get done.
Run a GlobalBestOptimization

Tell us what happened, what went wrong, and what you expected to happen.
I have a fairly complex objective function, but I can get it to run and return the expected values when running it outside of PySwarms. But when I kick off an optimization, I get this 'bool' object is not subscriptable TypeError after what appears to be the first iteration of the optimization (or maybe at the beginning of the second iteration).

What I Did

# Create bounds
br = 0.2
max_bound = np.array( [ -1.0+br, 0.75,    0.5,    0.20,   0.9475,    0.9,    0.85,    0.8,    -1.5+br, 1.25,    6.0 ] )
min_bound = np.array( [ -1.0,    0.75-br, 0.5-br, 0.0-br, 0.9475-br, 0.9-br, 0.85-br, 0.8-br, -1.5,    1.25-br, 4.0 ] )

print( "Running strategy optimization with, # of parameters (p-swarm dimensions):", len(max_bound) )

bounds = (min_bound, max_bound)

# Initialize swarm
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}

# Call instance of PSO with bounds argument
optimizer = ps.single.GlobalBestPSO(n_particles=2, dimensions=11, options=options, bounds=bounds)

# Perform optimization
cost, pos = optimizer.optimize(strat_obj_func_wrapper, print_step=5, iters=100, verbose=3)

Then running the optimizer results in the following output and traceback:

Running strategy optimization with, # of parameters (p-swarm dimensions): 11

 opt_run_inputs_multiple: [[-0.88239126  0.63317997  0.41620925 -0.06704406  0.93384025  0.77059523
   0.80944611  0.63739677 -1.43850201  1.10322874  5.4486611 ]
 [-0.90738229  0.57249487  0.30730725 -0.07771865  0.85251403  0.85871976
   0.68927081  0.62064886 -1.36711124  1.10884316  4.84986957]]
-0.07815433772752717
-0.00789521337447165

 opt_run_inputs_multiple: [[-0.88239126  0.63317997  0.41620925 -0.06704406  0.93384025  0.77059523
   0.80944611  0.63739677 -1.43850201  1.10322874  5.4486611 ]
 [-0.90738229  0.57249487  0.30730725 -0.07771865  0.85251403  0.85871976
   0.68927081  0.62064886 -1.36711124  1.10884316  4.84986957]]
-0.07815433772752717
-0.00789521337447165
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-547-2949e347e2b5> in <module>()
     15 
     16 # Perform optimization
---> 17 cost, pos = optimizer.optimize(strat_obj_func_wrapper, print_step=5, iters=100, verbose=3)

~/python/anaconda3510/anaconda3/envs/quant_py364/lib/python3.6/site-packages/pyswarms/single/global_best.py in optimize(self, objective_func, iters, print_step, verbose)
    151             pbest_cost = np.where(~m, pbest_cost, current_cost)
    152             # Create 2-D mask to update positions
--> 153             _m = np.repeat(m[:, np.newaxis], self.dimensions, axis=1)
    154             self.personal_best_pos = np.where(~_m, self.personal_best_pos,
    155                                               self.pos)

TypeError: 'bool' object is not subscriptable
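
A hedged reading, consistent with the custom-objective note under "Documentation fixes" above: optimize() expects the objective to return one cost per particle, shape (n_particles,). If the wrapper returns a scalar instead, the improvement mask m becomes a plain Python bool, and m[:, np.newaxis] raises exactly this TypeError. A sketch of a conforming wrapper (single_run_cost is a hypothetical per-particle evaluator):

import numpy as np

def strat_obj_func_wrapper(positions):
    # positions: (n_particles, 11) -> one cost per particle, shape (n_particles,)
    return np.array([single_run_cost(p) for p in positions])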

IndexError: index 5309 is out of bounds for axis 1 with size 1

Sir, I'm having problems with the out of bounds error. '5309' is the first element in my y. Here is the complete traceback:

IndexError                                Traceback (most recent call last)
<ipython-input-66-107cafabd9c0> in <module>()
     76 
     77 # Perform optimization
---> 78 cost, pos = optimizer.optimize(f, print_step=100, iters=1000, verbose=3)
     79 
     80 def predict(X_train, pos):

~\Anaconda3\envs\tensorflow\lib\site-packages\pyswarms\single\global_best.py in optimize(self, objective_func, iters, print_step, verbose)
    131         for i in xrange(iters):
    132             # Compute cost for current position and personal best
--> 133             current_cost = objective_func(self.pos)
    134             pbest_cost = objective_func(self.personal_best_pos)
    135 

<ipython-input-66-107cafabd9c0> in f(X_train)
     65 def f(X_train):
     66     n_particles = 100
---> 67     j = [forward_prop(X_train[i]) for i in range(n_particles)]
     68     return np.array(j)
     69 

<ipython-input-66-107cafabd9c0> in <listcomp>(.0)
     65 def f(X_train):
     66     n_particles = 100
---> 67     j = [forward_prop(X_train[i]) for i in range(n_particles)]
     68     return np.array(j)
     69 

<ipython-input-66-107cafabd9c0> in forward_prop(params)
     59     # Compute for the negative log likelihood
     60     N = 84 # Number of samples
---> 61     correct_logprobs = -np.log(probs[range(N), Y_train])
     62     loss = np.sum(correct_logprobs) / N
     63     return loss

IndexError: index 5309 is out of bounds for axis 1 with size 1

Implement a GeneralOptimizer class for new topologies

As proposed in #147 we could implement a new GeneralOptimizer class in addition to the two existing GlobalBest and LocalBest classes. In this class, it will then be possible to select a custom topology for the optimization. We could also include a foundation for multilayer PSO (as proposed in #75).

The only issue I see is that a GeneralOptimizer might make the GlobalBest or LocalBest classes obsolete as it might optimize faster and/or run with less memory consumption especially if we implement higher-level topologies such as Von Neumann or Pyramid (#129).
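
A sketch of how such a class might be used (this mirrors how PySwarms' GeneralOptimizerPSO eventually looked; names were still open at the time of this issue):

import pyswarms as ps
from pyswarms.backend.topology import Pyramid
from pyswarms.utils.functions import single_obj as fx

options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
optimizer = ps.single.GeneralOptimizerPSO(n_particles=20, dimensions=2,
                                          options=options,
                                          topology=Pyramid(static=False))
best_cost, best_pos = optimizer.optimize(fx.sphere, iters=100)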

Functionality to pass arguments to objective function

Parameterizing Objective Functions
The ability to pass parameters into the objective function would make the project applicable to a much wider range of problems.

Solution
Use *args and/or **kwargs to enable passing arguments into the objective function. A minimal sketch of how this could look is shown below.
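For illustration, here is a sketch of the current workaround (wrapping the objective in a lambda) next to what the proposed interface could look like; weighted_sphere and its alpha parameter are made-up examples, not part of the library:

import numpy as np
import pyswarms as ps

# Hypothetical objective that needs an extra parameter `alpha`
def weighted_sphere(x, alpha=1.0):
    # x has shape (n_particles, dimensions); output has shape (n_particles,)
    return alpha * (x ** 2).sum(axis=1)

options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options)

# Current workaround: close over the extra argument
cost, pos = optimizer.optimize(lambda x: weighted_sphere(x, alpha=2.0), iters=100)

# Proposed: let optimize() forward extra keyword arguments, e.g.
# cost, pos = optimizer.optimize(weighted_sphere, iters=100, alpha=2.0)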

Implement Constrained PSO Variants

Hi! Thank you for checking out this issue!

Currently, this whole part is pretty much open. As of now, I'm planning to give PySwarms four major optimization capabilities:

  • single-objective continuous optimization
  • single-objective discrete optimization
  • multi-objective optimization, and
  • constrained optimization.

We've established some grounds on single-objective continuous optimization (with the standard implementations of global-best and local-best PSO), but we haven't done anything yet for constrained optimization. Would you like to give us a headstart?

These are the steps that will be undertaken to close this issue:

  1. Creating an abstract class in the pyswarms.base module (see the sketch after this list). This will provide a skeleton for how other implementations of the same optimization nature would be written. Take for example how global-best (pyswarms.single.gbest) and local-best (pyswarms.single.lbest) inherit from the class SwarmBase (this is the abstract class for single-objective continuous optimization).

  2. Implementing a standard PSO algorithm inheriting the abstract class written in Step 1. This means that a particular constrained PSO optimization algorithm will be implemented while inheriting the base class.

  3. Writing unit tests and use-case examples. This is to show how the proposed skeleton and algorithm will be used by the end-user, and of course to add some unit tests to check its robustness (please check the tests directory).
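Purely as an illustration of Step 1, a constrained-PSO base class could be sketched as follows. None of these names exist in PySwarms yet, and constraints are assumed to be callables of the form g(x) <= 0:

import abc

# Illustrative skeleton only; mirrors how SwarmBase anchors the
# single-objective continuous optimizers.
class ConstrainedSwarmBase(abc.ABC):
    def __init__(self, n_particles, dimensions, options, constraints):
        self.n_particles = n_particles
        self.dimensions = dimensions
        self.options = options
        self.constraints = constraints  # assumed: list of callables g(x) <= 0
        self.assertions()

    def assertions(self):
        # Basic sanity checks on instantiation
        assert self.n_particles > 0, "n_particles must be positive"
        assert all(callable(g) for g in self.constraints), \
            "each constraint must be a callable g(x) <= 0"

    @abc.abstractmethod
    def optimize(self, objective_func, iters):
        """Run the constrained optimization loop."""
        raise NotImplementedError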

As you can see, these steps are asking for a lot of things. Right now, we're setting this at low priority because I am currently writing the abstract classes for the other PSO variants. If you want to be a super-contributor, then go ahead and do all the steps above. 👍 😄 But I believe it would be much better if I set up a basis first and we iterate from there.

But perhaps the best way to contribute to this issue would be the following:
(note that these contributions don't require pull requests nor git commits)

  • Propose features on how to implement the abstract classes. What do you think are the things to consider when making an abstract class for constrained PSO? You can use your domain knowledge and your past experience in handling constrained optimization problems to point out some helpful guides on how to set up the abstract classes. I can take all of these into consideration when making the first commit on this issue.
  • Suggest constrained PSO implementations that can be implemented in the future. If you're planning to do this, please link the paper where it came from (it's okay if there's a paywall). It would be better if the research is highly cited and comes from reputable journals in the field of computational intelligence.

That's it for this issue!! For any questions, just drop a comment here!

Use Ipython shells to run examples in docs

Are you interested in converting the docs on RTD to use IPython shells rather than hard-coded code snippets? I like the IPython code blocks because they are copy-pastable by other users. For instance: http://www.coolprop.org/coolprop/HighLevelAPI.html#propssi-function . The IPython code is run at documentation build time. I know how to get it working on RTD as well. It's pretty slick overall.

I've not yet used your library, but the docs look quite nice, and I wonder if you would be interested in this addition. If so, I'll get started.

rosenbrock_func does not behave in the same manner as the other utility functions

Why does rosenbrock_func not behave in the same way as the others, especially in its return output? For example, calling the utility functions on this array:

arr = np.random.uniform(size=(7,3))

rosenbrock_func outputs an array of shape (14,), whilst the others output an array of shape (7,), which should be the normal behavior. This raises an exception in the optimize method when applying the mask to update the positions.
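For reference, a vectorized Rosenbrock that respects the (n_particles,) shape contract could look like the sketch below (not the library's actual implementation):

import numpy as np

def rosenbrock_vectorized(x):
    # x has shape (n_particles, dims); the output must have shape (n_particles,)
    return (100 * (x[:, 1:] - x[:, :-1] ** 2) ** 2
            + (1 - x[:, :-1]) ** 2).sum(axis=1)

arr = np.random.uniform(size=(7, 3))
print(rosenbrock_vectorized(arr).shape)  # (7,)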

Can you tell me if this is normal or if it was done on purpose?

Thank you

Drop Python 2.7 Support for 0.2.0

For v.0.2.0, I am planning to introduce some backend API changes:

  • Port all tests (from unittest) into pytest
  • Add a backend module to handle lower level implementations (this module will be imported by GlobalBestPSO and other PSOs).

Because of this, I might drop Python 2.X support to accommodate some of these backend changes (such as using pytest). And because Python itself is already in its 2.7 countdown, we might as well follow suit.

What do you think @SioKCronin ?

Development Roadmap

Here is the feature list needed for the major release. We will be implementing different PSO variants, more test functions, and a hyperparameter search utility.

Optimizers

  • Binary PSO (pyswarms.single.BPSO) [1]
  • Vanilla Multi-Objective PSO (pyswarms.multi.MOPSO) [2]
  • Constrained PSO (pyswarms.constrained.CPSO) [3]

Test Functions

Most of the good test functions can be found here. If you want to implement a function, just implement it and make a pull request. In fact, even Wikipedia (I know, I know) has a good resource for test functions.

Single-objective pyswarms.utils.single_obj

  • Beale's Function
  • Goldstein–Price function
  • Booth's function
  • Bukin function N.6
  • Matyas function
  • Lévi function N.13
  • Schaffer function N. 2

Multi-Objective pyswarms.utils.multi_obj

  • Binh and Korn function
  • Chankong and Haimes function
  • Fonseca and Fleming function
  • Test function 4
  • Schaffer function N. 1
  • Schaffer function N. 2

Constrained Problems pyswarms.utils.constrained_obj

  • Rosenbrock function constrained with a cubic and a line
  • Rosenbrock function constrained to a disk
  • Mishra's Bird function - constrained
  • Townsend function
  • Simionescu function

Utilities

  • Hyperparameter search methods (grid search and random search)

Examples

We need various examples or use-cases to help a user use PySwarms. For now, these things are much better written in a Jupyter Notebook.

References
[1] J. Kennedy and R.C. Eberhart, "A discrete binary version of the particle swarm algorithm," in IEEE International Conference on Systems, Man, and Cybernetics, 1997.
[2] M.R. Sierra and C.A. Coello, "Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art," International Journal of Computational Intelligence Research, 2006.
[3] K.E. Parsopoulos and M.N. Vrahatis, "Particle swarm optimization method for constrained optimization problems," Intelligent Technologies–Theory and Application: New Trends in Intelligent Technologies, pp. 214-220, 2006.

Improve API documentation

Currently, the documentation is untouched, and many of the docstrings in the modules are not yet rendered. Can you lend us a hand?

More explicit documentation

  • Clarify the statement of need, explain what a "high-level interface" is in README.rst, ReadTheDocs, and Paper.md
  • I think it would be better to explicitly acknowledge the v.0.1.7 committers outside of the README.rst as well, along with their respective Issues and PRs, in the Changelog and, if possible, in Paper.md.

Implement assertions() in search methods

Thanks to @SioKCronin, we're now able to implement hyperparameter search tools such as GridSearch and RandomSearch. These are awesome implementations, but we can maximize their robustness by adding an assertions() method in our classes. Would you like to try?

What you'll do

Simply put, you will just add an assertions() method to the SearchBase class. This should contain various checks to run when instantiating any class that inherits from it. You can check the docstrings for each class and create a proper method call for it.

If you want to check previous implementations, see the SwarmBase.assertions method, and check how it is being implemented in GlobalBestPSO (it is not called explicitly because it just inherits from the base) or in LocalBestPSO (additional lines of code were added after calling the super() because of extra attributes).
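To make the idea concrete, here is a minimal sketch of the kind of checks assertions() could run; the attribute names mirror the optimizer classes and may differ in the actual search module:

# Illustrative stand-in for SearchBase; only the assertions() pattern matters
class SearchBaseSketch:
    def __init__(self, n_particles, dimensions, options, iters):
        self.n_particles = n_particles
        self.dimensions = dimensions
        self.options = options
        self.iters = iters
        self.assertions()

    def assertions(self):
        # Checks run on instantiation of any inheriting class
        assert isinstance(self.options, dict), "options must be a dictionary"
        assert all(key in self.options for key in ('c1', 'c2', 'w')), \
            "missing either c1, c2, or w in options"
        assert self.n_particles > 0, "n_particles must be a positive integer"
        assert self.iters > 0, "iters must be a positive integer"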

Go the extra mile?

If you want to go the extra mile, you can add tests to check for edge cases and see if your assertions are working when passed with an invalid input (maybe an invalid type, out-of-bounds, etc.). For this, you just need to add an Instantiation class in the test.utils.search.test_gridsearch and test.utils.search.test_randomsearch modules.

Again, if you want a template, you can check the Instantiation classes in test.optimizers.test_global_best and others. Notice how we try to feed the class with invalid arguments/edge cases and have the unit test capture them. It's a very fun activity!

If you wish to take this on, then feel free to drop a comment here! I'd be glad to help you!

Update: 9/17/2017

Follow the instructions in this quick-start guide to get you started!

Commit Guidelines

I'd appreciate it if we keep the number of commits per issue to 1-2. Of course you can commit often, but before merging, I would ask you to rebase them depending on the changes you've done. The commit format that we follow is shown below:

Short and sweet imperative title  (#27)

Description of the commit. The commit title should be short
and concise, and must be in imperative form. It could be as
simple as `Add tests for optimizer` or `Implement foo`. It describes
the "what" of the commit. You must also reference the issue, if any,
in the title. Here, in the description, we explain the
"why" of the commit. This is more or less free-for-all, so you can
describe it in as much detail as you like, in whatever tense/form you like.

Author: <your-github-username>
Email: <your-email>

Returning an empty subset as the best subset for feature selection

  • PySwarms version: 0.1.9
  • Python version: 3.5.4
  • Operating System: Windows 10

Description

The code is returning an empty subset of attributes as the best subset selection. It even returns the cost of an all-zeros subset. Even with a minor change to the example code, it is possible to replicate this output.

There is a checking condition in the code to avoid empty subsets, but it still outputs an empty set as the best subset:

if np.count_nonzero(m) == 0:
    # If the particle subset is only zeros, get the original set of attributes
    X_subset = X

Describe what you were trying to get done.
Return a non-empty subset of attributes (a binary list containing at least one element 1)
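One possible workaround (a sketch, not a confirmed fix) is to penalize all-zero particles inside the objective itself, so an empty mask can never become the global best, rather than only substituting X after the fact:

import numpy as np

def penalized_cost(m, base_cost):
    # m is a particle's binary feature mask; base_cost is whatever the
    # classifier-based objective computed for it
    if np.count_nonzero(m) == 0:
        return 1e6  # large penalty so an empty subset can never win
    return base_cost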

What I Did

I ran the "feature_subset_selection" notebook (https://github.com/ljvmiranda921/pyswarms/blob/master/examples/feature_subset_selection.ipynb) up to the # Perform optimization point. It all runs OK, but when I make a minor change, just reducing the number of iterations from 1000 to 10, it returns an empty subset as the best subset most of the time.

The only line changed was the following:

cost, pos = optimizer.optimize(f, print_step=1, iters=10, verbose=2)
print(cost,pos)

and the output was:

INFO:pyswarms.discrete.binary:Iteration 1/10, cost: 0.2696
INFO:pyswarms.discrete.binary:Iteration 2/10, cost: 0.2696
INFO:pyswarms.discrete.binary:Iteration 3/10, cost: 0.2696
INFO:pyswarms.discrete.binary:Iteration 4/10, cost: 0.2696
INFO:pyswarms.discrete.binary:Iteration 5/10, cost: 0.2696
INFO:pyswarms.discrete.binary:Iteration 6/10, cost: 0.2696
INFO:pyswarms.discrete.binary:Iteration 7/10, cost: 0.2376
INFO:pyswarms.discrete.binary:Iteration 8/10, cost: 0.2376
INFO:pyswarms.discrete.binary:Iteration 9/10, cost: 0.2376
INFO:pyswarms.discrete.binary:Iteration 10/10, cost: 0.2376
INFO:pyswarms.discrete.binary:================================
Optimization finished!
Final cost: 0.2376
Best value: [ 0.000000 0.000000 0.000000 ...]

0.2376 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]

Implement new Topologies

There are some topologies that we can implement in pyswarms.backend:

  • Ring (static): currently, our Ring topology is dynamic. It looks for the nearest neighbors in the current iteration, and there is some neighbor overlap. In a static-ring topology, the neighbors are locked down for each particle; there is still overlap, but it doesn't change every iteration. (low-hanging fruit)
  • Von Neumann: a two-dimensional grid with neighbors in all four directions [North, East, West, South]
    References: HeuristicLab.
  • Random: the position and velocity updates are affected randomly by any particle. Thus, each particle can move in any direction depending on the random particle that was chosen.
  • If you have any other ideas, just ping me!

Implementing a Topology

To implement a Topology, simply:

  • Inherit from the pyswarms.backend.topology.Topology class
  • Implement the following functions: compute_gbest(), compute_position(), and compute_velocity().

You can check the implementations in the Star or Ring topologies. A rough sketch is shown below.
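As a rough illustration, the three methods could be fleshed out as follows. In practice you would inherit from pyswarms.backend.topology.Topology and match the exact signatures used by Star and Ring; the swarm object here is assumed to carry position, velocity, options, pbest_pos, pbest_cost, and best_pos:

import numpy as np

class MyTopology:
    def compute_gbest(self, swarm):
        # Pick the best personal-best across the whole swarm (Star-like)
        idx = np.argmin(swarm.pbest_cost)
        return swarm.pbest_pos[idx], swarm.pbest_cost[idx]

    def compute_velocity(self, swarm):
        # Standard PSO velocity update driven by this topology's best position
        r1 = np.random.uniform(size=swarm.position.shape)
        r2 = np.random.uniform(size=swarm.position.shape)
        cognitive = swarm.options['c1'] * r1 * (swarm.pbest_pos - swarm.position)
        social = swarm.options['c2'] * r2 * (swarm.best_pos - swarm.position)
        return swarm.options['w'] * swarm.velocity + cognitive + social

    def compute_position(self, swarm):
        return swarm.position + swarm.velocity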


Improve the tolerance on the evaluation of the objective function

The current version of the tolerance feature is a basic one, where we only account for the tolerance on the objective function value at the best position found.
It would be better to take the relative tolerance into account, i.e., how much better the new solution performs compared to the previous one.
The idea is to implement something along the lines of MATLAB's function tolerance, found here. It is more elaborate than the current approach.
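As a sketch of the idea (illustrative names, not the library's code), a relative-tolerance check could look like:

def has_converged(prev_best_cost, best_cost, ftol=1e-6):
    # Relative improvement between consecutive best costs, guarded
    # against division by zero
    denom = max(abs(prev_best_cost), 1e-12)
    return abs(best_cost - prev_best_cost) / denom < ftol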

GPU Implementation?

How can I use this package with Keras as the optimizer class?
Any thoughts on GPU implementation?

Write a backend module

I think it's better to decompose some methods in the swarm loop into their own backend module.
This makes it easier for us to test what's happening inside.

Deps: #101
Port all tests to pytest first before doing this.

  • Accomplish Issue #101
  • Write a backend module
  • Add tests to backend module
  • Update documentation about the module (make new Issue on this)

Implement single-objective test functions

Thank you for checking out this issue, and for helping us build our way to our First Major Release! (#5)

What you'll learn

  • Learn to write basic methods in Python, especially with NumPy

What you'll do

TL;DR,

  • Write a single method for a single-objective function.

We want you to write some single-objective test functions for our optimization algorithms. From an optimization perspective, single-objective simply means that the algorithm aims to minimize only one function/goal. These test functions will serve as utilities so that users can test their optimization models on various benchmark optimization tasks.

Most of these are very easy to write in Python using the NumPy library: they just require defining a single function with a single input and a single output. Very suitable for most first-timers!

Requirements

  1. It must output the expected minimum when fed with the argmin. For example, in sphere_func, $f(x) = x^2$, we expect an output of 0 (its global minimum) when we input [0,0] (the argmin, i.e., the argument that minimizes the function).
  2. It must throw an assertion when fed with an input that is out of its bounds. This may be applicable to some functions. You can check the bounds in this Wikipedia page (I know, I know 😞) and cross-reference with this link.
  3. It must throw an assertion when fed with an input that has more dimensions than allowed. This may be applicable to some functions (Beale, Goldstein, Booth, etc.) where only two dimensions are acceptable.
  4. Its output must be of shape (n_particles,). This is really important because the optimization algorithm's computations rely on this shape.

I have provided a skeleton for some functions (and have even written the assertions already!), and I also wrote some unit tests to check them (pyswarms/tests/utils/functions/test_singleobj.py). Check them out as well!

Steps

Here are some steps that you need to do:

Writing the method

  1. Go to the list of Test Functions needed in Issue #5
  2. Choose a function that doesn't have any checks yet. I believe that most of the single-objective functions are the easiest to implement. If you have chosen already, just comment on #5 that you're on it (just leave a message saying "I'll do <function>!").
  3. Go to ./pyswarms/utils/functions/single_obj.py and create a new method def <function_name>_func()

Afterwards

  1. After writing your method, open a Pull Request.
  2. We have some unit tests set-up in tests/utils/test_functions.py, if the method you wrote passes this test, then you're good! (Hint: You can check these tests to gauge what we are expecting from your method)
  3. And once the changes are approved, you have now landed your First Commit on an optimization library! Awesome!

Notes on implementing the single-objective test function

  • Each method must take a numpy.ndarray of shape (n_particles, dims) and must return another numpy.ndarray of shape (n_particles,). This means that each row represents a particle, and each column represents its characteristic for each dimension.

  • (continued) That means we're computing the function value for each particle, given its set of characteristics. Thus, in a two-dimensional (x-y) space, if we have 5 particles at points (0,0), (5,2), (6,2), (-1,-1), and (3,2), then our input looks like this:

import numpy as np
from pyswarms.utils.functions.single_obj import sphere_func

x = np.array([[0, 0], [5, 2], [6, 2], [-1, -1], [3, 2]])

If we feed it to sphere_func(), the output should look like this:

>>> sphere_func(x)
array([ 0., 29., 40.,  2., 13.])
  • Moreover, make sure you are also clear about how many dimensions and which bounds your function works in. Again, you can check this in this Wikipedia page (I know, I know, again 😞) and cross-reference with this link if possible to know the expected bounds. Thus, if you want to make sure that the input has a certain number of dimensions, write this:

assert x.shape[1] == 2, "Dimensions should be of size 2"

Or, if you want to assert a certain bound, just write:

# Say min bound is -5.12 and max bound is 5.12 for a Rastrigin function
assert np.logical_and(x >= -5.12, x <= 5.12).all(), "Input for \
            Rastrigin function must be within [-5.12, 5.12]."
  • Lastly, we will appreciate it even more if you perform the computations in a vectorized manner, that is, without using for-loops. That's a minor challenge to give you something to think about. A complete example following all of these requirements is sketched after this list.
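Putting all of the above together, here is a sketch of how Booth's function could be written (vectorized, with both assertions); double-check the bounds against the references above before submitting:

import numpy as np

def booth_func(x):
    """Booth's function, vectorized over particles."""
    assert x.shape[1] == 2, "Dimensions should be of size 2"
    assert np.logical_and(x >= -10, x <= 10).all(), "Input for \
            Booth function must be within [-10, 10]."
    x_, y_ = x[:, 0], x[:, 1]
    return (x_ + 2 * y_ - 7) ** 2 + (2 * x_ + y_ - 5) ** 2

# Sanity check: the global minimum is 0 at the argmin (1, 3)
print(booth_func(np.array([[1.0, 3.0]])))  # [0.]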

Good luck and happy coding! If you have any questions, simply drop a comment here. I will be glad to help you out! Let's work together! I'd love to help people who are starting out!

Add tests for plot_environment

Hi Contributor! This might get messy and unreadable because this is just a self-assigned issue that I will work on. In case you pieced out something from my garbled notes and want to help out, feel free to leave a comment below!

TODO:

  • Make sure that conda installs ffmpeg during setup.py and during the Travis check
  • Should it automatically install ffmpeg? Or have it as an option?

User-provided initial positions

Is your feature request related to a problem? Please describe.
I would like to specify the starting positions of every particle in a swarm.

Describe the solution you'd like
I see that init_pos can be specified, but it only allows the user to specify a starting position for the swarm's center. Looking at the code of pyswarms.base.base_single, it seems that reset() needs to be extended or modified. That is, I need to update how self.pos is initialized so that I can provide a list of starting positions for self.pos. Is there an existing alternative to this?
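For illustration, the desired usage could look like the sketch below; whether init_pos accepts a full (n_particles, dimensions) array is exactly the feature being requested here, not a guaranteed part of the current API:

import numpy as np
import pyswarms as ps

n_particles, dimensions = 10, 2
# One starting position per particle (the requested behaviour)
start = np.random.uniform(low=-1, high=1, size=(n_particles, dimensions))

options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
optimizer = ps.single.GlobalBestPSO(n_particles=n_particles,
                                    dimensions=dimensions,
                                    options=options,
                                    init_pos=start)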
