
A derivative-free solver for general nonlinear optimization.

Home Page: https://www.cobyqa.com

License: BSD 3-Clause "New" or "Revised" License

Language: Python (100%)
Topics: black-box-optimisation, black-box-optimization, blackbox-optimizer, derivative-free-optimisation, numerical-optimization-library, numerical-optimization-methods, optimisation, optimisation-framework, optimization-framework, optimization-library

cobyqa's Introduction

COBYQA: Constrained Optimization BY Quadratic Approximations


COBYQA, an acronym for Constrained Optimization BY Quadratic Approximations, is designed to supersede COBYLA as a general derivative-free optimization solver. It can handle unconstrained, bound-constrained, linearly constrained, and nonlinearly constrained problems. It uses only function values of the objective and constraint functions, if any. No derivative information is needed.

Documentation: https://www.cobyqa.com.

Installation

COBYQA can be installed for Python 3.8 or above.

Dependencies

The following Python packages are required by COBYQA:

  • NumPy 1.17.0 or higher, and
  • SciPy 1.10.0 or higher.

If you install COBYQA using pip or conda (see below), these dependencies will be installed automatically.

User installation

The easiest way to install COBYQA is using pip or conda. To install it using pip, run in a terminal or command window

pip install cobyqa

If you are using conda, you can install COBYQA from the conda-forge channel by running

conda install conda-forge::cobyqa

To check your installation, you can execute

python -c "import cobyqa; cobyqa.show_versions()"

If your python launcher is not python, you can replace it with the appropriate command (similarly for pip and conda). For example, you may need to use python3 instead of python and pip3 instead of pip.

Testing

To execute the test suite of COBYQA, you first need to install pytest. You can then run the test suite by executing

pytest --pyargs cobyqa

The test suite takes several minutes to run. It is unnecessary to run the test suite if you installed COBYQA using the recommended method described above.

Examples

The examples folder contains a few examples of how to use COBYQA. Each file has a header explaining the problem it solves.

Support

To report a bug or request a new feature, please open a new issue using the issue tracker.

cobyqa's People

Contributors: dependabot[bot], ragonneau, zaikunzhang

Check the definition of knew after a trust-region step

return np.argmax(np.abs(sigma) * np.square(dsq))

  1. Here, np.square(dsq) is used as a weight, which is the one used in LINCOA. Check whether dsq^3 or some other weights work better. A carefully chosen weight can boost performance. See, e.g.,

https://github.com/libprima/prima/blob/367a8e7bb3eb4a388f10f72d776c7ad0898bb75e/fortran/bobyqa/geometry.f90#L22
https://github.com/libprima/prima/blob/367a8e7bb3eb4a388f10f72d776c7ad0898bb75e/fortran/uobyqa/geometry.f90#L22
https://github.com/libprima/prima/blob/367a8e7bb3eb4a388f10f72d776c7ad0898bb75e/fortran/cobyla/geometry.f90#L95

  2. Should xopt be updated or not before calculating dsq? Test it. I agree that it should be, but the numerical result may surprise you.

  3. Be cautious that sigma very often contains NaN, or even contains only NaN. What will happen to your code in such a case?
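As an illustration of the NaN issue in item 3, a guarded version of the selection above could look like this (a hypothetical sketch with an invented helper name, not COBYQA's actual code):

```python
import numpy as np

def select_knew(sigma, dsq):
    # weight each interpolation point by |sigma| * dsq^2, as in the line above;
    # any NaN/Inf weight is demoted to -inf so it can never be selected
    weights = np.abs(sigma) * np.square(dsq)
    weights = np.where(np.isfinite(weights), weights, -np.inf)
    if np.all(weights == -np.inf):
        return None  # every weight is NaN/Inf: no sensible choice of knew
    return int(np.argmax(weights))
```

Without the guard, np.argmax would return the index of the first NaN, silently picking an arbitrary point.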

Misc

Feature description

  1. For the "About" of the GitHub repo:

1.1 The website should be https://www.cobyqa.com

1.2 Add nonlinear-optimization, simulation-based-optimization, hyperparameter-optimization, hyperparameter-tuning, constrained-optimization, unconstrained-optimization

  2. Add a benchmarking directory under the root, containing the benchmarking code and the PDFs of the performance/data profiles; the profiles should be regenerated automatically whenever the code is updated.

  3. Add the thesis to doc. Make links to the thesis in the repo and at www.cobyqa.com.

Proposed solution

No response

Considered alternatives

No response

More careful and systematic (unit) tests

Feature description

The unit tests should do both basic tests and advanced tests:

  • Basic: no NaN/Inf appears in the result, the result is in the expected range, etc.
  • Advanced: the result is mathematically correct, e.g., the KKT conditions hold (for optimization problems) or the residual is small (for equations).

Doing only the basic tests is NOT sufficient.
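As a sketch of the distinction, an "advanced" unit test checks a mathematical optimality condition rather than only the sanity of the output (hypothetical helper, not taken from the COBYQA test suite):

```python
import numpy as np

def check_solution(x, grad, tol=1e-6):
    # basic test: the result contains no NaN/Inf
    assert np.all(np.isfinite(x)), "result contains NaN/Inf"
    # advanced test: first-order optimality; for an unconstrained
    # problem, the KKT condition reduces to a (near-)zero gradient
    assert np.linalg.norm(grad(x)) <= tol, "gradient is not near zero"
    return True

# f(x) = ||x - 1||^2 has minimizer x = 1 and gradient 2 (x - 1)
x_star = np.ones(3)
grad = lambda x: 2.0 * (x - np.ones(3))
```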

Note the following.

  1. Each function/procedure should have its own unit tests, documented clearly.

  2. The unit tests should be randomized, accepting a random seed, and should be run automatically by GitHub Actions daily, even if there is no update to the code. Of course, they should also be triggered by every push to GitHub.

  3. With the same random seed, the result should be the same. The random seed should be changed periodically (e.g., weekly) so that the tests indeed change. In this way, the longer time passes, the more confident we become in our code.

Why should we run the tests even if the code is not updated? For two reasons.

  1. The random seed may change.
  2. Even if the random seed is the same, the behavior of the code may be random:
    2.1. Due to uninitialized variables. This would be a bug. Without a large amount of randomized tests, the bug may not be triggered and spotted.
    2.2. Due to parallel computation. Things like matrix multiplications and inner products may not return the same result from run to run. Nevertheless, we should not use this as an excuse for non-reproducibility.
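The seeding scheme in items 2 and 3 can be sketched as follows: the same seed reproduces the test case exactly, while a new seed (e.g., rotated weekly) yields a genuinely different test.

```python
import numpy as np

def randomized_case(seed, n=8):
    # all randomness flows through one seeded generator,
    # so the whole test case is reproducible from the seed
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    # the 'test body': solve a least-squares problem on the random data
    x, *_ = np.linalg.lstsq(a, b, rcond=None)
    return x
```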

As an example, I have been conducting daily random tests to verify the faithfulness of the modernized implementation of Powell's solvers (not really a "unit test", but the idea is similar):

https://github.com/libprima/prima/blob/main/.github/workflows/verify_small.yml
https://github.com/libprima/prima/blob/main/.github/workflows/verify_big.yml
https://github.com/libprima/prima/blob/main/.github/workflows/verify_large.yml
https://github.com/libprima/prima/blob/main/.github/workflows/verify_base.yml

In this way, time proves the correctness of my code, automatically. The longer time passes, the safer we are.

Proposed solution

No response

Considered alternatives

No response

The definitions of `reduct` and `violation` are problematic

https://github.com/ragonneau/cobyqa/blob/41d334bc365afc634cbb7ae6d05861e188491d24/cobyqa/optimize.py#L2312

First of all, in trust-region methods, given the trust region center $x_k$ and a trial step $s_k$, the "reduction" of a function F (no matter what F means) is defined as
$$F(x_k) - F(x_k + s_k).$$
The current definition of reduct is the opposite. Mathematically, nothing is wrong, but it violates the convention and people will be confused.

Secondly, what is currently defined as violation is not a (constraint) violation in any sense. It is, and should be, a reduction in the constraint violation. Otherwise, it does not make sense to take the ratio between this quantity and reduct. (Again, mathematically, there is nothing wrong with taking the ratio of any two quantities, but "physically", it rarely makes sense to divide a reduction in the objective function by a constraint violation.)
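With the conventional sign, the trust-region acceptance ratio would be computed along these lines (a generic sketch, not COBYQA's code):

```python
import numpy as np

def reduction_ratio(fun, model, xk, sk):
    # conventional definition: reduction(F) = F(x_k) - F(x_k + s_k),
    # so a positive value means the step decreased F
    actual = fun(xk) - fun(xk + sk)
    predicted = model(xk) - model(xk + sk)
    return actual / predicted

fun = lambda x: float(x @ x)   # toy objective
xk = np.array([2.0])
sk = np.array([-1.0])
# with a perfect model, the ratio is exactly 1
```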

Produce performance/data profiles automatically whenever there is an update to the code.

Feature description

Produce performance/data profiles automatically (against the last version, and against PDFO) by GitHub Actions whenever there is an update to the code. In this way, we keep a clear track of the impact of the updates on the performance, so that we can revoke the "bad" updates before it is too late.

For this automation, I suggest you use pyCUTEst on Linux, which should be easier to deploy than on macOS. Note that the list of test problems should be stable and not affected by "loading errors". Otherwise, we cannot draw definite conclusions about the performance.
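For reference, the core computation behind a performance profile is small; the sketch below works on a plain table of solver costs (the real automation would fill this table from pyCUTEst runs):

```python
import numpy as np

def performance_profile(costs, tau):
    # costs[i, j]: evaluations solver j needs on problem i (np.inf on failure);
    # returns, per solver, the fraction of problems solved within a factor
    # tau of the best solver on that problem
    costs = np.asarray(costs, dtype=float)
    best = np.min(costs, axis=1, keepdims=True)
    ratios = costs / best
    return np.mean(ratios <= tau, axis=0)
```

Plotting this quantity for tau ranging over [1, tau_max] gives the profile curves.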

I have been doing similar things for the modernization of Powell's solver. See, for example,

https://github.com/libprima/prima/actions

If I can do this, you should be able to do something much better. :)

Proposed solution

No response

Considered alternatives

No response

Improve the website of COBYQA

Feature description

The current look of https://www.cobyqa.com is good, but can be much better.

I suggest we have something similar to https://www.pdfo.net/

First and foremost, remove the advertisement.

The number of downloads should also be tracked and recorded.

Proposed solution

No response

Considered alternatives

No response

cobyqa.minimize() not working when most solutions are infeasible for a nonlinear equality constraint and the infeasibility cannot be quantified

Hey,

I'm trying to solve a problem with cobyqa using a feasibility check that does not quantify the amount by which a constraint is violated. The constraint_check() function I'm considering returns 1 if the point satisfies the constraint and 0 if it does not (this is by design, as I am working on a drone design problem with this type of constraint). Note that in this problem, most solutions are infeasible, but the amount of constraint violation cannot be quantified.

def constraint_check(x: numpy.typing.ArrayLike):
    if x[0] > x[1] > x[2] > x[3] > x[4]:
        return 1
    return 0

The objective function is just the L2 norm of the input array, and returns np.nan when the solution is unfeasible:

def f(x: numpy.typing.ArrayLike):
    # If solution not feasible, cannot be evaluated. return np.nan.
    if not np.all(constraint_check(x)):
        return np.nan
    return np.linalg.norm(x)

And this is the whole code I am using to reproduce the behaviour:

import numpy as np
import numpy.typing
import os
from cobyqa import minimize
from scipy.optimize import NonlinearConstraint, Bounds


def f(x: numpy.typing.ArrayLike):
    # If solution not feasible, return np.nan
    if not np.all(constraint_check(x)):
        return np.nan
    return np.linalg.norm(x)

def constraint_check(x: numpy.typing.ArrayLike):
    if x[0] > x[1] > x[2] > x[3] > x[4]:
        return 1.0
    return 0.0

def random_feasible_sol(dim):
    feasible = False
    seed = 2
    rs = np.random.RandomState(seed)
    while not feasible:
        x0 = rs.random(dim)
        feasible = constraint_check(x0)
    return x0

problem_dim = 50
x0 = random_feasible_sol(problem_dim) # get a random feasible solution to initialize algorithm.
bounds = Bounds([0.0 for _ in range(problem_dim)], [1.0 for _ in range(problem_dim)])
constraints = NonlinearConstraint(lambda x: [constraint_check(x)], 1.0, 1.0)

options = {
"disp": True,
"feasibility_tol": np.finfo(float).eps,
}

minimize(f, x0, bounds=bounds, constraints=constraints, options=options)

With this code, cobyqa is unable to find any feasible solutions (even though I initialize the algorithm at a feasible solution). It also terminates with ExitStatus.RADIUS_SUCCESS, which seems to me like it's not working. This is the output I get:

Output:

Starting the optimization procedure.
Initial trust-region radius: 1.0.
Final trust-region radius: 1e-06.
Maximum number of function evaluations: 25000.
Maximum number of iterations: 50000.

f([ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00]) = nan
<lambda>([ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00]) = [ 0.000e+00]
f([ 1.000e+00  5.000e-01 ...  5.000e-01  1.000e+00]) = nan
<lambda>([ 1.000e+00  5.000e-01 ...  5.000e-01  1.000e+00]) = [ 0.000e+00]
f([ 5.000e-01  1.000e+00 ...  5.000e-01  1.000e+00]) = nan
<lambda>([ 5.000e-01  1.000e+00 ...  5.000e-01  1.000e+00]) = [ 0.000e+00]
[the pair of lines above repeats verbatim for the remaining evaluations at this trust-region radius]
f([ 5.000e-01  5.000e-01 ...  1.000e+00  1.000e+00]) = nan
<lambda>([ 5.000e-01  5.000e-01 ...  1.000e+00  1.000e+00]) = [ 0.000e+00]
f([ 5.000e-01  5.000e-01 ...  5.000e-01  5.000e-01]) = nan
<lambda>([ 5.000e-01  5.000e-01 ...  5.000e-01  5.000e-01]) = [ 0.000e+00]
f([ 0.000e+00  5.000e-01 ...  5.000e-01  1.000e+00]) = nan
<lambda>([ 0.000e+00  5.000e-01 ...  5.000e-01  1.000e+00]) = [ 0.000e+00]
f([ 5.000e-01  0.000e+00 ...  5.000e-01  1.000e+00]) = nan
<lambda>([ 5.000e-01  0.000e+00 ...  5.000e-01  1.000e+00]) = [ 0.000e+00]
[the pair of lines above repeats verbatim for the remaining evaluations at this trust-region radius]
f([ 5.000e-01  5.000e-01 ...  0.000e+00  1.000e+00]) = nan
<lambda>([ 5.000e-01  5.000e-01 ...  0.000e+00  1.000e+00]) = [ 0.000e+00]
f([ 5.000e-01  5.000e-01 ...  5.000e-01  0.000e+00]) = nan
<lambda>([ 5.000e-01  5.000e-01 ...  5.000e-01  0.000e+00]) = [ 0.000e+00]

New trust-region radius: 0.05.
Number of function evaluations: 101.
Number of iterations: 3.
Least value of f: 1.2676506002282294e+30.
Maximum constraint violation: 1.0.
Corresponding point: [ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00].

f([ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00]) = nan
<lambda>([ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00]) = [ 0.000e+00]
[the pair of lines above repeats verbatim three times before each of the radius reductions below]

New trust-region radius: 0.005000000000000001.
Number of function evaluations: 104.
Number of iterations: 7.
Least value of f: 1.2676506002282294e+30.
Maximum constraint violation: 1.0.
Corresponding point: [ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00].

New trust-region radius: 0.0005000000000000001.
Number of function evaluations: 107.
Number of iterations: 11.
Least value of f: 1.2676506002282294e+30.
Maximum constraint violation: 1.0.
Corresponding point: [ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00].

New trust-region radius: 5.0000000000000016e-05.
Number of function evaluations: 110.
Number of iterations: 15.
Least value of f: 1.2676506002282294e+30.
Maximum constraint violation: 1.0.
Corresponding point: [ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00].

New trust-region radius: 7.071067811865476e-06.
Number of function evaluations: 113.
Number of iterations: 19.
Least value of f: 1.2676506002282294e+30.
Maximum constraint violation: 1.0.
Corresponding point: [ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00].

New trust-region radius: 1e-06.
Number of function evaluations: 116.
Number of iterations: 23.
Least value of f: 1.2676506002282294e+30.
Maximum constraint violation: 1.0.
Corresponding point: [ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00].
cobyqa terminated: ExitStatus.RADIUS_SUCCESS

The lower bound for the trust-region radius has been reached.
Number of function evaluations: 119.
Number of iterations: 27.
Least value of f: 1.2676506002282294e+30.
Maximum constraint violation: 1.0.
Corresponding point: [ 5.000e-01  5.000e-01 ...  5.000e-01  1.000e+00].

Let me know if this is the expected result, or if cobyqa is capable of solving this type of problem. Thank you in advance.
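One way to make the violation quantifiable is to express the ordering as residuals instead of a 0/1 flag. A sketch of the same constraint (an assumption about the problem, not part of the original report; it does not call cobyqa itself):

```python
import numpy as np
from scipy.optimize import NonlinearConstraint

def ordering_residuals(x):
    # consecutive differences x[i] - x[i + 1]; the chain
    # x[0] > x[1] > ... > x[n-1] holds iff every entry is positive
    x = np.asarray(x, dtype=float)
    return x[:-1] - x[1:]

# requiring every difference to be at least a small margin gives the
# solver a continuous measure of how far a point is from feasibility
constraint = NonlinearConstraint(ordering_residuals, 1e-8, np.inf)
```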

`linalg` is not a proper name

Bug description

linalg means "linear algebra", which obviously does not match the nature of the subroutines (or whatever you call them) included in it.

The basic problems in linear algebra include linear systems, linear least squares, and eigenvalue problems, and the basic tools include matrix factorization, etc.

Since you are a fan of Python, you may refer to https://docs.scipy.org/doc/scipy/reference/linalg.html and see what the Python community includes in linalg. Also check what LAPACK contains. I think they align with what I said above.

Although closely related, optimization is not part of linear algebra. For example, the linear least squares problem is both a linear algebra and an optimization problem, but nobody will agree that the trust-region subproblem belongs to linear algebra.

The name must be corrected.

Bug reproduction

NA

Expected Results

NA

Actual Results

NA

Versions

NA

`python -m pytest --pyargs cobyqa` fails

Bug description

python -m pytest --pyargs cobyqa
=========================================== test session starts ============================================
platform linux -- Python 3.9.7, pytest-7.1.2, pluggy-1.0.0
rootdir: /home/zaikunzhang
collected 361 items                                                                                        

.local/lib/python3.9/site-packages/cobyqa/linalg/tests/test_geometry.py ..........                   [  2%]
.local/lib/python3.9/site-packages/cobyqa/linalg/tests/test_optimize.py ............................ [ 10%]
.................................................................................................... [ 38%]
........F...........                                                                                 [ 43%]
.local/lib/python3.9/site-packages/cobyqa/tests/test_cobyqa.py ..................................... [ 54%]
.................................................................................................... [ 81%]
................................................................                                     [ 99%]
.local/lib/python3.9/site-packages/cobyqa/tests/test_utils.py ..                                     [100%]

================================================= FAILURES =================================================
_________________________________________ TestNNLS.test_simple[5] __________________________________________

self = <cobyqa.linalg.tests.test_optimize.TestNNLS object at 0x7fcdb3eb4220>, n = 5

    @pytest.mark.parametrize('n', [1, 5, 10, 100])
    def test_simple(self, n):
>       self.nnls_test(n, n)

.local/lib/python3.9/site-packages/cobyqa/linalg/tests/test_optimize.py:219: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.local/lib/python3.9/site-packages/cobyqa/linalg/tests/test_optimize.py:213: in nnls_test
    assert_array_less_equal(np.abs(grad[:k] * x[:k]), lctol)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

x = array([3.57681936e-13]), y = 2.075669894676348e-13, err_msg = '', verbose = True

    def assert_array_less_equal(x, y, err_msg='', verbose=True):
        """
        Raise an AssertionError if two objects are not less-or-equal-ordered.
    
        Parameters
        ----------
        x : array_like
            Smaller object to check.
        y : array_like
            Larger object to compare.
        err_msg : str, optional
            Error message to be printed in case of failure.
        verbose : bool, optional
            Whether the conflicting values are appended to the error message
            (default is True).
    
        Raises
        ------
        AssertionError
            The two arrays are not less-or-equal-ordered.
        """
>       assert_array_compare(operator.__le__, x, y, err_msg, verbose,
                             'Arrays are not less-or-equal-ordered')
E       AssertionError: 
E       Arrays are not less-or-equal-ordered
E       
E       Mismatched elements: 1 / 1 (100%)
E       Max absolute difference: 1.50114947e-13
E       Max relative difference: 0.72321204
E        x: array([3.576819e-13])
E        y: array(2.07567e-13)

.local/lib/python3.9/site-packages/cobyqa/tests/__init__.py:28: AssertionError
========================================= short test summary info ==========================================
FAILED .local/lib/python3.9/site-packages/cobyqa/linalg/tests/test_optimize.py::TestNNLS::test_simple[5]
================================ 1 failed, 360 passed in 115.22s (0:01:55) =================================

Bug reproduction

python -m pip install -U cobyqa
python -m pytest --pyargs cobyqa

Expected Results

Test passes.

Actual Results

Test fails.

Versions

Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: cobyqa in ./.local/lib/python3.9/site-packages (1.0.dev0)
Requirement already satisfied: scipy>=1.1.0 in /opt/intel/oneapi/intelpython/python3.9/lib/python3.9/site-packages (from cobyqa) (1.6.2)
Requirement already satisfied: numpy>=1.13.3 in /opt/intel/oneapi/intelpython/python3.9/lib/python3.9/site-packages (from cobyqa) (1.21.2)

More careful geometry step

Feature description

Study Powell's geometry steps of BOBYQA and LINCOA:

https://github.com/libprima/prima/blob/main/fortran/bobyqa/geometry.f90
https://github.com/libprima/prima/blob/main/fortran/lincoa/geometry.f90

N.B.: Instead of looking only at the mathematical formulation of the geometry steps, please pay close attention to how they are implemented. Some subtleties in the implementation may lead to significant improvements in performance. Read the comments very carefully: they contain important information on proper implementations. See, for example (this is only an example!):

https://github.com/equipez/prima/blob/7f0806ac4630a58256a1de9361e1ca2223c65c04/fortran/bobyqa/geometry.f90#L480-L494

Proposed solution

Study the code of BOBYQA and LINCOA.

Please use the modernized code, which reimplements Powell's code faithfully except for the changes indicated by the comments.

Considered alternatives

No response

Add an option called `honour_bounds` (defaulting to `true`), and decide whether to respect bounds accordingly

Feature description

Add an option called honour_bounds (defaulting to true), and decide whether to respect bounds accordingly.

When the bounds need not be respected, performance should be noticeably better, because there is more freedom in choosing the interpolation points. If it is not, something is strange in the implementation.
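
The effect of such an option can be sketched as follows. The helper name and the clipping rule below are illustrative assumptions, not COBYQA's actual implementation:

```python
import numpy as np

def place_trial_point(x, xl, xu, honour_bounds=True):
    """Illustrative helper (not COBYQA's code): when the hypothetical
    `honour_bounds` option is set, project a trial interpolation point
    into [xl, xu]; otherwise leave it free, which gives the model more
    room to place interpolation points.
    """
    x = np.asarray(x, dtype=float)
    return np.clip(x, xl, xu) if honour_bounds else x
```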

Proposed solution

No response

Considered alternatives

No response

Initialization with fewer points

Feature description

The algorithm should begin to optimize the problem starting from the second function evaluation, rather than waiting until the (NPT+1)th evaluation.

Proposed solution

Think about it. Investigate the possibilities. Study the literature.

Considered alternatives

No response

How would COBYQA behave on feasibility problems?

Suppose that the problem has no objective function, or that the objective function is constant. How would COBYQA behave? Will self.penalty always be zero? Will the predicted/actual reductions always be zero? What will happen to the reduction ratio and the update of the trust-region radius?

CUTEst contains many feasibility problems.
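
The concern about a vanishing predicted reduction can be made concrete. One common trust-region safeguard (a generic idiom, not necessarily what COBYQA does) is to guard the ratio:

```python
def reduction_ratio(actual, predicted, tiny=1e-30):
    """Guarded trust-region ratio. With a constant objective and a zero
    penalty, both reductions can vanish and the plain ratio is 0/0; this
    guard treats a negligible predicted reduction as a full success, so
    the trust-region radius is not shrunk spuriously on a pure
    feasibility problem. The threshold `tiny` is an illustrative choice.
    """
    if abs(predicted) < tiny:
        return 1.0
    return actual / predicted
```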

Comments on `v1.1.1` Improve handling of NaNs

Hi Tom @ragonneau ,

I saw your remarks on https://github.com/cobyqa/cobyqa/releases/tag/v1.1.1

If the returned objective function value or the maximum constraint violation is NaN, the optimization procedure is now considered unsuccessful.

Two comments.

  1. The returned point should not contain NaN unless all the points visited by the algorithm, particularly the starting point, contain NaN. Indeed, if the starting point contains NaN, then the algorithm should either return immediately with an error/unsuccessful code, or replace all the NaN entries with 0.

  2. The returned function value or constraint violation should not be NaN unless all the points visited by the algorithm yield a NaN function value or a NaN constraint violation.
    In particular, if the function value and constraint violation at the starting point are not NaN, then the algorithm should not return NaN for these values.

This is basic; it should be ensured, and it should be checked (in debug mode). It should be ensured by the functions that select the returned point.
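
Points 1 and 2 can be sketched as a selection rule over the history of visited points. The function and its data layout are hypothetical, not COBYQA's internals:

```python
import numpy as np

def select_best(points, fvals, cvals):
    """Pick the returned point among all visited points, never preferring
    a NaN function value or constraint violation over a finite one.
    `points`, `fvals`, and `cvals` are hypothetical histories kept by the
    solver (visited points, objective values, constraint violations).
    """
    finite = [i for i in range(len(points))
              if not (np.isnan(fvals[i]) or np.isnan(cvals[i]))]
    if not finite:
        # Every visited point yielded NaN: nothing better to offer.
        return points[0], fvals[0], cvals[0]
    # Among NaN-free points, prefer the least violation, then the least value.
    best = min(finite, key=lambda i: (cvals[i], fvals[i]))
    return points[best], fvals[best], cvals[best]
```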

Thanks.

Investigate the exceptional jump to geometry improvement or to reduction of $\delta^k$

Feature description

Conduct a more careful investigation into the exceptional jump to geometry improvement or to the reduction of $\delta^k$. This strategy is crucial for the performance of Powell's solvers, where it accounts for a performance difference of about 30%. It should be just as important in COBYQA. If a boost to the performance is not observed, there must be something wrong in the implementation.

See the jumping criteria in the following, paying particular attention to the logical variables IMPROVE_GEO, REDUCE_RHO, ADEQUATE_GEO, and ACCURATE_MOD.

https://github.com/libprima/prima/blob/main/fortran/uobyqa/uobyqb.f90
https://github.com/libprima/prima/blob/main/fortran/newuoa/newuob.f90
https://github.com/libprima/prima/blob/main/fortran/bobyqa/bobyqb.f90
https://github.com/libprima/prima/blob/main/fortran/lincoa/lincob.f90
https://github.com/libprima/prima/blob/main/fortran/cobyla/cobylb.f90

Proposed solution

No response

Considered alternatives

No response

Infrastructure (finish this before making any other movements)

Feature description

The development of COBYQA should be equipped with the following infrastructure. They should be done BEFORE making any further changes to the current version of the code.

  1. Unit tests.

This means the tests for the correctness of each component of the software.

It should be triggered by each push and also periodically (e.g., once every day).

It should contain sufficiently many "tough" tests (strongly ill-conditioned problems, data with NaN/Inf, improper inputs, large-scale problems, etc). It is a joke to test code with only normal data.

The tests should be both randomized and reproducible. The randomization seed should be changed regularly (I would change the seed once every week so that I have a week to debug if the tests detect something wrong).
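
A weekly-rotating yet reproducible seed can be derived from the ISO calendar. The scheme below is one possible sketch:

```python
import datetime

def weekly_seed(today=None):
    """Seed that is stable within an ISO week but changes every Monday,
    so a failure detected by the periodic tests stays reproducible for
    a full week of debugging. The encoding (year and week number) is an
    illustrative choice.
    """
    today = today or datetime.date.today()
    year, week, _ = today.isocalendar()
    return 10_000 * year + week

# All randomized tests would then draw from a generator built on this seed,
# e.g. rng = numpy.random.default_rng(weekly_seed()).
```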

  2. Verifications.

This will be needed (not now) if you implement a new version that aims to behave the same as a standard version (e.g., when porting the software to C++, Julia, etc). The verification script should verify that the new and standard versions behave the same on a sufficiently large set (hundreds) of problems.

As with the unit tests, the verifications should be triggered by each push and also periodically, should contain sufficiently many tough tests, and should be randomized and reproducible with the seed changed regularly.

  3. Profiling.

This means profiling the code against a previous / standard version, in order to make sure that the changes you make really improve the performance rather than the opposite. Using the artifacts of the GitHub Actions, we can also keep a record of the (recent) evolution of the performance.

It is extremely important to make sure that the comparisons are fair and unbiased because we will make decisions/changes according to the comparisons! We also have to make sure that the difference we observe is truly due to the decisions/changes we make rather than because of rounding errors or pure luck.

To achieve this, we have to ensure that the problem set is sufficiently large, and we have to randomize the tests. Randomization means introducing a reasonable amount of perturbation to the problems and to the data (initial point, initial trust region radius, etc) and then comparing the average performance. The perturbation should not be too high --- a perturbation as small as the machine epsilon would be enough to produce significant changes to the iterative process, while a perturbation too large may lead to "low performance" of all solvers. Note that I am not talking about testing noisy problems, which should also be included in the profiling.
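
Such a perturbation of the data, say of the initial point, might be sketched as follows. The scale and the perturbation rule are illustrative assumptions:

```python
import numpy as np

def perturb_x0(x0, rng, scale=1e-10):
    """Randomize the initial point by a perturbation of relative size
    `scale` (an illustrative default): a perturbation near the machine
    epsilon already changes the whole iterative process, while a large
    one would degrade every solver and blur the comparison. An additive
    term handles zero components of x0.
    """
    x0 = np.asarray(x0, dtype=float)
    noise = rng.standard_normal(x0.shape)
    return x0 * (1.0 + scale * noise) + scale * rng.standard_normal(x0.shape)
```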

  1. "Programming by Contract".

I strongly suggest you add preconditions and postconditions to the code.

See, e.g.,

https://github.com/libprima/prima/blob/367a8e7bb3eb4a388f10f72d776c7ad0898bb75e/fortran/newuoa/trustregion.f90#L123
https://github.com/libprima/prima/blob/367a8e7bb3eb4a388f10f72d776c7ad0898bb75e/fortran/newuoa/trustregion.f90#L435

Pre/postconditions overlap with unit tests, but they cannot replace each other. They may seem tedious, and some conditions may even feel silly, but the benefit is enormous. This is especially essential if you want to provide a "reference implementation" for other people/languages. The pre/postconditions not only ensure the basic correctness of your code, but also tell others the basic requirements that must be met if they make a new implementation.
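
In Python, the same discipline can be sketched with assertions guarded by a debug flag. The flag name and the Cauchy-like step rule below are illustrative, not COBYQA's code:

```python
import numpy as np

DEBUGGING = True  # hypothetical flag, switched off in release builds

def trust_region_step(grad, delta):
    """Sketch of a steepest-descent step guarded by contracts: the step
    must stay inside the trust region and must be a descent direction
    for the linear model.
    """
    # Preconditions.
    if DEBUGGING:
        assert delta > 0, "trust-region radius must be positive"
        assert np.all(np.isfinite(grad)), "gradient must be finite"
    gnorm = np.linalg.norm(grad)
    step = -delta * grad / gnorm if gnorm > 0 else np.zeros_like(grad)
    # Postconditions.
    if DEBUGGING:
        assert np.linalg.norm(step) <= delta * (1.0 + 1e-12)
        assert np.dot(grad, step) <= 0, "step must not increase the model"
    return step
```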

I have implemented everything mentioned above for the PRIMA project (see the GitHub Actions and the workflows). Programming is not something I particularly enjoy or am particularly good at, but the situation is different for you. Since I can do this, I believe that you can do similar things, but much better than what I have done (or can ever possibly do).

Thanks, and enjoy!

Proposed solution

No response

Considered alternatives

No response

Tough test, stress test, recursive test, and parallel test

These tests are intended to verify that the solver works properly under different conditions.

Tough test: whether the solver works properly when the objective / constraint function fails or returns exceptional values randomly. For example
https://github.com/libprima/prima/blob/main/matlab/tests/private/tough.m

Stress test: whether the solver works properly when the problem is exceptionally large (e.g., 500 variables and 5000 constraints). For example
https://github.com/libprima/prima/blob/main/matlab/tests/stress.m

Recursive test: whether the solver works properly when invoked recursively. For example
https://github.com/libprima/prima/blob/main/matlab/tests/recursive.m

Parallel test: whether the solver works properly when invoked in parallel. For example
https://github.com/libprima/prima/blob/main/matlab/tests/parallel.m
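
In Python, a tough-test wrapper in the spirit of tough.m might look like this. The failure probabilities are illustrative:

```python
import numpy as np

def tough(fun, rng, p_fail=0.05, p_nan=0.05):
    """Wrap an objective so that it randomly raises or returns NaN,
    mimicking the MATLAB tough.m test; the solver must survive both
    kinds of failure. The probabilities are illustrative choices.
    """
    def wrapped(x):
        u = rng.random()
        if u < p_fail:
            raise RuntimeError("simulated evaluation failure")
        if u < p_fail + p_nan:
            return np.nan
        return fun(x)
    return wrapped
```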

N.B.

  1. We have to test all platforms (Linux, macOS, Windows, and different versions of Python). The behavior may differ when the platform changes.
  2. We need to run these tests periodically on randomized data. Success on a particular problem does not mean anything.
  3. You will need this: https://github.com/orgs/community/discussions/58716

The logo of COBYQA should be explainable

Feature description

Looking good is only the first step, and far from sufficient. The logo must be explainable. Redesign it if necessary.

Proposed solution

No response

Considered alternatives

No response

Investigate more carefully the strategy of "moving towards boundaries of active constraints" in the trust-region solver with linear constraints

Feature description

This strategy should be implemented in exactly the same way as in LINCOA:

https://github.com/libprima/prima/blob/367a8e7bb3eb4a388f10f72d776c7ad0898bb75e/fortran/lincoa/trustregion.f90#L238-L291

For LINCOA, it leads to a 20% improvement in performance. A similar boost should be observable in COBYQA. Otherwise, something is wrong.

Proposed solution

No response

Considered alternatives

No response

Sometimes, although rarely, a "poor" trust-region trial point should not be included in the next interpolation set

Feature description

It is not always a good idea to include a trust-region trial point in the next interpolation set. Sometimes, although rarely, a "poor" trust-region trial point should not be included.

Check the strategy in Powell's solvers. For example, for NEWUOA:

https://github.com/libprima/prima/blob/367a8e7bb3eb4a388f10f72d776c7ad0898bb75e/fortran/newuoa/newuob.f90#L300-L311

https://github.com/libprima/prima/blob/367a8e7bb3eb4a388f10f72d776c7ad0898bb75e/fortran/newuoa/geometry.f90#L22

Proposed solution

No response

Considered alternatives

No response

Make the experiments reproducible

Feature description

For profiling, the testing problem set must be fixed (while being sufficiently large). It is hard to draw any conclusion based on the comparison on a changing set of problems.

Please, please, fix the "loading failure" problem of PyCutest before starting to make ANY comparison. It is very frustrating to see that the comparison results are not reproducible/consistent due to loading failures of testing problems. If this is impossible due to defects in PyCutest, then turn to a different solution.

I never want to encounter loading-failure problems again in our later investigations, whether on GitHub Actions or on local computers.

I never want to encounter again the situation that "this comparison result is different from my last experiment, and I do not know/remember what happened". This has happened too many times in our previous investigations, and it is very annoying.

We have been working on this project long enough; there should be no more non-reproducible experiments.

Thanks.

Proposed solution

No response

Considered alternatives

No response
