
mpmath


A Python library for arbitrary-precision floating-point arithmetic.

Website: https://mpmath.org/

Main author: Fredrik Johansson <[email protected]>

Mpmath is free software released under the New BSD License (see the LICENSE file for details).

0. History and credits

The following people (among others) have contributed major patches or new features to mpmath:

Numerous other people have contributed by reporting bugs, requesting new features, or suggesting improvements to the documentation.

For a detailed changelog, including individual contributions, see the CHANGES file.

Fredrik's work on mpmath during summer 2008 was sponsored by Google as part of the Google Summer of Code program.

Fredrik's work on mpmath during summer 2009 was sponsored by the American Institute of Mathematics under the support of the National Science Foundation Grant No. 0757627 (FRG: L-functions and Modular Forms).

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.

Credit also goes to:

  • The authors of the GMP library and the Python wrapper gmpy, enabling mpmath to become much faster at high precision
  • The authors of MPFR, pari/gp, MPFUN, and other arbitrary-precision libraries, whose documentation has been helpful for implementing many of the algorithms in mpmath
  • Wikipedia contributors; Abramowitz & Stegun; Gradshteyn & Ryzhik; Wolfram Research for MathWorld and the Wolfram Functions site. These are the main references used for special functions implementations.
  • George Brandl for developing the Sphinx documentation tool used to build mpmath's documentation

Release history:

  • Version 1.3.0 released on March 7, 2023
  • Version 1.2.1 released on February 9, 2021
  • Version 1.2.0 released on February 1, 2021
  • Version 1.1.0 released on December 11, 2018
  • Version 1.0.0 released on September 27, 2017
  • Version 0.19 released on June 10, 2014
  • Version 0.18 released on December 31, 2013
  • Version 0.17 released on February 1, 2011
  • Version 0.16 released on September 24, 2010
  • Version 0.15 released on June 6, 2010
  • Version 0.14 released on February 5, 2010
  • Version 0.13 released on August 13, 2009
  • Version 0.12 released on June 9, 2009
  • Version 0.11 released on January 26, 2009
  • Version 0.10 released on October 15, 2008
  • Version 0.9 released on August 23, 2008
  • Version 0.8 released on April 20, 2008
  • Version 0.7 released on March 12, 2008
  • Version 0.6 released on January 13, 2008
  • Version 0.5 released on November 24, 2007
  • Version 0.4 released on November 3, 2007
  • Version 0.3 released on October 5, 2007
  • Version 0.2 released on October 2, 2007
  • Version 0.1 released on September 27, 2007

1. Download & installation

Mpmath requires Python 3.8 or later. It has been tested with CPython 3.8 through 3.13 and with PyPy 3.9 through 3.10.

The latest release of mpmath can be downloaded from the mpmath website and from https://github.com/mpmath/mpmath/releases

It should also be available in the Python Package Index at https://pypi.python.org/pypi/mpmath

To install the latest release of mpmath with pip, simply run

pip install mpmath

or from the source tree

pip install .

The latest development code is available from https://github.com/mpmath/mpmath

See the main documentation for more detailed instructions.

2. Documentation

Documentation in reStructuredText format is available in the docs directory included with the source package. These files are human-readable, but can be compiled to prettier HTML using Sphinx.

The most recent documentation is also available in HTML format:

https://mpmath.org/doc/current/

3. Running tests

The unit tests in mpmath/tests/ can be run with pytest; see the main documentation.

You may also want to check out the demo scripts in the demo directory.

The master branch is automatically tested on GitHub Actions.

4. Known problems

Mpmath is a work in progress. Major issues include:

  • Some functions may return incorrect values when given extremely large arguments or arguments very close to singularities.
  • Directed rounding works for arithmetic operations. It is implemented heuristically for other operations, and their results may be off by one or two units in the last place (even if otherwise accurate).
  • Some IEEE 754 features are not available. Infinities and NaN are partially supported; denormal rounding is currently not available at all.
  • The interface for switching precision and rounding is not finalized. The current method is not threadsafe.
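To make the last point concrete, precision switching currently goes through the global `mp` context; a small sketch of how a temporary precision change is scoped using mpmath's documented `mp.workdps` context manager (which restores the old setting on exit, but is still subject to the thread-safety caveat above):

```python
from mpmath import mp, mpf

mp.dps = 15                      # global working precision, in decimal digits
with mp.workdps(50):             # temporarily raise precision inside the block
    root = mpf(2) ** mpf('0.5')  # computed with roughly 50 digits
# the previous precision is restored on exit
print(mp.dps)
```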

5. Help and bug reports

General questions and comments can be sent to the mpmath mailing list.

You can also report bugs and send patches to the mpmath issue tracker, https://github.com/mpmath/mpmath/issues


mpmath's Issues

Making a new release

It's been too long since the last release, given all the new features
(especially the GMPY support).

I think issues 40, 41 and 42 should be fixed first, though. It would also
be nice to include Vinzent's solvers module. Anything else? What parts of
the documentation need to be updated?

Unfortunately, we're running out of version numbers :) I don't have any
definite plans for 1.0, but I'd maybe like to make some fundamental
interface changes before then, and it'd be good to have at least one major
release in between.

I could also just release the current code immediately as 0.8.1 and
postpone 0.9.

Thoughts?

Original issue for #90: http://code.google.com/p/mpmath/issues/detail?id=50

Original author: https://code.google.com/u/111502149103757882156/

Original owner: https://code.google.com/u/111502149103757882156/

more range-like behaviour of arange

What about something like this?

from mpmath import mpf

def arange(*args):
    """arange([a,] b[, dt]) -> list [a, a + dt, a + 2*dt, ...] with t < b"""
    if not len(args) <= 3:
        raise TypeError('arange expected at most 3 arguments, got %i'
                        % len(args))
    if not len(args) >= 1:
        raise TypeError('arange expected at least 1 argument, got %i'
                        % len(args))
    # set defaults
    a = 0
    dt = 1
    # interpret arguments
    if len(args) == 1:
        b = args[0]
    elif len(args) >= 2:
        a = args[0]
        b = args[1]
    if len(args) == 3:
        dt = args[2]
    a, b, dt = mpf(a), mpf(b), mpf(dt)
    result = []
    i = 0
    while 1:
        t = a + dt*i
        i += 1
        if t < b:
            result.append(t)
        else:
            break
    return result

Maybe there should be a warning when dt <= eps.
Small dt are taking forever anyway.
(Sorry for not submitting a patch)
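For reference, the argument handling proposed above can be exercised with plain floats standing in for mpf (a hypothetical standalone sketch, not mpmath's actual implementation):

```python
def arange(*args):
    """arange([a,] b[, dt]) -> [a, a + dt, a + 2*dt, ...], stopping below b."""
    if not 1 <= len(args) <= 3:
        raise TypeError('arange expected 1 to 3 arguments, got %i' % len(args))
    a, dt = 0.0, 1.0                 # defaults, as in range()
    if len(args) == 1:
        b = args[0]
    else:
        a, b = args[0], args[1]
    if len(args) == 3:
        dt = args[2]
    result = []
    i = 0
    # t is recomputed as a + dt*i each step instead of repeated addition,
    # to avoid accumulating rounding error
    while a + dt * i < b:
        result.append(a + dt * i)
        i += 1
    return result
```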

Original issue for #76: http://code.google.com/p/mpmath/issues/detail?id=36

Original author: https://code.google.com/u/[email protected]/

secant fails for multiple roots

>>> from mpmath import secant
>>> f = lambda x: (x-1)**100
>>> secant(f, 0)
mpf('0.33989945043882264')
>>> secant(f, 0, 3)
mpf('-4.7331654313260708e-30')
>>> g = lambda x: x**2
>>> secant(g, -2)
mpf('-0.00010708112159826222')
>>> secant(g, -2, 3)
mpf('-0.0003107520198881292')

This is an algorithmic problem inherited from Newton's method, which
converges slowly for multiple roots.

A solution could be adding a modified Newton's method like this:

x_{k+1} = x_k - F(x_k)/F'(x_k) with F(x) = f(x)/f'(x)
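A quick numerical sketch of that modified iteration, with hand-coded derivatives purely for illustration. Since F = f/f', the update simplifies to x - f*f' / (f'^2 - f*f''), and f(x) = (x-1)^5 (a quintuple root at 1, where the plain method crawls) is recovered essentially in one step:

```python
def modified_newton(f, df, d2f, x, steps=20):
    """Newton iteration applied to F = f/f', which has only simple roots.
    The update x - F/F' simplifies to x - f*df / (df**2 - f*d2f)."""
    for _ in range(steps):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = dfx * dfx - fx * d2fx
        if denom == 0:          # converged (or derivative data degenerate)
            break
        x = x - fx * dfx / denom
    return x

f   = lambda x: (x - 1) ** 5
df  = lambda x: 5 * (x - 1) ** 4
d2f = lambda x: 20 * (x - 1) ** 3

root = modified_newton(f, df, d2f, 3.0)
```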

Original issue for #83: http://code.google.com/p/mpmath/issues/detail?id=43

Original author: https://code.google.com/u/[email protected]/

Original owner: https://code.google.com/u/[email protected]/

Issues with mp.dps and exp function

What steps will reproduce the problem?

Run the following Python code:

```
# Test exp function
from mpmath import *

# set precision and rounding
mp.dps = 512
mp.rounding = 'nearest'

print 'Test 1'
z1 = mpc('-1.0', '0.0')
nprint(z1, 17)
z2 = exp(z1)
nprint(z2, 17)

print 'Test 2'
z3 = mpc(-1.0, 0.0)
nprint(z3, 17)
z4 = exp(z3)
nprint(z4, 17)
```

What is the expected output?

Test 1
(-1.0 + 0.0j)
(0.36787944117144233 + 0.0j)
Test 2
(-1.0 + 0.0j)
(0.36787944117144233 + 0.0j)

What do you see instead?

Test 1
(-1.0 + 0.0j)
(2.7182818284590452 + 0.0j)
Test 2
(-1.0 + 0.0j)
(2.7182818284590452 + 0.0j)

What version of the product are you using? On what operating system?

Windows XP
Python 2.5.1
mpmath 0.7

Please provide any additional information below.

If I comment out the precision statement: mp.dps = 512
then the returned answer is correct.

mp.dps=64 works OK
mp.dps=80 works OK

Is there some problem with memory allocation?

This bug causes big time havoc.
I'll try a later version of python and see what happens.

Regards
Richard Lyon

Original issue for #73: http://code.google.com/p/mpmath/issues/detail?id=33

Original author: https://code.google.com/u/114427645118861597836/

allow to use python float & complex instead of mpf, mpc

mpf and mpc are a lot slower than Python floats and complexes. Sometimes
I'd like to take advantage of all the nice algorithms in mpmath (like
special functions, ODE solvers), but I'd like them to execute fast
(using the Python float & complex classes), and I don't mind some rounding errors.

Imho something like

mpf = float
mpc = complex

should be enough, but it needs to be hooked up in mpmath somehow.
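A minimal sketch of the idea, using a hypothetical context object (mpmath's eventual solution was along these lines): bind the context's number types to the machine types and dispatch elementary functions on the argument type.

```python
import cmath
import math

class FPContext:
    """Fast, fixed-precision stand-in context: mpf/mpc are the machine types."""
    mpf = float
    mpc = complex

    def exp(self, x):
        # dispatch on the argument type, as mpmath's functions do for mpf/mpc
        return cmath.exp(x) if isinstance(x, complex) else math.exp(x)

fp = FPContext()
y = fp.exp(fp.mpf(1))           # plain machine-precision exp
z = fp.exp(fp.mpc(0, math.pi))  # complex case, exp(i*pi) ~ -1
```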

Original issue for #72: http://code.google.com/p/mpmath/issues/detail?id=32

Original author: https://code.google.com/u/104039945248245758823/

one test fails on Debian

After applying the patch in issue #61, one test fails:

$ py.test 
============================= test process starts

executable:   /usr/bin/python  (2.4.5-candidate-1)
using py lib: /usr/lib/python2.4/site-packages/py <rev unknown>

mpmath/tests/test_bitwise.py[8] ........
mpmath/tests/test_compatibility.py[3] F..
mpmath/tests/test_convert.py[7] .......
mpmath/tests/test_diff.py[2] ..
mpmath/tests/test_division.py[6] ......
mpmath/tests/test_functions2.py[3] ...
mpmath/tests/test_hp.py[1] .
mpmath/tests/test_interval.py[2] ..
mpmath/tests/test_mpmath.py[26] ..........................
mpmath/tests/test_power.py[2] ..
mpmath/tests/test_quad.py[9] .........
mpmath/tests/test_rootfinding.py[1] .
mpmath/tests/test_special.py[4] ....
mpmath/tests/test_trig.py[3] ...

---

____________________ entrypoint: test_double_compatibility

---

```
def test_double_compatibility():
    mp.prec = 53
    mp.rounding = 'default'
    for x, y in zip(xs, ys):
        mpx = mpf(x)
        mpy = mpf(y)
        assert mpf(x) == x
        assert (mpx < mpy) == (x < y)
        assert (mpx > mpy) == (x > y)
        assert (mpx == mpy) == (x == y)
        assert (mpx != mpy) == (x != y)
        assert (mpx <= mpy) == (x <= y)
        assert (mpx >= mpy) == (x >= y)
        assert mpx == mpx
        assert mpx + mpy == x + y
        assert mpx * mpy == x * y
E       assert mpx / mpy == x / y

        assert (mpf('-4.1974624032366689e+117') /
        mpf('-8.4657370748010221e-47')) == (-4.1974624032366689e+117 /
        -8.4657370748010221e-47)
```

[/home/ondra/ext/mpmath/mpmath/tests/test_compatibility.py:35]

---

============= tests finished: 76 passed, 1 failed in 28.46 seconds

Original issue for #62: http://code.google.com/p/mpmath/issues/detail?id=22

Original author: https://code.google.com/u/104039945248245758823/

Referenced issues: #61

Interval arithmetic: pow(0,...)

What steps will reproduce the problem?

>>> from mpmath import mpi
>>> mpi(0,1)**2

What is the expected output? What do you see instead?

Expect [0, 1]. But the __pow__ method doesn't seem to handle 0 in the base with 2 in the exponent. Instead I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/mpmath/apps/interval.py", line 129, in __pow__
    assert s.a >= 1 and s.b >= 1
AssertionError

What version of the product are you using? On what operating system?

0.6 on Linux, Python 2.5.

Please provide any additional information below.

For now I've replaced x**2 with x*x (luckily I just needed an integer exponent). But presumably this case ought to be handled using ** too...
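The case the assertion misses can be handled directly: for an even positive integer exponent, an interval containing 0 maps to [0, max(a^n, b^n)]. A standalone sketch on plain float endpoints (not mpmath's interval type, and ignoring outward rounding):

```python
def interval_pow_even(a, b, n):
    """[a, b] ** n for an even positive integer n, allowing 0
    (or a sign change) inside the base interval."""
    assert n > 0 and n % 2 == 0
    lo, hi = sorted((a ** n, b ** n))
    if a <= 0 <= b:        # the monomial attains its minimum 0 inside [a, b]
        lo = 0
    return (lo, hi)
```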

Original issue for #46: http://code.google.com/p/mpmath/issues/detail?id=6

Original author: https://code.google.com/u/111244884810143668698/

Implement all mathematical functions in mpmath.lib

Some functions are currently only implemented in mptypes.py. However, all
functions should be implemented in mpmath.lib to provide a complete
functional interface that is independent of the mpf class interface (and
its relatively fragile state-based management of precision and rounding).

Functions currently not implemented in lib include:
- Noninteger powers (real, complex and real->complex)
- Inverse trigonometric / hyperbolic functions
- All the extra functions (gamma, zeta, ...)

More applied code like numerical integration should probably not be moved,
as implementing it functionally will make it significantly more complex.

Original issue for #48: http://code.google.com/p/mpmath/issues/detail?id=8

Original author: https://code.google.com/u/111502149103757882156/

Original owner: https://code.google.com/u/111502149103757882156/

pickling fails with mpmath numbers

Hello, in order to save large amounts of high-precision data,
I need to serialize mpmath numbers. Unfortunately, this fails with an
exception I don't understand, since the method __getstate__ seems to be
defined (see below). Converting to and from strings is only a temporary
option, because it is way too slow and wastes space.

Example:

In [2]:import mpmath

In [3]:a = mpmath.mpc(1+2j)

In [4]:a
Out[4]:mpc(real='1.0', imag='2.0')

In [5]:import pickle

In [6]:pickle.dumps(a)

Results in:

<type 'exceptions.TypeError'>: a class that defines __slots__ without
defining __getstate__ cannot be pickled

What version of the product are you using? On what operating system?

Python 2.5
mpmath 0.7
on openSUSE 10.2
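The error is generic to pickling `__slots__` classes; defining `__getstate__`/`__setstate__` (or `__reduce__`) on the number types fixes it. An illustrative stand-in class (hypothetical, not mpmath's actual mpc):

```python
import pickle

class SlottedComplex:
    """Stand-in for a __slots__-based number type like mpc."""
    __slots__ = ("real", "imag")

    def __init__(self, real, imag):
        self.real, self.imag = real, imag

    # Without this pair, old pickle protocols raise the TypeError quoted above.
    def __getstate__(self):
        return (self.real, self.imag)

    def __setstate__(self, state):
        self.real, self.imag = state

z = SlottedComplex(1.0, 2.0)
z2 = pickle.loads(pickle.dumps(z))   # round-trips without error
```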

Original issue for #74: http://code.google.com/p/mpmath/issues/detail?id=34

Original author: https://code.google.com/u/109204080795773808200/

Implement all hypergeometric functions

Mpmath should be able to compute nearly all the functions listed on this
page (which are special cases of the general hypergeometric series that
mpmath now knows how to compute): http://documents.wolfram.com/mathematica/Built-inFunctions/MathematicalFunctions/HypergeometricRelated/

In many cases implementing a function is simply a matter of translating
the appropriate formula to code and writing tests to verify that no typo
was made. (It may be necessary to watch out for cancellation effects at
special points.)

The 0F1 and 1F1 series converge for all z, but 2F1 only converges for |z| <
1. For functions based on 2F1, variable transformations have to be used, if
they exist at all.

The bigger challenge is to implement 2F1 for arbitrary z. I have looked at
using the two integral representations given on http://mathworld.wolfram.com/HypergeometricFunction.html , but they are
nearly useless: the Euler integral has horrible endpoint singularities that
generally seem to fool the tanh-sinh algorithm, and the Barnes integral
oscillates wildly. Is there a trick to compute these integrals reliably?

Otherwise, the only method I know of to compute 2F1 is to use a generic ODE
solver to integrate the hypergeometric differential equation (the method is
described in Numerical Recipes). This will be slow, but if it works, it is
better than nothing.

Original issue for #70: http://code.google.com/p/mpmath/issues/detail?id=30

Original author: https://code.google.com/u/111502149103757882156/

mpmath does not interact with float nan's/inf's correctly

What steps will reproduce the problem? What is the expected output? What do you see instead?

See the two examples below:

>>> # Example 1: mpf * floating-point inf
>>> mpmath.mpf( '1.2345' ) * mpmath.mpf( 'inf' )
mpf('+inf')
>>> mpmath.mpf( '1.2345' ) * float( 'inf' )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/mpmath/mptypes.py", line 214, in __mul__
    return s.binop(t, fmul)
  File "/usr/lib/python2.5/site-packages/mpmath/mptypes.py", line 196, in binop
    t = mpf_convert_rhs(t)
  File "/usr/lib/python2.5/site-packages/mpmath/mptypes.py", line 77, in mpf_convert_rhs
    return make_mpf(from_float(x, 53, round_floor))
  File "/usr/lib/python2.5/site-packages/mpmath/lib/floatop.py", line 218, in from_float
    m, e = math.frexp(x)
OverflowError: math range error

>>> # Example 2: mpf * floating-point nan
>>> mpmath.mpf( '1.2345' ) * mpmath.mpf( 'nan' )
mpf('nan')
>>> mpmath.mpf( '1.2345' ) * float( 'nan' )
mpf('0.0')

What version of the product are you using? On what operating system?

0.6, Linux.

Please provide any additional information below.
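The traceback shows `from_float` feeding the value straight to `math.frexp`, which rejects inf and silently misbehaves on nan; classifying special values before the conversion is the obvious fix. A sketch with a hypothetical helper (not mpmath's actual code):

```python
import math

def from_float_guarded(x):
    """Classify special float values before the frexp-based conversion."""
    if math.isnan(x):
        return 'nan'
    if math.isinf(x):
        return '+inf' if x > 0 else '-inf'
    m, e = math.frexp(x)    # safe now: x is finite
    return (m, e)
```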

Original issue for #45: http://code.google.com/p/mpmath/issues/detail?id=5

Original author: https://code.google.com/u/110030153462670186063/

linear algebra

If you are going to implement some stuff for solving linear equations (as
you mentioned in your recent blog post), I could provide working (yet
somewhat messy) code to do the basic stuff like LU decomposition (this
includes solving linear systems and calculating the inverse/determinant
efficiently). Additionally I could share code for solving overdetermined
(and ordinary) linear systems via QR decomposition (LU decomposition is two
times faster, but less accurate).
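The LU-based solving described above boils down to Gaussian elimination with partial pivoting; a plain-Python sketch of a direct solver for context (a real mpmath module would work on mpf entries and factor once for multiple right-hand sides):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    A is a list of row lists; inputs are not modified."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for col in range(n):
        # partial pivoting: swap the row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # back-substitution on the upper-triangular system
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```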

Original issue for #86: http://code.google.com/p/mpmath/issues/detail?id=46

Original author: https://code.google.com/u/[email protected]/

Original owner: https://code.google.com/u/[email protected]/

Performance tips?

I dropped "mpmath" into my iterative transformation grapher (itgrapher)
GIMP plugin.  I replaced all occurrences of float() with mpf().  It worked
but it was much slower than "math".

Based on your benchmark data, I expected an improvement in performance just
for dropping it in (on equations that don't overflow with "math" and
therefore don't need the extra precision).  Do I need to explicitly limit
the precision to get those speed gains?

I had turned to mpmath because I had overflows with the exp() operation. 
Prior to porting itgrapher to Python-fu, it was in PERL, where there was no
trouble.  (BTW, thanks for working to make the transition easier!)

So, if I need to limit the precision most of the time, I'm going to need a
way to open it up when needed. Can I detect overflows and then repeat
operations with higher precision? Do you raise exceptions, and if so, what?

I'm using mpmath 0.6 and Python 2.5 and GIMP 2.4

Original issue for #59: http://code.google.com/p/mpmath/issues/detail?id=19

Original author: awspring…@gmail.com

Uniform interface for calculus functions

Calculus functions should have a uniform interface for specifying goals,
handling errors, etc. Here is a tentative specification.

Some of the default values could perhaps be turned into settings of the
global context.

List of parameters for calculus functions:

problem parameters
    These parameters specify the mathematical problem to be solved.
Typically the first parameter is a function f and the rest specify some
point a or interval a, b over which f should be integrated, differentiated
etc. May be given either positionally or by keyword.

algorithmic parameters
    Numerical algorithms often require manual tuning to perform optimally
(sometimes to give correct results at all). A typical algorithmic parameter
might for example be an integer n specifying the number of point samples to
use. Most functions try to choose reasonable parameters automatically, but
some may require an educated guess from the user. May be given by keyword only.

Additional keyword options common for all functions:

eps, dps, prec
    Sets the accuracy goal for the computation (only one of these should be
given). The computation is considered finished when the estimated error is
less than eps / accurate to at least dps decimal places / prec bits.
    Default: automatically set equal to the working precision.

metric
    Specifies which metric to use for measuring error:

      • ‘absolute’ – the absolute error must meet the accuracy goal
      • ‘relative’ – the relative error must meet the accuracy goal
      • ‘either’ – it is sufficient that either the absolute or the
        relative error meets the accuracy goal
      • ‘both’ – both absolute and relative error must meet the
        accuracy goal

    Default: ‘either’

workprec, workdps, extraprec, extradps
    Sets the internal working precision, either as an absolute value or
relative to the external working precision. If unspecified, the precision
is automatically set slightly higher (a few digits) than minimally required
to meet the accuracy goal, to guard against typical small rounding errors.
The working precision should be increased manually if rounding errors or
cancellations lead to inaccurate results.
    Default: typically 3-10 dps, sometimes much higher, depending on the
function.

estimate
    Specifies by which method to determine whether the result meets the
accuracy goal:

      • ‘fast’ – the error is estimated quickly using heuristic methods
        known based on experience to work for typical (reasonably
        well-behaved) input.
      • ‘safe’ – the computation is performed twice, the second time with
        increased precision and/or slightly tweaked algorithmic parameters.
        The error is estimated as twice the difference between the results.
        At the cost of increased computation time, this method is very
        reliable for all but the most pathological inputs.
      • ‘none’ – no attempt is made to estimate the error. The specified
        algorithmic parameters are assumed to result in the desired
        accuracy goal.

    Default: ‘fast’.
    Note: for some functions, ‘fast’ and ‘safe’ are identical, because no
more efficient heuristic has been implemented for the algorithm.

error
    This parameter determines how to handle failure:

      • ‘raise’ – the result is returned as soon as it meets the accuracy
        goal. Failure to meet the goal with the given algorithmic
        parameters results in an exception being raised.
      • ‘none’ – the function silently returns whatever result it obtains,
        even when likely to be inaccurate.
      • ‘warn’ – a result is returned regardless of whether it is fully
        accurate. A warning is printed if the accuracy goal is not met.
      • ‘return’ – the function returns a tuple (result, err) where err is
        the estimated error. Nothing special happens if err is larger than
        the epsilon (this is left for the user to handle).

    Default: ‘raise’.

retries
    In many cases, increasing precision and/or modifying algorithmic
parameters slightly can save a computation that fails on the first try. If
set to a positive integer, this number of retries will be performed
automatically.
    Default: 0-2, depending on the function.

verbose
    If set to any nonzero value, detailed messages about progress and
errors are printed while the function is running.
    Default: False.

Original issue for #75: http://code.google.com/p/mpmath/issues/detail?id=35

Original author: https://code.google.com/u/111502149103757882156/

Original owner: https://code.google.com/u/111502149103757882156/

Suggested big renaming to avoid internal *-imports

lib -> libmpf
lib.f<function> -> libmpf.<function>
libmpc.mpc_<function> -> libmpc.<function>

For example,

  from lib import *
  fadd(x,y,prec)
  fmul(x,y,prec)

becomes:

  import libmpf
  libmpf.add(x,y,prec)
  libmpf.mul(x,y,prec)

This would make parts of the code verbose. However, code that makes
frequent use of e.g. libmpf.add can still simply rebind this function
locally as e.g. fadd.

I think Ondrej would approve?

Original issue for #88: http://code.google.com/p/mpmath/issues/detail?id=48

Original author: https://code.google.com/u/111502149103757882156/

Original owner: https://code.google.com/u/111502149103757882156/

sqrt with interval arithmetic doesn't work

>>> x = mpi('100')
>>> sqrt(x)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python25\lib\site-packages\mpmath\mptypes.py", line 643, in f
    x = convert_lossless(x)
  File "C:\Python25\lib\site-packages\mpmath\mptypes.py", line 196, in convert_lossless
    raise TypeError("cannot create mpf from " + repr(x))
TypeError: cannot create mpf from [100.0, 100.0]
>>> x**.5
[9.9999999999999964473, 10.000000000000003553]

Original issue for #82: http://code.google.com/p/mpmath/issues/detail?id=42

Original author: https://code.google.com/u/[email protected]/

mpmath doesn't work with python2.4

ondra@fuji:~/ext/mpmath-svn$ python2.4
Python 2.4.5 (#2, Jun 25 2008, 14:11:58)
[GCC 4.3.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mpmath
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "mpmath/__init__.py", line 5, in ?
    from mptypes import *
  File "mpmath/mptypes.py", line 18, in ?
    from libmpc import *
  File "mpmath/libmpc.py", line 342, in ?
    alpha_crossover = from_float(1.5)
  File "mpmath/lib.py", line 458, in from_float
    return from_man_exp(int(m*(1<<53)), e-53, prec, rnd)
  File "mpmath/lib.py", line 387, in from_man_exp
    return normalize(sign, man, exp, bc, prec, rnd)
  File "mpmath/lib.py", line 282, in _normalize
    t = trailtable[man & 255]
TypeError: list indices must be integers

Original issue for #89: http://code.google.com/p/mpmath/issues/detail?id=49

Original author: https://code.google.com/u/104039945248245758823/

Original owner: https://code.google.com/u/casevh/

Improve Lambert W function

See http://en.wikipedia.org/wiki/Lambert_W_function and the code that implements it:

"
import math

def lambertW(x, prec = 1E-12, maxiters = 100):
    w = 0
    for i in range(maxiters):
        we = w \* pow(math.e,w)
        w1e = (w + 1) \* pow(math.e,w)
        if prec > abs((x - we) / w1e):
            return w
        w -= (we - x) / (w1e - (w+2) \* (we-x) / (2*w+2))
    raise ValueError("W doesn't converge fast enough for abs(z) = %f" % abs(x))
"

Original issue for #60: http://code.google.com/p/mpmath/issues/detail?id=20

Original author: https://code.google.com/u/104039945248245758823/

mpfs are not eval-repr-invariant at some precision levels

It seems mpfs can be recreated from their string representation at the
default precision. But the conversion can fail at some other levels.

There should be functions in mpmath.lib for translating between decimal and
binary precisions, with different use of guard digits etc.
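The requested translation is a constant-factor one (log2(10) ≈ 3.32 bits per decimal digit) plus guard digits; a sketch of such helpers (names and guard-digit policy are hypothetical, not mpmath's actual choices):

```python
import math

_LOG2_10 = math.log2(10)   # bits per decimal digit

def dps_to_prec(dps, guard=1):
    """Binary precision sufficient for `dps` decimal digits, plus guard digits."""
    return int((dps + guard) * _LOG2_10) + 1

def prec_to_dps(prec, guard=1):
    """Decimal digits faithfully representable in `prec` bits, minus guard digits."""
    return max(1, int(prec / _LOG2_10) - guard)
```

For a round-trip to be eval-repr-safe, `prec_to_dps(dps_to_prec(d))` should never fall below d.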

Original issue for #56: http://code.google.com/p/mpmath/issues/detail?id=16

Original author: https://code.google.com/u/111502149103757882156/

Original owner: https://code.google.com/u/111502149103757882156/

gmpy support

I've attached a file that adds gmpy support. The patches are against r498.

The newly released gmpy v1.03 is required. Testing with mpmath uncovered a
couple of serious bugs in gmpy on 64-bit platforms.

Performance for runtests.py:

mpmath, r498 from svn: 25.3 seconds
with patch, but gmpy not present: 25.7 seconds
with patch, gmpy present: 22.7 seconds

The performance improvements become significant when the precision exceeds
100 digits.

What version of the product are you using? On what operating system?

Patches against svn r498. O/S is Ubuntu 8.04 on a Centrino Duo, using gmpy
1.03 and a Core2-optimized build of GMP 4.2.2.

Please provide any additional information below.

I tried to optimize bitcount() and the square root functions, but I haven't
done extensive testing.

Original issue for #84: http://code.google.com/p/mpmath/issues/detail?id=44

Original author: https://code.google.com/u/casevh/

Code: Implementation of Jacobi Theta and Jacobi Elliptic Functions

Hiya!

Please find attached my implementation and unit tests for Jacobi Theta and
Elliptic functions, for your consideration for inclusion into mpmath.  I've
implemented a number of unit tests from Abramowitz & Stegun, and Mathworld,
including tests of various identities and special cases.  The tests have
been split into a full blown torture case, named elliptic_torture_tests.py,
and a more modest sampling is given in elliptic_tests.py.  The code
currently passes all of the tests.

Note that I've chosen to use the parameter k, rather than m, used by the
current mpmath.ellipk function.  This is mostly for ease of implementation
and testing, as the series expansions in Abramowitz are in terms of k.

This has had one look from Fredrik Johansson, and I've attempted to make
the initial changes he suggested.  Please let me know if you want me to
make more changes, or feel free to go ahead and modify it for inclusion in
mpmath.  The code is free to release under BSD, and I am authorized to
release it.  

Finally, please don't hesitate to contact me if you have any questions. 
I'll try to watch the mailing list, but please send me an e-mail to get my
attention if I don't respond fast enough.  

Thanks,

Mike Taschuk

Original issue for #79: http://code.google.com/p/mpmath/issues/detail?id=39

Original author: https://code.google.com/u/110354057086964594182/

fcexp bug fix and speed-up for fsin

What steps will reproduce the problem?

The following test fails:

```
from mpmath import *

N = 10000
for dps in [15, 30, 50, 75, 100, 200]:
    mp.dps = dps
    for i in range(-N, N):
        a = 2*i*pi/N
        e = exp(j*a)
        assert e.imag == sin(a) and e.real == cos(a)
```

In Python this test passes:

```
from cmath import exp
from math import cos, sin, pi

N = 10000
for i in range(-N, N):
    a = 2*i*pi/N
    e = exp(1j*a)
    assert e.imag == sin(a) and e.real == cos(a)
```

What version of the product are you using? On what operating system?

mpmath rev. 415, i686 GNU/Linux.

To fix this, one can write

```
def fcexp(a, b, prec, rounding):
    if a == fzero:
        return cos_sin(b, prec, rounding)
    # continue as before
```
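The identity the report relies on, exp(i*a) == cos(a) + i*sin(a) for real a, can be spot-checked with the standard library. A sketch (a tolerant comparison is used here, rather than the bit-exact equality the bug report demands of mpmath):

```python
import cmath
import math

# Spot-check Euler's identity exp(i*a) = cos(a) + i*sin(a) for a few angles.
N = 10000
for i in (-N, -1, 1, N - 1):
    a = 2 * i * math.pi / N
    e = cmath.exp(1j * a)
    # Tolerant comparison; the original report tests exact (==) equality.
    assert math.isclose(e.real, math.cos(a), abs_tol=1e-15)
    assert math.isclose(e.imag, math.sin(a), abs_tol=1e-15)
```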

---

Currently sin(a) calls cos_sin, which computes both sin(a) and cos(a),
using the Taylor expansion of sin and computing cos with a square root.
At low precisions it is faster to compute sin(a) using only one
Taylor expansion; the attached file contains an implementation.
To satisfy the identities
exp(j*a).real == cos(a) and exp(j*a).imag == sin(a),
the working precision has been increased; in fact the result
of computing cos(a) with a square root (in cos_sin, called by fcexp)
must be equal at the _mpf_ level to cos(a) computed with a Taylor expansion.
Trying the above example for dps in [15, 30, 50, 75, 100, 200],
the minimum extra precision needed to pass this test turns out
to be 7 on my computer.

On my computer (686 GNU/Linux) the speed-up is around
30% for dps < 30,  10% for dps = 200.
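The single-series idea can be sketched in plain Python: sum the Taylor series of sin directly, rather than deriving it from cos via a square root. This is an illustration of the approach, not the attached mpmath patch:

```python
import math

def taylor_sin(x, terms=25):
    """sin(x) from its Taylor series; each term is derived from the
    previous one, so no factorials are computed explicitly."""
    s, t = 0.0, x
    for n in range(terms):
        s += t
        # next term: multiply by -x^2 / ((2n+2)(2n+3))
        t *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return s

assert abs(taylor_sin(0.5) - math.sin(0.5)) < 1e-12
```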

Original issue for #68: http://code.google.com/p/mpmath/issues/detail?id=28

Original author: https://code.google.com/u/107755593449647463741/

diffc test fails on Debian (2.6.22-3-amd64)

What steps will reproduce the problem?
1. running 'python runtests.py'
2. a call to mpmath.gamma(mpmath.mpf('0.25'))

What is the expected output? What do you see instead? Failing test output is
attached. The call to the gamma function should return a number.  Current
output is a list index out of range error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.4/site-packages/mpmath/specfun.py", line 239, in gamma
    prec, a, c = _get_spouge_coefficients(mp.prec + 8)
  File "/usr/lib/python2.4/site-packages/mpmath/specfun.py", line 170, in _get_spouge_coefficients
    coefs = _calc_spouge_coefficients(a, prec)
  File "/usr/lib/python2.4/site-packages/mpmath/specfun.py", line 144, in _calc_spouge_coefficients
    c[k] = _fix(((-1)**(k-1) * (a-k)**k) * b / sqrt(a-k), prec)
  File "/usr/lib/python2.4/site-packages/mpmath/mptypes.py", line 372, in __rmul__
    r._mpf_ = fmuli(s._mpf_, t, g_prec, g_rounding)
  File "/usr/lib/python2.4/site-packages/mpmath/lib.py", line 587, in fmuli
    else:      bc += bctable[man>>bc]
IndexError: list index out of range
```

What version of the product are you using? On what operating system?
mpmath-0.7, on Debian Linux (2.6.22-3-amd64 #1 SMP)

Please provide any additional information below.
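The failing line indexes a precomputed table of bit lengths with man>>bc, which goes out of range for unexpectedly large mantissas. In modern Python the table can be avoided entirely, since integers know their own bit count. A sketch of the alternative, not the fix that was applied:

```python
# int.bit_length() returns the number of bits needed to represent an
# integer, with no lookup table that can be indexed out of range.
def bitcount(man):
    """Bit length of a nonnegative mantissa (table-free alternative)."""
    return man.bit_length()

assert bitcount(1) == 1
assert bitcount(2**100) == 101
```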

Original issue for #66: http://code.google.com/p/mpmath/issues/detail?id=26

Original author: https://code.google.com/u/110354057086964594182/

MPF/MPC do not accept unicode as constructor parameter

Actual output with mpmath 0.5 using python 2.5 compiled from release:

```
>>> a = "2.76"
>>> b = u"2.76"
>>> mpf( a )
mpf('2.7599999999999998')
>>> mpf( b )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/mpmath/mpmath.py", line 146, in __new__
    return +convert_lossless(val)
  File "/usr/lib/python2.5/site-packages/mpmath/mpmath.py", line 36, in convert_lossless
    raise TypeError("cannot create mpf from " + repr(x))
TypeError: cannot create mpf from u'2.76'
```

What is the expected output? What do you see instead? Expected behavior is
to be identical to Python primitives, e.g.

```
>>> float( "2.76" )
2.7599999999999998
>>> float( u"2.76" )
2.7599999999999998
```

Original issue for #43: http://code.google.com/p/mpmath/issues/detail?id=3

Original author: https://code.google.com/u/110030153462670186063/

Polyroots 1-coefficient lists

What steps will reproduce the problem?
1. let n = an integer
2. call polyroots([n])

Error from Issue 876, sympy: http://code.google.com/p/sympy/issues/detail?id=876

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/home/ondra/sympy/<ipython console> in <module>()

/home/ondra/sympy/sympy/thirdparty/mpmath/calculus.py in polyroots(coeffs,
maxsteps, cleanup, extraprec, error)
    252         err = [mpf(1) for n in range(deg)]
    253         for step in range(maxsteps):
--> 254             if max(err).ae(0):
    255                 break
    256             for i in range(deg):
```

What is the expected output? What do you see instead?
If n == 0, probably an error; if n != 0, an empty list, [].

What version of the product are you using? On what operating system? svn trunk, linux

Please provide any additional information below. I assumed that polyroots([0])
should throw a ValueError, since 0 == 0 is a tautology; polyroots([n]) for
n != 0 will give [], since there are no roots.
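The behaviour proposed above amounts to a small guard at the top of polyroots. A sketch of the proposal (a standalone helper, not the patch that was committed):

```python
def constant_poly_roots(coeffs):
    """Handle the degree-0 case: [n] represents the constant polynomial n."""
    if len(coeffs) == 1:
        if coeffs[0] == 0:
            # 0 == 0 holds identically; every value would be a "root".
            raise ValueError("tautological (zero) polynomial has no root set")
        return []  # a nonzero constant has no roots
    raise NotImplementedError("general case handled by polyroots itself")

assert constant_poly_roots([5]) == []
```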

Original issue for #85: http://code.google.com/p/mpmath/issues/detail?id=45

Original author: https://code.google.com/u/112859157854744488476/

Missing standard functions

Available in math or cmath:
log10
degrees
radians
frexp
pow (perhaps named differently to avoid mixup with the builtin)
modf
fabs

Some other functions that could be useful:
ln (just an alias for log)
cbrt (cube root)
nthroot for x^(1/n), maybe powpq(x,p,q) for x^(p/q)
sind or sindg, cosd, etc for trigonometric functions with degree arguments
round (perhaps named differently to avoid mixup with the builtin)

More:
List of functions in SciPy: http://www.scipy.org/SciPyPackages/Special
List of functions in Matlab: http://tinyurl.com/6eq8vd
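Several of the proposed functions are one-liners over existing primitives; for example, nthroot and cbrt can be sketched in plain Python as follows (the names follow the proposal above; this is not mpmath's eventual API):

```python
def nthroot(x, n):
    """Real n-th root of x; supports negative x when n is odd."""
    if x < 0:
        if n % 2 == 0:
            raise ValueError("even root of a negative number")
        return -((-x) ** (1.0 / n))
    return x ** (1.0 / n)

def cbrt(x):
    """Cube root, via nthroot."""
    return nthroot(x, 3)

# Floating-point powering is inexact, so compare with a tolerance.
assert abs(cbrt(27.0) - 3.0) < 1e-9
assert abs(cbrt(-8.0) + 2.0) < 1e-9
```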

Original issue for #87: http://code.google.com/p/mpmath/issues/detail?id=47

Original author: https://code.google.com/u/111502149103757882156/

setup.py lacks shebang

```
$ ./setup.py install
./setup.py: line 1: from: command not found
: command not found
./setup.py: line 3: syntax error near unexpected token `name='mpmath','
./setup.py: line 3: `setup(name='mpmath',
$ python setup.py install
running install
running build
running build_py
[...]
```

This is trivial; a simple `#!python` in the first line would do the job.

Original issue for #77: http://code.google.com/p/mpmath/issues/detail?id=37

Original author: https://code.google.com/u/[email protected]/

Making the interface less dependent on the implementation

See http://wiki.sympy.org/wiki/Generic_interface

As a first step, the attached patch makes an attempt to separate out the
dependency on the class hierarchy in the way mathematical properties are
checked for, by replacing isinstance() calls with property checks. The
.func property is also made to work for all objects, and is used to check
for exp and log.

Theoretically, using properties should be faster than calling isinstance,
but it might not be so in practice, due to traversal of the class hierarchy
to look up properties (this can be fixed). I did not do any detailed
timings, but I know that the tests ran in ~70 seconds before I started
making changes, and in ~70 seconds after, so this certainly doesn't cause
any major slowdowns.

As usual when sweeping over so much code, I noticed lots of (mostly minor)
bugs and oddities.

I had to do the substitutions manually, as there is quite a lot of code in
SymPy that mixes Basic and non-Basic instances. This is something that
should generally be avoided, unless it is clearly commented (much of the
time it is probably unintended).

In particular, there is a lot of code that looks like

```
if isinstance(x, Symbol):
    do_stuff_once(x)
else:
    do_stuff_repeatedly(x)
```

whereas the following would be clearer and less error-prone:

```
if type(x) is tuple:
    return do_repeated_stuff(x)
x = sympify(x)
if x.is_Symbol:
    return do_symbolic_stuff(x)
raise ValueError
```

I think the fact that

```
isinstance(<non basic object>, <BasicSubclass>)
```

stops working when removing the isinstance idiom is an advantage, as it
stops non-sympified objects from silently falling through and causing
trouble far away from where they first appeared.

In re, im, the following idiom is used:

```
if not arg.is_Add:
    term_list = [arg]

if isinstance(arg, Basic):
    term_list = arg.args
```

This idiom also occurs in one place in integrals.py
and in basic.py. Is there a reason why this is not written as

```
if arg.is_Add:
    term_list = arg.args
else:
    term_list = [arg]
```

?

I removed the test 'RandomString': 'RandomString' from test_sqrtdenest; it
seems nonsensical to just let invalid input slip through instead of raising
an exception.

Some Function subclasses call sympify inside canonize() while others don't.
But sympify is always called in Function.__new__ before canonize gets
called, so this shouldn't be necessary. I think I fixed most cases of this.

I noticed that max_ and min_ were broken because their canonize methods
were not defined as classmethods; this has been fixed.

One file contained mixed space/tab indentations, causing me some debugging
headache (my editor shows tabs as 4 spaces). Please use spaces everywhere!

Next step might be to replace instances of "x is S.obj" with x.is_obj. (In
many cases, where several singletons are checked for (as in some canonize
methods), it would be even better to use a table lookup).
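The table-lookup idea for singleton dispatch can be sketched generically (the class and handler names here are illustrative, not SymPy's):

```python
# Dispatch on singleton identity via a dict instead of chained "x is S.obj".
class _Zero: pass
class _One: pass

ZERO, ONE = _Zero(), _One()

def _handle_zero(x):
    return "zero"

def _handle_one(x):
    return "one"

# One lookup replaces "if x is ZERO: ... elif x is ONE: ..." chains;
# plain object instances hash and compare by identity, so this is safe.
_HANDLERS = {ZERO: _handle_zero, ONE: _handle_one}

def classify(x):
    handler = _HANDLERS.get(x)
    return handler(x) if handler is not None else "generic"

assert classify(ZERO) == "zero"
assert classify(42) == "generic"
```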

Original issue for #67: http://code.google.com/p/mpmath/issues/detail?id=27

Original author: https://code.google.com/u/111502149103757882156/

mpmath does not interact with float nan's/inf's correctly

What steps will reproduce the problem? What is the expected output? What do
you see instead? See the two examples below:

```
>>> """ Example 1 mpf * floating point inf """
' Example 1 mpf * floating point inf '
>>> mpmath.mpf( '1.2345' ) * mpmath.mpf( 'inf' )
mpf('+inf')
>>> mpmath.mpf( '1.2345' ) * float( 'inf' )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/site-packages/mpmath/mptypes.py", line 214, in __mul__
    return s.binop(t, fmul)
  File "/usr/lib/python2.5/site-packages/mpmath/mptypes.py", line 196, in binop
    t = mpf_convert_rhs(t)
  File "/usr/lib/python2.5/site-packages/mpmath/mptypes.py", line 77, in mpf_convert_rhs
    return make_mpf(from_float(x, 53, round_floor))
  File "/usr/lib/python2.5/site-packages/mpmath/lib/floatop.py", line 218, in from_float
    m, e = math.frexp(x)
OverflowError: math range error

>>> "Example 2  mpf * floating point nan"
'Example 2  mpf * floating point nan'
>>> mpmath.mpf( '1.2345' ) * mpmath.mpf( 'nan' )
mpf('nan')
>>> mpmath.mpf( '1.2345' ) * float( 'nan' )
mpf('0.0')
```

What version of the product are you using? On what operating system? 0.6, linux

Please provide any additional information below.
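The overflow in the first example happens because math.frexp is applied to a non-finite float during conversion. A guard along these lines handles both examples (a sketch of the idea only; from_float's real signature in mpmath differs):

```python
import math

def classify_float(x):
    """Special-case non-finite floats before decomposing with math.frexp."""
    if math.isnan(x):
        return 'nan'
    if math.isinf(x):
        return '+inf' if x > 0 else '-inf'
    return math.frexp(x)  # finite: safe to split into (mantissa, exponent)

assert classify_float(float('inf')) == '+inf'
assert classify_float(float('nan')) == 'nan'
assert classify_float(0.75) == (0.75, 0)
```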

Original issue for #44: http://code.google.com/p/mpmath/issues/detail?id=4

Original author: https://code.google.com/u/110030153462670186063/

speed-up for diffc and TS_node

The first attached file contains a patch with trivial changes which
speed up diffc and TS_node; diffc is 10% faster.
In TS_node, ldexp has been used when possible; in the example

```
4*quadts(lambda x: sqrt(1-x**2), 0, 1)
```

the first evaluation is, on my computer,
15% faster for dps < 50, 10% for dps = 100, t% for dps = 200.

The second attached file contains another modification of TS_node,
which saves the computation of an exponential; in the above example
it gives a speed-up of 35% for dps < 100 and 40% for dps = 200.
This modification also passes runtests.py, but it might have
some precision problems; comments are welcome.
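For context, ldexp scales a float by a power of two by adjusting the exponent directly, which is exact and cheaper than a general multiplication; that is why substituting it pays off. A standard-library illustration (not the patch itself):

```python
import math

# math.ldexp(x, n) computes x * 2.0**n by adjusting the exponent field;
# scaling by a power of two is exact for normal floats.
assert math.ldexp(1.5, 3) == 12.0
assert math.ldexp(math.pi, -1) == math.pi / 2.0
```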

Original issue for #69: http://code.google.com/p/mpmath/issues/detail?id=29

Original author: https://code.google.com/u/107755593449647463741/

Original owner: https://code.google.com/u/107755593449647463741/
