
astropy-benchmarks's Introduction

Astropy

The Astropy Project (https://astropy.org/) is a community effort to develop a single core package for Astronomy in Python and foster interoperability between Python astronomy packages. This repository contains the core package which is intended to contain much of the core functionality and some common tools needed for performing astronomy and astrophysics with Python.

Table of Contents

Installation

Releases are registered on PyPI, and development is occurring at the project's GitHub page.

For detailed installation instructions, see the online documentation or docs/install.rst in this source distribution.

To install astropy from PyPI, use:

pip install astropy

Contributing

The Astropy Project is made both by and for its users, so we welcome and encourage contributions of many kinds. Our goal is to keep this a positive, inclusive, successful, and growing community by abiding by the Astropy Community Code of Conduct.

More detailed information on contributing to the project or submitting feedback can be found on the contributions page. A summary of contribution guidelines can also be used as a quick reference when you are ready to start writing or validating code for submission.

Getting Started with GitHub Codespaces

Codespaces is a cloud development environment supported by GitHub. None of the Astropy build machinery depends on it, but it is a convenient way to quickly get started doing development on Astropy.

To get started, create a codespace for this repository by clicking this:

Open in GitHub Codespaces

A codespace will open in a web-based version of Visual Studio Code. The dev container is fully configured with software needed for this project. For help, see the GitHub Codespaces Support page.

Note: Dev containers are an open specification supported by GitHub Codespaces and other tools.

Supporting the Project

Powered by NumFOCUS Donate

The Astropy Project is sponsored by NumFOCUS, a 501(c)(3) nonprofit in the United States. You can donate to the project by using the link above, and this donation will support our mission to promote a sustainable, high-quality code base for the astronomy community, open code development, educational materials, and reproducible scientific research.

License

Astropy is licensed under a 3-clause BSD style license - see the LICENSE.rst file.

astropy-benchmarks's People

Contributors

aconley, adrn, astrofrog, barentsen, bsipocz, cdeil, eteq, hamogu, jamienoss, larrybradley, mdboom, mdmueller, mhvk, mriduls, nden, neutrinoceros, pllim, saimn, taldcroft, wmwv, zacharyburnett


astropy-benchmarks's Issues

Bug in benchmark - but would a fix break history?

Looking at the setup for the modeling benchmarks, it looks to me like the "medium" array has 2000 entries while the "large" array has only one (or would fail outright with the most recent numpy, since the step number is 5e-6, which is << 1):
https://github.com/astropy/astropy-benchmarks/blob/master/benchmarks/modeling/model.py#L13
(same for the arrays with units). And indeed the tests with the "large" array run faster than the tests with the medium array...

The fix is easy: just change 5e-6 to 5e+6 (or better, 5000000, because that's an int and not a float), but that would probably make a regression appear. What to do? Fix and rename the test so that it is not compared to the old one?
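
For illustration, here is a minimal sketch (not the actual benchmark code) of how the 5e-6 typo collapses the "large" array into a single element:

    import numpy as np

    # Minimal sketch of the typo: a size of 5e+6 was intended, but 5e-6 was written.
    x_medium = np.arange(2000)          # 2000 samples, as intended
    x_large = np.arange(5e-6)           # -> array([0.]): only one element, so the
                                        #    "large" case is really the smallest one
    x_large_fixed = np.arange(5000000)  # the proposed fix: an integer size of 5e+6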

Modeling benchmarks failing

@nden - it looks like some of the modeling benchmarks are currently failing:

[ 22.46%] ···· Traceback (most recent call last):
                 File "/home/travis/miniconda/envs/test/lib/python3.6/site-packages/asv/benchmark.py", line 1039, in main_run_server
                   main_run(run_args)
                 File "/home/travis/miniconda/envs/test/lib/python3.6/site-packages/asv/benchmark.py", line 913, in main_run
                   result = benchmark.do_run()
                 File "/home/travis/miniconda/envs/test/lib/python3.6/site-packages/asv/benchmark.py", line 412, in do_run
                   return self.run(*self._current_params)
                 File "/home/travis/miniconda/envs/test/lib/python3.6/site-packages/asv/benchmark.py", line 506, in run
                   min_run_count=self.min_run_count)
                 File "/home/travis/miniconda/envs/test/lib/python3.6/site-packages/asv/benchmark.py", line 569, in benchmark_timing
                   timing = timer.timeit(number)
                 File "/home/travis/miniconda/envs/test/lib/python3.6/timeit.py", line 178, in timeit
                   timing = self.inner(it, self.timer)
                 File "<timeit-src>", line 6, in inner
                 File "/home/travis/build/astropy/astropy-benchmarks/benchmarks/modeling/compound.py", line 77, in time_large
                   r, d, = self.model(x_no_units_large * u.pix, x_no_units_large * u.pix)
                 File "/home/travis/miniconda/envs/test/lib/python3.6/site-packages/astropy/modeling/core.py", line 386, in __call__
                   __call__, args, [('model_set_axis', None)])
                 File "/home/travis/miniconda/envs/test/l/travis/miniconda/envs/test/lib/python3.6/site-packages/astropy/modeling/core.py", line 2349, in <lambda>
                   (evaluate(*chain(inputs, islice(params, n_params))),)
                 File "/home/travis/miniconda/envs/test/lib/python3.6/site-packages/astropy/modeling/functional_models.py", line 479, in evaluate
                   return x + offset
                 File "/home/travis/miniconda/envs/test/lib/python3.6/site-packages/astropy/units/quantity.py", line 418, in __array_prepare__
                   .format(function.__name__))
               astropy.units.core.UnitsError: Can only apply 'add' function to dimensionless quantities when other argument is not a quantity (unless the latter is all zero/infinity/nan)

Would you have any time to look into this? (in case it is a regression in astropy)
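
For context, the UnitsError above can be reproduced in isolation; this is a hedged sketch (not the benchmark code) that mirrors the `return x + offset` step in Shift.evaluate, where a unitless offset is added to an input carrying pixel units:

    import astropy.units as u

    # Adding a plain float to a Quantity with non-dimensionless units raises the
    # same kind of UnitsError as in the traceback; the values here are made up.
    x = [1.0, 2.0, 3.0] * u.pix
    offset = 0.5                 # unitless, and not all zero/inf/nan
    try:
        x + offset
    except u.UnitsError as exc:
        print(exc)               # "Can only apply 'add' function to dimensionless quantities ..."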

How to see which code is covered or not by benchmarks?

Recently @eerovaher brought up this good question: it would be nice to see which code is actually benchmarked (codecov-style) without having to dig through each benchmark module.

Not sure how hard this is to do. Running coverage during actual benchmarking would give inaccurate timings, so the coverage measurement would have to be a separate job that only measures coverage, not timing. 💭
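
One hedged sketch of what such a coverage-only job could look like (the module names are assumptions, not the real layout): import each benchmark module under coverage.py and call its time_* functions once, ignoring the timings entirely:

    import importlib
    import inspect

    import coverage

    # Coverage-only job: no timings are recorded; we just want to see which
    # astropy code paths the benchmark functions actually exercise.
    cov = coverage.Coverage(source=["astropy"])
    cov.start()

    for modname in ["benchmarks.units", "benchmarks.coordinates"]:  # assumed names
        mod = importlib.import_module(modname)
        for name, func in inspect.getmembers(mod, inspect.isfunction):
            if name.startswith("time_"):
                func()           # run once; the result/timing is irrelevant here

    cov.stop()
    cov.save()
    cov.html_report()            # codecov-style per-line report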

Profile/document the details of the performance enhancement from astropy#7324

Ported from astropy/astropy#7521

astropy/astropy#7324 created a substantial speedup in the coordinate-matching functionality. In astropy/astropy#7324 (comment) @ideasrule gave a good set of tests (suggested by @adrn) on this enhancement. We should probably find someplace to record the results of these tests. These could be put in as benchmarks on airspeed velocity? Or possibly just documented in the coordinates docs? Open to ideas...
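
If the airspeed velocity route is chosen, a benchmark along these lines could record it; this is only a sketch with assumed sizes and names, not an agreed-upon test:

    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    class TimeCoordinateMatching:
        """Sketch of an asv benchmark for the coordinate-matching speedup."""

        def setup(self):
            rng = np.random.RandomState(12345)
            n = 10000                                # assumed catalogue size
            self.coords = SkyCoord(ra=rng.uniform(0, 360, n) * u.deg,
                                   dec=rng.uniform(-90, 90, n) * u.deg)
            self.catalog = SkyCoord(ra=rng.uniform(0, 360, n) * u.deg,
                                    dec=rng.uniform(-90, 90, n) * u.deg)

        def time_match_to_catalog_sky(self):
            self.coords.match_to_catalog_sky(self.catalog)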

MNT: Rename default branch from master to main

Please rename your default branch from master to main as part of astropy/astropy-project#151 , preferably by 2021-03-22. Also a friendly reminder to check documentation, workflows, etc., and update them accordingly. Please don't forget to communicate this change to your users and stakeholders. To summarize:

  • Rename branch from master to main, preferably by 2021-03-22.
  • Update documentation, workflows, etc., accordingly. -- See #104
  • Communicate this change to your users and stakeholders. -- N/A

Once this is taken care of, you may close this issue.

This is an automated issue. If this is opened in error, please let @pllim know!

LICENSE missing

We would like to use your cron.sh script for MDAnalysis/benchmarks#2 but there is no license.

Would you be willing to add an open source license that would make it clear how your code can be used by other projects?

Many thanks!

IPAC benchmark failing

The following benchmarks are currently failing:

[ 62.50%] ··· Running io_ascii.time_ipac.IPACSuite.time_data_str_vals                                                                                                             failed
[ 62.50%] ····· Traceback (most recent call last):
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 792, in <module>
                    commands[mode](args)
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 769, in main_run
                    result = benchmark.do_run()
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 453, in do_run
                    return self.run(*self._current_params)
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 528, in run
                    timing = timer.timeit(number)
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/timeit.py", line 186, in timeit
                    timing = self.inner(it, self.timer)
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/timeit.py", line 87, in inner
                    _func()
                  File "/Users/tom/tmp/astropy-benchmarks/benchmarks/io_ascii/time_ipac.py", line 38, in time_data_str_vals
                    data.str_vals()
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/site-packages/astropy/io/ascii/core.py", line 801, in str_vals
                    self._set_fill_values(self.cols)
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/site-packages/astropy/io/ascii/core.py", line 755, in _set_fill_values
                    colnames = set(self.header.colnames)
                AttributeError: 'IpacData' object has no attribute 'header'

[ 75.00%] ··· Running io_ascii.time_ipac.IPACSuite.time_get_cols                                                                                                                   failed
[ 75.00%] ····· Traceback (most recent call last):
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 792, in <module>
                    commands[mode](args)
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 769, in main_run
                    result = benchmark.do_run()
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 453, in do_run
                    return self.run(*self._current_params)
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 528, in run
                    timing = timer.timeit(number)
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/timeit.py", line 186, in timeit
                    timing = self.inner(it, self.timer)
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/timeit.py", line 87, in inner
                    _func()
                  File "/Users/tom/tmp/astropy-benchmarks/benchmarks/io_ascii/time_ipac.py", line 27, in time_get_cols
                    self.header.get_cols(self.lines)
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/site-packages/astropy/io/ascii/ipac.py", line 212, in get_cols
                    if self.ipac_definition == 'right':
                AttributeError: 'IpacHeader' object has no attribute 'ipac_definition'

[ 87.50%] ··· Running io_ascii.time_ipac.IPACSuite.time_header_str_vals                                                                                                            failed
[ 87.50%] ····· Traceback (most recent call last):
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 792, in <module>
                    commands[mode](args)
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 769, in main_run
                    result = benchmark.do_run()
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 453, in do_run
                    return self.run(*self._current_params)
                  File "/Users/tom/miniconda3/envs/dev35/lib/python3.5/site-packages/asv-0.2.dev966+211cc811-py3.5.egg/asv/benchmark.py", line 528, in run
                    timing = timer.timeit(number)
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/timeit.py", line 186, in timeit
                    timing = self.inner(it, self.timer)
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/timeit.py", line 87, in inner
                    _func()
                  File "/Users/tom/tmp/astropy-benchmarks/benchmarks/io_ascii/time_ipac.py", line 33, in time_header_str_vals
                    header.str_vals()
                  File "/Users/tom/tmp/astropy-benchmarks/env/705fbb2b350520526769cb8754181ec0/lib/python3.4/site-packages/astropy/io/ascii/ipac.py", line 278, in str_vals
                    null = col.fill_values[core.masked]
                AttributeError: 'Column' object has no attribute 'fill_values'

[100.00%] ··· Running io_ascii.time_ipac.IPACSuite.time_splitter   

@taldcroft @hamogu - is this a real regression, or should the benchmarks be updated? (if the latter, could you open a PR to update them?)
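
If the benchmarks end up needing an update, one option (shown as a hedged sketch only) is to exercise IPAC reading through the public ascii.read interface, which is less sensitive to changes in the internal header/data classes than the attributes failing above:

    from astropy.io import ascii

    # Sketch only: time IPAC parsing through the stable public API instead of the
    # internal IpacHeader/IpacData machinery that changed here. `lines` stands in
    # for the same in-memory table text the current benchmarks build in setup().
    def time_read_ipac(lines):
        return ascii.read(lines, format="ipac")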

Some table memory benchmarks failing

I haven't had a chance to investigate yet:

[ 19.68%] ··· io_ascii.table.TableSuite.mem_table_init                                                                                                   failed
[ 19.68%] ···· Traceback (most recent call last):
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/benchmark.py", line 1184, in main_run_server
                   main_run(run_args)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/benchmark.py", line 1058, in main_run
                   result = benchmark.do_run()
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/benchmark.py", line 537, in do_run
                   return self.run(*self._current_params)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/benchmark.py", line 729, in run
                   sizeof2 = asizeof.asizeof([obj, obj])
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2708, in asizeof
                   s = _asizer.asizeof(*t)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2137, in asizeof
                   return sum(self._sizer(o, 0, 0, None) for o in objs)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2137, in <genexpr>
                   return sum(self._sizer(o, 0, 0, None) for o in objs)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2075, in _sizer
                   s += z(o, i, d, None)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2075, in _sizer
                   s += z(o, i, d, None)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2075, in _sizer
                   s += z(o, i, d, None)
                 [Previous line repeated 1 more time]
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2050, in _sizer
                   infer=self._infer_)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 1711, in _typedef
                   v.set(**_numpy_kwds(obj))
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 1269, in set
                   self.reset(**d)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 1278, in reset
                   raise ValueError('invalid option: %s=%r' % ('base', base))
               ValueError: invalid option: base=-7880

[ 19.84%] ··· io_ascii.table.TableSuite.mem_table_outputter                                                                                              failed
[ 19.84%] ···· Traceback (most recent call last):
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/benchmark.py", line 1184, in main_run_server
                   main_run(run_args)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/benchmark.py", line 1058, in main_run
                   result = benchmark.do_run()
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/benchmark.py", line 537, in do_run
                   return self.run(*self._current_params)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/benchmark.py", line 729, in run
                   sizeof2 = asizeof.asizeof([obj, obj])
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2708, in asizeof
                   s = _asizer.asizeof(*t)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2137, in asizeof
                   return sum(self._sizer(o, 0, 0, None) for o in objs)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2137, in <genexpr>
                   return sum(self._sizer(o, 0, 0, None) for o in objs)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2075, in _sizer
                   s += z(o, i, d, None)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2075, in _sizer
                   s += z(o, i, d, None)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2075, in _sizer
                   s += z(o, i, d, None)
                 [Previous line repeated 1 more time]
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 2050, in _sizer
                   infer=self._infer_)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 1711, in _typedef
                   v.set(**_numpy_kwds(obj))
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 1269, in set
                   self.reset(**d)
                 File "/home/tom/python/dev/lib/python3.7/site-packages/asv/extern/asizeof.py", line 1278, in reset
                   raise ValueError('invalid option: %s=%r' % ('base', base))
               ValueError: invalid option: base=-7880
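
As context, asv measures mem_* benchmarks by running its bundled asizeof over the returned object, which is what fails on the Table here. A hedged workaround sketch (class name and data shapes are assumed) is to add peakmem_* variants, which record peak process memory and bypass asizeof, while the failure is investigated:

    import numpy as np
    from astropy.table import Table

    class TableMemSuite:
        def setup(self):
            self.data = {"a": np.arange(100000), "b": np.random.random(100000)}

        def mem_table_init(self):
            return Table(self.data)  # size measured via asv's asizeof (fails above)

        def peakmem_table_init(self):
            Table(self.data)         # peak RSS of the process is recorded instead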

Refactor benchmarking process to run relative benchmarks for PRs

Rename some confusing benchmark test names

For example, time_fits has nothing to do with astropy.time. The suggestion is to name benchmarks after the actual astropy sub-packages. cc @bsipocz @astrofrog

NOTE: This needs to wait until the current benchmarking run is done. The renaming also needs to be applied to the results across multiple branches.

How we Visualize Benchmarks

Throwing out a suggestion. Rather than ASV I recommend https://docs.codspeed.io/.
CodSpeed is really good, free for open source, and integrates deeply with pytest (see pytest-benchmark). With the CodSpeed + pytest ecosystem it is easy for us to have a dedicated benchmark test suite and also have benchmarked tests in the normal test suite.
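
For a sense of what this would look like, here is a hedged sketch of a pytest-style benchmark using the `benchmark` fixture that pytest-benchmark provides (pytest-codspeed exposes a compatible fixture); the workload is just a placeholder:

    import numpy as np
    import astropy.units as u

    def test_quantity_addition(benchmark):
        # The `benchmark` fixture calls the function repeatedly and records timings.
        q = np.arange(10000) * u.m
        benchmark(lambda: q + q)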
