weaklensingdeblending's Introduction

Software for Weak Lensing Deblending Studies

Weak lensing fast simulations and analysis for the LSST Dark Energy Science Collaboration.

This code was primarily developed to study the effects of overlapping sources on shear estimation, photometric redshift algorithms, and deblending algorithms.

User and reference documentation is hosted at http://weaklensingdeblending.readthedocs.io/ and also available as a single pdf document.

If you use this software, please cite 2021JCAP...07..043S and the associated Zenodo DOI.

weaklensingdeblending's People

Contributors

dkirkby, fjaviersanchez, ismael-mendoza, jmeyers314, mr-superonion, rmandelb, sowmyakth

weaklensingdeblending's Issues

How to rescale an image to a zero point

I'm generating images with this package and I'd like to rescale them to a common zero point, say 30.0. This puts all the images on the same footing so that, e.g., they can be coadded (say, coadding r+i+z) and it is easy to compute magnitudes downstream (all bands on the same zero point).

What formula should I apply to do this conversion?
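
The standard conversion (a sketch of the usual AB zero-point relation, not code from this package) multiplies the pixel values by a flux ratio derived from the two zero points:

import numpy as np

def rescale_to_zero_point(image, zp_image, zp_target=30.0):
    # With m = zp - 2.5*log10(counts), matching magnitudes between the two
    # zero points requires scaling counts by 10**(0.4*(zp_target - zp_image)).
    return np.asarray(image) * 10 ** (0.4 * (zp_target - zp_image))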

add Euclid instrument

I'm opening this issue to keep track of the work to implement the Euclid instrument/filters in WLD. The main issue is going to be getting the correct magnitudes for the OneDegSq.fits catalog.

Adding Fisher bias calculation.

Adding bias calculation using the Fisher formalism to the package would be useful. To calculate the biases it is first necessary to obtain the second partial derivatives of the galaxy image with respect to its parameters.

An idea is to calculate the second partials in render.py in the same way as the first partials and store all the partials together in the datacubes (behind an optional flag, since this could make the FITS file very large). Then we would need to add the corresponding functions in analysis.py to extract the biases from the second partials. Bias images could also be displayed by adding the corresponding functions in fisher.py.
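
A minimal sketch of how the second partials could be computed by central finite differences (the render callable is a hypothetical stand-in for the rendering in render.py, and the step sizes would need tuning):

import numpy as np

def second_partials(render, params, steps):
    # d2[i, j] approximates d^2(image)/(dp_i dp_j) via central differences:
    # [f(+,+) - f(+,-) - f(-,+) + f(-,-)] / (4 h_i h_j).
    params = np.asarray(params, dtype=float)
    n = len(params)
    d2 = np.empty((n, n) + render(params).shape)
    for i in range(n):
        for j in range(n):
            def shifted(si, sj):
                p = params.copy()
                p[i] += si * steps[i]
                p[j] += sj * steps[j]
                return render(p)
            d2[i, j] = (shifted(+1, +1) - shifted(+1, -1)
                        - shifted(-1, +1) + shifted(-1, -1)) / (4 * steps[i] * steps[j])
    return d2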

Option to output only the survey image and not perform analysis

Add a command-line argument that lets the user generate only the GalSim-rendered survey image for a given input catalog, skipping the Fisher-formalism error computations and the source- and pixel-level property computations. This would be helpful for generating large field images for applications that do not require analysis results, saving the user compute time.
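
A possible form for the flag (a sketch; the option name is hypothetical):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--image-only', action='store_true',
                    help='Generate the GalSim survey image only, skipping the '
                         'Fisher-formalism and source/pixel-level analysis.')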

Update outputs at SLAC

While using the example files hosted at SLAC, the new version of the reader raises an error: ValueError: unknown record: PSF_SIGM

Either we update these files or we make the reader backward-compatible.

Review sky brightness calculation

We currently apply extinction to the sky brightness before converting it to elec/sec via Survey.get_flux. Is this the right thing to do?
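
Schematically, the current behavior described above looks like this (a sketch with assumed names and a simple linear extinction law, not the package's exact code):

def extincted_sky_rate(sky_mag, extinction_coeff, airmass, get_flux):
    # Apply extinction to the sky magnitude first, then convert to elec/sec
    # (get_flux stands in for Survey.get_flux).
    return get_flux(sky_mag + extinction_coeff * airmass)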

Zero points for LSST seem very faint

I was just going through some of the code and noticed that the zero points for LSST are set at:

  • ZP_r = 55.8
  • ZP_i = 41.5

I did some quick calculations and get numbers around 30, which matches my expectations from using other telescopes like Subaru. I can copy some detailed calculations if it is helpful.
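
For reference, if those numbers are detected rates for a 24th-magnitude source (the B0 = 24 convention used in the zero-point calculation quoted elsewhere on this page) rather than conventional magnitude zero points, the two conventions can be related as in this sketch:

import math

def conventional_zp(rate_at_ref, ref_mag=24.0):
    # Magnitude zero point zp such that m = zp - 2.5*log10(rate).
    return ref_mag + 2.5 * math.log10(rate_at_ref)

print(conventional_zp(55.8))  # about 28.4 for ZP_r = 55.8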

more realistic PSFs from externally produced models

A student working with me, Husni ( @hsnee ), would like to make it possible to use a more realistic PSF model with variation across the focal plane. He is working on this on a branch in the repo called external_psf. Before surprising you with a pull request, I thought it might be worth us discussing how this will be done, in case you have suggestions for the cleanest way to incorporate this feature into the code.

Going through the code in https://github.com/LSSTDESC/WeakLensingDeblending/blob/master/descwl/survey.py, it became clear to us that for a given image there is just a single PSF model, defined using various input quantities for the FWHM, beta, and shape.

What Husni wants to do is provide a model that gives an image (like a NumPy array or galsim.Image or a galsim.InterpolatedImage) of the PSF model as a function of position within the image. So there are a few questions:

  1. Inputs: we could imagine adding a kwarg called external_psf that defaults to False. It should either be False or a callable that takes the focal-plane (x, y) location and returns the PSF image. In the second case (a callable is passed), all the PSF-related input kwargs would be ignored, while the first case (False) preserves the current behavior; see the sketch after this list. Does this sound reasonable, or is there some other way we should do this?

  2. Survey.__init__ internals: these would have to be rearranged to either do what they currently do when external_psf=False, or skip the PSF-moment calculations when an external PSF model is passed in.

  3. GalaxyRenderer: needs to interact with the Survey object to get the PSF model for each object's position.

  4. Analysis routines in analysis.py: need to be updated to handle the bookkeeping for each object potentially having a different PSF model.
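
For concreteness, a rough sketch of the calling convention we have in mind (hypothetical names; make_external_psf is not part of the package):

import galsim

def make_external_psf(psf_array, pixel_scale):
    # Returns a callable mapping focal-plane position to a PSF model. A real
    # model would vary with (x, y); this toy version always returns the same
    # interpolated image, just to illustrate the interface.
    def external_psf(x, y):
        return galsim.InterpolatedImage(galsim.Image(psf_array, scale=pixel_scale))
    return external_psf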

Does this make sense? Any obvious show-stoppers? Alternate ideas?

Problem with lmfit?

I recently tried running the basic command:

./simulate.py --catalog-name OneDegSq.fits --image-width 300 --image-height 300 --output-name tutorial

in my fork, and I got the following error:

Traceback (most recent call last):
  File "./simulate.py", line 86, in <module>
    main()
  File "./simulate.py", line 78, in main
    results = analyzer.finalize(args.verbose,trace,args.calculate_bias)
  File "/Users/Ismael/WeakLensingDeblending/descwl/analysis.py", line 1003, in finalize
    bestfit = self.fit_galaxies([g1],deblended)
  File "/Users/Ismael/WeakLensingDeblending/descwl/analysis.py", line 661, in fit_galaxies
    minimum = lmfit.minimize(residuals,parameters)
  File "/Users/Ismael/miniconda2/lib/python2.7/site-packages/lmfit/minimizer.py", line 1543, in minimize
    return fitter.minimize(method=method)
  File "/Users/Ismael/miniconda2/lib/python2.7/site-packages/lmfit/minimizer.py", line 1242, in minimize
    return function(**kwargs)
  File "/Users/Ismael/miniconda2/lib/python2.7/site-packages/lmfit/minimizer.py", line 1072, in leastsq
    lsout = scipy_leastsq(self.__residual, vars, **lskws)
  File "/Users/Ismael/miniconda2/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 377, in leastsq
    shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
  File "/Users/Ismael/miniconda2/lib/python2.7/site-packages/scipy/optimize/minpack.py", line 26, in _check_func
    res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
  File "/Users/Ismael/miniconda2/lib/python2.7/site-packages/lmfit/minimizer.py", line 371, in __residual
    out = _nan_policy(out, nan_policy=self.nan_policy)
  File "/Users/Ismael/miniconda2/lib/python2.7/site-packages/lmfit/minimizer.py", line 1432, in _nan_policy
    raise ValueError("The input contains nan values")
ValueError: The input contains nan values

I tried a fresh git clone of the repository just to make sure the problem was not in my fork; the same command still produced the error above.
I also updated my version of lmfit, without success. Not sure what the problem could be.

Change equilibration code 'magic number'

Mike's suggestion in the paper review is to use exact powers of 2 for equilibrating matrices instead of the 10^4 we currently use. This would avoid the equilibration process itself introducing numerical noise, since 10^4 is not exactly invertible in double precision. I think an appropriate replacement would be either 2**13 or 2**14.

I don't think this would change the output (so it's not urgent), but it would be good to change at some point.
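
A quick way to see the difference in standard Python:

from decimal import Decimal

print(Decimal(1 / 2**13))   # 0.0001220703125 (exact binary fraction)
print(Decimal(1 / 10**4))   # 0.000100000000000000004792...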

Review LSST zeropoints

Calculate values for the latest LSST filters with airmass 1.2 using:

import astropy.units as u
from speclite.filters import load_filter, ab_reference_flux

def calculate_zero_point(band_name, B0=24):
    filt = load_filter(band_name)
    return (filt.convolve_with_function(ab_reference_flux) *
            10 ** (-0.4 * B0)).to(1 / (u.s * u.m**2))

This gives 1.349 / (s m^2) compared with 1.681 currently in the code (which is based on airmass 1). Can the airmass difference explain this, or did the filter curves change that much? The original values were committed in Dec 2014.

Implement color dependent galaxy shapes

The catalog bulge/disk/AGN normalizations are specified at a single wavelength but multiply different SEDs, so this issue is to combine the normalization parameters (fluxnorm_bulge/disk/agn) with the appropriate SEDs (sedname_bulge/disk/agn) to calculate the correct bulge/disk/AGN proportions in each band.
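
A rough sketch (not the package's implementation) of combining a catalog normalization with an SED to get per-band component fluxes, using SED and bandpass files bundled with GalSim; the fluxnorm values here are placeholders:

import galsim

bandpass = galsim.Bandpass('LSST_r.dat', wave_type='nm').withZeropoint('AB')
sed_bulge = galsim.SED('CWW_E_ext.sed', wave_type='ang', flux_type='flambda')
sed_disk = galsim.SED('CWW_Im_ext.sed', wave_type='ang', flux_type='flambda')

fluxnorm_bulge, fluxnorm_disk = 1.0e-17, 3.0e-17  # placeholder catalog values
flux_bulge = fluxnorm_bulge * sed_bulge.calculateFlux(bandpass)
flux_disk = fluxnorm_disk * sed_disk.calculateFlux(bandpass)
bulge_fraction = flux_bulge / (flux_bulge + flux_disk)  # r-band bulge fraction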

As a cross check, try to reproduce the extinction corrected AB apparent magnitudes in the input catalog, which will require using the av_b/d, rv_b/d and ext_model_b/d catalog params (@danielsf why no corresponding AGN params?).

@jmeyers314 I remember you saying you had already done a similar check. How good was the agreement? Did you check the catalog magnitudes, colors or both?

The galaxy input catalog schema is documented here (@danielsf is this still current?)

Update dbquery docs

PR #5 replaces mssql with sqlalchemy, so this changes the instructions for running dbquery. Note that sqlalchemy is included with anaconda.

Tag and create zenodo reference

The metadetect paper needs a zenodo DOI to cite this package.

Instructions are here.

We should tag first. Any last minute changes to include in the tag? (I don't see any open PRs).

grp covariances depending on simulation size??

@ismael2395 brought to my attention that when he does:

./simulate.py --catalog-name OneDegSq.fits --image-width 1800 --image-height 1800\
               --ra-center -0.05 --dec-center -0.45\
               --survey-name LSST --filter-band i --output-name demo2 --verbose

he gets different results for the group 402700140027 than when he runs:

./simulate.py --catalog-name OneDegSq.fits --image-width 600 --image-height 600\
               --ra-center -0.04160895 --dec-center -0.42082338888888887\
               --survey-name LSST --filter-band i --output-name demo3 --verbose

This happens even though the group is fully contained within both image footprints, so it requires some follow-up investigation.
@ismael2395: Please, feel free to elaborate more on this and/or include any plots you find relevant.
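
One quick way to compare the two runs (a sketch, assuming the outputs are FITS catalogs with a grp_id column, as in the standard WLD output):

from astropy.table import Table

grp = 402700140027
cat2 = Table.read('demo2.fits')
cat3 = Table.read('demo3.fits')
print(cat2[cat2['grp_id'] == grp])
print(cat3[cat3['grp_id'] == grp])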

Requirements disabled in setup.py

I'm installing the WeakLensingDeblending package on a bare system, and was surprised to have to install dependencies that weren't automatically grabbed by the setup.py.

It looks like the install requirements are commented out:

install_requires=[ ], #requirements,

Is there a reason for that? If it's because some packages can be annoying to install, it's possible to define several install configurations, and I can set that up: for instance, having the default not try to install galsim.
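
One possible arrangement (a sketch, not the repo's actual setup.py), keeping heavy dependencies like galsim behind an optional extra:

from setuptools import setup

setup(
    name='descwl',
    version='0.0',  # placeholder
    install_requires=['numpy', 'astropy'],  # assumed core dependencies
    extras_require={'sim': ['galsim'],      # pip install descwl[sim]
                    'fit': ['lmfit']},
)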

Self-documenting variable names?

Hi @dkirkby -

My student Husni ( @hsnee ) has been looking through your code to learn how to use it for his DESC work. While doing so, we found something a bit confusing:

You have some variables in the output tables with names like psf_X (for example, psf_sigm) which seem like they are meant to include properties of the PSF. But if we look at where this quantity is set, https://github.com/LSSTDESC/WeakLensingDeblending/blob/master/descwl/analysis.py#L917, it appears that this is actually a property of the galaxy after PSF convolution. This is certainly a minor point, but it might make the outputs a bit more self-documenting if the column names and/or variable names are a bit more clear about what this quantity means.

Create --no-hsm option

After commenting out the HSM fitting part in analysis.py, the running time decreases by roughly 30% (from 95 to 65 minutes for one 4K LSST chip), so it would be useful to include this option to speed up the program.

Running the same command produces different number of dropped objects

There is an issue with the current version of WLD. I realized that running exactly the same command multiple times, like:

./WeakLensingDeblending/simulate.py --catalog-name params/OneDegSq.fits --image-width 1800 --image-height 1800 --ra-center -0.05 --dec-center -0.45 --survey-name LSST --filter-band i --output-name demo1 --verbose --no-stamps --no-hsm --no-agn

will produce a different number of dropped objects from the Fisher inversion process. To be precise, running:

sum(cat['snr_grpf'] == 0)  # count objects dropped by the Fisher inversion

may produce different results even if cat was created from the exact same command.
