
aimfast's Introduction

aimfast

An Astronomical Image Fidelity Assessment Tool

Main website: aimfast.rtfd.io

Introduction

Image fidelity is a measure of the accuracy of the reconstructed sky brightness distribution. A related metric, dynamic range, is a measure of the degree to which imaging artifacts around strong sources are suppressed, which in turn implies a higher fidelity of the on-source reconstruction. Moreover, the choice of image reconstruction algorithm also affects the correctness of the on-source brightness distribution.

Installation

Installation from source (run in the working directory where the source is checked out):

$ pip install .

The package is also available on PyPI, allowing installation via

$ pip install aimfast
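A quick way to confirm the installation is to print the command-line help:

$ aimfast --help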

License

This project is licensed under the GNU General Public License v3.0; see the LICENSE file for details.

Contribute

Contributions are always welcome! Please ensure that you adhere to our coding standards (PEP 8).

aimfast's People

Contributors: 3rico, athanaseus, sphemakh


aimfast's Issues

Reading -psf as a directory and not a size

When running the command to get residual comparison of two images:

aimfast --residual-image image_DI_Clustered.DeeperDeconv.app.residual.fits --tigger-model image_DI_Clustered.DeeperDeconv.app.restored-pybdsf.lsm.html --normality-test normaltest -psf 7.5 -af 5 --html-prefix kms --outfile cluster_fidelity_results.json

I get this error:

aimfast.aimfast - 2021-08-04 20:43:26,050 INFO - Welcome to AIMfast
aimfast.aimfast - 2021-08-04 20:43:26,050 INFO - Version: 1.0.0
Traceback (most recent call last):
  File "/home/kincaid/venv/bin/aimfast", line 5, in <module>
    main()
  File "/home/kincaid/venv/lib/python3.6/site-packages/aimfast/aimfast.py", line 2475, in main
    psf_size = measure_psf(args.psf)
  File "/home/kincaid/venv/lib/python3.6/site-packages/aimfast/aimfast.py", line 217, in measure_psf
    with fitsio.open(psffile) as hdu:
  File "/home/kincaid/venv/lib/python3.6/site-packages/astropy/io/fits/hdu/hdulist.py", line 164, in fitsopen
    lazy_load_hdus, **kwargs)
  File "/home/kincaid/venv/lib/python3.6/site-packages/astropy/io/fits/hdu/hdulist.py", line 402, in fromfile
    lazy_load_hdus=lazy_load_hdus, **kwargs)
  File "/home/kincaid/venv/lib/python3.6/site-packages/astropy/io/fits/hdu/hdulist.py", line 1051, in _readfrom
    fileobj = _File(fileobj, mode=mode, memmap=memmap, cache=cache)
  File "/home/kincaid/venv/lib/python3.6/site-packages/astropy/utils/decorators.py", line 535, in wrapper
    return function(*args, **kwargs)
  File "/home/kincaid/venv/lib/python3.6/site-packages/astropy/io/fits/file.py", line 175, in __init__
    self._open_filename(fileobj, mode, overwrite)
  File "/home/kincaid/venv/lib/python3.6/site-packages/astropy/io/fits/file.py", line 564, in _open_filename
    self._file = fileobj_open(self.name, IO_FITS_MODES[mode])
  File "/home/kincaid/venv/lib/python3.6/site-packages/astropy/io/fits/util.py", line 397, in fileobj_open
    return open(filename, mode, buffering=0)
FileNotFoundError: [Errno 2] No such file or directory: '7.5'

According to the help text, -psf accepts either a file or a beam size in arcsec:

  -psf PSF, --psf-image PSF
                        Name of the point spread function file or psf size in
                        arcsec

However, it seems only the file-lookup option is being exercised.

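A minimal, hypothetical sketch of how the ambiguity could be handled before calling measure_psf (get_psf_size is a placeholder name, not aimfast's actual code path):

import os

from aimfast.aimfast import measure_psf  # aimfast's own PSF measurement routine

def get_psf_size(psf_arg):
    """Return the PSF size in arcsec from either a FITS file or a numeric size."""
    if os.path.isfile(str(psf_arg)):
        # The argument is an existing file: measure the PSF from the FITS image.
        return measure_psf(psf_arg)
    try:
        # Otherwise interpret the argument as a size in arcsec.
        return float(psf_arg)
    except (TypeError, ValueError):
        raise ValueError(
            f"-psf/--psf-image must be a FITS file or a size in arcsec, got {psf_arg!r}")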

--compare-model output HTML is broken with Firefox

Not working as I expected (this is with master or release_100, same result):

$ aimfast --compare-model pybdsm_1564201853-MFS-image.lsm.html pybdsm_1558752655-MFS-image.lsm.html --tol 5
aimfast.aimfast - 2020-09-15 11:54:24,536 INFO - Welcome to AIMfast
aimfast.aimfast - 2020-09-15 11:54:24,538 INFO - Number of model pairs to compare: 1
Loading /tmp/tmp3641guwe.txt: ASCII table
Loading pybdsm_1564201853-MFS-image.lsm.html: Tigger sky model
Loading /tmp/tmpepd6qswc.txt: ASCII table
Loading pybdsm_1558752655-MFS-image.lsm.html: Tigger sky model
aimfast.aimfast - 2020-09-15 11:54:32,503 INFO - Model 1 source: 906
aimfast.aimfast - 2020-09-15 11:54:32,503 INFO - Model 2 source: 986
aimfast.aimfast - 2020-09-15 11:54:32,503 INFO - Number of sources matched: 696
WARNING:bokeh.core.validation.check:W-1000 (MISSING_RENDERERS): Plot has no renderers: Figure(id='1044', ...)
aimfast.aimfast - 2020-09-15 11:54:32,686 INFO - Saving photometry comparisons in InputOutputFluxDensity.html
BokehUserWarning: ColumnDataSource's columns must be of the same length. Current lengths: ('s1_flux', 906), ('s1_label', 906), ('s2_flux', 986), ('s2_label', 986), ('x1', 906), ('x2', 986), ('y1', 906), ('y2', 986)
WARNING:bokeh.core.validation.check:W-1000 (MISSING_RENDERERS): Plot has no renderers: Figure(id='1851', ...)
aimfast.aimfast - 2020-09-15 11:54:32,812 INFO - Saving astrometry comparisons in InputOutputPosition.html
aimfast.aimfast - 2020-09-15 11:54:32,812 INFO - Dumping results into the 'fidelity_results.json' file
(caracal) oms@young:~/projects/OldDevils/selfcal-4C12.03/test$

Gives me a rather meaningless InputOutputFluxDensity.html:


I also tried with the PyPI version of aimfast. I had to use aimfast --compare-model pybdsm_1564201853-MFS-image.lsm.html:pybdsm_1558752655-MFS-image.lsm.html there, and I got a plot, but all the points were perfectly on the 1:1 line, so that can't be right either.

Input files are under /net/young/home/oms, if you want to try it.
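For context, the BokehUserWarning quoted above is raised whenever a ColumnDataSource is built from columns of unequal length, which matches the 906 vs 986 source counts in the log. A minimal reproduction (plain bokeh, not aimfast code):

from bokeh.models import ColumnDataSource

# Columns of different lengths trigger the same
# "ColumnDataSource's columns must be of the same length" warning.
source = ColumnDataSource(data=dict(
    s1_flux=[1.0] * 906,   # lengths chosen to mirror the log above
    s2_flux=[1.0] * 986,
))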

aimfast applied to self-calibrated images


Self-calibration in a nutshell (see the attached self_cal diagram):

  • This issue involves creating a meerkathi worker that performs self-calibration.
  • After every imaging step, aimfast will be used to assess the image quality.
  • Stopping criteria (a rough sketch of such a loop follows this list):
    • Set the number of self-cal iterations to perform: no need to iterate indefinitely.
    • Set a tolerance [for DR, 4-moments]: no need to iterate further if the image quality doesn't improve.
    • Set the expected 4-moments of the residual flux distribution [say, Gaussian]: no need to iterate further once the statistical conditions are satisfied.
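A rough sketch of such a stopping loop, assuming placeholder helpers run_selfcal_iteration and residual_stats (neither is an existing meerkathi or aimfast function):

MAX_ITER = 5      # upper bound on self-cal iterations
DR_TOL = 0.05     # stop if the dynamic range improves by less than 5%

prev_dr = None
for i in range(MAX_ITER):
    residual_image = run_selfcal_iteration(i)   # placeholder: one imaging + self-cal step
    stats = residual_stats(residual_image)      # placeholder: returns DR, skewness, excess kurtosis
    # Stop if the residuals already look Gaussian (skewness ~ 0, excess kurtosis ~ 0).
    if abs(stats["SKEW"]) < 0.1 and abs(stats["KURT"]) < 0.1:
        break
    # Stop if the dynamic range is no longer improving beyond the tolerance.
    if prev_dr is not None and (stats["DR"] - prev_dr) / prev_dr < DR_TOL:
        break
    prev_dr = stats["DR"]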

NVSS query and cross-matching script

Just discussing this with @IanHeywood. He's still seeing flux scale issues, and there are reports of the same problem from some Caracal users. There seems to be some doubt and confusion about the issue in various quarters. Anyway, that's a separate discussion -- what I'd like to see is a reliable diagnostic that we could run routinely with any eligible observation.

Occurs to me that aimfast offers the best base. So, could we have a script that, given an image:

  • Runs PyBDSF on it

  • Queries NVSS to find all suitable sources in the field

  • Cross-matches fluxes and plots the results (optionally, correcting for the primary beam using one of our models, if the input image is not already PB-corrected)

There are some subtleties to this, so @IanHeywood let's document and discuss them here.

If we can get this into a state where this is a standard pipeline product (or at least zero-effort to run), we can put a lot more firm ground under our flux scale discussions.
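As a starting point for the NVSS query step, one option is astroquery's VizieR interface. A rough sketch (the catalogue identifier VIII/65/nvss and the helper name query_nvss are assumptions, and the PyBDSF and cross-matching steps are left out):

from astropy import units as u
from astropy.coordinates import SkyCoord
from astroquery.vizier import Vizier

def query_nvss(ra_deg, dec_deg, radius_deg=1.0):
    """Return NVSS sources around the given field centre (hypothetical helper)."""
    Vizier.ROW_LIMIT = -1                                   # no row cap on the query
    centre = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg, frame="icrs")
    result = Vizier.query_region(centre, radius=radius_deg * u.deg,
                                 catalog="VIII/65/nvss")    # NVSS catalogue on VizieR
    return result[0] if result else None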

No Output plots created from --compare-models

When I run the command:

aimfast --compare-models image1-pybdsf.lsm.html image2-pybdsf.lsm.html --html-prefix cluster -fp log
I get:


Check:

WARNING - No photometric plot created for Zwcl2341-1_UGMRT_robust0_tap7-MFS-image.fitsimage.rgd.conv-pybdsf.lsm.html
WARNING - No plot astrometric created for Zwcl2341-1_UGMRT_robust0_tap7-MFS-image.fitsimage.rgd.conv-pybdsf.lsm.html

So no .html flux plot is created, only the fidelity_results.json file.

Data points missing from plots

The data points are missing from my flux offset and position offset plots. I am currently running the following command:
aimfast --compare-images /home/mika/ladumaredux/previousladumaimages/laduma1602964865_sdp_l0.880~933MHz.J03323-28075_im_2.image.tt0.fits /home/mika/ladumaredux/pipeline/output-20240513/1602964865_sdp_l0.8k-J03323_28075-corr-2GC2-MFS-image.fits -c default_sf_config.yml -sf pybdsf --html-prefix laduma5 -fp inout.
Any suggestions?
[flux offset and position offset plots attached]

Aimfast for MOSS

You should get the following results using aimfast commands. First, install it:

$ pip install aimfast

Since we will use restored images, we have to run a source finder to generate sky models for the images. aimfast ships with both PyBDSF and Aegean as source finders, and they will save the models in FITS and Tigger LSM formats.

Generating a source finder config file:

$ aimfast source-finder -gc cluster_sf_config.yml

This config contains all the source finder parameters (and any missing ones can be added). I edited the config to raise the thresholds, since the defaults also detect artefacts:

thresh_isl: 8.0
thresh_pix: 10.0

Running aimfast as follows will run the source finder and cross-match the resulting sky models (comparing both source flux and position offsets):

$ aimfast --compare-images cubical/image_DD_beam_2poly_beam_brighptsrc5_mask.app.restored.fits kms/image_DI_Clustered.DeeperDeconv.AP.app.restored.fits -c cluster_sf_config.yml -sf pybdsf --html-prefix cluster -fp inout

-fp is the type of flux plot, i.e. inout plots model1 fluxes vs model2 fluxes, log plots log(model1) fluxes vs log(model2) fluxes, and snr plots log(model1) fluxes vs the model1/model2 flux ratio.

NB: Running source finders might take some time, depending on whether the images contain very bright extended sources.
Once you have the sky models, you can compare them directly with --compare-models.

Screenshot from 2021-08-02 10-38-18

Screenshot from 2021-08-02 10-38-36

$ aimfast --compare-models cubical/image_DD_beam_2poly_beam_brighptsrc5_mask.app.restored-pybdsf.lsm.html kms/image_DI_Clustered.DeeperDeconv.AP.app.restored-pybdsf.lsm.html --html-prefix cluster3 -fp log

Screenshot from 2021-08-02 11-21-57

NB: log plots make things easier when you have many faint sources plus one very bright source; otherwise everything will be clustered at the low end with one point high up in the plot. This should not be a big issue either way, since the plots are interactive, meaning you can zoom in and also get source properties.

Also, by default only point sources are cross-matched; to compare all sources, use the -as flag.

R² gives an overall sense of how well the source fluxes in one sky model fit the other: a value close to 1 implies a near-perfect correlation between the two sky models. It cannot be relied on alone, but it can be used in conjunction with the RMSE, which gives the average deviation of the flux differences in units of source flux density (and is sensitive to outliers). The intercept and the gradient are then used to confirm that the fit follows the expected relation, i.e. c = 0, g = 1.
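For concreteness, a minimal sketch of how these statistics could be computed for two matched flux arrays (plain numpy/scipy; not necessarily how aimfast computes them internally):

import numpy as np
from scipy import stats

def flux_fit_stats(flux1, flux2):
    """Linear fit and goodness-of-fit between two matched flux arrays (same units)."""
    flux1, flux2 = np.asarray(flux1), np.asarray(flux2)
    fit = stats.linregress(flux1, flux2)             # gradient g and intercept c
    r2 = fit.rvalue ** 2                             # coefficient of determination
    rmse = np.sqrt(np.mean((flux2 - flux1) ** 2))    # average flux deviation
    return {"gradient": fit.slope, "intercept": fit.intercept,
            "R2": r2, "RMSE": rmse}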

Looking at the plots and the stats above, it seems the models resemble each other well, with only small differences apart from the number of sources detected.

MeerKAT vs NVSS

Hey @Kincaidr we can do the NVSS comparison over here.

I used the image: image_DD_beam_2poly_beam_brighptsrc5_mask_uvsel_gmrt.restored.fits.image_J2000.rgd.conv.fits

Using aimfast with a source finder you can just run:

aimfast --compare-online image_DD_beam_2poly_beam_brighptsrc5_mask_uvsel_gmrt.restored.fits.image_J2000.rgd.conv.fits -tol 1 -as

By default, this compares against the NVSS catalogue. At the moment NVSS and SUMSS are supported. To increase the width of the field to query, use e.g. -w 4.0d (in degrees).

NB: Increasing -tol will increase the number of sources matched, but this yields poorer results.

Screenshot from 2021-09-14 13-49-51

Screenshot from 2021-09-14 13-49-38

a bag of minor suggestions

(Continued from https://github.com/o-smirnov/meerkat-open-time/issues/18, as this is a more appropriate place)

All right, aimfast works like a charm now @Athanaseus (me having zapped the Island of Doom)!


Some questions/suggestions:

  • What does "R2" mean? https://en.wikipedia.org/wiki/Coefficient_of_determination?

  • Can we also fit a slope through a 0 intercept (and show it on the same plot)?

  • I find "intercept" a bit difficult to interpret. As reported above, it's the log of a fitted flux offset, I think. Might be good to convert it back into mJy?

  • I assume the solid blue line is slope 1, intercept 0? In this case, please draw it all the way to (0,0), as that makes it clearer what it is. Also, I suggest swapping the linestyles: the fitted slope(s) are the important lines so they should be solid, and the slope=1 line should be less prominent (i.e. dashed), but maybe that's just me.

  • Finally: in the plot above, it's clear the fitted slope doesn't actually fit the (brighter) sources all that well; they all seem to lie consistently above it. I think it's the cluster of weak peripheral sources below the slope that's biasing the fit (and yay for the colour scheme, we wouldn't be able to diagnose the problem so easily without it!). We could think of clever schemes to downweight peripheral sources, but for a start, could we add an option for a distance cutoff?

use a label instead of a path when creating stats dictionary

Using a path as a key is not compatible with stimela, since the paths inside a container are not the same as the paths on the host, so it becomes impossible to automate things. A simple solution is to use a user-supplied label to create the keys:

{
  <label>-residual: stats,
  <label>-restored: stats,
  ...
}
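A minimal sketch of what that could look like (make_stats_dict is a hypothetical helper illustrating the suggestion above, not existing aimfast code):

def make_stats_dict(label, residual_stats, restored_stats):
    """Key the per-image statistics on a user-supplied label instead of a path."""
    return {
        f"{label}-residual": residual_stats,
        f"{label}-restored": restored_stats,
    }

# The same keys are produced on the host and inside a stimela container, e.g.:
results = make_stats_dict("selfcal1", {"RMS": 1.2e-5}, {"DR": 5000})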
