
jwst's Issues

Encapsulate validation

The main reason is simply encapsulation, but it would also functionally allow validation against the internal schemas.

IPC regression test is failing

The jwst regression tests, which are now running the git-based version of the jwst repo, are hitting an error in one of the tests of the IPC step. Details are at:

https://ssb.stsci.edu/pandokia/pandokia.cgi?query=detail&key_id=6419348

Specific error in the traceback is:

  File "/data4/iraf_conda/miniconda3/envs/rt_dev27/lib/python2.7/site-packages/jwst-0.6.0noop.dev174-py2.7-linux-x86_64.egg/jwst/ipc/ipc_corr.py", line 78, in ipc_correction
    "," + input_model.data.shape[-2])
TypeError("cannot concatenate 'str' and 'int' objects",)
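The root cause is plain string/integer concatenation in a log message. A minimal sketch of the kind of fix needed (the variable names below are reconstructed from the traceback, not the actual ipc_corr.py code):

```python
# Reconstructed illustration of the failing pattern in ipc_corr.py.
# Concatenating a str with an int raises TypeError in both Python 2
# and 3; formatting the shape values into the string avoids it.
shape = (2048, 2048)  # stand-in for input_model.data.shape

# Broken (as in the traceback):
#   msg = "input shape: " + shape[-1] + "," + shape[-2]  # TypeError
# Fixed:
msg = "input shape: {},{}".format(shape[-1], shape[-2])
```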

Add DQ flag condition

The PWG has requested the addition of one more DQ flag condition to the list of accepted values:

OTHER_BAD_PIXEL: A catch-all flag

How to redo NIRSpec background sub?

If I want to rerun the background subtraction step with my own background aperture, how do I do it? This question came up in the JWST DADF MOS Tools sprint.

Fix documenting version numbers

stpipe.Step.run records the version of the software used as __svn_revision__.
@jhunkeler What is the way to get the equivalent number now?

I'm commenting out this line in stpipe for now to try to get a clean run of all tests.

Update to wfs_combine method

The WFSC working group has requested that we update the algorithm that's used in the wfs_combine task to compute output pixel values. In the case where pixels in both input images are good, the output pixel value should be computed as the average of the two inputs, rather than simply using the value from input image 1. In cases where the input from either image 1 or 2 is bad, then we still use the current scheme of using the 1 good input value for the output.

This is a low priority item and should only be worked when all other Build 7 tasks have been completed. If necessary, it can be delayed to Build 7.1.
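The requested combination rule can be sketched as follows (hypothetical helper and argument names, not the actual wfs_combine API): average where both inputs are good, use the single good value where only one is good, and fall back to input 1 otherwise.

```python
import numpy as np

def combine_pixels(data1, data2, bad1, bad2):
    """Sketch of the requested wfs_combine output-pixel logic.

    bad1/bad2 are boolean masks flagging bad pixels in each input.
    """
    # Start from input 1 (the current scheme), but take input 2
    # wherever only input 1 is bad.
    out = np.where(bad1 & ~bad2, data2, data1)
    # Where both inputs are good, use the average instead.
    both_good = ~bad1 & ~bad2
    out = np.where(both_good, (data1 + data2) / 2.0, out)
    return out
```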

Ramp fit OLS with ngroups <= 2

Bryan Hilbert reports that his testing of the Build 6 ramp fit step revealed the following:

"The only hiccups I've seen so far are for a pixel with only 2 good groups (I marked all the rest as saturated). In that case the equal weighting returns the expected slope value, but the optimal weighting returns a zero.

Similarly, for the case where there is only a single good group, both the equal and optimal weighting strategies return a slope of zero."

I think the case of unweighted fitting with ngroups=1 may have already been fixed since the Build 6 delivery, but it should be checked. For the case of ngroups=2, no fitting should be done and the slope should be computed simply by differencing the 2 groups and dividing by the group exposure time. For ngroups=1 the slope should be computed by just dividing the one group value by the group exposure time, regardless of which OLS weighting scheme is used.
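The two special cases described above can be sketched as (hypothetical helper, not the actual jwst ramp_fit code):

```python
def slope_few_groups(groups, group_time):
    """Slope for ramps with 1 or 2 good groups, bypassing the fit.

    Applies regardless of the OLS weighting scheme in use.
    """
    ngroups = len(groups)
    if ngroups == 2:
        # No fitting: difference the two groups, divide by group time.
        return (groups[1] - groups[0]) / group_time
    if ngroups == 1:
        # Single group: divide the one group value by the group time.
        return groups[0] / group_time
    raise ValueError("use the normal fitting path for ngroups > 2")
```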

Remove sca_aper and wfscflag from core schema

The keyword dictionary working group has decided to remove the keywords SCA_APER and WFSCFLAG from JWST headers. SCA_APER is not needed and WFSCFLAG is a redundant entry with WFSVISIT.

Level 3 processing doesn't pay attention to strun --output_dir param

Mike Swam reports that when he tried to run wfs_combine level-3 processing using strun and specified the optional --output_dir param to designate a particular output directory in which to place the output product, the output product was still created in the working directory.

I'm guessing this is due to the fact that with level-3 processing, via any task like wfs_combine or calwebb_image3, it's the task/pipeline module itself that's saving the output file, using the name specified in the input ASN table, and is ignoring anything specified via the --output_dir param on the command line. With level-2 processing tasks/pipelines, the output product model is passed back up to stpipe to let it handle the creation of the output file and in that case stpipe does pay attention to any --output_dir that was specified.

Level-3 tasks and pipeline modules, such as wfs_combine, calwebb_image3, calwebb_spec3, etc. should be upgraded to use the output_dir path specified on the strun command line.

Documentation (DUH!)

This actually means producing docs that are not automatically created from code.

DataModel mocking

It would be nice, if it doesn't already exist, to have some function that will create a DataModel that has data (by default arrays of 1 or some such), keywords (with basic values matching their types), etc. Such a function may look like:

model = jwst.datamodels.mock(DataModel, data_shape=a_shape, data_fill=some_fill)
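A hedged sketch of what such a helper could return (simplified stand-in objects rather than the real DataModel class, and omitting the model-class argument for self-containment; all names here are hypothetical):

```python
import numpy as np
from types import SimpleNamespace

def mock_model(data_shape=(10, 10), data_fill=1.0):
    """Toy version of a datamodels mock() helper: a default-filled
    data array plus keywords with placeholder values matching
    their types."""
    meta = SimpleNamespace(
        instrument=SimpleNamespace(name="ANY"),          # string keyword
        observation=SimpleNamespace(date="2017-01-01"),  # date keyword
    )
    return SimpleNamespace(
        data=np.full(data_shape, data_fill, dtype=np.float32),
        meta=meta,
    )
```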

Core schema needs updates for SUBARRAY allowed values

The NIRCam team has sent a list of the currently allowed values for SUBARRAY. There are three values in that list that are not currently in the enum list of allowed values for the SUBARRAY keyword in our datamodels core schema. We need to add the values "SUBFP1A", "SUBFP1B", and "SUBSTRIPE256" to the list.

failing tests in datamodels

I have commented out a few failing tests in datamodels so that we could get a clean run.
These need to be fixed:

test_fits.test_extra_fits
test_fits.test_extra_fits_update
test_schema.test_date
test_schema.test_date2
test_schema.test_list2
test_schema.test_multislit_garbage
test_wcs.test_wcs

datamodels.items() recursion error under python2

I don't know whether we're still concerned about Python 2, but using an example from @hbushouse and the native conda-dev environment under Python 2, the following occurs. Under Python 3, all is fine:

$ conda create -n conda-dev -c http://ssb.stsci.edu/conda-dev python=2 jwst
$ source activate conda-dev
$ ipython
In [1]: from jwst.datamodels import MultiSlitModel

In [2]: m = MultiSlitModel('tests/data/jwst_nod2_cal.fits')

In [3]: m.items()
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-3-f16e51064509> in <module>()
----> 1 m.items()

/Users/eisenham/anaconda3/envs/conda-dev-py2/lib/python2.7/site-packages/jwst-0.6.0noop.dev291-py2.7-macosx-10.6-x86_64.egg/jwst/datamodels/model_base.py in items(self)
    523                 ("meta.observation.date": "2012-04-22T03:22:05.432")
    524             """
--> 525             return list(self.items())
    526
    527     def iterkeys(self):

... last 1 frames repeated, from the frame below ...

/Users/eisenham/anaconda3/envs/conda-dev-py2/lib/python2.7/site-packages/jwst-0.6.0noop.dev291-py2.7-macosx-10.6-x86_64.egg/jwst/datamodels/model_base.py in items(self)
    523                 ("meta.observation.date": "2012-04-22T03:22:05.432")
    524             """
--> 525             return list(self.items())
    526
    527     def iterkeys(self):

RuntimeError: maximum recursion depth exceeded
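The traceback shows items() defined in terms of itself rather than the iterator method. A minimal reproduction and fix (a simplified stand-in, not the actual model_base.py code):

```python
# Under Python 2, a compatibility shim that defines items() by calling
# items() instead of iteritems() recurses until the interpreter gives up.

class Broken(object):
    def iteritems(self):
        yield ("meta.observation.date", "2012-04-22T03:22:05.432")

    def items(self):
        return list(self.items())      # BUG: calls itself forever

class Fixed(Broken):
    def items(self):
        return list(self.iteritems())  # delegate to the iterator instead
```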

Ensure unique L3.5 associations

This issue will resolve the TRAC issue "Check for duplicate L3.5 associations with existing Candidate associations".
Steps to resolution:

  • Implement new pool specification for candidate identification (see below)
  • Ensure the 'cXXXX' and 'aXXXX' naming is occurring correctly #214
  • Ensure that the candidate and discovered associations only occur when not specifying an observation list. #221
  • Check for discovered candidate uniqueness
  • Product names do not have the candidate id when in full mode #214

failing tests in stpipe

These tests have been commented out so that we can get a clean run and need to be fixed:

test_pipeline.test_pipeline_commandline
test_step.test_save_model

Edit: CRDS tests should be moved to an internal system.

Flat-field step doesn't handle ref file DQ flags properly

The Build 6 testers in INS have discovered that the DQ flags from a flat-field ref file are not being handled properly within the flat-field step. This is due to the fact that the flat-field ref file - at least in some cases - is being loaded into a MultiSlitModel and this data model does NOT call the routine to perform dynamic DQ flag remapping. So the ref file DQ values are being left as is, instead of being translated via the settings in the DQ_DEF table included in the ref file.

A couple of possible solutions include:

  1. Switching the flat-field step back to loading ref files using a FlatModel, now that we don't have NIRSpec flats in the form of MultiSlitModels anymore. The FlatModel data model does apply the dynamic DQ remapping.

  2. Adding dynamic DQ remapping to all data models (such as MultiSlitModel), at least when a DQ_DEF table is available.
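The remapping itself is conceptually simple. A hedged sketch of what it does (assumed behavior and a simplified DQ_DEF representation, not the actual jwst dynamic-mask implementation):

```python
import numpy as np

def remap_dq(raw_dq, dq_def):
    """Translate ref-file DQ bits into standard flag values.

    dq_def is a list of (bit_value, standard_flag_value) pairs, standing
    in for the DQ_DEF table shipped inside the ref file.
    """
    out = np.zeros_like(raw_dq)
    for bit_value, standard_flag in dq_def:
        # Wherever this ref-file bit is set, set the standard flag.
        out[(raw_dq & bit_value) != 0] |= standard_flag
    return out
```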

Update BUNIT in all output products

SDP is including the keyword BUNIT in the SCI extension header of the level-1b products that serve as input to the cal pipeline. They have its value set to 'DN' in the level-1b products. We don't currently have BUNIT in our datamodels schema and hence that keyword simply gets passed along untouched (presumably via the extra_fits attribute of our data models) to all the output files created by the cal pipeline. But the value of 'DN' is no longer correct for those output products and can be misleading. So we need to add that keyword to our core schema and make sure it gets updated when necessary.

Normally this would be an easy thing to implement, but given that the keyword resides in the SCI extension header, rather than the primary header, I'm not exactly sure how we can implement or specify it in our core schema. Is there a way to designate the fits_hdu to which a given keyword belongs?
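If the schema format does support designating the HDU for a keyword (an assumption; the property names below are guesses, not confirmed datamodels syntax), a core-schema entry might look like:

```yaml
# Hypothetical core-schema entry; "fits_hdu" is an assumed property name.
bunit:
  title: Units of the data array
  type: string
  fits_keyword: BUNIT
  fits_hdu: SCI
```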

Add coverage test to Travis CI

It would be nice to know the coverage of the tests. We could also turn on the GitHub coveralls webhook for automated coverage reporting. Astropy has an example of how to set this up.

Dark correction needs to check ref file params more thoroughly

I've found a case where the dark current step inadvertently applies data from a dark reference file that's totally inappropriate. Right now the step first checks to see if NFRAMES and GROUPGAP in the science and reference files are an exact match. If they're an exact match, it applies the dark ref data directly. If they aren't an exact match, it grabs and averages frames out of the ref file. The way the code is constructed, it contains an implicit assumption that the science data will always have NFRAMES and GROUPGAP values that are greater than the reference data. I've discovered a case where the selected ref file has NFRAMES=4, while the science data have NFRAMES=1, and the code blindly goes ahead and tries to apply it. This is incorrect.

A check needs to be added to the dark correction step to make sure that the values of NFRAMES and GROUPGAP for the science data are always equal to or greater than the values for the ref data. If not, a warning should be issued and the step skipped.
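A sketch of such a check, following the implicit assumption described above that the science values must be at least as large as the reference values (hypothetical helper, not the actual jwst dark_current code):

```python
import warnings

def dark_ref_usable(sci_nframes, sci_groupgap, ref_nframes, ref_groupgap):
    """Return False (and warn) if the dark ref file's NFRAMES or
    GROUPGAP exceeds the science values; the step should then skip."""
    if ref_nframes > sci_nframes or ref_groupgap > sci_groupgap:
        warnings.warn("Dark ref NFRAMES/GROUPGAP exceed science values; "
                      "skipping dark correction")
        return False
    return True
```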

Add --version-id option

As of 20160818 AA meeting, SDP (the workflow) will determine a version ID to use with association creation. The generator should use this version ID instead of its own ID to place onto the associations.

Missing tests [explained]

The automated script was partially successful at extracting test data from each package.

Only packages with tests inside the package were ported. Any tests outside of the package were left behind.

Ported:

package/
    __init__.py
    tests/
        __init__.py

Not ported:

top-level/
    package/
        __init__.py
    tests/
        __init__.py
    setup.py

calwebb_spec2 uses exp_type NRS_MSA instead of NRS_MSASPEC

The calwebb_spec2 pipeline module has several steps that only get applied to NIRSpec MSA observations, which are identified via the value of datamodel.meta.exposure.type. Right now the code is checking for values of 'NRS_MSA', but the correct value that it should be using is 'NRS_MSASPEC'.
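The fix amounts to comparing against the correct string; a trivial sketch (not the actual pipeline code):

```python
def is_nirspec_msa(exp_type):
    # Was compared against 'NRS_MSA', which never matches real data;
    # the correct EXP_TYPE value is 'NRS_MSASPEC'.
    return exp_type == 'NRS_MSASPEC'
```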

NIRSpec GWA tilt sensor temperature keyword

SDP is currently using the keyword name "GWA_TILT" in the headers of level-1b products that they produce, which contains the temperature of the NIRSpec GWA (grating wheel assembly) tilt sensor. That keyword name will be changing to "GWA_TTIL" to keep it in sync with the name used in the engineering telemetry and the FITS files produced by the FITSWriter for ground test data. Hence we will need to update our core data model schema to use the new form of this keyword name.

naming of source-based files

How will the source-based data be named? Possible sources:

  • MultiSlitModel.slit.name
  • MultiSlitModel.slit.slitlet_id
  • MultiSlitModel.slit.source_id
  • Same combination involving filenames.

Problem with backward transform in NIRSPEC IFU

Somewhere along the IFU backward transform the inputs and outputs are not matched, i.e. the overall transform needs another combination of Mapping and Identity models. The error is

Traceback (most recent call last):
  File "./compute_world_coordinates.py", line 206, in <module>
    ifu_coords(res.filename)
  File "./compute_world_coordinates.py", line 47, in ifu_coords
    ifu_slits = nirspec.nrs_ifu_wcs(model)
  File "/grp/hst/ssb/rhel6/ssbdev/python/lib/python2.7/site-packages/jwst_pipeline.assign_wcs-0.6-py2.7.egg/jwst_pipeline/assign_wcs/nirspec.py", line 926, in nrs_ifu_wcs
    wcs_list.append(nrs_wcs_set_input(input_model.meta.wcs, 0, i, wrange))
  File "/grp/hst/ssb/rhel6/ssbdev/python/lib/python2.7/site-packages/jwst_pipeline.assign_wcs-0.6-py2.7.egg/jwst_pipeline/assign_wcs/nirspec.py", line 816, in nrs_wcs_set_input
    slit2detector = slit_wcs.get_transform('slit_frame', 'detector')
  File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/gwcs-0.6.dev154-py2.7.egg/gwcs/wcs.py", line 117, in get_transform
    return functools.reduce(lambda x, y: x | y, transforms)
  File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/gwcs-0.6.dev154-py2.7.egg/gwcs/wcs.py", line 117, in <lambda>
    return functools.reduce(lambda x, y: x | y, transforms)
  File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/astropy-1.3.dev15941-py2.7-linux-x86_64.egg/astropy/modeling/core.py", line 76, in <lambda>
    left, right, **kwargs)
  File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/astropy-1.3.dev15941-py2.7-linux-x86_64.egg/astropy/modeling/core.py", line 1977, in _from_operator
    inputs, outputs = mcls._check_inputs_and_outputs(operator, left, right)
  File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/astropy-1.3.dev15941-py2.7-linux-x86_64.egg/astropy/modeling/core.py", line 2038, in _check_inputs_and_outputs
    right.n_inputs, right.n_outputs))
astropy.modeling.core.ModelDefinitionError: Unsupported operands for |: None (n_inputs=3, n_outputs=3) and None (n_inputs=5, n_outputs=4); n_outputs for the left-hand model must match n_inputs for the right-hand model.

NIRSPEC IFU does not pass cleanly through extract_2d

I tried running extract_2d on an IFU file to see if it would exit cleanly without producing an error but got the following error:

  output_model.meta.cal_step.extract_2d = 'SKIPPED'

  UnboundLocalError: local variable 'output_model' referenced before assignment

The second-to-last line suggests that the step knows it shouldn't do anything, but it doesn't exit cleanly.
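The pattern behind the error, and a way to exit cleanly, in a simplified stand-alone sketch (hypothetical names, not the actual extract_2d code):

```python
class FakeModel(object):
    """Minimal stand-in for a data model (illustration only)."""
    def __init__(self):
        self.cal_step = None
    def copy(self):
        m = FakeModel()
        m.cal_step = self.cal_step
        return m

def extract_2d_broken(model, exp_type):
    if exp_type != 'NRS_IFU':
        output_model = model.copy()        # only assigned on this path
        output_model.cal_step = 'COMPLETE'
    # For IFU input, output_model was never assigned:
    output_model.cal_step = output_model.cal_step or 'SKIPPED'
    return output_model

def extract_2d_fixed(model, exp_type):
    output_model = model.copy()            # assign before any branching
    if exp_type == 'NRS_IFU':
        output_model.cal_step = 'SKIPPED'  # nothing to extract; exit cleanly
    else:
        output_model.cal_step = 'COMPLETE'
    return output_model
```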

Incorrect values from a data model

Under some circumstances, a model in datamodels can return incorrect values for data in a FITS BINTABLE. I was using a data model in which the column names of a table were specified as lower case, but the actual table that I was reading had the names in upper case. I was using Python 2.7.12. The datamodels code did not give any warning or error message when I referenced the values in a column by giving the name in lower case, but the data values returned by the model were all zero. For a text string column, the values were all blank.

For most cases, the datamodels code gave the same (correct) results regardless of the case of the column names. The case where I saw the incorrect values with upper-case names was under the following conditions: (1) a "multi" model, e.g. MultiSpecModel, (2) there was a text-string column, and (3) there was a column that contained arrays. The MultiSpecModel does not have any text-string columns and its float columns are scalar, so I modified a local copy of the model. I changed the table definition to three columns: slit_name, type ascii, maximum 15 characters; wavelength, type float32, shape (2000,); countrate, type float32, shape (2000,).

Ramp fit step parameter values

The "spec" definition in the ramp_fit_step.py module should be updated to specify the list of allowed values for the "algorithm" and "weighting" parameters, in addition to their default values. That way users will know whether they've entered a valid value that will properly trigger the options they want. This change should also use better wording for the weighting values.

For example:

  algorithm = option('OLS', 'GLS', default='OLS')
  weighting = option('unweighted', 'optimal', default='unweighted')

Corresponding code changes in ramp_fit_step and ramp_fit would also be necessary to use the modified values of the weighting options.

Exposure to Source tool

Simple utility to take exposure-based data, in particular datamodel.MultiSlitModel, and re-arrange it into source-based data, similar in structure to datamodel.MultiSlitModel.

system pressure - nirspec prism

As a reminder - waiting on clarification from the team on what value to use for the instrument pressure in the calculation of the refraction index for the NIRSPEC prism.

Formalize association types

Abstract

Now that the real nature of associations is coming together on all fronts, it's time for a grand refactor to accommodate:

  • User-level modification
  • OPS-level modification (though possibly not different than user-level)
  • Formalize observation, association candidate, and cross-observation associations
  • Further prepare for the yet-to-be-specified Level 2 associations

TRAC references

ToDo

  • Factor into more distinct modules
  • Allow Associations to be instantiated without a member
  • Factor Association inside out
  • Factor out basic rule
