
JWST Calibration Pipeline


Important

JWST requires a C compiler for dependencies and is currently limited to Python 3.10, 3.11, or 3.12.

Note

Linux and MacOS platforms are tested and supported. Windows is not currently supported.

Warning

Installation on MacOS Mojave 10.14 will fail due to lack of a stable build for dependency opencv-python.

Installation

Please contact the JWST Help Desk for installation issues.

The easiest way to install the latest jwst release into a fresh virtualenv or conda environment is

pip install jwst

Detailed Installation

The jwst package can be installed into a virtualenv or conda environment via pip. We recommend that for each installation you start by creating a fresh environment that only has Python installed and then install the jwst package and its dependencies into that bare environment. If using conda environments, first make sure you have a recent version of Anaconda or Miniconda installed. If desired, you can create multiple environments to allow for switching between different versions of the jwst package (e.g. a released version versus the current development version).

In all cases, the installation is generally a 3-step process:

  • Create a conda environment
  • Activate that environment
  • Install the desired version of the jwst package into that environment

Details are given below on how to do this for different types of installations, including tagged releases, DMS builds used in operations, and development versions. Remember that all conda operations must be done from within a bash/zsh shell.

Installing latest releases

You can install the latest released version via pip. From a bash/zsh shell:

conda create -n <env_name> python=3.11
conda activate <env_name>
pip install jwst

You can also install a specific version:

conda create -n <env_name> python=3.11
conda activate <env_name>
pip install jwst==1.9.4

Installing the development version from GitHub

You can install the latest development version (not as well tested) from the GitHub master branch:

conda create -n <env_name> python=3.11
conda activate <env_name>
pip install git+https://github.com/spacetelescope/jwst

Installing a DMS Operational Build

There may be occasions where an exact copy of an operational DMS build is desired (e.g. for validation testing or debugging operational issues). We package releases for DMS builds via environment snapshots that specify the exact versions of all packages to be installed.

To install a particular DMS build, consult the Software vs DMS build version map table shown below to determine the correct jwst tag. For example, to install the version of jwst used in DMS build 9.0, use jwst tag 1.8.2. The overall procedure is similar to the 3-step process outlined in the previous section, but the details of each command vary, due to the use of environment snapshot files that specify all of the particular packages to install. Also note that different snapshot files are used for Linux and Mac OS systems.

Linux:

conda create -n jwstdp-1.12.5 --file https://ssb.stsci.edu/releases/jwstdp/1.12.5/conda_python_stable-deps.txt
conda activate jwstdp-1.12.5
pip install -r https://ssb.stsci.edu/releases/jwstdp/1.12.5/reqs_stable-deps.txt

MacOS:

conda create -n jwstdp-1.12.5 --file https://ssb.stsci.edu/releases/jwstdp/1.12.5/conda_python_macos-stable-deps.txt
conda activate jwstdp-1.12.5
pip install -r https://ssb.stsci.edu/releases/jwstdp/1.12.5/reqs_macos-stable-deps.txt

Each DMS delivery has its own installation instructions, which may be found in the corresponding release documentation linked from this page: https://github.com/astroconda/astroconda-releases/tree/master/jwstdp. The installation procedures may change from time to time, so consulting the documentation page for the specific version in question is the best way to get that version installed.

Installing for Developers

If you want to be able to work on and test the source code with the jwst package, the high-level procedure to do this is to first create a conda environment using the same procedures outlined above, but then install your personal copy of the code overtop of the original code in that environment. Again, this should be done in a separate conda environment from any existing environments that you may have already installed with released versions of the jwst package.

As usual, the first two steps are to create and activate an environment:

conda create -n <env_name> python=3.11
conda activate <env_name>

To install your own copy of the code into that environment, you first need to fork and clone the jwst repo:

cd <where you want to put the repo>
git clone https://github.com/<your_github_username>/jwst.git
cd jwst

Note: python setup.py install and python setup.py develop commands do not work.

Install from your local checked-out copy as an "editable" install:

pip install -e .

If you want to run the unit or regression tests and/or build the docs, you can make sure those dependencies are installed too:

pip install -e ".[test]"
pip install -e ".[docs]"
pip install -e ".[test,docs]"

Need other useful packages in your development environment?

pip install ipython jupyter matplotlib pylint

Calibration References Data System (CRDS) Setup

Note: As of November 10, 2022, the process of deprecating the CRDS PUB Server will start. For details, refer to the CRDS PUB Server Freeze and Deprecation page.

CRDS is the system that manages the reference files needed to run the pipeline. For details about CRDS, see the User's Guide.

The JWST CRDS server is available at https://jwst-crds.stsci.edu

It supports the automatic processing pipeline at STScI. Inside the STScI network, the same server is used by the pipeline by default with no modifications. To run the pipeline outside the STScI network, CRDS must be configured by setting two environment variables:

export CRDS_PATH=<locally-accessible-path>/crds_cache/jwst_ops
export CRDS_SERVER_URL=https://jwst-crds.stsci.edu

<locally-accessible-path> can be any path the user has permission to write to, such as $HOME. Expect to use upwards of 200 GB of disk space to cache the latest couple of contexts.

To use a specific CRDS context, other than the current default, set the CRDS_CONTEXT environment variable:

export CRDS_CONTEXT=jwst_1179.pmap
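
As a quick sanity check before running any pipeline steps, the following minimal Python sketch (not part of the jwst package; shown only for illustration) confirms that the CRDS environment variables described above are set:

import os

# Required settings when running outside the STScI network (see above)
for var in ("CRDS_PATH", "CRDS_SERVER_URL"):
    value = os.environ.get(var)
    if value is None:
        raise RuntimeError(f"{var} is not set; see the CRDS setup instructions above")
    print(f"{var} = {value}")

# CRDS_CONTEXT is optional; if unset, the current server default context is used
print("CRDS_CONTEXT =", os.environ.get("CRDS_CONTEXT", "<server default>"))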

Documentation

Documentation (built daily from the GitHub master branch) is available at:

https://jwst-pipeline.readthedocs.io/en/latest/

To build the docs yourself, clone this repository and build the documentation with:

pip install -e ".[docs]"
cd docs
make html
make latexpdf

Contributions and Feedback

We welcome contributions and feedback on the project. Please follow the contributing guidelines to submit an issue or a pull request.

We strive to provide a welcoming community to all of our users by abiding with the Code of Conduct.

If you have questions or concerns regarding the software, please open an issue at https://github.com/spacetelescope/jwst/issues or contact the JWST Help Desk.

Software vs DMS build version map

The table below provides information on each release of the jwst package and its relationship to software builds used in the STScI JWST DMS operations environment. The Released column gives the date on which the jwst tag was released on PyPI and the Ops Install column gives the date on which the build incorporating that release was installed in DMS operations. Note that the CRDS_CONTEXT listed is the minimum context that can be used with that release; a release should work with any context from the one listed up to, but not including, the context listed for the next release.

jwst tag DMS build SDP_VER CRDS_CONTEXT Released Ops Install Notes
1.14.0 B10.2rc1 1215 2024-03-29 First release candidate for B10.2
1.13.4 1185 2024-01-25 PyPI-only release for external users
1.13.3 B10.1 2023.4.0 1181 2024-01-05 Final release candidate for B10.1
1.13.2 B10.1rc3 2023.4.0 1181 2023-12-21 Third release candidate for B10.1
1.13.1 B10.1rc2 2023.4.0 1181 2023-12-19 Second release candidate for B10.1
1.13.0 B10.1rc1 2023.4.0 1179 2023-12-15 First release candidate for B10.1
1.12.5 B10.0.1 2023.3.1 1166 2023-10-19 2023-12-05 Patch release B10.0.1
1.12.4 2023-10-12 Pinning dependencies for external users
1.12.3 B10.0 2023.3.0 1135 2023-10-03 2023-12-05 Final release candidate for B10.0
1.12.2 B10.0rc3 1135 2023-10-02 Third release candidate for B10.0
1.12.1 B10.0rc2 1132 2023-09-26 Second release candidate for B10.0
1.12.0 B10.0rc1 1130 2023-09-18 First release candidate for B10.0
1.11.4 B9.3.1 2023.2.1 1107 2023-08-14 2023-08-24 Final release for B9.3.1 patch
1.11.3 B9.3 2023.2.0 1097 2023-07-17 Final release candidate for B9.3
1.11.2 B9.3rc3 1097 2023-07-12 Third release candidate for B9.3
1.11.1 B9.3rc2 1094 2023-06-29 Second release candidate for B9.3
1.11.0 B9.3rc1 1094 2023-06-21 First release candidate for B9.3
1.10.2 1077 2023-04-14 Pinning dependencies for external users
1.10.1 B9.2.x 2023.1.1 1077 2023-04-13 2023-05-23 Final release candidate for B9.2
1.10.0 B9.2rc1 1075 2023-03-31 First release candidate for B9.2
1.9.6 B9.1.2 2022.5.2 1068 2023-03-09 2023-03-15 Final release candidate for B9.1.2
1.9.5 1061 2023-03-02 First release candidate for B9.1.2
1.9.4 B9.1.1 2022.5.1 1041 2023-01-27 2023-02-28 Final release candidate for B9.1.1
1.9.3 B9.1 2022.5.0 1030 2023-01-12 2023-02-28 Final release candidate for B9.1
1.9.2 B9.1rc2 2023-01-04 Second release candidate for B9.1 (hotfix)
1.9.1 B9.1rc2 2023-01-03 Second release candidate for B9.1
1.9.0 B9.1rc1 2022-12-27 First release candidate for B9.1
1.8.5 B9.0 1019 2022-12-12 Documentation patch release for B9.0
1.8.4 B9.0 2022-11-16 Documentation patch release for B9.0
1.8.3 B9.0 2022-11-11 Documentation patch release for B9.0
1.8.2 B9.0 2022.4.0 1017 2022-10-19 2022-11-17 Final release candidate for B9.0
1.8.1 B9.0rc2 2022-10-17 Second release candidate for B9.0
1.8.0 B9.0rc1 2022-10-10 First release candidate for B9.0
1.7.2 B8.1.2 2022.3.1 0984 2022-09-12 2022-09-21 Final release candidate for B8.1.2
1.7.1 B8.1.2rc2 2022-09-07 Second release candidate for B8.1.2
1.7.0 B8.1.2rc1 2022-09-01 First release candidate for B8.1.2
1.6.2 B8.1 2022.3.0 0953 2022-07-19 2022-08-19 Final release candidate for B8.1
1.6.1 B8.1rc2 2022-07-15 Second release candidate for B8.1
1.6.0 B8.1rc1 2022-07-11 First release candidate for B8.1
1.5.3 B8.0.1 2022.2.1 0913 2022-06-20 2022-06-30 Patch release B8.0.1
1.5.2 B8.0 2022.2.0 0874 2022-05-20 2022-06-16 Final release candidate for B8.0
1.5.1 B8.0rc2 2022-05-17 Second release candidate for B8.0
1.5.0 B8.0rc1 2022-05-05 First release candidate for B8.0
1.4.6 B7.9.3 2022.1.2 0800 2022-03-25 Final release candidate for B7.9.3
1.4.5 B7.9.3rc2 2022-03-23 Second release candidate for B7.9.3
1.4.4 B7.9.3rc1 2022-03-16 First release candidate for B7.9.3
1.4.3 B7.9.1 2022.1.1 0800 2022-02-03 Final B7.9.1
1.4.2 B7.9 2022.1.0 0797 2022-01-20 Final release candidate for B7.9
1.4.1 B7.9rc2 2022-01-15 Second release candidate for B7.9
1.4.0 B7.9rc1 2022-01-10 First release candidate for B7.9
Pre-launch releases
1.3.3 B7.8.2 2021.4.0 0764 2021-10-05 Same as 1.3.2, but with installation bug fix
1.3.2 B7.8.2 2021.4.0 0764 2021-09-03 Final release candidate for B7.8.2
1.3.1 B7.8.1 2021.3.0 0742 2021-08-09 Final release candidate for B7.8.1
1.3.0 B7.8.1rc1 0741 2021-08-02 First release candidate for B7.8.1
1.2.3 B7.8 2021.2.0 0732 2021-06-08 Final release candidate for B7.8
1.2.2 B7.8rc3 2021-06-08 Third release candidate for B7.8
1.2.1 B7.8rc2 2021-06-07 Second release candidate for B7.8
1.2.0 B7.8rc1 0723 2021-05-24 First release candidate for B7.8
1.1.0 B7.7.1 2021.1.0 0682 2021-02-26 Final release candidate for B7.7.1
1.0.0 B7.7.1rc1 0678 2021-02-22 First release candidate for B7.7.1
0.18.3 B7.7 2020.4.0 0670 2021-01-25 Final release candidate for B7.7
0.18.2 B7.7rc3 0668 2021-01-19 Third release candidate for B7.7
0.18.1 B7.7rc2 0664 2021-01-08 Second release candidate for B7.7
0.18.0 B7.7rc1 0645 2020-12-21 First release candidate for B7.7
0.17.1 B7.6 2020.3.0 0641 2020-09-15 Final release candidate for B7.6
0.17.0 B7.6rc1 0637 2020-08-28 First release candidate for B7.6
0.16.2 B7.5 2020.2.0 0619 2020-06-10 Same as 0.16.1, but with installation bug fix
0.16.1 B7.5 2020.2.0 0619 2020-05-19 Final release candidate for B7.5
0.16.0 B7.5rc1 0614 2020-05-04 First release candidate for B7.5
0.15.1 B7.4.2 2020.1.0 0586 2020-03-10 Final release candidate for B7.4.2
0.15.0 B7.4.2rc1 0585 2020-02-28 First release candidate for B7.4.2
0.14.2 B7.4 2019.3.0 0570 2019-11-18 Final release candidate for B7.4
0.14.1 B7.4rc2 0568 2019-11-11 Second release candidate for B7.4
0.14.0 B7.4rc1 0563 2019-10-25 First release candidate for B7.4
0.13.8 B7.3.1 2019.2.0 0541 2019-09-05 Patch for Build 7.3 released as Build 7.3.1
0.13.7 B7.3 2019.1.0 0535 2019-06-21 Final release candidate for Build 7.3
0.13.6 B7.3rc4 0534 2019-06-20 Fourth release candidate for Build 7.3
0.13.5 B7.3rc3 0534 2019-06-19 Third release candidate for Build 7.3
0.13.4 B7.3rc2 0534 2019-06-18 Second release candidate for Build 7.3
0.13.3 B7.3rc1 0532 2019-06-04 First release candidate for Build 7.3
0.13.2 0500 2019-05-14 DMS test, no delivery to I&T
0.13.1 0500 2019-03-08 DMS test, no delivery to I&T
0.13.0 0500 2019-02-15 DMS test, no delivery to I&T
0.12.3 B7.2.1 0500 2019-01-15 DMS Build 7.2.1 patch release
0.12.2 B7.2 2018_2 0495 2018-11-07 Final release candidate for Build 7.2
0.12.1 B7.2rc2 0495 2018-11-01 Second release candidate for Build 7.2
0.12.0 B7.2rc1 0493 2018-10-09 First release candidate for Build 7.2
0.11.0 0482 2018-09-10 DMS test, no delivery to I&T
0.10.0 0477 2018-07-31 DMS test, no delivery to I&T
0.9.6 B7.1.3 2018_1 0468 2018-06-08 Final release candidate for Build 7.1.3
0.9.5 B7.1.3rc3 0468 2018-06-06 Third release candidate for Build 7.1.3
0.9.4 B7.1.3rc2 0463 2018-05-29 Second release candidate for Build 7.1.3
0.9.3 B7.1.3rc1 0457 2018-05-11 First release candidate for Build 7.1.3
0.9.2 0441 2018-03-28 DMS test, no delivery to I&T
0.9.1 0432 2018-02-16 DMS test, no delivery to I&T
0.9.0 B7.1.2 0422 2017-12-22 DMS patch release to I&T 2018-02-15
0.8.0 B7.1.1 0422 2017-11-06 DMS patch release to I&T 2018-01-17
0.8.0 B7.1 2017_1 0422 2017-11-06 Final release for Build 7.1
0.7.7 B7.0 2016_2 0303 2016-12-13 Final release for Build 7.0

Unit Tests

Unit tests can be run via pytest. Within the top level of your local jwst repo checkout:

pip install -e ".[test]"
pytest

Need to parallelize your test runs over all available cores?

pip install pytest-xdist
pytest -n auto

Regression Tests

Latest regression test results can be found here (STScI staff only):

https://plwishmaster.stsci.edu:8081/job/RT/job/JWST/

The test builds start at 6pm local Baltimore time Monday through Saturday on jwcalibdev.

To run the regression tests on your local machine, get the test dependencies and set the environment variable TEST_BIGDATA to our Artifactory server (STScI staff members only):

pip install -e ".[test]"
export TEST_BIGDATA=https://bytesalad.stsci.edu/artifactory

To run all the regression tests (except the very slow ones):

pytest --bigdata jwst/regtest

You can control where the test results are written with the --basetemp=<PATH> argument to pytest. Note that pytest will wipe this directory clean for each test session, so make sure it is a scratch area.

If you would like to run a specific test, find its name or ID and use the -k option:

pytest --bigdata jwst/regtest -k nirspec

If developers need to update the truth files in our nightly regression tests, there are instructions in the repository wiki.

https://github.com/spacetelescope/jwst/wiki/Maintaining-Regression-Tests


jwst's Issues

Incorrect values from a data model

Under some circumstances, a model in datamodels can return incorrect values for data in a FITS BINTABLE. I was using a data model in which the column names of a table were specified as lower case, but the actual table that I was reading had the names in upper case. I was using Python 2.7.12. The datamodels code did not give any warning or error message when I referenced the values in a column by giving the name in lower case, but the data values returned by the model were all zero. For a text string column, the values were all blank.

In most cases, the datamodels code gave the same (correct) results regardless of the case of the column names. The case where I saw the incorrect values with upper-case names was under the following conditions: (1) a "multi" model, e.g. MultiSpecModel, (2) there was a text-string column, and (3) there was a column that contained arrays. The MultiSpecModel does not have any text-string columns and the float columns are scalar, so I modified a local copy of the model. I changed the table definition to three columns: slit_name, type ascii, maximum 15 characters; wavelength, type float32, shape (2000,); countrate, type float32, shape (2000,).

Core schema needs updates for SUBARRAY allowed values

The NIRCam team has sent a list of the currently allowed values for SUBARRAY. There are three values in that list that are not currently in the enum list of allowed values for the SUBARRAY keyword in our datamodels core schema. We need to add the values "SUBFP1A", "SUBFP1B", and "SUBSTRIPE256" to the list.

Dark correction needs to check ref file params more thoroughly

I've found a case where the dark current step inadvertently applies data from a dark reference file that's totally inappropriate. Right now the step first checks to see if NFRAMES and GROUPGAP in the science and reference files are an exact match. If they're an exact match, it applies the dark ref data directly. If they aren't an exact match, it grabs and averages frames out of the ref file. The way the code is constructed, it contains an implicit assumption that the science data will always have NFRAMES and GROUPGAP values that are greater than the reference data. I've discovered a case where the selected ref file has NFRAMES=4, while the science data have NFRAMES=1, and the code blindly goes ahead and tries to apply it. This is incorrect.

A check needs to be added to the dark correction step to make sure that the values of NFRAMES and GROUPGAP for the science data are always equal to or greater than the values for the ref data. If the ref data have larger values, a warning should be issued and the step skipped.
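
A rough sketch of the proposed check is shown below. The attribute names (meta.exposure.nframes, meta.exposure.groupgap) are assumptions used for illustration, not the actual step code:

import logging

log = logging.getLogger(__name__)

def dark_is_applicable(sci_model, dark_model):
    """Return True if the dark ref file can be applied to the science data."""
    sci_nframes = sci_model.meta.exposure.nframes
    sci_groupgap = sci_model.meta.exposure.groupgap
    ref_nframes = dark_model.meta.exposure.nframes
    ref_groupgap = dark_model.meta.exposure.groupgap

    # The ref file must not be more coarsely sampled than the science data
    if ref_nframes > sci_nframes or ref_groupgap > sci_groupgap:
        log.warning("Dark ref file NFRAMES/GROUPGAP exceed the science values; "
                    "skipping the dark correction step")
        return False
    return True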

Ensure unique L3.5 associations

This issue will resolve the Trac issue "Check for duplicate L3.5 associations with existing Candidate associations".
Steps to resolution:

  • Implement new pool specification for candidate identification (see below)
  • Ensure the 'cXXXX' and 'aXXXX' naming is occurring correctly #214
  • Ensure that the candidate and discovered associations only occur when not specifying an observation list. #221
  • Check for discovered candidate uniqueness
  • product names do not have the candidate id when in full mode #214

Flat-field step doesn't handle ref file DQ flags properly

The Build 6 testers in INS have discovered that the DQ flags from a flat-field ref file are not being handled properly within the flat-field step. This is due to the fact that the flat-field ref file - at least in some cases - is being loaded into a MultiSlitModel and this data model does NOT call the routine to perform dynamic DQ flag remapping. So the ref file DQ values are being left as is, instead of being translated via the settings in the DQ_DEF table included in the ref file.

A couple of possible solutions include:

  1. switching the flat-field step back to loading ref files using a FlatModel, now that we don't have NIRSpec flats in the form of MultiSlitModels anymore. The FlatModel data model does apply the dynamic DQ remapping.

  2. Adding dynamic DQ remapping to all data models (such as MultiSlitModel), at least when a DQ_DEF table is available.
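
For illustration, the sketch below shows roughly what dynamic DQ remapping does, assuming a DQ_DEF table with VALUE and NAME columns and a dictionary of standard pipeline flag values (the flag values shown are examples, not a definitive list):

import numpy as np

STANDARD_FLAGS = {"DO_NOT_USE": 1, "DEAD": 1024, "HOT": 2048}  # example values only

def remap_dq(ref_dq, dq_def):
    """Translate ref-file DQ values into standard pipeline DQ flags."""
    remapped = np.zeros_like(ref_dq)
    for row in dq_def:
        ref_value = row["VALUE"]          # bit value used in the ref file
        name = row["NAME"]                # mnemonic for this condition
        std_value = STANDARD_FLAGS.get(name, 0)
        mask = (ref_dq & ref_value) != 0  # pixels flagged with this condition
        remapped[mask] |= std_value
    return remapped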

Formalize association types

Abstract

Now that the real nature of associations is coming together on all fronts, it is time for a grand refactor to accommodate:

  • User-level modification
  • OPS-level modification (though possibly not different than user-level)
  • Formalize observation, association candidate, and cross-observation associations
  • Further prepare for the yet-to-be-specified Level 2 associations

TRAC references

ToDo

  • Factor into more distinct modules
  • Allow Associations to be instantiated without a member
  • Factor Association inside out
  • Factor out basic rule

NIRSPEC IFU does not pass cleanly through extract_2d

I tried running extract_2d on an IFU file to see if it would exit cleanly without producing an error but got the following error:

output_model.meta.cal_step.extract_2d = 'SKIPPED'

UnboundLocalError: local variable 'output_model' referenced before assignment

The second-to-last line suggests that it knows it shouldn't do anything, but it doesn't exit cleanly.

calwebb_spec2 uses exp_type NRS_MSA instead of NRS_MSASPEC

The calwebb_spec2 pipeline module has several steps that only get applied to NIRSpec MSA observations, which are identified via the value of datamodel.meta.exposure.type. Right now the code is checking for values of 'NRS_MSA', but the correct value that it should be using is 'NRS_MSASPEC'.

Level 3 processing doesn't pay attention to strun --output_dir param

Mike Swam reports that when he tried to run wfs_combine level-3 processing using strun and specified the optional --output_dir param to designate a particular output directory in which to place the output product, the output product was still created in the working directory.

I'm guessing this is due to the fact that with level-3 processing, via any task like wfs_combine or calwebb_image3, it's the task/pipeline module itself that's saving the output file, using the name specified in the input ASN table, and is ignoring anything specified via the --output_dir param on the command line. With level-2 processing tasks/pipelines, the output product model is passed back up to stpipe to let it handle the creation of the output file and in that case stpipe does pay attention to any --output_dir that was specified.

Level-3 tasks and pipeline modules, such as wfs_combine, calwebb_image3, calwebb_spec3, etc. should be upgraded to use the output_dir path specified on the strun command line.

DataModel mocking

It would be nice, if it doesn't already exist, to have some function that will create a DataModel that has data (by default arrays of 1 or some such), keywords (with basic values matching their types), etc. Such a function may look like:

model = jwst.datamodels.mock(DataModel, data_shape=a_shape, data_fill=some_fill)

Add DQ flag condition

The PWG has requested the addition of one more DQ flag condition to the list of accepted values:

OTHER_BAD_PIXEL: A catch-all flag

Add coverage test to Travis CI

Would be nice to know the coverage of the tests. We can turn on the GitHub coveralls webhook for automated coverage reporting too. Astropy has an example of how to set this up.

System pressure - NIRSpec prism

As a reminder: we are waiting on clarification from the team on what value to use for the instrument pressure in the calculation of the refractive index for the NIRSpec prism.

Ramp fit step parameter values

The "spec" definition in the ramp_fit_step.py module should be updated to specify the list of allowed values for the "algorithm" and "weighting" parameters, in addition to their default values. That way users will know whether they've entered a valid value that will properly trigger the options they want. This change should also use better wording for the weighting values.

For example:

  algorithm = option('OLS', 'GLS', default='OLS')
  weighting = option('unweighted', 'optimal', default='unweighted')

Corresponding code changes in ramp_fit_step and ramp_fit would also be necessary to use the modified values of the weighting options.

Add --version-id option

As of 20160818 AA meeting, SDP (the workflow) will determine a version ID to use with association creation. The generator should use this version ID instead of its own ID to place onto the associations.

NIRSpec GWA tilt sensor temperature keyword

SDP is currently using the keyword name "GWA_TILT" in the headers of level-1b products that they produce, which contains the temperature of the NIRSpec GWA (grating wheel assembly) tilt sensor. That keyword name will be changing to "GWA_TTIL" to keep it in sync with the name used in the engineering telemetry and the FITS files produced by the FITSWriter for ground test data. Hence we will need to update our core data model schema to use the new form of this keyword name.

Missing tests [explained]

The automated script was partially successful at extracting test data from each package.

Only packages with tests inside the package were ported. Any tests outside of the package were left behind.

Ported:

package/
    __init__.py
    tests/
        __init__.py

Not ported:

top-level/
    package/
        __init__.py
    tests/
        __init__.py
    setup.py

failing tests in datamodels

I have commented out a few failing tests in datamodels so that we could get a clean run.
These need to be fixed:

test_fits.test_extra_fits
test_fits.test_extra_fits_update
test_schema.test_date
test_schema.test_date2
test_schema.test_list2
test_schema.test_multislit_garbage
test_wcs.test_wcs

Encapsulate validation

Main reason is simply to encapsulate, but also functionally allow validation against the internal schemas.

Documentation (DUH!)

This actually means producing docs that are not automatically created from code.

Fix documenting version numbers

stpipe.Step.run records the version of the software used as __svn_revision__.
@jhunkeler What is the way to get the equivalent number now?

I'm commenting out this line in stpipe for now to try to get a clean run of all tests.

Update to wfs_combine method

The WFSC working group has requested that we update the algorithm that's used in the wfs_combine task to compute output pixel values. In the case where pixels in both input images are good, the output pixel value should be computed as the average of the two inputs, rather than simply using the value from input image 1. In cases where the input from either image 1 or 2 is bad, then we still use the current scheme of using the 1 good input value for the output.

This is a low priority item and should only be worked when all other Build 7 tasks have been completed. If necessary, it can be delayed to Build 7.1.
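
A sketch of the requested combination rule follows (illustrative only; the array and mask names are assumptions, not the wfs_combine code): average where both inputs are good, otherwise use the single good value.

import numpy as np

def combine_pixels(im1, im2, good1, good2):
    """im1/im2: input images; good1/good2: boolean masks of good pixels."""
    out = np.zeros_like(im1)
    both = good1 & good2
    out[both] = 0.5 * (im1[both] + im2[both])   # requested: average of the two inputs
    only1 = good1 & ~good2
    only2 = good2 & ~good1
    out[only1] = im1[only1]                     # current scheme: use the one good value
    out[only2] = im2[only2]
    return out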

datamodels.items() recursion error under python2

Not knowing whether we're concerned about Python 2: using an example from @hbushouse and the native conda-dev environment in Python 2, the following occurs. Under Python 3, all is fine.

$ conda create -n conda-dev -c http://ssb.stsci.edu/conda-dev python=2 jwst
$ source activate conda-dev
$ ipython
In [1]: from jwst.datamodels import MultiSlitModel

In [2]: m = MultiSlitModel('tests/data/jwst_nod2_cal.fits')

In [3]: m.items()
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-3-f16e51064509> in <module>()
----> 1 m.items()

/Users/eisenham/anaconda3/envs/conda-dev-py2/lib/python2.7/site-packages/jwst-0.6.0noop.dev291-py2.7-macosx-10.6-x86_64.egg/jwst/datamodels/model_base.py in items(self)
    523                 ("meta.observation.date": "2012-04-22T03:22:05.432")
    524             """
--> 525             return list(self.items())
    526
    527     def iterkeys(self):

... last 1 frames repeated, from the frame below ...

/Users/eisenham/anaconda3/envs/conda-dev-py2/lib/python2.7/site-packages/jwst-0.6.0noop.dev291-py2.7-macosx-10.6-x86_64.egg/jwst/datamodels/model_base.py in items(self)
    523                 ("meta.observation.date": "2012-04-22T03:22:05.432")
    524             """
--> 525             return list(self.items())
    526
    527     def iterkeys(self):

RuntimeError: maximum recursion depth exceeded
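
The traceback shows the Python 2 branch of items() returning list(self.items()), so the method calls itself until the recursion limit is hit. A likely intended pattern (an assumption for illustration, not the actual fix) is to delegate to the iterator-based method instead:

def items(self):
    """Python 2 compatibility shim: build the list from the iterator."""
    return list(self.iteritems())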

Remove sca_aper and wfscflag from core schema

The keyword dictionary working group has decided to remove the keywords SCA_APER and WFSCFLAG from JWST headers. SCA_APER is not needed and WFSCFLAG is a redundant entry with WFSVISIT.

How to redo NIRSpec background sub?

If I want to rerun the background subtraction step with my own background aperture, how do I do it? This question came up in the JWST DADF MOS Tools sprint.

Ramp fit OLS with ngroups <= 2

Bryan Hilbert reports that his testing of the Build 6 ramp fit step revealed the following:

"The only hiccups I've seen so far are for a pixel with only 2 good groups (I marked all the rest as saturated). In that case the equal weighting returns the expected slope value, but the optimal weighting returns a zero.

Similarly, for the case where there is only a single good group, both the equal and optimal weighting strategies return a slope of zero."

I think the case of unweighted fitting with ngroups=1 may have already been fixed since the Build 6 delivery, but it should be checked. For the case of ngroups=2, no fitting should be done and the slope computed simply by differencing the 2 groups and dividing by the group exposure time. For ngroups=1 the slope should be computed by just dividing the one group value by the group exposure time, regardless of which OLS weighting scheme is used.
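
The special-case arithmetic described above is sketched below (illustrative only; the function and variable names are assumptions, not the ramp_fit code):

def short_ramp_slope(groups, group_time):
    """groups: sequence of group values for one pixel; group_time: seconds per group."""
    if len(groups) == 2:
        # Difference the two groups and divide by the group exposure time
        return (groups[1] - groups[0]) / group_time
    if len(groups) == 1:
        # Divide the single group value by the group exposure time
        return groups[0] / group_time
    raise ValueError("use the normal OLS/GLS fit for ngroups > 2")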

Update BUNIT in all output products

SDP is including the keyword BUNIT in the SCI extension header of the level-1b products that serve as input to the cal pipeline. They have its value set to 'DN' in the level-1b products. We don't currently have BUNIT in our datamodels schema and hence that keyword simply gets passed along untouched (presumably via the extra_fits attribute of our data models) to all the output files created by the cal pipeline. But the value of 'DN' is no longer correct for those output products and can be misleading. So we need to add that keyword to our core schema and make sure it gets updated when necessary.

Normally this would be an easy thing to implement, but given that the keyword resides in the SCI extension header, rather than the primary header, I'm not exactly sure how we can implement or specify it in our core schema. Is there a way to designate the fits_hdu to which a given keyword belongs?
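
For discussion, a hypothetical schema fragment is shown below (expressed as a Python dict purely for illustration); whether the schema machinery actually honors a fits_hdu property like this for placing BUNIT in the SCI extension header is exactly the open question above:

bunit_schema = {
    "bunit": {
        "title": "Units of the data array",
        "type": "string",
        "fits_keyword": "BUNIT",
        "fits_hdu": "SCI",   # hypothetical: designate the SCI extension header
    }
}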

Problem with backward transform in NIRSPEC IFU

Somewhere along the IFU backward transform the inputs and outputs are not matched, i.e. the overall transform needs another combination of Mapping and Identity models. The error is

Traceback (most recent call last):
File "./compute_world_coordinates.py", line 206, in
ifu_coords(res.filename)
File "./compute_world_coordinates.py", line 47, in ifu_coords
ifu_slits = nirspec.nrs_ifu_wcs(model)
File "/grp/hst/ssb/rhel6/ssbdev/python/lib/python2.7/site-packages/jwst_pipeline.assign_wcs-0.6-py2.7.egg/jwst_pipeline/assign_wcs/nirspec.py", line 926, in nrs_ifu_wcs
wcs_list.append(nrs_wcs_set_input(input_model.meta.wcs, 0, i, wrange))
File "/grp/hst/ssb/rhel6/ssbdev/python/lib/python2.7/site-packages/jwst_pipeline.assign_wcs-0.6-py2.7.egg/jwst_pipeline/assign_wcs/nirspec.py", line 816, in nrs_wcs_set_input
slit2detector = slit_wcs.get_transform('slit_frame', 'detector')
File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/gwcs-0.6.dev154-py2.7.egg/gwcs/wcs.py", line 117, in get_transform
return functools.reduce(lambda x, y: x | y, transforms)
File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/gwcs-0.6.dev154-py2.7.egg/gwcs/wcs.py", line 117, in
return functools.reduce(lambda x, y: x | y, transforms)
File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/astropy-1.3.dev15941-py2.7-linux-x86_64.egg/astropy/modeling/core.py", line 76, in
left, right, **kwargs)
File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/astropy-1.3.dev15941-py2.7-linux-x86_64.egg/astropy/modeling/core.py", line 1977, in _from_operator
inputs, outputs = mcls._check_inputs_and_outputs(operator, left, right)
File "/home/dencheva/ssbvirt/ssbdev-rhel6/lib/python2.7/site-packages/astropy-1.3.dev15941-py2.7-linux-x86_64.egg/astropy/modeling/core.py", line 2038, in _check_inputs_and_outputs
right.n_inputs, right.n_outputs))
astropy.modeling.core.ModelDefinitionError: Unsupported operands for |: None (n_inputs=3, n_outputs=3) and None (n_inputs=5, n_outputs=4); n_outputs for the left-hand model must match n_inputs for the right-hand model.

failing tests in stpipe

These tests have been commented out so that we can get a clean run and need to be fixed:

test_pipeline.test_pipeline_commandline
test_step.test_save_model

Edit: CRDS tests should be moved to an internal system.

Exposure to Source tool

Simple utility to take exposure-based data, in particular datamodel.MultiSlitModel, and re-arrange it into source-based data, similar in structure to datamodel.MultiSlitModel.
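
A rough sketch of the re-arrangement (illustrative only; it assumes each slit carries a source_id attribute and is not a proposed implementation):

from collections import defaultdict

def group_by_source(exposure_models):
    """exposure_models: iterable of MultiSlitModel-like objects."""
    per_source = defaultdict(list)
    for model in exposure_models:
        for slit in model.slits:
            per_source[slit.source_id].append(slit)
    # Each value now holds the cutouts of one source across all exposures,
    # ready to be written into a source-based container.
    return per_source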

naming of source-based files

How will the source based data be named? Possible sources:

  • MultiSlitModel.slit.name
  • MultiSlitModel.slit.slitlet_id
  • MultiSlitModel.slit.source_id
  • Same combination involving filenames.

IPC regression test is failing

The jwst regression tests, which are now running the git-based version of the jwst repo, are hitting an error in one of the tests of the IPC step. Details are at:

https://ssb.stsci.edu/pandokia/pandokia.cgi?query=detail&key_id=6419348

Specific error in the traceback is:

  File "/data4/iraf_conda/miniconda3/envs/rt_dev27/lib/python2.7/site-packages/jwst-0.6.0noop.dev174-py2.7-linux-x86_64.egg/jwst/ipc/ipc_corr.py", line 78, in ipc_correction
    "," + input_model.data.shape[-2])
TypeError("cannot concatenate 'str' and 'int' objects",)
