
s4cmb's Introduction

s4cmb

https://coveralls.io/repos/github/JulienPeloton/s4cmb/badge.svg?branch=master

Systematics For Cosmic Microwave Background (s4cmb) is a Python package developed to study the impact of instrumental systematic effects on measurements from CMB experiments based on bolometric detector technology. s4cmb provides a unified framework to simulate raw time-ordered data streams (TODs) acquired by CMB experiments scanning the sky, and to inject realistic instrumental systematic effects into them. The development of s4cmb is built on the experience and needs of the data analysis of the Polarbear ground-based experiment (see e.g. 1403.2369 and 1705.02907). It is designed to analyze real data, to guide the design of future instruments that require the estimation of specific systematic effects, and to increase the realism of the simulated data sets required in the development of data analysis methods. Users can currently model and study:

  • Electrical crosstalk in the multiplexed readout.
  • Relative gain-calibration uncertainty between the two detectors in a focal plane pixel.
  • Time drift of the gains between two consecutive calibration measurements.
  • Differential pointing between the two detectors in a pixel.
  • ... more to come!

The simplicity of the s4cmb framework makes it easy to add new instrumental systematics to be simulated according to the users' needs. As far as we know, s4cmb is the only publicly available package dedicated to the study of such a wide range of instrumental effects, from the instrument to the sky map. For more general purposes, including some instrumental systematic effect simulations, users might also consider TOAST, a software framework to simulate and process timestream data collected by telescopes, which focuses on efficient TOD manipulation on massively parallel architectures.

The package is mainly written in Python (>= 3.6). It adopts several libraries commonly used in astronomy (astropy, healpy, ephem, pyslalib) and relies on functions written in low-level languages wrapped in Python (e.g. Fortran with f2py) to speed up the most critical parts of the code without losing the flexibility of a simple, user-friendly Python interface. It has the following dependencies (see requirements.txt):

  • numpy, matplotlib
  • astropy, ephem, pyslalib, healpy (astro libs)
  • f2py (interfacing with python)

The Fortran parts are usually compiled when you install the package (see setup.py), but we also provide a Makefile for more customized compilations (see the Makefile in s4cmb).

s4cmb is designed to run on systems of varying scale, from laptops to parallel supercomputing platforms, thanks to its internal Message Passing Interface (MPI) support. We also support packaging the entire application into a Docker container for portability.

I just want to use the code:

You can easily install the package using pip

pip install s4cmb

In addition to using the code, I want to be a developer:

The best approach is to fork this GitHub repository to your account and clone it to your machine. Once you have the repo cloned, use the Makefile to compile the source:

cd /path/to/s4cmb
pip install -r requirements.txt
make

Do not forget to update your PYTHONPATH by adding to your .bashrc:

s4cmbPATH=/path/to/the/s4cmb
export PYTHONPATH=$PYTHONPATH:$s4cmbPATH

Then run the test suite and the coverage:

./coverage_and_test.sh

It should print the actual coverage of the test suite, and exit with no errors.

Again, you can easily install the package using pip

pip install s4cmb --user

Alternatively, if you want to develop at NERSC and do a manual installation, it is better to keep most of your packages under Anaconda. I recommend having a look first at the NERSC page describing how to use it.

The installation of s4cmb can be done in a few steps:

  • Clone the repo somewhere in your $HOME
  • Install dependencies (see requirements.txt) using Anaconda
  • Compile the source (using make in /path/s4cmb)

Alternatively, if you do not want to install the package on your computer, we provide a Docker image for s4cmb that always tracks the latest version. Install Docker on your computer, and pull the image:

docker pull julienpeloton/s4cmb:latest

Then create a new container and run an interactive session by just running

docker run -i -t julienpeloton/s4cmb:latest bash

We provide a quick end-to-end example for using the package:

python examples/test/simple_app.py -inifile examples/inifiles/simple_parameters.py -tag test

You can also run it on many processors, using MPI (you will need the package mpi4py):

mpirun -n <nproc> python examples/test/simple_app.py -inifile examples/inifiles/simple_parameters.py -tag test_MPI

where nproc should not be greater than the number of scans to run. Note that for NERSC users, we also provide a quick submission script for jobs on Cori (see examples/nersc_cori.batch).
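The constraint on nproc reflects the parallelization model: scans are independent, so they can be distributed across MPI ranks. The sketch below illustrates a round-robin distribution, assuming s4cmb Apps split work by scan; the function name is ours, not part of the s4cmb API.

```python
def scans_for_rank(n_scans, rank, size):
    """Round-robin assignment of scan indices to MPI rank `rank` out of `size` ranks.

    Illustrative only: the real App would run map2tod/tod2map on its assigned
    scans and reduce the partial maps (e.g. with comm.Reduce) at the end.
    """
    return [s for s in range(n_scans) if s % size == rank]

# With 3 ranks and 7 scans, every scan is processed exactly once:
assignment = [scans_for_rank(7, r, 3) for r in range(3)]
```

With more ranks than scans, some ranks simply receive an empty list and idle, which is why running with nproc greater than the number of scans wastes resources.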

You can find a bootcamp in two parts (notebooks + examples) at s4cmb-resources. The goal of this bootcamp is to describe the basic parts of the API, and provide ready-to-use examples (for use on laptop and supercomputer).

TODO:

  • Add WHWP demodulation module.
  • Add correlated noise simulator (and update mapmaking weights).

Main developers:

  • Julien Peloton (peloton at lal.in2p3.fr)
  • Giulio Fabbian (g.fabbian at sussex.ac.uk)

Thanks to:

  • @ngoecknerwald: original author of a large part of the scanning strategy module.
  • @giuspugl, @dpole, @joydidier, and all contributors for their valuable comments, tests, and feedback!

The package has already been used in a number of scientific and technical publications:

  • Instrumental systematics biases in CMB lensing reconstruction: a simulation-based assessment (2011.13910)
  • Development of Calibration Strategies for the Simons Observatory (1810.04633)
  • Studies of Systematic Uncertainties for Simons Observatory: Detector Array Effects (1808.10491)
  • Studies of Systematic Uncertainties for Simons Observatory: Polarization Modulator Related Effects (1808.07442)
  • Iterative map-making with two-level preconditioning for polarized Cosmic Microwave Background data sets (1801.08937)

s4cmb's People

Contributors

gfabbian, joydidier, julienpeloton, keskitalo, markm42


s4cmb's Issues

Add demodulation of timestreams

This module already exists in the old version of the pipeline (not public), but I need to port it here.

Note:
I am not a big fan of demodulation (as opposed to pair-differencing), because it adds a lot of complexity for a small gain in return. For example, the detector sampling frequency needs to be quite high for the demodulation to be efficient, so the memory usage increases a lot. Another example: for demodulation to work, a few filtering steps (low-pass, band-pass) have to be performed, and this slows down the code considerably.
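For readers unfamiliar with the trade-off, a toy lock-in demodulation illustrates both points: the polarization signal is mixed down from four times the HWP angle and then low-pass filtered, which is what drives the sampling-rate and filtering costs. This is a sketch under simplified assumptions (ideal HWP, noiseless data), not the module to be ported.

```python
import numpy as np

def lowpass(x, fs, f_cut):
    """Crude brick-wall low-pass filter in the Fourier domain."""
    freqs = np.fft.fftfreq(len(x), d=1.0 / fs)
    spec = np.fft.fft(x)
    spec[np.abs(freqs) > f_cut] = 0.0
    return np.fft.ifft(spec)

def demodulate(tod, hwp_angle, fs, f_cut=2.0):
    """Toy lock-in demodulation: d(t) = Q cos(4 chi) + U sin(4 chi)
    mixed with exp(-4i chi) and low-passed yields Q - iU."""
    mixed = tod * np.exp(-4j * hwp_angle)
    return 2.0 * lowpass(mixed, fs, f_cut)

# Example: recover constant Q, U from a modulated timestream.
fs, f_hwp, n = 100.0, 2.0, 1000            # 10 s of data, 2 Hz HWP rotation
t = np.arange(n) / fs
chi = 2.0 * np.pi * f_hwp * t
tod = 1.5 * np.cos(4 * chi) - 0.7 * np.sin(4 * chi)
out = demodulate(tod, chi, fs, f_cut=1.0)  # out.real ~ Q, -out.imag ~ U
```

The modulated signal lives at 4 x f_hwp (here 8 Hz), so the sampling rate must comfortably exceed it, and the complex mixed timestream plus FFT filtering illustrate where the memory and runtime overheads come from.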

Get correct noise weights in mapmaking when performing noisy runs

Currently, TOD weights are set to one (i.e. no noise weighting is performed during mapmaking). While this is fine for noiseless runs, one would like to weight the TOD properly when noise is included. There are several ways to weight the timestream, but one could simply use the PSD of the timestream (routines already exist). The problem is that this adds to the runtime (precompute and store on disk for MC runs?).
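As a starting point, an inverse-variance weight can be estimated from the white-noise level of the timestream periodogram. The sketch below illustrates the idea; the name and normalization conventions are ours, not those of the existing s4cmb routines.

```python
import numpy as np

def inverse_variance_weight(tod, fs):
    """Toy inverse-variance weight from the two-sided periodogram of a TOD.

    For white noise with variance sigma^2 sampled at fs, the two-sided PSD
    level is sigma^2 / fs, so averaging the periodogram (DC bin excluded)
    and multiplying by fs recovers sigma^2. Sketch only, not s4cmb code.
    """
    periodogram = np.abs(np.fft.rfft(tod)) ** 2 / (fs * len(tod))
    sigma2 = np.mean(periodogram[1:]) * fs
    return 1.0 / sigma2

rng = np.random.default_rng(42)
w = inverse_variance_weight(2.0 * rng.standard_normal(20000), fs=100.0)
# w should be close to 1 / sigma^2 = 0.25
```

For MC runs the periodogram (or a smoothed PSD estimate) could be precomputed once per detector and stored on disk, which is exactly the runtime trade-off mentioned above.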

Replace inifiles with python files for launching Apps?

The fact that we currently use inifiles to define the parameters used in an App is only historical.
There is no real advantage to inifiles over python if the main script is in python.
First, inifiles make things more complex (we needed to introduce an extra step, NormaliseParser, to decode them), and second, if one can do something in python without suffering much, one should.
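The proposed change is mechanically simple: a python parameter file is just a module, so it can be loaded directly. A minimal sketch (load_params and the parameter names are hypothetical, not the s4cmb API):

```python
import importlib.util
import os
import tempfile

def load_params(path):
    """Load a python parameter file as a module, replacing the
    inifile + NormaliseParser step (sketch of the proposal)."""
    spec = importlib.util.spec_from_file_location("params", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# A parameter "file" is then plain python assignments:
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("nside_out = 128\nname_strategy = 'deep_patch'\n")
    path = f.name
params = load_params(path)
os.unlink(path)
```

Parameters then support expressions, comments, and imports for free, with no parser to maintain.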

Scan Implementation

Does this include Neil's latest updates on scan strategy here?
What happens to subscan information? Do we have the turnarounds? How do you identify the subscans in the TOD?

Doctests are failing because dependencies got updated (severely)

I recently tried to run the doctests with python 3.7 on my machine, with the latest versions of the dependencies, and I got many failures... Inspecting further, it seems that there are two separate issues: python 3.7 vs. other versions, and dependency versions. I need to fix that...

Add a linter

The code quality is quite poor, and it should change (especially since many bugs could have been avoided). We should add a linter check in the GH actions.

Assertion Error in tod.py

I was trying to get the so_MC_app.py running and noticed I got an error that I didn't get with simple_app.py.
I traced it down to this difference in code between the two example apps:

== in simple_app.py ==

    d = []
    for det in tqdm(range(inst.focal_plane.nbolometer)):
        d.append(tod.map2tod(det))
    ## Project TOD to maps
    tod.tod2map(np.array(d), sky_out_tot)

== in so_MC_app.py ==

    for pair in tod.pair_list:
        d = np.array([tod.map2tod(det) for det in pair])
        ## Project TOD to maps
        tod.tod2map(d, sky_out_tot)

When I use the "so_MC_app" way in the simple app, I get the same error.
The error is:

Traceback (most recent call last):
File "examples/test/simple_app.py", line 179, in
tod.tod2map(d, sky_out_tot)
File "/data/home/joy/simons_observatory/code/s4cmb/s4cmb/tod.py", line 635, in tod2map
assert npixfp == self.point_matrix.shape[0]
AssertionError

Do you understand what's going on? Does the use of one pattern vs. the other have to match the "perpair" parameter?
Thanks!!

Pixels outside obspix with deep_patch

Hi Julien !

Is it normal to have pixels falling outside the list of observed pixels (obspix) with the "deep_patch" scanning strategy, when we set width = 130 (the default value)?

Below is the inverse condition number of each pixel (1 CES, nside = 128); looking at the mollview plot, I realized that some pixels fall clearly outside the main sky patch defined by the scan.

[image: rcond_128_issue]

Instrument Configuration Options

  1. It would be convenient to have a hardware input file in YAML that can be shared between TOAST and s4cmb, so that we can just import an experiment (SO, for example).
  2. We are also wondering if the following are implemented/easily implementable in the instrument setup:
     • square focal plane vs. hex focal plane layout
     • dichroic detectors
  3. Is there room for a hierarchy of detector info (wafer, physical location, electrical info) versus just a detector index? For example, a bolometer class with these attributes.
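Point 3 could be as small as a dataclass; the fields below are purely illustrative, not an agreed schema between TOAST and s4cmb:

```python
from dataclasses import dataclass

@dataclass
class Bolometer:
    """Hypothetical detector record: a hierarchy instead of a flat index."""
    index: int
    wafer: str
    band_ghz: float      # dichroic pixels -> two Bolometers sharing (x_mm, y_mm)
    x_mm: float          # physical location on the focal plane
    y_mm: float
    squid_channel: int   # readout (electrical) information

b = Bolometer(index=0, wafer="W01", band_ghz=90.0,
              x_mm=1.2, y_mm=-3.4, squid_channel=17)
```

Such records would also serialize naturally to the shared YAML hardware file suggested in point 1.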

Run continuous integration on pull requests

The CI is currently only triggered when code is pushed to a branch of the repository. We would like instead to trigger the workflow on push or pull request events (eventually including those from forks).

Action item: add the pull_request event in the workflow.
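A sketch of the corresponding trigger block (the exact file layout depends on the existing workflow configuration):

```yaml
# .github/workflows/ci.yml (fragment)
on:
  push:
  pull_request:
```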

no origin/dichroic branch

Hi Julien,

I would like to try the dichroich example:
https://github.com/JulienPeloton/s4cmb-resources/blob/master/Part1/s4cmb_dichroic_06.ipynb

for which I need to check out the 'origin/dichroic' branch.
I have never used git before, so I am a very new user. I tried looking at online forums but I still have the issue.

Basically, I do not see the 'origin/dichroic' branch in my local copy.
I tried several commands (fetch, pull origin master, etc.)...
I did:

git pull origin master

From https://github.com/MariaJK/s4cmb

  • branch master -> FETCH_HEAD
    Already up-to-date.

git branch -r

origin/HEAD -> origin/master
origin/beam_ellipticity
origin/master

Moreover, looking at the GitHub repo page, https://github.com/JulienPeloton/s4cmb,
I only see 'master' and 'correlated_noise'. Joy found the same.

Could you help me?
Thanks,
Maria

Index error in detector_pointing.py

When running with any ini file other than simple_parameters.py, this error comes up:

python examples/test/simple_app.py -inifile examples/inifiles/so_sac_parameters.py -tag test

Traceback (most recent call last):
File "examples/test/simple_app.py", line 155, in
mapping_perpair=params.mapping_perpair)
File "/data/home/joy/simons_observatory/code/s4cmb/s4cmb/tod.py", line 127, in init
self.get_boresightpointing()
File "/data/home/joy/simons_observatory/code/s4cmb/s4cmb/tod.py", line 426, in get_boresightpointing
lat=lat, ra_src=ra_src, dec_src=dec_src)
File "/data/home/joy/simons_observatory/code/s4cmb/s4cmb/detector_pointing.py", line 138, in init
self.ut1utc = get_ut1utc(self.ut1utc_fn, self.time[0])
File "/data/home/joy/simons_observatory/code/s4cmb/s4cmb/detector_pointing.py", line 53, in get_ut1utc
ut1utc = ut1utcs[uindex]
IndexError: index 2193 is out of bounds for axis 0 with size 2193
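The traceback suggests the requested timestamp falls past the last entry of the UT1-UTC table, so the computed index equals the table size. A defensive lookup could clamp to the table edges; this is a sketch with illustrative names, not the actual s4cmb internals:

```python
import numpy as np

def get_ut1utc_safe(epochs, ut1utc_values, t):
    """Return the UT1-UTC correction for time t, clamping out-of-range
    times to the first/last tabulated entry instead of raising IndexError."""
    idx = np.searchsorted(epochs, t, side="right") - 1
    idx = int(np.clip(idx, 0, len(ut1utc_values) - 1))
    return ut1utc_values[idx]

epochs = np.array([0.0, 10.0, 20.0])    # tabulated epochs (toy values)
values = np.array([0.1, 0.2, 0.3])      # UT1-UTC at those epochs
```

Whether clamping is acceptable, or the table should instead be extended to cover the observation dates, is a separate question: silently reusing a stale correction degrades the pointing accuracy.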

Generalized beam?

Is there a way to deal with beams without analytical solutions? For example, if someone provided a beam map, could this be used in the sims? If so, what format of beam map would be easy to implement?
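One possible route for a measured beam with no analytic form: azimuthally average the beam map into a radial profile, then turn it into a harmonic transfer function B_ell via a Legendre transform, which can then be applied to the sky a_lm. A numpy-only sketch (the integration scheme and names are ours, not an existing s4cmb feature):

```python
import numpy as np

def beam_profile_to_bl(theta, profile, lmax):
    """Harmonic transfer function from a radial beam profile:
    b_ell = 2 pi * integral B(theta) P_ell(cos theta) sin(theta) dtheta,
    returned normalized to b_0. Sketch only."""
    x = np.cos(theta)
    bl = np.empty(lmax + 1)
    for ell in range(lmax + 1):
        coeffs = np.zeros(ell + 1)
        coeffs[ell] = 1.0                      # select the P_ell polynomial
        p_ell = np.polynomial.legendre.legval(x, coeffs)
        integrand = profile * p_ell * np.sin(theta)
        # trapezoidal rule over the theta grid
        bl[ell] = 2.0 * np.pi * np.sum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))
    return bl / bl[0]

# Sanity check against a Gaussian beam, where b_ell ~ exp(-l(l+1) sigma^2 / 2):
sigma = 0.01                                   # beam width in radians
theta = np.linspace(0.0, 10.0 * sigma, 2000)
bl = beam_profile_to_bl(theta, np.exp(-theta**2 / (2.0 * sigma**2)), lmax=50)
```

This only captures the azimuthally symmetric part of the beam; asymmetric features (ellipticity, sidelobes) would need a full 2-D convolution. As for the input format, a HEALPix map or a flat-sky grid centered on the boresight both reduce easily to such a radial profile.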

Implement deprojection of spurious signals

Vast topic. The first things that come to my mind are

  • Modifying the TOD model, by including templates that will be filtered out in the mapmaking stage.
  • Deprojecting the signal at the TOD level directly (e.g. a la SPTPol for dealing with the detector crosstalk).

I might give a try soon at the first point.
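The first bullet amounts to a least-squares template fit: solve for amplitudes a = (T^T T)^{-1} T^T d and subtract T a from the timestream. A minimal sketch of that idea (not the eventual s4cmb design):

```python
import numpy as np

def deproject(tod, templates):
    """Remove best-fit template amplitudes from a timestream.

    templates: list of 1-D arrays (e.g. ground-synchronous or crosstalk
    templates). Returns (cleaned_tod, fitted_amplitudes). Sketch only.
    """
    T = np.vstack(templates).T             # shape (nsamples, ntemplates)
    amps, *_ = np.linalg.lstsq(T, tod, rcond=None)
    return tod - T @ amps, amps

# A spurious template added on top of a signal is recovered and removed:
n = np.arange(1000) / 1000.0
signal = np.sin(6.0 * np.pi * n)           # "true" sky signal
template = np.sin(2.0 * np.pi * n)         # spurious signal of known shape
cleaned, amps = deproject(signal + 3.0 * template, [template])
```

The same fit, done jointly with the mapmaking normal equations instead of on the raw TOD, corresponds to the filtered-template variant in the first bullet.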

Default ifort at NERSC does not support -openmp, but default f2py (numpy v1.12.1) only uses it.

This issue was brought up by Maria S.
By default on Cori, an Intel environment is loaded.
NERSC recently switched to a new default version of ifort (the Intel Fortran compiler), which no longer supports the -openmp option (it is now -qopenmp).

Unfortunately, f2py (which is used for pyslalib and s4cmb, for example) did not implement this change (it still uses -openmp by default for Intel), hence there will be a problem at compilation.
You can check this by typing the following in your terminal:

$ f2py -c --help-fcompiler

A quick workaround if you really want to use Intel is to unload the current module, and load a previous version:

$ module unload intel
$ module load intel/17.0.2.174

You will see a deprecation warning, but that should work.
Another option (which I use) is to use GNU compilers only.

I leave this issue open in case someone still has a problem.

Test failure in recent numpy / python 3.9 versions

I have been trying to do a recent reinstall with a fresh environment using conda and MacOS Ventura. The following command installs an environment that still works.

conda create -n s4cmb-dev -c conda-forge python=3.7 camb openmpi ephem pyslalib jupyter healpy coverage coveralls mpi4py

However, when switching just the python version to 3.9, the compilation seems OK but there are multiple failures at the test level. The difference between the two environments is the numpy/f2py version, which moves from 1.21 to 1.24.

**********************************************************************
File "/Users/gfabbian/Software/s4cmb-dev/s4cmb/s4cmb/tod.py", line 1321, in __main__.TimeOrderedDataPairDiff.tod2map
Failed example:
    for pair in tod.pair_list:
      d = np.array([tod.map2tod(det) for det in pair])
      tod.tod2map(d, m, gdeprojection=True)
Exception raised:
    Traceback (most recent call last):
      File "/Users/gfabbian/opt/anaconda3/envs/s4cmb-dev/lib/python3.9/doctest.py", line 1334, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest __main__.TimeOrderedDataPairDiff.tod2map[31]>", line 3, in <module>
        tod.tod2map(d, m, gdeprojection=True)
      File "/Users/gfabbian/Software/s4cmb-dev/s4cmb/s4cmb/tod.py", line 1406, in tod2map
        tod_f.tod2map_pair_f(
    tod_f.error: (shape(diff_weight, 0) == npix) failed for 14th argument npix: tod2map_pair_f:npix=139992

The behavior seems connected to how f2py/numpy interpret data types with the current fortran code in the repo. Suggestions welcome.
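A common workaround while the fortran interfaces are audited is to make the dtype and memory layout explicit on the python side before crossing the f2py boundary, since newer numpy versions stopped silently downcasting in several places. A hedged sketch (as_fortran_args is our name; the actual fix may instead belong in the f2py signatures):

```python
import numpy as np

def as_fortran_args(*arrays, dtype=np.float64):
    """Coerce arrays to the exact dtype and Fortran-contiguous layout an
    f2py-wrapped routine expects, instead of relying on implicit casts."""
    return tuple(np.asfortranarray(a, dtype=dtype) for a in arrays)

# e.g. before calling tod_f.tod2map_pair_f(d, w, ...):
d, w = as_fortran_args(np.arange(6).reshape(2, 3), np.ones((2, 3)))
```

Integer arguments (like npix in the failing call) deserve the same treatment with the integer kind the fortran code declares.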

Atmospheric Noise

We need a realistic atmospheric model and common-mode subtraction for many systematics studies: a model with some level of 1/f noise and correlation across the focal plane, with both temporal and spatial variations, and some residual random noise left after common-mode subtraction. It may be that we can model this in TOAST for a given focal plane and import a per-pixel TOD. (Let's discuss this.)
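As a strawman for discussion, the basic ingredients (1/f spectrum, a common mode shared across the focal plane, residual white noise) can be sketched in a few lines; the parameter names and the mixing scheme are illustrative, not a validated atmospheric model:

```python
import numpy as np

def one_over_f_noise(n, fs, fknee=1.0, alpha=2.0, rng=None):
    """Noise with PSD ~ 1 + (fknee / f)^alpha via Fourier-domain shaping."""
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    shaping = np.ones_like(freqs)
    shaping[1:] = np.sqrt(1.0 + (fknee / freqs[1:]) ** alpha)
    return np.fft.irfft(spec * shaping, n=n)

def focal_plane_noise(ndet, n, fs, common_fraction=0.8, rng=None):
    """One common 1/f mode seen by all detectors + independent white noise."""
    rng = np.random.default_rng() if rng is None else rng
    common = one_over_f_noise(n, fs, rng=rng)
    white = rng.standard_normal((ndet, n))
    return common_fraction * common + np.sqrt(1.0 - common_fraction**2) * white

tods = focal_plane_noise(ndet=4, n=10000, fs=100.0, rng=np.random.default_rng(0))
```

Spatial structure (the correlation decaying with detector separation, wind-driven drift) is exactly what this toy model lacks and what TOAST's atmosphere simulation provides.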

loading s4cmb modules from NERSC jupyter

Hi Julien,

I am sorry, I cannot load s4cmb from NERSC jupyter.
Here is the sequence of commands I did:

(from a putty terminal)
I did:

$ module unload intel
$ module load gcc/4.9.3

$ module load python/2.7-anaconda
$ pip install s4cmb --user
$ pip install -r requirements.txt
$ make

(I found no errors when running the commands up to 'make'.)

and then I open the jupyter notebook from

https://jupyter-dev.nersc.gov

The notebook does not find the module 'instrument' or the following ones (see attached snapshots of my account).

I have tried also loading
$ module load python/2.7-anaconda-4.4
without any difference.

Thanks,
Maria

[images: nersc4, nersc5, nersc6]

Observed Pixels

When scanning multiple CESs, the observed pixels are determined from the first CES and kept fixed for the rest, but they may change slightly with each CES (since the array footprint is extended).
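A straightforward fix would be to build the footprint from the union of the observed pixels over all CESs rather than freezing it after the first one. Sketch (combined_obspix is our name, not an existing routine):

```python
import numpy as np

def combined_obspix(obspix_per_ces):
    """Union of observed pixel indices over all CESs, sorted and unique."""
    return np.unique(np.concatenate([np.asarray(p) for p in obspix_per_ces]))

# Three CESs with slightly different coverage:
footprint = combined_obspix([[10, 11, 12], [12, 13], [9, 10]])
```

This requires knowing all CES boundaries up front (or growing the footprint incrementally and padding earlier partial maps), which is the main design question here.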
