
NaMaster


NaMaster is a C library, Python module and standalone program to compute full-sky angular cross-power spectra of masked fields with arbitrary spin and an arbitrary number of known contaminants using a pseudo-Cl (aka MASTER) approach. The code also implements E/B-mode purification and is available in both full-sky and flat-sky modes.
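As a toy illustration of the pseudo-Cl (MASTER) idea only, not of the NaMaster API: masking a field couples different multipoles through a mode-coupling matrix, and the estimator inverts that coupling to recover an unbiased spectrum. All numbers below are made up; NaMaster computes the real matrix from the mask's power spectrum.

```python
import numpy as np

# Toy mode-coupling matrix M and true binned spectrum (made-up numbers).
rng = np.random.default_rng(0)
n_bins = 5
M = np.eye(n_bins) + 0.1 * rng.random((n_bins, n_bins))  # nearly diagonal coupling
cl_true = 1.0 / (1.0 + np.arange(n_bins))                # a steep "true" spectrum

# Masking couples multipoles: the observed pseudo-Cl is M @ cl_true.
cl_coupled = M @ cl_true

# The MASTER step: invert the coupling to recover an unbiased estimate.
cl_decoupled = np.linalg.solve(M, cl_coupled)
assert np.allclose(cl_decoupled, cl_true)
```

In the real code this corresponds to computing a coupling matrix from the mask (an `NmtWorkspace`) and then decoupling the measured pseudo-spectrum with it.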

Installation

There are different ways to install NaMaster. In rough order of complexity, they are:

Conda forge

Unless you care about optimizing the code, it's worth giving this one a go. The conda recipe for NaMaster is currently hosted on conda-forge (infinite kudos to Mat Becker for this). In this case, installing NaMaster means simply running:

conda install -c conda-forge namaster

If that works for you and you don't care about optimizing the code too much, skip the rest of this section. If you don't have admin permissions, you can give virtual environments a try (or else follow the instructions below).

PyPI

NaMaster is also hosted on PyPI. Installing it should be as simple as running:

python -m pip install pymaster [--user]

(add --user if you don't have admin permissions). Note that this will compile the code on your machine, so you'll need to have installed its dependencies.

From source

If all the above fail, try to install NaMaster from its source. You should first clone this github repository. Then follow these steps:

1. Install dependencies.

Install the dependencies listed in the Dependencies section below. Note that some of them (libsharp and HEALPix) may not be necessary, as pymaster will attempt to install them automatically.

2. Install the python module

Installing the python module pymaster should be as simple as running

python setup.py install [--user]

or, even better, if you can use pip:

pip install . [--user]

where the optional --user flag can be used if you don't have admin privileges.

You can check that the python installation works by running the unit tests:

pytest -vv pymaster

Note that the test directory, containing all unit tests, also contains all the sample python scripts described in the documentation.

If you installed pymaster via pip, you can uninstall everything by running

pip uninstall pymaster

Note that the C library is automatically compiled when installing the python module. If you care about the C library at all, or you have trouble compiling it, see the next section.

3. Install the C code (optional)

The script scripts/install_libnmt.sh contains the instructions run by setup.py to compile the C library (libnmt.a). You may have to edit this file or make sure to include any missing compilation flags if setup.py encounters issues compiling the library.

If you need the C library for your own code, scripts/install_libnmt.sh installs it in _deps/lib and _deps/include. The script also generates an executable namaster, residing in _deps/bin, that can be used to compute power spectra. Using this executable directly is discouraged in favour of the python module.

You can check that the C code works by running

make check

If all the checks pass, you're good to go.

Installing on Mac

NaMaster can be installed on Mac using any of the methods above as long as you have either the clang compiler with OpenMP capabilities or the gcc compiler. Both can be accessed via homebrew. If you don't have either, you can still try the conda installation above.

Note: NaMaster is not supported on Windows machines yet.

Documentation

The following sources of documentation are available for users: the online documentation at namaster.readthedocs.io, the sample scripts in the test directory, and the companion paper.

Dependencies

NaMaster has the following dependencies, which should be present in your system before you can install the code from source:

  • GSL. Version 2 required (note that on certain systems you may also need to install openblas; see this issue).
  • FFTW. Version 3 required. Install with --enable-openmp and potentially also --enable-shared.
  • cfitsio. Any version >3 should work.

Besides these, NaMaster will attempt to install the following two dependencies. If this fails, or if you'd like to use your own preinstalled versions, follow these instructions:

  • libsharp. setup.py attempts to download and install libsharp automatically by running the script scripts/install_libsharp.sh. If you encounter any trouble during this step, inspect the contents of that file. Libsharp gets installed in _deps/lib and _deps/include. If you want to use your own preinstalled version instead, you should symlink it into the directory _deps, such that _deps/lib/libsharp.a can be seen. See the instructions in NERSC_installation.md for more details on libsharp.
  • HEALPix. Like libsharp, HEALPix is automatically installed by setup.py via the script scripts/install_libchealpix.sh (have a look there if you run into trouble). HEALPix gets installed in _deps/lib and _deps/include. If you want to use your own preinstalled version, symlink it into the directory _deps, such that _deps/lib/libchealpix.a can be seen. Any version >2 should work. Only the C libraries are needed.
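The symlink layout described above can be sketched as follows. This demonstration runs in a scratch directory with a stand-in file for the real static library; the install prefix is hypothetical and should be replaced with wherever your libsharp actually lives.

```shell
# Lay out a fake libsharp install and symlink it so _deps/lib/libsharp.a
# becomes visible to setup.py (scratch paths; adapt to your own prefix).
scratch=$(mktemp -d)
cd "$scratch"
mkdir -p libsharp/auto/lib libsharp/auto/include
touch libsharp/auto/lib/libsharp.a          # stand-in for the real static library
mkdir -p _deps
ln -sf "$scratch/libsharp/auto/lib" _deps/lib
ln -sf "$scratch/libsharp/auto/include" _deps/include
test -e _deps/lib/libsharp.a && echo "libsharp visible to setup.py"
```

With a real installation you would point the symlinks at your actual libsharp prefix instead of the scratch directory.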

Licensing, credits and feedback

You are welcome to re-use the code, which is open source and freely available under terms consistent with BSD 3-Clause licensing (see LICENSE).

If you use NaMaster for any scientific publication, we kindly ask you to cite this github repository and the companion paper https://arxiv.org/abs/1809.09603. Special kudos should go to the following heroes for their contributions to the code:

  • Mat Becker (@beckermr)
  • Giulio Fabbian (@gfabbian)
  • Martina Gerbino (@mgerbino)
  • Daniel Lenz (@DanielLenz)
  • Zack Li (@xzackli)
  • Thibaut Louis (@thibautlouis)
  • Tom Cornish (@tmcornish)

For feedback, please contact the author via github issues or email ([email protected]).

Contributors

beckermr, carlosggarcia, damonge, daniellenz, joezuntz, mreineck, slosar, xgarrido, xzackli


Issues

masked maps

It would be good to allow users to pass maps that have already been masked.

Memory and map

Currently NmtFields hold a copy of both the alms, the maps and the masks that were used to create them. For large maps, this can be very memory-heavy. We should include an option for fields to free up this memory upon creation if possible.

Missing file for tests

In branch:rect_dam

> python -m unittest discover -v
FileNotFoundError: [Errno 2] No such file or directory: 'test/benchmarks/msk_car.fits'

How to specify the path of 'fitsio.h' during the 'make' process ?

Hi, thanks for the wonderful code; I'm trying to install it on Ubuntu 16.04.
I have installed the dependencies. After ./configure, the following error came up during make:
src/flatsky_utils.c:2:20: fatal error: fitsio.h: No such file or directory
compilation terminated.
Makefile:719: recipe for target 'src/libnmt_la-flatsky_utils.lo' failed
make[1]: *** [src/libnmt_la-flatsky_utils.lo] Error 1
make[1]: Leaving directory '/home/yao/NaMaster'
Makefile:424: recipe for target 'all' failed.
But I have already added the path of cfitsio/include and cfitsio/lib to my .zshrc file.
Should I do something else besides adding the path of cfitsio to the .zshrc ?

Save workspaces to a standard format

Currently the code saves workspace information into a custom binary format. This is efficient for I/O, but ruins backward compatibility, since the format needs to change all the time. I'd like to save this information to FITS files.

Efficient workspace storage

Currently NmtWorkspaces save the masks of the two fields that went into computing them. This can become a nuisance for maps with high pixel resolutions. The main reason why they get saved is because they are then used to compute covariances, but this could be done in a different way, by creating the corresponding NmtCovarianceWorkspace object from the 4 associated NmtFields.

error on make (C-library) | conda installation?

Hi,

I'm having some trouble installing the package on Linux. I can successfully configure the files, but I get the following error on make:

make  all-am
make[1]: Entering directory `/auto/rcf-proj3/software/NaMaster'
/bin/sh ./libtool  --tag=CC   --mode=link gcc -std=gnu99  -fPIC -I/home/rcf-proj3/gsl/include/ -I/home/rcf-proj3/cfitsio/include  -lfftw3_omp -lfftw3 -lgsl -lgslcblas -lcfitsio -lsharp -lfftpack -lc_utils -lm -L/home/rcf-proj3/gsl/lib -L/home/rcf-proj3/cfitsio -L/home/rcf-proj3/software/libsharp/auto/lib -L/home/rcf-proj3/software/Healpix_3.50/lib -o libnmt.la -rpath /home/rcf-proj3/software/NaMaster-C/lib src/libnmt_la-utils.lo src/libnmt_la-flatsky_utils.lo src/libnmt_la-healpix_extra.lo src/libnmt_la-nmt_bins.lo src/libnmt_la-nmt_bins_flat.lo src/libnmt_la-nmt_field.lo src/libnmt_la-nmt_field_flat.lo src/libnmt_la-nmt_mask.lo src/libnmt_la-nmt_mask_flat.lo src/libnmt_la-nmt_master.lo src/libnmt_la-nmt_covar.lo src/libnmt_la-nmt_master_flat.lo src/libnmt_la-nmt_covar_flat.lo  -lfftw3 -lchealpix -lsharp -lcfitsio -lgsl -lgslcblas -lm 
libtool: warning: library '/home/rcf-proj3/gsl/lib/libgslcblas.la' was moved.
libtool: link: gcc -shared  -fPIC -DPIC  src/.libs/libnmt_la-utils.o src/.libs/libnmt_la-flatsky_utils.o src/.libs/libnmt_la-healpix_extra.o src/.libs/libnmt_la-nmt_bins.o src/.libs/libnmt_la-nmt_bins_flat.o src/.libs/libnmt_la-nmt_field.o src/.libs/libnmt_la-nmt_field_flat.o src/.libs/libnmt_la-nmt_mask.o src/.libs/libnmt_la-nmt_mask_flat.o src/.libs/libnmt_la-nmt_master.o src/.libs/libnmt_la-nmt_covar.o src/.libs/libnmt_la-nmt_master_flat.o src/.libs/libnmt_la-nmt_covar_flat.o   -Wl,-rpath -Wl,/home/rcf-proj3/gsl/lib -Wl,-rpath -Wl,/home/rcf-proj3/gsl/lib -lfftw3_omp -lfftpack -lc_utils -L/home/rcf-proj3/gsl/lib -L/home/rcf-proj3/cfitsio -L/home/rcf-proj3/software/libsharp/auto/lib -L/home/rcf-proj3/software/Healpix_3.50/lib -lfftw3 -lchealpix -lsharp -lcfitsio /home/rcf-proj3/gsl/lib/libgsl.so /home/rcf-proj3/gsl/lib/libgslcblas.so -lm    -Wl,-soname -Wl,libnmt.so.0 -o .libs/libnmt.so.0.0.0
/usr/bin/ld: /home/rcf-proj3/software/Healpix_3.50/lib/libchealpix.a(chealpix.s.o): relocation R_X86_64_32S against `.rodata' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /home/rcf-proj3/software/libsharp/auto/lib/libsharp.a(sharp.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /home/rcf-proj3/software/libsharp/auto/lib/libsharp.a(sharp_geomhelpers.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /home/rcf-proj3/software/libsharp/auto/lib/libsharp.a(sharp_core.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /home/rcf-proj3/software/libsharp/auto/lib/libsharp.a(sharp_legendre_roots.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /home/rcf-proj3/software/libsharp/auto/lib/libsharp.a(sharp_ylmgen_c.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status
make[1]: *** [libnmt.la] Error 1
make[1]: Leaving directory `/auto/rcf-proj3/software/NaMaster'
make: *** [all] Error 2

I don't understand what is causing the warning:
libtool: warning: library '/home/rcf-proj3/gsl/lib/libgslcblas.la' was moved.
When I check the directory, the file is there...

I also tried recompiling by passing -fPIC to CFLAGS and CPPFLAGS, but I still get the same error.

Is there an easy way to install the python module (using pip or conda) without explicitly installing the C library?

Weird feature with Nside=8192

I was trying to push some new machines using Nside=8192, and ran into a weird feature: if you import pymaster first and then healpy, reading the Nside=8192 file raises a segmentation fault, whereas importing healpy first and then pymaster opens it without a problem. This is probably related to my installation and not really a problem in itself, but I figured I would document it.

dyld: Symbol not found: _util_malloc_

Great paper presentation today, looking forward to diving into it and using the code! Will have to debug this first, though.

Just copying the issues from damonge/NaMaster#24, where several people have run into the following:

First executing the following, which works fine.

./configure
make
make install

Then running make check gives the following:

TEST 16/53 nmt:he_beams [OK]
TEST 17/53 nmt:he_alm2cl dyld: lazy symbol binding failed: Symbol not found: _util_malloc_
  Referenced from: /Users/dlenz/software/pymodules/NaMaster/.libs/libnmt.0.dylib
  Expected in: flat namespace

dyld: Symbol not found: _util_malloc_
  Referenced from: /Users/dlenz/software/pymodules/NaMaster/.libs/libnmt.0.dylib
  Expected in: flat namespace

/bin/sh: line 1: 62678 Abort trap: 6           ${dir}$tst
FAIL: test/check_nmt

I get the very same error when trying to run the namaster executable.

It might make sense to add a Travis build that uses a typical macOS setup, to catch this and similar errors.

Locating Libsharp issue

Hello,

I am sorry that this is a simple issue. I have been trying to install NaMaster without using Conda. I followed the instructions for downloading the dependencies and as far as I can tell Libsharp installed correctly. However, when I run ./configure for NaMaster I get the following issue:

"checking for library containing sharp_Ylmgen_destroy... no
configure: error: Couldn't find libsharp"

I believe that it has to do with NaMaster not knowing what directory Libsharp is in. On my system it is located at /mnt/d/ubuntu/libsharp-1.0.0/. I was wondering if someone could point me in the direction of how to correct this issue.

Thank you.

namaster does not support healpix missing pixels in pure B

B-mode purification seems to break when we have the situation:

  1. a pixel is healpix UNSEEN in QU
  2. the mask is zero at the pixel location
  3. pure B is turned on

Thinking a little bit about how pure B must work, there is probably some peeking under the mask. @brandonshensley told me about this problem, and I recommended that for now, he sets the missing pixels in his maps to zero (this was the choice made in the Planck power spectrum analysis, making an appeal to authority here). I'm not sure this is totally valid, since the B operator is effectively some sort of local vector operation.

The Planck 2015 maps are inpainted (they query the mean pixel value of a disk around each missing pixel), so we didn't have to deal with this. The Planck 2018 maps instead use the UNSEEN value. I think these 2018 maps will probably be the most common data product used with NaMaster in the near future, so it'd be nice if we treated them correctly out of the box.

Minimal Working Example

I've slightly modified the standard pure B example to set the first pixel of the maps to the UNSEEN healpix value (mask is already zero there), to provide a minimal working example.

import numpy as np
import healpy as hp
import matplotlib.pyplot as plt
import pymaster as nmt

# This script describes the computation of polarized power spectra using the
# pure-E and B approach

# We'll run this many simulations
nsim = 10
# HEALPix map resolution
nside = 256

# Let us first create a square mask:
msk = np.zeros(hp.nside2npix(nside))

th, ph = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
ph[np.where(ph > np.pi)[0]] -= 2 * np.pi
msk[np.where((th < 2.63) & (th > 1.86) &
             (ph > -np.pi / 4) & (ph < np.pi / 4))[0]] = 1.
print("What is the value of the first pixel in the mask?", msk[0])

# Now we apodize the mask. The pure-B formalism requires the mask to be
# differentiable along the edges. The 'C1' and 'C2' apodization types
# supported by mask_apodization achieve this.
msk_apo = nmt.mask_apodization(msk, 10.0, apotype='C1')

# Select a binning scheme
b = nmt.NmtBin(nside, nlb=16)
leff = b.get_effective_ells()

# Read power spectrum and provide function to generate simulated skies
l, cltt, clee, clbb, clte = np.loadtxt('cls.txt', unpack=True)


def get_fields():
    mp_t, mp_q, mp_u = hp.synfast([cltt, clee, clbb, clte],
                                  nside=nside, new=True, verbose=False)
    
    # SET THE FIRST PIXEL TO UNSEEN =========================
    mp_q[0] = hp.UNSEEN
    mp_u[0] = hp.UNSEEN
    print("Setting the first pixels in the map to UNSEEN.")
    # =======================================================
    
    # This creates a spin-2 field without purifying either E or B
    f2_np = nmt.NmtField(msk_apo, [mp_q, mp_u])
    # This creates a spin-2 field with both pure E and B.
    f2_yp = nmt.NmtField(msk_apo, [mp_q, mp_u], purify_e=True, purify_b=True)
    # Note that generally it's not a good idea to purify both,
    # since you'll lose sensitivity on E
    return f2_np, f2_yp


# We initialize two workspaces for the non-pure and pure fields:
f2np0, f2yp0 = get_fields()
w_np = nmt.NmtWorkspace()
w_np.compute_coupling_matrix(f2np0, f2np0, b)
w_yp = nmt.NmtWorkspace()
w_yp.compute_coupling_matrix(f2yp0, f2yp0, b)


# This wraps up the two steps needed to compute the power spectrum
# once the workspace has been initialized
def compute_master(f_a, f_b, wsp):
    cl_coupled = nmt.compute_coupled_cell(f_a, f_b)
    cl_decoupled = wsp.decouple_cell(cl_coupled)
    return cl_decoupled


# We now iterate over several simulations, computing the
# power spectrum for each of them
data_np = []
data_yp = []
for i in np.arange(nsim):
    print(i, nsim)
    fnp, fyp = get_fields()
    data_np.append(compute_master(fnp, fnp, w_np))
    data_yp.append(compute_master(fyp, fyp, w_yp))
data_np = np.array(data_np)
data_yp = np.array(data_yp)
clnp_mean = np.mean(data_np, axis=0)
clnp_std = np.std(data_np, axis=0)
clyp_mean = np.mean(data_yp, axis=0)
clyp_std = np.std(data_yp, axis=0)

# Now we plot the results
plt.figure()
plt.title('$BB$ error', fontsize=18)
plt.plot(leff, clnp_std[3], 'r-', lw=2, label='Standard pseudo-$C_\\ell$')
plt.plot(leff, clyp_std[3], 'b-', lw=2, label='Pure-$B$ estimator')
plt.xlim([2, 512])
plt.xlabel('$\\ell$', fontsize=18)
plt.ylabel('$\\sigma(C_\\ell)$', fontsize=18)
plt.legend(loc='upper right', frameon=False)
plt.loglog()
plt.show()


I think @bthorne93 might have had this issue once?

Additional NERSC installation instructions

Just opening a permanent issue in case it's useful for others. Note that the NERSC_installation.md instructions have been updated after the Cori upgrade, so I'd advise trying those first.

@akrolewski reports the following (many kudos to him!):

after the recent nersc upgrade, found that I needed to do a few more things than you had enumerated on your "nersc_installation" readme on github. (which was very helpful by the way). I don't know how many of these issues are specific to my setup or are more generic, but I figured I'd pass this along in case it can be helpful to others (and am happy to add some notes to github as well, if you think it's useful).

These were as follows:

  1. I couldn't get the healpix installation to work, but I instead was able to get away with using the pre-installed healpix on nersc. This meant that my .bashrc.ext looked like the following:
export PATH=$HOME/bin:/global/common/cori/contrib/hpcosmo/hpcports_gnu-4.0/healpix-3.30.1_62c0405b-4.0/bin:$PATH
export LD_LIBRARY_PATH=$GSL_DIR/lib:/opt/cray/pe/fftw/3.3.6.3/haswell/lib:/global/common/cori/contrib/hpcosmo/hpcports_gnu-4.0/healpix-3.30.1_62c0405b-4.0/lib:$LD_LIBRARY_PATH:$HOME/lib
export LDFLAGS+=" -fopenmp -L$GSL_DIR/lib -L$HOME/lib -L/global/common/cori/contrib/hpcosmo/hpcports_gnu-4.0/healpix-3.30.1_62c0405b-4.0/lib -L/opt/cray/pe/fftw/3.3.6.3/haswell/lib"
export CPPFLAGS+=" -fopenmp -I$GSL_DIR/include -I/usr/common/software/cfitsio/3.370-reentrant/hsw/intel/include -I$HOME/include -I/global/common/cori/contrib/hpcosmo/hpcports_gnu-4.0/healpix-3.30.1_62c0405b-4.0/include"
export CC=cc
  2. I've always found that I needed to add the following when compiling:
    module load cray-fftw

  3. (I believe the instructions in the updated NERSC_installation.md should take care of this one). When trying to set up pymaster, I ran into a bug where python couldn't find libgsl.so.19 (but the c-code namaster worked fine, it was just an issue with pymaster):
    ImportError: libgsl.so.19: cannot open shared object file: No such file or directory
    So I created a symbolic link to libgsl.so.23 in my $HOME/lib directory:
    ln -s $GSL_DIR/lib/libgsl.so.23 libgsl.so.19

  4. I found that while pymaster worked fine on the login nodes, on the compute nodes I got the following error:
    seg 0, mem 0: remap(0x555555754000,0x2000,0x2000,0x3,0x2aaaaab18000) failed, error 14
    I got around this (thanks to Rollin Thomas at NERSC) by doing module unload craype-hugepages2M before compiling, and then also module unload craype-hugepages2M before running pymaster.

Python test failure - installation success unclear from examples

I've installed the C package, from which all tests apart from one (TEST 20/55 nmt:he_qdisc [SKIPPED]) passed. However, after the python setup, 22/48 unit tests fail.

It's difficult to tell from the readthedocs examples what the output plots are meant to look like (it would be good to plot them after the code snippets). When comparing with full-sky anafast outputs, I get a completely different answer with NaMaster.

The install/testing output:
installation_output.txt

My installation script:
export CFLAGS='-I/usr/local/Cellar/gsl/2.5/include/ -I/usr/local/Cellar/fftw/3.3.8/include/ -I/usr/local/Cellar/cfitsio/3.450/include/ -I/usr/local/Cellar/healpix/3.40/include/ -I/Users/ucapnje/Documents/software/libsharp/auto/include/'
export LDFLAGS='-L/usr/local/Cellar/cfitsio/3.450/lib/ -L/usr/local/Cellar/gsl/2.5/lib/ -L/Users/ucapnje/Documents/software/libsharp/auto/lib/ -L/usr/local/Cellar/healpix/3.40/lib/ -L/usr/local/Cellar/fftw/3.3.8/lib/ -L/Users/ucapnje/Documents/software/NaMaster-master/install_dir/lib/'
./configure --prefix=/Users/ucapnje/Documents/software/NaMaster-master/install_dir
make
make install
make check
export LDFLAGS='-L/usr/local/Cellar/cfitsio/3.450/lib/ -L/usr/local/Cellar/gsl/2.5/lib/ -L/Users/ucapnje/Documents/software/libsharp/auto/lib/ -L/usr/local/Cellar/healpix/3.40/lib/ -L/usr/local/Cellar/fftw/3.3.8/lib/ -L/Users/ucapnje/Documents/software/NaMaster-master/install_dir/lib/'
python setup.py install
python -m unittest discover -v

TEB mode-coupling matrices

NaMaster can compute mode-coupling matrices for spin0-spin0, 0-2 and 2-2 correlations, which gives it a lot of flexibility to only compute whatever you need for a given problem. This can be a slight annoyance if you want to compute all the mode-coupling matrix elements for a (spin0, spin2) pair (including TT, TE, TB, etc.), since some Wigner 3j symbols need to be computed twice.

Although this is not a huge problem (only up to a factor 2 in computational time with respect to the current setup for MCMs), it could be easily solved by passing an is_teb flag when creating an NmtWorkspace object that computes all elements on the fly.

Different-spin window functions

Allow users to pass spin-0, spin-1 and spin-2 window functions when computing the pure-E/B power spectra.
Even more ideally: compute optimal window functions internally in NaMaster.

Kernel Dying in Jupyter Notebook

Hi,

I installed a fresh copy of pymaster using conda on python 3.7, and when I run a simple line like

nmt.synfast_flat(Nx, Ny, Lx, Ly, [Cl_TT],[0])

in a Jupyter Notebook, I get the Kernel Restarting error:

The kernel appears to have died. It will restart automatically.

When I run the same code in iPython from the terminal it seems to run with no problem though.

Any ideas how I can fix this?

Thanks.

make check failed on test/check_nmt

Hi,

I have built NaMaster with these following packages provided by OpenHPC repo:

  • gnu8/8.3.0 (GCC)
  • mvapich2/2.3, openmpi3/3.1.3
  • fftw/3.3.8
  • gsl/2.5
  • libsharp compiled with either mvapich2 or openmpi3

and with healpix.x86_64 and chealpix.x86_64 from epel repo. The OS is CentOS7 (3.10.0-957.el7.x86_64).

The make check process fails on

TEST 34/66 nmt:bins_f_ell [FAIL]
  ERR: test/nmt_test_bins.c:30  expected 5.252e+05, got 5.252e+05 (diff -1.164e-10, tol 1.000e-10)

Ship libsharp with namaster.

If we could ship libsharp with namaster we wouldn't have to worry about having to install it separately, and it may make it easier for us to get it to just pip install.

Bandpower windows

NaMaster currently relies on users calling decouple_cell(couple_cell(...)) to convolve input power spectra with the bandpower window functions. It'd be good if the bandpower windows were also easily accessible.

Spin-2 covariance matrices

Develop a framework to compute covariance matrices for power spectra involving spin-2 fields (the current functionality only covers spin-0)

Have an lmax keyword

Apologies for spamming you guys with issues, I'm just really looking forward to using NaMaster.

Did you have any plans to introduce an lmax keyword? For most use cases, I don't see that people require the default lmax of 3*nside.

Moreover, the beam for e.g. Planck is only published for l ~< 4000, which would not suffice for analyses at nside=2048.

I'm currently testing pymaster with Planck data, and just initiating via nmt.NmtField() takes 1-2 min per field, plus 30+ min to do nmt.compute_full_master() on a high-end 2016 Macbook. Does that match your experience, or should it be faster?

Binning

Hello,

I successfully installed pymaster with conda. I am trying to reproduce one of your example scripts (Example 5: using workspaces) and I get the error AttributeError: type object 'NmtBin' has no attribute 'from_nside_linear' when trying to create a binning scheme.

Do you have any idea why the function seems not to be present? Which format should the output of this function have? I can write one myself as a workaround...

using flatmaps conda install

I've installed namaster/pymaster with conda.

When I attempt to import flatmaps (taken from a colleague's example), it isn't found. To use flatmaps, we currently need to download flatmaps.py separately, but this could be confusing later when the repo is updated. Is there a better way to access this module?

Ell-weighting

Allow for the computation of pseudo-C_ells where the underlying true spectrum is assumed to be constant in each bandpower bin when multiplied by an ell-dependent factor f_ell.

One typical example of this is f_ell = ell*(ell+1)/2/pi. This is useful when you have steep spectra.
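The effect of such a weighting can be sketched with numpy (a toy power-law spectrum, not NaMaster's internal implementation): after multiplying by f_ell, a steep spectrum becomes nearly flat, so assuming it is constant within each bandpower is a much better approximation.

```python
import numpy as np

ells = np.arange(2, 768)
f_ell = ells * (ells + 1) / (2 * np.pi)  # the typical D_ell weighting

# A steep toy spectrum ~ ell^-2: nearly flat once weighted by f_ell.
cl = 1.0 / ells**2
dl = f_ell * cl  # ~ (1 + 1/ell) / (2*pi), slowly varying across ell

# The weighted spectrum varies far less (relatively) than the raw one,
# so "constant per bandpower" is a much better approximation for dl.
assert dl.std() / dl.mean() < cl.std() / cl.mean()
```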

Make theory prediction a bit more user-friendly

If I have an array for the theory C_ell prediction tcl that is defined on np.arange(lmax) where lmax is quite a bit greater than the maximum multipole in the binning scheme (say, based on the nside), I still get an error:

Traceback (most recent call last):
  File "sim.py", line 76, in <module>
    cl_th = w.decouple_cell(w.couple_cell([tcl]))[0]
  File "/home/r/rbond/msyriac/.local/lib/python3.6/site-packages/pymaster-1.0-py3.6-linux-x86_64.egg/pymaster/workspaces.py", line 142, in couple_cell
    self.wsp.ncls * (self.wsp.lmax + 1))
RuntimeError: Passing inconsistent arguments from python

I get around this by doing instead:

cl_th = w.decouple_cell(w.couple_cell([tcl[:w.wsp.lmax+1]]))[0]

but it took me a while to figure that out. Could the slicing up to w.wsp.lmax+1 be done automatically whenever the maximum multipole of the input Cls is larger than w.wsp.lmax+1?

EDIT: This is on the easier_libsharp branch.
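The workaround above amounts to trimming the theory vector to the workspace's maximum multipole. A small helper sketch in plain numpy (`lmax` would come from `w.wsp.lmax`; the function name `trim_cls` is made up for illustration):

```python
import numpy as np

def trim_cls(cl_theory, lmax):
    """Cut (or zero-pad) a theory C_ell array to exactly lmax+1 entries."""
    cl_theory = np.asarray(cl_theory, dtype=float)
    if len(cl_theory) >= lmax + 1:
        return cl_theory[:lmax + 1]
    out = np.zeros(lmax + 1)
    out[:len(cl_theory)] = cl_theory
    return out

# e.g. a theory spectrum defined up to ell=6000, workspace lmax=767:
tcl = np.ones(6001)
assert trim_cls(tcl, 767).shape == (768,)
```

Doing this slicing automatically inside couple_cell whenever the input is longer than lmax+1 would make the behaviour the issue asks for.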

Computing power spectrum of already masked map

I need to compute the power spectrum of an already masked map, of which I have the apodized mask. If I just follow the examples, my map will be multiplied by the mask again and the final power spectrum will not be correct.

I adapted my script from the existing examples, but I could not confirm from the source code whether or not this makes sense. Given that my_map is masked with apodized_mask, whose binary version is binary_mask, and other_map is an unrelated map, is it correct to:
f_other = nmt.NmtField(apodized_mask, other_map)
w = nmt.NmtWorkspace()
w.compute_coupling_matrix(f_other, f_other, b)
f0 = nmt.NmtField(binary_mask, my_map)
cl = nmt.compute_coupled_cell(f0, f0)
cl_decoupled = w.decouple_cell(cl)

Thanks

Usage of CovarianceWorkspace.compute_coupling_coefficients()

I use the NaMaster obtained from conda-forge.
In example 8, NaMaster/test/sample_covariance.py,
the function compute_coupling_coefficients() requires four fields.
However, the actual required arguments seem to be two NmtWorkspaces.
Then, should I pass two NmtWorkspaces of two spin-0 fields to the function to calculate coupling coefficients even if I want a covariance of spin-2 fields?

Multipole order mismatch in get_coupling_matrix() routine

Hello,

When calling the new routine get_coupling_matrix() (on the B-mode purification test script, for example), I get a mixing matrix whose row and column ordering is mismatched: I couldn't recover the initial EE-EE, EE-BB, etc. blocks of the coupling matrix.

In addition, I compared a matrix from a workspace with the polarisation purification option on against a matrix with the purification option off (purify_e/purify_b = True/False). Both arrays are the same. I am surprised, since I would have expected the EE-BB mixing kernel block to be different.

Thanks,

Sylvain

What is coupling coefficients?

What does the function "compute_coupling_coefficients" actually compute in astro-ph/0307515 and/or arXiv/1609.09730?
I have this question because the covariance and the sample variance over 100 samples differ.

For the sample variance, I prepared 100 Gaussian fluctuated maps and calculated pseudo-C_\ell with two different maps, e.g. mapN x mapN+1, using NaMaster.

For the covariance with NaMaster, I followed the example in https://namaster.readthedocs.io/en/latest/sample_covariance.html.
The difference is that I prepared two pairs of fields,
f0_1, f2_1 and f0_2, f2_2, because I wanted to calculate the correlation between two different maps.
I modified "cw.compute_coupling_coefficients(f0, f0, f0, f0)" to "cw.compute_coupling_coefficients(f0_1, f0_2, f0_1, f0_2)",
and "w22.compute_coupling_matrix(f2, f2, b)" to "w22.compute_coupling_matrix(f2_1, f2_2, b)".
These show a difference from the sample variance.

However, when I revert the coupling-coefficient calculation from "cw.compute_coupling_coefficients(f0_1, f0_2, f0_1, f0_2)" back to "cw.compute_coupling_coefficients(f0, f0, f0, f0)", the covariance agrees with the sample variance.

Package conflict with CCL through conda

I have checked this on two different systems.

CCL and NaMaster conflict on the version of gsl:

  • CCL requires any newest version
  • NaMaster requires older versions

This means that once either of the two is installed through conda, the other one won't install because of this conflict.

Here is the output of conda install -c conda-forge pymaster when pyccl is installed, and equivalently, the output of conda install -c conda-forge pyccl when pymaster is installed:

Package gsl conflicts for:
gsl
pyccl -> gsl[version='>=2.6,<2.7.0a0']
namaster -> gsl[version='>=2.4,<2.5.0a0|>=2.5,<2.6.0a0']

Why is NaMaster's gsl pin capped at `<2.6.0a0`, i.e. below any 2.6 release? Surely it could work with 2.6 itself?

@beckermr tagging you because you got infinite kudos for putting NaMaster on conda-forge :)

Bins are a pain

Pasting this from a conversation with @amaurea

Constructing bin objects is tedious and confusing. An nside argument is needed, and setting it incorrectly leads to confusing errors about requesting too-high multipoles during the mode-coupling matrix calculation. I also wish sensible defaults were provided for the weights argument. Ideally one would be able to say: bins = Bins(lmax=5000, dl=50) or Bins(lmax=5000, fsky=0.03) or Bins([[0,50],[50,100],[150,200],...])

I agree with the above.
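As a sketch of what the proposed `Bins(lmax=..., dl=...)` form could mean under the hood (the `make_bins` helper is hypothetical, assuming contiguous bandpowers starting at l=2 with uniform weights):

```python
def make_bins(lmax, dl):
    """Hypothetical helper: contiguous bandpowers of width dl,
    starting at l=2 and truncated at lmax (exclusive upper edges)."""
    lowers = list(range(2, lmax + 1, dl))
    return [(lo, min(lo + dl, lmax + 1)) for lo in lowers]

bins = make_bins(lmax=5000, dl=50)
print(len(bins), bins[0], bins[-1])  # 100 (2, 52) (4952, 5001)
```

A list of explicit `(low, high)` pairs like this could also serve directly as the `Bins([[0,50],[50,100],...])` form, with no nside argument needed until the mode-coupling matrix is actually computed.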

lmax option in gaussian_covariance()

I got an error using the "gaussian_covariance" function:
Traceback (most recent call last):
File "MakeCovariance.py", line 79, in
w22, wb=w22
File "/home/cmb/yminami/.pyenv/versions/anaconda3-5.1.0/envs/fgenv/lib/python2.7/site-packages/pymaster/covariance.py", line 223, in gaussian_covariance
wa.wsp, wb.wsp, cla1b1, cla1b2, cla2b1, cla2b2, len_a * len_b
RuntimeError: Coupling coefficients only computed up to l=4096, but you require lmax=6143. Recompute this workspace with a larger lmax

I'd like lmax to be 2*nside, with nside = 2048.
I prepared the coupling coefficients in NmtCovarianceWorkspace with lmax = 2*nside, set NmtWorkspace.lmax = 2*nside by hand, and computed the Cls with lmax = 2*nside.
As far as I can tell these are all the arguments that carry lmax information, but it still fails.

Could you please tell me the best way to set lmax in the calculation of covariance matrix?
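One guess from the numbers in the traceback: this looks like a band-limit mismatch rather than a missing argument. With nside = 2048, an NmtField is band-limited at 3*nside - 1 by default, which is exactly the lmax=6143 the error asks for, while the covariance workspace here was only built up to 2*nside = 4096:

```python
nside = 2048

lmax_default = 3 * nside - 1   # default band limit of an NmtField
lmax_built = 2 * nside         # what the covariance workspace used here

print(lmax_default, lmax_built)  # 6143 4096
```

If that reading is right, the safest fix is probably to compute the NmtCovarianceWorkspace coefficients up to the same lmax as the NmtWorkspace (i.e. leave both at the 3*nside - 1 default) and apply any binning afterwards.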

Add example for rectangular pixel runs

Currently the examples in the documentation only cover HEALPix. We should write an example showing how to compute power spectra for rectangular pixellizations.

Fast beaming and binning

The effects of beams and binning on the MCM are trivially fast to compute (Eqs. 17 and 36 in the paper). We should have a method to recompute these effects if needed.
This is useful if you want to test different binning schemes, or if you are going to compute cross-correlations between different maps that share a mask but have different beams.
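The multiplicative structure that makes this fast can be sketched in plain numpy; the `rebeam_and_bin` helper below is hypothetical (a toy diagonal coupling matrix and uniform bin weights stand in for the real objects), but it mirrors how beams enter as a product of B_l factors along one index of the MCM, with binning as a weighted row sum:

```python
import numpy as np

def gaussian_beam(lmax, fwhm_arcmin):
    # Harmonic-space Gaussian beam transfer function B_l.
    sigma = np.radians(fwhm_arcmin / 60.0) / np.sqrt(8.0 * np.log(2.0))
    ells = np.arange(lmax + 1)
    return np.exp(-0.5 * ells * (ells + 1) * sigma ** 2)

def rebeam_and_bin(mcm, beam1, beam2, bin_index, n_bins):
    # Multiply the beam product into the unbeamed coupling matrix along
    # its second index, then average rows into bandpowers.
    beamed = mcm * (beam1 * beam2)[None, :]
    return np.array([beamed[bin_index == b].mean(axis=0)
                     for b in range(n_bins)])

lmax = 383
mcm = np.eye(lmax + 1)                 # toy (diagonal) coupling matrix
bl = gaussian_beam(lmax, fwhm_arcmin=30.0)
bin_index = np.arange(lmax + 1) // 16  # 24 bandpowers of width 16
binned = rebeam_and_bin(mcm, bl, bl, bin_index, n_bins=24)
print(binned.shape)  # (24, 384)
```

Because the expensive mask-coupling part of the MCM never changes, re-running only these two cheap steps for a new beam or binning scheme avoids recomputing the whole workspace.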

Provide a no-OpenMP option and use FFTW with pthreads instead of OpenMP

OpenMP doesn't play nicely with Python multiprocessing. The OpenMP usage outside of FFTW doesn't look heavy enough that dropping it would be a disaster, and for the FFTW calls you can link against its pthreads library without changing any source. The only changes to the repo would then be in some makefiles, setup.py and some documentation.

Thus I think providing a no openmp option should be feasible. You can see the patches I am using with the conda package here.

Thoughts?
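For concreteness, the FFTW side of this amounts to swapping link lines; a sketch assuming a GNU toolchain (the `--disable-openmp` flag is hypothetical, it's the option this issue proposes adding):

```shell
# Current OpenMP build links FFTW's OpenMP helper library:
#   ... -lfftw3_omp -lfftw3 -fopenmp
# A pthreads build needs no source changes, only different libraries:
#   ... -lfftw3_threads -lfftw3 -lpthread
./configure --disable-openmp LIBS="-lfftw3_threads -lpthread"
```

FFTW's `fftw3_threads` and `fftw3_omp` libraries expose the same `fftw_init_threads`/`fftw_plan_with_nthreads` API, which is why no source changes are needed.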

Easier installation

We'd like to be able to install everything automatically (including libsharp and NaMaster's C library) by just running python setup.py install.

This is currently attempted in #84

Illegal hardware instruction

I've run into an issue giving me an 'illegal hardware instruction' error.

  • All the C-tests run fine
  • The Python tests raise that error for the test_workspace_covar_benchmark (test.test_nmt_covar.TestCovarSph)
  • I also get this when trying to use nmt.mask_apodization() in python
  • I'm doing this on TACC's stampede2 (KNL nodes with 272 CPUs), here's the output of cat /proc/cpuinfo:
processor       : 271
vendor_id       : GenuineIntel
cpu family      : 6
model           : 87
model name      : Intel(R) Xeon Phi(TM) CPU 7250 @ 1.40GHz
stepping        : 1
microcode       : 0x1b6
cpu MHz         : 1347.117
cache size      : 1024 KB
physical id     : 0
siblings        : 272
core id         : 73
cpu cores       : 68
apicid          : 295
initial apicid  : 295
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl est tm2 ssse3 fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch ring3mwait epb spec_ctrl ibpb_support fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms avx512f rdseed adx avx512pf avx512er avx512cd xsaveopt dtherm ida arat pln pts
bogomips        : 2793.83
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

I compiled the C-libraries with

./configure \
  --enable-pic --enable-shared \
  CC=icc CXX=icpc \
  CFLAGS="-xCORE-AVX2 -axCORE-AVX512,MIC-AVX512 -std=c+11 -fPIC -I$LIBSHARP_INC -I$HEALPIX_INC -I$CFITSIO_INC -I$TACC_GSL_INC -I$TACC_FFTW3_INC" \
  LDFLAGS="-L$LIBSHARP_LIB -L$HEALPIX_LIB -L$TACC_GSL_LIB -L$CFITSIO_LIB -L$TACC_FFTW3_LIB -L$TACC_GSL_LIB"

Any ideas?

Installation fixes

Dear Authors,

Thank you for sharing this package. In trying to install it, I came upon three small issues that could be fixed easily in the documentation/makefile and thus help others have a smoother installation process.

First, it would be helpful if it was specified that FFTW should also be installed with the --enable-shared configuration option. Otherwise there are problems when linking (message: relocation ... can not be used when making a shared object; recompile with -fPIC).

Second, it could also be specified that HEALPix should be installed with the shared C libraries, too (by default only a static library is linked).

Third, there were two typos in the Makefile generated by the configure script. Here are the lines:
libnmt_la_LDFLAGS = $(OPENP_CFLAGS) -lfftw3 -lfftw3_omp -lgsl -lgslcblas -lcfitsio -lsharp -lfftpack -lc_utils -lm
test_check_nmt_LDFLAGS = $(OPENP_CFLAGS) -L./ -lnmt -lfftw3 -lfftw3_omp -lgsl -lgslcblas -lcfitsio -lsharp -lfftpack -lc_utils -lm

OPENP_CFLAGS should actually read OPENMP_CFLAGS; without the resulting -fopenmp flag, linking fails at the end.
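If useful, the typo can be patched in place without regenerating the build files; a sketch, demonstrated here on a stand-in file rather than the real generated Makefile:

```shell
# Create a stand-in file containing the typo, then fix it with sed
printf 'libnmt_la_LDFLAGS = $(OPENP_CFLAGS) -lfftw3 -lfftw3_omp\n' > Makefile.demo
sed -i 's/OPENP_CFLAGS/OPENMP_CFLAGS/g' Makefile.demo
grep -c 'OPENMP_CFLAGS' Makefile.demo  # prints 1
```

Running the same `sed` over the real Makefile (and Makefile.am, if the typo originates there) fixes both affected lines at once.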

I hope this helps.

Sincerely,
Robert Beck

ImportError: No module named '_nmtlib'

I'm very eager to use NaMaster for research and appreciate you making it available. I'm stuck on the installation step and am hoping I'm just stumbling over something obvious. Any insights much appreciated.

I have successfully compiled the NaMaster C code, with

./configure
make
make install
make check

all running fine and all checks passing (well, test 20 appears to be skipped). However, I've been having trouble with the python wrapper. Note that I am on a Mac, running High Sierra 10.13.6, and having installed many of the requisite libraries with MacPorts (hence the directory '/opt/local/lib' below).

The command

python setup.py install

looks to run normally, but when I then try

python
>>> import pymaster

I get the error

Traceback (most recent call last):
File "/Users/bhensley/.local/lib/python3.6/site-packages/pymaster/nmtlib.py", line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_nmtlib', [dirname(__file__)])
File "/anaconda3/lib/python3.6/imp.py", line 297, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_nmtlib'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bhensley/.local/lib/python3.6/site-packages/pymaster/__init__.py", line 40, in <module>
from pymaster import nmtlib as lib
File "/Users/bhensley/.local/lib/python3.6/site-packages/pymaster/nmtlib.py", line 28, in <module>
_nmtlib = swig_import_helper()
File "/Users/bhensley/.local/lib/python3.6/site-packages/pymaster/nmtlib.py", line 20, in swig_import_helper
import _nmtlib
ImportError: dlopen(/Users/bhensley/.local/lib/python3.6/site-packages/_nmtlib.cpython-36m-darwin.so, 2): Library not loaded: libchealpix.dylib
Referenced from: /Users/bhensley/.local/lib/python3.6/site-packages/_nmtlib.cpython-36m-darwin.so
Reason: image not found

However, when I add the following to my .bash_profile:

export DYLD_LIBRARY_PATH='/opt/local/lib'

pymaster imports fine without complaint and even passes some unit tests. Note that that directory is where libchealpix.dylib lives. Unfortunately, whenever I have that line in my .bash_profile, healpy cannot be imported (and so all healpy-dependent unit tests fail). I haven't had any luck with things like adding '/opt/local/lib' to my LD_LIBRARY_PATH and LDFLAGS and am running out of ideas. Any help very much appreciated. Thanks!
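For what it's worth, a per-library fix that avoids DYLD_LIBRARY_PATH entirely (so healpy would be unaffected) might be to rewrite the library reference baked into the extension itself, assuming the MacPorts location /opt/local/lib shown above:

```shell
# Inspect which libraries the extension expects and where it looks for them
otool -L ~/.local/lib/python3.6/site-packages/_nmtlib.cpython-36m-darwin.so

# Point the bare libchealpix.dylib reference at its actual location
install_name_tool -change libchealpix.dylib /opt/local/lib/libchealpix.dylib \
    ~/.local/lib/python3.6/site-packages/_nmtlib.cpython-36m-darwin.so
```

This is a sketch, not a tested fix: if `otool -L` shows other bare dylib names (cfitsio, gsl, etc.), each one would need its own `-change` entry.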
