First, let me take this opportunity to thank you for providing this extremely useful tool. I'm a big fan, and we use it to generate a variety of containers.
However, I am currently focusing on Intel Singularity containers and I cannot get them to work. After running into problems with more sophisticated applications, I went back to a simple recipe file that builds a "hello world" MPI application:
"""Intel/impi Development container
"""
import os
# Base image
Stage0.baseimage('ubuntu:18.04')
Stage0 += apt_get(ospackages=['build-essential', 'tcsh', 'csh', 'ksh', 'git',
                              'openssh-server', 'libncurses-dev', 'libssl-dev',
                              'libx11-dev', 'less', 'man-db', 'tk', 'tcl', 'swig',
                              'bc', 'file', 'flex', 'bison', 'libexpat1-dev',
                              'libxml2-dev', 'unzip', 'wish', 'curl', 'wget',
                              'libcurl4-openssl-dev', 'nano', 'screen', 'libasound2',
                              'libgtk2.0-common', 'software-properties-common',
                              'libpango-1.0-0', 'xserver-xorg', 'dirmngr',
                              'gnupg2', 'lsb-release', 'vim'])
# Install Intel compilers, mpi, and mkl
Stage0 += intel_psxe(eula=True,
                     license=os.getenv('INTEL_LICENSE_FILE',
                                       default='intel_license/****.lic'),
                     tarball=os.getenv('INTEL_TARBALL',
                                       default='intel_tarballs/parallel_studio_xe_2019_update5_cluster_edition.tgz'))
# Install application
Stage0 += copy(src='hello_world_mpi.c', dest='/root/jedi/hello_world_mpi.c')
Stage0 += shell(commands=['export COMPILERVARS_ARCHITECTURE=intel64',
                          '. /opt/intel/compilers_and_libraries/linux/bin/compilervars.sh',
                          'cd /root/jedi',
                          'mpiicc hello_world_mpi.c -o /usr/local/bin/hello_world_mpi -lstdc++'])
Stage0 += runscript(commands=['/bin/bash -l'])
This builds and runs as expected as a Docker container:

CNAME=intel19-impi-hello
hpccm --recipe $CNAME.py --format docker > Dockerfile.$CNAME
sudo docker image build -f Dockerfile.${CNAME} -t jedi-${CNAME} .
ubuntu@ip-172-31-87-130:~/jedi$ sudo docker run --rm -it jedi-intel19-impi-hello:latest
root@1dfdbccc1110:/# mpirun -np 4 hello_world_mpi
Hello from rank 1 of 4 running on 1dfdbccc1110
Hello from rank 2 of 4 running on 1dfdbccc1110
Hello from rank 0 of 4 running on 1dfdbccc1110
Hello from rank 3 of 4 running on 1dfdbccc1110
The equivalent Singularity build, however, does not work:

hpccm --recipe $CNAME.py --format singularity > Singularity.$CNAME
sudo singularity build $CNAME.sif Singularity.$CNAME
ubuntu@ip-172-31-87-130:~/jedi$ singularity shell -e intel19-impi-hello.sif
Singularity intel19-impi-hello.sif:~/jedi> source /etc/bash.bashrc
ubuntu@ip-172-31-87-130:~/jedi$ mpirun -np 4 hello_world_mpi
[mpiexec@ip-172-31-87-130] enqueue_control_fd (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:70): assert (!closed) failed
[mpiexec@ip-172-31-87-130] launch_bstrap_proxies (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:517): error enqueuing control fd
[mpiexec@ip-172-31-87-130] HYD_bstrap_setup (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:714): unable to launch bstrap proxy
[mpiexec@ip-172-31-87-130] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:1919): error setting up the boostrap proxies
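For what it's worth, my guess is that the environment the recipe sets up at Docker build time (sourcing compilervars.sh) does not carry over into the Singularity runtime environment, which is why I have to source /etc/bash.bashrc by hand and why Hydra's bootstrap may end up in a bad state. In case it helps the discussion, here is a rough, untested sketch of how I could try baking the Intel MPI environment directly into the recipe with hpccm's environment() primitive; all paths below are assumptions based on the default /opt/intel install prefix:

```python
# Hypothetical addition to the recipe above (untested): export the Intel MPI
# environment explicitly so it is available in the Singularity container
# without manually sourcing /etc/bash.bashrc. The paths are assumptions
# based on the default Parallel Studio XE install prefix.
impi_root = '/opt/intel/compilers_and_libraries/linux/mpi'
Stage0 += environment(variables={
    'I_MPI_ROOT': impi_root,
    'PATH': '{}/intel64/bin:$PATH'.format(impi_root),
    'LD_LIBRARY_PATH': '{}/intel64/lib:$LD_LIBRARY_PATH'.format(impi_root)})
```

This would translate to ENV statements in the Dockerfile and an %environment section in the Singularity definition, so the variables should be set in both cases.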
I get the same error if I create the Singularity image directly from the Docker image:
sudo singularity build intel19-impi-hello.sif docker-daemon:jedi-intel19-impi-hello:latest
I just wanted to see if anyone here has tips on building a working Singularity container with the intel_psxe building block. Thanks!