
amrex's Introduction


A software framework for massively parallel block-structured adaptive mesh refinement applications.

Overview - Features - Documentation - Gallery - Get Help - Contribute - License - Citation

Overview

AMReX is a software framework designed to accelerate scientific discovery for applications solving partial differential equations on block-structured meshes. Its massively parallel adaptive mesh refinement (AMR) algorithms focus computational resources and allow scalable performance on heterogeneous architectures so that scientists can efficiently resolve details in large simulations. AMReX is developed at LBNL.

More information is available at the AMReX website.

Features

  • C++ and Fortran interfaces
  • Support for cell-centered, face-centered, edge-centered, and nodal data
  • Support for hyperbolic, parabolic, and elliptic solves on a hierarchical adaptive grid structure
  • Optional subcycling in time for time-dependent PDEs
  • Support for particles
  • Embedded boundary description of irregular geometry
  • Parallelization via flat MPI, OpenMP, hybrid MPI/OpenMP, or MPI+MPI (MPI-3 shared memory)
  • GPU acceleration with CUDA (NVIDIA), HIP (AMD), or SYCL (Intel) backends
  • Parallel I/O
  • Plotfile format supported by Amrvis, VisIt, ParaView and yt
  • Built-in profiling tools

Documentation

Four types of documentation are available; links to each can be found on the AMReX website.

Gallery

AMReX supports several Exascale Computing Project software applications, such as ExaSky, WarpX, Pele (combustion), Astro, and MFiX-Exa. AMReX has also been used in a wide variety of other scientific simulations, some of which can be seen in our application gallery.

Gallery Slideshow

Get Help

You can view questions and ask your own on our GitHub Discussions page. To obtain additional help or report a bug, simply open an issue.

Contribute

We are always happy to have users contribute to the AMReX source code. To contribute, open a pull request against the development branch. Changes of any scope are welcome: documentation, bug fixes, new test problems, new solvers, etc. For more details on how to contribute to AMReX, please see CONTRIBUTING.md.

Copyright Notice

AMReX Copyright (c) 2024, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.

If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Intellectual Property Office at [email protected].

Please see the notices in NOTICE.

License

License for AMReX can be found at LICENSE.

Citation

To cite AMReX, please use the following reference:

@article{AMReX_JOSS,
  doi = {10.21105/joss.01370},
  url = {https://doi.org/10.21105/joss.01370},
  year = {2019},
  month = may,
  publisher = {The Open Journal},
  volume = {4},
  number = {37},
  pages = {1370},
  author = {Weiqun Zhang and Ann Almgren and Vince Beckner and John Bell and Johannes Blaschke and Cy Chan and Marcus Day and Brian Friesen and Kevin Gott and Daniel Graves and Max Katz and Andrew Myers and Tan Nguyen and Andrew Nonaka and Michele Rosso and Samuel Williams and Michael Zingale},
  title = {{AMReX}: a framework for block-structured adaptive mesh refinement},
  journal = {Journal of Open Source Software}
}


amrex's Issues

Non-standard Fortran in AMReX_INTERP_3D.F

Code in AMReX_INTERP_3D.F uses the dfloat intrinsic function. This is not a standard Fortran intrinsic, but apparently a commonly supported extension. However, the NAG compiler does not recognize it. There are several ways to fix the problem. I'd be happy to submit a PR, but want to know your preference:

  1. Rely on automatic Fortran type conversion/promotion, making replacements such as dfloat(lratiox) with lratiox. In all the uses here, automatic conversion does the right thing.
  2. Replace dfloat with the standard intrinsic dble. However, dble is deprecated (but unlikely to disappear).
  3. Replace dfloat(lratiox) with real(lratiox, kind(1.0d0)).

Make grids base to fine

Hi,

I have a question regarding the regridding. The function AmrMesh::MakeNewGrids(int, Real, int&, Array<BoxArray>&) that is called during the regrid process rebuilds the grids from the finest level down to the base level (given as the first function argument). Is it also possible to go the other way around, i.e., to regrid from the base level up to the finest level?
If not, would it be possible to add this functionality?

Thanks,
Matthias

Make AMReX build with NAG

I have made AMReX and our code that uses it work with NAG 6.1. I have submitted all the necessary fixes:

With these PRs, AMReX and our code build and run with NAG. I opened this issue to keep track of this. After they are all merged (or an equivalent fix is implemented), we can close this issue.

Questions about the examples of using Multigrid Solver in amrex

Hello all,

In the directory Src/LinearSolvers/F_MG there are two kinds of MG solvers. I noticed some theoretical explanation in Chapter 22 of the MAESTRO guide, yet I still do not know how to use the code. Are there good examples showing how to use these solvers?

To be more specific: suppose the cell-centered pressure and the edge-centered velocities have been defined on MultiFabs. How can these solvers be used to solve a Poisson equation of the following form?

(attached image: the Poisson equation)

Best and thanks!
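For what it's worth, below is a minimal sketch of a single-level Poisson solve using the newer C++ MLMG interface, which is not the older F_MG Fortran path asked about here; the geometry, BoxArray, DistributionMapping, solution, and right-hand-side MultiFabs are assumed to exist already.

#include <AMReX_MLPoisson.H>
#include <AMReX_MLMG.H>
#include <AMReX_MultiFab.H>

// Sketch only: solve del^2(phi) = rhs on a single level with Dirichlet domain BCs.
void solve_poisson (const amrex::Geometry& geom,
                    const amrex::BoxArray& ba,
                    const amrex::DistributionMapping& dm,
                    amrex::MultiFab& phi,            // solution; its ghost cells hold boundary values
                    const amrex::MultiFab& rhs)
{
    amrex::MLPoisson linop({geom}, {ba}, {dm});

    // Dirichlet on every domain face in this sketch; adjust per problem.
    linop.setDomainBC({AMREX_D_DECL(amrex::LinOpBCType::Dirichlet,
                                    amrex::LinOpBCType::Dirichlet,
                                    amrex::LinOpBCType::Dirichlet)},
                      {AMREX_D_DECL(amrex::LinOpBCType::Dirichlet,
                                    amrex::LinOpBCType::Dirichlet,
                                    amrex::LinOpBCType::Dirichlet)});
    linop.setLevelBC(0, &phi);

    amrex::MLMG mlmg(linop);
    mlmg.solve({&phi}, {&rhs}, 1.e-10, 0.0);   // relative and absolute tolerances
}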

Two small questions about the post-processing of amrex

Hello all,

I am Ruohai. Here are two small questions about the post-processing of AMReX output. The tutorial recommends the VisIt and ParaView software, but I ran into two problems when using them.

  1. I downloaded the latest VisIt (2.12.3) on a Windows 7 system and tried to open the Header file in plt02000, as in the first attached picture. However, it shows the errors in the next two pictures.

I have not found any suggestions in the FAQ. Since the calculation finished, I think the file information is correct. I tested on different Windows systems, but it still failed.

  2. The other question is about ParaView: the documentation says to open the Header file as "Boxlib 3D Files", but I cannot find this type in ParaView 5.3.0 (64-bit); see the last two pictures. So I cannot open the file.

These may be naive questions, but I have not found a better approach. I would be really thankful if someone could give me some suggestions. Thanks for your consideration.

Best!

(attached screenshots: inputfile, visiterror1, visiterror2, paraviewerror1, paraviewerror2)

1d node bilinear interpolater

The Fortran subroutine FORT_NBINTERP in Src/AmrCore/AMReX_INTERP_1D.F has extra arguments, strip, strip_lo, and strip_hi, that are not in the header file AMReX_INTERP_F.H.

Questions about post processing in different CPUs

Hello there,

In the tutorial, it shows how to distribute different boxes to different processors to form a MultiFab, e.g.:

phi_new[lev].reset(new MultiFab(ba, dm, ncomp, nghost));

Which function can I use to output the variable values owned by each processor? Also, are there functions to help output the grid structure with or without ghost cells?

Thanks so much!
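As a sketch (using the standard C++ API; the names here are illustrative and phi stands for any MultiFab), one way to see which data each processor owns and what values live there is to loop over the MultiFab with an MFIter and print from every rank:

#include <AMReX_MultiFab.H>
#include <AMReX_Print.H>
#include <AMReX_ParallelDescriptor.H>

// Each rank prints the boxes it owns and the data range on each; fabbox() shows
// the grown (ghost-cell) region while validbox() shows the valid region only.
void report_layout (const amrex::MultiFab& phi)
{
    for (amrex::MFIter mfi(phi); mfi.isValid(); ++mfi) {
        amrex::Box vbx = mfi.validbox();   // valid region (no ghost cells)
        amrex::Box gbx = mfi.fabbox();     // region including ghost cells
        amrex::AllPrint() << "rank " << amrex::ParallelDescriptor::MyProc()
                          << " valid box " << vbx << " grown box " << gbx
                          << " min/max " << phi[mfi].min(vbx)
                          << " " << phi[mfi].max(vbx) << "\n";
    }
}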

CoordSys::CellCenter() has zero offsets

I have a failing unit test when accessing the cell center function from a CoordSys (actually a Geometry) object. The following function in AMReX_CoordSys.cpp computes the center:

void
CoordSys::CellCenter (const IntVect& point,
                      Real*          loc) const
{
    BL_ASSERT(ok);
    BL_ASSERT(loc != 0);
    for (int k = 0; k < BL_SPACEDIM; k++)
    {
        loc[k] = offset[k] + dx[k]*(0.5+ (Real)point[k]);
    }
}

However, the offset is zero and therefore the returned cell centers are wrong. I am calling the geometry constructor as follows:

    // Create the global cell-based index domain.
    amrex::IntVect domain_lo( AMREX_D_DECL(0,0,0) );
    amrex::IntVect domain_hi( num_cells );
    amrex::Box index_domain( domain_lo, domain_hi );

    // Create the global geometry.
    d_geometry.define(
        index_domain, &physical_domain, amrex::CoordSys::cartesian );

where physical_domain contains the proper offsets of the physical box.

Looking at the geometry constructor in AMReX_Geometry.cpp and the subsequent call to define, it seems that the offsets are never set in the base class:

Geometry::Geometry (const Box&     dom,
                    const RealBox* rb,
                    int            coord,
                    int*           is_per)
{
    define(dom,rb,coord,is_per);
}

void
Geometry::define (const Box&     dom,
                  const RealBox* rb,
                  int            coord,
                  int*           is_per)
{
    if (c_sys == undef)
        Setup(rb,coord,is_per);

    domain = dom;
    ok     = true;

    for (int k = 0; k < BL_SPACEDIM; k++)
    {
        dx[k] = prob_domain.length(k)/(Real(domain.length(k)));
        inv_dx[k] = 1.0/dx[k];
    }
    if (Geometry::spherical_origin_fix == 1)
    {
        if (c_sys == SPHERICAL && prob_domain.lo(0) == 0 && BL_SPACEDIM > 1)
        {
            prob_domain.setLo(0,2*dx[0]);

            for (int k = 0; k < BL_SPACEDIM; k++)
            {
                dx[k] = prob_domain.length(k)/(Real(domain.length(k)));
                inv_dx[k] = 1.0/dx[k];
            }
        }
    }
}

Should these offsets be set using the RealBox defining the physical domain in the define function? Or does the user need to explicitly set the offsets?

CPP

Here is a list of identifiers used in #if, #elif, #ifdef, or #ifndef (excluding include guards in headers) in the AMReX sources. Many of them start with AMREX_ or BL_. I propose that we make every one of them (except NDEBUG) start with BL_, AMREX_, or another reasonably unique prefix.

Note that NDEBUG is a special case. It's a documented macro in glibc for turning off assertions: https://www.gnu.org/software/libc/manual/html_node/Consistency-Checking.html

  • AMREX_GIT_VERSION
  • BL_AIX
  • BL_AMRPROF
  • BL_ASSERT
  • BL_BACKTRACING
  • BL_CCSE_MPP
  • BL_COALESCE_FABS
  • BL_COMM_PROFILING
  • BL_CRAY
  • BL_FIXHEADERDENORMS
  • BL_FIX_GATHERV_ERROR
  • BL_FORT_USE_LOWERCASE
  • BL_FORT_USE_UNDERSCORE
  • BL_FORT_USE_UPPERCASE
  • BL_HOPPER
  • BL_LANG_FORT
  • BL_LAZY
  • BL_MEM_PROFILING
  • BL_NO_FORT
  • BL_PROFILE
  • BL_PROFILING
  • BL_PROFILING_SPECIAL
  • BL_SETBUF_SIGNED_CHAR
  • BL_SIM_HOPPER
  • BL_SIM_HOPPER_MAKE_TCFAB
  • BL_SINGLE_PRECISION_PARTICLES
  • BL_SPACEDIM
  • BL_T3E
  • BL_TESTING
  • BL_TINY_PROFILING
  • BL_TRACE_PROFILING
  • BL_USE_ARRAYVIEW
  • BL_USE_ASSERTION
  • BL_USE_DOUBLE
  • BL_USE_FLOAT
  • BL_USE_FORTRAN_MPI
  • BL_USE_FORT_STAR_PRECISION
  • BL_USE_F_BASELIB
  • BL_USE_F_INTERFACES
  • BL_USE_MPI
  • BL_USE_MPI3
  • BL_USE_NEWPLOTPER
  • BL_USE_OMP
  • BL_USE_TEAM
  • BL_USE_UPCXX
  • CRSEGRNDOMP
  • DEBUG
  • DEBUG_AFAP
  • DIMENSION_AGNOSTIC
  • FORTRAN_BOXLIB
  • HAS_XGRAPH
  • JEFF_TEST
  • MG_USE_FBOXLIB
  • NDEBUG
  • PARTICLES
  • SPACEDIM
  • TESTING_POLYNOMINTERPCOEFFS
  • UPCXX_DEBUG
  • USE_PARTICLES
  • USE_REORG_VERSION
  • USE_SLABSTAT
  • USE_STATIONDATA

Clarifications on ownership of nodal data (via fortran)

I apologize if these are already answered in the manual, but I couldn't find it. As noted in the manual, for nodal data the boxes associated with an MFIter are not disjoint.

  1. Are the boxes returned by tilebox() considered to be valid non-ghost cells even though they overlap? If so, are these overlapping non-ghost cells, which are "shared" by different multifabs, ever updated via FillBoundary calls, or is that the job of the application developer?

  2. Is there an idiomatic way of looping over them such that they appear disjoint? (My main interest here is in interfacing with external solvers which represent the grid as a disjoint union among the ranks.)

Cray C++ compiler fails in AMReX_BoxArray.cpp

The Cray C++ compiler (version 8.5.8) fails to compile AMReX_BoxArray.cpp. The error is the following:

CC-1623 crayc++: ERROR File = /global/homes/f/friesen/amrex/Src/Base/AMReX_BoxArray.cpp, Line = 1174
  The specified statement does not have a valid form for an OpenMP atomic
          directive.
      local_flag = m_ref->has_hashmap;

The commit which added this code is 52383e0 ("Don't want to perform this check inside a critical region, since in particle code we call this a lot from threaded regions.")
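For reference, a bare assignment is not one of the statement forms that a plain #pragma omp atomic (which defaults to an update) accepts; an explicit atomic read is the standard-conforming way to express this kind of load. A sketch of that form is shown below (a guess at the intended fix, not the actual AMReX patch):

// Sketch: an explicit atomic read is the conforming form for a bare load.
bool local_flag;                       // assuming has_hashmap is a bool member
#pragma omp atomic read
local_flag = m_ref->has_hashmap;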

Macros defined by #define

Here is a partial list of macros defined. Many of them are used in Fortran 77 files that we are going to slowly replace. Those constants (e.g., zero and one) should be put into a Fortran module. We certainly need to keep many of the macros, for example D_DECL. The question is whether we should use a different name for it, such as AMREX_D_DECL. How about marking some of them as deprecated for now and eventually removing them entirely?

  • ARLIM_P
  • ARLIM
  • ARLIM_3D
  • ARLIM_ANYD
  • ZFILL
  • BCREC_3D
  • BCREC_ANYD
  • BL_TO_FORTRAN
  • BL_TO_FORTRAN_N
  • BL_TO_FORTRAN_3D
  • BL_TO_FORTRAN_ANYD
  • BL_TO_FORTRAN_N_3D
  • BL_TO_FORTRAN_N_ANYD
  • DIMS
  • DIMDEC
  • DIMV
  • DIM_V
  • DIM1
  • DIM2
  • ARG_L1
  • ARG_L2
  • ARG_L3
  • ARG_H1
  • ARG_H2
  • ARG_H3
  • DIMARG
  • BOGUS_BC
  • REFLECT_ODD
  • INT_DIR
  • REFLECT_EVEN
  • REFLECT_ODD
  • FOEXTRAP
  • EXT_DIR
  • HOEXTRAP
  • Interiro
  • Inflow
  • Outflow
  • Symmetry
  • SlipWall
  • NoSlipWall
  • __BL_FORT_NAME__
  • BL_FORT_PROC_DECL
  • BL_FORT_PROC_CALL
  • BL_FORT_PROC_NAME
  • BL_FORT_FAB_ARG
  • BL_FORT_IFAB_ARG
  • BL_FORT_FAB_ARG_3D
  • BL_FORT_FAB_ARG_ANYD
  • BL_FORT_IFAB_ARG_3D
  • BL_FORT_IFAB_ARG_ANYD
  • BL_ASSERT
  • BL_IGNORE_MAX
  • bigreal
  • zero
  • one
  • two
  • three
  • four
  • five
  • six
  • seven
  • eight
  • nine
  • ten
  • twelve
  • fifteen
  • sixteen
  • twenty
  • seventy
  • ninety
  • tenth
  • eighth
  • sixth
  • fifth
  • forth
  • fourth
  • third
  • half
  • two3rd
  • Pi
  • BL_MPI_REQUIRE
  • BL_USE_FLOAT
  • BL_USE_DOUBLE
  • REAL_T
  • BL_REAL
  • BL_REAL_E
  • D_EXPR
  • D_DECL
  • D_TERM
  • D_PICK
  • LO_DIRICHLET
  • LO_NEUMANN
  • LO_REFLECT_ODD
  • LO_MARSHAK
  • LO_SANCHEZ_POMRANING

warning: comparison between signed and unsigned integer

Hi,

I am compiling a library with AMReX and I'm getting some signed/unsigned comparison warnings. Although these warnings may not cause any runtime issues, it would still be nice to have them resolved.

AMReX_ParallelDescriptor.H:360:53: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
         BL_ASSERT(whichSidecar >= 0 && whichSidecar < m_nProcs_sidecar.size());
                                                     ^
AMReX_ParallelDescriptor.H:587:72: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
             BL_ASSERT(inWhichSidecar != notInSidecar && inWhichSidecar < m_comm_sidecar.size());
                                                                        ^
AMReX_FabArray.H:768:12: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     if (li < m_fabs_v.size() && m_fabs_v[li] != 0) {
            ^
AMReX_FabArray.H:780:32: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     BL_ASSERT(mfi.LocalIndex() < indexArray.size());
                                ^
AMReX_FabArray.H:789:32: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     BL_ASSERT(mfi.LocalIndex() < indexArray.size());
                                ^
AMReX_FabArray.H:799:28: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     BL_ASSERT(li >=0 && li < indexArray.size());
                            ^
AMReX_FabArray.H:808:28: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     BL_ASSERT(li >=0 && li < indexArray.size());
                            ^

AMReX_FabArray.H:2780:29: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   if(newDistMapArray.size() != distributionMap.size()) {
                             ^
AMReX_FabArray.H:2785:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int idm(0); idm < newDistMapArray.size(); ++idm) {
                       ^
AMReX_FabArray.H:2831:25: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     for(int iim(0); iim < indexArray.size(); ++iim) {
                         ^
AMReX_FabArray.H:2845:29: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int imoves(0); imoves < fabMoves.size(); ++imoves) {
                             ^
AMReX_FabArray.H:2900:29: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int imoves(0); imoves < fabMoves.size(); ++imoves) {
                             ^
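A typical way to resolve warnings like these (a sketch of one option, not necessarily the fix AMReX would choose) is to give both sides of the comparison the same signedness, for example:

// Either cast the container size to a signed type inside the assertion...
BL_ASSERT(li >= 0 && li < static_cast<int>(indexArray.size()));
// ...or make the loop index unsigned to match size():
for (std::size_t idm = 0; idm < newDistMapArray.size(); ++idm) { /* ... */ }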

Thanks,
Matthias

Make compiles MiniApps/AMR_Adv_Diff_F90 in serial

Here is how I installed & tested the latest master (3724a07) of AMReX using Make directly:

module load cmake/3.6.2 gcc/5.3.0 openmpi/1.10.5 python/2.7-anaconda-4.1.1
cd $HOME/repos/amrex
cd MiniApps/AMR_Adv_Diff_F90
make -j16
mpirun -n 16 main.Linux.gfortran.exe

This compiles and runs, but in serial... Here are the first few lines:

$ mpirun -n 16 main.Linux.gfortran.exe
App launch reported: 1 (out of 1) daemons - 16 (out of 16) procs
 MPI initialized with            1  MPI processes
 MPI initialized with            1  threads
 MPI initialized with            1  MPI processes
 MPI initialized with            1  threads
 MPI initialized with            1  MPI processes
 MPI initialized with            1  threads
 MPI initialized with            1  MPI processes
 MPI initialized with            1  threads
...

How can the Make build system be used to compile this in parallel?

I tried the following patch:

--- a/MiniApps/AMR_Adv_Diff_F90/GNUmakefile
+++ b/MiniApps/AMR_Adv_Diff_F90/GNUmakefile
@@ -6,7 +6,7 @@ NDEBUG    := t
 MPI       :=
 OMP       :=
 PROF      :=
-COMP      := gfortran
+COMP      := mpif90
 MKVERBOSE := t
 
 include $(AMREX_HOME)/Tools/F_mk/GMakedefs.mak

But it fails with:

$ make
../../Tools/F_mk/GMakedefs.mak:184: *** "COMP=mpif90 is not supported".  Stop.

It seems there must be another way to compile this in parallel using the Makefile build system, but I haven't figured it out. Note that this works properly in parallel using the CMake build system (#63).
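If the F_mk build system follows the usual BoxLib conventions, MPI is enabled by setting MPI := t in the GNUmakefile while leaving COMP as the underlying compiler (gfortran), rather than by switching COMP to an MPI wrapper; that would be the thing to try here, though I have not verified it against this particular MiniApp.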

Cmake root access

When trying to configure with cmake I obtain the error:

CMake Error: Could not open file for write in copy operation /usr/local/bin/Makefile.export.tmp
CMake Error: : System Error: Permission denied
CMake Error at Tools/CMake/InstallManager.cmake:214 (configure_file):
  configure_file Problem configuring file
Call Stack (most recent call first):
  CMakeLists.txt:114 (create_exports)

Configure procedure:

git clone https://github.com/AMReX-Codes/amrex.git
cd amrex; mkdir build; cd build;
CXX=$MPICXX CC=$MPICC cmake -DENABLE_PARTICLES=1 -DENABLE_MG_BOXLIB=1 ../

The configure output is attached as cmake-configure-output.txt and the CMake log file as CMakeOutput.txt.

The issue only occurs if CMAKE_INSTALL_PREFIX points to a directory that requires root access, i.e. sudo is needed.

Modules:

  • cmake/3.6.3
  • gcc/5.4.0
  • openmpi/1.10.4

OS:

  • Ubuntu/16.04

Multifab question

Is it necessarily true that the boxes in a MultiFab all have the same nodal type? The particular construction sequences I've encountered so far suggest this is the case. If so, is there a way to get the nodal type without iterating through the boxes? I'm using the Fortran interface, if that matters.
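For what it's worth, on the C++ side the index (nodal) type is a property of the whole BoxArray/MultiFab rather than of individual boxes, so it can be queried without iterating, as in the sketch below (standard C++ API; whether the Fortran interface exposes an equivalent is exactly the open question here):

// Sketch: the nodal/cell-centered type is stored once for the whole MultiFab.
amrex::IndexType typ = mf.ixType();      // identical for every box in mf
bool nodal_in_x = typ.nodeCentered(0);   // true if nodal in the x-direction
bool cell_cent  = typ.cellCentered();    // true if cell-centered in all directions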

Configure & make does not install ml_layout_module

I tried to build & test the latest master (3724a07) of AMReX using the ./configure build system. Here is what I did:

module load cmake/3.6.2 gcc/5.3.0 openmpi/1.10.5 python/2.7-anaconda-4.1.1
cd $HOME/repos/amrex
./configure --prefix=`pwd`/inst
make -j16
make install
cd MiniApps/AMR_Adv_Diff_F90
mpicxx -I$HOME/repos/amrex/inst/include write_plotfile.f90 prob.f90 compute_flux.f90 init_phi.f90 update_phi.f90 advance.f90 main.f90 -L$HOME/repos/amrex/inst/lib -lfboxlib -lcboxlib -lmpi_usempif08

This works with the CMake build system (#63), but with ./configure it fails with:

write_plotfile.f90:3:6:

   use ml_layout_module
      1
Fatal Error: Can't open module file ‘ml_layout_module.mod’ for reading at (1): N
o such file or directory
compilation terminated.

This module is implemented the file Src/F_BaseLib/ml_layout.f90, but it is never installed:

$ ls inst/include/*.mod
inst/include/amrex_amrcore_module.mod
inst/include/amrex_amr_module.mod
inst/include/amrex_base_module.mod
inst/include/amrex_bc_types_module.mod
inst/include/amrex_boxarray_module.mod
inst/include/amrex_box_module.mod
inst/include/amrex_distromap_module.mod
inst/include/amrex_error_module.mod
inst/include/amrex_fab_module.mod
inst/include/amrex_filcc_module.mod
inst/include/amrex_fillpatch_module.mod
inst/include/amrex_fi_mpi.mod
inst/include/amrex_fluxregister_module.mod
inst/include/amrex_fort_module.mod
inst/include/amrex_geometry_module.mod
inst/include/amrex_interpolater_module.mod
inst/include/amrex_multifab_module.mod
inst/include/amrex_multifabutil_module.mod
inst/include/amrex_octree_module.mod
inst/include/amrex_omp_module.mod
inst/include/amrex_parallel_module.mod
inst/include/amrex_parmparse_module.mod
inst/include/amrex_particle_module.mod
inst/include/amrex_physbc_module.mod
inst/include/amrex_plotfile_module.mod
inst/include/amrex_string_module.mod
inst/include/amrex_tagbox_module.mod
inst/include/basefab_nd_module.mod
inst/include/bl_extrapolater.mod
inst/include/mempool_module.mod

Should it be installed by make install, and if not, why not? What would be the proper fix?

Use of LinearSolvers in Library Mode

Looking at the GNUmakefile.in on the latest development branch, it seems that the linear solvers package is not enabled, as shown here.

I see that it is enabled in CMake mode, but I am running into other install issues using CMake with library mode. In particular, it does not seem like the AMREX_SPACEDIM macro gets set, resulting in errors such as:

/Users/uy7/install/amrex/debug/include/AMReX_SPACE.H:27:2: error: #error AMREX_SPACEDIM must be defined
 #error AMREX_SPACEDIM must be defined
  ^
/Users/uy7/install/amrex/debug/include/AMReX_SPACE.H:31:2: error: #error AMREX_SPACEDIM must be either 1, 2, or 3
 #error AMREX_SPACEDIM must be either 1, 2, or 3
  ^
/Users/uy7/install/amrex/debug/include/AMReX_SPACE.H:47:26: error: 'AMREX_SPACEDIM' was not declared in this scope
     const int SpaceDim = AMREX_SPACEDIM;
                          ^
/Users/uy7/install/amrex/debug/include/AMReX_SPACE.H:10:21: error: 'AMREX_SPACEDIM' was not declared in this scope
 #define BL_SPACEDIM AMREX_SPACEDIM
                     ^
/Users/uy7/install/amrex/debug/include/AMReX_IntVect.H:397:14: note: in expansion of macro 'BL_SPACEDIM'
     int vect[BL_SPACEDIM];
              ^
In file included from /Users/uy7/install/amrex/debug/include/AMReX_Box.H:8:0,
                 from /Users/uy7/install/amrex/debug/include/AMReX_FArrayBox.H:5,
                 from /Users/uy7/install/amrex/debug/include/AMReX_MultiFab.H:9,
                 from /Users/uy7/software/PICFS/core/src/BoundaryCondition.hh:7,
                 from /Users/uy7/software/PICFS/core/src/BoundaryCondition.cc:3:
/Users/uy7/install/amrex/debug/include/AMReX_IntVect.H: In member function 'std::size_t amrex::IntVect::shift_hasher::operator()(const amrex::IntVect&) const':
/Users/uy7/install/amrex/debug/include/AMReX_IntVect.H:36:31: error: expected primary-expression before 'ret0'
      AMREX_D_DECL(std::size_t ret0 = vec[0], ret1 = vec[1], ret2 = vec[2]);
                               ^
/Users/uy7/install/amrex/debug/include/AMReX_IntVect.H:36:46: error: 'ret1' was not declared in this scope
      AMREX_D_DECL(std::size_t ret0 = vec[0], ret1 = vec[1], ret2 = vec[2]);
                                              ^
/Users/uy7/install/amrex/debug/include/AMReX_IntVect.H:36:61: error: 'ret2' was not declared in this scope
      AMREX_D_DECL(std::size_t ret0 = vec[0], ret1 = vec[1], ret2 = vec[2]);
                                                             ^
/Users/uy7/install/amrex/debug/include/AMReX_IntVect.H:42:13: error: 'ret0' was not declared in this scope
      return ret0 ^ (ret1 << shift1) ^ (ret2 << shift2);
             ^
/Users/uy7/install/amrex/debug/include/AMReX_IntVect.H: In constructor 'amrex::IntVect::IntVect()':
/Users/uy7/install/amrex/debug/include/AMReX_IntVect.H:68:31: error: 'vect' was not declared in this scope
     IntVect () { AMREX_D_EXPR(vect[0] = 0, vect[1] = 0, vect[2] = 0); }
                               ^
/Users/uy7/install/amrex/debug/include/AMReX_IntVect.H:68:68: error: 'AMREX_D_EXPR' was not declared in this scope
     IntVect () { AMREX_D_EXPR(vect[0] = 0, vect[1] = 0, vect[2] = 0); }

For reference here is the option output for the CMake configuration:

-- Configuring AMReX with the following options:
--    CMAKE_BUILD_TYPE = DEBUG (STRING: Debug|Release|RelWithDebInfo|MinSizeRel)
--    CMAKE_INSTALL_PREFIX = /Users/uy7/install/amrex/debug (STRING: <path to install dir>)
--    ENABLE_PIC = 0 (INT: 0,1)
--    BL_SPACEDIM = 3 (INT: 2,3)
--    ENABLE_MPI = 1 (INT: 0,1)
--    ENABLE_OMP = 0 (INT: 0,1)
--    ENABLE_DP = 1 (INT: 0,1)
--    ENABLE_PARTICLES = 0 (INT: 0,1)
--    ENABLE_DP_PARTICLES = 1 (INT: 0,1)
--    ENABLE_PROFILING = 1 (INT: 0,1)
--    ENABLE_TINY_PROFILING = 0 (INT: 0,1)
--    ENABLE_TRACE_PROFILING = 0 (INT: 0,1)
--    ENABLE_COMM_PROFILING = 0 (INT: 0,1)
--    ENABLE_BACKTRACE = 1 (INT: 0,1)
--    ENABLE_FPE = 0 (INT: 0,1)
--    ENABLE_ASSERTIONS = 0 (INT: 0,1)
--    ENABLE_FORTRAN_INTERFACES = 1 (INT: 0,1)
--    ENABLE_LINEAR_SOLVERS = 1 (INT: 0,1)
--    ENABLE_FBASELIB = 1 (INT: 0,1)

Do you have examples of using the linear solvers with amrex in library mode? I'm OK with using either CMake or Make for the build.
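A note that may be relevant here (an assumption based on how AMReX library mode has typically worked, not a confirmed diagnosis): when compiling against an installed copy of AMReX outside its own build system, the dimension macro is normally supplied on the application's compile line, e.g. -DAMREX_SPACEDIM=3 (the installed AMReX_SPACE.H then defines BL_SPACEDIM from it, as the errors above show), rather than being baked into the installed headers. That could explain why these errors appear only in library mode.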

convert method on amrex_boxarray (fortran)

The convert method on an amrex_boxarray does not appear to be exposed via the Fortran interface. Is there a straightforward way to duplicate (via Fortran) the functionality of the C++ code on page 26 of the AMReX docs?

// ba is a cell-centered BoxArray
// dm is a DistributionMapping
int ncomp = 3; // Suppose the system has 3 components
int ngrow = 0; // no ghost cells
MultiFab state(ba, dm, ncomp, ngrow);
MultiFab xflux(amrex::convert(ba, IntVect{1,0,0}), dm, ncomp, 0);
MultiFab yflux(amrex::convert(ba, IntVect{0,1,0}), dm, ncomp, 0);
MultiFab zflux(amrex::convert(ba, IntVect{0,0,1}), dm, ncomp, 0);
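If the goal is only to build face-type MultiFabs from a cell-centered amrex_boxarray, the Fortran multifab builder appears to accept an optional nodal flag, something like call amrex_multifab_build(xflux, ba, dm, ncomp, 0, nodal=[.true., .false., .false.]), which converts the BoxArray internally. That may be the intended Fortran route, though I have not verified whether a standalone convert is exposed anywhere.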

Questions about setting the Dirichlet BC for finer level

Hello all,

I am developing a staggered-grid incompressible fluid solver based on the AMReX framework. On level 0, I use the multigrid solver to solve the Poisson equation. Then I interpolate the pressure from level 0 to level 1 to fill the ghost cells on level 1 (the purple stars in the figure). However, I do not know how to set these interpolated ghost values as the Dirichlet boundary condition for the MultiFab on level 1.

(attached figure)

In the AMReX docs, the function

void amrex::BndryData::setValue (Orientation face, int n, Real val)

can only set a fixed value for the Dirichlet BC, not an interpolated value. I noticed this function because the multigrid solver in the Old Tutorials uses it to set up the Dirichlet boundary condition.

Can anyone give me some suggestions? Maybe I should use other functions.

Thanks a lot.

Questions about the processor distribution in Boxarray

Hello all,

Here are two questions about the processor distribution in Boxarray.

Based on the tutorial Advection_AmrCore code, a two-level grid structure is built, consisting of level 0 and level 1. For each level there is only one BoxArray (thus there are two BoxArrays in total).

If four processors are used, then each BoxArray is divided into 4 parts, and the lo_Vect and hi_Vect information can be output:

level 0 : ((0,0) (7,7) (0,0)) ((0,8) (7,15) (0,0)) ((8,0) (15,7) (0,0)) ((8,8) (15,15) (0,0)) )
level 1: ((2,8) (9,15) (0,0)) ((2,16) (9,23) (0,0)) ((10,8) (17,15) (0,0)) ((10,16) (17,23) (0,0)) )

What if 5 processors are used? I wonder how the processors are distributed across these two levels.

The tutorial says the distribution depends on a space-filling curve; how can it be controlled to make sure each level is spread over all processors?

Thanks.
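As a sketch of how to inspect the box-to-rank assignment (standard C++ API; grids and dmap are the per-level BoxArray and DistributionMapping members used in the AmrCore-based tutorial and are assumed accessible here), each rank can print which processor owns every box on a level:

// Print which MPI rank owns each box of level lev.
const amrex::BoxArray& ba = grids[lev];
const amrex::DistributionMapping& dm = dmap[lev];
for (int i = 0; i < ba.size(); ++i) {
    amrex::Print() << "level " << lev << " box " << i << " " << ba[i]
                   << " -> rank " << dm[i] << "\n";
}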

Obsolete OpenSource.txt

The file Src/Base/OpenSource.txt is presumably obsolete, as the AMReX license is in the root directory. We should remove this file for clarity if it no longer applies.

Summitdev: Allocate runs out of memory in Microphysics test_react suite with PGI 17.4, GCC 6.3.1

On OLCF Summitdev, the error arises when running the Microphysics test_react test suite on the cudadevice branch with the gpu branch of amrex, using PGI 17.4 configured with GCC 6.3.1 and with CUDA Fortran enabled.

The compile line is as follows:

make -j COMP=PGI NDEBUG=t MPI= OMP= ACC= CUDA=t NETWORK_DIR=ignition_reaclib/URCA-simple INTEGRATOR_DIR=VODE90 EOS_DIR=helmholtz

Error:

0: ALLOCATE: 281473269719360 bytes requested; not enough memory

This allocation error originates in amrex/Src/F_BaseLib/knapsack.f90, specifically in the function make_box_key, which does not properly import the value of dm the way the subroutines in the same scope do. dm is thus essentially uninitialized and takes a large value, leading the declaration integer :: r(dm) in make_box_key to attempt to allocate this absurd amount of memory.

comments in AMReX_REAL.H prevent compilation on IBM

AMReX_REAL.H is included in both C++ and Fortran files. With the IBM XL compilers, compilation under Fortran fails because of the C-style /* */ comments. If I remove all the comments from this header, then the code compiles.

Either we need to find a way to have these comments work for both Fortran and C++, or we need to remove them all.

Using `amrex::MultiFab` Alias Constructor

I am having some issues when trying to use the MultiFab alias constructor. Calling this constructor gives the following backtrace:

 2: /lib/x86_64-linux-gnu/libc.so.6(+0x354b0) [0x7f45333984b0]
    ??
    ??:0

 3: /home/uy7/install/PICFS/debug/bin/incompressible_mpm(_ZN5amrex8FabArrayINS_9FArrayBoxEEC1ERKS2_NS_8MakeTypeEii+0x54) [0xc61660]
    amrex::FabArray<amrex::FArrayBox>::FabArray(amrex::FabArray<amrex::FArrayBox> const&, amrex::MakeType, int, int)
    /home/uy7/software/amrex/Src/Base/AMReX_FabArray.H:945

 4: /home/uy7/install/PICFS/debug/bin/incompressible_mpm(_ZN5amrex8MultiFabC1ERKS0_NS_8MakeTypeEii+0x36) [0xc5ae4a]
    amrex::MultiFab::MultiFab(amrex::MultiFab const&, amrex::MakeType, int, int)
    /home/uy7/software/amrex/Src/Base/AMReX_MultiFab.cpp:391

It seems to go directly from the call to shmem() to an unknown libc error. The configuration doesn't include any shared memory options:

cmake \
    -D CMAKE_INSTALL_PREFIX:PATH=$INSTALL \
    -D DEBUG:BOOL=ON \
    -D ENABLE_DP:BOOL=ON \
    -D ENABLE_MPI:BOOL=ON \
    -D ENABLE_PARTICLES:BOOL=ON \
    -D ENABLE_LINEAR_SOLVERS:BOOL=ON \
    -D ENABLE_TINY_PROFILE:BOOL=ON \

A quick grep shows that amrex::make_alias does not appear to be used anywhere in the library. Are there tests for this functionality, or is it currently in use?
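For reference, the aliasing call in question is of the form sketched below (assuming the standard alias-constructor signature; original, scomp, and ncomp stand for the existing MultiFab and the component range to alias):

// Alias components [scomp, scomp+ncomp) of an existing MultiFab without copying data.
amrex::MultiFab alias(original, amrex::make_alias, scomp, ncomp);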

clean up AMReX_COORDSYS_?D.F

The volume terms hardcode pi in several places -- we should have a single constants module to provide it.

Also, the expressions for volume are not going to be very accurate for large (e.g. astrophysical) numbers.

              vol(i) = half*RZFACTOR*(ro(i)**2 - ri(i)**2)

should be

              vol(i) = half*RZFACTOR*(ro(i) + ri(i))*(ro(i) - ri(i))

likewise, the ro(i)**3 - ri(i)**3 terms should be factored
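Presumably the factored form would be something like (ro(i) - ri(i))*(ro(i)**2 + ro(i)*ri(i) + ri(i)**2), using the identity a**3 - b**3 = (a - b)*(a**2 + a*b + b**2).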

regtest.py: option to disable `git pull`

When using the script regtest.py, it is possible to specify a commit or branch for which the tests should be run (in the corresponding test input file, through the branch = line).

When doing so, the python script performs git checkout <specified commit> and then git pull. However, when a commit hash is provided (as opposed to a branch name) in the test input file, the git pull fails (because the code is in a "detached HEAD" state after git checkout).

Is there an option to disable git pull? I think that the option --no_update disables both git checkout <specified commit> and git pull, which is not quite what I am looking for.

1D AMReX compilation not working

A 1D build of AMReX fails with a C++ error (using gcc). I don't personally care about 1D, but thought I should mention it after discovering it while testing the Fortran replacement for the AMREX_CONSTANTS.H header file.

/opt/src/amrex/amrex/Src/F_Interfaces/Base/AMReX_geometry_fi.cpp: In function ‘void amrex_fi_new_geometry(amrex::Geometry*&, int*, int*)’:
/opt/src/amrex/amrex/Src/F_Interfaces/Base/AMReX_geometry_fi.cpp:11:28: error: no matching function for call to ‘amrex::Geometry::Geometry(amrex::Box (&)(amrex::IntVect*, amrex::IntVect*))’
  geom = new Geometry(domain);
                            ^
In file included from /opt/src/amrex/amrex/Src/F_Interfaces/Base/AMReX_geometry_fi.cpp:1:0:
/opt/src/amrex/amrex/Src/Base/AMReX_Geometry.H:48:5: note: candidate: constexpr amrex::Geometry::Geometry(amrex::Geometry&&)
     Geometry (Geometry&& rhs) noexcept = default;
     ^~~~~~~~
/opt/src/amrex/amrex/Src/Base/AMReX_Geometry.H:48:5: note:   no known conversion for argument 1 from ‘amrex::Box(amrex::IntVect*, amrex::IntVect*)’ to ‘amrex::Geometry&&’
/opt/src/amrex/amrex/Src/Base/AMReX_Geometry.H:47:5: note: candidate: constexpr amrex::Geometry::Geometry(const amrex::Geometry&)
     Geometry (const Geometry& rhs) = default;
     ^~~~~~~~
/opt/src/amrex/amrex/Src/Base/AMReX_Geometry.H:47:5: note:   no known conversion for argument 1 from ‘amrex::Box(amrex::IntVect*, amrex::IntVect*)’ to ‘const amrex::Geometry&’
/opt/src/amrex/amrex/Src/Base/AMReX_Geometry.H:41:14: note: candidate: amrex::Geometry::Geometry(const amrex::Box&, const amrex::RealBox*, int, int*)
     explicit Geometry (const Box&     dom,
              ^~~~~~~~
/opt/src/amrex/amrex/Src/Base/AMReX_Geometry.H:41:14: note:   no known conversion for argument 1 from ‘amrex::Box(amrex::IntVect*, amrex::IntVect*)’ to ‘const amrex::Box&’
/opt/src/amrex/amrex/Src/Base/AMReX_Geometry.H:39:5: note: candidate: amrex::Geometry::Geometry()
     Geometry ();
     ^~~~~~~~
/opt/src/amrex/amrex/Src/Base/AMReX_Geometry.H:39:5: note:   candidate expects 0 arguments, 1 provided

How to set amrex parameters from code?

Many of the procedures in the Fortran amrex interface get their input parameters not from arguments but from a hidden "parmparse" database. Is there a way for Fortran code to set parameters in that database? The issue is that we (ECP ExaAM) do not want to use the amrex input file format. We have our own system for managing input parameters, and we want to use amrex as a library and not as the environment we operate in.
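For what it's worth, on the C++ side entries can be pushed into the ParmParse table programmatically instead of being read from an inputs file, as sketched below (standard ParmParse API; whether and how the Fortran interface exposes the same capability is the open question here):

#include <AMReX_ParmParse.H>

// Push parameters into the ParmParse database from code,
// equivalent to "amr.max_level = 2" etc. in an inputs file.
void set_params ()
{
    amrex::ParmParse pp("amr");
    pp.add("max_level", 2);
    pp.add("blocking_factor", 8);
}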

Share some understandings about different BCs in amrex and propose some issues about them

Hello all,

Recently I have been studying Section 4.16 (p. 35) about BCs. Here are some notes about what I learned and also some issues that I cannot solve.

First, I want to clarify my long-term goal and my current aim. My long-term goal is to combine our group's incompressible staggered-grid flow solver with the AMReX package to make the grids adaptive. After reading the code in the Advection_AmrCore directory carefully, I think I can properly initialize the cell-centered pressure variable without ghost cells. Thus, my current aim is to add the physical boundary conditions (i.e., fill ghost cells) for the pressure.

In Section 4.16, there are three different types of BCs. The first type is the internal BC. In the function FillPatchSingleLevel, the code is:

mf.FillBoundary(dcomp, ncomp, geom.periodicity());

In my view, this FillBoundary function only fills the values of ghost cells on the same level, as in the following figure.

(attached figure)

It does not fill the ghost cells on the physical boundary.
Is that correct?

Then, for physical boundaries, the code uses:

physbcf.FillBoundary(mf, dcomp, ncomp, time)

Since this virtual function is not overridden in this example, here are three questions.

1. To my understanding, the default physbcf.FillBoundary makes all physical boundary conditions periodic; is that correct?
2. I have dug into the details of this function: it returns immediately if the number of ghost cells of mf is 0 or the geometry is periodic. Also, I think that if those two conditions are not satisfied (e.g. geometry.is_periodic = 1 1 1 in the inputs file), the rest of the code in this function makes mf periodic, like a constraint. Is that correct?
3. In the function Advance, a temporary MultiFab Sborder is defined with three ghost cells. So after the FillPatch call it will have three ghost cells filled for the physically periodic boundary condition. Does that make sense?

Since my aim is to add different non-periodic physical boundary conditions for the pressure and check them, I then did the following.

A first try:

I added the code on p. 36 to try to override FillBoundary() for the physical boundary condition, and used the FillDomainBoundary function, like this:
//------------------------------ Code starts -----------------------//
void AmrCoreAdvPhysBC::FillBoundary (amrex::MultiFab& mf, int, int, amrex::Real time)
{
    Array<BCRec> bc(mf.nComp());
    for (int n = 0; n < mf.nComp(); ++n)
    {
        for (int idim = 0; idim < AMREX_SPACEDIM; ++idim)
        {
            if (Geometry::isPeriodic(idim))
            {
                bc[n].setLo(idim, BCType::int_dir); // interior
                bc[n].setHi(idim, BCType::int_dir);
            }
            else
            {
                bc[n].setLo(idim, BCType::reflect_even); // reflect-even
                bc[n].setHi(idim, BCType::reflect_even);
            }
        }
    }
    // for cell-centered data (pressure)
    amrex::FillDomainBoundary(mf, geom[0], bc);
}
//------------------------------ Code ends -----------------------//

Since I do not know the level of mf, I have no idea how to supply the geom[lev] argument; I used geom[0] here. I think that is wrong, because finer grids can also touch the physical boundary.

A second try:

Then I thought I could add the code from p. 36 somewhere else, so I added it in the AmrCoreAdv::FillPatch function after amrex::FillPatchSingleLevel, like this:

void AmrCoreAdv::FillPatch (int lev, Real time, MultiFab& mf, int icomp, int ncomp)
{
    AmrCoreAdvPhysBC physbc;
    amrex::FillPatchSingleLevel(mf, time, smf, stime, 0, icomp, ncomp,
                                geom[lev], physbc);
    //------------------------------ Code starts -----------------------//
    // Codes on page 36
    amrex::FillDomainBoundary(mf, geom[lev], bc);
    //------------------------------ Code ends ------------------------//
}

The compiler says:
error: ‘FillDomainBoundary’ is not a member of ‘amrex’
amrex::FillDomainBoundary(mf, geom[lev], bc);
How does this happen?
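One possible explanation, offered as a guess rather than a confirmed diagnosis: amrex::FillDomainBoundary is declared in AMReX_BCUtil.H in recent AMReX versions, so the call only compiles if that header is included and the installed AMReX is new enough to provide it, roughly:

#include <AMReX_BCUtil.H>   // declares amrex::FillDomainBoundary in newer AMReX versions

// bc is the Array<BCRec> filled as in the first try above
amrex::FillDomainBoundary(mf, geom[lev], bc);

If the version of AMReX being built against predates that header, the symbol will not exist.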

To summarize:

I still do not know how to add physical boundary conditions for cell-centered variables (e.g., pressure) or edge-centered variables. Where should I add the related code within the Advection_AmrCore framework? And how can I output the ghost-cell values to check whether the physical boundary condition has been applied properly?

(Note: I noticed that in a closed issue, @pbrady proposed a similar question:

#2

@WeiqunZhang gave some links to similar examples, yet most of these links can no longer be found. I think there are some alternative approaches in the Advection_F example, which is Fortran-driven code. I am studying them now.)

Thanks for your suggestions.

Ruohai

Fortran pre-processor errors in AMReX_filcc_mod.F90

A recent commit (c7969cd) added includes of C header files to AMReX_filcc_mod.F90, apparently without being aware that the preprocessor language understood by standard Fortran is a subset of that understood by C/C++. In particular, the included files use the concatenation operator ##, which is not standard in Fortran. Some compilers recognize it as an extension, but not the NAG compiler. So this breaks things for NAG.

regridding with many levels is slow

When running Castro with up to 8 levels of refinement, once you go above level 7 the regrid process starts getting really expensive, and the cost does not seem proportional to the number of grids or zones.

Add io website to GitHub description

I think it'd be convenient to add a link to the github.io website for AMReX (https://amrex-codes.github.io/) to the GitHub description (https://github.com/AMReX-Codes/amrex) . An example of this is the Maestro GitHub repo, which includes the io website in the description (https://github.com/AMReX-Astro/MAESTRO). I realize the io website's mostly a stub for the moment, but as it grows it would be convenient to be able to quickly jump to it from the repository.

Interpolation Methods in amrex/OldTutorials/HeatEquation_EX3_F/advance.f90

Hello,
I want to learn more about the interpolation methods of ml_edge_restriction_c and ml_restrict_and_fill in amrex/OldTutorials/HeatEquation_EX3_F/advance.f90.

I partly modified the example and ran the code, but the difference between the result with a globally finer mesh and the one with AMR is rather large. I tried different conditions and found that the trouble is most likely the interpolation method.

I would appreciate it if you could provide some information (the method used or a paper)! Thank you!

Debug Build Warnings

@atmyers I get the following warnings with a debug build out of amrex::Array:

/Users/uy7/install/amrex/debug/include/AMReX_Array.H:30:14: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
  BL_ASSERT(i < this->size());
/Users/uy7/install/amrex/debug/include/AMReX_Array.H:36:14: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
  BL_ASSERT(i < this->size());

The following comes from exclusively using extra data as struct-of-arrays with particles (and therefore the number of extra reals and ints on the particles is zero):

/Users/uy7/install/amrex/debug/include/AMReX_Particles.H:418:40: warning: ISO C++ forbids zero-size array [-Wpedantic]
     double real_struct_data[NStructReal];
                                        ^
/Users/uy7/install/amrex/debug/include/AMReX_Particles.H:419:38: warning: ISO C++ forbids zero-size array [-Wpedantic]
     int    int_struct_data[NStructInt];
                                      ^
/Users/uy7/install/amrex/debug/include/AMReX_Particles.H:420:38: warning: ISO C++ forbids zero-size array [-Wpedantic]
     double real_array_data[NArrayReal];
                                      ^
/Users/uy7/install/amrex/debug/include/AMReX_Particles.H:421:36: warning: ISO C++ forbids zero-size array [-Wpedantic]
     int    int_array_data[NArrayInt];
                                    ^

Can these be cleaned?

Error in AMReX_filcc_mod.F90 when compiled in 2D

The new version of AMReX_filcc_mod.F90 has some issues when it is compiled in 2D (amrex_spacedim = 2). Here's a snippet with an error that the NAG compiler detects (it is really good at this)

  subroutine filccn(lo, hi, q, q_lo, q_hi, ncomp, domlo, domhi, dx, xlo, bc)
   [...]
    integer,          intent(in   ) :: q_lo(3), q_hi(3)
    integer,          intent(in   ) :: domlo(amrex_spacedim), domhi(amrex_spacedim)
   [...]
    ks = max(q_lo(3), domlo(3))
    ke = min(q_hi(3), domhi(3))
   [...]
    klo = domlo(3)
    khi = domhi(3)

The domlo(3), domhi(3) references are out-of-bounds.

I don't know enough about the code to propose a fix. If the actual arguments for the domlo/hi dummy arguments are actually length 3, then the dummy arguments could be declared that way. Otherwise at runtime those references are illegal, though probably harmless (presumably ks, ke, klo, khi aren't actually used). Here's a patch that avoids executing the illegal references and which also passes muster with the NAG compiler

@@ -123,15 +123,19 @@ contains
     ie = min(q_hi(1), domhi(1))
     js = max(q_lo(2), domlo(2))
     je = min(q_hi(2), domhi(2))
-    ks = max(q_lo(3), domlo(3))
-    ke = min(q_hi(3), domhi(3))
+    if (amrex_spacedim == 3) then
+       ks = max(q_lo(3), domlo(amrex_spacedim))
+       ke = min(q_hi(3), domhi(amrex_spacedim))
+    end if
 
     ilo = domlo(1)
     ihi = domhi(1)
     jlo = domlo(2)
     jhi = domhi(2)
-    klo = domlo(3)
-    khi = domhi(3)
+    if (amrex_spacedim == 3) then
+       klo = domlo(amrex_spacedim)
+       khi = domhi(amrex_spacedim)
+    end if
 
     do n = 1, ncomp
 

Note that there is a further issue if AMReX is built in 1D, though that seems to be broken at the moment (C++ error elsewhere).
@maxpkatz

recent changes breaks restart capabilities

E.g., in Castro, compiling with AMReX 96601a9, checkpointing and restart work fine. With the current development head of AMReX (27e6b9a6), we get a segfault upon restart.

Test suite example here:

http://bender.astro.sunysb.edu/Castro/test-suite/test-suite-amrex-gfortran/2017-08-26/rad-2Tshock-1d-restart.html

backtrace:

=== If no file names and line numbers are shown below, one can run
            addr2line -Cfie my_exefile my_line_address
    to convert `my_line_address` (e.g., 0x4a6b) into file name and line number.

=== Please note that the line number reported by addr2line may not be accurate.
    One can use
            readelf -wl my_exefile | grep my_line_address'
    to find out the offset for that line.

 0: ./Castro1d.gnu.DEBUG.MPI.ex() [0x59f79c]
    amrex::BLBackTrace::print_backtrace_info(_IO_FILE*)
    /home/testing/castro-amrex-tests/AMReX//Src/Base/AMReX_BLBackTrace.cpp:88

 1: ./Castro1d.gnu.DEBUG.MPI.ex() [0x59f689]
    amrex::BLBackTrace::handler(int)
    /home/testing/castro-amrex-tests/AMReX//Src/Base/AMReX_BLBackTrace.cpp:51

 2: /lib64/libc.so.6(+0x35950) [0x7f315d170950]
    ??
    ??:0

 3: ./Castro1d.gnu.DEBUG.MPI.ex() [0x56ab46]
    amrex::FabArray<amrex::FArrayBox>::FabArray(amrex::BoxArray const&, amrex::DistributionMapping const&, int, int, amrex::MFInfo const&, amrex::FabFactory<amrex::FArrayBox> const&)
    /home/testing/castro-amrex-tests/AMReX//Src/Base/AMReX_FabArray.H:881

 4: ./Castro1d.gnu.DEBUG.MPI.ex() [0x56514b]
    amrex::MultiFab::MultiFab(amrex::BoxArray const&, amrex::DistributionMapping const&, int, int, amrex::MFInfo const&, amrex::FabFactory<amrex::FArrayBox> const&)
    /home/testing/castro-amrex-tests/AMReX//Src/Base/AMReX_MultiFab.cpp:380

 5: ./Castro1d.gnu.DEBUG.MPI.ex() [0x5f7182]
    amrex::StateData::restartDoit(std::istream&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
    /home/testing/castro-amrex-tests/AMReX//Src/Amr/AMReX_StateData.cpp:239

 6: ./Castro1d.gnu.DEBUG.MPI.ex() [0x5f6f4f]
    amrex::StateData::restart(std::istream&, amrex::Box const&, amrex::BoxArray const&, amrex::DistributionMapping const&, amrex::StateDescriptor const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
    /home/testing/castro-amrex-tests/AMReX//Src/Amr/AMReX_StateData.cpp:220

 7: ./Castro1d.gnu.DEBUG.MPI.ex() [0x5e1497]
    amrex::AmrLevel::restart(amrex::Amr&, std::istream&, bool)
    /home/testing/castro-amrex-tests/AMReX//Src/Amr/AMReX_AmrLevel.cpp:332 (discriminator 2)

 8: ./Castro1d.gnu.DEBUG.MPI.ex() [0x434a4b]
    Castro::restart(amrex::Amr&, std::istream&, bool)
    /home/testing/castro-amrex-tests/Castro//Source/driver/Castro_io.cpp:98

 9: ./Castro1d.gnu.DEBUG.MPI.ex() [0x5c797b]
    amrex::Amr::restart(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)
    /home/testing/castro-amrex-tests/AMReX//Src/Amr/AMReX_Amr.cpp:1450

10: ./Castro1d.gnu.DEBUG.MPI.ex() [0x5c5d5e]
    amrex::Amr::init(double, double)
    /home/testing/castro-amrex-tests/AMReX//Src/Amr/AMReX_Amr.cpp:1045

11: ./Castro1d.gnu.DEBUG.MPI.ex() [0x43bbf6]
    main
    /home/testing/castro-amrex-tests/Castro//Source/driver/main.cpp:120

12: /lib64/libc.so.6(__libc_start_main+0xf1) [0x7f315d15b401]
    ??
    ??:0

13: ./Castro1d.gnu.DEBUG.MPI.ex() [0x40acea]
    _start
    ??:?

The GNUmake system is not friendly to new systems with MPI compilers

AMReX has revealed many bugs in vendor compilers, which I have reported over the last several months by sending bug reports with an attached tar of AMReX, along with instructions for which directory to navigate to, and where to type "make". However, vendors usually test these bugs on internal systems which AMReX does not know about, and they often encounter errors with the make system because it cannot determine some component of the environment.

In theory, Make.local addresses this problem, but if one compiles with USE_MPI=TRUE, the make system still searches for a version of MPI and fails if it can't find one. The only way I have been able to work around this is to set USE_MPI=FALSE so that it skips the MPI checks and uses whatever is set in Make.local, even if the compilers set in Make.local are indeed MPI compilers. While this is a simple workaround, it seems to be a hack rather than a feature.

It would be convenient to have a way to tell the make system to skip all checks and blindly use whatever is defined in Make.local. Is this possible to do, or am I thinking about make the wrong way?

Thanks.

Clarification on box corners and geometry corners

Consider this 2D snippet

type(amrex_box) :: domain
type(amrex_geometry) :: geom
domain = amrex_box([1,1], [10,10])
call amrex_geometry_build(geom, domain)

where

geometry.prob_lo = 0.0  0.0  0.0
geometry.prob_hi = 1.0  1.0  1.0

When I visualize the data on the associated multifab (in VisIt), I expect to see a 10x10 mesh covering the unit square [0,1]^2, but instead the rendered domain is [0.1, 1.1]^2. I get what I expect if I instead do

domain = amrex_box([0,0], [9,9])

It seems the geometry (or the plotting code?) assumes the box indices are 0-based. Is this a mistake or intentional? My sense is that a box is an arbitrary rectangular section of index space (the user's choice), and it ought to be possible to associate any box with a given rectangular geometry.
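This behavior is consistent with the CoordSys::CellCenter code quoted earlier on this page: locations are computed as offset[k] + dx[k]*(0.5 + point[k]), i.e., directly from the raw index with no reference to the box's lower bound, so index 0 always maps to prob_lo. With prob_lo = 0, prob_hi = 1 and a 10-cell box, dx = 0.1; a box starting at index 1 therefore spans physical edges [0.1, 1.1], while a box starting at index 0 spans [0, 1] as expected.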

AMReX_base_mod.F90 is not installed by CMake

Here is how I installed & tested the latest master (3724a07) of AMReX:

module load cmake/3.6.2 gcc/5.3.0 openmpi/1.10.5
cd $HOME/repos/amrex
cmake -DCMAKE_INSTALL_PREFIX=inst .
make -j16
make install
cd MiniApps/AMR_Adv_Diff_F90
mpicxx -I$HOME/repos/amrex/inst/include write_plotfile.f90 prob.f90 compute_flux.f90 init_phi.f90 update_phi.f90 advance.f90 main.f90 -L$HOME/repos/amrex/inst/lib -lfboxlib -lcboxlib -lmpi_usempif08
mpirun -n 16 ./a.out

And that compiles and runs in parallel.

However, this did not install the Src/F_Interfaces/Base/AMReX_base_mod.F90 module into $HOME/repos/amrex/inst/include. In fact, looking into the Src/F_Interfaces/Base/ directory, it seems that CMake build support for it is missing completely.

Is the file AMReX_base_mod.F90 supposed to be installed by CMake, and if not, why not? What should be the proper fix for this?

F90PP_source_files not added

Hi,

I'm getting a linker error

Scanning dependencies of target opal
[100%] Building CXX object src/CMakeFiles/opal.dir/Main.cpp.o
[100%] Linking CXX executable opal
/gpfs/home/frey_m/git/new-amrex/amrex/install/lib/libcboxlib.a(AMReX_FillPatchUtil.cpp.o): In function `amrex::InterpCrseFineBndryEMfield(amrex::InterpEM_t, std::array<amrex::MultiFab, 3ul> const&, std::array<amrex::MultiFab, 3ul>&, amrex::Geometry const&, amrex::Geometry const&, int)':
/gpfs/home/frey_m/git/new-amrex/amrex/Src/AmrCore/AMReX_FillPatchUtil.cpp:325: undefined reference to `amrex_interp_div_free_bfield'
/gpfs/home/frey_m/git/new-amrex/amrex/Src/AmrCore/AMReX_FillPatchUtil.cpp:336: undefined reference to `amrex_interp_efield'
collect2: error: ld returned 1 exit status
make[2]: *** [src/opal] Error 1
make[1]: *** [src/CMakeFiles/opal.dir/all] Error 2
make: *** [all] Error 2

since the variable F90PP_source_files in Src/AmrCore/CMakeLists.txt is not added to the variable local_source_files.

Best,
Matthias

CMake does not install the full Fortran interface

Here is how to reproduce using the latest master (3724a07) of AMReX:

module load cmake/3.6.2 gcc/5.3.0 openmpi/1.10.5
cd $HOME/repos/amrex
cmake -DCMAKE_INSTALL_PREFIX=inst .
make -j16
make install

This did not install the Src/F_Interfaces/Base/AMReX_base_mod.F90 module into $HOME/repos/amrex/inst/include. In fact, by looking into the Src/F_Interfaces/Base/ directory, it seems that it is missing the CMake build system completely.

This was originally reported in #63.

Physical Boundary Conditions - Fortran

I cannot find any example Fortran code that uses the amrex_physbc Fortran interface to supply boundary conditions. The current examples in Tutorials/*_CF use periodic BCs. I did notice that in some of the _C tutorials, an f77 file was present which provided a filcc routine. To impose boundary conditions, should I customize the filcc routines, or is there a better way utilizing amrex_physbc?

GFortran and derived type finalization: working for anyone?

I'm running into a segfault when running an executable built with gfortran (gcc 6.4.1). According to amrex's own traceback file, it is occurring in amrex_imultifab_destroy and it is being called automatically through finalization of some object. (A higher-level object is being passed to an INTENT(OUT) dummy argument, which triggers finalization of the object.) The code is working fine when built with Intel 17.

I'm pretty sure this is an issue with gfortran. To this day gfortran has long-standing serious defects in its implementation of derived type finalization; see the meta-bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=37336. How are other gfortran users dealing with this problem? By making explicit calls to the "destroy" subroutines to head off automatic finalization from needing to do anything?
