License: GNU General Public License v3.0

specfem3d's Introduction

Specfem3D


SPECFEM3D_Cartesian simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic or seismic wave propagation in any type of conforming mesh of hexahedra (structured or not).

It can, for instance, model seismic waves propagating in sedimentary basins or any other regional geological model following earthquakes. It can also be used for non-destructive testing or for ocean acoustics.

SPECFEM3D was founded by Dimitri Komatitsch and Jeroen Tromp, and is now being developed by a large, collaborative, and inclusive community. A complete list of authors can be found at https://specfem3d.readthedocs.io/en/latest/authors/

Installation

Instructions on how to install and use SPECFEM3D are available in the user manual.

For a quick test, run the default example with these commands:

./configure FC=gfortran CC=gcc
make all
./run_this_example.sh

and check the output files in ./OUTPUT_FILES/

NOTE: Do not modify the 'configure' script directly. Please modify the 'configure.ac' file instead, and generate a new 'configure' script with the command: autoreconf -i

Development


Development is hosted on GitHub in the SPECFEM/specfem3d repository.

SPECFEM3D is a community project that lives by the participation of its members — i.e., including you! It is our goal to build an inclusive and participatory community, so we are happy that you are interested in participating! We have collected a set of guidelines and advice on how to get involved in the community and keep them in the specfem3d GitHub wiki.

Computational Infrastructure for Geodynamics (CIG)

SPECFEM3D is part of the software that is hosted by the Computational Infrastructure for Geodynamics (CIG) and is available on the CIG website.

specfem3d's People

Contributors

1sbeller, aakash10gupta, bottero, carltape, casarotti, cmorency1, cses, danielpeter, dmiller423, durochat, eheien, elifo, etiennebachmann, evcano, gargrahul, gassmoeller, gdmcbain, huihuiweng, jpampuero, kbai, kiwifb, komatits, luet, minchenatrice, mnagaso, rmodrak, slimkaki, vmont, vmontlma, xiezhinan


specfem3d's Issues

merge SPECFEM3D_GLOBE into SPECFEM3D

merge SPECFEM3D_GLOBE into SPECFEM3D, i.e. create a single package and see the mesh of the Earth as a general mesh that we would
decompose in parallel using PT-Scotch. Problem: would this scale for huge global Earth simulations, e.g. at a seismic period of 1
second on 200,000 processor cores?

27-node elements in SPECFEM3D

Add support for 27-node elements in addition to 8-node elements;
that is useful for instance when handling large topographic variations or
significantly distorted layers; we do have support for 27-node elements in
SPECFEM3D_GLOBE but not in SPECFEM3D. I guess it would just be a
matter of cutting and pasting the 27-node routines from SPECFEM3D_GLOBE.

Users (for instance Vadim Monteiller and Paul Cristini) who use SPECFEM3D for mountain ranges for instance would be able to use curved elements, and since we have support for 27-node hexahedra in SPECFEM3D_GLOBE we could probably cut and paste that in SPECFEM3D.

Vadim could probably have a look at that because he will need that for his study of the Pyrénées.

As Emanuele mentioned, CUBIT has no real support for HEX27, but Gmsh does (and Paul uses Gmsh all the time; thus we could use it rather than CUBIT when curvature is needed).

I guess Vadim will work on that at some point.

Comment from Daniel:

Keep in mind that we have two ways to mesh: the in-house mesher xmeshfem3D, and CUBIT/SCOTCH. If you insist on 27-node elements, I would start by implementing them in the xmeshfem3D in-house mesher. That would probably show what is needed when combining this with CUBIT and SCOTCH, which might be more of a problem.

Comment from Emanuele:

Theoretically, CUBIT can generate meshes for HEX8, HEX20 and HEX27 types of
hexahedra, but the higher-order nodes are created only when the mesh is
exported to the Exodus II file (and they persist in the CUBIT database after
that export), i.e. only after the meshing process... therefore it is
basically a regular hex mesh with more nodes inside each element (not a
curvilinear element).

I'm not sure if CUBIT's Python interface is able to extract the
information about the internal nodes... I should give it a try, but in
any case it is possible to recreate it during the exporting phase
from cubit/geocubit to specfem3d (or directly in specfem?)

I suppose that I can easily create a script that produces a hex
association of 27 nodes, but the hard-for-me step is to adapt the
specfem3d reader and the SCOTCH partitioning.

(todo_list_please_dont_remove.txt: suggestion 25)

add a checkpointing/restarting system with CRC-32 to SPECFEM3D(_GLOBE)

add a checkpointing/restarting system with CRC-32 as in
SPECFEM3D_GLOBE/tags/v4.1.0_beta_merged_mesher_solver_non_blocking_MPI/src , or using the fault tolerance library developed by
Leonardo Bautista and Franck Cappello at INRIA (see the content of directory "utils/fault_tolerance").
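
For illustration only, here is a minimal, self-contained sketch of a CRC-32 routine (bitwise version of the standard IEEE reflected polynomial) of the kind such a checkpointing system could use to verify that a restart file is read back intact; this is not the SPECFEM3D_GLOBE implementation, and the module and function names are hypothetical:

module crc32_checksum
  use iso_fortran_env, only: int8, int32
  implicit none
contains
  ! bitwise (table-free) CRC-32 with the standard IEEE reflected polynomial
  function crc32(bytes) result(crc)
    integer(int8), intent(in) :: bytes(:)
    integer(int32) :: crc
    integer(int32), parameter :: POLY = int(z'EDB88320', int32)
    integer :: i, j
    crc = -1_int32                                   ! same bit pattern as z'FFFFFFFF'
    do i = 1, size(bytes)
      crc = ieor(crc, iand(int(bytes(i), int32), 255_int32))
      do j = 1, 8
        if (iand(crc, 1_int32) /= 0_int32) then
          crc = ieor(ishft(crc, -1), POLY)           ! ishft() is a logical shift, safe for the sign bit
        else
          crc = ishft(crc, -1)
        end if
      end do
    end do
    crc = not(crc)
  end function crc32
end module crc32_checksum

The checkpoint writer would compute the checksum of each buffer before dumping it to disk and store it in the file header, and the restart reader would recompute and compare it before resuming the run.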

see if Dirichlet or Neumann conditions are needed on the outside edge of the C-PML layers in acoustic media

Zhinan sees if Dirichlet or Neumann conditions are needed on the outside edge of the C-PML layers in acoustic media; in elastic
media, a Dirichlet condition (zero displacement) is needed to prevent spurious Rayleigh waves from developing at the far end of the
PML and making it blow up; in the acoustic case, a Dirichlet condition for pressure (zero pressure) means a Neumann (rather than
Dirichlet) condition for the potential because pressure is related to its gradient;

thus Zhinan should see (by running a simple test in the 2D code) if a Neumann condition is OK for an acoustic PML, and/or if a
Dirichlet condition works fine as well; it could be that both are OK (because no such spurious surface waves can develop, contrary
to the elastic case) but it is worth checking (and it is simple to check)

(todo_list_please_dont_remove.txt: suggestion 04)

time schemes: Zhinan's time scheme in 3D codes

Regarding the implicit time schemes that Zhinan has implemented in 2D, I agree that it would be great to put that in the official SVN version relatively soon to avoid losing the changes if we wait for too long. But I think we only need this in 2D for now, so let us not do it in 3D (at least for now in 2012). Let us just commit Zhinan's 2D version of the implicit routines to the official SVN code (making it off by default; the default should remain a purely explicit second-order Newmark scheme)

(todo_list_please_dont_remove.txt: suggestion 30)

C-PML in SPECFEM3D

Zhinan (with the help of Jo and Daniel) implements C-PML in SPECFEM3D (in that code only for now; we can cut and paste that in
SPECFEM3D_GLOBE later if needed, but we do not need that now).
Paul Cristini and I will then test it extensively after the summer.
Zhinan, Daniel, Paul and Jo need to agree on a numbering convention (i.e. flags) for the different PML regions and corners (a
binary / bit encoding technique can be used; see more details about this at the end of this suggestion);
they could use the same idea as in the convention used in 2D.

See (with Emanuele and also Paul Cristini) how to define and handle the PML absorbing elements in CUBIT;
read this from input files in the solver

See also if the current arrays called isPML() or PML() or something like that in SPECFEM3D need to be suppressed or on the contrary could
be reused. They seem to correspond to an earlier attempt a few years ago that was never finished.

One thing that we will need to investigate in the specific case of GLOBE is how to make PML work for boundaries that are not aligned
with x / y / z (i.e. for one chunk of SPECFEM3D_GLOBE). That should not be a problem but we will get more terms because of products with
the three components of the normal vector. We already have the general (tensorial) formulation written in our PML paper of 2003,
but I never implemented all the terms.

The standard logical bit handling functions of Fortran are described for instance at http://www.ews.uiuc.edu/~mrgates2/docs/fortran.html
Thus here is the standard way of using a single array for all the logical flags for CPML in Fortran:

  • define a single integer flag

integer, dimension(nspec) :: PML_flags

  • clear it initially : PML_flags(:) = 0
  • use for instance bit #1 for Xmin, bit #2 for Xmax , bit #3 for Ymin .... bit #6 for Zmax
  • thus, to indicate that an element belongs to the Xmax PML for instance, use :

PML_flags(ispec) = ibset(PML_flags(ispec), 2) ! sets bit #2 of that integer to 1

  • later, to test if this element belongs to the Xmax PML, use:

if (btest(PML_flags(ispec), 2)) then ! test if bit #2 is equal to 1
endif

Pretty simple, and then a single array is needed for all the flags.

(in fact, we could use this technique to group all the other logical flags of the code in a single array, to reduce memory size...)
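
As a self-contained illustration of this technique (the bit positions and the element number below are arbitrary choices for the sketch, not the convention finally adopted in the code), a complete toy program could look like this:

program test_pml_flags
  implicit none
  integer, parameter :: nspec = 10
  integer, parameter :: FLAG_XMIN = 1, FLAG_XMAX = 2, FLAG_YMIN = 3, &
                        FLAG_YMAX = 4, FLAG_ZMIN = 5, FLAG_ZMAX = 6
  integer, dimension(nspec) :: PML_flags
  integer :: ispec

  PML_flags(:) = 0                                          ! clear all the flags initially

  ispec = 7
  PML_flags(ispec) = ibset(PML_flags(ispec), FLAG_XMAX)     ! this element is in the Xmax PML
  PML_flags(ispec) = ibset(PML_flags(ispec), FLAG_YMIN)     ! ...and also in the Ymin PML (an edge)

  if (btest(PML_flags(ispec), FLAG_XMAX)) print *, 'element', ispec, 'belongs to the Xmax PML'
  if (.not. btest(PML_flags(ispec), FLAG_ZMAX)) print *, 'element', ispec, 'is not in the Zmax PML'

  PML_flags(ispec) = ibclr(PML_flags(ispec), FLAG_YMIN)     ! a flag can also be cleared if needed
end program test_pml_flags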

(todo_list_please_dont_remove.txt: suggestion 03)

dynamic/kinematic faults enhancements

  • fault_solver_common:
    • make ordered version of subroutine assemble_MPI_vector_blocking, and use it in subroutine initialize_fault
  • fault_solver_dynamic:
    • many hard-coded ad hoc features need to be set instead as user options
      (e.g. rate-and-state friction)
  • rate-and-state friction:
    • make it a user-friendly option (currently a flag in fault_solver)
    • add documentation
  • cubit interface:
    • consolidate python scripts (eliminate redundancy)
  • verify consistency of fault edge nodes in kinematic solver:
    currently, these nodes are not split but they are included in the fault database
  • simplify mesh generation with faults:
    • eliminate fault edge nodes?
      Currently fault edge nodes must be defined as closed and non-split in CUBIT,
      SESAME assigns a single ibulk, but it adds it to the fault database.
    • develop a recipe for mesh coarsening in CUBIT based on half "cubed sphere" mesh
      • test tet-to-hex strategy
  • add snapshot outputs to kinematic solver (e.g. to export fault stresses for adjoint source inversion)
  • prepare automated tests

conform strictly to the Fortran2003 standard

To make the three codes (2D, 3D_Cartesian, 3D_GLOBE) conform strictly to the Fortran standard,
we should replace all "call system(...)" statements with "call execute_command_line(...)", which is the
new standard way in Fortran2008. However, this intrinsic subroutine is currently (July 2013) not supported
for instance by Intel ifort and thus we purposely do not do it yet, but around 2015 or so we should do it.
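
For example (the shell command below is just a placeholder), the replacement would look like this:

! before (non-standard vendor extension):
call system('mkdir -p OUTPUT_FILES')

! after (Fortran 2008 standard intrinsic), with ier declared as an integer in the declarations:
call execute_command_line('mkdir -p OUTPUT_FILES', exitstat=ier)
if (ier /= 0) stop 'Error: external command failed'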

auto-time stepping in SPECFEM3D_GLOBE: fix auto_NER() in GLOBE when a single chunk is used and has a size of less than 90 x 90 degrees

fix auto_NER() in GLOBE when a single chunk is used and has a size of less than 90 x 90 degrees; in that case, that (relatively
old) routine written by Brian Savage around 2006/2007 is called, and I noticed that it makes the code blow up in most cases because
the mesh it creates and/or the time step that it automatically selects is not right (probably because the mesh has changed
significantly since Brian wrote that routine)

probably not urgent (in our group at least we almost always use the full Earth or else a chunk of size 90 x 90 degrees); but if
other users change that the current code will likely blow up

(todo_list_please_dont_remove.txt: suggestion 13)

In all SPECFEM versions (2D, 3D and 3D_GLOBE), fix all warnings from the Cray compiler.

In the new merged CPU / GPU version of SPECFEM3D_GLOBE, fix all the warnings that we get when using the Cray compiler.
They correspond to Fortran statements that are correct but that induce memory copies in and out of subroutine arguments, which may slow down the code
(significantly if these calls are executed very often, for instance inside a big loop; not that significantly if the call is executed only once);
in any case, it would be better / cleaner for the code to compile without a single warning. Here is the full list of Cray warnings (as of the beginning of Oct 2013):


ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/parallel.sharedmpi.o src/shared/parallel.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/spline_routines.shared.o src/shared/spline_routines.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/write_VTK_file.shared.o src/shared/write_VTK_file.f90

 call gather_all_cr(glob_data(1,:),nglob,store_val_ux_all,nglob,NPROCTOT)
                          ^                                            

ftn-1438 crayftn: CAUTION WRITE_VTK_DATA_CR_ALL, File = src/shared/write_VTK_file.f90, Line = 483, Column = 31
This argument produces a copy in and copy out to a temporary variable.

  call gather_all_cr(glob_data(2,:),nglob,store_val_uy_all,nglob,NPROCTOT)
                          ^                                            

ftn-1438 crayftn: CAUTION WRITE_VTK_DATA_CR_ALL, File = src/shared/write_VTK_file.f90, Line = 484, Column = 31
This argument produces a copy in and copy out to a temporary variable.

 call gather_all_cr(glob_data(3,:),nglob,store_val_uz_all,nglob,NPROCTOT)
                          ^                                            

ftn-1438 crayftn: CAUTION WRITE_VTK_DATA_CR_ALL, File = src/shared/write_VTK_file.f90, Line = 485, Column = 31
This argument produces a copy in and copy out to a temporary variable.

ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/meshfem3D_par.check.o src/meshfem3D/meshfem3D_par.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/meshfem3D.check.o src/meshfem3D/meshfem3D.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/meshfem3D_models.check.o src/meshfem3D/meshfem3D_models.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/add_missing_nodes.check.o src/meshfem3D/add_missing_nodes.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/model_aniso_mantle.check.o src/meshfem3D/model_aniso_mantle.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/compute_forces_outer_core_noDev.solverstatic.o src/specfem3D/compute_forces_outer_core_noDev.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/compute_forces_outer_core_Dev.solverstatic.o src/specfem3D/compute_forces_outer_core_Dev.F90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/compute_kernels.solverstatic.o src/specfem3D/compute_kernels.F90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/compute_seismograms.solverstatic.o src/specfem3D/compute_seismograms.f90

            hxir_store(irec_local,:),hetar_store(irec_local,:),hgammar_store(irec_local,:), &
                      ^                                                                       

ftn-1438 crayftn: CAUTION COMPUTE_SEISMOGRAMS_ADJOINT, File = src/specfem3D/compute_seismograms.f90, Line = 276, Column = 27
This argument produces a copy in and copy out to a temporary variable.
^
ftn-1438 crayftn: CAUTION COMPUTE_SEISMOGRAMS_ADJOINT, File = src/specfem3D/compute_seismograms.f90, Line = 276, Column = 53
This argument produces a copy in and copy out to a temporary variable.
^
ftn-1438 crayftn: CAUTION COMPUTE_SEISMOGRAMS_ADJOINT, File = src/specfem3D/compute_seismograms.f90, Line = 276, Column = 81
This argument produces a copy in and copy out to a temporary variable.

            hpxir_store(irec_local,:),hpetar_store(irec_local,:),hpgammar_store(irec_local,:), &
                       ^                                                                         

ftn-1438 crayftn: CAUTION COMPUTE_SEISMOGRAMS_ADJOINT, File = src/specfem3D/compute_seismograms.f90, Line = 277, Column = 28
This argument produces a copy in and copy out to a temporary variable.
^
ftn-1438 crayftn: CAUTION COMPUTE_SEISMOGRAMS_ADJOINT, File = src/specfem3D/compute_seismograms.f90, Line = 277, Column = 55
This argument produces a copy in and copy out to a temporary variable.
^
ftn-1438 crayftn: CAUTION COMPUTE_SEISMOGRAMS_ADJOINT, File = src/specfem3D/compute_seismograms.f90, Line = 277, Column = 84
This argument produces a copy in and copy out to a temporary variable.

ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/compute_stacey_crust_mantle.solverstatic.o src/specfem3D/compute_stacey_crust_mantle.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/compute_stacey_outer_core.solverstatic.o src/specfem3D/compute_stacey_outer_core.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/multiply_arrays_source.solverstatic.o src/specfem3D/multiply_arrays_source.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/noise_tomography.solverstatic.o src/specfem3D/noise_tomography.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/write_output_ASCII.solverstatic.o src/specfem3D/write_output_ASCII.f90
ftn -eC -eD -ec -en -eI -ea -g -G0 -I./obj -I. -I. -I./setup -c -o obj/write_output_SAC.solverstatic.o src/specfem3D/write_output_SAC.f90

  call write_n_real(seismogram_tmp(iorientation,1:seismo_current),seismo_current)
                                  ^                                               

ftn-1438 crayftn: CAUTION WRITE_OUTPUT_SAC, File = src/specfem3D/write_output_SAC.f90, Line = 624, Column = 39
This argument produces a copy in and copy out to a temporary variable.

  call write_n_real(real(seismogram_tmp(iorientation,1:seismo_current)),seismo_current)
                    ^                                                                   

ftn-1438 crayftn: CAUTION WRITE_OUTPUT_SAC, File = src/specfem3D/write_output_SAC.f90, Line = 626, Column = 25
This argument produces a copy in to a temporary variable.

Cray Fortran : Version 8.1.8
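
For reference, one possible way to silence this particular kind of warning (a sketch under the assumption that gather_all_cr() ultimately needs a contiguous send buffer; not necessarily the fix the developers will choose) is to make the copy of the non-contiguous section explicit instead of letting the compiler create a hidden temporary, for instance in write_VTK_file.f90:

! declared once among the declarations of the routine:
real(kind=CUSTOM_REAL), dimension(:), allocatable :: tmp_comp

allocate(tmp_comp(nglob))
tmp_comp(:) = glob_data(1,:)          ! explicit copy of the non-contiguous section
call gather_all_cr(tmp_comp, nglob, store_val_ux_all, nglob, NPROCTOT)
tmp_comp(:) = glob_data(2,:)
call gather_all_cr(tmp_comp, nglob, store_val_uy_all, nglob, NPROCTOT)
tmp_comp(:) = glob_data(3,:)
call gather_all_cr(tmp_comp, nglob, store_val_uz_all, nglob, NPROCTOT)
deallocate(tmp_comp)

Another option would be to store glob_data with the component as the last index, so that each gathered component is contiguous and no copy is needed at all.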

(todo_list_please_dont_remove.txt: suggestion 48)

Improve checkpointing/restarting, and also make sure it is not limited to NUMBER_OF_RUNS > 1

add a checkpointing/restarting system with a checksum system as in SPECFEM3D_GLOBE/tags/v4.1.0_beta_merged_mesher_solver_non_blocking_MPI/src , or using the fault tolerance library developed by Leonardo Bautista and Franck Cappello at INRIA (see the content of directory "utils/fault_tolerance").

Should probably be done based on ADIOS.

From Craig P Steffen at NCSA:

Make checkpointing/restarting easier to use and not limited to a NUMBER_OF_RUNS > 1;
i.e. on big machines we should be able to use the existing routines to checkpoint even when NUMBER_OF_RUNS == 1, in case the run fails and needs to be restarted.
This will be useful on very big machines, in particular in SPECFEM3D_GLOBE.

implement higher order time schemes in 3D

implement higher order time schemes in 3D by cutting and pasting what Zhinan has done in 2D; two schemes are useful: RK4 and
LDDRK4-6 (low storage low dissipation Runge Kutta, a bit more expensive than RK4 but much better)

implement these RK4 and LDDRK4-6 high-order time schemes including for viscoelastic media, as already done in the 2D code
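
For reference, a classical RK4 step for a generic first-order system du/dt = f(t, u) (which is how the second-order wave equation is rewritten, with u holding both displacement and velocity) looks like the minimal sketch below; this is illustrative only and is not Zhinan's 2D implementation, and the low-storage LDDRK4-6 scheme is organized differently so that only two registers per unknown are needed:

! one classical RK4 time step for du/dt = f(t, u)
subroutine rk4_step(f, t, dt, u)
  implicit none
  interface
    function f(t, u) result(dudt)
      double precision, intent(in) :: t
      double precision, intent(in) :: u(:)
      double precision :: dudt(size(u))
    end function f
  end interface
  double precision, intent(in) :: t, dt
  double precision, intent(inout) :: u(:)
  double precision, dimension(size(u)) :: k1, k2, k3, k4
  k1 = f(t           , u)
  k2 = f(t + dt/2.d0 , u + dt/2.d0 * k1)
  k3 = f(t + dt/2.d0 , u + dt/2.d0 * k2)
  k4 = f(t + dt      , u + dt * k3)
  u  = u + dt/6.d0 * (k1 + 2.d0*k2 + 2.d0*k3 + k4)
end subroutine rk4_step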

probably not a high priority, could be done last because we do not need that for current projects

Feedback from Qinya: The same time scheme should be used for forward and adjoint simulations, while special care should be taken to implement the corresponding time scheme for the back-reconstruction of forward field in kernel calculations.

TO DO ONLY IN 2013 OR 2014 BECAUSE WE DO NOT NEED THAT FOR URGENT PROJECTS.

(todo_list_please_dont_remove.txt: suggestion 07)

optimize the CPML code (which is currently very slow) and use weights in Scotch decomposition for C-PML elements in the code

use weights in the Scotch decomposition for C-PML elements in the code;
C-PML elements will be more expensive because they compute more terms and solve more equations (for the memory
variables / convolution terms), thus we should assign a higher weight to them when we call Scotch

the "elmnts_load" array up in decompose_mesh/part_decompose_mesh.f90 (around line 1432)
currently does not take into account C-PML elements weights. The matching array in the code is
called "CPML_type" and is defined as follows: 1 = face, 2 = edge, 3 = corner.

that is similar to what Daniel does for acoustic elements in the current version of SPECFEM3D: he uses a weight of 1 for acoustic
elements and 3 (or something like that) for elastic elements, which use a 3D vector instead of a scalar.

In principle the weighting factors can be computed analytically by counting the exact number of additional multiplications and
additions performed by C-PML elements

In PML corners this additional factor can be multiplied by 2 (for instance in the slice of PML that has X and Y damping) or by 3
for corners that combine an X, a Y and a Z PML.

easy to do once we have an expression for the weighting factors
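
A minimal sketch of what this could look like in part_decompose_mesh.f90 (the W_* weight values below are hypothetical placeholders; the real values should come from the operation count mentioned above):

integer, parameter :: W_PML_FACE = 2, W_PML_EDGE = 3, W_PML_CORNER = 4   ! placeholder weights
integer :: ispec

do ispec = 1, nspec
  select case (CPML_type(ispec))
  case (1)   ! PML face: damping in one direction only
    elmnts_load(ispec) = elmnts_load(ispec) * W_PML_FACE
  case (2)   ! PML edge: damping in two directions
    elmnts_load(ispec) = elmnts_load(ispec) * W_PML_EDGE
  case (3)   ! PML corner: damping in three directions
    elmnts_load(ispec) = elmnts_load(ispec) * W_PML_CORNER
  case default
    ! regular (non-PML) element: keep the existing acoustic/elastic weight
  end select
enddo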

workflow for inverse problems: make the inversion tools more standard and more user-friendly

Update from Dimitri, Oct 2012: after talking with many developers and users, we should definitely make the inversion tools in SPECFEM3D more standard and more user friendly. Even if there are many options to adjust when solving inverse problems, and different users currently have different tools (in Matlab, Perl, Fortran, Python and so on), sooner rather than later we should probably try to define a single tool in which all these different options could be implemented, but at least we would have a unified framework, which we could document in the manual.

Currently many users complain that computing sensitivity kernels is easy with SPECFEM3D but that then it is not so easy to use them (and different people give different answers when they are asked). So far, at least the following groups have local tools: the Princeton group, the Toronto group, Carl in Alaska, Emanuele and his group at INGV, Alessia (at least for Flexwin, there are now two versions: in Fortran and in Python), and the Marseille / CNRS group. We should try to address this issue relatively quickly, because the more we wait the more these sets of tools will diverge.

One thing we should do quickly for sure is make sure we maintain the current tools of SPECFEM3D inside the same directory as SPECFEM3D itself; it seems that currently everything is outside, in an "ADJOINT_TOMO" directory, which is thus not downloaded by users when they get SPECFEM3D through SVN or from the CIG tar file; which is problematic.

CPML in the 2D code for tilted models

Zhinan is going to try to adapt CPML in the 2D code to tilted models, to see how to do that;
he will also try to see how many PML flags he needs.

(todo_list_please_dont_remove.txt: suggestion 01b)

use PT-Scotch (the parallel version of SCOTCH) in SPECFEM3D instead of (or as an option in addition to) the serial version of SCOTCH

use PT-Scotch (the parallel version of SCOTCH) in SPECFEM3D instead of (or as an option in addition to) the serial version of
SCOTCH; because decomposing very large 3D meshes on a serial machine (on a single processor core) can be a severe limitation and
for some meshes that will be a major bottleneck.

Support ParMetis as well because there is a new, improved version that has been released by Karypis et al recently.
To do that, Jeroen sent a Fortran bug fix (a patch to use to call METIS from Fortran) to Matthieu by email on Jan 16, 2013.

What would need to be done is switch to PT-Scotch version 6.0 and/or to ParMETIS
(supporting both would be easy, since the subroutine calls are very similar; in the current serial code we support both Scotch and METIS).

To do that, one would need to rewrite "decompose_mesh.F90" in directory src/decompose_mesh.
That is the only file that would need to change (in addition to part_decompose_mesh.f90, which is called by decompose_mesh.F90).
Unfortunately that file is not very well written nor very well commented.
Thus unfortunately that set of routines is not so easy to understand and modify.
A bunch of things are hardwired.

Thus, this is not so easy to do (Daniel and I checked about a year ago). It would maybe be easier to rewrite 'decompose_mesh_scotch'
from scratch as a new, MPI program instead of trying to add MPI support to the current and not so well written serial version.
That program is relatively small, and what it does is relatively straightforward.

Branch and tag cleanup

These branches appear unmerged, but should be merged or deleted:

  • coupling_vadim: From the description, this was merged into master, but not marked as such in SVN. See 21b7443 on master.
  • obsolete_old: This branch just contains renamed copies of basin_static and ORIGIN.
  • basin_static: The only standalone commit on this branch appears to be empty.
  • basin_static@16915: Continues as basin_static, so unnecessary.
  • sunflower: This is just a single commit, followed by the first SPECFEM3D_SUNFLOWER commit that is completely disjoint from master. They really should have been joined.
  • SPECFEM3D_SUNFLOWER: I believe this is merged in 43b1a09, but not correctly marked as a merge in SVN.
  • SPECFEM3D_SUNFLOWER.old: This is just one commit over SPECFEM3D_SUNFLOWER, with the commit renaming the branch.
  • obsolete_old_SPECFEM3D_SUNFLOWER.old: This is just one commit over SPECFEM3D_SUNFLOWER.old, with the commit renaming the branch.
  • update_temporary: The first commit on this branch claims it is "private", so not sure if it's been merged or not.
  • update_temporary@16915: Continues as update_temporary, so unnecessary.
  • ORIGIN: This is basically the first three commits from CVS, with some renames later on. I don't know why anyone bothered. Maybe it's just an artifact of the git conversion and nobody really did this?
  • ORIGIN@16915: Continues as ORIGIN, so unnecessary.
  • ORIGIN@10678: Continues as ORIGIN, so unnecessary.

These branches are merged and unnecessary:

  • trunk@16915
  • trunk@10678
  • ORIGIN@9365

There are lots of unnecessary tags:

  • R_20021218@10678, R_20021218@9365, R_20021218@9368 R_20021218@16915 are all copies of R_20021218. However, this is really just a tag of the first commit from cvs2svn, so it's useless.
  • older_unidentified_obsolete_v1.4@14753, older_unidentified_obsolete_v1.4@14757, older_unidentified_obsolete_v1.4@16111, older_unidentified_obsolete_v1.4@16915 are older versions of older_unidentified_obsolete_v1.4. This is just a copy of v1.4 with some renames. I don't know why anyone bothered with it.
  • v1.4@10158, v1.4@10678 are older versions of v1.4
  • v1.4.1@10158, v1.4.1@10344, v1.4.1@10678, v1.4.1@16915 are older versions of v1.4.1
  • v1.4.3_BASIN@15626, v1.4.3_BASIN@16111, v1.4.3_BASIN@16915 are older versions of v1.4.3_BASIN
  • v1.4.4_last_BASIN, v1.4.4_last_BASIN@16915 are older versions of v1.4.4

PML material property for adjacent elements

Make sure a PML element always has the same material properties as the non-PML element it is in contact with (otherwise it cannot be perfectly matched, because there would then be a (physical) material interface between them).

use ADIOS file format for models?

  • For binary files (for instance Carl Tape's m16 model) we should use NETCDF to store them and read them back; that is the only
    clean way of ensuring portability between different systems; and NETCDF is free and open source therefore there is no reason not to do it.
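
A minimal sketch of what writing one model array with the netcdf-fortran (nf90_*) interface could look like (the file name, variable name and the vp(nx,ny,nz) array are hypothetical, and error checking of each return code is omitted for brevity):

use netcdf
integer :: ncid, dimid_x, dimid_y, dimid_z, varid, ier

ier = nf90_create('tomography_model_vp.nc', NF90_CLOBBER, ncid)
ier = nf90_def_dim(ncid, 'nx', nx, dimid_x)
ier = nf90_def_dim(ncid, 'ny', ny, dimid_y)
ier = nf90_def_dim(ncid, 'nz', nz, dimid_z)
ier = nf90_def_var(ncid, 'vp', NF90_FLOAT, (/ dimid_x, dimid_y, dimid_z /), varid)
ier = nf90_enddef(ncid)
ier = nf90_put_var(ncid, varid, vp)
ier = nf90_close(ncid)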

routine meshfem3D_models_getatten_val() from Hejun Zhu is unreliable and gives wrong results in SPECFEM3D_GLOBE

In SPECFEM3D_GLOBE, fix this known 64-bit issue when -mcmodel=medium -shared-intel is used in order to handle very large static memory size per core
(the 3D_Cartesian and 2D packages are fine because that compiler option is never used in them):

If you run very large meshes on a relatively small number
of processors, the static memory size needed on each processor might become
greater than 2 gigabytes, which is the upper limit for 32-bit addressing
(dynamic memory allocation is always OK, even beyond the 2 GB limit; only static memory has a problem).
In this case, on some compilers you may need to add -mcmodel=medium (if you do not use the Intel ifort / icc compiler)
or -mcmodel=medium -shared-intel (if you use the Intel ifort / icc compiler)
to the configure options of CFLAGS, FCFLAGS and LDFLAGS otherwise the compiler will display an error
message (for instance 'relocation truncated to fit: R_X86_64_PC32 against .bss' or something similar);

BEWARE that using -mcmodel=medium -shared-intel is known for currently leading to incorrect seismograms,
at least when flag ATTENUATION is on (and maybe even without), at least in the case of the Intel ifort / icc compiler.
This likely comes from intrinsic functions such as size() that return an integer(8) instead of an integer(4), thus leading
to incorrect results when used in function calls if the -i8 flag is not added to the compiler options;
however, when adding -i8 the code does not compile because the MPI calls then refuse to compile (they need integer(4) arguments).
It seems that this problem is triggered when 3D models are used and mostly (only?) when attenuation is on, but not triggered for 1D Earth model benchmarks
even with attenuation (at least not triggered for EXAMPLES/benchmarks/attenuation_benchmark_GJI_2002_versus_normal_modes nor for Anne Sieminski's three 1D benchmarks), which makes it difficult to locate.
Since most current users run the code with less than 2 GB of static memory per core, we have not investigated that problem carefully for now,
and thus recommend that you do NOT use these compiler flags until this problem is located and fixed.

(todo_list_please_dont_remove.txt: suggestion 45)

merge all the mesh data files into a single mesh file per MPI slice

merge all the mesh data files into a single mesh file per MPI slice;
on large machines many people (including Dominik Goeddeke and I, also people at the UK supercomputing center) have found that
generating more than 40 smaller files per slice in GLOBE puts a very heavy load on LUSTRE or
GPFS shared file systems for no reason

the only reason why we have many files per slice is historical; we should definitely change that

Add ADIOS support for that

I have asked Jo to reduce that to a single file per process

we will need to do two things:

  • convert all ASCII mesh files to 'unformatted' (in particular all the MPI buffer files: list of points etc.); easy to do: just add
    form='unformatted' to the open statements, and change all write(unit_number,*) statements to simply write(unit_number),
    i.e. just get rid of the ",*" (see the sketch after this list)
  • merge all the files; easy to do in principle: just comment out all "close" and "open" statements in the code except the first
    'open' and the last "close"; also make sure that the same unit number is used in all writes (which is not the case right now: I
    remember we use 11, 12, 27, 34, IOUT and so on); you will need to make all of this a single value (called IOUT)
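
Here is a minimal before/after sketch of the formatted-to-unformatted change (the unit number IOUT is the one mentioned above; the file name and the variables written are illustrative only):

! current formatted (ASCII) version:
open(unit=IOUT, file='proc000000_mesh_points.txt', status='unknown')
write(IOUT,*) x, y, z, ibool
close(IOUT)

! unformatted (binary) version:
open(unit=IOUT, file='proc000000_mesh_points.bin', status='unknown', form='unformatted')
write(IOUT) x, y, z, ibool
close(IOUT)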

This should be done in both SPECFEM3D and SPECFEM3D_GLOBE

this is critical, because currently in GLOBE we write about 42 files per MPI slice, thus when running on 1000 cores we create
42,000 files for no reason. Let us reduce this to 1000.

IMPORTANT: see if this has an impact on processing scripts, if we decide to also merge the files output by the solver (for instance the
several different types of sensitivity kernel files) into a single file. If so, some kernel processing scripts and tools will need to be adapted
accordingly.

Feedback from Qinya: I can't agree more. We have had experience running big specfem jobs on our National Supercomputer Consortium
(related to kernels for noise measurements, which generate an even greater number of files). It crippled the entire file system,
and the system administrators became really unfriendly to us after that ...

On the other hand, I know a lot of visualization programs (paraview) actually read the individual binary files and combine them
to form bigger visualization domains. So if we rewrite this, we need to write it in a form so that it is easy/faster for an
external program to access bits of it (such as x,y,z, ibool, etc). Maybe direct-accessed C binary files?

(todo_list_please_dont_remove.txt: suggestion 11)

specfem2D: allow quality factor to be different for each point

something that could be made more general in the 2D code:
at line 770 of specfem2D.F90:
!! DK DK if needed in the future, here the quality factor could be different for each point
i.e. they could be given at each (i,j,ispec) instead of at each (ispec) only
in the current version. Very easy to do if needed, just that line to change.

Find a better way to build internal scotch

Building the internal copy of scotch involves modifying its Makefiles. This is a bit hackish, plus git always complains that I modified those files even though I didn't really do so, and have no intention of checking in the changes.

We should find a better way (recursive autoconf, maybe?)

Comments in DATA/Par_file and also in the LaTeX manual of the 2D code should mention the CFL upper bound of the different time schemes that are now implemented and that can be selected by the user: Newmark, RK4 and LDDRK4-6

in comments in DATA/Par_file and also in the LaTeX manual of the 2D code, Zhinan should mention the CFL upper bound of the
different time schemes that are now implemented and that can be selected by the user:
Newmark, RK4 and LDDRK4-6

mention also in the manual and in these comments in DATA/Par_file that using Stacey absorbing conditions slightly reduces this maximum
CFL value i.e. the maximum time step (as a rule of thumb obtained by trial and error, it is reduced by a factor of about 0.90).

without such numbers as comments nor in the manual, it is difficult for users to decide what value to use for the time step (in the
2D code, the user selects the time step, it is not computed automatically by the code)

(todo_list_please_dont_remove.txt: suggestion 17)

see if compression (wavelets?) could be useful

See if compression (wavelets?) could help

Feedback from Qinya on the last point: I recently hosted a visit by Prof. Ling-yun Chiao (Chiao & Kuo 2001, Hung, Ping & Chiao 2011) from National Taiwan University, one of the first few people who started to apply wavelets to seismic inverse problems. I think we have some idea of how wavelets can potentially help with adjoint tomography, and I would love to share our thoughts with you all as well.

add calculation of estimate of memory consumption in SPECFEM3D_Cartesian; carefully check file memory_eval.f90

  • Regarding memory size (getting an estimate of memory consumption in SPECFEM3D),
    somebody should just cut and paste my SPECFEM3D_GLOBE routine
    "SPECFEM3D_GLOBE/version41_beta/src/memory_eval.f90", which
    I call from "SPECFEM3D_GLOBE/version41_beta/src/create_header_file.f90".
    It would work for SPECFEM3D as well (with minor modifications).

Carefully check file memory_eval.f90 in SPECFEM3D_Cartesian (already done and fixed for SPECFEM3D_GLOBE by Dimitri in July 2013),
in particular now that C-PML has been implemented in SPECFEM3D_Cartesian: the code has undergone lots of changes in the past few months
and new big arrays have been added, in particular for C-PML (but only in the C-PML layers for these arrays, not in the rest of the model);
we should therefore make sure that all these arrays are taken into account in the memory estimate computed in memory_eval.f90.

2D C-PML

Zhinan commits his current clean 2D C-PML code to SVN; because changes
made outside of SVN are useless for regular users of the application,
and are also quickly lost.

Zhinan, please also include the modifications that René Matzen made to "compute_forces"
and that he sent on May 10, 2012.

Here is a more detailed list of things to do in order to finish that:

  • commit the acoustic CPML routines
  • make sure that CPML elastic and CPML acoustic work with MPI
  • make sure CPML can be used without Stacey; to do this, suppress the anyabs_local flag, whose name is confusing,
    and use the PML_BOUNDARY_CONDITIONS and STACEY_ABSORBING_CONDITIONS flags instead;
    maybe define anyabs_PML and anyabs_Stacey internally in the code instead of anyabs
  • make sure CPML can be used with external meshes (coming from Gmsh or CUBIT) as well, using the numbering convention
    that you defined with Paul two months ago; make sure that external meshes with fluid regions and solid regions also work fine with CPML
  • clarify why the corners need additional arrays and additional code;
    I still do not understand that; I thought that corners were not different from the rest in PML and that memory variables were simply summed there,
    but I see that in your code you have specific arrays for the corners, called "rmemory_dux_dx_corner()" and so on; they make the code
    (the implementation) longer and more complex, thus let us see if they are absolutely needed or if we could (maybe?) get rid of them;
    we should explain this in the LaTeX file that you are writing now to explain the CPML equations
  • see if CPML works with curved elements (checking the behavior with the default M2 UPPA example with topography for instance)

(todo_list_please_dont_remove.txt: suggestion 01a)

use heuristic rules to make source and receiver detection much faster and make these routines use far less memory

  • could be done by Vala:
    we could use heuristic rules to make source and receiver detection much
    faster and make these routines use far less memory. For instance using the
    latitude, longitude and depth of each source or receiver we can quickly
    determine if there is no chance for this source or receiver
    to be located in the current slice; then that processor would immediately
    switch to the next source or receiver; this could speed up the whole process by
    a factor of 10 to 100 I guess when many sources are used (e.g. Vala uses
    50,000) and/or many stations (Bernhard uses 20,000). We would
    get rid of the bottleneck in locate_sources and locate_receivers
    when one uses a huge number of sources (e.g. Vala -> 100000 sources)
    or a very large number of stations (e.g. Bernhard -> 20000 stations):
    in such a case there is a bunch of MPI_GATHERS which waste a lot
    of memory and can even use all the available memory and make the run crash.
    Vala partially fixed this problem by using subsets of 1000 stations, but after
    careful analysis we have found a way of doing better and getting rid of all
    the GATHERs and all the extra memory (this way we won't have to limit the number of sources to a maximum of 100000,
    as currently done in the solver).
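
A minimal sketch of such a heuristic pre-filter (all names below are hypothetical, and the margin should be somewhat larger than one element size so that points near slice boundaries are not rejected by mistake):

do irec = 1, nrec
  ! cheap bounding-box rejection test before the expensive search over the elements of this slice
  if (x_target(irec) < xmin_slice - margin .or. x_target(irec) > xmax_slice + margin .or. &
      y_target(irec) < ymin_slice - margin .or. y_target(irec) > ymax_slice + margin .or. &
      z_target(irec) < zmin_slice - margin .or. z_target(irec) > zmax_slice + margin) cycle

  ! ... the existing (expensive) point location is performed only for the remaining candidates ...
enddo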

GPU version: hybrid CPU/GPU computations

(not really critical, probably not worth doing, at least for now)


when I said that the future codes will support either CPUs or GPUs (in the same code but not simultaneously because we do not
compute part of the elements on the graphics cards and the rest on the CPUs): I realize that that is true but that there is an easy
and elegant way of running hybrid calculations anyway without changing the code:

  • if you have N GPUs and M CPUs, start M + N MPI processes, and call CUDA only from the N processes and the CPU code only from the
    M other processes; this way, N processes do GPU calculations only, M processes do CPU calculations only, and MPI can easily make
    them run on the same nodes and communicate through MPI
  • since GPUs are faster than CPUs (by a speedup factor that we can call S), to get close-to-perfect load balancing, just call
    Scotch or ParScotch with that S factor as a weight for partitions that will run on CPUs and thus S times slower; by doing this,
    Scotch will assign them approximately S times fewer elements, and thus we will get very good load balancing in a mixed/hybrid CPU +
    GPU model

clever and easy to do.

limitation (of hybrid models in general, not of the above approach): if GPUs are 30 or 40 times faster than CPUs, doing this is
probably not very useful and using the GPUs only is probably sufficient because using the CPUs in addition will result in a gain of
a few percent only;
maybe not worth doing it

(todo_list_please_dont_remove.txt: suggestion 22)

leave Stacey as an option even once C-PML is implemented

leave Stacey as an option even once C-PML is implemented;
because in many cases C-PML (with Dirichlet conditions on its outer edges) will be sufficient, but in principle it is possible to
replace Dirichlet with Stacey and thus use both C-PML and Stacey simultaneously to have even better absorption;
There is a paper by Collino and Monk in 1998 about this (for Maxwell)

So let us keep Stacey in the future codes, but let us make it off by default in DATA/Par_file (and C-PML on by default)

(todo_list_please_dont_remove.txt: suggestion 18)

GPU version: port newer modifications to GPU

Here are recent features of the code that have been implemented by different developers in the
regular Fortran + MPI version but that have not been ported to the GPU version yet:

  • CPML
  • modified fluid/solid formulation of Yang Luo to allow for adjoint sources located in fluid regions
    (for instance for offshore simulations in the oil industry)
  • QKappa attenuation support in addition to Qmu
  • new (fixed / improved / more accurate) attenuation time scheme of Zhinan Xie

(todo_list_please_dont_remove.txt: suggestion 34)

carefully check file memory_eval.f90 in SPECFEM3D_Cartesian

carefully check file memory_eval.f90 in SPECFEM3D_Cartesian (already done and fixed for SPECFEM3D_GLOBE by Dimitri in July 2013),
in particular now that C-PML has been implemented in SPECFEM3D_Cartesian: the code has undergone lots of changes in the past few months
and new big arrays have been added, in particular for C-PML (but only in the C-PML layers for these arrays, not in the rest of the model);
we should therefore make sure that all these arrays are taken into account in the memory estimate computed in memory_eval.f90.

(todo_list_please_dont_remove.txt: suggestion 39)

Stacey absorbing boundaries

see with Zhinan if a spring should be added to Stacey in SPECFEM3D and SPECFEM3D_GLOBE; Stacey is a dashpot, and Zhinan and his
advisors in China have shown that the condition can become inaccurate and exhibit a static offset in some cases; they have shown how
to suppress this by adding a spring in parallel with the dashpot

already done in the 2D code

see with Zhinan if he thinks this should be done in SPECFEM3D and SPECFEM3D_GLOBE as well, and if it would be useful

should probably not be a high priority; PML and other things are more urgent

note DK DK Jan 2013: not needed any more because we now have C-PML instead; thus purposely not done.

(todo_list_please_dont_remove.txt: suggestion 20)

specfem2D, only one partition of "model_velocity.dat_output" is generated in MPI runs

Subject: [CIG-SEISMO] only one partition of "model_velocity.dat_output" will be generated in parallel run
Date: Mon, 12 Aug 2013 18:03:40 +0100
From: Yingzi Ying
To: cig-seismo

Dear SPECFEM Developers,

In parallel running of xspecfem2D, only one partition of "model_velocity.dat_output" will be generated.
Both mpirun and mpiexec executions have been tested.
The whole mesh can be generated in serial running.
Just a bug report.

Thanks,
Yingzi

reduce the memory usage for PML in fluid/solid simulations

Currently, the way PML is implemented for a fluid/solid simulation is memory consuming.
We should in the future reduce the memory usage in the following way:
1: replace rmass_acoustic_interface(nglob_acoustic) with rmass_acoustic_interface(nglob_acoustic_solid_interface)
2: replace potential_dot_dot_acoustic_interface(nglob_acoustic) with potential_dot_dot_acoustic_interface(nglob_acoustic_solid_interface)
where nglob_acoustic denotes the set of all points inside the acoustic region, and nglob_acoustic_solid_interface denotes the set of points on the acoustic/solid interface

back-propagate some waves in 1D with viscoelasticity

Zhinan will try to back-propagate some waves in 1D with viscoelasticity;

Jeroen, Qinya, Zhinan and I discussed that a few months ago, some of us think the backward run is unstable when undoing attenuation but Zhinan remembers seeing some stable backward runs with a C viscous damping matrix in mechanical engineering at his institute in China, therefore it is worth trying using SPECFEM1D for instance.

(in 2D we could avoid this problem by saving all the time steps of the forward run to disk and reading them back, but in 3D it is not possible yet because the amount of I/O would be too big; this should change in 5 to 10 years, but for now we still need to back-propagate when SIMULATION_TYPE = 3 in 3D)

(todo_list_please_dont_remove.txt: suggestion 26)

re-constructing wavefields: read the adjoint information back only every 50 or 100

when reading the absorbing conditions back to reconstruct the forward run backwards when running the adjoint step (i.e. when
SIMULATION_TYPE = 3), my postdoc Vadim Monteiller suggests reading the adjoint information back only every 50 or 100 iterations, in
chunks of 50 or 100 time steps at a time, to avoid slowing down the run

advantage: makes the code faster by avoiding doing I/Os at each time step

drawback: more memory is needed to store these edge conditions (exactly 50 or 100 times more, but only for a few 2D arrays)

useful to do? implement it as an option maybe? and then the user could choose to use 50, 100 or 1, and choosing 1 would be
equivalent to the current behavior?

Feedback from Qinya: The way I implemented this was for simplicity. It was just easier to write every step the absorbing boundary term and then read it back in reverse order in the kernel calculation. But obviously you can write in 50-step chunks, and read them back in 50-step chunks as well (make sure you still apply them in the reverse time order). We may have to be careful of the sizes of storage variables so they are not exceedingly large for some slices. For example, a model of 3x3 slices, slice 0 will have 2 absorbing boundary sides, slice 1 will have 1 a.b. side, while slice 4 will have no a.b. We can make it a default option, but giving users the choice of 50 or 100 is just going to make the Par_file even more confusing. We could easily estimate a number from the max number of adjoint boundary elements in all slices.
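
For illustration, here is a minimal sketch of the chunked read-back (variable names, NSTEP_CHUNK and the direct-access file unit IOABS are hypothetical; the stored boundary terms are assumed to have been written with one record per time step):

nchunks = (NSTEP + NSTEP_CHUNK - 1) / NSTEP_CHUNK

do ichunk = nchunks, 1, -1
  it_first = (ichunk - 1) * NSTEP_CHUNK + 1
  it_last  = min(ichunk * NSTEP_CHUNK, NSTEP)

  ! read this whole chunk of stored absorbing-boundary terms from disk in one pass
  do it = it_first, it_last
    read(IOABS, rec=it) absorb_buffer(:, :, it - it_first + 1)
  enddo

  ! apply them in reverse time order within the chunk
  do it = it_last, it_first, -1
    b_absorb_field(:, :) = absorb_buffer(:, :, it - it_first + 1)
    ! ... use b_absorb_field in the back-reconstruction at this (reversed) time step ...
  enddo
enddo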

(todo_list_please_dont_remove.txt: suggestion 16)

add support for the TetGen mesh creation package

we could add support for the TetGen mesh creation package, and cut each
tetrahedron into four hexahedra using the barycenter. This could help design
meshes for sedimentary basins, in particular near basin edges.
Celine Blitz has worked on that in 2010 while at TOTAL.

VERCE project modifications

Possibly set the name of the tomography file in the Par_file instead
of hard-coding it in model_tomography.f90 or creating a separate
parfile

Some interesting additions would be:

---- multi-file tomography
Piero told me that the run for the Turkey/Europe mesh had this possibility,
so it is probably just a matter of committing those files to the SVN version

(todo_list_please_dont_remove.txt: suggestion 00)


Par_file modifications:

- suggestion 00: about adding new CPML flags and also new (or modified) movie and shakemap flags + VERCE project modifications

From Dimitri: here are two more parameters to rename and/or move from the constants.h file to the input Par_file:

could you also rename "USE_RICKER_IPATI" to "USE_RICKER" or "USE_RICKER_TIME_FUNCTION" everywhere in the source code and in the input file?
Because "Ipati" is the name of an industrial field and thus we should make the name of that parameter less specific.

Let us also move that parameter from the constants.h file to the input file (Par_file), to avoid having to recompile the code
(see my previous email about this)

As discussed yesterday during our Skype call, let us make the
modifications below in the input Par_file of SPECFEM3D.
NGNOD should be added and set to 8 or 27, and then when reading the
Par_file the code should check that it is equal to one or the other, and
otherwise exit with an error message.
The code should then automatically set NGNOD2D to 4 if NGNOD == 8 and to 9 if NGNOD == 27.
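
A minimal sketch of that check when reading the Par_file:

if (NGNOD == 8) then
  NGNOD2D = 4
else if (NGNOD == 27) then
  NGNOD2D = 9
else
  stop 'Error: NGNOD must be either 8 or 27'
endif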

For the new movie flags, I suggest changing MOVIE_SURFACE and
EXTERNAL_MESH_MOVIE_SURFACE, which are confusing, to MOVIE_SURFACE only,
and then define a second (integer) parameter, called for instance
MOVIE_TYPE, which could be equal to 1 to show the top surface
(topography + oceans) only, to 2 to show all the external faces of the
mesh (i.e. topography + vertical edges + bottom), and so on.

Let me add this to the todo list. Let us also decide who does it
(since new flags will be needed in the Par_file for PML, I guess it will
be easy and better to do all the modifications at the same time).

Let us also modify the code to allow for shakemaps and movies at the
same time, rather than having to run the code twice.

add NGNOD = 27 or 8 to the input Par_file (move it from shared/constants.h.in to the Par_file, since now either value can be used depending on the mesh,
whether it is HEX8 or HEX27)

In the VERCE framework they ask for some modifications in order to improve
the flexibility of the code when it is installed as a module:

Move from constants.h to Par_file the flags:
EXTERNAL_MESH_MOVIE_SURFACE
EXTERNAL_MESH_CREATE_SHAKEMAP
OLSEN_ATTENUATION_RATIO

Allow shakemaps and surface movies to be generated during the same run
(EXTERNAL_MESH_MOVIE_SURFACE and EXTERNAL_MESH_CREATE_SHAKEMAP not
mutually exclusive)

We believe also that the manual should explain the difference between using
EXTERNAL_MESH_MOVIE_SURFACE instead of MOVIE_SURFACE, or
EXTERNAL_MESH_CREATE_SHAKEMAP instead of CREATE_SHAKEMAP

in SPECFEM3D_GLOBE use a potential of (rho * u) instead of u in the fluid, in case of fluid-fluid discontinuities

  • in SPECFEM3D_GLOBE use a potential of (rho * u) instead of u in the fluid, in case of fluid-fluid discontinuities; probably
    already done in SPECFEM3D because of purely acoustic oil industry problems, but not done in SPECFEM3D_GLOBE because there is no first-order
    discontinuity inside the fluid outer core therefore it is not needed; thus this point is not needed except if we decide
    to use SPECFEM3D_GLOBE for other problems one day, for instance helioseismology

attenuation in SPECFEM2D: define viscoelastic elements in a more flexible way

In the 2D code, the way we define viscoelastic or elastic elements is not very nice: we need to set Q = 9999 to turn attenuation off;
it would be better to have a logical flag saying whether a given layer has attenuation or not

this suggestion comes from notes that I took while talking to Paul Cristini a few months ago, therefore I am not sure I remember
very well what Paul wanted to change about that. Paul, could you remind us more precisely of the change that you had in mind?

Answer from Paul:

My concern was to be able to have, in one domain, several media which can be elastic or viscoelastic. Currently, there is one flag for the whole domain which makes all media elastic or viscoelastic.

Conclusion from Dimitri:

Now I remember more clearly what the problem is, based on what you told me:

  • in SPECFEM2D the TURN_ATTENUATION_ON flag is global; thus either all the elastic elements become viscoelastic, or none of them
  • and then, to have some viscoelastic regions and others that are purely elastic, we do something weird: we set Qkappa = Qmu = 9999 and we run a viscoelastic simulation but with Q = 9999, i.e. almost no loss of energy

Of course, what we should do is:

  • suppress the TURN_ATTENUATION_ON flag
  • for each material in the list of materials, add a flag that says if that material is elastic or viscoelastic;
    or detect if Q > 9000 for instance and if so, consider that that material is elastic and turn off attenuation for it
    (the same way we detect acoustic media by checking if the S wave velocity Vs is smaller than 0.0001)

Otherwise, in the current implementation, we waste memory and CPU time by doing viscoelastic simulations everywhere, even in elastic regions, when TURN_ATTENUATION_ON is on.
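
A minimal sketch of that per-material test (the array and variable names are hypothetical; the Vs test mirrors the existing acoustic detection mentioned above):

do imat = 1, nb_materials
  material_is_acoustic(imat) = (vs_val(imat) < 0.0001d0)
  material_is_viscoelastic(imat) = (.not. material_is_acoustic(imat)) .and. &
                                   (Qkappa_val(imat) < 9000.d0 .or. Qmu_val(imat) < 9000.d0)
enddo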

More precise analysis from Dimitri:
this means in particular changing this in DATA/Par_file:

ATTENUATION_VISCOELASTIC_SOLID = .false. # turn attenuation (viscoelasticity) on or off for non-poroelastic solid parts of the model
ATTENUATION_PORO_FLUID_PART = .false. # turn viscous attenuation on or off for the fluid part of poroelastic parts of the model

and making it local to each material set instead of global for the whole mesh.

Feedback from Qinya: I agree. The way attenuation appears in the 2D Par_file is quite confusing. It will be good to clean up, or at least explain well in the manual.

(todo_list_please_dont_remove.txt: suggestion 23)
