
cospv2.0's Introduction

About COSP

The CFMIP Observation Simulator Package (COSP) takes the model's representation of the atmosphere and simulates the retrievals for several passive (ISCCP, MISR and MODIS) and active (CloudSat (radar) and CALIPSO (lidar)) sensors.

An overview of COSP is provided in the COSP1 BAMS paper.

COSP Version 2 (COSP2) is a major reorganization and modernization of the previous generation of COSP. For a detailed description, see the COSP2 GMD paper.

The simulators in COSP (ISCCP, MISR, MODIS, radar/CloudSat and lidar/CALIPSO) have been developed by many institutions and agencies:

  • Met Office Hadley Centre
  • LLNL (Lawrence Livermore National Laboratory)
  • LMD/IPSL (Laboratoire de Meteorologie Dynamique/Institut Pierre Simon Laplace)
  • CSU (Colorado State University)
  • UW (University of Washington)
  • CU/CIRES (University of Colorado/Cooperative Institute for Research In Environmental Sciences)

Conditions of use

The code is distributed under the BSD License. Each source file includes a copy of this license with details on the Owner, Year and Organisation. The license in the file quickbeam/README applies to all the files in the quickbeam directory.

What's in the distribution

The repository includes the following directories:

  • src/ contains the COSP layer and the underlying satellite simulators
  • model-interface/ contains routines used by COSP to couple to the host model. Edit these before building.
  • subsample_and_optics_example/ contains an example implementation, following COSP 1, of the routines to map model-derived grid-scale physical properties to the subgrid-scale optical properties needed by COSP.
  • driver/ contains codes that run COSP on example inputs and scripts that compare the current implementation to a reference.
  • build/ contains a Makefile describing build dependencies. Users may build a COSP library and other targets from this Makefile.
  • unit_testing/ contains small programs meant to exercise some of the simulators.

Users incorporating COSP into a model will need all routines found within src/, appropriately-edited versions of the routines in model-interface/, and routines that provide the functionality of those in subsample_and_optics_example/.

As described in Swales et al. (2018), COSP2 requires inputs in the form of subcolumn-sampled optical properties. The drivers map the model physical state to these inputs using the routines in subsample_and_optics_example/, which are consistent with the fixed choices made in COSP1. We anticipate that users incorporating COSP into a model will develop a mapping between the model's physical state and the inputs required by COSP that is consistent with the host model.
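To make the nature of this mapping concrete, the fragment below converts a grid-box mean liquid water path and a subcolumn cloud mask into per-subcolumn visible optical depths using the standard tau = 3*LWP/(2*rho_liq*r_eff) relation. This is a minimal, illustrative sketch only: it is not taken from subsample_and_optics_example/, and all names in it are hypothetical.

! Minimal, illustrative sketch of one model-to-COSP mapping step (not COSP code):
! distribute the grid-box mean liquid water path over the cloudy subcolumns and
! convert it to a visible-wavelength optical depth per subcolumn and level.
subroutine subcolumn_tau_sketch(ncol, nlev, lwp_grid, cldfrac, reff, cloudy, tau)
  use, intrinsic :: iso_fortran_env, only: wp => real64
  implicit none
  integer,  intent(in)  :: ncol, nlev          ! subcolumns, model levels
  real(wp), intent(in)  :: lwp_grid(nlev)      ! grid-mean liquid water path per level [kg m-2]
  real(wp), intent(in)  :: cldfrac(nlev)       ! grid-mean cloud fraction per level
  real(wp), intent(in)  :: reff(nlev)          ! droplet effective radius [m]
  logical,  intent(in)  :: cloudy(ncol, nlev)  ! subcolumn cloud mask (e.g. from SCOPS)
  real(wp), intent(out) :: tau(ncol, nlev)     ! subcolumn visible optical depth
  real(wp), parameter   :: rho_liq = 1000._wp  ! density of liquid water [kg m-3]
  real(wp) :: lwp_incloud
  integer  :: i, k

  tau = 0._wp
  do k = 1, nlev
     if (cldfrac(k) <= 0._wp) cycle
     lwp_incloud = lwp_grid(k) / cldfrac(k)    ! in-cloud liquid water path
     do i = 1, ncol
        if (cloudy(i,k)) tau(i,k) = 3._wp*lwp_incloud / (2._wp*rho_liq*reff(k))
     end do
  end do
end subroutine subcolumn_tau_sketch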

The offline drivers read sample snapshots from the Met Office Unified Model, use the routines in subsample_and_optics_example/ to compute COSP inputs, and record the results from COSP in netCDF files. The default driver calls COSP 2 directly and produces netCDF output. The layer mimicking the COSP 1.4 interface is tested with a separate driver. A third driver uses the CMOR1 infrastructure to write a subset of fields to individual netCDF files following conventions for publication on the Earth System Grid Federation.

Running the offline tests

  1. Build the drivers.

    1. Edit the files in model-interface/ if necessary. By default, COSP is built with double-precision real variables and prints any error messages to standard output.
    2. In build/ edit Makefile.conf to reflect the choice of compiler, compiler flags, and library names and locations.
    3. In build/, make driver will build a COSP library, a separate library with the example mapping from model state to COSP inputs, and the cosp2_test executable, which is then copied to driver/run.
  2. Running the test program

    1. Directory driver/run contains namelists and other files needed by the test programs. If the executables have been built, they should run in this directory using these files as supplied.
    2. The behavior of COSP can be changed via the input (e.g. driver/src/cosp2_input_nl.txt) and output (driver/src/cosp2_output_nl.txt) namelists. The input namelist controls the COSP setup (e.g. the number of subcolumns to be used) and simulator-specific information (e.g. the radar simulator frequency); the output namelist controls the output diagnostics (a generic sketch of how such a namelist is read is given below). The test program receives the input namelist as a command-line argument, e.g.:

    ./cosp2_test cosp2_input_nl.um_global.txt

Currently, there are two input namelists: cosp2_input_nl.txt uses a small test input file (cosp_input.nc) stored in the GitHub repository; cosp2_input_nl.um_global.txt uses a global field (cosp_input.um_global.nc). The global input file and its associated known good output (cosp2_output.um_global.gfortran.kgo.nc) are stored externally on Google Drive, and they can be retrieved by running download_test_data.sh from within the driver/ directory.
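For readers unfamiliar with Fortran namelists, the short program below sketches the general mechanism by which such settings are read and overridden. The group and variable names are purely illustrative assumptions and do not reproduce the actual contents of cosp2_input_nl.txt.

! Generic namelist-reading sketch. The group name and variables below are
! illustrative only; they are not the actual contents of cosp2_input_nl.txt.
program read_namelist_sketch
  implicit none
  integer :: ncolumns   = 20     ! e.g. number of subcolumns (default overridden by the file)
  real    :: radar_freq = 94.0   ! e.g. radar simulator frequency [GHz]
  integer :: iunit, ios
  namelist /cosp_input_sketch/ ncolumns, radar_freq

  open(newunit=iunit, file='cosp2_input_nl.txt', status='old', action='read', iostat=ios)
  if (ios == 0) then
     read(iunit, nml=cosp_input_sketch, iostat=ios)   ! values in the file override the defaults
     close(iunit)
  end if
  print *, 'ncolumns =', ncolumns, '  radar_freq =', radar_freq
end program read_namelist_sketch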

  3. Regression testing (comparing to reference data)

    1. Reference data or known good output (KGO) for a small test case is provided with COSP2. The data can be found at driver/data/outputs/UKMO/.

    2. To compare driver output to the KGO, invoke the Python 3 script compare_to_kgo.py in driver/. This script requires the following Python modules: numpy, netCDF4, argparse, and sys. The following example shows how to call this script from the command line:

      python compare_to_kgo.py data/outputs/UKMO/cosp2_output_um.gfortran.kgo.nc data/outputs/UKMO/cosp2_output_um.nc --atol=1.0e-20 --rtol=0.0005

    The script accepts thresholds for absolute and relative tolerances, named atol and rtol respectively. By default the script reports all differences, i.e. --atol=0.0 --rtol=0.0. The example above allows a relative tolerance of 0.05% in the subset of absolute differences larger than or equal to 1.0e-20. Previous tests indicate that these thresholds are appropriate when gfortran is used to build the driver program. If a different compiler is available, users are encouraged to build the driver with that compiler as well to increase the robustness of the tests. For non-gfortran compilers the differences may be larger; this is not necessarily an issue, but further investigation may be required.
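    The criterion implied by these options can be summarised as follows. This is a sketch of the logic described above, not the actual implementation in compare_to_kgo.py.

      ! Sketch of the flagging criterion described above (not the code in
      ! compare_to_kgo.py): a value is reported only when the absolute difference
      ! reaches atol AND the relative difference exceeds rtol.
      logical function differs(test, kgo, atol, rtol)
        use, intrinsic :: iso_fortran_env, only: wp => real64
        implicit none
        real(wp), intent(in) :: test, kgo, atol, rtol
        differs = (abs(test - kgo) >= atol) .and. (abs(test - kgo) > rtol*abs(kgo))
      end function differs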

cospv2.0's People

Contributors

alejandrobodas, brhillman, dustinswales, lqxyz, robertpincus, rodrigoguzman-lmd, takmichibata, whannah1


cospv2.0's Issues

[User] Better documentation of changes in stable versions

Raised during the CFMIP meeting in Boulder, October 2018.
People upgrading COSP in their models want better information on the impact of the changes between stable versions. It would be desirable to provide more information on which diagnostics have changed between stable versions, and on the expected impact of the changes, using a battery of quickplots.

Adding OPAQ diagnostics + ground lidar to COSP_lidar

This issue has two purposes:

  1. Add 7 new variables (OPAQ products) to the standard CALIPSO-like COSP_lidar outputs consistently with the new GOCCPv3.0 dataset (3x2D variables and 4x3D variables)
  2. Add a ground-based COSP_lidar with 7 new outputs from this new code embedded in the lidar simulator (Molecular backscatter + standard 2D and 3D cloud variables)

Creation of sister repository: COSPweb

This is just to record the process of creation of a new repository that will host the COSP web pages, as discussed in the PMC.
@dustinswales, please can you create a new repository called COSPweb and give me the relevant permissions? Thanks!

Encoding version information in the KGO

In the PMC on 29 May 2020, there was some discussion about encoding some version information in the KGO. It was agreed that this would be a nice idea only if it could be done automatically by the CI test. This issue will explore the possibility of automatically adding that information to the KGO when the CI test fails.
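One possible mechanism, sketched below, would be to write the version string into the KGO file as a global attribute via the netCDF-Fortran API. The routine name, the attribute name cosp_version, and the assumption that the file is open in define mode are all illustrative, not an agreed design.

! Illustrative sketch only: stamp a COSP version string into a netCDF file as a
! global attribute. The attribute name and routine name are assumptions, not an
! agreed design; the file identified by ncid must be in define mode.
subroutine stamp_version(ncid, version_string, status)
  use netcdf   ! netCDF-Fortran library
  implicit none
  integer,          intent(in)  :: ncid            ! id of an open netCDF file
  character(len=*), intent(in)  :: version_string  ! e.g. 'v2.1.6'
  integer,          intent(out) :: status          ! nf90_noerr on success
  status = nf90_put_att(ncid, NF90_GLOBAL, 'cosp_version', trim(version_string))
end subroutine stamp_version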

hgt_matrix_half is used inconsistently

The array cospstateIN%hgt_matrix_half appears to be used inconsistently in the interface code and in calls to simulators and cosp_change_vertical_grid. This array is allocated to have size (npoints,nlevels+1), which would imply that it should hold the heights of model level interfaces, but is used like this in calls to cosp_change_vertical_grid:

call cosp_change_vertical_grid(cloudsatIN%Npoints,1,cloudsatIN%Nlevels,               &
                               cospgridIN%hgt_matrix(:,cloudsatIN%Nlevels:1:-1),      &
                               cospgridIN%hgt_matrix_half(:,cloudsatIN%Nlevels:1:-1), &
                               betamol_in, Nlvgrid,                                   &
                               vgrid_zl(Nlvgrid:1:-1), vgrid_zu(Nlvgrid:1:-1),        &
                               betamolI(:,1,Nlvgrid:1:-1))

cosp_change_vertical_grid expects zlev_half to have shape (npoints,nlevels), and to hold the heights of the bottom of each level. But, if a model were to populate hgt_matrix_half with the full interface levels (from levels 1 to nlevels+1, from TOA to surface), then the slicing nlevels:1:-1 would be taking the first nlevels heights, which would be the heights of the interface tops, not bottoms. In E3SM, we appear to implement the hacky fix to populate levels (1,nlevels) of cospstateIN%hgt_matrix_half with the heights of the level bottoms, and then populate level nlevels+1 of hgt_matrix_half with a dummy value of 0. Is this expected behavior?

RTTOV 12 interface

Add interface to RTTOV12 (clear-sky only).

It might be of use to some users to be able to use RTTOV 11 whilst it is still supported by the NWP SAF, so I will also make changes to the Makefile to make using COSP with RTTOV a bit more straightforward.

MISR outputs are not properly set to fillvalues for night columns

It appears that MISR_mean_ztop is not properly reset to the fillvalue R_UNDEF for night columns. In MISR_COLUMN() in MISR_simulator.F90, the following code computes the joint histogram for sunlit points, or else sets values to fillvalue (lines 266-287):

if (sunlit(j) .eq. 1) then
   < do stuff >
else
   MISR_cldarea(j)         = R_UNDEF
   MISR_mean_ztop(npoints) = R_UNDEF
endif

Note that MISR_cldarea appears to be set correctly, but MISR_mean_ztop is only set for the last index, and fq_MISR_TAU_v_CTH is excluded entirely. The following correction should give the desired behavior:

if (sunlit(j) .eq. 1) then
   < do stuff >
else
   MISR_cldarea(j)   = R_UNDEF
   MISR_mean_ztop(j) = R_UNDEF
   fq_MISR_TAU_v_CTH(j,:,:) = R_UNDEF
endif

Implementation of CLARA simulator

Hi,

I have been tasked with implementing the CLARA simulator (Eliasson et al., 2020) in COSP, if possible.

The simulator we made for offline implementation is very similar to the MODIS simulator.

The main difference from the MODIS simulator is how the cloud mask is simulated. The cloud mask relies on auxiliary data giving the probability of detection (POD) as a function of optical depth, geographical location and whether or not the subcolumn is sunlit. A subcolumn is considered cloudy when a random number generated for that subcolumn is smaller than the POD read from the auxiliary data (a minimal sketch is given below). The background to this approach is explained in the paper describing the simulator.

Our implementation does not require any model fields beyond those already required by the COSP simulators.
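For concreteness, the fragment below sketches the detection step just described. The routine name, argument list and POD lookup are placeholders, not the proposed CLARA code.

! Illustrative sketch of the POD-based cloud-mask step described above
! (names, argument list and the POD lookup are placeholders, not CLARA code).
subroutine clara_cloud_mask_sketch(ncol, tau, lat, sunlit, pod_lookup, cloudy)
  use, intrinsic :: iso_fortran_env, only: wp => real64
  implicit none
  interface
     function pod_lookup(tau, lat, sunlit) result(pod)   ! probability of detection from auxiliary data
       import :: wp
       real(wp), intent(in) :: tau, lat
       integer,  intent(in) :: sunlit
       real(wp) :: pod
     end function pod_lookup
  end interface
  integer,  intent(in)  :: ncol
  real(wp), intent(in)  :: tau(ncol)      ! subcolumn optical depth
  real(wp), intent(in)  :: lat            ! gridpoint latitude
  integer,  intent(in)  :: sunlit         ! 1 if the gridpoint is sunlit, 0 otherwise
  logical,  intent(out) :: cloudy(ncol)   ! simulated cloud mask
  real(wp) :: r(ncol)
  integer  :: i

  call random_number(r)                   ! one random draw per subcolumn
  do i = 1, ncol
     ! the subcolumn is flagged cloudy when its random number falls below the POD
     cloudy(i) = (r(i) < pod_lookup(tau(i), lat, sunlit))
  end do
end subroutine clara_cloud_mask_sketch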

Reference:
Eliasson, S., Karlsson, K.-G., and Willén, U.: A simulator for the CLARA-A2 cloud climate data record and its application to assess EC-Earth polar cloudiness, Geosci. Model Dev., 13, 297–314, https://doi.org/10.5194/gmd-13-297-2020, 2020.

Missing type specifier causing crash on Cori-KNL in running debug mode

I've been working with @brhillman to test his update to the COSP interface that was written to work with E3SM-MMF. My tests run fine without debug mode, but when I enable debug mode I get a floating-point overflow error and a backtrace that points to line 121 of src/simulator/cosp_cloudsat_interface.F90, which looks like this:

rcfg%step_list(j)=0.1_wp+0.1_wp*((j-1)**1.5)

Adding the _wp kind to the literal in the exponent resolves the issue:

rcfg%step_list(j)=0.1_wp+0.1_wp*((j-1)**1.5_wp)

Remove COSPv1.4 interface

As part of the COSP2 release we provided a "drop-in replacement" interface that would allow COSP1 users to use the new COSP2 code with little interruption on their side. Essentially the role of this interface was to map derived data types (DDTs) used in COSP1 to the DDTs used in COSP2.
This interface was maintained up until COSPv2.0.3, consistent with the last release of COSP1, v1.4.3.
However, many developments have been added to the COSP2 repository since then (the latest version is v2.1.6), and this interface is no longer compatible with COSP2 v2.1 and later. Supporting this interface beyond v2.0.3 isn't sensible, as the new diagnostics will not be available to users of the COSP1 code base.
We (Robert and I) propose we remove this deprecated code from the current COSP2 code base.

Allocation issue with MODIS/Cloudsat joint-products

@takmichibata
If these diagnostics are not requested, there is a runtime error, as this code assumes the fields are always allocated.
I propose you add some logic around this section of code, lines 1600-1683 of cosp.F90, so that the diagnostics are produced only when the fields are requested.
As a reference, see how the joint CloudSat-CALIPSO products, just above your MODIS/Cloudsat joint diagnostics, handle this issue.

cosp_diag_warmrain has wrong dimensions for variable frac_out / design flaw

The size of the matrix "frac_out" should be (nPoints,nColumns,nLevels), but Nlvgrid is being passed into the subroutine cosp_diag_warmrain as "nLevels" (see cosp.F90 line 1629 and cosp_stats.F90 line 283). On my system this causes random values to appear in the additional elements of frac_out and "random errors" in the output variable wr_occfreq_ntotal.

I believe there is a design flaw here: the authors of this algorithm assume that frac_out is on the same grid as the re-gridded radar reflectivity (which it isn't). I suspect the easiest solution is to pass in the radar reflectivity on the model grid (i.e. using nLevels) rather than on the radar grid (vgrid), but that may violate the intent to reproduce a CloudSat-specific result, in which case the algorithm needs to be redesigned to either (a) not use frac_out or (b) re-grid frac_out onto the CloudSat radar grid (vgrid).

** NOTE: I have not checked whether the same problem exists in other routines. I found this because my cosp2_test output was generating strange results.

Error in COSP_CHANGE_VERTICAL_GRID

I'm getting an empty/R_UNDEF COSPOUT%calipso_cldlayer(npoints,ncat) returned from COSP_STATS.F90.
I traced the problem to this:

r = R_UNDEF
dz = zu - zl

do i=1,Npoints
   ! Vertical grid at that point
   xl = zhalf(i,:)
   xu(1) = zfull(i,1)+zfull(i,1)-zhalf(i,1)
   xu(2:Nlevels) = xl(1:nlevels-1) ! this puts a 0 at xu(2) Why??????

Because zhalf(:,1) is all 0s in my model (TOA), this causes xu(2) to be 0 and therefore introduces a very large negative number (roughly -72 000) into the weights at one or two positions, while in many other places the weight is 0. Effectively xu(1) is a doubling of zfull(i,1), which makes it about 160 000 m. I don't get why this is doubled here.

This results in sum(w(:,k)) never being greater than 0, and therefore the output is never filled and is returned as all R_UNDEF.

   ! Do the weighted mean
   do j=1,Ncolumns
      do k=1,M
         ws = sum(w(:,k))
         if (ws > 0.0) r(i,j,k) = sum(w(:,k)*yp(j,:))/ws
      enddo
   enddo

I'm not sure if this vertical grid was designed specifically for only one model type, but it clearly doesn't work in my case. I'm curious as to why the half levels are used here. My model has 91 levels that go up to about 80 km, and we are trying to put this onto an 18 km grid. I would appreciate some help with this problem.

Best,
/M

Adding land points to the sample dataset

As mentioned in the introduction of Pull Request #19, it would be useful to have some land points in the sample data provided with the offline version of COSPv2. So far, the 153 grid points provided in this dataset are over ocean only. Adding land points to the sample dataset would allow us to have a wider range of cases covered with the regression test, and would be particularly useful for testing diagnostics depending on or related to the surface elevation.

cosp2_test reference (test) output is missing variables & needs versioning.

The cosp2_test driver is producing outputs (all related to ncfodd) which are not in the test dataset currently supplied with the master branch (cosp2_output_um.ref.nc). I note that this test data does not appear to have any versioning information, which is problematic in this regard.

Missing variables are listed below:

< float npdfcld(loc) ;
< npdfcld:long_name = "# of Non-Precipitating Clouds" ;
< npdfcld:units = "1" ;
< npdfcld:standard_name = "number_of_slwc_nonprecip" ;
< float npdfdrz(loc) ;
< npdfdrz:long_name = "# of Drizzling Clouds" ;
< npdfdrz:units = "1" ;
< npdfdrz:standard_name = "number_of_slwc_drizzle" ;
< float npdfrain(loc) ;
< npdfrain:long_name = "# of Precipitating Clouds" ;
< npdfrain:units = "1" ;
< npdfrain:standard_name = "number_of_slwc_precip" ;
< float ncfodd1(CFODD_NICOD, CFODD_NDBZE, loc) ;
< ncfodd1:long_name = "# of CFODD (05 < Reff < 12 micron)" ;
< ncfodd1:units = "1" ;
< ncfodd1:standard_name = "cfodd_reff_small" ;
< float ncfodd2(CFODD_NICOD, CFODD_NDBZE, loc) ;
< ncfodd2:long_name = "# of CFODD (12 < Reff < 18 micron)" ;
< ncfodd2:units = "1" ;
< ncfodd2:standard_name = "cfodd_reff_medium" ;
< float ncfodd3(CFODD_NICOD, CFODD_NDBZE, loc) ;
< ncfodd3:long_name = "# of CFODD (18 < Reff < 35 micron)" ;
< ncfodd3:units = "1" ;
< ncfodd3:standard_name = "cfodd_reff_large" ;
< float CFODD_NDBZE(CFODD_NDBZE) ;
< CFODD_NDBZE:long_name = "CloudSat+MODIS dBZe vs ICOD joint PDF X-axis" ;
< CFODD_NDBZE:units = "dBZ" ;
< CFODD_NDBZE:standard_name = "cloudsat_quivalent_reflectivity_factor" ;
< float CFODD_NICOD(CFODD_NICOD) ;
< CFODD_NICOD:long_name = "CloudSat+MODIS dBZe vs ICOD joint PDF Y-axis" ;
< CFODD_NICOD:units = "none" ;
< CFODD_NICOD:standard_name = "modis_in-cloud_optical_depth" ;

Convert README to Markdown

For consistency with the format of the web pages, currently under development in the cfmip.github.io repository, here I propose to convert the README file to Markdown. This will enhance the styles available to present information more clearly.

Example data for tests do not include any night columns

The example data used for regression testing do not contain any night columns, so currently the regression test does not properly test output fields that should be masked at night (i.e., the passive instrument simulator outputs for ISCCP, MISR, and MODIS). The example input data should therefore contain at least one column for which sunlit = 0. A fix for this is coming shortly.

Calculation of cloudsat_preclvl_index fails when use_vgrid=.false.

The error is caused by the calculation of cospIN%fracPrecipIce at the end of the CloudSat radar optics section in subsample_and_optics, between L870 and L887. If this diagnostic can only be calculated on the CloudSat grid, then this calculation needs to be protected by an IF test (sketched below). The error can be seen in this CI test.
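A minimal sketch of the kind of guard suggested, assuming the relevant switch is the use_vgrid flag named in the title (the surrounding names and placement are placeholders):

! Sketch of the protection suggested above (placeholder context): only compute
! the precipitation-ice fraction when the CloudSat vertical grid is in use.
if (use_vgrid) then
   ! ... existing cospIN%fracPrecipIce calculation (subsample_and_optics, L870-887) ...
end if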

Test case using global snapshot

In the PMC of 29 May 2020, we agreed that it would be good to have a more robust set of cases for the test suite. This includes adding more points than the ~150 points of the current test, such as a global snapshot from the UM or any other model.

CI test ifort broken

It seems that updates to the Intel oneAPI have broken the ifort test, as discovered in #53.
setvars.sh has been moved to /opt/intel/oneapi. However, after fixing the location of setvars.sh, it still fails when building the NetCDF library:
configure: error: Could not link conftestf.o and conftest.o

Striping caused by vertical interpolation routine

CALIPSO (and CloudSat) 3D diagnostics show striping, potentially caused by the vertical interpolation from the model's native grid. This problem was identified years ago (e.g. Figure 2 of https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2012GL053153).

G. Cesana noticed that the issue disappears when the interpolation routine is deactivated (in both the IPSL and GISS models), so it has to be related to the interpolation. However, when he tried to pinpoint the issue by using this interpolation routine in a separate program, the striping didn't occur…

The problem seems to become less relevant as the model's vertical resolution increases, as Figure 9 in https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017MS001115 suggests. HadGEM3-GC3 has a higher vertical resolution and it only shows a discontinuity at high altitudes.

The evidence suggests that the vertical interpolation routine is not working properly for thick layers (thicker than the 480 m deep target layers).

MODIS Optical_Thickness_vs_ReffICE and Optical_Thickness_vs_ReffLIQ not masked for night columns

This looks like a bug; I can confirm in the implementation in our model that modis_Optical_Thickness_vs_ReffICE and modis_Optical_Thickness_vs_ReffLIQ appear not to be masked properly for night columns. They do appear to be set properly for sunlit columns, but lack a corresponding statement to mask the non-sunlit columns (as is correctly done for modis_Optical_Thickness_vs_Cloud_Top_Pressure on line 1442 of cosp.F90). This probably won't matter for models that redundantly handle this outside of the COSP infrastructure, but it does mean that users need to treat sunlit vs non-sunlit cases explicitly to avoid dealing with the unset non-sunlit columns.
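For illustration, a fix analogous to the cloud-top-pressure masking mentioned above might look like the fragment below. The output-array names follow this issue's text, but the index order and the name of the non-sunlit index list are assumptions to be checked against cosp.F90.

! Illustrative fragment only (array names taken from this issue; the index order
! and the non-sunlit index list "notSunlit" are assumptions, to be checked
! against the existing masking of modis_Optical_Thickness_vs_Cloud_Top_Pressure):
cospOUT%modis_Optical_Thickness_vs_ReffICE(notSunlit, :, :) = R_UNDEF
cospOUT%modis_Optical_Thickness_vs_ReffLIQ(notSunlit, :, :) = R_UNDEF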

Python test script does not note occurrence of outputs that are not in the test dataset (Version check?)

cosp2_test is currently producing outputs which are not in the test dataset (see issue #42). The Python test script did not report the presence of these additional outputs. I suggest that it be modified to check that the numbers of variables in the reference and test output are the same.

It would perhaps also be good to add a versioning check (i.e. have cosp2_test write a version number into the data file, and have the Python comparison script check this number against the one in the reference dataset to make sure both are up to date).

The number of actual arguments cannot be greater than the number of dummy arguments. [COSP_INIT]

When I ran make driver_COSP1.4 in the build directory, this error appeared.

There are 22 arguments in the call to COSP_INIT in cosp_interface_v1p4.F90, but COSP_INIT in cosp.F90 has 26 arguments; the inconsistent number of arguments results in the error:

../cosp-1.4-interface/cosp_interface_v1p4.F90(646): error #6784: The number of actual arguments cannot be greater than the number of dummy arguments. [COSP_INIT]
call COSP_INIT(cfg%Lisccp_sim,cfg%Lmodis_sim,cfg%Lmisr_sim,cfg%Lradar_sim, &
------------^
../cosp-1.4-interface/cosp_interface_v1p4.F90(647): error #6633: The type of the actual argument differs from the type of the dummy argument. [NPOINTS]
cfg%Llidar_sim,cfg%Lparasol_sim,cfg%Lrttov_sim,gbx%Npoints,gbx%Nlevels, &

Bug in MODIS indexing of pressure input

There appears to be a bug in the column indexing of the pressure input variable to the MODIS simulator. The calls to modis_subcolumn on lines 903-913 of cosp.F90 contain the following:

          do i = 1, modisIN%nSunlit
             call modis_subcolumn(modisIN%Ncolumns,modisIN%Nlevels,modisIN%pres(i,:),    &
                                  modisIN%tau(int(modisIN%sunlit(i)),:,:),               &
                                  modisIN%liqFrac(int(modisIN%sunlit(i)),:,:),           &
                                  modisIN%g(int(modisIN%sunlit(i)),:,:),                 &
                                  modisIN%w0(int(modisIN%sunlit(i)),:,:),                &
                                  isccp_boxptop(int(modisIN%sunlit(i)),:),               &
                                  modisRetrievedPhase(i,:),                              &
                                  modisRetrievedCloudTopPressure(i,:),                   &
                                  modisRetrievedTau(i,:),modisRetrievedSize(i,:))
          end do

It appears that most of the inputs use the modisIN%sunlit array to index into just the sunlit columns, but modisIN%pres does not use this indexing here. As a result, for groups of columns that contain both sunlit and non-sunlit columns, there will be a mismatch between the pressure profiles and the other inputs. It appears that modisIN%pres(i,:) should be replaced with modisIN%pres(int(modisIN%sunlit(i)),:) here.
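With the suggested change applied, the call would read as follows (a sketch; only the pressure argument differs from the code quoted above):

          do i = 1, modisIN%nSunlit
             call modis_subcolumn(modisIN%Ncolumns,modisIN%Nlevels,                      &
                                  modisIN%pres(int(modisIN%sunlit(i)),:),                &
                                  modisIN%tau(int(modisIN%sunlit(i)),:,:),               &
                                  modisIN%liqFrac(int(modisIN%sunlit(i)),:,:),           &
                                  modisIN%g(int(modisIN%sunlit(i)),:,:),                 &
                                  modisIN%w0(int(modisIN%sunlit(i)),:,:),                &
                                  isccp_boxptop(int(modisIN%sunlit(i)),:),               &
                                  modisRetrievedPhase(i,:),                              &
                                  modisRetrievedCloudTopPressure(i,:),                   &
                                  modisRetrievedTau(i,:),modisRetrievedSize(i,:))
          end do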

Subroutine COSP_OPAQ declares surfelev as optional

Subroutine COSP_OPAQ declares the input variable surfelev as optional, but it is used throughout the routine as if it were always present. This causes a segfault when using the Intel compiler if surfelev is not passed to the routine. My guess is that surfelev probably should not be optional at all, but if this is the desired interface, then simply wrapping the code that uses it in a block that checks for the presence of surfelev would be sufficient. For example, at line 1429 in lidar_simulator.F90, the following could be used as a fix:

                 if (present(surfelev)) then
                    cldtypemeanzse(ip,1) = cldtypemeanzse(ip,1) + (( vgrid_z(zopac) + vgrid_z(z_top) )/2.) - surfelev(ip)
                    cldtypemeanzse(ip,3) = cldtypemeanzse(ip,3) + ( vgrid_z(zopac) - surfelev(ip) )
                 else
                    cldtypemeanzse(ip,1) = cldtypemeanzse(ip,1) + (( vgrid_z(zopac) + vgrid_z(z_top) )/2.)
                    cldtypemeanzse(ip,3) = cldtypemeanzse(ip,3) + ( vgrid_z(zopac) )
                 end if

and likewise in other spots in the code that use surfelev.

Inconsistent floating point types in quickbeam_optics

Different conventions for floating-point literals in quickbeam_optics.F90 lead to compile-time errors when using the Intel compiler with the -r8 flag. For example:

../subsample_and_optics_example/optics/quickbeam_optics/quickbeam_optics.F90(472): error #6633: The type of the actual argument differs from the type of the dummy argument.
          ld   = ((apm*gamma(1.+bpm)*N0)/(rho_a*Q*1E-3))**tmp1

The problem appears to be that bpm is declared properly as a real(wp), but this conflicts with the precision of the 1. literal when the -r8 flag is used. We could omit the -r8 flag, but it seems to be required by other parts of our model. The bigger issue seems to be the inconsistent floating-point literals in this code, with some instances using gamma(1+bpm) and others gamma(1.+bpm). My understanding is that adding the decimal without giving an explicit precision is somewhat dangerous, because we can't be sure what precision will be used. A safer approach (and one that resolves the above compile-time error) would be to either drop the decimal from each instance in the code, or else be explicit about precision by appending _wp to each literal, like gamma(1._wp+bpm). I can confirm that either of these fixes resolves the above error.
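As a sketch of the second option, the line from the error message above would become the following (an illustration rather than a tested patch):

          ! Sketch of the explicit-precision fix described above, applied to the
          ! line from the error message: literals are given the _wp kind so the
          ! argument to gamma matches real(wp) regardless of the -r8 flag.
          ld   = ((apm*gamma(1._wp+bpm)*N0)/(rho_a*Q*1.E-3_wp))**tmp1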
