
cospv2.0's Issues

cosp_diag_warmrain has wrong dimensions for variable frac_out / design flaw

The size of the matrix "frac_out" should be (nPoints,nColumns,nLevels) but Nlvgrid is being passed into the subroutine cosp_diag_warmrain as "nLevels" (see cosp.F90 line 1629 and cosp_stats.F90 line 283). On my system this is causing random values to appear in the additional elements within frac_out and causing "random errors" in the output variable wr_occfreq_ntotal.

I believe there is a design flaw here: the authors of this algorithm assume that frac_out is on the same grid as the re-gridded radar reflectivity, which it isn't. The easiest solution may be to pass in the radar reflectivity on the model grid (i.e. using nLevels) rather than on the radar grid (vgrid), but that may violate the intent to reproduce a CloudSat-specific result; in that case the algorithm needs to be redesigned to either (a) NOT use frac_out or (b) re-grid frac_out onto the CloudSat radar grid (vgrid).
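Mismatches like this could be caught early with an explicit shape guard before the diagnostic is computed. A minimal sketch in Python (the function name and error message are illustrative, mirroring the Fortran dimensions, not part of COSP):

```python
def check_frac_out_shape(frac_out_shape, npoints, ncolumns, nlevels):
    """Guard against passing Nlvgrid where nLevels is expected.

    frac_out is dimensioned (nPoints, nColumns, nLevels) on the model
    grid; passing the CloudSat grid size (Nlvgrid) as the level count
    lets the caller read uninitialised elements past the real data.
    """
    expected = (npoints, ncolumns, nlevels)
    if frac_out_shape != expected:
        raise ValueError(
            f"frac_out has shape {frac_out_shape}, expected {expected}; "
            "was Nlvgrid passed instead of nLevels?")
```

An equivalent `if (size(frac_out,3) /= nLevels)` test at the top of the Fortran subroutine would turn the "random errors" into a hard, reproducible failure.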

** NOTE: I have not checked whether the same problem exists in other routines. I found this because my cosp2_test output was generating strange results.

Python test script does not note occurrence of outputs that are not in the test dataset (Version check?)

cosp2_test is currently producing outputs which are not in the test dataset (see issue #42). The python test script did not report the presence of these additional outputs. I suggest that it be modified to check that the number of variables in the reference and test_output are the same.

And perhaps it would be good to add some versioning check (i.e. have cosp2_test output a version number into the data file and thereby have the python comparison script check this number against what is in the reference dataset to make sure both are up to date).
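The variable-set check is straightforward once the variable names have been read from both files. A hedged sketch (function name hypothetical; the name sets would come from e.g. netCDF4's Dataset.variables.keys()):

```python
def compare_variable_sets(ref_vars, test_vars):
    """Report variables present in only one of the two files.

    ref_vars / test_vars are iterables of variable names read from the
    reference and test NetCDF outputs.  Returns two sorted lists so the
    test script can print and flag any asymmetry.
    """
    only_in_test = sorted(set(test_vars) - set(ref_vars))
    only_in_ref = sorted(set(ref_vars) - set(test_vars))
    return only_in_test, only_in_ref
```

If either list is non-empty the script should report it and exit with an error, which would have surfaced the extra ncfodd outputs in issue #42 immediately.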

Inconsistent unit labels for hgt_matrix

It looks like the units for hgt_matrix and hgt_matrix_half are labeled incorrectly in the definition of cosp_column_inputs. The comment associated with the declaration says these should be in km, but throughout the code they are treated as being in units of m. For example, line 661 in cosp.F90 sets misrIN%zfull = cospgridIN%hgt_matrix directly, and the MISR simulator seems to treat zfull as having units of m, since the cloud top height bins (which are in km) are multiplied by 1000 when comparing to zfull (line 273 in MISR_simulator.F90). Likewise, line 716 in cosp.F90 sets cloudsatIN%hgt_matrix = cospgridIN%hgt_matrix, but in the call to quickbeam_subcolumn at line 897, hgt_matrix is divided by 1000 to get units in km, which quickbeam expects. The units do at least appear to be handled consistently in the component simulators, but it would be helpful to the user if the top-level comment for hgt_matrix were consistent with the usage in the simulators. It would also be helpful to state the expected units of zfull in the MISR simulator.
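A coarse magnitude heuristic can flag km-vs-m confusion at the interface. This is only an illustrative sketch (function name and thresholds are assumptions, not COSP code): tropospheric model-level heights in metres are of order 1e2-1e4, while in km they are of order 0.1-10.

```python
def infer_height_units(hgt):
    """Guess whether a height profile is in m or km from its magnitude.

    Thresholds are illustrative: a model top above 1000 in the given
    units almost certainly means metres; below 100 almost certainly km.
    """
    top = max(hgt)
    if top > 1000.0:
        return "m"
    if top < 100.0:
        return "km"
    return "ambiguous"
```

A one-line sanity check like this in the driver (or an assert in debug builds) would make a mislabeled hgt_matrix fail loudly rather than silently scaling all heights by 1000.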

Striping caused by vertical interpolation routine

CALIPSO (and CloudSat) 3D diagnostics show striping, potentially caused by the vertical interpolation from the model's native grid. This problem was identified years ago (e.g., Figure 2 of https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2012GL053153).

G. Cesana noticed that the issue disappears when the interpolation routine is deactivated (in both IPSL and GISS models), so it has to be related to the interpolation. However, when trying to pinpoint the issue, he used this interpolation routine in a separate program and the striping didn't occur…

The problem seems to become less relevant as the model's vertical resolution increases, as Figure 9 in https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2017MS001115 suggests. HadGEM3-GC3 has a higher vertical resolution and it only shows a discontinuity at high altitudes.

The evidence suggests that the vertical interpolation routine is not working properly for thick layers (thicker than the target layer of 480m deep).
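The thick-layer behaviour is easy to see with a minimal overlap-weight sketch (function name and grids are illustrative; this mirrors the general overlap-weighting idea, not COSP's exact routine). When one source layer spans several consecutive 480 m target bins, those bins all get full weight from the same source value and inherit identical output, which appears as a vertical stripe:

```python
def overlap_weights(src_bot, src_top, tgt_edges):
    """Fraction of each target bin covered by one source layer.

    tgt_edges: ascending bin edges of the target (e.g. 480 m) grid.
    A source layer thicker than a target bin yields weight 1.0 in every
    bin it fully covers, so all those bins copy the same source value.
    """
    weights = []
    for lo, hi in zip(tgt_edges[:-1], tgt_edges[1:]):
        overlap = max(0.0, min(src_top, hi) - max(src_bot, lo))
        weights.append(overlap / (hi - lo))
    return weights
```

For a 1440 m thick source layer on a 480 m target grid, three consecutive bins get weight 1.0, i.e. three identical output levels per model layer: the striping signature described above.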

hgt_matrix_half is used inconsistently

The array cospstateIN%hgt_matrix_half appears to be used inconsistently in the interface code and in calls to simulators and cosp_change_vertical_grid. This array is allocated to have size (npoints,nlevels+1), which would imply that it should hold the heights of model level interfaces, but is used like this in calls to cosp_change_vertical_grid:

call cosp_change_vertical_grid(cloudsatIN%Npoints,1,cloudsatIN%Nlevels,             &
                               cospgridIN%hgt_matrix(:,cloudsatIN%Nlevels:1:-1),    &
                               cospgridIN%hgt_matrix_half(:,cloudsatIN%Nlevels:1:-1), &
                               betamol_in, Nlvgrid,                                 &
                               vgrid_zl(Nlvgrid:1:-1),vgrid_zu(Nlvgrid:1:-1),       &
                               betamolI(:,1,Nlvgrid:1:-1))

cosp_change_vertical_grid expects zlev_half to have shape (npoints,nlevels), and to hold the heights of the bottom of each level. But, if a model were to populate hgt_matrix_half with the full interface levels (from levels 1 to nlevels+1, from TOA to surface), then the slicing nlevels:1:-1 would be taking the first nlevels heights, which would be the heights of the interface tops, not bottoms. In E3SM, we appear to implement the hacky fix to populate levels (1,nlevels) of cospstateIN%hgt_matrix_half with the heights of the level bottoms, and then populate level nlevels+1 of hgt_matrix_half with a dummy value of 0. Is this expected behavior?
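The off-by-one in the slicing can be made concrete with a small index sketch (names illustrative, not COSP code). With interfaces numbered 1..nlevels+1 from TOA to surface, level k sits between interface k (its top) and interface k+1 (its bottom), so taking the first nlevels interfaces, as the nlevels:1:-1 slice effectively does before reversal, selects tops:

```python
def first_n_interfaces_are_tops(nlevels):
    """Show which interfaces the slice picks vs. which the routine wants.

    interfaces: indices 1..nlevels+1, ordered TOA -> surface.  The first
    nlevels entries are the level TOPS; cosp_change_vertical_grid's
    zlev_half argument expects the level BOTTOMS (entries 2..nlevels+1).
    """
    interfaces = list(range(1, nlevels + 2))  # interface indices, TOA->sfc
    tops = interfaces[:nlevels]               # what the slice actually takes
    bottoms = interfaces[1:]                  # what the routine wants
    return tops, bottoms
```

This suggests the interface code should slice interfaces 2..nlevels+1 (the bottoms) rather than 1..nlevels, which would also remove the need for E3SM's dummy-value workaround.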

MISR outputs are not properly set to fillvalues for night columns

It appears that MISR_mean_ztop is not properly reset to the fillvalue R_UNDEF for night columns. In MISR_COLUMN() in MISR_simulator.F90, the following code computes the joint histogram for sunlit points, or else sets values to fillvalue (lines 266-287):

if (sunlit(j) .eq. 1) then
   < do stuff >
else
   MISR_cldarea(j)         = R_UNDEF
   MISR_mean_ztop(npoints) = R_UNDEF
endif

Note that MISR_cldarea appears to be set correctly, but MISR_mean_ztop is only set for the last index, and fq_MISR_TAU_v_CTH is excluded entirely. The following correction should give the desired behavior:

if (sunlit(j) .eq. 1) then
   < do stuff >
else
   MISR_cldarea(j)   = R_UNDEF
   MISR_mean_ztop(j) = R_UNDEF
   fq_MISR_TAU_v_CTH(j,:,:) = R_UNDEF
endif

Convert README to Markdown

For consistency with the format of the web pages currently under development in the cfmip.github.io repository, I propose converting the README file to Markdown. This will enhance the styles available to present information more clearly.

CI test ifort broken

It seems that updates to the Intel oneAPI have broken the ifort test, as discovered in #53.
setvars.sh has been moved to /opt/intel/oneapi. However, after fixing the location of setvars.sh, it still fails when building the NetCDF library:
configure: error: Could not link conftestf.o and conftest.o

Adding OPAQ diagnostics + ground lidar to COSP_lidar

This issue has two purposes:

  1. Add 7 new variables (OPAQ products) to the standard CALIPSO-like COSP_lidar outputs consistently with the new GOCCPv3.0 dataset (3x2D variables and 4x3D variables)
  2. Add a ground-based COSP_lidar with 7 new outputs from this new code embedded in the lidar simulator (Molecular backscatter + standard 2D and 3D cloud variables)

Adding land points to the sample dataset

As mentioned in the introduction of Pull Request #19, it would be useful to have some land points in the sample data provided with the offline version of COSPv2. So far, the 153 grid points provided in this dataset are over ocean only. Adding land points to the sample dataset would allow us to have a wider range of cases covered with the regression test, and would be particularly useful for testing diagnostics depending on or related to the surface elevation.

download_test_data.sh drive links

We tried to follow the instructions for running the offline tests in the README file, and it seems that the links in the download_test_data.sh are outdated. When running compare_to_kgo.py we got the following error:

===== ERROR: some of the differences are larger than the tolerances.
==========================================  Summary statistics  ==========================================
                                Variable          N      AvgDiff      MinDiff      MaxDiff        StDev
                                 npdfcld         47  -1.0000e+00  -1.0000e+00  -1.0000e+00   0.0000e+00
                                 npdfdrz         47  -1.0000e+00  -1.0000e+00  -1.0000e+00   0.0000e+00
                                npdfrain         47  -1.0000e+00  -1.0000e+00  -1.0000e+00   0.0000e+00
                                 ncfodd1      35250  -1.0000e+00  -1.0000e+00  -1.0000e+00   0.0000e+00
                                 ncfodd2      35250  -1.0000e+00  -1.0000e+00  -1.0000e+00   0.0000e+00
                                 ncfodd3      35250  -1.0000e+00  -1.0000e+00  -1.0000e+00   0.0000e+00
==========================================================================================================
===== ERROR: the test is exiting with an error, please review the output above.

Judging by the links used in continuous_integration.yml, the links in the download script are at least one year out of date. Should we make a PR with a fix?

Creation of sister repository: COSPweb

This is just to record the process of creation of a new repository that will host the COSP web pages, as discussed in the PMC.
@dustinswales , please can you create a new repository called COSPweb and give me the relevant permissions? Thanks!

Inconsistent floating point types in quickbeam_optics

Different conventions for floating point literals in quickbeam_optics.F90 lead to compile-time errors when using the intel compiler with the -r8 flag. For example:

../subsample_and_optics_example/optics/quickbeam_optics/quickbeam_optics.F90(472): error #6633: The type of the actual argument differs from the type of the dummy argument.
          ld   = ((apm*gamma(1.+bpm)*N0)/(rho_a*Q*1E-3))**tmp1

The problem appears to be that bpm is declared properly as a real(wp), but this conflicts with the precision of the 1. literal when the -r8 flag is used. We could omit the -r8 flag, but it seems to be required by other parts of our model. The bigger issue is the inconsistent floating point literals in the code, with some instances using gamma(1+bpm) and others gamma(1.+bpm). My understanding is that adding the decimal without giving an explicit precision is somewhat dangerous, because we can't be sure what precision will be used. A safer approach (and one that resolves the above compile-time error) would be to either drop the decimal from each instance in the code, or be explicit about precision by appending _wp to each literal, like gamma(1._wp+bpm). I can confirm that either of these fixes resolves the above error.

Encoding version information in the KGO

In the PMC on 29 May 2020, there was some discussion about encoding the KGO with some version information. It was agreed that it was a nice idea only if it can be done automatically by the CI test. This issue will explore the possibility of automatically updating that information to the KGO when the CI test fails.

[User] Better documentation of changes in stable versions

Raised during the CFMIP meeting in Boulder, October 2018.
People upgrading COSP in their models want better information on the impact of the changes between stable versions. It would be desirable to provide more detail on which diagnostics have changed between stable versions, and on the expected impact of the changes, using a battery of quickplots.

cosp2_test reference (test) output is missing variables & needs versioning.

The cosp2_test driver is producing outputs (all related to ncfodd) which are not in the test dataset currently supplied with the master branch (cosp2_output_um.ref.nc). I note this test data does not appear to carry any versioning information, which is problematic in this regard.

Missing variables are listed below:

< float npdfcld(loc) ;
< npdfcld:long_name = "# of Non-Precipitating Clouds" ;
< npdfcld:units = "1" ;
< npdfcld:standard_name = "number_of_slwc_nonprecip" ;
< float npdfdrz(loc) ;
< npdfdrz:long_name = "# of Drizzling Clouds" ;
< npdfdrz:units = "1" ;
< npdfdrz:standard_name = "number_of_slwc_drizzle" ;
< float npdfrain(loc) ;
< npdfrain:long_name = "# of Precipitating Clouds" ;
< npdfrain:units = "1" ;
< npdfrain:standard_name = "number_of_slwc_precip" ;
< float ncfodd1(CFODD_NICOD, CFODD_NDBZE, loc) ;
< ncfodd1:long_name = "# of CFODD (05 < Reff < 12 micron)" ;
< ncfodd1:units = "1" ;
< ncfodd1:standard_name = "cfodd_reff_small" ;
< float ncfodd2(CFODD_NICOD, CFODD_NDBZE, loc) ;
< ncfodd2:long_name = "# of CFODD (12 < Reff < 18 micron)" ;
< ncfodd2:units = "1" ;
< ncfodd2:standard_name = "cfodd_reff_medium" ;
< float ncfodd3(CFODD_NICOD, CFODD_NDBZE, loc) ;
< ncfodd3:long_name = "# of CFODD (18 < Reff < 35 micron)" ;
< ncfodd3:units = "1" ;
< ncfodd3:standard_name = "cfodd_reff_large" ;
< float CFODD_NDBZE(CFODD_NDBZE) ;
< CFODD_NDBZE:long_name = "CloudSat+MODIS dBZe vs ICOD joint PDF X-axis" ;
< CFODD_NDBZE:units = "dBZ" ;
< CFODD_NDBZE:standard_name = "cloudsat_quivalent_reflectivity_factor" ;
< float CFODD_NICOD(CFODD_NICOD) ;
< CFODD_NICOD:long_name = "CloudSat+MODIS dBZe vs ICOD joint PDF Y-axis" ;
< CFODD_NICOD:units = "none" ;
< CFODD_NICOD:standard_name = "modis_in-cloud_optical_depth" ;

Example data for tests do not include any night columns

The example data used for regression testing do not contain any night columns, so currently the regression test does not properly test output fields which should be masked at night (i.e., the passive instrument simulator outputs for ISCCP, MISR, and MODIS). The example input data should then contain at least one column for which sunlit = 0. Fix for this coming shortly.

Add MODIS joint histogram diagnostics

We would like to add four new joint-histogram diagnostics to the MODIS simulator:

  1. CTP-COT joint histogram for liquid-topped clouds
  2. CTP-COT joint histogram for ice-topped clouds
  3. CWP-CER joint histogram for liquid-topped clouds
  4. CWP-CER joint histogram for ice-topped clouds

(CTP: cloud-top pressure; COT: cloud optical thickness; CWP: cloud water path; CER: cloud particle size)

These diagnostics correspond to the observational dataset from Pincus et al. 2023 (https://doi.org/10.5194/essd-15-2483-2023). Additionally, we would like to make several small changes to the histogram bin edges to match the data from Pincus et al.
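The binning underlying all four diagnostics is the same joint-histogram accumulation; a stdlib-only Python sketch (function name and bin edges are illustrative, not the Pincus et al. edges or COSP's implementation):

```python
import bisect

def joint_histogram(x, y, x_edges, y_edges):
    """Accumulate a CTP-COT (or CWP-CER) style joint histogram.

    x, y: paired per-subcolumn retrievals; *_edges: ascending bin edges.
    Values outside the edges are skipped, as COSP fill values would be.
    """
    nx, ny = len(x_edges) - 1, len(y_edges) - 1
    counts = [[0] * ny for _ in range(nx)]
    for xv, yv in zip(x, y):
        i = bisect.bisect_right(x_edges, xv) - 1  # bin index along x
        j = bisect.bisect_right(y_edges, yv) - 1  # bin index along y
        if 0 <= i < nx and 0 <= j < ny:
            counts[i][j] += 1
    return counts
```

The phase split (liquid-topped vs ice-topped) then amounts to running this accumulation twice with the input pairs partitioned by the retrieved cloud-top phase.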

Allocation issue with MODIS/Cloudsat joint-products

@takmichibata
If these diagnostics are not requested there is a runtime error, because these fields are always allocated.
I propose you add some logic around this section of code, lines 1600-1683 of cosp.F90, so that the diagnostics are produced only when the fields are requested.
As a reference, see how the joint CloudSat-CALIPSO products, just above your MODIS/Cloudsat joint diagnostics, handle this issue.

The number of actual arguments cannot be greater than the number of dummy arguments. [COSP_INIT]

When I ran make driver_COSP1.4 under the build directory, this error appeared.

The call to COSP_INIT in cosp_interface_v1p4.F90 passes 22 arguments, but COSP_INIT in cosp.F90 takes 26 arguments; the inconsistent argument count results in the error:

../cosp-1.4-interface/cosp_interface_v1p4.F90(646): error #6784: The number of actual arguments cannot be greater than the number of dummy arguments. [COSP_INIT]
call COSP_INIT(cfg%Lisccp_sim,cfg%Lmodis_sim,cfg%Lmisr_sim,cfg%Lradar_sim, &
------------^
../cosp-1.4-interface/cosp_interface_v1p4.F90(647): error #6633: The type of the actual argument differs from the type of the dummy argument. [NPOINTS]
cfg%Llidar_sim,cfg%Lparasol_sim,cfg%Lrttov_sim,gbx%Npoints,gbx%Nlevels, &

Implementation of CLARA simulator

Hi,

I have been tasked with implementing the CLARA simulator (Eliasson et al., 2020) into COSP if possible.

The way we built the simulator for offline use is very similar to the MODIS simulator.

The main difference to the MODIS simulator is how the cloud mask is simulated. The cloud mask relies on auxiliary data of the probability of detection (POD) as a function of optical depth, geographical location and whether or not the subcolumn is sunlit. The subcolumn is considered cloudy when a generated random number calculated for each subcolumn is smaller than the POD as read from the auxiliary data. The background to this approach is explained in the paper describing the simulator.

Our implementation does not require any model fields beyond those already required by the COSP simulators.

Reference:
Eliasson, S., Karlsson, K.-G., and Willén, U.: A simulator for the CLARA-A2 cloud climate data record and its application to assess EC-Earth polar cloudiness, Geosci. Model Dev., 13, 297–314, https://doi.org/10.5194/gmd-13-297-2020, 2020.

Missing type specifier causing crash on Cori-KNL in running debug mode

I've been working with @brhillman to test his update to the COSP interface that was written to work with E3SM-MMF. My test runs run without debug mode, but when I enable debug mode I get a floating point overflow error and a backtrace that points to line 121 of src/simulator/cosp_cloudsat_interface.F90, which looks like this:

rcfg%step_list(j)=0.1_wp+0.1_wp*((j-1)**1.5)

Adding the _wp kind to the literal in the exponent resolves the issue:

rcfg%step_list(j)=0.1_wp+0.1_wp*((j-1)**1.5_wp)

Subroutine COSP_OPAQ declares surfelev as optional

Subroutine COSP_OPAQ declares input variable surfelev as optional, but it is used throughout the routine as if it is always present. This causes a segfault when using the Intel compiler if surfelev is not passed to the routine. My guess is that surfelev probably should not be optional at all, but if this is the desired interface then simply wrapping code that uses it in a logical that checks for the presence of surfelev would be sufficient. For example, at line 1429 in lidar_simulator.F90, the following could be used as a fix:

                 if (present(surfelev)) then
                    cldtypemeanzse(ip,1) = cldtypemeanzse(ip,1) + (( vgrid_z(zopac) + vgrid_z(z_top) )/2.) - surfelev(ip)
                    cldtypemeanzse(ip,3) = cldtypemeanzse(ip,3) + ( vgrid_z(zopac) - surfelev(ip) )
                 else
                    cldtypemeanzse(ip,1) = cldtypemeanzse(ip,1) + (( vgrid_z(zopac) + vgrid_z(z_top) )/2.)
                    cldtypemeanzse(ip,3) = cldtypemeanzse(ip,3) + ( vgrid_z(zopac) )
                 end if

and likewise in other spots in the code that use surfelev.

Calculation of cloudsat_preclvl_index fails when use_vgrid=.false.

The error is caused by the calculation of cospIN%fracPrecipIce at the end of the CloudSat radar optics section in subsample_and_optics, between L870 and L887. If this diagnostic can only be calculated on the CloudSat grid, then it needs to be protected by an IF test. The error can be seen in this CI test.

Bug in MODIS indexing of pressure input

There appears to be a bug in the column indexing of the pressure input variable to the MODIS simulator. The calls to modis_subcolumn on lines 903-913 of cosp.F90 contain the following:

          do i = 1, modisIN%nSunlit
             call modis_subcolumn(modisIN%Ncolumns,modisIN%Nlevels,modisIN%pres(i,:),    &
                                  modisIN%tau(int(modisIN%sunlit(i)),:,:),               &
                                  modisIN%liqFrac(int(modisIN%sunlit(i)),:,:),           &
                                  modisIN%g(int(modisIN%sunlit(i)),:,:),                 &
                                  modisIN%w0(int(modisIN%sunlit(i)),:,:),                &
                                  isccp_boxptop(int(modisIN%sunlit(i)),:),               &
                                  modisRetrievedPhase(i,:),                              &
                                  modisRetrievedCloudTopPressure(i,:),                   &
                                  modisRetrievedTau(i,:),modisRetrievedSize(i,:))
          end do

It appears that most of the inputs use the modisIN%sunlit array to index into just the sunlit columns, but modisIN%pres does not use this indexing here. As a result, for groups of columns that contain both sunlit and non-sunlit columns, there will be a mismatch between the pressure profiles and the other inputs. It appears that modisIN%pres(i,:) should be replaced with modisIN%pres(int(modisIN%sunlit(i)),:) here.
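The fix is a standard gather through the sunlit-column map; a Python sketch of the indexing (names mirror the Fortran, but the data and function name are illustrative):

```python
def gather_sunlit(pres, sunlit_idx):
    """Gather pressure rows through the sunlit-column index map.

    pres: one row per original column; sunlit_idx: original column
    indices of the sunlit columns (the role of modisIN%sunlit).  Using
    the loop counter i directly instead of sunlit_idx[i] misaligns the
    pressure rows with the other gathered inputs whenever any earlier
    column is non-sunlit.
    """
    return [pres[k] for k in sunlit_idx]
```

With columns [dark, sunlit, dark, sunlit], the loop counter would pick rows 0 and 1, while the correct gather picks rows 1 and 3, which is exactly the mismatch described above.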

Error in COSP_CHANGE_VERTICAL_GRID

I'm getting an empty/R_UNDEF COSPOUT%calipso_cldlayer(npoints,ncat) being returned from COSP_STATS.F90.
I traced the problem to this:

r = R_UNDEF
dz = zu - zl

do i=1,Npoints
   ! Vertical grid at that point
   xl = zhalf(i,:)
   xu(1) = zfull(i,1)+zfull(i,1)-zhalf(i,1)
   xu(2:Nlevels) = xl(1:nlevels-1) ! this puts a 0 at xu(2) Why??????

Because zhalf(:,1) is all 0s in my model (TOA), this causes xu(2) to be 0 and therefore introduces a very large negative number (roughly -72 000) into the weights at 1 or 2 positions, while in many other places the weight is 0. Effectively xu(1) is a doubling of zfull(i,1), which makes it about 160 000 m. I don't get why this is doubled here.

This results in sum(w(:,k)) always being less than 0, so the output is never filled and everything is returned as R_UNDEF.

   ! Do the weighted mean
   do j=1,Ncolumns
      do k=1,M
         ws = sum(w(:,k))
         if (ws > 0.0) r(i,j,k) = sum(w(:,k)*yp(j,:))/ws
      enddo
   enddo

I'm not sure if this vertical grid was designed specifically for only one model type, but clearly it doesn't work in my case. I'm curious as to why the half levels are used here? My model has 91 levels that go up to about 80 km and we are trying to put this onto an 18 km grid. Would appreciate some help with this problem.
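For reference, the xu construction quoted above can be reproduced in a few lines of Python (illustrative, not COSP code), which makes the reported symptom easy to check for any model's profile:

```python
def build_upper_edges(zfull, zhalf):
    """Mimic the xu construction in COSP_CHANGE_VERTICAL_GRID for one
    profile (levels ordered top-of-atmosphere first).

    xu[0] = 2*zfull[0] - zhalf[0] extrapolates the top edge; the rest of
    xu is zhalf shifted by one.  If a model stores 0 in zhalf at TOA,
    xu[0] doubles the first full-level height (~160 km for an 80 km top)
    and xu[1] inherits the 0, producing the spurious weights described.
    """
    xu = [2.0 * zfull[0] - zhalf[0]]
    xu.extend(zhalf[: len(zfull) - 1])
    return xu
```

The extrapolation is harmless when zhalf(:,1) holds the true top interface height, but it amplifies any placeholder zeros, which points to a convention mismatch between the model's half-level definition and the one this routine assumes.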

Best,
/M

Test case using global snapshot

In the PMC of 29 May 2020, we agreed that it would be good to have a more robust set of cases for the test suite. This included adding more points than the standard ~150 points in the current test, such as a global snapshot from the UM or any other model.

Remove COSPv1.4 interface

As part of the COSP2 release we provided a "drop-in replacement" interface that would allow COSP1 users to use the new COSP2 code with little interruption on their side. Essentially the role of this interface was to map derived data types (DDTs) used in COSP1 to the DDTs used in COSP2.
This interface was maintained up until COSPv2.0.3, consistent with the last release of COSP1, v1.4.3.
However, many developments have been added to the COSP2 repository since then (the latest version is v2.1.6), and this interface is no longer compatible with COSP2 v2.1 and later. Supporting this interface beyond v2.0.3 isn't sensible, as the new diagnostics will not be available to users of the COSP1 code base.
We (Robert and I) propose we remove this deprecated code from the current COSP2 code base.

RTTOV 12 interface

Add interface to RTTOV12 (clear-sky only).

It might be useful for some users to be able to use RTTOV 11 whilst it is still supported by the NWP SAF, so I will also make changes to the Makefile to make using COSP with RTTOV a bit more straightforward.

MODIS Optical_Thickness_vs_ReffICE and Optical_Thickness_vs_ReffLIQ not masked for night columns

This looks like a bug, and I can confirm in our model's implementation that modis_Optical_Thickness_vs_ReffICE and modis_Optical_Thickness_vs_ReffLIQ are not masked properly for night columns. They do appear to be set properly for sunlit columns, but lack a corresponding statement to mask the non-sunlit columns (as is correctly done for modis_Optical_Thickness_vs_Cloud_Top_Pressure on line 1442 in cosp.F90). This probably won't matter for models that redundantly handle this outside of the COSP infrastructure, but it does mean users must treat sunlit vs non-sunlit cases explicitly to avoid dealing with the unset non-sunlit columns.
