wrf-model / wrf
The official repository for the Weather Research and Forecasting (WRF) model
License: Other
In var/da/da_radiance/da_get_innov_vector_crtm.inc:
! NOTE: WRF high-level q values seems too big, replaced by constants
if (p(1)*0.01 < 75.0) Atmosphere(1)%Absorber(kte-k+1,1) = 0.001
I am not sure of the justification for the arbitrary values 75.0 and 0.001, but at the very least the assignment should be min(Atmosphere(1)%Absorber(kte-k+1,1), 0.001), so that smaller model values are preserved.
Currently, WRF only permits fewer than 10000 MPI tasks. As grid sizes increase, more than 10000 cores may be necessary, so the value of RSL_MAXPROC in external/RSL_LITE/rsl_lite.h needs to be increased accordingly.
The sprintf statements in external/RSL_LITE/c_code.c also need to be changed to permit "%06d" (six-digit) task numbers.
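The sprintf issue can be illustrated like this (the file-name pattern here is only an example, not the exact WRF code): a five-digit field overflows its zero padding at task 100000, while a "%06d" field keeps per-task file names fixed-width up to 999999 tasks.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build a per-task output file name with a six-digit, zero-padded task
 * number, the kind of change the c_code.c sprintf calls would need. */
static void name_for_task(char *buf, size_t n, int task)
{
    snprintf(buf, n, "rsl.out.%06d", task);
}
```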
With these CPP directives in dyn_em/module_small_step_em.F:
5 # define mut(...) (c1f(k)*XXPCTFXX(__VA_ARGS__)+c2f(k))
6 # define XXPCTFXX(...) mut(__VA_ARGS__)
7
8 # define Mut(...) (c1h(k)*XXPCTHXX(__VA_ARGS__)+c2h(k))
9 # define XXPCTHXX(...) Mut(__VA_ARGS__)
35 # define muts(...) (c1f(k)*XXPCTFSXX(__VA_ARGS__)+c2f(k))
36 # define XXPCTFSXX(...) muts(__VA_ARGS__)
37
38 # define Muts(...) (c1h(k)*XXPCTHSXX(__VA_ARGS__)+c2h(k))
39 # define XXPCTHSXX(...) Muts(__VA_ARGS__)
Line 318 below should be using the "h" levels, not the "f" levels: the lower-case "muts" and "mut" on line 318 refer to the c1f and c2f coefficients.
314 DO j=j_start, j_end
315 DO k=k_start, k_end
316 DO i=i_start, i_end
317 t_save(i,k,j) = t_2(i,k,j)
318 t_2(i,k,j) = muts(i,j)*t_1(i,k,j)-mut(i,j)*t_2(i,k,j)
319 ENDDO
320 ENDDO
321 ENDDO
The correct line 318 in dyn_em/module_small_step_em.F, subroutine small_step_prep, should use the capitalized "Muts" and "Mut" forms:
314 DO j=j_start, j_end
315 DO k=k_start, k_end
316 DO i=i_start, i_end
317 t_save(i,k,j) = t_2(i,k,j)
318 t_2(i,k,j) = Muts(i,j)*t_1(i,k,j)-Mut(i,j)*t_2(i,k,j)
319 ENDDO
320 ENDDO
321 ENDDO
This same modification is required for both v3.9 and v3.9.1.1 when using the hybrid vertical coordinate option.
Look at the following lines in v4: if the line identified above for 3.9 and 3.9.1.1 is incorrect, then these may need to be addressed as well.
> git checkout -b v3.9
> grep -n mut module_small_step_em.F | grep -i "t_2("
318: t_2(i,k,j) = muts(i,j)*t_1(i,k,j)-mut(i,j)*t_2(i,k,j)
467: t_2(i,k,j) = (t_2(i,k,j) + t_save(i,k,j)*mut(i,j))/muts(i,j)
476: t_2(i,k,j) = (t_2(i,k,j) + t_save(i,k,j)*mut(i,j))/muts(i,j)
485: t_2(i,k,j) = (t_2(i,k,j) - dts*number_of_small_timesteps*mut(i,j)*h_diabatic(i,k,j) &
> git checkout -b v4.0
> grep -n c1f module_small_step_em.F | grep -i "t_2("
263: t_2(i,k,j) = (c1f(k)*muts(i,j)+c2f(k))*t_1(i,k,j)-(c1f(k)*mut(i,j)+c2f(k))*t_2(i,k,j)
412: t_2(i,k,j) = (t_2(i,k,j) + t_save(i,k,j)*(c1f(k)*mut(i,j)+c2f(k)))/(c1f(k)*muts(i,j)+c2f(k))
421: t_2(i,k,j) = (t_2(i,k,j) - dts*number_of_small_timesteps*(c1f(k)*mut(i,j)+c2f(k))*h_diabatic(i,k,j) &
With the GNU 5.2.0 compilers and configure -d, the module_cu_mskf.F module fails to build with the following error:
module_cu_mskf.f90:5562:27:
TIMEC=AMIN1(TIMEC,86400)
1
Error: ‘a2’ argument of ‘amin1’ intrinsic at (1) must be REAL(4)
(AMIN1 requires REAL arguments, so the integer literal 86400 needs to be the real constant 86400.)
ar ru ../main/libwrflib.a module_driver_constants.o module_domain_type.o module_streams.o module_domain.o module_integrate.o module_timing.o module_configure.o module_tiles.o module_machine.o module_nesting.o module_wrf_error.o module_state_description.o module_sm.o module_io.o module_comm_dm.o module_comm_dm_0.o module_comm_dm_1.o module_comm_dm_2.o module_comm_dm_3.o module_comm_dm_4.o module_comm_nesting_dm.o module_dm.o module_quilt_outbuf_ops.o module_io_quilt.o module_intermediate_nmm.o module_cpl.o module_cpl_oasis3.o module_clear_halos.o wrf_num_bytes_between.o wrf_shutdown.o wrf_debug.o libmassv.o collect_on_comm.o hires_timer.o clog.o nl_get_0_routines.o nl_get_1_routines.o nl_get_2_routines.o nl_get_3_routines.o nl_get_4_routines.o nl_get_5_routines.o nl_get_6_routines.o nl_get_7_routines.o nl_set_0_routines.o nl_set_1_routines.o nl_set_2_routines.o nl_set_3_routines.o nl_set_4_routines.o nl_set_5_routines.o nl_set_6_routines.o nl_set_7_routines.o module_alloc_space_0.o module_alloc_space_1.o module_alloc_space_2.o module_alloc_space_3.o module_alloc_space_4.o module_alloc_space_5.o module_alloc_space_6.o module_alloc_space_7.o module_alloc_space_8.o module_alloc_space_9.o
ar: creating ../main/libwrflib.a
ar: module_domain_type.o: No such file or directory
make[2]: [framework] Error 1 (ignored)
ranlib ../main/libwrflib.a
ranlib: '../main/libwrflib.a': No such file
Well stated:
"The idea is that, at the time code is developed, and by the person contributing the change, we're best positioned to describe in the eventual release notes the importance of or need for a particular set of changes. Writing up the release notes as we do development would also mean that almost anyone would be capable of preparing the release notes by simply grepping for "RELEASE NOTE" in the output of 'git log' and sorting the points as needed. This would save us perhaps an entire day of someone's time for future releases, since we wouldn't have to peruse 300 pages of 'git log' output, looking at code diffs, trying to tease out the importance (or lack thereof) of each and every commit."
Hello,
Having the capability for arbitrarily shaped domains would be a great addition to WRF.
It would save computational power when running large-grid / high-resolution simulations over coastal or mixed flat-land / mountainous areas.
I did a quick survey of the current code and internal mechanisms of WRF, and it looks to me like this could be implemented without major modification.
I think I am able to do the necessary coding, and I would be happy to give it a try.
I am looking for an academic or private institution that would be willing to:
Anyone interested?
Changing module_sf_fogdes.F does not cause a new module_sf_fogdes.o to be built in version 3.9.1.1. Not sure about v4.
In var/da/da_radiance/da_get_innov_vector_crtm.inc:
Atmosphere(1)%Level_Pressure(0) should be TOA_PRESSURE (from crtm_module), not model_ptop.
In module_dm.F (external/RSL_LITE/module_dm.F#L1488, from PR #10), the new function "wrf_dm_lor_logical" incorrectly uses the "MPI_LAND" operator instead of the "MPI_LOR" operator. This is presumably a copy-paste error from the other new function, "wrf_dm_land_logical".
See this page for details about the built-in operations for calls to mpi_reduce: http://mpi-forum.org/docs/mpi-1.1/mpi-11-html/node78.html
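The semantic difference between the two reduction operators can be shown with a plain-C analogue (no MPI needed here; each array element stands in for one rank's logical value): MPI_LOR yields true if ANY rank is true, MPI_LAND only if ALL ranks are true.

```c
#include <assert.h>

/* OR-reduction: what wrf_dm_lor_logical is supposed to compute. */
static int reduce_lor(const int *flags, int n)
{
    int acc = 0;
    for (int i = 0; i < n; i++) acc = acc || flags[i];
    return acc;
}

/* AND-reduction: what it actually computes with MPI_LAND. */
static int reduce_land(const int *flags, int n)
{
    int acc = 1;
    for (int i = 0; i < n; i++) acc = acc && flags[i];
    return acc;
}
```

With flags {0,1,0} the OR-reduction yields true but the AND-reduction yields false, so the two give the wrong answer whenever ranks disagree.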
The WRF SCM 3x3 stencil is below the minimum domain dimensions when WRF SCM is run on systems compiled for MPI. Even when a single processor is specified, the following error is printed to the rsl.error.0000 file:
For domain 1 , the domain size is too small for this many processors, or the decomposition aspect ratio is poor.
Minimum decomposed computational patch size, either x-dir or y-dir, is 10 grid cells.
e_we = 3, nproc_x = 1, with cell width in x-direction = 3
e_sn = 3, nproc_y = 1, with cell width in y-direction = 3
--- ERROR: Reduce the MPI rank count, or redistribute the tasks.
See PR #302. Maybe something like "global=.true.", which would alias to polar=.true. (derived namelist value).
rsl.error.0000 output
"""
D01 3-D analysis nudging for wind is applied and Guv= 0.3000E-03
D01 3-D analysis nudging for temperature is applied and Gt= 0.3000E-03
D01 3-D analysis nudging for water vapor mixing ratio is applied and Gq= 0.3000E-03
D01 surface analysis nudging for wind is applied and Guv_sfc= 0.3000E-03
D01 surface analysis nudging for temperature is applied and Gt_sfc= 0.3000E-03
D01 surface analysis nudging for water vapor mixing ratio is applied and Gq_sfc= 0.1000E-04
mediation_integrate.G 1943 DATASET=HISTORY
mediation_integrate.G 1944 grid%id 1 grid%oid 1
Timing for Writing /data/wrfoutput/wrfout_d01_2016-12-30_00:00:00.nc for domain 1: 0.47222 elapsed seconds
d01 2016-12-30_00:00:00 Error trying to read metadata
d01 2016-12-30_00:00:00 File name that is causing troubles = /data/wrfoutput/wrfout_d01_2016-12-30_00:00:00.nc
d01 2016-12-30_00:00:00 /data/wrfoutput/wrfout_d01_2016-12-30_00:00:00.nc
d01 2016-12-30_00:00:00 Error trying to read metadata
d01 2016-12-30_00:00:00 You can try 1) ensure that the input file was created with WRF v4 pre-processors, or
d01 2016-12-30_00:00:00 2) use force_use_old_data=T in the time_control record of the namelist.input file
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE: LINE: 324
---- ERROR: The input file appears to be from a pre-v4 version of WRF initialization routines
"""
The v4_metgrid variable is used inside share/input_wrf.F. The variable is not in the Registry.EM_COMMON_var.
Solution
The MYNN PBL has the EDMF (bl_mynn_edmf=1) option on by default. In check_a_mundo, when bl_mynn_edmf=1 the shallow cumulus scheme is turned off, even if the user specifically asks for the shallow cumulus option to be activated (shcu_physics=1).
IF ( model_config_rec%bl_mynn_edmf(i) .EQ. MYNN_STEM_EDMF .OR. &
model_config_rec%bl_mynn_edmf(i) .EQ. MYNN_TEMF_EDMF) THEN
model_config_rec%shcu_physics(i) = 0 ! maxdom
model_config_rec%ishallow = 0 ! not maxdom
END IF
This should be FATAL.
Refer to #246
When using old input files with WRF v4, we get an error message like the following:
DYNAMICS OPTION: Eulerian Mass Coordinate
alloc_space_field: domain 1 , 70085920 bytes allocated
med_initialdata_input: calling input_input
This input data is not V4: OUTPUT FROM REAL_EM V3.9.1.1 PREPROCESSOR
File name that is causing troubles = wrfinput_d01
wrfinput_d01
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE: <stdin> LINE: 318
Whoa there pard, there are troubles with this data. It might be too old.
-------------------------------------------
Given that there will likely be quite a few users who will try to use their old wrfinput files with this release of WRF, it may be good to have an error message that more clearly indicates the problem, and that doesn't confuse non-native English speakers.
How about:
DYNAMICS OPTION: Eulerian Mass Coordinate
alloc_space_field: domain 1 , 70085920 bytes allocated
med_initialdata_input: calling input_input
This input data is not V4: OUTPUT FROM REAL_EM V3.9.1.1 PREPROCESSOR
File name that is causing troubles = wrfinput_d01
wrfinput_d01
-------------- FATAL CALLED ---------------
FATAL CALLED FROM FILE: <stdin> LINE: 318
The wrfinput_d01 file appears to be from an older version of WRF. Please ensure that the wrfinput_d01 file was created with WRF v4 pre-processors
-------------------------------------------
I think we know the filename at the point the error message is written, so something like the above should be possible.
To be fair, we do state that the input file is "not V4" earlier, though in our experience, I think, users generally only read the last line or two of an rsl.error.0000 file.
To allow users to easily cut and paste, swap the ";" characters (in the README.namelist and examples.namelist files) for the standard comment demarcation character "!".
For example:
test/em_real/examples.namelist
** Optional gravitational settling of fog/cloud droplets (MYNN PBL only)
grav_settling = 1, ; default 0
run/README.namelist
For example, assume the job uses 10 nodes with 36 cores per node. To have 1 I/O processor per node, set:
&pio_control
usepio = .true., ; turn PIO on
pioprocs = 10, ; Number of processors to do IO
piostart = 0, ; starting from processor 0
piostride = 36, ; number of intervals between IO processors. pioprocs * piostride = total number of procs.
pioshift = 1, ; Shift PIO master to this processor (instead of default 0)
/
Source changes sent from Thomas Schwitalla for external/io_pnetcdf/wrfio.F90 and frame/module_bdywrite.F, in nf_create. The changes look to be similar to this:
Original
stat = NFMPI_CREATE(Comm, newFileName, IOR(NF_CLOBBER, NF_64BIT_OFFSET), info, DH%NCID)
New
stat = NFMPI_CREATE(Comm, newFileName, IOR(NF_CLOBBER, NF_64BIT_DATA), info, DH%NCID)
Needs netcdf 4.4, NCL 6.3, and pnetcdf 1.6 (those versions or greater).
From WRF Forum
V4 REAL rejecting OBSGRID analyses
Post by RCarpenter » Wed Jun 20, 2018 2:59 pm
I am using the OBSGRID metoa_em files as input to REAL. In v4, I get this error:
d01 2018-06-20_12:00:00 This input data is not V4: OUTPUT FROM OBSGRID
It appears that REAL is rejecting any METGRID netCDF files that don't have "V4" in the TITLE attribute.
As a workaround, I am setting force_use_old_data=T in the time_control record of the namelist.input file.
So, is this a bug or a feature?
Here is a simple Fortran program that exhibits different behavior when the OpenMP compile-time option is activated:
program mat_mult
#ifdef _OPENMP
print *,'yep, this is OpenMP parallel'
#else
print *,'nope, this is NOT OpenMP parallel'
#endif
end program
Here is an older Intel ifort compiler:
> ml | grep intel
Currently Loaded Modules:
1) ncarenv/1.2 3) numpy/1.13.3 5) scipy/0.19.1 7) intel/17.0.1 9) mpt/2.15f
2) python/2.7.13 4) netcdf4-python/1.2.7 6) matplotlib/2.0.2 8) ncarcompilers/0.4.1 10) netcdf/4.6.1
> ifort -openmp -o mat_mult mat_mult.F
We get a warning message about deprecated options:
ifort: command line remark #10411: option '-openmp' is deprecated and will be removed in a future release. Please use the replacement option '-qopenmp'
However, the executable is built and can run.
> mat_mult
yep, this is OpenMP parallel
When switching to the newer Intel ifort compiler, the -openmp option has now been removed.
> ml intel/18.0.1
> ifort -openmp -o mat_mult mat_mult.F
Which gives us this output
ifort: command line error: option '-openmp' is not supported. Please use the replacement option '-qopenmp'
As expected, the new -qopenmp option is OK
> ifort -qopenmp -o mat_mult mat_mult.F
> mat_mult
yep, this is OpenMP parallel
I tested this on WRF. The -qopenmp option seems safe at least going back to ifort 16.0.1.
From Craig Mattocks (NOAA)
In module_model_constants: do something like this:
real, parameter :: small = tiny(1.0)
Then import as needed in physics schemes:
!---------------
! Import modules
!---------------
use :: Constants, only: small
This gets rid of all of the myriad hard-coded tests like:
IF ( val(i,j) .LT. 1.e-30) THEN
blah
ENDIF
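The idea can be sketched in C (for illustration; the actual change would live in the Fortran module_model_constants): Fortran's tiny(1.0) corresponds to FLT_MIN in C, and one shared "small" constant replaces the scattered 1.e-30 literals.

```c
#include <assert.h>
#include <float.h>

/* One shared threshold instead of hard-coded literals in every scheme.
 * FLT_MIN is the smallest positive normal single-precision value,
 * the C analogue of Fortran's tiny(1.0). */
static const float small = FLT_MIN;

/* Helper name is hypothetical; it stands in for the repeated IF tests. */
static int is_negligible(float val)
{
    return val < small;
}
```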
Looks like "moved" is always false. In phys/module_physics_init.F, this is a temporary fix:
#if 0
949 IF ( .NOT. moved ) THEN
950 DO j=jts,jtf
951 DO i=its,itf
952 XLAND(i,j)=float(config_flags%ideal_xland)
953 GSW(i,j)=0.
954 GLW(i,j)=0.
955 !-- initialize ust to a small value
956 UST(i,j)=0.0001
957 MOL(i,j)=0.0
958 PBLH(i,j)=0.0
959 HFX(i,j)=0.
960 QFX(i,j)=0.
961 RAINBL(i,j)=0.
962 RAINNCV(i,j)=0.
963 SNOWNCV(i,j)=0.
964 GRAUPELNCV(i,j)=0.
965 ACSNOW(i,j)=0.
966 DO k=kms,kme !wig, 17-May-2006: Added for idealized chem. runs
967 EXCH_H(i,k,j) = 0.
968 END DO
969 ENDDO
970 ENDDO
971 ENDIF
#endif
When using io_metgrid=102 in WPS, all met_em files apart from the .0000 file contain NUM_LAND_CAT=0, leading to a stop in real.exe due to inconsistent land-use categories.
A similar issue occurs for the LAI12M flag, which is also missing except in the .0000 file.
At the initial time of the WRF model, the perturbation dry potential temperature field ("T" in the netcdf file) for d02 (and greater) is identically zero when the initial conditions for the fine-domain fields are manufactured via horizontal interpolation (in the WRF Registry, this horizontal interpolation at the initial time is activated through the "d" switch in the nest options).
This error is noticed with the following namelist settings:
&time_control
input_from_file = .true.,.false.,.false.,
/
&domains
max_dom = 2,
/
&dynamics
use_theta_m = 1,
/
With the default namelist.input file for ARW real-data cases, I get the following error when trying to run real.exe with metgrid output produced from GFS data going up to 100 Pa:
d01 2016-12-29_00:00:00 You need one of four things:
d01 2016-12-29_00:00:00 1) More eta levels: e_vert
d01 2016-12-29_00:00:00 2) A lower p_top: p_top_requested
d01 2016-12-29_00:00:00 3) Increase the lowest eta thickness: dzbot
d01 2016-12-29_00:00:00 4) Increase the stretching factor: dzstretch_s or dzstretch_u
d01 2016-12-29_00:00:00 All are namelist options
Increasing the value of e_vert to 35 resolved the issue. Should we consider increasing e_vert in the default namelist.input file?
Restarting a run with the MYNN PBL scheme cannot yield results identical to those from a continuous run.
@kkeene44 @smileMchen @davegill
For the new MP schemes that are getting into the WRF model, if there are binary lookup tables supplied by the developer, we need to do one of the following two items before the v4.1 release:
Include a location in the WRF commit template to link in a specific issue.
From a clean environment, the first build of WRF ALWAYS starts as non-compressed. The second build, if NC4 with HDF5 compression is available, uses compression.
cheyenne.ucar.edu:/glade2/scratch2/gill/NC34/WRF_nc4>configure << EOF > foo
? 13
? 1
? EOF
cheyenne.ucar.edu:/glade2/scratch2/gill/NC34/WRF_nc4>tail -5 foo
*****************************************************************************
This build of WRF will use classic (non-compressed) NETCDF format
*****************************************************************************
cheyenne.ucar.edu:/glade2/scratch2/gill/NC34/WRF_nc4>configure << EOF > foo
? 13
? 1
? EOF
cheyenne.ucar.edu:/glade2/scratch2/gill/NC34/WRF_nc4>tail -5 foo
*****************************************************************************
This build of WRF will use NETCDF4 with HDF5 compression
*****************************************************************************
(Originally reported at WRF-CMake#4)
When compiling WRF with -fcheck=bounds, running ideal.exe fails with:
At line 19288 of file .../inc/nl_config.inc
Fortran runtime error: Actual string length is shorter than the declared one for dummy argument 'mminlu' (4/256)
It seems that lots of code is correctly handling strings, e.g.:
WRF/dyn_em/module_initialize_fire.F
Lines 207 to 209 in 3ced231
However, there are cases which would trigger the runtime error:
WRF/dyn_em/module_initialize_ideal.F
Line 346 in 3ced231
A grep for nl_set_mminlu reveals a few more.
I would highly recommend turning all relevant runtime checks on, at least during testing, so that these issues can be caught early and definitely before new releases.
Hi there,
I ran into an array bounds exceedance error with a particular set of debug flags. Below is a suggested patch.
Regards,
Jeremy Silver
From 04b10ec3164c2a32fd45ed328361425e1bfcbcab Mon Sep 17 00:00:00 2001
From: Jeremy Silver <[email protected]>
Date: Mon, 8 Jan 2018 16:56:32 +1100
Subject: [PATCH 1/1] Prevent exceedance of 'seed' array in subroutine
cup_forcing_ens_3d
- The problem most likely only happens under unusual circumstances, but it is avoidable
---
phys/module_cu_g3.F | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/phys/module_cu_g3.F b/phys/module_cu_g3.F
index cf614c2..8a01ff5 100644
--- a/phys/module_cu_g3.F
+++ b/phys/module_cu_g3.F
@@ -3134,8 +3134,8 @@ CONTAINS
.743,.813,.886,.947,1.138,1.377,1.896/
!
seed=0
- seed(2)=j
- seed(3)=ktau
+ if(seed_size .ge. 2) seed(2)=j
+ if(seed_size .ge. 3) seed(3)=ktau
nens=0
irandom=1
if(high_resolution.eq.1)irandom=0
--
2.7.4
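The idea behind the patch can be sketched in C (for illustration; the patch itself is Fortran): the random-seed array length returned by random_seed(size=...) is implementation-dependent, so each write must be guarded against the actual length instead of assuming at least three elements.

```c
#include <assert.h>
#include <string.h>

/* Guarded seed fill, mirroring the patch's logic: zero the array, then
 * write positions 2 and 3 (Fortran indexing) only if they exist. */
static void fill_seed(int *seed, int seed_size, int j, int ktau)
{
    memset(seed, 0, (size_t)seed_size * sizeof *seed);
    if (seed_size >= 2) seed[1] = j;     /* Fortran seed(2) = j    */
    if (seed_size >= 3) seed[2] = ktau;  /* Fortran seed(3) = ktau */
}
```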
Taken from an email:
I have been trying to run the official WRF 4.0.1 release compiled with configure -D, but it is not working. WRF does not run; actually, I am not able to run Real with configure -D. I compiled with Intel and MPI (option 15). I tried turning off different options in the namelist, but Real does not run. I get a "floating divide by zero" error when Real is writing.
If I compile without the -D option in the configure WRF runs fine.
Having the configure -D working is very useful for model development.
If you have any idea of what could be happening, please let me know.
PS: if you need a case showing the problem, you can find it here:
/glade/scratch/jimenez/wrfout/wrfsolar-v2/d02/2018041615/run
The file WRF/share/module_model_constants.F lists quite a few constants that are used by various dynamics and physics schemes within the WRF model. With the eventual sharing of physics schemes, there is a need to easily know what these constants are, and the units.
Below is a snippet of the file. A couple of the constants already have descriptive comments with units - mighty handy indeed.
! These are the physical constants used within the model.
! JM NOTE -- can we name this grav instead?
REAL , PARAMETER :: g = 9.81 ! acceleration due to gravity (m {s}^-2)
#if ( NMM_CORE == 1 )
REAL , PARAMETER :: r_d = 287.04
REAL , PARAMETER :: cp = 1004.6
#else
REAL , PARAMETER :: r_d = 287.
REAL , PARAMETER :: cp = 7.*r_d/2.
#endif
REAL , PARAMETER :: r_v = 461.6
REAL , PARAMETER :: cv = cp-r_d
REAL , PARAMETER :: cpv = 4.*r_v
REAL , PARAMETER :: cvv = cpv-r_v
REAL , PARAMETER :: cvpm = -cv/cp
REAL , PARAMETER :: cliq = 4190.
REAL , PARAMETER :: cice = 2106.
REAL , PARAMETER :: psat = 610.78
REAL , PARAMETER :: rcv = r_d/cv
REAL , PARAMETER :: rcp = r_d/cp
REAL , PARAMETER :: rovg = r_d/g
REAL , PARAMETER :: c2 = cp * rcv
real , parameter :: mwdry = 28.966 ! molecular weight of dry air (g/mole)
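As a sketch of what the issue asks for, here are the derived constants from the snippet restated (in C, for illustration) with their units documented; the values are copied from the non-NMM branch above.

```c
#include <assert.h>
#include <math.h>

/* WRF physical constants with units spelled out (non-NMM values). */
#define G_CONST 9.81f               /* acceleration due to gravity, m s^-2        */
#define R_D     287.0f              /* dry-air gas constant, J kg^-1 K^-1         */
#define CP      (7.0f * R_D / 2.0f) /* dry-air cp at constant p, J kg^-1 K^-1     */
#define R_V     461.6f              /* water-vapor gas constant, J kg^-1 K^-1     */
#define CV      (CP - R_D)          /* dry-air cv at constant volume, J kg^-1 K^-1 */
#define RCP     (R_D / CP)          /* Rd/cp, dimensionless (2/7 for ideal diatomic) */
```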
Several users have pointed out an error in the wrf.exe program, noticeable in the pressure field, related to the usage of the moist potential temperature option (use_theta_m=1). In the WRF/dyn_em/start_em.F file, near line 782, the original code ALWAYS includes the scale factor of 1+Rv/Rd*Qv for the potential temperature. In the case that the incoming potential temperature variable is already moist theta (i.e., the user has set use_theta_m=1, which is now the default), then this is a repeated scaling.
Here is the original code, which is incorrect for use_theta_m=1.
781 #else
782 qvf = 1.+rvovrd*moist(i,k,j,P_QV)
783 grid%p(i,k,j)=p1000mb*( (r_d*(t0+grid%t_1(i,k,j))*qvf)/ &
784 (p1000mb*(grid%al(i,k,j)+grid%alb(i,k,j))) )**cpovcv &
785 -grid%pb(i,k,j)
786 #endif
The code should functionally be:
781 #else
IF(config_flags%use_theta_m == 0 ) THEN
782 qvf = 1.+rvovrd*moist(i,k,j,P_QV)
ELSE
qvf = 1
END IF
783 grid%p(i,k,j)=p1000mb*( (r_d*(t0+grid%t_1(i,k,j))*qvf)/ &
784 (p1000mb*(grid%al(i,k,j)+grid%alb(i,k,j))) )**cpovcv &
785 -grid%pb(i,k,j)
786 #endif
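The logic of the fix can be restated compactly (in C, for illustration; rvovrd is Rv/Rd with the values from module_model_constants): the (1 + Rv/Rd * Qv) factor must be applied only when the incoming theta is dry, because with use_theta_m=1 that factor is already folded into the variable.

```c
#include <assert.h>

/* Moisture scale factor for the pressure diagnostic: 1 + Rv/Rd * Qv
 * for dry theta, exactly 1 for moist theta (already scaled). */
static float qv_factor(int use_theta_m, float qv)
{
    const float rvovrd = 461.6f / 287.0f;  /* r_v / r_d */
    if (use_theta_m == 0)
        return 1.0f + rvovrd * qv;
    return 1.0f;  /* moist theta: applying the factor again would double-scale */
}
```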
Builds that use PIO should not use netCDF4 compression. The configure file needs to be fixed and cleaned up.
Dear WRF-Chem developers,
My question is regarding the GOCART aerosol module.
I am trying to understand why the instantaneous fluxes of gravitationally settled and dry deposited dust (ug/m2/s) are being stored instead of accumulated values (ug/m2)?
state real dustgraset_1 ij misc 1 - r "graset_1" "dust gravitational settling for size 1" "ug/m2/s"
state real dustgraset_2 ij misc 1 - r "graset_2" "dust gravitational settling for size 2" "ug/m2/s"
state real dustgraset_3 ij misc 1 - r "graset_3" "dust gravitational settling for size 3" "ug/m2/s"
state real dustgraset_4 ij misc 1 - r "graset_4" "dust gravitational settling for size 4" "ug/m2/s"
state real dustgraset_5 ij misc 1 - r "graset_5" "dust gravitational settling for size 5" "ug/m2/s"
state real dustdrydep_1 ij misc 1 - r "drydep_1" "dust dry deposition for size 1" "ug/m2/s"
state real dustdrydep_2 ij misc 1 - r "drydep_2" "dust dry deposition for size 2" "ug/m2/s"
state real dustdrydep_3 ij misc 1 - r "drydep_3" "dust dry deposition for size 3" "ug/m2/s"
state real dustdrydep_4 ij misc 1 - r "drydep_4" "dust dry deposition for size 4" "ug/m2/s"
state real dustdrydep_5 ij misc 1 - r "drydep_5" "dust dry deposition for size 5" "ug/m2/s"
It is impossible to calculate accumulated values, since not all fluxes are written to the output.
However, it would be possible to recover the fluxes if the accumulated values were stored in the output instead.
Could you please advise?
I could modify the code to store accumulated values instead of fluxes.
---- ERROR: The input file appears to be from a pre-v4 version of WRF initialization routines. (I am using WPS 4.0.1, built with netcdf 3.6.3.)
From a developer:
Is it possible to change the ifdef for _MULTI_BDY_FILES_ in share/mediation_integrate.F to some kind of namelist switch? I'd rather not have to have a separate executable for a multi_bdy_file run and a traditional WRF run.
Also, since this feature appears in the released code, the capability and instructions for its use should probably appear in some document.
The CO2 concentration is currently set to 379 ppm. Recent observations suggest that the current concentration is close to 400 ppm, so this needs to be adjusted. This may have an effect on longer-term simulations.
It seems that a bug may be present in noahmplsm when the model domain covers both the northern and southern hemispheres. In subroutine phenology, with e.g. dveg=4 (the default value), the day is shifted by half a year, causing strong temperature gradients at the equator because the GREENFRAC and LAI fields suddenly change.
It looks like the variable grid%t_2 is used as potential temperature in the tslist routine and in the diagnostic routines (pld, zld, mean_output_calc, and diurnalcycle_output_calc). We need to identify if dry potential temperature perturbation is desired and assumed.
Support is needed for the code-named Skylake processor and the upcoming Cascade Lake processor.
It just needs a CPU target change from the existing KNL.
###########################################################
#ARCH Linux SKX/CLX x86_64 ppc64le i486 i586 i686 #serial smpar dmpar dm+sm
#
DESCRIPTION = INTEL ($SFC/$SCC): Xeon Scalable Processor (SKX/CLX)
DMPARALLEL = # 1
OMPCPP = # -D_OPENMP
OMP = # -qopenmp -fpp -auto
OMPCC = # -qopenmp -fpp -auto
SFC = ifort
SCC = icc
CCOMP = icc
DM_FC = mpif90 -f90=$(SFC)
DM_CC = mpicc -cc=$(SCC)
FC = CONFIGURE_FC
CC = CONFIGURE_CC
LD = $(FC)
RWORDSIZE = CONFIGURE_RWORDSIZE
PROMOTION = -real-size `expr 8 \* $(RWORDSIZE)` -i4
ARCH_LOCAL = -DNONSTANDARD_SYSTEM_FUNC -DWRF_USE_CLM
CFLAGS_LOCAL = -w -O3 -ip -fp-model fast=2 -no-prec-div -no-prec-sqrt -ftz -no-multibyte-chars -xCORE-AVX512
LDFLAGS_LOCAL = -ip -fp-model fast=2 -no-prec-div -no-prec-sqrt -ftz -align all -fno-alias -fno-common -xCORE-AVX512
CPLUSPLUSLIB =
ESMF_LDFLAG = $(CPLUSPLUSLIB)
FCOPTIM = -O3
FCREDUCEDOPT = $(FCOPTIM)
FCNOOPT = -O0 -fno-inline -no-ip
FCDEBUG = # -g $(FCNOOPT) -traceback # -fpe0 -check noarg_temp_created,bounds,format,output_conversion,pointers,uninit -ftrapuv -unroll0 -u
FORMAT_FIXED = -FI
FORMAT_FREE = -FR
FCSUFFIX =
BYTESWAPIO = -convert big_endian
RECORDLENGTH = -assume byterecl
FCBASEOPTS_NO_G = -ip -w -ftz -align all -fno-alias $(FORMAT_FREE) $(BYTESWAPIO) -fp-model fast=2 -no-heap-arrays -no-prec-div -no-prec-sqrt -fno-common -xCORE-AVX512
FCBASEOPTS = $(FCBASEOPTS_NO_G) $(FCDEBUG)
MODULE_SRCH_FLAG =
TRADFLAG = CONFIGURE_TRADFLAG
CPP = /lib/cpp CONFIGURE_CPPFLAGS
AR = ar
ARFLAGS = ru
M4 = m4
RANLIB = ranlib
RLFLAGS =
CC_TOOLS = $(SCC)
I am trying to compile the model and I am mainly getting the following error message:
make[3]: time: Command not found
I do have the time program working on my machine (using Arch Linux 4.18) and I don't understand why this is happening.
I am attaching the entire build log and error messages below.
Thank you.
bld.log
convert_emiss.exe fails to compile, with the following error message:
This option is not recognized: emi_conv
A likely solution is to put an IF test in the compile script, something like:
61 else if ( "$a" == "em_real" ) then
62 set arglist = ( $arglist $a )
63 set ZAP = ( main/wrf.exe main/real.exe main/ndown.exe main/tc.exe )
The ZAP part should explicitly include the executables that will be generated (so that they are explicitly removed during the build).
Associated with PR #329, provide a README with information for users. For example, something like this ...
These coefficient files are binary, cannot be tracked by git, and will be posted on the WRFDA website for download starting with release 4.0. A copy will also be made available under ~wrfhelp on the NCAR cheyenne supercomputer (for internal use). The full set of CRTM coefficient files can be downloaded from the official CRTM ftp site http://ftp.emc.ncep.noaa.gov/jcsda/CRTM/.
I keep getting an error message when I try to compile WRF Plus.
I configured with the Intel compilers (option 10) and netcdf 4.6.2 (without netcdf4), and I had no problems compiling WRF and WPS.
When I compile WRF Plus, I get the following error messages:
mpiifort -o ../main/module_wrf_top.o -c -O3 -xCORE-AVX512 -w -auto -ftz -fno-alias -fp-model fast=1 -no-prec-div -no-prec-sqrt -FR -convert big_endian -auto -align array64byte -I../dyn_em -I../dyn_nmm -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/external/esmf_time_f90 -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/main -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/external/io_netcdf -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/external/io_int -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/frame -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/share -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/phys -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/wrftladj -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/chem -I/home/cross/WRF/4.0.2_nc3/WRF_Plus/inc -I/home/cross/Software/nc3/libs/netcdf/include -r8 -real-size `expr 8 \* 8` -i4 ../main/module_wrf_top.f90
../main/module_wrf_top.f90(756): error #6099: An ENDDO statement occurred without a corresponding DO or DO WHILE statement.
ENDDO
^
compilation aborted for ../main/module_wrf_top.f90 (code 1)
../configure.wrf:346: recipe for target '../main/module_wrf_top.o' failed
So I've searched for this problem, and it is supposed to be a missing ENDIF, but that doesn't seem to be the case here. Here is the code from module_wrf_top.f90, lines 745 to 756:
IF ( config_flags%check_TL .or. config_flags%check_AD ) THEN
CALL allocate_grid ( )
!$OMP PARALLEL DO &
!$OMP DEFAULT (SHARED) PRIVATE ( ij ) &
DO ij = 1 , head_grid%num_tiles
CALL copy_grid_to_s ( head_grid , &
head_grid%i_start(ij), head_grid%i_end(ij), &
head_grid%j_start(ij), head_grid%j_end(ij) )
ENDDO
does anyone know what could be the problem here?
In phys/module_shcu_grims.F, line 1148, in function fthex, an exponent is limited to 100. That value is too large for single precision.
if(pd.gt.0.) then
el=hvap+dldt*(t-ttp)
expo=el*eps*pv/(cp*t*pd)
expo = min(expo,100.0)
fthex=t*pd**(-rocp)*exp(expo)
It doesn't work and no one uses it.