Heracles Single Column Model
This page describes the steps necessary to set up and run the Single Column Model (SCM) under Heracles on discover. It assumes that you have successfully run the model as described in the Quick Start guide.
Setting Up and Running Existing SCM Experiments
The GEOS-5 executable will run the single column model with any of the cases below. The executable of the Single Column Model is identical to the global model's; the only differences are the environment and settings with which it runs. It uses a different setup script from global runs, src/Applications/GEOSgcm_App/scm_setup; you do not run the gcm_setup script as you do to set up a global run. The setup script is specific to model tags, so use only the one that comes with the model tag you are running.
At the time of this writing there are twenty-two experiments to choose from:
- ARM Southern Great Plains site (http://www.arm.gov/sites/sgp), July 1997
- KWAJEX (http://www.atmos.washington.edu/kwajex/)
- ARM-SCSMEX (http://www.cawcr.gov.au/bmrc/wefor/research/scsmex.htm)
- ARM-TWP (http://acrf-campaign.arm.gov/twpice/)
- TRMM-LBA (http://radarmet.atmos.colostate.edu/lba_trmm/)
- Five experiments with about the same times and locations as the five above, but forced with MERRA data
- TOGA COARE (http://www.atmos.washington.edu/togacoare/summaries.html)
- Six CGILS cases (http://atmgcm.msrc.sunysb.edu/cfmip_figs/Case_specification.html)
- NAMECORE (forced with MERRA data) (http://www.eol.ucar.edu/projects/name/)
- NAMEAZNM (forced with MERRA data) (http://www.eol.ucar.edu/projects/name/)
- NAMMA (forced with MERRA data) (http://namma.msfc.nasa.gov/)
- A MERRA-forced experiment in Barrow, March 2013
- A MERRA-forced experiment at KAN L, Greenland, March 2013
Create your own run directory in $NOBACKUP, then modify and uncomment the first executable line of scm_setup, which assigns ESMADIR to the path of the local GEOS-5 build you are using for the SCM (you may already have this set as an environment variable). Uncomment one of the lines that assigns the variable CASEDIR to choose the experiment to run. Then run the script from the run directory you have created. For example, if you selected the arm_97jul case:
$ ./scm_setup
arm_97jul
It will copy all of the necessary resource, forcing and data files to the working directory:
$ ls -ltr
total 11M
-rwxr-xr-x 1 mathomp4 g0620 5.0K Jun 22 14:35 scm_setup*
lrwxrwxrwx 1 mathomp4 g0620   78 Jun 22 14:35 GEOSgcm.x -> /discover/swdev/mathomp4/Models/Heracles-UNSTABLE/GEOSagcm/Linux/bin/GEOSgcm.x*
-rwxr-xr-x 1 mathomp4 g0620 2.5M Jun 22 14:35 arm_97jul.dat*
-rwxr-xr-x 1 mathomp4 g0620   12 Jun 22 14:35 topo_dynave.data*
-rwxr-xr-x 1 mathomp4 g0620  15K Jun 22 14:35 fraci.data*
-rwxr-xr-x 1 mathomp4 g0620 1.1K Jun 22 14:35 SEAWIFS_KPAR_mon_clim.data*
-rwxr-xr-x 1 mathomp4 g0620  15K Jun 22 14:35 sst.data*
-rwxr-xr-x 1 mathomp4 g0620  15K Jun 22 14:35 sstsi.data*
-rwxr-xr-x 1 mathomp4 g0620  464 Jun 22 14:35 tile.data*
-rwxr-xr-x 1 mathomp4 g0620   12 Jun 22 14:35 topo_gwdvar.data*
-rwxr-xr-x 1 mathomp4 g0620   12 Jun 22 14:35 topo_trbvar.data*
-rwxr-xr-x 1 mathomp4 g0620 3.6K Jun 22 14:35 lai.dat*
-rwxr-xr-x 1 mathomp4 g0620 1.1K Jun 22 14:35 green.dat*
-rwxr-xr-x 1 mathomp4 g0620 1.1K Jun 22 14:35 nirdf.dat*
-rwxr-xr-x 1 mathomp4 g0620   12 Jun 22 14:35 vegdyn.data*
-rwxr-xr-x 1 mathomp4 g0620 1.1K Jun 22 14:35 visdf.dat*
-rwxr-xr-x 1 mathomp4 g0620  768 Jun 22 14:35 catch_internal_rst*
Each experiment requires its own directory. If you modify the resource files (e.g., HISTORY.rc) you may want to copy the setup directory to your own area and modify it and the setup script accordingly so that you don't clobber your modifications.
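Put together, a minimal setup session might look like the following sketch. The build path is a placeholder for your own build, and copying scm_setup into the run directory is just one way to do it:

$ cd $NOBACKUP
$ mkdir scm_arm_97jul
$ cd scm_arm_97jul
$ cp /discover/path/to/your/Heracles-5_2/GEOSagcm/src/Applications/GEOSgcm_App/scm_setup .
  (edit scm_setup here: uncomment and set ESMADIR, and uncomment the CASEDIR line for the case you want)
$ ./scm_setup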
Then you can run the model executable from the command line in the directory you created. You will have to load the proper modules by sourcing src/g5_modules in the build you are using. Although it runs on a single processor, on discover you should run it from an interactive job on a compute node (as opposed to the discover front end). This can be done by running:
$ xalloc --ntasks=1 --time=08:00:00 --job-name=Interactive --account=accountnumber
salloc: Pending job allocation 8883687
salloc: job 8883687 queued and waiting for resources
salloc: job 8883687 has been allocated resources
salloc: Granted job allocation 8883687
srun.slurm: cluster configuration lacks support for cpu binding
$ hostname
borgo065
$
where accountnumber is the account you wish to charge to. (NOTE: Your hostname is probably different. This is just to show you are no longer on a discover head node, but on a borg compute node.)
Once the allocation is granted, it starts an interactive shell on the job node, from which you can run the GEOS-5 executable. Since all of the necessary configuration files are copied to the experiment directory by scm_setup, running the SCM requires none of the extra environment setup that the global run script gcm_run.j provides.
To run the SCM, first source g5_modules, then launch the executable:
$ setenv ESMADIR /discover/path/to/your/Heracles-5_2/GEOSagcm
$ source $ESMADIR/src/g5_modules
$ mpirun -np 1 ./GEOSgcm.x |& tee run.log
srun.slurm: cluster configuration lacks support for cpu binding
 In MAPL_Shmem:
     NumCores per Node = 1
     NumNodes in use   = 1
     Total PEs         = 1
 In MAPL_InitializeShmem (NodeRootsComm):
     NumNodes in use   = 1
 Integer*4 Resource Parameter HEARTBEAT_DT: 1800
 NOT using buffer I/O for file: cap_restart
 Read CAP restart properly, Current Date = 1997/06/18
                            Current Time = 23:30:00
 Character Resource Parameter ROOT_CF: AGCM.rc
 Character Resource Parameter ROOT_NAME: GCS
 ...
and it should run.
Creating Driving Datasets from MERRA
Given the resource and other files that come with a complete SCM configuration (either from an existing case or created with the procedure below), a driving data file for the same location and time span can be generated from MERRA output. Note that the current scheme for MERRA data does not include analysis increments in the advection terms, though it probably should.
Obtaining MERRA Data
MERRA output files are located under /archive/u/dao_ops/MERRA/dao_ops/production/GEOSdas-2_1_4 on NCCS discover. You will need the output files spanning the desired dates of the experiment from the following collections:
inst3_3d_asm_Cp
tavg3_3d_tdt_Cp
tavg3_3d_qdt_Cp
tavg1_2d_slv_Nx
tavg1_2d_flx_Nx
While you only need one gridpoint or so from each collection, you will have to copy over the entire collection from /archive, which may take a couple of days even with the use of dmget. (There may be a better way to do this with OPeNDAP.)
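A csh sketch of the copy is below; the subdirectory layout and file-name pattern under the MERRA archive shown here are assumptions, so check the archive for the actual layout before copying:

# csh sketch -- archive layout and date pattern are assumptions; verify before copying
set MERRADIR = /archive/u/dao_ops/MERRA/dao_ops/production/GEOSdas-2_1_4
set DEST     = $NOBACKUP/merra_for_scm
mkdir -p $DEST
foreach coll ( inst3_3d_asm_Cp tavg3_3d_tdt_Cp tavg3_3d_qdt_Cp tavg1_2d_slv_Nx tavg1_2d_flx_Nx )
    dmget $MERRADIR/*/$coll/*1997*          # stage the files off tape first
    cp -p $MERRADIR/*/$coll/*1997* $DEST/
end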
Generating the Driving Data
Now the ASCII .dat file used for the driving data can be created.
Under the directory src/GMAO_Shared/GEOS_Util/post in your Fortuna build, apply the following CVS command:

cvs upd -r b_afe_Fortuna_merra2scm merra2scm.F GNUmakefile
This will check out the source file merra2scm.F; it must be modified for the time and location of the data set to be created. Change the parameters begdate and enddate to the dates you want to cover, but leave begtime and endtime alone. If you are replicating an existing experiment, begdate and enddate can be obtained from that experiment's cap_restart and CAP.rc, respectively. The parameters lonbegin, lonend, latbegin, and latend specify the location; appropriate values can be gleaned from the filenames in the appropriate experiment under /discover/nobackup/aeichman/scm/scminfiles/ -- for example, the filename tile.data_simple1_XY1x1-C_34N_100W_38N_95W. (Note that these file names are truncated when copied by the SCM setup script.) Finally, change the variable dirname to the directory where you copied the MERRA data.
Then cd up to the src directory, run make install, and run the executable merra2scm.x. It will generate the driving data file merra_scm.dat, which can be used to replace the one that comes with the experiment data.
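The whole sequence might look like this sketch; the install location of merra2scm.x is an assumption, so check your build's bin directory:

$ cd $ESMADIR/src/GMAO_Shared/GEOS_Util/post
$ cvs upd -r b_afe_Fortuna_merra2scm merra2scm.F GNUmakefile
  (edit merra2scm.F: begdate, enddate, lonbegin, lonend, latbegin, latend, dirname)
$ cd $ESMADIR/src
$ make install
$ $ESMADIR/Linux/bin/merra2scm.x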
Creating New SCM Case Boundary Conditions
To create the boundary condition files for a new SCM case with a location and time span different from the ones already made, there is a set of IDL scripts in:

src/GEOSgcs_GridComp/GEOSgcm_GridComp/GEOSagcm_GridComp/GEOSsuperdyn_GridComp/GEOSdatmodyn_GridComp/idl
IDL can be run from the dali machine, which you should be able to ssh to from discover. See the NCCS documentation for IDL for help: http://www.nccs.nasa.gov/dali_qna.html#step5. Follow the steps there and start IDL (idl). To run the script make_bcs_ics.pro, enter:

IDL> .run make_bcs_ics
The procedure requires the restart files fvcore_internal_rst, moist_internal_rst and catch_internal_rst (with or without dates appended), seasonally appropriate for the experiment's start date, chosen in the priority month-day-year. Most available restart files have the time 2100z -- the effects of a mismatch between the restart time-of-day and the model's start time apparently diminish after a few days of spinup.
In the file make_bcs_ics.pro make the following changes:

- odir to an appropriate directory in your area for input files
- gdirbase to a directory in your area for output
- cr to the geographic range in coordinate degrees ([S W N E])
- casename to a directory-appropriate name (this will be a subdirectory created for output in gdirbase)
- sst_impose to the desired SST boundary condition (if necessary, in K)
- ntype (tile surface type) to 100 if over land, 0 if over sea
- 'moist_internal_rst.b19830214_21z' to the name of your moist internal restart file
- the instances of 'fvcore_internal_rst.b19830214_21z' to the name of your fvcore internal restart file
In odir, place the following files:
catch_internal_rst*
FV_144x91_DC_360x180_DE.til
fvcore_internal_rst*
lai_green_clim_144x91_DC.data
moist_internal_rst*
nirdf_144x91_DC.dat
topo_DYN_ave_144x91_DC.data
topo_GWD_var_144x91_DC.data
topo_TRB_var_144x91_DC.data
vegdyn_144x91_DC.dat
visdf_144x91_DC.dat
The files other than the restarts can currently be obtained from /discover/nobackup/amolod/bcs/144x91/.
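For example, the inputs could be staged with something like this csh sketch; the odir path and the restart location are placeholders for your own choices:

set BCSDIR = /discover/nobackup/amolod/bcs/144x91
set ODIR   = $NOBACKUP/scm_case_input            # whatever you set odir to in make_bcs_ics.pro
mkdir -p $ODIR
cp $BCSDIR/FV_144x91_DC_360x180_DE.til $ODIR
cp $BCSDIR/*_144x91_DC.dat $BCSDIR/*_144x91_DC.data $ODIR
cp /path/to/restarts/*_internal_rst* $ODIR       # your seasonally appropriate restarts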
Then run make_bcs_ics.pro from the IDL command line. This will create a set of files in gdirbase/casename.
Now you have to select a tile from the file FV_144x91_DC_360x180_DE.til. After the header, each line contains the specifications for one tile. Find a tile close to the location of your experiment -- the third column is longitude, the fourth latitude. The first column should be the same as the ntype in make_bcs_ics.pro. The last column is the tile number, which you should record.
Matlab (running on dali) can make this task easier. First edit a copy of FV_144x91_DC_360x180_DE.til, deleting the file header (the first eight lines). Then from Matlab you can load it. The following is an example of finding tiles near 39N 77W:
>> format shortG
>> load FV_144x91_DC_360x180_DE.til
>> lat=40;lon=-77;
>> tilelines=find(int8(FV_144x91_DC_360x180_DE(:,4))==lat & int8(FV_144x91_DC_360x180_DE(:,3))==lon );FV_144x91_DC_360x180_DE(tilelines,[1 4 3 12])

ans =

          100       39.963      -76.626         6894
          100        39.96      -76.975         6895
          100       40.182      -76.625         6896
          100       40.187      -76.899         6899
          100       40.134      -77.357         6900
          100       40.353      -77.128         6901
          100       39.595      -77.256         6935
           19       39.566      -76.569        67187

>>
The last command displays candidate tiles with their land/sea value, latitude, longitude, and tile number. It might make sense to examine adjacent whole lat/lon values as well.
Edit make_land_files.pro so that bcsodir, xdir and casename are the same as odir, gdirbase and casename, respectively, in make_bcs_ics.pro. Also change itile to the tile number you recorded from the tile file and catchname to the name of your catchment restart. Then run the script. It will create a subdirectory Landfiles in the output directory and generate the land BC files there.
To make an appropriate AGCM.rc, copy one from an existing SCM case, and change the following (a sketch follows the list):

- AGCM_GRIDNAME and OGCM_GRIDNAME to reflect the coordinates in the filenames of the files that you just generated
- DRIVER_DATA to the name of your driving data file (for example, one created from MERRA data as in the section above)
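A sketch of the relevant AGCM.rc lines; the grid-name value shown is hypothetical (extrapolated from the tile.data filename earlier), so take the actual coordinates from the filenames of the files you just generated:

AGCM_GRIDNAME: XY1x1-C_34N_100W_38N_95W    # hypothetical -- use the coordinates in your generated filenames
OGCM_GRIDNAME: XY1x1-C_34N_100W_38N_95W
DRIVER_DATA: merra_scm.dat                 # e.g., the file produced by merra2scm.x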
Likewise, copy a CAP.rc and change the END_DATE as appropriate. Do the same for the start date in cap_restart. A HISTORY.rc can be copied without modification. Keep AGCM.rc, CAP.rc, cap_restart, and HISTORY.rc with the output from the IDL scripts. The IDL output files will have to be either renamed or linked to names that the model will recognize -- see /discover/nobackup/aeichman/scm/scminfiles/arm_97jul for an example. You should have the following:
AGCM.rc
CAP.rc
cap_restart
catch_internal_rst
datmodyn_internal_rst
fraci.data
fvcore_internal_rst
HISTORY.rc
laigrn.data
moist_internal_rst
nirdf.dat
SEAWIFS_KPAR_mon_clim.data
sst.data
sstsi.data
tile.data
topo_dynave.data
topo_gwdvar.data
topo_trbvar.data
vegdyn.data
visdf.dat
These files, plus a driving data file, comprise the case-specific files for an SCM case, similar to the cases in /discover/nobackup/aeichman/scm/scminfiles/, and getSCMdata.sh can then be used to set up the model environment to run.
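For the renaming/linking step mentioned above, a sketch follows; the generated filenames shown are hypothetical extrapolations of the tile.data example earlier, so take the real names from your gdirbase/casename output and compare against the arm_97jul example directory:

$ cd /path/to/gdirbase/casename
$ ln -s tile.data_simple1_XY1x1-C_34N_100W_38N_95W  tile.data      # example name pattern
$ ln -s visdf.dat_simple1_XY1x1-C_34N_100W_38N_95W  visdf.dat      # hypothetical; repeat for the other data files
$ cat cap_restart
19970618 233000                                                    # YYYYMMDD HHMMSS; this example matches arm_97jul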
Some Discussion About How to Use and Configure SCM
The following section contains excerpts from user emails, along with replies, that may answer questions that come up.
The VERTICAL_ADVECTION Flag
From ARM_97JUL, I do see there is a flag "VERTICAL_ADVECTION" in AGCM.rc. I am trying to turn it off by setting it to "0". Is it the right way to use observed vertical advection?
yes sir, that is the way to turn it off. but turning it off is what gives 50 deg temp biases. there is a code change that i am testing today or tomorrow that greg walker tried and said made a big difference. we are suspecting that the T vertical advection term in the obs dataset is missing the adiabatic expansion term (ie, that it is dT/dp and not the total vert tend). so we will try to use the vertical advection of s term (it is already divided by Cp i think). in that case the idea would be to go into the reader.F90 term and (greg walker did this) do something like:
add the line:
TMP(13,:,:) = TMP(13,:,:)/3600.0
near where we do other stuff like this. and then: instead of:
T_V_ADV(i,k) = -dv(7,K,I)
put in something like:
if (filename.eq."arm_97jul.dat") then
   T_V_ADV(i,k) = -dv(13,K,I)   ! Vertical_s_Advec/cp (K/s)
else
   T_V_ADV(i,k) = -dv(7,K,I)    ! Vertical_T_Advec (K/s) [is omega*alpha included?]
endif
got it? the idea is to use the vertical advection of s term for arm 97 july case for now (its possible that there are other cases that we have to do this also, but until we know that we want to try it for arm 97 july only).
so - if you want to turn off the calculation of vertical advection you set VERTICAL_ADVECTION to 0 in the AGCM.rc (or leave it out - 0 is the default). and then i would HIGHLY recommend doing what i suggest here.
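For reference, turning the calculation off amounts to this single line in AGCM.rc (0 is also the default when the line is absent):

VERTICAL_ADVECTION: 0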
CGILS Experiments
It will be great if we can have some cases to study low clouds such as off coasts of California and Peru. Joao and I are interested to use SCM for the low clouds study.
just to let you both know that there is a new set of cases that we can now do with the scm, but they may not make their way into the 'official' set of cases. they are the CGILS cases. this is the set of CFMIP-GCSS simulations at three points on the transect from the calif coast to the mid pacific (stratus, strato cu, cu). the CGILS project includes a set of 3 or 4 LES simulations of the same 3 spots. the SCM forcing is idealized and the simulation is perpetual july 15 with no diurnal cycle (code changes needed for this to the model will not be in 'official' code releases - that's why these experiments won't be ones that are on the 'list'), but the set-up is great for testing pbl schemes and the interaction with the shallow and deep convection in marine boundary layers. i have done the simulations with our fortuna-2_0 code and i've also done a suite of sensitivity experiments and am continuing to do more. if you'd like the code mods for these runs i can provide them.
Parameters for progno_cloud
The following is a list of parameters for progno_cloud that can be set in the AGCM.rc configuration file.
Slot  Name (AGCM.rc)      default  var. name       description
----  --------------      -------  ---------       -----------
  1   'CNV_BETA:',        10.0     CNV_BETA        Divide convective rain by cnv_beta for Marsh-Palm drop size, number, velocity - used for evap of rain
  2   'ANV_BETA:',        4.0      ANV_BETA        Divide anvil rain rate by anv_beta
  3   'LS_BETA:',         4.0      LS_BETA         Divide Large Scale rain by ls_beta
  4   'RH_CRIT:',         1.0      RH00            Upper limit on critical relative humidity for evap/condense
  5   'AUTOC_LS:',        2.0e-3   C_00            Multiplication factor (+unit conversion) for autoconversion rate (autoconvert exp(-rate * dt) )
  6   'QC_CRIT_LS:',      8.0e-4   LWCRIT          Scale autoconversion (impact ~ 1 - exp(-1/lwcrit)**2 )
  7   'ACCRETION:',       2.0      C_ACC           Scale factor for accretion of cloud water by rain/snow
  8   'BASE_REVAP_FAC:',  1.0      C_EV            Scale factor for rain/snow re-evap (re-evap ~ 1 - exp(- c_ev) )
  9   'VOL_TO_FRAC:',     -1.0     CLDVOL2FRC      Not used
 10   'SUPERSAT:',        0.0      RHSUP_ICE       Not used
 11   'SHEAR_EVAP_FAC:',  1.3      SHR_EVAP_FAC    Not used
 12   'MIN_ALLOW_CCW:',   1.0e-9   MIN_CLD_WATER   Not used
 13   'CCW_EVAP_EFF:',    3.3e-4   CLD_EVP_EFF     Scale for evap of cloud water/(subl of ice) (+unit conv)
 14   'NSUB_AUTOCONV:',   20.      NSMAX           Not used
 15   'LS_SUND_INTER:',   4.8      LS_SDQV2        Factor to control how fast LS ice autonv drops at cold temps
 16   'LS_SUND_COLD:',    4.8      LS_SDQV3        Factor to control how fast LS ice autonv drops at coldest temps
 17   'LS_SUND_TEMP1:',   230.     LS_SDQVT1       Temp at which to start decrease in LS ice autoconv ramping
 18   'ANV_SUND_INTER:',  1.0      ANV_SDQV2       Factor to control how fast anvil ice autonv drops at cold temps
 19   'ANV_SUND_COLD:',   1.0      ANV_SDQV3       Factor to control how fast anvil ice autonv drops at coldest temps
 20   'ANV_SUND_TEMP1:',  230.     ANV_SDQVT1      Temp at which to start decrease in anvil ice autoconv ramping
 21   'ANV_TO_LS_TIME:',  14400.   ANV_TO_LS       Not used
 22   'NCCN_WARM:',       50.      N_WARM          Not used
 23   'NCCN_ICE:',        0.01     N_ICE           Not used
 24   'NCCN_ANVIL:',      0.1      N_ANVIL         Not used
 25   'NCCN_PBL:',        200.     N_PBL           Not used
 26   'DISABLE_RAD:',     0.       DISABLE_RAD     Flag (=1) to disable radiative interaction with cloud/rain
 27   'ICE_SETTLE:',      0.                       Not used
 28   'ANV_ICEFALL:',     0.5      ANV_ICEFALL_C   Scale for fall rate of anvil ice (used to scale ice autoconv)
 29   'LS_ICEFALL:',      0.5      LS_ICEFALL_C    Scale for fall rate of LS ice (used to scale ice autoconv)
 30   'REVAP_OFF_P:',     2000.    REVAP_OFF_P     Max pressure at which to do precip re-evap (mb)
 31   'CNV_ENVF:',        0.8      CNVENVFC        Scale factor for convective rain/snow re-evap (re-evap ~ 1 - exp(- envfrac) ) - fraction of re-evap in environment as opposed to in the cloud
 32   'WRHODEP:',         0.5      WRHODEP         Control rate of dec/incr of ice fall speed with high/low press
 33   'ICE_RAMP:',        -40.0    T_ICE_ALL       = ICE_RAMP + MAPL_TICE - Temp at which all cloud/precip is ice (fraction=1, use L of ice)
 34   'CNV_ICEPARAM:',    1.0      CNVICEPARAM     Control on how much new conv precip is ice (1=> use ice fraction, 0=> all liquid)
 35   'CNV_ICEFRPWR:',    4.0      ICEFRPWR        = CNV_ICEFRPWR + .001 -- Scale ice fraction (liquid/ice partition for condensation/evap/melting&freezing): Fraction = Fraction ** icefrpowr
 36   'CNV_DDRF:',        0.0      CNVDDRFC        Fraction of re-evap of conv precip to reserve for re-evap lower in atm (in a "downdraft")
 37   'ANV_DDRF:',        0.0      ANVDDRFC        Fraction of re-evap of conv precip to reserve for re-evap lower in atm (in a "downdraft")
 38   'LS_DDRF:',         0.0      LSDDRFC         Fraction of re-evap of conv precip to reserve for re-evap lower in atm (in a "downdraft")
 39   'AUTOC_ANV:',       1.0e-3                   Not used
 40   'QC_CRIT_ANV:',     8.0e-4                   Not used
 41   'TANHRHCRIT:',      1.       tanhrhcrit      Flag to use tanh vertical profile for Rh crit for condens/evap
 42   'MINRHCRIT:',       0.8      minrhcrit       Min Rh in tanh profile
 43   'MAXRHCRIT:',       1.0      maxrhcrit       Max Rh in tanh profile
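As an illustration of how these appear in AGCM.rc, a few entries are shown below; the values are just the defaults from the table above:

CNV_BETA: 10.0
AUTOC_LS: 2.0e-3
LS_ICEFALL: 0.5
ANV_ICEFALL: 0.5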