Setting Up the Fortuna 2.0 Single Column Model

From GEOS-5

This page describes the steps and modifications necessary to build and run the Single Column Model (SCM) under Fortuna 2.0 on discover. It assumes that you have successfully run the model as described in GEOS-5 Quick Start.

Checking Out and Modifying GEOS-5 for SCM

First, check out the Fortuna 2.0 code as usual:

cvs co -r  Fortuna-2_0  Fortuna

Then cd to the directory

GEOSagcm/src/GEOSgcs_GridComp/GEOSgcm_GridComp/GEOSagcm_GridComp/GEOSsuperdyn_GridComp/GEOSdatmodyn_GridComp

and update the directory from the SCM branch:

cvs upd -r b_Fortuna-2_0_SCM 

Then compile the model as usual.

The modifications in the SCM branch will eventually be merged into the main branch, making this step unnecessary.

Setting Up and Running Existing SCM Experiments

The setup script for the SCM experiments is /discover/nobackup/aeichman/scm/setup/getSCMdata.sh . You do not have to run the gcm_setup script as you do to set up a global run.

At the time of this writing there are fourteen experiments to choose from.

Create your own directory and copy the script getSCMdata.sh into it. Then:

  • Modify and uncomment the first executable line, which creates a symbolic link to the model executable (GEOSagcm/Linux/bin/GEOSgcm.x), so that it points to your own model executable.
  • Uncomment one of the lines that assign the variable CASEDIR, to choose the experiment to run.
  • Run the script. It will copy all of the necessary resource, forcing, and data files to the working directory.

Each experiment requires its own directory. If you modify the resource files (e.g., HISTORY.rc), you may want to copy the setup directory to your own area and modify it and the setup script accordingly, so that you don't clobber your modifications.
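As a concrete sketch of the script edits (the heredoc below is an illustrative stand-in, not the real contents of getSCMdata.sh; the path and case names are hypothetical):

```shell
# Stand-in for the relevant lines of getSCMdata.sh:
cat > getSCMdata.sh <<'EOF'
#ln -s /discover/nobackup/someuser/GEOSagcm/Linux/bin/GEOSgcm.x .
#CASEDIR=arm_97jul
#CASEDIR=arm_95jul
EOF
# Uncomment the executable-link line and one CASEDIR assignment:
sed -i -e 's|^#\(ln -s .*GEOSgcm\.x\)|\1|' \
       -e '0,/^#CASEDIR=arm_97jul/s//CASEDIR=arm_97jul/' getSCMdata.sh
```

After these edits, running the script from your experiment directory copies in the case files as described above.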

Then you can run the model executable from the command line in the directory you created. You will first have to load the proper modules by sourcing GEOSagcm/src/g5_modules. Although the SCM runs on a single processor, on discover you should run it from an interactive job on a compute node (as opposed to the discover front end). Since all of the necessary configuration files are copied to the experiment directory, the SCM requires none of the extra environment infrastructure that the run script gcm_run.j creates for a global experiment.

Creating Driving Datasets from MERRA

Given the resource and other files that come with a complete SCM configuration (either from an existing case or created with the procedure below), a driving data file for the same location and time span can be generated from MERRA output. Note that the current scheme for MERRA data does not include analysis increments in the advection terms, although it probably should.

Obtaining MERRA Data

MERRA output files are located under /archive on NCCS discover and can be time-consuming to obtain, so a set of scripts has been created to make the task easier. To use them, create a subdirectory and copy the contents of /discover/nobackup/aeichman/scm/util/get-merra to it. You should have the following:

getter.j
MERRAobsys.rc
README
watcher.j


To use the scripts, modify the line in getter.j starting setenv PATH ${PATH}: ... so that it points to the directory src/GMAO_Shared/GMAO_etc/ in your local Fortuna 2.0 build, which contains the necessary utilities. These utilities use Perl libraries, which might require additions to your environment (assume at first that they don't). To specify the range of MERRA data to obtain, modify the variables BEGIN_date and END_date (both in the format YYYYMMDD). You may also need to set your group name in the PBS environment variables.

Then qsub watcher.j (not getter.j). It submits the getter.j script while also submitting a copy of itself to monitor the "getter" job. getter.j uses the acquire utility to transfer files smoothly from /archive to the current directory. If the getter job ends without finishing -- most likely because the allotted walltime ran out -- the watcher job repeats the process until all the data in the specified range have been copied to the current directory. For data sets of a month or so this may take a few hours, but the scripts should run without intervention. If something interrupts the process, the same scripts may be started again, and acquire is intelligent enough to figure out where to pick up. Keep in mind that the response time of the /archive filesystem can vary considerably (on the scale of hours to days, depending on downtime).
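The date-range edit can be scripted; the two setenv lines below are stand-ins for the corresponding lines in the real getter.j (a csh/PBS script), and the dates are arbitrary examples:

```shell
# Stand-in for the date-range lines in getter.j:
cat > getter.j <<'EOF'
setenv BEGIN_date 19970601
setenv END_date   19970630
EOF
# Retarget the acquisition window (YYYYMMDD), e.g. to July 1997:
sed -i -e 's/^setenv BEGIN_date .*/setenv BEGIN_date 19970701/' \
       -e 's/^setenv END_date .*/setenv END_date   19970731/' getter.j
```

With the window set, submit with qsub watcher.j as described above.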

For more details, see README.

Generating the Driving Data

Now the ASCII .dat file used for the driving data can be created.

Check out the data file generator with the following command:

cvs co -r afe-Fortuna-merra2scm-20110929

Then cd GEOSagcm/src/GMAO_Shared/GEOS_Util/post. The source file merra2scm.F is there; it must be modified for the time and location of the data set to be created. Change the parameters begdate and enddate to the dates you want to cover, but leave begtime and endtime alone. If you are replicating an existing experiment, begdate and enddate can be obtained from that experiment's cap_restart and CAP.rc, respectively. The parameters lonbegin, lonend, latbegin, and latend specify the location; appropriate values can be gleaned from the filenames in the corresponding experiment under /discover/nobackup/aeichman/scm/scminfiles/ -- for example, the filename tile.data_simple1_XY1x1-C_34N_100W_38N_95W. (Note that these file names are truncated when copied by the SCM setup script.) Finally, change the variable dirname to the directory where you copied the MERRA data.
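The corner coordinates can be pulled out of such a filename mechanically. The snippet below is a sketch keyed to the example name above; the to_deg helper (converting, e.g., 100W to -100) is a hypothetical convenience, not part of the model:

```shell
# Extract the four corner tokens from the end of a tile.data filename:
name=tile.data_simple1_XY1x1-C_34N_100W_38N_95W
corners=$(echo "$name" | grep -oE '[0-9]+[NSEW](_[0-9]+[NSEW]){3}$')
echo "$corners"   # 34N_100W_38N_95W

# Convert a token to signed degrees (S and W negative):
to_deg() { case $1 in *[SW]) echo "-${1%?}" ;; *) echo "${1%?}" ;; esac; }
to_deg 34N    # prints 34
to_deg 100W   # prints -100
```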

Then cd up to the src directory, run make install, and run the executable merra2scm.x. It will generate the driving data file merra_scm.dat, which can be used to replace the one supplied with the experiment data.

Required Modifications to the Model

At the time of this writing, modifications must be compiled in to load new cases; we plan to remove this inconvenience. The source files to modify are in src/GEOSgcs_GridComp/GEOSgcm_GridComp/GEOSagcm_GridComp/GEOSsuperdyn_GridComp/GEOSdatmodyn_GridComp.

First, in GEOS_DatmoDynGridComp.F90, a case must be added in the select statement at (or near) line 1039:

     select CASE(trim(DATA$))

A sample case is shown below:

   case("merra_arm97jul")
      NT = 240
      NLEVEL = 42
      DATA_DRIVER=.true.

For the case to be added, the case statement must use the name of the driver data file with the trailing .dat truncated (i.e., the file merra_scm.dat requires the case statement case("merra_scm")). The variable NT must be assigned the length of the time series of the driving data, and NLEVEL the number of pressure levels. These values may be obtained from the header of the driver data file.
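In other words, the case name is simply the driver filename with its suffix stripped:

```shell
# The case() string is the driver data filename minus its .dat suffix:
f=merra_scm.dat
echo "${f%.dat}"   # prints merra_scm
```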

Similarly, any experiment using MERRA data requires a modification to reader.F90 at about line 211. The if-then statement there:

        if(filename.eq."arm_97jul.dat".or. &
           filename.eq."merra_arm97jul.dat".or. &
           filename.eq."merra_arm_scmx.dat")then

requires the addition of the full driver data file name.

With these modifications in place, the model may be recompiled.

Finally, the parameter DRIVER_DATA in AGCM.rc needs to be changed to the full filename of the driver data. Note that you will probably have to change the begin time in cap_restart and the end time in CAP.rc to match begtime (probably 000000) and endtime.
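As a sketch of these last edits (the one-line files below are illustrative stand-ins for the real AGCM.rc and cap_restart):

```shell
# Stand-ins for the relevant lines of AGCM.rc and cap_restart:
cat > AGCM.rc <<'EOF'
DRIVER_DATA: merra_arm97jul.dat
EOF
cat > cap_restart <<'EOF'
19970618 210000
EOF
# Point the model at the new driver file:
sed -i 's/^DRIVER_DATA:.*/DRIVER_DATA: merra_scm.dat/' AGCM.rc
# Align the restart time-of-day with begtime (typically 000000):
sed -i 's/^\([0-9]\{8\}\) [0-9]\{6\}$/\1 000000/' cap_restart
```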

Creating New SCM Case Boundary Conditions

To create the boundary condition files for a new SCM case with a location and time span different from those already made, use the set of IDL scripts in

src/GEOSgcs_GridComp/GEOSgcm_GridComp/GEOSagcm_GridComp/GEOSsuperdyn_GridComp/GEOSdatmodyn_GridComp/idl

Make sure you've updated the contents of this directory to the proper branch: cvs upd -r b_Fortuna-2_0_SCM

IDL can be run from the dali machine, which you should be able to ssh to from discover. See the NCCS documentation for IDL for help: http://www.nccs.nasa.gov/dali_qna.html#step5. Follow the steps and start IDL (idl). To run the script make_bcs_ics.pro enter:

IDL> .run make_bcs_ics

The procedure requires the restart files fvcore_internal_rst, moist_internal_rst, and catch_internal_rst (with or without dates appended) seasonally appropriate for the experiment's start date, chosen in the priority month-day-year. Most available restart files have the time 2100z; the effects of a mismatch between the restart time-of-day and the model's beginning time apparently diminish after a few days of spinup.

In the file make_bcs_ics.pro make the following changes:

  • odir to an appropriate directory in your area for input files
  • gdirbase to a directory in your area for output
  • cr to the geographic range in coordinate degrees ([S W N E])
  • casename to a directory-appropriate name (this will be a subdirectory created for output in gdirbase)
  • sst_impose to the desired SST boundary condition (if necessary, in K)
  • ntype (tile surface type) to 100 if over land, 0 if over sea
  • 'moist_internal_rst.b19830214_21z' to the name of your moist internal restart file
  • the instances of 'fvcore_internal_rst.b19830214_21z' to the name of your fvcore internal restart file

In odir, place the following files:

catch_internal_rst*
FV_144x91_DC_360x180_DE.til
fvcore_internal_rst*
lai_green_clim_144x91_DC.data
moist_internal_rst*
nirdf_144x91_DC.dat
topo_DYN_ave_144x91_DC.data
topo_GWD_var_144x91_DC.data
topo_TRB_var_144x91_DC.data
vegdyn_144x91_DC.dat
visdf_144x91_DC.dat

The files other than the restarts can currently be obtained from /discover/nobackup/amolod/bcs/144x91/.

Then run make_bcs_ics.pro from the IDL command line. This will create a set of files in gdirbase/casename.

Now you have to select a tile from the file FV_144x91_DC_360x180_DE.til. After the header, each line contains the specifications for one tile. Find a tile close to the location of your experiment -- the third column is longitude, the fourth latitude. The first column should be the same as the ntype in make_bcs_ics.pro. The last column is the tile number, which you should record. Good luck.
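Tile selection can be scripted. The sketch below uses made-up rows mimicking the columns just described (type in column 1, longitude in column 3, latitude in column 4, tile number last); the real FV_144x91_DC_360x180_DE.til has a header and additional columns:

```shell
# Sample rows standing in for the .til file (after its header):
cat > sample.til <<'EOF'
100 1 -97.50 36.00 0 1201
  0 1 -97.50 35.00 0 1202
100 1 -95.00 37.00 0 1203
EOF
# Pick the land tile (type 100) nearest a target point:
tile=$(awk -v lon=-97.4 -v lat=36.1 '$1==100 {
    d = ($3-lon)^2 + ($4-lat)^2
    if (best == "" || d < bd) { bd = d; best = $NF }
} END { print best }' sample.til)
echo "$tile"   # prints 1201
```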

Edit make_land_files.pro so that bcsodir, xdir and casename are the same as odir, gdirbase and casename, respectively, in make_bcs_ics.pro. Also change itile to the tile number you recorded from the tile file and catchname to the name of your catchment restart. Then run the script. It will create a subdirectory Landfiles in the output directory and generate the land BC files there.

To make an appropriate AGCM.rc, copy one from an existing SCM case, and change the following:

  • AGCM_GRIDNAME and OGCM_GRIDNAME to reflect the coordinates in the filenames of the files that you just generated
  • DRIVER_DATA to the name of your driving data file (for example, one created from MERRA data as in the section above)

Likewise, copy a CAP.rc and change END_DATE as appropriate. Do the same for the start date in cap_restart. A HISTORY.rc can be copied without modification. Keep AGCM.rc, CAP.rc, cap_restart, and HISTORY.rc together with the output from the IDL scripts. The IDL-generated files will have to be either renamed or linked to names that the model recognizes -- see /discover/nobackup/aeichman/scm/scminfiles/arm_97jul for an example. You should have the following:

AGCM.rc
CAP.rc
cap_restart
catch_internal_rst
datmodyn_internal_rst
fraci.data
fvcore_internal_rst
HISTORY.rc
laigrn.data
moist_internal_rst
nirdf.dat
SEAWIFS_KPAR_mon_clim.data
sst.data
sstsi.data
tile.data
topo_dynave.data
topo_gwdvar.data
topo_trbvar.data
vegdyn.data
visdf.dat

These files, plus a driving data file, comprise the case-specific files for an SCM case, similar to the cases in /discover/nobackup/aeichman/scm/scminfiles/; getSCMdata.sh can then be used to set up the model environment for a run.
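The renaming or linking can look like the following; the suffixed filenames are hypothetical stand-ins for whatever make_bcs_ics.pro actually produced (compare against the arm_97jul example directory for the real correspondence):

```shell
# Link case-specific names to the generic names the model expects:
mkdir -p rundir
touch rundir/tile.data_mycase_XY1x1-C_34N_100W_38N_95W rundir/sst.data_mycase
ln -sf tile.data_mycase_XY1x1-C_34N_100W_38N_95W rundir/tile.data
ln -sf sst.data_mycase rundir/sst.data
```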

Some Discussion About How to Use and Configure SCM

The following section contains excerpts from user emails, along with replies, that may answer questions that come up.

The VERTICAL_ADVECTION Flag

From ARM_97JUL, I do see there is a flag "VERTICAL_ADVECTION" in AGCM.rc. I am trying to turn it off by setting it to "0". Is it the right way to use observed vertical advection?

yes sir, that is the way to turn it off. but turning it off is what gives 50 deg temp biases. there is a code change that i am testing today or tomorrow that greg walker tried and said made a big difference. we are suspecting that the T vertical advection term in the obs dataset is missing the adiabatic expansion term (ie, that it is dT/dp and not the total vert tend). so we will try to use the vertical advection of s term (it is already divided by Cp i think). in that case the idea would be to go into the reader.F90 term and (greg walker did this) do something like:

add the line:

      TMP(13,:,:) = TMP(13,:,:)/3600.0     

near where we do other stuff like this. and then: instead of:

          T_V_ADV(i,k) = -dv(7,K,I)

put in something like:

      if (filename.eq."arm_97jul.dat") then
          T_V_ADV(i,k) = -dv(13,K,I)  ! Vertical_s_Advec/cp (K/s)
      else
          T_V_ADV(i,k) = -dv(7,K,I)  ! Vertical_T_Advec (K/s) [is omega*alpha included?]
      endif

got it? the idea is to use the vertical advection of s term for arm 97 july case for now (its possible that there are other cases that we have to do this also, but until we know that we want to try it for arm 97 july only).

so - if you want to turn off the calculation of vertical advection you set VERTICAL_ADVECTION to 0 in the AGCM.rc (or leave it out - 0 is the default). and then i would HIGHLY recommend doing what i suggest here.

CGILS Experiments

It will be great if we can have some cases to study low clouds such as off coasts of California and Peru. Joao and I are interested to use SCM for the low clouds study.

just to let you both know that there is a new set of cases that we can now do with the scm, but they may not make their way into the 'official' set of cases. they are the CGILS cases. this is the set of CFMIP-GCSS simulations at three points on the transect from the calif coast to the mid pacific (stratus, strato cu, cu). the CGILS project includes a set of 3 or 4 LES simulations of the same 3 spots. the SCM forcing is idealized and the simulation is perpetual july 15 with no diurnal cycle (code changes needed for this to the model will not be in 'official' code releases - that's why these experiments won't be ones that are on the 'list'), but the set-up is great for testing pbl schemes and the interaction with the shallow and deep convection in marine boundary layers. i have done the simulations with our fortuna-2_0 code and i've also done a suite of sensitivity experiments and am continuing to do more. if you'd like the code mods for these runs i can provide them.

Parameters for progno_cloud

The following is a list of parameters for progno_cloud that can be in the AGCM.rc configuration file.


Slot  Name (AGCM.rc)    default  var. name        description
----  --------------    -------  ---------        ------------
  1   'CNV_BETA:',       10.0    CNV_BETA      Divide convective rain by cnv_beta for Marshall-Palmer 
                                                   drop size, number, velocity - used for evap of rain
  2   'ANV_BETA:',       4.0     ANV_BETA      Divide anvil rain rate by anv_beta
  3   'LS_BETA:',        4.0     LS_BETA       Divide Large Scale rain by ls_beta
  4   'RH_CRIT:',        1.0     RH00          Upper limit on critical relative humidity for evap/condense
  5   'AUTOC_LS:',       2.0e-3  C_00          Multiplication factor (+unit conversion) for autoconversion 
                                                  rate  (autoconvert exp(-rate * dt) )
  6   'QC_CRIT_LS:',     8.0e-4  LWCRIT        Scale autoconversion (impact ~ 1 - exp(-1/lwcrit)**2 )
  7   'ACCRETION:',      2.0     C_ACC         Scale factor for accretion of cloud water by rain/snow
  8   'BASE_REVAP_FAC:', 1.0     C_EV          Scale factor for rain/snow re-evap (re-evap ~ 1 - exp(- c_ev) )
  9   'VOL_TO_FRAC:',    -1.0    CLDVOL2FRC    Not used
 10   'SUPERSAT:',       0.0     RHSUP_ICE     Not used
 11   'SHEAR_EVAP_FAC:', 1.3     SHR_EVAP_FAC  Not used
 12   'MIN_ALLOW_CCW:',  1.0e-9  MIN_CLD_WATER Not used
 13   'CCW_EVAP_EFF:',   3.3e-4  CLD_EVP_EFF   Scale for evap of cloud water/(subl of ice) (+unit conv)
 14   'NSUB_AUTOCONV:',  20.     NSMAX         Not used
 15   'LS_SUND_INTER:',  4.8     LS_SDQV2      Factor to control how fast LS ice autoconv drops at cold temps
 16   'LS_SUND_COLD:',   4.8     LS_SDQV3      Factor to control how fast LS ice autoconv drops at coldest temps
 17   'LS_SUND_TEMP1:',  230.    LS_SDQVT1     Temp at which to start decrease in LS ice autoconv ramping
 18   'ANV_SUND_INTER:', 1.0     ANV_SDQV2     Factor to control how fast anvil ice autoconv drops at cold temps
 19   'ANV_SUND_COLD:',  1.0     ANV_SDQV3     Factor to control how fast anvil ice autoconv drops at coldest temps
 20   'ANV_SUND_TEMP1:', 230.    ANV_SDQVT1    Temp at which to start decrease in anvil ice autoconv ramping
 21   'ANV_TO_LS_TIME:', 14400.  ANV_TO_LS     Not used
 22   'NCCN_WARM:',      50.     N_WARM        Not used
 23   'NCCN_ICE:',       0.01    N_ICE         Not used
 24   'NCCN_ANVIL:',     0.1     N_ANVIL       Not used
 25   'NCCN_PBL:',       200.    N_PBL         Not used
 26   'DISABLE_RAD:',    0.      DISABLE_RAD   Flag (=1) to disable radiative interaction with cloud/rain
 27   'ICE_SETTLE:',     0.                    Not used
 28   'ANV_ICEFALL:',    0.5     ANV_ICEFALL_C Scale for fall rate of anvil ice (used to scale ice autoconv)
 29   'LS_ICEFALL:',     0.5     LS_ICEFALL_C  Scale for fall rate of LS ice (used to scale ice autoconv)
 30   'REVAP_OFF_P:',    2000.   REVAP_OFF_P   Max pressure at which to do precip re-evap (mb)
 31   'CNV_ENVF:',       0.8     CNVENVFC      Scale factor for convective rain/snow re-evap 
                                                 (re-evap ~ 1 - exp(- envfrac) ) - fraction of re-evap in 
                                                 environment as opposed to in the cloud
 32   'WRHODEP:',        0.5     WRHODEP       Control rate of dec/incr of ice fall speed with high/low press
 33   'ICE_RAMP:',       -40.0   T_ICE_ALL     = ICE_RAMP + MAPL_TICE - Temp at which all cloud/precip is ice
                                                 (fraction=1, use L of ice)
 34   'CNV_ICEPARAM:',   1.0     CNVICEPARAM   Control on how much new conv precip is ice 
                                                  (1=> use ice fraction, 0=> all liquid)
 35   'CNV_ICEFRPWR:',   4.0     ICEFRPWR      = CNV_ICEFRPWR + .001 -- Scale ice fraction (liquid/ice 
                                                  partition for condensation/evap/melting&freezing) 
                                                        Fraction = Fraction ** icefrpwr
 36   'CNV_DDRF:',       0.0     CNVDDRFC      Fraction of re-evap of conv precip to reserve for re-evap 
                                                 lower in atm (in a "downdraft") 
 37   'ANV_DDRF:',       0.0     ANVDDRFC       Fraction of re-evap of conv precip to reserve for re-evap 
                                                 lower in atm (in a "downdraft") 
 38   'LS_DDRF:',        0.0     LSDDRFC        Fraction of re-evap of conv precip to reserve for re-evap 
                                                 lower in atm (in a "downdraft") 
 39   'AUTOC_ANV:',      1.0e-3                 Not used
 40   'QC_CRIT_ANV:',    8.0e-4                 Not used
 41   'TANHRHCRIT:',     1.      tanhrhcrit     Flag to use tanh vertical profile for Rh crit for condens/evap
 42   'MINRHCRIT:',      0.8     minrhcrit      Min Rh in tanh profile
 43   'MAXRHCRIT:',      1.0     maxrhcrit      Max Rh in tanh profile