Setting Up the Fortuna 2.0 Single Column Model
This page describes the steps and modifications necessary to build and run the Single Column Model (SCM) under Fortuna 2.0 on discover. It assumes that you have successfully run the model as described in GEOS-5 Quick Start.
Checking Out and Modifying GEOS-5 for SCM
First, check out the Fortuna 2.0 code as usual:
cvs co -r Fortuna-2_0 Fortuna
Then cd to the directory GEOSagcm/src/GEOSgcs_GridComp/GEOSgcm_GridComp/GEOSagcm_GridComp/GEOSsuperdyn_GridComp/GEOSdatmodyn_GridComp and update these files from the SCM branch:
cvs upd -r b_Fortuna-2_0_SCM GEOS_DatmoDynGridComp.F90 reader.F90
Then compile the model as usual.
The modifications in the SCM branch will eventually be merged with the main branch so that this step is unnecessary.
Setting Up and Running Existing SCM Experiments
The setup script for the SCM experiments is /discover/nobackup/aeichman/scm/setup/getSCMdata.sh. You do not have to run the gcm_setup script as you do to set up a global run.
At the time of this writing there are six experiments to choose from:
- ARM Southern Great Plains site (http://www.arm.gov/sites/sgp), July 1997
- Same time and location as above, but forced with MERRA data
- KWAJEX (http://www.atmos.washington.edu/kwajex/)
- TOGA COARE (http://www.atmos.washington.edu/togacoare/summaries.html)
- TRMM-LBA (http://radarmet.atmos.colostate.edu/lba_trmm/)
- ARM-SCSMEX (http://www.cawcr.gov.au/bmrc/wefor/research/scsmex.htm)
Create your own directory and copy the script getSCMdata.sh into it. Modify and uncomment the first executable line, which creates a symbolic link to the model executable (GEOSagcm/Linux/bin/GEOSgcm.x), so that it points to your own executable. Uncomment one of the lines that assign the variable SOURCEDIR to choose the experiment to run. Then run the script; it will copy all of the necessary resource, forcing, and data files to the working directory. Each experiment requires its own directory. If you modify the resource files (e.g., HISTORY.rc), you may want to copy the setup directory to your own area and adjust the setup script accordingly so that you don't clobber your modifications.
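As a sketch, the two edits to your copy of getSCMdata.sh might end up looking like the lines below; the link name, paths, and experiment choice are illustrative, and the actual lines in the script may differ:

```shell
# Hypothetical edited/uncommented lines in getSCMdata.sh:
# link to your own build of the model executable (path illustrative)
ln -sf /discover/nobackup/$USER/GEOSagcm/Linux/bin/GEOSgcm.x GEOSgcm.x
# choose the experiment by pointing SOURCEDIR at one of the case directories
SOURCEDIR=/discover/nobackup/aeichman/scm/scminfiles/arm_97jul
```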
Then you can run the model executable from the command line in the directory you created, after loading the proper modules by sourcing GEOSagcm/src/g5_modules. Although the SCM runs on a single processor, on discover you should run it from an interactive job on a compute node rather than on the front end. Since all of the necessary configuration files are copied to the experiment directory, it needs none of the extra environmental infrastructure that the run script gcm_run.j creates for a global experiment.
Creating Driving Datasets from MERRA
Given the resource and other files that come with a complete SCM configuration (either from an existing case or one created with the procedure below), a driving data file for the same location and time span can be generated from MERRA output. Note that the current scheme for MERRA data does not include analysis increments in the advection terms, though it probably should.
Obtaining MERRA Data
MERRA output files are located under /archive on NCCS discover and can be time-consuming to obtain, so a set of scripts has been created to make the task easier. To use them, create a subdirectory and copy the contents of /discover/nobackup/aeichman/scm/util/get-merra to it. You should have the following:
getter.j MERRAobsys.rc README watcher.j
To use the scripts, modify the line in getter.j starting setenv PATH ${PATH}:... to point to the directory src/GMAO_Shared/GMAO_etc/ in your local Fortuna 2.0 build, which contains the necessary utilities. These use perl libraries, which might require additions to your environment (assume at first that they don't). To specify the range of MERRA data to obtain, modify the variables BEGIN_date and END_date (both in the format YYYYMMDD). You may also need to modify the group name in the PBS environment variables.
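Put together, the edits to getter.j amount to something like the fragment below; the path is illustrative, and whether the date variables are set with setenv or set depends on the script itself:

```
setenv PATH ${PATH}:/discover/nobackup/$USER/GEOSagcm/src/GMAO_Shared/GMAO_etc/
setenv BEGIN_date 19970701
setenv END_date   19970731
```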
Then qsub watcher.j (not getter.j). It will submit the getter.j script while submitting a version of itself to monitor the getter job. getter.j uses the acquire utility to transfer files smoothly from /archive to the current directory. If the getter job ends without finishing -- most likely because the allotted walltime ran out -- the watcher job will repeat the process until all the data in the specified range have been copied to the current directory. For a data set of a month or so this may take a few hours, but the scripts should run without intervention. If something interrupts the process, the same scripts may be started again; acquire is intelligent enough to figure out where it needs to pick up. For more details, see README.
Generating the Driving Data
Now the ASCII .dat file used for the driving data can be created.
Check out the data file generator with the following command:
cvs co -r Fortuna-merra2scm_v1 Fortuna
Then cd to GEOSagcm/src/GMAO_Shared/GEOS_Util/post. The source file merra2scm.F is there; it must be modified for the time and location of the data set to be created. Change the parameters begdate and enddate to the dates you want to cover, but leave begtime and endtime alone. The parameters lonbegin, lonend, latbegin, and latend specify the location; appropriate values can be gleaned from the filenames in the corresponding experiment under /discover/nobackup/aeichman/scm/scminfiles/ -- for example, the filename tile.data_simple1_XY1x1-C_34N_100W_38N_95W. (Note that these file names are truncated when copied by the SCM setup script.) Finally, change the variable dirname to the directory where you copied the MERRA data.
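For example, to cover the domain from the filename above for a month in mid-1997, the settings in merra2scm.F might look like the fragment below. The dates here are hypothetical, and the actual declarations (types, parameter statements) come from the source file itself:

```fortran
! Illustrative settings in merra2scm.F for the hypothetical domain
! 34N-38N, 100W-95W, 18 June - 18 July 1997:
begdate  = 19970618      ! leave begtime alone
enddate  = 19970718      ! leave endtime alone
lonbegin = -100.
lonend   =  -95.
latbegin =   34.
latend   =   38.
```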
Then cd up to the src directory, run make install, and run the executable merra2scm.x. It will generate the driving data file merra_scm.dat, which can be used to replace the one that comes with the experiment data.
Required Modifications to the Model
At the time of this writing, modifications must be compiled in to load new cases; we plan to remove this inconvenience. The source files to modify are in src/GEOSgcs_GridComp/GEOSgcm_GridComp/GEOSagcm_GridComp/GEOSsuperdyn_GridComp/GEOSdatmodyn_GridComp.
First, in GEOS_DatmoDynGridComp.F90, a case must be added to the select statement at (or near) line 1039:
select CASE(trim(DATA$))
A sample case is shown below:
case("merra_arm97jul")
   NT = 240
   NLEVEL = 42
   DATA_DRIVER = .true.
For the case to be added, the case statement must have the name of the driver data file with the trailing .dat truncated (i.e., the file merra_scm.dat requires the statement case("merra_scm")). The variable NT must be set to the length of the time series of the driving data, and NLEVEL to the number of pressure levels. These values may be obtained from the header of the driver data file.
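Following these rules, a case for the MERRA-generated file merra_scm.dat would look like the fragment below; the NT and NLEVEL values are placeholders to be replaced with the ones from your data file's header:

```fortran
case("merra_scm")          ! merra_scm.dat with the trailing .dat truncated
   NT = 248                ! placeholder: length of the driving time series
   NLEVEL = 42             ! placeholder: number of pressure levels
   DATA_DRIVER = .true.
```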
Similarly, any experiment using MERRA data requires a modification to reader.F90 at about line 211. The if statement there:
if(filename.eq."arm_97jul.dat".or. &
   filename.eq."merra_arm97jul.dat".or. &
   filename.eq."merra_arm_scmx.dat")then
requires the addition of the full driver data file name.
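For instance, adding the MERRA-generated file merra_scm.dat would extend the test to:

```fortran
if(filename.eq."arm_97jul.dat".or. &
   filename.eq."merra_arm97jul.dat".or. &
   filename.eq."merra_arm_scmx.dat".or. &
   filename.eq."merra_scm.dat")then
```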
With these modifications in place, the model may be recompiled.
Finally, the parameter DRIVER_DATA in AGCM.rc needs to be changed to the full filename of the driver data. Note that you will probably have to change the begin time in cap_restart and the end time in CAP.rc to the appropriate times in begtime (probably 000000) and endtime.
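As an illustration, for a driving data file merra_scm.dat covering July 1997 the relevant entries might read as follows; the dates and filename are hypothetical, and the exact entry formats should be taken from the files copied from an existing case:

```
# AGCM.rc
DRIVER_DATA: merra_scm.dat

# cap_restart (begin date and begtime)
19970701 000000

# CAP.rc (end date and endtime)
END_DATE: 19970801 000000
```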
Creating a New SCM Case
To create a new SCM case with a location and time span different from the existing ones, there is a set of IDL scripts in src/GEOSgcs_GridComp/GEOSgcm_GridComp/GEOSagcm_GridComp/GEOSsuperdyn_GridComp/GEOSdatmodyn_GridComp/idl. Make sure you have updated the contents of this directory to the proper branch:
cvs upd -r b_Fortuna-2_0_SCM
IDL can be run from the dali machine, which you should be able to ssh to from discover. See the NCCS documentation for IDL for help: http://www.nccs.nasa.gov/dali_qna.html#step5. Follow the steps and start IDL (idl). To run the script make_bcs_ics.pro, enter:
IDL> .run make_bcs_ics
The procedure requires the restart files fvcore_internal_rst, moist_internal_rst, and catch_internal_rst (with or without dates appended), seasonally appropriate for the experiment's start date, chosen with the priority month-day-year. Most available restart files have the time 2100z, the effects of which apparently diminish after a few days of spinup.
In the file make_bcs_ics.pro make the following changes:
- odir to an appropriate directory in your area for input files
- gdirbase to a directory in your area for output
- cr to the coordinate range ([S W N E])
- casename to a directory-appropriate name (this will be a subdirectory created for output in gdirbase)
- sst_impose to the desired SST boundary condition (if necessary, in K)
- ntype (tile surface type) to 100 if over land, 0 if over sea
- the instances of 'moist_internal_rst.b19830214_21z' to the name of your moist internal restart file
- the instances of 'fvcore_internal_rst.b19830214_21z' to the name of your fvcore internal restart file
In odir, place the following files:
catch_internal_rst*
FV_144x91_DC_360x180_DE.til
fvcore_internal_rst*
lai_green_clim_144x91_DC.data
moist_internal_rst*
nirdf_144x91_DC.dat
topo_DYN_ave_144x91_DC.data
topo_GWD_var_144x91_DC.data
topo_TRB_var_144x91_DC.data
vegdyn_144x91_DC.dat
visdf_144x91_DC.dat
The files other than the restarts can currently be obtained from /discover/nobackup/amolod/bcs/144x91/.
Then run make_bcs_ics.pro from the IDL command line. This will create a set of files in gdirbase/casename.
Now you have to select a tile from the file FV_144x91_DC_360x180_DE.til. After the header, each line contains the specifications for one tile. Find a tile close to the location of your experiment -- the third column is longitude, the fourth latitude. The first column should match the ntype set in make_bcs_ics.pro. The last column is the tile number, which you should record. Good luck.
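Scanning the tile file by hand can be tedious. The small awk sketch below picks the tile of the right type nearest a target point, using the column layout described above (first column type, third longitude, fourth latitude, last column tile number). The three-line sample file is made up for illustration -- run the same awk command against the real FV_144x91_DC_360x180_DE.til:

```shell
# Build a tiny hypothetical tile file: type, area, lon, lat, ..., tile number
cat > sample.til <<'EOF'
100 0.5 -97.5 36.5 0 0 0 1234
0   0.5 -96.5 36.5 0 0 0 1235
100 0.5 -95.5 34.5 0 0 0 1236
EOF
# Pick the type-100 tile nearest to (36.4N, 97.4W) and print its tile number
awk -v lon=-97.4 -v lat=36.4 '$1 == 100 {
    d = ($3 - lon)^2 + ($4 - lat)^2            # squared distance in degrees
    if (best == "" || d < bestd) { bestd = d; best = $NF }
} END { print best }' sample.til
```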
Edit make_land_files.pro so that bcsodir, xdir, and casename are the same as odir, gdirbase, and casename, respectively, in make_bcs_ics.pro. Also change itile to the tile number you recorded from the tile file and catchname to the name of your catchment restart. Then run the script. It will create a subdirectory Landfiles in the output directory and generate the land BC files there.
To make an appropriate AGCM.rc, copy one from an existing SCM case and change the following:
- AGCM_GRIDNAME and OGCM_GRIDNAME to reflect the coordinates in the filenames of the files that you just generated
- DRIVER_DATA to the name of your driving data file (for example, one created from MERRA data as in the section above)
Likewise, copy a CAP.rc and change the END_DATE as appropriate. Do the same for the start date in cap_restart. A HISTORY.rc can be copied without modification. Keep AGCM.rc, CAP.rc, cap_restart, and HISTORY.rc with the output from the IDL scripts. The IDL output files will have to be either renamed or linked to names that the model will recognize -- see /discover/nobackup/aeichman/scm/scminfiles/arm_97jul for an example. You should have the following:
AGCM.rc
CAP.rc
cap_restart
catch_internal_rst
datmodyn_internal_rst
fraci.data
fvcore_internal_rst
HISTORY.rc
laigrn.data
moist_internal_rst
nirdf.dat
SEAWIFS_KPAR_mon_clim.data
sst.data
sstsi.data
tile.data
topo_dynave.data
topo_gwdvar.data
topo_trbvar.data
vegdyn.data
visdf.dat
These files, plus a driving data file, comprise the case-specific files for an SCM case, similar to the cases in /discover/nobackup/aeichman/scm/scminfiles/, and getSCMdata.sh can then be used to set up the model environment for a run.