Fortuna 2.4 User's Guide

From GEOS-5
  cvs co -r  ''TAGNAME'' -d ''DIRECTORY'' Fortuna


where ''TAGNAME'' is the model "tag" (version).  A tag in <code>cvs</code> marks the versions of the source files in the repository that together make up a particular version of the model.  A sample release tag is <code>Fortuna-2_4_p2</code>, indicating the version Fortuna 2.4 patch 2 (the latest version). ''DIRECTORY'' is the directory in which the source code tree will be created.  If you are using a stock model tag, it is reasonable to name the directory after the tag.  This directory determines which model build (presumably in your space) a particular experiment uses.  Some scripts use the environment variable <code>ESMADIR</code>, which should be set to the absolute (full) pathname of this directory.


When a modified version of some component of the model is saved to the repository, the tag it uses -- different from the standard model tag -- is supposed to be applied only to the directories with modified files.  This means that if you need to use a variant tag of a gridded component, you will have to <code>cd</code> to that directory and update to the variant tag.  So, for example, if you needed to apply updates to the SatSim gridded component, you would have to <code>cd</code> several levels down to the directory <code>GEOSsatsim_GridComp</code> and run
 cvs upd -r  ''VARIANT_TAGNAME''

The source code will then incorporate the tag's modifications.

<pre>
Enter the tag or directory (/filename) of the HISTORY.AGCM.rc.tmpl to use
(To use HISTORY.AGCM.rc.tmpl from current build, Type:  Current         )
-------------------------------------------------------------------------
Hit ENTER to use Default Tag/Location: (Fortuna-2_4)
</pre>


This provides a default HISTORY.rc (output specification) file.  The initial default will be the tag of the build in which you are running <code>gcm_setup</code>.  The idea is that you can save a custom <code>HISTORY.rc</code> to the repository and have it checked out for your experiments.




And the experiment is set up.  After you copy initial condition files (aka restarts) to the experiment directory, you can submit your job.


===Do not copy old experiments===


When creating related experiments, you will be tempted to copy the experiment directory tree of an older experiment.  '''Do ''not'' copy old experiments; run <code>gcm_setup</code> instead.'''  The run scripts created from templates by <code>gcm_setup</code> embed experiment-specific directories in numerous places, and they will wreak subtle and pervasive havoc if executed in an unexpected environment.  This warning especially applies between model versions.  A useful and relatively safe exception to this rule is to copy previously used examples of <code>HISTORY.rc</code>.  However, you need to change the lines labeled <code>EXPID</code> and <code>EXPDSC</code> to the values in your automatically-generated <code>HISTORY.rc</code>, or the plotting will fail.
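For reference, the <code>HISTORY.rc</code> lines in question look something like the following; the values shown here are purely illustrative -- use the ones from your automatically-generated file:

<pre>
EXPID:  myexp42
EXPDSC: My_one-line_experiment_description
</pre>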


=== Using restart files ===
Restart files provide the initial conditions for a run, and a set needs to be copied into a fresh experiment directory before running.  This includes the file <code>cap_restart</code>, which provides the model starting date and time in text.  Restart files themselves are resolution-specific and sometimes change between model versions.  As of the current model version, they are flat binary files with no metadata, so they tend to be stored together with restarts of the same provenance, with the date either embedded in the filename or in an accompanying <code>cap_restart</code>, typically under a directory indicating the model version.


A cleanly completed model run will leave a set of restarts and the corresponding <code>cap_restart</code> in its experiment directory.  Another source is <code>/archive/u/aeichman/restarts</code>.  Restarts are also left during runs in date-labeled tarballs in the <code>restarts</code> directory under the experiment directory before being transferred to the user's <code>/archive</code> space.  You may have to create the <code>cap_restart</code>, which is simply one line of text with the date of the restart files in the format ''YYYYMMDD HHMMSS'' (with a space).
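If you need to create <code>cap_restart</code> yourself, a one-line <code>echo</code> suffices; the date below is purely illustrative -- use the date of your restart files:

```shell
# cap_restart holds one line of text: YYYYMMDD HHMMSS (with a space).
# 1 Jan 2000, 00z is an arbitrary example date.
echo "20000101 000000" > cap_restart
cat cap_restart
```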


Failing the above sources, you can convert restarts from different resolutions and model versions, including MERRA, as described in  [[Regridding Restarts for Fortuna 2.4]].




== What Happens During a Run ==
 
When the script <code>gcm_run.j</code> starts running, it creates a directory called <code>scratch</code> and copies or links into it the model executable, rc files, restarts and boundary conditions necessary to run the model.  It also creates a directory for each of the output collections (in the default setup with the prefix <code>geosgcm_</code>) both in the directory <code>holding</code>, for output awaiting post-processing, and in the experiment directory, for post-processed output.  It also tars the restarts and moves the tarball to the <code>restarts</code> directory.
 
Then the executable <code>GEOSgcm.x</code> is run in the <code>scratch</code> directory, starting with the date in <code>cap_restart</code> and running for the length of a segment.  A segment is the length of model time that the model integrates before returning, letting <code>gcm_run.j</code> do some housekeeping and then running another segment.  A model job will typically run a number of segments before trying to resubmit itself, ideally before the allotted wallclock time of the job runs out.
 
The processing that the various batch jobs perform is illustrated below:
 


[[Image:F2.5-job-diagram002.png]]
Each time a segment ends, <code>gcm_run.j</code> submits a post-processing job before starting a new segment or exiting.  The post-processing job moves the model output from the <code>scratch</code> directory to the respective collection directory under <code>holding</code>.  Then it determines whether there is enough output to create a monthly or seasonal mean, and if so, creates them and moves them to the collection directories in the experiment directory, and then tars up the daily output and submits an archiving job.  The archiving job tries to move the tarred daily output, the monthly and seasonal means and any tarred restarts to the user's space in the <code>archive</code> filesystem.  The post-processing script also determines (assuming the default settings) whether enough output exists to create plots; if so, a plotting job is submitted to the queue.  The plotting script produces a number of pre-determined plots as <code>.gif</code> files in the <code>plot_CLIM</code> directory in the experiment directory.
As explained above, the contents of the <code>cap_restart</code> file determine the start of the model run in model time, which determines boundary conditions and the time stamps of the output.  The end time may be set in <code>CAP.rc</code> with the property <code>END_DATE</code> (format ''YYYYMMDD HHMMSS'', with a space), though integration is usually leisurely enough that one can just kill the job or rename the run script <code>gcm_run.j</code> so that it is not resubmitted to the job queue.
Most of the other properties in <code>CAP.rc</code> are discussed elsewhere, but two that are important for understanding how the batch jobs work are <code>JOB_SGMT:</code> (the length of a segment) and <code>NUM_SGMT:</code> (the number of segments attempted in a job).
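The relevant <code>CAP.rc</code> entries look something like the following sketch; the values are illustrative (a 32-day segment, four segments attempted per job), and <code>JOB_SGMT</code> is assumed here to use the same ''YYYYMMDD HHMMSS'' layout as the dates:

<pre>
END_DATE:     20091231 210000
JOB_SGMT:     00000032 000000
NUM_SGMT:     4
</pre>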


== Determining Output: <code>HISTORY.rc</code> ==


===What exports are available?===
To add export fields to the <code>HISTORY.rc</code> you will need to know what fields the model provides, which gridded component provides them, and their names.  The most straightforward way to do this is to use <code>PRINTSPEC</code>.  The setting for <code>PRINTSPEC</code> is in the file <code>CAP.rc</code>.  By default the line looks like so:

 PRINTSPEC: 0  # (0: OFF, 1: IMPORT & EXPORT, 2: IMPORT, 3: EXPORT)
Setting <code>PRINTSPEC</code> to  3 will make the model send to standard output a list of exports available to <code>HISTORY.rc</code> in the model's current configuration, and then exit without integrating. The list includes each export's gridded component and short name (both necessary to include in <code>HISTORY.rc</code>), long (descriptive) name, units, and number of dimensions.  Note that run-time options can affect the exports available, so see to it that you have those set as you intend.  The other <code>PRINTSPEC</code> values are useful for debugging.
While you can set <code>PRINTSPEC</code>, submit the run with <code>qsub gcm_run.j</code>, and get the export list as part of PBS standard output, there are quicker ways of obtaining the list.  One way is to run the model as a single-column model on a single processor, as explained in [[Fortuna 2.4 Single Column Model]].  Another way is to run it in an existing experiment.  In the <code>scratch</code> directory of an experiment that has already run, change <code>PRINTSPEC</code> in <code>CAP.rc</code> as above.  Then, in the file <code>AGCM.rc</code>, change the values of <code>NX</code> and <code>NY</code> (near the beginning of the file) to 1.  Then, from an interactive job (one processor will suffice), run the executable <code>GEOSgcm.x</code> in <code>scratch</code>.  You will need to run <code>source src/g5_modules</code> in the model's build tree to set up the environment.  The model executable will simply output the export list.
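The <code>scratch</code>-directory edit described above can be sketched as follows.  A one-line stand-in <code>CAP.rc</code> is fabricated here so the commands are self-contained; in practice you would edit the real <code>CAP.rc</code> (and set <code>NX</code> and <code>NY</code> to 1 in <code>AGCM.rc</code>) in place:

```shell
# Stand-in for the CAP.rc in the scratch directory (illustrative only).
echo 'PRINTSPEC: 0  # (0: OFF, 1: IMPORT & EXPORT, 2: IMPORT, 3: EXPORT)' > CAP.rc

# Turn on the export listing (3: EXPORT); with this set, GEOSgcm.x prints
# the available exports and exits without integrating.
sed -i 's/^PRINTSPEC: 0/PRINTSPEC: 3/' CAP.rc
cat CAP.rc
```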


== Optimizing a Run ==
=== post.rc ===


== How to Obtain GEOS-5 and Compile Source Code ==
There are two options for obtaining the model source code: from the CVS repository on the NCCS progress server, and from the SVN "public" repository on the trac server.  Since the code on progress is more current, eligible users are strongly encouraged to obtain accounts from NCCS and use the progress repository.
=== Using the NCCS progress CVS code repository ===
The following assumes that you know your way around Unix, have successfully logged into your cluster account and have an account on the source code repository with the proper <code>ssh</code> configuration -- see the progress repository quick  start: https://progress.nccs.nasa.gov/trac/admin/wiki/CVSACL.  The link requires your NCCS username and password.
The commands below assume that your shell is <code>csh</code>.  Since the scripts to build and run GEOS-5 tend to be written in <code>csh</code> as well, you shouldn't bother trying to import too much into an alternative shell.  If you prefer a different shell, it is easiest just to open a <code>csh</code> process to build the model and your experiment.
Furthermore, model builds should be created in your space under <code>/discover/nobackup</code>, as creating them under your home directory will quickly wipe out your disk quota.
Set the following environment variables:
setenv CVS_RSH ssh
setenv CVSROOT :ext:''USERID''@cvsacl:/cvsroot/esma
where ''USERID'' is, of course, your repository username, which should be the same as your NASA and NCCS username.  Then, issue the command:
cvs co -r  Fortuna-2_4 Fortuna
This should check out the latest stable version of the model from the repository and create a directory called <code>GEOSagcm</code>. 
=== Compiling the Model ===
<code>cd</code> into <code>GEOSagcm/src</code> and <code>source</code> the file called <code>g5_modules</code>:
source g5_modules
This will set up the build environment.  If you then type
module list
you should see:
Currently Loaded Modulefiles:
  1) comp/intel-11.0.083  2) mpi/impi-3.2.2.006    3) lib/mkl-10.0.3.020
If this all worked, then type:
gmake install
This will build the model.  It will take about 40 minutes.  If this works, it should create a directory under <code>GEOSagcm</code> called <code>Linux/bin</code>.  There you should find the executable <code>GEOSgcm.x</code>.
== Running GEOS-5 ==
First of all, to run jobs on the cluster you will need to set up passwordless <code>ssh</code> (which operates within the cluster).  To do so, run the following from your '''discover''' home directory:
cd .ssh
ssh-keygen -t dsa
cat id_dsa.pub >>  authorized_keys
Similarly, transferring the daily output files (in monthly tarballs) requires passwordless authentication from '''discover''' to '''dirac'''.  While in <code>~/.ssh</code> on '''discover''', run
  ssh-keygen -t dsa
Then, log into  '''dirac''' and cut and paste the contents of the <code>id_rsa.pub</code> and <code>id_dsa.pub</code> files on '''discover''' into the  <code>~/.ssh/authorized_keys</code> file on  '''dirac'''.  Problems with <code>ssh</code> should be referred to NCCS support.
To set up the model to run, run the following in the <code>GEOSagcm/src/Applications/GEOSgcm_App</code> directory:
gcm_setup
The <code>gcm_setup</code> script asks you a few questions such as an experiment name (with no spaces, called ''EXPID'') and description (spaces ok).  It will also ask you for the model resolution, expecting one of the available lat-lon domain sizes, the dimensions separated by a space.  For your first time out you will probably want to enter <code>144 91</code> (corresponding to ~2 degree resolution).  Towards the end it will ask you for a group ID -- the default is g0602 (GMAO modeling group).  Enter whatever is appropriate, as necessary.  The rest of the questions provide defaults which will be suitable for now, so just press enter for these. 
The script produces an experiment directory (''EXPDIR'') in your space as <code>/discover/nobackup/''USERID''/''EXPID''</code>, which contains, among other things, the sub-directories:
*<code>post</code>  (containing the incomplete post-processing job script and .rc file)
*<code>archive</code>  (containing an incomplete archiving job script)
*<code>plot</code>  (containing an incomplete plotting job script and .rc file)
The post-processing script will complete (i.e., add necessary commands to) the archiving and plotting scripts as it runs.  The setup script that you ran also creates an experiment home directory (''HOMEDIR'') as <code>~''USERID''/geos5/''EXPID''</code>  containing the run scripts and GEOS resource (<code>.rc</code>) files.
The run scripts need some more environment variables -- here are the minimum contents of a <code>.cshrc</code>:
umask 022
unlimit
limit stacksize unlimited
set arch = `uname`
setenv LD_LIBRARY_PATH ${LIBRARY_PATH}:${BASEDIR}/${arch}/lib
The <code>umask 022</code> is not strictly necessary, but it will make the various files readable to others, which will facilitate data sharing and user support.  Your home directory <code>~''USERID''</code> is also inaccessible to others by default; running <code>chmod 755 ~</code> is helpful.
Copy the restart (initial condition) files and associated <code>cap_restart</code> into ''EXPDIR''.  Keep the "originals" handy since if the job stumbles early in the run it might stop after having renamed them.  The model expects restart filenames to end in "rst" but produces them with the date and time appended, so you may have to rename them.  The <code>cap_restart</code> file is simply one line of text with the date of the restart files in the format YYYYMMDD<space>HHMMSS.  The boundary conditions/forcings are provided by symbolic links created by the run script. 
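Renaming such restarts can be sketched like this; the filename and its date/time suffix are hypothetical, and the <code>touch</code> merely fabricates a file so the sketch is self-contained:

```shell
# Fabricate a restart file with a date/time suffix (hypothetical naming).
touch fvcore_internal_rst.20000101_0000z

# Strip everything from the first dot onward so the name ends in "rst",
# as the model expects: fvcore_internal_rst.20000101_0000z -> fvcore_internal_rst
for f in *_rst.*; do
  mv "$f" "${f%%.*}"
done
ls fvcore_internal_rst
```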
If you need an arbitrary set of restarts, you can copy them from <code>/archive/u/aeichman/restarts/Fortuna-2_4/</code>, where they are indexed by date and resolution.
The script you submit, <code>gcm_run.j</code>, is in ''HOMEDIR''.  It should be ready to go as is.  The parameter END_DATE in <code>CAP.rc</code> (previously in <code>gcm_run.j</code>) can be set to the date you want the run to stop.  An alternative way to stop the run is by commenting out the line <code> if ( $capdate < $enddate ) qsub $HOMDIR/gcm_run.j</code> at the end of the script, which will prevent the script from being resubmitted, or rename the script file.  You may eventually want to tune parameters in the <code>CAP.rc</code> file JOB_SGMT (the number of days per segment, the interval between saving restarts) and NUM_SGMT (the number of segments attempted in a job) to maximize your run time. 
Submit the job with <code>qsub gcm_run.j</code>.  You can keep track of it with <code>qstat</code> or <code>qstat | grep ''USERID''</code>, or follow stdout with <code>tail -f /discover/pbs_spool/''JOBID''.OU</code>, ''JOBID'' being returned by <code>qsub</code> and displayed with <code>qstat</code>.  Jobs can be killed with <code>qdel ''JOBID''</code>.  The standard out and standard error will be delivered as files to the working directory at the time you submitted the job.
== Output and Plots ==
During a normal run, the <code>gcm_run.j</code> script will run the model for the segment length (current default is 8 days).  The model creates output files (with an <code>nc4</code> extension), also called collections (of output variables), in the <code>''EXPDIR''/scratch</code> directory.  After each segment, the script moves the output to <code>''EXPDIR''/holding</code> and spawns a post-processing batch job, which partitions and moves the output files within the <code>holding</code> directory to their own distinct collection directories, which are again partitioned into the appropriate year and month.  The post-processing script then checks to see if a full month of data is present.  If not, the post-processing job ends.  If there is a full month, the script will run the time-averaging executable to produce a monthly mean file in <code>''EXPDIR''/geos_gcm_*</code>.  The post-processing script then spawns a new batch job which will archive the data onto the mass-storage drives (<code>/archive/u/''USERID''/GEOS5.0/''EXPID''</code>).

If a monthly average file was made, the post-processing script will also check to see if it should spawn a plot job.  Currently, the criteria for plotting are: 1) the month created was February or August, AND 2) there are at least 3 monthly average files; if both hold, a plotting job for the season DJF or JJA will be issued.  The plots are created as gifs in <code>''EXPDIR''/plots_CLIM</code>.

The post-processing script can be found in <code>GEOSagcm/src/GMAO_Shared/GEOS_Util/post/gcmpost.script</code>.  The <code>nc4</code> output files can be opened and plotted with <code>gradsnc4</code> -- see http://www.iges.org/grads/gadoc/tutorial.html for a tutorial, but use <code>sdfopen</code> instead of <code>open</code>.


The contents of the output files (including which variables get saved) may be configured in the  <code>''HOMEDIR''/HISTORY.rc</code> -- a good description of this file may be found at http://modelingguru.nasa.gov/clearspace/docs/DOC-1190 .


'''Back to [[GEOS-5 Documentation for Fortuna 2.4]]'''

Latest revision as of 11:04, 21 May 2012

This page describes in detail how to set up and optimize a global model run of GEOS-5 Fortuna 2.4 on NCCS discover and NAS pleiades and generally make the model do what you want. It assumes that you have already run the model as described in Fortuna 2.4 Quick Start.

Back to GEOS-5 Documentation for Fortuna 2.4

Compiling the Model

Most of the time for longer runs you will be using a release version of the model, perhaps compiled with a different version of one or more of the model's gridded components, defined by subdirectories in the source code. This process starts with checking out the stock model from the repository using the command

 cvs co -r  TAGNAME -d DIRECTORY Fortuna

where TAGNAME is the model "tag" (version). A tag in cvs marks the versions of the source files in the repository that together make up a particular version of the model. A sample release tag is Fortuna-2_4_p2, indicating the version Fortuna 2.4 patch 2 (the latest version). DIRECTORY is the directory in which the source code tree will be created. If you are using a stock model tag, it is reasonable to name the directory after the tag. This directory determines which model build (presumably in your space) a particular experiment uses. Some scripts use the environment variable ESMADIR, which should be set to the absolute (full) pathname of this directory.

When a modified version of some component of the model is saved to the repository, the tag it uses -- different from the standard model tag -- is supposed to be applied only to the directories with modified files. This means that if you need to use a variant tag of a gridded component, you will have to cd to that directory and update to the variant tag. So, for example, if you needed to apply updates to the SatSim gridded component, you would have to cd several levels down to the directory GEOSsatsim_GridComp and run

 cvs upd -r  VARIANT_TAGNAME

The source code will then incorporate the tag's modifications.

Once the checkout from the repository is completed, you are ready to compile. cd to the src directory at the top of the source code directory tree and from a csh shell run source g5_modules. This will load the appropriate modules and create the necessary environment for compiling and running. It is tailored to the individual systems that GEOS-5 usually runs on, so it probably won't work elsewhere. After that you can run make install, which will create the necessary executables in the directory ARCH/bin, where ARCH is the local architecture (most often Linux).

Setting up a Global Model Run

The following describes how to set up a global model run. The procedure to set up a single column model run is described in Fortuna 2.4 Single Column Model.

Using gcm_setup

The setup script for global runs, gcm_setup, is in the directory src/Applications/GEOSgcm_App. The following is an example of a session with the setup script, with commentary:


 Enter the Experiment ID:

Enter a name and hit return. For this example we'll set the experiment ID to "myexp42". Experiment IDs must contain no whitespace and must not start with a digit, since the ID will be the prefix of job names and PBS imposes certain limits on job names.


Enter a 1-line Experiment Description:

This should be short but descriptive, since it will be used to label plots. It can have spaces, though the string will be stored with underscores for the spaces. Provide a description and hit return.

Enter the Lat/Lon Horizontal Resolution: IM JM
   or ..... the Cubed-Sphere Resolution: cNN

The lat/lon option allows four resolutions: 144x91, 288x181, 576x361 or 1152x721, corresponding roughly to 2, 1, 1/2 and 1/4 degree resolutions. Enter a resolution like so:

144 91

and hit enter.

Enter the Model Vertical Resolution: LM (Default: 72)

The current standard is 72 levels, and unless you know what you are doing you should stick with that.

Do you wish to run GOCART? (Default: NO or FALSE)

GOCART is the interactive chemistry package, as opposed to prescribed chemistry. It incurs a significant performance cost, so unless you know you want it, you should go with the default. The following assumes that you have entered "y".


Enter the GOCART Emission Files to use: "CMIP" (Default), "PIESA", or "OPS":

Select your favorite emission files here.

Enter the AERO_PROVIDER: GOCART (Default) or PCHEM:

Here you get to choose again to use interactive or prescribed aerosols.

Enter the tag or directory (/filename) of the HISTORY.AGCM.rc.tmpl to use
(To use HISTORY.AGCM.rc.tmpl from current build, Type:  Current         )
-------------------------------------------------------------------------
Hit ENTER to use Default Tag/Location: (Fortuna-2_4)

This provides a default HISTORY.rc (output specification) file. The initial default will be the tag of the build in which you are running gcm_setup. The idea is that you can save a custom HISTORY.rc to the repository and have it checked out for your experiments.


===================================================================
Checking out HISTORY.AGCM.rc.tmpl
RCS:  /cvsroot/esma/esma/src/Applications/GEOSgcm/Application/HISTORY.AGCM.rc.tmpl,v
VERS: 1.16.2.3
***************
 
Enter Desired Location for HOME Directory (to contain scripts and RC files)
Hit ENTER to use Default Location:
----------------------------------
Default:  /discover/nobackup/aeichman/myexp42

This option determines where the experiment's home directory is located -- where the basic job scripts and major RC files (AGCM.rc, CAP.rc and HISTORY.rc) will be located. The first time you run the script, it will default to a subdirectory named geos5 under your account's home directory; the script will remember what you decide (in ~/.HOMDIRroot) and use that as the default in subsequent runs. This initial default is fine, though another possibility is to enter your nobackup space, as shown here. This will place all of the experiment's HOME directory files together with the rest of its files.

Enter Desired Location for EXP Directory (to contain model output and restart files)
Hit ENTER to use Default Location:
----------------------------------
Default:  /discover/nobackup/aeichman/myexp42

This determines the experiment directory, where restart files and various job output are stored. These are the storage-intensive parts and so default to the nobackup space.

Enter Location for Build directory containing:  src/ Linux/ etc...
Hit ENTER to use Default Location:
----------------------------------
Default:  /discover/nobackup/aeichman/Fortuna-2_4

This determines which of your local builds is used to create the experiment. It defaults to the build of the script you are running, which is generally a good idea.

Current GROUPS: g0620
Enter your GROUP ID for Current EXP: (Default: g0620)

This is used by the job accounting system. If you are not in the default group, you will probably have been informed.

-----------------------------------

building file list ... done
GEOSgcm.x

sent 50117242 bytes  received 42 bytes  33411522.67 bytes/sec
total size is 50110997  speedup is 1.00
 
Creating gcm_run.j for Experiment myexp42 ...
Creating gcm_post.j for Experiment myexp42 ...
Creating gcm_plot.j for Experiment myexp42 ...
Creating gcm_archive.j for Experiment myexp42 ...
Creating gcm_regress.j for Experiment myexp42 ...
Creating AGCM.rc for Experiment myexp42 ...
Creating CAP.rc for Experiment myexp42 ...
Creating HISTORY.rc for Experiment myexp42 ...
 
Done!
-----
 
Build Directory: /discover/nobackup/aeichman/Fortuna-2_4
----------------
 
 
The following executable has been placed in your Experiment Directory:
----------------------------------------------------------------------
/discover/nobackup/aeichman/Fortuna-2_4/Linux/bin/GEOSgcm.x
 
 
You must now copy your AGCM Initial Conditions into: 
---------------------------------------------------- 
/discover/nobackup/aeichman/myexp42

And the experiment is set up. After you copy initial condition files (aka restarts) to the experiment directory, you can submit your job.

Do not copy old experiments

When creating related experiments, you will be tempted to copy the experiment directory tree of an older experiment. Do not copy old experiments; run gcm_setup instead. The run scripts created from templates by gcm_setup embed experiment-specific directories in numerous places, and they will wreak subtle and pervasive havoc if executed in an unexpected environment. This warning especially applies between model versions. A useful and relatively safe exception to this rule is to copy previously used examples of HISTORY.rc. However, you need to change the lines labeled EXPID and EXPDSC to the values in your automatically-generated HISTORY.rc, or the plotting will fail.

Using restart files

Restart files provide the initial conditions for a run, and a set needs to be copied into a fresh experiment directory before running. This includes the file cap_restart, which provides the model starting date and time in text. Restart files themselves are resolution-specific and sometimes change between model versions. As of the current model version, they are flat binary files with no metadata, so they tend to be stored together with restarts of the same provenance, with the date either embedded in the filename or in an accompanying cap_restart, typically under a directory indicating the model version.

A cleanly completed model run leaves a set of restarts and the corresponding cap_restart in its experiment directory. Another source is /archive/u/aeichman/restarts. During a run, restarts are also left in date-labeled tarballs in the restarts directory under the experiment directory before being transferred to the user's /archive space. You may have to create cap_restart yourself; it is simply one line of text with the date of the restart files in the format YYYYMMDD HHMMSS (with a space).
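For example, creating a cap_restart by hand is a one-line operation (the date here is hypothetical; use the date your restart files actually correspond to):

```shell
# Write a cap_restart for a model start of 15 April 2000, 21:00:00.
# The format is YYYYMMDD HHMMSS, separated by a single space.
echo "20000415 210000" > cap_restart
cat cap_restart
```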

Failing the above sources, you can convert restarts from different resolutions and model versions, including MERRA, as described in Regridding Restarts for Fortuna 2.4.


What Happens During a Run

When the script gcm_run.j starts running, it creates a directory called scratch and copies or links into it the model executable, rc files, restarts, and boundary conditions needed to run the model. It also creates a directory for each output collection (named with the prefix geosgcm_ in the default setup) in two places: under the holding directory, for output awaiting post-processing, and in the experiment directory, for post-processed output. It also tars the restarts and moves the tarball to the restarts directory.

Then the executable GEOSgcm.x is run in the scratch directory, starting from the date in cap_restart and integrating for the length of a segment. A segment is the span of model time that the model integrates before returning control to gcm_run.j, which does some housekeeping and then runs another segment. A model job will typically run a number of segments before resubmitting itself, ideally before the allotted wallclock time of the job runs out.

The processing that the various batch jobs perform is illustrated below:




Each time a segment ends, gcm_run.j submits a post-processing job before starting a new segment or exiting. The post-processing job moves the model output from the scratch directory to the respective collection directories under holding. It then determines whether there is enough output to create a monthly or seasonal mean; if so, it creates the means, moves them to the collection directories in the experiment directory, tars up the daily output, and submits an archiving job. The archiving job moves the tarred daily output, the monthly and seasonal means, and any tarred restarts to the user's space on the archive filesystem. The post-processing script also determines (assuming the default settings) whether enough output exists to create plots; if so, a plotting job is submitted to the queue. The plotting script produces a number of predetermined plots as .gif files in the plot_CLIM directory in the experiment directory.



As explained above, the contents of the cap_restart file determine the start of the model run in model time, which in turn determines the boundary conditions and the time stamps of the output. The end time may be set in CAP.rc with the property END_DATE (format YYYYMMDD HHMMSS, with a space), though integration is usually leisurely enough that one can simply kill the job or rename the run script gcm_run.j so that it is not resubmitted to the job queue.

Most of the other properties in CAP.rc are discussed elsewhere, but two that are important for understanding how the batch jobs work are JOB_SGMT, the length of a segment, and NUM_SGMT, the number of segments run per batch job.
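As a sketch, the relevant CAP.rc entries might look like the following (the dates and counts are hypothetical illustrations, not recommended values):

```
END_DATE:     20091231 210000
JOB_SGMT:     00000015 000000
NUM_SGMT:     4
```

Here JOB_SGMT uses the same YYYYMMDD HHMMSS layout as a duration, so each segment is 15 days of model time, and with NUM_SGMT set to 4 each batch job would integrate up to 60 days before resubmitting itself.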

Determining Output: HISTORY.rc

The contents of the file HISTORY.rc (in your experiment HOME directory) tell the model what state and diagnostic fields to output, and how. The default HISTORY.rc provides many fields as is, but you may want to modify it to suit your needs.

File format

The top of a default HISTORY.rc will look something like this:

EXPID:  myexp42
EXPDSC: this_is_my_experiment
  
 
COLLECTIONS: 'geosgcm_prog'
             'geosgcm_surf'
             'geosgcm_moist'
             'geosgcm_turb'

[....]

The attribute EXPID must match the name of the experiment HOME directory; this is only an issue if you copy HISTORY.rc from a different experiment. The EXPDSC attribute is used to label the plots. The COLLECTIONS attribute contains a list of strings naming the output collections to be created. The contents of the individual collections are specified after this list. An individual collection can be "turned off" by commenting out its line with a #.
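For example, to disable the geosgcm_turb collection from the listing above while keeping the others, one would comment out its line:

```
COLLECTIONS: 'geosgcm_prog'
             'geosgcm_surf'
             'geosgcm_moist'
            #'geosgcm_turb'
```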

The following is an example of a collection specification:

  geosgcm_prog.template:  '%y4%m2%d2_%h2%n2z.nc4',
  geosgcm_prog.archive:   '%c/Y%y4',
  geosgcm_prog.format:    'CFIO',
  geosgcm_prog.frequency:  060000,
  geosgcm_prog.resolution: 144 91,
  geosgcm_prog.vscale:     100.0,
  geosgcm_prog.vunit:     'hPa',
  geosgcm_prog.vvars:     'log(PLE)' , 'DYN'          ,
  geosgcm_prog.levels:     1000 975 950 925 900 875 850 825 800 775 750 725 700 650 600 550 500 450 400 350 300 250 200 150 100 70 50 40 30 20 10 7 5 4 3 2 1 0.7 0.5 0.4 0.3 0.2 0.1 0.07 0.05 0.04 0.03 0.02,
  geosgcm_prog.fields:    'PHIS'     , 'AGCM'         ,
                          'T'        , 'DYN'          ,
                          'PS'       , 'DYN'          ,
                          'ZLE'      , 'DYN'          , 'H'   ,
                          'OMEGA'    , 'DYN'          ,
                          'Q'        , 'MOIST'        , 'QV'  ,
                          ::

The individual collection attributes are described below, but what users modify most often is the fields attribute, which determines which exports are saved in the collection. Each field record is a string with the name of an export from the model, followed by a string with the name of the gridded component that exports it, separated by a comma. Entries with a third column give the name under which that export is saved in the collection file, when it differs from the name of the export.
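As an illustration, assuming the DYN gridded component exports a field named U (an assumption to verify against the export list for your configuration, as described below), an additional record is simply a new line before the terminating :::

```
  geosgcm_prog.fields:    'PHIS'     , 'AGCM'         ,
                          'U'        , 'DYN'          ,
                          ::
```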

What exports are available?

To add export fields to the HISTORY.rc you will need to know what fields the model provides, which gridded component provides them, and their name. The most straightforward way to do this is to use PRINTSPEC. The setting for PRINTSPEC is in the file CAP.rc. By default the line looks like so:

PRINTSPEC: 0  # (0: OFF, 1: IMPORT & EXPORT, 2: IMPORT, 3: EXPORT)

Setting PRINTSPEC to 3 will make the model send to standard output a list of exports available to HISTORY.rc in the model's current configuration, and then exit without integrating. The list includes each export's gridded component and short name (both necessary to include in HISTORY.rc), long (descriptive) name, units, and number of dimensions. Note that run-time options can affect the exports available, so see to it that you have those set as you intend. The other PRINTSPEC values are useful for debugging.

While you could set PRINTSPEC, submit gcm_run.j with qsub, and get the export list as part of the PBS standard output, there are quicker ways of obtaining the list. One is to run the model as a single-column model on a single processor, as explained in Fortuna 2.4 Single Column Model. Another is to use an existing experiment. In the scratch directory of an experiment that has already run, change PRINTSPEC in CAP.rc as above. Then, in the file AGCM.rc, change the values of NX and NY (near the beginning of the file) to 1. Then, from an interactive job (one processor will suffice), run the executable GEOSgcm.x in scratch. You will need to run source src/g5_modules in the model's build tree to set up the environment. The model executable will simply output the export list and exit.
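The two rc-file edits can be scripted with sed. The sketch below creates toy one-line stand-ins for CAP.rc and AGCM.rc purely to demonstrate the substitutions; in practice you would run only the sed commands, in the scratch directory, against the real files:

```shell
# Toy stand-ins for the real rc files (for demonstration only).
printf 'PRINTSPEC: 0  # (0: OFF, 1: IMPORT & EXPORT, 2: IMPORT, 3: EXPORT)\n' > CAP.rc
printf '      NX: 4\n      NY: 24\n' > AGCM.rc

# Set PRINTSPEC to 3 (EXPORT) and the processor layout to a single processor.
sed -i 's/^PRINTSPEC: 0/PRINTSPEC: 3/' CAP.rc
sed -i 's/NX: *[0-9]*/NX: 1/' AGCM.rc
sed -i 's/NY: *[0-9]*/NY: 1/' AGCM.rc

cat CAP.rc AGCM.rc
```

After these edits, sourcing src/g5_modules from the build tree and running ./GEOSgcm.x in scratch prints the export list instead of integrating.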

Optimizing a Run

Tuning segments

Trimming plot verification datasets

Troubleshooting

post recover



Special Requirements

Perpetual ("Groundhog Day") mode

Saving restarts during a segment

post.rc

Back to GEOS-5 Documentation for Fortuna 2.4