Ganymed 3.0 User's Guide

 
 cvs co -r ''TAGNAME'' -d ''DIRECTORY'' Fortuna


where ''TAGNAME'' is the model "tag" (version).  A tag in <code>cvs</code> marks the versions of the source files in the repository that together make up a particular version of the model.  A sample release tag is <code>Ganymed-3_0_p1</code>, indicating the latest patch of version Ganymed 3.0 for general use (there is a <code>Ganymed-3_0_p2</code> patch for operations -- don't use it unless you know otherwise). ''DIRECTORY'' is the directory in which the source code tree will be created.  If you are using a stock model tag it is reasonable to give the directory the same name as the tag.  This directory, presumably in your own space, determines which model a particular experiment uses.  Some scripts use the environment variable <code>ESMADIR</code>, which should be set to the absolute (full) pathname of this directory.
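As a sketch, a checkout and <code>ESMADIR</code> setup might look like the following (the location under your home directory is a placeholder -- on discover you would typically use your <code>nobackup</code> space):

```shell
# Hypothetical sketch: check out a stock tag into a directory named
# after it, and point ESMADIR at that directory.
TAGNAME=Ganymed-3_0_p1
DIRECTORY="$HOME/models/$TAGNAME"       # placeholder location in your space
# cvs co -r "$TAGNAME" -d "$DIRECTORY" Fortuna   # requires repository access
export ESMADIR="$DIRECTORY"             # absolute pathname of the source tree
echo "$ESMADIR"
```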


When a modified version of some component of the model is saved to the repository, the tag it uses -- different from the standard model tag -- should be applied only to the directories containing modified files.  This means that if you need to use a variant tag of a gridded component, you will have to <code>cd</code> to that directory and update to the variant tag.  So, for example, if you needed to apply updates to the SatSim gridded component, you would have to <code>cd</code> several levels down to the directory <code>GEOSsatsim_GridComp</code> and run


<pre>
Enter the Atmospheric Horizontal Resolution code:
  or ..... the Cubed-Sphere Resolution: cNN
-----------------------------------------------------------
    Lat/Lon                    Cubed-Sphere
-----------------------------------------------------------
  b --  2  deg                c48  --  2  deg
  c --  1  deg                c90  --  1  deg
  d -- 1/2 deg                c180 -- 1/2  deg (56-km)
  e -- 1/4 deg (35-km)        c360 -- 1/4  deg (28-km) 
                              c720 -- 1/8  deg (14-km)
                              c1440 - 1/16 deg ( 7-km)
</pre>


Here you choose whether to run with lat/lon domain decomposition (i.e. how the globe gets distributed to processors) or with the cubed sphere.  The science between the two should be identical and the output transparent to the difference, but lat/lon is deprecated and all further development should be done with the cubed sphere.
 
The options b/c/d/e select a resolution with lat/lon, and c48-c1440 select one with the cubed sphere. Enter a resolution like so:


<pre>
c48
</pre>




<pre>
Do you wish to run the COUPLED Ocean/Sea-Ice Model? (Default: NO or FALSE)
</pre>


You probably don't, so hit enter.


<pre>
Enter the Data_Ocean Horizontal Resolution code: o1 (1  -deg,  360x180, (e.g. Reynolds) Default)
                                                o8 (1/8-deg, 2880x1440, (e.g. OSTIA))
</pre>


This selects the source of SST boundary conditions: 1-degree Reynolds or 1/8-degree OSTIA.  Unless you are running a higher-resolution experiment, the default will suffice.


<pre>
Hit ENTER to use Default Location:
----------------------------------
Default:  /discover/nobackup/aeichman/Ganymed-3_0_p1
</pre>




<pre>


sending incremental file list
GEOSgcm.x


sent 73969908 bytes  received 31 bytes  147939878.00 bytes/sec
total size is 73960783 speedup is 1.00
   
Creating gcm_run.j for Experiment: myexp42
Creating gcm_post.j for Experiment: myexp42
Creating gcm_archive.j for Experiment: myexp42
Creating gcm_regress.j for Experiment: myexp42
Creating gcm_convert.j for Experiment: myexp42
Creating gcm_plot.tmpl for Experiment: myexp42
Creating gcm_forecast.tmpl for Experiment: myexp42
Creating gcm_forecast.setup for Experiment: myexp42
Creating CAP.rc.tmpl for Experiment: myexp42
Creating AGCM.rc.tmpl for Experiment: myexp42
Creating HISTORY.rc.tmpl for Experiment: myexp42
Creating ExtData.rc.tmpl for Experiment: myexp42
Creating fvcore_layout.rc for Experiment: myexp42
   
Done!
-----
   
Build Directory: /discover/nobackup/aeichman/Ganymed-3_0_p1
----------------
   
The following executable has been placed in your Experiment Directory:
----------------------------------------------------------------------
/discover/nobackup/aeichman/Ganymed-3_0_p1/Linux/bin/GEOSgcm.x
   
   




Applications/GEOSgcm_App>  
</pre>




Each time a segment ends, <code>gcm_run.j</code> submits a post-processing job before starting a new segment or exiting.  The post-processing job moves the model output from the <code>scratch</code> directory to the respective collection directory under <code>holding</code>.  It then determines whether there is enough output to create a monthly or seasonal mean; if so, it creates them and moves them to the collection directories in the experiment directory, then tars up the daily output and submits an archiving job.  The archiving job tries to move the tarred daily output, the monthly and seasonal means and any tarred restarts to the user's space in the <code>archive</code> filesystem.  The post-processing script also determines (assuming the default settings) whether enough output exists to create plots; if so, a plotting job is submitted to the queue.  The plotting script produces a number of pre-determined plots as <code>.gif</code> files in the <code>plot_CLIM</code> directory in the experiment directory.
You can check on jobs in the queue with <code>qstat</code>.  The jobs associated with the run are named with the experiment name plus the type of job: RUN, POST, ARCH or PLT.
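For example, you can filter the queue listing for your experiment's jobs; the snippet below simulates this with a canned listing, and the exact job names shown are illustrative, not the scripts' literal naming:

```shell
# Illustrative only: filter a qstat-style listing for jobs belonging to
# experiment myexp42.  On discover you would pipe real output instead:
#   qstat -u $USER | grep myexp42
printf '%s\n' 'myexp42_RUN' 'myexp42_POST' 'otherexp_RUN' \
  | grep '^myexp42'
```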


As explained above, the contents of the <code>cap_restart</code> file determine the start of the model run in model time, which determines boundary conditions and the time stamps of the output.  The end time may be set in <code>CAP.rc</code> with the property <code>END_DATE</code> (format ''YYYYMMDD HHMMSS'', with a space), though integration is usually leisurely enough that one can just kill the job or rename the run script <code>gcm_run.j</code> so that it is not resubmitted to the job queue.
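As a sketch, an <code>END_DATE</code> entry stopping the run at the start of 2001 might look like the following (the date is hypothetical, and the colon-separated layout assumes the usual resource-file style):

```
END_DATE: 20010101 000000
```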


Most of the other properties in <code>CAP.rc</code> are discussed elsewhere, but two that are important for understanding how the batch jobs work are <code>JOB_SGMT</code>, the length of a segment, and <code>NUM_SGMT</code>, the number of segments that the job tries to run before resubmitting itself and exiting.  <code>JOB_SGMT</code> is in the format ''YYYYMMDD HHMMSS'' (but usually expressed in days) and <code>NUM_SGMT</code> is an integer, so the product of the two is the total model time that a job will attempt to run.  It may be tempting to just run one long segment, but much housekeeping is done between segments, such as saving state in the form of restarts and spawning the archiving jobs that keep your account from running over disk quota.  So to tune for the maximum number of segments in a job, it is usually best to manipulate <code>JOB_SGMT</code>.
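For instance, with the hypothetical settings below (colon-separated layout assumed, as elsewhere in the resource files), each job would run four 5-day segments, i.e. 20 days of model time per job submission:

```
JOB_SGMT: 00000005 000000
NUM_SGMT: 4
```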


== Determining Output: <code>HISTORY.rc</code> ==