 cvs co -r ''TAGNAME'' -d ''DIRECTORY'' Fortuna
where ''TAGNAME'' is the model "tag" (version). A tag in <code>cvs</code> marks the versions of the source files in the repository that together make up a particular version of the model. A sample release tag is <code>Fortuna-2_4_p2</code>, indicating version Fortuna 2.4 patch 2 (the latest version). ''DIRECTORY'' is the directory in which the source code tree will be created. If you are using a stock model tag it is reasonable to name the directory after the tag. This directory determines which model (presumably in your own space) a particular experiment uses. Some scripts use the environment variable <code>ESMADIR</code>, which should be set to the absolute (full) pathname of this directory.
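For example, to check out the Fortuna-2_4_p2 tag into a directory of the same name and point <code>ESMADIR</code> at it (a sketch; substitute your own username and the space where you keep builds):

 cvs co -r Fortuna-2_4_p2 -d Fortuna-2_4_p2 Fortuna
 setenv ESMADIR /discover/nobackup/''USERID''/Fortuna-2_4_p2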
When a modified version of some component of the model is saved to the repository, the tag it uses -- different from the standard model tag -- should be applied only to the directories containing modified files. This means that if you need to use a variant tag of a gridded component, you will have to <code>cd</code> to that directory and update to the variant tag. So, for example, if you needed to apply updates to the SatSim gridded component, you would have to <code>cd</code> several levels down to the directory <code>GEOSsatsim_GridComp</code> and run
Failing the above sources, you can convert restarts from different resolutions and model versions, including MERRA, as described in [[Regridding Restarts for Fortuna 2.4]].
=== Time in the model ===
== What Happens During a Run ==
Then the executable <code>GEOSgcm.x</code> is run in the <code>scratch</code> directory, starting with the date in <code>cap_restart</code> and running for the length of a segment. A segment is the length of model time that the model integrates before returning, letting <code>gcm_run.j</code> do some housekeeping and then run another segment. A model job will typically run a number of segments before trying to resubmit itself, ideally finishing before the allotted wallclock time of the job runs out.
The processing that the various batch jobs perform is illustrated below:

[[Image:F2.5-job-diagram002.png]]
Each time a segment ends, <code>gcm_run.j</code> submits a post-processing job before starting a new segment or exiting. The post-processing job moves the model output from the <code>scratch</code> directory to the respective collection directory under <code>holding</code>. It then determines whether there is enough output to create a monthly or seasonal mean; if so, it creates the means and moves them to the collection directories in the experiment directory, tars up the daily output, and submits an archiving job. The archiving job tries to move the tarred daily output, the monthly and seasonal means, and any tarred restarts to the user's space in the <code>archive</code> filesystem. The post-processing script also determines (assuming the default settings) whether enough output exists to create plots; if so, a plotting job is submitted to the queue. The plotting script produces a number of pre-determined plots as <code>.gif</code> files in the <code>plot_CLIM</code> directory in the experiment directory.
=== post.rc ===
== How to Obtain GEOS-5 and Compile Source Code ==

There are two options for obtaining the model source code: from the CVS repository on the NCCS progress server, and from the SVN "public" repository on the trac server. Since the code on progress is more current, eligible users are strongly encouraged to obtain accounts from NCCS and use the progress repository.

=== Using the NCCS progress CVS code repository ===

The following assumes that you know your way around Unix, have successfully logged into your cluster account, and have an account on the source code repository with the proper <code>ssh</code> configuration -- see the progress repository quick start: https://progress.nccs.nasa.gov/trac/admin/wiki/CVSACL. The link requires your NCCS username and password.
The commands below assume that your shell is <code>csh</code>. Since the scripts that build and run GEOS-5 tend to be written in <code>csh</code> as well, you shouldn't bother trying to import too much into an alternative shell. If you prefer a different shell, it is easiest just to open a <code>csh</code> process to build the model and your experiment.

Furthermore, model builds should be created in your space under <code>/discover/nobackup</code>, as creating them under your home directory will quickly wipe out your disk quota.

Set the following environment variables:
 setenv CVS_RSH ssh
 setenv CVSROOT :ext:''USERID''@cvsacl:/cvsroot/esma

where ''USERID'' is, of course, your repository username, which should be the same as your NASA and NCCS username. Then, issue the command:

 cvs co -r Fortuna-2_4 Fortuna

This should check out the latest stable version of the model from the repository and create a directory called <code>GEOSagcm</code>.
=== Compiling the Model ===

<code>cd</code> into <code>GEOSagcm/src</code> and <code>source</code> the file called <code>g5_modules</code>:

 source g5_modules

This will set up the build environment. If you then type

 module list

you should see:
 Currently Loaded Modulefiles:
   1) comp/intel-11.0.083   2) mpi/impi-3.2.2.006   3) lib/mkl-10.0.3.020

If this all worked, then type:

 gmake install

This will build the model, which takes about 40 minutes. If the build succeeds, it will create a directory under <code>GEOSagcm</code> called <code>Linux/bin</code>, in which you should find the executable <code>GEOSgcm.x</code>.
== Running GEOS-5 ==

First of all, to run jobs on the cluster you will need to set up passwordless <code>ssh</code> (which operates within the cluster). To do so, run the following from your '''discover''' home directory:

 cd .ssh
 ssh-keygen -t dsa
 cat id_dsa.pub >> authorized_keys
Similarly, transferring the daily output files (in monthly tarballs) requires passwordless authentication from '''discover''' to '''dirac'''. While in <code>~/.ssh</code> on '''discover''', run

 ssh-keygen -t dsa

Then, log into '''dirac''' and cut and paste the contents of the <code>id_rsa.pub</code> and <code>id_dsa.pub</code> files on '''discover''' into the <code>~/.ssh/authorized_keys</code> file on '''dirac'''. Problems with <code>ssh</code> should be referred to NCCS support.
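If you prefer not to cut and paste, the same can be accomplished in one step from '''discover''' with a command along these lines (a sketch; it assumes password authentication to '''dirac''' is still enabled and appends your public key to its <code>authorized_keys</code>):

 cat ~/.ssh/id_dsa.pub | ssh dirac 'cat >> ~/.ssh/authorized_keys'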
To set the model up to run, run the following in the <code>GEOSagcm/src/Applications/GEOSgcm_App</code> directory:

 gcm_setup

The <code>gcm_setup</code> script asks you a few questions, such as an experiment name (with no spaces, called ''EXPID'') and a description (spaces OK). It will also ask you for the model resolution, expecting one of the available lat-lon domain sizes, the dimensions separated by a space. For your first time out you will probably want to enter <code>144 91</code> (corresponding to ~2 degree resolution). Towards the end it will ask you for a group ID -- the default is g0602 (the GMAO modeling group). Enter whatever is appropriate. The rest of the questions provide defaults which will be suitable for now, so just press enter for these.
The script produces an experiment directory (''EXPDIR'') in your space as <code>/discover/nobackup/''USERID''/''EXPID''</code>, which contains, among other things, the sub-directories:

*<code>post</code> (containing the incomplete post-processing job script and .rc file)
*<code>archive</code> (containing an incomplete archiving job script)
*<code>plot</code> (containing an incomplete plotting job script and .rc file)

The post-processing script will complete (i.e., add the necessary commands to) the archiving and plotting scripts as it runs. The setup script also creates an experiment home directory (''HOMEDIR'') as <code>~''USERID''/geos5/''EXPID''</code> containing the run scripts and GEOS resource (<code>.rc</code>) files.
The run scripts need some more environment variables -- here are the minimum contents of a <code>.cshrc</code>:

 umask 022
 unlimit
 limit stacksize unlimited
 set arch = `uname`
 setenv LD_LIBRARY_PATH ${LIBRARY_PATH}:${BASEDIR}/${arch}/lib

The <code>umask 022</code> is not strictly necessary, but it will make your files readable to others, which facilitates data sharing and user support. Your home directory <code>~''USERID''</code> is also inaccessible to others by default; running <code>chmod 755 ~</code> is helpful.
Copy the restart (initial condition) files and the associated <code>cap_restart</code> into ''EXPDIR''. Keep the "originals" handy, since if the job stumbles early in the run it might stop after having renamed them. The model expects restart filenames to end in "rst", but produces them with the date and time appended, so you may have to rename them. The <code>cap_restart</code> file is simply one line of text with the date of the restart files in the format YYYYMMDD<space>HHMMSS. The boundary conditions/forcings are provided by symbolic links created by the run script.
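For example, a <code>cap_restart</code> for restarts valid at 21z on 1 January 1980 (a date chosen purely for illustration) would consist of the single line:

 19800101 210000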
If you need an arbitrary set of restarts, you can copy them from <code>/archive/u/aeichman/restarts/Fortuna-2_4/</code>, where they are indexed by date and resolution.
The script you submit, <code>gcm_run.j</code>, is in ''HOMEDIR''. It should be ready to go as is. The parameter END_DATE in <code>CAP.rc</code> (previously in <code>gcm_run.j</code>) can be set to the date you want the run to stop. An alternative way to stop the run is to comment out the line <code> if ( $capdate < $enddate ) qsub $HOMDIR/gcm_run.j</code> at the end of the script, which will prevent the script from resubmitting itself, or to rename the script file. You may eventually want to tune the parameters JOB_SGMT (the number of days per segment, which is also the interval between saved restarts) and NUM_SGMT (the number of segments attempted in a job) in the <code>CAP.rc</code> file to maximize your run time.
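As an illustration, the relevant <code>CAP.rc</code> entries for a run that stops at the start of 1981, with 8-day segments and four segments per job, might look like the following (the values here are hypothetical, and the actual file contains other settings as well):

 END_DATE:     19810101 000000
 JOB_SGMT:     00000008 000000
 NUM_SGMT:     4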
Submit the job with <code>qsub gcm_run.j</code>. You can keep track of it with <code>qstat</code>, or <code>qstat | grep ''USERID''</code>, or follow stdout with <code>tail -f /discover/pbs_spool/''JOBID''.OU</code>, where ''JOBID'' is returned by <code>qsub</code> and displayed by <code>qstat</code>. Jobs can be killed with <code>qdel ''JOBID''</code>. The standard out and standard error will be delivered as files to the directory that was your working directory at the time you submitted the job.
== Output and Plots ==

During a normal run, the <code>gcm_run.j</code> script will run the model for the segment length (the current default is 8 days). The model creates output files (with an <code>nc4</code> extension), also called collections (of output variables), in the <code>''EXPDIR''/scratch</code> directory. After each segment, the script moves the output to <code>''EXPDIR''/holding</code> and spawns a post-processing batch job, which partitions and moves the output files within the <code>holding</code> directory to their own distinct collection directories, which are again partitioned into the appropriate year and month. The post-processing script then checks to see if a full month of data is present. If not, the post-processing job ends. If there is a full month, the script runs the time-averaging executable to produce a monthly mean file in <code>''EXPDIR''/geos_gcm_*</code>. The post-processing script then spawns a new batch job which archives the data onto the mass-storage drives (<code>/archive/u/''USERID''/GEOS5.0/''EXPID''</code>).
If a monthly average file was made, the post-processing script will also check to see if it should spawn a plot job. Currently, the criteria for plotting are: 1) the month just created is February or August, AND 2) there are at least 3 monthly average files; if both hold, a plotting job for the season DJF or JJA is issued. The plots are created as gifs in <code>''EXPDIR''/plots_CLIM</code>.
The post-processing script can be found in <code>GEOSagcm/src/GMAO_Shared/GEOS_Util/post/gcmpost.script</code>. The <code>nc4</code> output files can be opened and plotted with <code>gradsnc4</code> -- see http://www.iges.org/grads/gadoc/tutorial.html for a tutorial, but use <code>sdfopen</code> instead of <code>open</code>.
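A minimal <code>gradsnc4</code> session along these lines will open a collection file and display a field (''FILENAME'' and ''VARIABLE'' are placeholders; substitute one of your own output files and a variable it contains):

 sdfopen ''FILENAME''.nc4
 display ''VARIABLE''
 quit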
The contents of the output files (including which variables get saved) may be configured in <code>''HOMEDIR''/HISTORY.rc</code> -- a good description of this file may be found at http://modelingguru.nasa.gov/clearspace/docs/DOC-1190 .
'''Back to [[GEOS-5 Documentation for Fortuna 2.4]]'''