GEOS-5 Quick Start

This page describes the minimum steps required to build and run GEOS-5 Fortuna 2.0 on discover.  You should successfully complete the steps in these instructions before doing anything more complicated.  '''FORTUNA 2.0 IS NOW DEPRECATED AND MINIMALLY SUPPORTED, AS IS THIS DOCUMENTATION.  PLEASE SEE THE DOCUMENTATION FOR THE LATEST VERSION'''.


== Checking Out and Compiling GEOS-5 ==


The following assumes that you know your way around Unix, have successfully logged into your NCCS account (presumably on the '''discover''' cluster) and have an account on the source code repository with the proper <code>ssh</code> configuration -- see the progress repository quick  start: https://progress.nccs.nasa.gov/trac/admin/wiki/CVSACL.   
 
The commands below assume that your shell is <code>csh</code>.  Since the scripts that build and run GEOS-5 are themselves written in <code>csh</code>, you shouldn't bother trying to import too much of their environment into an alternative shell.  If you prefer a different shell, it is easiest just to open a <code>csh</code> process to build the model and set up your experiment.
 
Furthermore, model builds should be created in your space under <code>/discover/nobackup</code>, as building under your home directory will quickly exhaust your disk quota.


Set the following three environment variables:


  setenv CVS_RSH ssh
  setenv CVSROOT :ext:''USERID''@progressdirect.nccs.nasa.gov:/cvsroot/esma
  setenv BASEDIR /discover/nobackup/projects/gmao/share/dao_ops/Baselibs/v3.1.5_build1


where ''USERID'' is, of course, your repository username, which should be the same as your NASA and NCCS username.  Then, issue the command:


  cvs co -r  Fortuna-2_0 Fortuna


This should check out the latest stable version of the model from the repository and create a directory called <code>GEOSagcm</code>.  <code>cd</code> into <code>GEOSagcm/src</code> and <code>source</code> the file called <code>g5_modules</code>:


  source g5_modules
If you then type

  module list

you should see:

  Currently Loaded Modulefiles:
    1) comp/intel-9.1.052   2) lib/mkl-9.1.023      3) mpi/impi-3.2.011

If this all worked, then type:

  gmake install

This will build the model.  It will take about 40 minutes.  If this works, it should create a directory under <code>GEOSagcm</code> called <code>Linux/bin</code>.  In here you should find the executable: <code>GEOSgcm.x</code>.
== Running GEOS-5 ==


First of all, to run jobs on the cluster you will need to set up passwordless <code>ssh</code> (which operates within the cluster).  To do so, run the following from your '''discover''' home directory:
 
  cd .ssh
  ssh-keygen -t dsa
  cat id_dsa.pub >> authorized_keys
 
Similarly, transferring the daily output files (in monthly tarballs) requires passwordless authentication from '''discover''' to '''palm'''.  While in <code>~/.ssh</code> on '''discover''', run  
 
  ssh-keygen -t dsa
 
Then, log into  '''palm''' and cut and paste the contents of the <code>id_rsa.pub</code> and <code>id_dsa.pub</code> files on '''discover''' into the  <code>~/.ssh/authorized_keys</code> file on  '''palm'''.
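Once the keys are in place you can sanity-check the setup.  This is just an illustrative snippet, not part of the official instructions; <code>localhost</code> stands in for any node you can reach within the cluster:

```shell
# Check that ssh works without prompting; BatchMode makes ssh fail
# immediately instead of asking for a password when key auth is missing.
if ssh -o BatchMode=yes -o ConnectTimeout=5 localhost true 2>/dev/null; then
  echo "passwordless ssh: OK"
else
  echo "passwordless ssh: NOT configured"
fi
```

If this reports <code>NOT configured</code>, re-check the contents of <code>~/.ssh/authorized_keys</code> and the permissions on <code>~/.ssh</code>.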


To set the model up to run, in the  <code>GEOSagcm/src/Applications/GEOSgcm_App</code> directory we run:


  gcm_setup


The <code>gcm_setup</code> script asks you a few questions, such as an experiment name (no spaces, called ''EXPID'') and a description (spaces OK).  It will also ask for the model resolution, expecting one of the available lat-lon domain sizes as two dimensions separated by a space.  For your first time out you will probably want to enter <code>144 91</code> (corresponding to ~2 degree resolution).  Towards the end it will ask for a group ID -- the default is g0602 (GMAO modeling group); enter whichever is appropriate for you.  The remaining questions provide defaults which will be suitable for now, so just press enter for these.


The script produces an experiment directory (''EXPDIR'') in your space as <code>/discover/nobackup/''USERID''/''EXPID''</code>, which contains, among other things, the sub-directories:


*<code>post</code> (containing the post-processing job script)
*<code>archive</code> (containing an incomplete archiving job script)
*<code>plot</code> (containing an incomplete plotting job script)


The post-processing script will complete (i.e., add necessary commands to) the archiving and plotting scripts as it runs.  The setup script that you ran also creates an experiment home directory (''HOMEDIR'') as <code>~''USERID''/geos5/''EXPID''</code>  containing the run scripts and GEOS resource (<code>.rc</code>) files.




The run scripts need some more environment variables -- here are the minimum contents of a <code>.cshrc</code>:

  umask 022
  unlimit
  limit stacksize unlimited
  source ~/.g5_modules
  set arch = `uname`
  setenv LD_LIBRARY_PATH ${LIBRARY_PATH}:${BASEDIR}/${arch}/lib


where <code>.g5_modules</code> is simply a copy of the <code>g5_modules</code> that you ran earlier before compiling.  The <code>umask 022</code> is not strictly necessary, but it will make the various files readable to others, which will facilitate data sharing and user support.  Your home directory <code>~''USERID''</code> is also inaccessible to others by default; running <code>chmod 755 ~</code> is helpful.
 
Copy the restart (initial condition) files and associated <code>cap_restart</code> into ''EXPDIR''.  Keep the "originals" handy since if the job stumbles early in the run it might stop after having renamed them.  The model expects restart filenames to end in "rst" but produces them with the date and time appended, so you may have to rename them.  The <code>cap_restart</code> file is simply one line of text with the date of the restart files in the format YYYYMMDD<space>HHMMSS.  The boundary conditions/forcings are provided by symbolic links created by the run script.  If you need an arbitrary set of restarts, you can copy them from <code>/discover/nobackup/aeichman/restarts/Fortuna-2_0/144x91/20080327_benchmark</code>.
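For example, the renaming and the <code>cap_restart</code> format can be sketched as follows (the restart names and the date/time suffix style here are made up for illustration; your files may differ):

```shell
# Work in a scratch directory; pretend these restarts came from a previous
# run, with the date/time appended after "rst" (hypothetical suffix style).
mkdir -p /tmp/restart_demo && cd /tmp/restart_demo
touch fvcore_internal_rst.20080327_0000z moist_internal_rst.20080327_0000z

# Strip everything after the first dot so the names end in "rst" again.
for f in *_rst.*; do
  mv "$f" "${f%%.*}"
done

# cap_restart: one line of text, YYYYMMDD HHMMSS, matching the restart date.
echo "20080327 000000" > cap_restart

ls
```

After this, the directory holds restarts named as the model expects, plus a matching <code>cap_restart</code>.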




The script you submit, <code>gcm_run.j</code>, is in ''HOMEDIR''.  It should be ready to go as is, though you may eventually want to tune JOB_SGMT (the number of days per segment, the interval between saving restarts) and NUM_SGMT (the number of segments attempted in a job) to maximize your run time.  Leave END_DATE alone in Fortuna 2.0 -- there is a bug that erroneously resubmits the script after this date.  You can stop the run by commenting out the <code>qsub $HOMEDIR/gcm_run.j</code> at the end of the script, which will prevent the script from being resubmitted.  Those and the PBS (batch system) parameters at the beginning are all that you will usually want to change in the script.
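The knobs mentioned above sit near the top of <code>gcm_run.j</code> and look roughly like this (a hypothetical excerpt -- the exact directive names and defaults in your copy may differ):

```shell
#PBS -l walltime=12:00:00     # batch system parameters you may adjust
#PBS -W group_list=g0602      # the group ID you chose in gcm_setup

set JOB_SGMT = 8              # days per segment (interval between restart writes)
set NUM_SGMT = 4              # segments attempted per batch job
```

Longer segments mean fewer restart writes per job; more segments per job means fewer trips through the batch queue.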


Submit the job with <code>qsub gcm_run.j</code>.  You can keep track of it with <code>qstat</code> or <code>qstat | grep ''USERID''</code>, or stdout with <code>tail -f /discover/pbs_spool/''JOBID''.OU</code>, ''JOBID'' being returned by <code>qsub</code> and displayed with <code>qstat</code>.  Jobs can be killed with <code>qdel ''JOBID''</code>.  The standard out and standard error will be delivered as files to the working directory at the time you submitted the job.


== Output and Plots ==


During a normal run, the <code>gcm_run.j</code> script will run the model for the segment length (current default is 8 days).  The model creates output files (with an <code>nc4</code> extension), also called collections (of output variables), in the <code>''EXPDIR''/scratch</code> directory.  After each segment, the script moves the output to <code>''EXPDIR''/holding</code> and spawns a post-processing batch job which partitions and moves the output files within the <code>holding</code> directory into their own distinct collection directories, each further partitioned into the appropriate year and month.  The post-processing script then checks to see if a full month of data is present.  If not, the post-processing job ends.  If there is a full month, the script runs the time-averaging executable to produce a monthly mean file in <code>''EXPDIR''/geos_gcm_*</code>, then spawns a new batch job which archives the data onto the mass-storage drives (<code>/archive/u/''USERID''/GEOS5.0/''EXPID''</code>).
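The partitioning step can be pictured with a small sketch (the file naming of the form ''EXPID''.''collection''.YYYYMMDD.nc4 and the directory layout are assumptions for illustration; the real <code>gcmpost.script</code> handles more cases):

```shell
# Simulate a holding directory with one freshly moved output file.
mkdir -p /tmp/holding_demo && cd /tmp/holding_demo
touch myexp.geosgcm_surf.20080415.nc4

# Sort each file into collection/YYYY/MM, as the post-processing job does.
for f in *.nc4; do
  col=$(echo "$f" | cut -d. -f2)   # collection name, e.g. geosgcm_surf
  ymd=$(echo "$f" | cut -d. -f3)   # date stamp, e.g. 20080415
  yyyy=${ymd%????}                 # first four digits: year
  mm=${ymd#????}; mm=${mm%??}      # middle two digits: month
  mkdir -p "$col/$yyyy/$mm"
  mv "$f" "$col/$yyyy/$mm/"
done

find . -type f   # ./geosgcm_surf/2008/04/myexp.geosgcm_surf.20080415.nc4
```

Once a collection directory holds a full month of files, the monthly averaging and archiving steps described above kick in.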


If a monthly average file was made, the post-processing script will also check to see if it should spawn a plot job.  Currently, our criteria for plotting are: 1) if the month created was February or August, AND 2) there are at least 3 monthly average files, then a plotting job for the seasons DJF or JJA will be issued.  The plots are created as gifs in <code>''EXPDIR''/plots</code>.
The post-processing script can be found in <code>GEOSagcm/src/GMAO_Shared/GEOS_Util/post/gcmpost.script</code>.  The <code>nc4</code> output files can be opened and plotted with <code>gradsnc4</code> -- see http://www.iges.org/grads/gadoc/tutorial.html for a tutorial, but use <code>sdfopen</code> instead of <code>open</code>.
The contents of the output files (including which variables get saved) may be configured in the  <code>''HOMEDIR''/HISTORY/tmpl</code> -- a good description of this file may be found at http://modelingguru.nasa.gov/clearspace/docs/DOC-1190 .
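A quick interactive <code>gradsnc4</code> session for a first look at a collection might go like this (the filename and the variable <code>ts</code> are made up for illustration; <code>q file</code> lists what is actually in yours):

```
ga-> sdfopen geosgcm_surf.200804.nc4
ga-> q file
ga-> d ts
ga-> printim ts_plot.gif gif
```

Here <code>q file</code> lists the variables available in the opened file, <code>d</code> displays one, and <code>printim</code> writes the current plot to a gif.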

''Latest revision as of 08:23, 13 October 2010''