GEOS-5 Quick Start
Revision as of 10:28, 19 October 2009
This section describes the minimum steps required to build and run GEOS-5 on discover. You should successfully complete the steps in these instructions before doing anything more complicated.
The following assumes that you know your way around Unix, have successfully logged into your NCCS account (presumably on the discover cluster), and have an account on sourcemotel. The commands below assume that your shell is csh. The scripts that build and run GEOS-5 tend to be written in csh as well, so you shouldn't bother trying to port too much into an alternative shell. If you prefer a different shell, it is easiest just to open a csh process to build the model and set up your experiment.
Set the following three environment variables:
setenv CVS_RSH ssh
setenv CVSROOT :ext:USERID@c-sourcemotel.gsfc.nasa.gov:/cvsroot/esma
setenv BASEDIR /discover/nobackup/projects/gmao/share/dao_ops/Baselibs/v3.1.5_build1
where USERID is, of course, your NCCS user-ID. Then, issue the command:
cvs co -r Fortuna-1_4 Fortuna
This should check out the latest stable version of the model from sourcemotel and create a directory called GEOSagcm. cd into GEOSagcm/src and source the file called g5_modules:
source g5_modules
If you then type module list you should see:
Currently Loaded Modulefiles:
  1) comp/intel-9.1.052
  2) lib/mkl-9.1.023
  3) mpi/impi-3.2.011
If this all worked, then type:
gmake install
This will build the model; it takes about 40 minutes. If it succeeds, it creates a directory under GEOSagcm called Linux/bin, in which you should find the executable GEOSgcm.x.
Setting up to run:
From the GEOSagcm/src/Applications/GEOSgcm_App directory, we run:
gcm_setup
After answering a few questions, the gcm_setup script produces an experiment directory which contains, among other things, the sub-directories:
post (containing the post-processing job script)
archive (containing an incomplete archiving job script)
plot (containing an incomplete plotting job script)
Note: The post-processing script will complete (i.e., add necessary commands to) the archiving and plotting scripts as it runs.
The run scripts need some more environment variables -- here are the minimum contents of a .cshrc:
umask 022
unlimit
limit stacksize unlimited
source ~/.g5_modules
set arch = `uname`
setenv LD_LIBRARY_PATH ${LIBRARY_PATH}:${BASEDIR}/${arch}/lib
where .g5_modules is simply a copy of the g5_modules file that you sourced earlier.
Copy the *rst and cap_restart files into the experiment directory EXPDIR, by default under /discover/nobackup/USERNAME. Keep the "originals" handy: if the model crashes early in the run, it may have renamed them and moved them into the EXPDIR/restarts/ directory.
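As a concrete sketch of this step (the directory names below are placeholders for this demonstration, not your actual experiment layout), the point is to copy rather than move, so the originals survive an early crash:

```shell
# Self-contained demonstration: create stand-in restart files in a scratch
# area. On discover you would instead start from your real restart files.
mkdir -p demo/run demo/exp
touch demo/run/fvcore_internal_rst demo/run/moist_internal_rst demo/run/cap_restart

# Copy (do not move) the restarts and cap_restart into the experiment directory,
# so the originals remain available if the model crashes early in the run.
cp demo/run/*rst demo/run/cap_restart demo/exp/

ls demo/exp
```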
For the moment, you will have to edit RC/Chem_Registry.rc so that every entry under "Active Constituents" except doing_PC is set to "no".
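One way to make that edit mechanically is sketched below. The miniature registry file here is hypothetical (the real RC/Chem_Registry.rc has many more entries and comments), so check the result by eye if you script this:

```shell
# Hypothetical miniature of the "Active Constituents" block, for illustration only.
cat > Chem_Registry.sample <<'EOF'
doing_PC:  yes
doing_OX:  yes
doing_CO:  yes
EOF

# Flip every flag to "no" except on the doing_PC line.
sed '/doing_PC/!s/yes/no/' Chem_Registry.sample > Chem_Registry.edited
cat Chem_Registry.edited
```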
The script you submit, gcm_run.j, is in HOMEDIR, by default under geos5 in your home directory. It should be ready to go as is, though you may eventually want to tune JOB_SGMT (the number of days between saved restarts, called a segment) and NUM_SGMT (the number of segments attempted in a job) to maximize your run time. END_DATE can be changed to your desired end date, or just left as is. Commenting out the "qsub gcm_run.j" at the end of the script will also stop the run, since that line is what resubmits the job. These and the batch parameters at the beginning are all that you will usually want to change.
Submit the job with "qsub gcm_run.j". You can keep track of it with qstat, or "qstat | grep USERNAME"; follow its stdout with "tail -f /discover/pbs_spool/JOBID.OU", where JOBID is returned by qsub and displayed by qstat. Jobs can be killed with "qdel JOBID". The stdout and stderr will be delivered as files to HOMEDIR at the end of a job.
During a normal run, the gcm_run script will run the model for a user-specified segment length (current default is 8 days). After each segment, the script spawns a post-processing batch job which partitions and moves the files within the "holding" directory to their own distinct "collection" directory which is again partitioned into the appropriate Year and Month. The post processing script then checks to see if a full month of data is present. If not, the post-processing job ends. If there is a full month, the script will then run the time-averaging executable to produce a monthly mean file. The post-processing script then spawns a new batch job which will archive the data onto the mass-storage drives.
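The month-completeness decision described above can be sketched roughly as follows; the directory layout and file names here are illustrative stand-ins, not what gcmpost.script actually uses:

```shell
# Stand-in "collection" directory for one month, with only 3 daily files present.
mkdir -p holding/geosgcm_prog/1999/M01
for d in 01 02 03; do
  touch holding/geosgcm_prog/1999/M01/prog.199901${d}.nc4
done

ndays=31                                            # days expected in this month
nfiles=$(ls holding/geosgcm_prog/1999/M01 | wc -l)  # files actually present
if [ "$nfiles" -ge "$ndays" ]; then
  echo "month complete: run the time-averaging executable"
else
  echo "month incomplete: post-processing job ends"
fi
```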
The post-processing script can be found in: .../src/GMAO_Shared/GEOS_Util/post/gcmpost.script
If a monthly average file was made, the post-processing script will also check whether it should spawn a plot job. Currently, the criteria for plotting are: (1) the month just created is February or August, and (2) there are at least 3 monthly average files; if both hold, a plotting job for the season DJF or JJA is issued.
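The plotting criteria amount to a simple two-part test, sketched below with hypothetical inputs (the real script derives these from the files on disk):

```shell
# Hypothetical inputs: the month of the monthly mean just produced, and the
# number of monthly average files accumulated so far.
month=02
nmonthly=3

# Plot only for February or August, and only once 3+ monthly means exist,
# so a full DJF or JJA season can be averaged.
if { [ "$month" = "02" ] || [ "$month" = "08" ]; } && [ "$nmonthly" -ge 3 ]; then
  decision="issue DJF/JJA plot job"
else
  decision="no plot job"
fi
echo "$decision"
```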