Running the GEOS-5 SBU Benchmark


Latest revision as of 05:25, 6 April 2017

==Build and install the model==

First, untar the model tarball in nobackup or swdev; the model alone will exceed the home disk quota:

$ tar xf GEOSadas-5_16_5-Benchmark.tar.gz

Next, set up ESMADIR:

$ setenv ESMADIR <directory-to>/GEOSadas-5_16_5-Benchmark/GEOSadas

Note that $ESMADIR is the directory directly above src/; that is, src/ lives at $ESMADIR/src.
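As a quick sanity check that ESMADIR points at the right level of the tree, you can test for the src/ directory beneath it. This is a sketch in bash syntax (on tcsh, use setenv as above), and the /tmp path is a stand-in purely for illustration — point ESMADIR at your real untarred model:

```shell
# Stand-in tree for illustration only; use your real untar location.
export ESMADIR=/tmp/GEOSadas-demo/GEOSadas
mkdir -p "$ESMADIR/src"

# In a correct setup, src/ lives directly under $ESMADIR.
if [ -d "$ESMADIR/src" ]; then
  echo "ESMADIR looks right: $ESMADIR"
fi
```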

Go into the src/ directory of your model. Following the path above:

$ cd $ESMADIR/src

Setup the environment by sourcing the g5_modules file:

$ source g5_modules

To build the model, you have two choices. First, you can use the parallel_build.csh script to submit a PBS job that compiles the model:

$ ./parallel_build.csh

or you can interactively build the model using:

$ make install

To capture the install log, we recommend tee'ing the output to a file:

$ make install |& tee make.install.log (on tcsh)
$ make install 2>&1 | tee make.install.log (on bash)

Note you can also build in parallel interactively with:

$ make -jN pinstall |& tee make.pinstall.log (on tcsh)

where N is the number of parallel processes. From testing, 8 jobs is about as much as is useful; you can use more, but no further benefit will accrue.

By default, the Intel Fortran compiler (ifort) is used for the build process. For other compilers, contact matthew.thompson@nasa.gov for instructions to use GCC or PGI compilers.

===Monitor build process===

The build can be monitored using the utility gmh.pl in the directory Config. From the src directory:

$ Config/gmh.pl -Av make.install.log

outputs the build status as

                          --------
                          Packages
                          --------

         >>>> Fatal Error           .... Ignored Error

 [ok]      Config
 [ok]      GMAO_Shared
 [ok]      |    GMAO_mpeu
 [ok]      |    |    mpi0
 [ok]      |    GMAO_pilgrim
 [ok]      |    GMAO_gfio
 [ok]      |    |    r4
 [ok]      |    |    r8
 [ok]      |    GMAO_perllib
 [ok]      |    MAPL_cfio
 [ok]      |    |    r4
 [ok]      |    |    r8
 [ok]      |    MAPL_Base
 [ok]      |    |    TeX
 [ok]      |    GEOS_Shared
 [ 1] .... .... Chem_Base
 [ok]      |    Chem_Shared
 [ok]      |    GMAO_etc
 [ok]      |    GMAO_hermes
 [ 2] .... .... GFDL_fms
 [ok]      |    GEOS_Util
 [ok]      |    |    post 

                          -------
                          Summary
                          -------

IGNORED mpp_comm_sma.d mpp_transmit_sma.d Chem_AodMod.d (3 files in 2 packages)
All 22 packages compiled successfully.

In case of errors, gmh summarizes exactly where the failure happened by indicating the package in which it occurred. Caveat: it does not work on parallel-build output (the interleaved output is scrambled). So, if the parallel build fails, rerun it sequentially (it will go quickly and die in the same place) and run gmh on that output for a summary.
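If gmh.pl is not handy, a rough substitute is to grep the install log for the first compiler error. This is only a heuristic sketch, not gmh's actual logic, and the two demo log lines below are fabricated for illustration:

```shell
# Fabricated two-line log purely for demonstration.
printf '%s\n' 'ifort -c foo.F90' \
              'foo.F90(12): error #5082: Syntax error' > make.demo.log

# Print the first line that looks like an ifort error message, with its line number.
grep -n -m1 'error #' make.demo.log
# → 2:foo.F90(12): error #5082: Syntax error
```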

==Benchmark Run==

The full benchmark run is a 5-day run using a portable version of the GEOS-5 boundary conditions. It will use considerable disk space and (as configured below) 5400 cores. For effective benchmarking of I/O, it's recommended to run on a filesystem less congested than nobackup.

===Learn to love tcsh===

One preliminary note: GEOS-5 is, in many ways, a collection of csh/tcsh scripts. If things start going wrong, the answer is often "change your shell to tcsh and try again". Yes, it's not bash/fish/zsh, but it is what it is. Switching shells shouldn't be necessary for this automated workflow, but keep it in mind if problems arise.

===Setting up benchmark experiment===

Go into the model application directory and do a couple of preliminary commands:

$ cd $ESMADIR/src/Applications/GEOSgcm_App

These echos set up some defaults. While the two locations don't both have to be the same, it's highly recommended that they are, and the script below assumes they are. Also, make sure they are in a nobackup directory.

For the next few commands, you will need to know the location of the portable BCs directory used for this experiment, which is referred to below as $PORTBCS. On discover, a version will always be at:

/discover/nobackup/mathomp4/HugeBCs-H50
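The commands that follow assume $PORTBCS is set to that directory. A minimal sketch in bash syntax (on tcsh, use setenv PORTBCS <path> instead; substitute your own site's copy if you are not on discover):

```shell
# Point PORTBCS at the portable BCs directory (discover path from the text).
export PORTBCS=/discover/nobackup/mathomp4/HugeBCs-H50

# The scripts used below then resolve like this:
echo "$PORTBCS/scripts/create_expt.py"
# → /discover/nobackup/mathomp4/HugeBCs-H50/scripts/create_expt.py
```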

To create the experiment, run create_expt.py and choose a C720 horizontal resolution with climatological GOCART:

$ $PORTBCS/scripts/create_expt.py benchmark-GEOSadas-5_16-5-5day-c720 --horz c720 --ocean o3 --gocart C --account <ACCOUNTID> --expdir <root-for-experiment>
Found c720 horizontal resolution in experiment name
Using c720 horizontal resolution

Assuming default vertical resolution of 72
Using 72 vertical resolution

Ocean resolution of o3 passed in 
Using o3 ocean resolution

Using climatological aerosols
 Running gcm_setup...done.

Experiment is located in directory: <root-for-experiment>/benchmark-GEOSadas-5_16-5-5day-c720

Again, if you don't pass in an account-id, you'll get the default of g0620 (the developer's account).

===Setup and Run Benchmark===

Now change to the experiment directory and run MakeSBUBench.bash which will set up the experiment:

$ $PORTBCS/scripts/MakeSBUBench.bash

The script sets up the run for 5 days using 5400 cores, and other flags are set to best emulate Ops.

NOTE: This will also set the experiment to run in this SLURM environment:

#SBATCH --partition=preops
#SBATCH --qos=benchmark

This is how the script's developer can run 5400-core jobs; others will likely need a different partition/QOS. Before submitting with sbatch, edit these lines to a partition and QOS that you have access to and that can accept a 5400-core job.
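One way to make that edit non-interactively is with sed. This is only a sketch: "mypartition" and "myqos" are placeholders for values you actually have access to, and the demo file below stands in for the real gcm_run.j (in practice, run the sed commands on gcm_run.j itself):

```shell
# Stand-in for the real gcm_run.j, holding the two lines the script writes.
printf '%s\n' '#SBATCH --partition=preops' \
              '#SBATCH --qos=benchmark' > gcm_run.j.demo

# Swap in your own partition/QOS before submitting (GNU sed in-place edit).
sed -i -e 's/^#SBATCH --partition=.*/#SBATCH --partition=mypartition/' \
       -e 's/^#SBATCH --qos=.*/#SBATCH --qos=myqos/' gcm_run.j.demo

cat gcm_run.j.demo
```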

Finally, submit the job:

$ sbatch gcm_run.j