Heracles Single Column Model
This page describes the steps necessary to set up and run the Single Column Model (SCM) under Heracles on discover. It assumes that you have already built and run the model successfully.

== Setting Up and Running Existing SCM Experiments ==
Create your own run directory in $NOBACKUP, then modify and uncomment the first executable line of <code>scm_setup</code>, which assigns <code>ESMADIR</code> to the local GEOS-5 build you are using for the SCM (you may already have this set as an environment variable). Uncomment one of the lines that assign the variable <code>CASEDIR</code> to choose the experiment to run. Then run the script from the run directory you have created. For example, if you selected the <tt>arm_97jul</tt> case:
 $ ./scm_setup
 arm_97jul
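For reference, after editing, the uncommented assignments near the top of <code>scm_setup</code> should look something like the following (a sketch only; the exact syntax comes from the script itself, and the paths are placeholders for your own build and case locations):
 setenv ESMADIR  /discover/nobackup/username/Heracles-UNSTABLE/GEOSagcm
 setenv CASEDIR  /path/to/scm/cases/arm_97jul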
The script copies all of the necessary resource, forcing, and data files to the working directory:
<nowiki>
$ ls -ltr
total 11M
-rwxr-xr-x 1 mathomp4 g0620 5.0K Jun 22 14:35 scm_setup*
lrwxrwxrwx 1 mathomp4 g0620   78 Jun 22 14:35 GEOSgcm.x -> /discover/swdev/mathomp4/Models/Heracles-UNSTABLE/GEOSagcm/Linux/bin/GEOSgcm.x*
-rwxr-xr-x 1 mathomp4 g0620 2.5M Jun 22 14:35 arm_97jul.dat*
-rwxr-xr-x 1 mathomp4 g0620   12 Jun 22 14:35 topo_dynave.data*
-rwxr-xr-x 1 mathomp4 g0620  15K Jun 22 14:35 fraci.data*
-rwxr-xr-x 1 mathomp4 g0620 1.1K Jun 22 14:35 SEAWIFS_KPAR_mon_clim.data*
-rwxr-xr-x 1 mathomp4 g0620  15K Jun 22 14:35 sst.data*
-rwxr-xr-x 1 mathomp4 g0620  15K Jun 22 14:35 sstsi.data*
-rwxr-xr-x 1 mathomp4 g0620  464 Jun 22 14:35 tile.data*
-rwxr-xr-x 1 mathomp4 g0620   12 Jun 22 14:35 topo_gwdvar.data*
-rwxr-xr-x 1 mathomp4 g0620   12 Jun 22 14:35 topo_trbvar.data*
-rwxr-xr-x 1 mathomp4 g0620 3.6K Jun 22 14:35 lai.dat*
-rwxr-xr-x 1 mathomp4 g0620 1.1K Jun 22 14:35 green.dat*
-rwxr-xr-x 1 mathomp4 g0620 1.1K Jun 22 14:35 nirdf.dat*
-rwxr-xr-x 1 mathomp4 g0620   12 Jun 22 14:35 vegdyn.data*
-rwxr-xr-x 1 mathomp4 g0620 1.1K Jun 22 14:35 visdf.dat*
-rwxr-xr-x 1 mathomp4 g0620  768 Jun 22 14:35 catch_internal_rst*
</nowiki>
Each experiment requires its own directory. If you modify the resource files (e.g., HISTORY.rc), you may want to copy the setup directory to your own area and modify it and the setup script accordingly, so that a later run of the setup script does not clobber your modifications.
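For example (the paths here are illustrative), you might keep a private copy of a case directory under your own area:
 $ cp -r /path/to/central/scm/cases/arm_97jul $NOBACKUP/scm_cases/arm_97jul
Then edit the <code>CASEDIR</code> line in your copy of <code>scm_setup</code> to point at <tt>$NOBACKUP/scm_cases/arm_97jul</tt>, and rerun it from your run directory.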
Then you can run the model executable from the command line in the directory you created. You will have to load the proper modules by sourcing <code>src/g5_modules</code> in the build you are using. Although the SCM runs on a single processor, on '''discover''' you should run it from an interactive job on a compute node (as opposed to the '''discover''' front end). This can be done by running:
 $ xalloc --ntasks=1 --time=08:00:00 --job-name=Interactive --account='''accountnumber'''
 salloc: Pending job allocation 8883687
 salloc: job 8883687 queued and waiting for resources
 salloc: job 8883687 has been allocated resources
 salloc: Granted job allocation 8883687
 srun.slurm: cluster configuration lacks support for cpu binding
 $ hostname
 borgo065
 $
where '''accountnumber''' is the account you wish to charge to. (NOTE: your hostname will probably differ; the point is to show that you are no longer on a discover head node but on a borg compute node.)
Once the allocation is granted, an interactive shell starts on the job node, from which you can run the GEOS-5 executable. Since <code>scm_setup</code> copies all of the necessary configuration files to the experiment directory, running the SCM requires none of the extra environment infrastructure that the run script <code>gcm_run.j</code> creates for a global experiment.
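If the proper modules are not already loaded in this interactive shell, source <code>g5_modules</code> from your build first (assuming <code>ESMADIR</code> points at the build as above; note that <code>g5_modules</code> is written for csh-family shells):
 $ source $ESMADIR/src/g5_modules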
To run the SCM, issue:
<nowiki>$ mpirun -np 1 ./GEOSgcm.x |& tee run.log
srun.slurm: cluster configuration lacks support for cpu binding
In MAPL_Shmem:
NumCores per Node = 1
NumNodes in use = 1
Total PEs = 1
In MAPL_InitializeShmem (NodeRootsComm):
NumNodes in use = 1
Integer*4 Resource Parameter HEARTBEAT_DT: 1800
NOT using buffer I/O for file: cap_restart
Read CAP restart properly, Current Date = 1997/06/18
Current Time = 23:30:00
Character Resource Parameter ROOT_CF: AGCM.rc
Character Resource Parameter ROOT_NAME: GCS
...
</nowiki>
and it should run.
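A quick way to confirm that the run completed, and to find its output, is to check the end of the log and list the run directory; the diagnostics requested in <code>HISTORY.rc</code> are typically written as netCDF (<tt>.nc4</tt>) files named after the experiment and collection (the exact names depend on your settings):
 $ tail run.log
 $ ls -ltr *.nc4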
== Creating Driving Datasets from MERRA ==