Using ESMPy on Discover

== Load Modules ==
=== Base Modules ===

Only a couple of Compiler+MPI combinations have been tested. These examples are based on the modules used in the <tt>GEOSagcm-bridge-DEVEL</tt> git repo:
  other/comp/gcc-5.3-sp3
  other/SSSO_Ana-PyD/SApd_4.1.1_py2.7_gcc-5.3-sp3
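
These get loaded with <tt>module load</tt> in the usual way, for example (a sketch using just the modules listed above; your full environment may need more):

  $ module load other/comp/gcc-5.3-sp3
  $ module load other/SSSO_Ana-PyD/SApd_4.1.1_py2.7_gcc-5.3-sp3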
<!--

And you point to the Baselibs in:

  /discover/swdev/mathomp4/Baselibs/ESMA-Baselibs-5.0.4-beta25/x86_64-unknown-linux-gnu/ifort_17.0.0.098-intelmpi_17.0.0.098

(Equivalently, you can source <tt>g5_modules</tt> in the git DEVEL repo.)
-->
=== Extra Modules for mpi4py and ESMPy ===

Next, load mpi4py and ESMPy:

  $ module use -a /home/mathomp4/modulefiles
  $ module load python/mpi4py/2.0.0/ifort_17.0.0.098-intelmpi_17.0.0.098 python/ESMPy/7.1.0b25/ifort_17.0.0.098-intelmpi_17.0.0.098

This should set up pretty much everything. To make sure, run a simple test:
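
For example, one minimal check (a sketch; any command that imports both packages would do) is:

  $ python -c "import mpi4py; import ESMF"

If that returns without an <tt>ImportError</tt>, the environment is wired up.
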
To run the examples that ESMF provides, copy them from:

  $ cp -r /discover/swdev/mathomp4/Baselibs/ESMA-Baselibs-5.0.4-beta25/src/esmf/src/addon/ESMPy/examples .

=== Download Test NC4 Files ===
  srun.slurm: cluster configuration lacks support for cpu binding
  ESMPy Ungridded Field Dimensions Example
   interpolation mean relative error = 0.000768815903364
   mass conservation relative error  = 1.49257625157e-16

  $ mpirun -np 6 python examples/grid_mesh_regrid.py
  ESMPy Grid Mesh Regridding Example
   interpolation mean relative error = 0.00235869859211

  $ mpirun -np 6 python ./examples/cubed_sphere_to_mesh_regrid.py
  srun.slurm: cluster configuration lacks support for cpu binding
  ESMPy cubed sphere Grid Mesh Regridding Example
   interpolation mean relative error = 4.39699714748e-06
   interpolation max relative (pointwise) error = 9.70527691296e-05

=== Launching MPI ===

ESMPy has a couple of ways to launch MPI, as noted in [http://www.earthsystemmodeling.org/esmf_releases/last_built/esmpy_doc/html/api.html#parallel-execution the API documentation]. So far, only the mpirun method works. I cannot figure out how to run the mpi_spawn_regrid.py example. If anyone does know, please inform me and edit this section, but it might just be "You can't on a cluster" or "You can't with Intel MPI".
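
For context, the spawn-based launch path relies on MPI dynamic process management through mpi4py. Below is a minimal sketch of that parent/child spawn pattern (illustration only; the file name is hypothetical and this is not the mpi_spawn_regrid.py script itself). If the MPI build or the batch environment does not allow dynamic spawning, a script like this fails in the same way.

 # spawn_sketch.py -- hypothetical minimal example of the mpi4py dynamic-spawn
 # pattern (illustrative only; not the ESMPy mpi_spawn_regrid.py example)
 import sys
 from mpi4py import MPI
 
 parent = MPI.Comm.Get_parent()
 if parent == MPI.COMM_NULL:
     # Parent process: spawn 4 children running this same script
     children = MPI.COMM_SELF.Spawn(sys.executable, args=[__file__], maxprocs=4)
     children.Barrier()      # wait until all children reach their barrier
     children.Disconnect()
 else:
     # Child process: the real work (a regrid, say) would go here
     parent.Barrier()
     parent.Disconnect()

It is launched as a plain <tt>python spawn_sketch.py</tt> rather than through mpirun; that self-launching step is what some MPI builds and batch setups do not permit.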