Building CVS Baselibs

== Build Baselibs ==


The next task is to build Baselibs. In order to build it correctly, two arguments are needed: <tt>ESMF_COMM</tt> and <tt>CONFIG_SETUP</tt>. <tt>ESMF_COMM</tt> is the MPI stack used by ESMF (usually <tt>mvapich2</tt>, <tt>mpi</tt> (for SGI MPT or other vendor MPI), <tt>openmpi</tt>, or <tt>intelmpi</tt>). <tt>CONFIG_SETUP</tt> is an "identifier" that allows you to build multiple versions of Baselibs for multiple compiler/MPI combinations in the same checkout. The recommended style for, say, Intel 15.0.2.164 and Intel MPI 5.0.3.048 is <tt>CONFIG_SETUP=ifort_15.0.2.164-intelmpi_5.0.3.048</tt>, where you identify the compiler (by its name on the command line), its version, the MPI stack, and its version.
Note that if you do not set a <tt>CONFIG_SETUP</tt>, Baselibs will instead build into a directory named after <tt>$(FC)</tt>, such as <tt>ifort</tt> or <tt>gfortran</tt>. This is fine as long as you only build for that compiler once. Build again (say, for a different MPI stack) and you will overwrite that first build!
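The naming scheme above can be sketched in shell. This is a minimal sketch: the version strings are just the examples from the text, and the commented-out <tt>make install</tt> invocation is an assumption about the build command, so check the Makefile in your own checkout.

```shell
# Build an identifier of the form <compiler>_<version>-<mpistack>_<version>,
# using the example versions from the text above.
FC=ifort
FC_VERSION=15.0.2.164
MPI=intelmpi
MPI_VERSION=5.0.3.048
CONFIG_SETUP="${FC}_${FC_VERSION}-${MPI}_${MPI_VERSION}"
echo "$CONFIG_SETUP"

# Hypothetical build invocation (the exact make target may differ in your
# Baselibs checkout). CONFIG_SETUP keeps this build separate from builds
# done with other compiler/MPI combinations:
# make install ESMF_COMM="$MPI" CONFIG_SETUP="$CONFIG_SETUP"
```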


=== Load modules ===
This example is for 8 nodes. This is not because of Intel MPI ''per se'' but because of codes like netCDF-C that use <tt>mpiexec</tt> to start an MPI process. In Intel MPI versions older than 2017, <tt>mpiexec</tt> invokes the old MPD launcher rather than Hydra.
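One way to sidestep the MPD behavior: Intel MPI also ships the Hydra launcher as <tt>mpiexec.hydra</tt>, so a wrapper script can prefer it when present. This fallback logic is an illustration, not part of the Baselibs build itself.

```shell
# Prefer the Hydra launcher when it is on PATH; Intel MPI versions older
# than 2017 ship an MPD-based mpiexec alongside mpiexec.hydra.
if command -v mpiexec.hydra >/dev/null 2>&1; then
  LAUNCHER=mpiexec.hydra
else
  LAUNCHER=mpiexec
fi
echo "Using launcher: $LAUNCHER"
```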


<!--
== Table of Modules ==


| mpi-sgi/mpt.2.06rp16
|}
-->


[[Category:Baselibs]]
[[Category:SI Team]]