Building CVS Baselibs

This page will detail the process of building Baselibs. For the purposes of this page, we use GMAO-Baselibs-4_0_7 as an example tag, but this process should work with any recent tag.


==Check out Baselibs==
Next, check out the tag:


  $ bcvs co -r GMAO-Baselibs-4_0_7 -d GMAO-Baselibs-4_0_7 Baselibs


This will create a <tt>GMAO-Baselibs-4_0_7</tt> directory, inside which is a <tt>src</tt> directory.


== Build Baselibs ==
For our example we want to load these modules:


  1) comp/intel-15.0.2.164
  2) mpi/impi-5.0.3.048
  3) lib/mkl-15.0.2.164
  4) other/comp/gcc-4.6.3-sp1
  5) other/SIVO-PyD/spd_1.20.0_gcc-4.6.3-sp1_mkl-15.0.0.090


For other modules see the table at the end.
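These can be loaded with the <tt>module</tt> command; the following is a minimal sketch, assuming the module names listed above are available on your system:

```shell
# Start from a clean slate, then load the example modules listed above.
# Module names are site-specific; check "module avail" and adjust to match.
module purge
module load comp/intel-15.0.2.164
module load mpi/impi-5.0.3.048
module load lib/mkl-15.0.2.164
module load other/comp/gcc-4.6.3-sp1
module load other/SIVO-PyD/spd_1.20.0_gcc-4.6.3-sp1_mkl-15.0.0.090
```

This is site environment configuration, so it must be run on a machine (like discover) that provides these modules.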
=== Undo any CPP Environment Variable ===


Before doing any make (especially with PGI), you should issue:


  $ unsetenv CPP DEFAULT_CPP


Some modules set these, and most of the Baselibs assume the C preprocessor will be gcc, not, say, pgcpp (which is '''not''' a C++ compiler!).
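The <tt>unsetenv</tt> above is csh/tcsh syntax; if your login shell is bash or sh, the equivalent is:

```shell
# bash/sh equivalent of the csh/tcsh "unsetenv" above: remove CPP and
# DEFAULT_CPP from the environment so configure scripts fall back to
# their own preprocessor detection.
unset CPP DEFAULT_CPP
```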


=== (OPTIONAL) Set MPICC_CC and MPICXX_CXX variables if using MPT ===


If you are using MPT at Pleiades, and you wish to use the Intel or PGI C and C++ compilers instead of GNU, you must set the MPICC_CC and MPICXX_CXX variables to the "correct" compilers. This can be done either in the environment (beware!) or during the install by adding at the end of the make command:


  MPICC_CC=icc  MPICXX_CXX=icpc
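Putting this together, an MPT install might look like the following sketch; the <tt>ESMF_COMM=mpt</tt> and <tt>CONFIG_SETUP</tt> values here are illustrative assumptions, not tested values:

```shell
# Hypothetical MPT install command; ESMF_COMM and CONFIG_SETUP are
# illustrative -- adjust them to match your compiler and MPT versions.
make install ESMF_COMM=mpt CONFIG_SETUP=ifort_15.0.2.164-mpt \
     MPICC_CC=icc MPICXX_CXX=icpc |& tee makeinstall.ifort_15.0.2.164-mpt.log
```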
So for the above example you'd issue:


  $ make install ESMF_COMM=intelmpi CONFIG_SETUP=ifort_15.0.2.164-intelmpi_5.0.3.048 |& tee makeinstall.ifort_15.0.2.164-intelmpi_5.0.3.048.log


and it would build all the libraries. The <tt>tee</tt> is so that you can capture the install log and also see it in real time.


Once built, check for "Error" in your log file, and if you don't see any "<tt>Error 2</tt>" messages, you are probably safe.
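One way to scan the log is with <tt>grep</tt>; a sketch, assuming the log filename from the install step above:

```shell
# List error lines from the install log, skipping ones make marks as
# "(ignored)"; any remaining "Error 2" lines indicate a real failure.
LOG=makeinstall.ifort_15.0.2.164-intelmpi_5.0.3.048.log
grep "Error" "$LOG" | grep -v "ignored" || echo "No unignored errors found"
```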


==== make verify ====

You can also use <tt>make verify</tt>, which will check the status of the build (by looking for <tt>xxx.config</tt> and <tt>xxx.install</tt> files) and print a table like:
 
 -------+---------+---------+--------------
 Config | Install |  Check  |  Package
 -------+---------+---------+--------------
   ok   |   ok    |   --    | jpeg
   ok   |   ok    |   --    | zlib
   ok   |   ok    |   --    | szlib
   ok   |   ok    |   --    | curl
   ok   |   ok    |   --    | hdf4
   ok   |   ok    |   --    | hdf5
   ok   |   ok    |   --    | h5edit
   ok   |   ok    |   --    | netcdf
   ok   |   ok    |   --    | netcdf-fortran
   ok   |   ok    |   --    | udunits2
   ok   |   ok    |   --    | nco
   ok   |   ok    |   --    | cdo
   ok   |   ok    |   --    | esmf
   ok   |   ok    |   --    | hdfeos
   ok   |   ok    |   --    | uuid
   ok   |   ok    |   --    | cmor
   ok   |   ok    |   --    | hdfeos5
   ok   |   ok    |   --    | SDPToolkit
 -------+---------+---------+--------------
 
Note that this is not as comprehensive as examining the log file, as it is possible that some sub-makes will fail without communicating that failure to the main make process.


== Checking Baselibs ==
This is optional, but recommended. Many of the Baselibs have the ability to do a check. You should do this only in an environment where you can run MPI (like the compute nodes at NCCS or NAS), because parallel NetCDF and ESMF tests are run:


  $ make check ESMF_COMM=intelmpi CONFIG_SETUP=ifort_15.0.2.164-intelmpi_5.0.3.048 |& tee makecheck.ifort_15.0.2.164-intelmpi_5.0.3.048.log


Note that at the moment, many will exit with errors. For example, NetCDF has tests that require internet access; if the check is done on a compute node with no external access, they will fail.


=== Checking with Intel MPI ===


Note that if you use an older Intel MPI as the MPI stack, when you run the check (on discover) you may have to manually run <tt>mpdboot</tt> to start a mpd ring:


  $ mpdboot -n 8 -r ssh -f $PBS_NODEFILE


This example is for 8 nodes. This is not because of Intel MPI ''per se'' but because codes like netcdf-C use <tt>mpiexec</tt> as their call to start an MPI process, and in Intel MPI versions older than 2017, <tt>mpiexec</tt> invokes the old MPD launcher rather than Hydra.


== Table of Modules ==