Commit 83fd01e8 authored by Philip Carns

fill in recipes for different systems


git-svn-id: https://svn.mcs.anl.gov/repos/darshan/trunk@530 3b7491f3-a168-0410-bf4b-c445ed680a29
parent ab428c12
@@ -161,5 +161,122 @@ export LD_PRELOAD=libfmpich.so:/home/carns/darshan-install/lib/libdarshan.so
== Darshan installation recipes
The following recipes provide examples for some prominent HPC systems.
These are intended to be used as a starting point for installation on such
systems, although you will most likely have to adjust paths and options to
reflect the specifics of your system.
=== IBM Blue Gene/P
The IBM Blue Gene/P series produces static executables by default, uses a
different architecture for login and compute nodes, and uses an MPI
environment based on MPICH.
The following example shows how to configure Darshan on a BG/P system:
----
./configure --with-mem-align=16 \
--with-log-path=/home/carns/working/darshan/releases/logs \
--prefix=/home/carns/working/darshan/install --with-jobid-env=COBALT_JOBID \
--with-zlib=/soft/apps/zlib-1.2.3/ \
--host=powerpc-bgp-linux CC=/bgsys/drivers/ppcfloor/comm/default/bin/mpicc
----
.Rationale
[NOTE]
====
The memory alignment is set to 16 not because that is the proper alignment
for the BG/P CPU architecture, but because that is the optimal alignment for
the network transport used between compute nodes and I/O nodes in the
system. The jobid environment variable is set to `COBALT_JOBID` in this
case for use with the Cobalt scheduler, but other BG/P systems may use
different schedulers. The `--with-zlib` argument is used to point to a
version of zlib that has been compiled for use on the compute nodes rather
than the login node. The `--host` argument is used to force cross-compilation
of Darshan. The `CC` variable is set to point to a stock MPI compiler.
====
Once Darshan has been installed, use the `darshan-gen-*.pl` scripts as
described earlier in this document to produce darshan-enabled MPI compilers.
This method has been widely used and tested with both the GNU and IBM XL
compilers.
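
For example, a Darshan-enabled C compiler wrapper can be generated with an invocation along the following lines (the compiler path matches the configure example above; the output name is arbitrary):

----
darshan-gen-cc.pl /bgsys/drivers/ppcfloor/comm/default/bin/mpicc --output mpicc.darshan
----

The resulting `mpicc.darshan` script can then be used in place of `mpicc` when building applications.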
=== Cray XE (or similar)
The Cray environment produces static executables by default, uses a similar
architecture for login and compute nodes, and uses its own unique compiler
script system.
The following example shows how to configure Darshan on a Cray system:
----
module swap PrgEnv-pgi PrgEnv-gnu
./configure --with-mem-align=8 \
--with-log-path=/lustre/beagle/carns/darshan-logs \
--prefix=/home/carns/working/darshan/releases/install-darshan-2.2.0-pre1 \
--with-jobid-env=PBS_JOBID CC=cc
----
.Rationale
[NOTE]
====
Before compiling Darshan you must modify your environment to use the GNU
compilers rather than the default PGI or Cray compilers. Please see your
site documentation for details.
The job ID is set to `PBS_JOBID` for use with a Torque or PBS based scheduler.
The `CC` variable is configured to point to the standard MPI compiler.
====
The darshan-runtime package does not provide any scripts or wrappers to use
for instrumenting static executables in the Cray environment. It may be
possible to do this manually. However, you _can_ instrument dynamic
executables using `LD_PRELOAD`. To do this, compile your application with
the `-dynamic` compiler option and follow the instructions for dynamic
executables listed earlier in this document. This method has been tested
with PGI and GNU compilers and is likely to work with other compiler
combinations as well.
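
A minimal sketch of this approach on a system that uses the `aprun` job launcher might look like the following (the application name and install path are illustrative):

----
# build a dynamically linked executable with the Cray compiler wrapper
cc -dynamic -o my_app my_app.c

# preload the Darshan library in the job environment before launching
export LD_PRELOAD=/home/carns/darshan-install/lib/libdarshan.so
aprun -n 4 ./my_app
----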
Note that some Cray systems may require additional environment variables or
modules to be set in order to run dynamic executables on a compute node.
Please see your site documentation for details.
=== Linux clusters using Intel MPI
Most Intel MPI installations produce dynamic executables by default. To
configure Darshan in this environment you can use the following example:
----
./configure --with-mem-align=8 --with-log-path=/darshan-logs --with-jobid-env=PBS_JOBID CC=mpicc
----
.Rationale
[NOTE]
====
There is nothing unusual in this configuration except that you should use
the underlying GNU compilers rather than the Intel ICC compilers to compile
Darshan itself.
====
You can use the `LD_PRELOAD` method described earlier in this document to
instrument executables compiled with the Intel MPI compiler scripts. This
method has been briefly tested using both GNU and Intel compilers.
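
A minimal sketch of this approach, assuming the same install path used in earlier examples, might look like the following (`-genv` is one way to propagate the variable to all ranks under Intel MPI):

----
mpicc -o my_app my_app.c
mpiexec -n 4 -genv LD_PRELOAD /home/carns/darshan-install/lib/libdarshan.so ./my_app
----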
.Caveat
[NOTE]
====
Darshan is only known to work with C and C++ executables generated by the
Intel MPI suite. Darshan will not produce instrumentation for Fortran
executables. For more details please check this Intel forum discussion:
http://software.intel.com/en-us/forums/showthread.php?t=103447&o=a&s=lr
====
=== Linux clusters using MPICH or OpenMPI
Follow the generic instructions provided at the top of this document. The
only modification is to make sure that the `CC` used for compilation is
based on a GNU compiler. Once Darshan has been installed, it should be
capable of instrumenting executables built with GNU, Intel, and PGI
compilers.
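
For example, a configure invocation for such a cluster might look like the following (the log path and job ID environment variable are illustrative and should be adjusted for your site):

----
./configure --with-mem-align=8 \
--with-log-path=/darshan-logs \
--with-jobid-env=PBS_JOBID CC=mpicc
----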