Darshan-runtime installation and usage
======================================

== Introduction

This document describes darshan-runtime, which is the instrumentation
portion of the Darshan characterization tool.  It should be installed on the
system where you intend to collect I/O characterization information.

Darshan instruments applications via either compile time wrappers for static
executables or dynamic library preloading for dynamic executables.  An
application that has been instrumented with Darshan will produce a single
log file each time it is executed.  This log summarizes the I/O access patterns
used by the application.

The darshan-runtime instrumentation only instruments MPI applications (the
application must at least call `MPI_Init()` and `MPI_Finalize()`).  However,
it captures both MPI-IO and POSIX file access.  It also captures limited
information about HDF5 and PnetCDF access.

This document provides generic installation instructions, but "recipes" for
several common HPC systems are provided at the end of the document as well.

== Requirements

* MPI C compiler
* zlib development headers and library
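
A quick way to verify that these prerequisites are present is sketched
below; the exact flags vary by MPI distribution and the paths are
illustrative:

----
# MPICH-derived compilers typically print the underlying compile command
# with -show; OpenMPI compilers use --showme instead:
mpicc -show
# confirm that the zlib development header is installed (path may differ):
ls /usr/include/zlib.h
----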

== Compilation and installation

.Configure and build example
----
tar -xvzf darshan-<version-number>.tar.gz
cd darshan-<version-number>/darshan-runtime
./configure --with-mem-align=8 --with-log-path=/darshan-logs --with-jobid-env=PBS_JOBID CC=mpicc
make
make install
----

.Explanation of configure arguments:
* `--with-mem-align` (mandatory): This value is system-dependent and will be
used by Darshan to determine if the buffer for a read or write operation is
aligned in memory.
* `--with-log-path` (this, or `--with-log-path-by-env`, is mandatory): This
specifies the parent directory for the directory tree where Darshan logs
will be placed.
* `--with-jobid-env` (mandatory): this specifies the environment variable that
Darshan should check to determine the jobid of a job.  Common values are
`PBS_JOBID` or `COBALT_JOBID`.  If you are not using a scheduler (or your
scheduler does not advertise the job ID) then you can specify `NONE` here.
Darshan will fall back to using the pid of the rank 0 process if the
specified environment variable is not set.
* `CC=`: specifies the MPI C compiler to use for compilation
* `--with-log-path-by-env`: specifies an environment variable to use to
determine the log path at run time.
* `--with-log-hints=`: specifies hints to use when writing the Darshan log
file.  See `./configure --help` for details.
* `--with-zlib=`: specifies an alternate location for the zlib development
header and library
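
If you prefer to resolve the log path at run time, a configure invocation
using `--with-log-path-by-env` might look like the following sketch (the
variable name `DARSHAN_LOGPATH` is only an example; any environment
variable may be named):

----
./configure --with-mem-align=8 \
            --with-log-path-by-env=DARSHAN_LOGPATH \
            --with-jobid-env=PBS_JOBID CC=mpicc
----

Each user would then set `DARSHAN_LOGPATH` to a writable directory before
launching an instrumented application.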

=== Cross compilation

On some systems (notably the IBM BlueGene series) the login nodes do not
have the same architecture or runtime environment as the compute nodes.  In
this case, you must configure darshan-runtime to be built using a cross
compiler.  The following arguments show an example for the BG/P system:

----
--host=powerpc-bgp-linux CC=/bgsys/drivers/ppcfloor/comm/default/bin/mpicc 
----
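
For example, a complete cross-compiling configure invocation for BG/P
might combine these arguments with the options shown earlier (the memory
alignment and log path values are illustrative and remain
system-dependent):

----
./configure --with-mem-align=8 \
            --with-log-path=/darshan-logs \
            --with-jobid-env=COBALT_JOBID \
            --host=powerpc-bgp-linux \
            CC=/bgsys/drivers/ppcfloor/comm/default/bin/mpicc
----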

== Environment preparation

Once darshan-runtime has been installed, you must still prepare a location
to store Darshan log files and configure an instrumentation method.

=== Log directory

This step can be safely skipped if you configured darshan-runtime using the
`--with-log-path-by-env` option.  A more typical configuration, however, is
to provide a static directory hierarchy in which to gather Darshan log
files.

The `darshan-mk-log-dirs.pl` utility will populate the path specified at
configure time with subdirectories organized by year, month, and day in
which log files will be placed.  The day-level subdirectories have sticky
permissions to enable multiple users to write to the same directory.  If
the log directory is shared system-wide across many users then the
following script should be run as root.

----
darshan-mk-log-dirs.pl
----
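
With the `--with-log-path=/darshan-logs` setting used earlier, the
resulting directory tree would look something like the following sketch
(dates are illustrative):

----
/darshan-logs/
    2012/
        3/
            14/    <- log files are written here; sticky permissions allow all users to write
----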

=== Instrumentation method

The instrumentation method to use depends on whether the executables
produced by your MPI compiler are statically or dynamically linked.  If you
are unsure, you can check by running `ldd <executable_name>` on an example
executable.  Dynamically-linked executables will produce a list of shared
libraries when this command is executed.

Most MPI compilers allow you to toggle dynamic or static linking via options
such as `-dynamic` or `-static`.  Please check your MPI compiler man page
for details if you intend to force one mode or the other.
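
For example, output along the following lines indicates a
dynamically-linked executable, while `ldd` reports `not a dynamic
executable` for a statically-linked one (executable names and library
paths are illustrative):

----
$ ldd ./my_app
        libmpich.so.3 => /usr/lib/libmpich.so.3 (0x00002b...)
        libc.so.6 => /lib64/libc.so.6 (0x00002b...)
$ ldd ./my_static_app
        not a dynamic executable
----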

== Instrumenting statically-linked applications

Statically linked executables must be instrumented at compile time.  The
simplest way to do this is to generate an MPI compiler script (e.g. `mpicc`)
that includes the link options and libraries needed by Darshan.  Once this
is done, Darshan instrumentation is transparent; you simply compile
applications using the darshan-enabled MPI compiler scripts.

For MPICH-based MPI libraries, such as MPICH1, MPICH2, or MVAPICH, these
wrapper scripts can be generated automatically.  The following example
illustrates how to produce wrappers for C, C++, and Fortran compilers:

----
darshan-gen-cc.pl `which mpicc` --output mpicc.darshan
darshan-gen-cxx.pl `which mpicxx` --output mpicxx.darshan
darshan-gen-fortran.pl `which mpif77` --output mpif77.darshan
darshan-gen-fortran.pl `which mpif90` --output mpif90.darshan
----
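
Once generated, the wrappers are used in place of the usual MPI compiler
commands; for example (file names are illustrative):

----
mpicc.darshan -o my_app my_app.c
mpif90.darshan -o my_fortran_app my_fortran_app.f90
----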

For other MPI libraries you must manually modify the MPI compiler scripts to
add the necessary link options and libraries.  Please see the
`darshan-gen-*` scripts for examples or contact the Darshan users mailing
list for help.

== Instrumenting dynamically-linked applications

For dynamically-linked executables, Darshan relies on the `LD_PRELOAD`
environment variable to insert instrumentation at run time.  The application
can be compiled using the normal, unmodified MPI compiler.

To use this mechanism, set the `LD_PRELOAD` environment variable to the full
path to the Darshan shared library, as in this example:

----
export LD_PRELOAD=/home/carns/darshan-install/lib/libdarshan.so
----

You can then run your application as usual.  Some environments may require a
special `mpirun` or `mpiexec` command line argument to propagate the
environment variable to all processes.  Other environments may require a
scheduler submission option to control this behavior.  Please check your
local site documentation for details.
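
As a sketch, an OpenMPI-style `mpirun` exports a variable to all
processes with `-x`, while an MPICH-style `mpiexec` provides `-env`;
these flags vary between implementations, so treat the following as
illustrative:

----
# OpenMPI-style launcher:
mpirun -n 4 -x LD_PRELOAD=/home/carns/darshan-install/lib/libdarshan.so ./my_app
# MPICH-style launcher:
mpiexec -n 4 -env LD_PRELOAD /home/carns/darshan-install/lib/libdarshan.so ./my_app
----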

=== Instrumenting dynamically-linked Fortran applications

Please follow the general steps outlined in the previous section.  For
Fortran applications compiled with MPICH you may have to take the
additional step of adding `libfmpich.so` to your `LD_PRELOAD` environment
variable. For example:

----
export LD_PRELOAD=libfmpich.so:/home/carns/darshan-install/lib/libdarshan.so
----

== Darshan installation recipes

TODO: fill this in