Commit 790c3066 authored by rjzamora's avatar rjzamora


making significant changes to simplify the exerciser benchmark, and to add documentation. You should look a few commits back to see some of the topology-aware and early-ccio-specific functionality that has been stripped out for simplicity
parent 4daaa7c8
## Building and Running the Exerciser on Vesta BG/Q (ALCF)
**WARNING: These instructions are for building with a custom MPICH-CH4 installation. See the VESTA_XL directory for IBM MPIXL build instructions.**

These are instructions for building the parallel HDF5 exerciser and running a basic exerciser test on the IBM BG/Q Vesta machine (ALCF). Similar steps can be used for other BG/Q systems (e.g. ALCF Mira).

I am also including instructions for building HDF5 (the CCIO and `develop` versions), because it is likely that you will need to do these things together. Feel free to skip the HDF5-build steps if they don't apply to you.
### Setting up the Directory Structure
For all instructions in this document, we assume the directory structure defined in this section. This structure is not necessary for the exerciser code to function correctly, but it will allow you to follow the instructions as closely as possible. The structure assumes that you will be building the `develop` and/or `CCIO` versions of HDF5 yourself within the defined structure. This is not required (you can simply skip the HDF5 build instructions, and use a different `HDF5_INSTALL_DIR` when you build the exerciser).
First, define the root directory for building HDF5 and the Exerciser:
```
export HDF5_ROOT=<your-desired-root-directory>
```
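For example (a hypothetical location; any directory you have write access to will work):
```
export HDF5_ROOT=/home/$USER/hdf5_root_dir
```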
Create the top level of the directory structure for this example:
```
mkdir $HDF5_ROOT
cd $HDF5_ROOT
mkdir exerciser
mkdir library
mkdir xgitlabrepos
```
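For reference, this is the layout that the remaining steps assume (a sketch only; the subdirectories of `library/` and `exerciser/` are created in later steps):
```
$HDF5_ROOT/
    exerciser/      # exerciser build and run directories
    library/        # HDF5 build and install trees
    xgitlabrepos/   # cloned git repositories
```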
Clone the necessary git repositories. Note that the Custom Collective I/O (CCIO) version of HDF5 is under development for the ExaHDF5 project (see: https://github.com/rjzamora/hdf5-ccio-develop/tree/ccio). The official development branch of HDF5 (of which CCIO is a *fork*) is located on Bitbucket (https://bitbucket.hdfgroup.org/projects/HDFFV/repos/hdf5/browse). You can use either or both of these HDF5 versions (or another one).
First, clone the repo with the Exerciser (if you already did this, just move the repo to this location):
```
cd xgitlabrepos
git clone git@xgitlab.cels.anl.gov:ExaHDF5/BuildAndTest.git
```
If using CCIO, clone it (be sure to use the 'ccio' branch of hdf5-ccio-develop):
```
git clone https://github.com/rjzamora/hdf5-ccio-develop.git
cd hdf5-ccio-develop
git checkout ccio
cd ..
```
If using `develop`, clone it:
```
git clone https://bitbucket.hdfgroup.org/scm/hdffv/hdf5.git
```
Create the rest of the directory structure for this example:
```
cd ../library
mkdir build
mkdir install
cd install
mkdir ccio
mkdir develop
cd ../build
mkdir ccio
mkdir develop
```
### (If Desired) Building the CCIO Branch of HDF5
Here, we are using the `ccio` directories to build and install the code, along with the `debug` configure options. To use an optimized configuration, simply use the `do-configure-opt` script:
```
cd $HDF5_ROOT/library/build/ccio
cp -rL $HDF5_ROOT/xgitlabrepos/hdf5-ccio-develop/* .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_CH4/do-configure-debug .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_CH4/do-make .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_CH4/do-mpirun .
```
Make sure the following line in `do-configure-debug` is correct for your installation (check both the `configure` and `--prefix` paths):
```
$HDF5_ROOT/library/build/ccio/configure --without-pthread --disable-shared --enable-fortran \
--disable-cxx --enable-parallel --enable-symbols=yes \
--enable-build-mode=production --enable-optimization=high \
--with-zlib=/soft/libraries/alcf/current/gcc/ZLIB \
--prefix=$HDF5_ROOT/library/install/ccio \
2>&1
```
Then run `autogen` and `configure`. The configure script must be run in cobalt since it needs to run MPI programs on the backend during the configuration. For this example, the
`datascience` allocation is used:
```
export PATH=/soft/buildtools/autotools/feb2015/bin:$PATH
./autogen.sh
qsub -A datascience ./do-configure-debug
```
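When the Cobalt job finishes, you can check the configure log for the final status line (assuming the default Cobalt output naming produced by the `-O LOG.configure` option in the script, i.e. a `LOG.configure.output` file):
```
# the script prints this line when configure completes
grep "configure is finished with status" LOG.configure.output
```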
Once that completes, run the `do-make` script to build HDF5:
```
qsub -A datascience ./do-make
```
Once the build has completed (check the `LOG.make.out` file for a 0 return status), run the install on the front end:
```
make install
```
Once the HDF5 library is built, it should be in `$HDF5_ROOT/library/install/ccio/lib/libhdf5.a`.

To build the exerciser against the CCIO version of HDF5, you will need to use the `ccio` installation location of HDF5 in the example below (by setting `HDF5_INSTALL_DIR=$HDF5_ROOT/library/install/ccio`).
### (If Desired) Building the `develop` Branch of HDF5
The steps to build `develop` are the same as the steps to build CCIO. However, you now use the `develop` directories to build and install the code.
```
cd $HDF5_ROOT/library/build/develop
cp -rL $HDF5_ROOT/xgitlabrepos/hdf5/* .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_CH4/do-configure-debug .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_CH4/do-make .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_CH4/do-mpirun .
```
You will need to modify the following line in `do-configure-debug` to point to the `develop` locations (check both the `configure` and `--prefix` paths):
```
$HDF5_ROOT/library/build/develop/configure --without-pthread --disable-shared --enable-fortran \
--disable-cxx --enable-parallel --enable-symbols=yes \
--enable-build-mode=production --enable-optimization=high \
--with-zlib=/soft/libraries/alcf/current/gcc/ZLIB \
--prefix=$HDF5_ROOT/library/install/develop \
2>&1
```
Besides these changes, the instructions are the same as those for CCIO. To build the exerciser against the `develop` branch, you will need to use the `develop` installation location of HDF5 in the example below (by setting `HDF5_INSTALL_DIR=$HDF5_ROOT/library/install/develop`).
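For example, the exerciser make step shown in the next section would then become:
```
make HDF5_INSTALL_DIR=$HDF5_ROOT/library/install/develop -f Makefile-debug
```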
### Building the Exerciser
The specific instructions here assume that you have used the same directory structure as the optional instructions for building the CCIO branch of HDF5 (above). However, the makefile example can be used with any HDF5 installation location (`HDF5_INSTALL_DIR`).

Create and enter the `exerciser` build directory:
```
cd $HDF5_ROOT/exerciser
mkdir run
mkdir build
cd build
mkdir ccio
cd ccio
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_CH4/Makefile-debug .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/exerciser.c .
```
Now, choose the `HDF5_INSTALL_DIR` location, and run make:
```
make HDF5_INSTALL_DIR=$HDF5_ROOT/library/install/ccio -f Makefile-debug
```
This should generate the `hdf5Exerciser-ofi-debug-mpitrace` executable.
### Running the Exerciser
For these specific instructions, we assume that you want to test the CCIO version of HDF5. First, go to the run directory and create a link to the CCIO-Exerciser executable:
```
cd $HDF5_ROOT/exerciser/run
ln -s ../ccio/hdf5Exerciser-ofi-debug-mpitrace hdf5Exerciser-ccio
```
Copy the example python submission script:
```
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/Common/run-example.py .
```
This script will set up and run a simple example with 8 aggregator ranks (set by `lfs_count`). To run it on 32 nodes on Vesta:
```
qsub -A datascience -t 30 -n 32 run-example.py --machine vesta --exec ./hdf5Exerciser-ccio --ppn 16 --ccio
```
Note that I am using the `datascience` allocation (you should change this to whatever makes sense for you). Leave off the `--ccio` flag if you are not using the CCIO version of HDF5. Also note that the topology API in HDF5 cannot use the mpixl library, since we are using MPICH-CH4 (so topology-aware aggregator selection will not be doing anything clever).
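For comparison, a run against a non-CCIO build (e.g. the `develop` installation) would simply drop that flag. A sketch, assuming you have created an analogous `hdf5Exerciser-develop` link in the run directory:
```
# hypothetical: hdf5Exerciser-develop links to an exerciser built against the develop install
qsub -A datascience -t 30 -n 32 run-example.py --machine vesta --exec ./hdf5Exerciser-develop --ppn 16
```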
This commit also updates the Cobalt `do-configure` submission scripts so that the configure and install paths follow the `$HDF5_ROOT` layout used above (and the hard-coded notification email is dropped). The relevant hunks from the debug (CCIO) configure script:
```
 #!/bin/bash
-#COBALT -q default -t 60 -n 1 -O LOG.configure -M pcoffman@alcf.anl.gov
+#COBALT -q default -t 60 -n 1 -O LOG.configure
 # !!!!!!
 # BG/Q This is intended to be submitted via "qsub do-configure"
@@ -25,7 +25,7 @@ echo "Driver version:"
 export CC=/projects/aurora_app/mpich3-ch4-ofi/install/gnu.debug/bin/mpicc
-export CFLAGS='-O3'
+export CFLAGS='-O3 -Dtopo_timing -Dtopo_debug'
 echo
 # expecting INCLUDE_PATH to be e.g. /bgsys/drivers/ppcfloor/comm/xl/include
@@ -78,11 +78,11 @@ export RUNPARALLEL="$PWD/do-mpirun --np 4 : "
 set -x
-./configure --without-pthread --disable-shared --enable-fortran \
+$HDF5_ROOT/library/build/ccio/configure --without-pthread --disable-shared --enable-fortran \
     --disable-cxx --enable-parallel --enable-symbols=yes \
     --enable-build-mode=production --enable-optimization=high \
     --with-zlib=/soft/libraries/alcf/current/gcc/ZLIB \
-    --prefix=/projects/Performance/pkcoff/hdf5/install/gnu472-dbg \
+    --prefix=$HDF5_ROOT/library/install/ccio \
     2>&1
 status=$?
@@ -96,8 +96,3 @@ echo "configure is finished with status $status"
 # echo "Done calling cobalt-mpirun -free wait"
 exit $status
```
The optimized configure script gets the same treatment, now pointing at the `develop` build and install locations:
```
 #!/bin/bash
-#COBALT -q default -t 60 -n 1 -O LOG.configure -M pcoffman@alcf.anl.gov
+#COBALT -q default -t 60 -n 1 -O LOG.configure
 # !!!!!!
 # BG/Q This is intended to be submitted via "qsub do-configure"
@@ -25,7 +25,7 @@ echo "Driver version:"
 export CC=/projects/aurora_app/mpich3-ch4-ofi/install/gnu/bin/mpicc
-export CFLAGS='-O3'
+export CFLAGS='-O3 -Dtopo_timing -Dtopo_debug'
 echo
 # expecting INCLUDE_PATH to be e.g. /bgsys/drivers/ppcfloor/comm/xl/include
@@ -80,11 +80,11 @@ export RUNPARALLEL="$PWD/do-mpirun --np 4 : "
 set -x
-/projects/Performance/pkcoff/hdf5/build/gnu472-opt/configure --without-pthread --disable-shared --enable-fortran \
+$HDF5_ROOT/library/build/develop/configure --without-pthread --disable-shared --enable-fortran \
     --disable-cxx --enable-parallel \
     --enable-build-mode=production --enable-optimization=high \
     --with-zlib=/soft/libraries/alcf/current/gcc/ZLIB \
-    --prefix=/projects/Performance/pkcoff/hdf5/install/gnu472-opt \
+    --prefix=$HDF5_ROOT/library/install/develop \
     2>&1
 status=$?
@@ -98,8 +98,3 @@ echo "configure is finished with status $status"
 # echo "Done calling cobalt-mpirun -free wait"
 exit $status
```
**WARNING - The instructions for running the exerciser (below) are out of date.**
```
These are instructions for building the custom_collective_io branch in HDF5
in xgitlab, building the exerciser and linking with that build of HDF5, and
running a basic exerciser test on BGQ-Cetus. This is all based on the
MPICH-CH4-OFI utilization on BGQ, located here:
optimized:
/projects/aurora_app/mpich3-ch4-ofi/install/gnu/bin/mpi*
debug:
/projects/aurora_app/mpich3-ch4-ofi/install/gnu.debug/bin/mpi*
The exerciser build includes linking with mpitrace. Looking at the Makefiles,
you will need to set:
$HDF5_INSTALL_DIR
to the location where you have installed HDF5.
First you need to build HDF5:
mkdir <hdf5_root_dir>
cd <hdf5_root_dir>
mkdir exerciser
mkdir library
mkdir xgitlabrepos
cd xgitlabrepos
git clone git@xgitlab.cels.anl.gov:ExaHDF5/CustomCollectiveIO.git
git clone git@xgitlab.cels.anl.gov:ExaHDF5/BuildAndTest.git
cd CustomCollectiveIO
--- VERY IMPORTANT -- SWITCH TO THE 'custom_collective_io' branch
git checkout custom_collective_io
cd ../library
mkdir build
mkdir install
cd install
mkdir opt
mkdir debug
cd ../build
mkdir opt
mkdir debug
--- the following instructions are for building debug, similar for opt the
only difference is the do-configure-opt script for opt vs do-configure-debug for debug
cd debug
cp -rL <hdf5_root_dir>/xgitlabrepos/CustomCollectiveIO/* .
cp <hdf5_root_dir>/xgitlabrepos/BuildAndTest/Exerciser/BGQ/do-configure-debug .
cp <hdf5_root_dir>/xgitlabrepos/BuildAndTest/Exerciser/BGQ/do-make .
cp <hdf5_root_dir>/xgitlabrepos/BuildAndTest/Exerciser/BGQ/do-mpirun .
Then modify do-configure-debug changing the 'pcoffman@alcf.anl.gov' to your
email id and:
--prefix=/projects/Performance/pkcoff/hdf5/install/gnu472-dbg
to your install dir - eg:
--prefix=<hdf5_root_dir>/library/install/debug
Then run autogen and configure:
export PATH=/soft/buildtools/autotools/feb2015/bin:$PATH
./autogen.sh
The configure script must be run in cobalt since it needs to run mpi
programs on the backend during the configuration -- for this example the
Performance allocation is used.
qsub -A Performance ./do-configure-debug
Once that completes, modify the do-make and change the email id from pcoffman@anl.gov to yours, then run the ./do-make to build it
qsub -A Performance ./do-make
then run:
make install
on the front-end to do the install.
Once the HDF5 library is built - should be here:
<hdf5_root_dir>/library/install/debug/lib/libhdf5.a
Go ahead and build the exerciser:
cd <hdf5_root_dir>/exerciser
mkdir run
mkdir build
cd build
cp <hdf5_root_dir>/xgitlabrepos/BuildAndTest/Exerciser/BGQ/Makefile-debug .
cp <hdf5_root_dir>/xgitlabrepos/BuildAndTest/Exerciser/exerciser.c .
make HDF5_INSTALL_DIR=<hdf5_root_dir>/library/install/debug -f Makefile-debug
cp hdf5Exerciser-ofi-debug-mpitrace ../run
then a sample run on say 32 nodes:
cd ../run
qsub -A Performance -t 29 --nodecount 32 --mode c16 --cwd <hdf5_root_dir>/exerciser/run --env RUNJOB_LABEL=short:HDF5_CUSTOM_AGG_DEBUG=yes:HDF5_CUSTOM_AGG=yes ./hdf5Exerciser-ofi-debug-mpitrace --metacoll --derivedtype --addattr --minbuf 256 --maxbuf 4194304
Note the HDF5_CUSTOM_AGG and HDF5_CUSTOM_AGG_DEBUG env vars which should have
yes/no values.
I have followed these instructions and set everything up here:
/projects/Performance/pkcoff/hdf5/ccio-example
To use as a reference....
```
The default `HDF5_INSTALL_DIR` in the mpixlc-based exerciser Makefile is also updated to the same layout:
```
@@ -2,7 +2,7 @@ EXE=hdf5Exerciser
 default: ${EXE}
-HDF5_INSTALL_DIR=/home/zamora/hdf5_root_dir/library/install/opt-g-ccio-xl
+HDF5_INSTALL_DIR=$HDF5_ROOT/library/install/ccio
 exerciser.o: exerciser.c
     mpixlc -c -g -O3 -qlanglvl=extc99 -I${HDF5_INSTALL_DIR}/include exerciser.c -o exerciser.o
```
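Since `HDF5_INSTALL_DIR` is an ordinary make variable, the exerciser build steps in the README override it on the make command line rather than editing the Makefile, for example:
```
# override the HDF5 install location at build time
make HDF5_INSTALL_DIR=$HDF5_ROOT/library/install/ccio -f Makefile-debug
```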
## Building and Running the Exerciser on Vesta BG/Q (ALCF): IBM MPIXL Build (VESTA_XL)

These are instructions for building the parallel HDF5 exerciser and running a basic exerciser test on the IBM BG/Q Vesta machine (ALCF). Similar steps can be used for other BG/Q systems (e.g. ALCF Mira).

I am also including instructions for building HDF5 (the CCIO and `develop` versions), because it is likely that you will need to do these things together. Feel free to skip the HDF5-build steps if they don't apply to you.
### Setting up the Directory Structure
For all instructions in this document, we assume the directory structure defined in this section. This structure is not necessary for the exerciser code to function correctly, but it will allow you to follow the instructions as closely as possible. The structure assumes that you will be building the `develop` and/or `CCIO` versions of HDF5 yourself within the defined structure. This is not required (you can simply skip the HDF5 build instructions, and use a different `HDF5_INSTALL_DIR` when you build the exerciser).
First, define the root directory for building HDF5 and the Exerciser:
```
export HDF5_ROOT=<your-desired-root-directory>
```
Create the top level of the directory structure for this example:
```
mkdir $HDF5_ROOT
cd $HDF5_ROOT
mkdir exerciser
mkdir library
mkdir xgitlabrepos
```
Clone the necessary git repositories. Note that the Custom Collective I/O (CCIO) version of HDF5 is under development for the ExaHDF5 project (see: https://github.com/rjzamora/hdf5-ccio-develop/tree/ccio). The official development branch of HDF5 (of which CCIO is a *fork*) is located on Bitbucket (https://bitbucket.hdfgroup.org/projects/HDFFV/repos/hdf5/browse). You can use either or both of these HDF5 versions (or another one).
First, clone the repo with the Exerciser (if you already did this, just move the repo to this location):
```
cd xgitlabrepos
git clone git@xgitlab.cels.anl.gov:ExaHDF5/BuildAndTest.git
```
If using CCIO, clone it (be sure to use the 'ccio' branch of hdf5-ccio-develop):
```
git clone https://github.com/rjzamora/hdf5-ccio-develop.git
cd hdf5-ccio-develop
git checkout ccio
cd ..
```
If using `develop`, clone it:
```
git clone https://bitbucket.hdfgroup.org/scm/hdffv/hdf5.git
```
Create the rest of the directory structure for this example:
```
cd ../library
mkdir build
mkdir install
cd install
mkdir ccio
mkdir develop
cd ../build
mkdir ccio
mkdir develop
```
### (If Desired) Building the CCIO Branch of HDF5
Here, we are using the `ccio` directories to build and install the code:
```
cd $HDF5_ROOT/library/build/ccio
cp -rL $HDF5_ROOT/xgitlabrepos/hdf5-ccio-develop/* .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_XL/do-configure .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_XL/do-make .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_XL/do-mpirun .
```
Make sure the following line in `do-configure` is correct for your installation (check both the `configure` and `--prefix` paths):
```
$HDF5_ROOT/library/build/ccio/configure --without-pthread --disable-shared --enable-fortran \
--disable-cxx --enable-parallel \
--with-zlib=/soft/libraries/alcf/current/xl/ZLIB \
--prefix=$HDF5_ROOT/library/install/ccio \
2>&1
```
Then run `autogen` and `configure`. The configure script must be run in cobalt since it needs to run MPI programs on the backend during the configuration. For this example, the
`datascience` allocation is used:
```
export PATH=/soft/buildtools/autotools/feb2015/bin:$PATH
./autogen.sh
qsub -A datascience ./do-configure
```
Once that completes, run the `do-make` script to build HDF5:
```
qsub -A datascience ./do-make
```
Once the build has completed (check the `LOG.make.out` file for a 0 return status), run the install on the front end:
```
make install
```
Once the HDF5 library is built, it should be in `$HDF5_ROOT/library/install/ccio/lib/libhdf5.a`.
To build the exerciser against the CCIO version of HDF5, you will need to use the `ccio` installation location of HDF5 in the example below (by setting `HDF5_INSTALL_DIR=$HDF5_ROOT/library/install/ccio`).
### (If Desired) Building the `develop` Branch of HDF5
The steps to build `develop` are the same as the steps to build CCIO. However, you now use the `develop` directories to build and install the code.
```
cd $HDF5_ROOT/library/build/develop
cp -rL $HDF5_ROOT/xgitlabrepos/hdf5/* .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_XL/do-configure .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_XL/do-make .
cp $HDF5_ROOT/xgitlabrepos/BuildAndTest/Exerciser/BGQ/VESTA_XL/do-mpirun .
```
You will need to modify the following line in `do-configure` to point to the `develop` locations (check both the `configure` and `--prefix` paths):
```
$HDF5_ROOT/library/build/develop/configure --without-pthread --disable-shared --enable-fortran \