Commit 8db50f35 authored by Jakob Luettgau

Catch up with and merge upstream.

parents b6fb669b 01f3672e
......@@ -29,7 +29,7 @@ build_darshan:
paths:
- install/
-test_darshan:
+test_darshan_static:
tags:
- shell
- ecp-theta
......@@ -43,8 +43,7 @@ test_darshan:
stage: test
script:
- - ls $PWD/install
- - ls $PWD/install/bin
+ - export CRAYPE_LINK_TYPE=static
- darshan-test/regression/run-all.sh $PWD/install $PWD/scratch cray-module-alcf
artifacts:
......@@ -53,3 +52,34 @@ test_darshan:
- $PWD/scratch/*.debuglog
- $PWD/scratch/*.out
- $PWD/scratch/*.err
resource_group: debug-queue
test_darshan_dynamic:
tags:
- shell
- ecp-theta
rules:
- if: '$CI_PIPELINE_SOURCE == "web" && $RUN_TESTS == "true"'
when: on_success
- if: '$CI_PIPELINE_SOURCE == "schedules" && $RUN_TESTS == "true"'
when: on_success
stage: test
script:
- export CRAYPE_LINK_TYPE=dynamic
- darshan-test/regression/run-all.sh $PWD/install $PWD/scratch cray-module-alcf
- ldd $PWD/scratch/mpi-io-test
- nm $PWD/scratch/mpi-io-test | grep darshan
- nm $PWD/scratch/mpi-io-test | grep MPI
artifacts:
paths:
- $PWD/scratch/*.darshan
- $PWD/scratch/*.debuglog
- $PWD/scratch/*.out
- $PWD/scratch/*.err
resource_group: debug-queue
See README.txt for general instructions. This file contains notes for testing on the Blue Gene platform
(more specifically: cetus.alcf.anl.gov). This example assumes that you are using the MPICH profile conf
method to add instrumentation.
To run regression tests:
- compile and install both darshan-runtime and darshan-util in the same directory
examples:
# darshan runtime
../configure --with-mem-align=16 --with-log-path=/projects/SSSPPg/carns/darshan-logs --prefix=/home/carns/working/darshan/install-cetus --with-jobid-env=COBALT_JOBID --with-zlib=/soft/libraries/alcf/current/gcc/ZLIB --host=powerpc-bgp-linux CC=/bgsys/drivers/V1R2M2/ppc64/comm/bin/gcc/mpicc
make install
# darshan util
../configure --prefix=/home/carns/working/darshan/install-cetus
make install
- start a screen session by running "screen"
note: this is suggested because the tests may take a while to complete depending on scheduler
availability
- within the screen session, set your path to point to a stock set of MPI compiler scripts
export PATH=/bgsys/drivers/V1R2M2/ppc64/comm/bin/gcc:$PATH
- run regression tests
./run-all.sh /home/carns/working/darshan/install-cetus /projects/SSSPPg/carns/darshan-test bg-profile-conf-alcf
note: the f90 test is expected to fail due to a known problem in the profiling interface for the
F90 MPICH implementation on Mira.
......@@ -8,14 +8,18 @@ The master script must be executed with three arguments:
2) path to temporary directory (for building executables, collecting logs,
etc. during test)
3) platform type; options include:
-  - workstation-static (for static instrumentation on a standard workstation)
-  - workstation-dynamic (for dynamic instrumentation on a standard workstation)
-  - workstation-profile-conf (for static instrumentation using MPI profiling
-    configuration hooks on a standard workstation)
-  - bg-profile-conf-alcf (for static instrumentation using MPI profiling configuration
-    hooks on BGQ platforms @ the ALCF only)
-  - cray-module-alcf (for static instrumentation using a Darshan Cray module on
-    Cray systems @ the ALCF only)
+  - workstation-cc-wrapper (for static/dynamic instrumentation on a standard
+    workstation using Darshan compiler wrappers)
+  - workstation-profile-conf-static (for static instrumentation using MPI
+    profiling configuration hooks on a standard workstation)
+  - workstation-profile-conf-dynamic (for dynamic instrumentation using MPI
+    profiling configuration hooks on a standard workstation)
+  - workstation-ld-preload (for dynamic instrumentation via LD_PRELOAD on a
+    standard workstation)
+  - cray-module-alcf (for static/dynamic instrumentation using a Darshan
+    Cray module on Cray systems @ the ALCF only)
+  - cray-module-nersc (for static/dynamic instrumentation using a Darshan
+    Cray module on Cray systems @ NERSC only)
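For example (paths here are hypothetical), a workstation run might look like:
  ./run-all.sh $HOME/darshan-install /tmp/darshan-scratch workstation-ld-preload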
The platform type should map to a subdirectory containing scripts
that describe how to perform platform-specific tasks (like loading or
......
#!/bin/bash
# convert DXT env setting
if [ -n "${DXT_ENABLE_IO_TRACE+defined}" ]; then
DXT_ENV="--env DXT_ENABLE_IO_TRACE=$DXT_ENABLE_IO_TRACE"
fi
# submit job and get job id
jobid=`qsub --env DARSHAN_LOGFILE=$DARSHAN_LOGFILE $DXT_ENV --mode c16 --proccount $DARSHAN_DEFAULT_NPROCS -A radix-io -t 10 -n 1 --output $DARSHAN_TMP/$$-tmp.out --error $DARSHAN_TMP/$$-tmp.err --debuglog $DARSHAN_TMP/$$-tmp.debuglog "$@"`
if [ $? -ne 0 ]; then
echo "Error: failed to qsub $@"
exit 1
fi
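# seed with a dummy value so the polling loop below runs at least once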
output="foo"
rc=0
# loop as long as qstat succeeds and shows information about job
while [ -n "$output" -a "$rc" -eq 0 ]; do
sleep 5
output=`qstat $jobid`
rc=$?
done
# look for return code
grep "exit code of 0" $DARSHAN_TMP/$$-tmp.debuglog >& /dev/null
if [ $? -ne 0 ]; then
exit 1
else
exit 0
fi
......@@ -6,7 +6,7 @@ if [ -n "${DXT_ENABLE_IO_TRACE+defined}" ]; then
fi
# submit job and get job id
-jobid=`qsub --env DARSHAN_LOGFILE=$DARSHAN_LOGFILE --env DARSHAN_DEFAULT_NPROCS=$DARSHAN_DEFAULT_NPROCS $DXT_ENV --proccount $DARSHAN_DEFAULT_NPROCS -A CSC250STDM12 -q debug-cache-quad -t 20 -n 1 --output $DARSHAN_TMP/$$-tmp.out --error $DARSHAN_TMP/$$-tmp.err --debuglog $DARSHAN_TMP/$$-tmp.debuglog $DARSHAN_TESTDIR/$DARSHAN_PLATFORM/cobalt-submit.sh "$@"`
+jobid=`qsub --env DARSHAN_LOGFILE=$DARSHAN_LOGFILE --env DARSHAN_DEFAULT_NPROCS=$DARSHAN_DEFAULT_NPROCS $DXT_ENV --proccount $DARSHAN_DEFAULT_NPROCS -A CSC250STDM12 -q debug-cache-quad -t 20 -n 1 --run_project --output $DARSHAN_TMP/$$-tmp.out --error $DARSHAN_TMP/$$-tmp.err --debuglog $DARSHAN_TMP/$$-tmp.debuglog $DARSHAN_TESTDIR/$DARSHAN_PLATFORM/cobalt-submit.sh "$@"`
if [ $? -ne 0 ]; then
echo "Error: failed to qsub $@"
......
......@@ -16,7 +16,7 @@
# variables (as in a dynamically linked environment), or generate mpicc
# wrappers (as in a statically linked environment).
-# Notes specific to this platform (workstation-static)
+# Notes specific to this platform (workstation-cc-wrapper)
########################
# This particular env script assumes that mpicc and its variants for other
# languages are already in the path. The compiler scripts to be used in
......
......@@ -16,11 +16,11 @@
# variables (as in a dynamically linked environment), or generate mpicc
# wrappers (as in a statically linked environment).
-# Notes specific to this platform (workstation-dynamic)_
+# Notes specific to this platform (workstation-ld-preload)
########################
# This particular env script assumes that mpicc and its variants for other
# languages are already in the path, and that they will produce dynamic
-# executables by default. Test programs are compile usign the existing
+# executables by default. Test programs are compiled using the existing
# scripts, and LD_PRELOAD is set to enable instrumentation.
# The runjob command is just mpiexec, no scheduler
......
......@@ -16,10 +16,10 @@
# variables (as in a dynamically linked environment), or generate mpicc
# wrappers (as in a statically linked environment).
-# Notes specific to this platform (workstation-dynamic)_
+# Notes specific to this platform (workstation-profile-conf-dynamic)
########################
# This particular env script assumes that mpicc and its variants for other
-# languages are already in the path, and that they will produce static
+# languages are already in the path, and that they will produce dynamic
# executables by default. Darshan instrumentation is added by specifying
# a profiling configuration file using environment variables.
......
......@@ -16,24 +16,26 @@
# variables (as in a dynamically linked environment), or generate mpicc
# wrappers (as in a statically linked environment).
-# Notes specific to this platform (bg-profile-conf-alcf)
+# Notes specific to this platform (workstation-profile-conf-static)
########################
# This particular env script assumes that mpicc and its variants for other
# languages are already in the path, and that they will produce static
# executables by default. Darshan instrumentation is added by specifying
# a profiling configuration file using environment variables.
-# the RUNJOB command is the most complex part here. We use a script that submits
-# a cobalt job, waits for its completion, and checks its return status
+# The runjob command is just mpiexec, no scheduler
export DARSHAN_CC=mpicc
export DARSHAN_CXX=mpicxx
export DARSHAN_F77=mpif77
export DARSHAN_F90=mpif90
-export MPICC_PROFILE=$DARSHAN_PATH/share/mpi-profile/darshan-bg-cc
-export MPICXX_PROFILE=$DARSHAN_PATH/share/mpi-profile/darshan-bg-cxx
-export MPIF90_PROFILE=$DARSHAN_PATH/share/mpi-profile/darshan-bg-f
-export MPIF77_PROFILE=$DARSHAN_PATH/share/mpi-profile/darshan-bg-f
+export MPICC_PROFILE=$DARSHAN_PATH/share/mpi-profile/darshan-cc-static
+export MPICXX_PROFILE=$DARSHAN_PATH/share/mpi-profile/darshan-cxx-static
+export MPIF90_PROFILE=$DARSHAN_PATH/share/mpi-profile/darshan-f-static
+export MPIF77_PROFILE=$DARSHAN_PATH/share/mpi-profile/darshan-f-static
+# MPICH 3.1.1 and newer use MPIFORT rather than MPIF90 and MPIF77 in env var
+# name
+export MPIFORT_PROFILE=$DARSHAN_PATH/share/mpi-profile/darshan-f-static
-export DARSHAN_RUNJOB="bg-profile-conf-alcf/runjob.sh"
+export DARSHAN_RUNJOB="mpiexec -n $DARSHAN_DEFAULT_NPROCS"
......@@ -224,6 +224,9 @@ endif
install -d $(DESTDIR)$(libdir)/Number
install -d $(DESTDIR)$(libdir)/Number/Bytes
install -m 644 $(srcdir)/darshan-job-summary/lib/Number/Bytes/Human.pm $(DESTDIR)$(libdir)/Number/Bytes
install -d $(DESTDIR)$(libdir)/Pod
install -m 644 $(srcdir)/darshan-job-summary/lib/Pod/Constants.pm $(DESTDIR)$(libdir)/Pod/
install -m 644 $(srcdir)/darshan-job-summary/lib/Pod/LaTeX.pm $(DESTDIR)$(libdir)/Pod/
install -d $(DESTDIR)$(datarootdir)
install -m 644 $(srcdir)/darshan-job-summary/share/* $(DESTDIR)$(datarootdir)
install -d $(DESTDIR)$(libdir)/pkgconfig
......
package Pod::Constants;
use 5.006002;
use strict;
use warnings;
use base qw(Pod::Parser Exporter);
use Carp;
our $VERSION = 0.19;
# An ugly hack to go from caller() to the relevant parser state
# variable
my %parsers;
sub end_input {
#my ($parser, $command, $paragraph, $line_num) = (@_);
my $parser = shift;
return unless $parser->{active};
print "Found end of $parser->{active}\n" if $parser->{DEBUG};
my $whereto = $parser->{wanted_pod_tags}->{$parser->{active}};
print "\$_ will be set to:\n---\n$parser->{paragraphs}\n---\n" if $parser->{DEBUG};
$parser->{paragraphs} =~ s/^\s*|\s*$//gs if $parser->{trimmed_tags}->{$parser->{active}};
if (ref $whereto eq 'CODE') {
print "calling sub\n" if $parser->{DEBUG};
local ($_) = $parser->{paragraphs};
$whereto->();
print "done\n" if $parser->{DEBUG};
} elsif (ref $whereto eq 'SCALAR') {
print "inserting into scalar\n" if $parser->{DEBUG};
$$whereto = $parser->{paragraphs};
} elsif (ref $whereto eq 'ARRAY') {
print "inserting into array\n" if $parser->{DEBUG};
@$whereto = split /\n/, $parser->{paragraphs};
} elsif (ref $whereto eq 'HASH') {
print "inserting into hash\n" if $parser->{DEBUG};
# Oh, sorry, should I be in LISP101?
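# keep only lines the grep below accepts as "key => value", then split
# each on "=>" and trim whitespace, yielding a flat key/value list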
%$whereto = (
map { map { s/^\s*|\s*$//g; $_ } split /=>/ } grep m/^
( (?:[^=]|=[^>])+ ) # scan up to "=>"
=>
( (?:[^=]|=[^>])+ =? )# don't allow more "=>"'s
$/x, split /\n/, $parser->{paragraphs},);
} else { die $whereto }
$parser->{active} = undef;
}
# Pod::Parser overloaded command
sub command {
my ($parser, $command, $paragraph, $line_num) = @_;
$paragraph =~ s/(?:\r\n|\n\r)/\n/g;
print "Got command =$command, value=$paragraph\n" if $parser->{DEBUG};
$parser->end_input() if $parser->{active};
my ($lookup);
# first check for a catch-all for this command type
if ( exists $parser->{wanted_pod_tags}->{"*$command"} ) {
$parser->{paragraphs} = $paragraph;
$parser->{active} = "*$command";
} elsif ($command =~ m/^(head\d+|item|(for|begin))$/) {
if ( $2 ) {
# if it's a "for" or "begin" section, the title is the
# first word only
($lookup, $parser->{paragraphs}) = $paragraph =~ m/^\s*(\S*)\s*(.*)/s;
} else {
# otherwise, it's up to the end of the line
($lookup, $parser->{paragraphs}) = $paragraph =~ m/^\s*(\S[^\n]*?)\s*\n(.*)$/s;
}
# Look for a match by name
if (defined $lookup && exists $parser->{wanted_pod_tags}->{$lookup}) {
print "Found $lookup\n" if ($parser->{DEBUG});
$parser->{active} = $lookup;
} elsif ($parser->{DEBUG}) {
local $^W = 0;
print "Ignoring =$command $paragraph (lookup = $lookup)\n"
}
} else {
# nothing
print "Ignoring =$command (not known)\n" if $parser->{DEBUG};
}
}
# Pod::Parser overloaded verbatim
sub verbatim {
my ($parser, $paragraph, $line_num) = @_;
$paragraph =~ s/(?:\r\n|\n\r)/\n/g;
my $status = $parser->{active} ? 'using' : 'ignoring';
print "Got paragraph: $paragraph ($status)\n" if $parser->{DEBUG};
$parser->{paragraphs} .= $paragraph if defined $parser->{active}
}
# Pod::Parser overloaded textblock
sub textblock { goto \&verbatim }
sub import {
my $class = shift;
# if no args, just return
return unless (@_);
# try to guess the source file of the caller
my $source_file;
if (caller ne 'main') {
(my $module = caller.'.pm') =~ s|::|/|g;
$source_file = $INC{$module};
}
$source_file ||= $0;
croak "Cannot find source file (guessed $source_file) for package ".caller unless -f $source_file;
# nasty tricks with the stack so we don't have to be silly with
# caller()
unshift @_, $source_file;
goto \&import_from_file;
}
sub import_from_file {
my $filename = shift;
my $parser = __PACKAGE__->new();
$parser->{wanted_pod_tags} = {};
$parser->{trimmed_tags} = {};
$parser->{trim_next} = 0;
$parser->{DEBUG} = 0;
$parser->{active} = undef;
$parsers{caller()} = $parser;
$parser->add_hook(@_);
print "Pod::Parser: DEBUG: Opening $filename for reading\n" if $parser->{DEBUG};
open my $fh, '<', $filename or croak "cannot open $filename for reading; $!";
$parser->parse_from_filehandle($fh, \*STDOUT);
close $fh;
}
sub add_hook {
my $parser;
if (eval { $_[0]->isa(__PACKAGE__) }) {
$parser = shift;
} else {
$parser = $parsers{caller()} or croak 'add_hook called, but don\'t know what for - caller = '.caller;
}
while (my ($pod_tag, $var) = splice @_, 0, 2) {
#print "$pod_tag: $var\n";
if (lc($pod_tag) eq '-trim') {
$parser->{trim_next} = $var;
} elsif ( lc($pod_tag) eq '-debug' ) {
$parser->{DEBUG} = $var;
} elsif (lc($pod_tag) eq '-usage') {
# an idea for later - automatic "usage"
#%wanted_pod_tags{@tags}
} else {
if ((ref $var) =~ /^(?:SCALAR|CODE|ARRAY|HASH)$/) {
print "Will look for $pod_tag.\n" if $parser->{DEBUG};
$parser->{wanted_pod_tags}->{$pod_tag} = $var;
$parser->{trimmed_tags}->{$pod_tag} = 1 if $parser->{trim_next};
} else {
croak "Sorry - need a reference to import POD sections into, not the scalar value $var"
}
}
}
}
sub delete_hook {
my $parser;
if (eval { $_[0]->isa(__PACKAGE__) }) {
$parser = shift;
} else {
$parser = $parsers{caller()} or croak 'delete_hook called, but don\'t know what for - caller = '.caller;
}
while ( my $label = shift ) {
delete $parser->{wanted_pod_tags}->{$label};
delete $parser->{trimmed_tags}->{$label};
}
}
1;
__END__
=encoding utf-8
=head1 NAME
Pod::Constants - Include constants from POD
=head1 SYNOPSIS
our ($myvar, $VERSION, @myarray, $html, %myhash);
use Pod::Constants -trim => 1,
'Pod Section Name' => \$myvar,
'Version' => sub { eval },
'Some list' => \@myarray,
html => \$html,
'Some hash' => \%myhash;
=head2 Pod Section Name
This string will be loaded into $myvar
=head2 Version
# This is an example of using a closure. $_ is set to the
# contents of the paragraph. In this example, "eval" is
# used to execute this code at run time.
$VERSION = 0.19;
=head2 Some list
Each line from this section of the file
will be placed into a separate array element.
For example, this is $myarray[2].
=head2 Some hash
This text will not go into the hash, because
it doesn't look like a definition list.
key1 => Some value (this will go into the hash)
var2 => Some Other value (so will this)
wtf = This won't make it in.
=head2 %myhash's value after the above:
( key1 => "Some value (this will go into the hash)",
var2 => "Some Other value (so will this)" )
=begin html <p>This text will be in $html</p>
=cut
=head1 DESCRIPTION
This module allows you to specify those constants that should be
documented in your POD, and pull them out at run time in a fairly
arbitrary fashion.
Pod::Constants uses Pod::Parser to do the parsing of the source file.
It has to open the source file it is called from, and does so directly
either by lookup in %INC or by assuming it is $0 if the caller is
"main" (or it can't find %INC{caller()})
=head2 ARBITRARY DECISIONS
I have made this code only allow the "Pod Section Name" to match
`headN', `item', `for' and `begin' POD sections. If you have a good
reason why you think it should match other POD sections, drop me a
line and if I'm convinced I'll put it in the standard version.
For `for' and `begin' sections, only the first word is counted as
being a part of the specifier, as opposed to `headN' and `item', where
the entire rest of the line counts.
=head1 FUNCTIONS
=head2 import(@args)
This function is called when we are "use"'d. It determines the source
file by inspecting the value of caller() or $0.
The form of @args is HOOK => $where.
$where may be a scalar reference, in which case the contents of the
POD section called "HOOK" will be loaded into $where.
$where may be an array reference, in which case the contents of the
array will be the contents of the POD section called "HOOK", split
into lines.
$where may be a hash reference, in which case any lines with a "=>"
symbol present will have everything on the left hand side of the =>
operator as keys and everything on the right as values. You do not
need to quote either, nor have trailing commas at the end of the
lines.
$where may be a code reference (sub { }), in which case the sub is
called when the hook is encountered. $_ is set to the value of the
POD paragraph.
You may also specify the behaviour of whitespace trimming; by default,
no trimming is done except on the HOOK names. Setting "-trim => 1"
turns on a package "global" (until the next time import is called)
that will trim leading and trailing whitespace from the $_ sent for
processing by the hook processing function (be it a given function, or
the built-in array/hash splitters).
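For example, a minimal sketch of -trim in use (the section name is
hypothetical):
    use Pod::Constants -trim => 1,
        'Usage' => \$usage;  # $usage arrives without surrounding whitespace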
The name of HOOK is matched against any "=head1", "=head2", "=item",
"=for", "=begin" value. If you specify the special hooknames "*item",
"*head1", etc, then you will get a function that is run for every
Note that the supplied functions for array and hash splitting are
exactly equivalent to fairly simple Perl blocks:
Array:
HOOK => sub { @array = split /\n/, $_ }
Hash:
HOOK => sub {
%hash =
(map { map { s/^\s+|\s+$//g; $_ } split /=>/, $_ }
(grep m/^
( (?:[^=]|=[^>])+ ) # scan up to "=>"
=>
( (?:[^=]|=[^>])+ =? )# don't allow more "=>"'s
$/x, split /\n/, $_));
}
Well, they're simple if you can grok map, a regular expression like
that, and a functional programming style. If you can't, I'm sure it is
probably voodoo to you.
Here's the procedural equivalent:
HOOK => sub {
for my $line (split /\n/, $_) {
my ($key, $value, $junk) = split /=>/, $line;
next if $junk;
$key =~ s/^\s+|\s+$//g;
$value =~ s/^\s+|\s+$//g;
$hash{$key} = $value;
}
},
=head2 import_from_file($filename, @args)
Very similar to straight "import", but you specify the source filename
explicitly.
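A minimal sketch (the filename is hypothetical):
    our $myvar;
    use Pod::Constants ();   # empty import list: no parsing at compile time
    Pod::Constants::import_from_file('helper_script.pl',
        'Pod Section Name' => \$myvar);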
=head2 add_hook(NAME => value)
This function adds another hook; it is useful for dynamically updating
the set of hooks while the document is being parsed.
For an example, please see t/01-constants.t in the source
distribution. More detailed examples will be added in a later
release.
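For instance, a hook callback can register a further hook while the
document is still being parsed (section names here are hypothetical):
    our $later;
    use Pod::Constants
        'First Section' => sub {
            # runs mid-parse; 'Later Section' is collected from here on
            Pod::Constants::add_hook('Later Section' => \$later);
        };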
=head2 delete_hook(@list)
Deletes the named hooks. Companion function to add_hook.
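For example, to stop collecting the sections registered in the SYNOPSIS
above:
    Pod::Constants::delete_hook('Pod Section Name', 'Version');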
=head2 CLOSURES AS DESTINATIONS
If the given value is a ref CODE, then that function is called, with
$_ set to the value of the paragraph. This can be very useful for
applying your own custom mutations to the POD to change it from human
readable text into something your program can use.
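A minimal sketch (section name and pattern are illustrative): a closure
that extracts just the numeric part of a documented default:
    our $port;
    use Pod::Constants
        'Default Port' => sub { ($port) = m/(\d+)/ };  # matches against $_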
After I added this function, I just kept on thinking of cool uses for
it. The nice, succinct code you can make with it is one of