Commit 72bf8ea4 authored by Valentin Reis's avatar Valentin Reis

Initial commit.

parent cebb7e10
# benchmark-applications
This repository contains sample benchmark applications instrumented to report progress to NRM through libNRM.
It contains code similar to that in the previous "progress-benchmarks" repo, without the 600MB of extra branches.
# "simple" - contains a random walk and a dgemm
On KNL machines at ANL, the correct env vars are obtained via:
source /opt/intel/bin/ intel64
# "graph500" - contains the graph500 benchmark
Copyright (c) 2011-2017 Graph500 Steering Committee
New code under University of Illinois/NCSA Open Source License
see license.txt or
Old code, including but not limited to generator code:
/* Copyright (C) 2009-2010 The Trustees of Indiana University. */
/* */
/* Use, modification and distribution is subject to the Boost Software */
/* License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at */
/* */
#+TITLE: Graph 500 Benchmarks 1 ("Search") and 2 ("Shortest Path")
#+AUTHOR: Graph 500 Steering Committee
#+OPTIONS: H:3 num:t toc:t \n:nil @:t ::t |:t ^:t -:t f:t *:t <:t
#+OPTIONS: TeX:t LaTeX:t skip:nil d:nil todo:t pri:nil tags:not-in-toc
#+OPTIONS: ^:{}
#+STYLE: <style>body {margin-left: 10%; margin-right: 10%;} table {margin-left:auto; margin-right:auto;}</style>
Contributors: David A. Bader (Georgia Institute of Technology),
Jonathan Berry (Sandia National Laboratories), Simon Kahan (Pacific
Northwest National Laboratory and University of Washington), Richard
Murphy (Micron Technology), E. Jason Riedy (Georgia
Institute of Technology), Jeremiah Willcock (Indiana University),
Anton Korzh (Micron Technology), and Marcin Zalewski (Pacific
Northwest National Laboratory).
Version History:
- V0.1 :: Draft, created 28 July 2010
- V0.2 :: Draft, created 29 September 2010
- V0.3 :: Draft, created 30 September 2010
- V1.0 :: Created 1 October 2010
- V1.1 :: Created 3 October 2010
- V1.2 :: Created 15 September 2011
- V2.0 :: Created 20 June 2017
Version 0.1 of this document was part of the Graph 500 community
benchmark effort, led by Richard Murphy (Micron Technology). The
intent is that there will be at least three variants of
implementations, on shared memory and threaded systems, on distributed
memory clusters, and on external memory map-reduce clouds. This
specification is for the first two of potentially several benchmark problems.
References: "Introducing the Graph 500," Richard C. Murphy, Kyle
B. Wheeler, Brian W. Barrett, James A. Ang, Cray User's Group (CUG),
May 5, 2010.
"DFS: A Simple to Write Yet Difficult to Execute Benchmark," Richard
C. Murphy, Jonathan Berry, William McLendon, Bruce Hendrickson,
Douglas Gregor, Andrew Lumsdaine, IEEE International Symposium on
Workload Characterizations 2006 (IISWC06), San Jose, CA, 25-27 October 2006.
* Brief Description of the Graph 500 Benchmark
Data-intensive supercomputer applications are an increasingly
important workload, but are ill-suited for platforms designed for 3D
physics simulations. Application performance cannot be improved
without a meaningful benchmark. Graphs are a core part of most
analytics workloads. Backed by a steering committee of over 30
international HPC experts from academia, industry, and national
laboratories, this specification establishes a large-scale benchmark
for these applications. It will offer a forum for the community and
provide a rallying point for data-intensive supercomputing
problems. This is the first serious approach to augment the Top 500
with data-intensive applications.
The intent of these benchmark problems ("Search" and "Shortest Path") is to develop a
compact application that has multiple analysis techniques (multiple
kernels) accessing a single data structure representing a weighted,
undirected graph. In addition to a kernel to construct the graph from
the input tuple list, there are two additional computational
kernels to operate on the graph.
This benchmark includes a scalable data generator which produces edge tuples
containing the start vertex and end vertex for each edge. The first kernel
constructs an /undirected/ graph in a format usable by all subsequent
kernels. No subsequent modifications are permitted to benefit specific
kernels. The second kernel performs a breadth-first search of the graph. The
third kernel performs multiple single-source shortest path computations on the
graph. All three kernels are timed.
** References
D.A. Bader, J. Feo, J. Gilbert, J. Kepner, D. Koester, E. Loh,
K. Madduri, W. Mann, Theresa Meuse, HPCS Scalable Synthetic Compact
Applications #2 Graph Analysis (SSCA#2 v2.2 Specification), 5
September 2007.
* Overall Benchmark
The benchmark performs the following steps:
1. Generate the edge list.
2. Construct a graph from the edge list (*timed*, kernel 1).
3. Randomly sample 64 unique search keys with degree at least one,
not counting self-loops.
4. For each search key:
   1. Compute the parent array (*timed*, kernel 2).
   2. Validate that the parent array is a correct BFS search tree
      for the given search key.
5. For each search key:
   1. Compute the parent array and the distance array (*timed*,
      kernel 3).
   2. Validate that the parent array is a correct SSSP search tree
      for the given search key.
6. Compute and output performance information.
Only the sections marked as *timed* are included in the performance
information. Note that all uses of "random" permit pseudorandom number
generation. Note that [[#kernel2][Kernel 2]] and [[#kernel3][Kernel 3]] are run in separate
loops and not consecutively off the same initial vertex. [[#kernel2][Kernel 2]] and
[[#kernel3][Kernel 3]] can be run on graphs of different scales that are generated by
separate runs of [[#kernel1][Kernel 1]].
* Generating the Edge List
** Brief Description
The scalable data generator will construct a list of edge tuples
containing vertex identifiers. Each edge is undirected with its
endpoints given in the tuple as StartVertex, EndVertex and
Weight. If the edge tuples are only to be used for running [[#kernel2][Kernel 2]], it
is permissible to not generate edge weights. This allows BFS runs that
are not encumbered by unnecessary memory usage resulting from storing
edge weights.
The intent of the first kernel below is to convert a list with no
locality into a more optimized form. The generated list of input
tuples must not exhibit any locality that can be exploited by the
computational kernels. Thus, the vertex numbers must be randomized
and a random ordering of tuples must be presented to [[#kernel1][Kernel 1]].
The data generator may be parallelized, but the vertex names
must be globally consistent and care must be taken to minimize effects
of data locality at the processor level.
** Detailed Text Description
The edge tuples will have the form <StartVertex, EndVertex, Weight>
where StartVertex is one endpoint vertex label, EndVertex is the other
endpoint vertex label, and Weight is the weight of the edge. The
space of labels is the set of integers beginning with *zero* up to but
not including the number of vertices N (defined below), and the space
of weights is the range [0,1) of single precision floats. The
kernels are not provided the size N explicitly and must discover it if required for
constructing the graph.
The benchmark takes only one parameter as input:
- SCALE :: The logarithm base two of the number of vertices.
The benchmark also contains internal parameters with required settings
for submission. Experimenting with different settings is useful for
testing and exploration but not permitted for submitted results.
- edgefactor = 16 :: The ratio of the graph's edge count to its vertex
count (i.e., half the average degree of a vertex in the graph).
These inputs determine the graph's size:
- N :: the total number of vertices, 2^{SCALE}. An implementation may
use any set of N distinct integers to number the vertices, but at
least 48 bits must be allocated per vertex number and 32 bits must be
allocated for the edge weight unless the benchmark is run in BFS-only mode.
Other parameters may be assumed to fit within the natural word of the machine.
N is derived from the problem's scaling parameter.
- M :: the number of edges. M = edgefactor * N.
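As a concrete illustration of these definitions, the derived sizes for one example SCALE value (26 here is only an example, not a required setting):

```python
# Derive the graph size from the benchmark input (SCALE) and the fixed
# internal parameter (edgefactor = 16 for submitted results).
SCALE = 26              # example value only
edgefactor = 16

N = 2 ** SCALE          # total number of vertices
M = edgefactor * N      # total number of edges

print(N, M)             # 67108864 1073741824
```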
The graph generator is a Kronecker generator similar to the Recursive
MATrix (R-MAT) scale-free graph generation algorithm [Chakrabarti, et
al., 2004]. For ease of discussion, the description of this R-MAT
generator uses an adjacency matrix data structure; however,
implementations may use any alternate approach that outputs the
equivalent list of edge tuples. This model recursively sub-divides the
adjacency matrix of the graph into four equal-sized partitions and
distributes edges within these partitions with unequal
probabilities. Initially, the adjacency matrix is empty, and edges are
added one at a time. Each edge chooses one of the four partitions with
probabilities A, B, C, and D, respectively. These probabilities, the
initiator parameters, are provided in Table [[tbl:initiator]]. The weight
is chosen randomly with uniform distribution from the interval
of [0, 1).
#+CAPTION: Initiator parameters for the Kronecker graph generator
#+LABEL: tbl:initiator
| A = 0.57 | B = 0.19 |
| C = 0.19 | D = 1-(A+B+C) = 0.05 |
The next section details a high-level implementation for this
generator. High-performance, parallel implementations are included in
the reference implementation.
The graph generator creates a small number of multiple edges between
two vertices as well as self-loops. Multiple edges, self-loops, and
isolated vertices may be ignored in the subsequent kernels if
correctness is preserved but must be included in the edge list
provided to the first kernel. The algorithm also generates the data
tuples with high degrees of locality. Thus, as a final step, vertex
numbers must be randomly permuted, and then the edge tuples randomly shuffled.
It is permissible to run the data generator in parallel. In this case,
it is necessary to ensure that the vertices are named globally, and
that the generated data does not possess any locality, either in local
memory or globally across processors.
The scalable data generator should be run before starting kernel 1,
storing its results to either RAM or disk. If stored to disk, the
data may be retrieved before starting kernel 1. The data generator and
retrieval operations need not be timed.
** Sample High-Level Implementation of the Kronecker Generator
The GNU Octave routine in Algorithm [[alg:generator]] is an
attractive implementation in that it is embarrassingly parallel and
does not require the explicit formation of the adjacency matrix.
#+CAPTION: High-level generator code
#+LABEL: alg:generator
#+INCLUDE: "octave/kronecker_generator.m" src Octave
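The same recursion can also be sketched in Python. This toy version (function names invented, unoptimized, not the reference implementation) samples each edge bit by bit with the initiator probabilities from Table [[tbl:initiator]], then applies the required random vertex permutation and tuple shuffle:

```python
import random

def kronecker_edge(scale, A=0.57, B=0.19, C=0.19):
    """Sample one edge by recursively choosing one of the four quadrants
    of the adjacency matrix with probabilities A, B, C, D = 1-(A+B+C)."""
    i = j = 0
    for bit in range(scale):
        r = random.random()
        if r < A:            # quadrant A: keep both bits 0
            pass
        elif r < A + B:      # quadrant B: set the column bit
            j |= 1 << bit
        elif r < A + B + C:  # quadrant C: set the row bit
            i |= 1 << bit
        else:                # quadrant D: set both bits
            i |= 1 << bit
            j |= 1 << bit
    return i, j

def generate_edges(scale, edgefactor=16):
    n = 2 ** scale
    raw = [kronecker_edge(scale) for _ in range(edgefactor * n)]
    # The recursion produces locality, so vertex numbers must be randomly
    # permuted and the tuple order randomly shuffled before kernel 1.
    perm = list(range(n))
    random.shuffle(perm)
    edges = [(perm[i], perm[j], random.random()) for i, j in raw]
    random.shuffle(edges)
    return edges
```

A real implementation would generate edges in parallel and avoid materializing the permutation on a single node.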
** References
D. Chakrabarti, Y. Zhan, and C. Faloutsos, R-MAT: A recursive model
for graph mining, SIAM Data Mining 2004.
Section 17.6, Algorithms in C (third edition). Part 5 Graph
Algorithms, Robert Sedgewick (Programs 17.7 and 17.8)
P. Sanders, Random Permutations on Distributed, External and
Hierarchical Memory, Information Processing Letters 67 (1998).
* Kernel 1 – Graph Construction
:PROPERTIES:
:CUSTOM_ID: kernel1
:END:
** Description
The first kernel may transform the edge list to any data structures
(held in internal or external memory) that are used for the remaining
kernels. For instance, kernel 1 may construct a (sparse) graph from a
list of tuples; each tuple contains endpoint vertex identifiers for an
edge, and a weight that represents data assigned to the edge.
The graph may be represented in any manner, but it may not be modified
by or between subsequent kernels. Space may be reserved in the data
structure for marking or locking, but the data stored cannot be reused
between subsequent kernels. Only one copy of a kernel will be run at
a time; that kernel has exclusive access to any such marking or
locking space and is permitted to modify that space (only).
There are various internal memory representations for sparse graphs,
including (but not limited to) sparse matrices and (multi-level)
linked lists. For the purposes of this application, the kernel is
provided only the edge list and the edge list's size. Further
information such as the number of vertices must be computed within this
kernel. Algorithm [[alg:kernel1]] provides a high-level sample
implementation of kernel 1.
The process of constructing the graph data structure (in internal or
external memory) from the set of tuples must be timed.
#+CAPTION: High-level implementation of kernel 1
#+LABEL: alg:kernel1
#+INCLUDE: "octave/kernel_1.m" src Octave
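To make the kernel's contract concrete (only the tuple list is given; the vertex count must be discovered), here is a minimal adjacency-list construction in Python; the function name and representation are illustrative, not mandated:

```python
def kernel_1(edge_list):
    """Build an undirected adjacency structure from raw edge tuples.
    N is not an input: it is discovered from the tuples themselves."""
    n = 1 + max(max(u, v) for u, v, w in edge_list)   # discover N
    adj = [[] for _ in range(n)]
    for u, v, w in edge_list:
        if u == v:
            continue           # self-loops may be ignored if correctness is preserved
        adj[u].append((v, w))  # store both directions since the
        adj[v].append((u, w))  # graph is undirected
    return adj

adj = kernel_1([(0, 1, 0.5), (1, 2, 0.25), (2, 2, 0.1)])
```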
** References
Section 17.6 Algorithms in C third edition Part 5 Graph Algorithms,
Robert Sedgewick (Program 17.9)
* Sampling 64 Search Keys
The search keys must be randomly sampled from the vertices in the
graph. To avoid trivial searches, sample only from vertices that are
connected to some other vertex. Their degrees, not counting self-loops,
must be at least one. If there are fewer than 64 such vertices, run
fewer than 64 searches. This should never occur with the graph sizes
in this benchmark, but there is a non-zero probability of producing a
trivial or nearly trivial graph. The number of search keys used is
included in the output, but this step is untimed.
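A sketch of this sampling step in Python, assuming the illustrative adjacency representation above (lists of (neighbor, weight) pairs; names are invented):

```python
import random

def sample_search_keys(adj, nkeys=64):
    """Sample up to nkeys distinct vertices whose degree, not counting
    self-loops, is at least one; fewer are returned for trivial graphs."""
    eligible = [v for v, nbrs in enumerate(adj)
                if any(u != v for u, w in nbrs)]
    random.shuffle(eligible)
    return eligible[:nkeys]
```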
* Kernel 2 – Breadth-First Search
:PROPERTIES:
:CUSTOM_ID: kernel2
:END:
** Description
A Breadth-First Search (BFS) of a graph starts with a single source
vertex, then, in phases, finds and labels its neighbors, then the
neighbors of its neighbors, etc. This is a fundamental method on
which many graph algorithms are based. A formal description of BFS can
be found in Cormen, Leiserson, and Rivest. Below, we specify the
input and output for a BFS benchmark, and we impose some constraints
on the computation. However, we do not constrain the choice of BFS
algorithm itself, as long as it produces a correct BFS tree as output.
This benchmark's memory access pattern (internal or external) is data-dependent
with small average prefetch depth. As in a simple
concurrent linked-list traversal benchmark, performance reflects an
architecture's throughput when executing concurrent threads, each of
low memory concurrency and high memory reference density. Unlike such
a benchmark, this one also measures resilience to hot-spotting when
many of the memory references are to the same location; efficiency
when every thread's execution path depends on the asynchronous
side-effects of others; and the ability to dynamically load balance
unpredictably sized work units. Measuring synchronization performance
is not a primary goal here.
You may not search from multiple search keys concurrently. No
information can be passed between different invocations of this
kernel. The kernel may return a depth array to be used in validation.
*ALGORITHM NOTE* We allow a benign race condition when vertices at BFS
level k are discovering vertices at level k+1. Specifically, we do
not require synchronization to ensure that the first visitor must
become the parent while locking out subsequent visitors. As long as
the discovered BFS tree is correct at the end, the algorithm is
considered to be correct.
** Kernel 2 Output
For each search key, the routine must return an array containing valid
breadth-first search parent information (per vertex). The parent of
the search key is itself, and the parent of any vertex not included in
the tree is -1. Algorithm [[alg:kernel2]] provides a sample (and
inefficient) high-level implementation of kernel two.
#+CAPTION: High-level implementation of kernel 2
#+LABEL: alg:kernel2
#+INCLUDE: "octave/kernel_2.m" src Octave
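For reference, the same contract in Python: a level-synchronous BFS that returns the required parent array. This is only one valid algorithm, sketched against the illustrative adjacency form used above:

```python
from collections import deque

def kernel_2(adj, key):
    """BFS from key; returns the parent array: the key is its own
    parent, and vertices outside the tree get -1."""
    parent = [-1] * len(adj)
    parent[key] = key
    frontier = deque([key])
    while frontier:
        u = frontier.popleft()
        for v, w in adj[u]:
            if parent[v] == -1:   # first visitor becomes the parent
                parent[v] = u
                frontier.append(v)
    return parent
```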
* Kernel 3 – Single Source Shortest Paths
:PROPERTIES:
:CUSTOM_ID: kernel3
:END:
** Description
A single-source shortest paths (SSSP) computation finds the shortest
distance from a given starting vertex to every other vertex in the
graph. A formal description of SSSP on graphs with non-negative weights
also can be found in Cormen, Leiserson, and Rivest. We specify the
input and output for a SSSP benchmark, and we impose some constraints on
the computation. However, we do not constrain the choice of SSSP
algorithm itself, as long as the implementation produces a correct SSSP
distance vector and parent tree as output. This is a separate kernel
and cannot use data computed by [[#kernel2][Kernel 2]] (BFS).
This kernel extends the overall benchmark with additional tests and data
access per vertex. Many but not all algorithms for SSSP are similar to
BFS and suffer from similar issues of hot-spotting and duplicate memory references.
You may not search from multiple initial vertices concurrently. No
information can be passed between different invocations of this kernel.
*ALGORITHM NOTE* We allow benign race conditions within SSSP as well.
We do not require that a /first/ visitor must prevent subsequent
visitors from taking the parent slot. As long as the SSSP distances and
parent tree are correct at the end, the algorithm is considered to be correct.
** Kernel 3 Output
For each initial vertex, the routine must return the distance of
each vertex from the initial vertex and the parent of each vertex in a
valid single-source shortest path tree. The parent of the initial
vertex is itself, and the parent of any vertex not included in the
tree is -1. Algorithm [[alg:kernel.3]] provides a sample high-level
implementation of [[#kernel3][Kernel 3]].
# <<alg:kernel.3>>
#+CAPTION: High-level implementation of Kernel 3
#+NAME: alg:kernel.3
#+INCLUDE: "octave/kernel_3.m" src Octave
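One valid way to meet this contract is Dijkstra's algorithm; the benchmark does not mandate it, only a correct distance vector and parent tree. A Python sketch over the illustrative adjacency form:

```python
import heapq

def kernel_3(adj, key):
    """SSSP from key; returns (parent, dist). The key is its own parent;
    unreachable vertices keep parent -1 and infinite distance."""
    inf = float("inf")
    dist = [inf] * len(adj)
    parent = [-1] * len(adj)
    dist[key] = 0.0
    parent[key] = key
    heap = [(0.0, key)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue              # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (dist[v], v))
    return parent, dist
```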
** References
The Shortest Path Problem: Ninth DIMACS Implementation Challenge.
C. Demetrescu, A.V. Goldberg, and D.S. Johnson, eds. DIMACS series in
discrete mathematics and theoretical computer science, American
Mathematical Society, 2009.
9th DIMACS Implementation Challenge - Shortest Paths.
* Validation
It is not intended that the results of full-scale runs of this benchmark
can be validated by exact comparison to a standard reference result. At
full scale, the data set is enormous, and its exact details depend on the
pseudo-random number generator and BFS or SSSP algorithm used. Therefore,
the validation of an implementation of the benchmark uses soft checking
of the results.
We emphasize that the intent of this benchmark is to exercise these
algorithms on the largest data sets that will fit on machines being
evaluated. However, for debugging purposes it may be desirable to run
on small data sets, and it may be desirable to verify parallel results
against serial results, or even against results from the executable specification.
The executable specification validates its results by comparing them
with results computed directly from the tuple list.
The validation procedure for BFS ([[#kernel2][Kernel 2]]) is similar to the one from version 1.2
of the benchmark. The validation procedure for SSSP ([[#kernel3][Kernel 3]]) constructs a
search depth tree in place of the distance array and then runs the SSSP validation routine.
After each search, run (but do not time) a function that ensures that the
discovered parent tree and distance vector are correct by ensuring that:
1) the BFS/SSSP tree is a tree and does not contain cycles,
2) each tree edge connects vertices whose
   a) BFS levels differ by exactly one,
   b) SSSP distances differ by at most the weight of the edge,
3) every edge in the input list has vertices with
   a) BFS levels that differ by at most one, or that are both not in
      the BFS tree,
   b) SSSP distances that differ by at most the weight of the edge,
      or that are both not in the SSSP tree,
4) the BFS/SSSP tree spans an entire connected component's vertices,
5) a node and its BFS/SSSP parent are joined by an edge of the
   original graph.
Algorithm [[alg:validate]] shows a sample validation routine.
#+CAPTION: High-level implementation of kernel 2 validation
#+LABEL: alg:validate
#+INCLUDE: "octave/validate.m" src Octave
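As a partial illustration, the following Python sketch checks rules 1, 2a, and 5 for a BFS parent array (names invented; the reference validate.m covers the full rule set):

```python
def validate_bfs_tree(adj, key, parent):
    """Check that parent links form an acyclic tree rooted at key, that
    every tree edge exists in the graph, and that the BFS levels of a
    child and its parent differ by exactly one (rules 1, 2a, and 5)."""
    n = len(adj)
    level = [-1] * n
    level[key] = 0
    for v in range(n):                 # derive levels by walking to the root
        if parent[v] == -1 or level[v] >= 0:
            continue
        path, u = [], v
        while level[u] < 0:
            path.append(u)
            u = parent[u]
            if u == -1 or len(path) > n:
                return False           # broken parent chain or a cycle
        for d, x in enumerate(reversed(path), start=level[u] + 1):
            level[x] = d
    for v in range(n):
        p = parent[v]
        if v == key or p == -1:
            continue
        if all(u != p for u, w in adj[v]):
            return False               # tree edge absent from the graph
        if level[v] != level[p] + 1:
            return False               # BFS levels must differ by exactly one
    return True
```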
* Computing and Outputting Performance Information
** Timing
Start the time for a search immediately prior to visiting the search
root. Stop the time for that search when the output has been written
to memory. Do not time any I/O outside of the search routine. The spirit of the
benchmark is to gauge the performance of a single search. We run many
searches in order to compute means and variances, not to amortize data
structure setup time.
** Performance Metric (TEPS)
In order to compare the performance of Graph 500 "Search"
implementations across a variety of architectures, programming models,
and productivity languages and frameworks, we adopt a new performance
metric described in this section. In the spirit of well-known
computing rates floating-point operations per second (flops) measured
by the LINPACK benchmark and global updates per second (GUPs) measured
by the HPCC RandomAccess benchmark, we define a new rate called traversed
edges per second (TEPS). We measure TEPS through the benchmarking of
[[#kernel2][Kernel 2]] and [[#kernel3][Kernel 3]] as follows. Let time_{K}(n) be
the measured execution time for a kernel run. Let m be the number of
undirected edges in the traversed component of the graph, counted as the
number of self-loop edge tuples within the traversed component plus half
the number of non-self-loop edge tuples within it. We define the
normalized performance rate (number of edge traversals per second) as:
TEPS(n) = m / time_{K}(n)
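The counting rule can be sketched in Python, given the traversed component's edge tuples and the kernel time (a sketch of the definition above, not a full measurement harness):

```python
def teps(component_tuples, kernel_seconds):
    """m counts each self-loop tuple once and every other tuple half,
    per the definition above; returns traversed edges per second."""
    self_loops = sum(1 for u, v in component_tuples if u == v)
    m = self_loops + (len(component_tuples) - self_loops) / 2
    return m / kernel_seconds

rate = teps([(0, 1), (1, 0), (2, 2)], 0.5)   # m = 1 + 2/2 = 2
```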
** Output
The output must contain the following information:
- SCALE :: Graph generation parameter
- edgefactor :: Graph generation parameter
- NBFS :: Number of BFS searches run, 64 for non-trivial graphs
- construction_time :: The single kernel 1 time
- bfs_min_time, bfs_firstquartile_time, bfs_median_time, bfs_thirdquartile_time, bfs_max_time :: Quartiles for the kernel 2 times
- bfs_mean_time, bfs_stddev_time :: Mean and standard deviation of the kernel 2 times
- bfs_min_nedge, bfs_firstquartile_nedge, bfs_median_nedge, bfs_thirdquartile_nedge, bfs_max_nedge :: Quartiles for the number of
input edges visited by kernel 2, see TEPS section above.
- bfs_mean_nedge, bfs_stddev_nedge :: Mean and standard deviation of the number of
input edges visited by kernel 2, see TEPS section above.
- bfs_min_TEPS, bfs_firstquartile_TEPS, bfs_median_TEPS, bfs_thirdquartile_TEPS, bfs_max_TEPS :: Quartiles for the kernel 2 TEPS
- bfs_harmonic_mean_TEPS, bfs_harmonic_stddev_TEPS :: Mean and standard
deviation of the kernel 2 TEPS.
- sssp_min_time, sssp_firstquartile_time, sssp_median_time, sssp_thirdquartile_time, sssp_max_time :: Quartiles for the kernel 3 times
- sssp_mean_time, sssp_stddev_time :: Mean and standard deviation of the kernel 3 times
- sssp_min_nedge, sssp_firstquartile_nedge, sssp_median_nedge, sssp_thirdquartile_nedge, sssp_max_nedge :: Quartiles for the number of
input edges visited by kernel 3, see TEPS section above.
- sssp_mean_nedge, sssp_stddev_nedge :: Mean and standard deviation of the number of
input edges visited by kernel 3, see TEPS section above.
- sssp_min_TEPS, sssp_firstquartile_TEPS, sssp_median_TEPS, sssp_thirdquartile_TEPS, sssp_max_TEPS :: Quartiles for the kernel 3 TEPS
- sssp_harmonic_mean_TEPS, sssp_harmonic_stddev_TEPS :: Mean and standard
deviation of the kernel 3 TEPS.
*Note*: Because TEPS is a rate, the rates are compared using
*harmonic* means.
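A quick sketch of the harmonic mean used for the *TEPS fields (the Norris reference in this section discusses its standard error):

```python
def harmonic_mean(rates):
    """Reciprocal of the arithmetic mean of reciprocals; the appropriate
    average for rates such as TEPS."""
    return len(rates) / sum(1.0 / r for r in rates)

hm = harmonic_mean([1.0, 3.0])
```

For samples 1.0 and 3.0 this gives 2 / (1 + 1/3) = 1.5, lower than the arithmetic mean of 2.0, as expected for rates.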
The *TEPS* fields (all fields that end with "TEPS") for [[#kernel2][Kernel 2]] or [[#kernel3][Kernel 3]]
can be set to zero if only one kernel was run. It is permissible to run [[#kernel2][Kernel
2]] and [[#kernel3][Kernel 3]] on different graphs. In such a situation, two outputs can be
submitted, each with the *TEPS* fields for one of the kernels set to zero.
Additional fields are permitted. Algorithm [[alg:output]] provides
a high-level sample.
#+CAPTION: High-level implementation of the output routine
#+LABEL: alg:output
#+INCLUDE: "octave/output.m" src Octave
** References
Nilan Norris, The Standard Errors of the Geometric and Harmonic Means
and Their Application to Index Numbers, The Annals of Mathematical
Statistics, vol. 11, num. 4, 1940.
* Sample Driver
A high-level sample driver for the above routines is given in
Algorithm [[alg:driver]].
#+CAPTION: High-level sample driver
#+LABEL: alg:driver
#+INCLUDE: "octave/driver.m" src Octave
* Evaluation Criteria
In approximate order of importance, the goals of this benchmark are:
- Fair adherence to the intent of the benchmark specification
- Maximum problem size for a given machine
- Minimum execution time for a given problem size
Less important goals:
- Minimum code size (not including validation code)
- Minimal development time
- Maximal maintainability
- Maximal extensibility
Compiling should be pretty straightforward as long as you have a valid MPI-3 library loaded in your PATH.
The OpenMP, sequential, and XMT versions of the benchmark are no longer provided.
On a single node you can run the MPI code with reasonable performance.
To build the binaries, change directory to src and execute make.
Four binaries should be built, two of which are of interest:
graph500_reference_bfs runs the BFS kernel (and skips weight generation)
graph500_reference_bfs_sssp runs both the BFS and SSSP kernels
Both binaries require one integer parameter, the scale of the graph.
Validation can be deactivated by setting SKIP_VALIDATION=1 as an environment variable.
The bfs_sssp binary skips the BFS part if SKIP_BFS=1 is present in your environment.
If you want to store/read the generated graph to/from a file, use the environment variables TMPFILE=<filename> and REUSEFILE=1 to keep the file.
It is advised to use the bfs_sssp binary to generate graph files, as it generates both the edge file and the weight file (filename.weights);
the bfs binary only uses/writes the edge file. If bfs_sssp cannot open the weights file, it regenerates both files even if the edge file is present.
The current settings assume that the total number of cores and the number of cores per node are both powers of two.
It is possible to use a non-power-of-two number of nodes if you comment out the SIZE_MUST_BE_POWER_OF_TWO macro defined in common.h.
Be aware that this will normally drop performance by more than 20%.
If you want to use a non-power-of-two number of processes per node, add -DPROCS_PER_NODE_NOT_POWER_OF_TWO to CFLAGS in src/Makefile;
this also enables SIZE_MUST_BE_POWER_OF_TWO automatically.
AML = Active Messages Library
AML is an SPMD communication library built on top of MPI-3, intended for use in fine-grained applications like Graph500.
Its two main goals are user-code clarity and high performance, the latter delivered through a tricky internal implementation.
It is targeted at asynchronous delivery of small messages
while keeping reasonable performance on modern multicore systems by
doing the following transparently to the user:
1. message coalescing
2. software routing on multicore systems
To enable both optimizations messages are delivered asynchronously.
To ensure delivery (that is, the completion of handler executions on remote nodes), a collective barrier should be called.
The current version supports only one-sided messages (a response cannot be sent from an active message handler),
but future versions may support two-sided active messages.
For each process, all delivered AMs are executed sequentially, so atomicity is guaranteed and no locking is required.
Progress of AM delivery is passive, which means that handlers are executed inside library calls (aml_send and aml_barrier).
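The coalescing and barrier-driven delivery described above can be modeled with a toy, single-process Python stand-in (all names and the batch size here are invented; the real library aggregates into fixed-size per-destination buffers over MPI):

```python
class ToyAML:
    """Toy single-process model of AML-style coalescing: messages to the
    same destination are batched, and delivery is only guaranteed at the
    barrier, mirroring the passive-progress rule above."""
    def __init__(self, nprocs, handler, max_batch=4):
        self.buffers = {p: [] for p in range(nprocs)}
        self.handler = handler      # stands in for the registered AM handler
        self.max_batch = max_batch  # stands in for the aggregation buffer size
        self.flushes = 0            # how many coalesced "network" sends occurred

    def _flush(self, dest):
        if self.buffers[dest]:
            self.flushes += 1
            for msg in self.buffers[dest]:
                self.handler(dest, msg)   # handlers run inside library calls
            self.buffers[dest].clear()

    def send(self, dest, data):
        self.buffers[dest].append(data)
        if len(self.buffers[dest]) >= self.max_batch:
            self._flush(dest)             # buffer full: coalesced send

    def barrier(self):
        for dest in self.buffers:         # delivery guaranteed only here
            self._flush(dest)

received = []
aml = ToyAML(2, lambda pe, msg: received.append((pe, msg)))
for i in range(5):
    aml.send(0, i)
aml.barrier()
# five messages arrive, but only two coalesced sends were needed
```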
How to send messages:
1. call aml_init(..)
2. register a handler for an active message; its prototype should be:
   void handler(int fromPE, void *data, int dataSize)
   where fromPE is the sender's rank, data is a pointer to the message sent by the sender, and dataSize is its size in bytes;
   registration is done with aml_register_handler(handler, handlerid), where handlerid is an integer in the range [0..255]
3. send messages to other nodes using
   where data is dataSize bytes of data to be sent to the PE with rank destPE, to be processed by the handler registered under handlerid
4. collectively call aml_barrier(), which not only synchronizes all processes but also ensures that all active messages
   sent prior to the aml_barrier call have been delivered (and their handlers have completed execution) on exit from aml_barrier
5. call aml_finalize()
/* Copyright (c) 2011-2017 Graph500 Steering Committee
All rights reserved.
Developed by: Anton Korzh
Graph500 Steering Committee
New code under University of Illinois/NCSA Open Source License
see license.txt or
// AML: active messages library v 1.0
// MPI-3 passive transport
// transparent message aggregation greatly increases the message rate on interconnects with poor small-message performance
// shared memory optimization used
// Implementation basic v1.0
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <pthread.h>
#ifdef __APPLE__
#define SYSCTL_CORE_COUNT "machdep.cpu.core_count"
#include <sys/sysctl.h>
#include <sys/types.h>
#include <mach/thread_policy.h>
#include <mach/thread_act.h>
// code borrowed from
typedef struct cpu_set {
uint32_t count;
} cpu_set_t;
static inline void
CPU_ZERO(cpu_set_t *cs) { cs->count = 0; }
static inline void
CPU_SET(int num, cpu_set_t *cs) { cs->count |= (1 << num); }
static inline int
CPU_ISSET(int num, cpu_set_t *cs) { return (cs->count & (1 << num)); }
int sched_getaffinity(pid_t pid, size_t cpu_size, cpu_set_t *cpu_set)
{
	int32_t core_count = 0;
	size_t len = sizeof(core_count);
	int ret = sysctlbyname("machdep.cpu.core_count", &core_count, &len, 0, 0);
	if (ret) {
		printf("error while get core count %d\n", ret);
		return -1;
	}
	cpu_set->count = 0;
	for (int i = 0; i < core_count; i++) {
		cpu_set->count |= (1 << i);
	}
	return 0;
}
int pthread_setaffinity_np(pthread_t thread, size_t cpu_size,
			   cpu_set_t *cpu_set)
{
	thread_port_t mach_thread;
	int core = 0;
	for (core = 0; core < 8 * cpu_size; core++) {
		if (CPU_ISSET(core, cpu_set)) break;
	}
	thread_affinity_policy_data_t policy = { core };
	mach_thread = pthread_mach_thread_np(thread);
	thread_policy_set(mach_thread, THREAD_AFFINITY_POLICY,
			  (thread_policy_t)&policy, 1);
	return 0;
}
#endif /* __APPLE__ */
#include <malloc.h>
#ifdef __clang__
#define inline static inline
#endif
#include <unistd.h>
#include <mpi.h>
#define MAXGROUPS 65536 //number of nodes (processes on the same node form a group)
#define AGGR (1024*32) //aggregation buffer size per dest in bytes : internode
#define AGGR_intra (1024*32) //aggregation buffer size per dest in bytes : intranode
#define NRECV 4 // number of preposted recvs internode
#define NRECV_intra 4 // number of preposted recvs intranode
#define NSEND 4 // number of available sends internode
#define NSEND_intra 4 // number of send intranode
#define SOATTR __attribute__((visibility("default")))
#define SENDSOURCE(node) ( sendbuf+(AGGR*nbuf[node]))
#define SENDSOURCE_intra(node) ( sendbuf_intra+(AGGR_intra*nbuf_intra[node]) )
#define ushort unsigned short
static int myproc,num_procs;
static int mygroup,num_groups;
static int mylocal,group_size;
static int loggroup,groupmask;
#ifdef SIZE_MUST_BE_POWER_OF_TWO
#define PROC_FROM_GROUPLOCAL(g,l) ((l)+((g)<<loggroup))
#define GROUP_FROM_PROC(p) ((p) >> loggroup)
#define LOCAL_FROM_PROC(p) ((p) & groupmask)
#else
#define PROC_FROM_GROUPLOCAL(g,l) ((g)*group_size+(l))
#define GROUP_FROM_PROC(p) ((p)/group_size)
#define LOCAL_FROM_PROC(p) ((p)%group_size)
#endif
volatile static int ack=0;
volatile static int inbarrier=0;
static void (*aml_handlers[256]) (int,void *,int); //pointers to user-provided AM handlers
//internode comm (proc number X from each group)
//intranode comm (all cores of one nodegroup)
MPI_Comm comm, comm_intra;
// MPI stuff for sends