TAPIOCA is a static library implementing the two-phase I/O scheme on top of MPI-IO. The library is topology-aware: it provides several aggregator placement strategies that take the network characteristics and the data access pattern into account. TAPIOCA is optimized for large-scale supercomputers through an implementation based on MPI one-sided communication (RMA) and non-blocking operations.
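For illustration only, the sketch below shows the kind of one-sided data movement this relies on: each rank pushes its contribution into a buffer exposed by an aggregator with MPI_Put, without the aggregator posting matching receives. This is plain MPI code written for this README, not TAPIOCA's internal implementation; the choice of aggregator rank, the buffer layout, and the MPI_Win_fence synchronization are illustrative assumptions.

```cpp
// Conceptual sketch (not TAPIOCA code): every rank pushes its block into a
// window exposed by rank 0, which plays the role of the aggregator.
#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int block = 1024;                  // elements contributed per rank
    std::vector<int> local(block, rank);     // data to aggregate

    // The "aggregator" exposes a window large enough to hold all contributions.
    std::vector<int> aggbuf;
    if (rank == 0) aggbuf.resize(static_cast<size_t>(block) * size);

    MPI_Win win;
    MPI_Win_create(rank == 0 ? aggbuf.data() : nullptr,
                   rank == 0 ? static_cast<MPI_Aint>(aggbuf.size() * sizeof(int)) : 0,
                   sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    // One-sided put: the aggregator does not post any matching receive.
    MPI_Win_fence(0, win);
    MPI_Put(local.data(), block, MPI_INT, 0,
            static_cast<MPI_Aint>(rank) * block, block, MPI_INT, win);
    MPI_Win_fence(0, win);

    // At this point rank 0 holds a contiguous, aggregated buffer it could flush
    // to the file system while a second buffer is being filled
    // (the double-buffering mentioned above).

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```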
TAPIOCA (before receiving this name) was introduced in an SC'16 workshop paper: [Topology-Aware Data Aggregation for Intensive I/O on Large-Scale Supercomputers](http://www.francoistessier.info/documents/COM-HPC16-IO.pdf)
* TAPIOCA_STRATEGY = SHORTEST_PATH / LONGEST_PATH / TOPOLOGY_AWARE / CONTENTION_AWARE
* TAPIOCA_NBAGGR = Number of aggregators per file
* TAPIOCA_BUFFERSIZE = Buffer size in bytes. Use a multiple of the file system block size to avoid lock contention. Two buffers of this size are allocated to perform double-buffering.
* TAPIOCA_COMMSPLIT = true / false. If true, MPI_Comm_split is used to create one sub-communicator per aggregator. If false, the sub-communicators are created from MPI groups. For a single shared output file on a large-scale run, setting this variable to false can halve the time needed to elect the aggregators.
* TAPIOCA_DEVNULL = true / false. If true, the write operation is redirected to /dev/null instead of effectively writing the file. Useful for aggregation time measurements.
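These settings are typically exported in the job script before launching the application. As a minimal, hypothetical sketch, and assuming TAPIOCA reads the variables from the process environment at run time, they could also be set programmatically with POSIX setenv() before the first I/O call; the values below are illustrative, not recommended defaults.

```cpp
// Hypothetical configuration sketch: same effect as exporting the variables
// in the job script, assuming TAPIOCA reads them from the environment.
#include <stdlib.h>   // setenv (POSIX)
#include <mpi.h>

int main(int argc, char **argv) {
    // Illustrative values, not recommended defaults.
    setenv("TAPIOCA_STRATEGY",   "TOPOLOGY_AWARE", 1);
    setenv("TAPIOCA_NBAGGR",     "16", 1);          // 16 aggregators per file
    setenv("TAPIOCA_BUFFERSIZE", "16777216", 1);    // 16 MiB, multiple of the FS block size
    setenv("TAPIOCA_COMMSPLIT",  "false", 1);
    setenv("TAPIOCA_DEVNULL",    "false", 1);

    MPI_Init(&argc, &argv);
    // ... application I/O performed through TAPIOCA's two-phase scheme ...
    MPI_Finalize();
    return 0;
}
```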