Commit d1ed5182 authored by Adrian Pope (parent f0da113e)
# GenericIO
GenericIO is a write-optimized library for writing self-describing scientific data files on large-scale parallel file systems.
## Reference
Habib et al., "HACC: Simulating Future Sky Surveys on State-of-the-Art Supercomputing Architectures," New Astronomy, 2015
## Source Code
A source archive is available here: [genericio-20190417.tar.gz](), or from git:

```
git clone
```
## Output file partitions (subfiles)
If you're running on an IBM BG/Q supercomputer, then the number of subfiles (partitions) is chosen automatically based on the I/O nodes. Otherwise, by default, the GenericIO library picks the number of subfiles based on a fairly naive hostname-based hashing scheme. This works reasonably well on small clusters, but not on larger systems. On a larger system, you might want to set these environment variables:

```
export GENERICIO_PARTITIONS_USE_NAME=0
export GENERICIO_RANK_PARTITIONS=256
```
Where the number of partitions (256 above) determines the number of subfiles used. If you're using a Lustre file system, for example, an optimal number of files satisfies:

```
# of files * stripe count ~ # OSTs
```

On Titan, for example, there are 1008 OSTs and a default stripe count of 4, so we use approximately 256 files.
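The Titan arithmetic above can be sketched as follows. This helper is purely illustrative and not part of GenericIO; it just applies the rule of thumb and rounds to the nearest power of two:

```python
# Rule-of-thumb subfile count for a Lustre system (illustrative only;
# this helper is NOT part of the GenericIO library).
def suggested_subfiles(num_osts, stripe_count):
    """Pick a power-of-two file count with files * stripe_count ~ num_osts."""
    target = num_osts / stripe_count  # e.g. 1008 / 4 = 252 on Titan
    power = 1
    while power * 2 <= target:
        power *= 2
    # Round to the nearest power of two (256 is closer to 252 than 128 is).
    return power * 2 if target - power > power * 2 - target else power

print(suggested_subfiles(1008, 4))  # Titan example from the text -> 256
```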
## Benchmarks
Once you build the library and associated programs (using `make`), you can run, for example:

```
$ mpirun -np 8 ./mpi/GenericIOBenchmarkWrite /tmp/out.gio 123456 2
Wrote 9 variables to /tmp/out (4691036 bytes) in 0.2361s: 18.9484 MB/s

$ mpirun -np 8 ./mpi/GenericIOBenchmarkRead /tmp/out.gio
Read 9 variables from /tmp/out (4688028 bytes) in 0.223067s: 20.0426 MB/s [excluding header read]
```
The read benchmark always reads all of the input data. The write benchmark takes two numerical parameters: the first is the number of data rows to write, and the second is a random seed (which slightly perturbs the per-rank output sizes, but not by much). Each row is 36 bytes in these benchmarks.
The write benchmark can be passed the -c parameter to enable output compression. Both benchmarks take an optional -a parameter to request that homogeneous aggregates (i.e. "float4") be used instead of using separate arrays for each position/velocity component.
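For reference, the rates reported above appear to be bytes divided by elapsed seconds, expressed in mebibytes per second (the `2**20` divisor reproduces both printed figures exactly):

```python
# The benchmark output reports bytes and elapsed seconds; the MB/s figure
# matches bytes / seconds / 2**20 (mebibytes per second).
def rate_mb_per_s(nbytes, seconds):
    return nbytes / seconds / 2**20

print(round(rate_mb_per_s(4691036, 0.2361), 4))    # write example: 18.9484
print(round(rate_mb_per_s(4688028, 0.223067), 4))  # read example: 20.0426
```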
## Python module
The repository includes a `genericio` Python module that can read genericio-formatted files and return NumPy arrays. It is built as part of the standard build. Once you've built genericio, you can inspect and read data as follows:
```
$ python
>>> import genericio
>>> genericio.gio_inspect('m000-99.fofproperties')
Number of Elements: 1691
[data type] Variable name
[i 32] fof_halo_count
[i 64] fof_halo_tag
[f 32] fof_halo_mass
[f 32] fof_halo_mean_x
[f 32] fof_halo_mean_y
[f 32] fof_halo_mean_z
[f 32] fof_halo_mean_vx
[f 32] fof_halo_mean_vy
[f 32] fof_halo_mean_vz
[f 32] fof_halo_vel_disp
(i = integer, f = floating point; the number is the size in bits)
>>> genericio.gio_read('m000-99.fofproperties','fof_halo_mass')
array([[  4.58575588e+13],
       [  5.00464689e+13],
       [  5.07078771e+12],
       [  1.35221006e+13],
       [  5.29125710e+12],
       [  7.12849857e+12]], dtype=float32)
```
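Note that `gio_read` returns a column vector of shape `(N, 1)`. A sketch of typical post-processing with NumPy, using the six masses printed above as stand-in data (so the snippet runs without a genericio build):

```python
import numpy as np

# Stand-in for the (N, 1) float32 array returned by genericio.gio_read;
# these are the six fof_halo_mass values printed above.
masses = np.array([[4.58575588e+13],
                   [5.00464689e+13],
                   [5.07078771e+12],
                   [1.35221006e+13],
                   [5.29125710e+12],
                   [7.12849857e+12]], dtype=np.float32)

flat = masses.ravel()                  # flatten (N, 1) -> (N,)
heavy = np.count_nonzero(flat > 1e13)  # halos with mass above 1e13
print(flat.shape, heavy)               # (6,) 3
```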
[Click here to go to the README for the alternative Python interface](new_python/)