The codes-workload component provides a standardized mechanism to describe
I/O workloads to be executed in a storage simulation.

This is the API that storage simulations will use to retrieve I/O
operations to execute within the simulation.  The primary functions are:

- codes_workload_load(): loads the specified workload instance
- codes_workload_get_next(): retrieves the next I/O operation for a given rank
- codes_workload_get_next_rc(): reverse-computation version of the above

The operations are described by the codes_workload_op struct.  The end of
the stream of operations for a given rank is indicated by the CODES_WK_END
operation.

Implementation of the codes/codes-workload.h API. 

This is the API to be implemented by specific workload generator methods
(a generator that produces operations based on a Darshan log, for example).
Multiple workload generator methods can be used by the same simulation as
long as they support the same interface.  This API is similar to the
top-level codes/codes-workload.h API, except that there is no reverse
computation function.  Workload generators do not need to handle that case.

This is an example workload generator that implements the
codes-workload-method.h interface.  It produces a static workload for
testing purposes.

This is the implementation of the codes-workload.h API for the I/O kernel
language generator. 

This is the implementation of the workload generator API for Darshan workloads.
Darshan trace events are stored in a file, which needs to be passed to the
_workload_load function in the params argument.

test program (tests/workload/*):
codes-workload-test.c: main routine for a simulator that provides an example
of 48 clients executing a workload described by the test-workload-method on
16 servers.  Can be executed as follows:

(parallel, optimistic example)
mpiexec -n 16 tests/workload/codes-workload-test --sync=3

(serial, conservative example)
tests/workload/codes-workload-test --sync=1

The test code is split up so that the compute node LPs are implemented in
codes-workload-test-cn-lp.* and the server LPs are implemented in
codes-workload-test-svr-lp.*.  Note that the timing information is
completely arbitrary for testing purposes; this is not a true storage
simulator but just a test harness.  

The compute node LP implements its own barrier and delay operations.  Other
operations are sent to the server LPs for execution.

The test programs produce output (the simulated completion time of each
client and server) in a subdirectory called
codes-workload-test-results-<BIGNUMBERS>/.  This output should be precisely
consistent regardless of the number of processes used in the simulation and
whether the simulation is executed in conservative or optimistic mode.

Running "make check" in the build directory will execute a single-process,
conservative run of the codes-workload-test simulation.