Commit 8007cd43 authored by Misbah Mubarak

Updating documentation to reflect multiple cores per node, adding example config file

parent 213897b2
@@ -86,6 +86,16 @@ mpirun -np 4 ./src/network-workloads//model-net-mpi-replay --sync=3
--workload_type="dumpi" --
../src/network-workloads/conf/modelnet-mpi-test-dfly-amg-216.conf
-------- Running multiple MPI ranks (or cores) mapped to a compute node -------
13- Update the config file so that multiple core/proc LPs are mapped onto each
model-net LP (a worked example based on the config file below follows this
list). See the example config file in:
../src/network-workloads/conf/modelnet-mpi-test-dfly-mul-cores.conf
14- If running multiple MPI ranks per node with random allocations, the
allocation files must also be generated with multiple cores per node. See
scripts/allocation_gen/README for instructions on generating such allocation
files.
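As a concrete reading of the example config file (the values are taken from the
config reproduced below; the cores-per-node interpretation follows the
core/proc-to-model-net LP mapping described in step 13):

   nw-lp="16" and modelnet_dragonfly="4" per repetition
   => 16 / 4 = 4 core/proc LPs mapped onto each dragonfly node LP, i.e. 4 cores per node
   repetitions="264"
   => 264 x 4 = 1056 compute nodes and 264 x 16 = 4224 MPI ranks simulated in total

Allocation files used with this config (step 14) must therefore also be
generated for 4 cores per node.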
----- sampling and debugging options for MPI Simulation Layer ----
Runtime options can be used to enable time-stepped series data of simulation
New file: src/network-workloads/conf/modelnet-mpi-test-dfly-mul-cores.conf
LPGROUPS
{
   MODELNET_GRP
   {
      repetitions="264";
      # 16 core/proc LPs and 4 dragonfly node LPs per repetition,
      # i.e. 4 cores mapped onto each compute node
      nw-lp="16";
      modelnet_dragonfly="4";
      modelnet_dragonfly_router="1";
   }
}
PARAMS
{
   # packet size and chunk size in bytes
   packet_size="512";
   modelnet_order=( "dragonfly", "dragonfly_router");
   # scheduler options
   modelnet_scheduler="fcfs";
   chunk_size="256";
   # modelnet_scheduler="round-robin";
   # number of routers per dragonfly group
   num_routers="8";
   # buffer sizes in bytes for local, global and compute-node channels
   local_vc_size="16384";
   global_vc_size="32768";
   cn_vc_size="16384";
   # link bandwidths in GiB/s
   local_bandwidth="5.25";
   global_bandwidth="4.7";
   cn_bandwidth="5.25";
   # ROSS message size
   message_size="592";
   routing="adaptive";
}
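To run with this config, the model-net-mpi-replay command shown earlier in the
README stays the same; only the config file path changes. A sketch (the dumpi
trace options are elided here and left as a placeholder, exactly as they appear
in the earlier example):

mpirun -np 4 ./src/network-workloads/model-net-mpi-replay --sync=3
--workload_type="dumpi" <trace options as in the earlier example> --
../src/network-workloads/conf/modelnet-mpi-test-dfly-mul-cores.conf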