- 06 Nov, 2014 3 commits
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
- 05 Nov, 2014 2 commits
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
- 04 Nov, 2014 3 commits
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
- 03 Nov, 2014 4 commits
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
- 09 Oct, 2014 2 commits
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
    NOTES:
    - The torus model does not use it, as it has routing logic preventing direct usage. It probably can be done, but I'll leave that for later.
    - Collectives in torus and dragonfly don't use it yet.
- 29 Aug, 2014 1 commit
  - Jonathan Jenkins authored
- 21 Aug, 2014 1 commit
  - Jonathan Jenkins authored
    (have NO idea how the triton tests passed without this...)
- 20 Aug, 2014 1 commit
  - Jonathan Jenkins authored
    Rather than model-net LPs directly sending messages to other model-net LPs, an LP can now route the intended message through the scheduler interface, where it is queued up for reception by the receiver (see the diff of loggp.c). This has the benefit of enabling things like priority and fairness for N->1 communication patterns. Currently, no packetizing is supported, and I haven't yet written checks for it - beware. Loggp is currently the only supported model; simplenet could also be supported without much trouble, but I doubt there's any demand for it at the moment. This should NOT be used by the dragonfly/torus models, as they have their own routing backend.
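The N->1 fairness idea described in that commit can be sketched as a receiver-side scheduler that queues messages per sender and drains them round-robin, rather than delivering sends directly. This is an illustrative toy under assumed names (`recv_sched`, `sched_enqueue`, `sched_dequeue`) - it is not the actual loggp/scheduler code:

```c
#include <assert.h>

/* Toy sketch: messages are queued at the receiver instead of being
 * delivered directly, and drained round-robin across senders so no
 * single sender can monopolize an N->1 pattern. */

#define MAX_SENDERS 4
#define MAX_MSGS 16

struct recv_sched {
    int q[MAX_SENDERS][MAX_MSGS]; /* per-sender FIFO of message ids */
    int head[MAX_SENDERS], tail[MAX_SENDERS];
    int rr;                       /* next sender to service */
};

void sched_init(struct recv_sched *s) {
    for (int i = 0; i < MAX_SENDERS; i++)
        s->head[i] = s->tail[i] = 0;
    s->rr = 0;
}

/* queue a message from a sender instead of delivering it immediately */
void sched_enqueue(struct recv_sched *s, int sender, int msg) {
    s->q[sender][s->tail[sender]++] = msg;
}

/* round-robin dequeue: every sender with pending messages gets a turn */
int sched_dequeue(struct recv_sched *s) {
    for (int i = 0; i < MAX_SENDERS; i++) {
        int sd = (s->rr + i) % MAX_SENDERS;
        if (s->head[sd] < s->tail[sd]) {
            s->rr = (sd + 1) % MAX_SENDERS;
            return s->q[sd][s->head[sd]++];
        }
    }
    return -1; /* all queues empty */
}
```

With two senders queued, the dequeue order alternates between them instead of draining one sender's backlog first.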
- 19 Aug, 2014 2 commits
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
- 18 Aug, 2014 1 commit
  - Jonathan Jenkins authored
- 15 Aug, 2014 5 commits
  - Jonathan Jenkins authored
    Torus and simplewan each have problems precluding them from the current scheduling fix:
    - simplewan: each "device" has N input/output ports. It can't simply tell the scheduler when it is idle, because the scheduler doesn't know which packets go to which ports.
    - torus: also has N input/output ports (two for each dimension). Additionally, the same routing "queue" (via the "next_link_available_time" var) is used for incoming and outgoing messages, so we can't guarantee to the scheduler that we'll be available at time x (an incoming msg could arrive and then be routed at time x-1). This isn't a problem for the dragonfly network, as terminals aren't intermediate routers.
    Ideally, what needs to happen here is for the intermediate packets/chunks to be queued up in the scheduler.
  - Jonathan Jenkins authored
    - Previously, packet issues were done without any consideration for device availability - all within epsilon, preventing any meaningful scheduling.
    - Enabled for loggp only; other networks will be incorporated shortly.
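The availability-aware issue described above can be sketched as tracking a per-device next-idle time and starting each packet at the later of "now" and that time, rather than issuing everything within epsilon of now. A minimal sketch with assumed names (`device`, `issue_packet`), not the actual loggp code:

```c
#include <assert.h>

/* Toy sketch: a device advertises when it next becomes idle, so packet
 * issues are spaced by transmission time instead of all landing
 * "within epsilon" of the current simulation time. */

struct device {
    double next_idle; /* earliest time the device can start a new packet */
};

/* returns the time the packet actually starts transmitting */
double issue_packet(struct device *d, double now, double xmit_time) {
    double start = now > d->next_idle ? now : d->next_idle;
    d->next_idle = start + xmit_time; /* device is busy until then */
    return start;
}
```

Two back-to-back issues at time 0 now start at 0 and at the first packet's finish time, giving the scheduler a meaningful gap to work with.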
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
- 14 Aug, 2014 1 commit
  - mubarak authored
    Integrating DUMPI's MPI trace replay with model-net. Currently supports replaying MPI point-to-point messaging on top of the torus, dragonfly, and simple-net network models.
- 11 Aug, 2014 1 commit
  - Jonathan Jenkins authored
- 08 Aug, 2014 1 commit
  - Jonathan Jenkins authored
    - N priorities, processed in increasing order.
    - The queue for priority i is not touched until the 0...i-1 queues are empty.
    - Example injection API usage is shown in the test program.
    - Currently, only the priority scheduler has need of it.
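The strict-priority policy in that commit (queue i untouched until queues 0...i-1 are empty) can be sketched directly. This is a hypothetical illustration - names like `prio_sched`, `ps_push`, and `ps_pop` are assumptions, not the real CODES scheduler API:

```c
#include <assert.h>

/* Toy sketch of a strict-priority scheduler: N FIFO queues, always
 * drained in increasing priority-number order, so the queue for
 * priority i is only touched once queues 0..i-1 are empty. */

#define NPRIO 3
#define QCAP  16

struct prio_sched {
    int q[NPRIO][QCAP];
    int head[NPRIO], tail[NPRIO];
};

void ps_init(struct prio_sched *s) {
    for (int p = 0; p < NPRIO; p++)
        s->head[p] = s->tail[p] = 0;
}

void ps_push(struct prio_sched *s, int prio, int msg) {
    s->q[prio][s->tail[prio]++] = msg;
}

int ps_pop(struct prio_sched *s) {
    /* scan from priority 0 upward: lower number = served first */
    for (int p = 0; p < NPRIO; p++)
        if (s->head[p] < s->tail[p])
            return s->q[p][s->head[p]++];
    return -1; /* all queues empty */
}
```

Regardless of arrival order, everything queued at priority 0 drains before anything at priority 1 is considered, and so on.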
- 06 Aug, 2014 1 commit
  - Jonathan Jenkins authored
- 04 Aug, 2014 1 commit
  - Jonathan Jenkins authored
- 31 Jul, 2014 4 commits
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
    All configuration now proceeds at a per-LP level and requires separate registration and configuration calls, as seen in the test programs. model_net_set_params is no longer used; it is replaced by model_net_register and model_net_configure. The dragonfly network, having two LP types bundled in the same code path, is special-cased in the registration code.
    LP mapping in model-net now has the following defaults:
    - Counts via codes_mapping_get_lp_count are now with respect to the calling network LP's annotation.
    - When looking up network LPs via codes_mapping_get_lp_info/codes_mapping_get_lp_id, the annotation of the calling network LP is used. Hence, routing now occurs only between LPs of the same annotation. If the destination LP's group specified by model_net_*event does not contain a modelnet LP with the same annotation as the modelnet LP in the sender's group, then an error will occur (in codes_mapping).
    Known issues:
    - modelnet users currently cannot specify which modelnet LP to use in the case of multiple modelnet LPs in the sender's group. This will be fixed in future commits after a consensus is reached on the best way to expose this information.
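The split registration/configuration flow and annotation-scoped lookup described in that commit can be sketched as follows. Everything here is illustrative - `register_lp`, `configure_lp`, and `find_peer` are hypothetical stand-ins, not the real model_net_register/model_net_configure or codes_mapping signatures:

```c
#include <assert.h>
#include <string.h>

/* Toy sketch: LPs are first registered (declared), then configured in a
 * separate pass, and peer lookup is scoped to the caller's annotation,
 * so routing only occurs between LPs carrying the same annotation. */

#define MAX_LPS 8

struct lp_entry {
    const char *net;        /* e.g. "loggp" */
    const char *annotation; /* annotation string scoping the LP */
    int configured;
};

static struct lp_entry lps[MAX_LPS];
static int nlps;

/* registration pass: declare the LP before any configuration happens */
int register_lp(const char *net, const char *annotation) {
    lps[nlps].net = net;
    lps[nlps].annotation = annotation;
    lps[nlps].configured = 0;
    return nlps++;
}

/* configuration pass: runs separately, over already-registered LPs */
void configure_lp(int id) {
    lps[id].configured = 1;
}

/* annotation-scoped lookup: only LPs with the caller's annotation match;
 * -1 mimics the mapping error when no same-annotation peer exists */
int find_peer(int caller, int start) {
    for (int i = start; i < nlps; i++)
        if (i != caller &&
            strcmp(lps[i].annotation, lps[caller].annotation) == 0)
            return i;
    return -1;
}
```

An LP annotated "high" finds only "high"-annotated peers; if the target group holds no LP with the caller's annotation, the lookup fails, mirroring the codes_mapping error described above.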
- 24 Jul, 2014 2 commits
  - Jonathan Jenkins authored
  - Jonathan Jenkins authored
- 18 Jul, 2014 1 commit
  - Jonathan Jenkins authored
- 16 Jul, 2014 1 commit
  - mubarak authored
- 14 Jul, 2014 1 commit
  - Jonathan Jenkins authored
- 08 Jul, 2014 1 commit
  - Jonathan Jenkins authored
    - The "modelnet" parameter in cfg is now a no-op.
    - The "modelnet_order" parameter in cfg is required, listing the order in which networks are indexed to the model.
    - Modified the "model_net_set_params" signature.
    - Updated tests to use the new interface.
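Based on that commit message, a config file using the new interface might contain something like the fragment below. This is a guess at the shape (CODES configs use a libconfig-style syntax); only the parameter names "modelnet" and "modelnet_order" come from the commit itself:

```
PARAMS
{
   # required: lists the order in which networks are indexed to the model
   modelnet_order = ( "loggp" );

   # the old "modelnet" parameter is now a no-op and can be dropped:
   # modelnet = "loggp";
}
```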