1. 21 Jun, 2019 1 commit
  2. 04 Jun, 2019 2 commits
  3. 02 Jun, 2019 1 commit
  4. 23 May, 2019 1 commit
    • model-net-mpi-replay: Fix random permutation pattern init · 6e378c26
      Neil McGlohon authored
      The bug where the first destination of the random permutation
      pattern was not randomized (defaulting instead to rank 0) had
      somehow returned. As a result, if terminal 0 was allocated to the
      random permutation workload, it would by default attempt to send
      messages to itself, triggering a failed assert.
      This fix checks whether the rank has generated any data yet. If
      it hasn't, it picks a random non-self destination and sends to
      that destination until the random permutation threshold has been
      met.
  5. 21 May, 2019 1 commit
    • Dragonfly Dally: busy_time fixes based on dfc · dac6c39d
      Neil McGlohon authored
      During the creation of the dragonfly-dally.C model, QoS features
      were added. For some reason, the addition of these features also
      removed the lines handling last_buf_full behavior, so busy time
      was always reported as zero. I've restored the previous behavior
      by adding the corresponding lines from dragonfly-custom.C.
      There are still QoS-related lines touching last_buf_full and
      busy time in router_packet_send() that are mildly confusing, but
      they don't appear to affect final results and are left as is.
  6. 16 May, 2019 3 commits
  7. 15 May, 2019 1 commit
  8. 14 May, 2019 2 commits
    • mpi-replay: Add compute_time_speedup parameter · f075794c
      Neil McGlohon authored
      This commit adds support for a compute_time_speedup="X"
      parameter in a model's configuration file to accelerate the delay
      applied when accounting for compute_time in model-net-mpi-replay.c.
      X is a double giving the reduction factor: e.g. a setting of "2.0"
      means that every compute_time increment is reduced by a factor
      of 2.
    • Update all usage of "server" lp to "nw-lp" · 71f2692b
      Neil McGlohon authored
      This change is made for consistency across all configuration files
      and workload generators in core CODES. Older models and versions
      of CODES used the "server" identifier to specify the number of
      server LPs per modelnet group repetition, while newer models and
      configurations used the less paradigm-specific "nw-lp" identifier.
      This split slightly fractured the codebase and led to confusion
      when running similar or identical configurations across different
      network models.
      Specifically, this change converts all usage of the "server"
      identifier to "nw-lp". All tests, existing default configuration
      files, and models have been updated to use "nw-lp" for the
      workload server LPs, and some docs and readmes no longer instruct
      new users to use the "server" identifier.
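After this change, a modelnet group in a CODES configuration file would name its workload server LPs as below. This is a hedged sketch: the group name, LP names, and counts are illustrative, not taken from any specific shipped config.

```
LPGROUPS
{
   MODELNET_GRP
   {
      repetitions="16";
      nw-lp="4";                       # was: server="4";
      modelnet_dragonfly_dally="4";
      dragonfly_dally_router="2";
   }
}
```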
  9. 13 May, 2019 1 commit
  10. 06 May, 2019 2 commits
  11. 02 May, 2019 2 commits
  12. 01 May, 2019 2 commits
  13. 08 Apr, 2019 1 commit
  14. 16 Jan, 2019 3 commits
  15. 15 Jan, 2019 6 commits
  16. 10 Jan, 2019 3 commits
  17. 07 Jan, 2019 1 commit
  18. 04 Jan, 2019 1 commit
  19. 17 Dec, 2018 6 commits