1. 23 Jun, 2015 4 commits
  2. 22 Jun, 2015 4 commits
  3. 20 Jun, 2015 3 commits
  4. 19 Jun, 2015 2 commits
    • netmod/portals4: remove unused variables · 7cda493b
      Kenneth Raffenetti authored
      No reviewer.
    • better approach for do_accumulate_op · f039eebb
      Rob Latham authored
      commit 83253a41 triggered a bunch of new warnings, so take a different
      approach.  For simplicity of implementation, do_accumulate_op is defined
      as an MPI_User_function.  We could split up the internal routine and
      user-provided routines, but that complicates the code for little
      benefit.
      
      Instead, keep do_accumulate_op with an int type, but check for overflow
      before explicitly casting.  In many places the count is simply '1'.  In
      stream processing there is an internal limit of 256k, so the assertion
      should never fire.
      Signed-off-by: Xin Zhao <xinzhao3@illinois.edu>
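The check-then-cast approach described above can be sketched as follows; the typedef and function name are illustrative, not the actual MPICH symbols:

```c
#include <assert.h>
#include <limits.h>

/* do_accumulate_op keeps an int count because it is typed as an
 * MPI_User_function; a wider message-size count must be checked for
 * overflow before the explicit cast. */
typedef long long MPIDI_msg_sz_t;

static int checked_int_cast(MPIDI_msg_sz_t count)
{
    /* Stream processing caps chunks internally (256k per the commit
     * message), so this assertion should never fire in practice. */
    assert(count <= INT_MAX);
    return (int) count;
}
```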
  5. 18 Jun, 2015 2 commits
  6. 17 Jun, 2015 4 commits
  7. 16 Jun, 2015 4 commits
    • Close remaining conns before sockset is destroyed · 9c4b9b17
      Lena Oden authored
      
      
      The loser of a head-to-head connection is not necessarily
      closed when the sock set is destroyed. This patch
      looks for all open connections, closes their sockets,
      and frees the memory resources. Fixes #2180
      Signed-off-by: Ken Raffenetti <raffenet@mcs.anl.gov>
    • Handling of discard connection to avoid reconnect · ac07f982
      Lena Oden authored
      
      
      The loser of a head-to-head connection sometimes tries
      to reconnect later, after MPI_Finalize was called. This
      can lead to several errors in the socket layer, depending
      on the state of the discarded connection and the appearance
      of the connection events. Refs #2180
      This patch handles this in two ways:
      
      1.)
      Discarded connections are marked with CONN_STATE_DISCARD,
      so they are held back from reconnecting.  Furthermore, an error on
      any discarded connection (because the remote side closed in
      MPI_Finalize) is ignored and the connection is closed.

      2.)
      Add a finalize flag for process groups. If a process group is
      closing and tries to close all VCs, a flag is set to mark this.
      If the flag is set, a reconnection (in the socket state) is
      refused and the connection is closed on both sides.

      Both steps are necessary to catch all reconnection attempts after
      MPI_Finalize was called.
      Signed-off-by: Ken Raffenetti <raffenet@mcs.anl.gov>
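The two guards described above can be sketched roughly as follows; the enum value, struct fields, and function names are illustrative, not the actual ch3:sock symbols:

```c
#include <assert.h>

typedef enum { CONN_STATE_OPEN, CONN_STATE_DISCARD } conn_state_t;

typedef struct {
    conn_state_t state;
    int closed;
} conn_t;

typedef struct {
    int finalizing;   /* set once the process group starts closing its VCs */
} pg_t;

/* Guard 1: an error on a discarded connection (the remote side closed in
 * MPI_Finalize) is not treated as an error; the connection is just closed. */
static void handle_conn_error(conn_t *conn)
{
    if (conn->state == CONN_STATE_DISCARD) {
        conn->closed = 1;
        return;
    }
    /* ... real error handling for live connections would go here ... */
}

/* Guard 2: once the process group is finalizing, refuse reconnection and
 * close the connection on both sides. Returns 1 if the reconnect may
 * proceed, 0 if it was refused. */
static int accept_reconnect(const pg_t *pg, conn_t *conn)
{
    if (pg->finalizing) {
        conn->closed = 1;
        return 0;
    }
    return 1;
}
```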
    • netmod/portals4: fix per target event counting · 394d46b7
      Kenneth Raffenetti authored
      Ignore local completion events (SENDs) when counting outstanding
      ops to remote targets.
      
      No reviewer.
    • netmod/portals4: remove unused variable · c80f2c4e
      Kenneth Raffenetti authored
      No reviewer.
  8. 15 Jun, 2015 5 commits
    • Bug-fix in Request_load_recv_iov() when initial value of segment_first is not 0. · 93b114e3
      Xin Zhao authored
      
      
      Originally, the Request_load_recv_iov() function assumed that
      the initial value of req->dev.segment_first is always zero,
      which is not correct if we set it to a non-zero value for
      streaming the RMA operations.

      Request_load_recv_iov() is triggered
      multiple times for the same receive request until all data is
      received. During this process, req->dev.segment_first is rewritten
      to the current offset value. When the initial value of
      req->dev.segment_first is non-zero, we need another variable
      to store that value until the receive processing for this request
      is finished. Here we use a static variable in this function for
      that purpose.
      Signed-off-by: Pavan Balaji <balaji@anl.gov>
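The save-the-initial-offset idea can be sketched as below; the struct fields, the -1 sentinel, and the function name are illustrative, not the actual MPICH code:

```c
#include <assert.h>

/* The loader is called repeatedly for one request and rewrites
 * segment_first with the current offset, so the initial (possibly
 * non-zero) value is stashed in a function-local static on first entry. */
typedef struct {
    long segment_first;   /* rewritten to the current offset on each call */
    long segment_size;    /* offset at which the request is complete */
} req_t;

/* Returns how much data has been loaded so far for this request. */
static long load_recv_iov(req_t *req)
{
    static long orig_segment_first = -1;   /* -1 = no request in progress */

    if (orig_segment_first == -1)
        orig_segment_first = req->segment_first;   /* remember initial offset */

    /* ... load the next IOV chunk starting at req->segment_first ... */
    long loaded_so_far = req->segment_first - orig_segment_first;

    if (req->segment_first == req->segment_size)
        orig_segment_first = -1;   /* request finished; reset for the next one */
    return loaded_so_far;
}
```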
    • Bug-fix in calculating streaming size in GetAccumulate pkt handler. · 9d3ddf8a
      Xin Zhao authored
      
      
      In this patch, we fix the mistakes in calculating the streaming
      size in the GetAccumulate pkt handler on the target side. The original
      code has two mistakes:

      1. The original code uses the size and extent of the target datatype,
         which is wrong. We should use the size and extent of the basic
         type in the target datatype.

      2. The original code always uses the total data size to calculate
         the current streaming size, which is wrong. We should use
         the remaining data size in the calculation.
      
      This patch fixes these two issues.
      Signed-off-by: Pavan Balaji <balaji@anl.gov>
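The corrected computation can be sketched as below, assuming a 256k streaming limit as mentioned in the earlier do_accumulate_op commit; the limit constant and function name are illustrative, not the actual MPICH symbols:

```c
#include <assert.h>

#define STREAM_LIMIT (256L * 1024L)   /* assumed internal streaming cap */

/* Compute the size of the next streamed chunk: start from the *remaining*
 * data (total minus what is already done), cap it at the stream limit,
 * and round down to a whole number of *basic-type* elements. */
static long stream_size(long total_sz, long done_sz, long basic_type_sz)
{
    long rest = total_sz - done_sz;               /* remaining, not total */
    long sz = rest < STREAM_LIMIT ? rest : STREAM_LIMIT;
    return (sz / basic_type_sz) * basic_type_sz;  /* whole basic elements */
}
```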
    • Use MPIDI_msg_sz_t instead of int for orig_segment_first. · 2ae64ae5
      Xin Zhao authored
      
      
      Here we assign req->dev.segment_first to orig_segment_first.
      Since req->dev.segment_first is an MPIDI_msg_sz_t, we should
      use the same type for orig_segment_first.
      Signed-off-by: Pavan Balaji <balaji@anl.gov>
    • Increase time limit of test/mpi/rma/atomic_get test to 5 min. · 266a3adf
      Xin Zhao authored
      
      
      This test occasionally runs more than 3 min (the default time limit)
      on the OFI platform. This patch increases the time limit to 5 min.
      Signed-off-by: Pavan Balaji <balaji@anl.gov>
    • Add an isend-irecv test for multiple processes · 9d508d5d
      Lena Oden authored
      
      
      This test uses irecv and isend to transfer data in
      an alltoall manner between multiple processes.
      The idea of this test is to check whether MPI can handle
      multiple processes trying to connect to each other from
      both sides at the same time.
      Signed-off-by: Antonio J. Pena <apenya@mcs.anl.gov>
  9. 14 Jun, 2015 3 commits
    • Expose AM flush ordering and issue per OP flush if unordered. · 5324a41f
      Min Si authored
      
      
      This patch includes three changes:
      (1) Added a netmod API get_ordering to allow a netmod to expose its
      network ordering. A netmod may issue some packets via multiple
      connections in parallel if those packets (such as RMA) do not require
      ordering, and thus the packets may be unordered. This patch sets the
      network ordering in every existing netmod (tcp|mxm|ofi|portals|llc) to
      true, since all packets are sent in order over one connection.
      (2) Nemesis exposes the window packet orderings, such as AM flush
      ordering, at init time. It supports ordered packets only when the
      netmod supports an ordered network.
      (3) If AM flush is ordered (flush must finish after all previous
      operations), then CH3 RMA requests a FLUSH ACK only on the last
      operation. Otherwise, CH3 must request a per-OP FLUSH ACK to ensure
      all operations are remotely completed.
      Signed-off-by: Xin Zhao <xinzhao3@illinois.edu>
      Signed-off-by: Pavan Balaji <balaji@anl.gov>
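Change (3) above can be sketched as a small decision function; the flag and function names are illustrative, not the actual CH3/netmod symbols:

```c
#include <assert.h>

/* Exposed by the netmod at init time: nonzero if AM flush is ordered,
 * i.e. a flush finishes only after all previously issued operations. */
static int am_flush_ordered;

/* Returns 1 if operation i (0-based, of n total) must request a FLUSH ACK. */
static int needs_flush_ack(int i, int n)
{
    if (am_flush_ordered)
        return i == n - 1;   /* ordered: ack only on the last operation */
    return 1;                /* unordered: per-OP ack for remote completion */
}
```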
    • Always free issued OPs when window resource is used up. · c83b6b2d
      Min Si authored
      
      
      When the window resource is used up, the current code frees OPs before
      completion only if flush_remote is ordered. However, we can always free
      them, even on an out-of-order network, because remote completion is
      tracked by the ack counter, and local completion (flush_local) is
      translated to remote completion (flush).
      Signed-off-by: Xin Zhao <xinzhao3@illinois.edu>
      Signed-off-by: Pavan Balaji <balaji@anl.gov>
    • Move outstanding_acks increment to flush sending step. · eef0c70a
      Min Si authored
      
      
      The outstanding_acks counter was incremented at each sync call (such as
      fence and flush). However, the counter had to be decremented again if
      a flush ack was not required. It is more straightforward to increment it
      only when the flush packet is actually issued (FLUSH flag piggyback or a
      separate flush message).
      Signed-off-by: Xin Zhao <xinzhao3@illinois.edu>
      Signed-off-by: Pavan Balaji <balaji@anl.gov>
  10. 12 Jun, 2015 9 commits