1. 03 Nov, 2014 9 commits
  2. 01 Nov, 2014 3 commits
    • Bug-fix: avoid freeing a NULL pointer in RMA. · 72a1e6f8
      Xin Zhao authored
      req->dev.user_buf points to the data sent from the origin process
      to the target process; for FOP it sometimes points to the IMMED
      area in the packet header, when the data fits in the header.
      In that case we must not free req->dev.user_buf in the final
      request handler, since that area is freed by the runtime together
      with the packet header.
      This patch initializes user_buf to NULL when the request is created,
      resets it to NULL when the FOP completes, and avoids freeing a NULL
      pointer in the final request handler.
      Signed-off-by: Min Si <msi@il.is.s.u-tokyo.ac.jp>
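The ownership pattern described above can be sketched in plain C (the struct and function names here are illustrative stand-ins, not the real MPICH definitions): initialize the pointer to NULL at creation, reset it to NULL when it aliases the packet header's IMMED area, and let the final handler free it unconditionally, since free(NULL) is a no-op.

```c
#include <stdlib.h>

/* Hypothetical, simplified stand-ins for the MPICH request and packet
 * structures; names are illustrative only. */
typedef struct {
    char immed[8];          /* IMMED area inside the packet header */
} packet_t;

typedef struct {
    void *user_buf;         /* may own heap data, or alias pkt->immed */
} request_t;

/* Initialize user_buf to NULL at request creation, as the patch does. */
void request_create(request_t *req) {
    req->user_buf = NULL;
}

/* For FOP, data small enough to fit in the header aliases the IMMED area;
 * ownership stays with the packet, so user_buf is reset to NULL when the
 * operation completes, before the packet header is released. */
void fop_complete(request_t *req, packet_t *pkt) {
    if (req->user_buf == (void *) pkt->immed)
        req->user_buf = NULL;
}

/* Final request handler: user_buf is either heap data this request owns
 * or NULL, and free(NULL) is defined to do nothing. */
void final_req_handler(request_t *req) {
    free(req->user_buf);
    req->user_buf = NULL;
}
```

The key invariant is that user_buf is never left pointing at packet-owned memory once the operation completes, so the final handler can free it without knowing which case occurred.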
    • Igor Ivanov
    • Bug-fix: always wait for remote completion in Win_unlock. · c76aa786
      Xin Zhao authored and Pavan Balaji committed
      The original implementation included an optimization that allowed
      Win_unlock for an exclusive lock to return without waiting for
      remote completion. This relies on the assumption that the window
      memory on the target process will not be accessed by a third party
      until the target process finishes all RMA operations and grants the
      lock to other processes. However, this assumption does not hold if
      the user passes the assert MPI_MODE_NOCHECK. Consider the following
      code:
                P0                             P1      P2
          MPI_Win_lock(P1, NULL, exclusive);
          MPI_Win_unlock(P1, exclusive);
          MPI_Send(P2);                                MPI_Recv(P0);
                                                       MPI_Win_lock(P1, MODE_NOCHECK, exclusive);
                                                       MPI_Win_unlock(P1, exclusive);
      Both P0 and P2 request an exclusive lock on P1, and P2 uses the
      assert MPI_MODE_NOCHECK because the lock should already be granted
      to P2 after the synchronization between P2 and P0. However, in the
      original implementation a GET operation issued by P2 might not see
      the updated value, since Win_unlock on P0 can return without waiting
      for remote completion.
      This patch removes the optimization. Since every Win_unlock now
      guarantees remote completion, in Win_free the target process no
      longer needs additional counting work to detect target-side
      completion; a global barrier suffices.
      Signed-off-by: Pavan Balaji <balaji@anl.gov>
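The new Win_unlock behavior can be modeled as a simple completion-counting check (a minimal sketch with hypothetical names, not the actual MPICH protocol code): unlock may return only once the target has acknowledged every RMA operation issued in the epoch.

```c
#include <stdbool.h>

/* Illustrative model of the patched semantics: Win_unlock no longer
 * returns as soon as the lock is released locally; it waits until the
 * target has signalled completion of all issued RMA operations.
 * Struct and function names are hypothetical. */
typedef struct {
    int ops_issued;      /* RMA operations issued in this epoch */
    int acks_received;   /* remote-completion acks from the target */
} epoch_t;

/* Target side acknowledges one remotely completed operation. */
void target_ack(epoch_t *e) {
    e->acks_received++;
}

/* After the patch, unlock is allowed to return only when every issued
 * operation has been acknowledged as complete at the target. A real
 * implementation would poll progress in a loop until this holds. */
bool unlock_may_return(const epoch_t *e) {
    return e->acks_received >= e->ops_issued;
}
```

Because this condition is enforced on every unlock, a process that later synchronizes with the unlocker (as P2 does via MPI_Recv above) is guaranteed to observe the updated window contents, even with MPI_MODE_NOCHECK.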
  3. 31 Oct, 2014 2 commits
  4. 30 Oct, 2014 3 commits
    • Clean up white-space and code format in RMA code. · fe283e91
      Xin Zhao authored
      No reviewer.
    • Bug-fix: trigger final req handler for receiving derived datatype. · 920661c3
      Min Si authored
      Two request handlers are used when receiving data:
      (1) OnDataAvail, triggered when data arrives;
      (2) OnFinal, triggered when receiving the data is finished.
      When receiving a large derived datatype, the receive can be divided
      into multiple iovs. The OnDataAvail handler is set to the iov load
      function while data is still outstanding, but it must be switched to
      OnFinal when receiving of the last iov starts.
      The original code never switched the OnDataAvail handler to OnFinal;
      this patch fixes that bug.
      Note that the bug only appears in RMA calls, because only the RMA
      packet handlers need to specify OnFinal.
      Resolve #2189.
      Signed-off-by: Xin Zhao <xinzhao3@illinois.edu>
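The handler-switch described above can be sketched as a small state machine (a simplified model with illustrative names, not the real MPICH handler code): while more iov segments remain, the request keeps the iov load function as its data-available handler; once the last segment is about to be received, the handler is swapped to OnFinal.

```c
#include <stddef.h>

/* Simplified model of the two-handler scheme; names are illustrative. */
typedef struct request request_t;
typedef int (*req_handler_t)(request_t *);

struct request {
    size_t segs_remaining;        /* iov segments still to receive */
    req_handler_t on_data_avail;  /* handler run when data arrives */
};

/* Completion processing runs here once all data has been received. */
static int on_final(request_t *req) {
    (void) req;
    return 1;
}

/* Loads the next receive iov. The fix: when only the last segment is
 * left, switch the handler to on_final so completion processing is
 * actually triggered. The original code omitted this switch. */
static int load_recv_iov(request_t *req) {
    req->segs_remaining--;
    if (req->segs_remaining == 1)
        req->on_data_avail = on_final;
    return 0;
}
```

Driving the handler through a multi-segment receive shows the switch happening exactly once, on the transition into the last segment, which is the behavior the patch restores for large derived datatypes.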
    • Disable hugepage support by default. · 669f4286
      Pavan Balaji authored
      This patch is a workaround for an issue with older HPC-X machines.
      Once we are comfortable upgrading to the latest HPC-X version, the
      default value of the CVAR should be changed to true.
      Signed-off-by: Xin Zhao <xinzhao3@illinois.edu>
  5. 29 Oct, 2014 2 commits
  6. 28 Oct, 2014 1 commit
  7. 27 Oct, 2014 1 commit
  8. 26 Oct, 2014 1 commit
  9. 25 Oct, 2014 2 commits
  10. 24 Oct, 2014 1 commit
  11. 23 Oct, 2014 2 commits
  12. 22 Oct, 2014 2 commits
  13. 20 Oct, 2014 6 commits
  14. 17 Oct, 2014 5 commits