On Wed, Sep 06, 2017 at 04:02:24PM -0400, Chuck Lever wrote:
> 
> > On Sep 6, 2017, at 3:39 PM, Jason Gunthorpe <jgunthorpe@xxxxxxxxxxxxxxxxxxxx> wrote:
> > 
> > On Wed, Sep 06, 2017 at 02:33:50PM -0400, Chuck Lever wrote:
> > 
> >> B. Force RPC completion to wait for Send completion, which
> >>    would allow the post-v4.6 scatter-gather code to work
> >>    safely. This would need some guarantee that Sends will
> >>    always complete in a short period.
> > 
> > Why is waiting for the send completion so fundamentally different from
> > waiting for the remote RPC reply?
> > 
> > I would say that 99% of the time the send completion and RPC reply
> > completion will occur approximately concurrently.
> > 
> > eg It is quite likely the RPC reply SEND carries an embedded ack
> > for the requesting SEND..
> 
> Depends on implementation. Average RTT on IB is 3-5 usecs.
> Average RPC RTT is about an order of magnitude more. Typically
> the Send is ACK'd more quickly than the RPC Reply can be sent.
> 
> But I get your point: the normal case isn't a problem.
> 
> The problematic case arises when the Send is not able to complete
> because the NFS server is not reachable. User starts pounding on
> ^C, RPC can't complete because Send won't complete, control
> doesn't return to user.

Sure, but why is that so different from the NFS server not generating
a response?

I thought you already implemented a ctrl-c scheme that killed the QP?
Or was that just a discussion?

That is the only way to async terminate outstanding RPCs and clean up.
Killing the QP will allow the send to be 'completed'.

Having ctrl-c escalate to a QP tear down after a short timeout seems
reasonable. 99% of cases will not need the teardown since the send
will complete..

How does sockets based NFS handle this? Doesn't it zero copy from
these same buffers into SKBs? How does it cancel the SKBs before the
NIC transmits them? Seems like exactly the same kind of problem to
me..
Jason
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html