Re: [PATCH RFC 0/5] xprtrdma Send completion batching

> On Sep 6, 2017, at 4:09 PM, Jason Gunthorpe <jgunthorpe@xxxxxxxxxxxxxxxxxxxx> wrote:
> 
> On Wed, Sep 06, 2017 at 04:02:24PM -0400, Chuck Lever wrote:
>> 
>>> On Sep 6, 2017, at 3:39 PM, Jason Gunthorpe <jgunthorpe@xxxxxxxxxxxxxxxxxxxx> wrote:
>>> 
>>> On Wed, Sep 06, 2017 at 02:33:50PM -0400, Chuck Lever wrote:
>>> 
>>>> B. Force RPC completion to wait for Send completion, which
>>>> would allow the post-v4.6 scatter-gather code to work
>>>> safely. This would need some guarantee that Sends will
>>>> always complete in a short period.
>>> 
>>> Why is waiting for the send completion so fundamentally different from
>>> waiting for the remote RPC reply?
>>> 
>>> I would say that 99% of the time the send completion and RPC reply
>>> completion will occur approximately concurrently.
>>> 
>>> e.g. it is quite likely the RPC reply SEND carries an embedded ack
>>> for the requesting SEND..
>> 
>> Depends on implementation. Average RTT on IB is 3-5 usecs.
>> Average RPC RTT is about an order of magnitude more. Typically
>> the Send is ACK'd more quickly than the RPC Reply can be sent.
>> 
>> But I get your point: the normal case isn't a problem.
>> 
>> The problematic case arises when the Send is not able to complete
>> because the NFS server is not reachable. User starts pounding on
>> ^C, RPC can't complete because Send won't complete, control
>> doesn't return to user.
> 
> Sure, but why is that so different from the NFS server not generating
> a response?
> 
> I thought you already implemented a ctrl-c scheme that killed the QP?
> Or was that just a discussion?

No, we want to _avoid_ killing the QP if we can. A ctrl-C (or a
timer signal, say) on an otherwise healthy connection must not
perturb other outstanding RPCs, if possible.

What I implemented was a scheme to invalidate the memory of a
(POSIX) signaled RPC before it completes, in case the RPC Reply
hadn't yet arrived.

Currently, the only time the QP might be killed is if the server
attempts to RDMA Write an RPC Reply into one of these invalidated
memory regions. That case can't be avoided with the current RPC-
over-RDMA protocol.


> That is the only way to async terminate outstanding RPCs and clean
> up. Killing the QP will allow the send to be 'completed'.

It forces outstanding Sends to flush.

But as you explained it at the time, xprtrdma needs to wait
somehow for the QP to complete its transition to error state
before allowing RPCs to complete. Probably ib_drain_qp would be
enough.

And again, we want to preserve the connection if it is healthy.


> Having ctrl-c escalate to a QP tear down after a short timeout seems
> reasonable. 99% of cases will not need the teardown since the send
> will complete..

So I think we are partially there already. If an RPC timeout occurs
(which happens only after a few minutes), xprtrdma disconnects,
which tears down the QP.

If a timer signal fires on an RPC waiting for a server that is
unreachable, the application won't see the signal until the RPC
times out. Maybe that's how it works now?

And, otherwise, a ^C on an app waiting for an unresponsive server
will not have immediate results. But again, I think that's how it
works now.


> How does sockets based NFS handle this? Doesn't it zero copy from these
> same buffers into SKBs? How does it cancel the SKBs before the NIC
> transmits them?
> 
> Seems like exactly the same kind of problem to me..

TCP has keep-alive, where the sockets consumer is notified as soon
as the network layer determines the remote is unresponsive. The
connection is closed from underneath the consumer.

For RDMA, which has no keep-alive mechanism, we seem to be going
with waiting for the RPC to time out, then the consumer itself
breaks the connection.


--
Chuck Lever


