On Wed, Sep 06, 2017 at 05:00:29PM -0400, Chuck Lever wrote:
> What I implemented was a scheme to invalidate the memory of a
> (POSIX) signaled RPC before it completes, in case the RPC Reply
> hadn't yet arrived.
>
> Currently, the only time the QP might be killed is if the server
> attempts to RDMA Write an RPC Reply into one of these invalidated
> memory regions. That case can't be avoided with the current
> RPC-over-RDMA protocol.

Okay..

> And again, we want to preserve the connection if it is healthy.

Well, if SENDs are not completing then it is not healthy. It is
analogous to what TCP keep-alive detects (sketched below).

> > How does sockets-based NFS handle this? Doesn't it zero copy from
> > these same buffers into SKBs? How does it cancel the SKBs before
> > the NIC transmits them?
> >
> > Seems like exactly the same kind of problem to me..
>
> TCP has keep-alive, where the sockets consumer is notified as soon
> as the network layer determines the remote is unresponsive. The
> connection is closed from underneath the consumer.

Keep-alive is something different: it pings the remote periodically,
so it detects dead connections even when there is no traffic (see the
keep-alive sketch below). RDMA detects dead connections via retries
and timeouts, but only if there is traffic.

My question was about how the same situation is handled in TCP. If it
does DMA directly from the source buffers by chaining them into SKBs,
then it has exactly the same problem: it cannot release the buffers
until the SKB is released by the TCP stack after the NIC transmits it.
As in RDMA, that only happens once TCP has received the ACK (see the
MSG_ZEROCOPY sketch below).

Perhaps the answer is that TCP does not zero copy these buffers, or
that TCP doesn't care about transmitting random memory, but it seems
a question worth asking.

Jason
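
To make the SEND-completion point concrete, here is a minimal
libibverbs sketch. It is an illustration, not code from this thread:
the helper names are made up, and QP/CQ/MR setup is assumed to exist
elsewhere. A signaled SEND whose completion comes back in error (for
example IBV_WC_RETRY_EXC_ERR, transport retries exhausted) is how an
RDMA consumer learns the peer is unresponsive, and that only happens
when there is traffic to retry.

/*
 * Minimal sketch (hypothetical helpers, setup assumed elsewhere):
 * detect a dead peer via a signaled SEND that fails to complete.
 */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

static int post_signaled_send(struct ibv_qp *qp, struct ibv_mr *mr,
			      void *buf, uint32_t len)
{
	struct ibv_sge sge = {
		.addr   = (uintptr_t)buf,
		.length = len,
		.lkey   = mr->lkey,
	};
	struct ibv_send_wr wr = {
		.wr_id      = (uintptr_t)buf,
		.sg_list    = &sge,
		.num_sge    = 1,
		.opcode     = IBV_WR_SEND,
		.send_flags = IBV_SEND_SIGNALED,
	};
	struct ibv_send_wr *bad;

	/* 'buf' must stay registered and untouched until the CQE arrives. */
	return ibv_post_send(qp, &wr, &bad);
}

static void reap_send_completions(struct ibv_cq *cq)
{
	struct ibv_wc wc;

	while (ibv_poll_cq(cq, 1, &wc) > 0) {
		if (wc.status == IBV_WC_SUCCESS)
			continue;	/* SEND done, buffer may be reused */
		/*
		 * IBV_WC_RETRY_EXC_ERR and friends: the transport retried
		 * and timed out, so the peer is unresponsive and the QP is
		 * now in the error state; tear down and reconnect.
		 */
		fprintf(stderr, "SEND failed: %s\n",
			ibv_wc_status_str(wc.status));
	}
}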
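
TCP keep-alive, by contrast, probes even an idle connection. A minimal
sketch of enabling it on a connected Linux socket with the standard
SO_KEEPALIVE and TCP_KEEP* options; the timer values here are
arbitrary illustrations:

/*
 * Minimal sketch: enable TCP keep-alive so a dead peer is noticed
 * even on an idle connection.  Timer values are illustrative only.
 */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int enable_keepalive(int fd)
{
	int on = 1;
	int idle = 60;		/* idle seconds before the first probe */
	int intvl = 10;		/* seconds between probes */
	int cnt = 5;		/* unanswered probes before reset */

	if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) ||
	    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)))
		return -1;
	return 0;
}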
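
On the zero-copy question, the closest user-visible analogue is
Linux's MSG_ZEROCOPY interface (merged in 4.14, around the time of
this thread). It makes the buffer-lifetime rule explicit: the pages
handed to send() must stay stable until a completion arrives on the
socket error queue, which for TCP is only after the data has been
ACKed. That is the same constraint as the RDMA case. A minimal sketch
with simplified error handling; it is not a claim about what the
kernel NFS client itself does:

/*
 * Minimal sketch of MSG_ZEROCOPY (Linux >= 4.14): the kernel pins the
 * user pages into SKBs, and the error queue reports when they may be
 * reused, which for TCP is only after the data has been ACKed.
 */
#include <errno.h>
#include <linux/errqueue.h>
#include <sys/socket.h>
#include <sys/types.h>

static int send_and_wait_zerocopy(int fd, const void *buf, size_t len)
{
	int one = 1;
	char control[128];
	struct msghdr msg = { 0 };
	struct cmsghdr *cm;
	struct sock_extended_err *serr;

	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)))
		return -1;
	if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
		return -1;

	/*
	 * Until the completion arrives, 'buf' must not be modified or
	 * freed; a real sender would poll() rather than spin on EAGAIN.
	 */
	for (;;) {
		msg.msg_control = control;
		msg.msg_controllen = sizeof(control);
		if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
			if (errno == EAGAIN || errno == EINTR)
				continue;
			return -1;
		}
		cm = CMSG_FIRSTHDR(&msg);
		if (!cm)
			continue;
		serr = (struct sock_extended_err *)CMSG_DATA(cm);
		if (serr->ee_errno == 0 &&
		    serr->ee_origin == SO_EE_ORIGIN_ZEROCOPY)
			return 0;	/* kernel is done with 'buf' */
	}
}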