> On Feb 14, 2021, at 11:21 AM, Trond Myklebust <trondmy@xxxxxxxxxxxxxxx> wrote:
>
> On Sat, 2021-02-13 at 23:30 +0000, Chuck Lever wrote:
>>
>>
>>> On Feb 13, 2021, at 5:10 PM, Trond Myklebust <trondmy@xxxxxxxxxxxxxxx> wrote:
>>>
>>> On Sat, 2021-02-13 at 21:53 +0000, Chuck Lever wrote:
>>>> Hi Trond-
>>>>
>>>>> On Feb 13, 2021, at 3:25 PM, trondmy@xxxxxxxxxx wrote:
>>>>>
>>>>> From: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
>>>>>
>>>>> Use a counter to keep track of how many requests are queued
>>>>> behind the xprt->xpt_mutex, and keep TCP_CORK set until the
>>>>> queue is empty.
>>>>
>>>> I'm intrigued, but IMO, the patch description needs to explain
>>>> why this change should be made. Why abandon Nagle?
>>>
>>> This doesn't change the Nagle/TCP_NODELAY settings. It just
>>> switches to using the new documented kernel interface.
>>>
>>> The only change is to use TCP_CORK so that we don't send out
>>> partially filled TCP frames, when we can see that there are other
>>> RPC replies that are queued up for transmission.
>>>
>>> Note the combination TCP_CORK+TCP_NODELAY is common, and the main
>>> effect of the latter is that when we turn off the TCP_CORK, then
>>> there is an immediate forced push of the TCP queue.
>>
>> The description above suggests the patch is just a
>> clean-up, but a forced push has potential to change
>> the server's behavior.
>
> Well, yes. That's very much the point.
>
> Right now, the TCP_NODELAY/Nagle setting means that we're doing that
> forced push at the end of _every_ RPC reply, whether or not there is
> more stuff that can be queued up in the socket. The MSG_MORE is the
> only thing that keeps us from doing the forced push on every
> sendpage() call.
>
> So the TCP_CORK is there to _further delay_ that forced push until
> we think the queue is empty.

My concern is that waiting for the queue to empty before pushing could
improve throughput at the cost of increased average round-trip latency.
That concern is based on experience I've had attempting to batch sends
in the RDMA transport.

> IOW: it attempts to optimise the scheduling of that push until we're
> actually done pushing more stuff into the socket.

Yep, clear, thanks. It would help a lot if the above were included in
the patch description.

And, I presume that the TCP layer will push anyway if it needs to
reclaim resources to handle more queued sends.

Let's also consider starvation; i.e., that the server will continue
queuing replies such that it never uncorks. The logic in the patch
appears to depend on the client stopping at some point to wait for the
server to catch up. There probably should be a trap door that uncorks
after a few requests (say, 8) or a certain number of bytes are pending
without a break.

--
Chuck Lever
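
For readers following the thread, below is a minimal userspace sketch of
the cork-while-queued pattern under discussion. It is an illustration
only, not the patch: it uses the classic setsockopt() TCP_CORK and
TCP_NODELAY knobs rather than the in-kernel tcp_sock_set_cork() /
tcp_sock_set_nodelay() helpers the patch is said to switch to, and the
"queued" argument merely stands in for the counter kept behind
xprt->xpt_mutex.

/*
 * Sketch only -- not the sunrpc server code.  Demonstrates keeping a
 * TCP socket corked while more replies are known to be waiting, and
 * uncorking (which, with TCP_NODELAY set, forces an immediate push)
 * once the queue appears empty.
 */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Set once on the connected socket: a later uncork pushes at once. */
static void set_nodelay(int fd)
{
	int on = 1;

	setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}

/*
 * Send one reply.  'queued' is the caller's count of further replies
 * already waiting for this socket.  The socket stays corked while the
 * count is non-zero, so partially filled frames are not pushed until
 * the queue drains.
 */
static ssize_t send_reply(int fd, const struct iovec *iov, int iovcnt,
			  int queued)
{
	int on = 1, off = 0;
	ssize_t ret;

	setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
	ret = writev(fd, iov, iovcnt);
	if (queued == 0)
		/* Queue empty: uncork; TCP_NODELAY pushes immediately. */
		setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
	return ret;
}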