Re: [RFC][PATCH] Vector read/write support for NFS (DIO) client

This issue has come up several times recently. My preference would be to
tie the availability of slots to the TCP window size, and basically say
that if the SOCK_ASYNC_NOSPACE flag is set on the socket, then we hold
off allocating more slots until we get a ->write_space() callback which
clears that flag.

For the RDMA case, we can continue to use the current system of a fixed
number of preallocated slots.
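
As a very rough sketch of the TCP side of that scheme (xprt_may_alloc_slot() and xs_dynamic_write_space() are made-up names, not existing sunrpc symbols; the hook assumes the usual xs_* convention of stashing the xprt in sk->sk_user_data):

#include <net/sock.h>
#include <linux/sunrpc/xprt.h>
#include <linux/sunrpc/sched.h>

/* Hypothetical gate: only hand out another dynamic slot while the
 * socket still has send-buffer space.  The socket layer sets
 * SOCK_ASYNC_NOSPACE when the send buffer fills. */
static bool xprt_may_alloc_slot(struct socket *sock)
{
        return !test_bit(SOCK_ASYNC_NOSPACE, &sock->flags);
}

/* Hypothetical ->write_space() callback: the send buffer has drained,
 * so tasks queued waiting for a slot can be woken to try again. */
static void xs_dynamic_write_space(struct sock *sk)
{
        struct rpc_xprt *xprt = sk->sk_user_data;

        if (xprt != NULL)
                rpc_wake_up_next(&xprt->backlog);
}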

I take it then that we'd want a similar scheme for UDP as well? I guess
I'm just not sure what the slot table is supposed to be for.
[andros] I look at the rpc_slot table as a representation of the amount of data the connection to the server
can handle - basically the #slots should = double the bandwidth-delay product divided by max(rsize, wsize).
For TCP, this is the window size (RTT of a max-MTU ping * interface bandwidth).
There is no reason to allocate more rpc_rqsts than can fit on the wire.
I agree with checking for space on the link.
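
To put illustrative numbers on that formula (not taken from this thread): a 1 Gbit/s link with a 10 ms RTT has a BDP of about 125 MB/s * 0.010 s = 1.25 MB, so with a 64 KB max(rsize, wsize) the estimate is 2 * 1.25 MB / 64 KB ~= 38 slots - well above the default table of 16. A sketch of the computation (rpc_estimate_slots() is a made-up helper):

#include <linux/kernel.h>
#include <linux/math64.h>
#include <linux/time.h>

static unsigned int rpc_estimate_slots(u64 bandwidth_Bps, u64 rtt_usec,
                                       u32 rsize, u32 wsize)
{
        /* bandwidth-delay product in bytes */
        u64 bdp = div_u64(bandwidth_Bps * rtt_usec, USEC_PER_SEC);

        /* #slots = 2 * BDP / max(rsize, wsize) */
        return div_u64(2 * bdp, max(rsize, wsize));
}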

The above formula is a good lower bound on the maximum number of slots, but there are many times when a client could use more slots than the formula suggests. For example, we don't want to punish writes if rsize > wsize. Also, you have to account for server memory, which can sometimes hold several write requests while waiting for them to be sync'd to disk, leaving the TCP buffers less than full.

Also, I think any solution should allow admins to limit the maximum number of slots. Too many slots can increase request randomness at the server, and sometimes severely reduce performance.
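
A dynamic scheme could keep honoring an administrative ceiling the way the existing sunrpc.tcp_slot_table_entries tunable caps the static table today. A sketch (xprt_clamp_slots() is a made-up helper; RPC_DEF_SLOT_TABLE and xprt->max_reqs are existing sunrpc symbols):

#include <linux/kernel.h>
#include <linux/sunrpc/xprt.h>

static unsigned int xprt_clamp_slots(struct rpc_xprt *xprt,
                                     unsigned int estimated)
{
        /* never drop below the established floor (RPC_DEF_SLOT_TABLE
         * is 16), never exceed the admin-configured table size */
        return clamp_t(unsigned int, estimated,
                       RPC_DEF_SLOT_TABLE, xprt->max_reqs);
}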

Dean

Possibly naive question, and maybe you or Andy have scoped this out
already...

Wouldn't it make more sense to allow the code to allocate rpc_rqst's as
needed, and manage congestion control in reserve_xprt?
[andros] Congestion control is not what the rpc_slot table is managing. It does need to have
a minimum, which experience has set at 16. It's the maximum that needs to be dynamic.
Congestion control by the lower layers should work unfettered within the # of rpc_slots. Today that
is not always the case: when 16 slots is not enough to fill the wire and the administrator has
not changed the # of rpc_slots, the transport is throttled by the slot table rather than by congestion control.

That, at least, appears to be what xprt_reserve_xprt_cong is supposed to do. The TCP
variant (xprt_reserve_xprt) doesn't do that currently, but we could do
it there, and that would seem to make for more parity between TCP
and UDP in this sense.

We could do that similarly for RDMA too. Simply keep track of how many
RPCs are in flight and only allow reserving the xprt when that number
hasn't crossed the max number of slots...
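
A sketch of that in-flight accounting (xprt->num_inflight is an invented field; XPRT_LOCKED, xprt->state, xprt->sending, and rpc_sleep_on() are the existing sunrpc pieces, and the shape loosely mirrors the real xprt_reserve_xprt()/xprt_reserve_xprt_cong() pair):

#include <linux/sunrpc/xprt.h>
#include <linux/sunrpc/sched.h>

static int xprt_reserve_xprt_dynamic(struct rpc_xprt *xprt,
                                     struct rpc_task *task)
{
        if (test_and_set_bit(XPRT_LOCKED, &xprt->state))
                goto out_sleep;
        /* refuse the transport once the wire is full; max_reqs would
         * be recomputed dynamically rather than fixed at mount time */
        if (xprt->num_inflight >= xprt->max_reqs) {
                clear_bit(XPRT_LOCKED, &xprt->state);
                goto out_sleep;
        }
        xprt->num_inflight++;
        return 1;
out_sleep:
        rpc_sleep_on(&xprt->sending, task, NULL);
        return 0;
}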

--
Jeff Layton <jlayton@xxxxxxxxxx>
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html