On May 3, 2011, at 4:20 PM, Chuck Lever wrote:

> On May 3, 2011, at 4:13 PM, Andy Adamson wrote:
>
>> On May 3, 2011, at 4:06 PM, Chuck Lever wrote:
>>
>>> Hi-
>>>
>>> On May 2, 2011, at 9:40 PM, andros@xxxxxxxxxx wrote:
>>>
>>>> I would appreciate comments on this patch. I've done a small amount of
>>>> testing where I've set /proc/sys/sunrpc/tcp_slot_table_entries to 2, and
>>>> watched the dynamic allocator increase it to 3. I have a 10G test system
>>>> being configured.
>>>
>>> For TCP transports, then, the effective slot table size varies between the
>>> tcp_slot_table_entries setting and the estimated number of RPCs that can
>>> fit in the TCP window?
>>
>> Yes.
>>
>>> What happens when the TCP window shrinks such that fewer RPCs can fit than
>>> are allowed by the tcp_slot_table_entries setting?
>>
>> We don't shrink the rpc_slot table size. The idea is that if the TCP window
>> reaches size X and then shrinks to size X-n, there is a good chance it will
>> get back to size X. In the worst case, this is still better than a
>> hard-configured, too-large slot table size.
>
> Seems to me that we wouldn't need the tcp_slot_table_entries setting at all,
> then. Always start at 2 and it will adjust itself as needed.

Yep - for TCP. UDP and RDMA will still use just the default. I hope to
determine a new reasonable TCP default with testing.

> Unless we find we need a hard upper bound in some cases.

This is also a goal for testing - adding more and more network delay in a 10G
test environment to increase the TCP window. We may upper-bound it by looking
at available memory.

>>> Have you done performance testing with loopback mounts (i.e. server and
>>> client on the same machine, mounting over the "lo" interface)? As I recall
>>> this was the test case that Trond found worked poorly when we boosted the
>>> tcp_slot_table_entries setting to default to a 64-entry slot table.
>>
>> No, not yet. I'll add this to the planned testing.

So having way too many default rpc_slots is bad? I could test by starting with
a default of 2 slots and seeing how many are needed.

> Apparently there was a problem with having too many slots, but I was never
> clear what that problem might be. The report was only poor performance on
> loopback NFS mounts.

OK. I'll check it out.

-->Andy

> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
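
A minimal sketch (not taken from the patch under discussion) of the grow-only
policy described in the thread: estimate how many average-sized RPC requests
fit in the current TCP window and grow the slot count toward that estimate,
starting from a small default and never shrinking. The function name
xprt_dynamic_slots() and the avg_rpc_size parameter are hypothetical.

    /*
     * Sketch of the grow-only slot policy discussed above; the function
     * name and parameters are illustrative, not from the actual patch.
     */
    #include <stddef.h>

    #define RPC_MIN_SLOTS 2 /* small starting point instead of a large static table */

    static size_t xprt_dynamic_slots(size_t cur_slots, size_t tcp_window,
                                     size_t avg_rpc_size)
    {
            size_t fit;

            if (avg_rpc_size == 0)
                    return cur_slots;

            /* Estimate how many requests the current window could hold. */
            fit = tcp_window / avg_rpc_size;
            if (fit < RPC_MIN_SLOTS)
                    fit = RPC_MIN_SLOTS;

            /* Grow toward the estimate; never shrink when the window drops. */
            return fit > cur_slots ? fit : cur_slots;
    }

Keeping the count sticky on the way down matches the rationale in the thread:
a window that has grown once is likely to grow again, and a few idle slots are
cheaper than a hard-configured, too-large table.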