On May 3, 2011, at 4:06 PM, Chuck Lever wrote:

> Hi-
>
> On May 2, 2011, at 9:40 PM, andros@xxxxxxxxxx wrote:
>
>> I would appreciate comments on this patch. I've done a small amount of
>> testing where I set /proc/sys/sunrpc/tcp_slot_table_entries to 2 and
>> watched the dynamic allocator increase it to 3. I have a 10G test
>> system being configured.
>
> For TCP transports, then, the effective slot table size varies between
> the tcp_slot_table_entries setting and the estimated number of RPCs
> that can fit in the TCP window?

Yes.

> What happens when the TCP window shrinks such that fewer RPCs can fit
> than are allowed by the tcp_slot_table_entries setting?

We don't shrink the rpc_slot table size. The idea is that if the TCP
window reaches size X and then shrinks to size X-n, there is a good
chance that it will get back to size X. In the worst case, this is still
better than a hard-configured, too-large slot table size.

> Have you done performance testing with loopback mounts (i.e., server
> and client on the same machine, mounting over the "lo" interface)? As I
> recall, this was the test case that Trond found worked poorly when we
> boosted the tcp_slot_table_entries setting to default to a 64-entry
> slot table.

No, not yet. I'll add this to the planned testing. So having way too
many default rpc_slots is bad? I could test by starting with a default
of 2 slots and seeing how many are needed.

-->Andy

> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
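[Editor's note: for readers following the thread, here is a minimal
userspace sketch of the sizing rule Andy describes above: estimate how
many RPCs fit in the current TCP window and grow the slot table toward
that estimate, never shrinking it when the window contracts. The
function name, starting value, and per-RPC payload size below are
illustrative assumptions, not identifiers from the actual sunrpc patch.]

/*
 * Sketch (not the real kernel code) of the dynamic slot table sizing
 * discussed in this thread.  The ratchet behavior matches Andy's
 * reply: the table grows with the TCP window but never shrinks.
 */
#include <stdio.h>

#define RPC_MAX_PAYLOAD  (64 * 1024)  /* assumed bytes per in-flight RPC */

static unsigned int slot_table_entries = 2; /* sysctl-style start value */

/* Grow the slot table toward the window-based estimate; never shrink. */
static void update_slot_table(unsigned int tcp_window_bytes)
{
    unsigned int estimate = tcp_window_bytes / RPC_MAX_PAYLOAD;

    if (estimate > slot_table_entries)
        slot_table_entries = estimate;
}

int main(void)
{
    /* Window grows, then contracts: the table only ever grows. */
    unsigned int windows[] = { 128 * 1024, 256 * 1024, 192 * 1024 };

    for (unsigned int i = 0; i < sizeof(windows) / sizeof(windows[0]); i++) {
        update_slot_table(windows[i]);
        printf("window=%u bytes -> slot_table_entries=%u\n",
               windows[i], slot_table_entries);
    }
    return 0;
}

With these assumed numbers, the table grows from 2 to 4 slots as the
window expands to 256 KB, then stays at 4 when the window drops back to
192 KB, which is the "don't shrink, it will likely grow again" design
choice defended in the reply above.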