On Feb 16, 2013, at 8:39 AM, J. Bruce Fields <bfields@xxxxxxxxxxxx> wrote:

> On Fri, Feb 15, 2013 at 05:20:58PM -0500, Jeff Layton wrote:
>> An excellent question, and not an easy one to answer. Clearly 1024
>> entries was not enough. We now cap the size as a function of the
>> available low memory, which I think is a reasonable way to keep it from
>> ballooning so large that the box falls over. We also have a shrinker
>> and periodic cache cleaner to prune off entries that have expired.
>>
>> Of course one thing I haven't really considered enough is the
>> performance implications of walking the potentially much longer hash
>> chains here.
>>
>> If that is a problem, then one way to counter that without moving to a
>> different structure altogether might be to alter the hash function
>> based on the max size of the cache. IOW, grow the number of hash buckets
>> as the max cache size grows?

The trouble with a hash table is that once you've allocated it, it's a
heavy lift to increase the table size. That sort of logic adds complexity
and additional locking, and is often difficult to test.

> Another reason to organize the cache per client address?

In theory, a single active client could evict all entries for other
clients, but do we know this happens in practice?

> With a per-client maximum number of entries, sizing the hash tables
> should be easier.

When a server has only one client, should that client be allowed to
maximize its use of the server's resources (e.g., use all of the DRC
resources the server has available)? How about when a server has one
active client and multiple quiescent clients?
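
For the sake of discussion, here is a rough userspace sketch of the kind of
per-client organization being proposed: a fixed-size hash keyed on client
address, with each client carrying its own small list of cached replies and
its own entry cap, so eviction pressure from one client never touches
another client's entries. To be clear, this is not nfsd code; every name and
constant below (drc_client, DRC_PER_CLIENT_MAX, and so on) is made up, and
the eviction policy is deliberately naive.

/*
 * Illustrative sketch only -- not fs/nfsd code.  One small reply cache
 * per client address, each with its own entry cap, hanging off a fixed
 * hash of clients.
 */
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define DRC_CLIENT_BUCKETS	256	/* fixed table keyed by client addr */
#define DRC_PER_CLIENT_MAX	128	/* per-client entry cap */

struct drc_entry {
	struct drc_entry	*next;
	uint32_t		xid;	/* RPC XID of the cached reply */
	/* ... cached reply data would live here ... */
};

struct drc_client {
	struct drc_client	*next;
	struct sockaddr_in	addr;	/* client address is the hash key */
	struct drc_entry	*entries;
	unsigned int		nr_entries;
};

static struct drc_client *client_table[DRC_CLIENT_BUCKETS];

static unsigned int hash_addr(const struct sockaddr_in *sin)
{
	/* toy hash: fold the IPv4 address and port into the bucket count */
	uint32_t v = ntohl(sin->sin_addr.s_addr) ^ ntohs(sin->sin_port);
	return (v ^ (v >> 16)) % DRC_CLIENT_BUCKETS;
}

/* Find (or create) the per-client cache for this address. */
static struct drc_client *drc_find_client(const struct sockaddr_in *sin)
{
	unsigned int bkt = hash_addr(sin);
	struct drc_client *clp;

	for (clp = client_table[bkt]; clp; clp = clp->next)
		if (!memcmp(&clp->addr, sin, sizeof(*sin)))
			return clp;

	clp = calloc(1, sizeof(*clp));
	if (!clp)
		return NULL;
	clp->addr = *sin;
	clp->next = client_table[bkt];
	client_table[bkt] = clp;
	return clp;
}

/*
 * Insert a new entry.  When the per-client cap is reached, evict from
 * this client only, so one busy client can never push out another
 * client's cached replies.  A real cache would evict the LRU entry;
 * dropping the current list head keeps the sketch short.
 */
static int drc_insert(struct drc_client *clp, uint32_t xid)
{
	struct drc_entry *ent = calloc(1, sizeof(*ent));

	if (!ent)
		return -1;
	ent->xid = xid;

	if (clp->nr_entries >= DRC_PER_CLIENT_MAX) {
		struct drc_entry *victim = clp->entries;
		clp->entries = victim->next;
		free(victim);
		clp->nr_entries--;
	}
	ent->next = clp->entries;
	clp->entries = ent;
	clp->nr_entries++;
	return 0;
}

int main(void)
{
	struct sockaddr_in sin = {
		.sin_family = AF_INET,
		.sin_port = htons(1021),
		.sin_addr.s_addr = htonl(0xc0a80001),	/* 192.168.0.1 */
	};
	struct drc_client *clp = drc_find_client(&sin);

	for (uint32_t xid = 0; clp && xid < 200; xid++)
		drc_insert(clp, xid);	/* cap holds nr_entries at 128 */
	return 0;
}

The open question is still the one above: whether something like
DRC_PER_CLIENT_MAX should be a fixed number, or should scale with the
available low memory and the count of active clients.

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com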