Re: [PATCH RFC] nfsd: report length of the largest hash chain in reply cache stats

On Fri, Feb 15, 2013 at 05:20:58PM -0500, Jeff Layton wrote:
> An excellent question, and not an easy one to answer. Clearly 1024
> entries was not enough. We now cap the size as a function of the
> available low memory, which I think is a reasonable way to keep it from
> ballooning so large that the box falls over. We also have a shrinker
> and periodic cache cleaner to prune off entries that have expired.
> 
> Of course one thing I haven't really considered enough is the
> performance implications of walking the potentially much longer hash
> chains here.
> 
> If that is a problem, then one way to counter that without moving to a
> different structure altogether might be to alter the hash function
> based on the max size of the cache. IOW, grow the number of hash buckets
> as the max cache size grows?

Another reason to organize the cache per client address?

Two levels of hash tables might be good enough: one global hash table
for the client address, one per-client for the rest.

With a per-client maximum number of entries, sizing the hash tables
should be easier.

If we wanted to be fancy, the address lookup could in theory probably be
made lockless in the typical case.

--b.
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

