Re: [PATCH RFC] nfsd: report length of the largest hash chain in reply cache stats

On Fri, 15 Feb 2013 16:14:56 -0500
Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:

> 
> On Feb 15, 2013, at 3:04 PM, Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> 
> > So we can get a feel for how effective the hashing function is.
> > 
> > As Chuck Lever pointed out to me, it's generally acceptable to do
> > "expensive" stuff when reading the stats since that's a relatively
> > rare activity.
> 
> A good measure of the efficacy of a hash function is the ratio of the maximum chain length to the optimal chain length (which can be computed by dividing the total number of cache entries by the number of hash chains).
> 

Right, the number of chains is always 64 for now (maybe we should print
that out in the stats too), so you can compute the optimal chain length
from the values provided here.
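
Just to spell out the arithmetic, the ratio you'd compute from the stats
values is something like this (completely untested, and the parameter
names are invented for the example):

	/*
	 * Ratio of the observed max chain length to the "ideal" chain
	 * length, given today's fixed 64-bucket table.  num_drc_entries
	 * and max_chain_len stand in for whatever the stats file ends
	 * up reporting.
	 */
	static unsigned int
	drc_chain_ratio(unsigned int num_drc_entries, unsigned int max_chain_len)
	{
		/* optimal length == entries spread evenly over 64 buckets */
		unsigned int optimal = (num_drc_entries + 63) / 64;

		return optimal ? max_chain_len / optimal : 0;
	}

The closer that comes to 1, the more evenly the hash is spreading the
entries.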

> If we plan to stick with a hash table for this cache, there should be some indication when the hash function falls over.  This will matter because the DRC can now grow much larger, which is turning out to be the real fundamental change with this work.
> 

That's the kicker. With the patch below, computing the max chain length
on the fly is somewhat expensive since you have to walk every entry.
It's certainly possible (even likely), though, that the real max length
will occur at some point when we're not looking at this file. So how
best to gauge that?
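
For reference, the walk itself is just something like this (untested;
HASHSIZE and cache_hash only approximate the names in nfscache.c, and
it would have to run under the cache lock):

	/*
	 * Walk every bucket and count entries to find the longest
	 * chain.  O(number of entries), so only sane to do on an
	 * occasional read of the stats file.
	 */
	static unsigned int
	nfsd_repcache_max_chain_len(void)
	{
		unsigned int i, len, longest = 0;
		struct hlist_node *pos;

		for (i = 0; i < HASHSIZE; i++) {
			len = 0;
			hlist_for_each(pos, &cache_hash[i])
				++len;
			longest = max(longest, len);
		}
		return longest;
	}

Cheap enough for a rare stats read, but it only captures the chain
lengths at that instant.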

Maybe we should just punt and move it all to an rbtree or something. A
self-balancing structure is nice and simple to deal with, even if the
insertion penalty is a bit higher...
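
Something like the below is roughly what I mean, keyed on the XID only
to keep the sketch short (a real version would have to compare the full
tuple we match on today, and c_node is a hypothetical rb_node field in
svc_cacherep):

	/*
	 * Sketch: insert a new entry into an rbtree-backed cache, or
	 * return the existing entry on a key match.
	 */
	static struct svc_cacherep *
	nfsd_cache_insert_rb(struct rb_root *root, struct svc_cacherep *new)
	{
		struct rb_node **p = &root->rb_node, *parent = NULL;
		struct svc_cacherep *rp;
		u32 nkey = be32_to_cpu(new->c_xid);

		while (*p) {
			parent = *p;
			rp = rb_entry(parent, struct svc_cacherep, c_node);
			if (nkey < be32_to_cpu(rp->c_xid))
				p = &parent->rb_left;
			else if (nkey > be32_to_cpu(rp->c_xid))
				p = &parent->rb_right;
			else
				return rp;	/* potential hit */
		}
		rb_link_node(&new->c_node, parent, p);
		rb_insert_color(&new->c_node, root);
		return NULL;			/* miss, new entry linked in */
	}

That keeps lookups at O(log n) no matter how the entries are
distributed, at the cost of rebalancing on every insert.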

> A philosophical question though is "How can we know when the DRC is large enough?"
> 

An excellent question, and not an easy one to answer. Clearly 1024
entries was not enough. We now cap the size as a function of the
available low memory, which I think is a reasonable way to keep it from
ballooning so large that the box falls over. We also have a shrinker
and periodic cache cleaner to prune off entries that have expired.
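
FWIW, the sizing logic is roughly of this shape (the scaling factor and
the hard cap here are illustrative, not necessarily what's in the
series):

	/*
	 * Scale the max number of cache entries with the amount of low
	 * memory: grow with the square root of the lowmem page count
	 * and clamp to a hard upper bound so a huge box doesn't go wild.
	 */
	static unsigned int
	nfsd_cache_size_limit(void)
	{
		unsigned int limit;
		unsigned long low_pages = totalram_pages - totalhigh_pages;

		limit = (16 * int_sqrt(low_pages)) << (PAGE_SHIFT - 10);
		return min_t(unsigned int, limit, 256 * 1024);
	}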

Of course, one thing I haven't really considered enough is the
performance impact of walking the potentially much longer hash chains
here.

If that is a problem, then one way to counter it without moving to a
different structure altogether might be to alter the hash function
based on the max size of the cache. IOW, grow the number of hash buckets
as the max cache size grows?
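
To make that concrete, something along these lines (names and the
target chain length are made up):

	/*
	 * Pick a power-of-two bucket count aimed at a target average
	 * chain length, based on the max cache size computed at
	 * startup, and hash with hash_32() so the table size can vary.
	 */
	#define TARGET_CHAIN_LEN	8

	static unsigned int
	nfsd_cache_hashsize(unsigned int max_entries)
	{
		/* never shrink below today's 64 buckets */
		return roundup_pow_of_two(max_t(unsigned int,
					max_entries / TARGET_CHAIN_LEN, 64));
	}

	static unsigned int
	nfsd_cache_hash(__be32 xid, unsigned int hashbits)
	{
		return hash_32((__force u32)xid, hashbits);
	}

...where hashbits would be ilog2() of the bucket count. That keeps the
expected chain length bounded as the cache grows, without giving up the
hash table.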

-- 
Jeff Layton <jlayton@xxxxxxxxxx>