Re: [PATCH RFC 1/3] nfsd: don't try to reuse an expired DRC entry off the list


 



On Thu, 5 Dec 2013 05:29:19 -0800
Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:

> On Thu, Dec 05, 2013 at 06:00:51AM -0500, Jeff Layton wrote:
> > Currently when we are processing a request, we try to scrape an expired
> > or over-limit entry off the list in preference to allocating a new one
> > from the slab.
> > 
> > This is unnecessarily complicated. Just use the slab layer.
> 
> I really don't think pruning caches from lookup functions is a good
> idea. Too many chances for locking inversions, unbounded runtime and
> other issues.  So I'd prefer my earlier version of the patch to
> remove it entirely if we want to change anything beyond your minimal
> fix.
> 

We're unlikely to hit a locking inversion here since there aren't many
different locks involved, but point taken...

If we take that out, then the cache may grow larger than the
max...possibly much larger, since the pruner workqueue job only runs
every 120s and a lot of entries can accumulate in the cache in that
period.

That said, I don't feel too strongly about it, so if you think leaving
it to the workqueue job and the shrinker is the right thing to do, then
so be it.

-- 
Jeff Layton <jlayton@xxxxxxxxxx>



