On Thu, Dec 05, 2013 at 08:41:56AM -0500, Jeff Layton wrote:
> On Thu, 5 Dec 2013 05:29:19 -0800
> Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> 
> > On Thu, Dec 05, 2013 at 06:00:51AM -0500, Jeff Layton wrote:
> > > Currently when we are processing a request, we try to scrape an expired
> > > or over-limit entry off the list in preference to allocating a new one
> > > from the slab.
> > > 
> > > This is unnecessarily complicated. Just use the slab layer.
> > 
> > I really don't think pruning caches from lookup functions is a good
> > idea. Too many chances for locking inversions, unbounded runtime, and
> > other issues. So I'd prefer my earlier version of the patch, which
> > removes it entirely, if we want to change anything beyond your minimal
> > fix.
> 
> It's not likely to hit a locking inversion here since we don't have a
> lot of different locks, but point taken...
> 
> If we take that out, then that does mean that the cache may grow larger
> than the max... possibly much larger, since the pruner workqueue job
> only runs every 120s and you can put a lot more entries into the cache
> in that period.

Yeah, this scares me. I looked through my old mail but couldn't find
your previous results: how long did you find the hash buckets needed to
be before the difference was noticeable on your setup?

We typically do a failed-lookup-and-insert on every cached operation, so
I wouldn't be surprised if a workload of small random writes, for
example, could produce an unreasonably large cache in that time.

--b.

> That said, I don't feel too strongly about it, so if you think leaving
> it to the workqueue job and the shrinker is the right thing to do, then
> so be it.
> 
> -- 
> Jeff Layton <jlayton@xxxxxxxxxx>
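
The lookup-path recycling being debated above, as a minimal sketch in
kernel-style C -- every identifier here (drc_entry, drc_lru, drc_slab,
and friends) is hypothetical, not the actual fs/nfsd/nfscache.c code:

/* Sketch only -- hypothetical names, not the real nfsd cache code.
 * Counter updates and locking are elided for brevity. */
#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/slab.h>

struct drc_entry {
	struct list_head	c_lru;		/* LRU linkage */
	unsigned long		c_timestamp;	/* last use, in jiffies */
	/* ... request key and cached reply ... */
};

static struct kmem_cache *drc_slab;
static LIST_HEAD(drc_lru);
static unsigned int num_drc_entries, max_drc_entries;

#define DRC_EXPIRE	(120 * HZ)	/* entries older than this are stale */

/* Pre-patch: try to scrape an expired or over-limit entry off the LRU
 * before falling back to the slab -- the in-lookup pruning Christoph
 * objects to. */
static struct drc_entry *drc_get_entry_old(void)
{
	struct drc_entry *rp;

	if (!list_empty(&drc_lru)) {
		rp = list_first_entry(&drc_lru, struct drc_entry, c_lru);
		if (time_after(jiffies, rp->c_timestamp + DRC_EXPIRE) ||
		    num_drc_entries >= max_drc_entries) {
			list_del_init(&rp->c_lru);	/* recycle in place */
			return rp;
		}
	}
	return kmem_cache_alloc(drc_slab, GFP_KERNEL);
}

/* Post-patch: just ask the slab layer every time. */
static struct drc_entry *drc_get_entry_new(void)
{
	return kmem_cache_alloc(drc_slab, GFP_KERNEL);
}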
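
With the lookup-path pruning removed, the only things shrinking the
cache are the shrinker and the periodic workqueue job. Again as a
hedged sketch (drc_lock and drc_prune_expired are made-up stand-ins),
the 120-second cadence mentioned in the thread would look something
like:

#include <linux/spinlock.h>
#include <linux/workqueue.h>

static DEFINE_SPINLOCK(drc_lock);

static void drc_prune_expired(void);	/* walk the LRU, free stale entries */

static void drc_cache_cleaner(struct work_struct *work);
static DECLARE_DELAYED_WORK(drc_cleaner, drc_cache_cleaner);

static void drc_cache_cleaner(struct work_struct *work)
{
	spin_lock(&drc_lock);
	drc_prune_expired();
	spin_unlock(&drc_lock);
	/* Nothing else trims the cache until this fires again in 120s;
	 * meanwhile every cache miss does an insert, which is the growth
	 * window Bruce is worried about. */
	schedule_delayed_work(&drc_cleaner, DRC_EXPIRE);
}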