On Tue, May 30, David Chinner wrote:

> You've just described the embodiment of the two orders of magnitude
> issue I mentioned. That's not a wrong assumption - think of the
> above case with the global_unused count now being 1.28*10^7 instead
> of 1.28*10^4. How many dentries do you have to free before freeing
> any on the small superblock if we don't free one per call? (quick
> answer: 99.9%).
>
> If we shrink one per call, we've freed all 128 dentries while there
> are still 1*10^5 dentries on the large list. That seems like a much
> better balance to strike within the constraints of the shrinker
> resolution we have to work with.

With the effect that the dcache is completely useless for small
filesystems as long as there is one big one. Filesystems where only a
small number of files is used regularly don't have any cached dentries,
while the filesystem where someone once touched every file still has a
lot of dentries in cache, although they will never be used again.

> Hmm - need to do something with that age_limit field, right? That
> would imply we need a timestamp in the dentry as well, and we don't
> shrink any sb that doesn't have dentries older than the age limit.
> If we scan all the sbs and still have more to free, we halve the
> age limit and scan again....

This probably is the way to go.

> > No. prune_dcache() is working on the unused list in the opposite
> > (reverse) direction. shrink_dcache_sb() (basically my
> > prune_dcache_sb()) is shrinking all unused dentries. In that case
> > it is better to visit the unused list in the normal (forward)
> > direction (~only one pass).
>
> Why? Forward or reverse, it's only one traversal to free all dentries
> - you go until the list is empty. Either way, with the prefetch of
> the next entry in the list there's little performance difference once
> you've got outside some tiny subset of the list that might be hot in
> cache....

Oops, I was still thinking of the global unused list here.