Re: [patch 0/5] refault distance-based file cache sizing

Hi,

On Tue, May 01, 2012 at 02:26:56PM -0700, Andrew Morton wrote:
> Well, think of a stupid workload which creates a large number of very
> large but sparse files (populated with one page in each 64, for
> example).  Get them all in cache, then sit there touching the inodes to
> keep them fresh.  What's the worst case here?

I suspect in that scenario we may drop more inodes than before, and
with them a ton of their cache, actually worsening the LRU behaviour
instead of improving it.

I don't think it's a reliability issue, or we would probably have been
bitten by it already, especially with a ton of inodes that each have
just one page at a very large file offset accessed in a loop. This
only makes a badness we already have more sticky. Testing it, to be
sure, wouldn't be a bad idea though.
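
Something along these lines might do as a quick test case, roughly
matching the workload Andrew describes above: many large sparse files
with one resident page per 64-page stride, then a loop that only keeps
touching the inodes. The file count, sizes and stride below are
arbitrary illustration values, not anything taken from the patch set:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <utime.h>

#define NFILES 256               /* arbitrary illustration values */
#define NPAGES 65536             /* pages of address space per file */
#define STRIDE 64                /* populate one page in every 64 */

int main(void)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	char name[64];
	long off;
	int i;

	/* create large sparse files with one cached page per 64-page stride */
	for (i = 0; i < NFILES; i++) {
		snprintf(name, sizeof(name), "sparse-%d", i);
		int fd = open(name, O_CREAT | O_RDWR, 0644);
		if (fd < 0) {
			perror("open");
			exit(1);
		}
		for (off = 0; off < NPAGES; off += STRIDE)
			pwrite(fd, "x", 1, (off_t)off * pagesz);
		close(fd);
	}

	/* keep the inodes fresh without ever touching the data pages again */
	for (;;) {
		for (i = 0; i < NFILES; i++) {
			snprintf(name, sizeof(name), "sparse-%d", i);
			utime(name, NULL);  /* bump a/mtime, inode stays hot */
		}
		sleep(1);
	}
	return 0;
}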

At first glance it sounds like a good tradeoff: the "worsening"
effect, where too many large radix trees would lead to more inodes
being dropped than before, normally shouldn't materialize, and we
would just make better use of memory we have already allocated to
make more accurate decisions about the active/inactive LRU balancing.
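
For reference, the general refault-distance idea boils down to
something like the standalone sketch below. This is not the code from
the patch set; the names (remember_eviction, refault_should_activate,
nr_active) and the single global counter are made up for illustration,
and all the per-zone and locking details are left out:

#include <stdbool.h>

static unsigned long evictions;   /* bumped on every file page eviction */
static unsigned long nr_active;   /* pages currently on the active list */

/* on eviction: leave this "shadow" value where the page used to live */
static unsigned long remember_eviction(void)
{
	return ++evictions;
}

/* on refault: should the page go straight to the active list? */
static bool refault_should_activate(unsigned long shadow)
{
	unsigned long distance = evictions - shadow;

	/*
	 * The page was evicted 'distance' evictions ago.  If the active
	 * list holds at least that many pages, a smaller active list
	 * would have kept this page in cache, so treat the refault as a
	 * sign the inactive list is too small and activate the page.
	 */
	return distance <= nr_active;
}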

