Re: [PATCH 0/5 v2] add extent status tree caching

On Mon, Aug 12, 2013 at 10:21:45PM -0500, Eric Sandeen wrote:
> 
> Reading extents via fiemap almost certainly moves that metadata into
> kernel cache, simply by the act of reading the block device to get them.

Well, if the file system has an extent cache.  It certainly will end
up reading the pages involved with the extents into the buffer and/or
page cache (depending on how the file system does things).
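
(For reference, a minimal userspace FIEMAP call looks roughly like
the sketch below; error handling is mostly omitted, and the extent
count of 256 is an arbitrary choice for illustration.)

/*
 * Minimal sketch of reading a file's extent map via FIEMAP.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
	unsigned int i, count = 256;	/* arbitrary for this sketch */
	struct fiemap *fm;
	int fd;

	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	fm = calloc(1, sizeof(*fm) + count * sizeof(struct fiemap_extent));
	fm->fm_start = 0;
	fm->fm_length = FIEMAP_MAX_OFFSET;	/* map the whole file */
	fm->fm_extent_count = count;

	if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
		perror("FS_IOC_FIEMAP");
		return 1;
	}

	/* Issuing the ioctl is what pulls the extent metadata into the
	 * cache; here we just print what came back. */
	for (i = 0; i < fm->fm_mapped_extents; i++)
		printf("extent %u: logical %llu len %llu\n", i,
		       (unsigned long long) fm->fm_extents[i].fe_logical,
		       (unsigned long long) fm->fm_extents[i].fe_length);

	free(fm);
	close(fd);
	return 0;
}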

> I see Dave's point that we _do_ have an interface today to read
> all file extents into cache.  We don't mark them as particularly sticky,
> however.
> 
> This seems pretty clearly driven by a Google workload need; something you
> can probably test.  Does FIEMAP do the job for you or not?  If not, why not?

If you are using memory containers the way we do, in practice every
single process is going to be under memory pressure.  See previous
comments I've made about why, in a cloud environment, memory is your
most precious resource: motherboards have a limited number of DIMM
slots, and high-density DIMMs are expensive, which is why services
like Amazon EC2 and Linode charge $$$ if you need much more than
512MB of memory.  To make cloud systems cost effective from a
financial ROI point of view (especially once you include power and
cooling costs), you need to pack a large number of workloads onto
each machine, and that is true regardless of whether you are using
containers or VMs as your method of isolation.

So basically, if you are trying to use your memory efficiently, _and_
you are trying to meet 99.9th percentile latency SLA numbers for your
performance-critical workloads, you need to have a way of telling the
system that certain pieces of memory (in this case, certain parts of
the extent cache) are more important than others (for example, a
no-longer-used inode/dentry in the inode/dentry cache or other slab
objects).
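
(To make the intended usage concrete, here is a rough sketch of how a
job would ask for this, assuming the ioctl is exposed as
EXT4_IOC_PRECACHE_EXTENTS as in this series.  The ioctl number in the
sketch is illustrative only; since the definition is not in a uapi
header, userspace has to carry its own copy of it.)

/*
 * Rough sketch of precaching a file's extents.  The _IO('f', 18)
 * value below is illustrative; see the patches for the authoritative
 * definition.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

#ifndef EXT4_IOC_PRECACHE_EXTENTS
#define EXT4_IOC_PRECACHE_EXTENTS	_IO('f', 18)
#endif

int main(int argc, char **argv)
{
	int fd = open(argv[1], O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Ask ext4 to read this file's extent tree into the extent
	 * status cache and mark the entries as sticky. */
	if (ioctl(fd, EXT4_IOC_PRECACHE_EXTENTS) < 0)
		perror("EXT4_IOC_PRECACHE_EXTENTS");

	close(fd);
	return 0;
}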

						- Ted

P.S.  In previous versions of this patch (which never went upstream,
and which used a different implementation), this
ioctl nailed the relevant portions of the extent cache into memory
permanently, and they wouldn't be evicted no matter how much memory
pressure you would be under.  In the Google environment, this wasn't a
major issue, since all jobs run under a restrictive memory container
and so a buggy or malicious program which attempted to precache too
many files would end up OOM-killing itself (after which point the
situation would correct itself).

In this version of the patch, I've made the cache entries sticky, but
they aren't permanently nailed in place.  This is because not all
systems will be running with containers, and I wanted to make sure we
had a safety valve against abuse.  Could someone still degrade
system performance by abusing this ioctl?  Sure, but
someone can do the same thing with a "while (1) fork();" bomb.