On 2010-02-24, at 09:10, Christoph Lameter wrote:
On Wed, 24 Feb 2010, Jan Kara wrote:
fine (and you probably don't want much more because the memory is better
used for something else), when a machine does random rewrites, going to 40%
might be well worth it. So I'd like to discuss how we could measure whether
increasing the amount of dirtiable memory helps, so that we could implement
dynamic sizing of it.
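
To put rough numbers on that (purely hypothetical): on a 16 GB machine a
20% dirty limit allows roughly 3.2 GB of dirty page cache, while 40% allows
roughly 6.4 GB. A random-rewrite working set of, say, 5 GB fits under the
latter but not the former, so the higher limit lets repeated overwrites be
absorbed in memory instead of being written out again and again.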
Another issue around dirty limits is that they are global. If you are
running multiple jobs on the same box (memcg or cpusets, or you set
affinities to partition the box) then every job may need different dirty
limits. One idea that I had in the past was to set dirty limits based on
nodes or cpusets, but that will not cover the other cases listed above.

The best solution would be an algorithm that can accommodate multiple loads
and manage the amount of dirty memory automatically.
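
For illustration only (none of these names are existing kernel interfaces),
a per-group limit could be derived by scaling the global threshold by the
group's share of dirtyable memory, along these lines:

#include <stdint.h>

/*
 * Illustrative sketch: scale a global dirty threshold by a group's
 * share of dirtyable pages.  All names here are made up.
 */
static unsigned long group_dirty_limit(unsigned long global_limit,
                                       unsigned long group_pages,
                                       unsigned long total_pages)
{
        if (total_pages == 0)
                return global_limit;
        return (unsigned long)((uint64_t)global_limit * group_pages /
                               total_pages);
}

That still leaves the harder problem of adapting the global number itself
to the mix of loads.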
Why not make the dirty limits per file, and/or a function of the IO
randomness vs. the file size? Doing streaming on a large file can easily be
detected and limited appropriately (either the filesystem can keep up and
the "smaller" limit will not be hit, or it can't keep up and the
application needs to be throttled nearly regardless of what the limit is).
Streaming and random IO on small files are almost indistinguishable anyway
and should pretty much be treated as random IO, subject to a "larger"
global limit.
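
To sketch the kind of detection I mean (user-space C, purely illustrative;
none of these names are existing interfaces):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative per-file write tracking; all names are made up. */
struct file_write_state {
        uint64_t last_end;     /* offset just past the previous write */
        uint64_t seq_bytes;    /* bytes written contiguously so far */
        uint64_t total_bytes;  /* all bytes written so far */
};

/* Record one write and report whether the file looks like a stream. */
static bool looks_like_streaming(struct file_write_state *st,
                                 uint64_t offset, uint64_t len,
                                 uint64_t file_size)
{
        if (offset == st->last_end)
                st->seq_bytes += len;
        st->total_bytes += len;
        st->last_end = offset + len;

        /*
         * Large file and mostly contiguous writes: apply the "smaller"
         * per-file dirty limit.  Everything else counts against the
         * "larger" global limit.
         */
        return file_size > (64ULL << 20) &&
               st->seq_bytes * 10 >= st->total_bytes * 9;
}

The thresholds (64 MB, 90% contiguous) are of course arbitrary knobs.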
Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.