Re: [LSF/VM TOPIC] Dynamic sizing of dirty_limit

On Wed, 24 Feb 2010, Jan Kara wrote:

> fine (and you probably don't want much more because the memory is better
> used for something else), when a machine does random rewrites, going to 40%
> might be well worth it. So I'd like to discuss how we could measure whether
> increasing the amount of dirtiable memory helps, so that we could implement
> dynamic sizing of it.

Another issue with dirty limits is that they are global. If you are
running multiple jobs on the same box (partitioned via memcg or cpusets,
or by setting affinities to divide up the box), then every job may need
a different dirty limit. One idea I had in the past was to set dirty
limits per node or per cpuset, but that would not cover the other cases
listed above.

The best solution would be an algorithm that can accommodate multiple
loads and manage the amount of dirty memory automatically.
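To make that concrete, here is a minimal user-space sketch of one such
algorithm: split a global dirty budget among independent domains (jobs,
cgroups, cpusets) in proportion to each domain's recent writeback
completions, so a random-rewrite job earns a larger share automatically.
This is in the spirit of the per-BDI dirty thresholds, not actual kernel
code; all names (struct domain, recompute_limits, the sample numbers)
are made up for illustration.

#include <stdio.h>
#include <stddef.h>

/* Illustrative only: a "domain" stands for one job/cgroup/cpuset. */
struct domain {
	unsigned long completions;	/* recent writeback completions */
	unsigned long dirty_limit;	/* computed share of the budget */
};

/*
 * Give each domain a share of the global dirty budget proportional
 * to its share of recent writeback activity; fall back to an even
 * split when there is no history yet.
 */
static void recompute_limits(struct domain *doms, size_t n,
			     unsigned long global_limit)
{
	unsigned long long total = 0;
	size_t i;

	for (i = 0; i < n; i++)
		total += doms[i].completions;

	for (i = 0; i < n; i++) {
		if (total == 0)
			doms[i].dirty_limit = global_limit / n;
		else
			doms[i].dirty_limit = (unsigned long)
				((unsigned long long)global_limit *
				 doms[i].completions / total);
	}
}

int main(void)
{
	struct domain doms[2] = {
		{ .completions = 300 },	/* e.g. a random-rewrite job */
		{ .completions = 100 },	/* e.g. a mostly-read job */
	};

	recompute_limits(doms, 2, 1000);	/* budget of 1000 pages */
	printf("limits: %lu %lu\n",
	       doms[0].dirty_limit, doms[1].dirty_limit);
	return 0;
}

In practice the completion counters would need to be decayed over time
and each domain given some minimum share, but the proportional split is
the core of the idea.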



