[LSF/VM TOPIC] Dynamic sizing of dirty_limit

  Hi,

  one more suggestion for discussion:
Currently, the amount of dirtiable memory is fixed - either to a percentage
of RAM (dirty_limit) or to a fixed number of megabytes. The problem with
this is that if you have an application doing random writes to a file (like
some simple databases do), you'll get a big performance improvement if you
increase the amount of dirtiable memory (because you save quite a few
rewrites and also get larger chunks of sequential IO) (*).
On the other hand, for sequential IO, increasing dirtiable memory (beyond a
certain level) does not really help - you end up doing the same IO. So for
a machine doing sequential IO, having 10% of memory dirtiable is just fine
(and you probably don't want much more because the memory is better used
for something else), whereas for a machine doing random rewrites, going to
40% may well be worth it. So I'd like to discuss how we could measure
whether increasing the amount of dirtiable memory helps, so that we could
implement dynamic sizing of it.
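
Just to make the "measure" part a bit more concrete: one crude userspace
way - purely an illustration, not a proposal for how the kernel side should
work - is to sample /proc/vmstat and watch how close nr_dirty stays to
nr_dirty_threshold while the workload runs. If the system sits near the
threshold most of the time, the dirty limit is probably what is throttling
the writers, and raising it is worth trying:

  #!/bin/sh
  # Print once a second how close dirty pages are to the dirty threshold.
  # Both counters are exported in /proc/vmstat on reasonably recent kernels.
  while sleep 1; do
        awk '/^nr_dirty /           { d = $2 }
             /^nr_dirty_threshold / { t = $2 }
             END { printf "dirty %d / threshold %d (%.0f%%)\n",
                          d, t, t ? 100 * d / t : 0 }' /proc/vmstat
  done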

(*) We ended up increasing dirty_limit in SLES 11 to 40%, as it used to be
with old kernels, because customers running e.g. LDAP (which uses BerkeleyDB
heavily) were complaining about performance problems.
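
For reference, the knobs this maps to today (the 40% above corresponds to
vm.dirty_ratio; the values below are just an example of the current static
tuning, not a recommendation):

  # percentage-based limits, system-wide:
  sysctl -w vm.dirty_background_ratio=10  # background writeback kicks in here
  sysctl -w vm.dirty_ratio=40             # writers get throttled here
  # or an absolute amount instead of a percentage:
  # sysctl -w vm.dirty_bytes=$((400 * 1024 * 1024))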

								Honza
-- 
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR