On Wed, 24 Feb 2010, Jan Kara wrote:

> fine (and you probably don't want much more because the memory is better
> used for something else), when a machine does random rewrites, going to 40%
> might be well worth it. So I'd like to discuss how we could measure that
> increasing amount of dirtiable memory helps so that we could implement
> dynamic sizing of it.

Another issue around dirty limits is that they are global. If you are
running multiple jobs on the same box (partitioned via memcg, cpusets, or
CPU affinities), then every job may need a different dirty limit.

One idea that I had in the past was to set dirty limits based on nodes or
cpusets. But that would not cover the other cases listed above. The best
solution would be an algorithm that can accommodate multiple loads and
manage the amount of dirty memory automatically.
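
To make that concrete, here is a rough sketch of what per-domain (per-node,
per-cpuset, or per-memcg) dirty accounting with an adaptive limit could look
like. This is only an illustration, not a proposal for actual code: the
names dirty_domain, domain_over_dirty_limit, domain_adapt_dirty_limit, and
the grow/shrink heuristic are all made up, and no such interfaces exist in
the kernel today.

#include <stdbool.h>

#define MIN_DIRTY_LIMIT 1024UL	/* arbitrary floor, in pages */

/* One accounting domain: a node, a cpuset, or a memcg. */
struct dirty_domain {
	unsigned long nr_dirty;		/* pages currently dirty in this domain */
	unsigned long dirty_limit;	/* per-domain cap instead of a global one */
};

/* Throttle a writer when its own domain, not the whole machine,
 * crosses its dirty threshold. */
static bool domain_over_dirty_limit(const struct dirty_domain *dom)
{
	return dom->nr_dirty > dom->dirty_limit;
}

/* A crude adaptive policy: grow the limit while the workload keeps
 * rewriting still-dirty pages (caching them pays off), shrink it when
 * writers stall waiting for writeback (the limit is too generous). */
static void domain_adapt_dirty_limit(struct dirty_domain *dom,
				     unsigned long rewrite_hits,
				     unsigned long writeback_stalls)
{
	if (rewrite_hits > writeback_stalls)
		dom->dirty_limit += dom->dirty_limit / 8;
	else if (dom->dirty_limit - dom->dirty_limit / 8 >= MIN_DIRTY_LIMIT)
		dom->dirty_limit -= dom->dirty_limit / 8;
}

A feedback loop along those lines would also speak to Jan's measurement
question: a domain only keeps a larger share of dirtiable memory while it
can demonstrate a benefit from it.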