On Fri, 12 Mar 2010 00:27:09 +0100 Andrea Righi <arighi@xxxxxxxxxxx> wrote:
> On Thu, Mar 11, 2010 at 10:03:07AM -0500, Vivek Goyal wrote:
> > I am still setting up the system to test whether we see any speedup in
> > writeout of large files within a memory cgroup with small memory limits.
> > I am assuming that we are expecting a speedup because we will start
> > writeouts early and background writeouts probably are faster than direct
> > reclaim?
>
> mmh... speedup? I think with a large file write + reduced dirty limits
> you'll get a more uniform write-out (more frequent small writes),
> compared to fewer, less frequent large writes. The system will be more
> reactive, but I don't think you'll be able to see a speedup in the large
> write itself.

Ah, sorry. I misunderstood something.

But it depends on the dirty_ratio parameters. With

  background_dirty_ratio = 5
  dirty_ratio = 100

under a 100M cgroup, I think background write-out will be a help.
(nonsense? ;)

And I wonder whether "make -j" can get a better number... Hmm.

Thanks,
-Kame
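[Editor's note: a minimal sketch of the configuration Kame describes, assuming the per-cgroup dirty-limit patchset under discussion exposes knobs as memory.* files in the memcg directory. The file names memory.dirty_background_ratio and memory.dirty_ratio, and the /cgroup/memory mount path, are assumptions for illustration, not confirmed by this thread.]

```shell
# Hypothetical sketch: create a 100M memcg and set aggressive
# background write-out. The memory.dirty_* file names are an
# assumption about the interface of the patchset being discussed.
mkdir /cgroup/memory/test
echo 100M > /cgroup/memory/test/memory.limit_in_bytes

# Kick off background write-out once dirty pages reach 5% of the
# cgroup's memory, but only throttle writers directly at 100%.
echo 5   > /cgroup/memory/test/memory.dirty_background_ratio
echo 100 > /cgroup/memory/test/memory.dirty_ratio

# Move the current shell into the cgroup and run the large write.
echo $$ > /cgroup/memory/test/tasks
dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024
```

With these settings the flusher threads start writing back early (at ~5M of dirty pages), while direct throttling of the writer is effectively disabled, which is why background write-out rather than direct reclaim would do most of the work.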