On Thu, 2009-09-10 at 21:21 +0800, Wu Fengguang wrote:
> On Thu, Sep 10, 2009 at 08:57:42PM +0800, Chris Mason wrote:
> > On Thu, Sep 10, 2009 at 09:42:01AM +0800, Wu Fengguang wrote:
> > > On Wed, Sep 09, 2009 at 11:44:13PM +0800, Jan Kara wrote:
> > > > On Wed 09-09-09 22:51:48, Wu Fengguang wrote:
> > > > > Some filesystems may choose to write much more than ratelimit_pages
> > > > > before calling balance_dirty_pages_ratelimited_nr(). So it is safer
> > > > > to determine the number to write based on the real number of
> > > > > dirtied pages.
> > > > >
> > > > > The increased write_chunk may make the dirtier more bumpy. It is
> > > > > the filesystem writers' duty not to dirty too much at a time
> > > > > without checking the ratelimit.
> > > >
> > > > I don't get this. balance_dirty_pages_ratelimited_nr() is called when
> > > > we dirty the page, not when we write it out. So a problem would only
> > > > happen if a filesystem dirties pages via set_page_dirty() and doesn't
> > > > call balance_dirty_pages_ratelimited_nr(). But e.g.
> > > > generic_perform_write() and do_wp_page() take care of that. So
> > > > where's the problem?
> > >
> > > It seems that btrfs_file_write() is writing in chunks of up to 1024
> > > pages (1024 is the computed nrptrs value in a 32-bit kernel), and it
> > > calls balance_dirty_pages_ratelimited_nr() each time it has dirtied
> > > such a chunk.
> >
> > I can easily change this to call more often, but we do always call
> > balance_dirty_pages to reflect how much RAM we've really sent down.
>
> Btrfs is doing OK; 2MB/4MB look like reasonable chunk sizes. The part
> that needs changing is balance_dirty_pages_ratelimited_nr(), hence this
> patch :)

I'm not getting it. It calls set_page_dirty() for each page, right? And
then it calls into balance_dirty_pages_ratelimited_nr(), which sounds
right. So what is the problem with that?
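
For reference, here is a minimal sketch of the chunked write pattern
being discussed. This is not the actual btrfs_file_write() code;
dirty_one_chunk() is a hypothetical stand-in for the fs-specific
copy-in + set_page_dirty() loop, assumed to return the number of pages
it really dirtied in that chunk.

	#include <linux/fs.h>
	#include <linux/mm.h>
	#include <linux/writeback.h>

	/* Hypothetical helper: dirties up to pages_per_chunk pages of
	 * the file and returns how many it actually dirtied. */
	static unsigned long dirty_one_chunk(struct address_space *mapping,
					     unsigned long pages_per_chunk);

	static void example_chunked_write(struct address_space *mapping,
					  unsigned long nr_chunks,
					  unsigned long pages_per_chunk)
	{
		unsigned long i;

		for (i = 0; i < nr_chunks; i++) {
			unsigned long dirtied;

			dirtied = dirty_one_chunk(mapping, pages_per_chunk);

			/*
			 * Report the real number of pages dirtied in this
			 * chunk, which may well exceed ratelimit_pages.
			 */
			balance_dirty_pages_ratelimited_nr(mapping, dirtied);
		}
	}

The point of the patch is that balance_dirty_pages() should then size
its write_chunk from that per-call count, rather than assuming that at
most ratelimit_pages were dirtied since the last call.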