On Wed 16-03-16 13:46:17, Tejun Heo wrote:
> Hello,
>
> (cc'ing Jan)
>
> On Mon, Mar 14, 2016 at 05:09:00PM +0100, Michal Hocko wrote:
> > On Sun 13-03-16 23:22:23, Tetsuo Handa wrote:
> > [...]
> >
> > I am not familiar with the writeback code so I might be missing
> > something essential here, but why are we even queueing more and more
> > work without checking whether enough has already been scheduled or is
> > in progress?
> >
> > Something as simple as:
> > diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> > index 6915c950e6e8..aa52e23ac280 100644
> > --- a/fs/fs-writeback.c
> > +++ b/fs/fs-writeback.c
> > @@ -887,7 +887,7 @@ void wb_start_writeback(struct bdi_writeback *wb, long nr_pages,
> >  {
> >  	struct wb_writeback_work *work;
> >  
> > -	if (!wb_has_dirty_io(wb))
> > +	if (!wb_has_dirty_io(wb) || writeback_in_progress(wb))
> >  		return;
>
> I'm not sure this would be safe. It shouldn't harm correctness, as
> wb_start_writeback() isn't used in the sync case, but it might change
> flush behavior in various ways. Dropping GFP_ATOMIC as suggested by
> Tetsuo is likely better.

Yes, there can be different requests for different numbers of pages to be
written, and you don't want to discard a request to clean 4000 pages just
because a writeback of 10 pages happens to be running. As Tejun says, this
is not a hard requirement, but in general it would be unexpected for users
of the API...

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
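
For reference, a rough sketch of what "dropping GFP_ATOMIC" could look like
in the allocation path of wb_start_writeback(). Tetsuo's exact proposal is
not quoted above, so the GFP_NOWAIT | __GFP_NOWARN choice here is only an
illustrative assumption, not a tested patch:

--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ ... @@ void wb_start_writeback(struct bdi_writeback *wb, long nr_pages,
-	work = kzalloc(sizeof(*work), GFP_ATOMIC);
+	/*
+	 * Sketch only: no need to dip into atomic reserves here. If the
+	 * opportunistic allocation fails, the existing fallback below
+	 * just wakes the flusher thread for old dirty data instead of
+	 * queueing a precise writeback request.
+	 */
+	work = kzalloc(sizeof(*work), GFP_NOWAIT | __GFP_NOWARN);
 	if (!work) {
 		trace_writeback_nowork(wb);
 		wb_wakeup(wb);
 		return;
 	}

Whether GFP_NOWAIT alone is enough, or whether something like
__GFP_NOMEMALLOC should be added as well, is a separate question; the point
is only that this path already tolerates allocation failure, so it does not
have to burn atomic reserves to queue the work item.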