> > > @@ -633,6 +633,14 @@ static long wb_writeback(struct bdi_writeback *wb,
> > >  			break;
> > > 
> > >  		/*
> > > +		 * Background writeout and kupdate-style writeback are
> > > +		 * easily livelockable. Stop them if there is other work
> > > +		 * to do so that e.g. sync can proceed.
> > > +		 */
> > > +		if ((work->for_background || work->for_kupdate) &&
> > > +		    !list_empty(&wb->bdi->work_list))
> > > +			break;
> > > +		/*
> > 
> > So what happens if an application sits in a loop doing write&fsync to a
> > file? The VM's call for help gets ignored and your data doesn't get
> > written back for three days??
> 
> write & fsync wouldn't influence this because fsync() doesn't queue any
> work for the flusher thread (all the IO is done on behalf of the process
> doing fsync()).

Right. The fsync functions call into __filemap_fdatawrite_range() to
start writeback directly, instead of relaying the work to the flusher
threads.

> If someone would be doing:
> 	while (1) sync();
> then this would make the bdi-flusher thread ignore any of the VM's
> requests.

Yes, at least for now.

> But we won't have much dirty data in this case anyway.

With a heavy dirtier, it's still possible to maintain 20% dirty pages
while we are busy sync()ing. It helps to knock down the dirty limit in
this case.

> The subtle thing here is that no one actually ever calls the flusher
> thread to do less work than it does when doing "kupdate" or "background"
> writeback as defined above.

Yes.

> But if we grow some calls to the flusher thread for just a limited
> amount of pages in the future, then you are right it could be a problem,
> especially if the flusher thread could be flooded with such requests.

There's already such an interface, with nr_pages and !for_background:
wakeup_flusher_threads(total_scanned) in the direct reclaim path.

I suspect such requests will pile up under memory pressure, and each
request does one kzalloc() -- that's memory allocation at vmscan time!

Thanks,
Fengguang
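
For anyone who wants to poke at the interaction without reading
fs/fs-writeback.c, below is a rough, self-contained userspace model of
the two points above: each wakeup allocates one work item (standing in
for the kzalloc() in the real queueing path) and the patched
background/kupdate loop bails out as soon as the work list is
non-empty. The struct and function names are made up for the sketch
and only approximate the real code.

/* toy model, not kernel code */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

/* simplified stand-in for struct wb_writeback_work */
struct wb_work {
	long nr_pages;
	bool for_background;
	bool for_kupdate;
	struct wb_work *next;
};

/* stand-in for bdi->work_list */
static struct wb_work *work_list;

/*
 * Roughly what a reclaim-time wakeup does: allocate one work item per
 * request and queue it.  In the kernel this is a kzalloc(), i.e. a
 * memory allocation made while we are already short on memory.
 */
static void queue_writeback_work(long nr_pages)
{
	struct wb_work *work = calloc(1, sizeof(*work));

	if (!work)
		return;		/* best effort, drop the request */
	work->nr_pages = nr_pages;
	work->next = work_list;
	work_list = work;
}

/*
 * The check added by the patch: background/kupdate writeback stops as
 * soon as somebody has queued explicit work, so that e.g. sync can run.
 */
static long background_writeback(long dirty_pages)
{
	long written = 0;

	while (dirty_pages > 0) {
		if (work_list != NULL)
			break;
		dirty_pages -= 1024;
		written += 1024;
	}
	return written;
}

int main(void)
{
	int i;

	/* simulate direct reclaim poking the flusher a few times */
	for (i = 0; i < 3; i++)
		queue_writeback_work(1024);

	printf("background pass wrote %ld pages, %s work still queued\n",
	       background_writeback(1 << 20),
	       work_list ? "with" : "no");
	return 0;
}

It is only a sketch: in the real code the flusher thread processes the
queued items and then background writeback gets another go.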