On Wed, 2009-10-14 at 09:38 +0800, Wu Fengguang wrote:
> > > Hmm, probably you've discussed this in some other email but why do we
> > > cycle in this loop until we get below dirty limit? We used to leave the
> > > loop after writing write_chunk... So the time we spend in
> > > balance_dirty_pages() is no longer limited, right?
> > Right, this is a legitimate concern.

Quite.

> > Wu was saying that without the loop nr_writeback wasn't limited, but
> > since bdi_writeback_wakeup() is driven from writeout completion, I'm not
> > sure how again that was so.
>
> Let me summarize the ideas :)
>
> There are two cases:
>
> - there are no bdi or block io queue to limit nr_writeback
>   This must be fixed. It either let nr_writeback grow to dirty_thresh
>   (with loop) and thus squeeze nr_dirty, or grow out of control
>   totally (without loop). Current state is, the nr_writeback wait
>   queue for NFS is there; the one for btrfs is still missing.
>
> - there is a nr_writeback limit, but is larger than dirty_thresh
>   In this case nr_dirty will be close to 0 regardless of the loop.
>   The loop will help to keep
>           nr_dirty + nr_writeback + nr_unstable < dirty_thresh
>   Without the loop, the "real" dirty threshold would be larger
>   (determined by the nr_writeback limit).
>
> > We can move all of bdi_dirty to bdi_writeout, if the bdi writeout queue
> > permits, but it cannot grow beyond the total limit, since we're actually
> > waiting for writeout completion.
>
> Yes, this explains the second case. It's some trade-off like: the
> nr_writeback limit can not be trusted in small memory systems, so do
> the loop to impose the dirty_thresh, which unfortunately can hurt
> responsiveness on all systems with prolonged wait time..

Ok, so I'm still puzzled.

  set_page_dirty()
    balance_dirty_pages_ratelimited()
      balance_dirty_pages_ratelimited_nr(1)
        balance_dirty_pages(nr);

So we call balance_dirty_pages() with an appropriate count for each
successful set_page_dirty() invocation, right?

balance_dirty_pages() guarantees that:

  nr_dirty + nr_writeback + nr_unstable < dirty_thresh &&
  (nr_dirty + nr_writeback + nr_unstable < (dirty_thresh + background_thresh)/2 ||
   bdi_dirty + bdi_writeback + bdi_unstable < bdi_thresh)

Now, without the loop and without a writeback limit, I still see no way to
actually generate more 'dirty' pages than dirty_thresh. As soon as we hit
dirty_thresh, a process will wait for exactly the same number of pages to
get cleaned (writeback completed) as it dirtied (+/- the ratelimit fuzz,
which should even out across processes).

That should bound things to dirty_thresh -- the wait is on writeback
completion, so nr_writeback is bounded too.

[ I forgot the exact semantics of unstable; if we clear writeback
  before unstable, we need to fix something ]

Now, a nr_writeback queue that limits writeback will still be useful,
especially for high-speed devices. Once they ramp up and bdi_thresh
exceeds the queue size, it'll take effect. So you reap the benefits
when needed.
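
To make the ratelimit layer in the call chain above concrete, here is a
minimal user-space sketch. The names follow the functions mentioned in the
mail, but the body is illustrative only: the real kernel keeps this counter
per-CPU and lowers the ratelimit as the system nears its dirty limits.

  /*
   * Sketch of the "+/- ratelimit fuzz": each task accumulates the pages
   * it has dirtied and only invokes the (expensive) throttle check every
   * `ratelimit' pages, passing the accumulated count.
   */
  #include <stdio.h>

  static unsigned long task_dirtied;          /* pages dirtied since last check */
  static const unsigned long ratelimit = 32;  /* check every 32 dirtied pages   */

  static void balance_dirty_pages(unsigned long nr_dirtied)
  {
          /* Placeholder for the throttle loop discussed above:
           * write out / wait until the thresholds are respected. */
          printf("throttling after %lu freshly dirtied pages\n", nr_dirtied);
  }

  /* Called once per successful set_page_dirty(). */
  static void balance_dirty_pages_ratelimited_nr(unsigned long nr)
  {
          task_dirtied += nr;
          if (task_dirtied >= ratelimit) {
                  balance_dirty_pages(task_dirtied);
                  task_dirtied = 0;
          }
  }

  int main(void)
  {
          for (int i = 0; i < 100; i++)       /* simulate dirtying 100 pages */
                  balance_dirty_pages_ratelimited_nr(1);
          return 0;
  }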
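
And a sketch of the exit condition quoted above, with illustrative counters
and thresholds. Again this is not the kernel code (which also writes out a
chunk and sleeps inside the loop); it only demonstrates the two-part test:
stay throttled while the global total is over dirty_thresh, and leave early
only when either the global total has dropped to the midpoint between
background_thresh and dirty_thresh, or this bdi is below its own bdi_thresh.

  #include <stdbool.h>
  #include <stdio.h>

  struct counters {
          unsigned long nr_dirty;        /* dirty pages not yet under writeback */
          unsigned long nr_writeback;    /* pages currently being written out   */
          unsigned long nr_unstable;     /* NFS pages written but not committed */
          unsigned long bdi_dirty;       /* same, but per backing device        */
          unsigned long bdi_writeback;
          unsigned long bdi_unstable;
  };

  /* Thresholds, in pages (illustrative values only). */
  static const unsigned long dirty_thresh      = 1000;
  static const unsigned long background_thresh = 500;
  static const unsigned long bdi_thresh        = 200;

  /* True when a dirtying task may leave the throttle loop. */
  static bool may_leave_throttle_loop(const struct counters *c)
  {
          unsigned long total = c->nr_dirty + c->nr_writeback + c->nr_unstable;
          unsigned long bdi   = c->bdi_dirty + c->bdi_writeback + c->bdi_unstable;

          if (total >= dirty_thresh)
                  return false;   /* hard limit: keep throttling */

          /* Below the hard limit: leave if globally comfortable, or if
           * this particular device is under its share. */
          return total < (dirty_thresh + background_thresh) / 2 ||
                 bdi < bdi_thresh;
  }

  int main(void)
  {
          struct counters c = {
                  .nr_dirty = 600, .nr_writeback = 300, .nr_unstable = 50,
                  .bdi_dirty = 150, .bdi_writeback = 100, .bdi_unstable = 0,
          };

          printf("may leave: %s\n", may_leave_throttle_loop(&c) ? "yes" : "no");
          return 0;
  }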