The patch titled
     writeback: quit throttling when bdi dirty pages dropped low
has been added to the -mm tree.  Its filename is
     writeback-quit-throttling-when-bdi-dirty-pages-dropped-low.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://userweb.kernel.org/~akpm/stuff/added-to-mm.txt to find
out what to do about this

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
Subject: writeback: quit throttling when bdi dirty pages dropped low
From: Wu Fengguang <fengguang.wu@xxxxxxxxx>

Tests show that bdi_thresh may take minutes to ramp up on a typical
desktop.  The ramp-up time can be improved, but not eliminated entirely.

So when (background_thresh + dirty_thresh)/2 is reached and
balance_dirty_pages() starts to throttle the task, it will suddenly find
the (still low and ramping up) bdi_thresh exceeded _excessively_.  Here we
definitely don't want to stall the task for a minute (e.g. when it is
writing to a USB stick).  So introduce an alternative way to break out of
the loop when the bdi dirty/writeback pages have dropped by a reasonable
amount.

When dirty_background_ratio is set close to dirty_ratio, bdi_thresh may
also be constantly exceeded due to the task_dirty_limit() gap.  This is
addressed by another patch, which lowers the background threshold when
necessary.

It will take at least 100ms before trying to break out.  Note that this
opens the chance that during normal operation, a huge number of slow
dirtiers writing to a really slow device might manage to outrun
bdi_thresh.  But the risk is pretty low: it takes at least one 100ms
sleep loop to break out, and the global limit is still enforced.
Signed-off-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
Cc: Chris Mason <chris.mason@xxxxxxxxxx>
Cc: Dave Chinner <david@xxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxxxxx>
Cc: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
Cc: Li Shaohua <shaohua.li@xxxxxxxxx>
Cc: Theodore Ts'o <tytso@xxxxxxx>
Cc: Richard Kennedy <richard@xxxxxxxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Mel Gorman <mel@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Michael Rubin <mrubin@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page-writeback.c |   20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff -puN mm/page-writeback.c~writeback-quit-throttling-when-bdi-dirty-pages-dropped-low mm/page-writeback.c
--- a/mm/page-writeback.c~writeback-quit-throttling-when-bdi-dirty-pages-dropped-low
+++ a/mm/page-writeback.c
@@ -526,6 +526,7 @@ static void balance_dirty_pages(struct a
 {
 	long nr_reclaimable;
 	long nr_dirty, bdi_dirty;  /* = file_dirty + writeback + unstable_nfs */
+	long bdi_prev_dirty = 0;
 	unsigned long background_thresh;
 	unsigned long dirty_thresh;
 	unsigned long bdi_thresh;
@@ -578,6 +579,25 @@ static void balance_dirty_pages(struct a
 				     bdi_stat(bdi, BDI_WRITEBACK);
 		}
 
+		/*
+		 * bdi_thresh takes time to ramp up from the initial 0,
+		 * especially for slow devices.
+		 *
+		 * It's possible that at the moment dirty throttling starts,
+		 *	bdi_dirty = nr_dirty
+		 *		  = (background_thresh + dirty_thresh) / 2
+		 *		 >> bdi_thresh
+		 * Then the task could be blocked for a dozen seconds to flush
+		 * all the exceeded (bdi_dirty - bdi_thresh) pages.  So offer a
+		 * complementary way to break out of the loop when 250ms worth
+		 * of dirty pages have been cleaned during our pause time.
+		 */
+		if (nr_dirty < dirty_thresh &&
+		    bdi_prev_dirty - bdi_dirty >
+				bdi->write_bandwidth >> (PAGE_CACHE_SHIFT + 2))
+			break;
+		bdi_prev_dirty = bdi_dirty;
+
 		if (bdi_dirty >= bdi_thresh) {
 			pause = HZ/10;
 			goto pause;
_

Patches currently in -mm which might be from fengguang.wu@xxxxxxxxx are

linux-next.patch
writeback-integrated-background-writeback-work.patch
writeback-trace-wakeup-event-for-background-writeback.patch
writeback-stop-background-kupdate-works-from-livelocking-other-works.patch
writeback-stop-background-kupdate-works-from-livelocking-other-works-update.patch
writeback-avoid-livelocking-wb_sync_all-writeback.patch
writeback-avoid-livelocking-wb_sync_all-writeback-update.patch
writeback-check-skipped-pages-on-wb_sync_all.patch
writeback-check-skipped-pages-on-wb_sync_all-update.patch
writeback-check-skipped-pages-on-wb_sync_all-update-fix.patch
writeback-io-less-balance_dirty_pages.patch
writeback-consolidate-variable-names-in-balance_dirty_pages.patch
writeback-per-task-rate-limit-on-balance_dirty_pages.patch
writeback-per-task-rate-limit-on-balance_dirty_pages-fix.patch
writeback-prevent-duplicate-balance_dirty_pages_ratelimited-calls.patch
writeback-account-per-bdi-accumulated-written-pages.patch
writeback-bdi-write-bandwidth-estimation.patch
writeback-show-bdi-write-bandwidth-in-debugfs.patch
writeback-quit-throttling-when-bdi-dirty-pages-dropped-low.patch
writeback-reduce-per-bdi-dirty-threshold-ramp-up-time.patch
writeback-make-reasonable-gap-between-the-dirty-background-thresholds.patch
writeback-scale-down-max-throttle-bandwidth-on-concurrent-dirtiers.patch
writeback-add-trace-event-for-balance_dirty_pages.patch
writeback-make-nr_to_write-a-per-file-limit.patch
mm-page-writebackc-fix-__set_page_dirty_no_writeback-return-value.patch
mm-find_get_pages_contig-fixlet.patch
mm-smaps-export-mlock-information.patch
memcg-add-page_cgroup-flags-for-dirty-page-tracking.patch
memcg-document-cgroup-dirty-memory-interfaces.patch
memcg-document-cgroup-dirty-memory-interfaces-fix.patch
memcg-create-extensible-page-stat-update-routines.patch
memcg-add-lock-to-synchronize-page-accounting-and-migration.patch
writeback-create-dirty_info-structure.patch
memcg-add-dirty-page-accounting-infrastructure.patch
memcg-add-kernel-calls-for-memcg-dirty-page-stats.patch
memcg-add-dirty-limits-to-mem_cgroup.patch
memcg-add-dirty-limits-to-mem_cgroup-use-native-word-to-represent-dirtyable-pages.patch
memcg-add-dirty-limits-to-mem_cgroup-catch-negative-per-cpu-sums-in-dirty-info.patch
memcg-add-dirty-limits-to-mem_cgroup-avoid-overflow-in-memcg_hierarchical_free_pages.patch
memcg-add-dirty-limits-to-mem_cgroup-correct-memcg_hierarchical_free_pages-return-type.patch
memcg-add-dirty-limits-to-mem_cgroup-avoid-free-overflow-in-memcg_hierarchical_free_pages.patch
memcg-cpu-hotplug-lockdep-warning-fix.patch
memcg-add-cgroupfs-interface-to-memcg-dirty-limits.patch
memcg-break-out-event-counters-from-other-stats.patch
memcg-check-memcg-dirty-limits-in-page-writeback.patch
memcg-use-native-word-page-statistics-counters.patch
memcg-use-native-word-page-statistics-counters-fix.patch
memcg-add-mem_cgroup-parameter-to-mem_cgroup_page_stat.patch
memcg-pass-mem_cgroup-to-mem_cgroup_dirty_info.patch
memcg-make-throttle_vm_writeout-memcg-aware.patch
memcg-make-throttle_vm_writeout-memcg-aware-fix.patch
memcg-simplify-mem_cgroup_page_stat.patch
memcg-simplify-mem_cgroup_dirty_info.patch
memcg-make-mem_cgroup_page_stat-return-value-unsigned.patch
memcg-use-zalloc-rather-than-mallocmemset.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html