The patch titled
     writeback: scale down max throttle bandwidth on concurrent dirtiers
has been added to the -mm tree.  Its filename is
     writeback-scale-down-max-throttle-bandwidth-on-concurrent-dirtiers.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

See http://userweb.kernel.org/~akpm/stuff/added-to-mm.txt to find
out what to do about this

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
Subject: writeback: scale down max throttle bandwidth on concurrent dirtiers
From: Wu Fengguang <fengguang.wu@xxxxxxxxx>

This will noticeably reduce the fluctuations of pause time when there are
100+ concurrent dirtiers.

The more parallel dirtiers (1 dirtier => 4 dirtiers), the smaller
bandwidth each dirtier will share (bdi_bandwidth => bdi_bandwidth/4), the
smaller the gap to the dirty limit ((C-A) => (C-B)), and the less stable
the pause time will be (given the same fluctuation of bdi_dirty).

For example, if A drifts to A', its pause time may drift from 5ms to 6ms,
while B to B' may drift from 50ms to 90ms.  The fluctuations are much
larger in relative ratio as well as in absolute time.

Fig.1 before patch, gap (C-B) is too low to get smooth pause time

throttle_bandwidth_A = bdi_bandwidth .........o
                                              | o <= A'
                                              |   o
                                              |     o
                                              |       o
                                              |         o
throttle_bandwidth_B = bdi_bandwidth / 4 .....|...........o
                                              |           | o <= B'
----------------------------------------------+-----------+---o
                                              A           B   C

The solution is to lower the slope of the throttle line accordingly,
which makes B stabilize at some point farther away from C.
Fig.2 after patch

throttle_bandwidth_A = bdi_bandwidth .........o
                                              | o <= A'
                                              |   o
                                              |     o
  lowered max throttle bandwidth for B ===>   *       o
                                              |   *     o
throttle_bandwidth_B = bdi_bandwidth / 4 .....|.......*...o
                                              |       |   * o
----------------------------------------------+-------+-------o
                                              A       B       C

Note that C is actually different points for the 1-dirtier and 4-dirtiers
cases, but for easy graphing, we move them together.

Signed-off-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
Cc: Chris Mason <chris.mason@xxxxxxxxxx>
Cc: Dave Chinner <david@xxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxxxxx>
Cc: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
Cc: Li Shaohua <shaohua.li@xxxxxxxxx>
Cc: Theodore Ts'o <tytso@xxxxxxx>
Cc: Richard Kennedy <richard@xxxxxxxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Mel Gorman <mel@xxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Michael Rubin <mrubin@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page-writeback.c |   16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff -puN mm/page-writeback.c~writeback-scale-down-max-throttle-bandwidth-on-concurrent-dirtiers mm/page-writeback.c
--- a/mm/page-writeback.c~writeback-scale-down-max-throttle-bandwidth-on-concurrent-dirtiers
+++ a/mm/page-writeback.c
@@ -537,6 +537,7 @@ static void balance_dirty_pages(struct a
 	unsigned long background_thresh;
 	unsigned long dirty_thresh;
 	unsigned long bdi_thresh;
+	unsigned long task_thresh;
 	unsigned long bw;
 	unsigned long pause = 0;
 	bool dirty_exceeded = false;
@@ -566,7 +567,7 @@ static void balance_dirty_pages(struct a
 			break;

 		bdi_thresh = bdi_dirty_limit(bdi, dirty_thresh);
-		bdi_thresh = task_dirty_limit(current, bdi_thresh);
+		task_thresh = task_dirty_limit(current, bdi_thresh);

 		/*
 		 * In order to avoid the stacked BDI deadlock we need
@@ -605,14 +606,23 @@ static void balance_dirty_pages(struct a
 				break;
 		bdi_prev_dirty = bdi_dirty;

-		if (bdi_dirty >= bdi_thresh) {
+		if (bdi_dirty >= task_thresh) {
 			pause = HZ/10;
 			goto pause;
 		}

+		/*
+		 * When bdi_dirty grows closer to bdi_thresh, it indicates more
+		 * concurrent dirtiers.  Proportionally lower the max throttle
+		 * bandwidth.  This will resist bdi_dirty from approaching too
+		 * close to task_thresh, and help reduce fluctuations of pause
+		 * time when there are lots of dirtiers.
+		 */
 		bw = bdi->write_bandwidth;
-		bw = bw * (bdi_thresh - bdi_dirty);
+		bw = bw / (bdi_thresh / BDI_SOFT_DIRTY_LIMIT + 1);
+
+		bw = bw * (task_thresh - bdi_dirty);
 		bw = bw / (bdi_thresh / TASK_SOFT_DIRTY_LIMIT + 1);

 		pause = HZ * (pages_dirtied << PAGE_CACHE_SHIFT) / (bw + 1);
_

Patches currently in -mm which might be from fengguang.wu@xxxxxxxxx are

linux-next.patch
writeback-integrated-background-writeback-work.patch
writeback-trace-wakeup-event-for-background-writeback.patch
writeback-stop-background-kupdate-works-from-livelocking-other-works.patch
writeback-stop-background-kupdate-works-from-livelocking-other-works-update.patch
writeback-avoid-livelocking-wb_sync_all-writeback.patch
writeback-avoid-livelocking-wb_sync_all-writeback-update.patch
writeback-check-skipped-pages-on-wb_sync_all.patch
writeback-check-skipped-pages-on-wb_sync_all-update.patch
writeback-check-skipped-pages-on-wb_sync_all-update-fix.patch
writeback-io-less-balance_dirty_pages.patch
writeback-consolidate-variable-names-in-balance_dirty_pages.patch
writeback-per-task-rate-limit-on-balance_dirty_pages.patch
writeback-per-task-rate-limit-on-balance_dirty_pages-fix.patch
writeback-prevent-duplicate-balance_dirty_pages_ratelimited-calls.patch
writeback-account-per-bdi-accumulated-written-pages.patch
writeback-bdi-write-bandwidth-estimation.patch
writeback-show-bdi-write-bandwidth-in-debugfs.patch
writeback-quit-throttling-when-bdi-dirty-pages-dropped-low.patch
writeback-reduce-per-bdi-dirty-threshold-ramp-up-time.patch
writeback-make-reasonable-gap-between-the-dirty-background-thresholds.patch
writeback-scale-down-max-throttle-bandwidth-on-concurrent-dirtiers.patch
writeback-add-trace-event-for-balance_dirty_pages.patch
writeback-make-nr_to_write-a-per-file-limit.patch
mm-page-writebackc-fix-__set_page_dirty_no_writeback-return-value.patch
mm-find_get_pages_contig-fixlet.patch
mm-smaps-export-mlock-information.patch
memcg-add-page_cgroup-flags-for-dirty-page-tracking.patch
memcg-document-cgroup-dirty-memory-interfaces.patch
memcg-document-cgroup-dirty-memory-interfaces-fix.patch
memcg-create-extensible-page-stat-update-routines.patch
memcg-add-lock-to-synchronize-page-accounting-and-migration.patch
writeback-create-dirty_info-structure.patch
memcg-add-dirty-page-accounting-infrastructure.patch
memcg-add-kernel-calls-for-memcg-dirty-page-stats.patch
memcg-add-dirty-limits-to-mem_cgroup.patch
memcg-add-dirty-limits-to-mem_cgroup-use-native-word-to-represent-dirtyable-pages.patch
memcg-add-dirty-limits-to-mem_cgroup-catch-negative-per-cpu-sums-in-dirty-info.patch
memcg-add-dirty-limits-to-mem_cgroup-avoid-overflow-in-memcg_hierarchical_free_pages.patch
memcg-add-dirty-limits-to-mem_cgroup-correct-memcg_hierarchical_free_pages-return-type.patch
memcg-add-dirty-limits-to-mem_cgroup-avoid-free-overflow-in-memcg_hierarchical_free_pages.patch
memcg-cpu-hotplug-lockdep-warning-fix.patch
memcg-add-cgroupfs-interface-to-memcg-dirty-limits.patch
memcg-break-out-event-counters-from-other-stats.patch
memcg-check-memcg-dirty-limits-in-page-writeback.patch
memcg-use-native-word-page-statistics-counters.patch
memcg-use-native-word-page-statistics-counters-fix.patch
memcg-add-mem_cgroup-parameter-to-mem_cgroup_page_stat.patch
memcg-pass-mem_cgroup-to-mem_cgroup_dirty_info.patch
memcg-make-throttle_vm_writeout-memcg-aware.patch
memcg-make-throttle_vm_writeout-memcg-aware-fix.patch
memcg-simplify-mem_cgroup_page_stat.patch
memcg-simplify-mem_cgroup_dirty_info.patch
memcg-make-mem_cgroup_page_stat-return-value-unsigned.patch
memcg-use-zalloc-rather-than-mallocmemset.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html