The patch titled
     Subject: mm/page-writeback: Fix performance when BDI's share of ratio is 0.
has been added to the -mm tree.  Its filename is
     mm-page-writeback-fix-performance-when-bdis-share-of-ratio-is-0.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-page-writeback-fix-performance-when-bdis-share-of-ratio-is-0.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-page-writeback-fix-performance-when-bdis-share-of-ratio-is-0.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Chi Wu <wuchi.zero@xxxxxxxxx>
Subject: mm/page-writeback: Fix performance when BDI's share of ratio is 0.

Fix performance when BDI's share of ratio is 0.

The issue is similar to commit 74d369443325 ("writeback: Fix performance
regression in wb_over_bg_thresh()").

balance_dirty_pages() and the writeback worker can likewise disagree on
whether writeback is needed when a BDI uses BDI_CAP_STRICTLIMIT and its
share of the thresh ratio is zero.

For example, a thread on cpu0 writes 32 pages and then calls
balance_dirty_pages(); it wakes up background writeback and pauses because
wb_dirty > wb->wb_thresh = 0 (the share of the thresh ratio is zero).  The
thread is likely to run on cpu0 again because the scheduler prefers
prev_cpu.  Meanwhile the writeback worker may run on other cpus (1, 2, ...),
so the value of wb_stat(wb, WB_RECLAIMABLE) seen in wb_over_bg_thresh() is
0, and the worker does no writeback and returns.

Thus, balance_dirty_pages() keeps looping, sleeping and then waking up a
worker that does nothing.  It remains stuck in this state until the
writeback worker hits the right dirty cpu or the dirty pages expire.

The fix is to read the exact counter sum via wb_stat_sum() when the
threshold is low.

Link: https://lkml.kernel.org/r/20210428225046.16301-1-wuchi.zero@xxxxxxxxx
Signed-off-by: Chi Wu <wuchi.zero@xxxxxxxxx>
Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: Miklos Szeredi <mszeredi@xxxxxxxxxx>
Cc: Sedat Dilek <sedat.dilek@xxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page-writeback.c |   20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

--- a/mm/page-writeback.c~mm-page-writeback-fix-performance-when-bdis-share-of-ratio-is-0
+++ a/mm/page-writeback.c
@@ -1944,6 +1944,8 @@ bool wb_over_bg_thresh(struct bdi_writeb
 	struct dirty_throttle_control * const gdtc = &gdtc_stor;
 	struct dirty_throttle_control * const mdtc = mdtc_valid(&mdtc_stor) ?
 						     &mdtc_stor : NULL;
+	unsigned long reclaimable;
+	unsigned long thresh;
 
 	/*
 	 * Similar to balance_dirty_pages() but ignores pages being written
@@ -1956,8 +1958,13 @@ bool wb_over_bg_thresh(struct bdi_writeb
 	if (gdtc->dirty > gdtc->bg_thresh)
 		return true;
 
-	if (wb_stat(wb, WB_RECLAIMABLE) >
-	    wb_calc_thresh(gdtc->wb, gdtc->bg_thresh))
+	thresh = wb_calc_thresh(gdtc->wb, gdtc->bg_thresh);
+	if (thresh < 2 * wb_stat_error())
+		reclaimable = wb_stat_sum(wb, WB_RECLAIMABLE);
+	else
+		reclaimable = wb_stat(wb, WB_RECLAIMABLE);
+
+	if (reclaimable > thresh)
 		return true;
 
 	if (mdtc) {
@@ -1971,8 +1978,13 @@ bool wb_over_bg_thresh(struct bdi_writeb
 		if (mdtc->dirty > mdtc->bg_thresh)
 			return true;
 
-		if (wb_stat(wb, WB_RECLAIMABLE) >
-		    wb_calc_thresh(mdtc->wb, mdtc->bg_thresh))
+		thresh = wb_calc_thresh(mdtc->wb, mdtc->bg_thresh);
+		if (thresh < 2 * wb_stat_error())
+			reclaimable = wb_stat_sum(wb, WB_RECLAIMABLE);
+		else
+			reclaimable = wb_stat(wb, WB_RECLAIMABLE);
+
+		if (reclaimable > thresh)
 			return true;
 	}
 
_

Patches currently in -mm which might be from wuchi.zero@xxxxxxxxx are

mm-page-writeback-fix-performance-when-bdis-share-of-ratio-is-0.patch
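
Background note on why wb_stat() can report 0 in the scenario the changelog
describes: an approximate per-CPU counter read can lag the true value by up to
the per-CPU batch on every CPU, which is roughly the error bound that
wb_stat_error() reports.  The userspace sketch below is illustrative only and
is not part of the patch; fake_percpu_counter, fake_add(), fake_read(),
fake_sum(), NR_CPUS and BATCH are made-up stand-ins for the kernel's
percpu_counter machinery that sits behind wb_stat() and wb_stat_sum().

#include <stdio.h>

#define NR_CPUS		4
#define BATCH		32	/* hypothetical per-CPU batch size */

struct fake_percpu_counter {
	long shared;		/* cheap, approximate global count */
	long percpu[NR_CPUS];	/* per-CPU deltas not yet folded in */
};

/* Account @nr pages on @cpu; fold into the shared count once a batch fills. */
static void fake_add(struct fake_percpu_counter *c, int cpu, long nr)
{
	c->percpu[cpu] += nr;
	if (c->percpu[cpu] >= BATCH || c->percpu[cpu] <= -BATCH) {
		c->shared += c->percpu[cpu];
		c->percpu[cpu] = 0;
	}
}

/* Approximate read: shared count only (the wb_stat() style of read). */
static long fake_read(struct fake_percpu_counter *c)
{
	return c->shared;
}

/* Exact read: add every CPU's pending delta (the wb_stat_sum() style). */
static long fake_sum(struct fake_percpu_counter *c)
{
	long sum = c->shared;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		sum += c->percpu[cpu];
	return sum;
}

int main(void)
{
	struct fake_percpu_counter reclaimable = { 0 };
	long thresh = 0;	/* the wb's share of the background ratio is 0 */

	/*
	 * A writer on cpu0 dirties a few pages; the count stays in the
	 * per-CPU delta because it never reaches BATCH.
	 */
	fake_add(&reclaimable, 0, 31);

	printf("approximate read: %ld > %ld -> %s\n",
	       fake_read(&reclaimable), thresh,
	       fake_read(&reclaimable) > thresh ? "write back" : "skip");
	printf("exact sum:        %ld > %ld -> %s\n",
	       fake_sum(&reclaimable), thresh,
	       fake_sum(&reclaimable) > thresh ? "write back" : "skip");
	return 0;
}

With thresh at 0 and only 31 pages accounted on cpu0, the approximate read
compares 0 > 0 and skips writeback, while the exact sum compares 31 > 0 and
writes back.  That is the behaviour the patch restores by falling back to
wb_stat_sum() whenever thresh < 2 * wb_stat_error().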