On Fri 09-02-24 01:20:18, Kemeng Shi wrote:
> In kupdate writeback, only expired inodes (dirty for longer than
> dirty_expire_interval) are supposed to be written back. However, kupdate
> writeback will also write back non-expired inodes left in b_io or
> b_more_io from the last wb_writeback. As a result, writeback keeps being
> triggered unexpectedly when we keep dirtying pages, even though dirty
> memory is under the threshold and the inode is not expired. To be more
> specific:
> Assume the dirty background threshold is > 1G and dirty_expire_centisecs
> is > 60s. When we run fio -size=1G -invalidate=0 -ioengine=libaio
> --time_based -runtime=60... (i.e. keep dirtying), writeback keeps being
> triggered as follows:
> wb_workfn
>   wb_do_writeback
>     wb_check_background_flush
>       /*
>        * The wb dirty background threshold starts at 0 if the device was
>        * idle and grows as the bandwidth of the wb is updated, so a
>        * background writeback is triggered.
>        */
>       wb_over_bg_thresh
>       /*
>        * The dirtied inode will be written back and added to the
>        * b_more_io list after the slice is used up (because we keep
>        * dirtying the inode).
>        */
>       wb_writeback
>
> Writeback is then triggered every dirty_writeback_centisecs as follows:
> wb_workfn
>   wb_do_writeback
>     wb_check_old_data_flush
>       /*
>        * Inodes left in b_io and b_more_io from the last wb_writeback are
>        * written back even though they are not expired, and they will be
>        * added to b_more_io again as the slice will be used up (because
>        * we keep dirtying the inode).
>        */
>       wb_writeback
>
> Fix this by moving non-expired inodes on the io lists left from the last
> wb_writeback back to the dirty list in kupdate writeback.
>
> Tested as follows:
> /* make it easier to observe the issue */
> echo 300000 > /proc/sys/vm/dirty_expire_centisecs
> echo 100 > /proc/sys/vm/dirty_writeback_centisecs
> /* create an idle device */
> mkfs.ext4 -F /dev/vdb
> mount /dev/vdb /bdi1/
> /* run buffered write with fio */
> fio -name test -filename=/bdi1/file -size=800M -ioengine=libaio -bs=4K \
>   -iodepth=1 -rw=write -direct=0 --time_based -runtime=60 -invalidate=0
>
> Result before fix (three runs):
> 1360MB/s
> 1329MB/s
> 1455MB/s
>
> Result after fix (three runs):
> 790MB/s
> 1820MB/s
> 1804MB/s
>
> Signed-off-by: Kemeng Shi <shikemeng@xxxxxxxxxxxxxxx>

OK, I don't find this a particularly troubling problem, but I agree it
might be nice to fix. However, filtering the lists in wb_writeback() like
this seems kind of wrong - the queueing is managed in queue_io() and I'd
prefer to keep it that way. What if we instead modified requeue_inode()
to not requeue_io() inodes when we are doing kupdate-style writeback and
the inode isn't expired? Sure, we may still write back unexpired inodes
once before calling redirty_tail_locked() on them, but that shouldn't
really be noticeable.
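
Something like the completely untested sketch below is what I have in
mind (the expiry check mirrors the dirty_expire_interval handling your
patch already uses; take the exact placement inside requeue_inode() as an
assumption, not a finished patch):

	/*
	 * Untested sketch for requeue_inode(): when the slice is used
	 * up during kupdate writeback, requeue only inodes that have
	 * already expired. Unexpired inodes go back to the dirty list
	 * instead, so wb_check_old_data_flush() will not pick them up
	 * again before they expire.
	 */
	if (wbc->nr_to_write <= 0) {
		if (wbc->for_kupdate &&
		    inode_dirtied_after(inode, jiffies -
				msecs_to_jiffies(dirty_expire_interval * 10)))
			/* Not expired yet - back to the dirty list. */
			redirty_tail_locked(inode, wb);
		else
			/* Slice used up. Queue for next turn. */
			requeue_io(inode, wb);
	}

That way the decision stays in the requeueing logic rather than being a
special pre-pass over b_io / b_more_io in wb_writeback().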
								Honza

> ---
>  fs/fs-writeback.c | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 5ab1aaf805f7..a9a918972719 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -2046,6 +2046,23 @@ static long writeback_inodes_wb(struct bdi_writeback *wb, long nr_pages,
>  	return nr_pages - work.nr_pages;
>  }
>
> +static void filter_expired_io(struct bdi_writeback *wb)
> +{
> +	struct inode *inode, *tmp;
> +	unsigned long expired_jiffies = jiffies -
> +		msecs_to_jiffies(dirty_expire_interval * 10);
> +
> +	spin_lock(&wb->list_lock);
> +	list_for_each_entry_safe(inode, tmp, &wb->b_io, i_io_list)
> +		if (inode_dirtied_after(inode, expired_jiffies))
> +			redirty_tail(inode, wb);
> +
> +	list_for_each_entry_safe(inode, tmp, &wb->b_more_io, i_io_list)
> +		if (inode_dirtied_after(inode, expired_jiffies))
> +			redirty_tail(inode, wb);
> +	spin_unlock(&wb->list_lock);
> +}
> +
>  /*
>   * Explicit flushing or periodic writeback of "old" data.
>   *
> @@ -2070,6 +2087,9 @@ static long wb_writeback(struct bdi_writeback *wb,
>  	long progress;
>  	struct blk_plug plug;
>
> +	if (work->for_kupdate)
> +		filter_expired_io(wb);
> +
>  	blk_start_plug(&plug);
>  	for (;;) {
>  		/*
> --
> 2.30.0

-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR