Re: [PATCH mm-unstable RFC 1/5] writeback: move wb_over_bg_thresh() call outside lock section

+Jens & Jan

The patch looks good to me, but it would be nice to have the experts in
this area take a look as well.
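For anyone skimming the thread: the change just hoists the (potentially
expensive) wb_over_bg_thresh() check out of the wb->list_lock critical
section and takes the lock only around the list handling. A toy
user-space sketch of that pattern, with a pthread mutex standing in for
wb->list_lock and made-up helper names (not the kernel code itself):

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdio.h>

  static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

  /* Stand-in for wb_over_bg_thresh(): may be expensive (it can trigger
   * a stats flush), so we do not want to call it with list_lock held. */
  static bool over_bg_thresh(void)
  {
          return false;           /* pretend we are under the threshold */
  }

  /* Stand-in for the locked part of the loop that walks the b_* lists. */
  static long locked_writeback_pass(void)
  {
          return 0;               /* no progress in this toy version */
  }

  static void writeback_loop(void)
  {
          for (;;) {
                  /* Expensive threshold check done without the lock. */
                  if (!over_bg_thresh())
                          break;

                  /* Take the lock only for the list manipulation. */
                  pthread_mutex_lock(&list_lock);
                  long progress = locked_writeback_pass();
                  pthread_mutex_unlock(&list_lock);

                  if (!progress)
                          break;
          }
  }

  int main(void)
  {
          writeback_loop();
          puts("done");
          return 0;
  }

(Compile with "cc -pthread"; it is only meant to show the lock/unlock
placement the patch ends up with.)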

On Mon, Apr 3, 2023 at 3:03 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> wb_over_bg_thresh() calls mem_cgroup_wb_stats() which invokes an rstat
> flush, which can be expensive on large systems. Currently,
> wb_writeback() calls wb_over_bg_thresh() within a lock section, so we
> have to make the rstat flush atomic. On systems with a lot of
> cpus/cgroups, this can cause us to disable irqs for a long time,
> potentially causing problems.
>
> Move the call to wb_over_bg_thresh() outside the lock section in
> preparation to make the rstat flush in mem_cgroup_wb_stats() non-atomic.
> The list_empty(&wb->work_list) check should be okay outside the
> wb->list_lock section, as the work list is protected by a separate lock
> (wb->work_lock), and wb_over_bg_thresh() does not appear to modify any
> of the b_* lists that wb->list_lock protects. Also, the loop already
> releases and reacquires the lock, so this refactoring looks safe.
>
> Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> ---
>  fs/fs-writeback.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 195dc23e0d831..012357bc8daa3 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -2021,7 +2021,6 @@ static long wb_writeback(struct bdi_writeback *wb,
>         struct blk_plug plug;
>
>         blk_start_plug(&plug);
> -       spin_lock(&wb->list_lock);
>         for (;;) {
>                 /*
>                  * Stop writeback when nr_pages has been consumed
> @@ -2046,6 +2045,9 @@ static long wb_writeback(struct bdi_writeback *wb,
>                 if (work->for_background && !wb_over_bg_thresh(wb))
>                         break;
>
> +
> +               spin_lock(&wb->list_lock);
> +
>                 /*
>                  * Kupdate and background works are special and we want to
>                  * include all inodes that need writing. Livelock avoidance is
> @@ -2075,13 +2077,19 @@ static long wb_writeback(struct bdi_writeback *wb,
>                  * mean the overall work is done. So we keep looping as long
>                  * as made some progress on cleaning pages or inodes.
>                  */
> -               if (progress)
> +               if (progress) {
> +                       spin_unlock(&wb->list_lock);
>                         continue;
> +               }
> +
>                 /*
>                  * No more inodes for IO, bail
>                  */
> -               if (list_empty(&wb->b_more_io))
> +               if (list_empty(&wb->b_more_io)) {
> +                       spin_unlock(&wb->list_lock);
>                         break;
> +               }
> +
>                 /*
>                  * Nothing written. Wait for some inode to
>                  * become available for writeback. Otherwise
> @@ -2093,9 +2101,7 @@ static long wb_writeback(struct bdi_writeback *wb,
>                 spin_unlock(&wb->list_lock);
>                 /* This function drops i_lock... */
>                 inode_sleep_on_writeback(inode);
> -               spin_lock(&wb->list_lock);
>         }
> -       spin_unlock(&wb->list_lock);
>         blk_finish_plug(&plug);
>
>         return nr_pages - work->nr_pages;
> --
> 2.40.0.348.gf938b09366-goog
>



