Re: [PATCH 01/13] iomap: clear the per-folio dirty bits on all writeback failures

Christoph Hellwig <hch@xxxxxx> writes:

> write_cache_pages always clears the page dirty bit before calling into the
> file systems, and leaves folios with a writeback failure without the
> dirty bit after return.  We also clear the per-block writeback bits for

you mean per-block dirty bits, right?

> writeback failures unless no I/O has been submitted, which will leave the
> folio in an inconsistent state where it doesn't have the folio dirty bit
> set, but still has one or more per-block dirty bits.  This seems to be due
> to the place where the iomap_clear_range_dirty call was inserted into the
> existing, not very clearly structured code when adding per-block dirty bit
> support, and was not actually intentional.  Switch to always clearing the
> dirty bits on writeback failure.
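
For reference, the pre-patch tail of iomap_writepage_map() then looks roughly
like this (a trimmed sketch reconstructed from the hunks below, comments
abbreviated), which shows where the early return skips the clearing:

	if (unlikely(error)) {
		if (wpc->ops->discard_folio)
			wpc->ops->discard_folio(folio, pos);
		if (!count) {
			folio_unlock(folio);
			/* returns with the per-block dirty bits still set */
			goto done;
		}
	}

	/* never reached when error && !count, leaving the bits set */
	iomap_clear_range_dirty(folio, 0, folio_size(folio));
	folio_start_writeback(folio);
	folio_unlock(folio);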
>
> Fixes: 4ce02c679722 ("iomap: Add per-block dirty state tracking to improve performance")
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---

Thanks for catching it. Small nit.

>  fs/iomap/buffered-io.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index f72df2babe561a..98d52feb220f0a 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -1849,10 +1849,6 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>  		 */

		/*
		 * Let the filesystem know what portion of the current page
		 * failed to map. If the page hasn't been added to ioend, it
		 * won't be affected by I/O completion and we must unlock it
		 * now.
		 */
The "we must unlock it now" part of this comment becomes stale here, now that
the unlock is moved below.

>  		if (wpc->ops->discard_folio)
>  			wpc->ops->discard_folio(folio, pos);
> -		if (!count) {
> -			folio_unlock(folio);
> -			goto done;
> -		}
>  	}
>  
>  	/*
> @@ -1861,6 +1857,12 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
>  	 * all the dirty bits in the folio here.
>  	 */
>  	iomap_clear_range_dirty(folio, 0, folio_size(folio));

Maybe we could instead move iomap_clear_range_dirty() earlier, before the
error handling, i.e. something like:

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 200c26f95893..c875ba632dd8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1842,6 +1842,13 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
        if (count)
                wpc->ioend->io_folios++;

+       /*
+        * We can have dirty bits set past end of file in page_mkwrite path
+        * while mapping the last partial folio. Hence it's better to clear
+        * all the dirty bits in the folio here.
+        */
+       iomap_clear_range_dirty(folio, 0, folio_size(folio));
+
        WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));
        WARN_ON_ONCE(!folio_test_locked(folio));
        WARN_ON_ONCE(folio_test_writeback(folio));
@@ -1867,13 +1874,6 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
                        goto done;
                }
        }
-
-       /*
-        * We can have dirty bits set past end of file in page_mkwrite path
-        * while mapping the last partial folio. Hence it's better to clear
-        * all the dirty bits in the folio here.
-        */
-       iomap_clear_range_dirty(folio, 0, folio_size(folio));
        folio_start_writeback(folio);
        folio_unlock(folio);
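
With that, the existing !count early return in the error path no longer skips
the clearing, i.e. roughly (an untested sketch, assuming no other changes to
the function):

	if (unlikely(error)) {
		if (wpc->ops->discard_folio)
			wpc->ops->discard_folio(folio, pos);
		if (!count) {
			/* per-block dirty bits were already cleared above */
			folio_unlock(folio);
			goto done;
		}
	}
	folio_start_writeback(folio);
	folio_unlock(folio);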

> +
> +	if (error && !count) {
> +		folio_unlock(folio);
> +		goto done;
> +	}
> +
>  	folio_start_writeback(folio);
>  	folio_unlock(folio);
>  

-ritesh



