Re: [PATCH v2] btrfs: do proper folio cleanup when run_delalloc_nocow() failed

On 2024/12/4 13:05, Qu Wenruo wrote:
Just like cow_file_range(), from day 1 btrfs doesn't really clean the
folio dirty flags if an ordered extent has been created successfully.

Per the error handling protocol (following iomap, and btrfs' own
handling when it fails at the beginning of the range), we should clear
all dirty flags for the involved folios.

Otherwise the folio range will still be marked dirty, but have no
EXTENT_DELALLOC set inside the io tree.

Since the folio range is still dirty, it will still be a target for
the next writeback, but since there is no EXTENT_DELALLOC, no new
ordered extent will be created for it.

This means the writeback of that folio range will fall back to the COW
fixup path. However, the COW fixup path itself is being re-evaluated,
as the newly introduced pin_user_pages_*() should prevent us from
hitting out-of-band dirty folios, and we're moving to deprecate the
COW fixup path.

We already have an experimental patch that makes the COW fixup path
crash, to verify there are no such out-of-band dirty folios anymore.
So here we need to avoid the COW fixup path, by doing proper folio
dirty flag cleanup.

Unlike the fix in cow_file_range(), which holds the folio and extent
lock until error or a fully successful run, here we have no such
luxury, as we can fall back to COW, and in that case the extent/folio
range will be unlocked by cow_file_range().

So here we introduce a new helper, cleanup_dirty_folios(), to clear the
dirty flags for the involved folios.

And since the final fallback_to_cow() call can also fail, and we rely on
@cur_offset to do the proper cleanup, here we remove the unnecessary and
incorrect @cur_offset assignment.

Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Qu Wenruo <wqu@xxxxxxxx>
---
Changelog:
v2:
- Fix the incorrect @cur_offset assignment to @end
   The @end is not aligned to the sector size, nor should @cur_offset be
   updated before fallback_to_cow() succeeds.

- Add one extra ASSERT() to make sure the range is properly aligned
---
  fs/btrfs/inode.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++-
  1 file changed, 58 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index e8232ac7917f..92df6dfff2e4 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1969,6 +1969,48 @@ static int can_nocow_file_extent(struct btrfs_path *path,
  	return ret < 0 ? ret : can_nocow;
  }

+static void cleanup_dirty_folios(struct btrfs_inode *inode,
+				 struct folio *locked_folio,
+				 u64 start, u64 end, int error)
+{
+	struct btrfs_fs_info *fs_info = inode->root->fs_info;
+	struct address_space *mapping = inode->vfs_inode.i_mapping;
+	pgoff_t start_index = start >> PAGE_SHIFT;
+	pgoff_t end_index = end >> PAGE_SHIFT;
+	u32 len;
+
+	ASSERT(end + 1 - start < U32_MAX);
+	ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
+	       IS_ALIGNED(end + 1, fs_info->sectorsize));
+	len = end + 1 - start;
+
+	/*
+	 * Handle the locked folio first.
+	 * The btrfs_folio_clamp_*() helpers can handle ranges beyond the folio.
+	 */
+	btrfs_folio_clamp_clear_dirty(fs_info, locked_folio, start, len);
+	btrfs_folio_clamp_set_writeback(fs_info, locked_folio, start, len);
+	btrfs_folio_clamp_clear_writeback(fs_info, locked_folio, start, len);
+
+	for (pgoff_t index = start_index; index <= end_index; index++) {
+		struct folio *folio;
+
+		/* Already handled at the beginning. */
+		if (index == locked_folio->index)
+			continue;
+		folio = __filemap_get_folio(mapping, index, FGP_LOCK, GFP_NOFS);
+		/* Cache already dropped, no need to do any cleanup. */
+		if (IS_ERR(folio))
+			continue;
+		btrfs_folio_clamp_clear_dirty(fs_info, folio, start, len);
+		btrfs_folio_clamp_set_writeback(fs_info, folio, start, len);
+		btrfs_folio_clamp_clear_writeback(fs_info, folio, start, len);
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+	mapping_set_error(mapping, error);
+}
+
  /*
   * when nowcow writeback call back.  This checks for snapshots or COW copies
   * of the extents that exist in the file, and COWs the file as required.
@@ -2217,7 +2259,6 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
  		cow_start = cur_offset;

	if (cow_start != (u64)-1) {
-		cur_offset = end;
  		ret = fallback_to_cow(inode, locked_folio, cow_start, end);
  		cow_start = (u64)-1;
  		if (ret)
@@ -2228,6 +2269,22 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
  	return 0;

error:
+	/*
+	 * We have some range with ordered extent created.
+	 *
+	 * Ordered extents and extent maps will be cleaned up by
+	 * btrfs_mark_ordered_io_finished() later, but we also need to clean up
+	 * the dirty flags of the folios.
+	 *
+	 * Otherwise they can be written back again, but without any
+	 * EXTENT_DELALLOC flag in the io tree, forcing the writeback to go
+	 * through the COW fixup path, which is being deprecated.
+	 *
+	 * Also such left-over dirty flags do not follow the error handling protocol.
+	 */
+	if (cur_offset > start)
+		cleanup_dirty_folios(inode, locked_folio, start, cur_offset - 1, ret);
+
  	/*
  	 * If an error happened while a COW region is outstanding, cur_offset
  	 * needs to be reset to cow_start to ensure the COW region is unlocked

It turns out that we cannot directly use extent_clear_unlock_delalloc() for the range [cur_offset, end].

The problem is that @cur_offset can be updated to @cow_start, but fallback_to_cow() may have failed, in which case cow_file_range() has already done the proper cleanup by unlocking all the folios in that range.

In that case, we can hit a VM_BUG_ON() because the folios are already unlocked.

This means we would have to skip the failed COW range during error handling, which makes the error handling much more complex (a rough model of the overlap is sketched below).
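
To make that overlap concrete, here is a tiny userspace model of the range bookkeeping (illustration only, not kernel code and not a proposed fix; the concrete offsets are made up, and only the reset of cur_offset to cow_start mirrors the existing error path):

/*
 * Toy userspace model: once fallback_to_cow() has failed, cow_file_range()
 * has already unlocked the folios of [cow_start, end], so a blanket cleanup
 * over [cur_offset, end] would touch folios that are no longer locked.
 */
#include <assert.h>
#include <stdio.h>

int main(void)
{
	const unsigned long long start = 0;		/* delalloc range start */
	const unsigned long long end = 256 * 1024 - 1;	/* inclusive range end  */
	/* [cow_start, end] was handed to fallback_to_cow(), which failed. */
	const unsigned long long cow_start = 128 * 1024;
	unsigned long long cur_offset;

	/*
	 * The existing error handling resets cur_offset to cow_start, so
	 * [cur_offset, end] now covers exactly the sub-range that
	 * cow_file_range() already cleaned up and unlocked on failure.
	 */
	cur_offset = cow_start;

	printf("still locked, safe to clean up: [%llu, %llu]\n",
	       start, cow_start - 1);
	printf("already unlocked, must skip   : [%llu, %llu]\n",
	       cow_start, end);

	/* Any cleanup over [cur_offset, end] overlaps the already-unlocked part. */
	assert(cur_offset >= cow_start && cur_offset <= end);
	return 0;
}

Whatever the eventual fix looks like, the error path would have to treat [start, cow_start) and the failed COW range [cow_start, end] differently once fallback_to_cow() has been attempted.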

I'll need to find a better solution for this.

Thanks,
Qu
