Re: [PATCH 39/40] btrfs: pass private data and end_io handler to btrfs_repair_one_sector

On 2022/3/22 23:56, Christoph Hellwig wrote:
Allow the caller to control what happens when the repair bio completes.
This will be needed to streamline the direct I/O path.

Signed-off-by: Christoph Hellwig <hch@xxxxxx>
---
  fs/btrfs/extent_io.c | 15 ++++++++-------
  fs/btrfs/extent_io.h |  8 ++++----
  fs/btrfs/inode.c     |  4 +++-
  3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 2fdb5d7dd51e1..5a1447db28228 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2627,10 +2627,10 @@ static bool btrfs_check_repairable(struct inode *inode,
  }

  blk_status_t btrfs_repair_one_sector(struct inode *inode,
-			    struct bio *failed_bio, u32 bio_offset,
-			    struct page *page, unsigned int pgoff,
-			    u64 start, int failed_mirror,
-			    submit_bio_hook_t *submit_bio_hook)
+		struct bio *failed_bio, u32 bio_offset, struct page *page,
+		unsigned int pgoff, u64 start, int failed_mirror,
+		submit_bio_hook_t *submit_bio_hook,
+		void *bi_private, void (*bi_end_io)(struct bio *bio))

Not a big fan of extra parameters for a function which already has enough...

And I always have a question about repair (aka reading from an extra copy).

Can't we just make the repair part synchronous?  Instead of putting
everything into another endio callback, wait for the read and re-check
it in the same context.

That would streamline the workload far more than this.

And I don't think users would complain that btrfs is slow at reading
while it is correcting corrupted data.
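
E.g. something like this completely untested sketch (the
sync_repair_end_io helper name and the exact submit_bio_hook arguments
are made up here, just to show the idea):

static void sync_repair_end_io(struct bio *bio)
{
	complete(bio->bi_private);
}

	/* in btrfs_repair_one_sector(), after setting up repair_bio: */
	DECLARE_COMPLETION_ONSTACK(done);

	repair_bio->bi_private = &done;
	repair_bio->bi_end_io = sync_repair_end_io;

	status = submit_bio_hook(inode, repair_bio, failrec->this_mirror, 0);
	if (!status) {
		/* wait for the repair read to finish in this context */
		wait_for_completion_io(&done);
		status = repair_bio->bi_status;
		/* re-verify the csum right here, no second endio needed */
	}
	bio_put(repair_bio);

Then both the buffered and direct I/O callers could keep their existing
endio handlers untouched, and we wouldn't need to pass them in at all.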

Thanks,
Qu
  {
  	struct io_failure_record *failrec;
  	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
@@ -2660,9 +2660,9 @@ blk_status_t btrfs_repair_one_sector(struct inode *inode,
  	repair_bio = btrfs_bio_alloc(inode, 1, REQ_OP_READ);
  	repair_bbio = btrfs_bio(repair_bio);
  	repair_bbio->file_offset = start;
-	repair_bio->bi_end_io = failed_bio->bi_end_io;
  	repair_bio->bi_iter.bi_sector = failrec->logical >> 9;
-	repair_bio->bi_private = failed_bio->bi_private;
+	repair_bio->bi_private = bi_private;
+	repair_bio->bi_end_io = bi_end_io;

  	if (failed_bbio->csum) {
  		const u32 csum_size = fs_info->csum_size;
@@ -2758,7 +2758,8 @@ static blk_status_t submit_read_repair(struct inode *inode,
  		ret = btrfs_repair_one_sector(inode, failed_bio,
  				bio_offset + offset,
  				page, pgoff + offset, start + offset,
-				failed_mirror, btrfs_submit_data_bio);
+				failed_mirror, btrfs_submit_data_bio,
+				failed_bio->bi_private, failed_bio->bi_end_io);
  		if (!ret) {
  			/*
  			 * We have submitted the read repair, the page release
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 0239b26d5170a..54e54269cfdba 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -304,10 +304,10 @@ struct io_failure_record {
  };

  blk_status_t btrfs_repair_one_sector(struct inode *inode,
-			    struct bio *failed_bio, u32 bio_offset,
-			    struct page *page, unsigned int pgoff,
-			    u64 start, int failed_mirror,
-			    submit_bio_hook_t *submit_bio_hook);
+		struct bio *failed_bio, u32 bio_offset, struct page *page,
+		unsigned int pgoff, u64 start, int failed_mirror,
+		submit_bio_hook_t *submit_bio_hook,
+		void *bi_private, void (*bi_end_io)(struct bio *bio));

  #ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
  bool find_lock_delalloc_range(struct inode *inode,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 93b3ef48cea2f..e25d9d860c679 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -7799,7 +7799,9 @@ static blk_status_t btrfs_check_read_dio_bio(struct btrfs_dio_private *dip,
  				ret = btrfs_repair_one_sector(inode, &bbio->bio,
  						bio_offset, bvec.bv_page, pgoff,
  						start, bbio->mirror_num,
-						submit_dio_repair_bio);
+						submit_dio_repair_bio,
+						bbio->bio.bi_private,
+						bbio->bio.bi_end_io);
  				if (ret)
  					err = ret;
  			}



