[PATCH V3] block: optimize for small block size IO

__blk_queue_split() can be a bit heavy for small block size (such as
512B or 4KB) IO, so introduce a per-bio flag that records whether the
bio spans multiple pages, and only try to split a bio when the
multi-page flag is set.
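
For illustration, the rule amounts to roughly the following (a
simplified sketch that mirrors the __bio_add_page() hunk below; the
helper name is made up for this example):

	/*
	 * Sketch only: a bio counts as multi-page once it carries more
	 * than one bvec, or a single bvec longer than one page. The
	 * merge path (__bio_try_merge_page) additionally sets the flag
	 * when a merged segment crosses a page boundary.
	 */
	static bool bio_spans_multiple_pages(struct bio *bio)
	{
		return bio->bi_vcnt >= 2 ||
		       bio->bi_io_vec[0].bv_len > PAGE_SIZE;
	}

Single-page bios then take the early-return path in __blk_queue_split()
with *nr_segs set to 1, skipping blk_bio_segment_split() entirely.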

A ~3% - 5% IOPS improvement can be observed in an io_uring test over
null_blk (MQ); the io_uring test code is from fio/t/io_uring.c.

bch_bio_map() should be the only remaining builder that doesn't use
bio_add_page(), so explicitly mark bios built via bch_bio_map() as
MULTI_PAGE.
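
For context, bch_bio_map() fills the bvec table by hand, along these
lines (a condensed, hypothetical sketch, not the real function):

	/*
	 * Hypothetical sketch of a bch_bio_map()-style mapping: bvecs
	 * are filled directly, so the BIO_MULTI_PAGE bookkeeping in
	 * __bio_add_page() never runs and the flag has to be set here.
	 */
	static void map_kernel_buf(struct bio *bio, void *base, size_t size)
	{
		struct bio_vec *bv = bio->bi_io_vec;

		bio->bi_iter.bi_size = size;
		for (; size; bv++, bio->bi_vcnt++) {
			bv->bv_page   = virt_to_page(base);
			bv->bv_offset = offset_in_page(base);
			bv->bv_len    = min_t(size_t,
					      PAGE_SIZE - bv->bv_offset,
					      size);
			base += bv->bv_len;
			size -= bv->bv_len;
		}
		bio_set_flag(bio, BIO_MULTI_PAGE);
	}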

RAID5 builds bios in a similar way, but there the bio really is a
single-page bio, so it doesn't need to be handled.

Cc: Coly Li <colyli@xxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Cc: Keith Busch <kbusch@xxxxxxxxxx>
Cc: linux-bcache@xxxxxxxxxxxxxxx
Acked-by: Coly Li <colyli@xxxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
V3:
	- simplify check in __bio_add_page() as suggested by Christoph
V2:
	- share bit flag with passthrough IO
	- deal with a single bio_add_page() call adding more than one page

 block/bio.c               | 9 +++++++++
 block/blk-merge.c         | 4 ++++
 block/bounce.c            | 3 +++
 drivers/md/bcache/util.c  | 2 ++
 include/linux/blk_types.h | 3 +++
 5 files changed, 21 insertions(+)

diff --git a/block/bio.c b/block/bio.c
index 8f0ed6228fc5..eeb81679689b 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -583,6 +583,8 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
 	bio_set_flag(bio, BIO_CLONED);
 	if (bio_flagged(bio_src, BIO_THROTTLED))
 		bio_set_flag(bio, BIO_THROTTLED);
+	if (bio_flagged(bio_src, BIO_MULTI_PAGE))
+		bio_set_flag(bio, BIO_MULTI_PAGE);
 	bio->bi_opf = bio_src->bi_opf;
 	bio->bi_ioprio = bio_src->bi_ioprio;
 	bio->bi_write_hint = bio_src->bi_write_hint;
@@ -757,6 +759,9 @@ bool __bio_try_merge_page(struct bio *bio, struct page *page,
 		if (page_is_mergeable(bv, page, len, off, same_page)) {
 			bv->bv_len += len;
 			bio->bi_iter.bi_size += len;
+
+			if (!*same_page)
+				bio_set_flag(bio, BIO_MULTI_PAGE);
 			return true;
 		}
 	}
@@ -789,6 +794,10 @@ void __bio_add_page(struct bio *bio, struct page *page,
 	bio->bi_iter.bi_size += len;
 	bio->bi_vcnt++;
 
+	if (!bio_flagged(bio, BIO_MULTI_PAGE) && (bio->bi_vcnt >= 2 ||
+				bv->bv_len > PAGE_SIZE))
+		bio_set_flag(bio, BIO_MULTI_PAGE);
+
 	if (!bio_flagged(bio, BIO_WORKINGSET) && unlikely(PageWorkingset(page)))
 		bio_set_flag(bio, BIO_WORKINGSET);
 }
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 48e6725b32ee..737bbec9e153 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -309,6 +309,10 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
 				nr_segs);
 		break;
 	default:
+		if (!bio_flagged(*bio, BIO_MULTI_PAGE)) {
+			*nr_segs = 1;
+			return;
+		}
 		split = blk_bio_segment_split(q, *bio, &q->bio_split, nr_segs);
 		break;
 	}
diff --git a/block/bounce.c b/block/bounce.c
index f8ed677a1bf7..4b18a2accccc 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -253,6 +253,9 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
 	bio->bi_iter.bi_sector	= bio_src->bi_iter.bi_sector;
 	bio->bi_iter.bi_size	= bio_src->bi_iter.bi_size;
 
+	if (bio_flagged(bio_src, BIO_MULTI_PAGE))
+		bio_set_flag(bio, BIO_MULTI_PAGE);
+
 	switch (bio_op(bio)) {
 	case REQ_OP_DISCARD:
 	case REQ_OP_SECURE_ERASE:
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index 62fb917f7a4f..71f5cbb6fdd6 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -253,6 +253,8 @@ start:		bv->bv_len	= min_t(size_t, PAGE_SIZE - bv->bv_offset,
 
 		size -= bv->bv_len;
 	}
+
+	bio_set_flag(bio, BIO_MULTI_PAGE);
 }
 
 /**
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index d688b96d1d63..10b9a3539716 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -220,6 +220,9 @@ enum {
 				 * throttling rules. Don't do it again. */
 	BIO_TRACE_COMPLETION,	/* bio_endio() should trace the final completion
 				 * of this bio. */
+	BIO_MULTI_PAGE = BIO_USER_MAPPED,
+				/* used to optimize small BS IO from FS, so
+				 * share the bit flag with passthrough IO */
 	BIO_QUEUE_ENTERED,	/* can use blk_queue_enter_live() */
 	BIO_TRACKED,		/* set if bio goes through the rq_qos path */
 	BIO_FLAG_LAST
-- 
2.20.1