[PATCH] block: throttle: charge io re-submission for iops limit

Commit 111be8839817 ("block-throttle: avoid double charge") marks a bio as
BIO_THROTTLED unconditionally once __blk_throtl_bio() has been called on it,
so the bio never enters __blk_throtl_bio() again. This avoids double charging
in case of bio splitting. That is correct for the read/write throughput
limits, but not for the IOPS limits, because the block layer does its io
accounting per split bio.

Chunguang Xu observed this issue and fixed it in commit 4f1e9630afe6
("blk-throtl: optimize IOPS throttle for large IO scenarios"). However, that
patch only covers the bio splitting done in __blk_queue_split(); there are
other kinds of bio splitting, such as bio_split() followed by
submit_bio_noacct(), and similar re-submission paths.
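
For illustration only (this pattern is not part of the patch), a driver-side
split that never goes through __blk_queue_split() typically looks like the
sketch below; 'bio', 'sectors' and 'my_bioset' are placeholder names, while
bio_split(), bio_chain() and submit_bio_noacct() are the real block layer
helpers:

	struct bio *split;

	/* carve the first 'sectors' sectors off into a new bio */
	split = bio_split(bio, sectors, GFP_NOIO, &my_bioset);
	bio_chain(split, bio);

	/*
	 * Re-queue the remainder.  It still carries BIO_THROTTLED from its
	 * first pass through blk_throtl_bio(), so before this patch the
	 * re-submission was never charged against the iops limit.
	 */
	submit_bio_noacct(bio);

The split part is then handled by the driver, while the remainder goes
through submission (and throttling) again.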

This patch fixes the issue in a generic way by always charging the bio
against the iops limit in blk_throtl_bio() when BIO_THROTTLED is already set.
This is reasonable: a re-submitted or fast-cloned bio is charged only when it
is submitted to the same disk/queue, and BIO_THROTTLED is cleared whenever
bio->bi_bdev changes.

Reported-by: lining2020x@xxxxxxx
Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: Chunguang Xu <brookxu@xxxxxxxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
 block/blk-merge.c    | 2 --
 block/blk-throttle.c | 2 +-
 block/blk-throttle.h | 8 +++++---
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 4de34a332c9f..f5255991b773 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -368,8 +368,6 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
 		trace_block_split(split, (*bio)->bi_iter.bi_sector);
 		submit_bio_noacct(*bio);
 		*bio = split;
-
-		blk_throtl_charge_bio_split(*bio);
 	}
 }
 
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 7c462c006b26..ea532c178385 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2043,7 +2043,7 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
 }
 #endif
 
-void blk_throtl_charge_bio_split(struct bio *bio)
+void blk_throtl_charge_for_iops_limit(struct bio *bio)
 {
 	struct blkcg_gq *blkg = bio->bi_blkg;
 	struct throtl_grp *parent = blkg_to_tg(blkg);
diff --git a/block/blk-throttle.h b/block/blk-throttle.h
index 175f03abd9e4..954b9cac19b7 100644
--- a/block/blk-throttle.h
+++ b/block/blk-throttle.h
@@ -158,20 +158,22 @@ static inline struct throtl_grp *blkg_to_tg(struct blkcg_gq *blkg)
 static inline int blk_throtl_init(struct request_queue *q) { return 0; }
 static inline void blk_throtl_exit(struct request_queue *q) { }
 static inline void blk_throtl_register_queue(struct request_queue *q) { }
-static inline void blk_throtl_charge_bio_split(struct bio *bio) { }
+static inline void blk_throtl_charge_for_iops_limit(struct bio *bio) { }
 static inline bool blk_throtl_bio(struct bio *bio) { return false; }
 #else /* CONFIG_BLK_DEV_THROTTLING */
 int blk_throtl_init(struct request_queue *q);
 void blk_throtl_exit(struct request_queue *q);
 void blk_throtl_register_queue(struct request_queue *q);
-void blk_throtl_charge_bio_split(struct bio *bio);
+void blk_throtl_charge_for_iops_limit(struct bio *bio);
 bool __blk_throtl_bio(struct bio *bio);
 static inline bool blk_throtl_bio(struct bio *bio)
 {
 	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
 
-	if (bio_flagged(bio, BIO_THROTTLED))
+	if (bio_flagged(bio, BIO_THROTTLED)) {
+		blk_throtl_charge_for_iops_limit(bio);
 		return false;
+	}
 	if (!tg->has_rules[bio_data_dir(bio)])
 		return false;
 
-- 
2.31.1
