Hi,
On 2025/02/21 10:55, Ming Lei wrote:
Hi Yukuai,
On Thu, Feb 20, 2025 at 09:38:12PM +0800, Yu Kuai wrote:
Hi,
On 2025/02/20 19:17, Ming Lei wrote:
When the current bio needs to be throttled because of the bps limit, the
wait time for the extra bytes may be less than 1 jiffy, so
tg_within_bps_limit() adds one extra jiffy.
However, once the roundup time is taken into account, that extra jiffy
may no longer be necessary, and the bps limit becomes inaccurate. This
causes a blktests throtl/001 failure in case of CONFIG_HZ_100=y.
Fix it by not adding the extra jiffy when the roundup time already
covers it.
Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: Yu Kuai <yukuai3@xxxxxxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
block/blk-throttle.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 8d149aff9fd0..8348972c517b 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -729,14 +729,14 @@ static unsigned long tg_within_bps_limit(struct throtl_grp *tg, struct bio *bio,
extra_bytes = tg->bytes_disp[rw] + bio_size - bytes_allowed;
jiffy_wait = div64_u64(extra_bytes * HZ, bps_limit);
- if (!jiffy_wait)
- jiffy_wait = 1;
-
/*
* This wait time is without taking into consideration the rounding
* up we did. Add that time also.
*/
jiffy_wait = jiffy_wait + (jiffy_elapsed_rnd - jiffy_elapsed);
+ if (!jiffy_wait)
+ jiffy_wait = 1;
Just wondering: is waiting (0, 1) jiffies too little better than waiting
(0, 1) jiffies too much?
How about the following changes?
Thanks,
Kuai
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 8d149aff9fd0..f8430baf3544 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -703,6 +703,7 @@ static unsigned long tg_within_bps_limit(struct throtl_grp *tg, struct bio *bio,
u64 bps_limit)
{
bool rw = bio_data_dir(bio);
+ long long carryover_bytes;
long long bytes_allowed;
u64 extra_bytes;
unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
@@ -727,10 +728,11 @@ static unsigned long tg_within_bps_limit(struct throtl_grp *tg, struct bio *bio,
/* Calc approx time to dispatch */
extra_bytes = tg->bytes_disp[rw] + bio_size - bytes_allowed;
- jiffy_wait = div64_u64(extra_bytes * HZ, bps_limit);
+	jiffy_wait = div64_u64_rem(extra_bytes * HZ, bps_limit,
+				   carryover_bytes);

Hi, thanks for the test.

This is a mistake, carryover_bytes is much bigger than expected :(
That's why the result is much worse. My bad. It should be
&carryover_bytes.
+ /* carryover_bytes is dispatched without waiting */
if (!jiffy_wait)
The if condition should be removed.
- jiffy_wait = 1;
+ tg->carryover_bytes[rw] -= carryover_bytes;
/*
* This wait time is without taking into consideration the rounding
+
return jiffy_wait;
Looks like the result is worse with your patch:
throtl/001 (basic functionality) [failed]
runtime 6.488s ... 28.862s
--- tests/throtl/001.out 2024-11-21 09:20:47.514353642 +0000
+++ /root/git/blktests/results/nodev/throtl/001.out.bad 2025-02-21 02:51:36.723754146 +0000
@@ -1,6 +1,6 @@
Running throtl/001
+13
1
-1
-1
+13
1
...
(Run 'diff -u tests/throtl/001.out /root/git/blktests/results/nodev/throtl/001.out.bad' to see the entire diff)
And I realize now that throtl_start_new_slice() will just clear
carryover_bytes. I tested in my VM, and with the following changes
throtl/001 never fails with CONFIG_HZ_100.

Thanks,
Kuai
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 8d149aff9fd0..4fc005af82e0 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -703,6 +703,7 @@ static unsigned long tg_within_bps_limit(struct throtl_grp *tg, struct bio *bio,
u64 bps_limit)
{
bool rw = bio_data_dir(bio);
+ long long carryover_bytes;
long long bytes_allowed;
u64 extra_bytes;
unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;
@@ -727,10 +728,8 @@ static unsigned long tg_within_bps_limit(struct throtl_grp *tg, struct bio *bio,
/* Calc approx time to dispatch */
extra_bytes = tg->bytes_disp[rw] + bio_size - bytes_allowed;
- jiffy_wait = div64_u64(extra_bytes * HZ, bps_limit);
-
- if (!jiffy_wait)
- jiffy_wait = 1;
+	jiffy_wait = div64_u64_rem(extra_bytes * HZ, bps_limit,
+				   &carryover_bytes);
+ tg->carryover_bytes[rw] -= div64_u64(carryover_bytes, HZ);
/*
* This wait time is without taking into consideration the rounding
@@ -775,10 +774,14 @@ static bool tg_may_dispatch(struct throtl_grp *tg, struct bio *bio,
* long since now. New slice is started only for empty throttle group.
* If there is queued bio, that means there should be an active
* slice and it should be extended instead.
+ *
+	 * If throtl_trim_slice() doesn't clear carryover_bytes, the debt
+	 * is still not paid; don't start a new slice in this case.
*/
- if (throtl_slice_used(tg, rw) && !(tg->service_queue.nr_queued[rw]))
+	if (throtl_slice_used(tg, rw) && !(tg->service_queue.nr_queued[rw]) &&
+	    tg->carryover_bytes[rw] >= 0) {
throtl_start_new_slice(tg, rw, true);
- else {
+ } else {
if (time_before(tg->slice_end[rw],
jiffies + tg->td->throtl_slice))
throtl_extend_slice(tg, rw,
thanks,
Ming