[PATCH 1/3] block, bfq: do not idle if only one cgroup is activated

If only one group is activated, that is,
'bfqd->num_groups_with_pending_reqs == 1', then all queues with pending
requests belong to the same group, and there is no need to idle in order
to guarantee each group its share of the throughput.

Thus change the condition from '> 0' to '> 1' in
bfq_asymmetric_scenario(). Moreover, if 'num_groups_with_pending_reqs'
is greater than 1, the scenario is asymmetric regardless of
'varied_queue_weights' and 'multiple_classes_busy', so move the group
check to the beginning of the function and return early.

Test procedure:
run "fio -numjobs=1 -ioengine=psync -bs=4k -direct=1 -rw=randread..." multiple
times in the same cgroup(not root).
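
(For reference, a minimal sketch of such a run, assuming cgroup v2 is
mounted at /sys/fs/cgroup and bfq is the active scheduler of the test
disk; the cgroup name, device and extra fio parameters below are
illustrative, not the exact ones used for the numbers above.)

  # put the current shell into a non-root cgroup
  mkdir /sys/fs/cgroup/test
  echo $$ > /sys/fs/cgroup/test/cgroup.procs

  # make sure the test disk uses bfq (sdX is a placeholder)
  echo bfq > /sys/block/sdX/queue/scheduler

  # start several identical single-job random-read workloads in this cgroup
  for i in $(seq 1 4); do
          fio -name=job$i -numjobs=1 -ioengine=psync -bs=4k -direct=1 \
              -rw=randread -filename=/dev/sdX -runtime=30 -time_based &
  done
  wait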

Test result: total bandwidth (MiB/s)
| total jobs | before this patch | after this patch      |
| ---------- | ----------------- | --------------------- |
| 1          | 33.8              | 33.8                  |
| 2          | 33.8              | 65.4 (32.7 each job)  |
| 4          | 33.8              | 106.8 (26.7 each job) |
| 8          | 33.8              | 126.4 (15.8 each job) |

By the way, if I test with "fio -numjobs=1/2/4/8 ..." (a single fio
instance) instead, the results are the same with or without this patch.

Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
---
 block/bfq-iosched.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 727955918563..2768a4c1cc45 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -709,7 +709,9 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
  * much easier to maintain the needed state:
  * 1) all active queues have the same weight,
  * 2) all active queues belong to the same I/O-priority class,
- * 3) there are no active groups.
+ * 3) there is at most one active group.
+ * If the last condition does not hold, the scenario is already
+ * asymmetric, so the first two conditions do not need to be checked.
  * In particular, the last condition is always true if hierarchical
  * support or the cgroups interface are not enabled, thus no state
  * needs to be maintained in this case.
@@ -717,7 +719,16 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
 static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
 				   struct bfq_queue *bfqq)
 {
-	bool smallest_weight = bfqq &&
+	bool smallest_weight;
+	bool varied_queue_weights;
+	bool multiple_classes_busy;
+
+#ifdef CONFIG_BFQ_GROUP_IOSCHED
+	if (bfqd->num_groups_with_pending_reqs > 1)
+		return true;
+#endif
+
+	smallest_weight = bfqq &&
 		bfqq->weight_counter &&
 		bfqq->weight_counter ==
 		container_of(
@@ -729,21 +740,17 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
 	 * For queue weights to differ, queue_weights_tree must contain
 	 * at least two nodes.
 	 */
-	bool varied_queue_weights = !smallest_weight &&
+	varied_queue_weights = !smallest_weight &&
 		!RB_EMPTY_ROOT(&bfqd->queue_weights_tree.rb_root) &&
 		(bfqd->queue_weights_tree.rb_root.rb_node->rb_left ||
 		 bfqd->queue_weights_tree.rb_root.rb_node->rb_right);
 
-	bool multiple_classes_busy =
+	multiple_classes_busy =
 		(bfqd->busy_queues[0] && bfqd->busy_queues[1]) ||
 		(bfqd->busy_queues[0] && bfqd->busy_queues[2]) ||
 		(bfqd->busy_queues[1] && bfqd->busy_queues[2]);
 
-	return varied_queue_weights || multiple_classes_busy
-#ifdef CONFIG_BFQ_GROUP_IOSCHED
-	       || bfqd->num_groups_with_pending_reqs > 0
-#endif
-		;
+	return varied_queue_weights || multiple_classes_busy;
 }
 
 /*
-- 
2.31.1



