Re: [PATCH 3/4] block/mq-deadline: fallback to per-cpu insertion buckets under contention

On 1/19/24 5:05 PM, Jens Axboe wrote:
> On 1/19/24 4:16 PM, Bart Van Assche wrote:
>> On 1/19/24 08:02, Jens Axboe wrote:
>>> If we attempt to insert a list of requests, but someone else is already
>>> running an insertion, then fall back to queueing that list internally and
>>> let the existing inserter finish the operation. The current inserter
>>> will either see and flush this list, or, if it ends before we're done
>>> doing our bucket insert, we'll flush it and insert ourselves.
>>>
>>> This reduces contention on the dd->lock, which protects any request
>>> insertion or dispatch, by providing a backup insertion point that will
>>> be flushed either immediately or by an existing inserter. As the
>>> alternative is to just keep spinning on the dd->lock, it's very easy
>>> to get into a situation where multiple processes are trying to do IO
>>> and all sit and spin on this lock.
>>
>> With this alternative patch I achieve 20% higher IOPS than with patch
>> 3/4 of this series for 1..4 CPU cores (null_blk + fio in an x86 VM):
> 
> Performance aside, I think this is a much better approach than mine.
> Haven't tested yet, but I think this, in place of my patch 3 and
> combined with the other patches, should further drastically cut down
> on the overhead. Can you send a "proper" patch and I'll just replace
> the one that I have?
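
For anyone reading the archive without the rest of the series handy, the
quoted commit message describes a trylock-with-fallback pattern. Below is a
minimal standalone sketch of that idea; the names (sketch_sched,
sketch_insert, insert_requests) are illustrative, not the series' actual
code, and the real patch's handoff differs in detail:

struct sketch_sched {
	spinlock_t lock;		/* plays the role of dd->lock */
	spinlock_t side_lock;		/* protects the fallback bucket */
	struct list_head side;		/* requests parked under contention */
};

/* insert_requests() stands in for the real sorted/merged insertion. */

static void sketch_insert(struct sketch_sched *s, struct list_head *list)
{
	if (!spin_trylock(&s->lock)) {
		/* Someone is already inserting: park our requests for them. */
		spin_lock(&s->side_lock);
		list_splice_tail_init(list, &s->side);
		spin_unlock(&s->side_lock);

		/*
		 * Re-check in case the inserter exited before seeing our
		 * parked requests; if so, become the inserter ourselves.
		 */
		if (!spin_trylock(&s->lock))
			return;
	} else {
		insert_requests(s, list);
	}

	/*
	 * As the inserter, drain whatever contended submitters parked.
	 * The final empty-check and the release of s->lock happen under
	 * side_lock, so anyone parking after that check will find the
	 * trylock above succeeding and take over.
	 */
	for (;;) {
		LIST_HEAD(parked);

		spin_lock(&s->side_lock);
		if (list_empty(&s->side)) {
			spin_unlock(&s->lock);
			spin_unlock(&s->side_lock);
			return;
		}
		list_splice_init(&s->side, &parked);
		spin_unlock(&s->side_lock);
		insert_requests(s, &parked);
	}
}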

I'd probably just fold in this incremental, as I think it cleans it up.

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 88991a791c56..977c512465ca 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -599,10 +599,21 @@ static struct request *dd_dispatch_prio_aged_requests(struct deadline_data *dd,
 	return NULL;
 }
 
+static void __dd_do_insert(struct request_queue *q, blk_insert_t flags,
+			   struct list_head *list, struct list_head *free)
+{
+	while (!list_empty(list)) {
+		struct request *rq;
+
+		rq = list_first_entry(list, struct request, queuelist);
+		list_del_init(&rq->queuelist);
+		dd_insert_request(q, rq, flags, free);
+	}
+}
+
 static void dd_do_insert(struct request_queue *q, struct list_head *free)
 {
 	struct deadline_data *dd = q->elevator->elevator_data;
-	struct request *rq;
 	LIST_HEAD(at_head);
 	LIST_HEAD(at_tail);
 
@@ -611,16 +622,8 @@ static void dd_do_insert(struct request_queue *q, struct list_head *free)
 	list_splice_init(&dd->at_tail, &at_tail);
 	spin_unlock(&dd->insert_lock);
 
-	while (!list_empty(&at_head)) {
-		rq = list_first_entry(&at_head, struct request, queuelist);
-		list_del_init(&rq->queuelist);
-		dd_insert_request(q, rq, BLK_MQ_INSERT_AT_HEAD, free);
-	}
-	while (!list_empty(&at_tail)) {
-		rq = list_first_entry(&at_tail, struct request, queuelist);
-		list_del_init(&rq->queuelist);
-		dd_insert_request(q, rq, 0, free);
-	}
+	__dd_do_insert(q, BLK_MQ_INSERT_AT_HEAD, &at_head, free);
+	__dd_do_insert(q, 0, &at_tail, free);
 }
 
 /*
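
For context when reading the incremental above: dd->insert_lock and the
dd->at_head/dd->at_tail buckets come from the alternative patch under
discussion, which isn't included in this mail. Paraphrased from that
discussion (so treat the details as a sketch rather than the exact patch),
the insert side only splices onto the buckets under the cheap insert_lock:

static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
			       struct list_head *list, blk_insert_t flags)
{
	struct request_queue *q = hctx->queue;
	struct deadline_data *dd = q->elevator->elevator_data;

	/* Short critical section: no sort or merge work done under it. */
	spin_lock(&dd->insert_lock);
	if (flags & BLK_MQ_INSERT_AT_HEAD)
		list_splice_init(list, &dd->at_head);
	else
		list_splice_init(list, &dd->at_tail);
	spin_unlock(&dd->insert_lock);
}

The expensive per-request work (sorting and merging) then happens in
dd_do_insert(), called from the dispatch path that already holds dd->lock,
so submitters never spin on dd->lock just to queue IO.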

-- 
Jens Axboe