On 4/11/23 05:40, Christoph Hellwig wrote:
> On Fri, Apr 07, 2023 at 04:58:14PM -0700, Bart Van Assche wrote:
>> +	if (hctx->queue->elevator) {
>> +		struct request *rq, *next;
>> +
>> +		list_for_each_entry_safe(rq, next, &tmp, queuelist)
>> +			blk_mq_requeue_request(rq, false);
>> +		blk_mq_kick_requeue_list(hctx->queue);
>> +	} else {
>> +		spin_lock(&hctx->lock);
>> +		list_splice_tail_init(&tmp, &hctx->dispatch);
>> +		spin_unlock(&hctx->lock);
>> +	}
>
> Given that this isn't exactly a fast path, is there any reason to
> not always go through the requeue_list?
Hi Christoph,

I will simplify this patch by letting blk_mq_hctx_notify_dead() always
send requests to the requeue list.
Thanks,
Bart.