RE: [PATCH RFC v7 02/12] blk-mq: rename blk_mq_update_tag_set_depth()

> > Kashyap, I've also attached an updated patch for the elevator_count
> > patch; if you agree, John can include it in the next version.
>
> Hannes - the patch looks good. The header does not include a problem
> statement. How about adding the below to the header?
>
> High CPU utilization in "native_queued_spin_lock_slowpath" due to lock
> contention is possible in the mq-deadline and bfq I/O schedulers when
> nr_hw_queues is more than one. This is because the kblockd workqueue
> can submit I/O from all online CPUs (through blk_mq_run_hw_queues)
> even though only one hctx has pending commands. The elevator "has_work"
> callback for the mq-deadline and bfq schedulers considers work pending
> whenever there are any I/Os on the request queue, and it does not
> account for the hctx context.
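
For anyone following the thread, the short-circuit that patch adds looks
roughly like the below on the mq-deadline side (a sketch only,
reconstructed from the patch; elevator_queued is the per-hctx counter it
introduces):

static bool dd_has_work(struct blk_mq_hw_ctx *hctx)
{
	struct deadline_data *dd = hctx->queue->elevator->elevator_data;

	/* Bail out early if nothing was queued through this hctx. */
	if (!atomic_read(&hctx->elevator_queued))
		return false;

	return !list_empty_careful(&dd->dispatch) ||
		!list_empty_careful(&dd->fifo_list[0]) ||
		!list_empty_careful(&dd->fifo_list[1]);
}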

Hannes/John - we need one more correction on top of the patch below:

https://github.com/hisilicon/kernel-dev/commit/ff631eb80aa0449eaeb78a282fd7eff2a9e42f77

I noticed that the elevator_queued count goes negative, mainly because
there are cases where the I/O was submitted from the dispatch queue (not
the scheduler queue) and the request still has the "RQF_ELVPRIV" flag
set. In such cases "dd_finish_request" is called without a matching
"dd_insert_request". I think it is better to decrement the counter once
the request is taken off the dispatch queue. (Ming proposed using the
dispatch path for decrementing the counter, but I did not account for
that, assuming RQF_ELVPRIV would be set only if the I/O was submitted
from the scheduler queue.)
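
To make the imbalance concrete, here is roughly where the increment
lives after that patch (an abbreviated sketch of dd_insert_requests;
only the atomic_inc is the patch's addition):

static void dd_insert_requests(struct blk_mq_hw_ctx *hctx,
			       struct list_head *list, bool at_head)
{
	struct request_queue *q = hctx->queue;
	struct deadline_data *dd = q->elevator->elevator_data;

	spin_lock(&dd->lock);
	while (!list_empty(list)) {
		struct request *rq;

		rq = list_first_entry(list, struct request, queuelist);
		list_del_init(&rq->queuelist);
		dd_insert_request(hctx, rq, at_head);
		/*
		 * Incremented only for scheduler-queue insertions.
		 * Requests inserted directly into the hctx dispatch
		 * list skip this, yet can still reach
		 * dd_finish_request() with RQF_ELVPRIV set - hence
		 * the underflow when the decrement lives there.
		 */
		atomic_inc(&hctx->elevator_queued);
	}
	spin_unlock(&dd->lock);
}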

Below is the additional change. Can you merge it?

diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 9d75374..bc413dd 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -385,6 +385,8 @@ static struct request *dd_dispatch_request(struct blk_mq_hw_ctx *hctx)

        spin_lock(&dd->lock);
        rq = __dd_dispatch_request(dd);
+       if (rq)
+               atomic_dec(&rq->mq_hctx->elevator_queued);
        spin_unlock(&dd->lock);

        return rq;
@@ -574,7 +576,6 @@ static void dd_finish_request(struct request *rq)
                        blk_mq_sched_mark_restart_hctx(rq->mq_hctx);
                spin_unlock_irqrestore(&dd->zone_lock, flags);
        }
-       atomic_dec(&rq->mq_hctx->elevator_queued);
 }

 static bool dd_has_work(struct blk_mq_hw_ctx *hctx)
--
2.9.5
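
With the change above, the accounting is symmetric: the counter is
incremented when a request enters the scheduler queue and decremented
when __dd_dispatch_request() hands it back, both under dd->lock, so the
two paths pair one-to-one and requests that bypass the scheduler never
touch the counter, regardless of RQF_ELVPRIV.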

Kashyap


