On 8/26/21 7:40 AM, Zhen Lei wrote:
> lock protection needs to be added only in
> dd_finish_request(), which is unlikely to cause significant performance
> side effects.

Not sure the above is correct. Every new atomic instruction has a
measurable performance overhead. But I guess in this case that overhead
is smaller than the time needed to sum 128 per-CPU variables.

> Tested on my 128-core board with two ssd disks.
> fio bs=4k rw=read iodepth=128 cpus_allowed=0-95 <others>
> Before:
> [183K/0/0 iops]
> [172K/0/0 iops]
>
> After:
> [258K/0/0 iops]
> [258K/0/0 iops]

Nice work!

> Fixes: fb926032b320 ("block/mq-deadline: Prioritize high-priority requests")

Shouldn't the Fixes: tag be used only for patches that modify
functionality? I'm not sure it is appropriate to use this tag for
performance improvements.

>  struct deadline_data {
> @@ -277,9 +278,9 @@ deadline_move_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
>  }
>  
>  /* Number of requests queued for a given priority level. */
> -static u32 dd_queued(struct deadline_data *dd, enum dd_prio prio)
> +static __always_inline u32 dd_queued(struct deadline_data *dd, enum dd_prio prio)
>  {
> -	return dd_sum(dd, inserted, prio) - dd_sum(dd, completed, prio);
> +	return dd->per_prio[prio].nr_queued;
>  }

Please leave out "__always_inline". Modern compilers are smart enough to
inline this function without using the "inline" keyword.

> @@ -711,6 +712,8 @@ static void dd_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>  
>  	prio = ioprio_class_to_prio[ioprio_class];
>  	dd_count(dd, inserted, prio);
> +	per_prio = &dd->per_prio[prio];
> +	per_prio->nr_queued++;
>  
>  	if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
>  		blk_mq_free_requests(&free);

I think the above is wrong - nr_queued should not be incremented if the
request is merged into another request. Please move the code that
increments nr_queued past the above if-statement.

Thanks,

Bart.
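P.S. Regarding the last comment, this is roughly what I have in mind. It
is an untested sketch, based only on the context visible in the hunk
above, and it assumes that dd_insert_request() returns right after
blk_mq_free_requests() when the merge succeeds:

	prio = ioprio_class_to_prio[ioprio_class];
	dd_count(dd, inserted, prio);

	/* A merged request is freed here and is never dispatched. */
	if (blk_mq_sched_try_insert_merge(q, rq, &free)) {
		blk_mq_free_requests(&free);
		return;
	}

	/* Only count requests that actually end up in the scheduler queues. */
	per_prio = &dd->per_prio[prio];
	per_prio->nr_queued++;

That way nr_queued only reflects requests that are still owned by the
I/O scheduler.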