Re: Regarding patch "block/blk-mq: Don't complete locally if capacities are different"

On 08/02/24 10:03, Christian Loehle wrote:
> On 7/31/24 14:46, MANISH PANDEY wrote:
> > Hi Qais Yousef,
> 
> Qais already asked the important question, still some from my end.
> 
> > Recently we observed that the below patch has been merged:
> > https://lore.kernel.org/all/20240223155749.2958009-3-qyousef@xxxxxxxxxxx
> > 
> > This patch is causing a performance degradation of ~20% in random IO, along with a significant drop in sequential IO performance. So we would like to revert this patch, as it impacts MCQ UFS devices heavily, though non-MCQ devices are also affected.
> 
> I'm curious about the sequential IO part in particular, what's the blocksize and throughput?
> If blocksize is large enough the completion and submission parts are hopefully not as critical.
> 
> > 
> > We have several concerns with the patch
> > 1. This patch takes away the ability of device drivers to affine completions to the best possible CPUs, and limits the driver to the same group of CPUs.
> > 
> > 2. Why can't the device driver use irq affinity to choose the desired CPUs to complete the IO request, instead of this being forced from the block layer?
> > 
> > 3. CPUs are already grouped based on LLC, so why is a new categorization required?
> 
> As Qais hinted at, because of systems that share LLC on all CPUs but are HMP.
> 
> > 
> >> big performance impact if the IO request
> >> was done from a CPU with higher capacity but the interrupt is serviced
> >> on a lower capacity CPU.
> > 
> > This patch doesn't consider the issue of contention in the submission and completion paths. Also, what if we want to complete a request submitted on a smaller-capacity CPU on a higher-capacity CPU?
> > Shouldn't the device driver take care of this and allow vendors to use the best possible combination they want to use?
> > Does it consider MCQ devices and different SQ<->CQ mappings?
> 
> So I'm assuming you're seeing something like the following:
> Some CPU(s) (call them S) are submitting IO, hardirq triggers on
> S.
> Before the patch the completion softirq could run on a !S CPU,
> now it runs on S. Am I then correct in assuming your workload
> is CPU-bound on S? Would you share some details about the
> workload, too?
> 
> What's the capacity of CPU(s) S then?
> IOW does this help?
> 
> -->8--
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index e3c3c0c21b55..a4a2500c4ef6 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1164,7 +1164,7 @@ static inline bool blk_mq_complete_need_ipi(struct request *rq)
>         if (cpu == rq->mq_ctx->cpu ||
>             (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
>              cpus_share_cache(cpu, rq->mq_ctx->cpu) &&
> -            cpus_equal_capacity(cpu, rq->mq_ctx->cpu)))
> +            arch_scale_cpu_capacity(cpu) >= arch_scale_cpu_capacity(rq->mq_ctx->cpu)))
>                 return false;
>  
>         /* don't try to IPI to an offline CPU */

FWIW, that's what I had in the first version of the patch, but I moved away
from it. I think this would constitute a policy.

Keep in mind that a driver setting affinity the way Manish describes is not
representative of in-kernel drivers; I don't anticipate an in-kernel driver
hardcoding affinities, otherwise it won't be portable. irqbalancers usually
move the interrupts, and I'm not sure we can make an assumption about the
reason an interrupt is triggering on a different-capacity CPU.
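
(For context, the kind of vendor affinity steering being described would look
roughly like the sketch below. This is only an illustration: the function
name, the irq handling and the assumption that CPUs 4-7 are the big cluster
are all made up, not taken from any real driver.)

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/interrupt.h>

/*
 * Hypothetical sketch: a driver hinting that a completion interrupt should
 * be serviced by the big cluster. Nothing here is from the UFS driver.
 */
static int example_pin_completion_irq(unsigned int irq)
{
	cpumask_var_t mask;
	int cpu, ret;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	/* Assume CPUs 4-7 are the big cluster on this particular board. */
	for (cpu = 4; cpu <= 7; cpu++)
		cpumask_set_cpu(cpu, mask);

	/*
	 * Record the hint and apply the affinity; a userspace irqbalancer is
	 * still free to move the interrupt later, which is exactly why the
	 * block layer can't rely on where the irq happens to land.
	 */
	ret = irq_set_affinity_hint(irq, mask);

	free_cpumask_var(mask);
	return ret;
}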

My understanding of rq_affinity=1 is that it should match the perf of the
requester. Given that a characteristic of HMP systems is that power is of
equal importance to perf (I think this has now become true for all systems,
by the way), saying that the match in one direction is better than the other
amounts to forcing a perf-first policy, which I don't think is a good thing
to enforce. We don't have enough info to decide at this level, and our users
care about both.

If no matching is required, it makes sense to set rq_affinity to 0. When
matching is enabled, we need to rely on per-task iowait boost to help the
requester run on a bigger CPU, and the completion will naturally follow when
rq_affinity=1. If the requester doesn't need the big perf, but the irq
triggered on a bigger core, I struggle to understand why it is good for the
completion to run on the bigger core without the requester also being on a
similarly big core to truly maximize perf.
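
(For reference, the rq_affinity sysfs values map onto the queue flags tested
in blk_mq_complete_need_ipi() roughly as sketched below. This is a paraphrase
from memory rather than a copy of blk-sysfs.c, and the helper name is made up
for illustration.)

#include <linux/blkdev.h>

/*
 * Rough paraphrase of the rq_affinity semantics:
 *   0 - no steering, complete wherever the irq/softirq happens to run
 *   1 - complete on a CPU "matching" the submitter (same LLC, and with the
 *       patch in question, same capacity): QUEUE_FLAG_SAME_COMP
 *   2 - force completion onto the exact submitting CPU:
 *       QUEUE_FLAG_SAME_COMP + QUEUE_FLAG_SAME_FORCE
 */
static void example_set_rq_affinity(struct request_queue *q, unsigned long val)
{
	if (val == 2) {
		blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, q);
		blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, q);
	} else if (val == 1) {
		blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, q);
		blk_queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q);
	} else {
		blk_queue_flag_clear(QUEUE_FLAG_SAME_COMP, q);
		blk_queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q);
	}
}

From userspace that is just "echo 0 > /sys/block/<disk>/queue/rq_affinity" if
you want to opt out of the matching entirely.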

By the way, if we assume the LLC isn't shared on an HMP system, then even
with my patch reverted the behavior was to move the completion from the
bigger core back to the little core.

So two things to observe:

1. The patch keeps the behavior that existed in the past when the LLC truly
   is not shared on such systems.
2. The LLC in this case is most likely L2, and the usual trend is that the
   bigger the core, the bigger the L2. So the LLC characteristics were
   already different and could have impacted performance; no one seems to
   have cared about that in the past. I think capacity now captures this
   notion implicitly.



