Re: Regarding patch "block/blk-mq: Don't complete locally if capacities are different"

Hi all,

We agree with the point that rq_affinity can vary from target to target.

But please also consider the use cases below:

1. Storage devices with a single hardware queue (like UFS 2.x / 3.x)

On an HMP system with a shared LLC, not all CPUs are active and online. For a large amount (say ~1 GB) of random IO, if the requesting CPU is in the smaller CPU group, then because of capacity-based grouping the large-cluster CPUs stay mostly unused/idle, since the completion now also happens on a CPU of the same capacity. So only the small/mid capacity CPUs end up submitting and completing the requests, while we could have completed them on the large-capacity CPUs and made better use of their compute power.

But to move the IO work from the S/M cluster CPUs to the L cluster, the scheduler would have to wait until a load threshold is hit, and by that time a performance application has already spent part of its runtime and reports lower overall results.
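
For reference, the capacity check that causes this grouping boils down to roughly the following (a paraphrase only; the real helper, cpus_equal_capacity(), lives in kernel/sched/core.c and additionally short-circuits on capacity-symmetric systems):

	#include <linux/sched/topology.h>

	/*
	 * Paraphrase of the check added by the patch: two CPUs are treated as
	 * interchangeable completion targets only if their arch-reported
	 * capacities match, which on an HMP SoC groups completions per cluster.
	 */
	static bool example_cpus_equal_capacity(int this_cpu, int that_cpu)
	{
		return arch_scale_cpu_capacity(this_cpu) ==
		       arch_scale_cpu_capacity(that_cpu);
	}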


On 08/02/24 10:03, Christian Loehle wrote:
> So I'm assuming you're seeing something like the following:
> Some CPU(s) (call them S) are submitting IO, hardirq triggers on
> S.
> Before the patch the completion softirq could run on a !S CPU,
> now it runs on S. Am I then correct in assuming your workload
> is CPU-bound on S? Would you share some details about the
> workload, too?
>
> What's the capacity of CPU(s) S then?


Yes, for a few SoCs that is exactly the case.
Take an SoC with 3 clusters (S/M/L): if the M CPU(s) are submitting IO requests (a large data transfer), then since the LLC is shared we can let the L CPU(s) complete the requests via softirq/hardirq. That avoids contention between the submission and completion paths.


Now consider an SoC with 2 clusters (4 CPUs of small capacity / 2 CPUs of mid capacity). If the M CPUs are used for both IO submission and completion, they end up heavily loaded and performance is impacted. Without the new patch we could have moved some of the IO completions to the S cluster CPUs.

> If no matching is required, it makes sense to set rq_affinity to 0.

> I don't get why irq_affinity=1 is compatible with this case? Isn't
> this custom setup is a fully managed system by you and means you want
> rq_affinity=0? What do you lose if you move to rq_affinity=0?

Actually rq_affinity=0 would help for a few SoCs that have MCQ, like UFS 4.x, but even that is not a generic solution for us, since it does not account for an SoC that does not share the LLC and would therefore see a significant performance issue. Also, since the change has been picked up in all the kernel branches, it would be very difficult to experiment on older SoCs and find the best solution for each target.

It also won't work for many SoCs, like the 2-cluster example above: with
rq_affinity=0 the request is completed in hardirq context, which may not suit all SoCs.
Also, not all SoCs are Arm based and share an LLC.



2. Storage devices with MCQ (like UFS 4.x / NVMe) use ESI/MSI interrupts, so we have the opportunity to bind the ESI IRQ associated with a CQ to specific CPUs.


On 08/05/24 10:17, Bart Van Assche wrote:
> On 8/4/24 7:07 PM, Qais Yousef wrote:
> > irqbalancers usually move the interrupts, and I'm not sure we can
> > make an assumption about the reason an interrupt is triggering on
> > different capacity CPU.
> User space software can't modify the affinity of managed interrupts.
> From include/linux/irq.h:

> > > True. But this is special case and was introduced for isolated
> > > CPUs. I don't think drivers can request this themselves

There are drivers which can manage the CPU affinity of their IRQs with
irq_set_affinity_hint() based on the use case.
On these SoCs the LLC is shared between the M and L clusters. So if IO is submitted from M-capacity CPU(s) and the request is completed on L-capacity CPU(s), the capacities differ, but because the LLC is shared the completion can still run on the L cluster without an IPI. With the new change we may lose this advantage of the shared LLC.
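
For example, such a driver could steer a completion-queue IRQ towards the L cluster along these lines (a sketch only: example_bind_cq_irq() is an illustrative name, and how the L-cluster cpumask is built is platform specific):

	#include <linux/cpumask.h>
	#include <linux/interrupt.h>

	/*
	 * Illustrative helper (not from an existing driver): hint that a
	 * completion-queue IRQ should be serviced by a given set of CPUs,
	 * e.g. the L cluster that shares the LLC with the submitting CPUs.
	 */
	static int example_bind_cq_irq(unsigned int irq, const struct cpumask *l_cpus)
	{
		/* A hint, not a hard pin: the IRQ core may still move it. */
		return irq_set_affinity_hint(irq, l_cpus);
	}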

Proposed solution:
We can keep the change backward compatible by gating it behind a new queue flag (called QUEUE_FLAG_CPU_CAPACITY below).

How about introducing a new rq_affinity value (maybe rq_affinity = 3) that sets this flag, so that the cpus_equal_capacity() check is only applied when the flag is set?


	/* In blk_mq_complete_need_ipi(): enforce the capacity match only when
	 * the new flag is set (e.g. via rq_affinity=3); otherwise keep the
	 * pre-patch behaviour of completing on any CPU sharing the cache. */
	if (cpu == rq->mq_ctx->cpu ||
	    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
	     cpus_share_cache(cpu, rq->mq_ctx->cpu) &&
	     (!test_bit(QUEUE_FLAG_CPU_CAPACITY, &rq->q->queue_flags) ||
	      cpus_equal_capacity(cpu, rq->mq_ctx->cpu))))
		return false;
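
On the sysfs side this could look roughly as follows (a sketch only; the exact shape of queue_rq_affinity_store() differs between kernel versions, and QUEUE_FLAG_CPU_CAPACITY is the proposed new flag, not an existing one):

	/* In the rq_affinity store path of block/blk-sysfs.c: rq_affinity=3
	 * behaves like 1 but additionally opts in to the same-capacity rule. */
	if (val == 3) {
		blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, q);
		blk_queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q);
		blk_queue_flag_set(QUEUE_FLAG_CPU_CAPACITY, q);
	} else if (val == 2) {
		blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, q);
		blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, q);
		blk_queue_flag_clear(QUEUE_FLAG_CPU_CAPACITY, q);
	} else if (val == 1) {
		blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, q);
		blk_queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q);
		blk_queue_flag_clear(QUEUE_FLAG_CPU_CAPACITY, q);
	} else { /* val == 0 */
		blk_queue_flag_clear(QUEUE_FLAG_SAME_COMP, q);
		blk_queue_flag_clear(QUEUE_FLAG_SAME_FORCE, q);
		blk_queue_flag_clear(QUEUE_FLAG_CPU_CAPACITY, q);
	}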



Could you please consider raising a similar change, if this seems fine to everyone?


Regards
Manish



