Re: Regarding patch "block/blk-mq: Don't complete locally if capacities are different"


 





On 8/22/2024 7:54 PM, Bart Van Assche wrote:
> On 8/22/24 3:46 AM, MANISH PANDEY wrote:
>> On 8/21/2024 10:52 PM, Bart Van Assche wrote:
>>> What is the performance impact of the above change?
>>
>> No impact at all
>
> Is this a good summary of this email thread?
> * The first email in this thread reports an important performance
>   regression.
> * In your previous email there is a candidate fix for the performance
>   regression.
> * Above I read that the proposed fix has no performance impact at all
>   on any setup.
>
> If so, do you agree that this must be confusing everyone who is
> following this email thread?
>
> Thanks,
>
> Bart.

Hi Bart,

The performance impact due to the addition of the CPU capacity check (https://lore.kernel.org/all/20240223155749.2958009-3-qyousef@xxxxxxxxxxx/) ...[1]
was already described in the first email.

But let me summarize it again:

With patch [1], whenever the requesting and completing CPUs belong to clusters of different capacity, the block completion is forced onto the SOFTIRQ context of the requesting CPU. As a result, we can no longer take advantage of affining the IRQ to a higher-capacity cluster and completing the request on those CPUs, even though the LLC is shared between the clusters.

Below is the performance impact with the current patch [1]:

1. For MCQ-capable UFS hosts (paired with UFS 4.x), we are observing a ~20% random R/W performance drop.

2. For single-doorbell UFS hosts (paired with UFS 2.x/UFS 3.x), we are observing a ~7-10% random R/W performance drop.


Also, earlier in this thread, @Christian Loehle suggested relaxing the check to allow completion on CPUs of equal or greater capacity:
https://lore.kernel.org/all/3feb5226-7872-432b-9781-29903979d34a@xxxxxxx/

> From: Christian Loehle @ 2024-08-02  9:03 UTC (permalink / raw)
> [......]
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index e3c3c0c21b55..a4a2500c4ef6 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1164,7 +1164,8 @@ static inline bool blk_mq_complete_need_ipi(struct request *rq)
>  	if (cpu == rq->mq_ctx->cpu ||
>  	    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
>  	     cpus_share_cache(cpu, rq->mq_ctx->cpu) &&
> -	     cpus_equal_capacity(cpu, rq->mq_ctx->cpu)))
> +	     arch_scale_cpu_capacity(cpu) >=
> +	     arch_scale_cpu_capacity(rq->mq_ctx->cpu)))
>  		return false;
>
>  	/* don't try to IPI to an offline CPU */


There can be SoCs with different CPU cluster configurations, and for optimal IO load balancing, or to avoid contention between the submission and completion paths, we may need to complete IO requests submitted on large-capacity CPUs on the small-cluster CPUs. So the above proposed solution may not suffice for all use cases.

Hence, with the proposal below, we introduce a new queue flag, QUEUE_FLAG_CPU_CAPACITY. Users who benefit from the CPU capacity check [1] can keep using the fix as is, while users (including us) who want to bypass the capacity check [1] can set rq_affinity to 3 and recover the performance loss described in the first email. This gives users the flexibility to choose what is best for their system.

 	if (cpu == rq->mq_ctx->cpu ||
 	    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
 	     cpus_share_cache(cpu, rq->mq_ctx->cpu) &&
-	     cpus_equal_capacity(cpu, rq->mq_ctx->cpu)))
+	     (test_bit(QUEUE_FLAG_CPU_CAPACITY, &rq->q->queue_flags) ||
+	      cpus_equal_capacity(cpu, rq->mq_ctx->cpu))))
 		return false;


Regards
Manish





