On 8/5/2024 10:47 PM, Bart Van Assche wrote:
On 8/4/24 7:07 PM, Qais Yousef wrote:
irqbalancers usually move interrupts, and I'm not sure we can
make assumptions about why an interrupt is triggering on a
different-capacity CPU.
User space software can't modify the affinity of managed interrupts.
From include/linux/irq.h:
* IRQD_AFFINITY_MANAGED - Affinity is auto-managed by the kernel
That flag is tested by the procfs code that implements the smp_affinity
procfs attribute:
static ssize_t write_irq_affinity(int type, struct file *file,
		const char __user *buffer, size_t count, loff_t *pos)
{
	[ ... ]
	if (!irq_can_set_affinity_usr(irq) || no_irq_affinity)
		return -EIO;
	[ ... ]
}
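As a minimal sketch of what that -EIO looks like from user space, the probe below attempts to write an affinity mask and reports the outcome. The IRQ number is a placeholder, not one from this thread; the write also fails without root or when the IRQ does not exist, so the result only distinguishes "writable" from "not writable".

```shell
# Probe whether an IRQ's affinity is user-writable.
# IRQ number 123 is a placeholder, not from this thread.
irq=123
if echo f > /proc/irq/$irq/smp_affinity 2>/dev/null; then
	result=writable
else
	result=not-writable   # managed IRQ (-EIO), missing IRQ, or no root
fi
echo "IRQ $irq smp_affinity: $result"
```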
I'm not sure whether or not the interrupts on Manish's test setup are
managed. Manish, can you please provide the output of the following
commands?
adb shell 'grep -i ufshcd /proc/interrupts'
adb shell 'grep -i ufshcd /proc/interrupts | while read a b; do ls -ld
/proc/irq/${a%:}/smp_affinity; done'
adb shell 'grep -i ufshcd /proc/interrupts | while read a b; do grep -aH
. /proc/irq/${a%:}/smp_affinity; done'
In our SoCs we balance power and performance by dynamically changing
the IRQ affinity based on load: under high load we affine the UFS IRQs
to large-cluster CPUs, and under low load we affine them to
small-cluster CPUs.
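The affinity switch described above can be sketched as building the two cluster cpumasks and writing the appropriate one to smp_affinity. The cluster layout here (CPUs 0-3 small, CPUs 4-7 large) is an assumption for illustration, not taken from the thread:

```shell
# Assumed layout: CPUs 0-3 = small cluster, CPUs 4-7 = large cluster.
small_mask=0
for cpu in 0 1 2 3; do
	small_mask=$(( small_mask | (1 << cpu) ))
done
large_mask=0
for cpu in 4 5 6 7; do
	large_mask=$(( large_mask | (1 << cpu) ))
done
printf 'small cluster mask: %x, large cluster mask: %x\n' \
	"$small_mask" "$large_mask"
# Under high load, move a UFS IRQ to the large cluster (requires root;
# rejected with -EIO if the IRQ is kernel-managed):
#   printf '%x' "$large_mask" > /proc/irq/$IRQ/smp_affinity
```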
Also, on some SoCs IRQs are delivered mainly to small-cluster CPUs by
default, so we steer request completions back to the submitting CPU
using QUEUE_FLAG_SAME_FORCE.
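For reference, QUEUE_FLAG_SAME_FORCE is controlled from user space through the rq_affinity block-queue sysfs attribute; the fragment below is a sketch, with "sda" as a placeholder device name (requires root):

```shell
# /sys/block/<dev>/queue/rq_affinity values:
#   0 - no completion steering
#   1 - complete in the submitting CPU's group (QUEUE_FLAG_SAME_COMP)
#   2 - force completion on the submitting CPU (QUEUE_FLAG_SAME_FORCE)
echo 2 > /sys/block/sda/queue/rq_affinity
cat /sys/block/sda/queue/rq_affinity
```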
This issue mostly affects UFS MCQ devices, which use ESI/MSI IRQs and
distribute the ESI IRQs across the completion queues (CQs). We mostly
bind the IRQs and CQs to large-cluster CPUs, so most completions run on
the large cluster, which is not a same-capacity CPU when the request
was submitted from the small or medium clusters.
Thanks,
Bart.