On 6.2.2024 18.12, Mikhail Gavrilov wrote:
> On Tue, Feb 6, 2024 at 4:24 PM Mathias Nyman
> <mathias.nyman@xxxxxxxxxxxxxxx> wrote:
>>> I confirm that after reverting all the listed commits and 57e153dfd0e7,
>>> network performance returned to its theoretical maximum.
>> That patch changes how we request MSI/MSI-X interrupt(s) for xhci.
>> Is there any change in /proc/interrupts between the good and bad cases?
>> Such as xhci_hcd using MSI-X instead of MSI, or eth0 and xhci_hcd
>> interrupting on the same CPU?
> On the good kernel I have 32 xhci_hcd interrupts, on the bad one only 4.
> In both scenarios PCI-MSIX is used.
> I attached the interrupt output for both cases as archives to this message.
>
> Thanks,
Looks like your network adapter ends up interrupting CPU0 in the bad case due
to the change in how many interrupts are requested by xhci_hcd before it.
bad case:
CPU0 CPU1 ... CPU31
87: 18213809 0 ... 0 IR-PCI-MSIX-0000:0e:00.0 0-edge enp14s0
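
As a quick way to double check this on your side (just a sketch, the device names are taken from your attached output), you can count how many MSI-X vectors xhci_hcd ended up with and see which CPU column the NIC interrupts land in:

grep -c xhci_hcd /proc/interrupts    # number of xhci_hcd vectors, 32 in the good case vs 4 in the bad case
grep enp14s0 /proc/interrupts        # the column whose count keeps growing is the CPU handling the NIC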
Does manually changing it to some other CPU help? Pick one that doesn't already
handle a lot of interrupts. CPU0 can also be busier in general, possibly spending
more time with interrupts disabled.
For example, to move it to CPU23 in the bad case:
echo 800000 > /proc/irq/87/smp_affinity
Check from /proc/interrupts that enp14s0 interrupts actually go to CPU23 after this.
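
In case the mask value isn't obvious: smp_affinity takes a hex CPU bitmask, so CPU23 is bit 23, i.e. 1 << 23 = 0x800000. A small sketch, assuming the same IRQ 87 and CPU23 as above (adjust to whatever looks idle on your system):

printf '%x\n' $((1 << 23))              # prints 800000, the mask for CPU23
echo 800000 > /proc/irq/87/smp_affinity
# the list form takes a plain CPU number instead of a mask:
echo 23 > /proc/irq/87/smp_affinity_list
grep enp14s0 /proc/interrupts           # the CPU23 column should now start counting up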
Thanks
Mathias