On 21/08/2015 05:33, Mihai Neagu wrote:
> Radim,
>
> Thanks for your answer. Indeed setting IRQ affinity to a specific core
> seems to be respected.
>
> However, on software emulation and on the real machine, while IRQ
> affinity defaults to 3, all interrupts go on CPU0, while on KVM they go
> on CPU1.

Are you sure that KVM doesn't do some kind of round robin?  Also, it's
probably not _all_ interrupts but most of them.  For example, my laptop
has:

                                                                  smp_affinity
           cpu0      cpu1     cpu2    cpu3                            ||||
  1:      32824      1764      151     130   IO-APIC-edge   i8042     0123
 12:    1587436    203990    16666   17064   IO-APIC-edge   i8042     0123
 24:      63361      5160   212545    1276   PCI-MSI-edge   ahci      ..2.
 25:          2         0     1082   10710   PCI-MSI-edge   xhci_hcd  ..23
 29:    1867844     72169       61   57514   PCI-MSI-edge   iwlwifi   0...

I added a column at the end with the SMP affinity of the interrupts.
You can see that:

* real hardware also shows traces of interrupts delivered before the
  affinity was set (especially vectors 24 and 29)

* real hardware also distributes interrupts across multiple CPUs when
  the affinity covers multiple processors (see vectors 1, 12 and 25)

> I wonder why KVM would act differently than both the real machine and
> the software emulation in this particular aspect.
>
> Is there a machine or processor that I can specify at KVM command line
> to make it behave like the real x86_64 processor which defaults
> interrupts to CPU0?

The behavior of interrupt arbitration is not specified in the processor
documentation and can change across processors and implementations.
Presumably it is either patented or an Intel trade secret.

Older processors were documented to pick a processor that was not
running another interrupt service routine.  This can explain why, under
software emulation, all interrupts go to CPU0.

At least some newer processors no longer do this, and the SDM (Intel
Software Developer's Manual) gives a high-level explanation of their
behavior and its effects:

   [...] the chipset bus controller accepts messages from the I/O APIC
   agents in the system and directs interrupts to the processors on the
   system bus.  When using the lowest priority delivery mode, the
   chipset chooses a target processor to receive the interrupt out of
   the set of possible targets.  In operating systems that use the
   lowest priority delivery mode but do not update the TPR, the TPR
   information saved in the chipset will potentially cause the interrupt
   to be always delivered to the same processor from the logical set.
   This behavior is functionally backward compatible with the P6 family
   processor but may result in unexpected performance implications.

Other processors do so-called "vector hashing" (i.e. they pick a CPU
based on the affinity mask and the vector number, and use that CPU most
of the time), which would also explain why the interrupts all go to CPU0.

Paolo
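
For reference, the hand-added affinity column in the table above can
also be produced mechanically.  Below is a minimal sketch, assuming the
usual Linux layout of /proc/interrupts and /proc/irq/<N>/smp_affinity;
the script and its output format are only an illustration, not part of
the original mail.

#!/usr/bin/env python3
# Minimal sketch: print /proc/interrupts with each IRQ's smp_affinity
# mask appended as an extra column.  Lines without a numeric IRQ
# (NMI, LOC, ...) are printed unchanged.

def read_affinity(irq):
    try:
        with open("/proc/irq/%s/smp_affinity" % irq) as f:
            return f.read().strip()        # hex CPU bitmask, e.g. "f"
    except OSError:
        return "-"

with open("/proc/interrupts") as f:
    for line in f:
        irq = line.split(":", 1)[0].strip()
        if irq.isdigit():
            print("%s  %s" % (line.rstrip("\n"), read_affinity(irq)))
        else:
            print(line.rstrip("\n"))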
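
To make the "vector hashing" idea concrete, here is a sketch of one
plausible scheme: pick a CPU out of the set allowed by the affinity
mask, indexed by the vector number, so that a given vector keeps
landing on the same CPU.  The modulo-based hash below is an assumption
chosen for illustration, not a description of any particular chipset
or of KVM.

# Sketch of one possible "vector hashing" scheme: choose a CPU from the
# affinity mask based on the interrupt vector, so that a given vector
# is always routed to the same CPU.  The modulo hash is an assumption.

def vector_hash_target(affinity_mask, vector):
    cpus = [cpu for cpu in range(affinity_mask.bit_length())
            if affinity_mask & (1 << cpu)]
    return cpus[vector % len(cpus)]

# Example: affinity 0xf (CPUs 0-3) and a hypothetical vector 0x31
# always map to the same CPU.
print(vector_hash_target(0xf, 0x31))   # -> 1

With a scheme like this, the target only changes when the affinity mask
changes, which is consistent with all of a device's interrupts piling
up on a single CPU.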