On 08/16/2012 10:45 AM, Alexander Gordeev wrote:
Currently multiple MSI mode is limited to a single vector per device (at
least on x86 and PPC). This series breathes life into pci_enable_msi_block()
and makes it possible to set interrupt affinity for multiple IRQs, similarly
to MSI-X. For now this works only on x86, and only when an IOMMU is present.
Although the IRQ and PCI subsystems are modified, the current behaviour is
left intact. Drivers can start using multiple MSIs simply by following the
existing documentation.
The AHCI device driver makes use of the new mode and a new function,
pci_enable_msi_block_auto(), in patches 4 and 5.
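
To illustrate, here is a minimal sketch of how a driver might consume the
new call. The prototype, the toy_* names, and the error handling are
assumptions based on this description, not the merged code; the consecutive
vector layout (pdev->irq .. pdev->irq + nvec - 1) is how multiple MSI is
exposed on Linux.

	#include <linux/pci.h>
	#include <linux/interrupt.h>

	static irqreturn_t toy_irq_handler(int irq, void *dev_id)
	{
		/* per-vector service routine would go here */
		return IRQ_HANDLED;
	}

	/*
	 * Assumed prototype, per the cover letter and patch 4:
	 *   int pci_enable_msi_block_auto(struct pci_dev *dev,
	 *                                 unsigned int *maxvec);
	 * returning the number of vectors actually allocated (>= 1)
	 * or a negative errno.
	 */
	static int toy_setup_irqs(struct pci_dev *pdev)
	{
		unsigned int maxvec;
		int nvec, i, rc;

		nvec = pci_enable_msi_block_auto(pdev, &maxvec);
		if (nvec < 0)
			return nvec;	/* caller may fall back to INTx */

		/* With multiple MSI the vectors are consecutive from pdev->irq. */
		for (i = 0; i < nvec; i++) {
			rc = request_irq(pdev->irq + i, toy_irq_handler, 0,
					 "toy", pdev);
			if (rc)
				goto err_free;
		}
		return 0;

	err_free:
		while (--i >= 0)
			free_irq(pdev->irq + i, pdev);
		pci_disable_msi(pdev);
		return rc;
	}
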
The series is based on the x86/apic branch of Ingo's -tip repository.
Patches 4 and 5 can be applied independently of patches 1-3.
Attached patches:
1/5 x86, MSI: Support multiple MSIs in presence of IRQ remapping
2/5 x86, MSI: Allocate as many multiple IRQs as requested
3/5 x86, MSI: Minor readability fixes
4/5 PCI, MSI: Enable multiple MSIs with pci_enable_msi_block_auto()
5/5 AHCI: Support multiple MSIs
Numbers? I would like to see measurements that show this is a benefit.
I'm especially wondering about lock contention for the case of
multiple, fast devices (e.g. RAM over SATA, or PCIe flash).
Because I see the following obvious problems immediately:
1) AHCI takes a host-wide lock during interrupt processing, which
considerably reduces the value of "interrupts generated by different
ports could be serviced on different CPUs" (see the sketch after this list).
2) We do not put AHCI-specific code in libata-core. Try libahci.c or
ahci.c.
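
To make point 1 concrete, here is a deliberately simplified sketch of the
structure in question. This is not the actual ahci code; the toy_* names
are invented for illustration.

	#include <linux/interrupt.h>
	#include <linux/spinlock.h>

	struct toy_host {
		spinlock_t lock;	/* host-wide, shared by all ports */
	};

	struct toy_port {
		struct toy_host *host;
	};

	/*
	 * One MSI vector per port, so this handler can fire concurrently
	 * on different CPUs for different ports...
	 */
	static irqreturn_t toy_port_irq(int irq, void *dev_id)
	{
		struct toy_port *port = dev_id;

		/*
		 * ...but every instance still serializes on the same
		 * host-wide lock. Two ports interrupting on two CPUs just
		 * take the lock in turn; distributing vectors moves the
		 * contention, it does not remove it. Per-port locking
		 * would be needed to realize the claimed parallelism.
		 */
		spin_lock(&port->host->lock);
		/* ... per-port completion processing ... */
		spin_unlock(&port->host->lock);

		return IRQ_HANDLED;
	}
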
Regards,
Jeff