Arnd,

On Tue, 29 Jan 2013 21:33:08 +0100, Thomas Petazzoni wrote:

> Basically, I have currently two suggestions:
>
>  * From Jason Gunthorpe, to not use any host bridge, and instead use
>    only PCI-to-PCI bridges, one per PCIe interface.
>
>  * From you, to not use any PCI-to-PCI bridge, and use only host
>    bridges, one per PCIe interface.

Thinking more about this, the latter solution (using one emulated host
bridge per PCIe interface) would cause one problem: the PCIe device
itself would no longer be in slot 0.

If I'm correct, with one host bridge per PCIe interface, we would have
the following topology:

 bus 0, slot 0: emulated host bridge 0
 bus 0, slot 1: PCIe device connected to PCIe interface 0
 bus 1, slot 0: emulated host bridge 1
 bus 1, slot 1: PCIe device connected to PCIe interface 1
 bus 2, slot 0: emulated host bridge 2
 bus 2, slot 1: PCIe device connected to PCIe interface 2
 etc.

However, one of the reasons to use a PCI-to-PCI bridge was to ensure
that the PCIe devices were all listed in slot 0. According to the
Marvell engineers who work on the PCIe stuff, some new PCIe devices
have this requirement. I don't have a lot of details about this, but I
was told that most of the new Intel NICs require it, for example the
Intel X520 fiber NIC. Maybe PCIe experts (Jason?) could provide more
details about this, and confirm or refute this statement.

Using one PCI-to-PCI bridge per interface puts each PCIe device on its
own bus, at slot 0, which also solves this problem.

Best regards,

Thomas
-- 
Thomas Petazzoni, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com
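
A minimal sketch of the slot-number point made in the topology above,
using the standard devfn decoding from include/uapi/linux/pci.h (the
PCI_SLOT/PCI_FUNC macros). The devfn value 0x08 is only an assumed
example: with the emulated host bridge occupying devfn 0x00 on bus 0,
a device at devfn 0x08 on the same bus shows up as slot 1, function 0,
rather than slot 0.

  /*
   * Sketch only, plain userspace C for illustration; the macro
   * definitions match include/uapi/linux/pci.h.
   */
  #include <stdio.h>

  #define PCI_SLOT(devfn)  (((devfn) >> 3) & 0x1f)
  #define PCI_FUNC(devfn)  ((devfn) & 0x07)

  int main(void)
  {
          unsigned int devfn = 0x08;  /* assumed devfn of the PCIe device */

          /* prints "slot 1, function 0" for devfn 0x08 */
          printf("slot %u, function %u\n",
                 PCI_SLOT(devfn), PCI_FUNC(devfn));
          return 0;
  }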