On Thu, 2011-06-23 at 22:54 -0500, Jon Mason wrote:
> There is a sizable performance boost for having the largest possible
> maximum payload size on each PCI-E device. However, the maximum payload
> size must be uniform on a given PCI-E fabric, and each device, bridge,
> and root port can have a different max size. To find and configure the
> optimal MPS settings, one must walk the fabric and determine the largest
> MPS available on all the devices on the given fabric.

Ok, so a few comments:

 - Don't do a pci_walk_bus_self() that takes a "self" bool, that's ugly.
   If you want common code, then do a __pci_walk_bus() that takes a
   "self" argument (call it include_self ?) and two small wrappers for
   pci_walk_bus() and pci_walk_bus_self() (though arguably the latter
   isn't needed, just use the __ variant directly). There's a rough,
   untested sketch of what I mean at the bottom of this mail.

 - Thinking more, I don't see the point of that pci_walk_bus_self() ...
   It will cause a bridge device to be called twice. If all you want is
   to make sure the toplevel one is called, just do that manually before
   you call pci_walk_bus().

 - Does pci_walk_bus() provide any guarantee in terms of API as to the
   order in which it walks the devices ? I.e. parent first, then
   children ? That's how it's implemented today, but are we certain that
   will remain ? It's not documented as such... Your "forcemax" case as
   it is implemented will work only as long as this ordering is
   respected.

 - I would like a way to specify the MPS of the host bridge, it may not
   be the max of the RC P2P bridge (it -generally is- but I'd like a way
   to override that easily).

 - I think we need to handle MRRS at the same time. We need MRRS to be
   clamped by MPS, don't we ? In addition to being clamped top-down.

 - For MRRS we -must- have the arch provide the max of the host bridge.

 - pcie_mps_forcemax -> pcibios_use_max_mps(bus) or something like that.
   I want it to be a function so I can make it per-platform, or even per
   host bridge if I have to. Might be a good spot to add an argument for
   the bridge to return a platform maximum to clamp everything else as
   well. (There's a second sketch at the bottom covering this and the
   MRRS clamp.)

 - Finally, should we make this "forcemax" the default behaviour, or are
   we afraid we'll break some nvidia gunk ? (I doubt it will break, that
   would imply the two cards come up with small & different MPS in
   practice.) Do we want to keep an API for drivers who want to clamp
   things between two devices that want to do peer to peer ?

Cheers,
Ben.
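
P.S. Rough, untested sketch of the __pci_walk_bus() split I mean. In the
real patch the __ variant would contain the existing walk loop and
pci_walk_bus() would become a one-line wrapper with include_self == false;
here I just piggyback on today's pci_walk_bus() to show the shape, and the
names are only what I'd call them:

        #include <linux/pci.h>

        static void __pci_walk_bus(struct pci_bus *top,
                                   int (*cb)(struct pci_dev *, void *),
                                   void *userdata, bool include_self)
        {
                /* Visit the bridge leading to this bus first, if asked to,
                 * keeping the "non-zero return stops the walk" rule. */
                if (include_self && top->self)
                        if (cb(top->self, userdata))
                                return;

                /* Then everything below it, parent before children as
                 * pci_walk_bus() is implemented today. */
                pci_walk_bus(top, cb, userdata);
        }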
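And roughly what I have in mind for the platform hook and the MRRS clamp,
equally untested. pcibios_use_max_mps() is the made-up name from above, and
pcie_get_mps() is whatever accessor your series ends up providing -- I'm
assuming it returns bytes the way the existing pcie_get_readrq() does:

        #include <linux/kernel.h>
        #include <linux/pci.h>

        /* Weak default: keep today's behaviour, no forcing, no extra
         * platform cap.  Archs (or individual host bridges) override this
         * to force the max and/or report a host-bridge limit in
         * *platform_max_mps. */
        bool __weak pcibios_use_max_mps(struct pci_bus *bus,
                                        int *platform_max_mps)
        {
                *platform_max_mps = 4096;       /* PCIe ceiling, no clamp */
                return false;
        }

        /* MRRS must not exceed the MPS we programmed on the device, nor
         * the platform maximum reported above. */
        static void pcie_clamp_mrrs(struct pci_dev *dev, int platform_max_mps)
        {
                int mps  = pcie_get_mps(dev);           /* assumed helper, bytes */
                int mrrs = pcie_get_readrq(dev);        /* existing helper, bytes */
                int max  = min(mps, platform_max_mps);

                if (mrrs > max)
                        pcie_set_readrq(dev, max);
        }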