On Tue, Jan 19, 2016 at 04:36:36PM +0000, Keith Busch wrote:
> On Tue, Jan 19, 2016 at 08:02:20AM -0800, Christoph Hellwig wrote:
> > As this seems to require special drivers to bind to it, and Intel
> > people refuse to even publicly tell what the code does I'd like
> > to NAK this code until we get an explanation and use cases for it.
>
> We haven't opened the h/w specification, but we've been pretty open with
> what it provides, how the code works, and our intended use case. The
> device provides additional PCI domains for people who need more than
> the 256 busses a single domain provides.
>
> What information may I provide to satisfy your use case concerns? Are
> you wanting to know what devices we have in mind that require additional
> domains?

VMD is simply a convenient way to create a new PCIe host bridge that
happens to sit on the existing PCIe root bus. It changes how I/O is
routed (i.e. BDF translation), but not its contents.

We've actually gone through some effort in the code to *avoid* special
drivers by implementing the existing host bridge abstractions. The cases
where existing drivers wouldn't work are due to limitations of the
device, not arbitrary filters. (For example, VMD doesn't know how to
route legacy I/O ports or INTx.)
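
For anyone following along, the 256-bus ceiling Keith mentions falls
straight out of the BDF encoding: the bus number is only 8 bits of the
16-bit routing ID, so a single domain can never address more than 256
busses, and each extra domain brings a fresh set. A minimal userspace
sketch of the encoding (the bdf() helper and the sample values are mine,
purely for illustration, not anything from the VMD code):

#include <stdint.h>
#include <stdio.h>

/*
 * A PCI routing ID ("BDF") packs bus/device/function into 16 bits:
 *   bus:      8 bits -> at most 256 busses per domain
 *   device:   5 bits -> at most 32 devices per bus
 *   function: 3 bits -> at most 8 functions per device
 *
 * The domain (segment) number sits *outside* the BDF, so a device
 * like VMD that exposes an extra domain sidesteps the 256-bus limit.
 */
static inline uint16_t bdf(uint8_t bus, uint8_t dev, uint8_t fn)
{
	return (bus << 8) | ((dev & 0x1f) << 3) | (fn & 0x7);
}

int main(void)
{
	/* e.g. ff:1f.7 -- the very last address within one domain */
	uint16_t rid = bdf(0xff, 0x1f, 0x7);

	printf("routing ID: %#06x (bus %u, dev %u, fn %u)\n",
	       rid, rid >> 8, (rid >> 3) & 0x1f, rid & 0x7);
	return 0;
}

The BDF translation described above operates on this routing ID only:
addresses below the VMD bridge are remapped into the new domain, while
the contents of the transactions pass through unchanged.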