On 2019-01-17 8:19 a.m., Vincent Whitchurch wrote:
> On the endpoint, the PCIe endpoint driver sets up (hardcoded) BARs and
> memory regions as required to allow the endpoint and the root complex to
> access each other's memory.

This statement describes NTB hardware pretty well. In essence, that's what an NTB device is: a BAR that maps to a window in another host's memory (roughly sketched in the P.S. below).

Right now the entire upstream NTB software stack (ntb_transport and ntb_netdev) is specific to that ecosystem and only exposes a network device so the hosts can communicate. This code works but has some issues and has never been able to perform at full PCIe line speed (which everyone expects), so it's not clear to me whether anyone is doing anything real with it. The companies working on NTB that I'm aware of have mostly done their own out-of-tree stuff.

It would be interesting to unify ntb_transport with the virtio stack, because I suspect they do very similar things right now, and there are a lot more devices above virtio than just a network device. However, the main problem people working on NTB face (besides performance) is getting multi-host working in a general and sensible way, given that the hardware typically has limited BAR resources (among other limitations).

Logan
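
P.S. For anyone unfamiliar with NTB, here is a rough sketch of what the "BAR that maps into another host's memory" looks like through the in-kernel NTB client API (linux/ntb.h, multi-port variant). This is only an illustration, not how any particular driver does it: the demo_* names are mine, peer/window index 0 is an arbitrary choice, and the alignment checks and error unwinding a real client such as ntb_transport needs are omitted.

#include <linux/ntb.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>

static void __iomem *demo_peer_window;

static int demo_setup(struct ntb_dev *ntb)
{
	resource_size_t addr_align, size_align, size_max;
	resource_size_t peer_size;
	phys_addr_t peer_base;
	dma_addr_t dma;
	void *buf;
	int rc;

	/* Inbound half: back memory window 0 with a local buffer so
	 * that writes arriving through peer 0's BAR land in our memory. */
	rc = ntb_mw_get_align(ntb, 0, 0, &addr_align, &size_align,
			      &size_max);
	if (rc)
		return rc;

	buf = dma_alloc_coherent(&ntb->pdev->dev, size_max, &dma,
				 GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	rc = ntb_mw_set_trans(ntb, 0, 0, dma, size_max);
	if (rc)
		return rc;

	/* Outbound half: the peer's window shows up on our side as
	 * (part of) a BAR; map it and stores go straight into the
	 * peer's memory. */
	rc = ntb_peer_mw_get_addr(ntb, 0, &peer_base, &peer_size);
	if (rc)
		return rc;

	demo_peer_window = ioremap_wc(peer_base, peer_size);
	return demo_peer_window ? 0 : -EIO;
}

A real client would hang this off ntb_register_client() rather than calling it directly, but the two halves above (program the inbound translation, map the outbound BAR) are the whole trick, and it's also why the limited number of BARs becomes the bottleneck once you want more than two hosts.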