I'm not sure whether it is fine to give non-Linux-kernel references here,
but Tegra194 has this implemented (though not in an optimized way)
for the Tegra194(RP) <-> Tegra194(EP) configuration.
It uses Tegra194's proprietary syncpoint shim hardware to generate
interrupts from the RP to the EP. (FWIW, regular MSIs are used in the
EP-to-RP direction.)
The syncpoint shim hardware is mapped to a portion of a BAR during
initialization; when the RP writes to this BAR region, the shim
generates an interrupt on the EP's local CPU.
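In other words, the RP side only has to do an MMIO write to ring the
EP's doorbell. Below is a minimal sketch of that path, assuming a single
shim-backed doorbell register at the start of the BAR; the tvnet_rp_*
names and struct layout are made up for illustration and are not taken
from the drivers linked below.

#include <linux/io.h>
#include <linux/pci.h>

struct tvnet_rp_priv {
        void __iomem *db_base;  /* BAR region backed by the syncpoint shim */
};

/* Map the shim-backed BAR once during probe on the RP system. */
static int tvnet_rp_map_doorbell(struct pci_dev *pdev,
                                 struct tvnet_rp_priv *priv, int bar)
{
        priv->db_base = pci_iomap(pdev, bar, 0);
        return priv->db_base ? 0 : -ENOMEM;
}

/*
 * Ring the doorbell: the syncpoint shim turns any MMIO write into this
 * region into an interrupt on the EP's local CPU. (EP-to-RP signaling
 * uses regular MSIs and is not shown here.)
 */
static void tvnet_rp_ring_ep_doorbell(struct tvnet_rp_priv *priv)
{
        writel(1, priv->db_base);
}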
You can take a look at the code here:
EPF driver (on EP system):
https://nv-tegra.nvidia.com/gitweb/?p=linux-nvidia.git;a=blob;f=drivers/pci/endpoint/functions/pci-epf-tegra-vnet.c;h=f55790f8c569368ad6012aeb9726b9a6c08c5304;hb=6dc57fec39c444e4c4448be61ddd19c55693daf1
EP's device driver (on RP system):
https://nv-tegra.nvidia.com/gitweb/?p=linux-nvidia.git;a=blob;f=drivers/net/ethernet/nvidia/pcie/tegra_vnet.c;h=af74baae1452fea25c3c5292a36a4cd1d8f22e50;hb=6dc57fec39c444e4c4448be61ddd19c55693daf1
As I mentioned, this is not an optimized version, and we have yet to
upstream it (hence it may not be of upstream code quality).
We get around 5 Gbps of throughput with this.
- Vidya Sagar
On 5/26/2021 10:31 PM, Logan Gunthorpe wrote:
On 2021-05-26 10:28 a.m., Bjorn Helgaas wrote:
[+to Kishon, Jon, Logan, who might have more insight]
On Wed, May 26, 2021 at 08:44:59AM -0700, Tim Harvey wrote:
Greetings,
Is there an existing driver to implement a network interface
controller via a PCIe endpoint? I'm envisioning a system with a PCIe
master and multiple endpoints that all have a network interface to
communicate with each other.
That sounds awfully similar to NTB. See ntb_netdev and ntb_transport.
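For a flavor of the ntb_transport client API, here is a minimal sketch
loosely modeled on what ntb_netdev does; the demo_* names and the empty
callback bodies are placeholders, not real driver code.

#include <linux/device.h>
#include <linux/ntb_transport.h>

static void demo_rx_handler(struct ntb_transport_qp *qp, void *qp_data,
                            void *data, int len)
{
        /* A real client would pass the buffer up its stack and re-post
         * a receive buffer with ntb_transport_rx_enqueue(). */
}

static void demo_tx_handler(struct ntb_transport_qp *qp, void *qp_data,
                            void *data, int len)
{
        /* A real client would reclaim the transmitted buffer here. */
}

static void demo_event_handler(void *data, int link_is_up)
{
        /* React to the transport link coming up or going down. */
}

static const struct ntb_queue_handlers demo_handlers = {
        .rx_handler     = demo_rx_handler,
        .tx_handler     = demo_tx_handler,
        .event_handler  = demo_event_handler,
};

static int demo_probe(struct device *client_dev)
{
        struct ntb_transport_qp *qp;

        qp = ntb_transport_create_queue(NULL, client_dev, &demo_handlers);
        if (!qp)
                return -EIO;

        ntb_transport_link_up(qp);
        return 0;
}

ntb_netdev then builds an ordinary net_device on top of exactly this
kind of queue.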
Though, IMO, NTB has proven to be a poor solution to this problem;
modern network cards with RDMA are superior in pretty much every way.
Logan