RE: [EXT] Re: PCIe RC\EP virtio rdma solution discussion.

> EP side uses vhost, RC side uses virtio.
> > I don't think anyone is working on this now.
> > If eDMA is used, both sides need a transfer queue.
> > I don't know how to easily implement that on the vhost side.
> We had implemented this solution at the design stage of our proposal.
> It has to create a network device from scratch and register it with the
> kernel on the endpoint side. There is a lot of duplicated code, so we
> think solution 1 is better, as Frank said.
> > Solution 3 (which I am working on)
> >
> > Implement an InfiniBand RDMA driver on both the EP and RC sides.
> > The EP side builds eDMA hardware queues based on the EP and RC sides'
> > send and receive queues, and when an eDMA transfer finishes, it writes
> > the status to the completion queues on both the EP and RC sides. IPoIB
> > is used for the network transfer.
> The new InfiniBand device has to implement an InfiniBand network layer.
> I think that is overengineered for this peer-to-peer communication. In
> addition, a driver for the InfiniBand device would have to be written,
> or the device would have to emulate an existing InfiniBand device so
> that an upstream driver can be reused. We want to reduce the cost of
> implementation and maintenance.

The InfiniBand driver is quite complex; that is the reason progress is slow
on my side. I hope the endpoint maintainer (kw) and the PCI maintainer
(Bjorn) can provide comments.
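
To make the eDMA queue handling in Solution 3 more concrete, below is a rough
sketch of what the EP side could do for one work-queue entry, assuming the
eDMA channel is driven through the generic kernel dmaengine slave API. The
ep_rdma_* structures and ep_post_completion() are placeholder names used only
for illustration, not code from any existing patch set.

/*
 * Minimal sketch only: issue one eDMA write for a work-queue entry and
 * signal a completion when it finishes. Only the dmaengine calls are the
 * stock kernel API; everything named ep_rdma_* / ep_post_completion() is
 * a made-up placeholder.
 */
#include <linux/dmaengine.h>
#include <linux/printk.h>

struct ep_rdma_wqe {			/* placeholder: one send-queue entry */
	dma_addr_t remote_addr;		/* RC-side address mapped through the ATU */
	dma_addr_t local_addr;		/* EP-side DMA address */
	size_t len;
	u32 wr_id;
};

struct ep_rdma_queue {			/* placeholder: per-queue-pair state */
	struct dma_chan *edma_chan;	/* eDMA channel from dma_request_chan() */
};

/* placeholder hook: post a CQE that both the EP and RC sides can see */
static void ep_post_completion(u32 wr_id)
{
	pr_info("eDMA transfer for wr_id %u done\n", wr_id);
}

static void ep_rdma_edma_done(void *arg)
{
	struct ep_rdma_wqe *wqe = arg;

	ep_post_completion(wqe->wr_id);
}

static int ep_rdma_issue(struct ep_rdma_queue *q, struct ep_rdma_wqe *wqe)
{
	struct dma_async_tx_descriptor *tx;
	struct dma_slave_config cfg = {
		.direction = DMA_MEM_TO_DEV,
		.dst_addr  = wqe->remote_addr,
	};
	dma_cookie_t cookie;
	int ret;

	ret = dmaengine_slave_config(q->edma_chan, &cfg);
	if (ret)
		return ret;

	tx = dmaengine_prep_slave_single(q->edma_chan, wqe->local_addr,
					 wqe->len, DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	/* the callback is where the completion status gets written back */
	tx->callback = ep_rdma_edma_done;
	tx->callback_param = wqe;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(q->edma_chan);

	return 0;
}

In the callback, the status write would presumably have to land in memory
visible to both sides, for example a BAR-mapped completion ring, so that the
RC can poll it.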

> > The whole upstream effort for these is quite large. I don't want to
> > waste time and effort if the direction is wrong.
> >
> > I think Solution 1 is an easy path.
> >
> Best,
> 
> Shunsuke.




