RE: [EXPERIMENTAL v1 0/4] RDMA loopback device

> -----Original Message-----
> From: Yuval Shaia <yuval.shaia@xxxxxxxxxx>
> Sent: Monday, March 4, 2019 10:48 AM
> To: Bart Van Assche <bvanassche@xxxxxxx>
> Cc: Parav Pandit <parav@xxxxxxxxxxxx>; Ira Weiny <ira.weiny@xxxxxxxxx>;
> Leon Romanovsky <leon@xxxxxxxxxx>; Dennis Dalessandro
> <dennis.dalessandro@xxxxxxxxx>; linux-rdma@xxxxxxxxxxxxxxx
> Subject: Re: [EXPERIMENTAL v1 0/4] RDMA loopback device
> 
> On Mon, Mar 04, 2019 at 08:10:05AM -0800, Bart Van Assche wrote:
> > On Mon, 2019-03-04 at 09:56 +0200, Yuval Shaia wrote:
> > > Suggestion: To enhance 'loopback' performance, can you consider
> > > using shared memory or any other IPC instead of going through the
> > > network stack?
> >
> > I'd like to avoid having to implement yet another initiator block
> > driver. Using IPC implies writing a new block driver and also coming
> > up with a new block-over-IPC protocol. Using RDMA has the advantage
> > that the existing NVMeOF initiator block driver and protocol can be used.
> >
> > Bart.
> 
> No, no, I didn't mean implementing a new driver, just that the packet
> transmit would be done with a memcpy instead of going through the
> transport stack. This would make data exchange extremely fast when the
> traffic is between two entities on the same host.
> 
Can you please review the other patches in this patchset, and not just the cover letter?
It does what you are describing, without going through the network stack.
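
For reference, here is a rough illustrative sketch (not code from this
patchset; the names lb_wqe and lb_post_send are hypothetical) of what
completing a same-host transfer with a plain memcpy, rather than any
network or transport path, could look like:

#include <string.h>
#include <stddef.h>

/* Hypothetical loopback work request: both buffers live on the same
 * host, so "transmission" is a single copy. */
struct lb_wqe {
        void   *src;    /* sender's registered buffer */
        void   *dst;    /* matching receive buffer */
        size_t  len;    /* bytes to transfer */
};

/* Complete a loopback send: one memcpy, no skbs, no TCP/IP. */
static int lb_post_send(struct lb_wqe *wqe)
{
        if (!wqe || !wqe->src || !wqe->dst)
                return -1;
        memcpy(wqe->dst, wqe->src, wqe->len);
        return 0;
}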



