RE: [RFC v3 00/11] HFI Virtual Network Interface Controller (VNIC)

> 
> On Tue, 2017-02-07 at 16:54 -0800, Vishwanathapura, Niranjana wrote:
> > On Tue, Feb 07, 2017 at 09:58:50PM +0000, Bart Van Assche wrote:
> > > On Tue, 2017-02-07 at 21:44 +0000, Hefty, Sean wrote:
> > > > This is Ethernet - not IP - encapsulation over a non-InfiniBand
> > > > device/protocol.
> > >
> > > That's more than clear from the cover letter. In my opinion the
> > > cover letter should explain why it is considered useful to have such
> > > a driver upstream and what the use cases are of encapsulating
> > > Ethernet frames inside RDMA packets.
> >
> > We believe on our HW, HFI VNIC design gives better hardware resource
> > usage which is also scalable and hence room for better performance.
> > Also as evident in the cover letter, it gives us better manageability
> > by defining virtual Ethernet switches overlaid on the fabric and use
> > standard Ethernet support provided by Linux.
> 
> That kind of language is appropriate for a marketing brochure but not for a
> technical forum.

Well, that is not entirely true. Perhaps we should give more detail on how we get better performance, but we thought this had already been covered.

> Even reading your statement twice did not make me any wiser.
> You mentioned "better hardware resource usage". Compared to what? Is that
> perhaps compared to IPoIB?  Since Ethernet frames have an extra header and
> are larger than IPoIB frames, how can larger frames result in better hardware
> resource usage? 

Yes, as compared to IPoIB.  The problem with IPoIB is that it introduces a significant amount of Verbs overhead which is not needed for Ethernet encapsulation, especially on hardware such as ours.  As Jason has mentioned, having a more generic "skb_send" or "skb_qp" has been discussed in the past.

As we discussed at the Plumbers conference, not all send/receive paths are "Queue Pairs".  Yes, we have a send queue (multiple send queues, actually) and a receive queue (again, multiple queues), but there is no pairing of the queues at all, and no completion semantics are required either.  This reduced overhead results in better performance on our hardware.
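To make that concrete, here is a minimal sketch (in C) of what a completion-free transmit path looks like.  All the names here (hfi_sq, pick_sq, sq_has_space, sq_write_desc, hfi_vnic_xmit) are illustrative stand-ins, not the actual driver API:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical send queue: a bare descriptor ring with no paired
 * receive queue and no completion queue. */
struct hfi_sq {
        u16 head, tail, depth;
};

/* Illustrative helpers assumed to exist in a real driver. */
struct hfi_sq *pick_sq(struct net_device *dev, struct sk_buff *skb);
bool sq_has_space(struct hfi_sq *sq);
void sq_write_desc(struct hfi_sq *sq, struct sk_buff *skb);

static netdev_tx_t hfi_vnic_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct hfi_sq *sq = pick_sq(dev, skb);  /* one of several SQs */

        if (!sq_has_space(sq))
                return NETDEV_TX_BUSY;

        /* Write the frame descriptor straight to the send queue.
         * There is no QP state machine and no post_send()/poll_cq()
         * round trip; advancing the ring index is the only
         * bookkeeping required. */
        sq_write_desc(sq, skb);
        sq->head = (sq->head + 1) % sq->depth;
        return NETDEV_TX_OK;
}

Contrast that with IPoIB, where every transmit goes through ib_post_send() on a QP and the driver must poll a CQ to reclaim send resources.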

> And what is a virtual Ethernet switch? Is this perhaps packet
> forwarding by software? If so, why are virtual Ethernet switches needed since
> the Linux networking stack already supports packet forwarding?

Virtual Ethernet switches provide packet switching through the native OPA switches via OPA Virtual Fabrics (a tuple of path information including lid/pkey/sl/mtu).  This is not packet forwarding within the node.  A large advantage here is that the virtual switches are centrally managed by the EM (Ethernet Manager) in a very scalable way.  For example, the IPoIB configuration semantics such as multicast group join/create, Path Record queries, etc., are all eliminated, further reducing overhead.
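For illustration only, that per-port path tuple could be modeled as below; the struct and field names are hypothetical, not taken from the patch set:

#include <linux/types.h>

/* Hypothetical encoding of the OPA Virtual Fabric path tuple the EM
 * pushes to each virtual Ethernet switch port. */
struct opa_vnic_path {
        u32 dlid;  /* destination LID within the fabric */
        u16 pkey;  /* partition key scoping the virtual switch */
        u8  sl;    /* service level */
        u8  mtu;   /* encoded OPA MTU */
};

Because the EM installs these tuples centrally, a node never performs a multicast join/create or a Path Record query itself; it simply encapsulates the Ethernet frame and sends it to the dlid the EM assigned.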

Ira

