RE: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support

> -----Original Message-----
> From: Fischer, Anna [mailto:anna.fischer@xxxxxx]
> Sent: Saturday, November 08, 2008 3:10 AM
> To: Greg KH; Yu Zhao
> Cc: Matthew Wilcox; Anthony Liguori; H L; randy.dunlap@xxxxxxxxxx;
> grundler@xxxxxxxxxxxxxxxx; Chiang, Alexander; linux-pci@xxxxxxxxxxxxxxx;
> rdreier@xxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; jbarnes@xxxxxxxxxxxxxxxx;
> virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx; kvm@xxxxxxxxxxxxxxx;
> mingo@xxxxxxx; keir.fraser@xxxxxxxxxxxxx; Leonid Grossman;
> eddie.dong@xxxxxxxxx; jun.nakajima@xxxxxxxxx; avi@xxxxxxxxxx
> Subject: RE: [PATCH 0/16 v6] PCI: Linux kernel SR-IOV support
> 


> > But would such an api really take advantage of the new IOV
> > interfaces that are exposed by the new device type?
> 
> I agree with what Yu says. The idea is to have hardware capabilities to
> virtualize a PCI device in a way that those virtual devices can
> represent full PCI devices. The advantage of that is that those virtual
> devices can then be used like any other standard PCI device, meaning we
> can use existing OS tools, configuration mechanisms etc. to start
> working with them. Also, when using a virtualization-based system, e.g.
> Xen or KVM, we do not need to introduce new mechanisms to make use of
> SR-IOV, because we can handle VFs as full PCI devices.
> 
> A virtual PCI device in hardware (a VF) can be as powerful or complex
> as you like, or it can be very simple. But the big advantage of SR-IOV
> is that hardware presents a complete PCI device to the OS - as opposed
> to some resources, or queues, that need specific new configuration and
> assignment mechanisms in order to use them with a guest OS (like, for
> example, VMDq or similar technologies).
> 
> Anna
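To make that concrete, below is a minimal sketch of how a PF driver
would switch its VFs on, assuming pci_enable_sriov()/pci_disable_sriov()
entry points in the PCI core (the exact interface in this patch series
may differ; example_pf_probe, example_pf_remove and NR_VFS are made-up
names). Once enabled, each VF enumerates as an ordinary pci_dev, which
is what lets the existing PCI tooling and assignment paths work
unchanged.

/* Minimal sketch, not code from the patch series. */
#include <linux/pci.h>

#define NR_VFS 4	/* example value; a real driver would consult TotalVFs */

static int example_pf_probe(struct pci_dev *pdev,
			    const struct pci_device_id *id)
{
	int err;

	err = pci_enable_device(pdev);
	if (err)
		return err;

	/*
	 * Ask the PCI core to enable NR_VFS Virtual Functions. Each VF
	 * then shows up on the bus as a regular PCI device that can be
	 * bound to a driver or assigned to a guest like any other.
	 */
	err = pci_enable_sriov(pdev, NR_VFS);
	if (err)
		pci_disable_device(pdev);
	return err;
}

static void example_pf_remove(struct pci_dev *pdev)
{
	pci_disable_sriov(pdev);	/* tear the VFs down again */
	pci_disable_device(pdev);
}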


Ditto.
Taking the netdev interface as an example: a queue pair is a great way
to scale across CPU cores in a single OS image, but it is just not a
good way to share a device across multiple OS images.
The best unit of virtualization is a VF that is implemented as a
complete netdev PCI device (not a subset of a PCI device).
This way, native netdev device drivers can work for direct hw access to
a VF "as is", and most/all Linux networking features (including VMQ)
will work in a guest.
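As a rough illustration of that point (everything named example_* here
is hypothetical, not taken from any existing driver): a VF driver's
probe path is the same boilerplate as any other PCI NIC driver, built
on the standard netdev allocation and registration calls.

#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

struct example_vf_priv {
	void __iomem *regs;	/* device MMIO, mapped during probe */
};

static int example_vf_probe(struct pci_dev *pdev,
			    const struct pci_device_id *id)
{
	struct net_device *netdev;
	int err;

	err = pci_enable_device(pdev);
	if (err)
		return err;

	/* A VF is just another PCI NIC: allocate and register a netdev. */
	netdev = alloc_etherdev(sizeof(struct example_vf_priv));
	if (!netdev) {
		pci_disable_device(pdev);
		return -ENOMEM;
	}
	SET_NETDEV_DEV(netdev, &pdev->dev);

	/* ... map BARs, read the MAC address, hook up open/xmit as usual ... */

	err = register_netdev(netdev);
	if (err) {
		free_netdev(netdev);
		pci_disable_device(pdev);
	}
	return err;
}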
Also, guest migration for netdev interfaces (both direct and virtual)
can be supported via a native Linux mechanism (the bonding driver),
while Dom0 can retain "veto power" over any guest direct-interface
operation it deems privileged (VLAN, MAC address, promiscuous mode,
bandwidth allocation between VFs, etc.).
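Purely as a hypothetical sketch of that "veto power" (none of these
names exist in this patch series): the PF driver in Dom0 terminates a
VF-to-PF request channel and applies a host-configured policy before
acting on anything it considers privileged.

#include <linux/errno.h>
#include <linux/types.h>

/* Requests a VF might send to its PF over a mailbox (illustrative). */
enum vf_request {
	VF_REQ_SET_MAC,
	VF_REQ_SET_VLAN,
	VF_REQ_SET_PROMISC,
};

/* Per-VF policy configured by the Dom0 administrator (illustrative). */
struct vf_policy {
	bool may_set_mac;
	bool may_set_vlan;
	bool may_set_promisc;
};

static int pf_handle_vf_request(const struct vf_policy *policy,
				enum vf_request req)
{
	switch (req) {
	case VF_REQ_SET_MAC:
		return policy->may_set_mac ? 0 : -EPERM;
	case VF_REQ_SET_VLAN:
		return policy->may_set_vlan ? 0 : -EPERM;
	case VF_REQ_SET_PROMISC:
		return policy->may_set_promisc ? 0 : -EPERM;
	}
	return -EINVAL;	/* unknown requests are refused as well */
}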
 
Leonid
--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
