On Wed, Jan 11, 2012 at 05:21:53PM +0000, Stefan Hajnoczi wrote:
> On Wed, Jan 11, 2012 at 3:39 PM, Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
> > On Wed, Jan 11, 2012 at 02:28:48PM +0000, Stefan Hajnoczi wrote:
> >> On Wed, Jan 11, 2012 at 9:10 AM, Benjamin Herrenschmidt
> >> <benh@xxxxxxxxxxxxxxxxxxx> wrote:
> >> > On Wed, 2012-01-11 at 08:47 +0000, Stefan Hajnoczi wrote:
> >> >>
> >> >> This is also an opportunity to stop using CPU physical addresses in
> >> >> the ring and instead perform DMA like a normal PCI device (use bus
> >> >> addresses).
> >> >
> >> > Euh why?
> >>
> >> Because it's a paravirt hack that ends up hitting corner cases. It's
> >> not possible to do virtio-pci passthrough under nested virtualization
> >> unless we use an IOMMU. Imagine passing virtio-net from L0 into the
> >> L2 guest (i.e. PCI passthrough). If virtio-pci is really "PCI" this
> >> should be possible, but it's not when we use physical addresses instead
> >> of bus addresses.
> >>
> >> Stefan
> >
> > It won't be hard to show a significant performance regression if
> > we do this. Hard to justify for something as niche as nested virt.
>
> For x86 this should be mostly a nop.

Maybe it should, but AFAIK it isn't.

> For ppc and SPARC architectures maybe you're right. I still think
> it's a design flaw, because if virtio v2 doesn't use bus addresses then
> it will simply not be possible to do passthrough for nested virt and
> other cases we haven't hit yet.
>
> Stefan

virtio-pci does not implement things like SR-IOV or FLR, so it won't work
anyway. If we ever fix this, and if we really want to pass through a
virtio device (why?), using an emulated IOMMU seems silly - we probably
want a PV IOMMU as well.

--
MST
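
To make the physical-vs-bus-address distinction in the thread concrete, here is
a minimal kernel-style C sketch. It is not code from this thread, and both
function names are illustrative: a conventional PCI driver maps buffers through
the Linux DMA API and hands the device a bus address that an IOMMU can
translate, whereas virtio has historically put the raw CPU physical address
straight into the ring.

#include <linux/dma-mapping.h>
#include <linux/pci.h>
#include <linux/io.h>

/* Conventional PCI path (sketch): the descriptor carries a bus (DMA)
 * address obtained from the DMA API, so any IOMMU between the device
 * and memory can translate it. */
static dma_addr_t map_like_real_pci(struct pci_dev *pdev, void *buf,
				    size_t len)
{
	dma_addr_t bus_addr = dma_map_single(&pdev->dev, buf, len,
					     DMA_TO_DEVICE);
	if (dma_mapping_error(&pdev->dev, bus_addr))
		return 0;	/* mapping failed */
	return bus_addr;	/* valid behind an IOMMU */
}

/* The virtio-pci shortcut under discussion (sketch): the descriptor
 * carries the CPU physical address directly, which only works while
 * the device and the guest share one address space - i.e. no IOMMU
 * translation in between, hence the passthrough problem Stefan raises. */
static u64 map_like_virtio(void *buf)
{
	return (u64)virt_to_phys(buf);
}

The second path is what makes virtio cheap (no per-buffer map/unmap) but also
what breaks once a real or nested IOMMU sits between the "device" and guest
memory, since nothing ever told the IOMMU about those addresses.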