On Fri, 21 Feb 2020 14:27:27 +1100
David Gibson <david@xxxxxxxxxxxxxxxxxxxxx> wrote:

> On Thu, Feb 20, 2020 at 05:31:35PM +0100, Christoph Hellwig wrote:
> > On Thu, Feb 20, 2020 at 05:23:20PM +0100, Christian Borntraeger wrote:
> > > From a user's perspective it makes absolutely perfect sense to use the
> > > bounce buffers when they are NEEDED.
> > > Forcing the user to specify iommu_platform just because you need bounce buffers
> > > really feels wrong. And obviously we have a severe performance issue
> > > because of the indirections.
> >
> > The point is that the user should not have to specify iommu_platform.
> > We need to make sure any new hypervisor (especially one that might require
> > bounce buffering) always sets it,
>
> So, I have draft qemu patches which enable iommu_platform by default.
> But that's really because of other problems with !iommu_platform, not
> anything to do with bounce buffering or secure VMs.
>
> The thing is that the hypervisor *doesn't* require bounce buffering.
> In the POWER (and maybe s390 as well) models for Secure VMs, it's the
> *guest*'s choice to enter secure mode, so the hypervisor has no reason
> to know whether the guest needs bounce buffering. As far as the
> hypervisor and qemu are concerned that's a guest internal detail, it
> just expects to get addresses it can access whether those are GPAs
> (iommu_platform=off) or IOVAs (iommu_platform=on).

I very much agree!

> > as was a rather bogus legacy hack
>
> It was certainly a bad idea, but it was a bad idea that went into a
> public spec and has been widely deployed for many years. We can't
> just pretend it didn't happen and move on.
>
> Turning iommu_platform=on by default breaks old guests, some of which
> we still care about. We can't (automatically) do it only for guests
> that need bounce buffering, because the hypervisor doesn't know that
> ahead of time.

Turning iommu_platform=on for virtio-ccw makes no sense whatsoever,
because for CCW I/O there is no such thing as an IOMMU and the
addresses are always physical addresses.

> > that isn't extensible for cases that for example require bounce buffering.
>
> In fact bounce buffering isn't really the issue from the hypervisor
> (or spec's) point of view. It's the fact that not all of guest memory
> is accessible to the hypervisor. Bounce buffering is just one way the
> guest might deal with that.

Agreed.

Regards,
Halil
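
P.S. For readers joining the thread: the guest-side decision under
discussion lives in the virtio ring code. Below is a minimal sketch,
simplified from the logic in drivers/virtio/virtio_ring.c (not the
verbatim kernel code, and ignoring special cases such as Xen), of how
the guest decides whether virtio buffers go through the DMA API, and
thus through the swiotlb bounce buffers on a protected guest:

#include <linux/virtio.h>
#include <linux/virtio_config.h>

/*
 * Sketch: should this device's buffer addresses go through the DMA API?
 *
 * Only when the device offers VIRTIO_F_IOMMU_PLATFORM (renamed
 * VIRTIO_F_ACCESS_PLATFORM in later spec revisions; exposed by QEMU's
 * iommu_platform=on) does the guest treat the addresses it hands to
 * the device as IOVAs and map them via the DMA API. On a protected
 * guest, the DMA API path is also what routes buffers through the
 * swiotlb bounce buffers. Without the feature bit, the device is
 * handed raw guest physical addresses (GPAs).
 */
static bool vring_use_dma_api(struct virtio_device *vdev)
{
	if (virtio_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM))
		return true;	/* IOVAs: translation and/or bouncing */

	/* Legacy behaviour: the device sees GPAs directly. */
	return false;
}

This is why the feature bit conflates two things: it tells the device
"addresses are IOVAs", and for an unmodified guest it is also the only
lever that makes the virtio core use the DMA API, which the
bounce-buffering case needs even though no IOMMU is involved.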