On Tue, Aug 13, 2019 at 08:45:37AM -0700, Ram Pai wrote:
> On Wed, Aug 14, 2019 at 12:24:39AM +1000, David Gibson wrote:
> > On Tue, Aug 13, 2019 at 03:26:17PM +0200, Christoph Hellwig wrote:
> > > On Mon, Aug 12, 2019 at 07:51:56PM +1000, David Gibson wrote:
> > > > AFAICT we already kind of abuse this for VIRTIO_F_IOMMU_PLATFORM,
> > > > because to handle cases where it *is* a device limitation, we
> > > > assume that if the hypervisor presents VIRTIO_F_IOMMU_PLATFORM then
> > > > the guest *must* select it.
> > > >
> > > > What we actually need here is for the hypervisor to present
> > > > VIRTIO_F_IOMMU_PLATFORM as available, but not required. Then we need
> > > > a way for the platform core code to communicate to the virtio driver
> > > > that *it* requires the IOMMU to be used, so that the driver can
> > > > select the feature bit or not on that basis.
> > >
> > > I agree with the above, but that just brings us back to the original
> > > issue - the whole bypass of the DMA OPS should be an option that the
> > > device can offer, not the other way around. And we really need to
> > > fix that root cause instead of doctoring around it.
> >
> > I'm not exactly sure what you mean by "device" in this context. Do
> > you mean the hypervisor (qemu) side implementation?
> >
> > You're right that this was the wrong way around to begin with, but as
> > well as being hard to change now, I don't see how it really addresses
> > the current problem. The device could default to IOMMU and allow
> > bypass, but the driver would still need to get information from the
> > platform to know that it *can't* accept that option in the case of a
> > secure VM. Reversed sense, but the same basic problem.
> >
> > The hypervisor does not, and cannot, be aware of the secure VM
> > restrictions - only the guest-side platform code knows that.
>
> This statement is almost entirely right. I will rephrase it to make it
> entirely right.
>
> The hypervisor does not, and cannot, be aware of the secure VM
> requirement that it needs to do some special processing that has
> nothing to do with DMA address translation - only the guest-side
> platform code knows that.
>
> BTW: I do not consider 'bounce buffering' to be 'DMA address
> translation'. DMA address translation translates a CPU address to a
> DMA address. Bounce buffering moves the data from one buffer at a
> given CPU address to another buffer at a different CPU address.
> Unfortunately the current DMA ops conflate the two. The need to do
> DMA address translation is something the device can enforce. But the
> need to do bounce buffering is something the device should not be
> aware of; it should be entirely a decision made locally by the
> kernel/driver in the secure VM.

Christoph,

Since we have not heard back from you, I am not sure where you stand
on this issue now. One of three things is possible:

(a) our explanation above did not make sense, and hence you decided to
    ignore it.
(b) our explanation above made some sense, and you need more time to
    think and respond.
(c) you totally forgot about this.

I hope it is (b). We want a solution that works for everyone, and your
input is important to us.

Thanks,
RP

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization