On Mon, Aug 8, 2022 at 8:46 AM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
>
> On Mon, Aug 08, 2022 at 11:18:50AM +0100, Will Deacon wrote:
> > Hi Michael,
> >
> > On Sun, Aug 07, 2022 at 09:14:43AM -0400, Michael S. Tsirkin wrote:
> > > Will, thanks very much for the analysis and the writeup!
> >
> > No problem, and thanks for following up.
> >
> > > On Fri, Aug 05, 2022 at 07:11:06PM +0100, Will Deacon wrote:
> > > > So how should we fix this? One possibility is for us to hack crosvm to
> > > > clear the VIRTIO_F_ACCESS_PLATFORM flag when setting the vhost features,
> > > > but others here have reasonably pointed out that they didn't expect a
> > > > kernel change to break userspace. On the flip side, the offending commit
> > > > in the kernel isn't exactly new (it's from the end of 2020!) and so it's
> > > > likely that others (e.g. QEMU) are using this feature.
> > >
> > > Exactly, that's the problem.
> > >
> > > vhost is reusing the virtio bits and it's only natural that
> > > what you are doing would happen.
> > >
> > > To be precise, this is what we expected people to do (and what QEMU does):
> > >
> > > #define QEMU_VHOST_FEATURES ((1ULL << VIRTIO_F_VERSION_1) |
> > >                              (1ULL << VIRTIO_NET_F_MRG_RXBUF) | .... )
> > >
> > > VHOST_GET_FEATURES(... &host_features);
> > > host_features &= QEMU_VHOST_FEATURES;
> > > VHOST_SET_FEATURES(host_features & guest_features);
> > >
> > > Here QEMU_VHOST_FEATURES are the bits userspace knows about.
> > >
> > > Our assumption was that whatever userspace enables, it
> > > knows what the effect on vhost is going to be.
> > >
> > > But yes, I absolutely understand how someone would instead just use the
> > > guest features. It is unfortunate that we did not catch this in time.
> > >
> > > In hindsight, we should have just created vhost-level macros
> > > instead of reusing the virtio ones. That would address the concern
> > > about naming: ACCESS_PLATFORM makes sense for the
> > > guest, since there it means "whatever access rules the platform has",
> > > but for vhost a better name would be VHOST_F_IOTLB.
> > > We should have also taken greater pains to document what
> > > we expect userspace to do. I remember now how I thought about something
> > > like this, but after coding it up in QEMU I forgot to document it :(
> > > Also, I suspect that given the history the GET/SET features ioctl is
> > > burned wrt extending it and we will have to use a new one when we add
> > > new features. All this we can do going forward.
> >
> > Makes sense. The crosvm developers are also pretty friendly in my
> > experience, so I'm sure they wouldn't mind being involved in discussions
> > around any future ABI extensions. Just be aware that they _very_ recently
> > moved their mailing lists, so I think it lives here now:
> >
> > https://groups.google.com/a/chromium.org/g/crosvm-dev
> >
> > > But what can we do about the specific issue?
> > > I am not 100% sure since, as Will points out, QEMU and other
> > > userspace already rely on the current behaviour.
> > >
> > > Looking at QEMU specifically, it always sends some translations at
> > > startup, in order to handle the device rings.
> > >
> > > So *maybe* we can get away with assuming that if no IOTLB ioctl was
> > > ever invoked then this userspace does not know about IOTLB and
> > > translation should ignore the IOTLB completely.
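To make the negotiation flow MST sketched above concrete, here is a minimal
C version of what a VMM is expected to do, assuming an already-open vhost
fd; negotiate_features() and MY_VHOST_FEATURES are illustrative names, not
QEMU's actual code:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>
    #include <linux/virtio_config.h>
    #include <linux/virtio_net.h>

    /* Only the feature bits this userspace actually understands.  Note
     * that VIRTIO_F_ACCESS_PLATFORM is deliberately absent: a VMM should
     * only set it if it is prepared to program an IOTLB into the kernel. */
    #define MY_VHOST_FEATURES ((1ULL << VIRTIO_F_VERSION_1) | \
                               (1ULL << VIRTIO_NET_F_MRG_RXBUF))

    static int negotiate_features(int vhost_fd, uint64_t guest_features)
    {
            uint64_t features;

            if (ioctl(vhost_fd, VHOST_GET_FEATURES, &features) < 0)
                    return -1;

            /* Mask to the bits we know about, then intersect with what
             * the guest negotiated -- never forward guest features
             * blindly, which is exactly the pitfall crosvm hit. */
            features &= MY_VHOST_FEATURES;
            features &= guest_features;

            return ioctl(vhost_fd, VHOST_SET_FEATURES, &features);
    }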
> >
> > There was a similar suggestion from Stefano:
> >
> >   https://lore.kernel.org/r/20220806105225.crkui6nw53kbm5ge@sgarzare-redhat
> >
> > about spotting the backend ioctl for IOTLB and using that to enable
> > the negotiation of F_ACCESS_PLATFORM. Would that work for qemu?
>
> Hmm, I would worry that this disables the feature for old QEMU :(
>
> > > I am a bit nervous about breaking some *other* userspace which actually
> > > wants the device to be blocked from accessing memory until the IOTLB
> > > has been set up. If we get it wrong we are making the guest
> > > and possibly even the host vulnerable.
> > > And of course just reverting is not an option either, since there
> > > are now whole stacks depending on the feature.
> >
> > Absolutely, I'm not seriously suggesting the revert. I just did it locally
> > to confirm the issue I was seeing.
> >
> > > Will, I'd like your input on whether you feel a hack in the kernel
> > > is justified here.
> >
> > If we can come up with something that we have confidence in and that
> > won't be a pig to maintain, then I think we should do it, but otherwise
> > we can go ahead and change crosvm to mask out this feature flag on the
> > vhost side for now. We mainly wanted to raise the issue to illustrate
> > that this flag continues to attract problems, in the hope that it might
> > inform further usage and/or spec work in this area.
> >
> > In any case, I'm happy to test any kernel patches with our setup if you
> > want to give it a shot.
>
> Thanks!
> I'm a bit concerned that the trick I proposed changes the configuration
> where the IOTLB was not set up from "access to memory not allowed" to
> "access to all memory allowed". That just might have security
> implications if some application assumed the former.
> And the one Stefano proposed disables the IOTLB for old QEMU versions.

Adding hacks to vhost in order to work around userspace applications that
misunderstand the vhost model seems like it will lead to problems.

Userspace applications need to follow the vhost model: vhost is designed
for virtqueue passthrough, but the rest of the vhost interface is not
suitable for passthrough. It's similar to how VFIO PCI passthrough needs
to do a significant amount of work in userspace to emulate a PCI
configuration space, and it won't work properly if you pass through the
physical PCI device's configuration space directly. The emulator has to
mediate between the guest device and the vhost device because it still
emulates the VIRTIO transport, configuration space, device lifecycle,
etc., even when all the virtqueues are passed through.

Let's document this for vhost and vDPA, because it is not obvious.

Stefan
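For completeness, the crosvm-side workaround Will mentions (masking the
flag before handing features to vhost) would look roughly like this in C;
this is a sketch rather than crosvm's actual Rust code, and
set_vhost_features() is an illustrative name:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>
    #include <linux/virtio_config.h>

    static int set_vhost_features(int vhost_fd, uint64_t guest_features)
    {
            uint64_t features = guest_features;

            /* For vhost this bit means "use the IOTLB for translation",
             * not "the platform has an IOMMU", so a VMM that never
             * programs an IOTLB must strip it before VHOST_SET_FEATURES. */
            features &= ~(1ULL << VIRTIO_F_ACCESS_PLATFORM);

            return ioctl(vhost_fd, VHOST_SET_FEATURES, &features);
    }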