[Cc+ Steve, libvirt, Daniel, Laine]

On Tue, 20 Sep 2022 16:56:42 -0300
Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:

> On Tue, Sep 13, 2022 at 09:28:18AM +0200, Eric Auger wrote:
> > Hi,
> >
> > On 9/13/22 03:55, Tian, Kevin wrote:
> > > We didn't close the open of how to get this merged in LPC due to the
> > > audio issue. Then let's use mails.
> > >
> > > Overall there are three options on the table:
> > >
> > > 1) Require vfio-compat to be 100% compatible with vfio-type1
> > >
> > > Probably not a good choice given the amount of work to fix the
> > > remaining gaps. And this will block support of new IOMMU features
> > > for a longer time.
> > >
> > > 2) Leave vfio-compat as what it is in this series
> > >
> > > Treat it as a vehicle to validate the iommufd logic instead of
> > > immediately replacing vfio-type1. Functionally most vfio applications
> > > can work w/o change if putting aside the difference on locked mm
> > > accounting, p2p, etc.
> > >
> > > Then work on new features and 100% vfio-type1 compat. in parallel.
> > >
> > > 3) Focus on iommufd native uAPI first
> > >
> > > Require vfio_device cdev and adoption in Qemu. Only for new vfio app.
> > >
> > > Then work on new features and vfio-compat in parallel.
> > >
> > > I'm fine with either 2) or 3). Per a quick chat with Alex he prefers
> > > to 3).
> >
> > I am also inclined to pursue 3) as this was the initial Jason's
> > guidance and pre-requisite to integrate new features. In the past we
> > concluded vfio-compat would mostly be used for testing purpose. Our
> > QEMU integration fully is based on device based API.
>
> There are some poor chicken and egg problems here.
>
> I had some assumptions:
>  a - the vfio cdev model is going to be iommufd only
>  b - any uAPI we add as we go along should be generally useful going
>      forward
>  c - we should try to minimize the 'minimally viable iommufd' series
>
> The compat as it stands now (eg #2) is threading this needle. Since it
> can exist without cdev it means (c) is made smaller, to two series.
>
> Since we add something useful to some use cases, eg DPDK is deployable
> that way, (b) is OK.
>
> If we focus on a strict path with 3, and avoid adding non-useful code,
> then we have to have two more (unwritten!) series beyond where we are
> now - vfio group compartmentalization, and cdev integration, and the
> initial (c) will increase.
>
> 3 also has us merging something that currently has no usable
> userspace, which I also do dislike alot.
>
> I still think the compat gaps are small. I've realized that
> VFIO_DMA_UNMAP_FLAG_VADDR has no implementation in qemu, and since it
> can deadlock the kernel I propose we purge it completely.

Steve won't be happy to hear that, QEMU support exists but isn't yet
merged.

> P2P is ongoing.
>
> That really just leaves the accounting, and I'm still not convinced at
> this must be a critical thing. Linus's latest remarks reported in lwn
> at the maintainer summit on tracepoints/BPF as ABI seem to support
> this. Let's see an actual deployed production configuration that would
> be impacted, and we won't find that unless we move forward.

I'll try to summarize the proposed change so that we can get better
advice from libvirt folks, or potentially anyone else managing locked
memory limits for device assignment VMs.

Background: when a DMA range, ex. guest RAM, is mapped to a vfio device,
we use the system IOMMU to provide GPA to HPA translation for assigned
devices.
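For reference, establishing such a mapping through the current type1
uAPI looks roughly like the sketch below; the container setup, function
name, and error handling are illustrative only, assuming a container fd
with a group attached and the type1 IOMMU already enabled:

/*
 * Illustrative sketch: map a range of guest RAM (a process virtual
 * address range) to a guest-physical IOVA via the legacy vfio type1
 * interface.  Assumes container_fd is an open /dev/vfio/vfio fd with a
 * group attached and VFIO_TYPE1_IOMMU enabled; errors not handled.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int map_guest_ram(int container_fd, void *hva, uint64_t gpa,
			 uint64_t size)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (uint64_t)(uintptr_t)hva,  /* VA backing guest RAM */
		.iova  = gpa,                       /* GPA the device will use */
		.size  = size,
	};

	/*
	 * The kernel pins the pages backing [hva, hva + size) for the
	 * lifetime of the mapping; those pinned pages are what the
	 * locked memory accounting discussed below applies to.
	 */
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}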
Unlike CPU page tables, we don't generally have a means to demand fault
these translations; therefore the memory target of the translation is
pinned so that it cannot be swapped or relocated, ie. to guarantee the
translation is always valid.

The issue is where we account these pinned pages; accounting is
necessary so that a user cannot lock an arbitrary number of pages into
RAM to mount a DoS attack. Duplicate accounting should be resolved by
iommufd, but is outside the scope of this discussion.

Currently, vfio tests mm_struct.locked_vm against rlimit(RLIMIT_MEMLOCK),
which reads task->signal->rlim[limit].rlim_cur, where task is the
current process. This is the same limit set via the setrlimit syscall
used by prlimit(1) and reported via 'ulimit -l'.

Note that in both cases above we're dealing with a task, or process,
limit, and both the prlimit and ulimit man pages describe them as such.

iommufd supposes instead, and references existing kernel
implementations, that despite the descriptions above these limits are
actually meant to be user limits; it therefore charges pinned pages
against user_struct.locked_vm and also marks them in
mm_struct.pinned_vm.

The proposed algorithm is to read the _task_ locked memory limit, then
attempt to charge the _user_ locked_vm, such that user_struct.locked_vm
cannot exceed the task locked memory limit.

This obviously has implications. AFAICT, any management tool that
doesn't instantiate assigned device VMs under separate users is
essentially untenable. For example, if we launch VM1 under userA and
set a locked memory limit of 4GB via prlimit to account for an assigned
device, that works fine until we launch VM2 from userA as well. In that
case we can't simply set a 4GB limit on the VM2 task because there's
already 4GB charged against user_struct.locked_vm for VM1. So we'd need
to set the VM2 task limit to 8GB to be able to launch VM2. But not only
that, we'd need to go back and also set VM1's task limit to 8GB or else
it will fail if a DMA mapped memory region is transient and needs to be
re-mapped.

Effectively, any task under the same user and requiring pinned memory
needs to have a locked memory limit set, and updated, to account for
all tasks using pinned memory by that user.

How does this affect known current use cases of locked memory
management for assigned device VMs? Does qemu://system by default
sandbox VMs into per-VM uids, or do they all run as the qemu user by
default? I imagine qemu://session mode is pretty screwed by this, but I
also don't know who/where lifts the locked memory limits for such VMs.
Boxes, which I think now supports assigned device VMs, could also be
affected.

> So, I still like 2 because it yields the smallest next step before we
> can bring all the parallel work onto the list, and it makes testing
> and converting non-qemu stuff easier even going forward.

If a vfio compatible interface isn't transparently compatible, then I
have a hard time understanding its value.

Please correct my above description and implications, but I suspect
these are not just theoretical ABI compat issues.  Thanks,

Alex