On Wed, Dec 13, 2023 at 10:23:13AM -0700, Alex Williamson wrote:
> On Tue, 12 Dec 2023 17:06:39 -0800
> Keith Busch <kbusch@xxxxxxxxxx> wrote:
>
> > I was examining an issue where a user process utilizing vfio is hitting
> > the RLIMIT_MEMLOCK limit during an ioctl(VFIO_IOMMU_MAP_DMA) call. The
> > amount of memory, though, should have been well below the memlock limit.
> >
> > The test maps the same address range to multiple devices. Each time the
> > same address range is mapped to another device, the lock count
> > increases, creating a multiplier on the memory lock accounting, which
> > was unexpected to me.
> >
> > Another strange thing: /proc/PID/status shows VmLck is indeed
> > increasing toward the limit, but /proc/PID/smaps shows that nothing has
> > been locked.
> >
> > The mlock() syscall doesn't doubly account for previously locked ranges
> > when asked to lock them again, so I was initially expecting the same
> > behavior with vfio since they subscribe to the same limit.
> >
> > So a few initial questions:
> >
> > Is there a reason vfio doubly accounts for the locked pages for
> > each device they're mapped to?
> >
> > Is the discrepancy in how much memory is reported locked, depending on
> > which source I consult, expected?
>
> Locked page accounting is at the vfio container level, and those
> containers are unaware of other containers owned by the same process,
> so unfortunately this is expected. IOMMUFD resolves this by having
> multiple IO address spaces within the same iommufd context.

Thanks for the reply! Sounds like I need to better familiarize myself
with iommufd. :)

> I don't know the reason smaps is not showing what you expect or if it
> should.

Thanks, it was just unexpected, but not hugely concerning right now.

Not sure if anyone cares, but I think a process could exceed the ulimit
by locking different ranges through vfio and mlock, since their
accounting is done differently.
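For anyone who wants to reproduce the status-vs-smaps comparison discussed
above, here is a minimal sketch (assuming Linux and a nonzero
RLIMIT_MEMLOCK; the helper names are my own) that mlocks one page and then
reads both accounting sources. Unlike vfio page pinning, a plain mlock()
should be reflected in both VmLck in /proc/self/status and the per-VMA
Locked fields in /proc/self/smaps, which is what makes the vfio behavior
stand out:

```python
import ctypes
import mmap

# CDLL(None) resolves symbols from the already-loaded C library on Linux.
libc = ctypes.CDLL(None, use_errno=True)

def read_vmlck_kb():
    """Parse the VmLck field (locked memory, in kB) from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmLck:"):
                return int(line.split()[1])
    return 0

def read_smaps_locked_kb():
    """Sum the per-VMA Locked: fields (in kB) across /proc/self/smaps."""
    total = 0
    with open("/proc/self/smaps") as f:
        for line in f:
            if line.startswith("Locked:"):
                total += int(line.split()[1])
    return total

if __name__ == "__main__":
    # mlock a single anonymous page, then compare the two sources.
    length = mmap.PAGESIZE
    buf = mmap.mmap(-1, length)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    if libc.mlock(ctypes.c_void_p(addr), ctypes.c_size_t(length)) != 0:
        print("mlock failed (RLIMIT_MEMLOCK may be 0); comparing anyway")

    print("VmLck  (/proc/self/status):", read_vmlck_kb(), "kB")
    print("Locked (/proc/self/smaps): ", read_smaps_locked_kb(), "kB")
```

With vfio, by contrast, the DMA-mapped pages inflate the VmLck-style
counter (once per container) while smaps stays flat, which is the
discrepancy described in the thread.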