Re: [PATCH RFC 04/12] kernel/user: Allow user::locked_vm to be usable for iommufd

On Thu, 24 Mar 2022 19:27:39 -0300
Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:

> On Thu, Mar 24, 2022 at 02:40:15PM -0600, Alex Williamson wrote:
> > On Tue, 22 Mar 2022 13:15:21 -0300
> > Jason Gunthorpe via iommu <iommu@xxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> >   
> > > On Tue, Mar 22, 2022 at 09:29:23AM -0600, Alex Williamson wrote:
> > >   
> > > > I'm still picking my way through the series, but the later compat
> > > > interface doesn't mention this difference as an outstanding issue.
> > > > Doesn't this difference need to be accounted in how libvirt manages VM
> > > > resource limits?      
> > > 
> > > AFAICT, no, but it should be checked.
> > >   
> > > > AIUI libvirt uses some form of prlimit(2) to set process locked
> > > > memory limits.    
> > > 
> > > Yes, and ulimit does work fully. prlimit adjusts the value:
> > > 
> > > int do_prlimit(struct task_struct *tsk, unsigned int resource,
> > > 		struct rlimit *new_rlim, struct rlimit *old_rlim)
> > > {
> > > 	rlim = tsk->signal->rlim + resource;
> > > [..]
> > > 		if (new_rlim)
> > > 			*rlim = *new_rlim;
> > > 
> > > Which vfio reads back here:
> > > 
> > > drivers/vfio/vfio_iommu_type1.c:        unsigned long pfn, limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> > > drivers/vfio/vfio_iommu_type1.c:        unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> > > 
> > > And iommufd does the same read back:
> > > 
> > > 	lock_limit =
> > > 		task_rlimit(pages->source_task, RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> > > 	npages = pages->npinned - pages->last_npinned;
> > > 	do {
> > > 		cur_pages = atomic_long_read(&pages->source_user->locked_vm);
> > > 		new_pages = cur_pages + npages;
> > > 		if (new_pages > lock_limit)
> > > 			return -ENOMEM;
> > > 	} while (atomic_long_cmpxchg(&pages->source_user->locked_vm, cur_pages,
> > > 				     new_pages) != cur_pages);
> > > 
> > > So it does work essentially the same.  
> > 
> > Well, except for the part about vfio updating mm->locked_vm and iommufd
> > updating user->locked_vm, a per-process counter versus a per-user
> > counter.  prlimit specifically sets process resource limits, which get
> > reflected in task_rlimit.  
> 
> Indeed, but that is not how the majority of other things seem to
> operate.
> 
> > For example, let's say a user has two 4GB VMs and they're hot-adding
> > vfio devices to each of them, so libvirt needs to dynamically modify
> > the locked memory limit for each VM.  AIUI, libvirt would look at the
> > VM size and call prlimit to set that value.  If libvirt does this to
> > both VMs, then each has a task_rlimit of 4GB.  In vfio we add pinned
> > pages to mm->locked_vm, so this works well.  In the iommufd loop above,
> > we're comparing a per-task/process limit to a per-user counter.  So I'm
> > a bit lost how both VMs can pin their pages here.  
> 
> I don't know anything about libvirt - it seems strange to use a
> security-ish feature like ulimit but not security-isolate processes
> with real users.
> 
> But if it really does this then it really does this.
> 
> So at the very least VFIO container has to keep working this way.
> 
> The next question is if we want iommufd's own device node to work this
> way and try to change libvirt somehow. It seems libvirt will have to
> deal with this at some point as io_uring will trigger the same problem.
> 
> > > This whole area is a bit peculiar (eg mlock itself works differently),
> > > IMHO, but with most of the places doing pins voting to use
> > > user->locked_vm as the charge it seems the right path in today's
> > > kernel.  
> > 
> > The philosophy of whether it's ultimately a better choice for the
> > kernel aside, if userspace breaks because we're accounting in a
> > per-user pool rather than a per-process pool, then our compatibility
> > layer ain't so transparent.  
> 
> Sure, if it doesn't work it doesn't work. Let's be sure to clearly
> document what the compatibility issue is and then we have to keep it
> per-process.
> 
> And the same reasoning likely means I can't change RDMA either as qemu
> will break just as well when qemu uses rdma mode.
> 
> Which is pretty sucky, but it is what it is..

I added Daniel Berrangé to the cc list for my previous reply, hopefully
he can comment whether libvirt has the sort of user security model you
allude to above that maybe makes this a non-issue for this use case.
Unfortunately it's extremely difficult to prove that there are no such
use cases out there even if libvirt is ok.  Thanks,

Alex




