Hi Jason,

On Thu, 4 Mar 2021 13:54:02 -0400, Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:

> On Thu, Mar 04, 2021 at 09:46:03AM -0800, Jacob Pan wrote:
>
> > Right, I was assuming we have three use cases of IOASIDs:
> > 1. host supervisor SVA (not a concern, just one init_mm to bind)
> > 2. host user SVA, either one IOASID per process or perhaps some private
> > IOASID for private address space
> > 3. VM use for guest SVA, each IOASID is bound to a guest process
> >
> > My current cgroup proposal applies to #3 with IOASID_SET_TYPE_MM, which
> > is allocated by the new /dev/ioasid interface.
> >
> > For #2, I was thinking you can limit the host process via the PIDs
> > cgroup, i.e. limit fork. So the host IOASIDs are currently allocated
> > from the system pool with a quota chosen by iommu_sva_init() in my
> > patch; 0 means unlimited, use whatever is available.
> > https://lkml.org/lkml/2021/2/28/18
>
> Why do we need two pools?
>
> If PASIDs are limited then why does it matter how the PASID was
> allocated? Either the thing requesting it is below the limit, or it
> isn't.
>
You are right. It should be tracked per process regardless of whether it
is allocated by the user (/dev/ioasid) or indirectly by kernel drivers
during iommu_sva_bind_device(). We need to consolidate cases 2 and 3 and
decouple the cgroup from the IOASID set.

> For something like qemu I'd expect to put the qemu process in a cgroup
> with 1 PASID. Who cares what qemu uses the PASID for, or how it was
> allocated?
>
For vSVA, we will need one PASID per guest process. But that is up to the
admin, based on whether and how many SVA-capable devices are directly
assigned.

> Jason

Thanks,

Jacob
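
P.S. To make the "one pool, one limit" point concrete, here is a minimal
sketch of the consolidated accounting I have in mind. Everything named
here is hypothetical (mm->pasid_count and ioasid_cgroup_limit() do not
exist in the current tree); the point is only that both allocation paths
charge the same per-mm counter against the same cgroup limit:

#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/mm_types.h>

/*
 * Charge one PASID to the mm. Both the /dev/ioasid path and
 * iommu_sva_bind_device() would call this before allocating, so the
 * limit applies no matter how the PASID was requested.
 */
static int ioasid_charge_mm(struct mm_struct *mm)
{
	if (atomic_inc_return(&mm->pasid_count) > ioasid_cgroup_limit(mm)) {
		atomic_dec(&mm->pasid_count);	/* roll back on over-limit */
		return -ENOSPC;
	}
	return 0;
}

/* Undo the charge when the PASID is freed. */
static void ioasid_uncharge_mm(struct mm_struct *mm)
{
	atomic_dec(&mm->pasid_count);
}

With something like this, the qemu case falls out naturally: the admin
sets the cgroup limit to 1, and it does not matter whether qemu gets its
PASID through /dev/ioasid or through a driver bind.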