On 1/8/25 12:05 PM, Simona Vetter wrote:
> On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
>>
>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>> From: Honglei Huang <Honglei1.Huang@xxxxxxx>
>>>>>
>>>>> A virtio-gpu userptr is based on an HMM notifier.
>>>>> It is used to let the host access guest userspace memory and
>>>>> to notice changes to that memory.
>>>>> This patch series is in a very early state; userspace pages
>>>>> are pinned currently to ensure that the host device memory
>>>>> operations are correct.
>>>>> The free and unmap operations for userspace memory are handled
>>>>> by an MMU notifier; this is a simple and basic SVM feature for
>>>>> this series.
>>>>> The physical PFN update operation is split into two OPs here.
>>>>> The evicted memory won't be used anymore but is remapped into
>>>>> the host again to achieve the same effect as hmm_range_fault.
>>>>
>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>
>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is no mmu
>>>>   notifier.
>>>>
>>>> - unpinned userptr where you entirely rely on userptr and do not hold any
>>>>   page references or page pins at all, for full SVM integration. This
>>>>   should use hmm_range_fault ideally, since that's the version that
>>>>   doesn't ever grab any page reference pins.
>>>>
>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>> page reference or a temporary page pin (which seems to be what you're
>>>> doing here). In much older kernels there was some justification for them,
>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>> now all sorted out. So there's really only fully pinned, or true svm, left
>>>> as clean design choices imo.
>>>>
>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>> you?
>>>
>>> +1 on using FOLL_LONGTERM. Fully dynamic memory management has a huge cost
>>> in complexity that pinning everything avoids. Furthermore, this avoids the
>>> host having to take action in response to guest memory reclaim requests.
>>> This avoids additional complexity (and thus attack surface) on the host
>>> side. Furthermore, since this is for ROCm and not for graphics, I am less
>>> concerned about supporting systems that require swappable GPU VRAM.
>>
>> Hi Sima and Demi,
>>
>> I totally agree that the FOLL_LONGTERM flag is needed; I will add it in
>> the next version.
>>
>> And for the first, pinned variant, the MMU notifier is also needed, I
>> think, because the userptr feature in the UMD is generally used like this:
>> registering a userptr is always explicitly invoked by user code, e.g.
>> "registerMemoryToGPU(userptrAddr, ...)", but there is no explicit API for
>> releasing/freeing the userptr, at least in the hsakmt/KFD stack. The user
>> just calls free(userptrAddr), and the kernel driver then releases the
>> userptr from its MMU notifier callback. Virtio-GPU has no other way to
>> know that the user has freed the userptr except for the MMU notifier, and
>> in the UMD there is no way to detect that free() was invoked. The only way
>> I can see is to use an MMU notifier in the virtio-GPU driver and free the
>> corresponding data on the host via virtio commands.
>>
>> And for the second way, using hmm_range_fault, there is a predictable
>> issue as far as I can see, at least in the hsakmt/KFD stack: the memory
>> may migrate while the GPU/device is working. On bare metal, when memory is
>> migrating, the KFD driver pauses the compute work of the device under
>> mmap_write_lock, uses hmm_range_fault to remap the migrated/evicted memory
>> to the GPU, and then resumes the compute work to ensure the data stays
>> correct. But with the virtio-GPU driver the migration happens in the guest
>> kernel and the eviction MMU notifier callback fires in the guest; a virtio
>> command can be used to notify the host, but without the protection of that
>> mmap_write_lock in the host kernel, the host will hold invalid data for a
>> short period of time, which may lead to issues. That is hard to fix as far
>> as I can see.
>>
>> I will extract some APIs into helpers according to your request, and I
>> will refactor the whole userptr implementation to use callbacks in the
>> page-getting path, so that either the pin method or hmm_range_fault can be
>> chosen in this series.
>
> Ok, so if this is for svm, then you need full blast hmm, or the semantics
> are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> not work.
>
> The other option is that hsakmt/kfd api is completely busted, and that's
> kinda not a kernel problem.
> -Sima

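(For concreteness: as far as I understand it, the fully unpinned path Sima
describes above would be built around an mmu_interval_notifier plus an
hmm_range_fault() retry loop, roughly as sketched below. The vgpu_userptr
naming is made up, and the virtio commands that would push new or
invalidated PFNs to the host, plus the driver lock that serializes the
notifier against the fault path, are omitted.)

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct vgpu_userptr {
        struct mmu_interval_notifier notifier;  /* registered earlier with
                                                 * mmu_interval_notifier_insert() */
        unsigned long start;
        unsigned long npages;
        unsigned long *pfns;    /* npages entries, HMM_PFN_* encoded */
};

/* Called by the core mm whenever the CPU page tables for the range change. */
static bool vgpu_userptr_invalidate(struct mmu_interval_notifier *mni,
                                    const struct mmu_notifier_range *range,
                                    unsigned long cur_seq)
{
        /*
         * Under the driver's notifier lock (not shown): mark the cached
         * PFNs stale and tell the host to stop using the range.
         */
        mmu_interval_set_seq(mni, cur_seq);
        return true;
}

static const struct mmu_interval_notifier_ops vgpu_userptr_ops = {
        .invalidate = vgpu_userptr_invalidate,
};

/* (Re)walk the range without taking any page references or page pins. */
static int vgpu_userptr_populate(struct vgpu_userptr *up, struct mm_struct *mm)
{
        struct hmm_range range = {
                .notifier      = &up->notifier,
                .start         = up->start,
                .end           = up->start + (up->npages << PAGE_SHIFT),
                .hmm_pfns      = up->pfns,
                .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
        };
        int ret;

        do {
                range.notifier_seq = mmu_interval_read_begin(&up->notifier);
                mmap_read_lock(mm);
                ret = hmm_range_fault(&range);
                mmap_read_unlock(mm);
                if (ret == -EBUSY)
                        continue;       /* range changed mid-walk, start over */
                if (ret)
                        return ret;
                /* Under the driver's notifier lock: send the fresh PFNs
                 * to the host. */
        } while (mmu_interval_read_retry(&up->notifier, range.notifier_seq));

        return 0;
}
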
On further thought, I believe the driver needs to migrate the pages to
device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
pin on them. The reason is that it isn’t possible to migrate these pages
back to "host" memory without unmapping them from the GPU. For the reasons
I mention in [1], I believe that temporarily revoking access to virtio-GPU
blob objects is not feasible. Instead, the pages must be treated as if they
are permanently in device memory until guest userspace unmaps them from the
GPU, after which they must be migrated back to host memory.

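A longterm pin along those lines would presumably just be the standard
pin_user_pages(FOLL_LONGTERM) pattern; here is a rough sketch under that
assumption (the function names are hypothetical, and the migration into
the blob object itself is not shown):

#include <linux/mm.h>
#include <linux/slab.h>

static long vgpu_userptr_pin(unsigned long uaddr, unsigned long npages,
                             struct page ***pages_out)
{
        struct page **pages;
        long pinned;

        pages = kvcalloc(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /* FOLL_LONGTERM first migrates the pages out of CMA/ZONE_MOVABLE. */
        pinned = pin_user_pages_fast(uaddr, npages,
                                     FOLL_WRITE | FOLL_LONGTERM, pages);
        if (pinned < 0) {
                kvfree(pages);
                return pinned;
        }
        if (pinned != npages) {
                unpin_user_pages(pages, pinned);
                kvfree(pages);
                return -EFAULT;
        }

        *pages_out = pages;
        return pinned;
}

/* Called once guest userspace has unmapped the range from the GPU. */
static void vgpu_userptr_unpin(struct page **pages, unsigned long npages)
{
        /* Mark the pages dirty: the device may have written to them. */
        unpin_user_pages_dirty_lock(pages, npages, true);
        kvfree(pages);
}

The important property is that the pin lasts for as long as the guest keeps
the range registered with the GPU, so the host never has to react to
guest-side reclaim.
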
The problems with other approaches are most obvious if one considers a Xen
guest using a virtio-GPU backend that is not all-powerful. Normal guest
memory is not accessible to the GPU, and Xen uses the IOMMU to enforce this
restriction. Therefore, the guest must migrate pages to virtio-GPU blob
objects before the GPU can access them. From Xen’s perspective, virtio-GPU
blob objects belong to the backend domain, so Xen allows the GPU to access
them. However, the pages in blob objects _cannot_ be used in Xen grant table
operations, because Xen doesn’t consider them to belong to the guest!
Similarly, if the guest has an assigned PCI device, that device will not be
able to access the blob object’s pages.

I’m no expert on Linux memory management, so I’m not sure how to implement
this behavior. What I _can_ say is that a blob object is I/O memory, and it
behaves somewhat like a PCI BAR in a system with no P2PDMA support: CPU
access works, but DMA from other devices does not. Furthermore, the memory
can’t be used for page tables or granted to other Xen guests, and it will
go away if the device is hot-unplugged. In fact, if the PCI transport is
used, the blob object is located in the BAR of an (emulated) device. There
are non-PCI transports, though, so assuming that blob objects are located
in a PCI BAR is not a good idea.

The reason that pinning the objects in "device" memory is a reasonable
approach is that the host (or backend, in the Xen case) can still migrate
pages between device and host memory and avoid allocating backing store for
pages that are never accessed. Therefore, it is not necessary for every CPU
access to go across the PCIe bus, even for dGPUs. Instead, if guest CPU
accesses are much more frequent than device accesses, the memory will be
migrated to the host side. It’s up to the virtio-GPU backend implementation
to make sure that this happens. For KVM, this should be automatic, but for
Xen, this might need additional Xen patches so that the backend domain is
notified when pages are accessed or dirtied.

[1]: https://lore.kernel.org/dri-devel/9572ba57-5552-4543-a3b0-6097520a12a3@xxxxxxxxx

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)