Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object

On Sat, Feb 08, 2025 at 05:44:14PM +0800, Huang, Honglei1 wrote:
> On 2025/2/8 10:43, Demi Marie Obenour wrote:
> > On Fri, Feb 07, 2025 at 09:30:45PM -0500, Demi Marie Obenour wrote:
> > > On Fri, Feb 07, 2025 at 07:07:11PM +0800, Huang, Honglei1 wrote:
> > > > On 2025/2/7 2:21, Demi Marie Obenour wrote:
> > > > > On Thu, Feb 06, 2025 at 06:53:55PM +0800, Huang, Honglei1 wrote:
> > > > > > On 2025/1/31 8:33, Demi Marie Obenour wrote:
> > > > > > > On Wed, Jan 29, 2025 at 03:54:59PM -0500, Demi Marie Obenour wrote:
> > > > > > > > On 1/8/25 12:05 PM, Simona Vetter wrote:
> > > > > > > > > On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
> > > > > > > > > > 
> > > > > > > > > > On 2024/12/22 9:59, Demi Marie Obenour wrote:
> > > > > > > > > > > On 12/20/24 10:35 AM, Simona Vetter wrote:
> > > > > > > > > > > > On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
> > > > > > > > > > > > > From: Honglei Huang <Honglei1.Huang@xxxxxxx>
> > > > > > > > > > > > > 
> > > > > > > > > > > > > A virtio-gpu userptr is based on an HMM notifier.
> > > > > > > > > > > > > It is used to let the host access guest userspace
> > > > > > > > > > > > > memory and to be notified of changes to that memory.
> > > > > > > > > > > > > This patch series is at a very early stage: userspace
> > > > > > > > > > > > > pages are currently pinned to ensure that the host's
> > > > > > > > > > > > > device memory operations are correct.
> > > > > > > > > > > > > The free and unmap operations for userspace memory are
> > > > > > > > > > > > > handled by the MMU notifier; this is a simple, basic
> > > > > > > > > > > > > SVM feature for this series.
> > > > > > > > > > > > > The physical PFN update operation is split into two
> > > > > > > > > > > > > ops here. The evicted memory is not used any more, but
> > > > > > > > > > > > > is remapped into the host again to achieve the same
> > > > > > > > > > > > > effect as hmm_range_fault.
> > > > > > > > > > > > 
> > > > > > > > > > > > So in my opinion there are two ways to implement userptr that make sense:
> > > > > > > > > > > > 
> > > > > > > > > > > > - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is no mmu
> > > > > > > > > > > >      notifier
> > > > > > > > > > > > 
> > > > > > > > > > > > - unpinned userptr where you entirely rely on mmu notifiers and do not
> > > > > > > > > > > >      hold any page references or page pins at all, for full SVM
> > > > > > > > > > > >      integration. This should use hmm_range_fault ideally, since that's
> > > > > > > > > > > >      the version that doesn't ever grab any page reference pins.
> > > > > > > > > > > > 
> > > > > > > > > > > > All the in-between variants are imo really bad hacks, whether they hold a
> > > > > > > > > > > > page reference or a temporary page pin (which seems to be what you're
> > > > > > > > > > > > doing here). In much older kernels there was some justification for them,
> > > > > > > > > > > > because strange stuff happened over fork(), but with FOLL_LONGTERM this is
> > > > > > > > > > > > now all sorted out. So there's really only fully pinned, or true svm left
> > > > > > > > > > > > as clean design choices imo.
> > > > > > > > > > > > 
> > > > > > > > > > > > With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
> > > > > > > > > > > > you?
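
For concreteness, the fully pinned variant described above boils down to
something like the sketch below. This is purely illustrative: the function
name and error handling are mine, not code from this series.

  #include <linux/mm.h>

  /* Fully pinned userptr: take a longterm pin up front, unpin when the
   * object is destroyed.  Illustrative only. */
  static int virtio_gpu_userptr_pin(unsigned long start, int npages,
                                    struct page **pages)
  {
          int pinned;

          pinned = pin_user_pages_fast(start, npages,
                                       FOLL_WRITE | FOLL_LONGTERM, pages);
          if (pinned < 0)
                  return pinned;
          if (pinned != npages) {
                  unpin_user_pages(pages, pinned);  /* partial pin */
                  return -EFAULT;
          }
          return 0;
  }

Teardown is then just unpin_user_pages() on the object's normal release path.
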
> > > > > > > > > > > 
> > > > > > > > > > > +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
> > > > > > > > > > > in complexity that pinning everything avoids.  Furthermore, this avoids the
> > > > > > > > > > > host having to take action in response to guest memory reclaim requests.
> > > > > > > > > > > This avoids additional complexity (and thus attack surface) on the host side.
> > > > > > > > > > > Furthermore, since this is for ROCm and not for graphics, I am less concerned
> > > > > > > > > > > about supporting systems that require swappable GPU VRAM.
> > > > > > > > > > 
> > > > > > > > > > Hi Sima and Demi,
> > > > > > > > > > 
> > > > > > > > > > I totally agree that the FOLL_LONGTERM flag is needed; I will add it in
> > > > > > > > > > the next version.
> > > > > > > > > > 
> > > > > > > > > > And for the first, pinned variant, I think the MMU notifier is still
> > > > > > > > > > needed, because the userptr feature is generally used by the UMD like
> > > > > > > > > > this: registering a userptr is always explicitly invoked by user code,
> > > > > > > > > > e.g. "registerMemoryToGPU(userptrAddr, ...)", but there is no explicit
> > > > > > > > > > API for releasing/freeing it, at least in the hsakmt/KFD stack. The user
> > > > > > > > > > just calls "free(userptrAddr)", and the kernel driver then releases the
> > > > > > > > > > userptr from its MMU notifier callback. Virtio-GPU has no other way to
> > > > > > > > > > know that the user has freed the userptr except the MMU notifier, and in
> > > > > > > > > > the UMD there is no way to tell that free() was invoked by the user. As
> > > > > > > > > > far as I can see, the only way is to use an MMU notifier in the
> > > > > > > > > > virtio-GPU driver and free the corresponding data on the host via some
> > > > > > > > > > virtio commands.
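
To make that concrete: the guest-side hook would be an mmu_interval_notifier
whose invalidate callback tells the host to drop the range. A minimal sketch,
in which the structure layout and the virtio_gpu_cmd_userptr_unmap() helper
are assumptions for illustration rather than code from this series:

  #include <linux/mmu_notifier.h>

  struct virtio_gpu_userptr {
          struct mmu_interval_notifier notifier;
          struct virtio_gpu_object *bo;
  };

  static bool virtio_gpu_userptr_invalidate(struct mmu_interval_notifier *mni,
                                            const struct mmu_notifier_range *range,
                                            unsigned long cur_seq)
  {
          struct virtio_gpu_userptr *uptr =
                  container_of(mni, struct virtio_gpu_userptr, notifier);

          if (!mmu_notifier_range_blockable(range))
                  return false;

          mmu_interval_set_seq(mni, cur_seq);
          /* On munmap()/free() of the range: tell the host to stop using it
           * and tear down the corresponding blob resource. */
          virtio_gpu_cmd_userptr_unmap(uptr->bo);        /* hypothetical */
          return true;
  }

  static const struct mmu_interval_notifier_ops virtio_gpu_userptr_ops = {
          .invalidate = virtio_gpu_userptr_invalidate,
  };

  /* At registration time:
   * mmu_interval_notifier_insert(&uptr->notifier, current->mm, start, size,
   *                              &virtio_gpu_userptr_ops);
   */
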
> > > > > > > > > > 
> > > > > > > > > > And for the second way, using hmm_range_fault, there is a predictable
> > > > > > > > > > issue as far as I can see, at least in the hsakmt/KFD stack: the memory
> > > > > > > > > > may migrate while the GPU/device is working. On bare metal, when memory
> > > > > > > > > > is migrating, the KFD driver pauses the device's compute work under
> > > > > > > > > > mmap_write_lock, uses hmm_range_fault to remap the migrated/evicted
> > > > > > > > > > memory to the GPU, and then resumes the compute work to keep the data
> > > > > > > > > > correct. But with the virtio-GPU driver the migration happens in the
> > > > > > > > > > guest kernel and the eviction MMU notifier callback fires in the guest.
> > > > > > > > > > A virtio command can be used to notify the host, but without
> > > > > > > > > > mmap_write_lock protection in the host kernel, the host will hold stale
> > > > > > > > > > data for a short period of time, which may lead to issues that are hard
> > > > > > > > > > to fix as far as I can see.
> > > > > > > > > > 
> > > > > > > > > > I will extract some of the APIs into helpers as you requested, and I
> > > > > > > > > > will refactor the whole userptr implementation to use callbacks in the
> > > > > > > > > > page-getting path, so that either the pin method or hmm_range_fault can
> > > > > > > > > > be chosen in this series.
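
For reference, the unpinned path would follow the standard hmm_range_fault
retry pattern from Documentation/mm/hmm.rst, roughly as below. This assumes
the illustrative struct from the earlier sketch grows start/size fields and
a lock serializing against its invalidate callback:

  #include <linux/hmm.h>
  #include <linux/mmu_notifier.h>

  static int virtio_gpu_userptr_get_pages(struct virtio_gpu_userptr *uptr,
                                          struct mm_struct *mm,
                                          unsigned long *pfns)
  {
          struct hmm_range range = {
                  .notifier      = &uptr->notifier,
                  .start         = uptr->start,
                  .end           = uptr->start + uptr->size,
                  .hmm_pfns      = pfns,
                  .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
          };
          int ret;

          do {
                  range.notifier_seq = mmu_interval_read_begin(range.notifier);
                  mmap_read_lock(mm);
                  ret = hmm_range_fault(&range);
                  mmap_read_unlock(mm);
                  if (ret) {
                          if (ret == -EBUSY)
                                  continue;
                          return ret;
                  }
                  mutex_lock(&uptr->lock);
                  if (!mmu_interval_read_retry(range.notifier,
                                               range.notifier_seq))
                          break;  /* pfns stay valid while uptr->lock is held */
                  mutex_unlock(&uptr->lock);
          } while (true);

          /* ... map pfns for the host, then mutex_unlock(&uptr->lock) ... */
          return 0;
  }
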
> > > > > > > > > 
> > > > > > > > > Ok, so if this is for svm, then you need full blast hmm, or the semantics
> > > > > > > > > are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
> > > > > > > > > not work.
> > > > > > > > > 
> > > > > > > > > The other option is that hsakmt/kfd api is completely busted, and that's
> > > > > > > > > kinda not a kernel problem.
> > > > > > > > > -Sima
> > > > > > > > 
> > > > > > > > On further thought, I believe the driver needs to migrate the pages to
> > > > > > > > device memory (really a virtio-GPU blob object) *and* take a FOLL_LONGTERM
> > > > > > > > pin on them.  The reason is that it isn’t possible to migrate these pages
> > > > > > > > back to "host" memory without unmapping them from the GPU.  For the reasons
> > > > > > > > I mention in [1], I believe that temporarily revoking access to virtio-GPU
> > > > > > > > blob objects is not feasible.  Instead, the pages must be treated as if
> > > > > > > > they are permanently in device memory until guest userspace unmaps them
> > > > > > > > from the GPU, after which they must be migrated back to host memory.
> > > > > > > 
> > > > > > > Discussion on IRC indicates that migration isn't reliable.  This is
> > > > > > > because Linux core memory management is largely lock-free for
> > > > > > > performance reasons, so there is no way to prevent temporary elevation
> > > > > > > of a page's reference count.  A page with an elevated reference count
> > > > > > > cannot be migrated.
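
To illustrate with the in-kernel API: migrate_vma_setup() simply does not set
MIGRATE_PFN_MIGRATE for any page it failed to isolate (for example because of
a transiently elevated refcount), so migration is inherently best-effort. A
sketch, with an illustrative helper name:

  #include <linux/migrate.h>

  static unsigned long count_migratable(struct migrate_vma *args)
  {
          unsigned long i, ok = 0;

          if (migrate_vma_setup(args))
                  return 0;

          for (i = 0; i < args->npages; i++)
                  if (args->src[i] & MIGRATE_PFN_MIGRATE)
                          ok++;

          /* The caller must still run migrate_vma_pages()/_finalize() (or
           * finalize without copying) to release the isolated pages. */
          return ok;
  }
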
> > > > > > > 
> > > > > > > The only alternative I can think of is for the hypervisor to perform the
> > > > > > > migration.  The hypervisor can revoke the guest's access to the page
> > > > > > > without the guest's consent or involvement.  The host can then replace
> > > > > > > the page with one of its own pages, which might be on the CPU or GPU.
> > > > > > > Further migration between the CPU and GPU is controlled by the host
> > > > > > > kernel-mode driver (KMD) and host kernel memory management.  The guest
> > > > > > > kernel driver must take a FOLL_LONGTERM pin before telling the host to
> > > > > > > use the pages, but that is all.
> > > > > > > 
> > > > > > > On KVM, this should be essentially automatic, as guest memory really is
> > > > > > > just host userspace memory.  On Xen, this requires that the backend
> > > > > > > domain can revoke frontend access to _any_ frontend page, or at least
> > > > > > > frontend pages that have been granted to the backend.  The backend will
> > > > > > > then need to be able to handle page faults for the frontend pages, and
> > > > > > > replace the frontend pages with its own pages at will.  Furthermore,
> > > > > > > revoking pages that the backend has installed into the frontend must
> > > > > > > never fail, because the backend will panic if it does fail.
> > > > > > > 
> > > > > > > Sima, is putting guest pages under host kernel control the only option?
> > > > > > > I thought that this could be avoided by leaving the pages on the CPU if
> > > > > > > migration fails, but that won't work because there will be no way to
> > > > > > > migrate them to the GPU later, causing performance problems that would
> > > > > > > be impossible to debug.  Is waiting (possibly forever) on migration to
> > > > > > > finish an option?  Otherwise, this might mean extra complexity in the
> > > > > > > Xen hypervisor, as I do not believe the primitives needed are currently
> > > > > > > available.  Specifically, in addition to the primitives discussed at Xen
> > > > > > > Project Summit 2024, the backend also needs to intercept access to, and
> > > > > > > replace the contents of, arbitrary frontend-controlled pages.
> > > > > > 
> > > > > > Hi Demi,
> > > > > > 
> > > > > > I agree that to achieve the complete SVM feature in virtio-GPU, it is
> > > > > > necessary to have the hypervisor deeply involved and add new features.
> > > > > > It needs a solid design. I saw the detailed reply in another thread; it
> > > > > > is very helpful, and I am looking forward to the response from the
> > > > > > Xen/hypervisor experts.
> > > > > 
> > > > >   From further discussion with Sima, I suspect that virtio-GPU cannot
> > > > > support SVM with reasonable performance.  Native contexts have such good
> > > > > performance for graphics workloads because graphics workloads very rarely
> > > > > perform blocking waits for host GPU operations to complete, so one can
> > > > > make all frequently-used operations asynchronous and therefore hide the
> > > > > guest <=> host latency.  SVM seems to require many synchronous GPU
> > > > > operations, and I believe those will severely harm performance with
> > > > > virtio-GPU.
> > > > > 
> > > > > If you need full SVM for your workloads, I recommend using hardware
> > > > > SR-IOV.  This should allow the guest to perform GPU virtual memory
> > > > > operations without host involvement, which I expect will be much faster
> > > > > than paravirtualizing these operations.  Scalable I/O virtualization
> > > > > might also work, but it might also require paravirtualizing the
> > > > > performance-critical address-space operations unless the hardware has
> > > > > stage 2 translation tables.
> > > > > 
> > > > 
> > > > Yes, I think so; SR-IOV or other hardware virtualization is the clean
> > > > design for ROCm/compute currently. But those hardware-based solutions also
> > > > have their own limitations, such as high hardware cost and the performance
> > > > loss caused by scheduling the hardware workloads of different guest VMs. We
> > > > are trying to build a low-cost, high-performance virtualization solution;
> > > > it appears difficult to support the full feature set compared to SR-IOV at
> > > > present, but that does not prevent us from enabling a subset of the
> > > > functionality.
> > > > 
> > > > > > So the current virtio-GPU userptr implementation cannot support the full
> > > > > > SVM feature; it can only let the GPU access userspace memory, which is
> > > > > > perhaps better described as a userptr feature. I think I will finish this
> > > > > > small part first and then try to complete the whole SVM feature.
> > > > > 
> > > > > I think you will still have problems if the host is able to migrate
> > > > > pages in any way.  This requires that the host install an MMU notifier
> > > > > for the pages it has received from the guest, which in turn implies that
> > > > > the host must be able to prevent the guest from accessing the pages.
> > > > > If the pages are used in grant table operations, this isn't possible.
> > > > > 
> > > > > If you are willing to have the pages be pinned on the host side things
> > > > > are much simpler.  Such pages will always be in system memory, and will
> > > > > never be able to migrate to VRAM.  This will result in a performance
> > > > > penalty and will likely require explicit prefetching by programs using
> > > > > ROCm, but this may be acceptable for your use-cases.  The performance
> > > > > penalty is the same as that with XNACK disabled, which is the case for
> > > > > all RDNA2+ GPUs, so all code that aims to be portable to recent consumer
> > > > > hardware will have to account for it anyway.
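
On the application side, "explicit prefetching" usually means hinting and
prefetching allocations before launching work rather than relying on demand
faults, along these lines (illustrative; whether a prefetch can actually
place data in VRAM depends on the constraints discussed above):

  #include <hip/hip_runtime_api.h>

  /* Place data ahead of time instead of faulting it in on access. */
  static void prefetch_to_gpu(void *buf, size_t bytes, int dev, hipStream_t s)
  {
          hipMemAdvise(buf, bytes, hipMemAdviseSetPreferredLocation, dev);
          hipMemPrefetchAsync(buf, bytes, dev, s);
  }
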
> > > > 
> > > > Totally agreed. Actually, migrating memory to VRAM is very common on the
> > > > GFX side, but in ROCm/KFD it can be disabled and, as far as I know, is not
> > > > often used. ROCm/KFD always uses SDMA to transfer or copy data; maybe this
> > > > is faster than migrating to VRAM (this needs further verification). In any
> > > > case, we have ways to work around it. Thanks a lot for the reminder.
> > > 
> > > I think you will do okay if you treat virtio-GPU as providing a virtual
> > > GPU with no XNACK support.  XNACK is necessary for migrating pages
> > > between GPU and CPU based on demand, and it is this migration that is
> > > so hard to implement.  Furthermore, I highly doubt that the combination
> > > of AMDKFD and the hardware you are targeting even supports XNACK.
> 
> Yes, the goal of this patch set is to support the functionality that does
> not depend on memory migration. It seems that XNACK is hard to support at
> present.
> 
> > > At Xen Project Summit 2024, AMD mentioned that it wanted to enable both
> > > rendering (Vulkan/OpenGL) and compute (ROCm) with virtio-GPU native
> > > contexts under Xen.  The only GPUs for which AMDKFD will enable XNACK
> > > support are GFX9 GPUs, which are GCN and CDNA.  Shipping a GCN GPU in a
> > > new design would be very unusual and CDNA (Instinct) accelerators do not
> > > support graphics, so either AMD is using separate devices for compute
> > > and graphics or the workloads will run with no XNACK support.  Since you
> > > mention HW cost as an important consideration, I suspect the latter.
> > > 
> > > I believe that the Instinct accelerators that support XNACK also support
> > > SR-IOV, but if you wish to combine XNACK and virtio-GPU, this should be
> > > possible subject to caveats.  The main caveat is that under no
> > > circumstances can the host's Xen driver install an MMU notifier for
> > > pages that the guest can use in grant table operations or DMA.  A driver
> > > that installs an MMU notifier promises that it can block access to
> > > pages in a bounded amount of time, and if the guest can DMA to the pages
> > > or grant them to other domains this is not possible.  Without the Xen
> > > driver installing an MMU notifier, there is no way for the pages to be
> > > migrated to the GPU without risking use-after-free or at least data
> > > corruption.  Instead, one of the following options will be needed:
> > > 
> > > 1. hipMallocManaged() allocates the memory from the backend using the
> > >     Map primitive discussed elsewhere.  Such memory is not mappable in
> > >     the IOMMU (if there is an assigned PCI device) and cannot be used for
> > >     grant table operations.  Memory allocated via system allocators
> > >     (anonymous pages) is not able to be migrated.
> > > 
> > > 2. The frontend uses shadow buffers for all I/O.  This allows the
> > >     backend to use a new Steal primitive to revoke the guest's accesses
> > >     to anonymous pages and handle page faults accordingly.
> > > 
> > > 3. Same as 2 except that the frontend allocates all memory (except
> > >     bounce buffers) from the backend, just like a KVM guest does, rather
> > >     than from the Xen toolstack.
> > > 
> > > 4. The frontend tries to migrate the pages to backend-provided ones, and
> > >     falls back to leaving them pinned on the CPU.  The frontend's MMU
> > >     notifier tells the backend to stop accessing the pages, blocking
> > >     until the backend confirms this.  The frontend then moves the pages
> > >     to its own memory and returns from the notifier.  This may require
> > >     new AMDKFD APIs.
> 
> This step needs new AMDKFD userspace APIs; KFD has some internal APIs for
> it, but they need to be exported to the UMD. Actually, the most challenging
> parts are adding the hypervisor primitives and adding the frontend <->
> backend synchronization. KFD needs mmap_write_lock to handle migrate/update
> operations; that lock either needs to be removed, or the synchronization
> between frontend and backend will be hard to implement. That may require
> refactoring KFD SVM, which by itself is a lot of work, and there may be
> other work needed that I have not discovered yet.
> Before starting all of that, we may need a solid design, or another
> clever/compromise solution that reduces some of the workload. Your reminders
> are professional and detailed and cover many things I had not noticed;
> thanks a lot.

You're welcome.  I am glad to have helped.

I believe the best first step would be to implement Map and Revoke as
described in https://lore.kernel.org/xen-devel/Z6U7yOrMyLZWqPA4@itl-email/T/,
and to submit the code for upstream review.  These primitives should be
sufficient for graphics, and they are also prerequisites for future
SVM work.  By submitting the code upstream without waiting for
virtio-GPU ROCm support, you will be able to get review of Xen-related
code to happen in parallel with the ROCm work.  This will shorten the
critical path for upstreaming, and will reduce the risk of spending
development resources on designs that would not be acceptable upstream.
Sending the current patches would be a good idea, even if they are known
to have bugs, as this will allow upstream to help you fix them.

Once Map and Revoke are implemented, you should be able to implement
full support for virtio-GPU native contexts in Xen+QEMU.  This unlocks
OpenGL and Vulkan, along with OpenCL via rusticl.  This work can happen
in parallel with the kernel and hypervisor patches being reviewed.  I
also expect supporting ROCm with no XNACK to be feasible here, though I
am not certain as I am not familiar with the details of the AMDKFD
driver.

Finally, XNACK support can be implemented.  This should be the last
step, as it depends on all of the other work and will be a significant
amount of effort in its own right.  It also might not be necessary in
practice, as if one is willing to abandon fine-grained cache coherency
I believe one can achieve equivalent performance without it, at the
expense of additional effort from the programmer.  Due to the limited
hardware support for XNACK, I expect that programs intended to be run
on end-user devices (as opposed to servers) will not require it, and it
might not be possible to achieve reasonable performance with virtio-GPU.
Since my understanding is that AMD's use-cases for virtio-GPU are
primarily in the automotive sector, I am not sure that AMD will gain any
value from enabling it, unless either UDNA devices will support XNACK in
ROCm or automotive OEMs will be including Instinct accelerators in the
cars they make.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
