Re: [RFC PATCH 3/3] drm/virtio: implement blob userptr resource object

On 1/24/25 7:42 PM, Demi Marie Obenour wrote:
> On 1/8/25 12:05 PM, Simona Vetter wrote:
>> On Fri, Dec 27, 2024 at 10:24:29AM +0800, Huang, Honglei1 wrote:
>>>
>>> On 2024/12/22 9:59, Demi Marie Obenour wrote:
>>>> On 12/20/24 10:35 AM, Simona Vetter wrote:
>>>>> On Fri, Dec 20, 2024 at 06:04:09PM +0800, Honglei Huang wrote:
>>>>>> From: Honglei Huang <Honglei1.Huang@xxxxxxx>
>>>>>>
>>>>>> A virtio-gpu userptr is based on an HMM notifier.
>>>>>> It is used to let the host access guest userspace
>>>>>> memory and to notice changes to that memory.
>>>>>> This series is in a very early state; userspace
>>>>>> pages are currently pinned to ensure that host
>>>>>> device memory operations are correct.
>>>>>> The free and unmap operations for userspace memory
>>>>>> can be handled by the MMU notifier; this is a simple
>>>>>> and basic SVM feature for this series.
>>>>>> The physical-PFN update operation is split into
>>>>>> two ops here. The evicted memory is not used
>>>>>> anymore but is remapped into the host again to
>>>>>> achieve the same effect as hmm_range_fault.
>>>>>
>>>>> So in my opinion there are two ways to implement userptr that make sense:
>>>>>
>>>>> - pinned userptr with pin_user_pages(FOLL_LONGTERM). there is no mmu
>>>>>    notifier
>>>>>
>>>>> - unpinned userptr where you entirely rely on mmu notifiers and do not hold
>>>>>    any page references or page pins at all, for full SVM integration. This
>>>>>    should use hmm_range_fault ideally, since that's the version that
>>>>>    doesn't ever grab any page reference pins.
>>>>>
>>>>> All the in-between variants are imo really bad hacks, whether they hold a
>>>>> page reference or a temporary page pin (which seems to be what you're
>>>>> doing here). In much older kernels there was some justification for them,
>>>>> because strange stuff happened over fork(), but with FOLL_LONGTERM this is
>>>>> now all sorted out. So there's really only fully pinned, or true svm left
>>>>> as clean design choices imo.
>>>>>
>>>>> With that background, why does pin_user_pages(FOLL_LONGTERM) not work for
>>>>> you?
>>>>
>>>> +1 on using FOLL_LONGTERM.  Fully dynamic memory management has a huge cost
>>>> in complexity that pinning everything avoids.  Furthermore, this avoids the
>>>> host having to take action in response to guest memory reclaim requests.
>>>> This avoids additional complexity (and thus attack surface) on the host side.
>>>> Furthermore, since this is for ROCm and not for graphics, I am less concerned
>>>> about supporting systems that require swappable GPU VRAM.
>>>
>>> Hi Sima and Demi,
>>>
>>> I totally agree the flag FOLL_LONGTERM is needed, I will add it in next
>>> version.
>>>
>>> And for the first, pinned variant, I think the MMU notifier is still
>>> needed, because the userptr feature in UMD is generally used like this:
>>> registering a userptr is always explicitly invoked by user code, e.g.
>>> "registerMemoryToGPU(userptrAddr, ...)", but for the userptr release/free
>>> there is no explicit API, at least in the hsakmt/KFD stack. The user just
>>> calls free(userptrAddr), and the kernel driver then releases the userptr
>>> from its MMU notifier callback. Virtio-GPU has no way to know that the
>>> user has freed the userptr other than the MMU notifier, and in UMD there
>>> is no way to detect that free() was invoked by the user. The only option
>>> I can see is to use an MMU notifier in the virtio-GPU driver and free the
>>> corresponding data on the host via virtio commands.
>>>
>>> As for the second way, using hmm_range_fault, there is a predictable
>>> issue as far as I can see, at least in the hsakmt/KFD stack: memory may
>>> migrate while the GPU/device is working. On bare metal, when memory is
>>> migrating, the KFD driver pauses the device's compute work under
>>> mmap_write_lock, uses hmm_range_fault to remap the migrated/evicted
>>> memory to the GPU, and then resumes the device's compute work to ensure
>>> the data is correct. But with the virtio-GPU driver the migration happens
>>> in the guest kernel, and the evict MMU notifier callback fires in the
>>> guest. A virtio command can be used to notify the host, but lacking
>>> mmap_write_lock protection in the host kernel, the host will hold invalid
>>> data for a short period of time, which may lead to issues. And that is
>>> hard to fix as far as I can see.
>>>
>>> I will extract some APIs into helpers as you requested, and I will
>>> refactor the whole userptr implementation to use callbacks in the
>>> page-acquisition path, so that either the pin method or hmm_range_fault
>>> can be chosen in this series.
>>
>> Ok, so if this is for svm, then you need full blast hmm, or the semantics
>> are buggy. You cannot fake svm with pin(FOLL_LONGTERM) userptr, this does
>> not work.
> 
> Is this still broken in the virtualized case?  Page migration between host
> and device memory is completely transparent to the guest kernel, so pinning
> guest memory doesn't interfere with the host KMD at all.  In fact, the host
> KMD is not even aware of it.

To elaborate further:

Memory in a KVM guest is *not* host physical memory, or even host kernel
memory.  It is host *userspace* memory, and in particular, *it is fully pageable*.
There *might* be a few exceptions involving structures that are accessed by
the (physical) CPU, but none of these are relevant here.

This means that memory management works very differently than in the
non-virtualized case.  The host KMD can migrate pages between host memory
and device memory without either the guest kernel or host userspace being
aware that such migration has happened.  This means that pin(FOLL_LONGTERM)
in the guest doesn't pin memory on the host.  Instead, it pins memory in the
*guest*.  The host will continue to migrate pages between host and device
as needed.  I’m no expert on SVM, but I suspect this is the desired behavior.
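
From the guest side, the pinned variant boils down to a
pin_user_pages(FOLL_LONGTERM) call against the guest's own page tables.
A minimal sketch (pin_user_pages()/unpin_user_pages()/mmap_read_lock()
are real kernel API as of recent kernels; the surrounding function and
error handling are hypothetical):

	/* Sketch only: long-term pinning of a userptr range in a guest
	 * kernel driver.  Note these pins are guest-local: the host VMM
	 * still sees ordinary pageable userspace memory and may migrate
	 * the backing pages between host and device at will. */
	static int userptr_pin(unsigned long start, unsigned long npages,
			       struct page **pages)
	{
		long pinned;

		mmap_read_lock(current->mm);
		pinned = pin_user_pages(start, npages,
					FOLL_WRITE | FOLL_LONGTERM, pages);
		mmap_read_unlock(current->mm);

		if (pinned < 0)
			return pinned;
		if (pinned != npages) {
			unpin_user_pages(pages, pinned);
			return -EFAULT;
		}
		return 0;
	}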

Xen is significantly trickier, because most guest memory is provided by
the Xen toolstack via the hypervisor and is _not_ pageable.  Therefore,
it cannot be mapped into the GPU without using Xen grant tables.  Since
Xen grants do not support non-cooperative revocation, this requires a
FOLL_LONGTERM pin *anyway*.  Furthermore, granted pages _cannot_ be
migrated from host to device, so unless the GPU is an iGPU all of its
accesses will need to cross the PCI bus.  This will obviously be slow.

The guest can avoid this problem by migrating userptr memory to virtio-GPU
blob objects _before_ pinning it.  Virtio-GPU blob objects are backed by
host userspace memory, so the host can migrate them between device and host
memory just like in the KVM case.  Under KVM, such migration would be
slightly wasteful but otherwise harmless in the common case.  In the case
where PCI passthrough is also in use, however, it might be necessary even
for KVM guests.  This is because PCI passthrough requires pinned memory,
and pinned memory cannot be migrated to the device.
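
A rough sketch of that order of operations (every helper below is
hypothetical; virtio-GPU has no such helpers today, and this only
illustrates the proposed flow, not an existing implementation):

	/* Hypothetical flow: back a userptr range with a blob object
	 * before pinning, so the host can keep migrating the pages. */
	static int userptr_to_blob(struct virtio_gpu_device *vgdev,
				   unsigned long uaddr, size_t size)
	{
		struct virtio_gpu_object *blob;
		int ret;

		/* 1. Allocate a host-backed blob object (host userspace
		 *    memory, pageable and migratable by the host KMD). */
		blob = virtio_gpu_blob_create(vgdev, size);	/* hypothetical */
		if (IS_ERR(blob))
			return PTR_ERR(blob);

		/* 2. Copy only pages the guest has actually written;
		 *    untouched pages of the blob already read as zero. */
		ret = copy_written_pages_to_blob(blob, uaddr, size); /* hypothetical */
		if (ret)
			goto err_free;

		/* 3. Remap the user range onto the blob, then pin.  On Xen
		 *    this avoids long-term grants on ordinary guest memory. */
		ret = remap_user_range_to_blob(blob, uaddr, size);   /* hypothetical */
		if (ret)
			goto err_free;
		return 0;

	err_free:
		virtio_gpu_blob_free(blob);			/* hypothetical */
		return ret;
	}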

Since AMD’s automotive use-case uses Xen, and since KVM might also need
page migration, I recommend that the initial implementation _always_
migrate pages to blob objects no matter what the hypervisor is.  Direct
GPU access to guest memory can be implemented as a KVM-specific optimization
later.

Also worth noting is that only pages that have been written need to be
migrated.  If a page hasn't been written, it should not be migrated, because
unwritten pages of a blob object read as zero.  However, the migration
should almost certainly be done in 2M chunks rather than 4K ones, because
the TLBs of at least AMD GPUs are optimized for 2M pages, and GPU access
through 4K pages takes a roughly 30% performance penalty.  This nicely
matches the penalty that AMD observed.
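
The chunking can be made concrete with a little alignment arithmetic
(plain userspace C, illustrative only; the constant matches the 2 MiB
large-page size discussed above).  A single dirty 4 KiB page at
0x205000 still costs one full 2 MiB chunk, [0x200000, 0x400000):

```c
#include <stdint.h>

#define CHUNK_2M (2UL * 1024 * 1024)

/* Round a dirty byte range [start, end) out to 2 MiB chunk boundaries,
 * so the host can map the result with large GPU pages. */
static void expand_to_chunks(uint64_t start, uint64_t end,
			     uint64_t *chunk_start, uint64_t *chunk_end)
{
	*chunk_start = start & ~(uint64_t)(CHUNK_2M - 1);	/* round down */
	*chunk_end = (end + CHUNK_2M - 1) & ~(uint64_t)(CHUNK_2M - 1); /* round up */
}
```

Only written chunks would be expanded and migrated this way; a fully
unwritten 2 MiB region is skipped entirely.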
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)



