RE: vm binding interfaces and parallel with mmap


 



Sorry for the Outlook reply. In XE this works the opposite of AMDGPU: mappings keep a reference to the BO and exist until they are explicitly destroyed or the VM is destroyed, so if a mapping exists the BO exists. I quickly implemented a prototype extension to the VM bind IOCTL that blows away all mappings on a BO, per Jason’s suggestion; for XE it was really straightforward.
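
For illustration, a minimal sketch of what such an extension could look like at the uAPI level. Every name and the struct layout here are assumptions made up for this sketch (not the actual prototype, and not existing Xe uAPI); the point is that userspace hands the kernel just a VM and a BO handle and the kernel tears down every mapping of that BO in that VM, so the handle can then be closed without leaving stale bindings behind.

#include <linux/types.h>

/*
 * Hypothetical sketch only: struct, ioctl and op names are assumptions
 * for illustration, not the prototype referred to above.
 */
struct drm_sketch_vm_bind {
	__u32 vm_id;      /* VM to operate on */
	__u32 obj;        /* GEM handle of the BO */
	__u64 obj_offset; /* offset into the BO, ignored for UNMAP_ALL */
	__u64 addr;       /* GPU virtual address, ignored for UNMAP_ALL */
	__u64 range;      /* mapping size, ignored for UNMAP_ALL */
	__u32 op;         /* MAP, UNMAP or UNMAP_ALL */
	__u32 pad;
};

#define SKETCH_VM_BIND_OP_MAP        0x0
#define SKETCH_VM_BIND_OP_UNMAP      0x1
#define SKETCH_VM_BIND_OP_UNMAP_ALL  0x2 /* drop every mapping of .obj in .vm_id */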

 

I’d have to double check the i915 reference counting w.r.t. BOs and mappings, but I suspect it works like XE.

 

IMO this paradigm is the way to go as it matches open / mmap / close semantics.

 

Matt

 

From: Christian König <christian.koenig@xxxxxxx>
Sent: Thursday, August 25, 2022 6:37 AM
To: Jason Ekstrand <jason@xxxxxxxxxxxxxx>
Cc: Bas Nieuwenhuizen <bas@xxxxxxxxxxxxxxxxxxx>; Dave Airlie <airlied@xxxxxxxxx>; dri-devel <dri-devel@xxxxxxxxxxxxxxxxxxxxx>; Daniel Vetter <daniel.vetter@xxxxxxxx>; Brost, Matthew <matthew.brost@xxxxxxxxx>; Ben Skeggs <skeggsb@xxxxxxxxx>
Subject: Re: vm binding interfaces and parallel with mmap

 

On 24.08.22 at 18:14, Jason Ekstrand wrote:

On Mon, Aug 22, 2022 at 8:27 AM Christian König <christian.koenig@xxxxxxx> wrote:

[SNIP]

>> I suppose it also asks the question around paralleling
>>
>> fd = open()
>> ptr = mmap(fd,)
>> close(fd)
>> the mapping is still valid.
>>
>> I suppose our equiv is
>> handle = bo_alloc()
>> gpu_addr = vm_bind(handle,)
>> gem_close(handle)
>> is the gpu_addr still valid? Does the VM hold a reference on the kernel
>> BO internally?
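
For reference, the POSIX behaviour being paralleled above, as a small self-contained C example (the file path is an arbitrary readable, non-empty file): the mapping keeps the underlying file alive after close(fd), and only munmap() (or process exit) actually drops it.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/etc/hostname", O_RDONLY); /* assumes a non-empty file */
	if (fd < 0)
		return 1;

	void *ptr = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
	close(fd);                                /* fd is gone ... */

	if (ptr != MAP_FAILED) {
		printf("%.32s\n", (const char *)ptr); /* ... the mapping is still valid */
		munmap(ptr, 4096);                /* this is what actually drops it */
	}
	return 0;
}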
> For Vulkan it looks like this is undefined and the above is not necessary:
>
> "It is important to note that freeing a VkDeviceMemory object with
> vkFreeMemory will not cause resources (or resource regions) bound to
> the memory object to become unbound. Applications must not access
> resources bound to memory that has been freed."
> (32.7.6)

 

I'm not sure about this particular question.  We need to be sure that maps get cleaned up eventually.  On the one hand, I think it's probably a valid API implementation to have each mapped page hold a reference, similar to mmap, and have vkDestroyImage or vkDestroyBuffer do an unmap to clean up the range.  However, clients may be surprised when they destroy a large memory object and can't reap the memory because of extra BO references they don't know about.  If BOs unmap themselves on close, or if we had a way to take a VM+BO and say "unmap this BO from everywhere, please", we could clean up the memory pretty easily.  Without that, it's a giant PITA to do entirely inside the userspace driver because it requires us to globally track every mapping, and that means data structures and locks.  Yes, such an ioctl would require the kernel to track things, but the kernel is already tracking everything that's bound, so hopefully it doesn't add much.


For both amdgpu as well as the older radeon, mapping a BO does *not* grab a reference to it. Whenever a BO is released, all its mappings just disappear.

We need to keep track of the mappings anyway to recreate the page tables after (for example) suspend and resume, so that doesn't add any overhead.
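
As a concrete illustration of the ordering this implies for userspace (a minimal sketch using the amdgpu GEM_VA ioctl through libdrm; the VA passed in and the error handling are simplified assumptions): the handle has to stay open for as long as the mapping is meant to be used, and closing the handle is what ultimately makes the mapping go away.

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>      /* libdrm: drmIoctl() */
#include <amdgpu_drm.h>   /* DRM_IOCTL_AMDGPU_GEM_VA */

static int map_use_then_release(int fd, uint32_t handle, uint64_t va, uint64_t size)
{
	struct drm_amdgpu_gem_va va_req;
	struct drm_gem_close close_req;

	/* Map the BO into the VM; this does not take a BO reference. */
	memset(&va_req, 0, sizeof(va_req));
	va_req.handle = handle;
	va_req.operation = AMDGPU_VA_OP_MAP;
	va_req.flags = AMDGPU_VM_PAGE_READABLE | AMDGPU_VM_PAGE_WRITEABLE;
	va_req.va_address = va;   /* assumed to be a valid, page-aligned VA */
	va_req.map_size = size;
	if (drmIoctl(fd, DRM_IOCTL_AMDGPU_GEM_VA, &va_req))
		return -1;

	/* ... submit work that uses [va, va + size) while the handle is open ... */

	/* Dropping the handle releases the BO; its mappings just disappear. */
	memset(&close_req, 0, sizeof(close_req));
	close_req.handle = handle;
	return drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &close_req);
}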

Regards,
Christian.


 

--Jason

 

In addition to what was discussed here so far, we need arrays of in and out drm_syncobjs for both map and unmap.

E.g. to control when the mapping/unmapping should happen and to signal when it has completed, etc.

Christian.
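
A hypothetical sketch of how such in/out syncobj arrays could be expressed in a bind/unbind uAPI; every name and the layout here are assumptions for illustration, not an existing driver interface. The in-syncobjs gate when the (un)mapping may happen, the out-syncobjs signal once it has completed.

#include <linux/types.h>

struct drm_sketch_vm_bind_sync {
	__u64 in_syncobjs;     /* user pointer to array of __u32 syncobj handles to wait on */
	__u32 num_in_syncobjs;
	__u32 pad0;
	__u64 out_syncobjs;    /* user pointer to array of __u32 syncobj handles to signal */
	__u32 num_out_syncobjs;
	__u32 pad1;
};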


 

