Re: [PATCH 00/13] drm: Fix reservation locking for pin/unpin and console

On 2/28/24 11:19, Thomas Zimmermann wrote:
> Hi
> 
> On 27.02.24 19:14, Dmitry Osipenko wrote:
>> Hello,
>>
>> Thank you for the patches!
>>
>> On 2/27/24 13:14, Thomas Zimmermann wrote:
>>> Dma-buf locking semantics require the caller of pin and unpin to hold
>>> the buffer's reservation lock. Fix DRM to adhere to the specs. This
>>> makes it possible to fix the locking in DRM's console emulation.
>>> Similar changes for vmap and mmap have been posted at [1][2].
>>>
>>> Most DRM drivers and memory managers acquire the buffer object's
>>> reservation lock within their GEM pin and unpin callbacks. This
>>> violates dma-buf locking semantics. We get away with it because PRIME
>>> does not provide pin/unpin, but attach/detach, for which the locking
>>> semantics are correct.
>>>
>>> Patches 1 to 8 rework DRM GEM code in various implementations to
>>> acquire the reservation lock when entering the pin and unpin callbacks.
>>> This prepares them for the next patch. Drivers that are not affected
>>> by these patches either don't acquire the reservation lock (amdgpu)
>>> or don't need preparation (loongson).
>>>
>>> Patch 9 moves reservation locking from the GEM pin/unpin callbacks
>>> into drm_gem_pin() and drm_gem_unpin(). As PRIME uses these functions
>>> internally, it still gets the reservation lock.
>>>
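(Side note, just to check my reading of patch 9: the helper would then
roughly look like the sketch below, with the lock taken around the
driver callback. This is only my understanding of the cover letter, not
the actual patch.)

int drm_gem_pin(struct drm_gem_object *obj)
{
        int ret;

        if (!obj->funcs->pin)
                return 0;

        /* take the resv lock here instead of inside the driver callback */
        ret = dma_resv_lock(obj->resv, NULL);
        if (ret)
                return ret;

        ret = obj->funcs->pin(obj);
        dma_resv_unlock(obj->resv);

        return ret;
}
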
>>> With the updated GEM callbacks, the rest of the patchset fixes the
>>> fbdev emulation's buffer locking. Fbdev emulation needs to keep its
>>> GEM buffer object in place while updating its content. This required
>>> implicit pinning, and apparently amdgpu didn't do this at all.
>>>
>>> Patch 10 introduces drm_client_buffer_vmap_local() and _vunmap_local().
>>> The former maps a GEM buffer into the kernel's address space with
>>> regular vmap operations, but keeps holding the reservation lock. The
>>> _vunmap_local() helper undoes the vmap and releases the lock. The
>>> updated GEM callbacks make this possible. Between the two calls, the
>>> fbdev emulation can update the buffer content without having the buffer
>>> moved or evicted. Update fbdev-generic to use the vmap_local helpers,
>>> which fixes amdgpu. The idea of adding a "local vmap" has previously
>>> been attempted at [3] in a different form.
>>>
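(Again just to confirm how the fbdev damage path would use this.
Assuming the _local variants keep the signature of
drm_client_buffer_vmap(), a rough sketch; update_fb(), src and len are
made up for the example:)

static int update_fb(struct drm_client_buffer *buffer,
                     const void *src, size_t len)
{
        struct iosys_map map;
        int ret;

        /* map the BO; the reservation lock stays held after this call */
        ret = drm_client_buffer_vmap_local(buffer, &map);
        if (ret)
                return ret;

        /* the BO cannot be moved or evicted while the lock is held */
        iosys_map_memcpy_to(&map, 0, src, len);

        /* unmap and drop the reservation lock again */
        drm_client_buffer_vunmap_local(buffer);

        return 0;
}
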
>>> Patch 11 adds implicit pinning to the DRM client's regular vmap
>>> helper so that long-term vmap'ed buffers won't be evicted. This only
>>> affects fbdev-dma, but the GEM DMA helpers don't require pinning, so
>>> there are no practical changes.
>>>
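(My reading of patch 11 is then roughly the following for the long-term
helper, i.e. pin first, then vmap. The error handling is hypothetical,
just to check I got the ordering right:)

int drm_client_buffer_vmap(struct drm_client_buffer *buffer,
                           struct iosys_map *map_copy)
{
        int ret;

        /* keep the BO in place for as long as it stays vmap'ed */
        ret = drm_gem_pin(buffer->gem);
        if (ret)
                return ret;

        ret = drm_gem_vmap_unlocked(buffer->gem, &buffer->map);
        if (ret) {
                drm_gem_unpin(buffer->gem);
                return ret;
        }

        *map_copy = buffer->map;
        return 0;
}
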
>>> Patches 12 and 13 remove implicit pinning from the vmap and vunmap
>>> operations in gem-vram and qxl. These pin operations are not supposed
>>> to be part of vmap code, but were required to keep the buffers in place
>>> for fbdev emulation. With the conversion of fbdev-generic to the
>>> vmap_local helpers, that code can finally be removed.
>> Isn't it a common behaviour for all DRM drivers to implicitly pin the
>> BO while it's vmapped? I was sure it was common /o\
> 
> That's what I originally thought as well, but the intention is for pin
> and vmap to be distinct operations. So far each driver has been
> different, as you probably know best from your vmap refactoring. :)
> 
>>
>> Why would you want to kmap BO that isn't pinned?
> 
> Pinning places the buffer object for the GPU. As a side effect, the
> buffer is then kept in place, which enables vmap. So pinning only makes
> sense for buffer objects that never move (shmem, dma). That's what patch
> 11 is for.
> 
>>
>> Shouldn't TTM's vmap() be changed to do the pinning?
> 
> I don't think so. One problem is that pinning needs a memory area (vram,
> GTT, system ram, etc.) specified, which vmap simply doesn't know about.
> That has been a problem for fbdev emulation at some point. Our fbdev
> code tried to pin as part of vmap, but chose the wrong area and suddenly
> the GPU could not see the buffer object any longer. So the next-best
> thing for vmap was to pin the buffer object wherever it is currently
> located. That is what gem-vram and qxl have done so far. And of course,
> the fbdev code needs to unpin and vunmap the buffer object quickly, so
> that it can be relocated if the GPU needs it. Hence, the vmap_local
> interface removes such short-term pinning in favor of holding the
> reservation lock.
> 
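That explains it, thanks. For my own notes, the difference between the
two approaches, written down as a rough sketch with simplified TTM
calls (bo being the ttm_buffer_object); not meant to match the actual
driver code:

        struct iosys_map map;
        int ret;

        /* old gem-vram/qxl behaviour: short-term pin at the current placement */
        ret = dma_resv_lock(bo->base.resv, NULL);
        if (ret)
                return ret;
        ttm_bo_pin(bo);                 /* keeps the BO where it is right now */
        ret = ttm_bo_vmap(bo, &map);
        dma_resv_unlock(bo->base.resv);

        /*
         * vmap_local instead takes no pin at all; the reservation lock itself
         * stays held from _vmap_local() to _vunmap_local(), so the BO cannot
         * move during the update, but is free to move again afterwards.
         */
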
>>
>> I missed that TTM doesn't pin the BO on vmap() and am now surprised to
>> see it. It should be a rather serious problem requiring backporting of
>> the fixes, but I don't see the Fixes tags on the patches (?)
> 
> No chance TBH. The old code has worked for years and backporting all
> this would require your vmap patches at a minimum.
> 
> Except maybe for amdgpu. It uses fbdev-generic, which requires pinning,
> but amdgpu doesn't pin. That looks fishy, but I'm not aware of any bug
> reports either. I guess a quick workaround could fix older amdgpu if
> necessary.

Thanks! I'll make another pass over the patches on Monday.

-- 
Best regards,
Dmitry



