Re: [PATCH 00/13] drm: Fix reservation locking for pin/unpin and console

On 27.02.24 at 19:14, Dmitry Osipenko wrote:
Hello,

Thank you for the patches!

On 2/27/24 13:14, Thomas Zimmermann wrote:
Dma-buf locking semantics require the caller of pin and unpin to hold
the buffer's reservation lock. Fix DRM to adhere to the specs. This
makes it possible to fix the locking in DRM's console emulation. Similar
changes for vmap and mmap have been posted at [1][2].

Most DRM drivers and memory managers acquire the buffer object's
reservation lock within their GEM pin and unpin callbacks. This
violates dma-buf locking semantics. We get away with it because PRIME
does not provide pin/unpin, but attach/detach, for which the locking
semantics are correct.
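For illustration, a minimal sketch of the two patterns with a made-up
driver callback (mydrv_gem_pin is hypothetical, not taken from the
patches): the old style locks inside the pin callback, whereas dma-buf
semantics expect the caller to already hold the reservation lock.

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>

/* Old pattern: the GEM pin callback takes the reservation lock itself. */
static int mydrv_gem_pin_old(struct drm_gem_object *obj)
{
	int ret;

	ret = dma_resv_lock(obj->resv, NULL);
	if (ret)
		return ret;
	/* ... driver-specific pinning ... */
	dma_resv_unlock(obj->resv);

	return 0;
}

/* Per dma-buf semantics: the caller already holds obj->resv. */
static int mydrv_gem_pin(struct drm_gem_object *obj)
{
	dma_resv_assert_held(obj->resv);
	/* ... driver-specific pinning, no locking here ... */

	return 0;
}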

Patches 1 to 8 rework DRM GEM code in various implementations to
acquire the reservation lock when entering the pin and unpin callbacks.
This prepares them for the next patch. Drivers that are not affected
by these patches either don't acquire the reservation lock (amdgpu)
or don't need preparation (loongson).

Patch 9 moves reservation locking from the GEM pin/unpin callbacks
into drm_gem_pin() and drm_gem_unpin(). As PRIME uses these functions
internally, it still acquires the reservation lock.
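Roughly the shape one would expect after patch 9 (a simplified sketch,
not the actual diff): the helper takes the lock, so the driver callback
runs with the reservation held.

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>

int drm_gem_pin(struct drm_gem_object *obj)
{
	int ret = 0;

	dma_resv_lock(obj->resv, NULL);
	if (obj->funcs->pin)
		ret = obj->funcs->pin(obj);	/* runs with obj->resv held */
	dma_resv_unlock(obj->resv);

	return ret;
}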

With the updated GEM callbacks, the rest of the patchset fixes the
fbdev emulation's buffer locking. Fbdev emulation needs to keep its
GEM buffer object in place while updating its content. This requires
implicit pinning, and apparently amdgpu didn't do this at all.

Patch 10 introduces drm_client_buffer_vmap_local() and _vunmap_local().
The former maps a GEM buffer into the kernel's address space with
regular vmap operations, but returns with the reservation lock still
held. The _vunmap_local() helper undoes the vmap and releases the lock.
The updated GEM callbacks make this possible. Between the two calls, the
fbdev emulation can update the buffer content without having the buffer
moved or evicted. Update fbdev-generic to use the vmap_local helpers,
which fixes amdgpu. The idea of adding a "local vmap" has previously
been attempted in a different form at [3].
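A hedged sketch of how a fbdev damage worker might use the new helpers;
fbdev_flush_damage() is a made-up name, and the signatures are assumed
to mirror the existing drm_client_buffer_vmap()/_vunmap().

#include <drm/drm_client.h>
#include <linux/iosys-map.h>

static int fbdev_flush_damage(struct drm_client_buffer *buffer)
{
	struct iosys_map map;
	int ret;

	/* Maps the BO and returns with the reservation lock still held. */
	ret = drm_client_buffer_vmap_local(buffer, &map);
	if (ret)
		return ret;

	/*
	 * The BO cannot be moved or evicted here; copy the damaged
	 * region of the shadow framebuffer into the mapping.
	 */

	/* Unmaps and drops the reservation lock. */
	drm_client_buffer_vunmap_local(buffer);

	return 0;
}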

Patch 11 adds implicit pinning to the DRM client's regular vmap
helper so that long-term vmap'ed buffers won't be evicted. This only
affects fbdev-dma, but as GEM DMA helpers don't require pinning, there
are no practical changes.
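A conceptual sketch of the idea, not the actual diff: pin before the
long-term vmap so the BO cannot be evicted while the mapping lives.
client_vmap_pinned() is a hypothetical name; drm_gem_pin(),
drm_gem_unpin() and drm_gem_vmap_unlocked() are the existing
DRM-internal helpers.

#include <drm/drm_gem.h>
#include <linux/iosys-map.h>

static int client_vmap_pinned(struct drm_gem_object *gem,
			      struct iosys_map *map)
{
	int ret;

	ret = drm_gem_pin(gem);	/* keep the BO resident long-term */
	if (ret)
		return ret;

	ret = drm_gem_vmap_unlocked(gem, map);
	if (ret)
		drm_gem_unpin(gem);

	return ret;
}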

Patches 12 and 13 remove implicit pinning from the vmap and vunmap
operations in gem-vram and qxl. These pin operations are not supposed
to be part of vmap code, but were required to keep the buffers in place
for fbdev emulation. With the conversion of fbdev-generic to the
vmap_local helpers, that code can finally be removed.
Isn't it a common behaviour for all DRM drivers to implicitly pin the BO
while it's vmapped? I was sure it was common /o\

No, at least amdgpu and radeon don't pin kmapped BOs, and I don't think nouveau does either.

Why would you want to kmap a BO that isn't pinned?

The usual use case is to call the ttm kmap function when you need CPU access.

When the buffer hasn't moved, we can use the cached CPU mapping; if the buffer has moved since the last time, or if this is the first call, we set up a new mapping.
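For illustration, a rough sketch of that pattern, loosely modelled on
how amdgpu keeps a cached kmap object; mydrv_bo_kmap() and the
cached_kmap argument are made up.

#include <drm/ttm/ttm_bo.h>
#include <linux/dma-resv.h>
#include <linux/pfn.h>

static void *mydrv_bo_kmap(struct ttm_buffer_object *bo,
			   struct ttm_bo_kmap_obj *cached_kmap)
{
	bool is_iomem;
	void *ptr;
	int ret;

	dma_resv_assert_held(bo->base.resv);

	/* Reuse the cached mapping; the driver tears it down on moves. */
	ptr = ttm_kmap_obj_virtual(cached_kmap, &is_iomem);
	if (ptr)
		return ptr;

	/* First call, or the buffer has moved: set up a new mapping. */
	ret = ttm_bo_kmap(bo, 0, PFN_UP(bo->base.size), cached_kmap);
	if (ret)
		return NULL;

	return ttm_kmap_obj_virtual(cached_kmap, &is_iomem);
}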

Shouldn't TTM's vmap() be changed to do the pinning?

Absolutely not, no. That would break tons of use cases.

Regards,
Christian.


I missed that TTM doesn't pin the BO on vmap() and am now surprised to
see it. It looks like a rather serious problem that requires backporting
the fixes, but I don't see Fixes tags on the patches (?)




