On 16.11.20 at 21:48, Daniel Vetter wrote:
On Mon, Nov 16, 2020 at 09:04:37PM +0100, Thomas Zimmermann wrote:
Flushing the fbdev's shadow buffer requires vmap'ing the BO memory, which
in turn requires pinning the BO. While being pinned, the BO cannot be moved
into VRAM for scanout. Consequently, a concurrent modeset operation that
involves the fbdev framebuffer would likely fail.
Resolve this problem by acquiring the modeset lock of the planes that use
the fbdev framebuffer. On non-atomic drivers, also acquire the mode-config
lock. This serializes the flushing of the framebuffer with concurrent
modeset operations.
Signed-off-by: Thomas Zimmermann <tzimmermann@xxxxxxx>
I think this is the wrong lock. What you want is the buffer lock, the
dma_resv lock of the underlying gem_bo underneath the fb we have. And hold
that from _vmap to the end of _vunmap.
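A rough sketch of what that could look like is below. It is illustrative only: buffer->gem->resv is the GEM object's reservation lock, and drm_client_buffer_vmap_locked()/_vunmap_locked() are hypothetical helpers standing in for vmap/vunmap paths that neither pin nor take the resv lock themselves (today's drm_client_buffer_vmap() pins internally, which is exactly what would need refactoring).

/*
 * Sketch only: hold the BO's reservation lock from vmap to vunmap
 * instead of pinning, so the BO cannot be moved while we blit.
 *
 * drm_client_buffer_vmap_locked()/_vunmap_locked() are hypothetical
 * variants that map the buffer without pinning and without taking
 * the resv lock themselves.
 */
static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper,
				     struct drm_clip_rect *clip)
{
	struct drm_client_buffer *buffer = fb_helper->buffer;
	struct dma_buf_map map, dst;
	int ret;

	/* Keeps the BO from being moved (e.g. into VRAM) while we blit. */
	dma_resv_lock(buffer->gem->resv, NULL);

	ret = drm_client_buffer_vmap_locked(buffer, &map);
	if (ret)
		goto out_unlock;

	dst = map;
	drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);

	drm_client_buffer_vunmap_locked(buffer);

out_unlock:
	dma_resv_unlock(buffer->gem->resv);

	return ret;
}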
+1. Yes, exactly that came to my mind as well while reading this.
If you want to prevent a BO from moving while inside the kernel, taking
the reservation lock is usually the right thing to do.
Only when you need to return to userspace AND keep the BO in the same
place is it valid to pin it.
Regards,
Christian.
Unfortunately that's going to be one nasty refactoring adventure :-/
So I think for plan B we need something nasty like this, but this has the
disadvantage that fbdev activity in the background can seriously hurt the
native KMS client (despite the filtering you're trying to do, you're
creating a big lock across all planes, and we've really worked hard to make
those stand-alone as much as possible).
I think we can do better here, since we're only worried about modesets
from fbdev itself. Nothing else needs to be able to pull the buffer from
system memory into vram while we have it pinned here.
Holding mutex_lock(fb_helper->lock) here instead, with a big comment
explaining why that's enough, and that the clean fix would be holding the
dma_resv_lock, should do the trick.
-Daniel
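For illustration, a minimal sketch of that fb_helper->lock alternative, under the assumption that the existing vmap/vunmap helpers stay as they are and the flush is simply wrapped in fb_helper->lock:

/*
 * Sketch of the fb_helper->lock variant: only fbdev itself performs
 * modesets on this framebuffer, so serializing against fb_helper->lock
 * is enough to keep the BO from being pulled into VRAM while it is
 * vmap'ed here.  The clean fix remains holding the GEM object's
 * dma_resv lock from vmap to vunmap.
 */
static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper,
				     struct drm_clip_rect *clip)
{
	struct drm_client_buffer *buffer = fb_helper->buffer;
	struct dma_buf_map map, dst;
	int ret;

	mutex_lock(&fb_helper->lock);

	ret = drm_client_buffer_vmap(buffer, &map);
	if (ret)
		goto out_unlock;

	dst = map;
	drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);

	drm_client_buffer_vunmap(buffer);

out_unlock:
	mutex_unlock(&fb_helper->lock);

	return ret;
}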
---
drivers/gpu/drm/drm_fb_helper.c | 43 +++++++++++++++++++++++++++++++--
1 file changed, 41 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 5a22c744378c..af485c71a42a 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -394,20 +394,59 @@ static void drm_fb_helper_damage_blit_real(struct drm_fb_helper *fb_helper,
static int drm_fb_helper_damage_blit(struct drm_fb_helper *fb_helper,
struct drm_clip_rect *clip)
{
+ struct drm_device *dev = fb_helper->dev;
+ struct drm_framebuffer *fb = fb_helper->fb;
struct drm_client_buffer *buffer = fb_helper->buffer;
+ struct drm_modeset_acquire_ctx ctx;
+ struct drm_framebuffer *plane_fb;
+ struct drm_plane *plane;
struct dma_buf_map map, dst;
int ret;
+ if (!drm_drv_uses_atomic_modeset(dev))
+ mutex_lock(&dev->mode_config.mutex);
+
+ drm_modeset_acquire_init(&ctx, 0);
+
+retry:
+ drm_for_each_plane(plane, dev) {
+ ret = drm_modeset_lock(&plane->mutex, &ctx);
+ if (ret == -EDEADLK) {
+ ret = drm_modeset_backoff(&ctx);
+ if (!ret)
+ goto retry;
+ } else if (ret) {
+ goto out;
+ }
+
+ if (drm_drv_uses_atomic_modeset(dev))
+ plane_fb = plane->state->fb;
+ else
+ plane_fb = plane->fb;
+
+ if (plane_fb != fb) {
+ drm_modeset_unlock(&plane->mutex);
+ continue;
+ }
+ }
+
ret = drm_client_buffer_vmap(buffer, &map);
if (ret)
- return ret;
+ goto out;
dst = map;
drm_fb_helper_damage_blit_real(fb_helper, clip, &dst);
drm_client_buffer_vunmap(buffer);
- return 0;
+out:
+ drm_modeset_drop_locks(&ctx);
+ drm_modeset_acquire_fini(&ctx);
+
+ if (!drm_drv_uses_atomic_modeset(dev))
+ mutex_unlock(&dev->mode_config.mutex);
+
+ return ret;
}
static void drm_fb_helper_damage_work(struct work_struct *work)
--
2.29.2