Re: [PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed

On 26.11.20 12:04, Daniel Vetter wrote:
On Thu, Nov 26, 2020 at 11:15 AM Thomas Zimmermann <tzimmermann@xxxxxxx> wrote:
Hi

On 25.11.20 17:32, Daniel Vetter wrote:
[...]
I guess full locking is required :-/ I'm not exactly sure how to make this
happen with the current plethora of helpers ... I think we need a
_locked version of the vmap/vunmap callbacks in drm_gem_object_funcs.
I think we might be able to get away without new callbacks.

I looked through the sources that implement and use vmap. None of the
implementations take the resv lock. For locking, we can wrap them in
lock/unlock pairs. Having something like drm_gem_vmap_unlocked() that
locks and vmaps should make this easy.
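
Something like this is what I have in mind; just a rough sketch, and it
assumes the current void*-returning vmap/vunmap callbacks in
drm_gem_object_funcs (the exact prototypes are still changing with the
dma_buf_map work):

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>
#include <linux/err.h>

/* Sketch: lock-taking wrappers around the (unlocked) GEM callbacks. */
void *drm_gem_vmap_unlocked(struct drm_gem_object *obj)
{
        void *vaddr;
        int ret;

        ret = dma_resv_lock(obj->resv, NULL);
        if (ret)
                return ERR_PTR(ret);

        if (obj->funcs->vmap)
                vaddr = obj->funcs->vmap(obj); /* assumed void*-returning callback */
        else
                vaddr = ERR_PTR(-EOPNOTSUPP);

        dma_resv_unlock(obj->resv);

        return vaddr;
}

void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, void *vaddr)
{
        dma_resv_lock(obj->resv, NULL);

        if (obj->funcs->vunmap)
                obj->funcs->vunmap(obj, vaddr);

        dma_resv_unlock(obj->resv);
}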

In terms of implementation, only vram helpers do a pin+map in their vmap
code. And as I mentioned before, this is actually wrong. The pattern
dates back to when the code was still in individual drivers. It's time
to clean this up. Vram helpers can use drm_gem_ttm_vmap() instead.

Finally, there aren't that many users of vmap. A few drivers use it
while blitting framebuffers into HW buffers, and ast does a permanent
mapping of the cursor BO. All of this is trivial to turn into small
pairs of lock+vmap and vunmap+unlock.
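
For the blitting drivers, each of those pairs would look roughly like
this (a sketch only; foo_update_hw_buffer() is a made-up stand-in for a
driver's damage/blit helper, and the callback prototypes are assumed as
above):

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>
#include <linux/err.h>
#include <linux/io.h>

/* Short-term mapping: hold the resv lock only around the actual access. */
static int foo_update_hw_buffer(struct drm_gem_object *obj,
                                void __iomem *dst, size_t size)
{
        void *vaddr;
        int ret;

        ret = dma_resv_lock(obj->resv, NULL);
        if (ret)
                return ret;

        vaddr = obj->funcs->vmap(obj); /* assumed unlocked ->vmap callback */
        if (IS_ERR(vaddr)) {
                ret = PTR_ERR(vaddr);
                goto out_unlock;
        }

        memcpy_toio(dst, vaddr, size); /* stand-in for the actual blit */

        obj->funcs->vunmap(obj, vaddr);

out_unlock:
        dma_resv_unlock(obj->resv);
        return ret;
}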

That leaves generic fbdev. The shadow buffer is also trivial to fix, as
outlined during this discussion.

The code for fbdev in hardware buffers is a special case. It vmaps the
buffer during initialization and only vunmaps it during shutdown. As
this has worked so far without locking, I'd leave it as it is and put a
big comment next to it.

Please keep in mind that you only need to grab the lock if the buffer is not otherwise pinned.

In other words, when we are scanning out from the BO, it is guaranteed that it can't move around.

Maybe this makes the case here easier to handle.


Hardware fbdev buffers are only required by a few drivers, namely those
that need the CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM config option to work.
We should consider making the fbdev shadow buffer the default and have
drivers opt in to the hardware buffer if they need it.

And then document that if a caller of the _locked version wants a
permanent mapping, it also needs to pin the buffer. Plus I guess ideally
implement the unlocked/permanent versions in terms of that, so that
drivers only have to implement one or the other.
To my understanding, pinning is only done in prepare_fb code, and ast
pins its cursor BOs into VRAM. We should document that vmap/vunmap
mappings are to be held only for a short time and covered by resv locks.
Pinning is for cases where the hardware requires buffers in a special
location, but it does not protect against concurrent threads. I think
those are the implicit rules already.
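
So the two patterns would be: take the resv lock around short-term
vmap/vunmap, and pin+vmap for long-term mappings like ast's cursor.
Roughly like this (sketch; callback prototypes assumed as before, and
whether ->pin() wants the resv lock held is glossed over here):

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>
#include <linux/err.h>

/* Long-term mapping: the pin, not the lock, keeps the BO in place
 * after dma_resv_unlock(). */
static void *foo_pin_and_vmap(struct drm_gem_object *obj)
{
        void *vaddr;
        int ret;

        dma_resv_lock(obj->resv, NULL);

        ret = obj->funcs->pin(obj);
        if (ret) {
                vaddr = ERR_PTR(ret);
                goto out_unlock;
        }

        vaddr = obj->funcs->vmap(obj);
        if (IS_ERR(vaddr))
                obj->funcs->unpin(obj);

out_unlock:
        dma_resv_unlock(obj->resv);
        return vaddr;
}

static void foo_vunmap_and_unpin(struct drm_gem_object *obj, void *vaddr)
{
        dma_resv_lock(obj->resv, NULL);
        obj->funcs->vunmap(obj, vaddr);
        obj->funcs->unpin(obj);
        dma_resv_unlock(obj->resv);
}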

I updated the radeon patchset, where all this appears to be working well.
Hm yeah if you want to do the full change, then that works out too.
It's just a pile of work.

But if we can finish off with a dma_resv_assert_held() in
dma_buf_vmap/vunmap, then I think that's ok. It does mean that
exporters must implement vmap caching, or performance will be
terrible. So that's quite some update for the dma-buf docs.
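
Exporter-side vmap caching could then look roughly like the following
(all foo_* names and fields are made up for illustration; only the use
count and the assert matter):

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>
#include <linux/bug.h>
#include <linux/kernel.h>

struct foo_bo {
        struct drm_gem_object base;
        void *vaddr;                    /* cached kernel mapping */
        unsigned int vmap_use_count;
};

/* Hypothetical backend helpers that create/destroy the actual mapping. */
static void *foo_map_pages(struct foo_bo *bo);
static void foo_unmap_pages(struct foo_bo *bo);

static void *foo_gem_vmap(struct drm_gem_object *obj)
{
        struct foo_bo *bo = container_of(obj, struct foo_bo, base);

        dma_resv_assert_held(obj->resv);

        if (bo->vmap_use_count++ > 0)
                return bo->vaddr;       /* reuse the cached mapping */

        bo->vaddr = foo_map_pages(bo);
        if (!bo->vaddr)
                bo->vmap_use_count = 0;

        return bo->vaddr;
}

static void foo_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
{
        struct foo_bo *bo = container_of(obj, struct foo_bo, base);

        dma_resv_assert_held(obj->resv);

        if (WARN_ON(!bo->vmap_use_count))
                return;

        if (--bo->vmap_use_count == 0) {
                foo_unmap_pages(bo);
                bo->vaddr = NULL;
        }
}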

That's one possibility, but I think we should keep the ability to use pin+vmap instead of lock+vmap.

Regards,
Christian.


But if you're willing to do all that conversion of callers, then of
course I'm not stopping you. Not at all, it's great to see that kind
of maze untangled.
-Daniel

Best regards
Thomas

That should give us at least some way forward to gradually convert all
the drivers and helpers over to dma_resv locking.
-Daniel

The pin count is currently maintained by the vmap implementation in vram
helpers. Calling vmap is an implicit pin; calling vunmap is an implicit
unpin. This prevents eviction in the damage worker. But now I was told that
pinning is only for BOs that are controlled by userspace, and that internal
users should acquire the resv lock instead. So vram helpers have to be fixed,
actually.

In vram helpers, unmapping does not mean eviction. The unmap operation
only marks the BO's mapping as unused; the real unmap happens when the
eviction takes place. This avoids many modifications to the page tables.
In other words, an unpinned, unmapped BO will remain in VRAM until the
memory is actually needed.
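
For reference, that lazy scheme is roughly the following (made-up names,
heavily simplified from what the vram helpers actually do):

#include <linux/bug.h>

struct foo_vram_bo {
        void *vaddr;                    /* cached kernel mapping */
        unsigned int kmap_use_count;
};

/* Hypothetical helper that actually tears down the page-table entries. */
static void foo_vram_teardown_mapping(struct foo_vram_bo *bo);

/* vunmap only drops the use count; the mapping itself stays in place
 * and the BO merely becomes evictable again. */
static void foo_vram_vunmap(struct foo_vram_bo *bo)
{
        if (WARN_ON(!bo->kmap_use_count))
                return;
        --bo->kmap_use_count;
}

/* Called by the memory manager right before the BO is moved out of
 * VRAM; only here are the page tables actually touched. */
static void foo_vram_evict_notify(struct foo_vram_bo *bo)
{
        if (!bo->kmap_use_count && bo->vaddr) {
                foo_vram_teardown_mapping(bo);
                bo->vaddr = NULL;
        }
}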

Best regards
Thomas

So I'm still not seeing how this can go boom.

Now, long term, it'd be nice to cut everything over to dma_resv locking,
but the issue there is that beyond ttm, none of the helpers (and few of
the drivers) use dma_resv. So this is a fairly big uphill battle. A quick
interim fix seems like the right solution to me.
-Daniel

Regards,
Christian.

Best regards
Thomas

There's no recursion taking place, so I guess the reservation lock could
be acquired/released in drm_client_buffer_vmap/vunmap(), or a separate
pair of DRM client functions could do the locking.
Given how this "do the right locking" is a can of worms (and I think
it's worse than what you dug out already), I think the fb_helper.lock
hack is perfectly good enough.
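
FWIW, the first option would be something like this (a sketch; it
assumes drm_client_buffer's ->gem pointer and, as before, an unlocked
void*-returning ->vmap callback):

#include <drm/drm_client.h>
#include <drm/drm_gem.h>
#include <linux/dma-resv.h>
#include <linux/err.h>

/* Take and drop the reservation lock inside the client helper itself. */
static void *foo_client_buffer_vmap(struct drm_client_buffer *buffer)
{
        struct drm_gem_object *gem = buffer->gem;
        void *vaddr;
        int ret;

        ret = dma_resv_lock(gem->resv, NULL);
        if (ret)
                return ERR_PTR(ret);

        vaddr = gem->funcs->vmap(gem);

        dma_resv_unlock(gem->resv);

        return vaddr;
}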

I'm also somewhat worried that starting to use the dma_resv lock in
generic code, while many helpers/drivers still have their hand-rolled
locking, will make the conversion over to dma_resv needlessly more
complicated.
-Daniel

Best regards
Thomas

[1] https://cgit.freedesktop.org/drm/drm-tip/tree/drivers/gpu/drm/drm_fb_helper.c?id=ac60f3f3090115d21f028bffa2dcfb67f695c4f2#n394

Please note that the reservation lock you need to take here is part of
the GEM object.

Usually we design things in such a way that the code takes a lock which
protects an object, does some operations on the object, and then
releases the lock again.

Taking the lock inside the operation can be done as well, but returning
with it still held is a rather unusual design.
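
To illustrate the difference (sketch only; foo_touch() stands in for
whatever is done with the object):

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>

/* Placeholder for any operation on the BO. */
static void foo_touch(struct drm_gem_object *obj);

/* The usual design: the caller brackets the operation with the lock
 * that protects the object. */
static void foo_bracketed(struct drm_gem_object *obj)
{
        dma_resv_lock(obj->resv, NULL);
        foo_touch(obj);
        dma_resv_unlock(obj->resv);
}

/* The unusual design: return with the lock still held, so some other
 * function has to remember to drop it later. */
static void *foo_vmap_return_locked(struct drm_gem_object *obj)
{
        dma_resv_lock(obj->resv, NULL);
        return obj->funcs->vmap(obj);  /* assumed unlocked ->vmap callback */
}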

Sorry for the noob questions. I'm still trying to understand the
implications of acquiring these locks.
Well, this is the reservation lock of the GEM object we are talking
about here. We need to take that for a couple of different operations;
vmap/vunmap doesn't sound like a special case to me.

Regards,
Christian.

Best regards
Thomas
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer








_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel



