Re: [PATCH drm-next v2 04/16] maple_tree: add flag MT_FLAGS_LOCK_NONE


 



On 2/27/23 19:36, Matthew Wilcox wrote:
> On Mon, Feb 27, 2023 at 06:39:33PM +0100, Danilo Krummrich wrote:
>> On 2/21/23 19:31, Matthew Wilcox wrote:
>>> Lockdep will shout at you if you get it wrong ;-)  But you can safely
>>> take the spinlock before calling mas_store_gfp(GFP_KERNEL) because
>>> mas_nomem() knows to drop the lock before doing a sleeping allocation.
>>> Essentially you're open-coding mtree_store_range() but doing your own
>>> thing in addition to the store.
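For reference, the open-coded variant described above would look roughly like this (just a sketch, the helper name is made up):

#include <linux/maple_tree.h>

/* Sketch only: open-coded equivalent of mtree_store_range() with room
 * for extra work under the same lock.  mas_store_gfp(GFP_KERNEL) is
 * fine while holding the internal spinlock, since mas_nomem() drops
 * and re-takes it around a sleeping allocation.
 */
static int store_range_locked(struct maple_tree *mt, unsigned long first,
			      unsigned long last, void *entry)
{
	MA_STATE(mas, mt, first, last);
	int ret;

	mas_lock(&mas);
	ret = mas_store_gfp(&mas, entry, GFP_KERNEL);
	/* ... do your own thing in addition to the store ... */
	mas_unlock(&mas);

	return ret;
}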

>> As already mentioned, I went with your advice to just take the maple tree's
>> internal spinlock within the GPUVA manager and leave all the other locking
>> to the drivers as intended.
>>
>> However, I run into the case that lockdep shouts at me for not taking the
>> spinlock before calling mas_find() in the iterator macros.
>>
>> Now, I definitely don't want to let the drivers take the maple tree's
>> spinlock before they use the iterator macro. Of course, drivers shouldn't
>> even know about the underlying maple tree of the GPUVA manager.
>>
>> One way to make lockdep happy in this case seems to be taking the spinlock
>> right before mas_find() and drop it right after for each iteration.

> While we don't have any lockdep checking of this, you really shouldn't be
> using an iterator if you're going to drop the lock between invocations.
> The iterator points into the tree, so you need to invalidate the iterator
> any time you drop the lock.
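For completeness, dropping the lock safely between iterations would look roughly like this (just a sketch, names made up):

#include <linux/maple_tree.h>

/* Sketch only: per-iteration locking with the iterator invalidated
 * before the lock is dropped.  mas_pause() makes the next mas_find()
 * re-walk the tree from the last index instead of touching nodes that
 * may have changed while the lock was not held.
 */
static void walk_entries(struct maple_tree *mt, unsigned long first,
			 unsigned long last)
{
	MA_STATE(mas, mt, first, last);
	void *entry;

	mas_lock(&mas);
	while ((entry = mas_find(&mas, last)) != NULL) {
		mas_pause(&mas);
		mas_unlock(&mas);

		/* process entry; something external must keep it valid */

		mas_lock(&mas);
	}
	mas_unlock(&mas);
}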

Either way, the tree can't change in my case. Changes to the DRM GPUVA manager (and hence the tree) are protected by the drivers, either by serializing tree accesses or by holding another, external lock that ensures mutual exclusion. Just as a reminder: in the latter case, drivers usually hold that lock across multiple transactions to the manager (and hence the tree) so that they appear atomic.

So, really, the only purpose of me taking the internal lock is to satisfy lockdep and the maple tree's internal locking requirements for the future use cases you mentioned (e.g. slab cache defragmentation).
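To illustrate the driver-side contract (all names made up, just a sketch):

#include <linux/mutex.h>

/* Sketch only: the driver's own lock spans several transactions to the
 * GPUVA manager so they appear atomic to other threads; the manager's
 * internal spinlock is then only needed to keep lockdep and the maple
 * tree's own locking requirements happy.
 */
struct drv_vm {
	struct mutex lock;
	/* ... GPUVA manager instance, etc. ... */
};

static void drv_vm_bind(struct drv_vm *vm)
{
	mutex_lock(&vm->lock);
	/* ... multiple map/unmap/remap updates on the manager ... */
	mutex_unlock(&vm->lock);
}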

It's the rcu_dereference_check() in mas_root() that triggers in my case:

[   28.745706] lib/maple_tree.c:851 suspicious rcu_dereference_check() usage!

               stack backtrace:
[   28.746057] CPU: 8 PID: 1518 Comm: nouveau_dma_cop Not tainted 6.2.0-rc6-vmbind-0.2+ #104
[   28.746061] Hardware name: ASUS System Product Name/PRIME Z690-A, BIOS 2103 09/30/2022
[   28.746064] Call Trace:
[   28.746067]  <TASK>
[   28.746070]  dump_stack_lvl+0x5b/0x77
[   28.746077]  mas_walk+0x16d/0x1b0
[   28.746082]  mas_find+0xf7/0x300
[   28.746088]  drm_gpuva_in_region+0x63/0xa0
[   28.746099]  __drm_gpuva_sm_map.isra.0+0x465/0x9f0
[   28.746103]  ? lock_acquire+0xbf/0x2b0
[   28.746111]  ? __pfx_drm_gpuva_sm_step+0x10/0x10
[   28.746114]  ? lock_is_held_type+0xe3/0x140
[   28.746121]  ? mark_held_locks+0x49/0x80
[   28.746125]  ? _raw_spin_unlock_irqrestore+0x30/0x60
[   28.746138]  drm_gpuva_sm_map_ops_create+0x80/0xc0
[   28.746145]  uvmm_bind_job_submit+0x3c2/0x470 [nouveau]
[   28.746272]  nouveau_job_submit+0x60/0x450 [nouveau]
[   28.746393]  nouveau_uvmm_ioctl_vm_bind+0x179/0x1e0 [nouveau]
[   28.746510]  ? __pfx_nouveau_uvmm_ioctl_vm_bind+0x10/0x10 [nouveau]
[   28.746622]  drm_ioctl_kernel+0xa9/0x160
[   28.746629]  drm_ioctl+0x1f7/0x4b0


> You don't have to use a spinlock to do a read iteration.  You can just
> take the rcu_read_lock() around your iteration, as long as you can
> tolerate the mild inconsistencies that RCU permits.
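That would look roughly like the following (just a sketch, names made up):

#include <linux/maple_tree.h>
#include <linux/rcupdate.h>

/* Sketch only: a read-side walk under rcu_read_lock() instead of the
 * spinlock, accepting the mild inconsistencies RCU permits.  Entries
 * must not be used after rcu_read_unlock() unless something else keeps
 * them alive.
 */
static void walk_entries_rcu(struct maple_tree *mt, unsigned long first,
			     unsigned long last)
{
	MA_STATE(mas, mt, first, last);
	void *entry;

	rcu_read_lock();
	mas_for_each(&mas, entry, last) {
		/* look at entry */
	}
	rcu_read_unlock();
}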


Doing that would mean that the driver needs to take the RCU read lock around the iteration. However, for the reasons above, the driver already either serializes its accesses or protects them with its own mutex. Hence, that should not be needed.




