On 12/1/21 09:36, Christian König wrote:
On 01.12.21 09:23, Thomas Hellström (Intel) wrote:
[SNIP]
Jason and I came up with a deep dive iterator for his use case, but I think we don't want to use that any more after my dma_resv rework.
In other words, when you need to create a new dma_fence_array you flatten out the existing construct, which is at worst case dma_fence_chain->dma_fence_array->dma_fence.
OK, is there any cross-driver contract here, like every driver using a dma_fence_array needing to check for a dma_fence_chain and flatten it like above?
So far we only discussed that on the mailing list but haven't made
any documentation for that.
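[Editor's note: for illustration only, a rough sketch of the flattening described above. This is not the deep dive iterator mentioned earlier; flatten_fences() and its collect() callback are placeholder names, and error handling is omitted.]

```c
/*
 * Walk a fence that is at worst dma_fence_chain->dma_fence_array->
 * dma_fence and hand every leaf fence to a collector, which could then
 * build a new, flat dma_fence_array. The collect() callback is expected
 * to take its own reference with dma_fence_get() if it stores the fence.
 */
#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>
#include <linux/dma-fence-chain.h>

static void flatten_fences(struct dma_fence *fence,
			   void (*collect)(struct dma_fence *leaf, void *cookie),
			   void *cookie)
{
	struct dma_fence *iter;

	dma_fence_chain_for_each(iter, fence) {
		struct dma_fence_chain *chain = to_dma_fence_chain(iter);
		struct dma_fence *contained = chain ? chain->fence : iter;
		struct dma_fence_array *array = to_dma_fence_array(contained);
		unsigned int i;

		if (array) {
			/* One level of array is the assumed worst case. */
			for (i = 0; i < array->num_fences; ++i)
				collect(array->fences[i], cookie);
		} else {
			collect(contained, cookie);
		}
	}
}
```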
OK, one other cross-driver pitfall I see is someone accidentally joining two fence chains together by unknowingly creating a fence chain with another fence chain as the @fence argument?
That would indeed be illegal and we should probably add a WARN_ON()
for that.
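[Editor's note: a minimal sketch of such a check. my_chain_node_add() is a hypothetical wrapper; in practice the WARN_ON() would more likely live inside dma_fence_chain_init() itself.]

```c
#include <linux/dma-fence-chain.h>

static void my_chain_node_add(struct dma_fence_chain *chain,
			      struct dma_fence *prev,
			      struct dma_fence *fence,
			      uint64_t seqno)
{
	/*
	 * @fence is meant to be the fence contained in this chain node;
	 * passing another chain here would silently join two chains.
	 */
	WARN_ON(to_dma_fence_chain(fence));

	dma_fence_chain_init(chain, prev, fence, seqno);
}
```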
The third cross-driver pitfall IMHO is the locking dependency these containers add. Other drivers (read: at least i915) may have defined slightly different locking orders, and that should also be addressed if needed, but that requires a cross-driver agreement on what the locking orders really are. Patch 1 actually addresses this while keeping the container lockdep warnings for deep recursions, so at least I think that could serve as a discussion starter.
No, drivers should never make any assumptions on that.
Yes, that i915 assumption of taking the lock of the last signaled fence first goes back a while in time. We should look at fixing that up, and document any (possibly forbidden) assumptions about fence lock ordering to avoid it happening again, if there is no common cross-driver locking order that can be agreed on.
E.g. when you need to take a lock from a callback you must guarantee that you never have that lock taken when you call any of the dma_fence functions. Your patch breaks the lockdep annotation for that.
I'm pretty sure that could be fixed in a satisfactory way if needed.
What we could do is to avoid all this by not calling the callback with
the lock held in the first place.
If that's possible that might be a good idea, please also see below.
/Thomas
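[Editor's note: for context, a minimal sketch of the pattern dma_fence_array already uses to stay within the rule Christian states above: callbacks run under the signaling fence's lock, so signaling of the container is deferred to an irq_work that runs with no fence lock held. The my_* names are placeholders; fence initialization and callback registration are elided.]

```c
#include <linux/dma-fence.h>
#include <linux/irq_work.h>

struct my_container_fence {
	struct dma_fence base;
	struct irq_work work;
	atomic_t num_pending;
};

struct my_container_cb {
	struct dma_fence_cb cb;
	struct my_container_fence *parent;
};

static void my_container_irq_work(struct irq_work *work)
{
	struct my_container_fence *f = container_of(work, typeof(*f), work);

	/* No fence lock is held here, so signaling is safe. */
	dma_fence_signal(&f->base);
	dma_fence_put(&f->base);
}

static void my_container_fence_cb(struct dma_fence *fence,
				  struct dma_fence_cb *cb)
{
	struct my_container_cb *mcb = container_of(cb, typeof(*mcb), cb);
	struct my_container_fence *f = mcb->parent;

	/*
	 * Called with fence->lock held by dma_fence_signal(), so don't
	 * signal the container here; queue the irq_work instead. Each
	 * registered callback holds a reference on &f->base, taken when
	 * the callback was added (elided); the last one is handed over
	 * to the irq_work above.
	 */
	if (atomic_dec_and_test(&f->num_pending))
		irq_work_queue(&f->work);
	else
		dma_fence_put(&f->base);
}
```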
Oh, and a follow-up question:
If there were a way to break the recursion on final put() (using the same basic approach as patch 2 in this series uses to break recursion in enable_signaling()), so that none of these containers required any special treatment, would it be worth pursuing? I guess it might be possible by having the callbacks drop the references rather than the loop in the final put, plus a couple of changes in code iterating over the fence pointers.
That won't really help, you just move the recursion from the final
put into the callback.
How do we recurse from the callback? The introduced fence_put() of individual fence pointers doesn't recurse anymore (at most one level), and any callback recursion is broken by the irq_work?
Yeah, but then you would need to take another lock to avoid racing
with dma_fence_array_signaled().
I figure the big amount of work would be adjusting code that iterates over the individual fence pointers to recognize that they are RCU protected.
Could be that we could solve this with RCU, but that sounds like a lot
of churn for no gain at all.
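[Editor's note: to illustrate the kind of adjustment Thomas mentions above, a hypothetical sketch of reading RCU-protected fence pointers. The my_fence_array layout with an __rcu annotated pointer array is an assumption for illustration, not how dma_fence_array is implemented upstream.]

```c
#include <linux/dma-fence.h>
#include <linux/rcupdate.h>

/* Hypothetical layout, if the contained pointers became RCU protected. */
struct my_fence_array {
	struct dma_fence base;
	unsigned int num_fences;
	struct dma_fence __rcu **fences;
};

static bool my_fence_array_signaled(struct my_fence_array *array)
{
	unsigned int i;

	rcu_read_lock();
	for (i = 0; i < array->num_fences; ++i) {
		struct dma_fence *fence = rcu_dereference(array->fences[i]);

		/* A NULL entry means the callback already dropped it. */
		if (fence && !dma_fence_is_signaled(fence)) {
			rcu_read_unlock();
			return false;
		}
	}
	rcu_read_unlock();

	return true;
}
```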
In other words even with the problems solved I think it would be a
really bad idea to allow chaining of dma_fence_array objects.
Yes, that was really the question: is it worth pursuing this? I'm not really suggesting we should allow this as an intentional feature. I'm worried, however, that if we allow these containers to start floating around cross-driver (or even internally) disguised as ordinary dma_fences, they would require a lot of driver special-casing, or else completely unexpected WARN_ON()s and lockdep splats would start to turn up, scaring people off from using them. And that would be a breeding ground for hairy driver-private constructs.
/Thomas
Christian.
Thanks,
/Thomas