On 2/23/20 4:45 PM, Christian König wrote:
On 21.02.20 at 18:12, Daniel Vetter wrote:
[SNIP]
Yeah the Great Plan (tm) is to fully rely on ww_mutex slowly
degenerating
into essentially a global lock. But only when there's actual contention
and thrashing.
Yes exactly. A really big problem in TTM is currently that we drop the
lock after evicting BOs because they tend to move in again directly
after that.
From practice I can also confirm that there is exactly zero benefit in
dropping locks early and reacquiring them, for example for the VM
page tables. That just makes it more likely that somebody needs to
roll back, which is what we need to avoid in the first place.
If you have a benchmarking setup available, it would be very interesting
for future reference to see how changing from wait-die (WD) to
wound-wait (WW) mutexes affects the rollback frequency. WW is known to
cause rollbacks much less frequently, but there is more work associated
with each rollback.
Contention on BO locks during command submission is perfectly fine as
long as it is as lightweight as possible while we don't have
thrashing. When we do have thrashing, multi-submission performance is
best achieved by simply favoring a single process to finish its
business and blocking everybody else.
Hmm. Sounds like we need a per-manager ww_rwsem protecting manager
allocation, taken in write mode when there's thrashing and in read
mode otherwise. That would limit the number of "unnecessary" locks we'd
have to keep and reduce unwanted side-effects (see below):
Because of this I would actually vote for forbidding releasing
individual ww_mutex locks within an acquire context.
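For context, the "context" here is the ww_acquire_ctx of the in-kernel API (see Documentation/locking/ww-mutex-design.rst), where rollback already means dropping every lock taken so far. The usual pattern is roughly the following kernel-style sketch (not compilable as-is; the BO list iteration is pseudocode):

```c
/* All locks in one acquire context; on -EDEADLK the whole set is
 * rolled back, which is the cost discussed above. */
struct ww_acquire_ctx ctx;
int ret;

ww_acquire_init(&ctx, &reservation_ww_class);
retry:
	/* for each bo in the submission's BO list: */
	ret = ww_mutex_lock(&bo->resv->lock, &ctx);
	if (ret == -EDEADLK) {
		/* Roll back: release every lock acquired so far... */
		/* ...then sleep on the contended lock and restart. */
		ww_mutex_lock_slow(&bo->resv->lock, &ctx);
		goto retry;
	}
ww_acquire_done(&ctx);
```

Forbidding early release of individual locks, as proposed, keeps the whole set held until ww_acquire_fini(), so evicted BOs cannot be moved back in while the context is live.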
Yes, I see the problem.
But my first reaction is that this might have undesirable side-effects.
Let's say somebody wanted to swap the evicted BOs out? Or CPU writes to
them cause faults, which might also block on mmap_sem, which in turn
blocks khugepaged?
Still it's a fairly simple solution to a problem that seems otherwise
hard to solve efficiently.
Thanks,
Thomas
Regards,
Christian.
-Daniel
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel