Re: [PATCH 04/10] drm/ttm: change fence_lock to inner lock, v3

On 11/20/2012 02:13 PM, Maarten Lankhorst wrote:
On 20-11-12 13:03, Thomas Hellstrom wrote:
On 11/20/2012 12:33 PM, Maarten Lankhorst wrote:
On 20-11-12 08:48, Thomas Hellstrom wrote:
On 11/19/2012 04:33 PM, Maarten Lankhorst wrote:
On 19-11-12 16:04, Thomas Hellstrom wrote:
On 11/19/2012 03:17 PM, Thomas Hellstrom wrote:
Hi,

This patch looks mostly good, although I think ttm_bo_cleanup_refs becomes overly complicated.
Could this do, or am I missing something?

Actually, my version is bad, because ttm_bo_wait() is called with the lru lock held.

/Thomas
Oh, digging through it made me remember why I had to release the reservation early and
had to allow move_notify to be called without a reservation.

Fortunately move_notify gets a NULL parameter then, which is the only time that happens,
so you can still do BUG_ON(mem != NULL && !ttm_bo_reserved(bo)); in your
move_notify handler.
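
As a minimal sketch, such a handler could look like this (driver_move_notify is a
hypothetical name; ttm_bo_reserved() is the reservation-state check mentioned above):

    static void driver_move_notify(struct ttm_buffer_object *bo,
                                   struct ttm_mem_reg *new_mem)
    {
            /* new_mem == NULL only happens on final destruction, and
             * that is the one case where the bo may be unreserved. */
            BUG_ON(new_mem != NULL && !ttm_bo_reserved(bo));

            if (new_mem == NULL) {
                    /* tear down remaining GPU bindings for the dying bo */
                    return;
            }

            /* normal move handling, reservation held as usual */
    }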

05/10 removed the loop and assumed no new fence could be attached after the driver has
declared the bo dead.

However, at that point it may no longer hold a reservation to confirm this; that's why
I moved the cleanup to be done in the release_list handler. It could still be done in
ttm_bo_release, but we no longer have a reservation after we have waited. Getting
a reservation can fail if the bo is imported, for example.
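
Roughly, the idea in the release path is this (a sketch only; the helper name is
made up, and the no_wait reserve is what can fail for an imported bo):

    static void driver_release_list_cleanup(struct ttm_buffer_object *bo)
    {
            /* no_wait reserve: for an imported bo another device may
             * hold the reservation, so failure is allowed here */
            int ret = ttm_bo_reserve(bo, false, true, false, 0);

            if (ret == 0) {
                    /* we own the bo exclusively now; finish cleanup */
                    ttm_bo_cleanup_memtype_use(bo);
                    ttm_bo_unreserve(bo);
            }
            /* on failure, cleanup stays deferred as described above */
    }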

It is true that in that case a new fence may be attached as well, but that
would be less harmful, since that operation wouldn't involve this device, so the
ttm bo can still be removed in that case. When that time comes I should probably
fix up that WARN_ON(ret) in ttm_bo_cleanup_refs. :-)

I did add a WARN_ON(!atomic_read(&bo->kref.refcount)); to
ttm_bo_reserve and ttm_eu_reserve_buffers to be sure nothing is done on the device
itself. If that is too paranoid, those WARN_ONs could be dropped; I'd prefer to leave them
in for a kernel release or two. But according to the rules, that would be the only time you
could attach a new fence and trigger the WARN_ON for now.
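
Concretely, the check is just a guard at the top of those reserve paths (kref was
still a bare atomic_t at this point):

    /* a zero refcount means the bo is already dead; reserving it
     * could let a new fence be attached behind the destroy path */
    WARN_ON(!atomic_read(&bo->kref.refcount));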
Hmm, I'd appreciate it if you could group patches with functional changes that depend on each other together,
along with the "this is done because ..." explanation, which makes it much easier to review (and to follow
the commit history in case something goes terribly wrong and we need to revert).

Meanwhile I'll take a look at the final ttm_bo.c and see if I can spot any culprits.

In general, as long as a bo is on an LRU list, we must be able to attach fences because of accelerated eviction.
I thought it was deliberately designed in such a way that the bo was kept on the lru list,
but since it's also on the ddestroy list it won't start accelerated eviction:
the eviction path branches into cleanup_refs early, and the lru_lock still protects all the list entries.
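
That branch looks roughly like this (a simplified sketch of ttm_mem_evict_first();
locking and the exact argument lists vary across the series):

    spin_lock(&glob->lru_lock);
    bo = list_first_entry(&man->lru, struct ttm_buffer_object, lru);

    if (!list_empty(&bo->ddestroy)) {
            /* dying bo: finish the delayed destroy instead of
             * evicting, so no new eviction fence is attached */
            ret = ttm_bo_cleanup_refs(bo, interruptible,
                                      no_wait_reserve, no_wait_gpu);
    } else {
            spin_unlock(&glob->lru_lock);
            ret = ttm_bo_evict(bo, interruptible,
                               no_wait_reserve, no_wait_gpu);
    }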
I used bad wording. I meant that unbinding might be accelerated, but we currently (quite inefficiently)
do synchronized unbinding, assuming that only the CPU can do it. When we start to support
unsynchronized moves, we need to be able to attach fences at least at the last move_notify(bo, NULL).
Would you need to wait in that case for the fence to complete before calling move_notify?

If not, you would still only need to perform one wait, but you'd have to make sure move_notify only gets
called by one thread before checking the fence pointer and performing the wait. At that point you still hold the
lru_lock, though, so it shouldn't be too hard to make something safe.
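
For instance, as a sketch (the destroy_notified flag is a hypothetical addition):

    bool do_notify = false;

    /* lru_lock makes the flag test-and-set atomic, so only one
     * thread ever issues the final move_notify(bo, NULL) */
    spin_lock(&glob->lru_lock);
    if (!bo->destroy_notified) {
            bo->destroy_notified = true;
            do_notify = true;
    }
    spin_unlock(&glob->lru_lock);

    if (do_notify)
            bdev->driver->move_notify(bo, NULL); /* may attach a fence */

    /* after this, a single wait on the bo's fence suffices */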

I think a driver that wants to implement asynchronous moves typically doesn't want to wait before calling move_notify, but may wait in move_notify or move. Typically (in the upcoming vmwgfx) it would invalidate the buffer in move_notify(bo, NULL), attach a fence, and then use the normal delayed destroy to wait on that fence before destroying the buffer.
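
In outline, that flow could look like this (hypothetical driver code, not the actual
vmwgfx implementation; driver_emit_unbind is a made-up helper):

    static void example_move_notify(struct ttm_buffer_object *bo,
                                    struct ttm_mem_reg *new_mem)
    {
            void *fence;

            if (new_mem != NULL)
                    return; /* ordinary move, nothing special here */

            /* invalidate: emit the GPU unbind for this bo */
            fence = driver_emit_unbind(bo);

            /* attach it as the bo's sync object (fence_lock taken
             * around the update, elided here); the normal delayed
             * destroy then waits on it before final destruction */
            bo->sync_obj = fence;
    }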

Otherwise, since binds / unbinds are handled in the GPU command stream, there's never any need to wait for moves
except when there's CPU access.

/Thomas



~Maarten


_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel

