On 01/03/2016 15:18, Maarten Lankhorst wrote:
Hey,
On 18-02-16 at 15:24, John.C.Harrison@xxxxxxxxx wrote:
From: John Harrison <John.C.Harrison@xxxxxxxxx>
The request structure is reference counted. Previously, when the count
reached zero the request was immediately freed and all associated
objects were unreferenced/unallocated. This meant that the driver mutex
lock had to be held at the point where the count reached zero. That was
fine while all references were held internally to the driver. However,
the plan is to allow the underlying fence object (and hence the request
itself) to be returned to other drivers and to userland. External users
cannot be expected to acquire a driver-private mutex lock.

Rather than attempt to disentangle the request structure from the
driver mutex lock, the decision was to defer the free code until a
later (safer) point. Hence this patch changes the unreference callback
to merely move the request onto a delayed free list. The driver's
retire worker thread then processes that list and actually calls the
free function on the requests.
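As a rough sketch of the scheme (not the actual patch; the list, lock
and link names below are invented for illustration):

static void i915_gem_request_unreference_cb(struct kref *kref)
{
	struct drm_i915_gem_request *req =
		container_of(kref, struct drm_i915_gem_request, ref);
	struct intel_engine_cs *engine = req->engine;

	/* Freeing here would require the driver mutex, which external
	 * fence users cannot be expected to hold, so just queue the
	 * request for the retire worker instead. */
	spin_lock(&engine->delayed_free_lock);
	list_add_tail(&req->delayed_free_link, &engine->delayed_free_list);
	spin_unlock(&engine->delayed_free_lock);
}

/* Called later from the retire worker, with the driver mutex held. */
static void i915_gem_request_free_deferred(struct intel_engine_cs *engine)
{
	struct drm_i915_gem_request *req, *next;
	LIST_HEAD(free_list);

	/* Splice the whole list out under the spinlock, then free the
	 * entries without the lock held. */
	spin_lock(&engine->delayed_free_lock);
	list_splice_init(&engine->delayed_free_list, &free_list);
	spin_unlock(&engine->delayed_free_lock);

	list_for_each_entry_safe(req, next, &free_list, delayed_free_link)
		i915_gem_request_free(req);
}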
v2: New patch in series.
v3: Updated after review comments by Tvrtko Ursulin. Rename list nodes
to 'link' rather than 'list'. Update list processing to be more
efficient/safer with respect to spinlocks.
v4: Changed to use basic spinlocks rather than IRQ ones - missed
update from earlier feedback by Tvrtko.
v5: Improved a comment to keep the style checker happy.
For: VIZ-5190
Signed-off-by: John Harrison <John.C.Harrison@xxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Looks like Chris also mentioned it, but a fence can stay alive for an unknown period of time.
As a result, all associated data should be freed as soon as the fence is signaled,
not when the last refcount drops to 0. That would remove the delayed free dance and clean up the code. :)
I'm not sure what you mean. The delayed free thing is purely because
freeing up the resources associated with the request requires holding
the driver mutex lock - unpinning and freeing contexts basically. Chris
has claimed that this is easy to resolve but it does not look trivial to
me.
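To make the dependency concrete, the final free path looks roughly
like this (a sketch only; the field and function names are
approximate, not the exact code):

static void i915_gem_request_free(struct drm_i915_gem_request *req)
{
	/* The whole reason for the deferral: this path must run with
	 * the driver mutex held. */
	lockdep_assert_held(&req->i915->dev->struct_mutex);

	/* Dropping the context reference can unpin and free GEM
	 * objects, which is what requires the mutex. */
	i915_gem_context_unreference(req->ctx);

	kfree(req);
}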
It might be possible to move the context, client and IRQ release from
the final ref count -> 0 function to the retire function instead. I
think that would be the soonest non-interrupt opportunity after the
request has been signalled. I'm not sure it really buys you much though.
The context is likely to be locked by a newer request anyway, the client
release is only removing a node from a list and the IRQ is already
being released at the point of signal (it is only in the ref -> 0 path
for the case where the request got aborted before completing).
The real holder of resources is the object tracking code. It is the
object/vma freeing when the object itself is retired that really
releases memory. And that is not changing - it is not part of the
request signal code path. That all happens from
'i915_gem_retire_requests_ring' or from an explicit wait-on-request. It
might be possible to trigger the process from the request signal
handler as well, but again, I can't see it being easy to make that
safe in IRQ context. I'm pretty sure it would have to be another
deferred work handler rather than something done directly in the IRQ
handler.
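Something along these lines, say (only a sketch; it assumes the
series' fence-in-request conversion for req->fence and reuses the
existing retire worker):

static void i915_gem_request_signal(struct drm_i915_gem_request *req)
{
	/* Runs in IRQ context: signal the fence, nothing more. */
	fence_signal(&req->fence);

	/* Retirement needs the driver mutex so it cannot happen here;
	 * kick the retire worker to run as soon as possible instead. */
	queue_delayed_work(req->i915->wq, &req->i915->mm.retire_work, 0);
}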
~Maarten