On 18/01/16 16:53, Chris Wilson wrote:
> On Mon, Jan 18, 2016 at 03:02:25PM +0000, Tvrtko Ursulin wrote:
>> -	while (!list_empty(&ring->request_list)) {
>> -		struct drm_i915_gem_request *request;
>> -
>> -		request = list_first_entry(&ring->request_list,
>> -					   struct drm_i915_gem_request,
>> -					   list);
>> -
>> -		if (!i915_gem_request_completed(request, true))
>> +	list_for_each_entry_safe(req, next, &ring->request_list, list) {
>> +		if (!i915_gem_request_completed(req, true))
>>  			break;
>> -		i915_gem_request_retire(request);
>> +		if (!i915.enable_execlists || !i915.enable_guc_submission) {
>> +			i915_gem_request_retire(req);
>> +		} else {
>> +			prev_req = list_prev_entry(req, list);
>> +			if (prev_req)
>> +				i915_gem_request_retire(prev_req);
>> +		}
>>  	}
>> To explain, this attempts to ensure that in GuC mode requests are only
>> unreferenced if there is a *following* *completed* request.
>> This way, regardless of whether they are using the same or different
>> contexts, we can be sure that the GPU has either completed the
>> context writing, or that the unreference will not cause the final
>> unpin of the context.
> This is the first bogus step. Contexts have to be unreferenced from
> request retire, not request free. As it stands today, this forces us to
> hold the struct_mutex for the free (causing many foul-ups along the
> way). The only reason it is like that is that execlists does not
> decouple its context pinning inside request cancel.
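The retire-ordering intent of the quoted diff can be modelled outside the kernel. Below is a minimal standalone sketch, where `struct request` and `retire_requests_guc` are hypothetical stand-ins for `drm_i915_gem_request` and the retire loop, and plain `prev`/`next` pointers stand in for `ring->request_list`; this is not the i915 code itself.

```c
#include <assert.h>

/* Hypothetical stand-in for drm_i915_gem_request: `completed` models
 * what i915_gem_request_completed() would report, `retired` records
 * that i915_gem_request_retire() has run. */
struct request {
	struct request *prev, *next;	/* submission order, oldest first */
	int completed;
	int retired;
};

/* GuC-mode retire as the diff intends: walk oldest-to-newest, stop at
 * the first incomplete request, and retire only the *predecessor* of
 * each completed request. Anything retired therefore has a following
 * completed request, so the GPU must be done writing its context. */
static void retire_requests_guc(struct request *oldest)
{
	for (struct request *req = oldest; req; req = req->next) {
		if (!req->completed)
			break;
		if (req->prev && !req->prev->retired)
			req->prev->retired = 1;
	}
}
```

With three requests where only the first two have completed, this retires just the first one: the second stays referenced because no later completed request follows it yet.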
What is the first bogus step? My idea of how to fix the GuC issue, or
the mention of final unreference in relation to GPU completing the
submission?
Also, I don't understand how you would decouple the context and request lifetimes?
Maybe we can ignore execlist mode for the moment and just consider the
GuC which, as much as I understand it, has a simpler and fully aligned
request/context/lrc lifetime of:
* reference and pin and request creation
* unpin and unreference at retire
Where retire is decoupled from actual GPU activity, or, put better, only
indirectly driven by it.
Execlists bolt another, parallel, reference and pin on top of that, with
different lifetime rules, so maybe ignore them for the GuC discussion.
I just want to figure out what you have in mind.
Regards,
Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx