Re: [PATCH 06/12] drm/i915/scheduler: Execute requests in order of priorities

On Thu, Nov 03, 2016 at 04:21:25PM +0000, Tvrtko Ursulin wrote:
> >+		rb = rb_next(rb);
> >+		rb_erase(&cursor->priotree.node, &engine->execlist_queue);
> >+		RB_CLEAR_NODE(&cursor->priotree.node);
> >+		cursor->priotree.priority = INT_MAX;
> >+
> >+		/* We keep the previous context alive until we retire the
> >+		 * following request. This ensures that the context object
> >+		 * is still pinned for any residual writes the HW makes into
> >+		 * it on the context switch into the next object following the
> >+		 * breadcrumb. Otherwise, we may retire the context too early.
> >+		 */
> >+		cursor->previous_context = engine->last_context;
> >+		engine->last_context = cursor->ctx;
> >+
> 
> Will we need a knob to control the amount of context merging?
> Until preemption that is, similar to GuC queue depth "nerfing" from
> the other patch.

Right, timeslicing + preemption is perhaps the best answer here. A
knob here would be one of those policy decisions that we want to avoid
:) Not quite sure how we would go about autotuning it. A GL workload
looks quite different to a VK workload and quite different again to a CL
workload. One can only imagine the horrors on the transcode side :)

Making those decisions is the real heart of a scheduler.

This code should be more like:

/* Keep pulling requests from the scheduler, coalescing consecutive
 * requests that can share an ELSP port, and stopping once both ports
 * are claimed by incompatible contexts.
 */
while ((req = scheduler.next_request())) {
	if (!is_compatible(port, req)) {
		/* Cannot be merged with what is already queued here;
		 * spill over to the next port, or stop if this was
		 * the last one.
		 */
		if (port++)
			break;
	}

	finish_request(req);
	scheduler.remove_request(req);

	port = req;
}
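
Here is_compatible() stands for the context-merging test discussed
above; assuming port holds the last request queued to the port being
filled, a minimal sketch might look like (illustrative only, not a
final interface):

static bool is_compatible(const struct drm_i915_gem_request *last,
			  const struct drm_i915_gem_request *req)
{
	/* An empty port will take anything. */
	if (!last)
		return true;

	/* Only requests from the same logical context can be coalesced
	 * into a single ELSP submission; anything else has to spill to
	 * the next port.
	 */
	return last->ctx == req->ctx;
}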

Given that at least this block of code is duplicated for the GuC, maybe
it is worth starting on now. I was trying to avoid doing that because,
in my mind, it is the preemption that is the complex part and should
drive the next steps.

> >+static void update_priorities(struct i915_priotree *pt, int prio)
> >+{
> >+	struct drm_i915_gem_request *request =
> >+		container_of(pt, struct drm_i915_gem_request, priotree);
> >+	struct intel_engine_cs *engine = request->engine;
> >+	struct i915_dependency *dep;
> >+
> >+	if (prio <= READ_ONCE(pt->priority))
> >+		return;
> >+
> >+	/* Recursively bump all dependent priorities to match the new request */
> >+	list_for_each_entry(dep, &pt->pre_list, pre_link)
> >+		update_priorities(dep->signal, prio);
> 
> John got into trouble with recursion in his scheduler, used for the
> same thing AFAIR. Or was it the priority bumping? Either way, it
> could be imperative to avoid it.

Yup. I haven't thrown 100,000 requests at it for this very reason.

Hmm, if I put an additional struct list_head into i915_dependency, we
can build a search list using it (praise be to the BKL).
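
Something along these lines, purely as a sketch of the shape (folding
execlists_schedule() and update_priorities() into one pass; dfs_link
below is that hypothetical extra list_head in i915_dependency, assumed
to be initialised empty when the dependency is created, and the whole
walk assumed to be serialised by struct_mutex):

static void update_priorities(struct drm_i915_gem_request *request, int prio)
{
	struct i915_dependency stack, *dep, *pn;
	LIST_HEAD(dfs);

	if (prio <= READ_ONCE(request->priotree.priority))
		return;

	/* Seed the search with the request itself via a temporary
	 * on-stack dependency, then flatten the DAG into a list:
	 * whenever we find a dependency that still needs bumping, move
	 * it to the tail so it ends up after all of its dependents.
	 */
	stack.signal = &request->priotree;
	list_add(&stack.dfs_link, &dfs);

	list_for_each_entry(dep, &dfs, dfs_link) {
		struct i915_priotree *pt = dep->signal;
		struct i915_dependency *p;

		list_for_each_entry(p, &pt->pre_list, pre_link)
			if (prio > READ_ONCE(p->signal->priority))
				list_move_tail(&p->dfs_link, &dfs);
	}

	/* Walking the flattened list in reverse bumps and reinserts the
	 * deepest dependencies first, preserving the property of the
	 * recursive version that our deps execute before us.
	 */
	list_for_each_entry_safe_reverse(dep, pn, &dfs, dfs_link) {
		struct i915_priotree *pt = dep->signal;
		struct intel_engine_cs *engine =
			container_of(pt, struct drm_i915_gem_request,
				     priotree)->engine;

		list_del_init(&dep->dfs_link);

		spin_lock_irq(&engine->timeline->lock);
		pt->priority = prio;
		if (!RB_EMPTY_NODE(&pt->node)) {
			rb_erase(&pt->node, &engine->execlist_queue);
			if (insert_request(pt, &engine->execlist_queue))
				engine->execlist_first = &pt->node;
		}
		spin_unlock_irq(&engine->timeline->lock);
	}
}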
 
> >+	/* Fifo and depth-first replacement ensure our deps execute before us */
> >+	spin_lock_irq(&engine->timeline->lock);
> >+	pt->priority = prio;
> >+	if (!RB_EMPTY_NODE(&pt->node)) {
> >+		rb_erase(&pt->node, &engine->execlist_queue);
> >+		if (insert_request(pt, &engine->execlist_queue))
> >+			engine->execlist_first = &pt->node;
> >+	}
> >+	spin_unlock_irq(&engine->timeline->lock);
> >+}
> >+
> >+static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
> >+{
> >+	/* Make sure that our entire dependency chain shares our prio */
> >+	update_priorities(&request->priotree, prio);
> >+
> >+	/* XXX Do we need to preempt to make room for us and our deps? */
> >+}
> 
> Hm, why isn't scheduling backend agnostic? Makes it much simpler to
> do the first implementation like this?

The scheduling is? The scheduling is just the policy. But we do need
entry points to call into the scheduler, such as a change in the request
tree (here), completion of a request (context-switch interrupt), or
completion of a time slice. engine->schedule() is not the best name,
perhaps schedule_request(). Using engine is mostly a placeholder (a
convenient place to store a vfunc), but I do think we will end up with
different interactions with the scheduler based on the backend (in
particular feeding the GuC with dependency information). So, yes, this
is the easy route that should evolve as the need changes, with the
different callbacks just pencilled in as entry points where the backend
and scheduler should collude.
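
Concretely, the placeholder amounts to no more than a vfunc on the
engine, roughly (names illustrative, not a final interface):

struct drm_i915_gem_request;

struct intel_engine_cs {
	/* ...existing members... */

	/* Entry point from the scheduler into the backend: called when
	 * a request (and hence its dependency chain) changes priority,
	 * so the backend can resort its queue or, later, forward the
	 * hint to the GuC. Left NULL by backends with no scheduling
	 * support.
	 */
	void (*schedule)(struct drm_i915_gem_request *request,
			 int priority);
};

with execlists installing engine->schedule = execlists_schedule at
engine init, and callers doing no more than

	if (request->engine->schedule)
		request->engine->schedule(request, prio);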
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre



