Re: [PATCH 06/12] drm/i915/scheduler: Execute requests in order of priorities

On Thu, Nov 03, 2016 at 04:21:25PM +0000, Tvrtko Ursulin wrote:
> >+static void update_priorities(struct i915_priotree *pt, int prio)
> >+{
> >+	struct drm_i915_gem_request *request =
> >+		container_of(pt, struct drm_i915_gem_request, priotree);
> >+	struct intel_engine_cs *engine = request->engine;
> >+	struct i915_dependency *dep;
> >+
> >+	if (prio <= READ_ONCE(pt->priority))
> >+		return;
> >+
> >+	/* Recursively bump all dependent priorities to match the new request */
> >+	list_for_each_entry(dep, &pt->pre_list, pre_link)
> >+		update_priorities(dep->signal, prio);
> 
> John got in trouble from recursion in his scheduler, used for the
> same thing AFAIR. Or was it the priority bumping? Either way, it
> could be imperative to avoid it.

static void update_priority(struct i915_priotree *pt, int prio)
{
        struct drm_i915_gem_request *request =
                container_of(pt, struct drm_i915_gem_request, priotree);
        struct intel_engine_cs *engine = request->engine;

        spin_lock_irq(&engine->timeline->lock);
        if (prio > pt->priority) {
                pt->priority = prio;
                if (!RB_EMPTY_NODE(&pt->node)) {
                        rb_erase(&pt->node, &engine->execlist_queue);
                        if (insert_request(pt, &engine->execlist_queue))
                                engine->execlist_first = &pt->node;
                }
        }
        spin_unlock_irq(&engine->timeline->lock);
}

static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
{
        struct i915_dependency *dep, *p;
        LIST_HEAD(dfs);

        if (prio <= READ_ONCE(request->priotree.priority))
                return;

        /* Need BKL in order to use the temporary link inside i915_dependency */
        lockdep_assert_held(&request->i915->drm.struct_mutex);

        /* Recursively bump all dependent priorities to match the new request */
        list_for_each_entry(dep, &request->priotree.pre_list, pre_link)
                if (prio > READ_ONCE(dep->signal->priority))
                        list_add_tail(&dep->dfs_link, &dfs);

        list_for_each_entry(dep, &dfs, dfs_link) {
                list_for_each_entry(p, &dep->signal->pre_list, pre_link) {
                        GEM_BUG_ON(p == dep);
                        if (prio > READ_ONCE(p->signal->priority))
                                list_move_tail(&p->dfs_link, &dfs);
                }
        }

        /* Fifo and depth-first replacement ensure our deps execute before us */
        list_for_each_entry_safe_reverse(dep, p, &dfs, dfs_link) {
                update_priority(dep->signal, prio);
                INIT_LIST_HEAD(&dep->dfs_link);
        }
        update_priority(&request->priotree, prio);

        /* XXX Do we need to preempt to make room for us and our deps? */
}

Still ugh. I think its only saving grace is that we don't chase beyond
the priorities we actually need to bump.
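For reference, the reversed depth-first walk above can be sketched
stand-alone roughly like this. The `struct node`, the `visited` flag and
the fixed-size arrays are illustrative stand-ins for `i915_priotree`,
the temporary `dfs_link` list and the kernel list machinery; they are
not the actual i915 structures, and there is no locking here.

```c
#define MAX_DEPS  4
#define MAX_NODES 64

struct node {
	int priority;
	int visited;			/* plays the role of dfs_link:
					 * a node is on the worklist at
					 * most once */
	int nr_deps;
	struct node *deps[MAX_DEPS];	/* requests this one waits on */
};

/* Iteratively raise every transitive dependency to at least prio,
 * without recursion: collect reachable lower-priority nodes on a
 * worklist, then bump in reverse discovery order so dependencies are
 * raised before the requests that wait on them. */
static void bump_priority(struct node *req, int prio)
{
	struct node *stack[MAX_NODES];	/* fixed size, sketch only */
	struct node *order[MAX_NODES];
	int sp = 0, n = 0;

	if (prio <= req->priority)
		return;

	req->visited = 1;
	stack[sp++] = req;
	while (sp) {
		struct node *cur = stack[--sp];
		int i;

		order[n++] = cur;
		for (i = 0; i < cur->nr_deps; i++) {
			struct node *d = cur->deps[i];

			/* Don't chase beyond nodes already at prio */
			if (!d->visited && prio > d->priority) {
				d->visited = 1;
				stack[sp++] = d;
			}
		}
	}

	while (n--) {
		order[n]->priority = prio;
		order[n]->visited = 0;
	}
}
```

The early `prio <= priority` cut is what keeps the walk bounded, the
same property the real code relies on: already-bumped subtrees are
never revisited.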
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
