Re: [PATCH 2/2] drm/i915/guc: default to using GuC submission where possible

On 22/04/16 19:51, Chris Wilson wrote:
> On Fri, Apr 22, 2016 at 07:45:15PM +0100, Chris Wilson wrote:
>> On Fri, Apr 22, 2016 at 07:22:55PM +0100, Dave Gordon wrote:
>>> This patch simply changes the default value of "enable_guc_submission"
>>> from 0 (never) to -1 (auto). This means that GuC submission will be
>>> used if the platform has a GuC, the GuC supports the request submission
>>> protocol, and any required GuC firmware was successfully loaded. If any
>>> of these conditions are not met, the driver will fall back to using
>>> execlist mode.
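
[For illustration, a minimal C sketch of the never/auto/always semantics
described in the quoted text: -1 stays "auto" unless the user forces a
value, and the driver resolves it against what the platform and firmware
actually support. The helper name and arguments here are illustrative
assumptions, not the actual i915 code.]

	#include <stdbool.h>

	/* -1 = auto, 0 = never, 1 = always (if supported).
	 * Sketch only: the real i915 sanitize logic differs in detail. */
	static int sanitize_enable_guc_submission(int enable, bool has_guc,
						  bool fw_loaded)
	{
		if (enable < 0)				/* auto-detect */
			enable = has_guc && fw_loaded;
		if (enable && (!has_guc || !fw_loaded))
			enable = 0;			/* fall back to execlists */
		return enable;
	}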

> I just remembered something else.
>
>  * Work Items:
>  * There are several types of work items that the host may place into a
>  * workqueue, each with its own requirements and limitations. Currently only
>  * WQ_TYPE_INORDER is needed to support legacy submission via GuC, which
>  * represents an in-order queue. The kernel driver packs the ring tail pointer
>  * and an ELSP context descriptor dword into each Work Item.
>
> Is this right? You only allocate a single client covering all engines and
> specify INORDER. We expect parallel execution between engines; is this
> supported? Empirically it seems like the GuC is only executing commands in
> series across engines and not in parallel.
> -Chris

AFAIK, INORDER represents in-order execution of the elements in the GuC's (internal) submission queue, which is per-engine; i.e. this option bypasses the GuC's internal scheduling algorithms and makes the GuC behave as a simple dispatcher. It demultiplexes work-queue items into the multiple per-engine submission queues and then executes them in order from there.
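
[To make the demultiplexing concrete, a small illustrative C model of a
WQ_TYPE_INORDER dispatcher: one shared work queue feeding per-engine
in-order queues. The structure layout, field positions and names are
assumptions loosely modelled on the i915 work-item format, not the real
GuC firmware or driver code.]

	#include <stddef.h>
	#include <stdint.h>

	#define NUM_ENGINES	5	/* e.g. RCS, BCS, VCS, VCS2, VECS */

	/* One work-queue item as the host might pack it: a header giving
	 * the item type and target engine, the ELSP context descriptor
	 * dword, and the new ring tail for that context.  The bit layout
	 * is assumed for illustration. */
	struct wq_item {
		uint32_t header;	/* item type | target engine id */
		uint32_t context_desc;	/* ELSP context descriptor dword */
		uint32_t ring_tail;	/* tail offset in the context's ring */
	};

	static unsigned int wq_item_engine(const struct wq_item *wqi)
	{
		return (wqi->header >> 8) & 0x7;	/* assumed bitfield */
	}

	/* Per-engine in-order queue inside the (modelled) GuC. */
	struct engine_queue {
		struct wq_item items[64];
		unsigned int tail;
	};

	/* INORDER dispatch: drain the single shared work queue, appending
	 * each item to its target engine's queue.  Each engine queue is
	 * executed strictly in order, but the engines run independently,
	 * so work still executes in parallel across engines. */
	static void guc_dispatch(const struct wq_item *shared_wq, size_t n,
				 struct engine_queue queues[NUM_ENGINES])
	{
		for (size_t i = 0; i < n; i++) {
			struct engine_queue *q =
				&queues[wq_item_engine(&shared_wq[i])];

			q->items[q->tail++ % 64] = shared_wq[i];
		}
	}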

Alex can probably confirm this in the GuC code, but I really think we'd have noticed if execution were serialised across engines. For a start, the validation tests that have one engine busy-spin while waiting for a batch on a different engine to update a buffer wouldn't ever finish.

For other reasons, however, John & I are planning to test a one-client-per-engine configuration for use by the GPU scheduler.
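
[For what it's worth, the one-client-per-engine setup might look roughly
like this on the driver side, replacing the single shared client. The
allocator and types here are stand-ins assumed for illustration, not the
real driver interfaces.]

	#include <errno.h>
	#include <stdlib.h>

	#define NUM_ENGINES	5

	struct intel_guc { int dummy; };	/* stand-in */

	struct i915_guc_client {		/* stand-in */
		unsigned int engine_id;
		unsigned int priority;
	};

	/* Assumed allocator: one doorbell/workqueue client bound to a
	 * single engine at a given priority (stub for illustration). */
	static struct i915_guc_client *
	guc_client_alloc(struct intel_guc *guc, unsigned int engine_id,
			 unsigned int priority)
	{
		struct i915_guc_client *client = malloc(sizeof(*client));

		if (client) {
			client->engine_id = engine_id;
			client->priority = priority;
		}
		return client;
	}

	/* One client per engine, instead of a single shared client. */
	static int guc_clients_create(struct intel_guc *guc,
				      struct i915_guc_client *clients[NUM_ENGINES])
	{
		for (unsigned int id = 0; id < NUM_ENGINES; id++) {
			clients[id] = guc_client_alloc(guc, id, 2);
			if (!clients[id])
				return -ENOMEM;	/* caller unwinds */
		}
		return 0;
	}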

.Dave.
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



