On 06/07/17 11:51, Michel Thierry wrote:
On 7/6/2017 10:59 AM, Chris Wilson wrote:
> If there are no conflicts, then just skip the patch, and really there
> shouldn't be since the writes we care about are ordered and the writes
> we don't, we don't. We will need to move to per-context hwsp in the near
> future which should alleviate these concerns.
It helped me explain where the strange data I was seeing in the HWSP during
driver load came from (that data wasn't really random; I was looking at
the PPHWSP, which later became the HWSP).
Just a comment/fix for GuC submission below, because as it is, this will
break it.
-Michel
I'd argue that since we're doing it wrong we should fix it even if it is
not currently failing, as things could go wrong in unexpected ways
(especially on new HW). However, I'm happy to drop this patch now if
we're going to fix it later with the per-context HWSP.
<snip>
+ */
+ flags |= PIN_MAPPABLE;
+ ret = i915_vma_pin(vma, 0, 4096, flags);
+ if (ret)
+ goto err;
This will break GuC submission. The HWSP will be allocated correctly:
[ ] [drm:intel_engine_init_common [i915]] rcs0 hws offset: 0x00001000
[ ] [drm:intel_engine_init_common [i915]] bcs0 hws offset: 0x00002000
[ ] [drm:intel_engine_init_common [i915]] vcs0 hws offset: 0x00003000
[ ] [drm:intel_engine_init_common [i915]] vcs1 hws offset: 0x00004000
[ ] [drm:intel_engine_init_common [i915]] vecs0 hws offset: 0x00005000
But these offsets are below GUC_WOPCM_TOP (0x80000), and our _beloved_ fw
must be having problems accessing them, because this causes:
[ ] [drm:intel_guc_init_hw [i915]] GuC fw status: fetch SUCCESS, load PENDING
[ ] [drm:guc_ucode_xfer_dma [i915]] DMA status 0x10, GuC status 0x800071ec
[ ] [drm:guc_ucode_xfer_dma [i915]] returning -110
So we need this restriction before pinning it:
@@ -510,6 +510,11 @@ static int init_status_page(struct intel_engine_cs *engine)
* actualy map it).
*/
flags |= PIN_MAPPABLE;
+
+ /* GuC can only access pages above GUC_WOPCM_TOP ¯\_(ツ)_/¯ */
+ if (HAS_GUC(engine->i915) && i915.enable_guc_loading)
+ flags |= PIN_OFFSET_BIAS | GUC_WOPCM_TOP;
+
ret = i915_vma_pin(vma, 0, 4096, flags);
if (ret)
goto err;
With that change, GuC is happy:
[ ] [drm:intel_engine_init_common [i915]] rcs0 hws offset: 0x00084000
[ ] [drm:intel_engine_init_common [i915]] bcs0 hws offset: 0x00089000
[ ] [drm:intel_engine_init_common [i915]] vcs0 hws offset: 0x0008e000
[ ] [drm:intel_engine_init_common [i915]] vcs1 hws offset: 0x00093000
[ ] [drm:intel_engine_init_common [i915]] vecs0 hws offset: 0x00098000
...
[ ] [drm:intel_guc_init_hw [i915]] GuC fw status: fetch SUCCESS, load PENDING
[ ] [drm:guc_ucode_xfer_dma [i915]] DMA status 0x10, GuC status 0x8002f0ec
[ ] [drm:guc_ucode_xfer_dma [i915]] returning 0
[ ] [drm] GuC submission enabled (firmware i915/skl_guc_ver9_33.bin [version 9.33])
After a bit of investigation I've found that the issue is not actually
with the positioning of the HWSP, but with the fact that in
guc_ads_create we use status_page.ggtt_offset to point to the lrca of the
default context instead of using kernel_context->engine[].state. With
that fixed, GuC works fine even with the HWSP below GUC_WOPCM_TOP.
Thanks,
Daniele
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx