On 06/10/17 05:35, Michał Winiarski wrote:
On Thu, Oct 05, 2017 at 05:02:39PM +0000, Daniele Ceraolo Spurio wrote:
On 05/10/17 02:33, Chris Wilson wrote:
Quoting Michał Winiarski (2017-10-05 10:13:40)
We're using first page of kernel context state to share data with GuC,
let's precompute the ggtt offset at GuC initialization time rather than
every time we're using GuC actions.
So LRC_GUCSHR_PN is still 0. Plans for that to change?
This is a requirement from the GuC side. GuC expects each context to have
that extra page before the PPHWSP and uses it to dump some per-lrc info,
part of which is for internal use and part is info for the host (although we
don't need/use it).
On certain events (reset/preempt/suspend etc.) GuC will dump extra info, and
this is done in the page provided in the H2G. I think we use the one from the
default ctx just for simplicity, but it should be possible to use a
different one, possibly not attached to any lrc if needed, though I'm not sure
whether this has ever been tested.
Done that (allocating a separate object for GuC shared data), seems to
work just fine on its own. Except if we try to remove the first page from
contexts. It seems to make GuC upset even though we're not using actions.
Yep, as I mentioned above GuC dumps runtime info about each lrc it
handles in that page (e.g. if an lrc has been submitted via proxy), so
it is probably going to either page-fault or write to the wrong memory
if that page is not allocated.
We could still do that, though without removing the extra page we're just being
more wasteful. But perhaps it's cleaner that way? Having a separately managed
object in GuC code rather than reusing random places in context state? Thoughts?
This is similar to what we used to do by using the PPHWSP of the default
ctx as the global HWSP. Personally I'd prefer to keep it separate as it
feels cleaner and a single extra page shouldn't hurt us that much, but
there was some push-back when I suggested the same for the HWSP.
Daniele
-Michał
-Daniele
Atm, we should be changing one pointer deref for another...
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx