On 26/11/2019 13:16, Tvrtko Ursulin wrote:
On 26/11/2019 11:31, Tvrtko Ursulin wrote:
On 26/11/2019 11:09, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2019-11-26 10:51:22)
You mentioned you did some experiment around context pinning and that
it did not work so well. I don't know what that was, though; I don't
think it was ever posted?
What I am thinking is this: you drop the timer altogether. Instead, in
__execlists_update_reg_state, you look at your gem_context->req_cnt and
implement your logic there.
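
To illustrate the shape of it, a pin-time check could be as crude as
bucketing the count into a subslice width when the register state is
rewritten. A minimal sketch, assuming the req_cnt field from this
series; the threshold and the helper name are made up:

static u8 req_cnt_to_subslices(u32 req_cnt, u8 max_subslices)
{
	/* Made-up cut-off: a couple of queued requests is light load. */
	if (req_cnt <= 2)
		return min_t(u8, 2, max_subslices); /* power-saving config */

	return max_subslices; /* busy: enable all subslices */
}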
I noticed the same non sequitur. Except I would push that either the
entire measurement, and hence the patch series, is bogus (beyond the
patches themselves being trivially broken, tested much?), or that it
really should be done from a timer and also adjust pinned contexts
à la reconfigure_sseu.
Yeah, if doing it at pin time did not show the power benefit, that
would mean looking at req_cnt at pin time does not work, while looking
at it half a timer period ago, on average, does. Which would be very
intriguing. In that case we'd probably want nice graphs overlaying
power, request counts, selected EU configuration, etc.
Another thing to try, if simple bucketing of req_cnt into a load level
at pin time does not work, could be a time-weighted moving average of
the same count, also driven from context pinning.
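
A minimal sketch of such an average, in fixed point with each new
pin-time sample weighted 1/8 (a properly time-weighted variant would
scale the weight by the time since the last update; <linux/average.h>
also has DECLARE_EWMA for this kind of thing):

#define REQ_CNT_EWMA_SHIFT 3 /* each new sample contributes 1/8 */

static inline u32 req_cnt_ewma(u32 avg, u32 sample)
{
	/* avg <- (7 * avg + sample) / 8, without signed underflow */
	return ((avg << REQ_CNT_EWMA_SHIFT) - avg + sample) >>
	       REQ_CNT_EWMA_SHIFT;
}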
Yet another interesting experiment would be to try context busyness
instead of request counts. Crudely, look at the context's GPU busy time
(I have patches for this) per evaluation period and configure
accordingly. In theory this should track better, I think, but it
probably has its own problems. Hard to say without trying and comparing.
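
For example, something along these lines, where the two runtime
samples would come from the busyness tracking mentioned above:

static u32 relative_busyness_pct(u64 prev_runtime_ns, u64 curr_runtime_ns,
				 u64 period_ns)
{
	u64 busy_ns = curr_runtime_ns - prev_runtime_ns;

	/* Sampling jitter can push busy_ns slightly over the period. */
	return min_t(u64, div64_u64(busy_ns * 100, period_ns), 100);
}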
Implementation-wise, a kthread periodically reconfiguring contexts
would work, I think. Like:

    every second:
        for each context:
            query context engine busyness
            calculate relative busyness
            reconfigure sseu
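
Fleshed out a little, and assuming a delayed work item rather than a
bare kthread, it could look something like the sketch below; the
context iterator and the two ctx_* helpers are hypothetical stand-ins
for the real i915 internals and the busyness patches:

#define SSEU_EVAL_PERIOD_MS 1000

struct sseu_tuner {
	struct delayed_work work;
	struct drm_i915_private *i915;
};

static void sseu_tune_fn(struct work_struct *wrk)
{
	struct sseu_tuner *t =
		container_of(to_delayed_work(wrk), struct sseu_tuner, work);
	struct i915_gem_context *ctx;

	for_each_gem_context(t->i915, ctx) { /* hypothetical iterator */
		u64 busy_ns = ctx_runtime_delta_ns(ctx); /* hypothetical */
		u32 pct = div64_u64(busy_ns * 100,
				    (u64)SSEU_EVAL_PERIOD_MS * NSEC_PER_MSEC);

		ctx_set_sseu_for_load(ctx, pct); /* à la reconfigure_sseu */
	}

	schedule_delayed_work(&t->work, msecs_to_jiffies(SSEU_EVAL_PERIOD_MS));
}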
Regards,
Tvrtko