On 14/09/2017 10:58, Chris Wilson wrote:
> An interesting discussion regarding "hybrid interrupt polling" for NVMe
> came to the conclusion that the ideal busyspin before sleeping was half
> of the expected request latency (and better if it was already halfway
> through that request). This suggested that we too should look again at
> our tradeoff between spinning and waiting. Currently, our spin simply
> tries to hide the cost of enabling the interrupt, which is good to avoid
> penalising nop requests (i.e. test throughput) and not much else.
> Studying real world workloads suggests that a spin of up to 500us can
What workloads, and what power/perf testing?
> dramatically boost performance, but the suggestion is that this is not
> from avoiding interrupt latency per se, but from secondary effects of
> sleeping such as allowing the CPU to reduce cstate and context switch away.
Maybe the second part of the sentence would be clearer if it were not
phrased in inverted form. Like longer spin = more performance = less
sleeping = less cstate switching? Or just add "but from _avoiding_
secondary effects of sleeping"?
> To offset those costs from penalising the active client, bump the initial
> spin somewhat to 250us and the secondary spin to 20us to balance the cost
> of another context switch following the interrupt.
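For reference, the shape of the wait being tuned here is roughly the
following. This is a standalone, simplified sketch in plain C, not the
actual i915_wait_request(); the callbacks, helper names and constants are
illustrative stand-ins only.

/*
 * Simplified sketch of a spin-then-sleep wait: spin optimistically before
 * arming the interrupt (the 5us -> 250us value in the patch), then fall
 * back to sleeping, with a short spin after each wakeup (the 2us -> 20us
 * value).
 */
#include <stdbool.h>
#include <time.h>

#define INITIAL_SPIN_US   250   /* hide scheduler latency, keep caches hot */
#define SECONDARY_SPIN_US  20   /* hide the cost of a context switch */

static long elapsed_us(const struct timespec *start, const struct timespec *now)
{
        return (long)(now->tv_sec - start->tv_sec) * 1000000L +
               (now->tv_nsec - start->tv_nsec) / 1000L;
}

/* Busy-poll completed() for at most timeout_us microseconds. */
static bool spin_for(bool (*completed)(void *), void *arg, long timeout_us)
{
        struct timespec start, now;

        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
                if (completed(arg))
                        return true;
                clock_gettime(CLOCK_MONOTONIC, &now);
        } while (elapsed_us(&start, &now) < timeout_us);

        return false;
}

static void wait_for_request(bool (*completed)(void *),
                             void (*sleep_until_irq)(void *), void *arg)
{
        if (spin_for(completed, arg, INITIAL_SPIN_US))
                return;

        do {
                sleep_until_irq(arg);   /* stand-in for the interrupt wait */
        } while (!spin_for(completed, arg, SECONDARY_SPIN_US));
}

The patch is essentially only changing how long those two spin windows are.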
> Suggested-by: Sagar Kamble <sagar.a.kamble@xxxxxxxxx>
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Sagar Kamble <sagar.a.kamble@xxxxxxxxx>
> Cc: Eero Tamminen <eero.t.tamminen@xxxxxxxxx>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> Cc: Ben Widawsky <ben@xxxxxxxxxxxx>
> Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/i915_gem_request.c | 25 +++++++++++++++++++++----
>  1 file changed, 21 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 813a3b546d6e..ccbdaf6a0e4d 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -1155,8 +1155,20 @@ long i915_wait_request(struct drm_i915_gem_request *req,
>          GEM_BUG_ON(!intel_wait_has_seqno(&wait));
>          GEM_BUG_ON(!i915_sw_fence_signaled(&req->submit));
>
> -        /* Optimistic short spin before touching IRQs */
> -        if (i915_spin_request(req, state, 5))
> +        /* Optimistic short spin before touching IRQs.
So it's not short any more. "Optimistic busy spin"?
> +         *
> +         * We use a rather large value here to offset the penalty of switching
> +         * away from the active task. Frequently, the client will wait upon
> +         * an old swapbuffer to throttle itself to remain within a frame of
> +         * the gpu. If the client is running in lockstep with the gpu, then
> +         * it should not be waiting long at all, and a sleep now will incur
> +         * extra scheduler latency in producing the next frame. So we spin
> +         * for longer to try and keep the client running.
> +         *
250us sounds quite long and worrying to me.

In the waiting-on-swapbuffer case, what are the clients actually waiting
for? GPU rendering to finish, or the previous vblank, or something else?

I am wondering whether it would be possible to add a special API just for
this sort of wait, one which internally knows how long the wait is likely
to take and then decides, based on that, whether to spin or sleep. For
example: the next vblank is coming in 5ms, so there is no point in busy
spinning.
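Purely as a sketch of that idea, with a hypothetical helper and threshold
(nothing like this exists in i915 today):

/*
 * Hypothetical: decide whether a wait is worth busy spinning for, based on
 * an estimate of how long it is expected to take.
 */
struct wait_ctx;                        /* opaque stand-in for the request */

/*
 * Hypothetical hint: expected wait in microseconds, e.g. remaining render
 * time for the fence or time until the next vblank; negative if unknown.
 */
extern long expected_wait_us(const struct wait_ctx *ctx);

#define MAX_WORTHWHILE_SPIN_US 250L     /* beyond this, just sleep */

static inline int worth_spinning(const struct wait_ctx *ctx)
{
        long estimate = expected_wait_us(ctx);

        /* e.g. next vblank is ~5ms away: no point busy spinning */
        return estimate >= 0 && estimate <= MAX_WORTHWHILE_SPIN_US;
}

Where the estimate would come from (past request latencies, vblank timing)
is of course the hard part.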
Regards,
Tvrtko
> +         * We need ~5us to enable the irq, ~20us to hide a context switch,
> +         * we use 250us to keep the cache hot.
> +         */
> +        if (i915_spin_request(req, state, 250))
>                  goto complete;
>
>          set_current_state(state);
> @@ -1212,8 +1224,13 @@ long i915_wait_request(struct drm_i915_gem_request *req,
>                      __i915_wait_request_check_and_reset(req))
>                          continue;
>
> -                /* Only spin if we know the GPU is processing this request */
> -                if (i915_spin_request(req, state, 2))
> +                /*
> +                 * A quick spin now we are on the CPU to offset the cost of
> +                 * context switching away (and so spin for roughly the same as
> +                 * the scheduler latency). We only spin if we know the GPU is
> +                 * processing this request, and so likely to finish shortly.
> +                 */
> +                if (i915_spin_request(req, state, 20))
>                          break;
>
>                  if (!intel_wait_check_request(&wait, req)) {