Re: [PATCH 8/8] drm/i915: Expose RPCS (SSEU) configuration to userspace

On 03/05/18 17:04, Joonas Lahtinen wrote:
Quoting Lionel Landwerlin (2018-04-26 13:22:30)
On 26/04/18 11:00, Joonas Lahtinen wrote:
Quoting Lionel Landwerlin (2018-04-25 14:45:21)
From: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>

We want to allow userspace to reconfigure the subslice configuration for
its own use case. To do so, we expose a context parameter to allow
adjustment of the RPCS register stored within the context image (and
currently not accessible via LRI). If the context is adjusted before
first use, the adjustment is for "free"; otherwise, if the context is
active, we flush the context off the GPU (stalling all users) and force
the GPU to save the context to memory where we can modify it, and so
ensure that the register is reloaded on next execution.

The overhead of managing additional EU subslices can be significant,
especially in multi-context workloads. Non-GPGPU contexts should
preferably disable the subslices they are not using, and others should
fine-tune the number to match their workload.
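
For illustration, here is a minimal userspace sketch of setting such a
per-context SSEU parameter. The names used (I915_CONTEXT_PARAM_SSEU,
struct drm_i915_gem_context_param_sseu) and the exact field layout are
assumptions and may not match what this series actually defines:

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <i915_drm.h>

static int set_context_sseu(int drm_fd, uint32_t ctx_id,
			    uint64_t slice_mask, uint64_t subslice_mask)
{
	struct drm_i915_gem_context_param_sseu sseu;
	struct drm_i915_gem_context_param arg;

	memset(&sseu, 0, sizeof(sseu));
	sseu.engine.engine_class = I915_ENGINE_CLASS_RENDER;
	sseu.engine.engine_instance = 0;
	sseu.slice_mask = slice_mask;		/* e.g. 0x1 to run on one slice */
	sseu.subslice_mask = subslice_mask;	/* valid masks are device dependent */
	sseu.min_eus_per_subslice = 8;		/* device dependent as well */
	sseu.max_eus_per_subslice = 8;

	memset(&arg, 0, sizeof(arg));
	arg.ctx_id = ctx_id;
	arg.param = I915_CONTEXT_PARAM_SSEU;
	arg.size = sizeof(sseu);
	arg.value = (uintptr_t)&sseu;

	/* Cheapest before the context's first execbuf; afterwards the kernel
	 * has to idle the context to rewrite the RPCS value in its image. */
	return drmIoctl(drm_fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg);
}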
This hit a dead end last time due to the system-wide policy needed to
avoid two parties fighting over the slice count (and going back and
forth between two slice counts would cancel out the benefit received
from this).

Do we now have a solution for the contention? I don't see code to
negotiate a global value, just a raw setter.

Regards, Joonas
I've tried to come up with some numbers about the cost of the back &
forth (see igt series).

Other than that, I don't think we can expect the kernel to work around
inefficient use of the hardware by userspace.
Well, I'm pretty sure we should not try to make the situation too
miserable for the basic use cases.

If we allow two contexts to fight over the slice count, countering any
perceived benefit, I don't think that'll be a good default.

My recollection of the last round of discussion was that the reasonable
thing to do would be to only disable slices that everyone is willing to
let go of. Then it would become a system-maintainer-level decision to
only run on two slices when they see fit (by configuring the userspace).
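
Something along the lines of this hypothetical sketch (not code from the
series): a slice is only powered down when no active context asks for it,
i.e. the device runs with the union of the per-context requests:

#include <stdint.h>

static uint8_t effective_slice_mask(const uint8_t *requested, unsigned int n,
				    uint8_t device_slice_mask)
{
	uint8_t mask = 0;
	unsigned int i;

	/* A slice stays enabled as long as at least one context wants it. */
	for (i = 0; i < n; i++)
		mask |= requested[i];

	/* Never exceed the hardware; fall back to everything enabled if no
	 * context expressed a preference. */
	return mask ? (uint8_t)(mask & device_slice_mask) : device_slice_mask;
}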

How would you detect that everybody is willing to let go?
We don't appear to have a mechanism to detect that.
Wouldn't that require teaching all userspace drivers?


More advanced tactics would include scheduling work so that we try to
avoid slice count changes, and deducting the switching time from the
execution budget of the app requesting fewer slices (if we had fair
time slicing).

That sounds more workable, although fairly complicated.
Maybe we could tweak the priority based on the slice count and say:
    for the next (1 second / number of slice configurations), bump the priority of the contexts using x slices,
    and rotate every (1 second / number of slice configurations).
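
Roughly something like this hypothetical helper (not from the series):
each slice configuration currently in use gets an equal slot within each
second, and contexts asking for the selected configuration would get the
priority bump during that slot:

#include <stdint.h>

#define NSEC_PER_SEC 1000000000ull

/* Returns which of the num_configs slice configurations is bumped right
 * now; num_configs must be >= 1. */
static unsigned int bumped_config(uint64_t now_ns, unsigned int num_configs)
{
	uint64_t slot = NSEC_PER_SEC / num_configs;

	return (unsigned int)((now_ns % NSEC_PER_SEC) / slot);
}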

Would that resolve the back & forth issue completely though?

Moving a particular context back and forth between different configurations is costly too. You need to drain the context and then pin it before you can edit the image and resubmit.

Thanks for your feedback,

-
Lionel


Regards, Joonas


_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



