Now that we're able to unsubmit requests, let's try to actually preempt.

The series is partially based on "Preemption support for GPU scheduler":
https://lists.freedesktop.org/archives/intel-gfx/2015-December/082817.html
It requires "drm/i915/scheduler: Support user-defined priorities".

It's still not very mature - I'm observing GPU hangs with basic sanity
checks (create low_prio ctx, do work, create high_prio ctx, do work,
expect high_prio to finish before low_prio, repeat) due to incorrect
handling of the preemptive requests sent to GuC.

What I'd like to discuss is the overall approach and the userspace
interactions.

For preemption I've kept the "absolute" threshold approach (we only
consider requests whose priority is higher than a fixed threshold),
though I'm not sure whether it's the right way of doing things, since
userspace applications won't be able to increase their priority without
CAP_SYS_ADMIN. Perhaps it would be better to track the highest priority
of the in-flight requests on each engine and consider preemption
relative to that?

There's also the question of whether we want an "opt-in" interface for
userspace to explicitly state "I'm ready to handle preemption". We know
that we can safely preempt on a batch buffer boundary; unfortunately,
when we try to preempt in the middle of user batches, there are cases
where the default settings are "unsafe" (e.g. they require different
batch buffer programming from userspace), which is why there seems to
be a preference towards an opt-in ABI (either an execbuf flag or a
context param). The preemption granularity is controlled through the
whitelisted GEN8_CS_CHICKEN1 register - maybe we could get away with
programming "safe" default values instead?

Awaiting feedback!
-Michał
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx