Re: [RFC 1/1 v2] drm/i915: Add scheduling priority to per-context parameters

On Thu, Oct 01, 2015 at 04:56:26PM +0100, Dave Gordon wrote:
> Hmmm ... the email seems to have been damaged during composition :(
> I probably shouldn't try to use vi(1) [where '~' means
> toggle-letter-case] over an ssh link [where '~' is an escape, of
> sorts] from another Linux machine inside a PuTTY terminal under
> Windows [where various keys send escape sequences containing '~'] :(
> Anyway, this
> version has the #defines as they actually appeared in the source,
> i.e. starting with UPPERCASE 'I' and not lowercase 'i'!
> 
> This is the next use for the i915 get/set per-context parameters ioctl,
> added ahead of the introduction of the forthcoming GPU scheduler.
> 
> Signed-off-by: Dave Gordon <david.s.gordon@xxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/i915_drv.h         | 28 ++++++++++++++++++++++++++++
>  drivers/gpu/drm/i915/i915_gem_context.c | 17 +++++++++++++++++
>  include/uapi/drm/i915_drm.h             |  1 +
>  3 files changed, 46 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 279e258..104b711 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -850,6 +850,33 @@ struct i915_ctx_hang_stats {
>  	bool banned;
>  };
> 
> +/*
> + * User-settable GFX scheduler priorities are on a scale of 1 (lowest
> + * priority) to 1023 (highest priority). The special value 0 means
> + * "let the system decide my priority automatically"; this is the
> + * default if the user process does not explicitly request a different
> + * priority. Any process may decrease its scheduling priority, but
> + * only a sufficiently-privileged process may increase it. However,
> + * it is always permissible to reset it to "system default", even if
> + it is currently lower than that. Thus, if the system-assigned default
> + * were, say, 256, a process could decrease it to 128, and then to 64.
> + * It could NOT then increase it to 128 again, but COULD request a
> + * priority of 0 -- which would actually reset it to 256, allowing
> + * the process to then request 128 again. (This avoids the issue with
> + nice(2) priorities, namely that non-super-users cannot increase
> + * scheduling priorities of their own processes, even if they were the
> + * ones that decreased the priorities in the first place).
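
A minimal userspace model of the policy in that comment, just to make the
rules concrete. The names ctx_set_priority(), effective_prio() and the
"privileged" flag are only illustrative, standing in for the real ioctl path
and a capable(CAP_SYS_NICE)-style check, and SYSTEM_DEFAULT is just an
example value, not anything in the patch:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define PRIO_DEFAULT    0       /* "let the system decide" */
#define PRIO_MIN        1
#define PRIO_MAX        1023
#define SYSTEM_DEFAULT  256     /* example system-assigned priority */

struct ctx {
        int prio;               /* 0 means "use the system default" */
};

/* Priority the context actually runs at right now. */
static int effective_prio(const struct ctx *c)
{
        return c->prio ? c->prio : SYSTEM_DEFAULT;
}

static int ctx_set_priority(struct ctx *c, int req, bool privileged)
{
        if (req == PRIO_DEFAULT) {      /* resetting is always allowed */
                c->prio = PRIO_DEFAULT;
                return 0;
        }
        if (req < PRIO_MIN || req > PRIO_MAX)
                return -EINVAL;
        if (req > effective_prio(c) && !privileged)
                return -EPERM;          /* raising needs privilege */
        c->prio = req;
        return 0;
}

int main(void)
{
        struct ctx c = { .prio = PRIO_DEFAULT };

        ctx_set_priority(&c, 128, false);         /* 256 -> 128: allowed */
        ctx_set_priority(&c, 64, false);          /* 128 -> 64:  allowed */
        printf("raise to 128: %d\n",
               ctx_set_priority(&c, 128, false)); /* -EPERM              */
        ctx_set_priority(&c, 0, false);           /* reset to default    */
        printf("128 after reset: %d\n",
               ctx_set_priority(&c, 128, false)); /* allowed again       */
        return 0;
}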

I would prefer not to couple it in such a way. Have a continuous range
-1024..1024 (default 0), but only allow a privileged process to request
a positive priority value. So any process can set any negative/zero value
at any time (thereby getting a small boost in priority at times), but only
the select few can completely gazump them by setting a positive value.
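
That check would also be stateless, since permission depends only on the
sign of the request rather than on the previously set value. A sketch,
again as a standalone model rather than driver code, with the bounds and
the "privileged" flag taken from the paragraph above:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define PRIO_MIN        -1024
#define PRIO_MAX        1024    /* default priority is 0 */

static int check_priority(int req, bool privileged)
{
        if (req < PRIO_MIN || req > PRIO_MAX)
                return -EINVAL;
        if (req > 0 && !privileged)
                return -EPERM;  /* only the privileged may go above default */
        return 0;               /* zero or negative: always allowed */
}

int main(void)
{
        printf("unprivileged +10: %d\n", check_priority(10, false));  /* -EPERM */
        printf("unprivileged -10: %d\n", check_priority(-10, false)); /* 0 */
        printf("privileged  +10: %d\n", check_priority(10, true));    /* 0 */
        return 0;
}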

That is a little more intuitive from my perspective. Have you considered
skipping the nice() step entirely and jumping to a setpriority() model?
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx



