On 01/09/17 19:58, Chris Wilson wrote:
Quoting Lionel Landwerlin (2017-09-01 18:12:30)
From: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
We want to allow userspace to reconfigure the subslice configuration for
its own use case. To do so, we expose a context parameter to allow
adjustment of the RPCS register stored within the context image (and
currently not accessible via LRI). If the context is adjusted before
first use, the adjustment is for "free"; otherwise if the context is
active we flush the context off the GPU (stalling all users) and force
the GPU to save the context to memory where we can modify it and so
ensure that the register is reloaded on next execution.
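That flush-and-edit flow looks roughly like the sketch below. This is
illustrative only: i915_gem_switch_to_kernel_context(),
i915_gem_wait_for_idle() and ce->lrc_reg_state do exist in the driver
of this era, but the sketch is a simplification, not the patch's
verbatim body.

	/* Illustrative only: rewrite RPCS in the saved context image.
	 * If the context has already run, we must idle the GPU first so
	 * its image is written back to memory before we patch it. */
	static int sketch_set_rpcs(struct i915_gem_context *ctx, u32 rpcs)
	{
		struct intel_context *ce = &ctx->engine[RCS];
		int ret;

		/* Kick every context off the GPU and wait for idle, so
		 * the target context image is saved to memory. */
		ret = i915_gem_switch_to_kernel_context(ctx->i915);
		if (ret)
			return ret;

		ret = i915_gem_wait_for_idle(ctx->i915, I915_WAIT_INTERRUPTIBLE);
		if (ret)
			return ret;

		/* Patch the saved register value; it is reloaded on the
		 * context's next execution. */
		ce->lrc_reg_state[CTX_R_PWR_CLK_STATE + 1] = rpcs;
		return 0;
	}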
The overhead of managing additional EU subslices can be significant,
especially in multi-context workloads. Non-GPGPU contexts should
preferably disable the subslices they are not using, and others should
fine-tune the number to match their workload.
We expose complete control over the RPCS register, allowing
configuration of slice/subslice, via masks packed into a u64 for
simplicity. For example,
	struct drm_i915_gem_context_param arg;

	memset(&arg, 0, sizeof(arg));
	arg.ctx_id = ctx;
	arg.param = I915_CONTEXT_PARAM_SSEU;
	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &arg) == 0) {
		union drm_i915_gem_context_param_sseu *sseu =
			(union drm_i915_gem_context_param_sseu *)&arg.value;

		sseu->packed.subslice_mask = 0;

		drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &arg);
	}
could be used to disable all subslices where supported.
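The example above dereferences a packed union whose definition lives in
the include/uapi/drm/i915_drm.h hunk of this patch (11 lines in the
diffstat below, not quoted here). It is presumably something along
these lines; only packed.subslice_mask and the u64 value are confirmed
by this mail, the remaining field names are a guess:

	/* Sketch of the uapi union used above; fields other than
	 * subslice_mask and value are guesses at how the masks are
	 * packed into the u64. */
	union drm_i915_gem_context_param_sseu {
		struct {
			__u8 slice_mask;
			__u8 subslice_mask;
			__u8 min_eu_per_subslice;
			__u8 max_eu_per_subslice;
		} packed;
		__u64 value;
	};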
v2: Fix offset of CTX_R_PWR_CLK_STATE in intel_lr_context_set_sseu() (Lionel)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100899
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@xxxxxxxxx>
Cc: Dmitry Rogozhkin <dmitry.v.rogozhkin@xxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Cc: Zhipeng Gong <zhipeng.gong@xxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
---
drivers/gpu/drm/i915/i915_gem_context.c | 11 ++++++
drivers/gpu/drm/i915/intel_lrc.c | 69 +++++++++++++++++++++++++++++++++
drivers/gpu/drm/i915/intel_lrc.h | 2 +
include/uapi/drm/i915_drm.h | 11 ++++++
4 files changed, 93 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 97fcb01d70eb..d399b03f452c 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -1042,6 +1042,9 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
case I915_CONTEXT_PARAM_BANNABLE:
args->value = i915_gem_context_is_bannable(ctx);
break;
+ case I915_CONTEXT_PARAM_SSEU:
+ args->value = intel_lr_context_get_sseu(ctx);
+ break;
default:
ret = -EINVAL;
break;
@@ -1097,6 +1100,14 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
else
i915_gem_context_clear_bannable(ctx);
break;
+ case I915_CONTEXT_PARAM_SSEU:
+ if (args->size)
+ ret = -EINVAL;
+ else if (!i915.enable_execlists)
+ ret = -ENODEV;
+ else
+ ret = intel_lr_context_set_sseu(ctx, args->value);
+ break;
default:
ret = -EINVAL;
break;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 1693fd9d279b..c063b84911d5 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2122,3 +2122,72 @@ void intel_lr_context_resume(struct drm_i915_private *dev_priv)
}
}
}
+
+int intel_lr_context_set_sseu(struct i915_gem_context *ctx, u64 value)
+{
+ union drm_i915_gem_context_param_sseu user = { .value = value };
+ struct drm_i915_private *i915 = ctx->i915;
+ struct intel_context *ce = &ctx->engine[RCS];
+ struct sseu_dev_info sseu = ctx->engine[RCS].sseu;
+ struct intel_engine_cs *engine;
+ enum intel_engine_id id;
+ int ret;
I have a note saying that we want to pass in (engine, class), and so
forgo the packed value and switch to an array of structs.
-Chris
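(For illustration, an "array of structs" keyed by engine might look
like the sketch below; the struct and field names are hypothetical,
not taken from this patch.)

	/* Hypothetical per-engine variant of the SSEU parameter: one
	 * entry per engine, identified by (class, instance), instead of
	 * a single packed u64. */
	struct drm_i915_gem_context_sseu_entry {
		__u16 engine_class;	/* render, copy, video, ... */
		__u16 engine_instance;	/* which engine of that class */
		__u8 slice_mask;
		__u8 subslice_mask;
		__u8 min_eus_per_subslice;
		__u8 max_eus_per_subslice;
	};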
Not sure what you meant by class. I've used the same flags as the
execbuf2 field.
I'm programming only the RCS (since documentation says that's the only
thing that is allowed).
But reading R_PWR_CLK_STATE from the other engines seems to give
incoherent results...
Would you expect that?