Greetings all,

Sorry for the delay - Easter holidays, food coma and all that :-)

On Tue, 18 Apr 2023 at 15:31, Rob Clark <robdclark@xxxxxxxxx> wrote:
>
> On Tue, Apr 18, 2023 at 1:34 AM Daniel Vetter <daniel@xxxxxxxx> wrote:
> >
> > On Tue, Apr 18, 2023 at 09:27:49AM +0100, Tvrtko Ursulin wrote:
> > >
> > > On 17/04/2023 21:12, Rob Clark wrote:
> > > > From: Rob Clark <robdclark@xxxxxxxxxxxx>
> > > >
> > > > Make it work in terms of ctx so that it can be re-used for fdinfo.
> > > >
> > > > Signed-off-by: Rob Clark <robdclark@xxxxxxxxxxxx>
> > > > ---
> > > >  drivers/gpu/drm/msm/adreno/adreno_gpu.c | 4 ++--
> > > >  drivers/gpu/drm/msm/msm_drv.c           | 2 ++
> > > >  drivers/gpu/drm/msm/msm_gpu.c           | 13 ++++++-------
> > > >  drivers/gpu/drm/msm/msm_gpu.h           | 12 ++++++++++--
> > > >  drivers/gpu/drm/msm/msm_submitqueue.c   | 1 +
> > > >  5 files changed, 21 insertions(+), 11 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > > > index bb38e728864d..43c4e1fea83f 100644
> > > > --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > > > +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > > > @@ -412,7 +412,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
> > > >  	/* Ensure string is null terminated: */
> > > >  	str[len] = '\0';
> > > >
> > > > -	mutex_lock(&gpu->lock);
> > > > +	mutex_lock(&ctx->lock);
> > > >
> > > >  	if (param == MSM_PARAM_COMM) {
> > > >  		paramp = &ctx->comm;
> > > > @@ -423,7 +423,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
> > > >  	kfree(*paramp);
> > > >  	*paramp = str;
> > > >
> > > > -	mutex_unlock(&gpu->lock);
> > > > +	mutex_unlock(&ctx->lock);
> > > >
> > > >  	return 0;
> > > >  }
> > > >
> > > > diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> > > > index 3d73b98d6a9c..ca0e89e46e13 100644
> > > > --- a/drivers/gpu/drm/msm/msm_drv.c
> > > > +++ b/drivers/gpu/drm/msm/msm_drv.c
> > > > @@ -581,6 +581,8 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
> > > >  	rwlock_init(&ctx->queuelock);
> > > >  	kref_init(&ctx->ref);
> > > >
> > > > +	ctx->pid = get_pid(task_pid(current));
> > >
> > > Would it simplify things for msm if DRM core had an up to date file->pid as
> > > proposed in
> > > https://patchwork.freedesktop.org/patch/526752/?series=109902&rev=4 ? It
> > > gets updated if ioctl issuer is different than fd opener and this being
> > > context_init here reminded me of it. Maybe you wouldn't have to track the
> > > pid in msm?
>
> The problem is that we also need this for gpu devcore dumps, which
> could happen after the drm_file is closed. The ctx can outlive the
> file.
>

I think we all kept forgetting about that. MSM has had support for ages,
while AMDGPU is only the second driver to land support - just a release ago.

> But the ctx->pid has the same problem as the existing file->pid when
> it comes to Xorg.. hopefully over time that problem just goes away.

Out of curiosity: what do you mean by "when it comes to Xorg" - the
"was_master" handling or something else?

> guess I could do a similar dance to your patch to update the pid
> whenever (for ex) a submitqueue is created.
>
> > Can we go one step further and let the drm fdinfo stuff print these new
> > additions? Consistency across drivers and all that.
>
> Hmm, I guess I could _also_ store the overridden comm/cmdline in
> drm_file. I still need to track it in ctx (msm_file_private) because
> I could need it after the file is closed.
>
> Maybe it could be useful to have a gl extension to let the app set a
> name on the context so that this is useful beyond native-ctx (ie.
> maybe it would be nice to see that "chrome: lwn.net" is using less gpu
> memory than "chrome: phoronix.com", etc)
>

/me awaits the series hitting the respective websites ;-)

But seriously - the series from Tvrtko (thanks for the link, will check
in a moment) makes sense. Although given the lifespan issue mentioned
above, I don't think it's applicable here.
So if it were me, I would consider the two orthogonal for the
short/mid term.

Fwiw, this and patch 1/3 are:
Reviewed-by: Emil Velikov <emil.l.velikov@xxxxxxxxx>

HTH
-Emil