Quoting Tvrtko Ursulin (2019-12-17 17:21:28)
>
> On 16/12/2019 12:53, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-12-16 12:07:01)
> >> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> >> index 0781b6326b8c..9fcbcb6d6f76 100644
> >> --- a/drivers/gpu/drm/i915/i915_drv.h
> >> +++ b/drivers/gpu/drm/i915/i915_drv.h
> >> @@ -224,6 +224,20 @@ struct drm_i915_file_private {
> >>         /** ban_score: Accumulated score of all ctx bans and fast hangs. */
> >>         atomic_t ban_score;
> >>         unsigned long hang_timestamp;
> >> +
> >> +       struct i915_drm_client {
> >> +               unsigned int id;
> >> +
> >> +               struct pid *pid;
> >> +               char *name;
> >
> > Hmm. Should we scrap i915_gem_context.pid and just use the client.pid?
>
> Or maybe leave it as is, so I don't have to worry about ctx vs client
> lifetime. In other words, places where we access ctx->pid and the client
> is maybe long gone. I don't want to ref count clients, or maybe I do..
> hmm.. keeping the GPU load of a client which exited and left work running
> visible?

Yeah. If we don't, and all the GPU time is being hogged by zombies, users
of the interface will not be impressed that they can't identify those.

Next up, kill(client_id, SIGKILL).
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx