Hi,
On 19/01/2018 17:10, Tvrtko Ursulin wrote:
Hi,
On 19/01/2018 16:45, Peter Zijlstra wrote:
On Thu, Jan 18, 2018 at 06:40:07PM +0000, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
For situations where sysadmins might want to allow different levels of
access control for different PMUs, we start creating per-PMU
perf_event_paranoid controls in sysfs.
You've completely and utterly failed to explain why.
On an abstract level, if there is a desire to relax the security setting
for one particular PMU provider, it is better to be able to do that for
just the one PMU rather than for the whole system.
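Just to make that concrete, here is a minimal sketch of what adjusting such a
per-PMU knob could look like, assuming the control ends up as a writable file
under the PMU's event_source sysfs directory (the i915 path below is only an
illustration of the idea, not necessarily what the patch exposes):

	/* Illustration only: relax the paranoid level for a single PMU while
	 * leaving the system-wide /proc/sys/kernel/perf_event_paranoid alone.
	 * The per-PMU path is an assumption about where such a knob could live.
	 */
	#include <stdio.h>

	int main(void)
	{
		const char *knob =
			"/sys/bus/event_source/devices/i915/perf_event_paranoid";
		FILE *f = fopen(knob, "w");

		if (!f) {
			perror("fopen");
			return 1;
		}

		fprintf(f, "%d\n", 0); /* allow unprivileged access for this PMU only */

		return fclose(f) ? 1 : 0;
	}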
On a more concrete level, we have customers who want to look at certain
i915 metrics, most probably engine utilization or queue depth, in order
to make load-balancing decisions. (The two would be roughly analogous to
CPU usage and load.)
This data needs to be available to their userspace dynamically and
would be used to pick the best GPU engine (mostly analogous to a CPU
core) to run a particular packet of work.
Running their product as root is not an option, and while one
possibility would be to write a proxy daemon to service unprivileged
queries, that is a significant complication and also introduces a
time-shift problem in the PMU data.
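For reference, the sort of direct, unprivileged query they would like to issue
is roughly the sketch below: resolve the dynamic PMU type id from sysfs, open a
counter with perf_event_open() and read it periodically. The config value for a
concrete metric (e.g. engine busyness) would come from the i915 uAPI headers
and is only a placeholder here.

	/* Rough sketch of querying an i915 PMU counter via perf_event_open().
	 * The PMU type id comes from sysfs; attr.config is a placeholder for a
	 * metric-specific value taken from the i915 uAPI.
	 */
	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	static int read_pmu_type(void)
	{
		FILE *f = fopen("/sys/bus/event_source/devices/i915/type", "r");
		int type = -1;

		if (f) {
			if (fscanf(f, "%d", &type) != 1)
				type = -1;
			fclose(f);
		}
		return type;
	}

	int main(void)
	{
		struct perf_event_attr attr;
		uint64_t count;
		int type = read_pmu_type();
		int fd;

		if (type < 0)
			return 1;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = type;
		attr.config = 0; /* placeholder: metric-specific config from i915 uAPI */

		/* System-wide (pid == -1, cpu 0) open on an uncore-style PMU;
		 * this is exactly the case gated by perf_event_paranoid today,
		 * so it fails for unprivileged users unless the setting is relaxed.
		 */
		fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
		if (fd < 0) {
			perror("perf_event_open");
			return 1;
		}

		if (read(fd, &count, sizeof(count)) == sizeof(count))
			printf("counter: %llu\n", (unsigned long long)count);

		close(fd);
		return 0;
	}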
So my thinking was that a per-PMU paranoid control should not be a
problematic concept in general. My gut feeling was also that not all
PMU providers expose the same class of data, security-wise, which was
another reason I thought per-PMU controls would be fine.
There is one more way of thinking about it, and that is that the access
control could even be extended to be per-event, and not just per-PMU.
That would allow registered PMUs to let the core know which counters are
potentially security sensitive, and which are not.
I sent another RFC along those lines some time ago, but afterwards I
changed my mind and thought the approach in this patch should be less
controversial, since it keeps all control fully in the perf core and in
the hands of sysadmins.
Any thoughts on this one? Is the approach acceptable?
Regards,
Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx