On Mon, Sep 02, 2019 at 07:44:40AM +0100, Patrick Bellasi wrote:
> On Fri, Aug 30, 2019 at 09:48:34 +0000, Peter Zijlstra wrote...
> > On Thu, Aug 22, 2019 at 02:28:10PM +0100, Patrick Bellasi wrote:
> >> +	rq = task_rq_lock(p, &rf);
> >
> > Since modifying cgroup parameters is priv only, this should be OK I
> > suppose. Priv can already DoS the system anyway.
>
> Are you referring to the possibility of DoS-ing the scheduler by
> repeatedly writing cgroup attributes?

Yep.

> Because, in that case, I think cgroup attributes can also be written
> by non-privileged users. It all depends on how they are mounted and
> how permissions are set, doesn't it?
>
> Anyway, I'm not sure we can fix that here... and in principle we
> could get the same DoS by setting CPU affinities, which is
> user-exposed. Couldn't we?

Only for a single task; by using the cgroup interface we get that
in-kernel iteration over tasks.

The thing I worry about is bouncing rq->lock around the system; but
yeah, I suppose a normal user could achieve something similar with
enough tasks.

> >> +	/*
> >> +	 * Setting the clamp bucket is serialized by task_rq_lock().
> >> +	 * If the task is not yet RUNNABLE and its task_struct is not
> >> +	 * affecting a valid clamp bucket, the next time it's enqueued,
> >> +	 * it will already see the updated clamp bucket value.
> >> +	 */
> >> +	if (!p->uclamp[clamp_id].active)
> >> +		goto done;
> >> +
> >> +	uclamp_rq_dec_id(rq, p, clamp_id);
> >> +	uclamp_rq_inc_id(rq, p, clamp_id);
> >> +
> >> +done:
> >
> > I'm thinking that:
> >
> >	if (p->uclamp[clamp_id].active) {
> >		uclamp_rq_dec_id(rq, p, clamp_id);
> >		uclamp_rq_inc_id(rq, p, clamp_id);
> >	}
> >
> > was too obvious? ;-)
>
> Yep, right... I think there was some more code in previous versions,
> but I forgot to get rid of that "goto" pattern after some changes.

OK, already fixed that.
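
For reference, the resulting update helper would then read roughly as
below. This is a sketch only: the function name uclamp_update_active(),
its signature, and the task_rq_unlock() pairing are assumed from the
surrounding series and do not appear in the quoted hunks.

	/*
	 * Sketch of the simplified clamp-bucket update path. The
	 * name/signature is assumed; only the locking and the
	 * active check come from the quoted hunks.
	 */
	static void uclamp_update_active(struct task_struct *p,
					 unsigned int clamp_id)
	{
		struct rq_flags rf;
		struct rq *rq;

		/* Clamp-bucket updates are serialized by task_rq_lock(). */
		rq = task_rq_lock(p, &rf);

		/*
		 * A task that is not RUNNABLE holds no reference on any
		 * rq clamp bucket; the next enqueue will pick up the new
		 * clamp value, so refcounting is only needed while the
		 * bucket is active.
		 */
		if (p->uclamp[clamp_id].active) {
			uclamp_rq_dec_id(rq, p, clamp_id);
			uclamp_rq_inc_id(rq, p, clamp_id);
		}

		task_rq_unlock(rq, p, &rf);
	}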