Hi Qais,

Sorry for the late reply.

On Fri, 5 Jun 2020 at 12:45, Qais Yousef <qais.yousef@xxxxxxx> wrote:
>
> On 06/04/20 14:14, Vincent Guittot wrote:
> > I have tried your patch and I don't see any difference compared to
> > previous tests. Let me give you more details of my setup:
> > I create 3 levels of cgroups and usually run the tests at the 4 levels
> > (which include root). The results above are for the root level.
> >
> > But I see a difference at the other levels:
> >
> >                          root           level 1        level 2        level 3
> >
> > /w patch uclamp disable  50097          46615          43806          41078
> > tip uclamp enable        48706(-2.78%)  45583(-2.21%)  42851(-2.18%)  40313(-1.86%)
> > /w patch uclamp enable   48882(-2.43%)  45774(-1.80%)  43108(-1.59%)  40667(-1.00%)
> >
> > Whereas tip with uclamp stays around 2% behind tip without uclamp, the
> > diff of uclamp with your patch tends to decrease as we increase the
> > number of levels.
>
> Thanks for the extra info. Let me try this.
>
> If you can run perf and verify that you see activate/deactivate_task
> showing up as overhead I'd appreciate it. Just to confirm that indeed
> what we're seeing here are symptoms of the same problem Mel is seeing.

I see a call to activate_task() for each wakeup of the sched-pipe thread.

> > Beside this, it's also interesting to notice the ~6% perf impact
> > between each level for the same image.
>
> Interesting indeed.
>
> Thanks
>
> --
> Qais Yousef
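
FWIW, a minimal sketch of this kind of multi-level cgroup + sched-pipe run
(the controller layout, paths and benchmark flags below are illustrative
assumptions, not the exact scripts behind the numbers above):

  # assumption: cgroup v1 cpu controller mounted at /sys/fs/cgroup/cpu;
  # create 3 nested levels below root
  mkdir -p /sys/fs/cgroup/cpu/l1/l2/l3

  # attach the current shell to the level under test (here level 3) so the
  # benchmark threads it spawns inherit that cgroup, then run sched-pipe
  echo $$ > /sys/fs/cgroup/cpu/l1/l2/l3/cgroup.procs
  perf bench sched pipe -T -l 100000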
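
And for checking whether the enqueue/dequeue paths show up as overhead,
something along these lines (again only a sketch, adjust events/flags as
needed):

  # record with call graphs while the benchmark runs, then look for
  # activate_task()/deactivate_task() in the report
  perf record -g -- perf bench sched pipe -T -l 100000
  perf report --stdio --no-children | grep -E 'activate_task|deactivate_task'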