Peter Zijlstra wrote:
> When re-computing the shares for each task group's cpu representation we
> need the ratio of weight on each cpu vs the total weight of the sched
> domain.
>
> Since load-balancing is loosely (read not) synchronized, the weight of
> individual cpus can change between doing the sum and calculating the
> ratio.
>
> The previous patch dealt with only one of the race scenarios, this patch
> side steps them all by saving a snapshot of all the individual cpu
> weights, thereby always working on a consistent set.
>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
> ---
>  kernel/sched.c |   50 +++++++++++++++++++++++++++++---------------------
>  1 files changed, 29 insertions(+), 21 deletions(-)
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 0e76b17..4591054 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -1515,30 +1515,29 @@ static unsigned long cpu_avg_load_per_task(int cpu)
>
>  #ifdef CONFIG_FAIR_GROUP_SCHED
>
> +struct update_shares_data {
> +	unsigned long rq_weight[NR_CPUS];
> +};
> +
> +static DEFINE_PER_CPU(struct update_shares_data, update_shares_data);

Ouch... that's quite large IMHO: up to 4096 * 8 = 32768 bytes per cpu.

Now that we have nice dynamic per-cpu allocations, we could use one here,
and use nr_cpu_ids instead of NR_CPUS as the array size?
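A minimal sketch of what that could look like, assuming __alloc_percpu() is
usable by the time the scheduler is initialized; the helper name, call site
and snapshot loop below are illustrative, not taken from the patch:

#include <linux/percpu.h>

/*
 * Dynamically allocated per-cpu snapshot buffer: nr_cpu_ids entries per cpu
 * instead of a compile-time NR_CPUS-sized array.
 */
static __read_mostly unsigned long *update_shares_data;

/* Hypothetical init helper, called once from sched_init(). */
static void __init init_update_shares_data(void)
{
	update_shares_data = __alloc_percpu(nr_cpu_ids * sizeof(unsigned long),
					    __alignof__(unsigned long));
	BUG_ON(!update_shares_data);
}

/*
 * Example use: snapshot the group's per-cpu weights into this cpu's copy.
 * The caller is assumed to run with preemption disabled, as the shares
 * update path does.
 */
static void snapshot_rq_weights(struct task_group *tg)
{
	unsigned long *rq_weight = per_cpu_ptr(update_shares_data,
					       smp_processor_id());
	int i;

	for_each_possible_cpu(i)
		rq_weight[i] = tg->cfs_rq[i]->load.weight;
}

That would trade the 32 KB static footprint per cpu on NR_CPUS=4096 kernels
for an allocation sized by what the machine can actually have, at the cost
of one extra pointer dereference in the update path.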