Hi Ionela,
On 8/17/22 16:21, Ionela Voinescu wrote:
Hi Pierre,
On Friday 12 Aug 2022 at 12:16:19 (+0200), Pierre Gondois wrote:
From: Pierre Gondois <Pierre.Gondois@xxxxxxx>
The Energy Aware Scheduler (EAS) estimates the energy consumption
of placing a task on different CPUs. The goal is to minimize this
energy consumption. The cost of estimating the energy of the different
task placements grows with the size of the platform. To avoid slowing
down the wake-up path, EAS is only enabled if this complexity is low
enough.
The current complexity limit was set in:
commit b68a4c0dba3b1 ("sched/topology: Disable EAS on inappropriate
platforms").
based on the first implementation of EAS, which re-computed
the power of the whole platform for each task placement scenario, cf:
commit 390031e4c309 ("sched/fair: Introduce an energy estimation helper
function").
The complexity of EAS has since been reduced in:
commit eb92692b2544d ("sched/fair: Speed-up energy-aware wake-ups")
and the find_energy_efficient_cpu() (feec) algorithm was updated in:
commit 3e8c6c9aac42 ("sched/fair: Remove task_util from effective
utilization in feec()")
find_energy_efficient_cpu() (feec) now does:
feec()
\_ for_each_pd(pd) [0]
// get max_spare_cap_cpu and compute_prev_delta
\_ for_each_cpu(pd) [1]
\_ get_pd_busy_time(pd) [2]
\_ for_each_cpu(pd)
// evaluate pd energy without the task
\_ get_pd_max_util(pd, -1) [3.0]
\_ for_each_cpu(pd)
\_ compute_energy(pd, -1)
\_ for_each_ps(pd)
// evaluate pd energy with the task on prev_cpu
\_ get_pd_max_util(pd, prev_cpu) [3.1]
\_ for_each_cpu(pd)
\_ compute_energy(pd, prev_cpu)
\_ for_each_ps(pd)
// evaluate pd energy with the task on max_spare_cap_cpu
\_ get_pd_max_util(pd, max_spare_cap_cpu) [3.2]
\_ for_each_cpu(pd)
\_ compute_energy(pd, max_spare_cap_cpu)
\_ for_each_ps(pd)
[3.1] happens only once since prev_cpu is unique. To obtain an upper
bound on the complexity, [3.1] is nevertheless counted for all pds.
So, with the same definitions for nr_pd, nr_cpus and nr_ps,
the complexity is:
nr_pd * (2 * [nr_cpus in pd] + 3 * ([nr_cpus in pd] + [nr_ps in pd]))
[0] * ( [1] + [2] + [3.0] + [3.1] + [3.2] )
= 5 * nr_cpus + 3 * nr_ps
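As a quick sanity check of this bound, here is a small standalone
snippet plugging in made-up platform numbers (3 pds with 4 + 4 + 8
CPUs and 5 performance states each; these are not numbers taken from
the patch):

	/* Hypothetical platform, for illustration only. */
	#include <stdio.h>

	int main(void)
	{
		int nr_cpus = 4 + 4 + 8;	/* total CPUs over all pds */
		int nr_ps = 3 * 5;		/* total perf states over all pds */

		/* Upper bound derived above: 5 * nr_cpus + 3 * nr_ps */
		printf("feec() complexity bound: %d\n", 5 * nr_cpus + 3 * nr_ps);
		return 0;
	}

i.e. 5 * 16 + 3 * 15 = 125 for such a platform.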
I just want to draw your attention to [1] and the fact that the
structure of the function has changed. Your calculations largely remain
the same: 3 calls to compute_energy(), which in turn now calls
eenv_pd_max_util() with operations for each CPU, plus some scattered
calls to eenv_pd_busy_time(), all for each pd.
Yes indeed, there is:
s/get_pd_max_util/eenv_pd_max_util
and also, as you spotted, the following pattern:
\_ eenv_pd_max_util(pd, dst_cpu)
\_ for_each_cpu(pd)
\_ compute_energy(pd, dst_cpu)
\_ for_each_ps(pd)
should actually be:
\_ compute_energy(pd, dst_cpu)
\_ eenv_pd_max_util(pd, dst_cpu)
\_ for_each_cpu(pd)
\_ for_each_ps(pd)
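To make the corrected nesting explicit, here is a toy, standalone
model of that structure (it reuses the function names from the thread,
but the signatures, types and numbers are made up; it is not the
actual kernel code): compute_energy() first calls eenv_pd_max_util(),
which walks the CPUs of the pd, and only then walks the performance
states:

	#include <stdio.h>

	#define NR_CPUS_IN_PD	4
	#define NR_PS_IN_PD	3

	/* Made-up per-CPU utilization and perf-state tables. */
	static const unsigned long cpu_util[NR_CPUS_IN_PD] = { 100, 250, 180, 60 };
	static const unsigned long ps_cap[NR_PS_IN_PD]  = { 256, 512, 1024 };
	static const unsigned long ps_cost[NR_PS_IN_PD] = {  50, 150,  600 };

	/*
	 * The for_each_cpu(pd) level: highest utilization in the pd,
	 * with the task accounted on dst_cpu (dst_cpu < 0: without it).
	 */
	static unsigned long eenv_pd_max_util(int dst_cpu, unsigned long task_util)
	{
		unsigned long max_util = 0;
		int cpu;

		for (cpu = 0; cpu < NR_CPUS_IN_PD; cpu++) {
			unsigned long util = cpu_util[cpu];

			if (cpu == dst_cpu)
				util += task_util;
			if (util > max_util)
				max_util = util;
		}
		return max_util;
	}

	/* The for_each_ps(pd) level runs here, after eenv_pd_max_util(). */
	static unsigned long compute_energy(int dst_cpu, unsigned long task_util)
	{
		unsigned long max_util = eenv_pd_max_util(dst_cpu, task_util);
		int ps;

		for (ps = 0; ps < NR_PS_IN_PD; ps++) {
			if (ps_cap[ps] >= max_util)
				return ps_cost[ps];
		}
		return ps_cost[NR_PS_IN_PD - 1];
	}

	int main(void)
	{
		printf("energy without the task:   %lu\n", compute_energy(-1, 0));
		printf("energy with task on cpu 1: %lu\n", compute_energy(1, 300));
		return 0;
	}

So each compute_energy() call does one walk over the pd's CPUs (inside
eenv_pd_max_util()) plus one walk over its perf states, which matches
the corrected pattern above.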
Thanks,
Pierre
[1]
https://lore.kernel.org/lkml/20220621090414.433602-7-vdonnefort@xxxxxxxxxx/
Thanks,
Ionela.