On Tue, Mar 08, 2022 at 05:03:07PM +0100, Dietmar Eggemann wrote:
> On 08/03/2022 12:04, Vincent Guittot wrote:
> > On Tue, 8 Mar 2022 at 11:30, Will Deacon <will@xxxxxxxxxx> wrote:
>
> [...]
>
> >>> ---
> >>> v1: Drop MC level if coregroup weight == 1
> >>> v2: New sd topo in arch/arm64/kernel/smp.c
> >>> v3: No new topo, extend core_mask to cluster_siblings
> >>>
> >>>  drivers/base/arch_topology.c | 8 ++++++++
> >>>  1 file changed, 8 insertions(+)
> >>>
> >>> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> >>> index 976154140f0b..a96f45db928b 100644
> >>> --- a/drivers/base/arch_topology.c
> >>> +++ b/drivers/base/arch_topology.c
> >>> @@ -628,6 +628,14 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
> >>>  		core_mask = &cpu_topology[cpu].llc_sibling;
> >>>  	}
> >>>
> >>> +	/*
> >>> +	 * For systems with no shared cpu-side LLC but with clusters defined,
> >>> +	 * extend core_mask to cluster_siblings. The sched domain builder will
> >>> +	 * then remove MC as redundant with CLS if SCHED_CLUSTER is enabled.
>
> IMHO, if core_mask weight is 1, MC will be removed/degenerated anyway.
>
> This is what I get on my Ampere Altra (I guess I don't have the ACPI
> changes which would let to a CLS sched domain):
>
> # cat /sys/kernel/debug/sched/domains/cpu0/domain*/name
> DIE
> NUMA
> root@oss-altra01:~# zcat /proc/config.gz | grep SCHED_CLUSTER
> CONFIG_SCHED_CLUSTER=y

I'd like to follow up on this. Would you share your dmidecode BIOS
Information section? Which kernel version?

> >>> +	 */
> >>> +	if (cpumask_subset(core_mask, &cpu_topology[cpu].cluster_sibling))
> >>> +		core_mask = &cpu_topology[cpu].cluster_sibling;
> >>> +
> >>
> >> Sudeep, Vincent, are you happy with this now?
> >
> > I would not say that I'm happy because this solution skews the core
> > cpu mask in order to abuse the scheduler so that it will remove a
> > wrong but useless level when it will build its domains.
> > But this works so as long as the maintainer are happy, I'm fine

I did explore the other options, and they added considerably more
complexity without much benefit in my view. I prefer this option, which
maintains the cpu_topology as described by the platform and maps it
into something that suits the current scheduler abstraction. I agree
there is more work to be done here and intend to continue with it.

> I do not have any better idea than this tweak here either in case the
> platform can't provide a cleaner setup.

I'd argue the platform is describing itself accurately in ACPI PPTT
terms. The topology just doesn't fit nicely within the kernel
abstractions today. This is an area where I hope to continue to improve
things going forward.

> Maybe the following is easier to read but then we use
> '&cpu_topology[cpu].llc_sibling' in cpu_coregroup_mask() already ...
>
> @@ -617,6 +617,7 @@ EXPORT_SYMBOL_GPL(cpu_topology);
>  const struct cpumask *cpu_coregroup_mask(int cpu)
>  {
>  	const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));
> +	const cpumask_t *cluster_mask = cpu_clustergroup_mask(cpu);
>
>  	/* Find the smaller of NUMA, core or LLC siblings */
>  	if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask)) {
> @@ -628,6 +629,9 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
>  		core_mask = &cpu_topology[cpu].llc_sibling;
>  	}
>
> +	if (cpumask_subset(core_mask, cluster_mask))
> +		core_mask = cluster_mask;
> +

Either works for me. I felt the version I sent was parallel to the
existing implementation, but have no preference either way.

>  	return core_mask;
> }
>
> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
>

Thanks for the review, Dietmar.

--
Darren Hart
Ampere Computing / OS and Kernel