> -----Original Message-----
> From: Peter Zijlstra [mailto:peterz@xxxxxxxxxxxxx]
> Sent: Friday, October 22, 2021 2:23 AM
> To: Barry Song <21cnbao@xxxxxxxxx>
> Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>; LKML
> <linux-kernel@xxxxxxxxxxxxxxx>; linux-tip-commits@xxxxxxxxxxxxxxx; Tim Chen
> <tim.c.chen@xxxxxxxxxxxxxxx>; Song Bao Hua (Barry Song)
> <song.bao.hua@xxxxxxxxxxxxx>; x86 <x86@xxxxxxxxxx>
> Subject: Re: [tip: sched/core] sched: Add cluster scheduler level for x86
>
> On Thu, Oct 21, 2021 at 11:32:36PM +1300, Barry Song wrote:
> > On Thu, Oct 21, 2021 at 9:43 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > >
> > > On Wed, Oct 20, 2021 at 10:36:19PM +0200, Peter Zijlstra wrote:
> > >
> > > > OK, I think I see what's happening.
> > > >
> > > > AFAICT cacheinfo.c does *NOT* set l2c_id on AMD/Hygon hardware; this
> > > > means it's set to BAD_APICID.
> > > >
> > > > This then results in match_l2c() never matching. And as a direct
> > > > consequence set_cpu_sibling_map() will generate cpu_l2c_shared_mask with
> > > > just the one CPU set.
> > > >
> > > > And we have the above result, and things come unstuck if we assume:
> > > >   SMT <= L2 <= LLC
> > > >
> > > > Now, the big question, how to fix this... Does AMD have means of
> > > > actually setting l2c_id, or should we fall back to using match_smt() for
> > > > l2c_id == BAD_APICID?
> > >
> > > The latter looks something like the below and ought to make EPYC at
> > > least function as it did before.
> > >
> > > ---
> > >  arch/x86/kernel/smpboot.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> > > index 849159797101..c2671b2333d1 100644
> > > --- a/arch/x86/kernel/smpboot.c
> > > +++ b/arch/x86/kernel/smpboot.c
> > > @@ -472,7 +472,7 @@ static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
> > >
> > >  	/* Do not match if we do not have a valid APICID for cpu: */
> > >  	if (per_cpu(cpu_l2c_id, cpu1) == BAD_APICID)
> > > -		return false;
> > > +		return match_smt(c, o); /* assume at least SMT shares L2 */
> >
> > Rather than making a fake cluster_cpus and cluster_cpus_list which
> > will be exposed to userspace through
> > /sys/devices/system/cpu/cpuX/topology, could we just fix the
> > sched_domain mask by the below?
>
> I don't think it's fake; SMT fundamentally has to share all cache
> levels. And having the sched domains differ in setup from the reported
> (nonsensical) topology also isn't appealing.

Fair enough. I was actually inspired by cpu_coregroup_mask(), which is a
combination of a couple of cpumask sets:

drivers/base/arch_topology.c

const struct cpumask *cpu_coregroup_mask(int cpu)
{
	const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));

	/* Find the smaller of NUMA, core or LLC siblings */
	if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask)) {
		/* not numa in package, lets use the package siblings */
		core_mask = &cpu_topology[cpu].core_sibling;
	}

	if (cpu_topology[cpu].llc_id != -1) {
		if (cpumask_subset(&cpu_topology[cpu].llc_sibling, core_mask))
			core_mask = &cpu_topology[cpu].llc_sibling;
	}

	return core_mask;
}

Thanks
Barry