Re: [PATCH v3 2/3] sched/topology: Rework CPU capacity asymmetry detection

Hi Peter,

Thanks for having a look at that.

On Thu, May 13, 2021 at 01:05:11PM +0200, Peter Zijlstra wrote:
> On Mon, May 10, 2021 at 05:30:57PM +0100, Beata Michalska wrote:
> > Currently the CPU capacity asymmetry detection, performed through
> > asym_cpu_capacity_level, tries to identify the lowest topology level
> > at which the highest CPU capacity is being observed, not necessarily
> > finding the level at which all possible capacity values are visible
> > to all CPUs, which might be a bit problematic for some possible/valid
> > asymmetric topologies, e.g.:
> > 
> > DIE      [                                ]
> > MC       [                       ][       ]
> > 
> > CPU       [0] [1] [2] [3] [4] [5]  [6] [7]
> > Capacity  |.....| |.....| |.....|  |.....|
> >              L       M       B        B
> > 
> > Where:
> >  arch_scale_cpu_capacity(L) = 512
> >  arch_scale_cpu_capacity(M) = 871
> >  arch_scale_cpu_capacity(B) = 1024
> 
> Low, High
> Small, Big
> 
> But you appear to have picked: Low and Big, which just doesn't make any
> sense. (Worse, L can also be Large :-)
>
(L)ittle, (M)edium, (B)ig
I can re-arrange the abbreviations used here.

> > In this particular case, the asymmetric topology level will point
> > at MC, as all possible CPU masks for that level do cover the CPU
> > with the highest capacity. It will work just fine for the first
> > cluster, not so much for the second one though (consider
> > find_energy_efficient_cpu, which might end up attempting an
> > energy-aware wake-up for a domain that does not see any asymmetry
> > at all).
> > 
> > Rework the way the capacity asymmetry levels are being detected,
> > to point to the lowest topology level (for a given CPU) where the
> > full range of available CPU capacities is visible to all CPUs
> > within a given domain. As a result, the per-cpu sd_asym_cpucapacity
> > might differ across domains. This will have an impact on EAS
> > wake-up placement in that it might see a different range of CPUs
> > being considered, depending on the given current and target CPUs.
> > 
> > Additionally, levels where any degree of asymmetry (not
> > necessarily the full range) is detected will be identified as well.
> > The selected asymmetric topology level will be denoted by the
> > SD_ASYM_CPUCAPACITY_FULL sched domain flag, whereas the 'sub-levels'
> > will receive the already used SD_ASYM_CPUCAPACITY flag. This allows
> > maintaining the current behaviour for asymmetric topologies, with
> > misfit migration operating correctly on lower levels, if applicable,
> > as any asymmetry is enough to trigger the misfit migration. The
> > logic there relies on the SD_ASYM_CPUCAPACITY flag and does not
> > relate to the full asymmetry level denoted by the
> > sd_asym_cpucapacity pointer.
> 
> My head hurts trying to untangle this.
> 
> Would it not be much easier to have a cpumask for each type and then
> have something like:
> 
> 
> enum asym_type {
> 	full,
> 	asym,
> 	none,
> };
> 
> static struct cpumask *asym_masks[];
> static int asym_nr;
> 
> enum asym_type classify_asym_domain(struct sched_domain *sd)
> {
> 	int i, n = 0;
> 
> 	for (i = 0; i < asym_nr; i++) {
> 		if (cpumask_intersects(sched_domain_span(sd), asym_masks[i]))
> 			n++;
> 	}
> 
> 	WARN_ON_ONCE(!n);
> 
> 	if (n == 1)
> 		return none;
> 	if (n == asym_nr)
> 		return full;
> 	return asym;
> }
> 
> Or am I missing the point?
Your suggestion should work - thanks for that!
Guess that's just an example of overthinking.
Will change the implementation and see how that plays, but the idea looks good.
Thanks again.
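
For reference, below is a rough, self-contained userspace model of that
classification idea, with plain bitmasks standing in for struct cpumask
and the capacity groups taken from the example topology above. All names
in it (asym_cap_masks, classify_asym_span, ...) are made up purely for
illustration and are not meant as the actual implementation:

/*
 * Userspace sketch only: plain unsigned long bitmasks model the CPU
 * masks, one per distinct capacity value from the example topology
 * (L = 512 on CPUs 0-1, M = 871 on CPUs 2-3, B = 1024 on CPUs 4-7).
 */
#include <stdio.h>

enum asym_type { ASYM_NONE, ASYM_PARTIAL, ASYM_FULL };

static const unsigned long asym_cap_masks[] = {
	0x03,	/* CPUs 0-1, capacity 512  (L) */
	0x0c,	/* CPUs 2-3, capacity 871  (M) */
	0xf0,	/* CPUs 4-7, capacity 1024 (B) */
};
static const int asym_cap_nr = 3;

/* Count how many capacity groups intersect a given domain span */
static enum asym_type classify_asym_span(unsigned long span)
{
	int i, n = 0;

	for (i = 0; i < asym_cap_nr; i++) {
		if (span & asym_cap_masks[i])
			n++;
	}

	if (n <= 1)
		return ASYM_NONE;	/* one capacity only: no asymmetry   */
	if (n == asym_cap_nr)
		return ASYM_FULL;	/* all capacities visible: full asym */
	return ASYM_PARTIAL;		/* some, but not all: partial asym   */
}

int main(void)
{
	/* First MC cluster (CPUs 0-5): sees L, M and B -> ASYM_FULL */
	printf("MC [0-5]: %d\n", classify_asym_span(0x3f));
	/* Second MC cluster (CPUs 6-7): sees only B    -> ASYM_NONE */
	printf("MC [6-7]: %d\n", classify_asym_span(0xc0));
	/* DIE (CPUs 0-7): sees all capacities          -> ASYM_FULL */
	printf("DIE     : %d\n", classify_asym_span(0xff));
	return 0;
}

Which gives ASYM_FULL for the DIE span and the first MC cluster, and
ASYM_NONE for the second MC cluster, in line with what the commit
message describes.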

---
BR
B.


