Re: [PATCH v5 2/3] sched/topology: Rework CPU capacity asymmetry detection

On Wed, May 26, 2021 at 11:52:25AM +0200, Dietmar Eggemann wrote:

> For me asym_cpu_capacity_classify() is pretty hard to digest ;-) But I
> wasn't able to break it. It also performs correctly on the (non-existing)
> SMT layer (with the sd span equal to a single CPU).

This is the simplest form I could come up with this morning:

static inline int
asym_cpu_capacity_classify(struct sched_domain *sd,
                          const struct cpumask *cpu_map)
{
	struct asym_cap_data *entry;
	int i = 0, n = 0;

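	/*
	 * Count the capacity values from asym_cap_list that fall inside
	 * this domain's span (i) vs. outside of it (n).
	 */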
	list_for_each_entry(entry, &asym_cap_list, link) {
		if (cpumask_intersects(sched_domain_span(sd), entry->cpu_mask))
			i++;
		else
			n++;
	}

	if (WARN_ON_ONCE(!i) || i == 1) /* no asymmetry */
		return 0;

	if (n) /* partial asymmetry */
		return SD_ASYM_CPUCAPACITY;

	/* full asymmetry */
	return SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL;
}


The early termination and everything was cute, but this isn't
performance-critical code and clarity is paramount.
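
To make the partial vs. full distinction concrete, here is a minimal
userspace sketch of the same counting logic. Everything in it is made
up for illustration: the flag values, the unsigned-int mask type, and
the example big.LITTLE masks; the kernel's asym_cap_list walk is
mocked as a plain array scan.

#include <stdio.h>

/* illustrative stand-in values for the kernel's SD flags */
#define SD_ASYM_CPUCAPACITY      0x1
#define SD_ASYM_CPUCAPACITY_FULL 0x2

/* one bit per CPU, enough for an 8-CPU example system */
typedef unsigned int mask_t;

static int classify(mask_t sd_span, const mask_t *cap_masks, int nr_caps)
{
	int i = 0, n = 0, c;

	for (c = 0; c < nr_caps; c++) {
		if (sd_span & cap_masks[c])	/* capacity value inside the span */
			i++;
		else				/* capacity value outside the span */
			n++;
	}

	if (!i || i == 1)	/* no asymmetry */
		return 0;
	if (n)			/* partial asymmetry */
		return SD_ASYM_CPUCAPACITY;
	return SD_ASYM_CPUCAPACITY | SD_ASYM_CPUCAPACITY_FULL;
}

int main(void)
{
	mask_t caps2[] = { 0x0f, 0xf0 };	/* CPUs 0-3 little, CPUs 4-7 big */
	mask_t caps3[] = { 0x03, 0x0c, 0xf0 };	/* little, medium, big */

	printf("%d\n", classify(0x0f, caps2, 2)); /* 0: span sees one capacity */
	printf("%d\n", classify(0xff, caps2, 2)); /* 3: span sees both -> full */
	printf("%d\n", classify(0x0f, caps3, 3)); /* 1: misses the big CPUs -> partial */
	return 0;
}

A domain that spans every capacity value in the system ends up with
both flags set, while a domain that intersects more than one capacity
value but misses others gets only SD_ASYM_CPUCAPACITY.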


