Re: [PATCH V3 5/7] genirq/affinity: move group_cpus_evenly() into lib/

On Wed, Sep 29, 2021 at 03:40:44PM +0100, John Garry wrote:
> 
> > +/**
> > + * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
> > + * @numgrps: number of groups
> > + *
> > + * Return: cpumask array if successful, NULL otherwise. And each element
> > + * includes CPUs assigned to this group
> > + *
> > + * Try to put close CPUs from viewpoint of CPU and NUMA locality into
> > + * same group, and run two-stage grouping:
> > + *	1) allocate present CPUs on these groups evenly first
> > + *	2) allocate other possible CPUs on these groups evenly
> > + *
> > + * We guarantee in the resulted grouping that all CPUs are covered, and
> > + * no same CPU is assigned to different groups
> 
> nit: I'd have "no same CPU is assigned to multiple groups"

OK

> 
> > + */
> > +struct cpumask *group_cpus_evenly(unsigned int numgrps)
> 
> nit: The name group_cpus_evenly() would imply an action on some cpus, when
> it's just calculating some masks - I think "masks" should be at least
> included in the name

Naming is always the hard part of a review. I think a name based on
"cpus" is more readable; maybe group_all_cpus_evenly()?
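
Either way, from the caller's point of view this is just computing and
returning a mask array; a rough sketch of a built-in caller follows (the
setup_group_affinity()/assign_queue_cpus() names are made up here for
illustration, only group_cpus_evenly() itself is from the patch):

	/* sketch: hypothetical built-in caller of group_cpus_evenly() */
	static int setup_group_affinity(unsigned int nr_queues)
	{
		struct cpumask *masks;
		unsigned int i;

		masks = group_cpus_evenly(nr_queues);
		if (!masks)
			return -ENOMEM;

		/* masks[i] holds the CPUs assigned to group i */
		for (i = 0; i < nr_queues; i++)
			assign_queue_cpus(i, &masks[i]);

		/* the whole array is a single kcalloc() allocation */
		kfree(masks);
		return 0;
	}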

> 
> > +{
> > +	unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
> > +	cpumask_var_t *node_to_cpumask;
> > +	cpumask_var_t nmsk, npresmsk;
> > +	int ret = -ENOMEM;
> > +	struct cpumask *masks = NULL;
> > +
> > +	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
> > +		return NULL;
> > +
> > +	if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
> > +		goto fail_nmsk;
> > +
> > +	node_to_cpumask = alloc_node_to_cpumask();
> > +	if (!node_to_cpumask)
> > +		goto fail_npresmsk;
> > +
> > +	masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
> > +	if (!masks)
> > +		goto fail_node_to_cpumask;
> > +
> > +	/* Stabilize the cpumasks */
> > +	cpus_read_lock();
> > +	build_node_to_cpumask(node_to_cpumask);
> > +
> > +	/* grouping present CPUs first */
> > +	ret = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
> > +				  cpu_present_mask, nmsk, masks);
> > +	if (ret < 0)
> > +		goto fail_build_affinity;
> > +	nr_present = ret;
> > +
> > +	/*
> > +	 * Allocate non present CPUs starting from the next group to be
> > +	 * handled. If the grouping of present CPUs already exhausted the
> > +	 * group space, assign the non present CPUs to the already
> > +	 * allocated out groups.
> > +	 */
> > +	if (nr_present >= numgrps)
> > +		curgrp = 0;
> > +	else
> > +		curgrp = nr_present;
> > +	cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
> > +	ret = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
> > +				  npresmsk, nmsk, masks);
> > +	if (ret >= 0)
> > +		nr_others = ret;
> > +
> > + fail_build_affinity:
> 
> nit: Strange that success path goes through "fail" labels. Current code is
> this way, so feel free to ignore.

I'd rather not change the current behavior in this patchset.

> 
> > +	cpus_read_unlock();
> > +
> > +	if (ret >= 0)
> > +		WARN_ON(nr_present + nr_others < numgrps);
> > +
> > + fail_node_to_cpumask:
> > +	free_node_to_cpumask(node_to_cpumask);
> > +
> > + fail_npresmsk:
> > +	free_cpumask_var(npresmsk);
> > +
> > + fail_nmsk:
> > +	free_cpumask_var(nmsk);
> > +	if (ret < 0) {
> > +		kfree(masks);
> > +		return NULL;
> > +	}
> > +	return masks;
> > +}
> > +EXPORT_SYMBOL_GPL(group_cpus_evenly);
> 
> Are there any users which are available as modules? As far as I can
> see, the only users are blk-mq-cpumap.c and irq/affinity.c, which I
> guess aren't available as modules.

Yeah, so far there are only two built-in users, so I think it is fine to
start without exporting the symbol; I will change it this way in the
next version.
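
Concretely, the next version would just drop the EXPORT_SYMBOL_GPL()
line and keep a plain declaration for the two built-in users, roughly
like the sketch below (the header path and guard are assumptions, not
the final form):

	/* include/linux/group_cpus.h -- sketch only */
	#ifndef __LINUX_GROUP_CPUS_H
	#define __LINUX_GROUP_CPUS_H

	#include <linux/kernel.h>
	#include <linux/cpu.h>

	struct cpumask *group_cpus_evenly(unsigned int numgrps);

	#endif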


Thanks, 
Ming



