Re: [PATCH] drivers/firmware: psci_checker: stash and use topology_core_cpumask for hotplug tests

On Wed, Jul 18, 2018 at 05:49:30PM +0100, Lorenzo Pieralisi wrote:
> On Wed, Jul 18, 2018 at 12:25:32PM +0100, Sudeep Holla wrote:
> > Commit 7f9545aa1a91 ("arm64: smp: remove cpu and numa topology
> > information when hotplugging out CPU") updates the CPU topology when
> > a CPU is hotplugged out. However, the PSCI checker code holds on to
> > the topology_core_cpumask pointers for some of its CPU hotplug tests.
> > Since the pointer to the core_cpumask of the first CPU in the group
> > is used, and that mask is reduced to just that CPU once the CPU
> > itself is hotplugged out, the testing terminates after that
> > particular CPU is tested. But the intention of this test is to cover
> > all the CPUs in the group.
> > 
> > In order to support that, we need to stash the topology_core_cpumask
> > before the start of the test and use that value instead of a pointer
> > to a cpumask which will be updated on CPU hotplug.
> > 
> > Fixes: 7f9545aa1a91a9a4 ("arm64: smp: remove cpu and numa topology
> > 	information when hotplugging out CPU")
> > Reported-by: Geert Uytterhoeven <geert@xxxxxxxxxxxxxx>
> > Tested-by: Geert Uytterhoeven <geert+renesas@xxxxxxxxx>
> > Cc: Mark Rutland <mark.rutland@xxxxxxx>
> > Cc: Lorenzo Pieralisi <lorenzo.pieralisi@xxxxxxx>
> > Signed-off-by: Sudeep Holla <sudeep.holla@xxxxxxx>
> > ---
> >  drivers/firmware/psci_checker.c | 30 ++++++++++++++++++++----------
> >  1 file changed, 20 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/firmware/psci_checker.c b/drivers/firmware/psci_checker.c
> > index bb1c068bff19..e330c6cb45f5 100644
> > --- a/drivers/firmware/psci_checker.c
> > +++ b/drivers/firmware/psci_checker.c
> > @@ -77,21 +77,23 @@ static int psci_ops_check(void)
> >  	return 0;
> >  }
> >  
> > -static int find_cpu_groups(const struct cpumask *cpus,
> > -			   const struct cpumask **cpu_groups)
> > +static int find_cpu_groups(cpumask_var_t *cpu_groups)
> >  {
> >  	unsigned int nb = 0;
> >  	cpumask_var_t tmp;
> >  
> >  	if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
> >  		return -ENOMEM;
> > -	cpumask_copy(tmp, cpus);
> > +	cpumask_copy(tmp, cpu_online_mask);
> >  
> >  	while (!cpumask_empty(tmp)) {
> >  		const struct cpumask *cpu_group =
> >  			topology_core_cpumask(cpumask_any(tmp));
> >  
> > -		cpu_groups[nb++] = cpu_group;
> > +		if (cpu_groups && cpu_groups[nb])
> > +			cpumask_copy(cpu_groups[nb], cpu_group);
> > +
> > +		nb++;
> >  		cpumask_andnot(tmp, tmp, cpu_group);
> >  	}
> >  
> > @@ -169,25 +171,31 @@ static unsigned int down_and_up_cpus(const struct cpumask *cpus,
> >  static int hotplug_tests(void)
> >  {
> >  	int err;
> > -	cpumask_var_t offlined_cpus;
> > +	cpumask_var_t offlined_cpus, *cpu_groups;
> >  	int i, nb_cpu_group;
> > -	const struct cpumask **cpu_groups;
> >  	char *page_buf;
> >  
> > +	/* first run to just get the number of cpu groups */
> > +	nb_cpu_group = find_cpu_groups(NULL);
> > +
> >  	err = -ENOMEM;
> >  	if (!alloc_cpumask_var(&offlined_cpus, GFP_KERNEL))
> >  		return err;
> > -	/* We may have up to nb_available_cpus cpu_groups. */
> > -	cpu_groups = kmalloc_array(nb_available_cpus, sizeof(*cpu_groups),
> > -				   GFP_KERNEL);
> > +	cpu_groups = kcalloc(nb_cpu_group, sizeof(*cpu_groups), GFP_KERNEL);
> >  	if (!cpu_groups)
> >  		goto out_free_cpus;
> > +
> > +	for (i = 0; i < nb_cpu_group; ++i)
> > +		if (!alloc_cpumask_var(&cpu_groups[i], GFP_KERNEL))
> > +			goto out_free_cpu_groups;
> > +
> >  	page_buf = (char *)__get_free_page(GFP_KERNEL);
> >  	if (!page_buf)
> >  		goto out_free_cpu_groups;
> >  
> >  	err = 0;
> > -	nb_cpu_group = find_cpu_groups(cpu_online_mask, cpu_groups);
> > +	/* second run to populate/copy the cpumask */
> > +	nb_cpu_group = find_cpu_groups(cpu_groups);
> >  
> >  	/*
> >  	 * Of course the last CPU cannot be powered down and cpu_down() should
> > @@ -212,6 +220,8 @@ static int hotplug_tests(void)
> >  
> >  	free_page((unsigned long)page_buf);
> >  out_free_cpu_groups:
> > +	for (i = 0; i < nb_cpu_group; ++i)
> > +		free_cpumask_var(cpu_groups[i]);
> >  	kfree(cpu_groups);
> >  out_free_cpus:
> >  	free_cpumask_var(offlined_cpus);
> 
> Hi Sudeep,
> 
> thanks for the patch. I reckon that adding two helpers, say,
> alloc_cpu_groups() and free_cpu_groups(), would make the code more
> readable than having find_cpu_groups() first count and then copy;
> this is for readability rather than correctness.
> 

I agree; I admit I was lazy and was trying to keep the delta small.
I will respin.

--
Regards,
Sudeep


