On Thu, 2020-05-21 at 16:51 -0400, Daniel Jordan wrote:
> From: Mathias Krause <minipli@xxxxxxxxxxxxxx>
> 
> [ Upstream commit 1bd845bcb41d5b7f83745e0cb99273eb376f2ec5 ]

Well spotted, I'll add this for 3.16 as well.

Ben.

> The parallel queue per-cpu data structure gets initialized only for CPUs
> in the 'pcpu' CPU mask set. This is not sufficient as the reorder timer
> may run on a different CPU and might wrongly decide it's the target CPU
> for the next reorder item as per-cpu memory gets memset(0) and we might
> be waiting for the first CPU in cpumask.pcpu, i.e. cpu_index 0.
> 
> Make the '__this_cpu_read(pd->pqueue->cpu_index) == next_queue->cpu_index'
> compare in padata_get_next() fail in this case by initializing the
> cpu_index member of all per-cpu parallel queues. Use -1 for unused ones.
> 
> Signed-off-by: Mathias Krause <minipli@xxxxxxxxxxxxxx>
> Signed-off-by: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Daniel Jordan <daniel.m.jordan@xxxxxxxxxx>
> ---
>  kernel/padata.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/padata.c b/kernel/padata.c
> index 8aef48c3267b..4f860043a8e5 100644
> --- a/kernel/padata.c
> +++ b/kernel/padata.c
> @@ -461,8 +461,14 @@ static void padata_init_pqueues(struct parallel_data *pd)
>  	struct padata_parallel_queue *pqueue;
>  
>  	cpu_index = 0;
> -	for_each_cpu(cpu, pd->cpumask.pcpu) {
> +	for_each_possible_cpu(cpu) {
>  		pqueue = per_cpu_ptr(pd->pqueue, cpu);
> +
> +		if (!cpumask_test_cpu(cpu, pd->cpumask.pcpu)) {
> +			pqueue->cpu_index = -1;
> +			continue;
> +		}
> +
>  		pqueue->pd = pd;
>  		pqueue->cpu_index = cpu_index;
>  		cpu_index++;

-- 
Ben Hutchings
Logic doesn't apply to the real world. - Marvin Minsky
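
For readers who want to see the collision concretely, here is a minimal
standalone userspace sketch (with hypothetical, simplified names such as
fake_pqueue and in_pcpu_mask -- this is not the kernel's padata code). It
zeroes the per-CPU queues the way freshly allocated per-cpu memory would be,
initializes cpu_index only for CPUs in the mask as the old
padata_init_pqueues() did, and shows how an off-mask CPU's stale 0 collides
with the first in-mask CPU's index -- the comparison padata_get_next() relies
on -- while the -1 initialization from the patch makes that compare fail as
intended.

	/*
	 * Standalone illustration only; names and layout are simplified
	 * assumptions, not the kernel's actual data structures.
	 */
	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	#define NR_CPUS 4

	struct fake_pqueue {
		int cpu_index;
	};

	int main(void)
	{
		/* Per-CPU queues, zeroed like freshly allocated per-cpu memory. */
		struct fake_pqueue pqueue[NR_CPUS];
		memset(pqueue, 0, sizeof(pqueue));

		/* Suppose only CPUs 1 and 3 are in the 'pcpu' cpumask. */
		bool in_pcpu_mask[NR_CPUS] = { false, true, false, true };

		/* Old behaviour: only CPUs in the mask get a cpu_index. */
		int cpu_index = 0;
		for (int cpu = 0; cpu < NR_CPUS; cpu++) {
			if (!in_pcpu_mask[cpu])
				continue;	/* cpu_index stays 0 from memset() */
			pqueue[cpu].cpu_index = cpu_index++;
		}

		/*
		 * The reorder path compares the current CPU's cpu_index with
		 * the cpu_index of the queue expected to deliver the next
		 * object.  If the timer runs on CPU 0 (not in the mask), its
		 * stale 0 matches CPU 1's legitimate cpu_index 0.
		 */
		int next_cpu = 1;	/* first CPU in the mask */
		if (pqueue[0].cpu_index == pqueue[next_cpu].cpu_index)
			printf("old init: CPU 0 wrongly looks like the target CPU\n");

		/* Fixed behaviour: unused CPUs get -1, so the compare fails. */
		cpu_index = 0;
		for (int cpu = 0; cpu < NR_CPUS; cpu++) {
			if (!in_pcpu_mask[cpu]) {
				pqueue[cpu].cpu_index = -1;
				continue;
			}
			pqueue[cpu].cpu_index = cpu_index++;
		}

		if (pqueue[0].cpu_index != pqueue[next_cpu].cpu_index)
			printf("fixed init: CPU 0 no longer matches the next queue\n");

		return 0;
	}

Compiled with any C99 compiler (e.g. "cc -std=c99 demo.c"), it prints the
"old init" line for the pre-patch behaviour and the "fixed init" line for the
post-patch behaviour.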