Waiman Long <longman@xxxxxxxxxx> writes:

> We are hit with a not easily reproducible divide-by-0 panic in padata.c
> at bootup time.
>
> [ 10.017908] Oops: divide error: 0000 [#1] PREEMPT SMP NOPTI
> [ 10.017908] CPU: 26 PID: 2627 Comm: kworker/u1666:1 Not tainted 6.10.0-15.el10.x86_64 #1
> [ 10.017908] Hardware name: Lenovo ThinkSystem SR950 [7X12CTO1WW]/[7X12CTO1WW], BIOS [PSE140J-2.30] 07/20/2021
> [ 10.017908] Workqueue: events_unbound padata_mt_helper
> [ 10.017908] RIP: 0010:padata_mt_helper+0x39/0xb0
>   :
> [ 10.017963] Call Trace:
> [ 10.017968]  <TASK>
> [ 10.018004]  ? padata_mt_helper+0x39/0xb0
> [ 10.018084]  process_one_work+0x174/0x330
> [ 10.018093]  worker_thread+0x266/0x3a0
> [ 10.018111]  kthread+0xcf/0x100
> [ 10.018124]  ret_from_fork+0x31/0x50
> [ 10.018138]  ret_from_fork_asm+0x1a/0x30
> [ 10.018147]  </TASK>
>
> Looking at the padata_mt_helper() function, the only way a divide-by-0
> panic can happen is when ps->chunk_size is 0. The way that chunk_size is
> initialized in padata_do_multithreaded(), chunk_size can be 0 when the
> min_chunk in the passed-in padata_mt_job structure is 0.
>
> Fix this divide-by-0 panic by making sure that chunk_size will be at
> least 1 no matter what the input parameters are.
>
> Fixes: 004ed42638f4 ("padata: add basic support for multithreaded jobs")
> Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
> ---
>  kernel/padata.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/kernel/padata.c b/kernel/padata.c
> index 53f4bc912712..0fa6c2895460 100644
> --- a/kernel/padata.c
> +++ b/kernel/padata.c
> @@ -517,6 +517,13 @@ void __init padata_do_multithreaded(struct padata_mt_job *job)
>  	ps.chunk_size = max(ps.chunk_size, job->min_chunk);
>  	ps.chunk_size = roundup(ps.chunk_size, job->align);
>
> +	/*
> +	 * chunk_size can be 0 if the caller sets min_chunk to 0. So force it
> +	 * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().
> +	 */

Thanks for the patch and detailed comment.

> +	if (!ps.chunk_size)
> +		ps.chunk_size = 1U;
> +

Could it be

	ps.chunk_size = max(ps.chunk_size, 1U);

or could it be merged with the earlier max():

	ps.chunk_size = max(ps.chunk_size, max(job->min_chunk, 1U));
	ps.chunk_size = roundup(ps.chunk_size, job->align);

Either form sits well with how the entire file is written, and the
compiler optimizes them to the same code.

Kamlesh

>  	list_for_each_entry(pw, &works, pw_list)
>  		if (job->numa_aware) {
>  			int old_node = atomic_read(&last_used_nid);
> --
> 2.43.5
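
For reference, below is a minimal stand-alone sketch (not the kernel code) of
how a zero min_chunk lets the integer division truncate chunk_size to 0, and
how folding a floor of 1 into the existing max() keeps the later division in
the helper well defined. The harness is hypothetical: compute_chunk_size(),
struct mt_job, and the worker/load-balance constants are illustrative
stand-ins for the padata structures, not the actual implementation.

#include <stdio.h>

#define max(a, b)	((a) > (b) ? (a) : (b))
/* Kernel-style roundup(); callers must pass a non-zero y. */
#define roundup(x, y)	((((x) + (y) - 1) / (y)) * (y))

/* Trimmed, hypothetical stand-in for struct padata_mt_job. */
struct mt_job {
	unsigned long size;		/* total units of work */
	unsigned long min_chunk;	/* caller's minimum chunk, may be 0 */
	unsigned long align;		/* chunk alignment, at least 1 here */
};

/*
 * Chunk-size arithmetic as discussed above, with the floor of 1 folded
 * into the existing max() as suggested.
 */
static unsigned long compute_chunk_size(const struct mt_job *job,
					unsigned long nworks,
					unsigned long load_balance_factor)
{
	unsigned long chunk = job->size / (nworks * load_balance_factor);

	chunk = max(chunk, max(job->min_chunk, 1UL));
	chunk = roundup(chunk, job->align);
	return chunk;
}

int main(void)
{
	/* Tiny job, many workers: 3 / (8 * 4) truncates to 0. */
	struct mt_job job = { .size = 3, .min_chunk = 0, .align = 1 };
	unsigned long chunk = compute_chunk_size(&job, 8, 4);

	/* Without the floor of 1, this division would be the divide-by-0. */
	printf("chunk_size = %lu, chunks = %lu\n", chunk, job.size / chunk);
	return 0;
}

With min_chunk = 0 and a job much smaller than nworks * load_balance_factor,
the first division truncates to 0; the max(..., 1UL) floor guarantees a
non-zero divisor without adding a separate branch.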