On Tue, Feb 06, 2024 at 05:53:04PM -0800, Jane Chu wrote:
> Add Daniel Jordan.

Thanks, Jane. I'm adding Steffen too, and please cc the padata maintainers
on future patches. MAINTAINERS lists linux-crypto too under padata, but for
changes to just padata_do_multithreaded that's probably not necessary.

> On 2/5/2024 1:09 AM, Muchun Song wrote:
> >
> > > On Feb 5, 2024, at 16:26, Gang Li <gang.li@xxxxxxxxx> wrote:
> > >
> > > On 2024/2/5 15:28, Muchun Song wrote:
> > > > On 2024/1/26 23:24, Gang Li wrote:
> > > > > -static void __init gather_bootmem_prealloc(void)
> > > > > +static void __init gather_bootmem_prealloc_node(unsigned long start, unsigned long end, void *arg)
> > > > > +
> > > > > {
> > > > > +	int nid = start;
> > > >
> > > > Sorry to be so late in noticing an issue here. I have seen a comment from
> > > > PADATA, which says:
> > > >
> > > >     @max_threads: Max threads to use for the job, actual number may be less
> > > >                   depending on task size and minimum chunk size.
> > > >
> > > > PADATA will not guarantee gather_bootmem_prealloc_node() will be called
> > > > ->max_threads times (you have initialized it to the number of NUMA nodes in
> > > > gather_bootmem_prealloc). Therefore, we should add a loop here to initialize
> > > > multiple nodes, namely (@end - @start) here. Otherwise, we will miss
> > > > initializing some nodes.
> > > >
> > > > Thanks.
> > >
> > > In padata_do_multithreaded:
> > >
> > > ```
> > > 	/* Ensure at least one thread when size < min_chunk. */
> > > 	nworks = max(job->size / max(job->min_chunk, job->align), 1ul);
> > > 	nworks = min(nworks, job->max_threads);
> > >
> > > 	ps.nworks = padata_work_alloc_mt(nworks, &ps, &works);
> > > ```
> > >
> > > So we have nworks <= max_threads, but >= size/min_chunk.
> >
> > Given a 4-node system, the current implementation will schedule
> > 4 threads to call gather_bootmem_prealloc() respectively, and
> > there are no problems here. But what if PADATA schedules 2
> > threads and each thread aims to handle 2 nodes? I think
> > it is possible for PADATA in the future, because it does not
> > break any semantics exposed to users. The comment about @min_chunk:
> >
> >     The minimum chunk size in job-specific units. This
> >     allows the client to communicate the minimum amount
> >     of work that's appropriate for one worker thread to
> >     do at once.
> >
> > It only defines the minimum chunk size, not the maximum, so it is
> > possible to let each ->thread_fn handle multiple minimum chunks.
> > Right? Therefore, I am not concerned

Right.  The core issue is that gather_bootmem_prealloc_node() doesn't look
at @end, but padata expects each call of the thread function to cover the
start/end range it's passed.

I understand that this happens to work today with how padata calculates
nworks, but it seems better to honor that expectation, so I agree with
Muchun's suggestion a few messages ago to loop over the range.

I hope to look at the rest of the series and at that standalone Kconfig
patch after about a week; there isn't time before that.
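For illustration, here is a minimal sketch of the range-honoring shape being
discussed, not the actual patch: gather_bootmem_prealloc_nid() is a
hypothetical per-node helper, and the job sizing (node-id units, min_chunk of
1, max_threads equal to the number of memory nodes) is assumed from the
discussion above rather than taken from the posted code.

```
/*
 * Sketch only.  padata hands the thread function a [start, end) range in
 * job-specific units (NUMA node ids here), so walk the whole range rather
 * than assuming one call per node.
 */
static void __init gather_bootmem_prealloc_node(unsigned long start,
						unsigned long end, void *arg)
{
	int nid;

	for (nid = start; nid < end; nid++)
		gather_bootmem_prealloc_nid(nid);	/* hypothetical per-node helper */
}

static void __init gather_bootmem_prealloc(void)
{
	struct padata_mt_job job = {
		.thread_fn	= gather_bootmem_prealloc_node,
		.fn_arg		= NULL,
		.start		= 0,
		.size		= num_node_state(N_MEMORY),	/* one unit per node */
		.align		= 1,
		.min_chunk	= 1,
		.max_threads	= num_node_state(N_MEMORY),
	};

	padata_do_multithreaded(&job);
}
```

With this shape, even if padata were to hand one worker a chunk covering two
nodes, every node in [start, end) would still get initialized.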