Hello,

On Tue, Jun 06, 2023 at 03:21:27PM -0400, Liam R. Howlett wrote:
> * Yu Ma <yu.ma@xxxxxxxxx> [230606 08:27]:
> > When running the UnixBench/Execl throughput case, false sharing is
> > observed due to frequent reads of base_addr alongside frequent writes
> > to free_bytes and chunk_md.
> >
> > UnixBench/Execl represents a class of workload where bash scripts
> > are spawned frequently to do short jobs. It calls execl frequently,
> > and execl calls mm_init to initialize the mm_struct of the process.
> > mm_init calls __percpu_counter_init to initialize the percpu
> > counters. pcpu_alloc is then called, which reads the base_addr of
> > the pcpu_chunk for memory allocation. Inside pcpu_alloc,
> > pcpu_alloc_area allocates memory from a specified chunk and updates
> > "free_bytes" and "chunk_md" to record the remaining free bytes and
> > other metadata for this chunk. Correspondingly, pcpu_free_area also
> > updates these 2 members when freeing memory. The call trace from
> > perf is as below:
> >
> >   + 57.15%  0.01%  execl  [kernel.kallsyms]  [k] __percpu_counter_init
> >   + 57.13%  0.91%  execl  [kernel.kallsyms]  [k] pcpu_alloc
> >   - 55.27% 54.51%  execl  [kernel.kallsyms]  [k] osq_lock
> >      - 53.54% 0x654278696e552f34
> >           main
> >           __execve
> >           entry_SYSCALL_64_after_hwframe
> >           do_syscall_64
> >           __x64_sys_execve
> >           do_execveat_common.isra.47
> >           alloc_bprm
> >           mm_init
> >           __percpu_counter_init
> >           pcpu_alloc
> >         - __mutex_lock.isra.17
> >
> > In the current pcpu_chunk layout, 'base_addr' is in the same cache
> > line as 'free_bytes' and 'chunk_md', occupying the last 8 bytes of
> > that line. This patch moves 'bound_map' up to just before
> > 'base_addr', so that 'base_addr' starts in a new cache line.
> >
> > With this change, on an Intel Sapphire Rapids 112C/224T platform,
> > based on v6.4-rc4, the score with 160 parallel copies improves
> > by 24%.
>
> Can we have a comment somewhere around this structure to avoid someone
> reverting this change by accident?
>

I agree with Liam. Percpu counters were only recently added to
mm_struct, so this wasn't originally on the hot path.

It's probably worth reshuffling pcpu_chunk because, as you point out,
base_addr is read-only after init. In general there aren't that many of
these structs on any particular host, so it's probably good to just
annotate with ____cacheline_aligned_in_smp and potentially reshuffle a
few other variables as well; a rough sketch of what I mean is appended
below the quoted patch.

Another optimization here is batch allocation, which hasn't been done
yet (allocate essentially an array of percpu variables all at once, but
allow their lifetimes to be independent). A purely hypothetical sketch
of that is at the bottom of this mail as well.

PS - I know I'm not super active, but please cc me on percpu changes.

Thanks,
Dennis

> >
> > Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > Signed-off-by: Yu Ma <yu.ma@xxxxxxxxx>
> > ---
> >  mm/percpu-internal.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
> > index f9847c131998..981eeb2ad0a9 100644
> > --- a/mm/percpu-internal.h
> > +++ b/mm/percpu-internal.h
> > @@ -41,10 +41,10 @@ struct pcpu_chunk {
> >  	struct list_head	list;		/* linked to pcpu_slot lists */
> >  	int			free_bytes;	/* free bytes in the chunk */
> >  	struct pcpu_block_md	chunk_md;
> > +	unsigned long		*bound_map;	/* boundary map */
> >  	void			*base_addr;	/* base address of this chunk */
> >
> >  	unsigned long		*alloc_map;	/* allocation map */
> > -	unsigned long		*bound_map;	/* boundary map */
> >  	struct pcpu_block_md	*md_blocks;	/* metadata blocks */
> >
> >  	void			*data;		/* chunk data */
> > --
> > 2.39.3
> >
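
For concreteness, here is a rough, untested sketch of the annotation
and comment suggested above. The field order below base_addr is
unchanged from today's mm/percpu-internal.h; only the alignment
annotation and the comment block are new:

struct pcpu_chunk {
	struct list_head	list;		/* linked to pcpu_slot lists */
	int			free_bytes;	/* free bytes in the chunk */
	struct pcpu_block_md	chunk_md;

	/*
	 * base_addr is written only at init but read on every
	 * allocation, while free_bytes and chunk_md above are written
	 * on every alloc/free.  Keep base_addr on its own cache line
	 * to avoid false sharing (see the UnixBench/Execl regression
	 * discussed in this thread) -- please don't reorder it back.
	 */
	void			*base_addr ____cacheline_aligned_in_smp;

	unsigned long		*alloc_map;	/* allocation map */
	unsigned long		*bound_map;	/* boundary map */
	struct pcpu_block_md	*md_blocks;	/* metadata blocks */

	void			*data;		/* chunk data */
	/* ... remaining members unchanged ... */
};

Since there are only a handful of chunks per host, the padding this
adds is negligible next to the false sharing it prevents, and it keeps
working even if someone later inserts a field before base_addr.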
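
And a purely hypothetical sketch of the batch allocation idea -- no
such interface exists today, so the name and signature below are made
up for illustration only:

/*
 * Allocate @nr percpu variables, each of @size bytes at @align, in a
 * single pass under pcpu_alloc_mutex, storing the pointers in @ptrs.
 * Returns 0 on success or -ENOMEM.  Each element would still be
 * released individually with free_percpu(), so the lifetimes stay
 * independent even though the allocation itself is batched.
 */
int pcpu_alloc_batch(size_t size, size_t align, int nr,
		     void __percpu **ptrs, gfp_t gfp);

Something like this would let mm_init() set up all of its percpu
counters with one locked walk of the chunk metadata instead of one
walk per counter, which is where the osq_lock time in the perf trace
above is going.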