On Tue, Mar 15, 2022 at 4:29 AM Barry Song <21cnbao@xxxxxxxxx> wrote:

<snipped>

> > I guess the main cause of the regression for the previous sequence
> > with 16 entries is that the ebizzy has a new allocated copy in
> > search_mem(), which is mapped and used only once in each loop.
> > and the temp copy can push out those hot chunks.
> >
> > Anyway, I understand it is a trade-off between warmly embracing new
> > pages and holding old pages tightly. Real user cases from phone, server,
> > desktop will be judging this better.

Thanks for all the details. I looked into them today and found no
regressions when running with your original program. After I explain
why, I hope you'd be convinced that using programs like this one is
not a good way to measure things :)

Problems:

1) Given the 2.5GB configuration and a sequence of cold/hot chunks, I
assume your program tries to simulate a handful of apps running on a
phone. A short repeating sequence is closer to sequential access than
to real user behaviors, as I suggested last time. You could check out
how something similar is done here [1].

2) Under the same assumption (phone), C programs are very different
from Android apps in terms of runtime memory behaviors, e.g., JVM GC
[2].

3) Assuming you are interested in the runtime memory behavior of C/C++
programs, your program is still not very representative: all C/C++
programs I'm familiar with either link against TCMalloc or jemalloc or
implement their own allocators. GNU libc, IMO, has a small market
share nowadays.

4) TCMalloc and jemalloc are not only optimized for multithreading,
they are also THP-aware. THP is very important when benchmarking page
reclaim, e.g., two similarly warm THPs can comprise 511+1 or 1+511
warm+cold 4KB pages, and the LRU algorithm that chooses more of the
former is at a disadvantage (a small sketch follows this list). Unless
it's recommended by the applications you are trying to benchmark, THP
should be disabled. (Android generally doesn't use THP.)

5) Swap devices are also important. Zram should NOT be used unless you
know your benchmark doesn't generate incompressible data. The LRU
algorithm that chooses more incompressible pages is at a disadvantage.
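To make 4) more concrete, here is a minimal sketch (not from your
program; a hypothetical illustration assuming a 2MB THP size and that
the kernel actually backs the region with a THP): it faults in one THP
and then keeps only the first 4KB subpage warm, i.e., the 1+511
warm+cold case. While the region stays mapped by a PMD, the accessed
bit is tracked at the 2MB granularity, so this case and the 511+1 case
look identical to reclaim.

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define THP_SIZE (2UL << 20)	/* assumed THP size: 2MB */

int main(void)
{
	/* over-allocate so the region can be aligned to a THP boundary */
	size_t len = 2 * THP_SIZE;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	char *thp = (char *)(((uintptr_t)buf + THP_SIZE - 1) & ~(THP_SIZE - 1));
	/* only a hint; requires THP enabled at least in madvise mode */
	madvise(thp, THP_SIZE, MADV_HUGEPAGE);

	/* fault in the whole 2MB region once... */
	memset(thp, 1, THP_SIZE);

	/* ...then keep only the first 4KB subpage warm: 1+511 warm+cold */
	for (;;) {
		thp[0]++;
		sleep(1);
	}
}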
Here is my result: on the same Snapdragon 7c + 2.5GB RAM + 1.5GB
ramdisk swap, with your original program compiled against libc malloc
and TCMalloc, as 32-bit and 64-bit binaries:

# cat /sys/kernel/mm/lru_gen/enabled
0x0003
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]

# modprobe brd rd_nr=1 rd_size=1572864
# dd if=/dev/zero of=/dev/ram0 bs=1M
# mkswap /dev/ram0
# swapoff -a
# swapon /dev/ram0

# ldd test_absl_32
        linux-vdso.so.1 (0xf6e7f000)
        libabsl_malloc.so.2103.0.1 => /usr/lib/libabsl_malloc.so.2103.0.1 (0xf6e23000)
        libpthread.so.0 => /lib/libpthread.so.0 (0xf6dff000)
        libc.so.6 => /lib/libc.so.6 (0xf6d07000)
        /lib/ld-linux-armhf.so.3 (0x09df0000)
        libabsl_base.so.2103.0.1 => /usr/lib/libabsl_base.so.2103.0.1 (0xf6ce5000)
        libabsl_raw_logging.so.2103.0.1 => /usr/lib/libabsl_raw_logging.so.2103.0.1 (0xf6cc4000)
        libabsl_spinlock_wait.so.2103.0.1 => /usr/lib/libabsl_spinlock_wait.so.2103.0.1 (0xf6ca3000)
        libc++.so.1 => /usr/lib/libc++.so.1 (0xf6c04000)
        libc++abi.so.1 => /usr/lib/libc++abi.so.1 (0xf6bcd000)
# file test_absl_64
test_absl_64: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked

# ldd test_gnu_32
        linux-vdso.so.1 (0xeabef000)
        libpthread.so.0 => /lib/libpthread.so.0 (0xeab92000)
        libc.so.6 => /lib/libc.so.6 (0xeaa9a000)
        /lib/ld-linux-armhf.so.3 (0x05690000)
# file test_gnu_64
test_gnu_64: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked

### baseline 5.17-rc8

# perf record ./test_gnu_64 -t 4 -s $((200*1024*1024)) -S 6000000
10 records/s
real 59.00 s
user 39.83 s
sys 174.18 s

  18.51%  [.] memcpy
  15.98%  [k] __pi_clear_page
   5.59%  [k] rmqueue_pcplist
   5.19%  [k] do_raw_spin_lock
   5.09%  [k] memmove
   4.60%  [k] _raw_spin_unlock_irq
   3.62%  [k] _raw_spin_unlock_irqrestore
   3.61%  [k] free_unref_page_list
   3.29%  [k] zap_pte_range
   2.53%  [k] local_daif_restore
   2.50%  [k] down_read_trylock
   1.41%  [k] handle_mm_fault
   1.32%  [k] do_anonymous_page
   1.31%  [k] up_read
   1.03%  [k] free_swap_cache

### MGLRU v9

# perf record ./test_gnu_64 -t 4 -s $((200*1024*1024)) -S 6000000
11 records/s
real 57.00 s
user 39.39 s

  19.36%  [.] memcpy
  16.50%  [k] __pi_clear_page
   6.21%  [k] memmove
   5.57%  [k] rmqueue_pcplist
   5.07%  [k] do_raw_spin_lock
   4.96%  [k] _raw_spin_unlock_irqrestore
   4.25%  [k] free_unref_page_list
   3.80%  [k] zap_pte_range
   3.69%  [k] _raw_spin_unlock_irq
   2.71%  [k] local_daif_restore
   2.10%  [k] down_read_trylock
   1.50%  [k] handle_mm_fault
   1.29%  [k] do_anonymous_page
   1.17%  [k] free_swap_cache
   1.08%  [k] up_read

[1] https://chromium.googlesource.com/chromiumos/platform/tast-tests/+/refs/heads/main/src/chromiumos/tast/local/memory/mempressure/mempressure.go
[2] https://developer.android.com/topic/performance/memory-overview