On 11/13/24 02:14, Kent Overstreet wrote:
On Tue, Nov 12, 2024 at 11:30:39AM +0800, Hao Ge wrote:
Hi Suren
Firstly, please forgive me for my improper wording in the commit message.
After sending it, I realized that I should have used "suggestion" instead of
"decided".
Secondly, please forgive me for taking a few days to respond. I've been
quite busy these days.
Let's continue to discuss this issue.
On 11/9/24 02:16, Suren Baghdasaryan wrote:
On Thu, Nov 7, 2024 at 11:50 PM Hao Ge <hao.ge@xxxxxxxxx> wrote:
From: Hao Ge <gehao@xxxxxxxxxx>
After much consideration, I have decided to remove the
"mem_profiling" sysctl interface to prevent users from dynamically
enabling or disabling the MEMORY ALLOCATION PROFILING feature at
runtime.
I have taken the following actions: I set
CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y to enable memory
allocation profiling by default, and then toggled mem_profiling
dynamically at runtime.
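(For reference, the runtime toggling above needs nothing beyond the
interfaces already discussed in this thread; a rough sketch of the
sequence, where the test-program name is only a placeholder and I am
assuming the first column of /proc/allocinfo is the allocated size:)

  cat /proc/sys/vm/mem_profiling        # check that profiling is currently enabled
  ./oom-test                            # placeholder for the OOM test program
  sort -rn /proc/allocinfo | head       # largest allocation call sites first
  echo 0 > /proc/sys/vm/mem_profiling   # disable profiling at runtime
  echo 1 > /proc/sys/vm/mem_profiling   # re-enable it later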
When I ran the OOM test program, I obtained information that was
very helpful for debugging:
[ 1023.065402] Memory allocations:
[ 1023.065407] 12.8 GiB 6546 mm/huge_memory.c:1328 func:do_huge_pmd_anonymous_page
[ 1023.065412] 873 MiB 229985 arch/arm64/mm/fault.c:986 func:vma_alloc_zeroed_movable_folio
[ 1023.065415] 187 MiB 29732 mm/slub.c:2412 func:alloc_slab_page
[ 1023.065418] 99.8 MiB 25560 mm/memory.c:1065 func:folio_prealloc
[ 1023.065421] 47.2 MiB 3189 mm/readahead.c:434 func:ra_alloc_folio
[ 1023.065424] 30.0 MiB 15 mm/khugepaged.c:1072 func:alloc_charge_folio
[ 1023.065428] 28.6 MiB 514 mm/compaction.c:1880 func:compaction_alloc
[ 1023.065430] 25.8 MiB 6592 mm/page_ext.c:271 func:alloc_page_ext
[ 1023.065433] 25.6 MiB 6546 mm/huge_memory.c:1161 func:__do_huge_pmd_anonymous_page
[ 1023.065436] 23.5 MiB 6017 mm/shmem.c:1771 func:shmem_alloc_folio
After running echo 0 > /proc/sys/vm/mem_profiling and then executing
the same test program, I obtained the following results:
[ 1156.509699] Memory allocations:
[ 1156.509703] 187 MiB 29645 mm/slub.c:2412 func:alloc_slab_page
[ 1156.509707] 142 MiB 9357 mm/readahead.c:434 func:ra_alloc_folio
[ 1156.509710] 136 MiB 41325 arch/arm64/mm/fault.c:986 func:vma_alloc_zeroed_movable_folio
[ 1156.509713] 99.7 MiB 25531 mm/memory.c:1065 func:folio_prealloc
[ 1156.509716] 56.0 MiB 28 mm/huge_memory.c:1328 func:do_huge_pmd_anonymous_page
[ 1156.509719] 30.0 MiB 15 mm/khugepaged.c:1072 func:alloc_charge_folio
[ 1156.509723] 28.6 MiB 514 mm/compaction.c:1880 func:compaction_alloc
[ 1156.509725] 26.3 MiB 7460 mm/readahead.c:264 func:page_cache_ra_unbounded
[ 1156.509728] 25.8 MiB 6592 mm/page_ext.c:271 func:alloc_page_ext
[ 1156.509730] 23.5 MiB 6016 mm/shmem.c:1771 func:shmem_alloc_folio
Because mem_profiling was disabled by executing
echo 0 > /proc/sys/vm/mem_profiling, we are unable to record memory
allocation information after it was disabled.
Naturally you are unable to track the allocations after disabling it.
You disabled it as root, so I assume you know what you are doing.
These output logs can mislead users, and the same applies to
alloc_info.
I would understand if you made /proc/allocinfo empty after disabling
it to avoid confusing the user, but ripping out the ability to
enable/disable profiling at runtime does not make sense to me. Once
you collect the required data, disabling profiling gets back the
performance you paid for it. There are use cases where a program on
a remote device periodically enables profiling for some time, records
the difference in allocations, and then disables it. Your change
breaks such users.
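(A rough sketch of such a periodic workflow, using only the two
interfaces already named in this thread; the snapshot paths and the
sampling interval below are placeholders:)

  echo 1 > /proc/sys/vm/mem_profiling                # open a profiling window
  cat /proc/allocinfo > /tmp/allocinfo.before
  sleep 60                                           # placeholder sampling interval
  cat /proc/allocinfo > /tmp/allocinfo.after
  echo 0 > /proc/sys/vm/mem_profiling                # get the runtime overhead back
  diff /tmp/allocinfo.before /tmp/allocinfo.after    # per-callsite change over the window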
Actually, my original intention was also to make /proc/allocinfo
empty when disabling it, but I considered the following scenario:
after we disable it and clear /proc/allocinfo, we then start a
memory-intensive application, such as our OOM (Out-Of-Memory) test
program. If we later enable it again, the issue described in my
commit message would still arise.
Perhaps we need to further consider how to handle this situation.
Why would you do such a thing?
We put a lot of effort into making memory allocation profiling cheap
enough to leave on, and I haven't seen a single complaint about
performance overhead.
Hi Kent
Thank you very much for your and Suren's hard work.
For me, this feature is still very useful.
As I mentioned in my previous reply to Suren's email, I did overlook
some things, and for that, I apologize.
Thanks
Best regards
Hao