Re: [PATCH] smaps: count large pages smaller than PMD size to anonymous_thp

On 2024/12/5 1:05, Ryan Roberts wrote:
> On 04/12/2024 14:40, Wenchao Hao wrote:
>> On 2024/12/3 22:42, Ryan Roberts wrote:
>>> On 03/12/2024 14:17, David Hildenbrand wrote:
>>>> On 03.12.24 14:49, Wenchao Hao wrote:
>>>>> Currently, /proc/xxx/smaps reports the size of anonymous huge pages for
>>>>> each VMA, but it does not include large pages smaller than PMD size.
>>>>>
>>>>> This patch adds statistics for anonymous huge pages allocated by mTHP
>>>>> that are smaller than PMD size to the AnonHugePages field in smaps.
>>>>>
>>>>> Signed-off-by: Wenchao Hao <haowenchao22@xxxxxxxxx>
>>>>> ---
>>>>>   fs/proc/task_mmu.c | 6 ++++++
>>>>>   1 file changed, 6 insertions(+)
>>>>>
>>>>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>>>>> index 38a5a3e9cba2..b655011627d8 100644
>>>>> --- a/fs/proc/task_mmu.c
>>>>> +++ b/fs/proc/task_mmu.c
>>>>> @@ -717,6 +717,12 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page,
>>>>>           if (!folio_test_swapbacked(folio) && !dirty &&
>>>>>               !folio_test_dirty(folio))
>>>>>               mss->lazyfree += size;
>>>>> +
>>>>> +        /*
>>>>> +         * Count large pages smaller than PMD size to anonymous_thp
>>>>> +         */
>>>>> +        if (!compound && PageHead(page) && folio_order(folio))
>>>>> +            mss->anonymous_thp += folio_size(folio);
>>>>>       }
>>>>>
>>>>>       if (folio_test_ksm(folio))
>>>>
>>>>
>>>> I think we decided to leave this (and /proc/meminfo) as one of the last
>>>> interfaces that are only concerned with PMD-sized ones:
>>>>
>>>> Documentation/admin-guide/mm/transhuge.rst:
>>>>
>>>> The number of PMD-sized anonymous transparent huge pages currently used by the
>>>> system is available by reading the AnonHugePages field in ``/proc/meminfo``.
>>>> To identify what applications are using PMD-sized anonymous transparent huge
>>>> pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
>>>> fields for each mapping. (Note that AnonHugePages only applies to traditional
>>>> PMD-sized THP for historical reasons and should have been called
>>>> AnonHugePmdMapped).
>>>>
>>>
>>> Agreed. If you need per-process metrics for mTHP, we have a python script at
>>> tools/mm/thpmaps which does a fairly good job of parsing pagemap. --help gives
>>> you all the options.
>>>
>>
>> I tried this tool, and it is very powerful and practical IMO.
>> However, there are two disadvantages:
>>
>> - This tool is heavily dependent on Python and Python libraries.
>>   After installing several libraries with the pip command, I was able to
>>   get it running.
> 
> I think numpy is the only package it uses which is not in the standard library?
> What other libraries did you need to install?
> 

Yes, I just tested it on a standard Fedora installation, and that is indeed the
case. The reason I previously needed to install additional packages is that I
had removed some unused software from the old environment.

Recently, I revisited your tool and started using it again. It is very useful
and meets my needs, even exceeding them. I am now testing with QEMU running
Fedora, so it is easy to run the tool.

>>   In practice, the environment we need to analyze may be a mobile or
>>   embedded environment, where it is very difficult to deploy these
>>   libraries.
> 
> Yes, I agree that's a problem, especially for Android. The script has proven
> useful to me for debugging in a traditional Linux distro environment though.
> 
>> - It seems that this tool only counts file-backed large pages? During
> 
> No; the tool counts file-backed and anon memory. But it reports it in separate
> counters. See `thpmaps --help` for full details.
> 
>>   the actual test, I mapped a region of anonymous memory and expected it to
>>   be mapped with large pages, but the tool did not display any large pages.
>>   Below is my test file (the mTHP-related sysfs interfaces are set to
>>   "always" to make sure large pages are used):
> 
> Which mTHP sizes did you enable? Depending on your value of SIZE and which mTHP
> sizes are enabled, you may not have a correctly aligned region in p. So mTHP
> would not be allocated. Best to over-allocate then explicitly align p to the
> mTHP size, then fault it in.
> 

I enabled the 64K/128K/256K mTHP sizes (setting the corresponding per-size
knobs under /sys/kernel/mm/transparent_hugepage/ to "always") and have been
studying, debugging, and modifying parts of the khugepaged code to try
collapsing base pages into mTHP large pages. So I wanted to use smaps to
observe the large page sizes in a process.
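
Following your suggestion, I will rework the test to over-allocate, align the
buffer explicitly, and then fault it in. A minimal sketch of that approach
(untested; MTHP_SIZE stands for the largest enabled mTHP size, 256K in my
setup, and SIZE is as in the original test):

#define MTHP_SIZE (256 * 1024) /* largest enabled mTHP size (assumption) */

        void *raw;
        char *p;
        unsigned long i;

        /* Over-allocate so an MTHP_SIZE-aligned region fits inside. */
        raw = mmap(NULL, SIZE + MTHP_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (raw == MAP_FAILED) {
                perror("fail to get memory");
                exit(-1);
        }

        /* Round the start up to the next MTHP_SIZE boundary... */
        p = (char *)(((unsigned long)raw + MTHP_SIZE - 1) & ~(MTHP_SIZE - 1UL));

        /* ...then touch every page so mTHP can be allocated on first fault. */
        for (i = 0; i < SIZE; i += 4096)
                p[i] = 1;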

>>
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <unistd.h>
>> #include <sys/mman.h>
>>
>> int main()
>> {
>>         int i;
>>         char *c;
>>         unsigned long *p;
>>
>>         p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> 
> What is SIZE here?
> 
>>         if (p == MAP_FAILED) {
>>                 perror("fail to get memory");
>>                 exit(-1);
>>         }
>>
>>         c = (char *)p;
>>
>>         for (i = 0; i < SIZE / 8; i += 8)
>>                 *(p + i) = 0xffff + i;
> 
> Err... what's your intent here? I think you're writing to 1 in every 8 longs?
> Probably just write to the first byte of every page.
> 

The data pattern is fixed on purpose, for analyzing zram compression, so I
fill the region with fixed data here.
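
If it turns out the access pattern matters, a variant that writes one fixed
long at the start of every page would still fault the whole region in while
keeping the data compressible for zram (just a sketch, assuming 4KB base
pages and the declarations from the test program above):

        for (i = 0; i < SIZE / 4096; i++)
                *(p + i * (4096 / sizeof(unsigned long))) = 0xffff + i;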

> Thanks,
> Ryan
> 
>>
>>         while (1)
>>                 sleep(10);
>>
>>         return 0;
>> }
>>
>> Thanks,
>> wenchao
>>
> 
