Re: [PATCH v2] mm: add per-order mTHP swap-in fallback/fallback_charge counters

On 2024/11/23 19:52, Lance Yang wrote:
> On Sat, Nov 23, 2024 at 6:27 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
>>
>> On Sat, Nov 23, 2024 at 10:36 AM Lance Yang <ioworker0@xxxxxxxxx> wrote:
>>>
>>> Hi Wenchao,
>>>
>>> On Sat, Nov 23, 2024 at 12:14 AM Wenchao Hao <haowenchao22@xxxxxxxxx> wrote:
>>>>
>>>> Currently, large folio swap-in is supported, but we lack a method to
>>>> analyze its success ratio. Similar to anon_fault_fallback, we introduce
>>>> per-order mTHP swpin_fallback and swpin_fallback_charge counters for
>>>> calculating this ratio. The new counters are located at:
>>>>
>>>> /sys/kernel/mm/transparent_hugepage/hugepages-<size>/stats/
>>>>         swpin_fallback
>>>>         swpin_fallback_charge
>>>>
>>>> Signed-off-by: Wenchao Hao <haowenchao22@xxxxxxxxx>
>>>> ---
>>>> V2:
>>>>  Introduce swpin_fallback_charge, which is incremented if charging a
>>>>  huge page fails despite a successful allocation.
>>>>
>>>>  Documentation/admin-guide/mm/transhuge.rst | 10 ++++++++++
>>>>  include/linux/huge_mm.h                    |  2 ++
>>>>  mm/huge_memory.c                           |  6 ++++++
>>>>  mm/memory.c                                |  2 ++
>>>>  4 files changed, 20 insertions(+)
>>>>
>>>> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
>>>> index 5034915f4e8e..9c07612281b5 100644
>>>> --- a/Documentation/admin-guide/mm/transhuge.rst
>>>> +++ b/Documentation/admin-guide/mm/transhuge.rst
>>>> @@ -561,6 +561,16 @@ swpin
>>>>         is incremented every time a huge page is swapped in from a non-zswap
>>>>         swap device in one piece.
>>>>
>>>
>>> Would the following be better?
>>>
>>> +swpin_fallback
>>> +       is incremented if a huge page swapin fails to allocate or charge
>>> +       it and instead falls back to using small pages.
>>>
>>> +swpin_fallback_charge
>>> +       is incremented if a huge page swapin fails to charge it and instead
>>> +       falls back to using small pages even though the allocation was
>>> +       successful.
>>
>> much better, but it is better to align with "huge pages with
>> lower orders or small pages", not necessarily small pages:
>>
>> anon_fault_fallback
>> is incremented if a page fault fails to allocate or charge
>> a huge page and instead falls back to using huge pages with
>> lower orders or small pages.
>>
>> anon_fault_fallback_charge
>> is incremented if a page fault fails to charge a huge page and
>> instead falls back to using huge pages with lower orders or
>> small pages even though the allocation was successful.
> 
> Right, I clearly overlooked that ;)
> 

Hi Lance and Barry,

Do you think the following wording is clear? Compared to my original
version, I’ve removed the word “huge” from the first line, so it now
reads almost identically to anon_fault_fallback/anon_fault_fallback_charge.

swpin_fallback
       is incremented if a page swapin fails to allocate or charge
       a huge page and instead falls back to using huge pages with
       lower orders or small pages.

swpin_fallback_charge
       is incremented if a page swapin fails to charge a huge page and
       instead falls back to using huge pages with lower orders or
       small pages even though the allocation was successful.
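
As a side note (not part of the patch), below is a rough, untested
user-space sketch of how the per-order swap-in success ratio could be
derived from these counters. The hugepages-64kB directory is only an
example order, and treating swpin / (swpin + swpin_fallback) as the
"success ratio" is just an approximation:

#include <stdio.h>

/* Read one per-order mTHP counter from sysfs; returns 0 on any error. */
static unsigned long long read_stat(const char *dir, const char *name)
{
	char path[256];
	unsigned long long val = 0;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/kernel/mm/transparent_hugepage/%s/stats/%s", dir, name);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (fscanf(f, "%llu", &val) != 1)
		val = 0;
	fclose(f);
	return val;
}

int main(void)
{
	/* hugepages-64kB is just an example; adjust for your system. */
	const char *dir = "hugepages-64kB";
	unsigned long long swpin = read_stat(dir, "swpin");
	unsigned long long fallback = read_stat(dir, "swpin_fallback");
	unsigned long long charge = read_stat(dir, "swpin_fallback_charge");
	unsigned long long attempts = swpin + fallback;

	printf("%s: swpin=%llu swpin_fallback=%llu swpin_fallback_charge=%llu\n",
	       dir, swpin, fallback, charge);
	if (attempts)
		printf("%s: swap-in success ratio ~= %.1f%%\n",
		       dir, 100.0 * swpin / attempts);
	return 0;
}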

Thanks,
Wenchao

> Thanks,
> Lance
> 
>>
>>>
>>> Thanks,
>>> Lance
>>>
>>>> +swpin_fallback
>>>> +       is incremented if a huge page swapin fails to allocate or charge
>>>> +       a huge page and instead falls back to using huge pages with
>>>> +       lower orders or small pages.
>>>> +
>>>> +swpin_fallback_charge
>>>> +       is incremented if a page swapin fails to charge a huge page and
>>>> +       instead falls back to using huge pages with lower orders or
>>>> +       small pages even though the allocation was successful.
>>>> +
>>>>  swpout
>>>>         is incremented every time a huge page is swapped out to a non-zswap
>>>>         swap device in one piece without splitting.
>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>> index b94c2e8ee918..93e509b6c00e 100644
>>>> --- a/include/linux/huge_mm.h
>>>> +++ b/include/linux/huge_mm.h
>>>> @@ -121,6 +121,8 @@ enum mthp_stat_item {
>>>>         MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
>>>>         MTHP_STAT_ZSWPOUT,
>>>>         MTHP_STAT_SWPIN,
>>>> +       MTHP_STAT_SWPIN_FALLBACK,
>>>> +       MTHP_STAT_SWPIN_FALLBACK_CHARGE,
>>>>         MTHP_STAT_SWPOUT,
>>>>         MTHP_STAT_SWPOUT_FALLBACK,
>>>>         MTHP_STAT_SHMEM_ALLOC,
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index ee335d96fc39..46749dded1c9 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -617,6 +617,8 @@ DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
>>>>  DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>>>  DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
>>>>  DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
>>>> +DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
>>>> +DEFINE_MTHP_STAT_ATTR(swpin_fallback_charge, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
>>>>  DEFINE_MTHP_STAT_ATTR(swpout, MTHP_STAT_SWPOUT);
>>>>  DEFINE_MTHP_STAT_ATTR(swpout_fallback, MTHP_STAT_SWPOUT_FALLBACK);
>>>>  #ifdef CONFIG_SHMEM
>>>> @@ -637,6 +639,8 @@ static struct attribute *anon_stats_attrs[] = {
>>>>  #ifndef CONFIG_SHMEM
>>>>         &zswpout_attr.attr,
>>>>         &swpin_attr.attr,
>>>> +       &swpin_fallback_attr.attr,
>>>> +       &swpin_fallback_charge_attr.attr,
>>>>         &swpout_attr.attr,
>>>>         &swpout_fallback_attr.attr,
>>>>  #endif
>>>> @@ -669,6 +673,8 @@ static struct attribute *any_stats_attrs[] = {
>>>>  #ifdef CONFIG_SHMEM
>>>>         &zswpout_attr.attr,
>>>>         &swpin_attr.attr,
>>>> +       &swpin_fallback_attr.attr,
>>>> +       &swpin_fallback_charge_attr.attr,
>>>>         &swpout_attr.attr,
>>>>         &swpout_fallback_attr.attr,
>>>>  #endif
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 209885a4134f..774dfd309cfe 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -4189,8 +4189,10 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
>>>>                         if (!mem_cgroup_swapin_charge_folio(folio, vma->vm_mm,
>>>>                                                             gfp, entry))
>>>>                                 return folio;
>>>> +                       count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK_CHARGE);
>>>>                         folio_put(folio);
>>>>                 }
>>>> +               count_mthp_stat(order, MTHP_STAT_SWPIN_FALLBACK);
>>>>                 order = next_order(&orders, order);
>>>>         }
>>>>
>>>> --
>>>> 2.45.0
>>>>
>>
>> Thanks
>> Barry




