Re: [RFC 0/6] Reclaim zero subpages of thp to avoid memory bloat

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Thu 28-10-21 19:56:49, Ning Zhang wrote:
> As we know, thp may lead to memory bloat which may cause OOM.
> Through testing with some apps, we found that the reason of
> memory bloat is a huge page may contain some zero subpages
> (may accessed or not). And we found that most zero subpages
> are centralized in a few huge pages.
> 
> Following is a text_classification_rnn case for tensorflow:
> 
>   zero_subpages   huge_pages  waste
>   [     0,     1) 186         0.00%
>   [     1,     2) 23          0.01%
>   [     2,     4) 36          0.02%
>   [     4,     8) 67          0.08%
>   [     8,    16) 80          0.23%
>   [    16,    32) 109         0.61%
>   [    32,    64) 44          0.49%
>   [    64,   128) 12          0.30%
>   [   128,   256) 28          1.54%
>   [   256,   513) 159        18.03%
> 
> In the case, there are 187 huge pages (25% of the total huge pages)
> which contain more then 128 zero subpages. And these huge pages
> lead to 19.57% waste of the total rss. It means we can reclaim
> 19.57% memory by splitting the 187 huge pages and reclaiming the
> zero subpages.

What is the THP policy configuration in your testing? I assume you are
using defaults right? That would be always for THP and madvise for
defrag. Would it make more sense to use madvise mode for THP for your
workload? The THP code is rather complex and just by looking at the
diffstat this add quite a lot on top. Is this really worth it?
-- 
Michal Hocko
SUSE Labs




[Index of Archives]     [Linux ARM Kernel]     [Linux ARM]     [Linux Omap]     [Fedora ARM]     [IETF Annouce]     [Bugtraq]     [Linux OMAP]     [Linux MIPS]     [eCos]     [Asterisk Internet PBX]     [Linux API]

  Powered by Linux