Re: THP backed thread stacks

On Thu, Mar 9, 2023 at 4:05 PM Zach O'Keefe <zokeefe@xxxxxxxxxx> wrote:
>
> On Thu, Mar 9, 2023 at 3:33 PM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
> >
> > On 03/09/23 14:38, Zach O'Keefe wrote:
> > > On Wed, Mar 8, 2023 at 11:02 AM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
> > > >
> > > > On 03/06/23 16:40, Mike Kravetz wrote:
> > > > > On 03/06/23 19:15, Peter Xu wrote:
> > > > > > On Mon, Mar 06, 2023 at 03:57:30PM -0800, Mike Kravetz wrote:
> > > > > > >
> > > > > > > Just wondering if there is anything better or more selective that can be
> > > > > > > done?  Does it make sense to have THP backed stacks by default?  If not,
> > > > > > > who would be best placed to disable them?  A couple of thoughts:
> > > > > > > - The kernel could disable huge pages on stacks.  libpthread/glibc pass
> > > > > > >   the unused flag MAP_STACK.  We could key off this and disable huge pages.
> > > > > > >   However, I'm sure there is somebody somewhere today that is getting better
> > > > > > >   performance because they have huge pages backing their stacks.
> > > > > > > - We could push this to glibc/libpthreads and have them use
> > > > > > >   MADV_NOHUGEPAGE on thread stacks.  However, this also has the potential
> > > > > > >   of regressing performance if somebody somewhere is getting better
> > > > > > >   performance due to huge pages.
> > > > > >
> > > > > > Yes, it seems to me it's never safe to change a default behavior.
> > > > > >
> > > > > > For stacks I really can't tell why it must be different here.  I assume the
> > > > > > problem is the wasted space, and that it is amplified easily with N threads.  But
> > > > > > IIUC it'll be the same as THP backing normal memory, e.g., there can be a
> > > > > > per-thread mmap() of 2MB even if only 4K is used in each; then if such mmap()s
> > > > > > are populated by THP for each thread there'll also be a huge waste.
> > > >
> > > > I may be alone in my thinking here, but it seems that stacks by their nature
> > > > are not generally good candidates for huge pages.  I am just thinking about
> > > > the 'normal' use case where stacks contain local function data and arguments.
> > > > Am I missing something, or are huge pages really a benefit here?
> > > >
> > > > Of course, I can imagine some thread with a large amount of frequently
> > > > accessed data allocated on its stack which could benefit from huge
> > > > pages.  But, this seems to be an exception rather than the rule.
> > > >
> > > > I understand the argument that THP "always" means always and everywhere.
> > > > It just seems that thread stacks may be 'special enough' to consider
> > > > disabling huge pages for them by default.
> > >
> > > Just my drive-by 2c, but would agree with you here (at least wrt
> > > hugepages not being good candidates, in general). A user mmap()'ing
> > > memory has a lot more (direct) control over how they fault / utilize
> > > the memory: you know when you're running out of space and can map more
> > > space as needed. For these stacks, you're setting the stack size to
> > > 2MB just as a precaution so you can avoid overflow -- AFAIU there is
> > > no intention of using the whole mapping (and looking at some data,
> > > it's very likely you won't come close).
> > >
> > > That said, why bother setting the stack size attribute to 2MiB if there
> > > isn't some intention of possibly being THP-backed? Moreover, how did
> > > it happen that the mappings were always hugepage-aligned here?
> >
> > I do not have the details as to why the Java group chose 2MB for stack
> > size.  My 'guess' is that they are trying to save on virtual space (although
> > that seems silly).  2MB is actually reducing the default size.  The
> > default pthread stack size on my desktop (fedora) is 8MB [..]
>
> Oh, that's interesting -- I did not know that. That's huge.
>
> > [..]  This also is
> > a nice multiple of THP size.
> >
> > I think the hugepage alignment in their environment was largely luck.
> > One suggestion made was to change the stack size to avoid the alignment
> > and hugepage usage.  That 'works' but seems kind of hackish.
>
> That was my first thought: that the alignment was purely due to luck,
> and not somebody manually specifying it. Agreed it's kind of hackish
> if anyone can get bitten by this by sheer luck.
>
> > Also, David H pointed out the somewhat recent commit to align sufficiently
> > large mappings to THP boundaries.  This is going to make all stacks huge
> > page aligned.
>
> I think that change was reverted by Linus in commit 0ba09b173387
> ("Revert "mm: align larger anonymous mappings on THP boundaries""),
> until its perf regressions were better understood -- and I haven't
> seen a revamp of it.

The regression has been fixed, and it was not related to this commit. I
suggested to Andrew that he resurrect this commit a couple of months
ago, but that has not happened yet.

>
> > --
> > Mike Kravetz
>




