Re: [RFC] mm: Disable NUMA_BALANCING_DEFAULT_ENABLED and TRANSPARENT_HUGEPAGE on PREEMPT_RT

On Tue, Oct 26, 2021 at 06:51:00PM +0200, Sebastian Andrzej Siewior wrote:
> In https://lore.kernel.org/all/20200304091159.GN3818@xxxxxxxxxxxxxxxxxxx/
> Mel wrote:
> 
> | While I ack'd this, an RT application using THP is playing with fire,
> | I know the RT extension for SLE explicitly disables it from being enabled
> | at kernel config time. At minimum the critical regions should be mlocked
> | followed by prctl to disable future THP faults that are non-deterministic,
> | both from an allocation point of view, and a TLB access point of view. It's
> | still reasonable to expect a smaller TLB reach for huge pages than
> | base pages.
> 
> With TRANSPARENT_HUGEPAGE enabled I haven't seen spikes > 100us
> in cyclictest. I did have mlockall() enabled but nothing else.
> PR_SET_THP_DISABLE remained unchanged (enabled). Is there anything to
> stress this to be sure, or is mlockall() enough for THP while leaving
> the mlock()-only applications alone?
> 
> Then Mel continued with:
> 
> | It's a similar hazard with NUMA balancing, an RT application should either
> | disable balancing globally or set a memory policy that forces it to be
> | ignored. They should be doing this anyway to avoid non-deterministic
> | memory access costs due to NUMA artifacts but it wouldn't surprise me
> | if some applications got it wrong.
> 
> Usually (often) RT applications are pinned. I would assume that on a
> bigger box the RT tasks are at least pinned to a node. How bad can this
> get in the worst case? cyclictest pins every thread to a CPU; I could
> remove that for testing. What would be a good test to push this to its
> limit?
> 
> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>

Somewhat tentative but

Acked-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>

It's tentative because NUMA balancing is only disabled by default on
PREEMPT_RT and can still be enabled at runtime, whereas THP is disabled
entirely and can never be enabled. That is a little inconsistent; it would
be preferable for the two to match, either by disabling NUMA_BALANCING
entirely or by only forbidding TRANSPARENT_HUGEPAGE_ALWAYS && PREEMPT_RT.
I'm ok with either.

There is the possibility that an RT application could use THP safely via
madvise() and mlock(). That way, THP is still available, but only to an
application that has explicit knowledge of THP and is smart enough to
request it only during the initialisation phase, with something like

diff --git a/mm/Kconfig b/mm/Kconfig
index d16ba9249bc5..d6ccca216028 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -393,6 +393,7 @@ choice
 
 	config TRANSPARENT_HUGEPAGE_ALWAYS
 		bool "always"
+		depends on !PREEMPT_RT
 	help
 	  Enabling Transparent Hugepage always, can increase the
 	  memory footprint of applications without a guaranteed
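
To make the madvise()/mlock() route concrete, a rough userspace sketch of
the init-time pattern follows. It is purely illustrative: the function
name, pool size and error handling are invented for the example and are
not part of the patch.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/prctl.h>

#define POOL_SIZE	(16UL << 20)	/* example: 16MB working-set pool */

static void *rt_pool_init(void)
{
	void *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (pool == MAP_FAILED) {
		perror("mmap");
		exit(EXIT_FAILURE);
	}

	/* Opt this region in to THP before entering the RT phase. */
	if (madvise(pool, POOL_SIZE, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");

	/*
	 * Fault everything in now and pin it so that neither huge nor base
	 * page faults occur once the critical sections start.
	 */
	memset(pool, 0, POOL_SIZE);
	if (mlock(pool, POOL_SIZE)) {
		perror("mlock");
		exit(EXIT_FAILURE);
	}

	/*
	 * Disable THP for any future faults in this process so later
	 * allocations cannot pick up huge pages with non-deterministic
	 * allocation latency.
	 */
	if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0))
		perror("prctl(PR_SET_THP_DISABLE)");

	return pool;
}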

There is the slight caveat that even then THP can have inconsistent
latencies if a THP gets split, leaving separate entries for base and huge
pages. The responsibility would be on the person deploying the application
to ensure the platform is suitable for both RT and huge pages.
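
On the NUMA balancing side, the explicit "pin the task and bind its
memory" approach from the quoted discussion could look roughly like the
sketch below. Again this is only illustrative; the CPU and node numbers
are placeholders and set_mempolicy() comes from libnuma's numaif.h (link
with -lnuma).

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <numaif.h>

/*
 * Pin the calling thread to one CPU and bind its allocations to the
 * matching NUMA node so automatic balancing should have nothing left
 * to migrate.
 */
static int rt_pin_cpu_and_node(int cpu, int node)
{
	cpu_set_t set;
	unsigned long nodemask = 1UL << node;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return -1;
	}

	if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8)) {
		perror("set_mempolicy");
		return -1;
	}

	return 0;
}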

-- 
Mel Gorman
SUSE Labs



