On Thu, Oct 28, 2021 at 02:04:29PM +0200, Sebastian Andrzej Siewior wrote:
> On 2021-10-27 10:12:12 [+0100], Mel Gorman wrote:
> > On Tue, Oct 26, 2021 at 06:51:00PM +0200, Sebastian Andrzej Siewior wrote:
> > > In https://lore.kernel.org/all/20200304091159.GN3818@xxxxxxxxxxxxxxxxxxx/
> > > Mel wrote:
> > >
> > > | While I ack'd this, an RT application using THP is playing with fire,
> > > | I know the RT extension for SLE explicitly disables it from being
> > > | enabled at kernel config time. At minimum the critical regions should
> > > | be mlocked followed by prctl to disable future THP faults that are
> > > | non-deterministic, both from an allocation point of view, and a TLB
> > > | access point of view. It's still reasonable to expect a smaller TLB
> > > | reach for huge pages than base pages.
> > >
> > > With TRANSPARENT_HUGEPAGE enabled I haven't seen spikes > 100us in
> > > cyclictest. I did have mlockall() enabled but nothing else.
> > > PR_SET_THP_DISABLE remained unchanged (so THP stays enabled). Is there
> > > a way to stress this to be sure, or is mlockall() enough so that THP
> > > leaves the mlock()ed application alone?
> > >
> > > Then Mel continued with:
> > >
> > > | It's a similar hazard with NUMA balancing, an RT application should
> > > | either disable balancing globally or set a memory policy that forces
> > > | it to be ignored. They should be doing this anyway to avoid
> > > | non-deterministic memory access costs due to NUMA artifacts but it
> > > | wouldn't surprise me if some applications got it wrong.
> > >
> > > Usually (often) RT applications are pinned. I would assume that on a
> > > bigger box the RT tasks are at least pinned to a node. How bad can
> > > this get in the worst case? cyclictest pins every thread to a CPU; I
> > > could remove this for testing. What would be a good test to push this
> > > to its limit?
> > >
> > > Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> > > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> >
> > Somewhat tentative, but
> >
> > Acked-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> >
> > It's tentative because NUMA balancing is disabled by default on
> > PREEMPT_RT but it is still possible to enable it, whereas THP is
> > disabled entirely and can never be enabled. This is a little
> > inconsistent, and it would be preferable that they match, either by
> > disabling NUMA_BALANCING entirely or by forbidding
> > TRANSPARENT_HUGEPAGE_ALWAYS && PREEMPT_RT. I'm ok with either.
>
> Oh. I can go either way depending on the input ;)
>
> > There is the possibility that an RT application could use THP safely
> > by using madvise() and mlock(). That way, THP is available, but only
> > if an application has explicit knowledge of THP and is smart enough to
> > set it up only during the initialisation phase.
>
> Yes, that was my question. So if you have "always", call mlockall() in
> the application, and then have other threads of that same application
> doing malloc/free of memory that the RT thread is not touching, then
> bad things can still happen, right?
> My understanding is that all threads can be blocked in a page fault if
> there is some THP operation going on.

Hmm, it could happen if the memory used by the RT thread was not
hugepage-aligned, and khugepaged could potentially interfere. khugepaged
can be disabled if tuned properly, but the alignment requirement would
be tricky. Probably safer to just disable it like it has been
historically.
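To make the init-phase idea concrete, the sequence I have in mind is
roughly the following. This is an untested sketch, not something I'm
proposing for cyclictest, and rt_memory_init() is a made-up name:

  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/prctl.h>
  #include <linux/prctl.h>

  /* Untested sketch: run once during the initialisation phase, before
   * any RT-critical thread starts doing real work.
   */
  static void rt_memory_init(void)
  {
          /* Disable THP for all future faults of this process. This
           * also makes khugepaged skip the mm, so neither the fault
           * path nor collapsing hands out huge pages behind the
           * application's back.
           */
          if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0))
                  perror("prctl(PR_SET_THP_DISABLE)");

          /* Fault in and pin all current and future mappings (as base
           * pages, given the prctl above) so the RT threads never take
           * a page fault at runtime.
           */
          if (mlockall(MCL_CURRENT | MCL_FUTURE))
                  perror("mlockall");
  }

An application that does want huge pages for specific regions would
skip the prctl and instead madvise(MADV_HUGEPAGE) carefully aligned
regions before mlocking them, which is the madvise() + mlock() case
above, and only works if the application knows what it is doing.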
For consistency, force NUMA_BALANCING to be disabled too, because it
introduces non-deterministic latencies even if memory regions are
locked and bound.

> > There is the slight caveat that even then THP can have inconsistent
> > latencies if it has a split THP with separate entries for base and
> > huge pages. The responsibility would be on the person deploying the
> > application to ensure a platform was suitable for both RT and using
> > huge pages.
>
> split THP?

Sorry, "split TLB", where part of the TLB only handles base pages and
another part handles huge pages.
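Going back to the memory policy point: the "memory policy that forces
it to be ignored" approach from the old mail is along these lines.
Again an untested sketch; rt_numa_init() and the choice of node 0 are
made up, and it needs libnuma (-lnuma):

  #include <stdio.h>
  #include <numaif.h>     /* set_mempolicy() wrapper from libnuma */

  /* Untested sketch: bind the process's memory to a single node so
   * NUMA balancing's hinting faults have nowhere to migrate it to.
   */
  static void rt_numa_init(void)
  {
          unsigned long nodemask = 1UL << 0;      /* assume node 0 */

          if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask)))
                  perror("set_mempolicy");
  }

That pairs with pinning the RT threads to CPUs on the same node, which
the pinned applications mentioned above are doing anyway.

-- 
Mel Gorman
SUSE Labs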