Re: [RFC PATCH 0/3] Cgroup-based THP control

On Fri 01-11-24 16:39:07, Stepanov Anatoly wrote:
> On 11/1/2024 4:28 PM, Michal Hocko wrote:
> > On Fri 01-11-24 16:24:55, Stepanov Anatoly wrote:
> >> On 11/1/2024 4:15 PM, Michal Hocko wrote:
> >>> On Fri 01-11-24 14:54:27, Stepanov Anatoly wrote:
> >>>> On 11/1/2024 10:35 AM, Michal Hocko wrote:
> >>>>> On Thu 31-10-24 17:37:12, Stepanov Anatoly wrote:
> >>>>>> If we consider the inheritance approach (prctl + launcher), it's fine until we need to change
> >>>>>> the THP mode property for several tasks at once; in that case some batch-change approach is needed.
> >>>>>
> >>>>> I do not follow. How is this any different from a single process? Or do
> >>>>> you mean to change the mode for an already running process?
> >>>>>
> >>>> yes, for an already running set of processes
> >>>
> >>> Why is that preferred over setting the policy upfront?
> >> Setting the policy in advance is fine as a first step. But we
> >> might not know in advance which exact policy is most beneficial
> >> for one set of apps or another.
> 
> > 
> > How do you plan to find that out when the application is already
> > running?
> For example, if someone wants to compare a DB server's performance with THP off vs. THP on,
> and restarting the DB server isn't an option.

So you essentially expect the user to tell you that they want THP, and
you want to make that happen on the fly, correct? It is not as though
there is actual monitoring and dynamic policy adjustment.

If that is the case then I am not really convinced this is worthwhile
to support, TBH. I can see that a workload might know in advance that
it benefits from THP, but I am much more dubious that "learning during
runtime" is a real-life thing. I might be wrong of course, but if
somebody has performance monitoring that is able to identify
bottlenecks in a specific workload, then applying THP to the whole
group of processes seems like a very crude way to deal with that. I
could see a case for process_madvise(MADV_COLLAPSE) to deal with
specific memory hotspots, though.
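
For the record, a minimal sketch of what such a targeted collapse
could look like from userspace. This is an illustration only: the
pid/addr/len arguments are placeholders, it assumes a 6.1+ kernel
with MADV_COLLAPSE and sufficient privileges over the target (see
process_madvise(2)), and it goes through syscall(2) directly since
libc wrappers may be absent:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25        /* value from <asm-generic/mman-common.h> */
#endif

int main(int argc, char **argv)
{
        if (argc != 4) {
                fprintf(stderr, "usage: %s <pid> <addr> <len>\n", argv[0]);
                return 1;
        }

        pid_t pid = (pid_t)atol(argv[1]);
        struct iovec iov = {
                .iov_base = (void *)strtoul(argv[2], NULL, 0),
                .iov_len  = strtoul(argv[3], NULL, 0),
        };

        /* name the target process with a pidfd */
        int pidfd = syscall(SYS_pidfd_open, pid, 0);
        if (pidfd < 0) {
                perror("pidfd_open");
                return 1;
        }

        /* synchronously collapse just this range into THPs, if possible */
        ssize_t ret = syscall(SYS_process_madvise, pidfd, &iov, 1,
                              MADV_COLLAPSE, 0);
        if (ret < 0)
                perror("process_madvise");
        else
                printf("advice applied to %zd bytes\n", ret);

        return ret < 0;
}

The point being that a monitoring tool which has identified a hot
range can get huge pages exactly there, without flipping the THP mode
for a whole group of processes.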
-- 
Michal Hocko
SUSE Labs



