Re: Caching/buffers become useless after some time

On 08/24/2018 02:11 AM, Marinko Catovic wrote:
>> Hmm it's actually interesting to see GFP_TRANSHUGE there and not
>> GFP_TRANSHUGE_LIGHT. What's your thp defrag setting? (cat
>> /sys/kernel/mm/transparent_hugepage/enabled). Maybe it's set to
>> "always", or there's a heavily faulting process that's using
>> madvise(MADV_HUGEPAGE). If that's the case, setting it to "defer" or
>> even "never" could be a workaround.
> 
> cat /sys/kernel/mm/transparent_hugepage/enabled
> always [madvise] never

Hmm, my mistake. I was actually interested in
/sys/kernel/mm/transparent_hugepage/defrag.
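
For reference, the output should look something like this (the bracketed
value is the active one; the exact option list depends on kernel version):

  $ cat /sys/kernel/mm/transparent_hugepage/defrag
  always defer defer+madvise [madvise] never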

> according to the docs this is the default
>> "madvise" will enter direct reclaim like "always" but only for regions
>> that have used madvise(MADV_HUGEPAGE). This is the default behaviour.

Yeah, but that quote is about 'defrag'. For 'enabled', the default should
be 'always', though that's determined by a kernel config option, I think.
Let's see what you have for 'defrag'...

> would any change there kick in immediately, even when in the 100M/10G case?

If it's indeed what's preventing the cache from growing back, changing it
should result in a gradual increase. Note that THP doesn't look like a
probable cause, but the trace didn't contain any other allocations that
could be responsible for the high-order direct reclaim.
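
BTW, if you want to double-check where the high-order direct reclaim
comes from, the vmscan tracepoints can show it. A sketch, assuming perf
is installed and the tracepoints are available on your kernel:

  # record every direct reclaim entry system-wide for 30 seconds
  $ perf record -a -e vmscan:mm_vmscan_direct_reclaim_begin -- sleep 30
  # the gfp_flags field in the output shows GFP_TRANSHUGE if THP triggers it
  $ perf script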

>> or there's a heavily faulting process that's using madvise(MADV_HUGEPAGE)
> 
> are you suggesting that a single process can cause this?
> how would one be able to identify it? should killing it allow the cache
> to be populated again instantly? if so, I could start killing processes
> on the host one by one until there is an improvement to observe.

It's not the process's fault, and killing it might disrupt the
observation in unexpected ways. It's simpler to change the global
setting to "never" to confirm or rule this out.
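
I.e. (as root; affects new allocations immediately, no reboot needed):

  # disable THP globally, including MADV_HUGEPAGE regions
  $ echo never > /sys/kernel/mm/transparent_hugepage/enabled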

Ah, I checked the trace and it seems to be "php-cgi". Interesting that
they use madvise(MADV_HUGEPAGE). Anyway, the above still applies.
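
If you want to verify that from userspace: the 'hg' bit in the VmFlags
line of smaps marks VMAs advised with MADV_HUGEPAGE. A quick check
(assuming the process name is php-cgi; pgrep -o just picks the oldest
matching pid):

  # count MADV_HUGEPAGE-advised mappings in one php-cgi process
  $ grep -c '^VmFlags:.* hg' /proc/$(pgrep -o php-cgi)/smaps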

> so far I can tell that it is not the database server, since restarting
> it did not help at all.
> 
> Please remember that, as I mentioned when suggesting this, I can see how
> the buffers (the 100MB value) are `oscillating`. When in the cache-useless
> state the value jumps around literally every second, e.g. from 100 to 102,
> then 99, 104, 85, 101, 105, 98, .. and so on, and over the days it drifts
> from the well-populated several GB at the beginning down to those 100MB.
> So anything that has an effect should be measurable instantly, which is
> to date only achieved by dropping caches.
> 
> Please tell me if you need any measurements again, when or at what
> state, perhaps with code snippets to fit your needs.
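
If you want to log the oscillation you describe, a trivial loop over
/proc/meminfo is enough (a sketch; adjust the interval as needed):

  # print a timestamp plus Buffers/Cached every second
  $ while sleep 1; do date +%T; grep -E '^(Buffers|Cached):' /proc/meminfo; done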

For now:

1. Send the current value of /sys/kernel/mm/transparent_hugepage/defrag.
2. Unless it's already 'defer' or 'never', try changing it to 'defer'
   (command below).
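
I.e. (as root; takes effect immediately):

  # 'defer' wakes kswapd/kcompactd to do the work in the background
  # instead of stalling allocations in direct reclaim/compaction
  $ echo defer > /sys/kernel/mm/transparent_hugepage/defrag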

Thanks.



