It would be nice if you could specify the exact numbers you would like to see.
On Tue, Apr 14, 2020 at 2:49 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
....
>> As far as I see, the numbers which vmpressure uses are closer to the RSS
>> of userspace processes for memory utilization.
>> The default calibration of memory.pressure_level_medium at 60% makes an
>> 8GB device hit the memory threshold when RSS utilization
>> reaches ~5 GB, and that is a bit too early; I observed it happening
>> immediately after boot. A reasonable level should be
>> in the 70-80% range, depending on the SW preloaded on your device.
> I am not sure I follow. Levels are based on reclaim ineffectivity, not
> on overall memory utilization. So reclaim effectivity only has to drop
> to 40% to trigger the medium level. While you are right that the
> threshold for the event is pretty arbitrary, I would like to hear why
> that doesn't work in your environment. It shouldn't really depend on the
> amount of memory, as this is a percentage, right?
It depends not only on the amount of memory or reclaims but also on what software is running.
As I see from vmscan.c, vmpressure is activated from the various shrink_node() calls or, basically, from do_try_to_free_pages().
To hit this state you need to be short of memory for some reason, so the amount of memory plays a role here.
In particular, my case is heavily impacted by GPU consumption (via CMA), which can easily take gigabytes.
Apps can take a gigabyte as well.
So reclaim will be called quite often when memory runs short (4K calls are possible).
A level change is handled only when the number of scanned pages exceeds the window size, and the default of 512 pages is too small, as that is only 2 MB.
Such small slices are a source of false triggers.
Next, pressure is counted as
	unsigned long scale = scanned + reclaimed;
	pressure = scale - (reclaimed * scale / scanned);
	pressure = pressure * 100 / scale;
So for 512 scanned pages (let's use the minimum window), reclaimed has to drop to 204 pages to hit the 60% (medium) threshold, and to 25 pages for 95% (critical).
When pressure does happen (usually at 85% of memory used, hitting the critical level), I rarely see something close to the real numbers:
vmpressure_work_fn: scanned 545, reclaimed 144 <-- 73%
vmpressure_work_fn: scanned 16283, reclaimed 2495 <-- same session, but 84%
Most of the time it is looping between kswapd and lmkd reclaim failures, consuming quite a lot of CPU.
On the vmscan calls everything looks as expected:
[ 312.410938] vmpressure: tree 0 scanned 4, reclaimed 2
[ 312.410939] vmpressure: tree 0 scanned 120, reclaimed 62
[ 312.410939] vmpressure: tree 1 scanned 2, reclaimed 1
[ 312.410940] vmpressure: tree 1 scanned 120, reclaimed 62
[ 312.410941] vmpressure: tree 0 scanned 0, reclaimed 0
>> From another point of view, having memory.pressure_level_critical set to
>> 95% may never trigger, as that is a level where the OOM killer already
>> starts to kill processes,
>> and in some cases it is even worse than the now removed Android low memory
>> killer. For such cases it makes sense to shift the threshold down to
>> 85-90% so the device reliably
>> handles low memory situations and does not rely only on oom_score_adj hints.
>>
>> The next important parameter for tweaking is memory.pressure_window, which
>> it makes sense to double in order to reduce the number of userspace
>> activations and save some power by reducing sensitivity.
> Could you be more specific, please?
These are the parameters that are the most sensitive for tweaking in my case.
At least someone who uses vmpressure would be able to tune them up or down depending on the combination of apps.
>> For 12 and 16 GB devices the situation will be similar but worse: with
>> the current settings they will hit the medium memory level while ~5 or
>> 6.5 GB of memory is still free.
>>
>>
>>>
>>> Anyway, I have to confess I am not a big fan of this. vmpressure turned
>>> out to be a very weak interface for measuring memory pressure. Not only
>>> is it not NUMA aware, which makes it unusable on many systems, it also
>>> gives data way too late in practice.
>>>
>>> Btw. why don't you use /proc/pressure/memory resp. its memcg counterpart
>>> to measure the memory pressure in the first place?
>>>
>>
>> According to our checks, PSI produced numbers only when swap was enabled,
>> e.g. a swapless device at 75% RAM utilization:
> I believe you should discuss that with the people familiar with PSI
> internals (Johannes is already in the CC list).
Thanks for the pointer, I will reply to his emails.
> --
> Michal Hocko
> SUSE Labs
With Best Wishes,
Leonid