Re: [PATCH] mm: fix low-high watermark distance on small systems

On Tue, Mar 20, 2018 at 04:22:36PM +0530, Vinayak Menon wrote:
> >
> >> up a few more times, and doing shorter steals is better than kswapd stealing more in a single run. The latter
> >> does not improve direct reclaims and causes thrashing too.
> >
> > That's the tradeoff of kswapd aggressiveness to avoid high rate
> > direct reclaim.
> 
> We can call it a trade-off only if increasing the aggressiveness of kswapd actually reduces direct reclaims.
> But as shown by the data I shared, the added aggressiveness does not reduce direct reclaims; it just causes
> unnecessary reclaim, i.e. a much lower low-high gap gives the same benefit on direct reclaims with far less
> reclaim.

As I said, it depends on the workload. I can easily construct a simple
test that breaks it.
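
For anyone following the thread, the distance in question comes from
__setup_per_zone_wmarks() in mm/page_alloc.c: the low-to-high gap is
roughly max(min_wmark / 4, managed_pages * watermark_scale_factor / 10000),
so with the default scale factor of 10 it grows linearly at about 0.1%
of zone memory. Below is a stand-alone sketch of that arithmetic (my own
paraphrase, not kernel code; kswapd_gap_pages() and the 4K page-size
assumption are mine, and it ignores per-zone splits, highmem and the
min_free_kbytes clamps):

/*
 * User-space illustration of the current linear kswapd-distance
 * heuristic.  min_free_kbytes is approximated as sqrt(16 * lowmem_kbytes)
 * the way init_per_zone_wmark_min() does, without the 128..65536 clamp.
 * Build with: gcc wmark_gap.c -lm
 */
#include <math.h>
#include <stdio.h>

#define PAGE_KB 4UL			/* assume 4K pages */

static unsigned long kswapd_gap_pages(unsigned long managed_pages,
				      unsigned long min_pages,
				      unsigned long wsf)
{
	/* gap = max(min_wmark / 4, managed_pages * wsf / 10000) */
	unsigned long scaled = (unsigned long)
		((unsigned long long)managed_pages * wsf / 10000);
	unsigned long floor = min_pages / 4;

	return scaled > floor ? scaled : floor;
}

int main(void)
{
	const unsigned long wsf = 10;	/* default watermark_scale_factor */
	const unsigned long ram_mb[] = { 512, 2048, 8192, 143360 /* ~140G */ };
	unsigned int i;

	for (i = 0; i < sizeof(ram_mb) / sizeof(ram_mb[0]); i++) {
		unsigned long managed = ram_mb[i] * 1024 / PAGE_KB;
		unsigned long min_kb =
			(unsigned long)sqrt(16.0 * ram_mb[i] * 1024);
		unsigned long gap =
			kswapd_gap_pages(managed, min_kb / PAGE_KB, wsf);

		printf("%7lu MB RAM: min ~%5lu KB, low-high gap ~%7lu KB\n",
		       ram_mb[i], min_kb, gap * PAGE_KB);
	}
	return 0;
}

With the default scale factor that works out to under a megabyte of
distance on a 512MB device versus on the order of 140MB on the 140GB
machine from the wsf commit message, which is the linear behaviour
being debated here.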

> 
> >>> Don't get me wrong. I don't want you to test every wsf setting with various
> >>> workloads to prove your logic is better. What I want to say here is that
> >>> it's a heuristic, so it can't be perfect for every workload; if you change
> >>> to non-linear, you could be better off but others might not be.
> >> Yes, I understand your point. But since mmtests and the Android tests showed similar results, I thought the
> >> heuristic might just work across workloads. I assume from Johannes's tests on a 140GB machine (from the
> >> commit msg of the patch which introduced wsf) that the current low-high gap works well without thrashing
> >> on bigger machines. This made me assume that the desired behavior is non-linear. So the non-linear behavior will
> >> not make any difference to higher-RAM machines, as the low-high gap remains almost the same, as shown in the table
> >> below. But I understand your point: for a different workload on smaller machines, I am not sure the benefit I
> >> see would be observed, though that's the same problem with the current wsf too.
> > True. That's why I don't want to make it complicated. Later, if someone complains,
> > "linear is better in his testing", are you happy to roll back to it?
> >
> > You might argue it's the same problem now, but at least the as-is code is simple to
> > understand.
> 
> Yes, I agree that there can be workloads on low-RAM devices that may see a side effect. But since popular use cases like those on Android

My concern is not the side effect but adding more heuristics without
proving they are generally better.

I don't think repeated app launching on Android reflects a real user
scenario. No one does that in real life, except for people who want to
show benchmark results on YouTube.
About mmtests, what kinds of tests did you perform, and what were the
results? If you reduced thrashing, how much did the test results
improve? Did every test improve? I'm looking for results from the
benchmarks themselves, not just vmstat. Such wide testing would be more
convincing.

> and also mmtests show the problem, which the patch fixes, can we try picking it up and see if someone complains? I see that
> there were other reports of this: https://lkml.org/lkml/2017/11/24/167 . Do you suggest the tunable approach taken by the patch
> in that link, so that varying use cases can be accommodated? I wanted to avoid a new tunable if a heuristic like this patch
> just works.

Actually, I don't want to touch it unless we have a better feedback
algorithm.

Anyway, it's just my opinion; I have done my best to explain it. I will
defer to the maintainer.

Thanks.



