On Fri 24-11-17 14:12:56, peter enderborg wrote:
> On 11/24/2017 11:14 AM, Michal Hocko wrote:
> > On Fri 24-11-17 11:07:07, Peter Enderborg wrote:
> >> When tuning the watermark_scale_factor to reduce stalls and compactions,
> >> the high mark is also changed, and it changes a bit too much. So this
> >> patch introduces a slope that can reduce this overhead a bit, or
> >> increase it if needed.
> >
> > This doesn't explain what the problem is, why it is a problem, and why we
> > need yet another tunable to address it. Users shouldn't really care about
> > internal stuff like watermark tuning for each watermark independently.
> > This looks like a gross hack. Please start over with the problem
> > description and then we can move on to an appropriate fix. Piling up
> > tuning knobs to work around problems is simply not acceptable.
>
> The original patch - https://lkml.org/lkml/2016/2/18/498 - had a
> discussion about small systems with 8GB RAM. In the handheld world, that's
> a lot of RAM. However, the magic number 2 used in the present algorithm
> is out of the blue. Compaction problems are the same for both small and
> big systems, so small devices also need to increase the watermark to
> get compaction to work and reduce direct reclaims. Changing the low
> watermark makes the direct reclaim rate drop a lot, but it will cause
> kswapd to work more, and that has a negative impact. Lowering the gap
> will smooth out the kswapd workload to suit embedded devices a lot
> better. This can be addressed by reducing the high watermark using the
> slope patch herein. I sort of understand your opinion on user knobs,
> but hard-coded magic numbers are even worse.

How can a poor user know how to tune it when _we_ cannot make a qualified
guess even though we do know all the implementation details? Really,
describe the problems you are seeing with the current code and we can talk
about a proper fix, or a heuristic when a fix is hard/infeasible.
--
Michal Hocko
SUSE Labs
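
For reference, a minimal userspace sketch of how the per-zone watermarks
are derived from watermark_scale_factor, modeled on
__setup_per_zone_wmarks() in mm/page_alloc.c around v4.14 (simplified; the
zone size and min-watermark numbers below are illustrative, not computed
kernel defaults). The hard-coded "* 2" on the high mark is the magic number
under discussion: raising watermark_scale_factor to lift the low mark also
moves the high mark twice as far above min, widening the low-to-high gap
that kswapd has to fill.

#include <stdio.h>
#include <inttypes.h>
#include <stdint.h>

int main(void)
{
	/* Illustrative zone: ~8GB of 4K pages, min watermark ~11.3MB. */
	uint64_t managed_pages = 2097152;
	uint64_t min_wmark = 2896;
	uint64_t watermark_scale_factor = 10;	/* /proc/sys/vm default */

	/* gap = max(min/4, managed_pages * factor / 10000) */
	uint64_t tmp = managed_pages * watermark_scale_factor / 10000;
	if (tmp < min_wmark / 4)
		tmp = min_wmark / 4;

	uint64_t low  = min_wmark + tmp;	/* low  = min + gap     */
	uint64_t high = min_wmark + tmp * 2;	/* high = min + 2 * gap */

	printf("low  = %" PRIu64 " pages\n", low);
	printf("high = %" PRIu64 " pages\n", high);
	return 0;
}

With the default factor of 10 this prints low = 4993 and high = 7090;
setting the factor to 100 gives low = 23867 and high = 44838. The
low-to-high distance grows linearly with the factor and cannot be tuned
independently, which is what the proposed slope knob was meant to change.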