On Mon, Aug 26, 2019 at 02:06:30PM +0200, Michal Hocko wrote:
> On Tue 13-08-19 12:51:43, Michal Hocko wrote:
> > On Mon 12-08-19 11:07:25, Johannes Weiner wrote:
> > > On Mon, Aug 12, 2019 at 10:09:47AM +0200, Michal Hocko wrote:
> [...]
> > > > > Maybe the refaults will be fine - but latency expectations
> > > > > around mapped page cache certainly are a lot higher than
> > > > > unmapped cache.
> > > > >
> > > > > So I'm a bit reluctant about this patch. If Minchan can be
> > > > > happy with the lock batching, I'd prefer that.
> > > >
> > > > Yes, it seems that the regular lock drop&relock helps in
> > > > Minchan's case, but this is the kind of change that might have
> > > > other subtle side effects. E.g. will-it-scale has noticed a
> > > > regression [1], likely because the critical section is shorter
> > > > and the overall throughput of the operation decreases. Now,
> > > > w-i-s is an artificial benchmark, so I wouldn't lose much sleep
> > > > over it normally, but we have already seen real regressions
> > > > when the locking pattern has changed in the past, so I would be
> > > > a bit cautious.
> > >
> > > I'm much more concerned about fundamentally changing the aging
> > > policy of mapped page cache than about the lock breaking scheme.
> > > With locking we worry about CPU effects; with aging we worry
> > > about additional IO.
> >
> > But the latter is a little easier to observe and debug IMHO.
> > People are quite used to watching for major faults, from my
> > experience, as that is an easy metric to compare.

Rootcausing additional (re)faults is really difficult. We're talking
about a slight trend change in caching behavior in a sea of millions
of pages. There could be so many factors causing this, and for most
of them you have to patch debugging stuff into the kernel to rule
them out. A CPU regression you can figure out with perf.

> > > > As I've said, this RFC is mostly to open a discussion. I would
> > > > really like to weigh the overhead of mark_page_accessed against
> > > > the potential scenarios where refaults would be visible in
> > > > practice. I can imagine that short-lived, statically linked
> > > > applications have a higher chance of being the only user,
> > > > unlike libraries, which are often mapped via several ptes. But
> > > > the main problem in evaluating this is that there are many
> > > > other external factors that can trigger the worst case.
> > >
> > > We can discuss the pros and cons, but ultimately we simply need
> > > to test it against real workloads to see if changing the
> > > promotion rules regresses the amount of paging we do in practice.
> >
> > Agreed. Do you see any option other than trying it out and
> > reverting if we see regressions? We would get a workload
> > description, which would be helpful for future regression testing
> > when touching this area. We can start slower and keep it in
> > linux-next for a release cycle to catch any fallout early.
> >
> > Thoughts?
>
> ping...

Personally, I'm not convinced by this patch. I think it's a pretty
drastic change in aging heuristics just to address a CPU overhead
problem that has simpler, easier-to-verify alternative solutions.

It WOULD be great to clarify and improve the aging model for mapped
cache, to make it a bit easier to reason about. But this patch does
not really get there either. Instead of taking a serious look at
mapped cache lifetime and usage scenarios, the changelog is more in
"let's see what breaks if we take out this screw here" territory. So
I'm afraid I don't think the patch & changelog in their current shape
should go upstream.
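
To recap what's at stake: the promotion heuristic under discussion is
the two-touch rule in mark_page_accessed(). Heavily simplified sketch
of its core logic only - the real mm/swap.c code also deals with
compound pages, unevictable pages, pagevec batching and idle page
tracking:

	/*
	 * Simplified sketch of the two-touch promotion rule in
	 * mark_page_accessed(); not the actual mm/swap.c code.
	 */
	void mark_page_accessed(struct page *page)
	{
		if (!PageActive(page) && PageReferenced(page)) {
			/*
			 * Second touch while on the inactive list:
			 * promote the page to the active list.
			 */
			if (PageLRU(page))
				activate_page(page);
			ClearPageReferenced(page);
		} else if (!PageReferenced(page)) {
			/* First touch: only set the referenced bit. */
			SetPageReferenced(page);
		}
	}

Every time mapped cache goes through this path it costs CPU, but it's
also what keeps a twice-used mapped page from being reclaimed ahead of
colder cache - which is exactly the IO-vs-CPU tradeoff above.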
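
And for concreteness, the lock batching alternative I mean is roughly
the following. This is an illustrative sketch only, not the actual
patch: SWEEP_BATCH and do_one_pte() are made-up names, and it assumes
a non-empty range within a single pmd:

	#define SWEEP_BATCH	64	/* made-up batch size */

	static void sweep_pte_range(struct mm_struct *mm, pmd_t *pmd,
				    unsigned long addr, unsigned long end)
	{
		unsigned int batch = 0;
		spinlock_t *ptl;
		pte_t *pte;

		pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
		while (addr < end) {
			do_one_pte(pte, addr);	/* the real per-pte work */
			addr += PAGE_SIZE;
			pte++;

			if (++batch == SWEEP_BATCH && addr < end) {
				batch = 0;
				/* Drop the lock so waiters don't stall. */
				pte_unmap_unlock(pte - 1, ptl);
				cond_resched();
				pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
			}
		}
		pte_unmap_unlock(pte - 1, ptl);
	}

The tradeoff Michal points out above is real - shorter critical
sections can cost throughput on lock-heavy microbenchmarks like
will-it-scale - but it bounds worst-case latency without touching the
aging policy, and that is much easier to verify.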