On Wed 29-07-15 18:36:40, Vladimir Davydov wrote:
> On Wed, Jul 29, 2015 at 05:08:55PM +0200, Michal Hocko wrote:
> > On Wed 29-07-15 17:45:39, Vladimir Davydov wrote:
[...]
> > > Page table scan approach has the inherent problem - it ignores unmapped
> > > page cache. If a workload does a lot of read/write or map-access-unmap
> > > operations, we won't be able to even roughly estimate its wss.
> >
> > That page cache is trivially reclaimable if it is clean. If it needs
> > writeback then it is non-idle only until the next writeback. So why does
> > it matter for the estimation?
>
> Because it might be a part of a workload's working set, in which case
> evicting it will make the workload lag.

My point was that no sane application will rely on the unmapped
pagecache being part of the working set. But you are right that you
might have a more complex load consisting of many applications, each
doing buffered IO on the same set of files, which might get evicted due
to other memory pressure in the meantime and suffer higher latencies.
This is where a low limit covering this memory as well might be helpful.
--
Michal Hocko
SUSE Labs
--
To unsubscribe from this list: send the line "unsubscribe linux-api" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html