Quoting Michal Hocko (2017-09-15 07:36:19)
> On Thu 14-09-17 17:16:27, Taras Kondratiuk wrote:
> > Hi
> >
> > In our devices under low memory conditions we often get into a thrashing
> > state where the system spends most of its time re-reading pages of .text
> > sections from a file system (squashfs in our case). The working set doesn't
> > fit into the available page cache, so this is expected. The issue is that
> > the OOM killer doesn't get triggered because there is still memory to
> > reclaim. The system may be stuck in this state for quite some time and
> > usually dies because of watchdogs.
> >
> > We are trying to detect such a thrashing state early in order to take some
> > preventive action. It should be a pretty common issue, but so far we
> > haven't found any existing VM/IO statistics that can reliably detect this
> > state.
> >
> > Most metrics provide absolute values: number/rate of page faults,
> > rate of IO operations, number of stolen pages, etc. For a specific
> > device configuration we can determine threshold values for those
> > parameters that detect the thrashing state, but that is not feasible for
> > hundreds of device configurations.
> >
> > We are looking for a relative metric like "percent of CPU time spent
> > handling major page faults". With such a relative metric we could use a
> > common threshold across all devices. For now we have added such a metric
> > to /proc/stat in our kernel, but we would like to find a mechanism
> > available in the upstream kernel.
> >
> > Has somebody faced a similar issue? How are you solving it?
>
> Yes, this has been a pain point for a _long_ time, and we still do not have a
> good answer upstream. Johannes has been playing in this area [1].
> The main problem is that our OOM detection logic is based on the ability
> to reclaim memory in order to allocate new memory, and that is pretty much
> true for the page cache when you are thrashing. So we do not know that
> basically the whole time is spent refaulting the memory back and forth.
> We do have some refault stats for the page cache, but they are not
> integrated into the OOM detection logic, because this is really a
> non-trivial problem to solve without triggering early OOM killer
> invocations.
>
> [1] http://lkml.kernel.org/r/20170727153010.23347-1-hannes@xxxxxxxxxxx

Thanks Michal. memdelay looks promising. We will check it.
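
As a rough illustration of the kind of relative signal discussed above, here is a minimal userspace sketch (not from the thread, and not the /proc/stat metric Taras describes): it samples the workingset_refault and pgmajfault counters from /proc/vmstat over an interval and reports the fraction of major faults that were refaults of recently evicted pages. The counter names assume a v4.x-era kernel (later kernels split workingset_refault into file/anon variants), and the "ratio near 1.0 means thrashing" heuristic is only an assumption, not an upstream-endorsed detector.

/*
 * Hypothetical sketch: sample /proc/vmstat deltas and compute a crude
 * relative thrashing indicator. Counter names assumed for a v4.x kernel.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static unsigned long long vmstat_read(const char *key)
{
	char name[64];
	unsigned long long val = 0, found = 0;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 0;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, key)) {
			found = val;
			break;
		}
	}
	fclose(f);
	return found;
}

int main(void)
{
	for (;;) {
		unsigned long long refault0 = vmstat_read("workingset_refault");
		unsigned long long majflt0  = vmstat_read("pgmajfault");

		sleep(10);

		unsigned long long drefault = vmstat_read("workingset_refault") - refault0;
		unsigned long long dmajflt  = vmstat_read("pgmajfault") - majflt0;

		/*
		 * Crude relative signal: what share of major faults in this
		 * interval were refaults of recently evicted page cache pages.
		 * A ratio staying near (or above) 1.0 across several samples
		 * suggests the working set no longer fits in the page cache.
		 */
		double ratio = dmajflt ? (double)drefault / dmajflt : 0.0;
		printf("refaults=%llu majfaults=%llu ratio=%.2f\n",
		       drefault, dmajflt, ratio);
	}
	return 0;
}

Because it is a ratio rather than an absolute rate, a single threshold could in principle be shared across device configurations, which is the property the original mail is after; the memdelay work referenced in [1] aims to provide such a time-based pressure measure directly from the kernel.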