On Thu, Nov 15, 2012 at 12:11:47AM -0800, David Rientjes wrote:
[...]
> Might not be too difficult if you implement your own cgroup to aggregate
> these tasks for which you want to know memory pressure events; it would
> have to be triggered for the task trying to allocate memory at any given
> time and how hard it was to allocate that memory in the slowpath, tie it
> back to that task's memory pressure cgroup, and then report the trigger if
> it's over a user-defined threshold normalized to the 0-100 scale. Then
> you could co-mount this cgroup with memcg, cpusets, or just do it for the
> root cgroup for users who want to monitor the entire system

This seems doable. But

> (CONFIG_CGROUPS is enabled by default).

Hehe, you're saying that we have to have cgroups=y. :) But some folks
were deliberately asking us to make the cgroups optional.

OK, here is what I can try to do:

- Implement a memory pressure cgroup as you described; by doing so we'd
  make the thing play well with cpusets and memcg;
- This will be eventfd()-based;
- Once done, we will have a solution for pretty much every major use
  case (i.e. servers, desktops and Android; they all have cgroups
  enabled);
(- Optionally, if there is demand, for CGROUPS=n we can implement a
  separate sysfs file with exactly the same eventfd interface; it would
  only report global pressure. This would be for folks who don't want
  cgroups for some reason. The interface can be discussed separately.)

Thanks,
Anton.