On 11/21/2012 12:46 PM, Anton Vorontsov wrote:
> On Wed, Nov 21, 2012 at 12:27:28PM +0400, Glauber Costa wrote:
>> On 11/20/2012 10:23 PM, David Rientjes wrote:
>>> Anton can correct me if I'm wrong, but I certainly don't think this is
>>> where mempressure is headed: I don't think any accounting needs to be done
>
> Yup, I'd rather not do any accounting, at least not in bytes.

It doesn't matter here, but memcg doesn't do any accounting in bytes
either. It only displays usage in bytes; internally, it is all pages.
The bytes representation is convenient because it lets userspace stay
agnostic of page sizes.

>>> and, if it is, it's a design issue that should be addressed now rather
>>> than later. I believe notifications should occur on current's mempressure
>>> cgroup depending on its level of reclaim: nobody cares if your memcg has a
>>> limit of 64GB when you only have 32GB of RAM, we'll want the notification.
>>
>> My main concern is that to trigger those notifications, one would have
>> to first determine whether or not the particular group of tasks is under
>> pressure.
>
> As far as I understand, the notifications will be triggered by a process
> that tries to allocate memory. So, effectively that would be a per-process
> pressure.
>
> So, if one process in a group is suffering, we notify that "a process in a
> group is under pressure", and the notification goes to a cgroup listener

If you effectively have a per-process mechanism, why do you need an
extra cgroup at all?

It seems to me that this is simply something that should be inherited
over fork: you register the notifier in your first process, and it is
then valid for everybody in the process tree. If you need tasks in
different process trees to respond to the same notifier, you just
register the same notifier in each of those processes.
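
Purely as an illustration of the inherit-over-fork idea, not of any existing
kernel interface: assume a hypothetical /dev/mempressure character device
that delivers pressure events on read(). Since an open file descriptor is
shared with children across fork() like any other fd, one registration in
the first process would cover the whole tree; registering the same way in a
second, unrelated process would cover a second tree.

	/*
	 * Hypothetical sketch only: /dev/mempressure does not exist; it
	 * stands in for whatever per-process registration interface is
	 * eventually chosen.
	 */
	#include <stdio.h>
	#include <unistd.h>
	#include <fcntl.h>
	#include <poll.h>
	#include <sys/types.h>

	int main(void)
	{
		/* Register once, in the first process of the tree. */
		int pfd = open("/dev/mempressure", O_RDONLY);	/* hypothetical */
		if (pfd < 0) {
			perror("open");
			return 1;
		}

		/*
		 * Children inherit the open descriptor across fork(), so the
		 * whole process tree is covered by this single registration.
		 */
		pid_t child = fork();
		if (child == 0) {
			/* ... child workload; it could also poll pfd itself ... */
			sleep(60);
			_exit(0);
		}

		/* Parent blocks until a pressure notification arrives. */
		struct pollfd p = { .fd = pfd, .events = POLLIN };
		if (poll(&p, 1, -1) > 0 && (p.revents & POLLIN)) {
			char buf[64];
			ssize_t n = read(pfd, buf, sizeof(buf) - 1);
			if (n > 0) {
				buf[n] = '\0';
				printf("pressure event: %s", buf);
				/* e.g. drop caches, shrink pools, etc. */
			}
		}

		close(pfd);
		return 0;
	}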