On Tue, Dec 19, 2017 at 7:24 AM, Tejun Heo <tj@xxxxxxxxxx> wrote:
> Hello,
>
> On Tue, Dec 19, 2017 at 07:12:19AM -0800, Shakeel Butt wrote:
>> Yes, there are pros & cons, therefore we should give users the option
>> to select the API that is better suited for their use-cases and
>
> Heh, that's not how API decisions should be made. The long term
> outcome would be really really bad.
>
>> environment. Both approaches are not interchangeable. We use memsw
>> internally for use-cases I mentioned in commit message. This is one of
>> the main blockers for us to even consider cgroup-v2 for memory
>> controller.
>
> Let's concentrate on the use case. I couldn't quite understand what
> was missing from your description. You said that it'd make things
> easier for the centralized monitoring system which isn't really a
> description of a use case. Can you please go into more details
> focusing on the eventual goals (rather than what's currently
> implemented)?
>

The goal is an interface that provides:

1. Consistent memory usage history
2. Consistent memory limit enforcement behavior

By consistent I mean that the environment should not affect the usage
history. For example, the presence or absence of swap, or memory
pressure on the system, should not affect the memory usage history,
i.e. the environment becomes an invariant. Similarly, the environment
should not affect memcg OOM or memcg memory reclaim behavior.

To provide a consistent memory usage history using the current
cgroup-v2 'swap' interface, an additional metric expressing the
intersection of memory and swap would have to be exposed. Basically,
memsw is the union of memory and swap, so that additional metric can
be combined with the existing memory and swap counters to compute the
union: memsw = memory + swap - intersection.

However, for consistent memory limit enforcement, I don't think there
is an easy way to use the current 'swap' interface.
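
To make the union arithmetic above concrete, here is a minimal sketch
in C. It is purely illustrative, not actual kernel or cgroup code: the
"intersection" value stands for the hypothetical extra metric described
above (pages charged to both memory and swap, e.g. swap cache), and the
numbers are made up.

    /*
     * Illustrative sketch only: compute memsw-style usage from the
     * existing memory and swap counters plus a hypothetical
     * intersection metric. |memory U swap| = memory + swap - |memory ^ swap|.
     */
    #include <stdio.h>

    static unsigned long long memsw_usage(unsigned long long memory,
                                          unsigned long long swap,
                                          unsigned long long intersection)
    {
            return memory + swap - intersection;
    }

    int main(void)
    {
            /* Example numbers in bytes, purely illustrative. */
            unsigned long long memory = 600ULL << 20;       /* 600 MiB resident */
            unsigned long long swap = 200ULL << 20;         /* 200 MiB in swap  */
            unsigned long long intersection = 50ULL << 20;  /* 50 MiB in both   */

            printf("memsw = %llu MiB\n",
                   memsw_usage(memory, swap, intersection) >> 20);
            return 0;
    }

With these illustrative numbers, a monitor that simply added memory
(600 MiB) and swap (200 MiB) would over-count by the 50 MiB present in
both, whereas the union comes out to 750 MiB.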