On Mon 22-11-21 14:04:04, Johannes Weiner wrote:
[...]
> I'm not a fan of this. It uses filesystem mounts to create shareable
> resource domains outside of the cgroup hierarchy, which has all the
> downsides you listed, and more:
>
> 1. You need a filesystem interface in the first place, and a new
>    ad-hoc channel and permission model to coordinate with the cgroup
>    tree, which isn't great. All filesystems you want to share data on
>    need to be converted.
>
> 2. It doesn't extend to non-filesystem sources of shared data, such as
>    memfds, ipc shm etc.
>
> 3. It requires unintuitive configuration for what should be basic
>    shared accounting semantics. Per default you still get the old
>    'first touch' semantics, but to get sharing you need to reconfigure
>    the filesystems?
>
> 4. If a task needs to work with a hierarchy of data sharing domains -
>    system-wide, group of jobs, job - it must interact with a hierarchy
>    of filesystem mounts. This is a pain to set up and may require task
>    awareness. Moving data around, working with different mount points.
>    Also, no shared and private data accounting within the same file.
>
> 5. It reintroduces cgroup1 semantics of tasks and resources, which are
>    entangled, sitting in disjunct domains. OOM killing is one quirk of
>    that, but there are others you haven't touched on. Who is charged
>    for the CPU cycles of reclaim in the out-of-band domain? Who is
>    charged for the paging IO? How is resource pressure accounted and
>    attributed? Soon you need cpu= and io= as well.
>
> My take on this is that it might work for your rather specific
> usecase, but it doesn't strike me as a general-purpose feature
> suitable for upstream.

I just want to reiterate that this resonates with the concerns I
expressed earlier, and thanks for laying them out in a much better
structured and more comprehensive way, Johannes.

[btw. a non-technical comment: for features like this it is better not
to rush into posting newer versions until there is at least some
agreement on the feature. Otherwise we end up with fragments of the
discussion spread over several email threads.]

> If we want sharing semantics for memory, I think we need a more
> generic implementation with a cleaner interface.
>
> Here is one idea:
>
> Have you considered reparenting pages that are accessed by multiple
> cgroups to the first common ancestor of those groups?
>
> Essentially, whenever there is a memory access (minor fault, buffered
> IO) to a page that doesn't belong to the accessing task's cgroup, you
> find the common ancestor between that task and the owning cgroup, and
> move the page there.
>
> With a tree like this:
>
> root - job group - job
>                 `- job
>      `- job group - job
>                  `- job
>
> all pages accessed inside that tree will propagate to the highest
> level at which they are shared - which is the same level where you'd
> also set shared policies, like a job group memory limit or io weight.
>
> E.g. libc pages would (likely) bubble to the root, persistent tmpfs
> pages would bubble to the respective job group, private data would
> stay within each job.
>
> No further user configuration necessary. Although you still *can* use
> mount namespacing etc. to prohibit undesired sharing between cgroups.
>
> The actual user-visible accounting change would be quite small, and
> arguably much more intuitive. Remember that accounting is recursive,
> meaning that a job page today also shows up in the counters of job
> group and root. This would not change.
> The only thing that IS weird today is that when two jobs share a
> page, it will arbitrarily show up in one job's counter but not in the
> other's. That would change: it would no longer show up as either,
> since it's not private to either; it would just be a job group (and
> up) page.
>
> This would be a generic implementation of resource sharing semantics:
> independent of data source and filesystems, contained inside the
> cgroup interface, and reusing the existing hierarchies of accounting
> and control domains to also represent levels of common property.
>
> Thoughts?

This is an interesting concept. I am not sure how expensive and
intrusive (code wise) this would get, but that is more of an
implementation detail.

Another option would be to provide a syscall to claim a shared
resource. That would require cooperation from the application, but it
would establish a clear responsibility model.

--
Michal Hocko
SUSE Labs
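To make the reparenting semantics a bit more concrete, below is a
minimal userspace model of the common-ancestor move on a cross-cgroup
access. All of the names in it (toy_cgroup, toy_page, common_ancestor,
access_page) are invented for illustration only; this is not kernel
code and it deliberately ignores the question of how expensive moving
the actual page charge would be.

/*
 * Toy model of "reparent a shared page to the first common ancestor".
 * Nothing here is a real kernel interface; the structures only mimic
 * the cgroup tree from the example above.
 */
#include <stdio.h>

struct toy_cgroup {
	const char *name;
	struct toy_cgroup *parent;
	int level;			/* depth from root, root == 0 */
};

struct toy_page {
	struct toy_cgroup *owner;	/* cgroup currently charged */
};

/* Walk both cgroups up to the same depth, then up in lockstep. */
static struct toy_cgroup *common_ancestor(struct toy_cgroup *a,
					   struct toy_cgroup *b)
{
	while (a->level > b->level)
		a = a->parent;
	while (b->level > a->level)
		b = b->parent;
	while (a != b) {
		a = a->parent;
		b = b->parent;
	}
	return a;
}

/* Model of the accounting change on an access from cgroup @acc. */
static void access_page(struct toy_page *page, struct toy_cgroup *acc)
{
	if (page->owner != acc)
		page->owner = common_ancestor(page->owner, acc);
	printf("page now charged to: %s\n", page->owner->name);
}

int main(void)
{
	struct toy_cgroup root  = { "root",      NULL,  0 };
	struct toy_cgroup grp   = { "job group", &root, 1 };
	struct toy_cgroup job_a = { "job A",     &grp,  2 };
	struct toy_cgroup job_b = { "job B",     &grp,  2 };

	struct toy_page tmpfs_page = { .owner = &job_a };

	access_page(&tmpfs_page, &job_a);	/* private: stays in job A */
	access_page(&tmpfs_page, &job_b);	/* shared: bubbles to job group */
	return 0;
}

With the example tree, the page stays charged to job A for as long as
only job A touches it; the first access from job B moves the charge up
to the job group, which is the accounting outcome described above.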
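For the claim-syscall alternative, here is a purely hypothetical sketch
of the userspace side. No such syscall exists; the name, the syscall
number, the argument list and the /shared/data path are all made up
just to illustrate what an explicit claim by a cooperating application
could look like.

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

/* Invented syscall number: nothing like this exists in any kernel. */
#define __NR_mem_claim_hypothetical	451

int main(void)
{
	size_t len = 1 << 20;
	void *p;
	long ret;
	int fd;

	/* "/shared/data" is a made-up path standing in for any shared file. */
	fd = open("/shared/data", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * The cooperating application explicitly claims the range, so the
	 * memory charge lands on a well-defined cgroup (its own, or one
	 * named by an fd) instead of on whichever task faults the pages
	 * in first.  Expected to fail with ENOSYS on any real kernel.
	 */
	ret = syscall(__NR_mem_claim_hypothetical, p, len, -1 /* cgroup fd */);
	if (ret < 0)
		fprintf(stderr, "claim: %s\n", strerror(errno));

	munmap(p, len);
	close(fd);
	return 0;
}

The cost of this model is that every application sharing data has to be
modified to make the call; the benefit is that responsibility for the
shared memory is stated explicitly rather than inferred from who
happens to fault the pages in first.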