On Thu, Feb 20, 2020 at 10:46:39AM +0100, Michal Hocko wrote:
> On Wed 19-02-20 16:17:35, Johannes Weiner wrote:
> > On Wed, Feb 19, 2020 at 08:53:32PM +0100, Michal Hocko wrote:
> > > On Wed 19-02-20 14:16:18, Johannes Weiner wrote:
> [...]
> > > > [ This is generally work in progress: for example, if you isolate
> > > >   workloads with memory.low, kswapd CPU time isn't accounted to the
> > > >   cgroup that causes it. Swap IO issued by kswapd isn't accounted to
> > > >   the group that is getting swapped.
> > >
> > > Well, kswapd is a system activity and as such it is acceptable that it
> > > is accounted to the system. But in this case we are talking about a
> > > memcg configuration which influences all other workloads by stealing
> > > CPU cycles from them
> >
> > From a user perspective this isn't a meaningful distinction.
> >
> > If I partition my memory among containers and one cgroup is acting
> > out, I would want the culprit to be charged for the CPU cycles the
> > reclaim is causing. Whether I divide my machine up using memory.low or
> > using memory.max doesn't really matter: I'm choosing between the two
> > based on a *memory policy* I want to implement - work-conserving vs
> > non-conserving. I shouldn't have to worry about whether the kernel
> > tracks CPU cycles properly in the respective implementations of these
> > knobs.
> >
> > So kswapd is very much a cgroup-attributable activity, *especially* if
> > I'm using memory.low to delineate different memory domains.
>
> While I understand what you are saying, I do not think this is easily
> achievable with the current implementation. The biggest problem I can
> see is that you do not have clear information about whom to charge for
> the memory shortage on a particular NUMA node with pure low-limit-based
> balancing, because the limit is not NUMA aware. Besides that, the
> origin of the memory pressure might be outside of any memcg. You can
> punish/account all memcgs in excess in some manner, e.g.
> proportionally to their size/excess, but I am not really sure how fair
> that would be. Sounds like an interesting project, but also like a
> tangent to this patch.
>
> High/Max limits are quite different because they are dealing with
> internal memory pressure and you can attribute it to the
> cgroup/hierarchy which is in excess. There is a clear domain to reclaim
> from. This is an easier model to reason about IMHO.

They're not different. memory.low is just a usage limit that happens to
be enforced lazily rather than immediately.

If I'm setting memory.high or memory.max and I allocate beyond it, my
memory will be reclaimed with the limit as the target.

If I'm setting memory.low and I allocate beyond it, my memory will
eventually be reclaimed with the limit as the target.

In either case, the cgroup that allocated the memory being reclaimed is
the one obviously responsible for the reclaim work. Why would the time
of limit enforcement change that?

If on the other hand an allocation reclaims you below your limit, such
as can happen with a NUMA-bound allocation - whether it's high, max, or
low - then that's their cost to pay. But it's not really that important
what we do in that case: the memcg settings aren't NUMA aware, so that
whole scenario is outside the purview of the controller anyway.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d6085115c7f2..24fe6e9e64b1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2651,6 +2651,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
 	do {
 		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
+		bool account_cpu = current_is_kswapd() || current_work();
 		unsigned long reclaimed;
 		unsigned long scanned;
 
@@ -2673,6 +2674,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 				continue;
 			}
 			memcg_memory_event(memcg, MEMCG_LOW);
+			account_cpu = false;
 			break;
 		case MEMCG_PROT_NONE:
 			/*
@@ -2688,11 +2690,17 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 		reclaimed = sc->nr_reclaimed;
 		scanned = sc->nr_scanned;
 
+		if (account_cpu)
+			use_cpu_of_cgroup(memcg->css.cgroup);
+
 		shrink_lruvec(lruvec, sc);
 
 		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
 			    sc->priority);
 
+		if (account_cpu)
+			unuse_cpu_of_cgroup();
+
 		/* Record the group's reclaim efficiency */
 		vmpressure(sc->gfp_mask, memcg, false,
			   sc->nr_scanned - scanned,

> > > without much throttling on the consumer side - especially when the
> > > memory is reclaimable without a lot of sleeping or contention on
> > > locks etc.
> >
> > The limiting factor on the consumer side is IO. Reading a page is way
> > more costly than reclaiming it, which is why we built our isolation
> > stack starting with memory and IO control and are only now getting to
> > working on proper CPU isolation.
> >
> > > I am absolutely aware that we will never achieve perfect isolation
> > > due to all sorts of shared data structures, lock contention and what
> > > not, but this patch alone just allows spill-over to unaccounted work
> > > way too easily IMHO.
> >
> > I understand your concern about CPU cycles escaping, and I share
> > it. My point is that this patch isn't adding a problem that isn't
> > already there, nor is it that much of a practical concern at the time
> > of this writing given the state of CPU isolation in general.
>
> I beg to differ here. The CPU controller should be able to isolate user
> contexts performing high-limit reclaim now. Your patch changes that
> functionality to become unaccounted for a large part, and that might be
> seen as a regression for workloads which partition the system using the
> high limit and also rely on the CPU controller because the workloads
> are CPU sensitive.
>
> Without the CPU controller support this patch is not complete, and I do
> not see an absolute must to merge it ASAP, because it is not a
> regression fix or something we cannot live without.

I think you're still thinking in a cgroup1 reality, where you would set
a memory limit in isolation and then eat a ton of CPU pushing up
against it.

In comprehensive isolation setups implemented in cgroup2, "heavily"
reclaimed containers are primarily IO-bound on page faults, refaults,
and writeback. The reclaim cost is a small part of it, and as I said,
in a magnitude range for which the CPU controller is currently too
heavy.

We can carry this patch out of tree until the CPU controller is fixed,
but I think the reasoning for keeping it out is not actually based on
the practical reality of a cgroup2 world.