Before storing a page, zswap first checks, for each cgroup in the
hierarchy, whether the number of stored pages exceeds the limit specified
by memory.zswap.max. If this limit is reached or exceeded, zswap shrinking
is triggered and short-circuits the store attempt.

However, since zswap's LRU is not memcg-aware, this can create the
following pathological behavior: a cgroup whose zswap limit is 0 will
continually evict pages from other cgroups, without lowering its own zswap
usage. This means the shrinking will continue until the need for swap
ceases or the pool becomes empty.

As a result, we observe a disproportionate amount of zswap writeback and a
perpetually small zswap pool in our experiments, even though the pool
limit is never hit.

More generally, a cgroup might unnecessarily evict pages from other
cgroups before we drive the memcg back below its limit.

This patch fixes the issue by rejecting the zswap store attempt without
shrinking the pool when obj_cgroup_may_zswap() returns false.

Fixes: f4840ccfca25 ("zswap: memcg accounting")
Reviewed-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Signed-off-by: Nhat Pham <nphamcs@xxxxxxxxx>
---
 mm/zswap.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 59da2a415fbb..cff93643a6ab 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1174,9 +1174,14 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
 		goto reject;
 	}
 
+	/*
+	 * XXX: zswap reclaim does not work with cgroups yet. Without a
+	 * cgroup-aware entry LRU, we will push out entries system-wide based on
+	 * local cgroup limits.
+	 */
 	objcg = get_obj_cgroup_from_page(page);
 	if (objcg && !obj_cgroup_may_zswap(objcg))
-		goto shrink;
+		goto reject;
 
 	/* reclaim space if needed */
 	if (zswap_is_full()) {
-- 
2.34.1
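
P.S. To illustrate the hierarchical limit check described above, here is a
minimal userspace sketch (not part of this patch): struct cgroup_sketch and
may_zswap() are hypothetical stand-ins modelling what obj_cgroup_may_zswap()
checks against memory.zswap.max; only the "walk every cgroup in the
hierarchy" semantics come from the commit message, and details such as the
"max"/unlimited special case are deliberately omitted.

	#include <stdbool.h>
	#include <stdio.h>

	/* Hypothetical stand-in for a memcg node: zswap usage and limit. */
	struct cgroup_sketch {
		struct cgroup_sketch *parent;
		unsigned long zswap_pages;	/* pages currently stored in zswap */
		unsigned long zswap_max;	/* memory.zswap.max, in pages */
	};

	/*
	 * Walk from the cgroup up to the root; if any ancestor is at or over
	 * its limit, the page may not be stored in zswap. This mirrors the
	 * per-hierarchy check described in the commit message.
	 */
	static bool may_zswap(struct cgroup_sketch *cg)
	{
		for (; cg; cg = cg->parent)
			if (cg->zswap_pages >= cg->zswap_max)
				return false;
		return true;
	}

	int main(void)
	{
		struct cgroup_sketch root = { .zswap_pages = 10, .zswap_max = 1000 };
		struct cgroup_sketch child = { .parent = &root, .zswap_pages = 0, .zswap_max = 0 };

		/*
		 * A cgroup with memory.zswap.max == 0 always fails the check.
		 * Before this patch, that failure triggered system-wide
		 * shrinking (evicting other cgroups' entries); after it, the
		 * store is simply rejected.
		 */
		printf("child may zswap: %d\n", may_zswap(&child));	/* prints 0 */
		printf("root  may zswap: %d\n", may_zswap(&root));	/* prints 1 */
		return 0;
	}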