Hi Nhat,

On Wed, Nov 8, 2023 at 1:15 PM Nhat Pham <nphamcs@xxxxxxxxx> wrote:
>
> Ah that was meant to be a fixlet - so that on top of the original
> "zswap: make shrinking memcg-aware" patch. The intention was
> to eventually squash it...
>
> But this is getting a bit annoyingly confusing, I admit. I just rebased to
> mm-unstable + squashed it all again, then sent one single replacement
> patch:
>
> [PATCH v5 3/6 REPLACE] zswap: make shrinking memcg-aware

Thank you for the quick response. Yes, I am able to download your
replacement version of patch 3.

Just FYI, I am using "git mailsplit" to split up the mbox into 6
separate patch files. On mm-unstable, I am able to apply your
replacement patch 3 cleanly.

I also need some help with patch 0005; it does not apply cleanly
either.

$ git mailsplit -ozswap-pool-lru v5_20231106_nphamcs_workload_specific_and_memory_pressure_driven_zswap_writeback.mbx
$ git am patches/zswap-pool-lru/0001
Applying: list_lru: allows explicit memcg and NUMA node selection
$ git am patches/zswap-pool-lru/0002
Applying: memcontrol: allows mem_cgroup_iter() to check for onlineness
$ git am patches/zswap-pool-lru/3.replace
Applying: zswap: make shrinking memcg-aware
$ git am patches/zswap-pool-lru/0004
Applying: mm: memcg: add per-memcg zswap writeback stat
$ git am patches/zswap-pool-lru/0005
Applying: selftests: cgroup: update per-memcg zswap writeback selftest
error: patch failed: tools/testing/selftests/cgroup/test_zswap.c:50
error: tools/testing/selftests/cgroup/test_zswap.c: patch does not apply
Patch failed at 0001 selftests: cgroup: update per-memcg zswap writeback selftest
hint: Use 'git am --show-current-patch=diff' to see the failed patch
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

>
> Let me know if this still fails to apply. If not, I'll send the whole thing
> again as v6! My sincerest apologies for the troubles and confusion :(

No problem at all. Thanks for your help on patch 3.

Chris
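
P.S. One thing I have not tried yet on my end is letting git am fall
back to a 3-way merge, e.g.:

$ git am --3way patches/zswap-pool-lru/0005

That only helps if the patch records the blob IDs it was generated
against and I have those blobs locally, so it may well still fail;
just noting it here in case it is useful while you look into patch 5.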