The patch titled
     Subject: mm, slab: fix sign conversion problem in memcg_uncharge_slab()
has been added to the -mm tree.  Its filename is
     mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Waiman Long <longman@xxxxxxxxxx>
Subject: mm, slab: fix sign conversion problem in memcg_uncharge_slab()

It was found that running the LTP test on a PowerPC system could produce
erroneous values in /proc/meminfo, like:

  MemTotal:       531915072 kB
  MemFree:        507962176 kB
  MemAvailable:   1100020596352 kB

Using bisection, the problem was tracked down to commit 9c315e4d7d8c
("mm: memcg/slab: cache page number in memcg_(un)charge_slab()").

In memcg_uncharge_slab() with an "int order" argument:

	unsigned int nr_pages = 1 << order;
		:
	mod_lruvec_state(lruvec, cache_vmstat_idx(s), -nr_pages);

The mod_lruvec_state() function will eventually call
__mod_zone_page_state(), which accepts a long argument.  Depending on the
compiler and how inlining is done, "-nr_pages" may be treated as a
negative number or a very large positive number.  Apparently, it was
treated as a large positive number on that PowerPC system, leading to
incorrect stat counts.  This problem hasn't been seen on x86-64 yet;
perhaps the gcc compiler there has some slight difference in behavior.
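As a minimal sketch of the conversion, the following userspace program
(not kernel code; mod_state() is a hypothetical stand-in for the
long-taking mod_lruvec_state() path) shows how negating the unsigned
count goes wrong:

	#include <stdio.h>

	static void mod_state(long delta)	/* stand-in for mod_lruvec_state() */
	{
		printf("delta = %ld\n", delta);
	}

	int main(void)
	{
		int order = 2;
		unsigned int nr_pages = 1 << order;	/* 4 */
		int signed_nr_pages = 1 << order;	/* 4 */

		/*
		 * -nr_pages is evaluated in unsigned arithmetic and wraps
		 * to 4294967292 (UINT_MAX - 3); converting that to a
		 * 64-bit long preserves the huge positive value rather
		 * than producing -4.
		 */
		mod_state(-nr_pages);		/* delta = 4294967292 */
		mod_state(-signed_nr_pages);	/* delta = -4 */
		return 0;
	}

On an LP64 system the first call hands a huge positive delta to the stat
update, consistent with the inflated MemAvailable value shown above; the
signed variant behaves as intended.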
The problem is fixed by making nr_pages a signed value.  For consistency,
a similar change is applied to memcg_charge_slab() as well.

Link: http://lkml.kernel.org/r/20200620184719.10994-1-longman@xxxxxxxxxx
Fixes: 9c315e4d7d8c ("mm: memcg/slab: cache page number in memcg_(un)charge_slab()")
Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
Acked-by: Roman Gushchin <guro@xxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slab.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/slab.h~mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab
+++ a/mm/slab.h
@@ -348,7 +348,7 @@ static __always_inline int memcg_charge_
 					     gfp_t gfp, int order,
 					     struct kmem_cache *s)
 {
-	unsigned int nr_pages = 1 << order;
+	int nr_pages = 1 << order;
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 	int ret;
@@ -388,7 +388,7 @@ out:
 static __always_inline void memcg_uncharge_slab(struct page *page, int order,
 						struct kmem_cache *s)
 {
-	unsigned int nr_pages = 1 << order;
+	int nr_pages = 1 << order;
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
_

Patches currently in -mm which might be from longman@xxxxxxxxxx are

mm-slab-use-memzero_explicit-in-kzfree.patch
mm-slab-fix-sign-conversion-problem-in-memcg_uncharge_slab.patch
mm-treewide-rename-kzfree-to-kfree_sensitive.patch