The patch titled
     Subject: mm/mlock.c: fix mlock count can not decrease in race condition
has been removed from the -mm tree.  Its filename was
     mlock-fix-mlock-count-can-not-decrease-in-race-condition.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Yisheng Xie <xieyisheng1@xxxxxxxxxx>
Subject: mm/mlock.c: fix mlock count can not decrease in race condition

Kefeng reported that the Mlocked count in /proc/meminfo cannot be
decreased when running the following test case:

[1] testcase
linux:~ # cat test_mlockall
grep Mlocked /proc/meminfo
for j in `seq 0 10`
do
	for i in `seq 4 15`
	do
		./p_mlockall >> log &
	done
	sleep 0.2
done
# wait a while to let the mlock counter decrease; 5s may not be enough
sleep 5
grep Mlocked /proc/meminfo

linux:~ # cat p_mlockall.c
#include <sys/mman.h>
#include <stdlib.h>
#include <stdio.h>

#define SPACE_LEN	4096

int main(int argc, char ** argv)
{
	int ret;
	void *adr = malloc(SPACE_LEN);

	if (!adr)
		return -1;

	ret = mlockall(MCL_CURRENT | MCL_FUTURE);
	printf("mlockall ret = %d\n", ret);

	ret = munlockall();
	printf("munlockall ret = %d\n", ret);

	free(adr);
	return 0;
}

In __munlock_pagevec() we clear PageMlocked, but when page isolation
fails in the race, those pages are not counted into delta_munlocked.
This makes the mlock counter inaccurate: PageMlocked has already been
cleared, so NR_MLOCK can never be decremented for those pages later.
Fix it by counting every page whose PageMlocked flag is actually
cleared.
[akpm@xxxxxxxxxxxxxxxxxxxx: coding-style fixes]
Fixes: 1ebb7cc6a583 ("mm: munlock: batch NR_MLOCK zone state updates")
Link: http://lkml.kernel.org/r/1495620504-7007-1-git-send-email-xieyisheng1@xxxxxxxxxx
Signed-off-by: Yisheng Xie <xieyisheng1@xxxxxxxxxx>
Reported-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Tested-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Joern Engel <joern@xxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Michel Lespinasse <walken@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/mlock.c |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff -puN mm/mlock.c~mlock-fix-mlock-count-can-not-decrease-in-race-condition mm/mlock.c
--- a/mm/mlock.c~mlock-fix-mlock-count-can-not-decrease-in-race-condition
+++ a/mm/mlock.c
@@ -284,7 +284,7 @@ static void __munlock_pagevec(struct pag
 {
 	int i;
 	int nr = pagevec_count(pvec);
-	int delta_munlocked;
+	int munlocked = 0;
 	struct pagevec pvec_putback;
 	int pgrescued = 0;

@@ -296,6 +296,7 @@ static void __munlock_pagevec(struct pag
 		struct page *page = pvec->pages[i];

 		if (TestClearPageMlocked(page)) {
+			munlocked--;
 			/*
 			 * We already have pin from follow_page_mask()
 			 * so we can spare the get_page() here.
@@ -315,8 +316,8 @@ static void __munlock_pagevec(struct pag
 		pagevec_add(&pvec_putback, pvec->pages[i]);
 		pvec->pages[i] = NULL;
 	}
-	delta_munlocked = -nr + pagevec_count(&pvec_putback);
-	__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
+	if (munlocked)
+		__mod_zone_page_state(zone, NR_MLOCK, munlocked);
 	spin_unlock_irq(zone_lru_lock(zone));

 	/* Now we can release pins of pages that we are not munlocking */
_

Patches currently in -mm which might be from xieyisheng1@xxxxxxxxxx are