On 7/21/21 12:28 AM, akpm@xxxxxxxxxxxxxxxxxxxx wrote:
> The patch titled
>      Subject: mm/mremap: fix memory account on do_munmap() failure
> has been added to the -mm tree.  Its filename is
>      mm-mremap-fix-memory-account-on-do_munmap-failure.patch
>
> This patch should soon appear at
>     https://ozlabs.org/~akpm/mmots/broken-out/mm-mremap-fix-memory-account-on-do_munmap-failure.patch
> and later at
>     https://ozlabs.org/~akpm/mmotm/broken-out/mm-mremap-fix-memory-account-on-do_munmap-failure.patch
>
> Before you just go and hit "reply", please:
>    a) Consider who else should be cc'ed
>    b) Prefer to cc a suitable mailing list as well
>    c) Ideally: find the original patch on the mailing list and do a
>       reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/process/submit-checklist.rst when
>     testing your code ***
>
> The -mm tree is included into linux-next and is updated
> there every 3-4 working days
>
> ------------------------------------------------------
> From: Chen Wandun <chenwandun@xxxxxxxxxx>
> Subject: mm/mremap: fix memory account on do_munmap() failure
>
> When expanding an existing memory mapping, mremap accounts the delta
> between new_len and old_len in vma_to_resize() and then calls
> move_vma().  In move_vma() there are two scenarios in which
> do_munmap() is called:
>
> 1. move_page_tables() from old_addr to new_addr succeeded
> 2. move_page_tables() from old_addr to new_addr failed
>
> In the first scenario, old_len should be accounted if do_munmap()
> fails, because the delta has already been accounted.
>
> In the second scenario, new_addr/new_len are assigned to
> old_addr/old_len when move_page_tables() fails, so do_munmap() is
> actually trying to unmap new_addr.  If do_munmap() fails, new_len
> should be accounted, because move_vma() will return an error and the
> delta will be unaccounted.  What's more, since new_len == old_len at
> that point, accounting old_len is also correct.
>
> In summary, accounting old_len is correct whenever do_munmap() fails.
>
> Link: https://lkml.kernel.org/r/20210717101942.120607-1-chenwandun@xxxxxxxxxx
> Fixes: 51df7bcb6151 ("mm/mremap: account memory on do_munmap() failure")
> Signed-off-by: Chen Wandun <chenwandun@xxxxxxxxxx>
> Cc: Dmitry Safonov <0x7f454c46@xxxxxxxxx>
> Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
> Cc: Wei Yongjun <weiyongjun1@xxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>

Nice catch!

> ---
>
>  mm/mremap.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/mm/mremap.c~mm-mremap-fix-memory-account-on-do_munmap-failure
> +++ a/mm/mremap.c
> @@ -686,7 +686,7 @@ static unsigned long move_vma(struct vm_
>  	if (do_munmap(mm, old_addr, old_len, uf_unmap) < 0) {
>  		/* OOM: unable to split vma, just get accounts right */
>  		if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP))
> -			vm_acct_memory(new_len >> PAGE_SHIFT);
> +			vm_acct_memory(old_len >> PAGE_SHIFT);
>  		excess = 0;
>  	}
>

But now that you've looked at the accounting in vma_to_resize(), I
can't help but notice that the accounting for MREMAP_DONTUNMAP seems
to have been broken from the beginning.  Either we should also hack
around it this way:

--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -605,7 +605,12 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 		return err;
 
 	if (unlikely(flags & MREMAP_DONTUNMAP && vm_flags & VM_ACCOUNT)) {
-		if (security_vm_enough_memory_mm(mm, new_len >> PAGE_SHIFT))
+		/*
+		 * new_len >= old_len, VMA shrinking is not in this path.
+		 * (new_len - old_len) is already charged in vma_to_resize(),
+		 * so charge old_len instead of new_len.
+		 */
+		if (security_vm_enough_memory_mm(mm, old_len >> PAGE_SHIFT))
 			return -ENOMEM;
 	}
 
@@ -614,7 +619,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 			  &need_rmap_locks);
 	if (!new_vma) {
 		if (unlikely(flags & MREMAP_DONTUNMAP && vm_flags & VM_ACCOUNT))
-			vm_unacct_memory(new_len >> PAGE_SHIFT);
+			vm_unacct_memory(old_len >> PAGE_SHIFT);
 		return -ENOMEM;
 	}
 
--->8---

...but I hate what's going on here; it's disgusting.  Let's not
account/unaccount memory around vma_to_resize() at all.  I've sent an
alternative patch:
https://lore.kernel.org/lkml/20210721124949.517217-1-dima@xxxxxxxxxx/

Thanks,
Dmitry
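P.S.: to make the arithmetic of the first scenario concrete, here is a
minimal userspace sketch.  This is not kernel code: vm_acct_memory() is
reimplemented as a toy counter and the page counts are made up.  It
just shows why re-charging old_len (and not new_len) balances the books
when do_munmap() of the old range fails:

#include <assert.h>
#include <stdio.h>

static long committed;	/* toy stand-in for the kernel's committed counter */

static void vm_acct_memory(long pages)
{
	committed += pages;
}

int main(void)
{
	const long old_len = 4;		/* pages in the existing mapping */
	const long new_len = 10;	/* pages requested via mremap() */

	/* The existing mapping was charged when it was created. */
	vm_acct_memory(old_len);		/* committed == 4 */

	/* mremap() expand path: vma_to_resize() charges only the delta. */
	vm_acct_memory(new_len - old_len);	/* committed == 10 */

	/*
	 * First scenario: the move succeeded, so the new mapping owns the
	 * full new_len charge, but do_munmap() of the old range failed.
	 * The old_len pages stay mapped with VM_ACCOUNT stripped, so
	 * they need their charge back.  Accounting new_len here would
	 * double-count the already-charged delta.
	 */
	vm_acct_memory(old_len);		/* committed == 14 */

	/* Both ranges remain mapped: the books must show new_len + old_len. */
	assert(committed == new_len + old_len);
	printf("committed = %ld pages\n", committed);
	return 0;
}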