On Wed, 22 Jul 2015, Kirill A. Shutemov wrote:

> On Tue, Jul 21, 2015 at 03:59:36PM -0400, Eric B Munson wrote:
> > @@ -648,20 +656,23 @@ SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len)
> >  	start &= PAGE_MASK;
> >  
> >  	down_write(&current->mm->mmap_sem);
> > -	ret = do_mlock(start, len, 0);
> > +	ret = apply_vma_flags(start, len, flags, false);
> >  	up_write(&current->mm->mmap_sem);
> >  
> >  	return ret;
> >  }
> >  
> > +SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len)
> > +{
> > +	return do_munlock(start, len, VM_LOCKED);
> > +}
> > +
> >  static int do_mlockall(int flags)
> >  {
> >  	struct vm_area_struct * vma, * prev = NULL;
> >  
> >  	if (flags & MCL_FUTURE)
> >  		current->mm->def_flags |= VM_LOCKED;
> > -	else
> > -		current->mm->def_flags &= ~VM_LOCKED;
> 
> I think this is wrong.
> 
> With current code mlockall(MCL_CURRENT) after mlockall(MCL_FUTURE |
> MCL_CURRENT) would undo future mlocking, without unlocking currently
> mlocked memory.
> 
> The change will break the use-case.

You are right, it is wrong.  I have addressed it here as well as for the
MCL_ONFAULT flag introduced in patch 4, and I will also update the
mlockall man page to document this behavior.
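
For anyone following along, here is a minimal userspace sketch of the
use-case Kirill describes (illustrative only, not part of the patch
series): lock everything mapped now and in the future, then later drop
only the future-locking behavior while keeping the already-locked pages
resident.  It relies only on the documented mlockall()/mmap() interfaces.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	/* Lock everything mapped now and everything mapped later. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall(MCL_CURRENT | MCL_FUTURE)");
		return EXIT_FAILURE;
	}

	/* ... allocate latency-critical buffers here; they stay locked ... */

	/*
	 * Stop locking *future* mappings, but keep what is locked so far.
	 * With the pre-patch kernel, mlockall(MCL_CURRENT) clears VM_LOCKED
	 * from mm->def_flags (the `else` branch in do_mlockall) without
	 * touching already-mlocked VMAs.  Dropping that `else` silently
	 * changes this behavior.
	 */
	if (mlockall(MCL_CURRENT) != 0) {
		perror("mlockall(MCL_CURRENT)");
		return EXIT_FAILURE;
	}

	/* New mappings made from here on are subject to normal paging. */
	void *p = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}
	memset(p, 0, 1 << 20);

	return EXIT_SUCCESS;
}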