Re: RFC: reviving mlock isolation dead code

On Mon, Nov 8, 2010 at 8:34 PM, KOSAKI Motohiro
<kosaki.motohiro@xxxxxxxxxxxxxx> wrote:
> While on the airplane coming back from KS and LPC, I was thinking about this issue. Now I think
> we can solve it. Can you please hear my idea?

I have been having similar thoughts over the past week. I'll try to
send a related patch set soon.

> Now, mlock has following call flow
>
> sys_mlock
>        down_write(mmap_sem)
>        do_mlock()
>                for-each-vma
>                        mlock_fixup()
>                                __mlock_vma_pages_range()
>                                        __get_user_pages()
>        up_write(mmap_sem)
>
> Then, I'd propose a two-phase mlock. That is,
>
> sys_mlock
>        down_write(mmap_sem)
>        do_mlock()
>                for-each-vma
>                        turn on VM_LOCKED and merge/split vma
>        downgrade_write(mmap_sem)
>                for-each-vma
>                        mlock_fixup()
>                                __mlock_vma_pages_range()
>        up_read(mmap_sem)
>
> Usually, kernel developers strongly dislike two-phase things because they're slow. But at least
> _I_ think it's OK in this case, because mlock is a really, really slow syscall; it often takes a few
> *minutes*. Being a few microseconds slower is not a big deal.
>
> What do you think?

downgrade_write() would help, but only partially. If another thread
tries to acquire the mmap_sem for write, it will get queued for a long
time until mlock() completes - this may in itself be acceptable, but
the issue here is that additional readers like try_to_unmap_one()
won't be able to acquire the mmap_sem anymore. This is because the
rwsem code prevents new readers from entering once there is a queued
writer, in order to avoid starvation.

My proposal would be as follows:

sys_mlock
        down_write(mmap_sem)
        do_mlock()
                for-each-vma
                        turn on VM_LOCKED and merge/split vma
        up_write(mmap_sem)
        for (addr = start of mlock range; addr < end of mlock range; addr = next_addr)
                down_read(mmap_sem)
                find vma for addr
                next_addr = end of the vma
                if vma still has VM_LOCKED flag:
                        next_addr = min(next_addr, addr + few pages)
                        mlock a small batch of pages from that vma (from addr to next_addr)
                up_read(mmap_sem)
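
In C, that second phase might look roughly like the following. This is
an illustrative sketch only, not a patch: mlock_batch_pages() is a
made-up helper name, the 16-page batch size is arbitrary, error
handling and accounting are omitted, and it assumes the current
__mlock_vma_pages_range(vma, start, end) signature.

static void mlock_batch_pages(unsigned long start, unsigned long end)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	unsigned long addr, next_addr;

	for (addr = start; addr < end; addr = next_addr) {
		down_read(&mm->mmap_sem);
		vma = find_vma(mm, addr);
		if (!vma || vma->vm_start >= end) {
			up_read(&mm->mmap_sem);
			break;
		}
		/* skip any hole in front of this vma */
		if (addr < vma->vm_start)
			addr = vma->vm_start;
		next_addr = min(vma->vm_end, end);
		if (vma->vm_flags & VM_LOCKED) {
			/* fault in only a small batch per mmap_sem hold */
			next_addr = min(next_addr, addr + 16 * PAGE_SIZE);
			__mlock_vma_pages_range(vma, addr, next_addr);
		}
		up_read(&mm->mmap_sem);
	}
}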

Since a large mlock() can take a long time and we don't want to hold
mmap_sem for that long, we have to allow other threads to grab
mmap_sem and deal with the concurrency issues.

The races aren't actually too bad:

* If some other thread creates new VM_LOCKED vmas within the mlock
range while sys_mlock() is working: both threads will be trying to
mlock_fixup the same page range at once. This is no big deal, as
__mlock_vma_pages_range already only needs mmap_sem held for read: the
get_user_pages() part can safely proceed in parallel, and the
mlock_vma_page() part is protected by the page lock and won't do
anything if the PageMlocked flag is already set (see the sketch after
this list).

* If some other thread creates new non-VM_LOCKED vmas, or munlocks the
same address ranges that mlock() is currently working on: the mlock()
code needs to be careful not to mlock the pages once the vmas no
longer have the VM_LOCKED flag. From the user process point of view,
things will look as if the mlock had completed first, followed by the
munlock.
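
To illustrate the first case: concurrent mlock_fixups on the same range
are harmless because the per-page step is idempotent. A simplified
sketch of the pattern (the real mlock_vma_page() also updates the
NR_MLOCK statistics, but the shape is the same):

static void mlock_one_page(struct page *page)
{
	BUG_ON(!PageLocked(page));

	/*
	 * The test-and-set makes this idempotent: whichever thread sets
	 * PG_mlocked first moves the page to the unevictable LRU; a
	 * second mlocker finds the flag already set and does nothing.
	 */
	if (!TestSetPageMlocked(page)) {
		if (!isolate_lru_page(page))
			putback_lru_page(page);
	}
}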

The other mlock-related issue I have is that it marks pages as dirty
(if they are in a writable VMA) and causes writeback to work on them,
even though the pages have not actually been modified. This looks like
it would be solvable with a new get_user_pages flag for mlock use
(breaking COW etc., but not writing to the pages just yet).
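
Purely as a sketch of what I mean (FOLL_MLOCK and FOLL_COW_BREAK are
placeholder names here, not existing gup flags):

static unsigned int mlock_gup_flags(struct vm_area_struct *vma)
{
	/*
	 * Populate the range for mlock: break COW on writable private
	 * mappings so the locked pages are really instantiated, but do
	 * not take the FOLL_WRITE path, so the pages are not marked
	 * dirty and never get handed to writeback.
	 */
	unsigned int gup_flags = FOLL_TOUCH | FOLL_MLOCK;

	if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
		gup_flags |= FOLL_COW_BREAK;

	return gup_flags;
}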

-- 
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
