On Wed, Feb 24, 2016 at 06:50:25PM +0100, Gerald Schaefer wrote:
> On Wed, 24 Feb 2016 18:59:21 +0300
> "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx> wrote:
> 
> > Previously, __split_huge_page_splitting() required serialization against
> > gup_fast to make sure nobody can obtain new reference to the page after
> > __split_huge_page_splitting() returns. This was a way to stabilize page
> > references before starting to distribute them from head page to tail
> > pages.
> > 
> > With new refcounting, we don't care about this. Splitting PMD is now
> > decoupled from splitting underlying compound page. It's okay to get new
> > pins after split_huge_pmd(). To stabilize page references during
> > split_huge_page() we rely on setting up migration entries once all
> > pmds are split into page tables.
> > 
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> > ---
> >  mm/gup.c         | 11 +++--------
> >  mm/huge_memory.c |  7 +++----
> >  2 files changed, 6 insertions(+), 12 deletions(-)
> > 
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 7bf19ffa2199..2f528fce3a62 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -1087,8 +1087,7 @@ struct page *get_dump_page(unsigned long addr)
> >   *
> >   * get_user_pages_fast attempts to pin user pages by walking the page
> >   * tables directly and avoids taking locks. Thus the walker needs to be
> > - * protected from page table pages being freed from under it, and should
> > - * block any THP splits.
> > + * protected from page table pages being freed from under it.
> >   *
> >   * One way to achieve this is to have the walker disable interrupts, and
> >   * rely on IPIs from the TLB flushing code blocking before the page table
> >   * pages are freed.
> >   *
> > @@ -1097,9 +1096,8 @@ struct page *get_dump_page(unsigned long addr)
> >   * Another way to achieve this is to batch up page table containing pages
> >   * belonging to more than one mm_user, then rcu_sched a callback to free those
> > - * pages. Disabling interrupts will allow the fast_gup walker to both block
> > - * the rcu_sched callback, and an IPI that we broadcast for splitting THPs
> > - * (which is a relatively rare event). The code below adopts this strategy.
> > + * pages. Disabling interrupts will allow the fast_gup walker to block
> > + * the rcu_sched callback. The code below adopts this strategy.
> >   *
> >   * Before activating this code, please be aware that the following assumptions
> >   * are currently made:
> > @@ -1391,9 +1389,6 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
> >  	 * With interrupts disabled, we block page table pages from being
> >  	 * freed from under us. See mmu_gather_tlb in asm-generic/tlb.h
> >  	 * for more details.
> > -	 *
> > -	 * We do not adopt an rcu_read_lock(.) here as we also want to
> > -	 * block IPIs that come from THPs splitting.
> >  	 */
> 
> Hmm, now that the IPI from THP splitting is not needed anymore, this
> comment would suggest that we could use rcu_read_lock(_sched) for
> fast_gup, instead of keeping the (probably more expensive) IRQ enable/
> disable. That should be enough to synchronize against the
> call_rcu_sched() from the batched tlb_table_flush, right?

Possibly. I'm not hugely aware of all the details here.

+ People from the patch which introduced the comment.

Can anybody comment on this?

-- 
 Kirill A. Shutemov