On Fri 03-08-12 20:56:45, Hillf Danton wrote:
> The computation of the page offset index is open coded, and incorrect,
> for scanning the prio tree, as the huge page offset is required; it is
> fixed by using the well-defined routine.

I guess that nobody reported this because if someone really wants to
share, he will use an aligned address for mmap/shmat and so the index is
0. Anyway, it is worth fixing. Thanks for pointing it out!

> Signed-off-by: Hillf Danton <dhillf@xxxxxxxxx>
> ---
>
> --- a/arch/x86/mm/hugetlbpage.c	Fri Aug  3 20:34:58 2012
> +++ b/arch/x86/mm/hugetlbpage.c	Fri Aug  3 20:40:16 2012
> @@ -72,12 +72,15 @@ static void huge_pmd_share(struct mm_str
>  	if (!vma_shareable(vma, addr))
>  		return;
>
> +	idx = linear_page_index(vma, addr);
> +

You can use linear_hugepage_index directly and remove the idx
initialization as well.

>  	mutex_lock(&mapping->i_mmap_mutex);
>  	vma_prio_tree_foreach(svma, &iter, &mapping->i_mmap, idx, idx) {
>  		if (svma == vma)
>  			continue;
>
> -		saddr = page_table_shareable(svma, vma, addr, idx);
> +		saddr = page_table_shareable(svma, vma, addr,
> +					     idx * (PMD_SIZE/PAGE_SIZE));

Why not just fix page_table_shareable as well rather than playing
tricks like this?

>  		if (saddr) {
>  			spte = huge_pte_offset(svma->vm_mm, saddr);
>  			if (spte) {

-- 
Michal Hocko
SUSE Labs