Hi Mike,

After getting promising results initially, we discovered there is yet another bug left with hugetlbfs MADV_DONTNEED. This one involves a page fault on a hugetlbfs address while another thread in the same process is in the middle of MADV_DONTNEED on that same memory address.

The code in __unmap_hugepage_range() will clear the page table entry, and then at some point later the lazy TLB code will actually free the huge page back into the hugetlbfs free page pool. Meanwhile, hugetlb_no_page() will call alloc_huge_page(), and that will fail because the code that called __unmap_hugepage_range() has not actually returned the page to the free list yet. The result is that the process gets killed with SIGBUS. (The interleaving is sketched out below, after the list.)

I have thought of a few different solutions to this problem, but none of them look good:

- Make MADV_DONTNEED take a write lock on mmap_sem, to exclude page faults. This could make MADV_DONTNEED on VMAs with 4kB pages unacceptably slow.

- Keep some sort of atomic counter in __unmap_hugepage_range(), indicating that huge pages may still be sitting in a tlb gather, to be freed later by tlb_finish_mmu(). This would involve changes to the MMU gather code, outside of hugetlbfs.

- Use some sort of generation counter that tracks tlb_gather_mmu() cycles in progress, with the alloc_huge_page() failure path waiting for all mmu gather operations that started before it to finish, before retrying the allocation. This also requires changes to the generic code, outside of hugetlbfs. (A rough sketch of this idea follows the timeline below.)
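To make the race window concrete, here is roughly how the two threads interleave (thread A in MADV_DONTNEED, thread B faulting on the same huge page; a hand-drawn trace, not tool output):

    thread A (madvise)                 thread B (page fault)
    ------------------                 ---------------------
    __unmap_hugepage_range()
      clears the page table entry,
      queues the huge page in the
      mmu gather
                                       faults on the address
                                       hugetlb_no_page()
                                       alloc_huge_page() fails: the
                                         page is still in the gather,
                                         not yet back on the
                                         hugetlbfs free list
                                       process gets SIGBUS
    tlb_finish_mmu()
      finally frees the huge page
      back into the pool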
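For concreteness, a very rough sketch of the generation counter idea follows. None of these names exist in the kernel today; tlb_gen_started, tlb_gen_finished, and hugetlb_wait_for_gathers() are made up for illustration, and a real implementation would want a waitqueue rather than a busy-wait:

    #include <linux/atomic.h>
    #include <asm/processor.h>	/* cpu_relax() */

    /*
     * Hypothetical counters, purely illustrative: one is bumped when
     * an mmu gather starts, the other when it has finished flushing
     * the TLB and freeing its pages.
     */
    static atomic64_t tlb_gen_started  = ATOMIC64_INIT(0);
    static atomic64_t tlb_gen_finished = ATOMIC64_INIT(0);

    /* Would be called from tlb_gather_mmu(). */
    static inline void tlb_gen_begin(void)
    {
            atomic64_inc(&tlb_gen_started);
    }

    /* Would be called from tlb_finish_mmu(), after the pages are freed. */
    static inline void tlb_gen_end(void)
    {
            atomic64_inc(&tlb_gen_finished);
    }

    /*
     * Hypothetical hugetlb allocation failure path: wait for a moment
     * with no gathers in flight at all, then let the caller retry the
     * allocation. This is stronger than strictly needed (we only care
     * about gathers that started before the allocation failed), and it
     * could stall under a constant stream of unmaps, which is part of
     * why none of these options look great.
     */
    static void hugetlb_wait_for_gathers(void)
    {
            while (atomic64_read(&tlb_gen_started) !=
                   atomic64_read(&tlb_gen_finished))
                    cpu_relax();
    }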
What are the reasonable alternatives here? Should we see if anybody can come up with a simple solution to the problem, or would it be better to just disable MADV_DONTNEED on hugetlbfs for now?

--
All Rights Reversed.