On 2022/9/3 5:35, Mike Kravetz wrote:
> On 08/30/22 10:46, Miaohe Lin wrote:
>> On 2022/8/30 6:37, Mike Kravetz wrote:
>>> On 08/29/22 10:44, Miaohe Lin wrote:
>>>> On 2022/8/25 1:57, Mike Kravetz wrote:
>>>>> Create the new routine hugetlb_unmap_file_folio that will unmap a single
>>>>> file folio. This is refactored code from hugetlb_vmdelete_list. It is
>>>>> modified to do locking within the routine itself and check whether the
>>>>> page is mapped within a specific vma before unmapping.
>>>>>
>>>>> This refactoring will be put to use and expanded upon in a subsequent
>>>>> patch adding vma specific locking.
>>>>>
>>>>> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
>>>>> ---
>>>>>  fs/hugetlbfs/inode.c | 123 +++++++++++++++++++++++++++++++++----------
>>>>>  1 file changed, 94 insertions(+), 29 deletions(-)
>>>>>
>>>>> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
>>>>> index e83fd31671b3..b93d131b0cb5 100644
>>>>> --- a/fs/hugetlbfs/inode.c
>>>>> +++ b/fs/hugetlbfs/inode.c
>>>>> @@ -371,6 +371,94 @@ static void hugetlb_delete_from_page_cache(struct page *page)
>>>>>  	delete_from_page_cache(page);
>>>>>  }
>>>>>
>>>>> +/*
>>>>> + * Called with i_mmap_rwsem held for inode based vma maps. This makes
>>>>> + * sure vma (and vm_mm) will not go away. We also hold the hugetlb fault
>>>>> + * mutex for the page in the mapping. So, we can not race with page being
>>>>> + * faulted into the vma.
>>>>> + */
>>>>> +static bool hugetlb_vma_maps_page(struct vm_area_struct *vma,
>>>>> +				unsigned long addr, struct page *page)
>>>>> +{
>>>>> +	pte_t *ptep, pte;
>>>>> +
>>>>> +	ptep = huge_pte_offset(vma->vm_mm, addr,
>>>>> +			huge_page_size(hstate_vma(vma)));
>>>>> +
>>>>> +	if (!ptep)
>>>>> +		return false;
>>>>> +
>>>>> +	pte = huge_ptep_get(ptep);
>>>>> +	if (huge_pte_none(pte) || !pte_present(pte))
>>>>> +		return false;
>>>>> +
>>>>> +	if (pte_page(pte) == page)
>>>>> +		return true;
>>>>
>>>> I'm wondering whether the pte entry could change after we check it, since
>>>> huge_pte_lock is not held here. But I think holding i_mmap_rwsem in write
>>>> mode should give us that guarantee, e.g. a migration entry is only changed
>>>> back to a huge pte entry while i_mmap_rwsem is held in read mode.
>>>> Or am I missing something?
>>>
>>> Let me think about this. I do not think it is possible, but you ask good
>>> questions.
>>>
>>> Do note that this is the same locking sequence used at the beginning of the
>>> page fault code where the decision to call hugetlb_no_page() is made.
>>
>> Yes, hugetlb_fault() can tolerate a stale pte entry because the pte entry will
>> be re-checked later under the page table lock. However, if we see a stale pte
>> entry here, the page might be left mapped after it is truncated, and thus break
>> truncation? But I'm not sure whether this can occur. Maybe the i_mmap_rwsem
>> write lock and hugetlb_fault_mutex prevent this issue.
>>
>
> I looked at this some more. Just to be clear, we only need to worry
> about modifications of pte_page(). Racing with other pte modifications
> such as accessed, or protection changes is acceptable.
>
> Of course, the fault mutex prevents faults from happening. i_mmap_rwsem
> protects against unmap and truncation operations as well as migration, as
> you noted above. I believe the only other place where we update pte_page()
> is when copying page tables, such as during fork. However, with commit
> bcd51a3c679d "Lazy page table copies in fork()" we are going to skip
> copying for files and rely on page faults to populate the tables.
>
> I believe we are safe from races with just the fault mutex and i_mmap_rwsem.

I believe your analysis is right. Thanks for the clarification.

Thanks,
Miaohe Lin
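
P.S. To summarize the locking for my own notes, below is a rough sketch of how
the new hugetlb_unmap_file_folio() described in the commit message would use
hugetlb_vma_maps_page() under the two locks. This is only my simplified reading
of the series, not the exact patch code; the address arithmetic and the
ZAP_FLAG_DROP_MARKER argument are assumptions on my part:

	/*
	 * Sketch only: unmap one file folio from every vma that maps it.
	 * Assumes the caller (remove_inode_hugepages) already holds the
	 * hugetlb fault mutex for this index, so no new faults can map
	 * the folio while we check and unmap it.
	 */
	static void hugetlb_unmap_file_folio(struct hstate *h,
					     struct address_space *mapping,
					     struct folio *folio, pgoff_t index)
	{
		struct page *page = &folio->page;
		struct vm_area_struct *vma;
		pgoff_t start = index * pages_per_huge_page(h);
		pgoff_t end = start + pages_per_huge_page(h);

		/*
		 * Taking i_mmap_rwsem in write mode excludes the unmap,
		 * truncation and migration paths that could change
		 * pte_page() under us.
		 */
		i_mmap_lock_write(mapping);

		vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end - 1) {
			/*
			 * hugetlb vmas are huge page aligned, so any vma
			 * overlapping [start, end) has vm_pgoff <= start.
			 */
			unsigned long v_start = vma->vm_start +
					((start - vma->vm_pgoff) << PAGE_SHIFT);

			/* Skip vmas that do not actually map this page */
			if (!hugetlb_vma_maps_page(vma, v_start, page))
				continue;

			unmap_hugepage_range(vma, v_start,
					     v_start + huge_page_size(h),
					     NULL, ZAP_FLAG_DROP_MARKER);
		}

		i_mmap_unlock_write(mapping);
	}

The point of the per-vma hugetlb_vma_maps_page() check is exactly what we
discussed above: with both locks held, pte_page() cannot change between the
check and the unmap, so vmas that no longer map the page can be skipped safely.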