On 28.07.22 18:45, Mike Kravetz wrote:
> On 07/28/22 10:02, Miaohe Lin wrote:
>> On 2022/7/28 3:00, Mike Kravetz wrote:
>>> On 07/27/22 17:20, Miaohe Lin wrote:
>>>> On 2022/7/7 4:23, Mike Kravetz wrote:
>>>>> Most hugetlb fault handling code checks for faults beyond i_size.
>>>>> While there are early checks in the code paths, the most difficult
>>>>> to handle are those discovered after taking the page table lock.
>>>>> At this point, we have possibly allocated a page and consumed
>>>>> associated reservations and possibly added the page to the page cache.
>>>>>
>>>>> When discovering a fault beyond i_size, be sure to:
>>>>> - Remove the page from page cache, else it will sit there until the
>>>>>   file is removed.
>>>>> - Do not restore any reservation for the page consumed. Otherwise
>>>>>   there will be an outstanding reservation for an offset beyond the
>>>>>   end of file.
>>>>>
>>>>> The 'truncation' code in remove_inode_hugepages must deal with fault
>>>>> code potentially removing a page/folio from the cache after the page was
>>>>> returned by filemap_get_folios and before locking the page. This can be
>>>>> discovered by a change in folio_mapping() after taking folio lock. In
>>>>> addition, this code must deal with fault code potentially consuming
>>>>> and returning reservations. To synchronize this, remove_inode_hugepages
>>>>> will now take the fault mutex for ALL indices in the hole or truncated
>>>>> range. In this way, it KNOWS fault code has finished with the page/index
>>>>> OR fault code will see the updated file size.
>>>>>
>>>>> Signed-off-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
>>>>> ---
>>>>
>>>> <snip>
>>>>
>>>>> @@ -5606,8 +5610,10 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>>>>>
>>>>>  	ptl = huge_pte_lock(h, mm, ptep);
>>>>>  	size = i_size_read(mapping->host) >> huge_page_shift(h);
>>>>> -	if (idx >= size)
>>>>> +	if (idx >= size) {
>>>>> +		beyond_i_size = true;
>>>>
>>>> Thanks for your patch. There is one question:
>>>>
>>>> Since races between hugetlb page faults and truncate are guarded by
>>>> hugetlb_fault_mutex, do we really need to check it again after taking
>>>> the page table lock?
>>>>
>>>
>>> Well, the fault mutex can only guard a single hugetlb page. The fault mutex
>>> is actually an array/table of mutexes hashed by mapping address and file index.
>>> So, during truncation we take the mutex for each page as it is unmapped and
>>> removed. So, the fault mutex only synchronizes operations on one specific
>>> page. The idea with this patch is to coordinate the fault code and truncate
>>> code when operating on the same page.
>>>
>>> In addition, changing the file size happens early in the truncate process
>>> before taking any locks/mutexes.
>>
>> I wonder whether we can simply live with it to make the code simpler. If the
>> file size changes after the early i_size check but before hugetlb_fault takes
>> the page table lock, the truncate code will remove the hugetlb page from the
>> page cache for us after hugetlb_fault finishes, even if we don't roll back
>> when checking i_size again under the page table lock.
>>
>> In a word: if hugetlb_fault sees a truncated inode, back out early. If not,
>> let the truncate code do its work. That way we don't need to complicate the
>> already complicated error path. Or am I missing something?
>>
>
> Thank you! I believe your observations and suggestions are correct.
>
> We can just let the fault code proceed after the early "idx >= size" check,
> and let the truncation code remove the page.
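To spell out the serialization that makes this safe, a rough sketch of the
lock/unlock pattern, based on the existing hugetlb_fault_mutex_hash() /
hugetlb_fault_mutex_table interface; both the fault path and the per-page
truncation loop take the mutex for the same (mapping, index), so whichever
side comes second observes the other's update (the new i_size, or the page
added to / removed from the page cache):

	u32 hash;

	/*
	 * Sketch only: per-(mapping, index) serialization between faults
	 * and per-page truncation work. The table sizing/hashing lives
	 * elsewhere; this is just the pattern both sides share.
	 */
	hash = hugetlb_fault_mutex_hash(mapping, idx);
	mutex_lock(&hugetlb_fault_mutex_table[hash]);

	/* ... fault handling, or unmap + remove of this one page ... */

	mutex_unlock(&hugetlb_fault_mutex_table[hash]);

So "letting the truncation code remove the page" just means the removal
happens on the truncate side of that mutex instead of in the fault backout
path.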
> This also eliminates the need for patch 3 (hugetlbfs: move routine
> remove_huge_page to hugetlb.c).

At least renaming the functions would be very welcome nonetheless :)

>
> I will make these changes in the next version.

Just so I understand correctly: we want to let the fault handling code back
out early if we find any incompatible change, and simply retry the fault? I'm
thinking about some kind of high-level seqcount.

-- 
Thanks,

David / dhildenb
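P.S.: by "high-level seqcount" I mean something like the following -- a very
rough sketch, nothing below is existing code; the trunc_seq field, where it
would live (presumably struct hugetlbfs_inode_info), and the retry handling
are all made up for illustration. The truncate/hole-punch side would bump the
counter after publishing the new i_size, and the fault side would sample it
before doing any work and recheck before committing:

	unsigned int seq;

	/* sketch: sample the (hypothetical) per-inode truncate sequence */
	seq = read_seqcount_begin(&info->trunc_seq);

	/* ... allocate the page, add it to the page cache, take the PTL ... */

	if (read_seqcount_retry(&info->trunc_seq, seq)) {
		/*
		 * A truncate raced with us: drop everything, let the
		 * truncate side clean up the page cache, and retry the
		 * fault (glossing over FAULT_FLAG_ALLOW_RETRY handling).
		 */
		spin_unlock(ptl);
		unlock_page(page);
		put_page(page);
		return VM_FAULT_RETRY;
	}

That would let the fault path notice any truncate/hole punch that happened
after its early checks, without truncation having to take the fault mutex for
every index in the removed range.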