On Tue, Apr 16, 2024 at 2:41 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>
> Barry Song <21cnbao@xxxxxxxxx> writes:
>
> > On Tue, Apr 16, 2024 at 2:27 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
> >>
> >>
> >> Added Khalid for arch_do_swap_page().
> >>
> >> Barry Song <21cnbao@xxxxxxxxx> writes:
> >>
> >> > On Mon, Apr 15, 2024 at 8:39 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
> >> >>
> >> >> Barry Song <21cnbao@xxxxxxxxx> writes:
> >>
> >> [snip]
> >>
> >> >>
> >> >> > +	bool any_swap_shared = false;
> >> >> >
> >> >> >  	if (!pte_unmap_same(vmf))
> >> >> >  		goto out;
> >> >> > @@ -4137,6 +4141,35 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >> >> >  	 */
> >> >> >  	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
> >> >> >  			&vmf->ptl);
> >> >>
> >> >> We should move the pte check here. That is,
> >> >>
> >> >>   if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> >> >>           goto out_nomap;
> >> >>
> >> >> This will simplify the situation for large folios.
> >> >
> >> > the plan is moving the whole code block
> >> >
> >> > if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio))
> >> >
> >> > after
> >> >
> >> > if (unlikely(!folio_test_uptodate(folio))) {
> >> >         ret = VM_FAULT_SIGBUS;
> >> >         goto out_nomap;
> >> > }
> >> >
> >> > though we can't hit !folio_test_uptodate(folio) when hitting the
> >> > swapcache, it seems logically better for future use.
> >>
> >> LGTM, Thanks!
> >>
> >> >>
> >> >> > +
> >> >> > +	/* We hit large folios in swapcache */
> >> >>
> >> >> The comment seems unnecessary because the code already says that.
> >> >>
> >> >> > +	if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio)) {
> >> >> > +		int nr = folio_nr_pages(folio);
> >> >> > +		int idx = folio_page_idx(folio, page);
> >> >> > +		unsigned long folio_start = vmf->address - idx * PAGE_SIZE;
> >> >> > +		unsigned long folio_end = folio_start + nr * PAGE_SIZE;
> >> >> > +		pte_t *folio_ptep;
> >> >> > +		pte_t folio_pte;
> >> >> > +
> >> >> > +		if (unlikely(folio_start < max(vmf->address & PMD_MASK, vma->vm_start)))
> >> >> > +			goto check_pte;
> >> >> > +		if (unlikely(folio_end > pmd_addr_end(vmf->address, vma->vm_end)))
> >> >> > +			goto check_pte;
> >> >> > +
> >> >> > +		folio_ptep = vmf->pte - idx;
> >> >> > +		folio_pte = ptep_get(folio_ptep);
> >> >>
> >> >> It's better to construct the pte based on the fault PTE by generalizing
> >> >> pte_next_swp_offset() (maybe pte_move_swp_offset()). Then we can find
> >> >> inconsistent PTEs more quickly.
> >> >
> >> > it seems your point is getting the pte of page0 by pte_next_swp_offset().
> >> > unfortunately, pte_next_swp_offset() can't go back. on the other hand,
> >> > we have to check the real pte value of the 0th entry right now because
> >> > swap_pte_batch() only really reads ptes from the 1st entry onwards; it
> >> > assumes the pte argument is the real value of the 0th pte entry.
> >> >
> >> > static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
> >> > {
> >> > 	pte_t expected_pte = pte_next_swp_offset(pte);
> >> > 	const pte_t *end_ptep = start_ptep + max_nr;
> >> > 	pte_t *ptep = start_ptep + 1;
> >> >
> >> > 	VM_WARN_ON(max_nr < 1);
> >> > 	VM_WARN_ON(!is_swap_pte(pte));
> >> > 	VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));
> >> >
> >> > 	while (ptep < end_ptep) {
> >> > 		pte = ptep_get(ptep);
> >> >
> >> > 		if (!pte_same(pte, expected_pte))
> >> > 			break;
> >> >
> >> > 		expected_pte = pte_next_swp_offset(expected_pte);
> >> > 		ptep++;
> >> > 	}
> >> >
> >> > 	return ptep - start_ptep;
> >> > }
> >>
> >> Yes. You are right.
> >>
> >> But we may check whether the pte of page0 is the same as "vmf->orig_pte -
> >> folio_page_idx()" (fake code).
> >
> > right, that is why we are reading and checking PTE0 before calling
> > swap_pte_batch() right now.
> >
> > folio_ptep = vmf->pte - idx;
> > folio_pte = ptep_get(folio_ptep);
> > if (!is_swap_pte(folio_pte) || non_swap_entry(pte_to_swp_entry(folio_pte)) ||
> >     swap_pte_batch(folio_ptep, nr, folio_pte, &any_swap_shared) != nr)
> > 	goto check_pte;
> >
> > So, if I understand correctly, you're proposing that we should directly check
> > PTE0 in swap_pte_batch(). Personally, I don't have any objections to this idea.
> > However, I'd also like to hear the feedback from Ryan and David :-)
>
> I mean that we can replace
>
>   !is_swap_pte(folio_pte) || non_swap_entry(pte_to_swp_entry(folio_pte))
>
> in the above code with a pte_same() check against a constructed expected
> first pte.

Got it. It could be quite tricky, especially with considerations like
pte_swp_soft_dirty, pte_swp_exclusive, and pte_swp_uffd_wp. We might need a
helper function similar to pte_next_swp_offset() but capable of moving both
forward and backward, for instance:

	pte_move_swp_offset(pte_t pte, long delta)

pte_next_swp_offset() could then simply call:

	pte_move_swp_offset(pte, 1);

Is that what you are proposing?
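For reference, a rough and untested sketch of such a helper, assuming the
software bits are carried over with the existing pte_swp_mksoft_dirty(),
pte_swp_mkexclusive() and pte_swp_mkuffd_wp() helpers, could look like:

static inline pte_t pte_move_swp_offset(pte_t pte, long delta)
{
	swp_entry_t entry = pte_to_swp_entry(pte);
	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
						   swp_offset(entry) + delta));

	/* preserve the software bits tracked in the swap pte */
	if (pte_swp_soft_dirty(pte))
		new = pte_swp_mksoft_dirty(new);
	if (pte_swp_exclusive(pte))
		new = pte_swp_mkexclusive(new);
	if (pte_swp_uffd_wp(pte))
		new = pte_swp_mkuffd_wp(new);

	return new;
}

static inline pte_t pte_next_swp_offset(pte_t pte)
{
	/* forward movement is just the delta == 1 special case */
	return pte_move_swp_offset(pte, 1);
}

do_swap_page() could then construct the expected pte of page0 as
pte_move_swp_offset(vmf->orig_pte, -idx) and compare it against
ptep_get(folio_ptep) with pte_same().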
>
> >>
> >> You need to check the pte of page 0 anyway.
> >>
> >> >>
> >> >> > +		if (!is_swap_pte(folio_pte) || non_swap_entry(pte_to_swp_entry(folio_pte)) ||
> >> >> > +		    swap_pte_batch(folio_ptep, nr, folio_pte, &any_swap_shared) != nr)
> >> >> > +			goto check_pte;
> >> >> > +
> >> >> > +		start_address = folio_start;
> >> >> > +		start_pte = folio_ptep;
> >> >> > +		nr_pages = nr;
> >> >> > +		entry = folio->swap;
> >> >> > +		page = &folio->page;
> >> >> > +	}
> >> >> > +
> >> >> > +check_pte:
> >> >> >  	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> >> >> >  		goto out_nomap;
> >> >> >
> >> >> > @@ -4190,6 +4223,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >> >> >  		 */
> >> >> >  		exclusive = false;
> >> >> >  	}
> >> >> > +
> >> >> > +	/* Reuse the whole large folio iff all entries are exclusive */
> >> >> > +	if (nr_pages > 1 && any_swap_shared)
> >> >> > +		exclusive = false;
> >> >> >  }
>
> > [snip]
>
> --
> Best Regards,
> Huang, Ying