On Wed, Nov 29, 2023 at 1:08 AM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>
> On 28/11/2023 05:49, Barry Song wrote:
> > On Mon, Nov 27, 2023 at 5:15 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
> >>
> >> On 27/11/2023 03:18, Barry Song wrote:
> >>>> Ryan Roberts (14):
> >>>>   mm: Batch-copy PTE ranges during fork()
> >>>>   arm64/mm: set_pte(): New layer to manage contig bit
> >>>>   arm64/mm: set_ptes()/set_pte_at(): New layer to manage contig bit
> >>>>   arm64/mm: pte_clear(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_get_and_clear(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_test_and_clear_young(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_clear_flush_young(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_set_wrprotect(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_set_access_flags(): New layer to manage contig bit
> >>>>   arm64/mm: ptep_get(): New layer to manage contig bit
> >>>>   arm64/mm: Split __flush_tlb_range() to elide trailing DSB
> >>>>   arm64/mm: Wire up PTE_CONT for user mappings
> >>>>   arm64/mm: Implement ptep_set_wrprotects() to optimize fork()
> >>>>   arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
> >>>
> >>> Hi Ryan,
> >>> Not quite sure if I missed something - are we splitting/unfolding
> >>> CONTPTEs in the below cases?
> >>
> >> The general idea is that the core-mm sets the individual ptes (one at a
> >> time if it likes with set_pte_at(), or in a block with set_ptes()),
> >> modifies their permissions (ptep_set_wrprotect(), ptep_set_access_flags())
> >> and clears them (ptep_clear(), etc); this is exactly the same interface
> >> as previously.
> >>
> >> BUT, the arm64 implementation of those interfaces will now detect when a
> >> set of adjacent PTEs (a contpte block - so 16 naturally aligned entries
> >> when using 4K base pages) are all appropriate for having the CONT_PTE
> >> bit set; in this case the block is "folded". And it will detect when the
> >> first PTE in the block changes such that the CONT_PTE bit must now be
> >> unset ("unfolded"). One of the requirements for folding a contpte block
> >> is that all the pages must belong to the *same* folio (that means it's
> >> safe to only track access/dirty for the contpte block as a whole rather
> >> than for each individual pte).
> >>
> >> (There are a couple of optimizations that make the reality slightly more
> >> complicated than what I've just explained, but you get the idea.)
> >>
> >> On that basis, I believe all the specific cases you describe below are
> >> covered and safe - please let me know if you think there is a hole here!
> >>
> >>>
> >>> 1. madvise(MADV_DONTNEED) on a part of basepages on a CONTPTE large folio
> >>
> >> The page will first be unmapped (e.g. ptep_clear() or
> >> ptep_get_and_clear(), or whatever). The implementation of that will
> >> cause an unfold and the CONT_PTE bit is removed from the whole contpte
> >> block. If there is then a subsequent set_pte_at() to set a swap entry,
> >> the implementation will see that it's not appropriate to re-fold, so the
> >> range will remain unfolded.
> >>
> >>>
> >>> 2. vma split in a large folio due to various reasons such as mprotect,
> >>> munmap, mlock etc.
> >>
> >> I'm not sure if PTEs are explicitly unmapped/remapped when splitting a
> >> VMA? I suspect not, so if the VMA is split in the middle of a currently
> >> folded contpte block, it will remain folded. But this is safe and
> >> continues to work correctly. The VMA arrangement is not important; it is
> >> just important that a single folio is mapped contiguously across the
> >> whole block.
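[Interjecting to check my understanding of the folding rule above: is the
eligibility test conceptually something like the sketch below? The function
name and structure here are made up for illustration - I am not claiming
this is the actual code in your patches.]

	/*
	 * Hypothetical sketch (not from the patch set): a contpte block
	 * is foldable only if all 16 naturally-aligned entries are
	 * present and map 16 physically contiguous pages starting at a
	 * CONT_PTES-aligned pfn. The same-folio requirement and the
	 * matching-permission-bits checks are elided here for brevity.
	 */
	static bool contpte_block_foldable(unsigned long addr, pte_t *ptep)
	{
		unsigned long start = ALIGN_DOWN(addr, CONT_PTE_SIZE);
		pte_t *first = ptep - (addr - start) / PAGE_SIZE;
		pte_t expected = ptep_get(first);
		int i;

		/* The block's first pfn must itself be CONT_PTES-aligned. */
		if (!pte_present(expected) || pte_pfn(expected) % CONT_PTES)
			return false;

		for (i = 1; i < CONT_PTES; i++) {
			pte_t pte = ptep_get(first + i);

			/* Every later entry must be present at first pfn + i. */
			if (!pte_present(pte) ||
			    pte_pfn(pte) != pte_pfn(expected) + i)
				return false;
		}

		return true;
	}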
> >>
> >>>
> >>> 3. try_to_unmap_one() to reclaim a folio, ptes are scanned one by one
> >>> rather than as a whole.
> >>
> >> Yes, as per 1; the arm64 implementation will notice when the first entry
> >> is cleared and unfold the contpte block.
> >>
> >>>
> >>> In hardware, we need to make sure CONTPTEs follow the rule - always 16
> >>> contiguous physical addresses with CONTPTE set. If one of them runs
> >>> away from the 16-pte group and the PTEs become inconsistent, some
> >>> terrible errors/faults can happen in HW, for example.
> >>
> >> Yes, the implementation obeys all these rules; see contpte_try_fold()
> >> and contpte_try_unfold(). The fold/unfold operation is only done when
> >> all requirements are met, and we perform it in a manner that is
> >> conformant to the architecture requirements (see contpte_fold() - being
> >> renamed to contpte_convert() in the next version).
> >
> > Hi Ryan,
> >
> > Sorry for so many comments - I remembered another case:
> >
> > 4. mremap
> >
> > A CONTPTE block might be remapped to another address which might not be
> > aligned with 16*basepage. Thus, in move_ptes(), we are copying CONTPTEs
> > from src to dst:
> >
> > static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> >                 unsigned long old_addr, unsigned long old_end,
> >                 struct vm_area_struct *new_vma, pmd_t *new_pmd,
> >                 unsigned long new_addr, bool need_rmap_locks)
> > {
> >         struct mm_struct *mm = vma->vm_mm;
> >         pte_t *old_pte, *new_pte, pte;
> >         ...
> >
> >         /*
> >          * We don't have to worry about the ordering of src and dst
> >          * pte locks because exclusive mmap_lock prevents deadlock.
> >          */
> >         old_pte = pte_offset_map_lock(mm, old_pmd, old_addr, &old_ptl);
> >         if (!old_pte) {
> >                 err = -EAGAIN;
> >                 goto out;
> >         }
> >         new_pte = pte_offset_map_nolock(mm, new_pmd, new_addr, &new_ptl);
> >         if (!new_pte) {
> >                 pte_unmap_unlock(old_pte, old_ptl);
> >                 err = -EAGAIN;
> >                 goto out;
> >         }
> >         if (new_ptl != old_ptl)
> >                 spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> >         flush_tlb_batched_pending(vma->vm_mm);
> >         arch_enter_lazy_mmu_mode();
> >
> >         for (; old_addr < old_end; old_pte++, old_addr += PAGE_SIZE,
> >                                    new_pte++, new_addr += PAGE_SIZE) {
> >                 if (pte_none(ptep_get(old_pte)))
> >                         continue;
> >
> >                 pte = ptep_get_and_clear(mm, old_addr, old_pte);
> >                 ...
> >         }
> >
> > This has two possibilities:
> > 1. new_pte is aligned with CONT_PTES - we can still keep CONTPTE;
> > 2. new_pte is not aligned with CONT_PTES - we should drop CONTPTE
> >    while copying.
> >
> > Does your code also handle this properly?
>
> Yes; same mechanism - the arm64 arch code does the CONT_PTE bit management
> and folds/unfolds as necessary.
>
> Admittedly this may be non-optimal because we are iterating a single PTE
> at a time. When we clear the first pte of a contpte block in the source,
> the block will be unfolded. When we set the last pte of the contpte block
> in the dest, the block will be folded. If we had a batching mechanism, we
> could just clear the whole source contpte block in one hit (no need to
> unfold first) and we could just set the dest contpte block in one hit (no
> need to fold at the end).
>
> I haven't personally seen this as a hotspot though; I don't know if you
> have any data to the contrary? I've followed this type of batching
> technique for the fork case (patch 13). We could do a similar thing in
> theory, but it's a bit more complex because of the ptep_get_and_clear()
> return value; you would need to return all ptes for the cleared range, or
> somehow collapse the actual info that the caller requires (presumably
> access/dirty info).
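To make sure I follow the batching idea: reusing the locals from the
move_ptes() snippet above, a batched inner loop might conceptually look
like the below? ptep_get_and_clear_range() is invented here purely for
illustration - it is exactly where the return-value problem you describe
shows up - and pte_cont() is arm64-internal today, so the core would
really need some arch hook to ask whether a block is folded.

	unsigned int nr;
	pte_t pte;

	for (; old_addr < old_end; old_pte += nr, old_addr += nr * PAGE_SIZE,
				   new_pte += nr, new_addr += nr * PAGE_SIZE) {
		nr = 1;
		pte = ptep_get(old_pte);
		if (pte_none(pte))
			continue;

		/*
		 * If a whole naturally-aligned contpte block is moving to
		 * a naturally-aligned destination, clear and set it in one
		 * hit so the arch never unfolds/refolds entry by entry.
		 */
		if (pte_cont(pte) && !(old_addr % CONT_PTE_SIZE) &&
		    !(new_addr % CONT_PTE_SIZE) &&
		    old_end - old_addr >= CONT_PTE_SIZE)
			nr = CONT_PTES;

		/*
		 * Hypothetical helper: it would have to collapse the
		 * access/dirty state of all nr cleared ptes into the
		 * returned value, as you say.
		 */
		pte = ptep_get_and_clear_range(mm, old_addr, old_pte, nr);
		/* ... existing dirty/soft-dirty handling elided ... */
		set_ptes(mm, new_addr, new_pte, pte, nr);
	}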
That said, in my previous testing I didn't see mremap used very often, so
no worries. As long as it is bug-free it is fine to me, though an mremap
microbench will definitely lose :-)

Thanks
Barry