On 27/11/2023 10:35, Barry Song wrote:
> On Mon, Nov 27, 2023 at 10:15 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>>
>> On 27/11/2023 03:18, Barry Song wrote:
>>>> Ryan Roberts (14):
>>>> mm: Batch-copy PTE ranges during fork()
>>>> arm64/mm: set_pte(): New layer to manage contig bit
>>>> arm64/mm: set_ptes()/set_pte_at(): New layer to manage contig bit
>>>> arm64/mm: pte_clear(): New layer to manage contig bit
>>>> arm64/mm: ptep_get_and_clear(): New layer to manage contig bit
>>>> arm64/mm: ptep_test_and_clear_young(): New layer to manage contig bit
>>>> arm64/mm: ptep_clear_flush_young(): New layer to manage contig bit
>>>> arm64/mm: ptep_set_wrprotect(): New layer to manage contig bit
>>>> arm64/mm: ptep_set_access_flags(): New layer to manage contig bit
>>>> arm64/mm: ptep_get(): New layer to manage contig bit
>>>> arm64/mm: Split __flush_tlb_range() to elide trailing DSB
>>>> arm64/mm: Wire up PTE_CONT for user mappings
>>>> arm64/mm: Implement ptep_set_wrprotects() to optimize fork()
>>>> arm64/mm: Add ptep_get_and_clear_full() to optimize process teardown
>>>
>>> Hi Ryan,
>>> Not quite sure if I missed something, are we splitting/unfolding CONTPTEs
>>> in the below cases?
>>
>> The general idea is that the core-mm sets the individual ptes (one at a time if
>> it likes with set_pte_at(), or in a block with set_ptes()), modifies their
>> permissions (ptep_set_wrprotect(), ptep_set_access_flags()) and clears them
>> (ptep_clear(), etc.); this is exactly the same interface as previously.
>>
>> BUT, the arm64 implementation of those interfaces will now detect when a set of
>> adjacent PTEs (a contpte block - so 16 naturally aligned entries when using 4K
>> base pages) are all appropriate for having the CONT_PTE bit set; in this case
>> the block is "folded". And it will detect when the first PTE in the block
>> changes such that the CONT_PTE bit must now be unset ("unfolded"). One of the
>> requirements for folding a contpte block is that all the pages must belong to
>> the *same* folio (that means it's safe to only track access/dirty for the contpte
>> block as a whole rather than for each individual pte).
>>
>> (there are a couple of optimizations that make the reality slightly more
>> complicated than what I've just explained, but you get the idea).
>>
>> On that basis, I believe all the specific cases you describe below are all
>> covered and safe - please let me know if you think there is a hole here!
>>
>>>
>>> 1. madvise(MADV_DONTNEED) on a part of basepages on a CONTPTE large folio
>>
>> The page will first be unmapped (e.g. ptep_clear() or ptep_get_and_clear(), or
>> whatever). The implementation of that will cause an unfold, and the CONT_PTE bit
>> is removed from the whole contpte block. If there is then a subsequent
>> set_pte_at() to set a swap entry, the implementation will see that it's not
>> appropriate to re-fold, so the range will remain unfolded.
>>
>>>
>>> 2. vma split in a large folio due to various reasons such as mprotect,
>>> munmap, mlock etc.
>>
>> I'm not sure if PTEs are explicitly unmapped/remapped when splitting a VMA? I
>> suspect not, so if the VMA is split in the middle of a currently folded contpte
>> block, it will remain folded. But this is safe and continues to work correctly.
>> The VMA arrangement is not important; it is just important that a single folio
>> is mapped contiguously across the whole block.
>
> I don't think it is safe to keep CONTPTE folded in a split_vma case, as
> otherwise copy_ptes in your other patch might only copy a part
> of CONTPTEs.
> For example, if page0-page4 and page5-page15 are split in split_vma,
> in fork, while copying pte for the first VMA, we are copying page0-page4,
> this will immediately cause inconsistent CONTPTEs, as we have to
> make sure all CONTPTEs are atomically mapped under a PTL.

No, that's not how it works. The CONT_PTE bit is not blindly copied from
parent to child. It is explicitly managed by the arch code and set when
appropriate. In the case above, we will end up calling set_ptes() for
page0-page4 in the child. set_ptes() will notice that there are only 5
contiguous pages so it will map without the CONT_PTE bit.
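
To illustrate the shape of that check, here is a rough, self-contained
userspace model (not the code from this series; CONTPTE_NR and
contpte_block_is_foldable() are names invented for the example, and the real
check must additionally verify that all 16 entries map contiguous pages of
the same folio with identical attributes):

/*
 * Illustrative model only -- not the arm64 implementation. It captures the
 * alignment/length part of the decision a set_ptes()-style helper can make
 * before setting CONT_PTE: the mapping must cover one whole, naturally
 * aligned block of CONTPTE_NR entries (16 with 4K base pages).
 */
#include <stdbool.h>
#include <stdio.h>

#define CONTPTE_NR 16	/* PTEs per contpte block with 4K base pages */

/* Would a run of 'nr' PTEs starting at pte index 'idx' permit folding? */
static bool contpte_block_is_foldable(unsigned long idx, unsigned int nr)
{
	/* Must start on a contpte boundary and span the whole block. */
	return (idx % CONTPTE_NR) == 0 && nr >= CONTPTE_NR;
}

int main(void)
{
	/* fork() copying only page0-page4 of the split VMA: no fold. */
	printf("5 PTEs from block start  -> fold? %d\n",
	       contpte_block_is_foldable(0, 5));
	/* A full 16-PTE, block-aligned mapping: fold is allowed. */
	printf("16 PTEs from block start -> fold? %d\n",
	       contpte_block_is_foldable(0, 16));
	return 0;
}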
>
>>
>>>
>>> 3. try_to_unmap_one() to reclaim a folio, ptes are scanned one by one
>>> rather than being as a whole.
>>
>> Yes, as per 1; the arm64 implementation will notice when the first entry is
>> cleared and unfold the contpte block.
>>
>>>
>>> In hardware, we need to make sure CONTPTEs follow the rule - always 16
>>> contiguous physical addresses with CONTPTE set. If one of them runs away
>>> from the 16-PTE group and the PTEs become inconsistent, some terrible
>>> errors/faults can happen in HW. For example:
>>
>> Yes, the implementation obeys all these rules; see contpte_try_fold() and
>> contpte_try_unfold(). The fold/unfold operation is only done when all
>> requirements are met, and we perform it in a manner that is conformant to the
>> architecture requirements (see contpte_fold() - being renamed to
>> contpte_convert() in the next version).
>>
>> Thanks for the review!
>>
>> Thanks,
>> Ryan
>>
>>>
>>> case 0:
>>> addr0 PTE - has no CONTPTE
>>> addr0+4kb PTE - has CONTPTE
>>> ....
>>> addr0+60kb PTE - has CONTPTE
>>>
>>> case 1:
>>> addr0 PTE - has no CONTPTE
>>> addr0+4kb PTE - has CONTPTE
>>> ....
>>> addr0+60kb PTE - has swap
>>>
>>> Inconsistent 16 PTEs will lead to a crash even in the firmware, based on
>>> our observation.
>>>
>
> Thanks
> Barry
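
For completeness, here is a rough userspace model of the conversion ordering
described above (not contpte_convert() from this series; model_pte,
model_tlb_flush() and contpte_unfold_model() are invented for the example).
It shows how the mixed states in case 0 / case 1 are avoided: the whole
16-entry block is invalidated and the TLB flushed before the entries are
rewritten without the contig bit, so the walker never sees a block that is
partly contiguous and partly not.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define CONTPTE_NR 16

struct model_pte {
	bool valid;
	bool cont;	/* models the CONT_PTE bit */
};

static void model_tlb_flush(void)
{
	/* stand-in for the real range TLB invalidation */
}

/* Rewrite a folded block as CONTPTE_NR individual, non-contiguous entries. */
static void contpte_unfold_model(struct model_pte *block)
{
	struct model_pte copy[CONTPTE_NR];
	int i;

	memcpy(copy, block, sizeof(copy));

	/* 1) Take the whole block out of use. */
	for (i = 0; i < CONTPTE_NR; i++)
		block[i].valid = false;

	/* 2) Flush stale translations for the block. */
	model_tlb_flush();

	/* 3) Re-establish the entries without the contig bit. */
	for (i = 0; i < CONTPTE_NR; i++) {
		copy[i].cont = false;
		block[i] = copy[i];
	}
}

int main(void)
{
	struct model_pte block[CONTPTE_NR];
	int i;

	for (i = 0; i < CONTPTE_NR; i++)
		block[i] = (struct model_pte){ .valid = true, .cont = true };

	contpte_unfold_model(block);
	printf("entry0: valid=%d cont=%d\n", block[0].valid, block[0].cont);
	return 0;
}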