Re: [PATCH v1 15/39] mm/huge_memory: batch rmap operations in __split_huge_pmd_locked()

On 18/12/2023 17:03, David Hildenbrand wrote:
> On 18.12.23 17:22, Ryan Roberts wrote:
>> On 11/12/2023 15:56, David Hildenbrand wrote:
>>> Let's use folio_add_anon_rmap_ptes(), batching the rmap operations.
>>>
>>> While at it, use more folio operations (but only in the code branch we're
>>> touching), use VM_WARN_ON_FOLIO(), and pass RMAP_EXCLUSIVE instead of
>>> manually setting PageAnonExclusive.
>>>
>>> We should never see non-anon pages on that branch: otherwise, the
>>> existing page_add_anon_rmap() call would have been flawed already.
>>>
>>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>>> ---
>>>   mm/huge_memory.c | 23 +++++++++++++++--------
>>>   1 file changed, 15 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 1f5634b2f374..82ad68fe0d12 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -2398,6 +2398,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>           unsigned long haddr, bool freeze)
>>>   {
>>>       struct mm_struct *mm = vma->vm_mm;
>>> +    struct folio *folio;
>>>       struct page *page;
>>>       pgtable_t pgtable;
>>>       pmd_t old_pmd, _pmd;
>>> @@ -2493,16 +2494,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>>           uffd_wp = pmd_swp_uffd_wp(old_pmd);
>>>       } else {
>>>           page = pmd_page(old_pmd);
>>> +        folio = page_folio(page);
>>>           if (pmd_dirty(old_pmd)) {
>>>               dirty = true;
>>> -            SetPageDirty(page);
>>> +            folio_set_dirty(folio);
>>>           }
>>>           write = pmd_write(old_pmd);
>>>           young = pmd_young(old_pmd);
>>>           soft_dirty = pmd_soft_dirty(old_pmd);
>>>           uffd_wp = pmd_uffd_wp(old_pmd);
>>>
>>> -        VM_BUG_ON_PAGE(!page_count(page), page);
>>> +        VM_WARN_ON_FOLIO(!folio_ref_count(folio), folio);
>>> +        VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
>>
>> Is this warning really correct? File-backed memory can be PMD-mapped with
>> CONFIG_READ_ONLY_THP_FOR_FS, so presumably it can also need to be remapped
>> as PTEs? Although I guess if we did have a file-backed folio here, it
>> definitely wouldn't be correct to call page_add_anon_rmap() /
>> folio_add_anon_rmap_ptes()...
> 
> Yes, see the patch description where I spell that out.

Oh god, how did I miss that... sorry!

> 
> PTE-remapping a file-backed folio will simply zap the PMD and refault from the
> page cache after creating a page table.


Yep, that makes sense.
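
(To make that concrete for anyone reading along: the non-anonymous case in
__split_huge_pmd_locked() boils down to something like the sketch below --
simplified and paraphrased, not the verbatim upstream code -- the PMD mapping
is just torn down and no PTEs are installed, so the anon rmap code further
down is never reached for file-backed folios.)

	if (!vma_is_anonymous(vma)) {
		old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd);
		if (arch_needs_pgtable_deposit())
			zap_deposited_table(mm, pmd);
		/* ... drop the ref/mapcount the PMD mapping held on the folio ... */
		/* No PTEs are written here; later faults repopulate the
		 * range from the page cache. */
		return;
	}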

> 
> So this is anon-only code.
> 
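
For reference, the batching itself lands in a later hunk of this patch (not
quoted above): the per-subpage page_add_anon_rmap() calls become a single
call for the whole folio. Roughly -- paraphrasing rather than quoting the
exact hunk, so argument names may differ slightly:

	rmap_t rmap_flags = RMAP_NONE;

	folio_ref_add(folio, HPAGE_PMD_NR - 1);
	if (anon_exclusive)
		rmap_flags |= RMAP_EXCLUSIVE;
	/* One call covers all HPAGE_PMD_NR subpages; passing RMAP_EXCLUSIVE
	 * replaces the manual SetPageAnonExclusive() on each subpage. */
	folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR, vma, haddr,
				 rmap_flags);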
