Re: [RFC PATCH] madvise: make madvise_cold_or_pageout_pte_range() support large folio

>> -               if (pageout_anon_only_filter && !folio_test_anon(folio))
>> +               /* Do not interfere with other mappings of this folio */
>> +               if (folio_mapcount(folio) != 1)
>>                         continue;
>>
>> -               VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>> -
>> -               if (pte_young(ptent)) {
>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
>> -                                                       tlb->fullmm);
>> -                       ptent = pte_mkold(ptent);
>> -                       set_pte_at(mm, addr, pte, ptent);
>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
>> -               }
>> -
>> -               /*
>> -                * We are deactivating a folio for accelerating reclaiming.
>> -                * VM couldn't reclaim the folio unless we clear PG_young.
>> -                * As a side effect, it makes confuse idle-page tracking
>> -                * because they will miss recent referenced history.
>> -                */
>> -               folio_clear_referenced(folio);
>> -               folio_test_clear_young(folio);
>> -               if (folio_test_active(folio))
>> -                       folio_set_workingset(folio);
>> +pageout_cold_folio:
>>                 if (pageout) {
>>                         if (folio_isolate_lru(folio)) {
>>                                 if (folio_test_unevictable(folio))
>> @@ -529,8 +542,30 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>                 arch_leave_lazy_mmu_mode();
>>                 pte_unmap_unlock(start_pte, ptl);
>>         }
>> -       if (pageout)
>> -               reclaim_pages(&folio_list);
>> +
>> +       if (pageout) {
>> +               LIST_HEAD(reclaim_list);
>> +
>> +               while (!list_empty(&folio_list)) {
>> +                       int refs;
>> +                       unsigned long flags;
>> +                       struct mem_cgroup *memcg;
>> +
>> +                       folio = lru_to_folio(&folio_list);
>> +                       list_del(&folio->lru);
>> +                       memcg = folio_memcg(folio);
>> +
>> +                       refs = folio_referenced(folio, 0, memcg, &flags);
>> +
>> +                       if ((flags & VM_LOCKED) || (refs == -1)) {
>> +                               folio_putback_lru(folio);
>> +                               continue;
>> +                       }
>> +
>> +                       folio_test_clear_referenced(folio);
>> +                       list_add(&folio->lru, &reclaim_list);
>> +               }
>> +               reclaim_pages(&reclaim_list);
>> +       }
> 
> i overlooked the chunk above -- it's unnecessary: after we split the
> large folio (and splice the base folios onto the same LRU list), we
> continue at the position of the first base folio because of:
> 
>   pte--;
>   addr -= PAGE_SIZE;
>   continue;
> 
> And then we do pte_mkold(), which takes care of the A-bit.
This patch moves the A-bit clearing out of the folio isolation loop. So
even if the folio is split and the loop restarts from the first base
folio, the A-bit is not cleared there; it is only cleared in the
reclaim loop.
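
That is, condensed from the hunk above, each folio pulled off
folio_list now goes through roughly:

	/*
	 * folio_referenced() walks the rmap and, as a side effect,
	 * clears the young/A bit in every pte that maps the folio.
	 */
	refs = folio_referenced(folio, 0, folio_memcg(folio), &flags);

	if ((flags & VM_LOCKED) || (refs == -1)) {
		folio_putback_lru(folio);
		continue;
	}

	/* also drop PG_referenced before handing the folio to reclaim */
	folio_test_clear_referenced(folio);
	list_add(&folio->lru, &reclaim_list);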

There is one option for A-bit clearing:
  - clear the A-bit of the base 4K pages in the isolation loop and
    leave large folio A-bit clearing to the reclaim loop (a rough
    sketch follows below).
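
For illustration only, an untested sketch of that option -- essentially
the aging code this patch removes, kept in the isolation loop but
restricted to order-0 folios:

	/*
	 * Only age order-0 folios here; large folios are left to
	 * folio_referenced() in the reclaim loop.
	 */
	if (!folio_test_large(folio)) {
		if (pte_young(ptent)) {
			ptent = ptep_get_and_clear_full(mm, addr, pte,
							tlb->fullmm);
			ptent = pte_mkold(ptent);
			set_pte_at(mm, addr, pte, ptent);
			tlb_remove_tlb_entry(tlb, pte, addr);
		}
		folio_clear_referenced(folio);
		folio_test_clear_young(folio);
		if (folio_test_active(folio))
			folio_set_workingset(folio);
	}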

This patch didn't take that approach because I don't want to introduce
A-bit clearing in two places. But I am open to clearing the base 4K
page A-bit in the isolation loop. Thanks.


Regards
Yin, Fengwei
