On 7/26/23 13:40, Yu Zhao wrote:
> On Tue, Jul 25, 2023 at 10:44 PM Yin Fengwei <fengwei.yin@xxxxxxxxx> wrote:
>>
>>
>>
>> On 7/26/23 11:26, Yu Zhao wrote:
>>> On Tue, Jul 25, 2023 at 8:49 PM Yin Fengwei <fengwei.yin@xxxxxxxxx> wrote:
>>>>
>>>>
>>>> On 7/25/23 13:55, Yu Zhao wrote:
>>>>> On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei <fengwei.yin@xxxxxxxxx> wrote:
>>>>>>
>>>>>> Currently, in function madvise_cold_or_pageout_pte_range(), the
>>>>>> young bit of pte/pmd is cleared without notifying the subscriber.
>>>>>>
>>>>>> Use the notify-able API to make sure the subscriber is signaled about
>>>>>> the young bit clearing.
>>>>>>
>>>>>> Signed-off-by: Yin Fengwei <fengwei.yin@xxxxxxxxx>
>>>>>> ---
>>>>>>  mm/madvise.c | 18 ++----------------
>>>>>>  1 file changed, 2 insertions(+), 16 deletions(-)
>>>>>>
>>>>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>>>>> index f12933ebcc24..b236e201a738 100644
>>>>>> --- a/mm/madvise.c
>>>>>> +++ b/mm/madvise.c
>>>>>> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>                         return 0;
>>>>>>                 }
>>>>>>
>>>>>> -               if (pmd_young(orig_pmd)) {
>>>>>> -                       pmdp_invalidate(vma, addr, pmd);
>>>>>> -                       orig_pmd = pmd_mkold(orig_pmd);
>>>>>> -
>>>>>> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
>>>>>> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
>>>>>> -               }
>>>>>> -
>>>>>> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
>>>>>>                 folio_clear_referenced(folio);
>>>>>>                 folio_test_clear_young(folio);
>>>>>>                 if (folio_test_active(folio))
>>>>>> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>>>>>>
>>>>>>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>>>>>>
>>>>>> -               if (pte_young(ptent)) {
>>>>>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
>>>>>> -                                                       tlb->fullmm);
>>>>>> -                       ptent = pte_mkold(ptent);
>>>>>> -                       set_pte_at(mm, addr, pte, ptent);
>>>>>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
>>>>>> -               }
>>>>>> -
>>>>>> +               ptep_clear_flush_young_notify(vma, addr, pte);
>>>>>
>>>>> These two places are tricky.
>>>>>
>>>>> I agree there is a problem here, i.e., we are not consulting the mmu
>>>>> notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
>>>>> known problem to me for a while (not a high priority one).
>>>>>
>>>>> tlb_remove_tlb_entry() is a batched flush, ptep_clear_flush_young() is
>>>>> not. But, on x86, we might see a performance improvement since
>>>>> ptep_clear_flush_young() doesn't flush the TLB at all. On ARM, there
>>>>> might be regressions though.
>>>>>
>>>>> I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
>>>>> prefers a flush. So I'll let him chime in.
>>>> I am OK with either way, even though the no-flush way is more efficient
>>>> for arm64. Let's wait for Minchan's comment.
>>>
>>> Yes, and I don't think there would be any "negative" consequences
>>> without tlb flushes when clearing the A-bit.
>>>
>>>>> If we do end up with ptep_clear_young_notify(), please remove
>>>>> mmu_gather -- it should have been done in this patch.
>>>>
>>>> I suppose "remove mmu_gather" means to trigger the tlb flush in a
>>>> batched way to make sure no stale data stays in the TLB for a long
>>>> time on the arm64 platform.
>>>
>>> In madvise_cold_or_pageout_pte_range(), we only need struct
>>> mmu_gather *tlb because of tlb_remove_pmd_tlb_entry(), i.e., flushing
>>> the tlb after clearing the A-bit. There is no correctness issue, e.g.,
>>> potential data corruption, involved there.
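
For illustration, here is a rough sketch (not the posted patch, just my
reading of the suggestion above) of what the two hunks could look like with
the no-flush variant, where struct mmu_gather *tlb would no longer be needed
for this step:

	/* PMD path: clear the A-bit and call the mmu notifier, no TLB flush */
	pmdp_clear_young_notify(vma, addr, pmd);
	folio_clear_referenced(folio);
	folio_test_clear_young(folio);

	/* PTE path: likewise, no per-entry flush */
	ptep_clear_young_notify(vma, addr, pte);
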
>>
>> From https://lore.kernel.org/lkml/20181029105515.GD14127@xxxxxxx/,
>> the reason that arm64 didn't drop the whole tlb flush in
>> ptep_clear_flush_young() is to prevent stale data in the TLB. I suppose
>> there is no correctness issue there either.
>>
>> So why is it fine to keep stale data in the TLB in
>> madvise_cold_or_pageout_pte_range()?
>
> Sorry, I'm not sure I understand your question here.
Oh. Sorry for the confusion. I will explain my understanding and question
in detail.

>
> In this patch, you removed tlb_remove_tlb_entry(), so we don't need
> struct mmu_gather *tlb any more.
Yes. You are right.

>
> If you are asking why I prefer ptep_clear_young_notify() (no flush),
> which also doesn't need tlb_remove_tlb_entry(), then the answer is
> that the TLB size doesn't scale like DRAM does: the gap has been
> growing exponentially. So there is no way the TLB can hold stale entries
> long enough to cause a measurable effect on the A-bit. This isn't a
> conjecture -- it's been proven conversely: we encountered bugs (almost
> every year) caused by missing TLB flushes and resulting in data
> corruption. They were never easy to reproduce, meaning stale entries
> never stayed long in the TLB.
When I read https://lore.kernel.org/lkml/20181029105515.GD14127@xxxxxxx/,
my understanding is that arm64 still keeps some flushing in
ptep_clear_flush_young(). The reason is that the TLB flush is finished by
the next context switch, so no stale entries in the TLB survive across the
next context switch.

Should we keep the same behavior (no stale entries in the TLB across the
next context switch) for madvise_cold_or_pageout_pte_range()?

So two versions work for me (assuming we should keep the same behavior):
1. Use the xxx_flush_xxx() version, but with a possible regression on arm64.
2. Use the no-flush version, but do a batched TLB flush.

Hope this makes things clear. Thanks.


Regards
Yin, Fengwei
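
P.S. For option 2, a rough sketch (my assumption of how "no flush, but
batched TLB flush" could be wired up, keeping the mmu_gather) might be:

	/*
	 * Clear the A-bit and notify the mmu notifier without an
	 * immediate flush; record the entry in the mmu_gather so the
	 * TLB is flushed in a batch later.
	 */
	if (ptep_clear_young_notify(vma, addr, pte))
		tlb_remove_tlb_entry(tlb, pte, addr);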