On 7/19/22 08:58, Huang, Ying wrote:
> Anshuman Khandual <anshuman.khandual@xxxxxxx> writes:
>
>> On 7/19/22 06:53, Barry Song wrote:
>>> On Tue, Jul 19, 2022 at 12:44 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>>>>
>>>> Barry Song <21cnbao@xxxxxxxxx> writes:
>>>>
>>>>> From: Barry Song <v-songbaohua@xxxxxxxx>
>>>>>
>>>>> THP_SWAP has been proven to improve swap throughput significantly
>>>>> on x86_64, according to commit bd4c82c22c367e ("mm, THP, swap: delay
>>>>> splitting THP after swapped out"). As long as arm64 uses a 4K page
>>>>> size, it is quite similar to x86_64 in having 2MB PMD THPs. THP_SWAP
>>>>> is architecture-independent, so enabling it will benefit arm64 as
>>>>> well. A corner case is that MTE assumes only base pages can be
>>>>> swapped. We won't enable THP_SWAP for ARM64 hardware with MTE
>>>>> support until MTE is reworked to coexist with THP_SWAP.
>>>>>
>>>>> A micro-benchmark was written to measure THP swapout throughput, as
>>>>> below:
>>>>>
>>>>> #include <stdio.h>
>>>>> #include <stdlib.h>
>>>>> #include <string.h>
>>>>> #include <sys/mman.h>
>>>>> #include <sys/time.h>
>>>>>
>>>>> unsigned long long tv_to_ms(struct timeval tv)
>>>>> {
>>>>>         return tv.tv_sec * 1000 + tv.tv_usec / 1000;
>>>>> }
>>>>>
>>>>> int main(void)
>>>>> {
>>>>>         struct timeval tv_b, tv_e;
>>>>> #define SIZE (400 * 1024 * 1024)
>>>>>         void *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
>>>>>                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>>>>>         if (p == MAP_FAILED) {  /* mmap() fails with MAP_FAILED, not NULL */
>>>>>                 perror("fail to get memory");
>>>>>                 exit(-1);
>>>>>         }
>>>>>
>>>>>         madvise(p, SIZE, MADV_HUGEPAGE);
>>>>>         memset(p, 0x11, SIZE);  /* write to fault the memory in */
>>>>>
>>>>>         gettimeofday(&tv_b, NULL);
>>>>>         madvise(p, SIZE, MADV_PAGEOUT);
>>>>>         gettimeofday(&tv_e, NULL);
>>>>>
>>>>>         printf("swp out bandwidth: %llu bytes/ms\n",
>>>>>                SIZE / (tv_to_ms(tv_e) - tv_to_ms(tv_b)));
>>>>>         return 0;
>>>>> }
>>>>>
>>>>> Testing was done on an rk3568 64-bit quad-core Cortex-A55 platform -
>>>>> ROCK 3A.
>>>>> thp swp throughput w/o patch: 2734 bytes/ms (mean of 10 tests)
>>>>> thp swp throughput w/  patch: 3331 bytes/ms (mean of 10 tests)
>>>>>
>>>>> Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
>>>>> Cc: Minchan Kim <minchan@xxxxxxxxxx>
>>>>> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
>>>>> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
>>>>> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
>>>>> Cc: Anshuman Khandual <anshuman.khandual@xxxxxxx>
>>>>> Cc: Steven Price <steven.price@xxxxxxx>
>>>>> Cc: Yang Shi <shy828301@xxxxxxxxx>
>>>>> Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
>>>>> ---
>>>>> -v3:
>>>>>  * refine the commit log;
>>>>>  * add a benchmark result;
>>>>>  * refine the macro of arch_thp_swp_supported.
>>>>>  Thanks for the comments from Anshuman, Andrew and Steven.
>>>>>
>>>>>  arch/arm64/Kconfig               |  1 +
>>>>>  arch/arm64/include/asm/pgtable.h |  6 ++++++
>>>>>  include/linux/huge_mm.h          | 12 ++++++++++++
>>>>>  mm/swap_slots.c                  |  2 +-
>>>>>  4 files changed, 20 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>>>> index 1652a9800ebe..e1c540e80eec 100644
>>>>> --- a/arch/arm64/Kconfig
>>>>> +++ b/arch/arm64/Kconfig
>>>>> @@ -101,6 +101,7 @@ config ARM64
>>>>>  	select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
>>>>>  	select ARCH_WANT_LD_ORPHAN_WARN
>>>>>  	select ARCH_WANTS_NO_INSTR
>>>>> +	select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
>>>>>  	select ARCH_HAS_UBSAN_SANITIZE_ALL
>>>>>  	select ARM_AMBA
>>>>>  	select ARM_ARCH_TIMER
>>>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>>>> index 0b6632f18364..78d6f6014bfb 100644
>>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>>> @@ -45,6 +45,12 @@
>>>>>  	__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
>>>>>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>>>>
>>>>> +static inline bool arch_thp_swp_supported(void)
>>>>> +{
>>>>> +	return !system_supports_mte();
>>>>> +}
>>>>> +#define arch_thp_swp_supported arch_thp_swp_supported
>>>>> +
>>>>>  /*
>>>>>   * Outside of a few very special situations (e.g. hibernation), we always
>>>>>   * use broadcast TLB invalidation instructions, therefore a spurious page
>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>>> index de29821231c9..4ddaf6ad73ef 100644
>>>>> --- a/include/linux/huge_mm.h
>>>>> +++ b/include/linux/huge_mm.h
>>>>> @@ -461,4 +461,16 @@ static inline int split_folio_to_list(struct folio *folio,
>>>>>  	return split_huge_page_to_list(&folio->page, list);
>>>>>  }
>>>>>
>>>>> +/*
>>>>> + * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
>>>>> + * limitations in the implementation like arm64 MTE can override this to
>>>>> + * false
>>>>> + */
>>>>> +#ifndef arch_thp_swp_supported
>>>>> +static inline bool arch_thp_swp_supported(void)
>>>>> +{
>>>>> +	return true;
>>>>> +}
>>>>
>>>> How about the following?
>>>>
>>>> static inline bool arch_wants_thp_swap(void)
>>>> {
>>>> 	return IS_ENABLED(ARCH_WANTS_THP_SWAP);
>>>> }
>>>
>>> This looks good. Then I'll need to change arm64 to
>>>
>>> +static inline bool arch_thp_swp_supported(void)
>>> +{
>>> +	return IS_ENABLED(ARCH_WANTS_THP_SWAP) && !system_supports_mte();
>>> +}
>>
>> Why? CONFIG_THP_SWAP depends on ARCH_WANTS_THP_SWAP, so in folio_alloc_swap(),
>> IS_ENABLED(CONFIG_THP_SWAP) being true already implies that ARCH_WANTS_THP_SWAP
>> is enabled. Hence checking ARCH_WANTS_THP_SWAP again makes no sense, either in
>> the generic fallback stub or in the arm64 platform override,
>> because without ARCH_WANTS_THP_SWAP enabled, arch_thp_swp_supported()
>> should never be called in the first place.
>
> For the only caller now, the checking looks redundant. But the originally
> proposed implementation, as follows,
>
> static inline bool arch_thp_swp_supported(void)
> {
> 	return true;
> }
>
> will return true even on architectures that don't support/want THP swap.

But the function will never be called for those platforms.

> That will confuse people too.

I don't see how.

> And the "redundant" checking has no run time overhead, because the
> compiler will do the trick.

I understand that, but I don't think this indirection is necessary.
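
For reference, the mm/swap_slots.c hunk itself is not quoted above, but going
by the diffstat and the discussion, the single caller in folio_alloc_swap()
is presumably gated roughly as follows (a sketch, not the verbatim hunk;
folio_test_large(), get_swap_pages() and folio_nr_pages() are assumed from
the surrounding mm code of this kernel era):

	if (folio_test_large(folio)) {
		/*
		 * IS_ENABLED(CONFIG_THP_SWAP) folds to a compile-time 0 or 1.
		 * With CONFIG_THP_SWAP=n (and hence ARCH_WANTS_THP_SWAP
		 * disabled), the branch below is dead code, so
		 * arch_thp_swp_supported() is never reached; re-checking
		 * ARCH_WANTS_THP_SWAP inside it adds nothing at run time.
		 */
		if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
			get_swap_pages(1, &entry, folio_nr_pages(folio));
		goto out;
	}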