Currently we can end up touching PROT_MTE user pages twice on fault
and once on unmap. On fault, with KASAN disabled we first clear data
and then set tags to 0, and with KASAN enabled we simultaneously
clear data and set tags to the KASAN random tag, and then set tags
again to 0. On unmap, we poison the page by setting tags, but this is
less likely to find a bug than poisoning kernel pages.

This patch series fixes these inefficiencies by only touching the
pages once on fault, using the DC GZVA instruction to clear both data
and tags, and by avoiding poisoning user pages on free.

Peter Collingbourne (4):
  mm: arch: remove indirection level in
    alloc_zeroed_user_highpage_movable()
  kasan: use separate (un)poison implementation for integrated init
  arm64: mte: handle tags zeroing at page allocation time
  kasan: disable freed user page poisoning with HW tags

 arch/alpha/include/asm/page.h   |  6 +--
 arch/arm64/include/asm/mte.h    |  4 ++
 arch/arm64/include/asm/page.h   | 10 +++--
 arch/arm64/lib/mte.S            | 20 ++++++++++
 arch/arm64/mm/fault.c           | 26 +++++++++++++
 arch/arm64/mm/proc.S            | 10 +++--
 arch/ia64/include/asm/page.h    |  6 +--
 arch/m68k/include/asm/page_no.h |  6 +--
 arch/s390/include/asm/page.h    |  6 +--
 arch/x86/include/asm/page.h     |  6 +--
 include/linux/gfp.h             | 18 +++++++--
 include/linux/highmem.h         | 43 ++++++++-------------
 include/linux/kasan.h           | 64 +++++++++++++++++++-------------
 include/linux/page-flags.h      |  9 +++++
 include/trace/events/mmflags.h  |  9 ++++-
 mm/kasan/common.c               |  4 +-
 mm/kasan/hw_tags.c              | 32 ++++++++++++++++
 mm/mempool.c                    |  6 ++-
 mm/page_alloc.c                 | 66 +++++++++++++++++++--------------
 19 files changed, 242 insertions(+), 109 deletions(-)

--
2.32.0.rc0.204.g9fa02ecfa5-goog
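
For reference, here is a minimal sketch of the DC GZVA technique the
series relies on: zeroing a page's data and MTE allocation tags in a
single pass. This is not the code added in arch/arm64/lib/mte.S; the
function name zero_page_data_and_tags and the standalone C form are
illustrative only, and it assumes an AArch64 CPU with FEAT_MTE and a
page-aligned, untagged pointer to memory mapped with tagging enabled.

#include <stddef.h>
#include <stdint.h>

/*
 * Zero both the data and the MTE allocation tags of one page using
 * DC GZVA (hypothetical helper, not the series' actual function).
 */
static void zero_page_data_and_tags(void *page, size_t page_size)
{
	uint64_t dczid;
	size_t blksz;
	char *p = page;

	/* DCZID_EL0[3:0] is log2 of the DC (G)ZVA block size in words. */
	asm volatile("mrs %0, dczid_el0" : "=r" (dczid));
	blksz = (size_t)4 << (dczid & 0xf);

	do {
		/* Zero one block of data and set its allocation tags to 0. */
		asm volatile("dc gzva, %0" : : "r" (p) : "memory");
		p += blksz;
	} while (p < (char *)page + page_size);
}

Compared with clearing the data (DC ZVA or a store loop) and then
setting the tags separately, this touches each cache line only once,
which is the saving the series aims for on the fault path.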