On Wed, 2 Jun 2021 16:52:26 -0700 Peter Collingbourne <pcc@xxxxxxxxxx> wrote:

> Currently we can end up touching PROT_MTE user pages twice on fault
> and once on unmap. On fault, with KASAN disabled we first clear data
> and then set tags to 0, and with KASAN enabled we simultaneously
> clear data and set tags to the KASAN random tag, and then set tags
> again to 0. On unmap, we poison the page by setting tags, but this
> is less likely to find a bug than poisoning kernel pages.
>
> This patch series fixes these inefficiencies by only touching the pages
> once on fault using the DC GZVA instruction to clear both data and
> tags, and avoiding poisoning user pages on free.
>
> ...
>
>  arch/alpha/include/asm/page.h   |  6 +--
>  arch/arm64/include/asm/mte.h    |  4 ++
>  arch/arm64/include/asm/page.h   | 10 +++--
>  arch/arm64/lib/mte.S            | 20 ++++++++++
>  arch/arm64/mm/fault.c           | 26 +++++++++++++
>  arch/arm64/mm/proc.S            | 10 +++--
>  arch/ia64/include/asm/page.h    |  6 +--
>  arch/m68k/include/asm/page_no.h |  6 +--
>  arch/s390/include/asm/page.h    |  6 +--
>  arch/x86/include/asm/page.h     |  6 +--
>  include/linux/gfp.h             | 18 +++++++--
>  include/linux/highmem.h         | 43 ++++++++-------------
>  include/linux/kasan.h           | 64 +++++++++++++++++++-------------
>  include/linux/page-flags.h      |  9 +++++
>  include/trace/events/mmflags.h  |  9 ++++-
>  mm/kasan/common.c               |  4 +-
>  mm/kasan/hw_tags.c              | 32 ++++++++++++++++
>  mm/mempool.c                    |  6 ++-
>  mm/page_alloc.c                 | 66 +++++++++++++++++++--------------
>  19 files changed, 242 insertions(+), 109 deletions(-)

This is more MMish than ARMish, but I expect it will get more exposure
in an ARM tree than in linux-next alone.

I'll grab them for now, but in the hope that they will appear in -next
via an ARM tree so I get to drop them again.
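
For anyone reading along, the one-pass clear described in the cover letter
presumably boils down to something like the sketch below (hypothetical helper
name, assuming kernel context for PAGE_SIZE and that the DCZID_EL0 block size
governs DC GZVA the same way it governs DC ZVA; the series itself does this
in arch/arm64/lib/mte.S):

	/*
	 * Illustrative sketch only: zero a page's data and its MTE
	 * allocation tags in a single pass using DC GZVA.  The BS field
	 * of DCZID_EL0 gives log2(words) zeroed per instruction.
	 */
	static inline void clear_page_data_and_tags(unsigned long addr)
	{
		unsigned long dczid, step, end = addr + PAGE_SIZE;

		asm volatile("mrs %0, dczid_el0" : "=r" (dczid));
		step = 4UL << (dczid & 0xf);	/* block size in bytes */

		do {
			/* clears data and sets allocation tags to 0 */
			asm volatile("dc gzva, %0" : : "r" (addr) : "memory");
			addr += step;
		} while (addr < end);
	}

That replaces the clear-then-retag sequence with a single walk over the page,
which is where the fault-path saving comes from.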