On Thu, Jan 10, 2019 at 02:09:37PM -0700, Khalid Aziz wrote:
> From: Juerg Haefliger <juerg.haefliger@xxxxxxxxxxxxx>
>
> Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
> provide a hook for updating a single kernel page table entry (which is
> required by the generic XPFO code).
>
> v6: use flush_tlb_kernel_range() instead of __flush_tlb_one()
>
> CC: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
> Signed-off-by: Juerg Haefliger <juerg.haefliger@xxxxxxxxxxxxx>
> Signed-off-by: Tycho Andersen <tycho@xxxxxxxxxx>
> Signed-off-by: Khalid Aziz <khalid.aziz@xxxxxxxxxx>
> ---
>  arch/arm64/Kconfig     |  1 +
>  arch/arm64/mm/Makefile |  2 ++
>  arch/arm64/mm/xpfo.c   | 58 ++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 61 insertions(+)
>  create mode 100644 arch/arm64/mm/xpfo.c
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index ea2ab0330e3a..f0a9c0007d23 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -171,6 +171,7 @@ config ARM64
>  	select SWIOTLB
>  	select SYSCTL_EXCEPTION_TRACE
>  	select THREAD_INFO_IN_TASK
> +	select ARCH_SUPPORTS_XPFO
>  	help
>  	  ARM 64-bit (AArch64) Linux support.
>
> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
> index 849c1df3d214..cca3808d9776 100644
> --- a/arch/arm64/mm/Makefile
> +++ b/arch/arm64/mm/Makefile
> @@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o	+= n
>
>  obj-$(CONFIG_KASAN)		+= kasan_init.o
>  KASAN_SANITIZE_kasan_init.o	:= n
> +
> +obj-$(CONFIG_XPFO)		+= xpfo.o
> diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
> new file mode 100644
> index 000000000000..678e2be848eb
> --- /dev/null
> +++ b/arch/arm64/mm/xpfo.c
> @@ -0,0 +1,58 @@
> +/*
> + * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
> + * Copyright (C) 2016 Brown University. All rights reserved.
> + *
> + * Authors:
> + *   Juerg Haefliger <juerg.haefliger@xxxxxxx>
> + *   Vasileios P. Kemerlis <vpk@xxxxxxxxxxxx>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License version 2 as published by
> + * the Free Software Foundation.
> + */
> +
> +#include <linux/mm.h>
> +#include <linux/module.h>
> +
> +#include <asm/tlbflush.h>
> +
> +/*
> + * Lookup the page table entry for a virtual address and return a pointer to
> + * the entry. Based on x86 tree.
> + */
> +static pte_t *lookup_address(unsigned long addr)
> +{
> +	pgd_t *pgd;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +
> +	pgd = pgd_offset_k(addr);
> +	if (pgd_none(*pgd))
> +		return NULL;
> +
> +	pud = pud_offset(pgd, addr);
> +	if (pud_none(*pud))
> +		return NULL;
> +
> +	pmd = pmd_offset(pud, addr);
> +	if (pmd_none(*pmd))
> +		return NULL;
> +
> +	return pte_offset_kernel(pmd, addr);
> +}
> +
> +/* Update a single kernel page table entry */
> +inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
> +{
> +	pte_t *pte = lookup_address((unsigned long)kaddr);
> +
> +	set_pte(pte, pfn_pte(page_to_pfn(page), prot));

Though on the other hand... what if the page is mapped at the PMD level
(a block mapping)? Do you really want to write a PTE there? And what if
'pte' is NULL? lookup_address() can return NULL, and set_pte() would then
dereference it.

> +}
> +
> +inline void xpfo_flush_kernel_tlb(struct page *page, int order)
> +{
> +	unsigned long kaddr = (unsigned long)page_address(page);
> +	unsigned long size = PAGE_SIZE;
> +
> +	flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);

Ditto here. You are assuming the range is mapped with PTEs, but it may be
a PMD or larger mapping. Or worse, lookup_address() could have returned
NULL for it.

> +}
> --
> 2.17.1
>
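
For the NULL case at least, something along these lines (completely
untested, and assuming the generic XPFO code only ever hands you pages
that should be mapped at page granularity) would fail loudly instead of
dereferencing a NULL pointer:

```c
/* Sketch only, not a tested patch. WARN_ON_ONCE() is a guess at the
 * right severity; since a missed unmap silently defeats the XPFO
 * protection, a BUG() might arguably be more appropriate here.
 */
inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
{
	pte_t *pte = lookup_address((unsigned long)kaddr);

	/* lookup_address() returns NULL if any table level is empty. */
	if (WARN_ON_ONCE(!pte))
		return;

	set_pte(pte, pfn_pte(page_to_pfn(page), prot));
}
```

The PMD case is harder: to handle it you would need to detect a block
mapping during the walk (the way the x86 set_memory code checks the
level lookup_address_in_pgd() reports) and either split it or refuse to
operate on it, rather than blindly calling pte_offset_kernel() on
something that is not a page table.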