From: Ira Weiny <ira.weiny@xxxxxxxxx>

Enable PKS protection for devmap pages.

The devmap protection facility wants to co-opt kmap_{local_page,atomic}()
to mediate access to PKS protected pages.  kmap() allows for global
mappings to be established, while the PKS facility depends on
thread-local access.  For this reason kmap() is not supported, but this
leaves a policy decision for what to do when kmap() is attempted on a
protected devmap page.

Neither of the two current DAX-capable filesystems (ext4 and xfs)
performs such global mappings, and the bulk of device drivers that would
handle devmap pages do not use kmap().  Any future filesystem that gains
DAX support, or device driver that wants to support protected devmap
pages, will need to move to kmap_local_page().

In the meantime, handle these kmap() users by calling
pgmap_protection_flag_invalid() to flag any invalid use of a potentially
protected page.  This allows better debugging of invalid uses, rather
than catching faults later on when the address is used.

Direct-map exposure is already mitigated by default on HIGHMEM systems
because, by definition, HIGHMEM systems do not keep large amounts of
memory in the direct map.  Therefore, to reduce complexity, HIGHMEM
systems are not supported.
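The migration this asks of kmap() callers can be sketched as follows.
This is an illustrative userspace mock-up, not the kernel API: the stubs
stand in for kmap_local_page()/kunmap_local() (which, per the diff below,
additionally call pgmap_mk_readwrite()/pgmap_mk_noaccess()), and
struct page here is just a byte buffer.  The point is the access pattern:
a short, thread-local mapping scope instead of a long-lived global one.

```c
#include <stddef.h>
#include <string.h>

/* Mock "page": in the kernel this would be a real struct page. */
struct page { char data[4096]; };

/* Stub for the kernel's kmap_local_page(); the real one would also
 * call pgmap_mk_readwrite(page) on a PKS-protected devmap page. */
static void *kmap_local_page(struct page *page)
{
	return page->data;
}

/* Stub for the kernel's kunmap_local(); the real one would also
 * call pgmap_mk_noaccess(kmap_to_page(addr)). */
static void kunmap_local(void *addr)
{
	(void)addr;
}

/* Migration pattern: map, access, unmap, all within one thread and
 * one short scope -- never stash the address for global use. */
static void copy_into_page(struct page *page, const char *src, size_t len)
{
	void *addr = kmap_local_page(page);

	memcpy(addr, src, len);
	kunmap_local(addr);
}
```

A kmap() user converted this way keeps the mapping strictly local to the
calling thread, which is what allows the PKS machinery to open and close
access around each use.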
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
Signed-off-by: Ira Weiny <ira.weiny@xxxxxxxxx>
---
 include/linux/highmem-internal.h | 5 +++++
 mm/Kconfig                       | 1 +
 2 files changed, 6 insertions(+)

diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 7902c7d8b55f..f88bc14a643b 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -142,6 +142,7 @@ static inline struct page *kmap_to_page(void *addr)
 static inline void *kmap(struct page *page)
 {
 	might_sleep();
+	pgmap_protection_flag_invalid(page);
 	return page_address(page);
 }
 
@@ -157,6 +158,7 @@ static inline void kunmap(struct page *page)
 
 static inline void *kmap_local_page(struct page *page)
 {
+	pgmap_mk_readwrite(page);
 	return page_address(page);
 }
 
@@ -175,12 +177,14 @@ static inline void __kunmap_local(void *addr)
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(addr);
 #endif
+	pgmap_mk_noaccess(kmap_to_page(addr));
 }
 
 static inline void *kmap_atomic(struct page *page)
 {
 	preempt_disable();
 	pagefault_disable();
+	pgmap_mk_readwrite(page);
 	return page_address(page);
 }
 
@@ -199,6 +203,7 @@ static inline void __kunmap_atomic(void *addr)
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(addr);
 #endif
+	pgmap_mk_noaccess(kmap_to_page(addr));
 	pagefault_enable();
 	preempt_enable();
 }
diff --git a/mm/Kconfig b/mm/Kconfig
index 201d41269a36..4184d0a7531d 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -794,6 +794,7 @@ config DEVMAP_ACCESS_PROTECTION
 	bool "Access protection for memremap_pages()"
 	depends on NVDIMM_PFN
 	depends on ARCH_HAS_SUPERVISOR_PKEYS
+	depends on !HIGHMEM
 	select GENERAL_PKS_USER
 	default y

-- 
2.28.0.rc0.12.gb6a658bd00c9