On Wed, May 11, 2022 at 02:20:45PM +0900, Hyeonggon Yoo wrote:
> > pgprot_t vm_get_page_prot(unsigned long vm_flags)
> > {
> > 	pgprot_t ret = __pgprot(pgprot_val(protection_map[vm_flags &
> > 				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) |
> > 			pgprot_val(arch_vm_get_page_prot(vm_flags)));
> >
> > 	return arch_filter_pgprot(ret);
> > }
> > EXPORT_SYMBOL(vm_get_page_prot);
>
> I guess it's only set for processes' VMA if no caller is abusing
> vm_get_page_prot() for kernel mappings.
>
> But yeah, just quick guessing does not make us convinced.
> Let's Cc people working on mm.
>
> If kernel never uses _PAGE_PROTNONE for kernel mappings, it's just okay
> not to clear _PAGE_GLOBAL at first in __change_page_attr() if it's not
> user address, because no user will confuse _PAGE_GLOBAL as
> _PAGE_PROTNONE if it's kernel address. right?
>

I'm not aware of a case where _PAGE_BIT_PROTNONE is used for a kernel
address expecting PROT_NONE semantics instead of the global bit. NUMA
Balancing is not going to accidentally treat a kernel address as if it's
a NUMA hinting fault. By the time a fault is determining whether a PTE
access is a NUMA hinting fault or an access to a PROT_NONE region, it
has already been established that it is a userspace address backed by a
valid VMA.

--
Mel Gorman
SUSE Labs