On Tuesday 27 June 2017 03:41 PM, Ram Pai wrote:
Pass the correct protection key value to the hash functions on page
fault.

Signed-off-by: Ram Pai <linuxram@xxxxxxxxxx>
---
 arch/powerpc/include/asm/pkeys.h | 11 +++++++++++
 arch/powerpc/mm/hash_utils_64.c  |  4 ++++
 arch/powerpc/mm/mem.c            |  6 ++++++
 3 files changed, 21 insertions(+)

diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index ef1c601..1370b3f 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -74,6 +74,17 @@ static inline bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
 }
 
 /*
+ * return the protection key of the vma corresponding to the
+ * given effective address @ea.
+ */
+static inline int mm_pkey(struct mm_struct *mm, unsigned long ea)
+{
+	struct vm_area_struct *vma = find_vma(mm, ea);
+	int pkey = vma ? vma_pkey(vma) : 0;
+	return pkey;
+}
+
+/*
That is not going to work in the hash fault path, right? We can't do a find_vma() there without holding the mmap_sem.
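For reference, the locking rule being pointed at: find_vma() walks the mm's VMA tree and is only safe with mmap_sem held for read, which the normal do_page_fault() path takes but the hash fault path does not. A rough sketch of what a safe VMA-based lookup would have to look like (the helper name below is made up purely for illustration, not something proposed in this series):

/* Illustrative sketch only: find_vma() must run under mmap_sem. */
static inline int mm_pkey_locked(struct mm_struct *mm, unsigned long ea)
{
	struct vm_area_struct *vma;
	int pkey = 0;

	down_read(&mm->mmap_sem);	/* required before walking VMAs */
	vma = find_vma(mm, ea);
	if (vma)
		pkey = vma_pkey(vma);
	up_read(&mm->mmap_sem);

	return pkey;
}

And if the hash fault path cannot take mmap_sem, the key would have to come from state that is already available in that context rather than from a VMA walk.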
-aneesh