Re: [PATCH mm-unstable RFC 17/26] powerpc/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE on 32bit book3s

On 06/12/2022 at 15:47, David Hildenbrand wrote:
> We already implemented support for 64bit book3s in commit bff9beaa2e80
> ("powerpc/pgtable: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE for book3s").
> 
> Let's also support __HAVE_ARCH_PTE_SWP_EXCLUSIVE on 32bit by reusing the
> yet-unused LSB 2 / MSB 29. There seems to be no real reason why that bit
> cannot be used, and reusing it avoids having to steal one bit from the
> swap offset.
> 
> While at it, mask the type in __swp_entry().
> 
> Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
> Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
> Cc: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>   arch/powerpc/include/asm/book3s/32/pgtable.h | 38 +++++++++++++++++---
>   1 file changed, 33 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
> index 75823f39e042..8107835b38c1 100644
> --- a/arch/powerpc/include/asm/book3s/32/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
> @@ -42,6 +42,9 @@
>   #define _PMD_PRESENT_MASK (PAGE_MASK)
>   #define _PMD_BAD	(~PAGE_MASK)
>   
> +/* We borrow the _PAGE_USER bit to store the exclusive marker in swap PTEs. */
> +#define _PAGE_SWP_EXCLUSIVE	_PAGE_USER
> +
>   /* And here we include common definitions */
>   
>   #define _PAGE_KERNEL_RO		0
> @@ -363,17 +366,42 @@ static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
>   #define pmd_page(pmd)		pfn_to_page(pmd_pfn(pmd))
>   
>   /*
> - * Encode and decode a swap entry.
> - * Note that the bits we use in a PTE for representing a swap entry
> - * must not include the _PAGE_PRESENT bit or the _PAGE_HASHPTE bit (if used).
> - *   -- paulus
> + * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
> + * are !pte_none() && !pte_present().
> + *
> + * Format of swap PTEs (32bit PTEs):
> + *
> + *                         1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
> + *   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
> + *   E H P <- type --> <----------------- offset ------------------>

That's in reversed order: _PAGE_HASHPTE is bit 30 and should be on the 
right-hand side, etc.

Some examples are in arch/powerpc/include/asm/nohash/32/pte-40x.h or 
arch/powerpc/include/asm/nohash/32/pte-85xx.h; see the sketch below.
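
For illustration, following the convention in those files (PowerPC bit 0 on 
the left is the MSB, so bit 31 on the right is the LSB), the corrected 
layout would look roughly like this -- a sketch only, the exact arrows and 
spacing are mine:

 *                         1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3
 *   0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 *   <------------------ offset -------------------> < type -> E H P

That puts _PAGE_PRESENT (P, bit 31), _PAGE_HASHPTE (H, bit 30) and the 
exclusive marker (E, bit 29) on the right-hand side, with the 5-bit type 
and the offset in the remaining bits.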

> + *
> + *   E is the exclusive marker that is not stored in swap entries.
> + *   _PAGE_PRESENT (P) and __PAGE_HASHPTE (H) must be 0.
> + *
> + * For 64bit PTEs, the offset is extended by 32bit.
>    */
>   #define __swp_type(entry)		((entry).val & 0x1f)
>   #define __swp_offset(entry)		((entry).val >> 5)
> -#define __swp_entry(type, offset)	((swp_entry_t) { (type) | ((offset) << 5) })
> +#define __swp_entry(type, offset)	((swp_entry_t) { ((type) & 0x1f) | ((offset) << 5) })
>   #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) >> 3 })
>   #define __swp_entry_to_pte(x)		((pte_t) { (x).val << 3 })
>   
> +#define __HAVE_ARCH_PTE_SWP_EXCLUSIVE
> +static inline int pte_swp_exclusive(pte_t pte)
> +{
> +	return pte_val(pte) & _PAGE_SWP_EXCLUSIVE;
> +}
> +
> +static inline pte_t pte_swp_mkexclusive(pte_t pte)
> +{
> +	return __pte(pte_val(pte) | _PAGE_SWP_EXCLUSIVE);
> +}
> +
> +static inline pte_t pte_swp_clear_exclusive(pte_t pte)
> +{
> +	return __pte(pte_val(pte) & ~_PAGE_SWP_EXCLUSIVE);
> +}
> +
>   /* Generic accessors to PTE bits */
>   static inline int pte_write(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_RW);}
>   static inline int pte_read(pte_t pte)		{ return 1; }
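
As a quick illustrative aside (not part of the patch -- just a stand-alone 
user-space model of the encoding above, using the same constants), the 
helpers work because __pte_to_swp_entry()/__swp_entry_to_pte() shift by 3, 
so the swap type and offset never overlap _PAGE_PRESENT, _PAGE_HASHPTE or 
the reused _PAGE_USER bit:

/* Sketch only: models the 32bit book3s swap-PTE encoding in user space. */
#include <assert.h>
#include <stdint.h>

#define _PAGE_PRESENT		0x001
#define _PAGE_HASHPTE		0x002
#define _PAGE_USER		0x004	/* reused as _PAGE_SWP_EXCLUSIVE */
#define _PAGE_SWP_EXCLUSIVE	_PAGE_USER

/* Mirrors __swp_entry(): 5-bit type in the low bits, offset above it. */
static uint32_t swp_entry(uint32_t type, uint32_t offset)
{
	return (type & 0x1f) | (offset << 5);
}

/* Mirrors __swp_entry_to_pte()/__pte_to_swp_entry(): shift by 3 so the
 * swap payload stays clear of the P, H and E bits. */
static uint32_t swp_entry_to_pte(uint32_t entry) { return entry << 3; }
static uint32_t pte_to_swp_entry(uint32_t pte)   { return pte >> 3; }

int main(void)
{
	uint32_t pte = swp_entry_to_pte(swp_entry(0x12, 0x1234));

	/* The low three PTE bits stay clear after encoding ... */
	assert(!(pte & (_PAGE_PRESENT | _PAGE_HASHPTE | _PAGE_SWP_EXCLUSIVE)));

	/* ... so setting/clearing the exclusive marker never corrupts the
	 * swap type or offset. */
	pte |= _PAGE_SWP_EXCLUSIVE;
	assert((pte_to_swp_entry(pte) & 0x1f) == 0x12);
	assert((pte_to_swp_entry(pte) >> 5) == 0x1234);
	return 0;
}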



