Re: [PATCH v1 01/11] arm/pgtable: define PFN_PTE_SHIFT on arm and arm64

On 23/01/2024 11:02, David Hildenbrand wrote:
> On 23.01.24 11:48, David Hildenbrand wrote:
>> On 23.01.24 11:34, Ryan Roberts wrote:
>>> On 22/01/2024 19:41, David Hildenbrand wrote:
>>>> We want to make use of pte_next_pfn() outside of set_ptes(). Let's
>>>> simply define PFN_PTE_SHIFT, required by pte_next_pfn().
>>>>
>>>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>>>> ---
>>>>    arch/arm/include/asm/pgtable.h   | 2 ++
>>>>    arch/arm64/include/asm/pgtable.h | 2 ++
>>>>    2 files changed, 4 insertions(+)
>>>>
>>>> diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
>>>> index d657b84b6bf70..be91e376df79e 100644
>>>> --- a/arch/arm/include/asm/pgtable.h
>>>> +++ b/arch/arm/include/asm/pgtable.h
>>>> @@ -209,6 +209,8 @@ static inline void __sync_icache_dcache(pte_t pteval)
>>>>    extern void __sync_icache_dcache(pte_t pteval);
>>>>    #endif
>>>>    +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>> +
>>>>    void set_ptes(struct mm_struct *mm, unsigned long addr,
>>>>                  pte_t *ptep, pte_t pteval, unsigned int nr);
>>>>    #define set_ptes set_ptes
>>>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>>>> index 79ce70fbb751c..d4b3bd96e3304 100644
>>>> --- a/arch/arm64/include/asm/pgtable.h
>>>> +++ b/arch/arm64/include/asm/pgtable.h
>>>> @@ -341,6 +341,8 @@ static inline void __sync_cache_and_tags(pte_t pte, unsigned int nr_pages)
>>>>            mte_sync_tags(pte, nr_pages);
>>>>    }
>>>>    +#define PFN_PTE_SHIFT        PAGE_SHIFT
>>>
>>> I think this is buggy. And so is the arm64 implementation of set_ptes(). It
>>> works fine for 48-bit output addresses, but for 52-bit OAs, the high bits are not
>>> kept contiguously, so if you happen to be setting a mapping for which the
>>> physical memory block straddles bit 48, this won't work.
>>
>> Right, as soon as the PTE bits are not contiguous, this stops working,
>> just like set_ptes() would, which I used as a reference.
>>
>>>
>>> Today, only the 64K base page config can support 52 bits, and for this,
>>> OA[51:48] are stored in PTE[15:12]. But 52-bit support for 4K and 16K base pages
>>> is coming (hopefully in v6.9), and in that case OA[51:50] are stored in PTE[9:8].
>>> Fortunately we already have helpers in arm64 to abstract this.
>>>
>>> So I think arm64 will want to define its own pte_next_pfn():
>>>
>>> #define pte_next_pfn pte_next_pfn
>>> static inline pte_t pte_next_pfn(pte_t pte)
>>> {
>>>     return pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte));
>>> }
>>>
> 
> Digging into the details, on arm64 we have:
> 
> #define pte_pfn(pte)           (__pte_to_phys(pte) >> PAGE_SHIFT)
> 
> and
> 
> #define __pte_to_phys(pte)     (pte_val(pte) & PTE_ADDR_MASK)
> 
> But that implies that upstream the PFN is always contiguous, no?
> 


But __pte_to_phys() and __phys_to_pte_val() depend on a Kconfig option. If the PA
size is 52 bits, the address bits in the PTE are not all contiguous:

#ifdef CONFIG_ARM64_PA_BITS_52
static inline phys_addr_t __pte_to_phys(pte_t pte)
{
	return (pte_val(pte) & PTE_ADDR_LOW) |
		((pte_val(pte) & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
}
static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
{
	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK;
}
#else
#define __pte_to_phys(pte)	(pte_val(pte) & PTE_ADDR_MASK)
#define __phys_to_pte_val(phys)	(phys)
#endif
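
To make that concrete, here is a minimal, standalone user-space sketch of the
64K-page, 52-bit PA layout (PA[47:16] in PTE[47:16], PA[51:48] in PTE[15:12]).
The constants and the pte_to_phys()/phys_to_pte_val() helpers below are
simplified stand-ins for the real arm64 PTE_ADDR_* definitions and
__pte_to_phys()/__phys_to_pte_val(), not the kernel code itself. It shows why
adding 1 << PFN_PTE_SHIFT directly to pte_val() goes wrong once the carry
crosses bit 48, while decoding to a PA, adding a page and re-encoding stays
correct:

#include <stdint.h>
#include <stdio.h>

/*
 * Stand-ins for the 64K-page, 52-bit PA layout:
 * PA[47:16] live in PTE[47:16], PA[51:48] live in PTE[15:12].
 */
#define PAGE_SHIFT		16
#define PTE_ADDR_LOW		(((1ULL << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
#define PTE_ADDR_HIGH		(0xfULL << 12)
#define PTE_ADDR_HIGH_SHIFT	36
#define PTE_ADDR_MASK		(PTE_ADDR_LOW | PTE_ADDR_HIGH)

static uint64_t pte_to_phys(uint64_t pte)
{
	return (pte & PTE_ADDR_LOW) |
	       ((pte & PTE_ADDR_HIGH) << PTE_ADDR_HIGH_SHIFT);
}

static uint64_t phys_to_pte_val(uint64_t phys)
{
	return (phys | (phys >> PTE_ADDR_HIGH_SHIFT)) & PTE_ADDR_MASK;
}

int main(void)
{
	/* Last 64K page below the 48-bit boundary. */
	uint64_t phys = (1ULL << 48) - (1ULL << PAGE_SHIFT);
	uint64_t pte = phys_to_pte_val(phys);

	/* Broken: assumes the PA bits in the PTE are contiguous. */
	uint64_t naive = pte + (1ULL << PAGE_SHIFT);
	/* Correct: decode the PA, add one page, re-encode. */
	uint64_t fixed = phys_to_pte_val(pte_to_phys(pte) + (1ULL << PAGE_SHIFT));

	printf("naive pte %#llx -> phys %#llx (wrong)\n",
	       (unsigned long long)naive,
	       (unsigned long long)pte_to_phys(naive));
	printf("fixed pte %#llx -> phys %#llx\n",
	       (unsigned long long)fixed,
	       (unsigned long long)pte_to_phys(fixed));
	return 0;
}

With the split layout, pte_next_pfn() spelled as
pfn_pte(pte_pfn(pte) + 1, pte_pgprot(pte)) goes through exactly this
decode/re-encode path, so it keeps working, whereas adding
1 << PFN_PTE_SHIFT to the pte value does not.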




