Re: [PATCH] Revert "arm64: mm: set the contiguous bit for kernel mappings where appropriate"

On 23 February 2017 at 16:22, Mark Rutland <mark.rutland@xxxxxxx> wrote:
> This reverts commit 0bfc445dec9dd8130d22c9f4476eed7598524129.
>
> When we change the permissions of regions mapped using contiguous
> entries, the architecture requires us to follow a Break-Before-Make
> strategy, breaking *all* associated entries before we can change any of
> the following properties of the entries:
>
>  - presence of the contiguous bit
>  - output address
>  - attributes
>  - permissions
>
> Failure to do so can result in a number of problems (e.g. TLB conflict
> aborts and/or erroneous results from TLB lookups).
>
> See ARM DDI 0487A.k_iss10775, "Misprogramming of the Contiguous bit",
> page D4-1762.
>
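For illustration only (not part of this patch): a Break-Before-Make update
of a live contiguous group would, roughly, have to clear every entry in the
group and invalidate the TLB before any new attributes are written. An
untested sketch along those lines, using the usual pgtable/TLB helpers; the
function itself is made up for the example:

        /*
         * Sketch only: change the attributes of one live contiguous PTE
         * group following Break-Before-Make. 'ptep' points at the first of
         * the CONT_PTES entries covering a CONT_PTE_SIZE-aligned range
         * starting at 'addr' and backed by 'pfn'.
         */
        static void bbm_update_cont_ptes(pte_t *ptep, unsigned long addr,
                                         unsigned long pfn, pgprot_t new_prot)
        {
                int i;

                /* Break: invalidate every entry in the group... */
                for (i = 0; i < CONT_PTES; i++)
                        pte_clear(&init_mm, addr + i * PAGE_SIZE, ptep + i);

                /* ...and flush any cached translations for the range. */
                flush_tlb_kernel_range(addr, addr + CONT_PTE_SIZE);

                /* Make: write the new entries with the new attributes. */
                for (i = 0; i < CONT_PTES; i++)
                        set_pte(ptep + i, pfn_pte(pfn + i,
                                __pgprot(pgprot_val(new_prot) | PTE_CONT)));
        }
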
> We do not take this into account when altering the permissions of kernel
> segments in mark_rodata_ro(), where we change the permissions of live
> contiguous entries one-by-one, leaving them transiently inconsistent.
> This has been observed to result in failures on some fast model
> configurations.
>
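Concretely: with a 4K granule a contiguous group is CONT_PTES == 16 adjacent
entries, all of which must carry identical attributes. The one-by-one rewrite
described above boils down to something like the rough sketch below (not the
literal kernel code); part-way through the loop some entries of the live
group hold the old attributes and some the new, all still marked contiguous,
which is exactly the misprogramming case cited above.

        /*
         * Unsafe: rewrite live contiguous entries in place, one at a time,
         * with no prior break of the whole group and no TLB invalidation.
         */
        for (i = 0; i < CONT_PTES; i++)
                set_pte(ptep + i, pfn_pte(pfn + i,
                        __pgprot(pgprot_val(new_prot) | PTE_CONT)));
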
> Unfortunately, we cannot follow Break-Before-Make here as we'd have to
> unmap kernel text and data used to perform the sequence.
>
> For the time being, revert commit 0bfc445dec9dd813 so as to avoid issues
> resulting from this misuse of the contiguous bit.
>
> Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
> Reported-by: Jean-Philippe Brucker <jean-philippe.brucker@xxxxxxx>
> Cc: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Cc: Will Deacon <Will.Deacon@xxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx # v4.10

Acked-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>

> ---
>  arch/arm64/mm/mmu.c | 34 ++++------------------------------
>  1 file changed, 4 insertions(+), 30 deletions(-)
>
> I'm aware that our hugetlbpage code has a similar issue. I'm looking into that
> now, and will address that with separate patches. It should be possible to use
> BBM there as it's a userspace mapping.
>

Are you looking into this issue as well? Or should I?


> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 17243e4..c391b1f 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -108,10 +108,8 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
>  static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
>                                   unsigned long end, unsigned long pfn,
>                                   pgprot_t prot,
> -                                 phys_addr_t (*pgtable_alloc)(void),
> -                                 bool page_mappings_only)
> +                                 phys_addr_t (*pgtable_alloc)(void))
>  {
> -       pgprot_t __prot = prot;
>         pte_t *pte;
>
>         BUG_ON(pmd_sect(*pmd));
> @@ -129,18 +127,7 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
>         do {
>                 pte_t old_pte = *pte;
>
> -               /*
> -                * Set the contiguous bit for the subsequent group of PTEs if
> -                * its size and alignment are appropriate.
> -                */
> -               if (((addr | PFN_PHYS(pfn)) & ~CONT_PTE_MASK) == 0) {
> -                       if (end - addr >= CONT_PTE_SIZE && !page_mappings_only)
> -                               __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
> -                       else
> -                               __prot = prot;
> -               }
> -
> -               set_pte(pte, pfn_pte(pfn, __prot));
> +               set_pte(pte, pfn_pte(pfn, prot));
>                 pfn++;
>
>                 /*
> @@ -159,7 +146,6 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
>                                   phys_addr_t (*pgtable_alloc)(void),
>                                   bool page_mappings_only)
>  {
> -       pgprot_t __prot = prot;
>         pmd_t *pmd;
>         unsigned long next;
>
> @@ -186,18 +172,7 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
>                 /* try section mapping first */
>                 if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
>                       !page_mappings_only) {
> -                       /*
> -                        * Set the contiguous bit for the subsequent group of
> -                        * PMDs if its size and alignment are appropriate.
> -                        */
> -                       if (((addr | phys) & ~CONT_PMD_MASK) == 0) {
> -                               if (end - addr >= CONT_PMD_SIZE)
> -                                       __prot = __pgprot(pgprot_val(prot) |
> -                                                         PTE_CONT);
> -                               else
> -                                       __prot = prot;
> -                       }
> -                       pmd_set_huge(pmd, phys, __prot);
> +                       pmd_set_huge(pmd, phys, prot);
>
>                         /*
>                          * After the PMD entry has been populated once, we
> @@ -207,8 +182,7 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
>                                                       pmd_val(*pmd)));
>                 } else {
>                         alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
> -                                      prot, pgtable_alloc,
> -                                      page_mappings_only);
> +                                      prot, pgtable_alloc);
>
>                         BUG_ON(pmd_val(old_pmd) != 0 &&
>                                pmd_val(old_pmd) != pmd_val(*pmd));
> --
> 1.9.1
>


