Re: [PATCH] riscv: mm: Pre-allocate PGD entries for vmalloc/modules area

Palmer Dabbelt <palmer@xxxxxxxxxxx> writes:

> On Mon, 29 May 2023 11:00:23 PDT (-0700), bjorn@xxxxxxxxxx wrote:
>> From: Björn Töpel <bjorn@xxxxxxxxxxxx>
>>
>> The RISC-V port requires that kernel PGD entries be synchronized
>> between MMs. This is done via the vmalloc_fault() function, which
>> simply copies the PGD entries from init_mm to the faulting one.
>>
>> Historically, faulting in PGD entries has been a source of both
>> bugs [1] and poor performance.
>>
>> One way to get rid of vmalloc faults is to pre-allocate the PGD
>> entries. Pre-allocating the entries potentially wastes 64 * 4K of
>> memory (65 on SV39). The pre-allocation function is pulled from
>> Jörg Rödel's x86 work, with the addition of 3-level page tables
>> (PMD allocations).
>>
>> The pmd_alloc() function needs the ptlock cache to be initialized
>> (when split page table locks are enabled), so the pre-allocation is
>> done in a RISC-V specific pgtable_cache_init() implementation.
>>
>> Pre-allocate the kernel PGD entries for the vmalloc/modules area,
>> but only on 64-bit platforms.
>>
>> Link: https://lore.kernel.org/lkml/20200508144043.13893-1-joro@xxxxxxxxxx/ # [1]
>> Signed-off-by: Björn Töpel <bjorn@xxxxxxxxxxxx>
>> ---
>>  arch/riscv/mm/fault.c | 20 +++------------
>>  arch/riscv/mm/init.c  | 58 +++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 62 insertions(+), 16 deletions(-)
>>
>> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
>> index 8685f85a7474..6b0b5e517e12 100644
>> --- a/arch/riscv/mm/fault.c
>> +++ b/arch/riscv/mm/fault.c
>> @@ -230,32 +230,20 @@ void handle_page_fault(struct pt_regs *regs)
>>  		return;
>>
>>  	/*
>> -	 * Fault-in kernel-space virtual memory on-demand.
>> -	 * The 'reference' page table is init_mm.pgd.
>> +	 * Fault-in kernel-space virtual memory on-demand, for 32-bit
>> +	 * architectures.  The 'reference' page table is init_mm.pgd.
>
> That wording seems a little odd to me: I think English allows for these 
> "add something after the comma to change the meaning of a sentence" 
> things, but they're kind of complicated.  Maybe it's easier to just flip 
> the order?
>
> That said, it's very early so maybe it's fine...
>
>>  	 *
>>  	 * NOTE! We MUST NOT take any locks for this case. We may
>>  	 * be in an interrupt or a critical region, and should
>>  	 * only copy the information from the master page table,
>>  	 * nothing more.
>>  	 */
>> -	if (unlikely((addr >= VMALLOC_START) && (addr < VMALLOC_END))) {
>> +	if (!IS_ENABLED(CONFIG_64BIT) &&
>> +	    unlikely(addr >= VMALLOC_START && addr < VMALLOC_END)) {
>>  		vmalloc_fault(regs, code, addr);
>>  		return;
>>  	}
>>
>> -#ifdef CONFIG_64BIT
>> -	/*
>> -	 * Modules in 64bit kernels lie in their own virtual region which is not
>> -	 * in the vmalloc region, but dealing with page faults in this region
>> -	 * or the vmalloc region amounts to doing the same thing: checking that
>> -	 * the mapping exists in init_mm.pgd and updating user page table, so
>> -	 * just use vmalloc_fault.
>> -	 */
>> -	if (unlikely(addr >= MODULES_VADDR && addr < MODULES_END)) {
>> -		vmalloc_fault(regs, code, addr);
>> -		return;
>> -	}
>> -#endif
>>  	/* Enable interrupts if they were enabled in the parent context. */
>>  	if (!regs_irqs_disabled(regs))
>>  		local_irq_enable();
>> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
>> index 747e5b1ef02d..38bd4dd95276 100644
>> --- a/arch/riscv/mm/init.c
>> +++ b/arch/riscv/mm/init.c
>> @@ -1363,3 +1363,61 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>  	return vmemmap_populate_basepages(start, end, node, NULL);
>>  }
>>  #endif
>> +
>> +#ifdef CONFIG_64BIT
>> +/*
>> + * Pre-allocates page-table pages for a specific area in the kernel
>> + * page-table. Only the level which needs to be synchronized between
>> + * all page-tables is allocated because the synchronization can be
>> + * expensive.
>> + */
>> +static void __init preallocate_pgd_pages_range(unsigned long start, unsigned long end,
>> +					       const char *area)
>> +{
>> +	unsigned long addr;
>> +	const char *lvl;
>> +
>> +	for (addr = start; addr < end && addr >= start; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
>> +		pgd_t *pgd = pgd_offset_k(addr);
>> +		p4d_t *p4d;
>> +		pud_t *pud;
>> +		pmd_t *pmd;
>> +
>> +		lvl = "p4d";
>> +		p4d = p4d_alloc(&init_mm, pgd, addr);
>> +		if (!p4d)
>> +			goto failed;
>> +
>> +		if (pgtable_l5_enabled)
>> +			continue;
>> +
>> +		lvl = "pud";
>> +		pud = pud_alloc(&init_mm, p4d, addr);
>> +		if (!pud)
>> +			goto failed;
>> +
>> +		if (pgtable_l4_enabled)
>> +			continue;
>> +
>> +		lvl = "pmd";
>> +		pmd = pmd_alloc(&init_mm, pud, addr);
>> +		if (!pmd)
>> +			goto failed;
>> +	}
>> +	return;
>> +
>> +failed:
>> +	/*
>> +	 * The pages have to be there now or they will be missing in
>> +	 * process page-tables later.
>> +	 */
>> +	panic("Failed to pre-allocate %s pages for %s area\n", lvl, area);
>> +}
>> +
>> +void __init pgtable_cache_init(void)
>> +{
>> +	preallocate_pgd_pages_range(VMALLOC_START, VMALLOC_END, "vmalloc");
>> +	if (IS_ENABLED(CONFIG_MODULES))
>> +		preallocate_pgd_pages_range(MODULES_VADDR, MODULES_END, "bpf/modules");
>> +}
>> +#endif
>>
>> base-commit: ac9a78681b921877518763ba0e89202254349d1b
>
> Reviewed-by: Palmer Dabbelt <palmer@xxxxxxxxxxxx>
>
> aside from the build issue, which seems pretty straightforward.  I'm
> going to drop this from patchwork.

Hmm, you applied the V2 a couple of days ago [1], which fixes the build
issue. Did you drop the V2 from the queue?

[1]
https://lore.kernel.org/linux-riscv/168727442024.569.16572247474971535604.git-patchwork-notify@xxxxxxxxxx/


Björn