Re: [PATCH v8 03/12] x86/mm: consolidate full flush threshold decision


 



On Tue, Feb 04, 2025 at 08:39:52PM -0500, Rik van Riel wrote:

> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 6cf881a942bb..02e1f5c5bca3 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -1000,8 +1000,13 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
>  	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
>  #endif
>  
> -	info->start		= start;
> -	info->end		= end;
> +	/*
> +	 * Round the start and end addresses to the page size specified
> +	 * by the stride shift. This ensures partial pages at the end of
> +	 * a range get fully invalidated.
> +	 */
> +	info->start		= round_down(start, 1 << stride_shift);
> +	info->end		= round_up(end, 1 << stride_shift);
>  	info->mm		= mm;
>  	info->stride_shift	= stride_shift;
>  	info->freed_tables	= freed_tables;

Rather than doing this, should we not fix whatever dodgy users are
feeding us non-page-aligned addresses for invalidation?
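
As an aside, here is a minimal userspace sketch of what the rounding in
the quoted hunk does. The ROUND_DOWN/ROUND_UP macros and the addresses
below are illustrative stand-ins for power-of-two sizes, not the actual
kernel helpers:

	/*
	 * Illustrative only: mirrors round_down()/round_up() semantics
	 * for power-of-two sizes, with made-up addresses.
	 */
	#include <stdio.h>

	#define ROUND_DOWN(x, sz)	((x) & ~((sz) - 1UL))
	#define ROUND_UP(x, sz)		(((x) + (sz) - 1UL) & ~((sz) - 1UL))

	int main(void)
	{
		unsigned long start = 0x7f0000001234UL;	/* not stride aligned */
		unsigned long end   = 0x7f0000003456UL;	/* partial last page  */
		unsigned int stride_shift = 12;		/* 4K stride */
		unsigned long size = 1UL << stride_shift;

		/* Prints: flush 0x7f0000001000 - 0x7f0000004000 */
		printf("flush %#lx - %#lx\n",
		       ROUND_DOWN(start, size), ROUND_UP(end, size));
		return 0;
	}

The widened range covers every full stride-sized page touched by
[start, end), which is what the comment in the hunk is after.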





