Re: [PATCH v8 03/12] x86/mm: consolidate full flush threshold decision

On Wed, 2025-02-05 at 13:20 +0100, Peter Zijlstra wrote:
> On Tue, Feb 04, 2025 at 08:39:52PM -0500, Rik van Riel wrote:
> 
> > diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> > index 6cf881a942bb..02e1f5c5bca3 100644
> > --- a/arch/x86/mm/tlb.c
> > +++ b/arch/x86/mm/tlb.c
> > @@ -1000,8 +1000,13 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
> >  	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
> >  #endif
> >  
> > -	info->start		= start;
> > -	info->end		= end;
> > +	/*
> > +	 * Round the start and end addresses to the page size specified
> > +	 * by the stride shift. This ensures partial pages at the end of
> > +	 * a range get fully invalidated.
> > +	 */
> > +	info->start		= round_down(start, 1 << stride_shift);
> > +	info->end		= round_up(end, 1 << stride_shift);
> >  	info->mm		= mm;
> >  	info->stride_shift	= stride_shift;
> >  	info->freed_tables	= freed_tables;
> 
> Rather than doing this; should we not fix whatever dodgy users are
> feeding us non-page-aligned addresses for invalidation?
> 

The best way to do that would probably be to add a
WARN_ON_ONCE here that fires if the rounding changed
the value of either start or end, rather than merging
code that can crash the kernel even when the bug is
elsewhere.

I would be happy to add a WARN_ON_ONCE either in the
next version, or in a follow-up patch, whichever is
more convenient for you.
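
Something along these lines, as a completely untested
sketch; whether end can still be TLB_FLUSH_ALL at this
point, and the exact placement, are assumptions on my
part that I have not verified:

	/*
	 * Untested sketch: warn (once) about callers passing in
	 * addresses that are not aligned to the stride, then round
	 * so partial pages at either end still get fully invalidated.
	 */
	unsigned long stride = 1UL << stride_shift;

	WARN_ON_ONCE(!IS_ALIGNED(start, stride) ||
		     (end != TLB_FLUSH_ALL && !IS_ALIGNED(end, stride)));

	info->start		= round_down(start, stride);
	info->end		= (end == TLB_FLUSH_ALL) ?
				  end : round_up(end, stride);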

-- 
All Rights Reversed.




