On Wed, Oct 06, 2021 at 03:08:46PM +0100, Matthew Wilcox wrote:
> On Wed, Oct 06, 2021 at 01:42:26PM +0100, Matthew Wilcox (Oracle) wrote:
> > Move the compound page overrun detection out of
> > CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> > Acked-by: Kees Cook <keescook@xxxxxxxxxxxx>
> > ---
> >  mm/usercopy.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> > 
> > diff --git a/mm/usercopy.c b/mm/usercopy.c
> > index 63476e1506e0..b825c4344917 100644
> > --- a/mm/usercopy.c
> > +++ b/mm/usercopy.c
> > @@ -194,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
> >  		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> >  		return;
> >  
> > -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> > -	endpage = virt_to_head_page(end);
> > -	if (likely(endpage == page))
> > -		return;
> > -
> >  	/*
> >  	 * Reject if range is entirely either Reserved (i.e. special or
> >  	 * device memory), or CMA. Otherwise, reject since the object spans
> 
> Needs an extra hunk to avoid a warning with that config:

Ah yeah, good catch.

> @@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
>  {
>  #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
>  	const void *end = ptr + n - 1;
> -	struct page *endpage;
>  	bool is_reserved, is_cma;
>  
>  	/*
> 
> I'll wait a few days and send a v3.

When you send v3, can you CC linux-hardening@xxxxxxxxxxxxxxx too?

Thanks for poking at this!

-Kees

-- 
Kees Cook
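[For readers outside the thread: the early-exit the hunks touch keys off whether the first and last byte of the copy share the same page mask. A minimal user-space model of that test is sketched below; the `MODEL_*` names are hypothetical stand-ins for the kernel's PAGE_* constants, and the `struct page` / `virt_to_head_page()` compound-page logic being removed is deliberately elided.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's PAGE_SHIFT/SIZE/MASK (4 KiB pages assumed). */
#define MODEL_PAGE_SHIFT 12
#define MODEL_PAGE_SIZE  (1UL << MODEL_PAGE_SHIFT)
#define MODEL_PAGE_MASK  (~(MODEL_PAGE_SIZE - 1))

/*
 * Model of check_page_span()'s early exit: if the object's first and
 * last byte fall in the same page, the copy cannot overrun a page
 * boundary and is allowed without further page-type checks.
 */
static bool spans_page_boundary(uintptr_t ptr, unsigned long n)
{
	uintptr_t end = ptr + n - 1;	/* inclusive last byte, as in the quoted code */

	return (ptr & MODEL_PAGE_MASK) != (end & MODEL_PAGE_MASK);
}
```

(In the kernel itself the interesting case is when this *does* span: the removed hunk used to allow it if both ends resolved to the same compound page head, a check v3 hoists out of CONFIG_HARDENED_USERCOPY_PAGESPAN.)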