On Tue, Feb 11, 2020 at 08:18:33PM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
>
> We can't kmap() a THP, so add a wrapper around zero_user() for large
> pages.

I would rather address it closer to the root: make zero_user_segments()
handle compound pages.

> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  include/linux/highmem.h | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
>
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index ea5cdbd8c2c3..4465b8784353 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -245,6 +245,28 @@ static inline void zero_user(struct page *page,
>  	zero_user_segments(page, start, start + size, 0, 0);
>  }
>
> +static inline void zero_user_large(struct page *page,
> +		unsigned start, unsigned size)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < thp_order(page); i++) {
> +		if (start > PAGE_SIZE) {

Off-by-one? >= ?

> +			start -= PAGE_SIZE;
> +		} else {
> +			unsigned this_size = size;
> +
> +			if (size > (PAGE_SIZE - start))
> +				this_size = PAGE_SIZE - start;
> +			zero_user(page + i, start, this_size);
> +			start = 0;
> +			size -= this_size;
> +			if (!size)
> +				break;
> +		}
> +	}
> +}
> +
>  #ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE
>
>  static inline void copy_user_highpage(struct page *to, struct page *from,
> --
> 2.25.0
>

--
 Kirill A. Shutemov