On Tue, Feb 18, 2020 at 10:07:05AM -0800, Matthew Wilcox wrote:
> On Tue, Feb 18, 2020 at 08:10:52PM +0300, Kirill A. Shutemov wrote:
> > Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
>
> Thanks
>
> > > +#if defined(CONFIG_HIGHMEM) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
> > > +void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
> > > +		unsigned start2, unsigned end2);
> > > +#else /* !HIGHMEM || !TRANSPARENT_HUGEPAGE */
> >
> > This is a neat trick. I like it.
> >
> > Although, it means non-inlined version will never get tested :/
>
> I worry about that too, but I don't really want to incur the overhead on
> platforms people actually use.

I'm also worried about latency: kmap_atomic() disables preemption even if
the system has no highmem, and some architectures have THPs far too large
to clear with preemption disabled.

I *think* there is no real need to disable preemption in this situation,
and we can wrap the kmap_atomic()/kunmap_atomic() calls in CONFIG_HIGHMEM.

--
 Kirill A. Shutemov
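
A minimal sketch of the idea, assuming a per-subpage helper (zero_subpage()
below is purely illustrative, not part of the actual patch): only take the
atomic kmap, which disables preemption, when CONFIG_HIGHMEM is enabled; on
!HIGHMEM configurations every page is permanently mapped, so page_address()
is enough and the memset() runs with preemption enabled.

#include <linux/highmem.h>	/* kmap_atomic(), kunmap_atomic(), flush_dcache_page() */
#include <linux/mm.h>		/* page_address() */
#include <linux/string.h>	/* memset() */

/* Zero bytes [start, end) of one subpage of a (possibly compound) page. */
static void zero_subpage(struct page *page, unsigned int start, unsigned int end)
{
#ifdef CONFIG_HIGHMEM
	/* A highmem page may not be mapped: take a short-lived atomic map. */
	void *kaddr = kmap_atomic(page);

	memset(kaddr + start, 0, end - start);
	kunmap_atomic(kaddr);
#else
	/* Lowmem is always mapped; no need to disable preemption here. */
	memset(page_address(page) + start, 0, end - start);
#endif
	flush_dcache_page(page);
}

A caller clearing a THP would then loop over its compound_nr(page) subpages
and call this once per subpage, so that even on highmem configurations
preemption is only disabled for one PAGE_SIZE worth of clearing at a time.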