On Fri, May 05, 2017 at 01:03:16PM -0400, Pavel Tatashin wrote:
> When the deferred struct page initialization feature is used, most
> "struct page"s are initialized after the other CPUs have started, so
> we benefit from doing this job in parallel. However, we still zero all
> of the memory allocated for "struct pages" on the boot CPU. This patch
> solves that problem on s390 by deferring the zeroing of "struct pages"
> until they are initialized.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
> Reviewed-by: Shannon Nelson <shannon.nelson@xxxxxxxxxx>
> ---
>  arch/s390/mm/vmem.c | 2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
> index 9c75214..ffe9ba1 100644
> --- a/arch/s390/mm/vmem.c
> +++ b/arch/s390/mm/vmem.c
> @@ -252,7 +252,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
>  			void *new_page;
>
>  			new_page = vmemmap_alloc_block(PMD_SIZE, node,
> -						       true);
> +						       VMEMMAP_ZERO);
>  			if (!new_page)
>  				goto out;
>  			pmd_val(*pm_dir) = __pa(new_page) | sgt_prot;

If you add the hunk below then this is

Acked-by: Heiko Carstens <heiko.carstens@xxxxxxxxxx>

diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index ffe9ba1aec8b..bf88a8b9c24d 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -272,7 +272,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
 		if (pte_none(*pt_dir)) {
 			void *new_page;

-			new_page = vmemmap_alloc_block(PAGE_SIZE, node, true);
+			new_page = vmemmap_alloc_block(PAGE_SIZE, node, VMEMMAP_ZERO);
 			if (!new_page)
 				goto out;
 			pte_val(*pt_dir) = __pa(new_page) | pgt_prot;
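
The point of the conversion above is that the third argument of
vmemmap_alloc_block() becomes a named flag instead of a bare bool, so each
call site says explicitly whether the block must be zeroed at allocation
time or whether zeroing can be skipped because deferred struct page
initialization will overwrite the memory anyway. The user-space sketch
below only illustrates that pattern; the enum values and the allocator
body are assumptions for illustration, not the kernel implementation from
this series.

/*
 * Illustrative user-space sketch (not the kernel code): an allocator that
 * takes an explicit zeroing flag, so callers that will fully initialize
 * the memory later can skip the up-front memset. The flag names mirror
 * the VMEMMAP_ZERO used in the hunks above, but the enum and the function
 * body here are assumptions made for illustration only.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum vmemmap_zero_flag {
	VMEMMAP_NO_ZERO = 0,	/* caller will initialize the block itself */
	VMEMMAP_ZERO    = 1,	/* zero the block at allocation time */
};

static void *vmemmap_alloc_block_sketch(size_t size, enum vmemmap_zero_flag zero)
{
	void *block = malloc(size);

	if (!block)
		return NULL;
	if (zero == VMEMMAP_ZERO)
		memset(block, 0, size);	/* only pay for zeroing when asked */
	return block;
}

int main(void)
{
	/* Boot-time path: zeroing is deferred, so skip the memset here. */
	void *deferred = vmemmap_alloc_block_sketch(4096, VMEMMAP_NO_ZERO);
	/* Path that needs zeroed memory right away, as in the s390 hunks. */
	void *zeroed = vmemmap_alloc_block_sketch(4096, VMEMMAP_ZERO);

	printf("deferred=%p zeroed=%p\n", deferred, zeroed);
	free(deferred);
	free(zeroed);
	return 0;
}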