To take advantage of optimizations when adding pages to the page cache
via shmem_insert_pages(), improve the likelihood that the pages array
passed to shmem_insert_pages() starts on an aligned index.  Do this
when preserving pages by starting a new pkram_link page when the
current page is aligned and the next aligned page will not fit on the
pkram_link page.

Signed-off-by: Anthony Yznaga <anthony.yznaga@xxxxxxxxxx>
---
 mm/pkram.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/pkram.c b/mm/pkram.c
index b63b2a3958e7..3f43809c8a85 100644
--- a/mm/pkram.c
+++ b/mm/pkram.c
@@ -911,9 +911,20 @@ static int __pkram_save_page(struct pkram_access *pa, struct page *page,
 {
 	struct pkram_data_stream *pds = &pa->pds;
 	struct pkram_link *link = pds->link;
+	int align, align_cnt;
+
+	if (PageTransHuge(page)) {
+		align = 1 << (HPAGE_PMD_ORDER + XA_CHUNK_SHIFT - (HPAGE_PMD_ORDER % XA_CHUNK_SHIFT));
+		align_cnt = align >> HPAGE_PMD_ORDER;
+	} else {
+		align = XA_CHUNK_SIZE;
+		align_cnt = XA_CHUNK_SIZE;
+	}
 
 	if (!link || pds->entry_idx >= PKRAM_LINK_ENTRIES_MAX ||
-	    index != pa->pages.next_index) {
+	    index != pa->pages.next_index ||
+	    (IS_ALIGNED(index, align) &&
+	    (pds->entry_idx + align_cnt > PKRAM_LINK_ENTRIES_MAX))) {
 		link = pkram_new_link(pds, pa->ps->gfp_mask);
 		if (!link)
 			return -ENOMEM;
--
1.8.3.1

_______________________________________________
kexec mailing list
kexec@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/kexec
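
A minimal userspace sketch of the alignment math in the hunk above, for
readers who want to see concrete numbers.  It assumes HPAGE_PMD_ORDER == 9
and XA_CHUNK_SHIFT == 6 (typical x86-64 values); the PKRAM_LINK_ENTRIES_MAX
value and the would_start_new_link() helper are hypothetical and exist only
for this illustration, they are not part of the patch:

	/*
	 * Illustrative sketch only, not kernel code: recompute the
	 * align/align_cnt values from the patch with assumed constants so
	 * the "start a new pkram_link" condition can be tried in isolation.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define HPAGE_PMD_ORDER		9	/* assumed: 2MB THP on x86-64 */
	#define XA_CHUNK_SHIFT		6	/* assumed: 64-slot xarray nodes */
	#define XA_CHUNK_SIZE		(1UL << XA_CHUNK_SHIFT)
	#define PKRAM_LINK_ENTRIES_MAX	512	/* hypothetical value for the demo */

	static bool would_start_new_link(unsigned long index, int entry_idx, bool thp)
	{
		unsigned long align;
		int align_cnt;

		if (thp) {
			/* Same expression as the patch: 1 << (9 + 6 - 3) = 4096 */
			align = 1UL << (HPAGE_PMD_ORDER + XA_CHUNK_SHIFT -
					(HPAGE_PMD_ORDER % XA_CHUNK_SHIFT));
			align_cnt = align >> HPAGE_PMD_ORDER;	/* 8 THP entries */
		} else {
			align = XA_CHUNK_SIZE;
			align_cnt = XA_CHUNK_SIZE;		/* 64 base-page entries */
		}

		/* New link if index is aligned but a full aligned run won't fit. */
		return !(index % align) &&
		       entry_idx + align_cnt > PKRAM_LINK_ENTRIES_MAX;
	}

	int main(void)
	{
		/* Aligned base-page index, only 32 entries left: start a new link. */
		printf("%d\n", would_start_new_link(128, PKRAM_LINK_ENTRIES_MAX - 32, false));
		/* Unaligned index never forces a new link on its own. */
		printf("%d\n", would_start_new_link(129, PKRAM_LINK_ENTRIES_MAX - 32, false));
		return 0;
	}

With these assumed constants a THP save uses align = 4096 and align_cnt = 8,
while the base-page case uses 64 for both, so an aligned group of entries is
pushed to the start of a fresh pkram_link whenever it cannot fit in the
current one, which is what lets shmem_insert_pages() receive runs that begin
on an aligned index on the restore side.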