On Mon, Mar 04, 2024 at 07:07:20PM +0800, Qi Zheng wrote:
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -206,9 +206,11 @@ static void gmap_free(struct gmap *gmap)
> 
>  	/* Free additional data for a shadow gmap */
>  	if (gmap_is_shadow(gmap)) {
> +		struct ptdesc *ptdesc;
> +
>  		/* Free all page tables. */
> -		list_for_each_entry_safe(page, next, &gmap->pt_list, lru)
> -			page_table_free_pgste(page);
> +		list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list)
> +			page_table_free_pgste(ptdesc);

An important note: ptdesc allocation/freeing is different from the standard alloc_pages()/free_pages() routines architectures are used to. Are we sure we don't have memory leaks here?

We always allocate and free ptdescs as compound pages; for page table struct pages, most architectures do not. s390 has CRST_ALLOC_ORDER page tables, so if we free anything through the ptdesc API, we had better be sure it was allocated through the ptdesc API as well.

Like you, I don't have an s390 to test on, so hopefully some s390 expert can chime in and let us know whether we need a fix for this.
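
To illustrate what I mean, here is a rough sketch. The sketch_* helper bodies are paraphrased from memory rather than copied from the tree, and crst_leak_example() is a hypothetical mismatch, not a claim about what s390 actually does today, so please double-check against the real code:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <asm/pgalloc.h>	/* CRST_ALLOC_ORDER on s390 */

/*
 * Roughly what the generic ptdesc helpers do (paraphrased): allocation
 * always sets __GFP_COMP, and freeing trusts compound_order() to
 * recover the allocation order.
 */
static inline struct ptdesc *sketch_pagetable_alloc(gfp_t gfp, unsigned int order)
{
	struct page *page = alloc_pages(gfp | __GFP_COMP, order);

	return page ? page_ptdesc(page) : NULL;
}

static inline void sketch_pagetable_free(struct ptdesc *pt)
{
	struct page *page = ptdesc_page(pt);

	__free_pages(page, compound_order(page));
}

/*
 * Hypothetical mismatch: an order-2 (CRST_ALLOC_ORDER) table allocated
 * the old way, without __GFP_COMP, but freed through the ptdesc path.
 */
static void crst_leak_example(void)
{
	struct page *page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER);

	if (!page)
		return;

	/*
	 * compound_order() is 0 for a non-compound page, so this frees
	 * only the first of the four pages and leaks the other three.
	 */
	sketch_pagetable_free(page_ptdesc(page));
}

That kind of order mismatch is the leak I'm worried about whenever an alloc_pages()-style allocation ends up on a list that is later freed through the ptdesc API.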