On Thu 11-03-21 12:26:32, Muchun Song wrote:
> On Wed, Mar 10, 2021 at 11:19 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
> >
> > On Mon 08-03-21 18:28:02, Muchun Song wrote:
[...]
> > > @@ -1771,8 +1813,12 @@ int dissolve_free_huge_page(struct page *page)
> > >  		h->free_huge_pages--;
> > >  		h->free_huge_pages_node[nid]--;
> > >  		h->max_huge_pages--;
> > > -		update_and_free_page(h, head);
> > > -		rc = 0;
> > > +		rc = update_and_free_page(h, head);
> > > +		if (rc) {
> > > +			h->surplus_huge_pages--;
> > > +			h->surplus_huge_pages_node[nid]--;
> > > +			h->max_huge_pages++;
> >
> > This is quite ugly and confusing. update_and_free_page is careful to do
> > the proper counter accounting, and now you just override it partially.
> > Why can't we rely on update_and_free_page to do the right thing?
>
> The dissolving path is special here. Since update_and_free_page failed,
> the number of surplus pages was incremented. Surplus pages are
> the pages in excess of max_huge_pages. Since we are
> incrementing max_huge_pages back, we should decrement (undo) the
> addition to surplus_huge_pages and surplus_huge_pages_node[nid].

Can we make dissolve_free_huge_page less special, or teach
update_and_free_page not to do this accounting when called from
dissolve_free_huge_page?
-- 
Michal Hocko
SUSE Labs
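
For readers following the counter arithmetic, here is a minimal,
self-contained model of the accounting under discussion. It is an
illustrative sketch, not code from mm/hugetlb.c: the fake_* helpers,
the explicit failure flag, and the reduced counter set are all
assumptions made for illustration. The premise, taken from the
explanation above, is that a failing update_and_free_page re-accounts
the page as surplus, which the dissolve path must then undo because it
also restores max_huge_pages.

	/*
	 * Standalone model of the counter interplay; names mirror
	 * struct hstate, but the fake_ prefix marks helpers invented
	 * for this sketch. Per-node and free-list counters are omitted.
	 */
	#include <stdio.h>

	struct counters {
		long nr_huge_pages;      /* pages held by the pool */
		long max_huge_pages;     /* configured pool size */
		long surplus_huge_pages; /* pages above max_huge_pages */
	};

	/*
	 * Assumption drawn from the thread: when the free fails, the
	 * page is re-accounted as surplus, because the dissolve path
	 * has already decremented max_huge_pages and the pool now
	 * holds one page more than the configured size.
	 */
	static int fake_update_and_free_page(struct counters *c, int fail)
	{
		if (fail) {
			c->surplus_huge_pages++;
			return -1;
		}
		c->nr_huge_pages--;	/* page really freed */
		return 0;
	}

	static int fake_dissolve_free_huge_page(struct counters *c, int fail)
	{
		int rc;

		c->max_huge_pages--;	/* page leaves the configured pool */
		rc = fake_update_and_free_page(c, fail);
		if (rc) {
			/*
			 * Undo: restore max_huge_pages and drop the
			 * surplus accounting done by the failed free;
			 * this is the partial override being debated.
			 */
			c->surplus_huge_pages--;
			c->max_huge_pages++;
		}
		return rc;
	}

	int main(void)
	{
		struct counters c = { .nr_huge_pages = 4, .max_huge_pages = 4 };

		fake_dissolve_free_huge_page(&c, 1 /* simulate free failure */);

		/* Invariant: surplus counts pages in excess of max. */
		printf("total=%ld max=%ld surplus=%ld\n",
		       c.nr_huge_pages, c.max_huge_pages, c.surplus_huge_pages);
		return 0;
	}

Run with the failure flag set, the model ends with surplus_huge_pages
back at zero and max_huge_pages restored to 4, which is the invariant
the undo in the quoted hunk is preserving.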