On Mon, Jan 17, 2022 at 07:06:25PM +0300, Kirill A. Shutemov wrote:
> On Sun, Jan 16, 2022 at 12:18:14PM +0000, Matthew Wilcox (Oracle) wrote:
> > We have to allocate memory in order to split a file-backed folio, so
> > it's not a good idea to split them in the memory freeing path.
>
> Could elaborate on why split a file-backed folio requires memory
> allocation?

In the commit message or explain it to you now?  We need to allocate
xarray nodes to store all the newly-independent pages.  With a folio
that's more than 64 entries in size (current implementation), we elide
the lowest layer of the radix tree.  But with any data structure that
tracks folios, we'll need to create space in it to track N folios
instead of 1.

> > It also
> > doesn't work for XFS because pages have an extra reference count from
> > page_has_private() and split_huge_page() expects that reference to have
> > already been removed.
>
> Need to adjust can_split_huge_page()?

no?

> > Unfortunately, we still have to split shmem THPs
> > because we can't handle swapping out an entire THP yet.
>
> ... especially if the system doesn't have swap :P

Not sure what correction to the commit message you want here.