On Mon, Nov 18, 2024 at 5:03 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Sat, Nov 16, 2024 at 09:16:58AM +0000, Chen Ridong wrote:
> > 2. In the shrink_page_list function, if folioN is a THP (2M), it may be
> > split and added to the swap cache folio by folio. After being added to
> > the swap cache, IO is submitted to write each folio back to swap, which
> > is asynchronous. When shrink_page_list finishes, the isolated folios
> > list is moved back to the head of the inactive lru. The inactive lru
> > may then look like this, with 512 folios having been moved to the head
> > of the inactive lru.
>
> I was hoping that we'd be able to stop splitting the folio when adding
> to the swap cache. Ideally, we'd add the whole 2MB and write it back
> as a single unit.

This is already the case: adding to the swap cache does not itself
require splitting a THP. What forces the split is failing to allocate
2MB of contiguous swap slots.

> This is going to become much more important with memdescs. We'd have to
> allocate 512 struct folios to do this, which would be about 10 4kB pages,
> and if we're trying to swap out memory, we're probably low on memory.
>
> So I don't like this solution you have at all because it doesn't help us
> get to the solution we're going to need in about a year's time.

Ridong might need to clarify why this splitting is occurring. If it is
due to the failure to allocate swap slots, we still need a solution to
address that.
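For reference, the fallback I am referring to lives in shrink_folio_list()
in mm/vmscan.c. The sketch below is paraphrased from a recent kernel, not
verbatim; labels and the surrounding logic are simplified and vary by
kernel version:

	/*
	 * Paraphrased sketch of the swap-out path in shrink_folio_list(),
	 * mm/vmscan.c. add_to_swap() fails for a large folio when no run
	 * of contiguous swap slots can be allocated for it.
	 */
	if (!add_to_swap(folio)) {
		if (!folio_test_large(folio))
			goto activate_locked_split;
		/* Fallback: split the THP and swap out its base pages. */
		if (split_folio_to_list(folio, folio_list))
			goto activate_locked;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
		count_vm_event(THP_SWPOUT_FALLBACK);
#endif
		/* Retry swap allocation for the now order-0 folio. */
		if (!add_to_swap(folio))
			goto activate_locked_split;
	}

Note that this path bumps THP_SWPOUT_FALLBACK, so watching
thp_swpout_fallback in /proc/vmstat while reproducing the problem would
tell us whether swap slot allocation failure is indeed the trigger.

Thanks
Barry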