On 2022/5/7 1:26, Yang Shi wrote:
> On Sun, May 1, 2022 at 10:32 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>>
>> On Mon, May 02, 2022 at 03:28:49PM +1000, NeilBrown wrote:
>>> On Mon, 02 May 2022, Matthew Wilcox wrote:
>>>> On Mon, May 02, 2022 at 02:57:46PM +1000, NeilBrown wrote:
>>>>> @@ -390,9 +392,9 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>>>>>  			struct page *page = sio->bvec[p].bv_page;
>>>>>
>>>>>  			SetPageUptodate(page);
>>>>> +			count_swpout_vm_event(page);
>>>>>  			unlock_page(page);
>>>>>  		}
>>>>> -		count_vm_events(PSWPIN, sio->pages);
>>>>
>>>> Surely that should be count_swpIN_vm_event?
>>>>
>>> I'm not having a good day....
>>>
>>> Certainly shouldn't be swpout.  There isn't a count_swpin_vm_event().
>>>
>>> swap_readpage() only counts once for each page no matter how big it is.
>>> While swap_writepage() counts one for each PAGE_SIZE written.
>>>
>>> And we have THP_SWPOUT but not THP_SWPIN
>>
>> _If_ I understand the swap-in patch correctly (at least as invoked by
>> shmem), it won't attempt to swap in an entire THP.  Even if it swapped
>> out an order-9 page, it will bring in order-0 pages from swap, and then
>> rely on khugepaged to reassemble them.
>
> Totally correct. The try_to_unmap() called by vmscan would split the PMD
> into PTEs and install swap entries for each PTE, but keep the huge page
> itself unsplit.
>
> BTW, there were patches adding THP swapin support, but they were never
> merged.

Could you please tell me where the THP swapin patches are? It would be
really helpful if you could point me to them. :) Thanks a lot!

>
>>
>> Someone who actually understands the swap code should check that my
>> explanation here is correct.
>>
> .
>