Barry Song <21cnbao@xxxxxxxxx> writes:

> On Fri, Mar 15, 2024 at 9:43 PM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>>
>> Barry Song <21cnbao@xxxxxxxxx> writes:
>>
>> > From: Chuanhua Han <hanchuanhua@xxxxxxxx>
>> >
>> > On an embedded system like Android, more than half of anon memory is
>> > actually in swap devices such as zRAM. For example, when an app is
>> > switched to the background, most of its memory might be swapped out.
>> >
>> > Now that we have mTHP features, unfortunately, if we don't support
>> > large folio swap-in, once those large folios are swapped out we
>> > immediately lose the performance gain we can get through large
>> > folios and hardware optimizations such as CONT-PTE.
>> >
>> > This patch brings up mTHP swap-in support. Right now, we limit mTHP
>> > swap-in to those contiguous swaps which were likely swapped out from
>> > an mTHP as a whole.
>> >
>> > Meanwhile, the current implementation only covers the SWAP_SYNCHRONOUS
>> > case. It doesn't support swapin_readahead as large folios yet, since
>> > that kind of shared memory is much less common than memory mapped by a
>> > single process.
>>
>> In contrast, I still think that it's better to start with the normal
>> swap-in path, then expand to the SWAP_SYNCHRONOUS case.
>
> I'd rather try the reverse direction, as non-sync anon memory is only
> around 3% on a phone, from my observation.

Phones are not the only platform that Linux runs on.

>> In the normal swap-in path, we can take advantage of swap readahead
>> information to determine the swapped-in large folio order. That is, if
>> the return value of swapin_nr_pages() > 1, then we can try to allocate
>> and swap in a large folio.
>
> I am not quite sure we still need to depend on this. In
> do_anonymous_page, we have broken the assumption and allocated a large
> folio directly.

I don't think that we have a sophisticated policy for allocating large
folios. Large folios could waste memory for some workloads, so I don't
think that it's a good idea to always allocate large folios. Readahead
gives us an opportunity to play with the policy.

> On the other hand, compressing/decompressing large folios as a whole
> rather than doing it one by one can save a large percentage of CPU
> time and provide a much better compression ratio. With a hardware
> accelerator, this is even faster.

I am not against supporting large folios for compression/decompression.
I just suggest doing that later, after we play with normal swap-in.
The SWAP_SYNCHRONOUS-related swap-in code is an optimization based on
normal swap, so it seems natural to support large folio swap-in for the
normal swap-in path first.

> So I'd rather get large folio swap-in involved more aggressively than
> depending on readahead.

We can take advantage of the readahead algorithm in the SWAP_SYNCHRONOUS
optimization too. The sub-pages that are not accessed by the page fault
can be treated as readahead. I think that is a better policy than
always allocating large folios.
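A minimal, untested sketch of that idea: swapin_nr_pages() is the
existing readahead-window helper in mm/swap_state.c (it would need to
be exposed), while swapin_order(), fault_idx, start_addr and the other
names below are made up purely for illustration.

	/* Derive the swap-in folio order from the readahead window. */
	static int swapin_order(unsigned long offset)
	{
		unsigned long nr = swapin_nr_pages(offset);

		if (nr <= 1)
			return 0;	/* fall back to order-0 */
		return min_t(int, ilog2(nr), HPAGE_PMD_ORDER);
	}

	/*
	 * After swapping in a large folio, map only the faulting
	 * sub-page as young; the other sub-pages are effectively
	 * "readahead".  Clearing their young bits lets us observe
	 * later whether they were actually used and feed that back
	 * into the order-selection policy.
	 */
	for (i = 0; i < nr; i++) {
		pte_t pte = mk_pte(folio_page(folio, i),
				   vma->vm_page_prot);

		if (i != fault_idx)
			pte = pte_mkold(pte);
		set_pte_at(vma->vm_mm, start_addr + i * PAGE_SIZE,
			   ptep + i, pte);
	}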
>> To do that, we need to track whether the sub-pages are accessed. I
>> guess we need that information for large file folio readahead too.
>>
>> Hi, Matthew,
>>
>> Can you help us on tracking whether the sub-pages of a readahead large
>> folio have been accessed?
>>
>> > Right now, we are re-faulting large folios which are still in the
>> > swapcache as a whole. This effectively reduces the extra loops and
>> > early exits which we introduced in arch_swap_restore() while
>> > supporting MTE restore for folios rather than pages. On the other
>> > hand, it also reduces the number of do_swap_page() faults, as PTEs
>> > used to be set one by one even when we hit a large folio in the
>> > swapcache.
>> >

--
Best Regards,
Huang, Ying