Minchan Kim <minchan@xxxxxxxxxx> writes:

> On Tue, Nov 28, 2023 at 11:19:20AM +0800, Huang, Ying wrote:
>> Yosry Ahmed <yosryahmed@xxxxxxxxxx> writes:
>>
>> > On Mon, Nov 27, 2023 at 1:32 PM Minchan Kim <minchan@xxxxxxxxxx> wrote:
>> >>
>> >> On Mon, Nov 27, 2023 at 12:22:59AM -0800, Chris Li wrote:
>> >> > On Mon, Nov 27, 2023 at 12:14 AM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
>> >> > > > I agree with Ying that anonymous pages typically have different
>> >> > > > page access patterns than file pages, so we might want to treat
>> >> > > > them differently to reclaim them effectively.
>> >> > > > One random idea:
>> >> > > > How about we put the anonymous pages in the swap cache on a
>> >> > > > different LRU than the rest of the anonymous pages?  Then
>> >> > > > shrinking against those pages in the swap cache would be more
>> >> > > > effective.  Instead of having [anon, file] LRUs, we would have
>> >> > > > [anon not in swap cache, anon in swap cache, file] LRUs.
>> >> > >
>> >> > > I don't think that it is necessary.  The patch is only for a
>> >> > > special use case, where the swap device is used up while some
>> >> > > pages are in the swap cache.  The patch will kill performance,
>> >> > > but it is used to avoid OOM only, not to improve performance.
>> >> > > Per my understanding, we will not use up swap device space in
>> >> > > most cases.  This may be true for ZRAM, but will we keep pages
>> >> > > in the swap cache for long when we use ZRAM?
>> >> >
>> >> > I asked the question regarding how many pages can be freed by this
>> >> > patch in this email thread as well, but haven't got an answer from
>> >> > the author yet.  That is one important aspect of evaluating how
>> >> > valuable that patch is.
>> >>
>> >> Exactly.  Since the swap cache has a different lifetime from the page
>> >> cache, the pages would usually be dropped when they are unmapped
>> >> (unless they are shared with others, but anon is usually exclusively
>> >> private), so I wonder how much memory we can save.
>> >
>> > I think the point of this patch is not saving memory, but rather
>> > avoiding an OOM condition that will happen if we have no swap space
>> > left, but some pages left in the swap cache.  Of course, the OOM
>> > avoidance will come at the cost of extra work in reclaim to swap
>> > those pages out.
>> >
>> > The only case where I think this might be harmful is if there are
>> > plenty of pages to reclaim on the file LRU, and instead we opt to
>> > chase down the few swap cache pages.  So perhaps we can add a check
>> > to only set sc->swapcache_only if the number of pages in the swap
>> > cache is larger than the number of pages on the file LRU, or similar?
>> > Just make sure we don't chase the swapcache pages down if there's
>> > plenty to scan on the file LRU?
>>
>> The swap cache pages can be divided into 3 groups:
>>
>> - group 1: pages that have been written out and are at the tail of the
>>   inactive LRU, but have not been reclaimed yet.
>>
>> - group 2: pages that have been written out, but failed to be reclaimed
>>   (e.g., they were accessed before reclaiming).
>>
>> - group 3: pages that have been swapped in, but were kept in the swap
>>   cache.  These pages may be on the active LRU.
>>
>> The main target of the original patch should be group 1.  And those
>> pages may be cheaper to reclaim than file pages.
>
> Yeah, that's common for asynchronous swap devices, and those are
> popular.  Then, how about freeing that memory as soon as the writeback
> is done, instead of adding more tricks to solve the issue?
>
> https://lkml.kernel.org/linux-mm/1368411048-3753-1-git-send-email-minchan@xxxxxxxxxx/
>
> I remember it runs in softIRQ context, so there were some issues with
> changing the locking rules for memcg and swap.  And there was some
> concern about increasing softirq latency due to the page freeing, but
> neither of them was the main obstacle to getting it fixed.

Thanks for sharing.  It's good to avoid adding the pages back to the LRU
and then isolating them from the LRU again.

My concern is: could too many pages be reclaimed?  For example, to
reclaim a small number of pages, many more pages may have been written
to disk because of the performance difference between the CPU and the
storage device.  Originally, we still reclaim only the requested number
of pages even though many more were written out.  But with the change,
we may reclaim all of them.

>>
>> Group 2 is hard to reclaim if swap_count() isn't 0.
>
> "were accessed before reclaiming" would be rare.

If the page reclaiming algorithm works well enough, that should be true.

>>
>> Group 3 should be reclaimed in theory, but the overhead may be high.
>> And we may need to reclaim the swap entries instead of the pages if
>> the pages are hot.  But we can start to reclaim the swap entries
>> before the swap space runs out.
>
> I thought the swap-in path will reclaim the swap slots once it detects
> that swap space isn't enough (e.g., vm_swap_full() or
> mem_cgroup_swap_full())?

Yes, you are right.  But before swap space becomes scarce, we may keep
quite a few pages in the swap cache.  These pages may become hot later,
and then we have no opportunity to reclaim their swap space.  So we may
need to add some code to check for this situation at appropriate places,
for example when we scan pages in the active list, or when we activate
pages in the inactive list (a rough sketch follows at the end of this
mail).

>>
>> So, if we can count group 1, we may use that as an indicator to scan
>> anon pages.  And we may add code to reclaim group 3 earlier.

--
Best Regards,
Huang, Ying
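
PS: To make the "add some code to check this situation" idea above a bit
more concrete, here is a rough, untested sketch.  It is an illustration
only, not the patch under discussion: the helper and its hook point are
made up, while folio_test_swapcache(), folio_test_writeback(),
vm_swap_full(), folio_trylock()/folio_unlock() and folio_free_swap() are
the helpers recent kernels provide for this.

#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/pagemap.h>

/*
 * Illustrative sketch only: drop the swap entry of a hot, swap-cache
 * resident folio when swap space is getting tight, so the slot can be
 * reused.  A per-memcg check (e.g. mem_cgroup_swap_full()) could be
 * added in the same place.
 */
static void maybe_free_swap_of_hot_folio(struct folio *folio)
{
	/* Only folios that still sit in the swap cache are interesting. */
	if (!folio_test_swapcache(folio))
		return;

	/* Don't touch folios whose swap slot is still being written. */
	if (folio_test_writeback(folio))
		return;

	/* Only bother once swap space is getting tight. */
	if (!vm_swap_full())
		return;

	/*
	 * folio_free_swap() drops the swap cache entry and releases the
	 * swap slot if the folio is not also referenced via swap entries;
	 * it requires the folio lock, so skip when the lock is contended.
	 */
	if (folio_trylock(folio)) {
		folio_free_swap(folio);
		folio_unlock(folio);
	}
}

Such a helper could be called, for example, where a folio is activated
from the inactive list; whether the extra folio lock/unlock there is
acceptable is exactly the overhead question raised for group 3 above.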