On Wed, Nov 16, 2022 at 3:59 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Tue, 15 Nov 2022 18:38:07 -0700 Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
>
> > The page reclaim isolates a batch of folios from the tail of one of
> > the LRU lists and works on those folios one by one. For a suitable
> > swap-backed folio, if the swap device is async, it queues that folio
> > for writeback. After the page reclaim finishes an entire batch, it
> > puts back the folios it queued for writeback to the head of the
> > original LRU list.
> >
> > In the meantime, the page writeback flushes the queued folios also
> > by batches. Its batching logic is independent of that of the page
> > reclaim. For each of the folios it writes back, the page writeback
> > calls folio_rotate_reclaimable(), which tries to rotate a folio to
> > the tail.
> >
> > folio_rotate_reclaimable() only works for a folio after the page
> > reclaim has put it back. If an async swap device is fast enough, the
> > page writeback can finish with that folio while the page reclaim is
> > still working on the rest of the batch containing it. In this case,
> > that folio will remain at the head and the page reclaim will not
> > retry it before reaching there.
> >
> > This patch adds a retry to evict_folios(). After evict_folios() has
> > finished an entire batch and before it puts back folios it cannot
> > free immediately, it retries those that may have missed the
> > rotation.
> >
> > Before this patch, ~60% of folios swapped to an Intel Optane device
> > missed folio_rotate_reclaimable(). After this patch, ~99% of missed
> > folios were reclaimed upon retry.
> >
> > This problem affects relatively slow async swap devices like Samsung
> > 980 Pro much less and does not affect sync swap devices like zram or
> > zswap at all.
>
> As I understand it, this approach has an implicit assumption that by
> the time evict_folios() has completed its first pass, write IOs will
> have completed and the resulting folios are available for processing
> on evict_folios()'s second pass, yes?

Correct.

> If so, it all kinda works by luck of timing.

Yes, it's betting on luck. But it's a very good bet because the race
window on the second pass is probably 100 times smaller.

The race window on the first pass is the while() loop in
shrink_folio_list(), and it has a lot to work on. The race window on
the second pass is a simple list_for_each_entry_safe_reverse() loop.
This small race window is closed immediately after we put the folios
that are still under writeback back on the LRU list. Then we call
shrink_folio_list() again for the retry.

> If the swap device is even slower, the number of folios which are
> unavailable on the second pass will increase?

Correct.

> Can we make this more deterministic? For example change
> evict_folios() to recognize this situation and to then do
> folio_rotate_reclaimable()'s work for it? Or if that isn't practical,
> do something else?

There are multiple options, but none of them is a better tradeoff:

1) The page reclaim telling the page writeback exactly when to flush.
   pro: more reliable
   con: the page reclaim doesn't know better

2) Adding a synchronization mechanism between the two.
   pro: more reliable
   con: a lot more complexity

3) Unlocking folios and submitting bios after they are put back on the
   LRU list (my second choice).
   pro: more reliable
   con: more complexity (within mm)

> (Is folio_rotate_reclaimable() actually useful? That concept must be
> 20 years old. What breaks if we just delete it and leave the pages
> wherever they are?)
Most people use zram (with rw_page) or zswap nowadays, and they don't need folio_rotate_reclaimable(). But we still need that function to support swapping to SSD. (Optane is discontinued.)
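
For reference, the reason an isolated folio misses the rotation is
visible in folio_rotate_reclaimable() itself: it bails out unless the
folio is actually on an LRU list. A paraphrase (the real function in
mm/swap.c moves the folio via a per-CPU batch under a local lock; that
part is elided here):

	/*
	 * Paraphrase of folio_rotate_reclaimable() (mm/swap.c). A folio
	 * that the page reclaim has isolated fails the folio_test_lru()
	 * check, so its writeback completion cannot rotate it to the
	 * tail; it stays wherever the page reclaim later puts it back.
	 */
	void folio_rotate_reclaimable(struct folio *folio)
	{
		if (folio_test_locked(folio) || folio_test_dirty(folio) ||
		    folio_test_unevictable(folio) || !folio_test_lru(folio))
			return;

		/* move the folio to the tail of its LRU list (elided) */
	}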
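
And to make the retry concrete, here is a rough sketch of what
evict_folios() does with this patch. It is simplified, not the patch
verbatim: isolation, putback, statistics, and several busy-folio checks
are elided, and it assumes the mm/vmscan.c context (shrink_folio_list()
and friends). The list name "clean" and the "retried" guard are
illustrative.

	/*
	 * Rough sketch of the retry in evict_folios() (mm/vmscan.c),
	 * assuming the batch has already been isolated onto @list.
	 */
	static void evict_folios_sketch(struct lruvec *lruvec,
					struct scan_control *sc)
	{
		LIST_HEAD(list);	/* the isolated batch (isolation elided) */
		LIST_HEAD(clean);	/* folios that missed the rotation */
		struct folio *folio;
		struct folio *next;
		struct reclaim_stat stat;
		bool retried = false;
		struct pglist_data *pgdat = lruvec_pgdat(lruvec);

	retry:
		/* first pass: may queue swap-backed folios for writeback */
		sc->nr_reclaimed += shrink_folio_list(&list, pgdat, sc,
						      &stat, false);

		/*
		 * This reverse walk is the second, much smaller race
		 * window mentioned above. A folio whose writeback
		 * finished while it was isolated is no longer dirty or
		 * under writeback, but it missed
		 * folio_rotate_reclaimable(); pull it out for a retry
		 * instead of returning it to the head of the LRU list.
		 */
		list_for_each_entry_safe_reverse(folio, next, &list, lru) {
			/* still in flight: put back to the head as before */
			if (folio_test_dirty(folio) ||
			    folio_test_writeback(folio))
				continue;

			if (!folio_test_locked(folio) &&
			    !folio_test_active(folio))
				list_move(&folio->lru, &clean);
		}

		/* put the remaining folios back on the LRU list (elided) */

		if (!retried && !list_empty(&clean)) {
			list_splice_init(&clean, &list);
			retried = true;
			goto retry;
		}
	}

The reverse walk keeps the retried folios in their original LRU order,
and the single "retried" pass bounds the extra work: folios whose
writeback still hasn't finished by the second pass simply go back to
the LRU head, as they did before the patch.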