On 19.02.24 16:03, Matthew Wilcox wrote:
On Mon, Feb 19, 2024 at 10:43:06AM +0100, David Hildenbrand wrote:
On 17.02.24 03:25, Matthew Wilcox (Oracle) wrote:
By making release_pages() call folios_put(), we can get rid of the calls
to compound_head() for the callers that already know they have folios.
We can also get rid of the lock_batch tracking as we know the size
of the batch is limited by folio_batch. This does reduce the maximum
number of pages for which the lruvec lock is held, from SWAP_CLUSTER_MAX
(32) to PAGEVEC_SIZE (15). I do not expect this to make a significant
difference, but if it does, we can increase PAGEVEC_SIZE to 31.
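As an aside, to spell out the caller-side win described above: with the folio_batch-based folios_put() from this series, a caller that already has folios can batch and put them itself, without the put path ever calling compound_head(). Rough sketch only; the function name is made up, and I'm assuming folios_put() leaves the batch empty, as the code further down relies on for folios_put_refs():

/*
 * Illustration only, not part of the patch.  A folio-aware caller fills a
 * folio_batch itself and calls folios_put() directly, so nothing in the
 * put path needs compound_head().  Needs <linux/mm.h> and <linux/pagevec.h>.
 */
static void my_put_folios(struct folio **folios, unsigned int nr)
{
	struct folio_batch fbatch;
	unsigned int i;

	folio_batch_init(&fbatch);
	for (i = 0; i < nr; i++) {
		/* folio_batch_add() returns the space left in the batch */
		if (folio_batch_add(&fbatch, folios[i]) > 0)
			continue;
		/* drops one reference per folio and empties the batch */
		folios_put(&fbatch);
	}
	if (folio_batch_count(&fbatch))
		folios_put(&fbatch);
}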
I'm afraid that won't apply to current mm-unstable anymore, where we can now
put multiple references to a single folio (as part of unmapping
large PTE-mapped folios).
Argh. I'm not a huge fan of that approach, but let's live with it for
now.
I'm hoping we can at least get rid of the page ranges at some point (and
just pass folio + nr_refs), but for the time being there is no way
around them, because the delayed rmap handling needs the exact pages (ugh).
folios_put_refs() does sound reasonable in any case, although "putting
multiple references" is likely limited to the zap/munmap/... code paths.
How about this as a replacement patch? It compiles ...
Nothing jumped out at me; one comment:
[...]
+EXPORT_SYMBOL(folios_put);
+
+/**
+ * release_pages - batched put_page()
+ * @arg: array of pages to release
+ * @nr: number of pages
+ *
+ * Decrement the reference count on all the pages in @arg. If it
+ * fell to zero, remove the page from the LRU and free it.
+ *
+ * Note that the argument can be an array of pages, encoded pages,
+ * or folio pointers. We ignore any encoded bits, and turn any of
+ * them into just a folio that gets free'd.
+ */
+void release_pages(release_pages_arg arg, int nr)
+{
+	struct folio_batch fbatch;
+	int refs[PAGEVEC_SIZE];
+	struct encoded_page **encoded = arg.encoded_pages;
+	int i;
+
+	folio_batch_init(&fbatch);
+	for (i = 0; i < nr; i++) {
+		/* Turn any of the argument types into a folio */
+		struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));
+
+		/* Is our next entry actually "nr_pages" -> "nr_refs" ? */
+		refs[fbatch.nr] = 1;
+		if (unlikely(encoded_page_flags(encoded[i]) &
+			     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
+			refs[fbatch.nr] = encoded_nr_pages(encoded[++i]);
+
+		if (folio_batch_add(&fbatch, folio) > 0)
+			continue;
+		folios_put_refs(&fbatch, refs);
+	}
+
+	if (fbatch.nr)
+		folios_put_refs(&fbatch, refs);
I wonder if it makes sense to remember whether we saw any ref != 1, and
simply call folios_put() when we did not.
But I guess the whole point of PAGEVEC_SIZE is that the batch is very
cache-friendly, so traversing it a second time (e.g., when all we are
doing is freeing order-0 folios) is not too expensive.
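Untested, but something like this on top of the loop above is what I have in mind ("have_refs" is just for illustration, and I'm assuming folios_put() empties the batch the same way folios_put_refs() does):

	bool have_refs = false;

	folio_batch_init(&fbatch);
	for (i = 0; i < nr; i++) {
		struct folio *folio = page_folio(encoded_page_ptr(encoded[i]));

		refs[fbatch.nr] = 1;
		if (unlikely(encoded_page_flags(encoded[i]) &
			     ENCODED_PAGE_BIT_NR_PAGES_NEXT)) {
			refs[fbatch.nr] = encoded_nr_pages(encoded[++i]);
			have_refs = true;
		}

		if (folio_batch_add(&fbatch, folio) > 0)
			continue;
		/* if we never saw a "nr_refs" entry, every ref is 1 */
		if (have_refs)
			folios_put_refs(&fbatch, refs);
		else
			folios_put(&fbatch);
		have_refs = false;
	}
	/* ... and the same folios_put() vs. folios_put_refs() choice for the final flush */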
--
Cheers,
David / dhildenb