Re: [PATCH v2 15/16] slab: Allocate frozen pages

On 8/9/22 19:18, Matthew Wilcox (Oracle) wrote:
Since slab does not use the page refcount, it can allocate and
free frozen pages, saving one atomic operation per free.

Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Reviewed-by: William Kucharski <william.kucharski@xxxxxxxxxx>
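
FWIW, for anyone skimming the archive, the "one atomic operation per free"
is the refcount drop in __free_pages(). Roughly - the __free_pages() side
below is from mainline mm/page_alloc.c, while the free_frozen_pages() body
is only my assumption of what this series ends up doing, not a quote from it:

	void __free_pages(struct page *page, unsigned int order)
	{
		if (put_page_testzero(page))	/* atomic dec-and-test */
			free_the_page(page, order);
		else if (!PageHead(page))	/* someone else still holds a ref */
			while (order-- > 0)
				free_the_page(page + (1 << order), order);
	}

	void free_frozen_pages(struct page *page, unsigned int order)
	{
		/* slab never elevated the refcount, so nothing to drop */
		free_the_page(page, order);
	}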

AFAICS the problem with has_unmovable_pages() is not addressed:
https://lore.kernel.org/all/40d658da-6220-e05e-ba0b-d95c82f6bfb3@xxxxxxxxxx/

But I don't think it's a sustainable approach to keep enhancing the checks there with PageSlab() and then with whatever other users adopt frozen page allocation in the future. I guess it would be better to just be able to detect pages on the pcplists without false positives. A new page type? Maybe the overhead of managing it would be negligible, as we already set page->index for the migratetype anyway?
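
To make the page type idea more concrete, something like the below is what
I have in mind. Nothing here exists in any tree - PG_pcplist / PagePcplist
are made-up names - it just reuses the existing PAGE_TYPE_OPS() machinery,
and the set/clear would go in the same paths that already record the
migratetype for pcp pages:

	/* include/linux/page-flags.h (hypothetical) */
	#define PG_pcplist	0x00000800	/* next free page_type bit */

	PAGE_TYPE_OPS(Pcplist, pcplist)

	/*
	 * mm/page_alloc.c (hypothetical): __SetPagePcplist() when a page
	 * goes onto a per-cpu list, __ClearPagePcplist() when it is
	 * allocated from it or drained back to the buddy lists. Then
	 * has_unmovable_pages() could test PagePcplist() explicitly
	 * instead of inferring "free" from a zero refcount, and wouldn't
	 * need a PageSlab() special case for frozen slab pages at all.
	 */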

---
  mm/slab.c | 23 +++++++++++------------
  1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 10e96137b44f..e7603d23c6c9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1355,23 +1355,23 @@ slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nodeid)
  static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
  								int nodeid)
  {
-	struct folio *folio;
+	struct page *page;
  	struct slab *slab;
  	flags |= cachep->allocflags;

-	folio = (struct folio *) __alloc_pages_node(nodeid, flags, cachep->gfporder);
-	if (!folio) {
+	page = __alloc_frozen_pages(flags, cachep->gfporder, nodeid, NULL);
+	if (!page) {
  		slab_out_of_memory(cachep, flags, nodeid);
  		return NULL;
  	}

-	slab = folio_slab(folio);
+	__SetPageSlab(page);
+	slab = (struct slab *)page;

  	account_slab(slab, cachep->gfporder, cachep, flags);
-	__folio_set_slab(folio);
  	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
-	if (sk_memalloc_socks() && page_is_pfmemalloc(folio_page(folio, 0)))
+	if (sk_memalloc_socks() && page_is_pfmemalloc(page))
  		slab_set_pfmemalloc(slab);

  	return slab;
@@ -1383,18 +1383,17 @@ static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
  static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
  {
  	int order = cachep->gfporder;
-	struct folio *folio = slab_folio(slab);
+	struct page *page = (struct page *)slab;

-	BUG_ON(!folio_test_slab(folio));
  	__slab_clear_pfmemalloc(slab);
-	__folio_clear_slab(folio);
-	page_mapcount_reset(folio_page(folio, 0));
-	folio->mapping = NULL;
+	__ClearPageSlab(page);
+	page_mapcount_reset(page);
+	page->mapping = NULL;

  	if (current->reclaim_state)
  		current->reclaim_state->reclaimed_slab += 1 << order;
  	unaccount_slab(slab, order, cachep);
-	__free_pages(folio_page(folio, 0), order);
+	free_frozen_pages(page, order);
  }

  static void kmem_rcu_free(struct rcu_head *head)




