+ mm-zswap-zswap_store_page-will-initialize-entry-after-adding-to-xarray.patch added to mm-unstable branch

The patch titled
     Subject: mm: zswap: zswap_store_page() will initialize entry after adding to xarray.
has been added to the -mm mm-unstable branch.  Its filename is
     mm-zswap-zswap_store_page-will-initialize-entry-after-adding-to-xarray.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-zswap-zswap_store_page-will-initialize-entry-after-adding-to-xarray.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Kanchana P Sridhar <kanchana.p.sridhar@xxxxxxxxx>
Subject: mm: zswap: zswap_store_page() will initialize entry after adding to xarray.
Date: Wed, 2 Oct 2024 10:33:29 -0700

This incorporates Yosry's suggestions in [1] for further simplifying
zswap_store_page().  If the page is successfully compressed and added to
the xarray, we get the pool/objcg refs and initialize all the entry's
members.  Only after this do we add the entry to the zswap LRU.

In the window between the entry's addition to the xarray and its member
initialization, we are protected against concurrent stores/loads/swapoff
through the folio lock, and are protected against writeback because the
entry is not on the LRU yet.

This way, we never have to drop the pool/objcg refs on an error path,
now that ref acquisition and entry initialization are centralized in the
successful page-store code path.
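
For reference, a condensed sketch of the reordered flow, distilled from
the diff below (stat updates and the details of the xa_store() error
handling are elided here):

	static ssize_t zswap_store_page(struct page *page,
					struct obj_cgroup *objcg,
					struct zswap_pool *pool)
	{
		swp_entry_t page_swpentry = page_swap_entry(page);
		struct zswap_entry *entry, *old;

		entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
		if (!entry)
			return -EINVAL;

		/* Compress first; the pool is now an explicit parameter. */
		if (!zswap_compress(page, entry, pool))
			goto compress_failed;

		/* Publish the entry in the xarray. */
		old = xa_store(swap_zswap_tree(page_swpentry),
			       swp_offset(page_swpentry), entry, GFP_KERNEL);
		if (xa_is_err(old))
			goto store_failed;
		if (old)
			zswap_entry_free(old);

		/*
		 * No failure is possible past this point, so the refs
		 * taken here are only dropped by zswap_entry_free() when
		 * the entry is removed from the tree.
		 */
		zswap_pool_get(pool);
		if (objcg)
			obj_cgroup_get(objcg);

		/*
		 * Initialize the entry's members while it is already in
		 * the xarray: the folio lock excludes concurrent
		 * stores/loads/swapoff, and writeback cannot see the
		 * entry because it is not on the LRU yet.
		 */
		entry->pool = pool;
		entry->swpentry = page_swpentry;
		entry->objcg = objcg;
		entry->referenced = true;

		/* Only now expose the entry to writeback via the LRU. */
		if (entry->length) {
			INIT_LIST_HEAD(&entry->lru);
			zswap_lru_add(&zswap_list_lru, entry);
		}

		return entry->length;

	store_failed:
		zpool_free(pool->zpool, entry->handle);
	compress_failed:
		zswap_entry_cache_free(entry);
		return -EINVAL;
	}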

In keeping with this simplification, zswap_compress() is modified to
take a zswap_pool parameter instead of obtaining the pool from
entry->pool.
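
Concretely, the new prototype and the two internal uses it replaces
(taken from the diff below, body elided):

	static bool zswap_compress(struct page *page, struct zswap_entry *entry,
				   struct zswap_pool *pool)
	{
		/* ... */
		/* previously: raw_cpu_ptr(entry->pool->acomp_ctx) */
		acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
		/* ... */
		/* previously: entry->pool->zpool */
		zpool = pool->zpool;
		/* ... */
	}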

[1]: https://lore.kernel.org/all/CAJD7tkZh6ufHQef5HjXf_F5b5LC1EATexgseD=4WvrO+a6Ni6w@xxxxxxxxxxxxxx/

Link: https://lkml.kernel.org/r/20241002173329.213722-1-kanchana.p.sridhar@xxxxxxxxx
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@xxxxxxxxx>
Cc: Chengming Zhou <chengming.zhou@xxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Nhat Pham <nphamcs@xxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Wajdi Feghali <wajdi.k.feghali@xxxxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/zswap.c |   56 +++++++++++++++++++++++----------------------------
 1 file changed, 26 insertions(+), 30 deletions(-)

--- a/mm/zswap.c~mm-zswap-zswap_store_page-will-initialize-entry-after-adding-to-xarray
+++ a/mm/zswap.c
@@ -881,7 +881,8 @@ static int zswap_cpu_comp_dead(unsigned
 	return 0;
 }
 
-static bool zswap_compress(struct page *page, struct zswap_entry *entry)
+static bool zswap_compress(struct page *page, struct zswap_entry *entry,
+			   struct zswap_pool *pool)
 {
 	struct crypto_acomp_ctx *acomp_ctx;
 	struct scatterlist input, output;
@@ -893,7 +894,7 @@ static bool zswap_compress(struct page *
 	gfp_t gfp;
 	u8 *dst;
 
-	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+	acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
 
 	mutex_lock(&acomp_ctx->mutex);
 
@@ -926,7 +927,7 @@ static bool zswap_compress(struct page *
 	if (comp_ret)
 		goto unlock;
 
-	zpool = entry->pool->zpool;
+	zpool = pool->zpool;
 	gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
 	if (zpool_malloc_support_movable(zpool))
 		gfp |= __GFP_HIGHMEM | __GFP_MOVABLE;
@@ -1413,32 +1414,21 @@ static ssize_t zswap_store_page(struct p
 				struct obj_cgroup *objcg,
 				struct zswap_pool *pool)
 {
+	swp_entry_t page_swpentry = page_swap_entry(page);
 	struct zswap_entry *entry, *old;
 
 	/* allocate entry */
 	entry = zswap_entry_cache_alloc(GFP_KERNEL, page_to_nid(page));
 	if (!entry) {
 		zswap_reject_kmemcache_fail++;
-		goto reject;
+		return -EINVAL;
 	}
 
-	/* zswap_store() already holds a ref on 'objcg' and 'pool' */
-	if (objcg)
-		obj_cgroup_get(objcg);
-	zswap_pool_get(pool);
-
-	/* if entry is successfully added, it keeps the reference */
-	entry->pool = pool;
+	if (!zswap_compress(page, entry, pool))
+		goto compress_failed;
 
-	if (!zswap_compress(page, entry))
-		goto put_pool_objcg;
-
-	entry->swpentry = page_swap_entry(page);
-	entry->objcg = objcg;
-	entry->referenced = true;
-
-	old = xa_store(swap_zswap_tree(entry->swpentry),
-		       swp_offset(entry->swpentry),
+	old = xa_store(swap_zswap_tree(page_swpentry),
+		       swp_offset(page_swpentry),
 		       entry, GFP_KERNEL);
 	if (xa_is_err(old)) {
 		int err = xa_err(old);
@@ -1457,6 +1447,16 @@ static ssize_t zswap_store_page(struct p
 		zswap_entry_free(old);
 
 	/*
+	 * The entry is successfully compressed and stored in the tree, there is
+	 * no further possibility of failure. Grab refs to the pool and objcg.
+	 * These refs will be dropped by zswap_entry_free() when the entry is
+	 * removed from the tree.
+	 */
+	zswap_pool_get(pool);
+	if (objcg)
+		obj_cgroup_get(objcg);
+
+	/*
 	 * We finish initializing the entry while it's already in xarray.
 	 * This is safe because:
 	 *
@@ -1466,25 +1466,21 @@ static ssize_t zswap_store_page(struct p
 	 *    The publishing order matters to prevent writeback from seeing
 	 *    an incoherent entry.
 	 */
+	entry->pool = pool;
+	entry->swpentry = page_swpentry;
+	entry->objcg = objcg;
+	entry->referenced = true;
 	if (entry->length) {
 		INIT_LIST_HEAD(&entry->lru);
 		zswap_lru_add(&zswap_list_lru, entry);
 	}
 
-	/*
-	 * We shouldn't have any possibility of failure after the entry is
-	 * added in the xarray. The pool/objcg refs obtained here will only
-	 * be dropped if/when zswap_entry_free() gets called.
-	 */
 	return entry->length;
 
 store_failed:
-	zpool_free(entry->pool->zpool, entry->handle);
-put_pool_objcg:
-	zswap_pool_put(pool);
-	obj_cgroup_put(objcg);
+	zpool_free(pool->zpool, entry->handle);
+compress_failed:
 	zswap_entry_cache_free(entry);
-reject:
 	return -EINVAL;
 }
 
_

Patches currently in -mm which might be from kanchana.p.sridhar@xxxxxxxxx are

mm-define-obj_cgroup_get-if-config_memcg-is-not-defined.patch
mm-zswap-modify-zswap_compress-to-accept-a-page-instead-of-a-folio.patch
mm-zswap-rename-zswap_pool_get-to-zswap_pool_tryget.patch
mm-change-count_objcg_event-to-count_objcg_events-for-batch-event-updates.patch
mm-zswap-modify-zswap_stored_pages-to-be-atomic_long_t.patch
mm-zswap-support-large-folios-in-zswap_store.patch
mm-swap-count-successful-large-folio-zswap-stores-in-hugepage-zswpout-stats.patch
mm-zswap-zswap_store_page-will-initialize-entry-after-adding-to-xarray.patch




