[patch] mm,zswap: fix zswap::zswap_comp.lock vs zsmalloc::zs_map_area.lock deadlock

When zswap_comp.lock was added, zpool_map_handle()/zpool_unmap_handle() were
being called both inside and outside of the preempt disabled sections it
replaced, so once those sections were removed, zswap_comp.lock covered the
map/unmap calls in zswap_frontswap_store() but not those in
zswap_frontswap_load().  With the later addition of zs_map_area.lock to the
zsmalloc map/unmap methods, the two paths now take these two locks in
opposite order, a zswap_frontswap_load() vs zswap_frontswap_store() lock
inversion that can deadlock.
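
A simplified sketch of the pre-patch ordering (the store side is
reconstructed from the description above; zs_map_area.lock is taken inside
the zsmalloc map/unmap methods reached via zpool_map_handle() and
zpool_unmap_handle()):

	zswap_frontswap_store():
		local_lock(&zswap_comp.lock);	/* takes A */
		crypto_comp_compress();
		zpool_map_handle();		/* takes zs_map_area.lock: B */
		memcpy();			/* copy compressed data into the pool */
		zpool_unmap_handle();		/* drops B */
		local_unlock(&zswap_comp.lock);	/* drops A */

	zswap_frontswap_load():
		zpool_map_handle();		/* takes zs_map_area.lock: B */
		local_lock(&zswap_comp.lock);	/* takes A */
		crypto_comp_decompress();
		local_unlock(&zswap_comp.lock);	/* drops A */
		zpool_unmap_handle();		/* drops B */

Store acquires A then B while load acquires B then A, a classic ABBA
inversion, which the hunk below resolves by having load take A before B
as well.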

Call the zpool map/unmap methods in zswap_frontswap_load() under
zswap_comp.lock, as they are called in zswap_frontswap_store(), to prevent
the deadlock.

Signed-off-by: Mike Galbraith <efault@xxxxxx>
---
 mm/zswap.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1183,17 +1183,17 @@ static int zswap_frontswap_load(unsigned
 	}

 	/* decompress */
+	local_lock(&zswap_comp.lock);
 	dlen = PAGE_SIZE;
 	src = zpool_map_handle(entry->pool->zpool, entry->handle, ZPOOL_MM_RO);
 	if (zpool_evictable(entry->pool->zpool))
 		src += sizeof(struct zswap_header);
 	dst = kmap_atomic(page);
-	local_lock(&zswap_comp.lock);
 	tfm = *this_cpu_ptr(entry->pool->tfm);
 	ret = crypto_comp_decompress(tfm, src, entry->length, dst, &dlen);
-	local_unlock(&zswap_comp.lock);
 	kunmap_atomic(dst);
 	zpool_unmap_handle(entry->pool->zpool, entry->handle);
+	local_unlock(&zswap_comp.lock);
 	BUG_ON(ret);

 freeentry:




