This is a note to let you know that I've just added the patch titled

    zram: avoid double free in function zram_bvec_write()

to the 3.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     zram-avoid-double-free-in-function-zram_bvec_write.patch
and it can be found in the queue-3.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 65c484609a3b25c35e4edcd5f2c38f98f5226093 Mon Sep 17 00:00:00 2001
From: Jiang Liu <liuj97@xxxxxxxxx>
Date: Fri, 7 Jun 2013 00:07:25 +0800
Subject: zram: avoid double free in function zram_bvec_write()

From: Jiang Liu <liuj97@xxxxxxxxx>

commit 65c484609a3b25c35e4edcd5f2c38f98f5226093 upstream.

When doing a partial write and the whole page is filled with zero,
zram_bvec_write() will free uncmem twice.

Signed-off-by: Jiang Liu <jiang.liu@xxxxxxxxxx>
Acked-by: Minchan Kim <minchan@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 drivers/staging/zram/zram_drv.c |    2 --
 1 file changed, 2 deletions(-)

--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -272,8 +272,6 @@ static int zram_bvec_write(struct zram *
 
 	if (page_zero_filled(uncmem)) {
 		kunmap_atomic(user_mem);
-		if (is_partial_io(bvec))
-			kfree(uncmem);
 		zram->stats.pages_zero++;
 		zram_set_flag(meta, index, ZRAM_ZERO);
 		ret = 0;


Patches currently in stable-queue which might be from liuj97@xxxxxxxxx are

queue-3.10/zram-avoid-access-beyond-the-zram-device.patch
queue-3.10/zram-protect-sysfs-handler-from-invalid-memory-access.patch
queue-3.10/zram-avoid-double-free-in-function-zram_bvec_write.patch
queue-3.10/zram-use-zram-lock-to-protect-zram_free_page-in-swap-free-notify-path.patch
queue-3.10/zram-avoid-invalid-memory-access-in-zram_exit.patch
queue-3.10/zram-destroy-all-devices-on-error-recovery-path-in-zram_init.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
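For readers following the bug itself: in zram_bvec_write(), a partial write allocates a temporary `uncmem` buffer, and the function releases it again at its common exit path. Before this patch, the zero-filled-page branch also freed the buffer before jumping to that exit path, so the same pointer was passed to kfree() twice. The following is a minimal userspace sketch of the *patched* control flow, not the kernel code: `write_partial()`, `kfree_stub()`, and `free_count` are hypothetical stand-ins for zram_bvec_write(), kfree(), and an instrumentation counter, and the real function operates on mapped pages rather than a plain malloc'd buffer.

```c
#include <stdlib.h>
#include <string.h>
#include <stdbool.h>

/* Instrumentation: counts how many times the buffer is released,
 * so the single-free invariant can be checked. */
static int free_count;

static void kfree_stub(void *p)
{
	free(p);
	free_count++;
}

/* Stand-in for the kernel's page_zero_filled() check. */
static bool page_zero_filled(const char *mem, size_t len)
{
	for (size_t i = 0; i < len; i++)
		if (mem[i] != 0)
			return false;
	return true;
}

/* Hypothetical sketch of the patched flow. Before the fix, the
 * zero-filled branch called kfree_stub(uncmem) itself and then fell
 * through to the cleanup at 'out', freeing uncmem a second time. */
static int write_partial(bool partial, const char *data, size_t len)
{
	char *uncmem = NULL;
	int ret = 0;

	if (partial) {
		/* partial I/O: stage the data in a temporary buffer */
		uncmem = malloc(len);
		if (!uncmem)
			return -1;
		memcpy(uncmem, data, len);
	}

	if (uncmem && page_zero_filled(uncmem, len)) {
		/* patched: no kfree here; fall through to 'out' */
		ret = 0;
		goto out;
	}

	/* ... compression and storage would happen here ... */

out:
	if (partial)
		kfree_stub(uncmem);	/* single point of release */
	return ret;
}
```

The design point the patch restores is single-owner cleanup: every path that allocated `uncmem` releases it in exactly one place, the shared exit label, so no branch can free it a second time.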