Hi Greg,

On Tue, Apr 18, 2017 at 02:50:40PM +0200, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
>
> This is a note to let you know that I've just added the patch titled
>
>     zram: do not use copy_page with non-page aligned address
>
> to the 4.9-stable tree which can be found at:
>     http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
>
> The filename of the patch is:
>     zram-do-not-use-copy_page-with-non-page-aligned-address.patch
> and it can be found in the queue-4.9 subdirectory.
>
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable@xxxxxxxxxxxxxxx> know about it.
>
>
> From d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minchan@xxxxxxxxxx>
> Date: Thu, 13 Apr 2017 14:56:37 -0700
> Subject: zram: do not use copy_page with non-page aligned address
>
> From: Minchan Kim <minchan@xxxxxxxxxx>
>
> commit d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e upstream.
>
> copy_page is a memcpy optimized for page-aligned addresses. If it is
> used with a non-page-aligned address, it can corrupt memory, which
> means system corruption. With zram, this can happen with
>
> 1. a 64K-page architecture
> 2. partial IO
> 3. slub debug
>
> Partial IO needs to allocate a page, and zram allocates it via
> kmalloc. With slub debug, kmalloc(PAGE_SIZE) doesn't return a
> page-size-aligned address. And finally, copy_page(mem, cmem) corrupts
> memory.
>
> So, this patch changes it to memcpy.
>
> Actually, we don't need to change the zram_bvec_write part because
> zsmalloc returns a page-aligned address in the case of the PAGE_SIZE
> class, but it's not good to rely on the internals of zsmalloc.
>
> Note:
> When this patch is merged to stable, clear_page should be fixed, too.
> Unfortunately, recent zram removed it with the "same page merge"
> feature, so it's hard to backport this patch to the -stable tree.
>
> I will handle it when I receive the mail from the stable tree
> maintainer to merge this patch for backport.
For the reason above, I am sending a new version that also covers
clear_page. Please merge the patch below instead of this one.

Thanks.

>From d7a7420fbce12ed2a6247755a64ae55a591a2a57 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@xxxxxxxxxx>
Date: Thu, 13 Apr 2017 14:56:37 -0700
Subject: [PATCH for v4.9] zram: do not use copy_page with non-page aligned
 address

commit d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e upstream

copy_page is a memcpy optimized for page-aligned addresses. If it is
used with a non-page-aligned address, it can corrupt memory, which
means system corruption. With zram, this can happen with

1. a 64K-page architecture
2. partial IO
3. slub debug

Partial IO needs to allocate a page, and zram allocates it via
kmalloc. With slub debug, kmalloc(PAGE_SIZE) doesn't return a
page-size-aligned address. And finally, [copy|clear]_page(mem, cmem)
corrupts memory.

So, this patch changes them to memcpy/memset.

Actually, we don't need to change the zram_bvec_write part because
zsmalloc returns a page-aligned address in the case of the PAGE_SIZE
class, but it's not good to rely on the internals of zsmalloc.

I intentionally didn't change the clear_page in handle_zero_page
because there is clearly no problem there: kmap_atomic guarantees the
address is page-size aligned.
Cc: Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Fixes: 42e99bd ("zram: optimize memory operations with clear_page()/copy_page()")
Link: http://lkml.kernel.org/r/1492042622-12074-2-git-send-email-minchan@xxxxxxxxxx
Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
---
 drivers/block/zram/zram_drv.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index d2ef51ca9cf4..c9914d653968 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -582,13 +582,13 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)

 	if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
 		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-		clear_page(mem);
+		memset(mem, 0, PAGE_SIZE);
 		return 0;
 	}

 	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
 	if (size == PAGE_SIZE) {
-		copy_page(mem, cmem);
+		memcpy(mem, cmem, PAGE_SIZE);
 	} else {
 		struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp);

@@ -780,7 +780,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,

 	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
 		src = kmap_atomic(page);
-		copy_page(cmem, src);
+		memcpy(cmem, src, PAGE_SIZE);
 		kunmap_atomic(src);
 	} else {
 		memcpy(cmem, src, clen);
--
2.7.4