When I tested zram, I found that processes were getting segfaulted. The
reason is that zram_rw_page does not mark the page dirty again when a
swap write fails, and it does not even return an error, due to [1].

If an internal zram error happens, zram_rw_page should return non-zero
without calling page_endio. That makes the upper layer resubmit the I/O
with a bio, so the failure ends up being handled by bio->bi_end_io. The
reason is that zram can be used as a block device for both a filesystem
and swap, and the two use different bio completion callbacks that behave
differently. So, in case of I/O failure, we should rely on the bio
completion handler rather than on zram_bvec_rw itself.

This patch fixes the segfault issue as well as the one mentioned in [1].

[1] zram: make rw_page operation return 0

Cc: Matthew Wilcox <matthew.r.wilcox@xxxxxxxxx>
Cc: Karam Lee <karam.lee@xxxxxxx>
Cc: Dave Chinner <david@xxxxxxxxxxxxx>
Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
---
 drivers/block/zram/zram_drv.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 4b4f4dbc3cfd..0e0650feab2a 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -978,12 +978,10 @@ static int zram_rw_page(struct block_device *bdev, sector_t sector,
 out_unlock:
 	up_read(&zram->init_lock);
 out:
-	page_endio(page, rw, err);
+	if (unlikely(err))
+		return err;
 
-	/*
-	 * Return 0 prevents I/O fallback trial caused by rw_page fail
-	 * and upper layer can handle this IO error via page error.
-	 */
+	page_endio(page, rw, 0);
 	return 0;
 }
 
--
2.0.0
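
As a side note, below is a minimal userspace sketch of the fallback flow
described above, not actual kernel code: fake_page, fake_rw_page,
fake_submit_bio, swap_write_end_io and write_one_page are all
hypothetical stand-ins. The caller tries the rw_page fast path first
and, only if it returns an error, resubmits through the bio path, whose
completion callback decides what happens to the page (e.g. redirtying it
after a failed swap write).

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the bits of struct page state we care about. */
struct fake_page {
	bool dirty;
	bool error;
};

/* Stand-in for bio->bi_end_io: the completion callback owns page state. */
typedef void (*end_io_fn)(struct fake_page *page, int err);

/* Swap-style write completion: redirty the page so the data can be retried. */
static void swap_write_end_io(struct fake_page *page, int err)
{
	if (err) {
		page->dirty = true;
		page->error = true;
	}
}

/* Stand-in for a driver's rw_page hook that may fail internally. */
static int fake_rw_page(struct fake_page *page, bool fail)
{
	(void)page;
	return fail ? -EIO : 0;
}

/* Stand-in for submitting a bio; its end_io callback is always invoked. */
static void fake_submit_bio(struct fake_page *page, end_io_fn end_io, bool fail)
{
	end_io(page, fail ? -EIO : 0);
}

/*
 * Caller-side pattern: try the rw_page fast path, and on error fall back
 * to the bio path so the completion callback handles the page.  This is
 * why rw_page must return the error without calling page_endio() itself.
 */
static void write_one_page(struct fake_page *page, bool rw_page_fails)
{
	page->dirty = false;	/* the page was cleaned before writeback */

	if (fake_rw_page(page, rw_page_fails) == 0)
		return;		/* fast path completed the page */

	fake_submit_bio(page, swap_write_end_io, rw_page_fails);
}

int main(void)
{
	struct fake_page page = { .dirty = true, .error = false };

	write_one_page(&page, true);	/* simulate a failing zram write */
	printf("after failed write: dirty=%d error=%d\n",
	       page.dirty, page.error);
	return 0;
}

Built with any plain C compiler, it prints dirty=1 error=1 after the
simulated failure, mirroring what the swap bio completion path is
expected to do for the real page once rw_page propagates the error.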