Hello Johannes,

On Mon, Mar 06, 2017 at 11:23:35AM +0100, Johannes Thumshirn wrote:
> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When using
> the NVMe over Fabrics loopback target, which potentially sends a large bulk
> of pages attached to the bio's bvec, this results in a kernel panic because
> of out-of-bounds array accesses in zram_decompress_page().

First of all, thanks for the report and the fix!
Unfortunately, I'm not familiar with that part of the block layer interface.

This seems like material for stable, so I want to understand it clearly.
Could you give some more specifics to educate me? In what scenario, and
how, does the problem occur? That will help me understand.

Thanks.

> 
> Signed-off-by: Johannes Thumshirn <jthumshirn@xxxxxxx>
> ---
>  drivers/block/zram/zram_drv.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index e27d89a..dceb5ed 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -1189,6 +1189,8 @@ static int zram_add(void)
>  	blk_queue_io_min(zram->disk->queue, PAGE_SIZE);
>  	blk_queue_io_opt(zram->disk->queue, PAGE_SIZE);
>  	zram->disk->queue->limits.discard_granularity = PAGE_SIZE;
> +	zram->disk->queue->limits.max_sectors = SECTORS_PER_PAGE;
> +	zram->disk->queue->limits.chunk_sectors = 0;
>  	blk_queue_max_discard_sectors(zram->disk->queue, UINT_MAX);
>  	/*
>  	 * zram_bio_discard() will clear all logical blocks if logical block
> -- 
> 1.8.5.6
> 
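For context, a minimal userspace sketch of the page-granularity invariant at
stake in this thread. This is an illustration, not code from the driver or
the patch: the macro names mirror zram's, but PAGE_SHIFT (4K pages) and the
example segment sizes are assumptions made here for the demo.

/*
 * Sketch: map a bio sector onto a per-page table the way a page-granular
 * driver does, then check whether a segment stays inside one page.
 */
#include <stdio.h>

#define SECTOR_SHIFT		9
#define PAGE_SHIFT		12			/* assumed: 4K pages */
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define SECTORS_PER_PAGE_SHIFT	(PAGE_SHIFT - SECTOR_SHIFT)
#define SECTORS_PER_PAGE	(1UL << SECTORS_PER_PAGE_SHIFT)

int main(void)
{
	/* Derive the table index and intra-page offset from a sector. */
	unsigned long sector = 8;
	unsigned long index  = sector >> SECTORS_PER_PAGE_SHIFT;
	unsigned long offset = (sector & (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT;

	/* A segment split at a page boundary by the block layer: fits. */
	unsigned long nr_sectors = SECTORS_PER_PAGE;
	printf("index=%lu offset=%lu len=%lu fits=%d\n", index, offset,
	       nr_sectors << SECTOR_SHIFT,
	       offset + (nr_sectors << SECTOR_SHIFT) <= PAGE_SIZE);

	/*
	 * A huge unsplit segment, as a fabrics loopback target might
	 * submit: offset + length now exceeds PAGE_SIZE, so a per-page
	 * copy/decompress loop that trusts this invariant would run off
	 * the end of its one-page buffer.
	 */
	nr_sectors = 32 * SECTORS_PER_PAGE;
	printf("index=%lu offset=%lu len=%lu fits=%d\n", index, offset,
	       nr_sectors << SECTOR_SHIFT,
	       offset + (nr_sectors << SECTOR_SHIFT) <= PAGE_SIZE);
	return 0;
}

With the patch's max_sectors cap in place, the block layer would split such
an oversized I/O into page-sized pieces before zram sees it, restoring the
invariant the sketch checks.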