Hi Hannes,

On Tue, Mar 7, 2017 at 4:00 PM, Hannes Reinecke <hare@xxxxxxx> wrote:
> On 03/07/2017 06:22 AM, Minchan Kim wrote:
>> Hello Johannes,
>>
>> On Mon, Mar 06, 2017 at 11:23:35AM +0100, Johannes Thumshirn wrote:
>>> zram can handle at most SECTORS_PER_PAGE sectors in a bio's bvec. When
>>> using the NVMe over Fabrics loopback target, which potentially sends a
>>> huge bulk of pages attached to the bio's bvec, this results in a kernel
>>> panic because of array out-of-bounds accesses in zram_decompress_page().
>>
>> First of all, thanks for the report and the fix!
>> Unfortunately, I'm not familiar with that part of the block layer
>> interface.
>>
>> This seems like stable material, so I want to understand it clearly.
>> Could you give me some more specifics to educate me?
>>
>> In what scenario does the problem occur, and how? That would help me
>> understand.

Thanks for the quick response!

> The problem is that zram as it currently stands can only handle bios
> where each bvec contains a single page (or, to be precise, a chunk of
> data with a length of a page).

Right.

> This is not an automatic guarantee from the block layer (which is free to
> send us bios with arbitrary-sized bvecs), so we need to set the queue
> limits to ensure that.

What does "bios with arbitrary-sized bvecs" mean? In what kinds of
scenarios are they used or useful? And how can we solve the problem by
setting queue limits?

Sorry for the many questions; my knowledge here is limited.

Thanks.