On Fri, Jun 10, 2016 at 7:37 PM, Hannes Reinecke <hare@xxxxxxx> wrote:
> On 06/10/2016 01:07 PM, Ming Lei wrote:
>> After arbitrary bio size is supported, the incoming bio may
>> be very big. We have to split the bio into smaller bios so that
>> each holds at most BIO_MAX_PAGES bvecs, for safety reasons such
>> as bio_clone().
>>
>> This patch fixes the following kernel crash:
>>
>>> [ 172.660142] BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
>>> [ 172.660229] IP: [<ffffffff811e53b4>] bio_trim+0xf/0x2a
>>> [ 172.660289] PGD 7faf3e067 PUD 7f9279067 PMD 0
>>> [ 172.660399] Oops: 0000 [#1] SMP
>>> [...]
>>> [ 172.664780] Call Trace:
>>> [ 172.664813] [<ffffffffa007f3be>] ? raid1_make_request+0x2e8/0xad7 [raid1]
>>> [ 172.664846] [<ffffffff811f07da>] ? blk_queue_split+0x377/0x3d4
>>> [ 172.664880] [<ffffffffa005fb5f>] ? md_make_request+0xf6/0x1e9 [md_mod]
>>> [ 172.664912] [<ffffffff811eb860>] ? generic_make_request+0xb5/0x155
>>> [ 172.664947] [<ffffffffa0445c89>] ? prio_io+0x85/0x95 [bcache]
>>> [ 172.664981] [<ffffffffa0448252>] ? register_cache_set+0x355/0x8d0 [bcache]
>>> [ 172.665016] [<ffffffffa04497d3>] ? register_bcache+0x1006/0x1174 [bcache]
>>
>> The issue can be reproduced by the following steps:
>> - create one raid1 over two virtio-blk devices
>> - build a bcache device over the above raid1 and another cache
>>   device, with the bucket size set to 2Mbytes
>> - set the cache mode to writeback
>> - run random writes over ext4 on the bcache device
>>
>> Fixes: 54efd50 ("block: make generic_make_request handle arbitrarily sized bios")
>> Reported-by: Sebastian Roesner <sroesner-kernelorg@xxxxxxxxxxxxxxxxx>
>> Reported-by: Eric Wheeler <bcache@xxxxxxxxxxxxxxxxxx>
>> Cc: stable@xxxxxxxxxxxxxxx (4.3+)
>> Cc: Shaohua Li <shli@xxxxxx>
>> Acked-by: Kent Overstreet <kent.overstreet@xxxxxxxxx>
>> Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxxxxx>
>> ---
>> V2:
>> - don't mark the bio as REQ_NOMERGE when it is split
>>   for reaching the limit on the bvec count
>> V1:
>> - Kent pointed out that using the max I/O size can't cover
>>   the case of non-full bvecs/pages
>>
>>  block/blk-merge.c | 35 ++++++++++++++++++++++++++++++++---
>>  1 file changed, 32 insertions(+), 3 deletions(-)
>>
> Hmm. So everybody is suffering because someone _might_ be using
> bio_clone?

I believe most uses involve <= 256 bvecs per bio, so only a few users
(such as bcache) will 'suffer', not everybody. :-)

> Why can't we fixup bio_clone() (or the callers of which) to correctly
> set the queue limits?

IMO there isn't a good way to fix the issue inside bio_clone().

Firstly, one page can hold at most 256 bvecs, and it isn't safe to
allocate multiple pages in the I/O path.

Secondly, as said in the comment in the patch, it can't be a queue
limit now because bio_clone() is used inside bio bounce. It should be
possible to use bio splitting to handle bio bounce as well, but that
can be a follow-up job, and of course such a change would be a bit
too big for backporting.

That is why I suggest fixing the issue with this patch. Or are there
other ideas?

Thanks,
Ming

>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke                Teamlead Storage & Networking
> hare@xxxxxxx                                  +49 911 74053 688
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
> HRB 21284 (AG Nürnberg)
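
For anyone following the thread: the commit message above describes the
rule the patch adds to the bio splitting path in block/blk-merge.c, i.e.
walk the bio's bvecs and split off a child bio whenever the running
count reaches BIO_MAX_PAGES (256), so that any later bio_clone() never
sees a bio with more bvecs than that. Below is a minimal userspace
sketch of just that accounting; the helper name nr_split_bios() and the
reduction of a bio to a plain bvec count are illustrative only, not
kernel API, and bio_split()/queue plumbing are omitted.

/*
 * Userspace model of the bvec-count splitting rule discussed above.
 * Only the counting arithmetic is real; all kernel types are elided.
 */
#include <stdio.h>

#define BIO_MAX_PAGES 256	/* max bvecs a cloned bio can safely hold */

/* Return how many bios a bio with 'nr_bvecs' segments splits into. */
static unsigned int nr_split_bios(unsigned int nr_bvecs)
{
	unsigned int bios = 0;
	unsigned int bvecs = 0;
	unsigned int i;

	for (i = 0; i < nr_bvecs; i++) {
		if (bvecs++ >= BIO_MAX_PAGES) {	/* would overflow a clone */
			bios++;			/* split off what we have */
			bvecs = 1;		/* current bvec opens the next bio */
		}
	}
	return bios + 1;			/* the remainder is the last bio */
}

int main(void)
{
	/* A 2MB bcache bucket written as 4KB pages is 512 bvecs. */
	printf("512 bvecs -> %u bios\n", nr_split_bios(512));	/* 2 */
	printf("256 bvecs -> %u bios\n", nr_split_bios(256));	/* 1 */
	printf("257 bvecs -> %u bios\n", nr_split_bios(257));	/* 2 */
	return 0;
}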
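The "one page can hold at most 256 bvecs" figure is plain sizing
arithmetic: on a typical 64-bit build a struct bio_vec (a page pointer
plus two 32-bit fields) is 16 bytes, so a 4096-byte page holds exactly
4096 / 16 = 256 entries, which is where BIO_MAX_PAGES comes from. A
standalone check of that sizing follows; the struct below mirrors the
kernel's layout rather than including the kernel header, so treat it as
the sizing argument only.

/* Sizing check for the "256 bvecs per page" claim above. */
#include <stdio.h>

struct page;				/* opaque, as in the kernel */

struct bio_vec {
	struct page *bv_page;		/* 8 bytes on 64-bit */
	unsigned int bv_len;		/* 4 bytes */
	unsigned int bv_offset;		/* 4 bytes */
};

#define PAGE_SIZE 4096UL

int main(void)
{
	printf("sizeof(struct bio_vec): %zu bytes\n",
	       sizeof(struct bio_vec));
	printf("bvecs per page: %lu\n",
	       (unsigned long)(PAGE_SIZE / sizeof(struct bio_vec)));
	/* Prints 16 and 256 on a typical 64-bit build. */
	return 0;
}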