On 18/04/19 11:29, Ming Lei wrote:
> On Thu, Apr 18, 2019 at 10:42:21AM +0200, Paolo Bonzini wrote:
>> On 18/04/19 04:19, Ming Lei wrote:
>>> Hi Paolo,
>>>
>>> On Wed, Apr 17, 2019 at 01:52:07PM +0200, Paolo Bonzini wrote:
>>>> Because bio_kmalloc uses inline iovecs, the limit on the number of entries
>>>> is not BIO_MAX_PAGES but rather UIO_MAXIOV, which indeed is already checked
>>>> in bio_kmalloc.  This could cause SG_IO requests to be truncated and the HBA
>>>> to report a DMA overrun.
>>>
>>> BIO_MAX_PAGES only limits a single bio's maximum vector count; if one bio
>>> can't hold the whole user-space request, a new bio is allocated and appended
>>> to the passthrough request as long as the queue limits aren't reached.
>>
>> Stupid question: where?  I don't see any place starting at
>> blk_rq_map_user_iov (and then __blk_rq_map_user_iov->bio_map_user_iov)
>> that would allocate a second bio.  The only bio_kmalloc in that path is
>> the one I'm patching.
>
> Each bio is created inside __blk_rq_map_user_iov(), which is run inside
> a loop, and the created bio is added to the request via blk_rq_append_bio();
> see the following code:

Uff, I can't read apparently. :(  This is the commit that introduced it:

commit 4d6af73d9e43f78651a43ee4c5ad221107ac8365
Author: Christoph Hellwig <hch@xxxxxx>
Date:   Wed Mar 2 18:07:14 2016 +0100

    block: support large requests in blk_rq_map_user_iov

    This patch adds support for larger requests in blk_rq_map_user_iov
    by allowing it to build multiple bios for a request.  This functionality
    used to exist for the non-vectored blk_rq_map_user in the past, and
    this patch reuses the existing functionality for it on the unmap side,
    which stuck around.  Thanks to the iov_iter API, supporting multiple
    bios is fairly trivial, as we can just iterate the iov until we've
    consumed the whole iov_iter.
    Signed-off-by: Christoph Hellwig <hch@xxxxxx>
    Reported-by: Jeff Lien <Jeff.Lien@xxxxxxxx>
    Tested-by: Jeff Lien <Jeff.Lien@xxxxxxxx>
    Reviewed-by: Keith Busch <keith.busch@xxxxxxxxx>
    Signed-off-by: Jens Axboe <axboe@xxxxxx>

Paolo