Because bio_kmalloc uses inline iovecs, the limit on the number of
entries is not BIO_MAX_PAGES but rather UIO_MAXIOV, which bio_kmalloc
already checks against.  Since iov_iter_npages never returns more than
its second argument, capping it at BIO_MAX_PAGES could cause SG_IO
requests to be silently truncated and the HBA to report a DMA overrun.

Note that if the argument to iov_iter_npages were changed to
UIO_MAXIOV, we would still truncate SG_IO requests beyond UIO_MAXIOV
pages.  Changing it to UIO_MAXIOV + 1 instead ensures that bio_kmalloc
notices that the request is too big and blocks it.

Cc: stable@xxxxxxxxxxxxxxx
Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Fixes: b282cc766958 ("bio_map_user_iov(): get rid of the iov_for_each()", 2017-10-11)
Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
---
 block/bio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 4db1008309ed..cc1195f5af7a 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1299,7 +1299,7 @@ struct bio *bio_map_user_iov(struct request_queue *q,
 	if (!iov_iter_count(iter))
 		return ERR_PTR(-EINVAL);
 
-	bio = bio_kmalloc(gfp_mask, iov_iter_npages(iter, BIO_MAX_PAGES));
+	bio = bio_kmalloc(gfp_mask, iov_iter_npages(iter, UIO_MAXIOV + 1));
 	if (!bio)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.21.0
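
To see the off-by-one in isolation, here is a minimal userspace sketch
of the reasoning above; npages_capped() and alloc_checked() are
hypothetical stand-ins for the clamping done by iov_iter_npages() and
the UIO_MAXIOV check in bio_kmalloc(), not the real kernel functions:

/*
 * Illustrative userspace model of the off-by-one; not kernel code.
 */
#include <stdio.h>

#define UIO_MAXIOV 1024

/* Like iov_iter_npages(): the result is clamped to maxpages. */
static int npages_capped(int actual_pages, int maxpages)
{
	return actual_pages < maxpages ? actual_pages : maxpages;
}

/* Like bio_kmalloc()'s size check: fail for counts above UIO_MAXIOV. */
static int alloc_checked(int nr_iovecs)
{
	return nr_iovecs <= UIO_MAXIOV;	/* 1 = success, 0 = failure */
}

int main(void)
{
	int oversized = UIO_MAXIOV + 100;	/* request above the limit */

	/* Cap at UIO_MAXIOV: the count is silently clamped to exactly
	 * the limit, so the allocation succeeds and the truncation
	 * goes unnoticed. */
	printf("cap=UIO_MAXIOV:   alloc %s\n",
	       alloc_checked(npages_capped(oversized, UIO_MAXIOV))
	       ? "succeeds (truncated!)" : "rejected");

	/* Cap at UIO_MAXIOV + 1: one extra page survives the clamp,
	 * so the allocator sees the request is too big and rejects it. */
	printf("cap=UIO_MAXIOV+1: alloc %s\n",
	       alloc_checked(npages_capped(oversized, UIO_MAXIOV + 1))
	       ? "succeeds (truncated!)" : "rejected");
	return 0;
}

Run as a normal C program, the first call succeeds after silently
dropping the excess pages, while the UIO_MAXIOV + 1 cap leaves the
count above the limit so the allocation is refused, which is the
behavior the one-line change above relies on.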