4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Al Viro <viro@xxxxxxxxxxxxxxxxxx>

commit 1cfd0ddd82232804e03f3023f6a58b50dfef0574 upstream.

Since "block: support large requests in blk_rq_map_user_iov" we started
to call it with a partially drained iter; that works fine on the write
side, but reads create a copy of the iter for completion time.  And that
copy needs to take the possibility of ->iov_offset != 0 into account...

Signed-off-by: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 block/bio.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/block/bio.c
+++ b/block/bio.c
@@ -1171,8 +1171,8 @@ struct bio *bio_copy_user_iov(struct req
 	 */
 	bmd->is_our_pages = map_data ? 0 : 1;
 	memcpy(bmd->iov, iter->iov, sizeof(struct iovec) * iter->nr_segs);
-	iov_iter_init(&bmd->iter, iter->type, bmd->iov,
-			iter->nr_segs, iter->count);
+	bmd->iter = *iter;
+	bmd->iter.iov = bmd->iov;
 
 	ret = -ENOMEM;
 	bio = bio_kmalloc(gfp_mask, nr_pages);
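
For readers who want to see why the one-line struct copy fixes the read-side
completion path, here is a minimal sketch.  It is a standalone userspace
illustration, not kernel code: struct iter_s, iter_init() and the field layout
below are simplified stand-ins for the 4.9-era struct iov_iter and
iov_iter_init().  It shows that re-initializing the completion-time copy from
(type, iov, nr_segs, count) silently resets iov_offset to zero, while copying
the whole iterator and only repointing ->iov preserves the partially drained
state.

/*
 * Minimal userspace illustration (not kernel code).  The field names mirror
 * the 4.9-era struct iov_iter, but the types and the init helper are
 * simplified stand-ins, not the kernel's real API.
 */
#include <stdio.h>
#include <stddef.h>

struct iovec_s { void *base; size_t len; };

struct iter_s {
	int type;
	size_t iov_offset;		/* bytes already consumed in iov[0] */
	size_t count;			/* bytes remaining */
	const struct iovec_s *iov;
	unsigned long nr_segs;
};

/* Analogue of iov_iter_init(): starts at offset 0 by construction. */
static void iter_init(struct iter_s *i, int type, const struct iovec_s *iov,
		      unsigned long nr_segs, size_t count)
{
	i->type = type;
	i->iov_offset = 0;		/* the information that got lost */
	i->count = count;
	i->iov = iov;
	i->nr_segs = nr_segs;
}

int main(void)
{
	char buf[4096];
	struct iovec_s iov = { buf, sizeof(buf) };

	/* A partially drained iter, as blk_rq_map_user_iov() now passes in. */
	struct iter_s src = { 0, 512, sizeof(buf) - 512, &iov, 1 };
	struct iter_s old_copy, new_copy;

	/* Old behaviour: re-init from (type, iov, nr_segs, count). */
	iter_init(&old_copy, src.type, src.iov, src.nr_segs, src.count);

	/* New behaviour: copy the whole iterator, then repoint ->iov. */
	new_copy = src;
	new_copy.iov = &iov;

	printf("old copy offset: %zu (offset lost)\n", old_copy.iov_offset);
	printf("new copy offset: %zu (offset preserved)\n", new_copy.iov_offset);
	return 0;
}

In the real function the struct assignment copies every field of the iterator,
including iov_offset and count, and only the iov pointer is then redirected to
bmd's private copy of the iovec array, so the completion-time copy-back starts
at the same position the caller had already advanced to.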