+
+ if (!vec)
+ ret = blk_rq_map_user(q, req, NULL, ubuffer, bufflen,
+ GFP_KERNEL);
+ else {
+ struct iovec fast_iov[UIO_FASTIOV];
+ struct iovec *iov = fast_iov;
+ struct iov_iter iter;
+
+ ret = import_iovec(rq_data_dir(req), ubuffer, bufflen,
+ UIO_FASTIOV, &iov, &iter);
+ if (ret < 0)
+ goto out;
+ ret = blk_rq_map_user_iov(q, req, NULL, &iter, GFP_KERNEL);
+ kfree(iov);
To me some of this almost screams for lifting the vectored vs
non-vectored handling into a separate helper in the block layer.
I skipped doing that here; a cleanup is most effective once the whole
elephant is visible, and only a part of it shows up in this patch. The
last patch (nvme fixedbufs support) also changes this region.
I can post a cleanup once all these moving pieces settle.
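For reference, the shape I have in mind for that cleanup is roughly the
below (untested sketch; the helper name and signature are placeholders,
not an existing block layer API):

        /*
         * Hypothetical helper: hide the vectored vs non-vectored
         * distinction from drivers mapping a user buffer onto a
         * passthrough request. For the vectored case, ubuffer points
         * to the user iovec array and bufflen is the iovec count.
         */
        static int blk_rq_map_user_any(struct request_queue *q,
                        struct request *req, void __user *ubuffer,
                        unsigned long bufflen, gfp_t gfp, bool vec)
        {
                struct iovec fast_iov[UIO_FASTIOV];
                struct iovec *iov = fast_iov;
                struct iov_iter iter;
                int ret;

                /* Flat buffer: map it directly. */
                if (!vec)
                        return blk_rq_map_user(q, req, NULL, ubuffer,
                                        bufflen, gfp);

                /* Vectored: import the user iovecs, then map the iterator. */
                ret = import_iovec(rq_data_dir(req), ubuffer, bufflen,
                                UIO_FASTIOV, &iov, &iter);
                if (ret < 0)
                        return ret;
                ret = blk_rq_map_user_iov(q, req, NULL, &iter, gfp);
                kfree(iov);
                return ret;
        }

With that, the nvme side collapses to a single call plus error handling.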
+ }
+ bio = req->bio;
+ if (ret)
+ goto out_unmap;
This seems incorrect; we don't need to unmap if blk_rq_map_user*
failed.
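Right, will fix that in the next version so a mapping failure bails out
without touching the unmap path, roughly along these lines (sketch of
the intended flow, not final code):

                ...
                ret = blk_rq_map_user_iov(q, req, NULL, &iter, GFP_KERNEL);
                kfree(iov);
        }
        if (ret)
                goto out;       /* map failed: nothing to unmap */
        bio = req->bio;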
+ if (bdev)
+ bio_set_dev(bio, bdev);
I think we can actually drop this now - bi_bdev should only be used
by the non-passthrough path these days.
Not sure if I am missing something, but this seemed necessary;
bi_bdev was NULL otherwise.
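To spell out what I hit: without the bio_set_dev() above, anything that
later tries to reach the device/queue through this passthrough bio sees
a NULL bi_bdev, i.e. a consumer of this general shape has nothing to
work with (illustrative pattern only, not a quote from this series):

        struct bio *bio = req->bio;

        /* bi_bdev stays NULL for this bio unless bio_set_dev() ran. */
        if (bio && bio->bi_bdev)
                q = bdev_get_queue(bio->bi_bdev);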
Did all other changes.