On 11/12/2020 22:24, Chaitanya Kulkarni wrote:
> Johannes/Christoph,
>
> On 12/10/20 23:30, Johannes Thumshirn wrote:
>> On 09/12/2020 14:41, Johannes Thumshirn wrote:
>>> On 09/12/2020 11:18, Johannes Thumshirn wrote:
>>>> On 09/12/2020 11:10, hch@xxxxxxxxxxxxx wrote:
>>>>> On Wed, Dec 09, 2020 at 10:08:53AM +0000, Johannes Thumshirn wrote:
>>>>>> On 09/12/2020 10:34, Christoph Hellwig wrote:
>>>>>>> Btw, another thing I noticed:
>>>>>>>
>>>>>>> when using io_uring to submit a write to btrfs that ends up using Zone
>>>>>>> Append we'll hit the
>>>>>>>
>>>>>>> 	if (WARN_ON_ONCE(is_bvec))
>>>>>>> 		return -EINVAL;
>>>>>>>
>>>>>>> case in bio_iov_iter_get_pages with the changes in this series.
>>>>>> Yes this warning is totally bogus. It was in there from the beginning of
>>>>>> the zone-append series and I have no idea why I didn't kill it.
>>>>>>
>>>>>> IIRC Chaitanya had a patch in his nvmet zoned series removing it.
>
> Even though that patch is not needed, I've tested building bios from bvecs
> with bio_iov_iter_get_pages() through the NVMeOF backend. I can still send
> that patch, please confirm.

I have the following patch locally, but my tests never end up calling it:

commit ea93c91a70204a23ebf9e22b19fbf8add644e12e
Author: Johannes Thumshirn <johannes.thumshirn@xxxxxxx>
Date:   Wed Dec 9 23:26:38 2020 +0900

    block: provide __bio_iov_bvec_add_append_pages

    Signed-off-by: Johannes Thumshirn <johannes.thumshirn@xxxxxxx>

diff --git a/block/bio.c b/block/bio.c
index 5c6982902330..dc8178ca796f 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -992,6 +992,31 @@ void bio_release_pages(struct bio *bio, bool mark_dirty)
 }
 EXPORT_SYMBOL_GPL(bio_release_pages);
 
+static int __bio_iov_bvec_add_append_pages(struct bio *bio,
+					   struct iov_iter *iter)
+{
+	const struct bio_vec *bv = iter->bvec;
+	struct request_queue *q = bio->bi_disk->queue;
+	unsigned int max_append_sectors = queue_max_zone_append_sectors(q);
+	unsigned int len;
+	size_t size;
+
+	if (WARN_ON_ONCE(!max_append_sectors))
+		return -EINVAL;
+
+	if (WARN_ON_ONCE(iter->iov_offset > bv->bv_len))
+		return -EINVAL;
+
+	len = min_t(size_t, bv->bv_len - iter->iov_offset, iter->count);
+	size = bio_add_hw_page(q, bio, bv->bv_page, len,
+			       bv->bv_offset + iter->iov_offset,
+			       max_append_sectors, false);
+	if (unlikely(size != len))
+		return -EINVAL;
+	iov_iter_advance(iter, size);
+	return 0;
+}
+
 static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
 {
 	const struct bio_vec *bv = iter->bvec;
@@ -1142,9 +1167,10 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 
 	do {
 		if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
-			if (WARN_ON_ONCE(is_bvec))
-				return -EINVAL;
-			ret = __bio_iov_append_get_pages(bio, iter);
+			if (is_bvec)
+				ret = __bio_iov_bvec_add_append_pages(bio, iter);
+			else
+				ret = __bio_iov_append_get_pages(bio, iter);
 		} else {
 			if (is_bvec)
 				ret = __bio_iov_bvec_add_pages(bio, iter);

It's basically __bio_iov_bvec_add_pages(), but respecting the
max_zone_append_sectors queue limit.
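
For completeness, here is a rough, untested caller-side sketch of how a
bvec-backed iterator would reach the new path. The helper and its arguments
are made up for illustration; only the block-layer and iov_iter calls are
real, and it assumes a 5.10-era kernel where struct bio still carries
bi_disk:

/* Illustrative only, not part of the patch above. */
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/uio.h>

static int zone_append_from_bvec(struct block_device *bdev,
				 struct bio_vec *bvec, unsigned int nr_segs,
				 size_t count, sector_t zone_start_sector)
{
	struct iov_iter iter;
	struct bio *bio;
	int ret;

	/* Wrap a pre-built bvec array in an iov_iter, so is_bvec is true. */
	iov_iter_bvec(&iter, WRITE, bvec, nr_segs, count);

	bio = bio_alloc(GFP_KERNEL, nr_segs);
	if (!bio)
		return -ENOMEM;
	bio_set_dev(bio, bdev);
	/* Zone append is issued against the start sector of the zone. */
	bio->bi_iter.bi_sector = zone_start_sector;
	bio->bi_opf = REQ_OP_ZONE_APPEND;

	/*
	 * With the patch, the bvec case no longer trips the WARN_ON_ONCE()
	 * and is mapped via __bio_iov_bvec_add_append_pages() instead.
	 */
	ret = bio_iov_iter_get_pages(bio, &iter);
	if (ret)
		goto out_put;

	ret = submit_bio_wait(bio);
	/* On success the device reports the actual written location here. */
	pr_debug("appended at sector %llu\n",
		 (unsigned long long)bio->bi_iter.bi_sector);
out_put:
	bio_put(bio);
	return ret;
}

(Real code would loop, since bio_iov_iter_get_pages() may leave part of the
iterator unconsumed once the max_zone_append_sectors limit is reached.)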