On 8/9/19 6:01 AM, Damien Le Moal wrote:
> On 2019/08/07 4:53, Jens Axboe wrote:
>>> Hi Damien,
>>>
>>> Had you verified this patch with blktests and KASAN enabled? I think the
>>> above patch introduced the following KASAN complaint:
>>
>> I posted this in another thread, can you try?
>>
>>
>> diff --git a/fs/block_dev.c b/fs/block_dev.c
>> index a6f7c892cb4a..131e2e0582a6 100644
>> --- a/fs/block_dev.c
>> +++ b/fs/block_dev.c
>> @@ -349,7 +349,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>  	loff_t pos = iocb->ki_pos;
>>  	blk_qc_t qc = BLK_QC_T_NONE;
>>  	gfp_t gfp;
>> -	ssize_t ret;
>> +	int ret;
>>
>>  	if ((pos | iov_iter_alignment(iter)) &
>>  	    (bdev_logical_block_size(bdev) - 1))
>> @@ -386,8 +386,6 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>
>>  	ret = 0;
>>  	for (;;) {
>> -		int err;
>> -
>>  		bio_set_dev(bio, bdev);
>>  		bio->bi_iter.bi_sector = pos >> 9;
>>  		bio->bi_write_hint = iocb->ki_hint;
>> @@ -395,10 +393,8 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>  		bio->bi_end_io = blkdev_bio_end_io;
>>  		bio->bi_ioprio = iocb->ki_ioprio;
>>
>> -		err = bio_iov_iter_get_pages(bio, iter);
>> -		if (unlikely(err)) {
>> -			if (!ret)
>> -				ret = err;
>> +		ret = bio_iov_iter_get_pages(bio, iter);
>> +		if (unlikely(ret)) {
>>  			bio->bi_status = BLK_STS_IOERR;
>>  			bio_endio(bio);
>>  			break;
>> @@ -421,7 +417,6 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>  		if (nowait)
>>  			bio->bi_opf |= (REQ_NOWAIT | REQ_NOWAIT_INLINE);
>>
>> -		dio->size += bio->bi_iter.bi_size;
>>  		pos += bio->bi_iter.bi_size;
>>
>>  		nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
>> @@ -433,13 +428,13 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>  				polled = true;
>>  			}
>>
>> +			dio->size += bio->bi_iter.bi_size;
>>  			qc = submit_bio(bio);
>>  			if (qc == BLK_QC_T_EAGAIN) {
>> -				if (!ret)
>> -					ret = -EAGAIN;
>> +				dio->size -= bio->bi_iter.bi_size;
>> +				ret = -EAGAIN;
>>  				goto error;
>>  			}
>> -			ret = dio->size;
>>
>>  			if (polled)
>>  				WRITE_ONCE(iocb->ki_cookie, qc);
>> @@ -460,18 +455,17 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>  			atomic_inc(&dio->ref);
>>  		}
>>
>> +		dio->size += bio->bi_iter.bi_size;
>>  		qc = submit_bio(bio);
>>  		if (qc == BLK_QC_T_EAGAIN) {
>> -			if (!ret)
>> -				ret = -EAGAIN;
>> +			dio->size -= bio->bi_iter.bi_size;
>> +			ret = -EAGAIN;
>>  			goto error;
>>  		}
>> -		ret = dio->size;
>>
>>  		bio = bio_alloc(gfp, nr_pages);
>>  		if (!bio) {
>> -			if (!ret)
>> -				ret = -EAGAIN;
>> +			ret = -EAGAIN;
>>  			goto error;
>>  		}
>>  	}
>> @@ -496,6 +490,8 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>  out:
>>  	if (!ret)
>>  		ret = blk_status_to_errno(dio->bio.bi_status);
>> +	if (likely(!ret))
>> +		ret = dio->size;
>>
>>  	bio_put(&dio->bio);
>>  	return ret;
>>
>
> Jens,
>
> I tested a slightly modified version of your patch. I think it is 100%
> equivalent, but a little cleaner in my opinion.

[snip]

I already queued up the other one days ago, don't think there's much
difference between them. In any case, I've read your full email and I'll
take a look at the NOWAIT + sync case. I'm currently out though, so it
won't be until Monday. Because of that, I'll ship what I have today as
it fixes the most common case, then go over the sync+nowait on Monday to
see what's up there.

-- 
Jens Axboe