Re: Block device direct read EIO handling broken?

On 2019/08/06 5:31, Jens Axboe wrote:
> On 8/5/19 11:31 AM, Jens Axboe wrote:
>> On 8/5/19 11:15 AM, Darrick J. Wong wrote:
>>> Hi Damien,
>>>
>>> I noticed a regression in xfs/747 (an unreleased xfstest for the
>>> xfs_scrub media scanning feature) on 5.3-rc3.  I'll condense that down
>>> to a simpler reproducer:
>>>
>>> # dmsetup table
>>> error-test: 0 209 linear 8:48 0
>>> error-test: 209 1 error
>>> error-test: 210 6446894 linear 8:48 210
>>>
>>> Basically we have a ~3G /dev/sdd, and we set up device mapper to fail I/O
>>> for sector 209 and to pass the I/O through to the SCSI device everywhere else.
>>>
>>> On 5.3-rc3, performing a direct I/O pread of this range with a < 1M buffer
>>> (in other words, a request for fewer than BIO_MAX_PAGES pages' worth of
>>> data) yields EIO like you'd expect:
>>>
>>> # strace -e pread64 xfs_io -d -c 'pread -b 1024k 0k 1120k' /dev/mapper/error-test
>>> pread64(3, 0x7f880e1c7000, 1048576, 0)  = -1 EIO (Input/output error)
>>> pread: Input/output error
>>> +++ exited with 0 +++
>>>
>>> But doing it with a larger buffer succeeds(!):
>>>
>>> # strace -e pread64 xfs_io -d -c 'pread -b 2048k 0k 1120k' /dev/mapper/error-test
>>> pread64(3, "XFSB\0\0\20\0\0\0\0\0\0\fL\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1146880, 0) = 1146880
>>> read 1146880/1146880 bytes at offset 0
>>> 1 MiB, 1 ops; 0.0009 sec (1.124 GiB/sec and 1052.6316 ops/sec)
>>> +++ exited with 0 +++
>>>
>>> (Note that the part of the buffer corresponding to the dm-error area is
>>> uninitialized)
>>>
>>> On 5.3-rc2, both commands would fail with EIO like you'd expect.  The
>>> only change between rc2 and rc3 is commit 0eb6ddfb865c ("block: Fix
>>> __blkdev_direct_IO() for bio fragments").
>>>
>>> AFAICT we end up in __blkdev_direct_IO with a 1120K buffer, which gets
>>> split into two bios: one for the first BIO_MAX_PAGES worth of data (1MB)
>>> and a second one for the 96k after that.
>>>
>>> I think the problem is that every time we submit a bio, we increase ret
>>> by the size of that bio, but at the time we do that we have no idea if
>>> the bio is going to succeed or not.  At the end of the function we do:
>>>
>>> 	if (!ret)
>>> 		ret = blk_status_to_errno(dio->bio.bi_status);
>>>
>>> Which means that we only pick up the IO error if we haven't already set
>>> ret.  I suppose that was useful for being able to return a short read,
>>> but now that we always increment ret by the size of the bio, we act like
>>> the whole buffer was read.  I tried a -rc2 kernel and found that 40% of
>>> the time I'd get an EIO and the rest of the time I got a short read.
>>>
>>> Not sure where to go from here, but something's not right...
>>
>> I'll take a look.
> 
> How about this? The old code did:
> 
> 	if (!ret)
> 		ret = blk_status_to_errno(dio->bio.bi_status);
> 	if (likely(!ret))
> 		ret = dio->size;
> 
> where 'ret' was just tracking the error. With 'ret' now being the
> positive IO size, we should overwrite it if ret is >= 0, not just if
> it's zero.
> 
> Also kill a use-after-free.
> 
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index a6f7c892cb4a..67c8e87c9481 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -386,6 +386,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>  
>  	ret = 0;
>  	for (;;) {
> +		ssize_t this_size;
>  		int err;
>  
>  		bio_set_dev(bio, bdev);
> @@ -433,13 +434,14 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>  				polled = true;
>  			}
>  
> +			this_size = bio->bi_iter.bi_size;
>  			qc = submit_bio(bio);
>  			if (qc == BLK_QC_T_EAGAIN) {
>  				if (!ret)
>  					ret = -EAGAIN;
>  				goto error;
>  			}
> -			ret = dio->size;
> +			ret += this_size;
>  
>  			if (polled)
>  				WRITE_ONCE(iocb->ki_cookie, qc);
> @@ -460,13 +462,14 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>  			atomic_inc(&dio->ref);
>  		}
>  
> +		this_size = bio->bi_iter.bi_size;
>  		qc = submit_bio(bio);
>  		if (qc == BLK_QC_T_EAGAIN) {
>  			if (!ret)
>  				ret = -EAGAIN;
>  			goto error;
>  		}
> -		ret = dio->size;
> +		ret += this_size;
>  
>  		bio = bio_alloc(gfp, nr_pages);
>  		if (!bio) {
> @@ -494,7 +497,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>  	__set_current_state(TASK_RUNNING);
>  
>  out:
> -	if (!ret)
> +	if (ret >= 0)
>  		ret = blk_status_to_errno(dio->bio.bi_status);
>  
>  	bio_put(&dio->bio);
> 

Jens,

I would set "this_size" where dio->size is being incremented, though, to avoid
repeating the assignment at both submission sites:

		if (nowait)
			bio->bi_opf |= (REQ_NOWAIT | REQ_NOWAIT_INLINE);

-		dio->size += bio->bi_iter.bi_size;
+		this_size = bio->bi_iter.bi_size;
+		dio->size += this_size;
		pos += bio->bi_iter.bi_size;

In any case, looking at this code again, it looks like there is a problem with
dio->size being incremented early, even for fragments that get BLK_QC_T_EAGAIN,
because dio->size is used in blkdev_bio_end_io(). So an incorrect size can be
reported to user space on completion in that case (e.g. a large asynchronous
no-wait dio that cannot be issued in one go).
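
For reference, the completion path I mean looks roughly like this (a
paraphrased sketch of blkdev_bio_end_io() in fs/block_dev.c, from memory,
not verbatim):

	/*
	 * On the final bio completion, dio->size is what gets handed to
	 * ->ki_complete() for the async case, so any fragment counted into
	 * dio->size before we know it was actually issued inflates the
	 * size reported to the caller.
	 */
	if (!dio->multi_bio || atomic_dec_and_test(&dio->ref)) {
		if (!dio->is_sync) {
			struct kiocb *iocb = dio->iocb;
			ssize_t ret;

			if (likely(!dio->bio.bi_status)) {
				ret = dio->size;
				iocb->ki_pos += ret;
			} else {
				ret = blk_status_to_errno(dio->bio.bi_status);
			}

			dio->iocb->ki_complete(iocb, ret, 0);
		}
	}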

So maybe something like this? (completely untested)

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 75cc7f424b3a..77714e03c21e 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -349,7 +349,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
        loff_t pos = iocb->ki_pos;
        blk_qc_t qc = BLK_QC_T_NONE;
        gfp_t gfp;
-       ssize_t ret;
+       ssize_t ret = 0;

        if ((pos | iov_iter_alignment(iter)) &
            (bdev_logical_block_size(bdev) - 1))
@@ -386,6 +386,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)

        ret = 0;
        for (;;) {
+               size_t this_size;
                int err;

                bio_set_dev(bio, bdev);
@@ -421,7 +422,7 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
                if (nowait)
                        bio->bi_opf |= (REQ_NOWAIT | REQ_NOWAIT_INLINE);

-               dio->size += bio->bi_iter.bi_size;
+               this_size = bio->bi_iter.bi_size;
                pos += bio->bi_iter.bi_size;

                nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
@@ -435,11 +436,11 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)

                        qc = submit_bio(bio);
                        if (qc == BLK_QC_T_EAGAIN) {
-                               if (!ret)
+                               if (!dio->size)
                                        ret = -EAGAIN;
                                goto error;
                        }
-                       ret = dio->size;
+                       dio->size += this_size;

                        if (polled)
                                WRITE_ONCE(iocb->ki_cookie, qc);
@@ -462,15 +463,15 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)

                qc = submit_bio(bio);
                if (qc == BLK_QC_T_EAGAIN) {
-                       if (!ret)
+                       if (!dio->size)
                                ret = -EAGAIN;
                        goto error;
                }
-               ret = dio->size;
+               dio->size += this_size;

                bio = bio_alloc(gfp, nr_pages);
                if (!bio) {
-                       if (!ret)
+                       if (!dio->size)
                                ret = -EAGAIN;
                        goto error;
                }
@@ -496,10 +497,15 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 out:
        if (!ret)
                ret = blk_status_to_errno(dio->bio.bi_status);
+       if (likely(!ret))
+               ret = dio->size;

        bio_put(&dio->bio);
        return ret;


-- 
Damien Le Moal
Western Digital Research



