On 2019/5/14 5:19 PM, Coly Li wrote:
> On 2019/5/14 4:55 PM, Thorsten Knabe wrote:
>> On 5/13/19 5:36 PM, Coly Li wrote:
>>> On 2019/5/9 3:43 AM, Coly Li wrote:
>>>> On 2019/5/8 11:58 PM, Thorsten Knabe wrote:
>>> [snipped]
>>>
>>>>> Hi Coly.
>>>>>
>>>>>> I cannot do this, because this is real I/O issued to the backing
>>>>>> device; if it fails, it means something is really wrong on the
>>>>>> backing device.
>>>>>
>>>>> I have not found a definitive answer or documentation on what the
>>>>> REQ_RAHEAD flag is actually used for. However, in my understanding,
>>>>> after reading a lot of kernel source, it is used as an indication
>>>>> that the bio read request is unimportant for proper operation and
>>>>> may be failed by the block device driver returning BLK_STS_IOERR,
>>>>> if serving it is too expensive or requires too many additional
>>>>> resources.
>>>>>
>>>>> At least the BTRFS and DRBD code do not take bio I/O errors that
>>>>> are marked with the REQ_RAHEAD flag into account in their error
>>>>> counters. Thus it is probably okay if such I/O errors with the
>>>>> REQ_RAHEAD flag set are not counted as errors by bcache either.
>>>>>
>>>>>> Hmm, if raid6 returned a different error code in bio->bi_status,
>>>>>> then we could identify this as a failure caused by raid
>>>>>> degradation, not a real hardware or link failure. But I am not
>>>>>> familiar with the raid456 code and have no idea how to change the
>>>>>> md raid code (I assume you meant md raid6)...
>>>>>
>>>>> If my assumptions above regarding the REQ_RAHEAD flag are correct,
>>>>> then the RAID code is correct, because restoring data from the
>>>>> parity information is a relatively expensive operation for
>>>>> read-ahead data that is possibly never actually needed.
>>>>
>>>> Hi Thorsten,
>>>>
>>>> Thank you for the informative hint. I agree with your idea; it
>>>> seems that ignoring I/O errors of REQ_RAHEAD bios does not hurt.
>>>> Let me think about how to fix it along the lines of your
>>>> suggestion.
>>>
>>> Hi Thorsten,
>>>
>>> Could you please test the attached patch?
>>> Thanks in advance.
>>
>> Hi Coly.
>>
>> I applied your patch to 3 systems running Linux 5.1.1 yesterday
>> evening; on one of them I removed a disk from the RAID6 array.
>>
>> The patch works as expected. The system with the removed disk has
>> logged more than 1300 of the messages added by your patch. Most of
>> them were logged shortly after boot-up, with a few shorter bursts
>> evenly spread over the runtime of the system.
>>
>> It would probably be a good idea to apply some sort of rate limit to
>> the log message. I could imagine that a different file system or I/O
>> pattern would cause a lot more of these messages.
>
> Hi Thorsten,
>
> Nice suggestion, I will add a ratelimit to the pr_XXX routines in
> another patch and post it out for your testing.

Could you please test the attached v2 patch? Thanks in advance.

-- 
Coly Li
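In case it helps to make the REQ_RAHEAD contract discussed above
concrete, here is a minimal sketch of how a block driver could cheaply
refuse an expensive read-ahead bio, along the lines Thorsten describes
for md raid. It is not taken from the raid456 code; the helper name and
the degraded parameter are made up for illustration, while REQ_RAHEAD,
BLK_STS_IOERR and bio_endio() are real kernel interfaces.

	#include <linux/bio.h>
	#include <linux/blk_types.h>

	/*
	 * Hypothetical helper: fail a read-ahead bio instead of
	 * serving it, when serving it would require an expensive
	 * operation such as a parity rebuild on a degraded array.
	 */
	static bool maybe_fail_readahead(struct bio *bio, bool degraded)
	{
		if (degraded && (bio->bi_opf & REQ_RAHEAD)) {
			/*
			 * Read-ahead is opportunistic: failing it only
			 * costs a future cache miss, so complete the
			 * bio with an error rather than reconstructing
			 * the data from parity.
			 */
			bio->bi_status = BLK_STS_IOERR;
			bio_endio(bio);
			return true;
		}
		return false;	/* normal I/O must still be served */
	}

The point is that a failed REQ_RAHEAD bio only costs a future cache
miss, whereas a failed normal read is a hard error that the submitter
has to handle, which is why the v2 patch below treats the two cases
differently.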
From 31dc685d78b6f77ddd3d4ffa97478431a6602ed9 Mon Sep 17 00:00:00 2001
From: Coly Li <colyli@xxxxxxx>
Date: Mon, 13 May 2019 22:48:09 +0800
Subject: [PATCH v2] bcache: ignore read-ahead request failure on backing
 device

When an md raid device (e.g. raid456) is used as a backing device,
read-ahead requests on a degrading and recovering md raid device might
be failed immediately by the md raid code, although the md raid array
can still be read or written for normal I/O requests. Therefore such
failed read-ahead requests are not real hardware failures. Furthermore,
after the degrading and recovering is accomplished, read-ahead requests
will be handled by the md raid array again.

Under such a condition, I/O failures of read-ahead requests don't
indicate the real health status (because normal I/O is still served);
they should not be counted into the I/O error counter dc->io_errors.

Since there is no simple way to detect whether the backing device is an
md raid device, this patch simply ignores I/O failures of read-ahead
bios on the backing device, to avoid a bogus backing device failure on
a degrading md raid array.

Suggested-by: Thorsten Knabe <linux@xxxxxxxxxxxxxxxxx>
Signed-off-by: Coly Li <colyli@xxxxxxx>
---
 drivers/md/bcache/io.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/md/bcache/io.c b/drivers/md/bcache/io.c
index c25097968319..4d93f07f63e5 100644
--- a/drivers/md/bcache/io.c
+++ b/drivers/md/bcache/io.c
@@ -58,6 +58,18 @@ void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio)
 
 	WARN_ONCE(!dc, "NULL pointer of struct cached_dev");
 
+	/*
+	 * Read-ahead requests on a degrading and recovering md raid
+	 * (e.g. raid6) device might be failed immediately by the md
+	 * raid code, which is not a real hardware media failure. So
+	 * we shouldn't count failed REQ_RAHEAD bios in dc->io_errors.
+	 */
+	if (bio->bi_opf & REQ_RAHEAD) {
+		pr_warn_ratelimited("%s: Read-ahead I/O failed on backing device, ignore",
+				    dc->backing_dev_name);
+		return;
+	}
+
 	errors = atomic_add_return(1, &dc->io_errors);
 	if (errors < dc->error_limit)
 		pr_err("%s: IO error on backing device, unrecoverable",
-- 
2.16.4
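A side note on the rate limiting: pr_warn_ratelimited() as used in the
v2 patch relies on the kernel's stock defaults, DEFAULT_RATELIMIT_INTERVAL
(5 * HZ) and DEFAULT_RATELIMIT_BURST (10), i.e. at most 10 messages per
5 seconds. If those defaults ever turn out to be a poor fit for a given
I/O pattern, a driver can carry its own ratelimit state instead. A
minimal sketch, where the helper name and the 30-second/10-message
limits are arbitrary examples, not values from the patch:

	#include <linux/printk.h>
	#include <linux/ratelimit.h>

	/* Allow at most 10 of these messages per 30 seconds. */
	static DEFINE_RATELIMIT_STATE(rahead_err_rs, 30 * HZ, 10);

	static void report_rahead_failure(const char *dev_name)
	{
		/* __ratelimit() returns nonzero while under the limit. */
		if (__ratelimit(&rahead_err_rs))
			pr_warn("%s: Read-ahead I/O failed on backing device, ignore\n",
				dev_name);
	}

A dedicated state also keeps this message's budget separate from other
ratelimited printks in the same driver, at the cost of a few extra
bytes of static data.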