This is a note to let you know that I've just added the patch titled

    md/raid6: Fix anomily when recovering a single device in RAID6.

to the 4.4-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     md-raid6-fix-anomily-when-recovering-a-single-device-in-raid6.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From foo@baz Mon Mar 19 09:58:12 CET 2018
From: NeilBrown <neilb@xxxxxxxx>
Date: Mon, 3 Apr 2017 12:11:32 +1000
Subject: md/raid6: Fix anomily when recovering a single device in RAID6.

From: NeilBrown <neilb@xxxxxxxx>


[ Upstream commit 7471fb77ce4dc4cb81291189947fcdf621a97987 ]

When recovering a single missing/failed device in a RAID6, those
stripes where the Q block is on the missing device are handled a bit
differently.  In these cases it is easy to check that the P block is
correct, so we do.  This results in the P block being destroyed.
Consequently the P block needs to be read a second time in order to
compute Q.  This causes lots of seeks and hurts performance.

It shouldn't be necessary to re-read P as it can be computed from the
data.  But we only compute blocks on missing devices, since
c337869d9501 ("md: do not compute parity unless it is on a failed
drive").

So relax the change made in that commit to allow the P block to be
computed in a RAID6 when it is the only missing block.

This makes RAID6 recovery run much faster as the disk just "before"
the recovering device is no longer seeking back-and-forth.

Reported-and-tested-by: Brad Campbell <lists2009@xxxxxxxxxxxxxxx>
Reviewed-by: Dan Williams <dan.j.williams@xxxxxxxxx>
Signed-off-by: NeilBrown <neilb@xxxxxxxx>
Signed-off-by: Shaohua Li <shli@xxxxxx>
Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/md/raid5.c |   13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3372,9 +3372,20 @@ static int fetch_block(struct stripe_hea
 		BUG_ON(test_bit(R5_Wantcompute, &dev->flags));
 		BUG_ON(test_bit(R5_Wantread, &dev->flags));
 		BUG_ON(sh->batch_head);
+
+		/*
+		 * In the raid6 case if the only non-uptodate disk is P
+		 * then we already trusted P to compute the other failed
+		 * drives. It is safe to compute rather than re-read P.
+		 * In other cases we only compute blocks from failed
+		 * devices, otherwise check/repair might fail to detect
+		 * a real inconsistency.
+		 */
+
 		if ((s->uptodate == disks - 1) &&
+		    ((sh->qd_idx >= 0 && sh->pd_idx == disk_idx) ||
 		    (s->failed && (disk_idx == s->failed_num[0] ||
-				   disk_idx == s->failed_num[1]))) {
+				   disk_idx == s->failed_num[1])))) {
 			/* have disk failed, and we're requested to fetch it;
 			 * do compute it
 			 */


Patches currently in stable-queue which might be from neilb@xxxxxxxx are

queue-4.4/md-raid6-fix-anomily-when-recovering-a-single-device-in-raid6.patch
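
For reference, the condition the hunk above changes can be read as a
stand-alone predicate.  The sketch below is illustrative only:
can_compute_block() and the flattened stripe_state structure are
hypothetical stand-ins for the kernel's fetch_block() and its
stripe_head/stripe_head_state types, condensed so the before/after
logic is visible in one place.

#include <stdbool.h>

/*
 * Hypothetical, self-contained restatement of the patched condition.
 * The real kernel keeps pd_idx/qd_idx on struct stripe_head and the
 * remaining fields on struct stripe_head_state; they are merged here
 * purely for brevity.
 */
struct stripe_state {
	int uptodate;      /* number of up-to-date blocks in the stripe */
	int failed;        /* number of failed devices (0, 1 or 2) */
	int failed_num[2]; /* indices of the failed devices */
	int pd_idx;        /* index of the P (parity) block */
	int qd_idx;        /* index of the Q block; < 0 unless RAID6 */
};

static bool can_compute_block(const struct stripe_state *s, int disks,
			      int disk_idx)
{
	/* Every other block in the stripe must already be up to date. */
	if (s->uptodate != disks - 1)
		return false;

	/*
	 * New case added by this patch: in a RAID6 (qd_idx >= 0), if P
	 * is the only non-uptodate block it is safe to compute it from
	 * the data blocks instead of re-reading it from disk.
	 */
	if (s->qd_idx >= 0 && s->pd_idx == disk_idx)
		return true;

	/*
	 * Pre-existing case from c337869d9501: only compute blocks that
	 * sit on failed devices, so that check/repair can still detect
	 * a real inconsistency on the healthy ones.
	 */
	return s->failed && (disk_idx == s->failed_num[0] ||
			     disk_idx == s->failed_num[1]);
}

With that reading, the performance fix is simply the added early-return:
stripes whose Q block lives on the missing device no longer force a
second read of P, which is what caused the back-and-forth seeking.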