Re: [PATCH 1/3] raid0, linear, md: add error_handlers for raid0 and linear

On 2/14/22 5:37 PM, Mariusz Tkaczyk wrote:
> On Sat, 12 Feb 2022 09:12:00 +0800
> Guoqing Jiang <guoqing.jiang@xxxxxxxxx> wrote:

>> On 1/27/22 11:39 PM, Mariusz Tkaczyk wrote:
>>> Patch 62f7b1989c0 ("md raid0/linear: Mark array as 'broken' and
>>> fail BIOs if a member is gone") allowed writes to be finished
>>> earlier (before level-dependent actions) for non-redundant arrays.
>>>
>>> To achieve that, MD_BROKEN is added to mddev->flags if drive
>>> disappearance is detected. This is done in is_mddev_broken(), which
>>> is confusing and not consistent with other levels, where
>>> error_handler() is used. This patch adds an appropriate
>>> error_handler for raid0 and linear.
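
For context, the error_handler this adds for raid0 looks roughly like
this (a minimal sketch based on the description above, not the exact
code from the patch):

	/* Sketch: on member failure, mark the whole array broken once
	 * and report it; raid0 has no redundancy to fail over to.
	 */
	static void raid0_error(struct mddev *mddev, struct md_rdev *rdev)
	{
		if (!test_and_set_bit(MD_BROKEN, &mddev->flags))
			pr_crit("md/raid0:%s: Disk failure on %pg detected, failing array.\n",
				mdname(mddev), rdev->bdev);
	}
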
>> I think their purposes are quite different. As said before,
>> error_handler mostly acts on the rdev, while is_mddev_broken is for
>> the mddev, though it needs to test the rdev first.
> I changed is_mddev_broken to is_rdev_broken because it checks the
> device now. On error it calls md_error(), which in turn invokes the
> error_handler. I unified error handling across the levels. Do you
> consider that wrong?

I am neutral on the change.
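
For reference, the rename amounts to something like this (a sketch of
the helper and one call site in the raid0/linear I/O path; tmp_dev
stands for the member rdev picked for the bio):

	/* The helper now only answers "is this member device gone?"... */
	static inline bool is_rdev_broken(struct md_rdev *rdev)
	{
		return !disk_live(rdev->bdev->bd_disk);
	}

	/* ...and the caller routes the failure through md_error(),
	 * which invokes the level's error_handler.
	 */
	if (unlikely(is_rdev_broken(tmp_dev))) {
		bio_io_error(bio);
		md_error(mddev, tmp_dev);
		return true;
	}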

>>> It also adopts md_error(); we only want to call .error_handler for
>>> those levels. mddev->pers->sync_request is additionally checked;
>>> its existence implies a level with redundancy.

>>> Usage of error_handler means that a disk failure can be requested
>>> from userspace. The user can fail the array via the mdadm
>>> --set-faulty command. This is not safe and will be fixed in mdadm.
>> What is the safety issue here? It would be better to post the mdadm
>> fix together.
> We can and should block the user from damaging the raid even if it
> is recoverable. It is a regression.

I don't follow, did you mean --set-faulty from mdadm could damage the
raid?

> I will fix mdadm. I don't consider it a big risk (because it is
> recoverable), so I focused on the kernel part first.

>>> It is correctable because the failed state is not recorded in the
>>> metadata. After the next assembly the array will be read-write
>>> again.
>> I don't think it is a problem; care to explain why it can't be RW
>> again?
> The failed state is not recoverable at runtime, so you need to
> recreate the array.

IIUC, the failfast flag is supposed to be set for transient errors,
not permanent failures; the rdev (marked as failfast) needs to be
revalidated and re-added to the array.


[ ... ]

>>> +		char *md_name = mdname(mddev);
>>> +
>>> +		pr_crit("md/linear%s: Disk failure on %pg detected.\n"
>>> +			"md/linear:%s: Cannot continue, failing array.\n",
>>> +			md_name, rdev->bdev, md_name);
>> The second md_name is not needed.
> Could you elaborate more? Do you want to skip the md device name in
> the second message?

Yes, we printed md_name twice here, which seems unnecessary.
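
Dropping the duplicate could look like this (a sketch of one possible
rework, not the final patch; both lines fold into a single message
with one mdname() reference):

	pr_crit("md/linear:%s: Disk failure on %pg detected, failing array.\n",
		mdname(mddev), rdev->bdev);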

[ ... ]

>>> --- a/drivers/md/md.c
>>> +++ b/drivers/md/md.c
>>> @@ -7982,7 +7982,11 @@ void md_error(struct mddev *mddev, struct md_rdev *rdev)
>>>  	if (!mddev->pers || !mddev->pers->error_handler)
>>>  		return;
>>> -	mddev->pers->error_handler(mddev,rdev);
>>> +	mddev->pers->error_handler(mddev, rdev);
>>> +
>>> +	if (!mddev->pers->sync_request)
>>> +		return;
>> The above is only valid for raid0 and linear; I guess it is fine if
>> DM doesn't create an LV on top of them. But the new check deserves
>> a comment above it.
> Will do, could you propose a comment?

Or, just check if it is raid0 or linear directly instead of inferring
a level with redundancy.
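
That direct check could look roughly like this (a sketch of the
suggestion; it assumes the existing LEVEL_LINEAR constant, with raid0
being level 0):

	/* Bail out early for the two non-redundant levels instead of
	 * inferring redundancy from the presence of ->sync_request.
	 */
	if (mddev->pers->level == 0 || mddev->pers->level == LEVEL_LINEAR)
		return;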

Thanks,
Guoqing


