Re: [PATCH 1/3] raid0, linear, md: add error_handlers for raid0 and linear


 



On Tue, 15 Feb 2022 11:43:34 +0800
Guoqing Jiang <guoqing.jiang@xxxxxxxxx> wrote:

> >>> It also adopts md_error(); we only want to call .error_handler for
> >>> those levels. mddev->pers->sync_request is additionally checked;
> >>> its existence implies a level with redundancy.
> >>>
> >>> Usage of error_handler means that a disk failure can be requested
> >>> from userspace. The user can fail the array via the #mdadm
> >>> --set-faulty command. This is not safe and will be fixed in mdadm.
> >> What is the safety issue here? It would be better to post the mdadm
> >> fix together.  
> > We can and should block the user from damaging the raid even if it
> > is recoverable. It is a regression.  
> 
> I don't follow, did you mean --set-faulty from mdadm could "damage"
> the raid?

Yes, now the user will be able to impose a failed state. This is a
regression I caused and I'm aware of it.
> 
> > I will fix mdadm. I don't consider it as a big risk (because it is
> > recoverable) so I focused on kernel part first.
> >  
> >>> It is correctable because the failed state is not recorded in the
> >>> metadata. After the next assembly the array will be read-write
> >>> again.  
> >> I don't think it is a problem; care to explain why it can't be RW
> >> again?  
> > The failed state is not recoverable at runtime, so you need to
> > recreate the array.  
> 
> IIUC, the failfast flag is supposed to be set during a transient
> error, not a permanent failure; the rdev (marked as failfast) needs
> to be revalidated and re-added to the array.
> 

The device is never marked as failed. I checked the write paths
(because I introduced a more aggressive policy for writes): if the
array is critical, the failfast flag is not added to the bio for both
raid1 and raid10, see raid1_write_request().
> 
> [ ... ] 
> 
> >>> +		char *md_name = mdname(mddev);
> >>> +
> >>> +		pr_crit("md/linear%s: Disk failure on %pg detected.\n"
> >>> +			"md/linear:%s: Cannot continue, failing array.\n",
> >>> +			md_name, rdev->bdev, md_name);  
> >> The second md_name is not needed.  
> > Could you elaborate a bit more? Do you want to skip the device name
> > in the second message?  
> 
> Yes, we printed md_name twice here, which seems unnecessary.
> 
I will merge the errors into one message.

> 
> >>> --- a/drivers/md/md.c
> >>> +++ b/drivers/md/md.c
> >>> @@ -7982,7 +7982,11 @@ void md_error(struct mddev *mddev, struct md_rdev *rdev)
> >>>    	if (!mddev->pers || !mddev->pers->error_handler)
> >>>    		return;
> >>> -	mddev->pers->error_handler(mddev,rdev);
> >>> +	mddev->pers->error_handler(mddev, rdev);
> >>> +
> >>> +	if (!mddev->pers->sync_request)
> >>> +		return;  
> >> The above is only valid for raid0 and linear; I guess it is fine
> >> as long as DM doesn't create an LV on top of them. But the new
> >> check deserves a comment above it.  
> > Will do; could you propose a comment?  
> 
> Or, just check directly whether it is raid0 or linear instead of
> relying on sync_request to imply a level with redundancy.

Got it.

Thanks,
Mariusz



