Re: mdadm --fail doesn't mark device as failed?

On Wed, 21 Nov 2012 17:53:58 +0100 Sebastian Riemer
<sebastian.riemer@xxxxxxxxxxxxxxxx> wrote:

> On 21.11.2012 17:17, Ross Boylan wrote:
> > After I failed and removed a partition, mdadm --examine seems to show
> > that partition is fine.
> >
> > Perhaps related to this, I failed a partition and when I rebooted it
> > came up as the sole member of its RAID array.
> >
> > Is this behavior expected?  Is there a way to make the failures more
> > convincing?
> 
> Yes, it is expected behavior. Without "mdadm --fail" you can't remove a
> device from the array. If you stop the array with the failed device,
> then the state is stored in the superblock.
> 
> There is a difference between the way mdadm does it and the sysfs
> method: mdadm sends an ioctl to the kernel, while with the sysfs
> command the faulty state is stored in the superblock immediately.
> 
> # echo faulty > /sys/block/md0/md/dev-sdb1/state
> 

This is not true.  "mdadm --fail" and "echo faulty > state" have exactly the
same effect on the array.  They simulate an error occurring.
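To illustrate the point, here is a hedged sketch of both methods side by side, assuming a hypothetical array /dev/md0 with member /dev/sdb1 (both commands require root; the ioctl involved is believed to be SET_DISK_FAULTY, but the exact name is an assumption):

```shell
# Method 1: mdadm issues an ioctl (assumed SET_DISK_FAULTY) to the kernel.
mdadm /dev/md0 --fail /dev/sdb1

# Method 2: the same effect via sysfs.
echo faulty > /sys/block/md0/md/dev-sdb1/state

# Either way, inspect the member's state afterwards:
cat /sys/block/md0/md/dev-sdb1/state
mdadm --detail /dev/md0
```

Both paths simulate a device error; neither records anything about the failure that --examine of the removed partition itself would show, which is consistent with the behavior Ross observed.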


NeilBrown

