Re: RAID1 removing failed disk returns EBUSY

----- Original Message -----
> From: "Xiao Ni" <xni@xxxxxxxxxx>
> To: "Joe Lawrence" <joe.lawrence@xxxxxxxxxxx>
> Cc: "NeilBrown" <neilb@xxxxxxx>, linux-raid@xxxxxxxxxxxxxxx, "Bill Kuzeja" <william.kuzeja@xxxxxxxxxxx>
> Sent: Friday, January 30, 2015 10:19:01 AM
> Subject: Re: RAID1 removing failed disk returns EBUSY
> 
> 
> 
> ----- Original Message -----
> > From: "Joe Lawrence" <joe.lawrence@xxxxxxxxxxx>
> > To: "Xiao Ni" <xni@xxxxxxxxxx>
> > Cc: "NeilBrown" <neilb@xxxxxxx>, linux-raid@xxxxxxxxxxxxxxx, "Bill Kuzeja"
> > <william.kuzeja@xxxxxxxxxxx>
> > Sent: Friday, January 23, 2015 11:11:29 PM
> > Subject: Re: RAID1 removing failed disk returns EBUSY
> > 
> > On Tue, 20 Jan 2015 02:16:46 -0500
> > Xiao Ni <xni@xxxxxxxxxx> wrote:
> > > Joe
> > > 
> > >    Thanks for the explanation. So echoing "idle" to sync_action is a
> > > workaround without the patch.
> > > 
> > >    It looks like the patch is not enough to fix the problem. Have you
> > > tried the new patch? Does the problem still exist in your environment?
> > > 
> > >    If your environment has no problem, can you give me the version
> > > number? I'll try the same version too.
> > 
> > Hi Xiao,
> > 
> > Bill and I did some more testing yesterday and I think we've figured
> > out the confusion.  Running a 3.18+ kernel and an upstream mdadm, it
> > was the udev invocation of "mdadm -If <dev>" that was automatically
> > removing the device for us.
> > 
> > If we ran with an older mdadm and got the MD wedged in the faulty
> > condition, then nothing we echoed into the sysfs state file ('idle'
> > 'fail' or 'remove')  would change anything.  I think this agrees with
> > your testing report.
> > 
> > So two things:
> > 
> > 1 - Did you make / make install the latest mdadm and see it try to run
> > mdadm -If on the removed disk?  (You could also try manually running
> > it.)
> 
>   I made sure I have installed the latest mdadm:
>   [root@dhcp-12-133 ~]# mdadm --version
>   mdadm - v3.3.2-18-g93d3bd3 - 18th December 2014
> 
>   That should prove it, right?
> 
>   Strangely, this is what happened when I ran mdadm -If:
> 
> [root@dhcp-12-133 ~]# mdadm -If sdc
> mdadm: sdc does not appear to be a component of any array
> [root@dhcp-12-133 ~]# cat /proc/mdstat
> Personalities : [raid1]
> md0 : active (auto-read-only) raid1 sdd1[1] sdc1[0](F)
>       5238784 blocks super 1.2 [2/1] [_U]
>       
> unused devices: <none>
> 
>   I unplugged the device manually from the machine. The machine is on my desk.

Hi Joe

   Sorry about that. I entered the command incorrectly.

[root@dhcp-12-133 ~]# mdadm -If sdc1
mdadm: set sdc1 faulty in md0
mdadm: hot remove failed for sdc1: Device or resource busy
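
   For completeness, the sysfs interface discussed above maps to something
like the commands below on this box. The md0 and sdc1 names are just from my
setup here, so treat this as a rough sketch rather than an exact sequence:

   # mark the member faulty, then request its removal
   echo faulty > /sys/block/md0/md/dev-sdc1/state
   echo remove > /sys/block/md0/md/dev-sdc1/state
   # and the sync_action workaround mentioned earlier in the thread
   echo idle   > /sys/block/md0/md/sync_action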

> 
> 
> > 
> > 2 - I think the sysfs interface to the removed disks is still broken in
> > cases where (1) doesn't occur.
> > 
> > Thanks,
> > 
> > -- Joe
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



