Re: Drive failed in 4-drive md RAID 10

> --On Friday, September 18, 2020 10:53 PM +0200 Simon Matter
> <simon.matter@xxxxxxxxx> wrote:
>
>> mdadm --remove /dev/md127 /dev/sdf1
>>
>> and then the same with --add should hot-remove and re-add the device.
>>
>> If it rebuilds fine, it may well keep working for a long time.
>
> This worked like a charm. When I added it back, it told me it was
> "re-adding" the drive, so it recognized the drive I'd just removed. I
> checked /proc/mdstat and it showed rebuilding. It took about 90 minutes to
> finish and is now running fine.
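For the archives, the whole sequence boils down to something like this
(the device names /dev/md127 and /dev/sdf1 are the ones from this thread,
so adjust them to your array; the --fail step is only needed if the
kernel hasn't already marked the member faulty):

  # mark the member faulty only if md hasn't done it already
  mdadm --fail /dev/md127 /dev/sdf1

  # hot-remove the failed member, then add it back
  mdadm --remove /dev/md127 /dev/sdf1
  mdadm --add /dev/md127 /dev/sdf1

  # watch the resync/rebuild progress
  cat /proc/mdstat
  mdadm --detail /dev/md127

The "re-adding" message means mdadm found the array's old superblock on
the disk; with a write-intent bitmap it can then resync only the blocks
that changed, otherwise you get a full rebuild like the ~90 minutes you
saw.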

I think it's usually like this:
When a drive has a bad sector, md reads that sector from the other RAID
disk, but the failing disk gets marked faulty. During the rebuild the bad
sector is written again and the drive remaps it to a spare sector, so all
is well again. Note that the drive firmware can handle such cases
differently depending on the drive type.
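If you want to see whether such a remap actually happened, the drive's
SMART counters are the place to look. A rough check, assuming
smartmontools is installed (attribute names vary a bit between vendors):

  # reallocated/pending sector counters on the member disk
  smartctl -A /dev/sdf | egrep -i \
    'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'

After a successful rewrite Current_Pending_Sector should drop back to 0,
and Reallocated_Sector_Ct may go up by the number of remapped sectors.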

Regards,
Simon



_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos


