Re: RAID5 up, but one drive removed, one says spare building, what now?

Hi Jun-Kai,

On 11/10/2017 06:31 AM, Wols Lists wrote:
> On 10/11/17 03:09, Jun-Kai Teoh wrote:
>> Hi all,
>>
>> I managed to get my RAID drive back up, content looks like it's still
>> there, but it's not resyncing or reshaping and my parity drive was
>> removed (I did it when I tried to get it back up).

If you can see your content (mounted read-only, I hope), back up
everything, in order from most critical to least critical.
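
For example (the mount point and destination paths here are just
placeholders, adjust to your setup), something along these lines:

  # Mount the filesystem read-only so nothing writes to the degraded array
  mount -o ro /dev/md126 /mnt/raid

  # Copy the most critical data first, then work down the list
  rsync -a /mnt/raid/critical/  /backup/critical/
  rsync -a /mnt/raid/the-rest/  /backup/the-rest/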

>> So what should I do now? I'm afraid of doing anything else at this point.

Well, you need to provide more information: at the very least, the
mdadm -E reports for all of the member devices, including the "parity"
device you removed.  (Parity is spread among all devices in a normal
raid5 layout, so you may be suffering from a misunderstanding.)
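
Something like this collects them all in one go (the /dev/sd[a-h]1
pattern is a guess, substitute your actual member partitions):

  # One -E report per member device, including the removed one
  for dev in /dev/sd[a-h]1; do
      echo "=== $dev ==="
      mdadm -E "$dev"
  done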

>> /dev/md126:
>>         Version : 1.2
>>   Creation Time : Thu Jun 30 07:57:36 2016
>>      Raid Level : raid5
>>      Array Size : 23441323008 (22355.39 GiB 24003.91 GB)
>>   Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
>>    Raid Devices : 8
>>   Total Devices : 7
>>     Persistence : Superblock is persistent
>>
>>   Intent Bitmap : Internal
>>
>>     Update Time : Thu Nov  9 18:57:18 2017
>>           State : clean, FAILED
>>  Active Devices : 6
>> Working Devices : 7
>>  Failed Devices : 0
>>   Spare Devices : 1
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>   Delta Devices : 1, (7->8)
                        ^^^^^^

Somewhere in your efforts, you must have used mdadm --grow.  That was
bad, and it is the reason for my backup suggestion above.
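
You can confirm the stalled grow from the running array itself (md126
taken from your output above):

  # /proc/mdstat shows whether the reshape is actually moving
  cat /proc/mdstat

  # --detail reports the Delta Devices line and reshape progress, if any
  mdadm --detail /dev/md126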

Please review your bash history and reconstruct what steps you took in
your efforts to revive your array.  An extract of relevant dmesg text
from that time period may be helpful, too.
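
Something along these lines will usually pull out the interesting bits
(the grep patterns are just a starting point):

  # Recent mdadm invocations from your shell history
  history | grep -i mdadm

  # Kernel messages about the array since boot
  dmesg | grep -iE 'md126|md/raid|raid5'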

It would help if you summarized how you got into this jam in the first
place.

[trim /]

> Okay. I was hoping someone else would chime in, but I'd say this looks
> well promising. You have seven drives of eight so you have no redundancy :-(
> 
> You say your data is still there - does that mean you've mounted it, and
> it looks okay?
> 
> sde is rebuilding, which means the array is sorting itself out.

It's not progressing because of the pending reshape (the 7->8 grow
shown above).

> You need that eighth drive. If a fsck says you have no (or almost no)
> filesystem corruption, and you have a known-good drive, add it in. The
> array will then sort itself out.

No, backups are first.

> I would NOT recommend mounting it read-write until it comes back and
> says "eight drives of eight working".

Concur.

Phil



[Index of Archives]     [Linux RAID Wiki]     [ATA RAID]     [Linux SCSI Target Infrastructure]     [Linux Block]     [Linux IDE]     [Linux SCSI]     [Linux Hams]     [Device Mapper]     [Device Mapper Cryptographics]     [Kernel]     [Linux Admin]     [Linux Net]     [GFS]     [RPM]     [git]     [Yosemite Forum]


  Powered by Linux