Re: Degraded RAID1

On 10/16/19 2:33 PM, Wol wrote:
On 16/10/2019 22:15, Curtis Vaughan wrote:
Think I got it working, just want to make sure I did this right. Using
fdisk I recreated the exact same partitions on sda as on sdb.

Then I ran mdadm --re-add for each partition against its RAID volume. So
here is the output from various commands. Does everything look right?
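
For reference, the procedure described above boils down to something like the commands below. The device-to-array mapping (sda1/sdb1 in md0, sda2/sdb2 in md1) is only a guess from this thread, and copying the partition table with sfdisk is just a one-shot alternative to recreating it by hand in fdisk:

   # copy the partition layout from the surviving disk (sdb) to the new one (sda)
   sfdisk -d /dev/sdb | sfdisk /dev/sda

   # re-add each new partition to its array
   mdadm /dev/md0 --re-add /dev/sda1
   mdadm /dev/md1 --re-add /dev/sda2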

Yup. Looks fine.

Because we have two raids on one disk, the rebuild is throttled such that only one rebuild is proceeding at a time.

md1 is rebuilding, as it says. Once that completes then all the status stuff will look normal, and md0 will start rebuilding.

Don't know how long it will take, but because the RAID doesn't know which bits of the disk are in use and which are not, the complete rebuild will take however long it takes to read a 1gig drive from end to end, and that is quite a long time ...
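
If you want to keep an eye on it, the rebuild progress and the kernel's rebuild speed limits are visible with ordinary tools; the speed value below is only an example:

   # show rebuild progress and estimated time remaining
   cat /proc/mdstat

   # optionally raise the minimum rebuild speed (KB/s) while the box is idle
   sysctl -w dev.raid.speed_limit_min=50000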

Cheers,
Wol

Actually, I still seem to have a problem.

After updates I decided to reboot, but it would not boot until I removed the new drive. I'm wondering if it has something to do with needing to install grub on the new drive?
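
If missing GRUB on the new disk is indeed the cause, something along these lines should fix it once the drive is back in place; this assumes a BIOS/MBR boot and a Debian-style system, and that the new disk really is /dev/sda:

   # put the boot loader on the new disk as well
   grub-install /dev/sda
   update-grub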

Anyhow, now that I've pulled the new drive out and started the server, the old drive is now sda. So does that mean I should issue the commands to add the new drive back to the raid but as sdb?
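
Since device names can shuffle between boots, it is probably safer to confirm which disk is which before re-adding anything; the names below are only placeholders:

   # check array membership and disk identity first
   mdadm --detail /dev/md0
   ls -l /dev/disk/by-id/

   # then re-add the new disk's partitions under whatever names they actually got
   mdadm /dev/md0 --add /dev/sdb1
   mdadm /dev/md1 --add /dev/sdb2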



