Problems after extending partition

/dev/md1 was RAID 1 built from hda3 and hdb3. After increasing the partition size of hd[ab]3, md1 could not be assembled. I think I understand why and have a solution, but I would appreciate it if someone could check it. This is with 0.90 format on Debian Lenny with the partitions in raid auto-detect mode. hda and hdb are virtual disks inside a kvm VM; it would be time-consuming to rebuild it from scratch.

A further wrinkle: when I brought the VM up and md1 was not assembled, one of the partitions was mounted and used anyway, so the two halves are now out of sync.

Analysis: Growing the partitions moved their end points, so the mdadm superblocks (which the 0.90 format stores near the end of the device) were no longer at the expected offset from the end of the partitions, and the partitions weren't recognized as part of the array.
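To make the offset problem concrete, here is a small sketch of where a 0.90 superblock lives: 64 KiB (128 sectors) below the end of the device, rounded down to a 64 KiB boundary. The partition sizes below are made-up examples, not my actual sizes; the point is only that the computed offset changes when the partition grows.

```shell
# Hypothetical sizes in 512-byte sectors (not my real partition sizes).
old_size=20971520      # e.g. a 10 GiB partition before growing
new_size=31457280      # e.g. a 15 GiB partition after growing

# 0.90-format superblock location: round the device size down to a
# 64 KiB (128-sector) boundary, then back off one more 64 KiB unit.
sb_offset() { echo $(( ($1 & ~127) - 128 )); }

echo "superblock written at sector $(sb_offset $old_size)"
echo "after growing, auto-detect looks at sector $(sb_offset $new_size)"
# The two offsets differ, so the old superblock is no longer where
# the kernel's raid auto-detect expects to find it.
```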

Solution:  (step 3 is the crucial one)
1. Shut down the VM; call it the target VM.
2. Mount the disks onto a rescue VM (running squeeze) as sdb and sdc.
3. mdadm --create /dev/md1 --uuid=xxxxx --level=mirror --raid-devices=2 /dev/sdb3 missing --spare-devices=1 /dev/sdc3
(UUID taken from the target VM; note the option is lowercase --uuid.)
4. Wait for it to sync.
5. Maybe run some command to say the array no longer has a spare. It might be
mdadm --grow /dev/md1 --spare-devices=0
6. Shut down the rescue VM and start the target VM.
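Putting the steps together, the rescue-VM side might look like the sketch below. The device names are from my setup, xxxxx stands for the real UUID, and the final --stop is my assumption about how to hand the disks back cleanly; none of this has been confirmed yet.

```shell
# On the rescue VM (squeeze), with the target VM shut down and its
# disks attached as sdb and sdc.

# Recreate the array around the surviving copy on sdb3, leaving the
# second slot "missing" and adding sdc3 as a spare so it is rebuilt
# from sdb3 rather than trusted as-is.
mdadm --create /dev/md1 --uuid=xxxxx --level=mirror --raid-devices=2 \
      /dev/sdb3 missing --spare-devices=1 /dev/sdc3

# Watch the rebuild until it completes.
watch cat /proc/mdstat

# Stop the array cleanly before giving the disks back to the target VM.
mdadm --stop /dev/md1
```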

Does it matter if I call the device /dev/md1 in step 3? It is known as that in the target VM.

Thanks for any help.
Ross Boylan
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

