raidreconf aborted when almost done

I tried to add a 6th disk to a RAID-5 array with raidreconf 0.1.2.

When it was almost finished, raidreconf aborted with the error message:

raid5_map_global_to_local: disk 0 block out of range: 2442004 (2442004)
gblock = 7326012
aborted

After searching the web, I believe this is due to the disks having different sizes. Because I use disks from different vendors and of different types, with different geometries, it is not possible to create partitions of exactly the same size. They match as closely as possible, but some always have a slightly different number of blocks.
It would be great if raidreconf complained about the differing partition sizes and aborted before messing up the disks.
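For what it's worth, a check like the following would have caught the mismatch before any data was touched. This is only a rough Python sketch of the check I wish raidreconf performed; the device names and the /sys/class/block path are assumptions for illustration, not anything raidreconf itself uses:

    #!/usr/bin/env python3
    # Sketch: compare the sector counts of the partitions that will make up
    # the array before running raidreconf.  Device names are only examples.
    import sys

    def sectors(dev):
        # /sys/class/block/<name>/size reports the size in 512-byte sectors
        name = dev.rstrip("/").split("/")[-1]
        with open("/sys/class/block/%s/size" % name) as f:
            return int(f.read().strip())

    devs = sys.argv[1:] or ["sda1", "sdb1", "sdc1", "sdd1", "sde1", "sdf1"]
    sizes = dict((d, sectors(d)) for d in devs)
    smallest = min(sizes.values())
    for d in sorted(sizes):
        print("%s: %d sectors (+%d over smallest)" % (d, sizes[d], sizes[d] - smallest))
    if len(set(sizes.values())) > 1:
        print("WARNING: partition sizes differ; a reconfiguration tool may misbehave")

Running it with the partitions of the array as arguments prints each partition's size and how far it is above the smallest one, and warns if they are not all identical.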

Is there any way I can recover my RAID device? I tried raidstart on it, but it only brought the array up with the old 5-disk setup, without the new disk. How do I start the array with the 6th disk included?

Regards
Klaus
