Why would a recreation cause a different number of blocks??

Still trying to restore my large storage system without totally screwing
it up. There are two separate md RAID devices; both had their superblocks
wiped, and one of the six drives is dead (the other five are fine).

Before the human failure (an OS reinstall; I only deleted the MD devices
in the Ubuntu installer, which I think just zeroes the md superblocks of
the affected partitions), /proc/mdstat showed:

Personalities : [raid6] [raid5] [raid4] [raid1] [linear] [multipath] [raid0] [raid10]
md3 : active raid6 sda1[0] sdc1[2] sde1[4] sdb1[1] sdd1[6]
      1073735680 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UUUUU_]
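
My assumption is that deleting the MD devices in the installer amounts to
running something like this on each member (I never ran it by hand, so this
is a guess about what the installer did, not a record of it):

mdadm --zero-superblock /dev/sda1    # and likewise for the other members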

I recreated the device with:

mdadm --create --assume-clean --level=6 --raid-devices=6 /dev/md0 \
      /dev/sdd1 /dev/sdb1 /dev/sde1 /dev/sdc1 /dev/sda1 missing

and now it reports:
root@nas:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sda1[4] sdc1[3] sde1[2] sdb1[1] sdd1[0]
      1073215488 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UUUUU_]
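
In case it helps, I can also post the metadata the recreated array reports,
e.g. the output of:

mdadm --detail /dev/md0
mdadm --examine /dev/sdd1    # and the other members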

Why did my block counts change? The disk partitions weren't touched
or changed at any point. Shouldn't I have gotten the same size?
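
For what it's worth, the difference works out to a suspiciously even amount
per data disk (quick sanity check, treating the block counts above as KiB
and counting four data disks in a six-disk RAID-6):

echo $(( 1073735680 - 1073215488 ))         # 520192 KiB missing in total
echo $(( (1073735680 - 1073215488) / 4 ))   # 130048 KiB, i.e. 127 MiB per data disk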

The recreated device isn't working. There is supposed to be a LUKS-encrypted
volume there, but luksOpen reports there is no LUKS header (and there used
to be). Would the odd change in size indicate total corruption?
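
In case the exact commands matter, this is roughly what I ran against the
recreated device (the mapping name here is just an example, not necessarily
what I used originally):

cryptsetup luksOpen /dev/md0 storage    # this is what complains about the missing LUKS header
cryptsetup isLuks /dev/md0; echo $?     # presumably non-zero now as well; I can confirm if useful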

- Jeff

