Hi,
It does say that metadata is written to the disks. Depending on the
metadata version, this can still fry your LUKS header: a v1.2 superblock
is stored 4K from the start of the device.
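To see where the new superblock and the array data actually sit, something
like this helps (just an illustration, using the device names from your
mail):

  mdadm --examine /dev/sdb
  mdadm --examine /dev/sdc1

The "Super Offset" and "Data Offset" lines show where the metadata and the
payload start on each component; if the new data offset differs from the
one the old array used, the start of /dev/md127 no longer lines up with
where the LUKS header used to be.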
You really should have started the array degraded, checked the contents
and only then hot-added the new (empty) drive.
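Roughly along these lines (only a sketch; which member is the good one is
an assumption you have to verify yourself):

  # start the array degraded, with the known-good member only
  mdadm --assemble --run /dev/md127 /dev/sdc1
  # check that the LUKS header is still there
  cryptsetup luksDump /dev/md127
  # only then add the second (empty) drive and let it resync
  mdadm --manage /dev/md127 --add /dev/sdb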
--assume-clean is something you use when you assume or know that your
array is consistent, even if the metadata says otherwise.
And mdadm gave you quite a few lines to consider first:
"/dev/sdc1 appears to contain an ext2fs file system" - this probably was
not the old raid component if it had a LUKS header on top. It would have
been wise to check first, and to remove the ext2fs signature only once
you were really sure.
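For example, this lists the signatures without touching anything (device
name taken from your mail):

  # list all known filesystem/raid signatures on the device, read-only
  wipefs /dev/sdc1

A stray ext2 signature could then be removed with wipefs and the offset
from that listing, but only after you are certain it does not belong to
your data.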
"mdadm: partition table exists on /dev/sdb but will be lost or ..."
Now THIS drive had a partition table; maybe your raid component was on
one of its partitions?
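A quick look at the disk layout would answer that, e.g.:

  # show the partitions and filesystem types on the whole disk
  lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb
  fdisk -l /dev/sdb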
Regards
-Sven
P.S.: For a recovery, create an image of the running md device, then of
each component drive. Inspect the images to see whether you can find the
LUKS header at all. If you do, dd the header and key slots out of the
image and check the header thoroughly. If it is broken, try to fix the
copy you made. If it can be used and you manage to create a read-only
LUKS mapping, you can possibly recover your data (you really would want
to back it up at this point!)
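A very rough sketch of these steps; all paths, offsets and sizes below
are assumptions, so double-check every command and work on the copies
only:

  # image the md device and each component (destination paths are made up)
  dd if=/dev/md127 of=/mnt/backup/md127.img bs=1M conv=noerror,sync
  dd if=/dev/sdb   of=/mnt/backup/sdb.img   bs=1M conv=noerror,sync
  dd if=/dev/sdc1  of=/mnt/backup/sdc1.img  bs=1M conv=noerror,sync

  # look for LUKS header candidates (the magic begins with the string "LUKS")
  grep -abo 'LUKS' /mnt/backup/sdc1.img | head

  # carve out the header plus key slots at the byte offset grep reported;
  # 2 MiB is usually enough for a LUKS1 header with default parameters
  dd if=/mnt/backup/sdc1.img of=/mnt/backup/luks-header.img \
     iflag=skip_bytes skip=<byte-offset> bs=1M count=2

  # check the carved header
  cryptsetup luksDump /mnt/backup/luks-header.img

  # if it looks sane, try a read-only mapping with the detached header;
  # the payload offset recorded in the header still applies to the data
  # device, so choose the data device/image accordingly
  cryptsetup luksOpen --readonly --header /mnt/backup/luks-header.img /dev/md127 recovery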
On 21.11.2015 at 17:29, Luis Alexandre wrote:
Hi.
I have a raid1 (mdadm-based) setup with two disks. I had them encrypted with
luks. Everything was ok for 2 years. My PC had a problem and I had to mount
this on a new PC.
When I tried to start the raid on the new PC it only started with one of
the disks, because the other had been replaced on a different PC and so
carried a different hostname in its metadata (my original PC had a script
to assemble the raid even though the two disks had different hostnames).
So I tried to fix this hostname mismatch by re-creating the raid on the
new PC using
mdadm -C /dev/md127 -l1 -n2 --assume-clean --metadata=1.2 /dev/sdb /dev/sdc1
--uuid=1d925c8d:8c8bb953:4e9070f7:43344cf9
mdadm: /dev/sdb appears to be part of a raid array:
level=raid1 devices=2 ctime=Sun Feb 12 19:40:32 2012
mdadm: partition table exists on /dev/sdb but will be lost or
meaningless after creating array
mdadm: /dev/sdc1 appears to contain an ext2fs file system
size=976760832K mtime=Mon Aug 25 19:50:27 2014
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Tue Aug 26 09:10:39 2014
Continue creating array? y
mdadm: array /dev/md127 started.
All appeared to be OK:
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
[raid10]
md127 : active (auto-read-only) raid1 sdc1[1] sdb[0]
976629568 blocks super 1.2 [2/2] [UU]
unused devices: <none>
but now luks does not open the raid:
sudo cryptsetup luksOpen /dev/md127 raid1
Device /dev/md127 is not a valid LUKS device.
Any ideas on how to re-open the raid with luks?
Note: I thought there would be no problem with the create command because of
this in the mdadm man page:
"Create Create a new array with per-device metadata (superblocks).
Appropriate metadata is written to each device, and then the array
comprising those devices is activated. A 'resync' process is started to make
sure that the array is consistent (e.g. both sides of a mirror contain the
same data) but *the content of the device is left otherwise untouched*. "
Thanks for any help you can provide.
Luis
_______________________________________________
dm-crypt mailing list
dm-crypt@xxxxxxxx
http://www.saout.de/mailman/listinfo/dm-crypt