Re: Broken RAID & LUKS


 



On 22.02.2013 16:37, Phil Turmel wrote:
On 02/22/2013 09:58 AM, Stone wrote:
On 22.02.2013 14:53, Phil Turmel wrote:
On 02/22/2013 05:31 AM, stone@xxxxxxxxx wrote:
Working on the live CD is very slow.
I will take out my two system drives, put in one new drive, and install an
old system on it (Ubuntu 11.04; I think that is the system I originally
created the RAID on).

Do you have any new information from the hexdump, or other ideas to try,
to get the RAID and the LUKS running?
Unfortunately, no.  The hexdump had no real superblock candidates that I
could see.  That strongly suggests that there remain some ordering
issues.  I would try chunk sizes down to 8k.  If that still doesn't
work, consider re-creating with a different drive order--it's a slim
possibility that "sdc1 sdd1 missing sdf1" isn't correct.

Meanwhile, you haven't supplied the complete hexdump of your luks
signature sector.  It may not help, but it would show the payload offset.
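For reference, the payload offset can also be pulled out of the header directly. This is my own sketch, not something from the thread: in the LUKS1 on-disk format the payload offset is stored as a big-endian 32-bit integer at byte 104 of the header, counted in 512-byte sectors.

```shell
#!/bin/sh
# Read the LUKS1 payload offset from a device or image file.
# LUKS1 stores it as a big-endian uint32 at byte offset 104,
# counted in 512-byte sectors.
luks_payload_offset() {
    hex=$(dd if="$1" bs=1 skip=104 count=4 2>/dev/null \
          | od -An -tx1 | tr -d ' \n')
    echo $((0x$hex))
}

# e.g. luks_payload_offset /dev/md2
```

Multiply the result by 512 to get the byte offset where the encrypted payload starts.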
What about this part?

I have installed the system now with one system drive.
The RAID devices are now: sdb1 sdc1 sdd1 (broken, not in sync) sde1
Ok.

I have now tested all chunk sizes from 512k down to 8k:
512k: opens LUKS, but no superblock
256k: opens LUKS, but no superblock
128k: No key available with this passphrase
64k: No key available with this passphrase
32k: No key available with this passphrase
16k: No key available with this passphrase
8k: No key available with this passphrase
Ok, but on the smaller chunk sizes, the device order could impact
interpretation of the key material.  You should repeat the small chunk
tests with the drive order variations below.

Make a grid with chunk size on one axis, and drive order on the other
axis.  Mark each combination with yes or no if it can open luks.  If it
can, save the output of "fsck -n" in a file.  This would be a good thing
to script.

After the script is done, look at all the saved files to see if any look
like possible solutions.
I will write a script and send my results back, but do you really want a
"fsck -n /dev/mapper/md2_nas"? The output is veeeery long, like this:
Illegal block number passed to ext2fs_mark_block_bitmap #2667529020 for in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529021 for in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529021 for in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529022 for in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529022 for in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529023 for in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529023 for in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529024 for in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529024 for in-use block map
Illegal block number passed to ext2fs_test_block_bitmap #2667529025 for in-use block map
Illegal block number passed to ext2fs_mark_block_bitmap #2667529025 for in-use block map
512k and 256k are working better...
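The saved logs do not need to be read by hand. As a sketch (my own illustration, assuming one log file is saved per chunk/order combination), they can be ranked by how many complaints they contain; the geometry with the fewest complaints is the most promising candidate:

```shell
#!/bin/sh
# Rank saved fsck logs: fewer "Illegal block number" complaints
# usually means a more plausible chunk-size/drive-order guess.
rank_logs() {
    for f in "$@"; do
        printf '%8d %s\n' "$(grep -c 'Illegal block number' "$f")" "$f"
    done | sort -n
}

# e.g. rank_logs fsck_*.log
```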
Next tests:
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
--raid-devices=4 /dev/sde1 /dev/sdb1 missing /dev/sdc1
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
--raid-devices=4 /dev/sdc1 /dev/sdb1 missing /dev/sde1
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
--raid-devices=4 /dev/sdc1 missing /dev/sdb1 /dev/sde1
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
--raid-devices=4 /dev/sdb1 /dev/sde1 /dev/sdc1 missing
     fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
     fsck.ext4: Bad magic number in super-block while trying to open
/dev/mapper/md2_nas
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
--raid-devices=4 /dev/sde1 /dev/sdc1 /dev/sdb1 missing
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
--raid-devices=4 /dev/sdc1 /dev/sde1 /dev/sdb1 missing
     No Luks
mdadm --create /dev/md2 --assume-clean --chunk=512 --verbose --level=5
--raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sde1 missing
     fsck.ext4: Invalid argument while trying to open /dev/mapper/md2_nas
     fsck.ext4: Bad magic number in super-block while trying to open
/dev/mapper/md2_nas

Do you think that I should try to mount the partition read-only? But I
think this will not work because of the damaged filesystem, right?
Do *not* mount at all.  Even a read-only mount isn't really
read-only--it will try to play back the journal, and will try to write
to the superblocks.

Phil

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

