On 20/04/13 09:53, Ben Bucksch wrote:
Ben Bucksch wrote, On 20.04.2013 03:26:
I can read my files again without problem; all is happy.
Actually, no. The XFS filesystem structure is not sane. I must have done
something wrong. (If possible, please let me know what; all the relevant
data should already be posted.)
At first it looked OK, as if only one recently written directory was
broken. I unmounted one of the filesystems, ran xfs_repair, and after
re-mounting, almost all directories are gone. Almost 100% data loss. I
can't describe how upset I am with md.
As others have already told you, md does not go randomly kicking drives
from arrays. Your system had a failure of some kind which caused the
loss of two drives. You tried to recover it and managed to get a drive
into the spare state. After much troubleshooting, you used the
last-resort option --assume-clean, after which (without properly
verifying that your drives were in the correct order) you ran a terribly
destructive write to the disks, and you have almost certainly ruined any
chance you had of recovering your data.
I fail to see where the fault lies with md.
Had you searched or asked a little more, you would have found a number
of people who have written permutation scripts that iterate over every
possible arrangement of drives, letting you run a read-only fsck against
each one; that would have positively identified the correct order of
your disks.
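A minimal sketch of such a permutation script, assuming a 4-drive RAID5
on /dev/md0 built from sda1..sdd1 (all of these names, levels, and
counts are assumptions; substitute your own). Note that mdadm --create
rewrites array metadata, so only run this against images of the drives;
with DRY_RUN=1 (the default) the commands are only printed:

```shell
#!/bin/sh
# Try every ordering of four drives, assembling the array and running a
# read-only xfs_repair each time. DRY_RUN=1 prints commands instead of
# executing them.
DRY_RUN=${DRY_RUN:-1}
DRIVES="/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1"   # assumption: adjust
COUNT=0

try_order() {
    COUNT=$((COUNT + 1))
    echo "### order $COUNT: $*"
    if [ "$DRY_RUN" = "1" ]; then
        echo "mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=4 $*"
        echo "xfs_repair -n /dev/md0"
        echo "mdadm --stop /dev/md0"
        return
    fi
    mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=4 "$@" &&
        xfs_repair -n /dev/md0    # -n: check only, write nothing
    mdadm --stop /dev/md0
}

# enumerate all 24 orderings of the four drives
for a in $DRIVES; do
  for b in $DRIVES; do
    [ "$b" = "$a" ] && continue
    for c in $DRIVES; do
      { [ "$c" = "$a" ] || [ "$c" = "$b" ]; } && continue
      for d in $DRIVES; do
        { [ "$d" = "$a" ] || [ "$d" = "$b" ] || [ "$d" = "$c" ]; } && continue
        try_order "$a" "$b" "$c" "$d"
      done
    done
  done
done
echo "tried $COUNT orderings"
```

The ordering whose read-only xfs_repair reports the least damage is the
likely original layout.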
Your best bet now is to post on the xfs list to find out if there is
_any_ way of undoing what you just did, or of working around it (backup
superblocks or whatever), and then to run the permutations on your
drives to see if any combination shows you valid data.
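For the XFS side, a couple of strictly read-only commands are worth
knowing before anything else touches the device (a command fragment, not
a recipe; /dev/md0 is an assumption):

```shell
# Dry run: report what xfs_repair *would* fix, but change nothing.
# When the primary superblock is bad it also scans for secondary
# superblocks on its own and reports whether usable ones were found.
xfs_repair -n /dev/md0

# Open the filesystem read-only and dump the primary superblock.
xfs_db -r -c 'sb 0' -c 'print' /dev/md0
```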
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html