On Thu, 17 Apr 2014 15:33:48 -0400, Stuart Gathman <stuart@gathman.org> wrote:

Thanks for your answer.

> Fortunately, your fsck was read only. At this point, you need to
> crash/halt your system with no shutdown (to avoid further writes to the
> mounted filesystems).
> Then REMOVE the new drive. Start up again, and add the new drive properly.

RAID5 reassembled: started with the 2 original drives

~# mdadm --assemble --force /dev/md0 /dev/sdc1 /dev/sdd1

md0 status looks normal, with the new drive (sdb1) missing:

~# cat /proc/mdstat
md0 : active raid5 sdc1[0] sdd1[1]
      3907023872 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

> You should check stuff out READ ONLY. You will need fsck (READ ONLY at
> first), and at least some data has been destroyed.
> If the data is really important, you need to copy the two old drives
> somewhere before you do ANYTHING else. Buy two more drives! That will
> let you recover from any more mistakes typing Create instead of Assemble
> or Manage. (Note that --assume-clean warns you that you really need to
> know what you are doing!)

I tried a read-only fsck:

~# fsck -n -v -f /dev/lvm-raid/lvmp3
fsck from util-linux-ng 2.17.2
e2fsck 1.41.11 (14-Mar-2010)
Resize inode not valid. Recreate? no
Pass 1: Checking inodes, blocks, and sizes
Inode 7, i_blocks is 114136, should be 8. Fix? no
Inode 786433 is in use, but has dtime set. Fix? no
Inode 786433 has imagic flag set. Clear? no
Inode 786433 has compression flag set on filesystem without compression support. Clear? no
Inode 786433 has INDEX_FL flag set but is not a directory. Clear HTree index? no
HTREE directory inode 786433 has an invalid root node. Clear HTree index? no
Inode 786433, i_blocks is 4294967295, should be 0. Fix? no
[...]
Directory entry for '.' in ... (11) is big. Split? no
Missing '.' in directory inode 262145. Fix? no
Invalid inode number for '.' in directory inode 262145. Fix? no
Directory entry for '.' in ... (262145) is big. Split? no
Directory inode 917506, block #0, offset 0: directory corrupted
Salvage? no
e2fsck: aborted

Sounds bad. What should I do now?
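P.S. If I do buy two spare drives as you suggest, imaging the two old members before any further writes might look something like this (a rough sketch, not tested here; /dev/sde and /dev/sdf are hypothetical spare drives at least as large as the originals):

~# dd if=/dev/sdc of=/dev/sde bs=1M conv=noerror,sync
~# dd if=/dev/sdd of=/dev/sdf bs=1M conv=noerror,sync

If GNU ddrescue is available, something like "ddrescue -f /dev/sdc /dev/sde sdc.map" would be an alternative that keeps a map of any unreadable sectors.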