[Bug 203943] ext4 corruption after RAID6 degraded; e2fsck skips block checks and fails

https://bugzilla.kernel.org/show_bug.cgi?id=203943

--- Comment #3 from Yann Ormanns (yann@xxxxxxxxxxx) ---
Andreas & Ted, thank you for your replies.

(In reply to Andreas Dilger from comment #1)
> This seems like a RAID problem and not an ext4 problem. The RAID array
> shouldn't be returning random garbage if one of the drives is unavailable.
> Maybe it is not doing data parity verification on reads, so that it is
> blindly returning bad blocks from the failed drive rather than
> reconstructing valid data from parity if the drive does not fail completely?

How can I check that? At least running "checkarray" did not turn up anything
new or helpful.
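
For reference, my understanding is that "checkarray" just triggers md's sysfs
"check" action, so the manual equivalent would be something like this (a
sketch; md0 is a placeholder for my array):

  # Ask md to read every stripe and verify parity (what checkarray does).
  echo check > /sys/block/md0/md/sync_action

  # Progress is visible here until the check completes.
  cat /proc/mdstat

  # Count of stripes whose parity did not match; a non-zero value would
  # point at the RAID layer returning inconsistent data.
  cat /sys/block/md0/md/mismatch_cnt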

(In reply to Theodore Tso from comment #2)
> Did you resync the disks *before* you ran e2fsck? Or only afterwards?

1. My RAID6 became degraded and ext4 errors showed up.
2. I ran e2fsck; it consumed all available memory and printed only "Inode %$i
block %$b conflicts with critical metadata, skipping block checks."
3. I replaced the faulty disk and resynced the RAID6.
4. e2fsck was then able to clean the filesystem.
5. I simulated a drive fault, so my RAID6 had n+1 working disks left (see the
command sketch below).
6. The ext4 FS got corrupted again.
7. Although the RAID is clean again, e2fsck is not able to clean the FS (same
failure as in step 2).
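
For completeness, the fault simulation in step 5 was along these lines (a
sketch; /dev/md0 and /dev/sdc1 are placeholders for my array and one member
disk):

  # Mark one member faulty and drop it, leaving the array degraded.
  mdadm /dev/md0 --fail /dev/sdc1
  mdadm /dev/md0 --remove /dev/sdc1

  # ...exercise the filesystem, then re-add the disk and let md resync...
  mdadm /dev/md0 --add /dev/sdc1

  # With the array clean again, run e2fsck read-only first (-n) so it
  # cannot modify the filesystem while the corruption is being debugged.
  e2fsck -fn /dev/md0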

-- 
You are receiving this mail because:
You are watching the assignee of the bug.


