sw-raid1 + ext3 - can't fsck on boot?

Hi there,

I'm running software raid 1 across two 60GB IDE drives and booting off
the raid device. The raid device holds an ext3 filesystem.

Each drive is configured as a master on its own bus.

The system is Red Hat 7.2 with the stock 2.4.9-31smp kernel. The
hardware platform is a Dell Precision with dual 2 GHz P4s and 1 GB of
memory.

I have two of these systems, both configured identically. I've had
filesystem corruption problems on both machines.

In the process of trying to troubleshoot the problem, I've used tune2fs
to set the maximum mount count to 1 so that an fsck is forced on every
mount, but the check never happens on reboot. If I run tune2fs -l
/dev/md0 I can clearly see that the filesystem is past its maximum
mount count and has the "needs check" flag set, but it does not get
fsck'd on boot, and I'm not even offered an optional fsck the way I am
when the filesystem is marked dirty. Each time I reboot, the raid
device and ext3 filesystem come up fine and show no errors.
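
For reference, here is what I ran (assuming /dev/md0 is the root
array; -c sets the maximum mount count and -C the current one):

  # force a check attempt on every mount
  tune2fs -c 1 /dev/md0

  # bump the current mount count past the maximum
  tune2fs -C 2 /dev/md0

  # verify: look at "Mount count", "Maximum mount count",
  # and "Filesystem state"
  tune2fs -l /dev/md0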

Is this the intended behavior? I suspect that there may be errors on
the root partition, but I can't fsck it while the filesystem is
mounted. I built a rescue CD from some random freshmeat project, and
it does appear to find errors on the md partition, but I'm not sure I
trust it, since it's running an older kernel.
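
In case the details matter, this is roughly what I did from the rescue
CD (assuming its kernel has raid1 support; if the array isn't
autodetected from the 0xfd partitions, raidstart from raidtools is
needed first):

  # assemble the array if the kernel didn't autodetect it
  # (reads /etc/raidtab from the installed system)
  raidstart /dev/md0

  # force a full check even though the superblock says clean
  e2fsck -f -C 0 /dev/md0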

Does anyone know how to force an ext3 fsck on reboot, or have any
ideas about what kinds of things could cause filesystem corruption in
a setup like this?
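
One thing I plan to try, based on reading through Red Hat's
/etc/rc.d/rc.sysinit (which, if I'm reading it right, adds -f to the
fsck options when a /forcefsck file exists):

  # ask rc.sysinit to force a full fsck on the next boot
  touch /forcefsck
  reboot

  # shutdown -F is supposed to create /forcefsck as well
  shutdown -rF now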

Thanks in advance,
-Darrell
