rebooting more often to stop fsck problems and total disk loss

Hi,

I run several hundred servers that are used heavily (webhosting, etc.)
all day long.

Quite often we'll have a server that either needs a really long fsck
(10 hours on a 200 GB drive) or an fsck that eventually results in
everything going to lost+found (pretty much a total loss).
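For reference, something like this shows a filesystem's mount count and
when ext3 last forced a check (just a sketch; /dev/sda1 is an example
device name):

    # Print mount counts and the last/next scheduled check for the filesystem
    tune2fs -l /dev/sda1 | grep -Ei 'mount count|last checked|check interval|next check'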

Would rebooting these servers monthly (or some other frequency) stop this?

Is it correct to picture this as small errors compounding over time,
so that more frequent reboots would let quick fscks fix the errors
before they become huge?
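To frame the question: rather than relying only on reboots, I gather
something like the following would force ext3 to get checked at a set
frequency (rough sketch; the device name and intervals are just example
values):

    # Force a check after every 30 mounts or 30 days, whichever comes first
    tune2fs -c 30 -i 30d /dev/sda1

    # On Red Hat, force a full fsck of the filesystems at the next reboot
    touch /forcefsck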

(OS is Red Hat 7.3 and RHEL 3)

Thanks for any input!

