Bron Gondwana wrote:
I assume you mean 500 gigs! We're switching from 300 to 500 on new filesystems because we have one business customer that's over 150GB now and we want to keep all their users on the one partition for folder sharing. We don't do any murder though.
Oops, yes, I meant 500 gigs. The potential downside of running an fsck on terabyte+ filesystems is not worth the risk, IMO. The tremendous speed & efficiency of Cyrus is in its small files and the indexes. However, you have to keep that in mind when estimating not just backups and other daily/weekly tasks but also more serious operations like a full fsck.

Really, I've looked at fsck too many times in my life and don't ever want to again. Anyone who tells me "oh yes, but journalling solved all that long ago..." will get an earful from me about how they haven't run a big enough setup with enough stress on it to SEE real problems. I have seen both journalled Linux and logged Solaris filesystems turn up with data corruption, and I ended up staring at that fsck prompt wondering how many hours until it's done.

The antiquated filesystems that 99% of admins tolerate and work with every day should be lumped under some kind of Geneva provision against torture. It's a mystery to me why this wasn't resolved years ago and why there isn't a big push for it from anyone. "It doesn't matter how fast it is if it isn't CORRECT!" should be some kind of mantra for a production data center, but the majority of my colleagues still talk the same as they did in the 1980s about how, if we turn off this or that safety feature, we can make the filesystem faster. OK, stepping off my soapbox now.
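For illustration only, here is a minimal sketch of the kind of census I'd run before settling on a partition size. It assumes the usual one-file-per-message Cyrus spool layout and a hypothetical /var/spool/cyrus/mail mount point (adjust to your own setup); the point is that file count, not raw gigabytes, is what drives fsck and backup times on these partitions.

    #!/usr/bin/env python3
    """Rough file/size census of a Cyrus spool partition (sketch, not a supported tool)."""
    import os
    import sys

    # Assumed default spool location; pass your real partition mount point as an argument.
    spool = sys.argv[1] if len(sys.argv) > 1 else "/var/spool/cyrus/mail"

    files = 0
    total_bytes = 0
    for dirpath, dirnames, filenames in os.walk(spool):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                total_bytes += os.lstat(path).st_size
            except OSError:
                continue  # a message may be expunged while we walk
            files += 1

    print(f"{files} files, {total_bytes / 2**30:.1f} GiB under {spool}")
    print(f"average file size: {total_bytes / max(files, 1) / 1024:.1f} KiB")

A few million small files per 500-gig partition is a very different fsck proposition than the same bytes in a handful of big files, which is exactly why I'd rather cap partition size than trust the journal.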