On Tue, 15 Jan 2008 03:04:41 PST, Andrew Morton said:

> In any decent environment, people will fsck their ext3 filesystems during
> planned downtime, and the benefit of reducing that downtime from 6
> hours/machine to 2 hours/machine is probably fairly small, given that there
> is no service interruption.  (The same applies to desktops and laptops).

I've got multiple boxes across the hall that have 50T of disk on them, in
one case as one large filesystem, and the users want *more* *bigger* still
(damned researchers - you put a 15 teraflop supercomputer in the room, and
then they want someplace to *put* all the numbers that come spewing out of
there.. ;)

There comes a point where that downtime gets too long to be politically
expedient.  6->2 may not be a biggie, because you can likely get a 6 hour
window.  24->8 suddenly looks a lot different.

(Having said that, I'll admit the one 52T filesystem is an SGI Itanium box
running Suse and using XFS rather than ext3).

Has anybody done a back-of-envelope of what this would do for fsck times
for a "max realistically achievable ext3 filesystem" (i.e. 100T-200T or
the ext3 design limit, whichever is smaller)?

(And one of the research crew had a not-totally-on-crack proposal to get
a petabyte of spinning oxide. Figuring out how to back that up would
probably have landed firmly in my lap. Ouch. ;)
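For illustration only (the numbers are mine, not from the thread): a crude
linear extrapolation, assuming fsck time scales roughly with filesystem
size and guessing a baseline rate of ~6 hours per 10T, which is purely
hypothetical, would look something like this in Python:

    # Back-of-envelope only: assumes fsck time scales roughly linearly with
    # filesystem size, and picks a purely hypothetical baseline rate.
    HOURS_PER_TB = 6.0 / 10.0   # guess: ~6 hours for a ~10T ext3 filesystem

    for size_tb in (50, 100, 200):
        print("%4dT -> ~%.0f hours" % (size_tb, size_tb * HOURS_PER_TB))

In practice fsck time depends more on inode count, used blocks, and seek
behaviour than on raw size, so treat anything like the above as an
order-of-magnitude guess at best.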