I've been asked by my manager to do some performance testing of fsck on ext3 versus other filesystems (such as VxFS). In particular, we're trying to find the point at which we can definitively say ext3 doesn't cut it. This came about because an fsck of some of our larger ext3 filesystems takes upwards of 8 hours, which isn't acceptable in our production environment. However, because of the licensing costs, we don't want to mandate that all filesystems on SAN storage use something other than ext3, since that would mean even the small filesystems incur the licensing costs.

Can anyone point me to some whitepapers that discuss this? Alternatively, can anyone recommend the best way to test it? I've been doing some rudimentary tests, but the fsck times look ridiculously short (15 minutes for a 600GB ext3 filesystem) compared to what we've seen in production (12 hours for a 600GB ext3 filesystem).

What influences the time an fsck takes? That may give us some ideas for restructuring the data in the filesystems as well.

Thanks,
Maarten Broekman
Email: maarten.broekman@xxxxxxx

--
redhat-list mailing list
unsubscribe mailto:redhat-list-request@xxxxxxxxxx?subject=unsubscribe
https://www.redhat.com/mailman/listinfo/redhat-list
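For a reproducible baseline, a forced full check can be timed on a file-backed image without touching production storage. This is a minimal sketch, assuming e2fsprogs (`mkfs.ext3`, `e2fsck`) is installed; the image size and path here are placeholders for illustration. No root is needed because the image is never mounted:

```shell
# Minimal sketch: time a forced e2fsck on a file-backed ext3 image.
# /tmp/fscktest.img and the 64MB size are arbitrary choices for this example.
IMG=/tmp/fscktest.img

# Create a zero-filled image and format it as ext3
# (-F: proceed even though the target is not a block device).
dd if=/dev/zero of="$IMG" bs=1M count=64 2>/dev/null
mkfs.ext3 -q -F "$IMG"

# -f forces a full check even though the filesystem is marked clean;
# -n opens it read-only, so the run is non-destructive.
time e2fsck -f -n "$IMG"
```

One caveat worth noting: a freshly formatted image checks almost instantly, because e2fsck's early passes walk inodes and directory entries, not raw blocks. To get numbers comparable to production, the image would need to be loop-mounted and populated with a file count, directory depth, and fragmentation similar to the real data, which is likely why an empty-ish 600GB filesystem checks in minutes while a full one takes hours.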